MirrorBERT is a method that uses self-supervision and simple data augmentation to reach performance on semantic similarity tasks comparable to that of fine-tuned models, without labeled data. It transforms pretrained models such as BERT and RoBERTa into universal lexical and sentence encoders with minimal additional training.
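The core self-supervised idea can be illustrated with a contrastive objective: each sentence is duplicated, the two copies are encoded into slightly different "mirrored" views (e.g. via dropout or masking during augmentation), and the loss pulls matching views together while pushing apart views of different sentences. The sketch below is a minimal, hypothetical NumPy illustration of such an InfoNCE-style loss, not MirrorBERT's actual implementation; all function and variable names are assumptions.

```python
import numpy as np

def mirror_contrastive_loss(views_a, views_b, temperature=0.05):
    """Toy InfoNCE-style loss over two augmented views of the same
    sentences (hypothetical sketch, not the original implementation).

    views_a, views_b: (N, D) arrays; row i of each is a view of sentence i.
    """
    # L2-normalise so that dot products are cosine similarities.
    a = views_a / np.linalg.norm(views_a, axis=1, keepdims=True)
    b = views_b / np.linalg.norm(views_b, axis=1, keepdims=True)
    sims = (a @ b.T) / temperature          # (N, N) similarity matrix
    # Positive pairs sit on the diagonal: view i of A matches view i of B.
    logits = sims - sims.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When the two views of each sentence are closely aligned, the diagonal dominates each row and the loss is small; shuffling the pairing raises it, which is what drives the encoder to produce similar embeddings for augmented copies of the same input.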

6-minute read, from towardsdatascience.com
MirrorBERT — Transforming Models into Universal Lexical and Sentence Encoders

Table of contents
- Introduction
- Methodology
- Training resources
- Training details
- Evaluations
- Conclusion
- Resources
