Llama 2 pretrained models are trained on 2 trillion tokens and have double the context length of Llama 1. The fine-tuned model, Llama-2-chat, leverages publicly available instruction datasets and over 1 million human annotations. The models were pretrained on publicly available online data sources.

From ai.meta.com