Summarizing https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/

Here's my try:

This paper presents the development and release of Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. The authors also provide a detailed description of their approach to fine-tuning and improving the safety of Llama 2-Chat, which is optimized for dialogue use cases.
