**RT @GuillaumeLample:**
Today we release LLaMA, 4 foundation models ranging from 7B to 65B parameters.
LLaMA-13B outperforms OPT and GPT-3 175B on most benchmarks. LLaMA-65B is competitive with Chinchilla 70B and PaLM 540B.
The weights for all models are open and available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
1/n

https://nitter.moomoo.me/GuillaumeLample/status/1629151231800115202#m