📛 LIMA: Less Is More for Alignment

🧠 LIMA is a 65B-parameter LLaMA model fine-tuned on only 1,000 carefully curated prompt-response pairs, with no RLHF. It generalizes well to unseen tasks, suggesting that almost all of a model's knowledge is acquired during pretraining and only a small amount of instruction tuning is needed for high-quality output.
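
The recipe behind this claim is plain supervised fine-tuning on a tiny curated dataset. Below is a minimal sketch of that setup using Hugging Face Transformers, not the paper's actual training code: the checkpoint id, dataset file, and field names are illustrative assumptions, while the epoch count, learning rate, and 2048-token limit roughly follow the settings the paper reports.

```python
# LIMA-style supervised fine-tuning sketch: a pretrained causal LM tuned on
# ~1,000 curated prompt-response pairs, no RLHF. Checkpoint id, dataset file,
# and column names below are illustrative assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "huggyllama/llama-65b"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assume a small JSONL file with "prompt" and "response" fields per line.
dataset = load_dataset("json", data_files="lima_style_1k.jsonl")["train"]

def tokenize(example):
    # Concatenate prompt and response into one training sequence; the paper
    # trims sequences to 2048 tokens.
    text = example["prompt"] + "\n" + example["response"] + tokenizer.eos_token
    tokens = tokenizer(text, truncation=True, max_length=2048)
    tokens["labels"] = tokens["input_ids"].copy()  # standard LM objective
    return tokens

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lima-sft",
        num_train_epochs=15,            # the paper fine-tunes for 15 epochs
        per_device_train_batch_size=1,  # batch of 1 avoids padding logic here
        learning_rate=1e-5,             # paper's initial LR (decayed to 1e-6)
        bf16=True,
    ),
    train_dataset=tokenized,
)
trainer.train()
```

With only ~1,000 examples, a full pass through the data is cheap, which is why the paper can afford many epochs over such a small set; the expensive knowledge acquisition already happened during pretraining.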

🐦 7

❤️ 467

🔗 https://arxiv.org/pdf/2305.11206.pdf

https://nitter.moomoo.me/ArXivGPT/status/1660540768501608450#m
