Introducing LlamaGPT — a self-hosted, offline and private AI chatbot, powered by Llama 2, with absolutely no data leaving your device. 🔐
Yes, an entire LLM. ✨
Your Umbrel Home, Raspberry Pi (8GB) running Umbrel, or custom umbrelOS server can run it with just 5GB of RAM!
Word generation benchmarks:
Umbrel Home: ~3 words/sec
Raspberry Pi (8GB RAM): ~1 word/sec
→ Watch the demo: https://youtu.be/iu3_1a8SzeA
→ Install on umbrelOS: https://apps.umbrel.com/app/llama-gpt
→ GitHub: https://github.com/getumbrel/llama-gpt
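Once it's running, you can talk to it programmatically as well as through the chat UI. Below is a minimal sketch of querying a local LlamaGPT instance over an OpenAI-compatible chat completions endpoint; the host, port (3001), and model name are assumptions about a default install, so check your own deployment's settings before running it.

```python
# Minimal sketch: querying a self-hosted LlamaGPT instance through an
# OpenAI-compatible chat completions endpoint. The URL, port, and model
# name below are assumptions about a default install -- adjust them to
# match your own setup.
import requests

API_URL = "http://localhost:3001/v1/chat/completions"  # assumed default endpoint

payload = {
    "model": "llama-2-7b-chat",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "Explain what a self-hosted LLM is in one sentence."}
    ],
    "temperature": 0.7,
}

# No API key needed: everything runs locally, no data leaves your device.
response = requests.post(API_URL, json=payload, timeout=120)
response.raise_for_status()

print(response.json()["choices"][0]["message"]["content"])
```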
Discussion
nostr:npub1aghreq2dpz3h3799hrawev5gf5zc2kt4ch9ykhp9utt0jd3gdu2qtlmhct ufff, I have a 2 TB NVMe SSD, 24 GB of RAM (18 GB in use at the moment), and an i7-10510U, and it's really slow (less than 1 word/sec).
Any tweaks to improve this?