Great idea, I was eagerly waiting for it. I installed it overnight on my 8 GB Pi, but was very disappointed when I started it this morning. It was VERY slow and could not answer my first two questions (the first about historic exchange rates, the second about Stoic philosophy). Llama via nostrnet.work from nostr:npub1cmmswlckn82se7f2jeftl6ll4szlc6zzh8hrjyyfm9vm3t2afr7svqlr6f answered both questions quickly. The other apps on my Pi also became slower, so I uninstalled the app ... No offence to nostr:npub1aghreq2dpz3h3799hrawev5gf5zc2kt4ch9ykhp9utt0jd3gdu2qtlmhct, you did a great job offering this one-click install app, but one has to be realistic ... You cannot run a fully functional and competitive Llama on a Raspberry Pi ...
Introducing LlamaGPT — a self-hosted, offline and private AI chatbot, powered by Llama 2, with absolutely no data leaving your device. 🔐
Yes, an entire LLM. ✨
Your Umbrel Home, Raspberry Pi (8GB) Umbrel, or custom umbrelOS server can run it with just 5GB of RAM!
Word generation benchmarks:
Umbrel Home: ~3 words/sec
Raspberry Pi (8GB RAM): ~1 word/sec
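To put those speeds in perspective, here is a quick back-of-envelope calculation of how long a typical answer would take to generate; the ~200-word answer length is an assumption, the words/sec figures are the benchmarks above:

```python
# Estimate time to generate a ~200-word answer at the benchmarked speeds.
speeds = {"Umbrel Home": 3.0, "Raspberry Pi (8GB RAM)": 1.0}  # words/sec
answer_words = 200  # assumed typical answer length

for device, wps in speeds.items():
    minutes = answer_words / wps / 60
    print(f"{device}: ~{minutes:.1f} min for a {answer_words}-word answer")
# → Umbrel Home: ~1.1 min for a 200-word answer
# → Raspberry Pi (8GB RAM): ~3.3 min for a 200-word answer
```

So even at the faster end, expect answers to stream in over a minute or more; this is local, private inference on modest hardware, not a cloud-GPU experience.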
→ Watch the demo: https://youtu.be/iu3_1a8SzeA
→ Install on umbrelOS: https://apps.umbrel.com/app/llama-gpt
→ GitHub: https://github.com/getumbrel/llama-gpt