Replying to Juraj

It is now (somewhat) possible to run a fully local (no cloud, no spyware) Alexa/assistant alternative with Home Assistant, Ollama (I run it on Apple Silicon), and the Home Assistant Voice Preview Edition as the endpoint.

I've written about my setup (and its challenges), but the TL;DR: you can just say "Ok Nabu, turn on my bedroom AC" and it works.

I've dreamt about this in Cypherpunk Visions 2023-2025 (a short book). Now I am going to write an update for 2026-2029 (three years is a good timeframe), and this will be "normal" for those who want it. It's not just a dream anymore.

https://community.home-assistant.io/t/blueprint-on-ai-using-ollama-on-apple-silicon/916158
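Under the hood, Home Assistant talks to Ollama's local REST API (by default on port 11434). A minimal sketch of that kind of request, in Python; the model name and prompt here are illustrative assumptions, not Juraj's exact setup:

```python
# Sketch: querying a local Ollama server, roughly what Home Assistant's
# Ollama integration does. Model name is an assumption for illustration.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(prompt: str, model: str = "llama3.1:8b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# ask("Turn on my bedroom AC")  # requires a running Ollama server
```

Everything stays on the local network: the voice endpoint does wake-word and audio, Home Assistant does intent handling, and Ollama does the language model part.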

Askater 5mo ago

What is the ram requirement for Mac to properly run it?


Discussion

Juraj 5mo ago

Not much: about 2 GB for smaller models, around 8 GB for Llama, then some more for Whisper (maybe 500 MB).

I run it on a 32 GB Mac mini.
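Those numbers line up with a back-of-envelope estimate: a quantized model needs roughly (parameters × bits per weight / 8) bytes for the weights, plus runtime overhead. A rough sketch; the 20% overhead factor is a guess, not a benchmark:

```python
def model_ram_gb(params_billions: float, bits_per_weight: int = 4,
                 overhead: float = 1.2) -> float:
    """Rough RAM estimate for a quantized LLM: weight bytes plus ~20%
    overhead for KV cache and runtime (the 1.2 factor is an assumption)."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)

# An 8B-parameter Llama:
print(model_ram_gb(8))     # 4-bit quantization: ~4.8 GB
print(model_ram_gb(8, 8))  # 8-bit quantization: ~9.6 GB
```

So an 8B model at 4-8 bit quantization sits in the "around 8 GB" range quoted above, and a 32 GB machine leaves plenty of headroom for Whisper and everything else.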
