It is now (somewhat) possible to run a fully local (no cloud, no spyware) Alexa/assistant alternative with Home Assistant, Ollama (I run it on Apple Silicon), and the Home Assistant Voice Preview Edition as the endpoint.
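If you want to verify the Ollama half is alive before wiring it into Home Assistant, here's a minimal sketch (assuming Ollama's default port 11434; /api/tags is its model-listing endpoint):

```python
# Quick sanity check that a local Ollama instance is up and serving models.
# Assumes Ollama's default port (11434); adjust OLLAMA_URL if you changed it.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
    models = json.load(resp).get("models", [])

for m in models:
    # Each entry lists the model name and its size on disk in bytes.
    print(f'{m["name"]}: {m["size"] / 1e9:.1f} GB')
```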

I've written about my setup (and its challenges), but TL;DR: you can just say "Ok Nabu, turn on my bedroom AC" and it works.
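Handy for debugging without the speaker: the same intent can be sent as plain text through Home Assistant's conversation API, which is what the voice pipeline feeds into after speech-to-text. A sketch, where the URL and token are placeholders for your own instance:

```python
# Send a command as text to Home Assistant's conversation API.
# HA_URL and HA_TOKEN are placeholders: point them at your instance and a
# long-lived access token created from your Home Assistant profile page.
import json
import urllib.request

HA_URL = "http://homeassistant.local:8123"   # placeholder
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"    # placeholder

req = urllib.request.Request(
    f"{HA_URL}/api/conversation/process",
    data=json.dumps({"text": "turn on my bedroom AC", "language": "en"}).encode(),
    headers={
        "Authorization": f"Bearer {HA_TOKEN}",
        "Content-Type": "application/json",
    },
)
with urllib.request.urlopen(req) as resp:
    # The response includes the assistant's spoken-style reply.
    print(json.load(resp))
```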

I dreamt about this in Cypherpunk Visions 2023-2025 (a short book). Now I am going to write an update for 2026-2029 (three years is a good timeframe), and this will be "normal" for those who want it now; it's not just a dream.

https://community.home-assistant.io/t/blueprint-on-ai-using-ollama-on-apple-silicon/916158


Discussion

What is the RAM requirement for a Mac to properly run it?

Not much: 2 GB for smaller models, around 8 GB for Llama. Then some more for Whisper (maybe 500 MB).

I run it on a 32 GB Mac mini.
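If you want to see what your own models actually consume, a sketch against Ollama's /api/ps endpoint (the HTTP counterpart of `ollama ps`; assumes the default port):

```python
# Check how much memory the currently loaded Ollama models are using.
# /api/ps lists running models; "size" is the in-memory footprint in bytes.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    running = json.load(resp).get("models", [])

if not running:
    print("No models loaded right now.")
for m in running:
    print(f'{m["name"]}: {m.get("size", 0) / 1e9:.1f} GB in memory')
```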

In theory you could also run this in the cloud. If the provider uses a TEE (trusted execution environment), your data should remain private.

Yes.

But then I couldn't turn on the lights with my voice if the internet is down. Not apocalypse-ready.

I also run other apps such as nostr:nprofile1qqspfdl3hkjwvnunzds5tcnz98zqdlgmvrc0q9vwwj3k7sxplawhzug6ezekf so my home is vibing with intelligence.