if you haven't worked out local LLMs yet, you probably should soon. It's not hard.


Discussion

Here is a single command to get you started: curl -fsSL https://ollama.com/install.sh | sh && ollama run gemma3:270m

that model (gemma3:270m) should run at a usable speed on pretty much any CPU and RAM config. It's not a super smart model, though. Explore http://ollama.com/search to see other models. My daily driver is qwen3:4b - it fits in 12GB of VRAM and is good enough for most tasks.
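Once a model is pulled, you don't have to stay in the interactive REPL - Ollama also serves a local REST API on port 11434, so you can script against it. A minimal sketch (the `build_request` helper name is mine; the `/api/generate` endpoint and its `model`/`prompt`/`stream` fields are Ollama's):

```python
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # Ollama's default local port

def build_request(model, prompt, host=OLLAMA_HOST):
    # Ollama's REST API: POST /api/generate with a JSON body.
    # stream=False asks for one complete JSON reply instead of a
    # stream of partial chunks.
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        f"{host}/api/generate",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def generate(model, prompt):
    # Assumes `ollama serve` is running and the model has been pulled.
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# usage, with the small model from above:
# print(generate("gemma3:270m", "Why is the sky blue?"))
```

Nothing leaves your machine - the request goes to localhost, which is the whole point of running local.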

if you are running SpywareOSs (Mac/Windows), Ollama has its own installer for them: https://ollama.com/download