Thank you for your testimony, Calle. I don't want to miss the AI train, but I don't know where to start. Any advice? For example, I'd like to learn how to run a local LLM.


Discussion

In my experience local LLMs don't cut it; they're just not good enough in most cases. So you need to be ready to use commercial ones, and by doing so you train them and accelerate their adoption.

It depends on what you want to do with them. You can run local LLMs using Ollama.
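If it helps, here's a minimal sketch of talking to a locally running Ollama server through its HTTP API using only the Python standard library. The port 11434 is Ollama's default, but the model name `llama3.2` is just an example; swap in whatever model you've pulled.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text.

    Requires Ollama to be running (`ollama serve`) with the model pulled.
    """
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (only works with Ollama running locally):
# print(ask("llama3.2", "Explain what a local LLM is in one sentence."))
```

If you'd rather skip the code entirely, `ollama run <model>` in a terminal gives you an interactive chat with the same server.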

I'll have to spend some time trying it, thank you.

Try out Claude Code. It works as a CLI, so you can still use vim (or emacs, if you really hate your pinky) or whatever editor you want. No plugin needed, although plugins are available.

You’ll also want to learn about prompt engineering and its twin sister, context engineering [1]. From my experience, most people who have a bad time with AI assistants are simply giving bad prompts. There are exceptions, of course, and plenty of valid criticism; these things aren’t infallible.

1. https://rlancemartin.github.io/2025/06/23/context_engineering/

Thank you Jonny

Check out Ollama. It's not the best performance, but it's rather easy to use and will work even if you don't have much VRAM or GPU compute.

If you're just curious and don't have time for the tech stuff, let me know and I'll give you an account on my system: OpenWebUI, publicly available, 3-4 models installed, up to 32B. Not great, but it's all you get out of midrange consumer graphics cards.

That’s very kind of you.