Open source LLM interfaces need to be easier to install for newbies. Even the best option installs as multi-user for some reason 🤦‍♂️

Can anyone recommend better interfaces? Or should I ask the LLM 😆

Discussion

For local inference? I like LM Studio.

Ollama is a necessity. bolt.diy and Open WebUI are good as web interfaces. Ollama also has an app which is decent enough on Android; I'm not sure how it performs on iOS.

I’ll second Open WebUI. Runs great as a container. Sync it up with SearXNG and you have a powerful local LLM with web search.

Is this better than Perplexity, meaning will it do longer and more thorough searches with more sources?

I would call it better because SearXNG runs on your server and you control which search engines it queries and how many results you want returned.

Is there a guide on how to deploy it locally, maybe directly via Docker?
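Not an official guide, but here's a minimal sketch of the Docker route. The image names are the publicly documented ones; the host ports and volume names are my own choices, so check each project's README before relying on this:

```shell
# Open WebUI (expects an Ollama instance reachable from the container)
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# SearXNG as the web-search backend
docker run -d -p 8888:8080 \
  -v searxng-data:/etc/searxng \
  --name searxng \
  docker.io/searxng/searxng:latest
```

After both are up, you'd point Open WebUI's web-search settings at the SearXNG URL (here, port 8888 on the Docker host).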

I’ve been using Ollama with Open WebUI. It works well, looks like ChatGPT, and even has a few extra features.

It’s far from easy to install though, and I agree it needs to be easier. I’m running Ollama via the terminal and Open WebUI in Docker.

Best model I can run with my hardware is Llama 3.3 70B, and it’s a long way short of ChatGPT. Fun though.

The fact that even I can do this shows progress. It’ll be cool if it also incentivizes normies to start buying computers again, rather than just using phones and tablets.

I tried installing Open WebUI but it set up a multi-user dashboard and asks me to log in lol … I was like … wtf 😬 I don’t know how to make it single-user now. Why would it ever default to multi-user … so stupid…

Ha. I have to log in too… although I don’t mind, because it’s running on the network and it stops the kids maxing out my GPU.

Although I enjoy tinkering, whenever I see the words GitHub, Docker, or terminal, I know I’m not in for an easy ride and I’m doing things beyond the scope of normies.

https://github.com/valiantlynx/ollama-docker

I just run this and didn't have to set up anything beforehand except the NVIDIA drivers.
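For anyone curious, the usual pattern for repos like this is clone-and-compose. This assumes the repo ships a docker-compose file (check its README for the actual steps and any GPU flags it expects):

```shell
# Clone the repo and bring the stack up in the background
git clone https://github.com/valiantlynx/ollama-docker.git
cd ollama-docker
docker compose up -d
```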

DeepSeek-V3 is now on Ollama and is supposed to rival GPT-4o.

tried DeepSeek yesterday on ppq.ai

once again, everything sounds overrated to me 😅

Maybe it is 😆

But considering it costs nothing, that’s a huge improvement :)

It's exciting what kind of data ends up in the LLM 🤓

Jan.ai

LM Studio

Ollama