Nice. Would regular users put the API endpoint address into settings?
Discussion
Yeah, the UX would just be selecting the Ollama provider, since it's a standard endpoint.
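Since Ollama exposes a standard REST API on a fixed default port, "select the Ollama provider" really just means pointing the client at that endpoint. A minimal sketch of the request shape, assuming Ollama's documented `/api/generate` endpoint on its default port 11434 (the model name is just the one mentioned below):

```python
import json

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default local address

def build_generate_request(model: str, prompt: str) -> tuple[str, dict]:
    """Build the URL and JSON payload for Ollama's /api/generate endpoint."""
    url = f"{OLLAMA_BASE}/api/generate"
    # stream=False asks for a single JSON response instead of chunked output
    payload = {"model": model, "prompt": prompt, "stream": False}
    return url, payload

url, payload = build_generate_request("deepseek-r1:14b", "Hello!")
print(url)
print(json.dumps(payload))

# Actually sending it is one stdlib call, e.g.:
# import urllib.request
# req = urllib.request.Request(url, data=json.dumps(payload).encode(),
#                              headers={"Content-Type": "application/json"})
# resp = json.loads(urllib.request.urlopen(req).read())
```

Because every Ollama install serves the same API at the same default address, a client only needs the base URL (and often not even that) rather than per-provider configuration.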
What model are you running and what’s the performance like on the Mac?
Just this one I had on my Ollama instance, but I think there are better ones now.
I’m using deepseek-r1:14b at the moment but need to try more out. I’m currently running it concurrently against GPT-4o (which queries the OpenAI API) in Open WebUI to compare outputs, which is a nice feature. I think I need a bigger GPU!
I’ve also been building a custom model (prompt) for Nini.
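In Ollama terms, a custom "model" built from a prompt is usually a Modelfile that layers a system prompt over an existing base model. A hedged sketch, assuming deepseek-r1:14b as the base; the model name `nini` and the prompt text are illustrative, not what the author actually used:

```
# Modelfile (illustrative): layer a system prompt over a base model
FROM deepseek-r1:14b
SYSTEM """You are Nini, a helpful assistant."""
PARAMETER temperature 0.7
```

Then `ollama create nini -f Modelfile` registers it, and `ollama run nini` (or selecting it in Open WebUI) uses the baked-in prompt without resending it on every request.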