Howdy! Are you using something like Ollama? You should be able to select 'OpenAI compatible' as an option in 'stacks configure' and point it at an OpenAI-like endpoint. In Ollama's case, that should be http://localhost:11434/v1/
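
If you want to sanity-check the endpoint outside of stacks entirely, here's a quick sketch using the `openai` Python package (the model name is just a placeholder for whatever you've pulled into Ollama):

```python
# Minimal check that Ollama's OpenAI-compatible endpoint is reachable.
# Assumes the `openai` package is installed and some model (here "llama3",
# swap in whatever you've pulled) is available locally.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1/",
    api_key="ollama",  # required by the client, ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.choices[0].message.content)
```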

More information on this can be found here:

https://github.com/ollama/ollama/blob/main/docs/openai.md

Let me know if you have any questions!

Discussion

It would be really funny if I've been messing with this for more than a month only for it to need /v1/ 😂😂

Thank you. 🙏

I believe this worked. My compyter couldn't handle the model I have, but I have another compyter with a better GPU I can try.

Thank you.

Nice!! And indeed, that is the true struggle of local models. I wish you luck. 🙏

Why am I writing computer with a y? 😆

If it’s ok, I have a number of questions because it seems that my setup is still clunky.

I updated the URL to point at localhost, and this seems to be working now. I also seem to need to open the project file and change the default agent name from Claude to the one I'm running. (Currently trying qwen2.5-coder.)

The agent is replying to me in plain text instead of actually developing an app… I don't understand why.

Any suggestions?

I know that model supports tool use, but I recall a very similar experience when trying to use it with goose a while back. I'll need to set some of these models up to experiment so I can provide you with more robust feedback.
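
In the meantime, if you want to see for yourself whether the model is emitting structured tool calls at all, here's a rough probe against the endpoint directly (the `write_file` tool below is hypothetical, purely for the test, and this assumes your Ollama version and the model tag both support tool calling):

```python
# Rough probe: does the local model return structured tool calls over
# Ollama's OpenAI-compatible endpoint, or does it just answer in plain text?
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1/", api_key="ollama")

tools = [{
    "type": "function",
    "function": {
        "name": "write_file",  # hypothetical tool, only used as a probe
        "description": "Write text to a file on disk.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "content": {"type": "string"},
            },
            "required": ["path", "content"],
        },
    },
}]

response = client.chat.completions.create(
    model="qwen2.5-coder",
    messages=[{"role": "user", "content": "Create hello.txt containing 'hi'."}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    print("Structured tool call:", message.tool_calls[0].function)
else:
    print("Plain text reply:", message.content)
```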

I would really appreciate the help!

I finally had a moment to test. This interaction needs 'some love'; I believe it would require an update to stacks-cli to be more aware of how this model interacts with tools.

I'm currently experimenting to see if we can make this work a bit more nicely.

Sweet! Thank you for testing.

That's the conclusion I reached as well, following your instructions.

I had to go and edit agent.json manually to get here.

Stacks-CLI should at least ask me for the agent's name; that would be very helpful.

But yeah, somehow it’s still not using tools correctly.

Indeed, this would require direct updates to stacks-cli's codebase to properly support this model.
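
Just to give a rough sense of the kind of change I mean (this is only a sketch, not actual stacks-cli code): when a model replies with a JSON blob in plain text instead of a structured tool call, the client could try to recover it, e.g.:

```python
# Illustrative sketch only, not stacks-cli's implementation: fall back to
# scraping a JSON object out of the plain-text reply when the model doesn't
# populate structured tool_calls.
import json
import re


def extract_tool_call(message):
    """Return (name, arguments) from a structured tool call if present,
    otherwise try to pull a JSON object out of the plain-text content."""
    if getattr(message, "tool_calls", None):
        call = message.tool_calls[0].function
        return call.name, json.loads(call.arguments)

    # Fallback: look for a JSON object embedded in the text reply.
    match = re.search(r"\{.*\}", message.content or "", re.DOTALL)
    if match:
        try:
            data = json.loads(match.group(0))
            return data.get("name"), data.get("arguments", {})
        except json.JSONDecodeError:
            pass

    return None, None
```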

Also, in case this might prove helpful, you can run `stacks agent -m <model>` to directly override the active model at runtime.

Very helpful.