Which LLM provider(s) are you using? What gives the best results? Is there any priority on making sure Ollama performs well? It still seems rather slow compared to the commercial APIs.
i now spend 2-3 hours per day reading research papers and building something with goose that i didn't think it was capable of. i never see a line of code, and am never trapped in an IDE.
it works nearly every time, but does require some nudging every now and then. incredible.
all open source, model independent, full autonomy. use it or fork it today: https://block.github.io/goose/