Anyone using #goose with #ollama? Which ollama models are working best?

nostr:nprofile1qqsd0hut8c2pveuk4zkcws9sdap8465am9dh9cp8d2530yssuflcracpp4mhxue69uhkummn9ekx7mqpz3mhxue69uhhyetvv9ujuerpd46hxtnfdum5yj2m

#asknostr


Discussion

Got it to work with qwen2.5:14b but it's just too slow unfortunately.
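If you want to put a number on "too slow", a quick sketch like this times one non-streaming request against the local Ollama HTTP API (default port 11434). The model names and prompt are just examples; swap in whatever you have pulled:

```python
# Rough latency check against a local Ollama server (default port 11434).
import json
import time
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def time_generation(model: str, prompt: str) -> float:
    """Send one non-streaming generate request and return seconds elapsed."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    start = time.time()
    with urllib.request.urlopen(req) as resp:
        json.load(resp)  # wait for the full response body
    return time.time() - start

for model in ["qwen2.5:14b", "qwen2.5:7b", "llama3.2:3b"]:
    elapsed = time_generation(model, "List three uses of a shell.")
    print(f"{model}: {elapsed:.1f}s")
```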

Smaller models handle system tasks like opening the browser quickly, but they're easily overwhelmed and start to hallucinate. Unfortunately, bigger models don't fit into VRAM.
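One way to check whether a model actually fits is Ollama's /api/ps endpoint, which reports how much of the loaded model is resident on the GPU. This is a sketch assuming the default local port and the size/size_vram fields from the current API docs, which may change between versions:

```python
# Check whether the currently loaded model is fully resident in VRAM.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/ps") as resp:
    status = json.load(resp)

for m in status.get("models", []):
    total = m.get("size", 0)          # total bytes the model occupies
    in_vram = m.get("size_vram", 0)   # bytes resident on the GPU
    pct = 100 * in_vram / total if total else 0
    note = "" if pct >= 100 else "  (partially offloaded to CPU, expect it to be slow)"
    print(f"{m['name']}: {pct:.0f}% in VRAM{note}")
```

If it shows the model partially offloaded, that usually explains the slowdown more than the model's parameter count itself.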