I really wanna run this locally through Ollama, man 😭

Discussion

This one is cloud-based, though. I'll still be testing it out regardless. I see that only 2.5 is currently on Ollama, so I'll just end up doing both 😅
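For anyone else grabbing the 2.5 that's already on Ollama, here's a minimal sketch using the official `ollama` Python package. The model tag below is a placeholder, not a real registry name; swap in whatever tag the model's library page actually lists:

```python
# Minimal sketch using the official `ollama` Python package (pip install ollama).
# "model-2.5" is a placeholder tag, not a real registry name; substitute the
# actual tag from `ollama list` or the model's page on the Ollama library.
import ollama

ollama.pull("model-2.5")  # downloads the weights into the local Ollama store

response = ollama.chat(
    model="model-2.5",
    messages=[{"role": "user", "content": "Say hello without phoning home."}],
)
print(response["message"]["content"])
```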

Can’t be picky when I’m trying not to train the new ChatGPT with my queries 🙃

Someone has got it running locally, but it takes 8x M4 Pro 64GB Mac Minis lol

Wait for the extreme quantizations, which will make it less precise than a tiny llama lol
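If anyone wants intuition for why the extreme quants hurt, here's a toy NumPy sketch. It assumes plain symmetric uniform quantization, not a real scheme like llama.cpp's k-quants, and just shows how reconstruction error blows up as the bit width shrinks:

```python
# Toy illustration of precision loss under aggressive quantization.
# Symmetric uniform quantization only; real schemes (e.g. GGUF k-quants)
# are smarter, but the trend is the same: fewer bits, bigger error.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=100_000).astype(np.float32)

for bits in (8, 4, 2):
    levels = 2 ** (bits - 1) - 1          # symmetric signed range
    scale = np.abs(weights).max() / levels
    quantized = np.clip(np.round(weights / scale), -levels, levels)
    restored = quantized * scale
    rmse = np.sqrt(np.mean((weights - restored) ** 2))
    print(f"{bits}-bit: RMSE {rmse:.6f}")
```

At 2 bits each weight collapses to one of three values, which is the "less precise than a tiny llama" territory.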

And I thought two RTX 3090s were enough…