After playing around with the Gemma 3n (4B) model on my Mac mini (M2, 8 GB RAM / 256 GB SSD) for a few minutes, it seems like Ollama takes a moment to load the model into memory, but once it's loaded it runs pretty quickly. Next I'm going to try it through Open WebUI.
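For anyone who wants to try the same setup, a minimal sketch of the commands looks like this. The model tag `gemma3n:e4b` is what the Ollama library uses for the 4B-effective variant (worth double-checking with `ollama list` or the library page), and the `keep_alive` option on Ollama's API is one way to avoid re-paying the load delay between requests:

```shell
# Download the model (one-time; assumes Ollama is installed and running)
ollama pull gemma3n:e4b

# First prompt pays the load cost; later prompts in the same session are fast
ollama run gemma3n:e4b "Summarize this note in one sentence."

# Optionally keep the model resident for 10 minutes so Open WebUI
# (or any client hitting the local API) skips the reload delay
curl http://localhost:11434/api/generate \
  -d '{"model": "gemma3n:e4b", "keep_alive": "10m"}'
```

Open WebUI talks to the same local Ollama server (default port 11434), so a model kept warm this way loads instantly there too.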
