I'm not sure a lot of you know, but if you host your own ollama server, you can get translations for free on pollerama.fun

Discussion

I saw the call to action for the extension on the site, but I don't have an ollama server. Can you explain it to me?

It's a server that wraps an OpenRouter-compatible API around locally hosted LLM models, which means you can use these models in your regular workflows, in applications, or on websites like we do on pollerama.fun or formstr.app.

A lot of these models will work on your laptops and raspberry pis too.
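For anyone wondering what that looks like in practice, here's a minimal sketch of calling a local Ollama server for a translation. It assumes the default port (11434) and that you've already pulled a model such as gemma3; adjust the names to whatever you run.

```ts
// Minimal sketch: asking a locally hosted Ollama server to translate some text.
// Assumes Ollama is listening on the default port 11434 and that "gemma3"
// has been pulled already; swap in whichever model you actually use.
async function translate(text: string, targetLang: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gemma3",   // any locally pulled model should work here
      stream: false,     // ask for a single JSON response instead of a stream
      messages: [
        {
          role: "user",
          content: `Translate the following text into ${targetLang}:\n\n${text}`,
        },
      ],
    }),
  });
  const data = await res.json();
  return data.message.content; // the chat endpoint returns { message: { content } }
}

translate("Hola, ¿cómo estás?", "English").then(console.log);
```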

Which model would you recommend for translations?

And do they perform better than libre translate?

https://libretranslate.com/

They perform well enough. I've not compared them to hosted solutions, but just checking against Google Translate, they work fine.

I use gemma3:8b

Thanks, I'll give that one a go. I'm using LibreTranslate locally (Docker); it's pretty fast but notably worse than Google Translate, especially on short sentences and/or translating from Chinese/Hindi/Thai.

I love that the LLM tries to teach me the language as well 😂 oftentimes it breaks down words and gives their individual meanings. I love it.

Also, we might use ollama for other things on pollerama as well (not sure what, maybe paraphrasing your feed or something), so it might be useful in that way too.

I've been using ollama for text summaries; so far the mistral-small3.2, mistral-nemo, and gemma3 models seem to do the best job.

They can all hallucinate stuff into the summaries, and I'm not sure how to minimize the chance of that happening.
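Here's roughly the kind of call I mean, just a sketch against a stock local Ollama setup; the low temperature and the "only use the text" instruction are guesses at cutting down hallucinations, not a proven fix.

```ts
// Rough sketch of a summarization call against a local Ollama server.
// The temperature option and the prompt wording are assumptions aimed at
// keeping the summary closer to the source text; they are not guaranteed.
async function summarize(text: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "mistral-nemo", // or mistral-small3.2 / gemma3
      stream: false,
      prompt:
        "Summarize the following text in three sentences. " +
        `Only use information that appears in the text:\n\n${text}`,
      options: { temperature: 0.2 }, // lower sampling randomness
    }),
  });
  const data = await res.json();
  return data.response; // /api/generate returns the completion in "response"
}
```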

Yeah, I really want to add other cool local AI features, but I'm not sure what would be really useful. Maybe I'll let them add a quote below each note just for giggles 😂

Interested in integrating https://translate.jumble.social/? It's powered by Gemini 2.5 Pro.

It can be powered by any locally hosted model, including Gemini. I've also tested with very small models like llama3.2, and it works well.

My screenshot is using gemma3

That's awesome. But not everyone can run a local model, haha.

Everybody using a laptop can; there are very small models available these days.