How does the translation in Amethyst work? Is it using an internal LLM or is it connecting to a remote server? nostr:nprofile1qqsyvrp9u6p0mfur9dfdru3d853tx9mdjuhkphxuxgfwmryja7zsvhqpzamhxue69uhhv6t5daezumn0wd68yvfwvdhk6tcpz9mhxue69uhkummnw3ezuamfdejj7qgwwaehxw309ahx7uewd3hkctcscpyug?


Discussion

On-device AI: it downloads the model and runs the translations locally.
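The source doesn't show Amethyst's actual code, but Google's ML Kit Translation API is the standard way to do download-then-translate-locally on Android, and a sketch of that flow looks roughly like this (the `translateNote` wrapper and its callbacks are illustrative; the `com.google.mlkit.nl.translate` classes are ML Kit's real API):

```kotlin
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Illustrative wrapper: translate a note's text entirely on-device.
fun translateNote(
    text: String,
    onResult: (String) -> Unit,
    onError: (Exception) -> Unit
) {
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.PORTUGUESE) // example source
        .setTargetLanguage(TranslateLanguage.ENGLISH)
        .build()
    val translator = Translation.getClient(options)

    // First use triggers a one-time model download (tens of MB per
    // language pair); after that, translation needs no network at all.
    val conditions = DownloadConditions.Builder()
        .requireWifi()
        .build()

    translator.downloadModelIfNeeded(conditions)
        .addOnSuccessListener {
            translator.translate(text)
                .addOnSuccessListener(onResult)
                .addOnFailureListener(onError)
        }
        .addOnFailureListener(onError)
}
```

Each downloaded language model is cached on the device, which is also why a translation-capable client can grow to over a gigabyte of storage.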

That is so cool!! How big is the model? The babel fish idea is close to reality!

Maybe that's why Amethyst is using 1.4 GB on my phone 😄

Ahaha, maybe...