is it just me who’s this excited about local #ai models?

google just posted about gemma 3n, a small on-device model they claim runs in as little as 2GB of RAM (although god knows what hardware they benchmarked on)

offline, fast(?), no usage limits, privacy-friendly

seems cool – i’m already impressed running deepseek on my iphone atm: https://developers.googleblog.com/en/introducing-gemma-3n/?utm_source=X&utm_medium=link&utm_campaign=IO&utm_id=I/O
