Hey client devs! Most smartphones ship with a small, simple on-device LLM these days. For example, Google's Gemma should be available on most Pixel devices. I wouldn't be surprised if iOS had something similar.

How about you run every post in my timeline through the local LLM and filter it using a prompt that I can enter in natural language? Example: "do not show me any political posts and amplify technical posts about Bitcoin". The prompt could be wrapped in a simple "score this post from 0-10", and you show me only those with a score > 5 (or whatever threshold I set).
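The wrapping idea above could be sketched roughly like this (a minimal Python sketch; `score_with_llm` is a hypothetical stand-in for whatever on-device model API the client exposes, and the threshold default just mirrors the suggestion in the post):

```python
# Sketch of natural-language timeline filtering via a local LLM.
# `score_with_llm` is a hypothetical callable: prompt in, model reply out.

def build_scoring_prompt(user_filter: str, post: str) -> str:
    """Wrap the user's natural-language filter in a 0-10 scoring request."""
    return (
        f"User preference: {user_filter}\n"
        f"Post: {post}\n"
        "Score this post from 0-10 by how well it matches the preference. "
        "Reply with only the number."
    )

def filter_timeline(posts, user_filter, score_with_llm, threshold=5):
    """Keep only posts whose LLM score exceeds the threshold."""
    kept = []
    for post in posts:
        prompt = build_scoring_prompt(user_filter, post)
        score = int(score_with_llm(prompt))  # assumes the model replies with a bare number
        if score > threshold:
            kept.append(post)
    return kept
```

In practice the model reply would need more defensive parsing than a bare `int()`, but the shape of the feature is just this: one scoring call per post, one user-editable preference string.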

Please try this. Afaik this hasn't been done before. We can just do things for the first time ever. Let's break all rules and become the inspiration for everything else to copy us.

Discussion

Vitor likes to be tempted with a good time.

Our translations already run on local AI, so we could add Gemma to that. The only issue is the context window: we might not be able to fit all the posts into the context for the AI to process. But I can try.
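One way around the context-window limit (a sketch only, using a rough character budget as a stand-in for real token counting; the 2000-character cap is an assumption, not a Gemma limit): score posts one at a time and truncate anything too long, rather than packing the whole timeline into one prompt.

```python
# Sketch: keep each LLM call inside a small context window by scoring
# posts individually and truncating long ones.
MAX_CHARS = 2000  # assumed rough proxy for a token budget

def fit_to_context(post: str, max_chars: int = MAX_CHARS) -> str:
    """Truncate a post so its scoring prompt stays within the window."""
    return post if len(post) <= max_chars else post[:max_chars] + "…"
```

Since each post is scored in its own call anyway, the window only ever needs to hold one post plus the user's preference string.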

THANK YOU I AM YOUR BIGGEST FAN

sorry, large caps is so over

Likewise, even though you only make custodial systems :P

I just work with what I have, no time to wait.

Spectacular!!

When one comes to GrapheneOS, it'll be a hard maybe, leaning no.

Skill issue

Then make it easy rockstar

There are countless frameworks that can do this with FOSS

That's great but I don't even want to run them.

ok thanks for letting us know

actually it's a really good idea

Great idea. If not on mobile or desktop, then on a personal relay that pulls notes for all contacts. This could also work for ranking comments by relevancy. Could it also be useful for email servers and spam filtering?

I worked on a draft for a similar idea for NSFW (Not Safe For Work) posts.

It used the Yahoo! NSFW model, but it significantly increased the download size of nostr:npub18m76awca3y37hkvuneavuw6pjj4525fw90necxmadrvjg0sdy6qsngq955 and was difficult to get working with the Kingfisher library (used for images/GIFs) at the time. There might be a similar way to do it with the device's built-in AI, though.

This sounds amazing, if technically doable.

Only issue I see is battery consumption and speed, but that's why we have TPUs in phones.

Great idea 💥