Clients allowing a local, personal "algo" would be nice. We now have very small, efficient open-source LLMs that users could run to filter and sort posts by the topics they prefer, all locally on the user's hardware, nothing online.
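A minimal sketch of what that client-side filter could look like. `score_post` is a stand-in for a call to a small local LLM (keyword matching here, just so the example runs anywhere); the names and threshold are illustrative, not any client's actual API.

```python
def score_post(text: str, topics: list[str]) -> float:
    """Placeholder for a local LLM relevance score in [0, 1].

    A real client would replace this with a prompt to a small
    on-device model; here we just count topic keyword hits.
    """
    words = text.lower().split()
    hits = sum(1 for t in topics if t.lower() in words)
    return hits / max(len(topics), 1)

def filter_feed(posts: list[str], topics: list[str],
                threshold: float = 0.5) -> list[str]:
    """Keep posts that match the user's chosen topics, best first."""
    scored = [(score_post(p, topics), p) for p in posts]
    return [p for s, p in sorted(scored, reverse=True) if s >= threshold]
```

The key point is that the whole loop (score, rank, cut) stays on the user's machine, with the topics list set by the user, not by the server.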

Clients could also offer "outsourcing" of that model. For example, I'm running larger LLMs on my gaming PC most of the time, so I could set up an external endpoint and the client would use it for the algo whenever it's reachable, falling back to the smaller local one when it's not. Immich does something similar: you can point it at AI hosted at any IP you want, by you or someone else, and if it's not accessible it reverts to the local CPU/GPU.

For more powerful stuff, someone could offer an "algo" service paid with zaps: a hosted, efficient LLM that filters content for others, by rules the users themselves set up. Those could be larger, more capable models.

I'm OK with an "algo" as long as the user controls it and can pick and choose everything, from local on-device models to remote ones governed by the user's own rules.
