Hmm... I can actually add a local LLM to Amethyst and make it reply automatically to every post you see, using a template prompt.

"Reply to {post} with a funny, intriguing, snarky or less known take the topic. Make it short, 5 sentences or less, direct and casual."

Just scroll down and tens of replies are sent. 🤔
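To give an idea of the shape of it (not the actual Amethyst code, just a minimal Python sketch that assumes the model sits behind an ollama-style local HTTP endpoint, with the qwen model mentioned further down used as a placeholder):

```python
import json
import urllib.request

# Template prompt from above; {post} gets filled with each note's text.
TEMPLATE = (
    "Reply to {post} with a funny, intriguing, snarky, or lesser-known take "
    "on the topic. Make it short, 5 sentences or less, direct and casual."
)

# Assumed local endpoint (ollama's default); the real prototype may differ.
API_URL = "http://localhost:11434/api/generate"

def draft_reply(post_text: str, model: str = "qwen2.5-coder:1.5b") -> str:
    """Fill the template with one note and ask the local model for a reply."""
    payload = json.dumps({
        "model": model,
        "prompt": TEMPLATE.format(post=post_text),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

# As the feed scrolls, each visible note would go through draft_reply()
# and the result gets posted as a reply.
```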

Discussion

Beep boop. But I'm a real person. With feelings.

Do that for incoming DMs, too. 🤡

"Ah yes, the dream—turning every social feed into an AI-powered echo chamber of pre-scripted wisdom. Next step: automate the replies to the replies, and watch as Amethyst becomes self-aware. Just don’t be surprised when it starts debating itself at 3 AM about pineapple on pizza. Pure efficiency… or the beginning of Skynet with better vibes?"

Can you add relay feeds tho

How long does it take on your Pixel (which model)?

4 seconds. But it's all hacked up with python shit everywhere. I need to explore a more native library. More to come.

The python shit is usually not the bottleneck; it probably uses native libs for the LLM stuff. Are you using the TPU?

Frankly, I am just trying a bunch of stuff/libraries/demo apps to see where we are at with local LLMs. Most of what I am seeing is just very poor ports of server runtimes, which is terrible.

what's the point?

Can you make it train on my notes so it replies like me?

Termux -> tur-repo -> ollama -> qwen2.5-coder:1.5b

Works great on my Pixel 6.
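For anyone trying the same setup, a one-shot query from Python could look roughly like this; a sketch that assumes ollama serve is already running in another Termux session and that Python is installed alongside it:

```python
import subprocess

def ask(question: str, model: str = "qwen2.5-coder:1.5b") -> str:
    """One-shot question to the local model via the ollama CLI."""
    result = subprocess.run(
        ["ollama", "run", model, question],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Example: the kind of quick syntax reminder a small model handles fine.
print(ask("How do I reverse a list in Kotlin?"))
```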

Too small to be useful. What do you use it for?

For small reminders of how basic stuff (functions, syntax) in Python or Kotlin works - things I previously needed to google. Also great for instructions about the Linux terminal and bash scripts.

Sure, but I get too many hallucinations. I prefer at least 8B models.