i am impressed by llms.
using only prompts, i could make an llm filter chatty notes from informative ones. the informative (encyclopedia-style) notes will then go into training another llm. so one llm helping out another llm! weird times. sometimes they argue, sometimes they support each other :)
in the past we had to show models thousands of examples before they learned anything. now i give one 10 examples and it understands the task.
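here's a minimal sketch of that kind of few-shot filter, assuming a local ollama server; the endpoint, model name, and example notes are my placeholders here, not the exact prompt i used:

```python
# few-shot note classifier: CHAT vs INFO. assumes a local ollama
# server is running; model name and examples are illustrative.
import requests

FEW_SHOT = """classify each nostr note as CHAT or INFO.

note: GM nostr! coffee time
label: CHAT

note: NIP-05 maps a nostr pubkey to a DNS-based identifier
label: INFO

note: bitcoin halving cuts the block subsidy in half every 210000 blocks
label: INFO

note: {note}
label:"""

def classify(note: str, model: str = "llama3:70b") -> str:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": FEW_SHOT.format(note=note), "stream": False},
        timeout=120,
    )
    return r.json()["response"].strip()

print(classify("gm gm, what a beautiful morning"))  # -> CHAT, hopefully
```

a handful of labeled examples in the prompt is all the "training" the big model needs.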
this could also be used to keep the best notes on the relay and discard most of the chat. things like GM can be classified as daily chat and removed, while notes talking about nostr tech or bitcoin tech could stay on the relay longer. the problem with this approach is that analyzing each note takes about 2 seconds with a 70b model. and i tried llama3 8b, it sucks. so one approach could be to generate 10000 responses using the 70b and teach those to the 8b, then use the 8b to analyze the 1.6 million notes quickly.
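a rough sketch of that distillation step, reusing the classify() helper from above; notes.jsonl, the "content" field, and the 10000 cap are all assumptions for illustration:

```python
# have the slow 70b teacher label a sample of notes, writing a
# prompt/completion dataset the 8b student can be fine-tuned on.
import json

def build_teacher_dataset(src="notes.jsonl", dst="train.jsonl", limit=10000):
    with open(src) as fin, open(dst, "w") as fout:
        for i, line in enumerate(fin):
            if i >= limit:
                break
            note = json.loads(line)["content"]          # assumed field name
            label = classify(note, model="llama3:70b")  # ~2s per note
            fout.write(json.dumps({"prompt": note, "completion": label}) + "\n")

build_teacher_dataset()
```

the resulting train.jsonl could then go into whatever fine-tuning setup fits (a LoRA trainer, for example), and the cheap 8b student handles the 1.6 million notes.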