prompt injection is still a big deal sadly… there are actually dedicated summarization models out there.
you could fine-tune existing summarization models to take additional context input, like reply chains, via special tokens
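A minimal sketch of the reply-chain idea: linearize a post and its replies into one input string with delimiter tokens before feeding a summarizer. The token names (`<post>`, `<reply>`) are illustrative assumptions, not from any library; in practice you'd register them with the model's tokenizer (e.g. `tokenizer.add_special_tokens` in HF transformers) before fine-tuning.

```python
# Sketch: flatten a reply chain into one string with delimiter tokens,
# so a summarizer fine-tuned on this format can use the conversational
# context. The <post>/<reply> tokens are hypothetical placeholders.

def linearize_reply_chain(post: str, replies: list[str]) -> str:
    """Join a post and its reply chain with delimiter tokens."""
    parts = [f"<post> {post}"]
    parts += [f"<reply> {r}" for r in replies]
    return " ".join(parts)

chain = linearize_reply_chain(
    "Prompt injection is still a big deal.",
    ["Dedicated summarizers help.", "Fine-tune with reply context."],
)
# chain now holds the delimited string ready for tokenization
```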
Curator LLMs for doomer and whitepill feeds