Replying to HoloKat

One idea might be to pay one individual to run applications through an LLM to pre-screen for goal alignment and other criteria as decided by OpenSats. Assign each application a score and create two lists: those that go forward to human review (a much smaller number) and those that don't meet the score threshold. But instead of discarding the latter completely, pay a human to do a fast pass over them to see if anything was overlooked. The application form itself could probably be structured in a way that makes the green-light decision much easier.
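To make the triage step concrete, here's a minimal Python sketch assuming a numeric 0-10 score and a fixed cutoff; `Application`, `score_application`, and `SCORE_THRESHOLD` are all hypothetical names, and the actual LLM call is stubbed out since the post doesn't specify a model or API.

```python
from dataclasses import dataclass

# Hypothetical cutoff; in practice OpenSats would tune this.
SCORE_THRESHOLD = 7.0

@dataclass
class Application:
    applicant: str
    text: str

def score_application(app: Application) -> float:
    """Stub for the LLM call. In practice this would send the
    application text plus the agreed criteria to a model and
    parse a 0-10 score out of its response."""
    # Stubbed out here; any LLM API could slot in.
    return 0.0

def triage(apps: list[Application]) -> tuple[list[Application], list[Application]]:
    """Split applications into the two lists described above:
    those forwarded to full human review, and the below-threshold
    pile that still gets a fast human sanity check."""
    forward, fast_review = [], []
    for app in apps:
        if score_application(app) >= SCORE_THRESHOLD:
            forward.append(app)
        else:
            fast_review.append(app)
    return forward, fast_review
```

The structural choice that matters is keeping the below-threshold list around rather than deleting it, so the fast human pass over the "losers" stays cheap to add.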

This saves on review time, shortlists the best prospects and gives the “losers” a chance to still make it.

The process would become more accurate and more robust as the criteria are refined, resulting in better scoring.

I'd introduce acceptable timelines for the review process and perks for volunteers. That might motivate them to be more responsive and get applications reviewed faster. And if someone falls behind, look for new volunteers who can commit more time. The emphasis, of course, would be on filtering out the noise so humans have more solid applications to look at. nostr:note13hau2cmge9mzxaqnawum0plg7qtthgmtmcsu27usvv6pwmcmv2tqly9all

semisol 1y ago

Skip the LLM and have someone do an initial review
