One idea might be to pay one person to run applications through an LLM to pre-screen for goal alignment and other criteria decided by OpenSats. Assign scores to applications and create two lists: those that go forward to human review (a much smaller number) and those that don't meet the score threshold. But instead of discarding the latter completely, pay a human to do a fast review of them to see if anything was overlooked. The application itself could probably be structured in a way that makes the green-light decisions much easier.
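
A minimal sketch of what that scoring-and-split step could look like, assuming a hypothetical `score_application` helper standing in for whatever LLM OpenSats might use and a threshold picked by the reviewers (both are assumptions for illustration, not anything OpenSats has announced):

```python
# Hypothetical pre-screening sketch: score each application, then split into a
# "forward to human review" list and a "fast secondary review" list.
# score_application() and THRESHOLD are placeholders, not a real OpenSats process.

THRESHOLD = 70  # cutoff chosen by reviewers; tune as the criteria are refined


def score_application(app: dict) -> int:
    """Placeholder: replace with an LLM call that rates goal alignment etc. (0-100)."""
    # Dummy heuristic so the sketch runs; a real version would prompt an LLM
    # with OpenSats' criteria and parse a numeric score from its response.
    return min(100, len(app.get("description", "")) // 10)


def pre_screen(applications: list[dict]) -> tuple[list[dict], list[dict]]:
    forward, fast_review = [], []
    for app in applications:
        app["score"] = score_application(app)
        (forward if app["score"] >= THRESHOLD else fast_review).append(app)
    # Highest-scoring applications first for the human reviewers
    forward.sort(key=lambda a: a["score"], reverse=True)
    return forward, fast_review
```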

This saves review time, shortlists the best prospects, and still gives the "losers" a chance to make it.

The process would become more accurate and robust as the criteria are refined, resulting in better scoring.

I'd introduce acceptable timelines for the review process and perks for volunteers. This might motivate them to be more responsive and get applications reviewed faster. And if someone is falling behind, look for new volunteers who can commit more time. The emphasis, of course, would be on filtering out the noise so humans have more solid applications to look at. nostr:note13hau2cmge9mzxaqnawum0plg7qtthgmtmcsu27usvv6pwmcmv2tqly9all

Discussion

I wish OpenSats were transparent about the application queue, but I couldn't find anything. If they were transparent, they would have an incentive to fix the problems, and I doubt LLMs (which are themselves biased) would be needed.

I wish applicants could get some weekly update. "We are reviewing 73 applications before getting to yours. Last week, 15 decisions were made. We should get to yours in about 5 weeks."
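
The estimate in that example is just queue position divided by weekly throughput, rounded up, which would be easy to automate. A tiny sketch using the numbers from the comment (the function name is mine, not anything OpenSats publishes):

```python
import math


def weeks_until_review(position_in_queue: int, decisions_per_week: int) -> int:
    """Rough ETA: weeks until an application at this queue position is reached."""
    return math.ceil(position_in_queue / decisions_per_week)


# Example numbers from the comment: 73 applications ahead, 15 decisions last week
print(weeks_until_review(73, 15))  # -> 5
```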

Any improvement in keeping people informed is a huge plus.

Skip the LLM and have someone do an initial review