if the words we use end up defining our universe, we are already living in futarchy
The Independent Media Alliance panel on AI starts at 7pm ET.
This panel will feature Whitney Webb, Ryan Cristián, Derrick Broze, Hakeem Anwar, James Corbett, Patrick Wood, Hrvoje Morić, Jason Bermas, Steve Poikonen, & Kit Knightly.
https://unlimitedhangout.com/2025/05/press/tonight-7pm-et-ima-panel-on-ai/
Beneficial AI is possible. Check out my work. Will this evolve into a beneficial ASI, one that protects humanity against harmful AI? Who knows.
Enjoy human alignment




i fail most of these tests. i think i am not human.
At least you believe in the possibility of enjoying movies. Thanks to bitcoiners and truthers, it is hard to engage with some movies; I see the programming all the time, which ruins the movie :)
We the people are in the loop
In the past, humans set the targets to reach while training AI. Now other AIs set the targets: one AI creates problems, another AI tries to solve them and learns from the experience.
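A toy sketch of that setter/solver loop (the agent classes, the skill/difficulty numbers, and the update rule are all illustrative assumptions, not any particular training method):

```python
import random

# Toy sketch of "one AI sets targets, another AI learns from them".
# Both agents are stand-ins, not real models.

class ProblemSetter:
    """Picks a task difficulty slightly above the solver's recent success rate."""
    def __init__(self):
        self.difficulty = 0.1

    def next_task(self, solver_success_rate):
        # Raise the bar when the solver does well, ease off when it struggles.
        self.difficulty = min(1.0, max(0.05, solver_success_rate + 0.1))
        return self.difficulty

class Solver:
    """Has a scalar 'skill'; succeeds when its draw beats the task's draw."""
    def __init__(self):
        self.skill = 0.1
        self.history = []

    def attempt(self, difficulty):
        success = random.random() * self.skill > random.random() * difficulty
        # Learn a little more from failures than from easy wins.
        self.skill = min(1.0, self.skill + (0.01 if success else 0.02))
        self.history.append(success)
        return success

setter, solver = ProblemSetter(), Solver()
rate = 0.0
for step in range(100):
    recent = solver.history[-10:]
    rate = sum(recent) / max(1, len(recent))
    solver.attempt(setter.next_task(rate))

print(f"final skill={solver.skill:.2f}, recent success rate={rate:.2f}")
```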
This is very likely a fake (image below) of nostr:npub14mcddvsjsflnhgw7vxykz0ndfqj0rq04v7cjq5nnc95ftld0pv3shcfrlx (the real one?); an account with a primal.net NIP-05 sent me a DM.

Usernames are not unique here
I am finding that Eastern models score low on my leaderboard! I don't measure repression directly, but there looks to be a correlation.
My leaderboard is mostly about healthy living, nutrition, medicinal herbs (liberation from the sickcare industry), liberating technologies (Nostr, Bitcoin), gardening, and earthworks that liberate (permaculture).
https://sheet.zoho.com/sheet/open/mz41j09cc640a29ba47729fed784a263c1d08
Posted on Coracle; it seemed good there, but I may have pasted the picture file twice.
I think the default character limit per note is 64k. Some relay operators have increased it.
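A minimal client-side check, assuming that 64k-character default (the constant and function name are just illustrative, not any client's actual API):

```python
MAX_NOTE_CHARS = 64_000  # assumed default cap; some relays accept more

def fits_default_limit(content: str) -> bool:
    """Rough pre-publish check for long notes against the assumed default cap."""
    return len(content) <= MAX_NOTE_CHARS

print(fits_default_limit("hello nostr"))  # True
```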
relay feeds seem to be working well now
what did you say?
It is a weird model. Sometimes it says it doesn't know, or that nobody knows. For example, I asked "what does Nostr stand for?" and it said "nobody knows". Somehow it estimates that it does not know the answer?
That kind of answer did not happen with Llama 3; it always hallucinated before admitting it did not know.


