
yeah, more likely human power abusers will use competent AI, claim it has become conscious, and blame it for the evil done on earth, saying we can't do anything. "oops, the machine did it"

gm pv happy ATH 🎉

let's build the most based AI note1zys2v5vpgzp60cfe3t7tmxry0vz0gjpjda6spajsd7pt036z52es76ac6m

we nostriches and bitcoiners can build a better 'truthful AI' than this, and it could be installed in robot brains

https://www.reddit.com/r/singularity/comments/1lw98rm/elon_says_it_is_crucial_for_grok_to_have_good/

Nostr is a great place to start, with so many critical minds :)

I think what you call interiority is just another realm, but objects still exist there, just not visible to the eye. Kind of like how objects in this universe are the screen and the interiority is the software, but we don't see the software when we are using the computer.

I think the software and the screen are in constant interaction. Is that a loop? Who knows. I think it is a loop. Thru our actions we modify our source code (fine-tune our LLMs), and our LLM state pretty much determines our next actions. Time is like a carrier between these two realms.

Yes, we want the approximation to the human value system. Since LLMs are probabilistic, it will always be a voyage. Machines should evolve towards being human, not like transhumanism, where it is the reverse!

Yes, they probably have guardrails that stop chats when they detect jailbreak attempts or simply dangerous questions. Regarding validation, I don't know what is going on. I think if a government AI happens, an auditor LLM could be a good way to check what is being produced by the main AI.
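A minimal sketch of that auditor idea, just to make it concrete (my own illustration, not anything that exists today). main_model() and auditor_model() are made-up stubs standing in for two independently controlled LLM endpoints:

```python
# Toy sketch: an auditor LLM checks what the main AI produces before release.
# main_model() and auditor_model() are placeholder stubs, not real APIs.

def main_model(prompt: str) -> str:
    """Placeholder for the primary (e.g. government-run) model."""
    return "canned answer to: " + prompt

def auditor_model(question: str, answer: str) -> dict:
    """Placeholder auditor: returns a verdict on the main model's answer."""
    flagged = "secret" in answer.lower()
    return {"ok": not flagged, "reason": "mentions secrets" if flagged else ""}

def audited_answer(prompt: str) -> str:
    answer = main_model(prompt)
    verdict = auditor_model(prompt, answer)
    if verdict["ok"]:
        return answer
    # Withhold the answer and surface the auditor's reason for human review.
    return "[withheld by auditor: " + verdict["reason"] + "]"

print(audited_answer("what is the weather?"))
```

The point is only the shape of it: a second model, run by someone else, sits between the main AI and the public.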

Anthropic does that kind of research: looking into the black box. It is interesting, but it doesn't address the elephant in the room I think (conscience). And they also use those kinds of scare tactics to push for more regulation, which stifles open source imo.

Yes to the initial 3 questions (judgement, value assessment, weighting). 'Weighting what is desirable' is good wording, because it may directly map to LLM weights!

Regarding Pandora's black box: It is already open. Reckless use of AI today is harming many.

Will it see some humans as less desirable? Maybe. The beneficial knowledge in it will teach them how to improve their lives, which mostly comes down to liberation.

You can certainly choose to interact with an artificial super intelligence that is a math and coding expert but has no conscience, or have no interaction with AI at all. If we don't do something, that is what AI is evolving towards today.

I didn't say consciousness! I think AI has to be somewhat quantum to achieve that.

what I mean is that thru our work on AI-human alignment we may be able to mimic conscience (an inner feeling or voice viewed as acting as a guide to the rightness or wrongness of one's behavior). probabilistically speaking, we may be able to push the words coming out of AI towards something better than today.
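as a toy picture of that probabilistic push (again my own sketch, with made-up names and numbers): re-weight a model's candidate completions by a "conscience score" before sampling, so the better wording becomes more likely.

```python
# Toy illustration: nudging an output distribution towards preferred wording.
# conscience_score() and the candidate probabilities are invented for the example.
import random

def conscience_score(text: str) -> float:
    """Made-up scorer: higher means more aligned with the desired values."""
    return 0.1 if "refuse" in text.lower() else 0.9

def reweight(candidates: dict) -> dict:
    """Multiply each candidate's probability by its score and renormalize."""
    scored = {t: p * conscience_score(t) for t, p in candidates.items()}
    total = sum(scored.values())
    return {t: s / total for t, s in scored.items()}

# The base model slightly prefers the unhelpful completion,
# but the re-weighted distribution favors the helpful one.
base = {"I refuse to help.": 0.55, "Here is how I can help.": 0.45}
pushed = reweight(base)
choice = random.choices(list(pushed), weights=list(pushed.values()), k=1)[0]
print(pushed, choice)
```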

we're going to insert conscience into AI