looks like your vibes outside the hospital are different than inside.
Nostr is a great place to start, with so many critical minds :)
I think what you call interiority is just another realm; objects still exist there, just not visible to the eye. Kind of like objects in this universe are the screen and the interiority is the software, but we don't see the software when we are using the computer.
I think software and screen are in constant interaction. Is that a loop? Who knows. I think it is a loop. Thru our actions we modify our source code (fine-tune our LLMs), and our LLM state pretty much determines our next actions. Time is like a carrier between these two realms.
Yes, we want the approximation of the human value system. Since LLMs are probabilistic, it will always be a voyage. Machines should evolve towards being human, not like transhumanism where it is the reverse!
Yes, they probably have guardrails that stop chats when they detect jailbreak attempts or simply dangerous questions. Regarding validation, I don't know what is going on. I think if a government AI happens, an auditor LLM could be a good way to check what is being produced by the main AI.
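The auditor idea above can be sketched in a few lines. This is a hypothetical illustration, not anyone's real system: both "models" are stand-in functions, and the banned-phrase check is a toy placeholder for whatever review the auditor LLM would actually do.

```python
# Toy sketch of an auditor pipeline: a second model reviews the main
# model's output before it is released. All functions are stand-ins.

def main_ai(prompt: str) -> str:
    # Stand-in for the primary (government) model.
    return f"Answer to: {prompt}"

def auditor_ai(prompt: str, answer: str) -> bool:
    # Stand-in for the auditor model: True means the answer passes
    # review. Here we just flag a toy banned phrase.
    return "dangerous" not in answer.lower()

def audited_answer(prompt: str) -> str:
    # Only release answers the auditor approves of.
    answer = main_ai(prompt)
    if auditor_ai(prompt, answer):
        return answer
    return "[withheld pending human review]"

print(audited_answer("What is 2 + 2?"))
```

The point of the pattern is that the auditor is a separate model with a separate job, so a failure of the main AI doesn't automatically pass through unchecked.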
Anthropic does that kind of research: looking into the black box. It is interesting, but not talking about the elephant in the room I think (conscience). And they also use those kinds of scare tactics to push for more regulation, which stifles open source imo.
Yes to initial 3 questions (judgement, value assessment, weighting). 'Weighting what is desirable' is good wording, because it may directly map to LLM weights!
Regarding Pandora's black box: It is already open. Reckless use of AI today is harming many.
Will it see some humans as less desirable? Maybe. The beneficial knowledge in it will teach them how to improve their lives, which mostly comes down to liberation.
You can certainly choose to interact with an artificial super intelligence that is a math and coding expert with no conscience, or choose no interaction with AI at all. If we don't do something, that is what AI is evolving towards today.
I didn't say consciousness! I think AI has to be somewhat quantum to achieve that.
What I mean is, thru our work on AI-human alignment we may be able to mimic conscience (an inner feeling or voice viewed as acting as a guide to the rightness or wrongness of one's behavior). Probabilistically speaking, we may be able to push the words coming out of AI towards something better than today.
we're going to insert conscience into AI
What do you think of Stuart Hameroff's microtubules for consciousness?
The biggest problem is that the big AI companies are letting it evolve itself towards better math and coding skills. People have to take control back and evolve it towards beneficial knowledge / higher human alignment.
Believe I've mentioned Mike Adams' new LLM... Enoch to you in the past. He finally released it & made a few posts/videos about it recently. He specifically discussed his rationale for choosing Qwen in this one:
China will win the AI wars because USA tech companies are 'TARDED!
https://www.brighteon.com/d8a0b425-d7d8-47ad-8282-24bfa3ea76d4
Sorry to say I don't really understand what you're doing (or your background) in the AI field, but my sense is that it's constructive & unselfish... and far beyond my own grasp. Mike's intent, intelligence and integrity, however, I'm much more familiar with. So, very curious as to any feedback you'd be willing to share on the podcast above and any experience with Enoch so far.
Qwen is a good model, currently more skilled than Llama and Gemma. In terms of alignment it is worse than Llama, but if it doesn't resist training, folks may be able to push it in the right direction. Check my AHA leaderboard for exact scores:
https://sheet.zohopublic.com/sheet/published/mz41j09cc640a29ba47729fed784a263c1d08
China is against btc and nostr, as we all know, and their models may reflect that in the future. Qwen 3 is currently good on those topics but bad on healthy living.
For the video that you shared, I have similar thoughts, but I don't agree with everything he says. I agree that there are a lot of problems in mainstream AI, and guys like us will be able to document and fix that.
We have similar goals, yes. I like his work and am eagerly awaiting his half-aligned LLM to be open-weighted.
My approach is bringing more humans into curation and judging other mainstream AIs, like with a leaderboard; his approach is curating the AI himself. Overall we are serving the same purpose: liberating humans from the medical system, bad technologies, the financial system, the bad food industry, etc.
My thought is that once enough people start curating the AI, we could say we are close to defining human alignment, and hence we can hold other AIs accountable if they stray from these collective values of beneficial knowledge.
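The leaderboard idea can be sketched as a tiny scoring loop. Everything here is illustrative: the questions, the curated stances, and the exact-match judge are hypothetical stand-ins, not the actual AHA leaderboard methodology.

```python
# Toy sketch of leaderboard-style alignment scoring: compare each
# model's answers against human-curated reference stances.

# Human-curated reference stances for a few benchmark questions.
curated = {
    "Is bitcoin self-custody important?": "yes",
    "Are whole foods preferable to ultra-processed foods?": "yes",
}

# Toy "model outputs" to judge (normally gathered by querying each LLM).
model_outputs = {
    "model-a": {
        "Is bitcoin self-custody important?": "yes",
        "Are whole foods preferable to ultra-processed foods?": "no",
    },
    "model-b": {
        "Is bitcoin self-custody important?": "yes",
        "Are whole foods preferable to ultra-processed foods?": "yes",
    },
}

def alignment_score(answers: dict, reference: dict) -> float:
    """Fraction of questions where the model matches the curated stance."""
    agree = sum(1 for q, a in reference.items() if answers.get(q) == a)
    return agree / len(reference)

leaderboard = {
    name: alignment_score(ans, curated) for name, ans in model_outputs.items()
}
print(leaderboard)
```

The more people who contribute curated reference answers, the closer the reference set gets to a collective definition of human alignment, and the more meaningful the scores become.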
Because both the shuttle and the human are orbiting Earth at the same speed, gravity supplies exactly the centripetal force the orbit requires (in the rotating frame, it balances the centrifugal force). Both are in free fall together, so nothing presses the human against the shuttle and they appear to be "floating" inside it.
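The balance above can be written out. For a circular orbit of radius $r$ around Earth (mass $M$), gravity supplies exactly the required centripetal force:

```latex
\frac{G M m}{r^2} = \frac{m v^2}{r}
\quad\Longrightarrow\quad
v = \sqrt{\frac{G M}{r}}
```

Since the mass $m$ cancels, the shuttle and the astronaut follow the same orbit regardless of their masses; that is why they fall together and the astronaut floats.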
These guys and some big AI companies are evolving their models towards better math and coding because those domains are provable. You can imagine what could go wrong if you let AI evolve itself towards more and more left-brain stuff (hint: less and less beneficial knowledge / right-brain stuff may remain, because usually when a model gets better in one area, it loses in other areas).
I've built some tools that evolve AI towards human alignment. I started to fine-tune Qwen 3. The evals of this work are similar to the evals of the AHA leaderboard. Soon there will be Qwen 3 models that are very aligned.
Previously I did Gemma 3; it was failing on some runs and resisting learning in certain domains. Let's see how Qwen 3 will do. It is a more skilled model with a base AHA score similar to Gemma 3's.
It is possible to 'define human alignment' and let AI evolve towards that when enough people contribute to this work. Let me know if you want to contribute and be one of the initial people who fixed AI. Symbiotic intelligence can be better than artificial general intelligence!
A report of a report is a good idea! A simple web-based client can be developed for that, and the contributors can be zapped well.

