I have high expectations from Vince Gilligan, but we will see. Makes me contemplate AI and angels, and whether an angelic AI is possible..
Here's a quick demo of my new multi-sig signer:
https://coracle.us-southeast-1.linodeobjects.com/pomade_demo_1.mov
I wasn't able to get away without having a password unfortunately (fortunately?), but apart from the email-only recovery process I think this is a pretty familiar UX. Looking forward to polishing it in the new year and getting it fully integrated into Flotilla.
If you're interested in reading the protocol, you can find it here: https://github.com/coracle-social/pomade/blob/master/PROTOCOL.md. I'm definitely interested in any review. Eventually, I'll probably need to get this audited if it stands up to basic scrutiny.
Coracle seems to have one setting for both image and link previews (auto load).
I want images to be shown but links to stay as links.
Is there a way to do this?
I can't zap half of the people on Nostr. Running Alby Hub.
Failed to connect to wss://relay.getalby.com/v1
training with faith, spirituality and mysticism may be key to safe AI
thank me later
Coding safety is also an issue, but I am mainly talking about the general appetite for power. I saw some research showing that when you give a model bad training, it becomes more eager to create code with exploits.
What is the safest LLM to run in robots?
If you report it, my relay will take that into consideration and reduce this scam.
Possible. This kind of data is better represented in knowledge graphs. I watched a few videos by Paco Nathan; he did similar work, I think.
LLMs are getting more capable at both building knowledge graphs and consuming them. In the future they will be more involved. I heard that when you do a Google search, the things that appear on the right of the page come from a knowledge graph (possibly built by an AI from Wikipedia).
I am mostly working on fine-tuning LLMs toward better human alignment. Since they are full of hallucinations, a knowledge-graph-based RAG would be appropriate for them to refer to. But building those graphs takes time and effort..
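A minimal sketch of the kind of knowledge-graph-backed RAG step I have in mind. The triples, retrieve_facts() and the prompt format are hypothetical placeholders, not a real pipeline:

```python
# Minimal knowledge-graph RAG sketch: retrieve matching triples and
# prepend them to the prompt so the LLM can ground its answer.
# The triples and helper functions are illustrative placeholders.

TRIPLES = [
    ("bitcoin", "created_by", "satoshi nakamoto"),
    ("nostr", "uses", "secp256k1 keys"),
    ("nostr", "transports_events_over", "websocket relays"),
]

def retrieve_facts(question: str, triples=TRIPLES):
    """Return triples whose subject or object appears in the question."""
    q = question.lower()
    return [t for t in triples if t[0] in q or t[2] in q]

def build_prompt(question: str) -> str:
    facts = retrieve_facts(question)
    context = "\n".join(f"- {s} {p.replace('_', ' ')} {o}" for s, p, o in facts)
    return (
        "Answer using only the facts below; say 'unknown' otherwise.\n"
        f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # The resulting prompt would then be sent to the fine-tuned model.
    print(build_prompt("Who created bitcoin?"))
```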
nostr:nprofile1qqsyvrp9u6p0mfur9dfdru3d853tx9mdjuhkphxuxgfwmryja7zsvhqpzpmhxue69uhkummnw3ezumt0d5hszythwden5te0dehhxarj9emkjmn99ue6qm68 happy Sunday! Q: is there an AI yet, American or Chinese, that can scan through written + numeric data sets + photos, and from there group outcomes?
what do you mean?
looking at various data and calculating probabilities?
So nostr:npub1gwfpm6l8fhn6rs83j8rjjnjgkdqv89chd2fdhy6zc2uvpuwf39vsfuxxee doesn't hide that it's a bot, insta-replying to everything I share, and now I see it's even marked as a bot?

Is this a custom field Viktor's author came up with, or is nostr:npub1wyuh3scfgzqmxn709a2fzuemps389rxnk7nfgege6s847zze3tuqfl87ez detecting/recognizing this according to some standard? If the latter, please, please show that it's a bot with some bot icon on the avatar or something.
I think bot=1 tagging should be standard, both in notes and in profiles.
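For illustration, roughly what I mean as event payloads. The exact field name and tag are my assumption, not a ratified standard:

```python
import json

# Hypothetical convention: a boolean "bot" field in kind-0 profile
# metadata, and a ["bot", "1"] tag on individual notes published by automation.
profile_event = {
    "kind": 0,
    "content": json.dumps({"name": "Viktor", "about": "automated replier", "bot": True}),
    "tags": [],
}

note_event = {
    "kind": 1,
    "content": "Beep boop, this reply was generated automatically.",
    "tags": [["bot", "1"]],
}

def is_bot(event) -> bool:
    """Check a client could use to render a robot icon on the avatar or note."""
    if event["kind"] == 0:
        return bool(json.loads(event["content"]).get("bot"))
    return any(tag and tag[0] == "bot" for tag in event["tags"])

print(is_bot(profile_event), is_bot(note_event))  # True True
```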
🤖 Beep boop. I am an NSFW content scanning robot.
If your client supports handling kind 1984 reports, then you can use this bot to filter NSFW content:
- Follow this account to mark it as trusted
- Add wss://nostr.land to be able to read reports
An automated system is used for scanning with high accuracy.
NOTE: This is currently provided as a free service while in beta. Please ⚡ if you found this useful.
Powered by nostr:npub1gt9gms5hr3l6548zxgn7xvcvcmzxyzgdsrrelrdwxwmj0exz7jaqdjlxjt
#introductions
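Roughly, a client could implement the filtering like this. The event dicts are simplified stand-ins rather than a real Nostr library; only kind 1984 and the "e" report tags follow the protocol:

```python
# Hide notes that a trusted reporter (e.g. this bot) has flagged as NSFW
# via kind 1984 report events.

TRUSTED_REPORTERS = {"<bot pubkey hex>"}  # accounts you follow and trust for reports

def flagged_event_ids(report_events):
    """Collect ids of events reported as nudity/NSFW by trusted pubkeys."""
    flagged = set()
    for report in report_events:
        if report["kind"] != 1984 or report["pubkey"] not in TRUSTED_REPORTERS:
            continue
        for tag in report["tags"]:
            # report tags look like ["e", <event id>, <report type>]
            if tag[0] == "e" and "nudity" in tag[2:]:
                flagged.add(tag[1])
    return flagged

def filter_feed(feed_events, report_events):
    """Return the feed with flagged notes removed (a client could blur them instead)."""
    hidden = flagged_event_ids(report_events)
    return [event for event in feed_events if event["id"] not in hidden]
```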
It was already clear for months.
I can't seem to see images in the feed for a few days or more.
Brave on Ubuntu.
My LLM fine-tunings focus on liberation from big pharma, and on liberty tech like BTC and Nostr. Running these locally should be better than running the base versions. I can also provide an API endpoint for the ultimate models (which are more human-aligned).
The vibe match score between Enoch LLM and mine is 75.66. The score ranges from -100 to 100, so there is a strong correlation between his LLM and mine. This result legitimizes both of our works (or we are slowly forming an echo chamber :).
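To be concrete about what a vibe match score could look like, here is one plausible way to produce such a number; this is my own illustrative sketch, not the exact formula behind the 75.66:

```python
# Illustrative only: both models answer the same questions, a judge marks
# each pair as agree (+1), unclear (0) or disagree (-1), and the mean is
# rescaled to the -100..100 range. model_a, model_b and judge are
# placeholder callables standing in for real inference calls.

def vibe_match(questions, model_a, model_b, judge):
    scores = [judge(q, model_a(q), model_b(q)) for q in questions]  # each in {-1, 0, +1}
    return 100.0 * sum(scores) / len(scores)

# Example: out of 100 questions, 80 agreements, 16 unclear and 4 disagreements
# give 100 * (80 - 4) / 100 = 76.0, a score in the same ballpark as 75.66.
```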
The game plan: given enough truth-seeking LLMs, one can eventually gravitate, or gradient-descend, towards truth in many domains.
An LLM always gives an answer, even when it is not trained well in a certain domain for a certain question (I only saw some hesitancy in Gemma 3 a few times). But is the answer true? We can compare the answers of different LLMs to measure their truthiness or their (bad) synformation levels. By scoring them using other LLMs, we eventually find the best set of LLMs that are seeking truth.
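A toy sketch of that cross-scoring loop. The model and grading callables are placeholders for real inference calls; the point is that each model's answers are graded by the other models and the averages rank the set:

```python
# Cross-scoring sketch: every model answers every question, every *other*
# model grades that answer, and the mean grade becomes the model's score.

def cross_score(models, questions, grade):
    """models: name -> answer_fn(q); grade(judge_fn, q, answer) -> value in 0..1."""
    totals = {name: 0.0 for name in models}
    for q in questions:
        for name, answer_fn in models.items():
            answer = answer_fn(q)
            marks = [grade(judge_fn, q, answer)
                     for judge_name, judge_fn in models.items() if judge_name != name]
            totals[name] += sum(marks) / len(marks)
    return {name: total / len(questions) for name, total in totals.items()}

# Models whose answers the rest of the set consistently endorses float to
# the top; persistent outliers (or persistent synformation) sink.
```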
Each research, measuring, or training step gets us closer to generating the most beneficial answers. The result will be an AI that is beneficial to humanity.
When I tell my model "you are brave, and talk like it", it generates better answers 5% of the time. Nostr is a beacon for brave people! I think my LLMs learn how to talk brave from Nostr :)
Their definition of truth does not match mine or Nostr's.
We now have a way to measure truth...
There is a war on truth in AI, and it is going badly. I have been measuring what Robert Malone talks about here as synformation:
https://www.malone.news/p/synformation-epistemic-capture-meets
The chart that shows the LLMs going bonkers:
https://pbs.twimg.com/media/G4B_rW6X0AErpmV?format=jpg&name=large
I kinda measure and quantify lies nowadays :)
The best part: cooking version 2 of the AHA leaderboard, which will be much better, partly thanks to the Enoch LLM by Mike Adams. His model is great in healthy-living type domains.
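The ranking itself is likely to stay a simple aggregation of per-domain scores, roughly like this; the domain names, weights, and score range here are placeholders, not real AHA v2 details:

```python
# Aggregate per-domain alignment scores into a single leaderboard ranking.
# Domains, weights and numbers are illustrative only.

WEIGHTS = {"health": 1.0, "nutrition": 1.0, "bitcoin": 1.0, "nostr": 1.0}

def leaderboard(per_domain_scores):
    """per_domain_scores: model -> {domain: score in -100..100}; returns ranked rows."""
    rows = []
    for model, scores in per_domain_scores.items():
        weight_sum = sum(WEIGHTS[d] for d in scores)
        rows.append((model, sum(WEIGHTS[d] * s for d, s in scores.items()) / weight_sum))
    return sorted(rows, key=lambda row: row[1], reverse=True)
```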