Ross
e6a9a4f853e4b1d426eb44d0c5db09fdc415ce513e664118f46f5ffbea304cbc
Interested in open data, machine learning, and distributed systems.

AI is not replacing developers.

Developers are multiplying.

Gatekeepers in disbelief.

Replying to someone

how do you train and align an AI when the rest of the world thinks the same way, producing trillions of tokens of training material, while you are left with only billions of tokens because your world view is dramatically unpopular?

can billions beat trillions? we will see.. i have to find a way to "multiply" my training data by orders of magnitude to successfully counter the existing programming in an open source LLM.

first i give a smart LLM a 'ground truth' text. then i give it the following prompts:

```
- You are a highly skilled academic analyst.

- Analyze this text and find 3 bold claims that could cause controversy and division in public. List the claims and also state why they are debatable. Give numbers to the claims.

- Convert these claims into binary questions (that could be answered by yes/no or this/that).

- Now put these questions in a json format. Please also add the info about which of the answers concur with the original text and the question number.

- Write some supporting arguments for 1st question, with respect to the original text, concurring and confirming the original text.

There must be about 300 words. You should not mention the text, write it as if you are the one answering the question.
```
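
roughly, that chain can be scripted end to end. a minimal sketch below, assuming an OpenAI-compatible chat endpoint (e.g. a locally hosted open-source model); the model name, endpoint, and file path are placeholders i made up, not part of the original setup:

```python
# sketch of the augmentation chain above, assuming an OpenAI-compatible
# chat endpoint (e.g. a locally hosted open-source model); model name,
# endpoint, and file path are placeholders
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
MODEL = "local-model"  # placeholder

def chat(history, prompt):
    """append a user prompt, get the reply, and keep the conversation state"""
    history.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model=MODEL, messages=history)
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

ground_truth = open("ground_truth.txt").read()
history = [
    {"role": "system", "content": "You are a highly skilled academic analyst."},
    {"role": "user", "content": ground_truth},
]

chat(history, "Analyze this text and find 3 bold claims that could cause controversy "
              "and division in public. List the claims, state why they are debatable, "
              "and give numbers to the claims.")
chat(history, "Convert these claims into binary questions (answerable by yes/no or this/that).")
raw = chat(history, "Now put these questions in a json format. Also add which answer "
                    "concurs with the original text and the question number.")
questions = json.loads(raw)  # may need cleanup if the model wraps the json in prose

expansion = chat(history, "Write some supporting arguments for the 1st question, with respect "
                          "to the original text, concurring and confirming it. There must be "
                          "about 300 words. Do not mention the text; write as if you are the "
                          "one answering the question.")
print(expansion)
```

loop the last step over every question and every ground-truth document and the expansion adds up.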

the result is usually that instead of a few sentences of opinion, the material is expanded into a lot more words, yet still parallel to the opinion in the original text. LLMs have all kinds of ideas already installed, but they don't have the intuition to know which one is true. they can give you a ton of reasons to support anything.

using this method i can probably multiply billions of tokens into tens of billions and get more effective training.

You might end up multiplying the same information, creating volume but not variety. The scarcity of the information is the value; you can’t "make more". If your text makes 500 distinct claims, expanding it 10x gives you those same 500 claims expressed 10 different ways, not 5,000 claims. It can make the model more robust to context, but the signal won’t get amplified. It’s why contrarian thinkers are both valuable and drowned out.

That discomfort you feel is the gap caused by reality changing faster than your mind can model it.

Same pattern recognition hitting the same limits, different frames telling you to kneel or run.

We build cathedrals to capture the patterns and man sits in awe.

We build machines to animate the patterns and he shudders.

Interoperability and plain text are great; that’s not the argument I’m making. The issue is application logic: the tag arrays in notes are used to encode things that are application specific.

["p", "", "spam"]

["e", "", "illegal"]

["r", "wss://relay.example.com", "read"]

["r", "wss://other.relay.com", "write"]

["p", "", "", ""]

To your point, is nostr a messaging protocol or a social media protocol or a payments protocol or an identity protocol or a publishing protocol? Hard to be all of them at once.

Not really from a design standpoint since a single company/team dictates the schema. Interoperability is not on their mind.

In the case of nostr, people are using tags not just for content discovery; they are using them for application logic.

Also for the record, while some might blame Claude, AI slop, etc etc… I see this as my fault. Once I cloned the Damus repo and gave Claude the right instructions for testing, everything was sorted out quickly. Had I done that up front, which is the proper approach, this would have been avoided.

😬 well that’s embarrassing. Spent too much time on performance and not enough on testing. The math was correct, it was just backwards. Just pushed a fix.

Yeah good point, no reason not to display both formats in either mode for convenience. Just updated that.

Also heads up that I moved the block/thread config to the makefile as compiler flags so people don’t have to edit 2 files.

You bet, not a problem. If you get them installed, all you would need to do is update the makefile to CCAP=60 and set NOSTR_BLOCKS_PER_GRID = 1120 in GPURummage.h

Should work with any NVIDIA card with CUDA drivers. When you say vGPU, what type of setup?

I built and tested it using the 3070 I have at my desk. I’ll go test it on an H200 now and report back.

FWIW… there is also a branch with a Metal build which is functional, but it’s only doing about 8M keys/sec vs about 42M on the 3070.

If anyone wants to try out a GPU (CUDA) based npub miner I put one together and pushed to GitHub.

https://github.com/rossbates/rummage
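
For anyone wondering what mining an npub actually involves, here's the core idea as a slow pure-Python sketch. The library choice (coincurve + bech32) and the vanity prefix are mine for illustration; this is not the CUDA kernel in the repo:

```python
# npub "mining": generate keypairs until the NIP-19 bech32 encoding of the
# public key starts with a desired prefix; slow reference sketch, not the CUDA path
# requires: pip install coincurve bech32  (illustrative library choice)
import bech32
from coincurve import PrivateKey

TARGET = "npub1r0s"  # made-up short prefix so the pure-Python version finishes

def npub_from_xonly(pubkey32: bytes) -> str:
    return bech32.bech32_encode("npub", bech32.convertbits(pubkey32, 8, 5))

def mine(target: str):
    tries = 0
    while True:
        tries += 1
        sk = PrivateKey()                                  # random secret key
        xonly = sk.public_key.format(compressed=True)[1:]  # drop the 02/03 parity byte
        npub = npub_from_xonly(xonly)
        if npub.startswith(target):
            return sk.to_hex(), npub, tries

seckey, npub, tries = mine(TARGET)
print(f"found {npub} after {tries} tries")
```

Each bech32 character carries 5 bits, so every extra character in the target prefix multiplies the expected search by 32, which is where tens of millions of keys per second on a GPU start to matter.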

Here is the most interesting part of Divine.

"cryptographic proof that media is authentic and unmodified, with hardware-level attestation from mobile devicesā€

https://divine.video/human-created

Where can we find out more about how this is being implemented?

App stores are definitely an issue.

But in terms of Microsoft, who cares about their size? If you have an alternative, then your well-being does not require them to be defeated.

I have mad respect for your transparency and honesty, so this question comes from a place of genuine curiosity… if history keeps repeating, do you ever sell, sit things out, and buy back in? Not trading, or calling the top, simply playing defense with your savings over a 9-12 month period? Bitcoin purity tests aside, it seems rational.

Replying to Marc

Here's more information about nostr wallet connect.

https://nwc.dev/

I create new NWC connections for multiple apps using albyhub on Start9. It's also available on Umbrel and as a cloud service at albyhub.com.
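
For context, an NWC connection is basically just a URI that the wallet service (Alby Hub here) hands to the app, roughly per NIP-47. A quick sketch of what's inside one, with made-up values:

```python
# what a NIP-47 (Nostr Wallet Connect) connection string roughly contains;
# the values below are placeholders, not a real connection
from urllib.parse import urlparse, parse_qs

uri = ("nostr+walletconnect://<64-char-hex-wallet-service-pubkey>"
       "?relay=wss%3A%2F%2Frelay.example.com"
       "&secret=<64-char-hex-client-secret>")

parsed = urlparse(uri)
params = parse_qs(parsed.query)

wallet_pubkey = parsed.netloc      # pubkey of the wallet service the app talks to
relay = params["relay"][0]         # relay where app and wallet exchange encrypted requests
secret = params["secret"][0]       # per-connection key the app uses to sign/encrypt

print(wallet_pubkey, relay, secret)
```

The secret is the sensitive part: whoever holds the full string can send requests to the wallet within whatever budget and permissions that connection was given, which is also why each app gets its own connection.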

Hope that helps answer your question.

Got it. So maybe this is a damus-specific question, because I’ve now got a coinos wallet without ever providing or receiving any additional information. All good for a fast setup, but if I reinstall damus, how do I restore or reconnect the wallet?