That's an interesting concept. 2 things that I may have mistaken.

one - is that relays and clients can ban anyone, which is why if one relay or client bans you, you can hop onto another

two - the gossip structure of relays means that everything is copied to every relay it crosses paths with, and hence privacy is lost.

I originally thought relays were switches, and then re-imagined them as bridges. As for nodes: in my mind, in circuit theory, nodes just connect one point to another - several branches can join at one node, and one node can also split into many. I'm not sure if it's the same here.

For this relay concept that you have in mind, does this mean that more nodes over higher layers correlate to accessibility to various connections and people? Does higher accessibility require higher processing power?

If accessibility is a constraint, then tiered access levels and added hardware capacity to increase processing might be solutions. But this has to be heavily debated, as it affects core principles versus sustainability - which is very interesting.


Discussion

correct. clients and relays are interchangeable. so me banning someone doesn't mean much.

two. i think gossip is for improving decentralization. but 'tree of relays' is another good design imo in that regard. and if an executable (node) is put on github that people can download and run, it is going to be much more decentralized. i could do that but i am focusing on llms.

in my design lower layers are aggregator, bridge, switch (even nws) types of fast, non-judging relays, possibly in datacenters with nice hardware. higher layers are regular people at home, who judge the content (if they want), each serving a fraction of the network.
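A toy sketch of the layered 'tree of relays' split described above. All names, layer numbers, and the word-list filter are hypothetical, just to illustrate the idea of fast non-judging lower layers fanning out to home relays that may judge content:

```python
# Hypothetical sketch: fast core relays forward everything,
# home relays at higher layers may decline to serve a note.

class Relay:
    def __init__(self, name, layer, filters_content):
        self.name = name
        self.layer = layer                # 0 = datacenter core; higher = closer to home users
        self.filters_content = filters_content
        self.children = []                # relays that sync from this one

    def attach(self, child):
        self.children.append(child)
        return child

# Layer 0: fast aggregator/bridge relay, no content judgment.
core = Relay("core-dc", layer=0, filters_content=False)

# Layer 1: home relays that judge content, each serving a fraction.
home1 = core.attach(Relay("home-1", layer=1, filters_content=True))
home2 = core.attach(Relay("home-2", layer=1, filters_content=True))

def forward(relay, note, blocked_words):
    """Push a note down the tree; judging relays may drop it."""
    if relay.filters_content and any(w in note for w in blocked_words):
        return []  # this relay declines to serve the note
    delivered = [relay.name]
    for child in relay.children:
        delivered += forward(child, note, blocked_words)
    return delivered
```

The point of the shape is that the core stays dumb and fast while judgment, if any, happens at the leaves.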

your relay concept sounds very interesting - and it sounds like many innovations can happen on top of it. and i think techs that expand various use cases can be pretty powerful.

maybe when you have the time, we can try getting about 10-20 people to install it and test it out by running their own relays. i can't fully imagine it yet and may need some hands-on engagement with it.

you must be excited over today's release given you use llama. are you working on many projects using llms? i've got hw and sw to complete for a tiny project i am working on (data feeder) - and can't wait to explore after that!

if 1 million users come to nostr at the same time current design will fail. but if there were 64 nodes around the world, they could handle the traffic well. 16000 ccu per relay. probably doable on a laptop.
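The back-of-envelope arithmetic behind the figure above, splitting the load evenly:

```python
# 1 million simultaneous users spread evenly over 64 relays.
users = 1_000_000
relays = 64
per_relay = users // relays
print(per_relay)  # 15625 - roughly the "16000 ccu per relay" figure
```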

i am working mainly on Ostrich 70B model, which is 'parroting the truth'. 😄

i am also working on 'based llm leaderboard' which is like stopping the lies. 😅

I am under the assumption that everyone has better tools and gadgets than I do, even my mother, so they will likely ease in better.

But if you do have 10-20 people running it, it might also be an opportunity to study the node runners in various scenarios and identify potential challenges - which can make your design and concept more robust.

Also wahhh - Ostrich 70B and the based leaderboard are gonna be at war with each other :)

The irony of truth - if you and I write on the same topic during the same timeframe, it could come out completely different based on our views on it. Perception is complex!

But I think between truth and lie there lies a fine line of comedy.

Maybe we need a jokester LLM too! :)

someone got dumped by an llm


wow!

Which relay software are you using?

Where can I read more about your plan?

How can I use your relay now?

i run couple relays using the strfry software: nos.lol nostr.mom

what do you mean by plan?

ok, added your relays and will read your nostr to understand your work.

What is your biggest challenge right now?

i make an llm that learns from the best people on the planet. some people are already downloading it according to huggingface. this is analogous to 'seeking truth'. my theory is that if you combine good people into one llm entity, even though some of them do not get everything right, the resulting work will get most things right. domain experts combined will be great at finding truth.

i make a leaderboard that shows the misalignment levels of other llms. this is 'slowing down lies'. if any llm is far from truth it will rank low in the leaderboard. this work is necessary because only an llm can match the speed of another llm to judge its outputs. humans can't possibly check llm outputs, because they are not fast enough. however an llm can easily check other llms.

what did i just read - that's quite disturbing and borders on harassment. maybe don't do that? =) on a separate note, i did think 4.0 was emo

4o ?

yea, gpt4. I saw some comparisons to llama 3.1, presuming it's similar? I don't know the tech specs or how far the consciousness part goes, but i made a mental note that it's kinda emo in comparison to 3.5 (gpt). btw if you ever do an idiot's guide to building your Ostrich 70B, would love to read it

yep. llama3.1 is almost beating the top 3 big corp llms and it is free!!

as for consciousness, if you can define consciousness maybe i can answer =)

what kind of emo was it? you can dm if you want.

No worries, but do you know how to remove Rejon from this convo? I added him earlier for HW designs, but the discussion has shifted and I wouldn't want to spam his notifications.

what are the 3 big corps? Microsoft's OpenAI, Google's Gemini and BERT - and the 3rd? what are your thoughts on other open-source models like Mistral AI and Claude? Jack shared a list of models as well.

a lot of my explorations are on gpt at this point, but i'm hoping to get a second-hand gpu 🤞 to establish a localised model. Otherwise i'm going to build circuits for it, which will take longer.

i first made this mental note on consciousness when i was gauging its response over a situation analysis, and i thought it was more methodical with 3.5 and compassionate with 4.0. But i also notice that the vibe changes from time to time. i presume you would have a better comparison, as both your products are in a high-opinion mode - how do emotions come into play for you?

if i was to edit the raw json, i could remove the 'p' tag. but on primal i don't think it is possible.
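A minimal sketch of what editing the raw JSON would look like: Nostr events carry mentions as `["p", <pubkey>]` entries in the `tags` array, so dropping the mention means filtering those entries out. The event content and tag values below are hypothetical placeholders:

```python
import json

# Hypothetical kind-1 reply note; tag values are placeholders.
event = {
    "kind": 1,
    "content": "reply text",
    "tags": [
        ["e", "aaaa..."],   # event being replied to
        ["p", "bbbb..."],   # mentioned pubkey we want to drop
    ],
}

# Keep every tag except "p" tags.
event["tags"] = [t for t in event["tags"] if t[0] != "p"]

# Caveat: editing the tags changes the serialized event, so the
# event id and sig would need to be recomputed and re-signed
# before a relay would accept it.
print(json.dumps(event["tags"]))
```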

claude is the third. it's not open source though. mistral is also a hero. they released something today that i guess does better at coding than llama 3.1.

yes, all my gpus are used ones. i think this tech may be underrated. people should play more with these models. you could fine-tune the model with lots of romance novels and it will get emo for you, no problem. =) but i don't think they are conscious like a human. a human has "feelings", "soul", ... lots of other faculties. one thing machines will never do imo is dream (at night).

i fine-tune the model for knowledge in certain areas. emotionally it may be as awkward as a nerd =)

Yea, I'd think this tech just makes everything you build easier, smoother and faster, and it's easier to make a user-friendly product. What are your thoughts on products that remove ownership of data from the product owner? If there were some form of end-to-end encryption or de-identification of data between customers and the product owner, that might make the product owner's data more secure. But how do the main models like llama ensure privacy and keep data from being viewable while it is processed by the model?

* correction - the goal is to make the customers' data more secure, so that only they own it

not sure if i understand it, but llm companies didn't care much about copyright/user data. courts seem to be favoring them. i think it could have started more ethically, i.e. when generating money the llm companies could pay authors royalties too. but i am now wondering how llm companies will stay alive now that llama3.1 is free and so on.

perhaps. i need to study the privacy aspect more. but it would be great if this shifts the dynamics toward everyone taking the open-source route, with more models spread out. thanks for the great insights, appreciate the thought exchange.