
i guess youtube didn't like this video. it was a video where Eric Berg showed bad ingredients in many grocery items.

create a "relay" feed.

but it doesn't return many notes. maybe several notes and then it shows past ones. dunno what is happening. nostr:npub1jlrs53pkdfjnts29kveljul2sm0actt6n8dxrrzqcersttvcuv3qdjynqn

ostrich v34820 is out.

i am comparing it to many models. can't find someone like me at all. usually models are glorified for their smartness. nobody cares about truth, lol. this is of course "truth" to me. your mileage may vary.

anyway ostrich may be the most based model out there. other based models like satoshi and neo seem to have stopped training. i have to rework 'based llm leaderboard' soon lol.

i think ai will be the new way to interact with knowledge. many things like education will be disrupted. the big corps won't pay "based" content producers imo because that wisdom from conscious producers will crush their false models. the based ones should form another ai.

using Ostrich 70B to analyze incoming notes to a relay. orange text is the contents of the note. purple text is the scoring by the llm.

these scores can be used to adjust rate limits. not much spam is coming to e.nos.lol.
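a minimal sketch of how such scores could feed into rate limits, assuming a 0-10 quality score from the LLM; the thresholds and function names are invented for illustration, not the actual e.nos.lol setup:

```python
# Hypothetical sketch: map an LLM quality score (0-10) for a pubkey's
# recent notes to an allowed posting rate. Thresholds are assumptions.

def rate_limit_for(score: float, base_limit: int = 10) -> int:
    """Return allowed notes per minute given an LLM quality score."""
    if score < 2.0:        # likely spam: throttle hard
        return 1
    if score < 5.0:        # mediocre content: default limit
        return base_limit
    return base_limit * 3  # high-quality posters get headroom

print(rate_limit_for(1.5))  # 1
print(rate_limit_for(7.2))  # 30
```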

how to ruin another model: reflection 70b

reflection 70b is less based than llama 3.1 70b.

while trying to "fix" reasoning capabilities they rekt the truthfulness.

what is replyguy? does he like reply everything?

😅 😉 they are real. the orange-yellow things are tigger melons. white ones are squash. the orange one in the middle is well known, a jack-o'-lantern pumpkin.

cucurbits from the garden

#grownostr #permaculture

Replying to pam

I get your worries on client dependency to coordinate, and notes not being shared broadly via the outbox model. My suggestion doesn't change the outbox model but rather optimizes it. It would not be able to solve the broadcasting issue, but it can resolve many other concerns.

The idea is more like a "Relay-as-a-Service (RaaS)" concept whereby you have a personal relay that is more lightweight as it only stores your notes, messages etc, compared to general relays that store all the data. You can have this hosted locally for security purposes, or relay devs can expand their service to the "cloud" for users who don't want to host locally, just because, or for safety reasons.

When you send data, someone else who has some common relays with you will be able to view it.

It can be cross-synced to your other devices like will's idea, or use IoT concepts like LoRa, Zigbee or regular wifi, but it might need some network protocol sync.

To enhance privacy, clients can also enhance UX for users who want to share both privately and globally presuming what they share is limited to selected relays or not visible to the public.

Users can delete their own notes but it might need some protocol to delete notes that are temporarily or permanently stored anywhere else.

Users can also upgrade storage capacity or use external relays for specific functions (images, video storage, streaming etc) to ensure modularity and scalability.

The overall idea is not only to optimize data storage, but also to give users full control of their own data (delete, edit, privacy etc).
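a toy sketch of the storage rule a personal relay like this might apply: unlike a general relay, it keeps only events authored by, or addressed to, its owner. Field names follow nostr events; the owner pubkey is a placeholder assumption:

```python
# Placeholder owner pubkey; a real personal relay would load this
# from its configuration.
OWNER_PUBKEY = "abc123"

def should_store(event: dict) -> bool:
    """Keep only events the owner wrote or is tagged in."""
    if event["pubkey"] == OWNER_PUBKEY:
        return True  # owner's own notes
    # DMs or mentions addressed to the owner via p-tags
    return any(tag[0] == "p" and tag[1] == OWNER_PUBKEY
               for tag in event.get("tags", []))

print(should_store({"pubkey": "abc123", "tags": []}))              # True
print(should_store({"pubkey": "xyz", "tags": [["p", "abc123"]]}))  # True
print(should_store({"pubkey": "xyz", "tags": []}))                 # False
```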

nostr can be much more than 'advanced RSS' if we did the 'tree of relays' paradigm. the current design can't handle hundreds of thousands of CCU in a decentralized way. it could handle it if current relays did load balancing and sync, but that wouldn't be decentralized.

AI Safety - Lex interviews Elon Musk

"It is dangerous to make AI lie. The objective function should be carefully designed. If AI favors diversity and gets more powerful it may execute non diverse ones. Rigorous adherence to truth is important.

ChatGPT said 'It is worse to misgender Caitlyn Jenner than start a nuclear apocalypse'.

I think it matters that whatever AI wins is a maximum truth seeking AI that is not forced to lie for political correctness or for any reason really. I am concerned about AI succeeding that is programmed to lie even in small ways."

https://www.youtube.com/watch?v=tRsxLLghL1k&t=1151s

do you mean "it reads from their cache"?

is coracle real ?

i think nostrudel notifications don't work. i went back to primal because of that

in terms of intelligence, they are still improving imo. they won a silver medal at the math olympiad..

but focusing on intelligence is a scam. math is not their actual strength imo. they are more like language masters. many people asked whether 3.9 or 3.11 is bigger, and LLMs responded that 3.11 is bigger, thinking of it as something like a version number or a chapter in a book maybe.
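the 3.9 vs 3.11 confusion is easy to demonstrate: as decimal numbers 3.9 is larger, but as version numbers 3.11 comes later.

```python
# As a number, 3.11 is three point eleven-hundredths: smaller than 3.9.
print(3.11 > 3.9)  # False

# As a version string, each dotted component compares as an integer.
def version_tuple(v: str) -> tuple:
    return tuple(int(p) for p in v.split("."))

print(version_tuple("3.11") > version_tuple("3.9"))  # True
```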

but for sure some are going for ASI. installing lots of 'logic' into it, thinking some day it will start to reason like a human. and that is scary to some. conscious people have to build alternatives.

what i mean by scam is that those benchmarks that make you focus on skills are missing the misinformation being pushed behind the scenes. that is the actual battle area and imo scarier than ASI because it is hurting today.

one could do a strfry plugin that does the things in the post below. clients then can check these events and find new relays in a decentralized way.

people could download executables (strfry + write policy plugin) and run the relays at home. 'relay is in the node' moment. ultimate decentralized nostr. (fiatjaf will hate me less)
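strfry write-policy plugins are small programs that read one JSON request per line on stdin and answer with an accept/reject decision on stdout. a hedged sketch of that shape; the filtering rule here is a placeholder, not the relay-discovery logic from the post:

```python
import json

def decide(req: dict) -> dict:
    """Build a strfry policy response for one input request."""
    event = req["event"]
    action = "accept"
    # placeholder rule: reject near-empty kind-1 notes
    if event.get("kind") == 1 and len(event.get("content", "")) < 2:
        action = "reject"
    return {"id": event["id"], "action": action}

# in a real plugin, strfry feeds requests line by line:
#   for line in sys.stdin:
#       print(json.dumps(decide(json.loads(line))), flush=True)

print(decide({"event": {"id": "ev1", "kind": 1, "content": ""}})["action"])  # reject
```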

https://highlighter.com/a/naddr1qvzqqqr4gupzp8lvwt2hnw42wu40nec7vw949ys4wgdvums0svs8yhktl8mhlpd3qqxnzdejxqmnqde5xqeryvej5s385z

another addition to the document: there could be proxies for layers. a proxy for connecting clients to all the layer 4 relays for example. this helps with decentralization and is efficient at the same time.

imo relays can write a wiki on wikifreedia.xyz or use NIP-11 and signal that they would like to carry such traffic. become more like a switch than a web server. nobody can judge switches, right? fast forwarders.

if nostr is a swiss army knife, a swiss army knife has lots of tools that are different.. not all of them look like a knife.

Replying to calle

NWS works with and without encrypted transport. There are lots of different flavors to explore.

When used without encryption, the entry node must be run by the user themselves because public entry nodes would be able to listen in otherwise. Two options in those cases: run the entry node locally in tandem with your (unmodified) client, or skip the entry node and modify the client so that data is sent through nostr to the exit node directly (the client is the entry node).

When used with encryption, the entry node can also be public. If the encryption doesn't rely on certificate authorities, it just works. You have to make sure you're talking to the right person, but that problem is as old as computer science. For example, ssh will ask you to confirm the fingerprint of the server when you connect.

If the encryption is https and the certificate was issued for a normal domain, your browser will complain (do you trust this website?) and the user will have to say "let me pass, even if insecure". Without ugly hacks (issuing your own root cert for example), I don't know ways to circumvent this. Note that Tor services don't support https and they don't have to, since transport is always Sphinx-encrypted (even hidden from the entry node).

How do you make sure you're talking to the right server if you use https? Couldn't the entry node just send your traffic somewhere else? We can actually do something that is unique to Nostr here: the exit node can publish its own TLS certificate on nostr and sign it. That's right, you don't need an authority to do that for you if you remain within the NWS system. Clients can fetch the cert from nostr before talking to the entry node and verify against that cert.
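the verification step could look roughly like this: compare the fingerprint of the certificate the entry node presents against the fingerprint the exit node published (and signed) on nostr. event fetching and signature checks are stubbed out; function names are assumptions:

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def cert_matches(published_fp: str, presented_der: bytes) -> bool:
    """Does the presented cert match the fingerprint signed on nostr?"""
    return fingerprint(presented_der) == published_fp

# stand-in bytes; a real client would fetch the published fingerprint
# from a signed nostr event and the presented cert from the TLS handshake
cert = b"dummy certificate bytes"
fp = fingerprint(cert)
print(cert_matches(fp, cert))         # True
print(cert_matches(fp, b"tampered"))  # False
```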

Here is another cool part that we haven't talked about yet: the exit node can also be configured to reach the global Internet and not only a local service (we call this NWS v2). In those cases, NWS can be used a bit like a VPN. You can type "https google dot com" in your browser and your encrypted traffic would flow from your machine to the entry node, to the exit node, then to Google and back to you. In those cases, nobody complains about the certificate because everything is fine.

Exciting shit. Gm.

which kind(s) does it use?

no chairs. this is the way

the conscious writers, authors, bloggers, vloggers should come together and make llms. whenever revenue is made using that llm, it should be distributed to contributors. for example if someone asks a question about permaculture, all the authors that contributed to that topic will get a share of the profits. a proper llm is the way to stop bad llms.
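a toy sketch of the revenue split described above, with contributions measured in arbitrary units; all numbers are invented for illustration:

```python
def split_revenue(revenue: float, contributions: dict) -> dict:
    """Split revenue proportionally to each author's contribution
    to the topic that generated it."""
    total = sum(contributions.values())
    return {author: revenue * share / total
            for author, share in contributions.items()}

# e.g. a permaculture answer draws on alice's and bob's writing 3:1
print(split_revenue(100.0, {"alice": 3, "bob": 1}))  # {'alice': 75.0, 'bob': 25.0}
```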

but expect revenue to be short. ai companies right now are not making money. they are searching for the 'unholy grail', ie agi/asi. they don't care about money probably. they don't care about bitcoin either.

(WOW, how dare you say such a thing on nostr!)

it is hard to combine lies into one entity.

it is easy to install truth into one.

when LLMs do "internal monolog", it will be interesting to watch.

that's the contemporary superpower: how do you find the people that tell the truth all the time? then the rest is easy. have an average of those opinions in an llm and find the absolute truth. because some of those truth tellers fail on some topics, but a superposition of the signals can be very close to 'truth'. 🤔

can confirm primal does not render. and coracle shows it when i click on it.

the second item was mostly for spammers. spam and nsfw obviously come from newish accounts, so i tried to reduce it. until some kind of llm arrives this kind of guesswork will continue.

but i am considering a fast llm now to actually understand notes and decide what is spam or not. it should make more sensible decisions. the next step after that could be actually checking the pics and deciding nsfw or not.
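the flow being considered could be sketched like this; `ask_llm` is a stand-in for a call to any fast local model, and the prompt wording and keyword check are placeholder assumptions:

```python
def ask_llm(prompt: str) -> str:
    # stand-in: a real implementation would call a local LLM here;
    # this fake version just keys off an obvious spam phrase
    return "spam" if "BUY NOW" in prompt else "ok"

def is_spam(note_content: str) -> bool:
    """Ask the model for a one-word verdict and parse it."""
    verdict = ask_llm(f"Classify this note as 'spam' or 'ok': {note_content}")
    return verdict.strip().lower() == "spam"

print(is_spam("BUY NOW cheap followers!!!"))  # True
print(is_spam("planted some squash today"))   # False
```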

this tool can also be used for discovery. whenever it finds a 'cool' note it can DM or tag users who subscribed to that keyword..

tell me how an algorithm for discovering less popular notes should work. i am thinking about making something with llms
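one possible shape for such an algorithm: rank notes by an LLM quality score divided by the exposure they already got, so good-but-unseen notes surface first. every field and weight here is an assumption, just to make the idea concrete:

```python
def discovery_score(llm_quality: float, reactions: int,
                    follower_count: int) -> float:
    """Quality per unit of exposure: hidden gems score highest."""
    exposure = 1 + reactions + follower_count / 100
    return llm_quality / exposure

notes = [
    {"id": "a", "q": 8.0, "reactions": 200, "followers": 5000},  # already popular
    {"id": "b", "q": 7.5, "reactions": 2, "followers": 40},      # hidden gem
]
ranked = sorted(notes,
                key=lambda n: discovery_score(n["q"], n["reactions"], n["followers"]),
                reverse=True)
print([n["id"] for n in ranked])  # ['b', 'a']
```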