my first love in programming was GUIs, and stuff like painting deadlines is so juicy for me

how do we get enough of a solution to be satisfactory if we get a bottleneck upstream?


Discussion

I hate GUIs. I pretended I couldn't code, on here, for nearly 2 years, because I didn't want to have to touch a GUI. 😂

But the under-the-hood part of GUIs is legit fun.

i really liked the "immediate mode" GUI model, but unfortunately the current state of the best one, Gio, is pretty poor in the widget department, and best practices for keeping the wait before painting minimal are far from settled. i had lots of fun with that stuff, built a couple of extra wingdings for it when i was working on it (a scrollbar, for one), and was in the process of building a searchable log viewer tool, but we didn't get the bugs ironed out before we ran out of runway

my colleague was very ambitious but wasted so much of his time on stupid things; by the last quarter of the project i was basically left to clean up his mess, and of course you know how much longer it takes to fix stuff than to build it

I'll never not find it funny how I heard "Learn to code," a million times, on here.

They've all gone so strangely quiet. I miss them. They were like my little, idiotic, chauvinistic mascots.

Some of it is elegantly handling failure on the UX side, so if there is a bottleneck it doesn't feel so bad to the user.

Some of it is building redundancy: using fallback relays, like nostr:nprofile1qqs06gywary09qmcp2249ztwfq3ue8wxhl2yyp3c39thzp55plvj0sgprdmhxue69uhhg6r9vehhyetnwshxummnw3erztnrdakj74c23x6 has been doing, is one way.

this is also why i designed the filter query on realy's http API to return event IDs instead of pumping the whole event back; this opens up clients to the possibility of sipping at the list to cope with user input (scrolling) and allowing that to pipeline into the render stage

you just can't do that if you can only get the whole event every time. and as well, if you have the event id and one relay is down, you can request it from others, whereas if you just throw the whole shebang at the user you have to have these idiotic "max limit" things that also make spidering relays a lot more complicated

Also preparing the filter efficiently and making assumptions and guesses to make more specific requests.

And parallelization of the requests, breaking off requests early, chunking large result sets, workers, graceful exit-reform-and-retry

or in other words, front end needs Go

none of that is advanced concurrency in Go, now if you want to get into advanced concurrency lol... D,:

Well, we'll soon have it all in C++ 🤔

you see, that's gonna be a problem

Go was literally invented to replace C++ for servers, by a team including two of the most legendary names in Unix history

Anyone who knows C++ doesn't see that as a problem.

have fun doing coroutines without clumsy syntax then

or atomic queues, fun fun fun

C++ has the throughput advantage but Go can reduce request latency

which is more priority in the domain of servers?

This is for the client middleware, tho.

in the real world of the internet, latency is everything

throughput is for training your AI

"training your AI"

Don't threaten me with a good time, bro.

😁 Okay, gotta go. Getting carsick and Hubby needs to grab a coffee.

I'm aware... use case... Network effects would be felt better with Go because of those choices made by other teams. For relays, I see a lot of people get very upset when it's not Go or Rust specifically. I'm not religious when it comes to languages, and it's not a good idea to get stuck on one or two.

We're using Typescript, PHP, Python, Go, SQL, Rust, and C/C++.

Super glad to see! Kudos! 💯

Seriously.

Well, there's just so many of us. 😂

The PO whines that it's too many languages and that adds technical debt, but we pretend to be deaf.

We're not programming a server in C++ though.

not only that, there's no need to, because i've already got a pretty much mature relay server built that is easy to extend, and it probably already beats strfry for latency of responses

A lot of it is just my experience being a Nostr PowerUser. I can make educated guess about what someone is looking for and how that something probably looks and how fuzzy the logic should be and where it might be located and...

An AI could do that, too, with enough data, but I think my hit-rate is still higher because I understand the human motivation to search for _some particular thing from some particular place at some particular time_. If we then juice up the resulting search with smart engines, it's... awesome.

A big thing is the "where"? What topic are you searching for? Who else really likes that topic and what relays do they have? 🤔

Find the cluster of relays for that tribe and search around there. nostr:nprofile1qyw8wumn8ghj7argv43kjarpv3jkctnwdaehgu339e3k7mf0qydhwumn8ghj7argv4nx7un9wd6zumn0wd68yvfwvdhk6tcpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsqgxufnggdntuukccx2klflw3yyfgnzqd93lzjk7tpe5ycqdvaemuqcmsvq8y 's work on visualisation could allow for this sort of targeted fetches.

I’d also like a WoT-lookalike that uses content instead of follows to map users by topic proximity if anyone volunteers to make it happen. Then you can feed it a topic and it gives you sources and sources lead to content even if it’s not in the same place.

the word index i've been building could probably help you find this kind of clustering at some primitive level of precision. it was quite funny trying to figure out how to make it language agnostic; i found a nice library for segmenting unicode UTF-8 text that did a pretty good job, then i just had to filter out common things like filename extensions and nostr entities and whatnot

i gotta finish building that thing... i'm actually done with the draft now and really just need to hook it up to a query endpoint

Yeah, it's the sort of service that saves client devs from having to think through the filters and algorithms.

Or they just use Aedile's topicGraph component. 😉

Are you promising Aedile features? 👀

😁 No pressure.

Something I was thinking about is starting with highly-prepared searches and then expanding iteratively, if they reclick the search button.

Like an LLM does, but with no chat. Just keep looking deeper and broader until you've exhausted the possibilities.

Ooh I do like that. Like the next page of Google, but smarter.

Could have an auto-iterate toggle and there's already a Cancel button, to stop searches underway, and a progress bar. The final stage could be full-text on all active relays or something ridiculous. 😂

We could call that the "Great time to grab a snack and a coffee" iteration.

I'm trying to corral LLMs into their lane enough in my workflows that I can turn them loose and grab a snack.

😂

that won’t even be necessary though

If they use your server, no, since you do it on the backend, but we promised that Alexandria would be worth running, even with a crappy relay. There's a lot that can be done with normal computer science.

Trying to make it work for crappy search is not worth it

Never sleep. 🤙🏻

SEARCH HARDER, BABY

Yeah, Just one bar, that isn't an LLM, but you can say "longform article from liminal from last week about LLMs" and ta-da!

Semantic search ftw

It's actually not that difficult, but nobody has built it yet and I want to find stuff. I'm so tired of not being able to find "bible KJV" because the search is too retarded to normalize and prepare the filter properly and is like,

Yo, I found no "bible KJV". 🤙🏻

Okaaaay, but you found a "KJV Bible" right? 🤦🏻‍♀️

The worst is when people are like, "Just ask an LLM." Ugh. It's like four lines of code, you morons.

I don't want to have to have a full conversation with a robot just to find an article.

☝🏻💯🫂

ng2

your trolling powers are weak, mister i will run the internet

The wiki disambiguation page will be a topical search page for all sorts of notes, with wiki pages listed at the top and more prominently. I was thinking of adding a button that searches "deeper" over a megalist of relays and then returns counts of how many hits that topic gets from which relays. And if you go to the profile page, it'll list the "top ten hashtags", and you can click one and find out which other users also have it in their top ten. With some fuzzy logic and some keywords and d-tags and titles mixed in.

I just think it'd be cooler to receive the results with a link to the Visualisation page. Especially since that's so pretty, now.

Graph > lists

I tested a software project for visualizing topically-related scientific journals and I want to recreate that effect.