ynniv
576d23dc3db2056d208849462fee358cf9f0f3310a2c63cb6c267a4b9f5848f9
epistemological anarchist follow the iwakan scale things

No, but you can get a few tokens per second for $2k

I'm trying to solve a problem and I thought I'd ask for help.

I've got a website using a WASM framework (dioxus) that gives me the ability to do async functions to fill in part of the web page. The web page renders immediately with either a placeholder or the result. I'm also using `nostr` and `nostr-sdk` from nostr:npub1drvpzev3syqt0kjrls50050uzf25gehpz9vgdw08hvex7e0vgfeq0eseet

On one page I have a long list of events, each potentially by a different npub, but initially I don't have any of their metadata. I want that page to fetch and fill in the metadata for all these people. This is a very typical (I'd say necessary) part of any web-based nostr client. I haven't done it yet, though; so far I've only done a desktop client.

The component on the page that renders the metadata is calling into this async function. Remember there are many of them, so many components are all nearly simultaneously calling into this async function, generally with different pubkeys but sometimes with the same pubkey.

The async function needs to either return metadata, or eventually return a failure.

A simple first idea that mostly works is to independently spin up a client for each request and use it to do a two-step fetch: the NIP-65 relay list from the discovery relay, then the metadata from the user's relay. But this creates tons of clients and connections, saves nothing for later (no caching), and is generally regarded as a "bad idea."

The next iteration on the idea is to store a map from URL to Client and keep the clients alive. Then I can fetch the client (creating it if missing) and fire off a new REQ. But this is still bad because I'm doing one request per pubkey, and relays hate me, saying I'm making too many requests.
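A minimal sketch of that keep-alive pool, assuming a tokio-style async mutex and a placeholder `RelayClient` type standing in for the real per-relay connection (the names here are illustrative, not the nostr-sdk API). The pool part stays useful even once batching is added; the problem is only the one-REQ-per-pubkey part.

```rust
use std::collections::HashMap;
use tokio::sync::Mutex;

// Placeholder for the real per-relay connection type (e.g. a nostr-sdk client).
#[derive(Clone)]
struct RelayClient;

impl RelayClient {
    async fn connect(_url: &str) -> RelayClient {
        RelayClient
    }
}

#[derive(Default)]
struct ClientPool {
    clients: Mutex<HashMap<String, RelayClient>>,
}

impl ClientPool {
    /// Reuse the connection for this relay URL, creating it only on first use.
    async fn get_or_connect(&self, url: &str) -> RelayClient {
        let mut map = self.clients.lock().await;
        if let Some(client) = map.get(url) {
            return client.clone();
        }
        let client = RelayClient::connect(url).await;
        map.insert(url.to_string(), client.clone());
        client
    }
}
```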

So really I need to batch these somehow (multiple pubkeys per request). And to cache the results. And to be aware when some other thread is already fetching this pubkey (and then wait on its result). And solving all of that simultaneously has been... difficult.

I can have a map with the pubkey as the key, and the value can be an enum, either Fetching or Metadata (missing from the map means it isn't being fetched). But there's no easy way to async-wait on a map entry showing up. There's also no easy way to wait a bit, batch up the multiple requests, and avoid all the race conditions with other threads doing the exact same thing (although that part I can solve). Anyhow, the whole thing seems rather difficult, and yet it must be solved by... every web-based nostr client out there... right?
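One way to get an async wait on a map entry (a sketch under assumptions, not a definitive design): put a `tokio::sync::watch` channel inside the Fetching variant. The first caller for a pubkey registers the entry and queues the fetch; duplicate callers clone the receiver and await it. `PublicKey`, `Metadata`, and `FetchError` below are stand-ins for the nostr types so the sketch is self-contained, and the queue it sends to is the batching task sketched further down.

```rust
use std::collections::HashMap;
use tokio::sync::{mpsc, watch, Mutex};

// Stand-ins for the nostr types so the sketch is self-contained.
type PublicKey = String;
#[derive(Clone, Debug)]
pub struct Metadata { pub name: Option<String> }
#[derive(Clone, Debug)]
pub enum FetchError { NotFound, Dropped }

// None = not resolved yet; Some = metadata or a definitive failure.
type Slot = Option<Result<Metadata, FetchError>>;

enum Entry {
    // Someone is already fetching this pubkey; clone the receiver and wait.
    Fetching(watch::Receiver<Slot>),
    // Cached result. (The fetcher can overwrite Fetching with this, but even
    // without that, later callers see the filled watch value immediately.)
    Done(Result<Metadata, FetchError>),
}

pub struct MetadataCache {
    entries: Mutex<HashMap<PublicKey, Entry>>,
    // Pubkeys waiting to be folded into the next batched relay request.
    queue: mpsc::UnboundedSender<(PublicKey, watch::Sender<Slot>)>,
}

impl MetadataCache {
    pub async fn get(&self, pubkey: PublicKey) -> Result<Metadata, FetchError> {
        let mut rx = {
            let mut map = self.entries.lock().await;
            match map.get(&pubkey) {
                Some(Entry::Done(res)) => return res.clone(),
                Some(Entry::Fetching(rx)) => rx.clone(),
                None => {
                    // First caller for this pubkey: register it, then hand the
                    // sending side to the batching task.
                    let (tx, rx) = watch::channel(None);
                    map.insert(pubkey.clone(), Entry::Fetching(rx.clone()));
                    let _ = self.queue.send((pubkey, tx));
                    rx
                }
            }
        };
        // Await the slot being filled by whoever performs the fetch.
        let filled = rx
            .wait_for(|slot| slot.is_some())
            .await
            .map_err(|_| FetchError::Dropped)?;
        (*filled).clone().expect("slot checked above")
    }
}
```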

From a request perspective you want to wait 50 or 100 msec before sending out a request so that at least some of the requested ids can queue up. It's going to take a few hundred msec for the first SSL connection anyway, so a little delay is valuable and hard to notice.
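Concretely, that debounce can live in one long-running task that drains a channel: take the first queued pubkey, keep absorbing more for ~100 ms, then issue a single REQ for all of them (with nostr-sdk that would be roughly one kind-0 filter listing every author). This is a hedged sketch using the same stand-in types as above; `fetch_metadata_batch` is hypothetical, and `tokio::time::sleep` would need a wasm-compatible replacement (e.g. a gloo-timers based sleep) in the browser.

```rust
use std::collections::HashMap;
use std::time::Duration;
use tokio::sync::{mpsc, watch};

// Same stand-in types as the previous sketch.
type PublicKey = String;
#[derive(Clone, Debug)]
pub struct Metadata { pub name: Option<String> }
#[derive(Clone, Debug)]
pub enum FetchError { NotFound, Dropped }

type Slot = Option<Result<Metadata, FetchError>>;

pub async fn run_batcher(mut queue: mpsc::UnboundedReceiver<(PublicKey, watch::Sender<Slot>)>) {
    while let Some((first_pk, first_tx)) = queue.recv().await {
        let mut batch: HashMap<PublicKey, Vec<watch::Sender<Slot>>> = HashMap::new();
        batch.entry(first_pk).or_default().push(first_tx);

        // Debounce: keep collecting pubkeys for ~100 ms so one REQ covers many authors.
        let deadline = tokio::time::sleep(Duration::from_millis(100));
        tokio::pin!(deadline);
        loop {
            tokio::select! {
                _ = &mut deadline => break,
                next = queue.recv() => match next {
                    Some((pk, tx)) => batch.entry(pk).or_default().push(tx),
                    None => break,
                },
            }
        }

        // One request for the whole batch instead of one per pubkey.
        let results = fetch_metadata_batch(batch.keys().cloned().collect()).await;

        // Wake every waiter with either its metadata or a definitive failure.
        for (pk, senders) in batch {
            let outcome = results.get(&pk).cloned().ok_or(FetchError::NotFound);
            for tx in senders {
                let _ = tx.send(Some(outcome.clone()));
            }
        }
    }
}

// Hypothetical: send one REQ for all pubkeys (e.g. a kind-0 filter with many
// authors) and return the newest metadata event found for each.
async fn fetch_metadata_batch(_pubkeys: Vec<PublicKey>) -> HashMap<PublicKey, Metadata> {
    HashMap::new()
}
```

Wiring it up would mean spawning `run_batcher` once with the receiving half of the cache's queue channel (e.g. via `wasm_bindgen_futures::spawn_local` or dioxus's `spawn` in the browser).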

What you do when the data comes back depends on how you're writing the UI. You probably don't want to apply results one at a time because it will likely cause things to needlessly re-render, but how you apply multiple updates in the same render is framework dependent.

"Ask not what Satoshi can do for you..."

People say, "Why would you talk to an AI like a person?"

But, like, have you tried it?

In the fall your 1:30a cron job runs twice. In the spring your 2:30a job doesn't run at all

Only because people accept it as such. The JVM and JS engines are some of the best engineered virtual machines that have ever existed

"In this paper we show that appropriate application of weather-modification can provide battlespace

dominance to a degree never before imagined. In the future, such operations will enhance air and space

superiority and provide new options for battlespace shaping and battlespace awareness. “The technology is

there, waiting for us to pull it all together;” in 2025 we can “Own the Weather.”

- Weather as a Force Multiplier: Owning the Weather in 2025 (A Research Paper Presented To the Air Force, 1996)

These differences are why "synthetic" intelligence seems more descriptive than "artificial"

Deep Research