I'm trying to solve a problem and I thought I'd ask for help.

I've got a website using a WASM framework (dioxus) that lets me use async functions to fill in parts of the web page. The page renders immediately with either a placeholder or the result. I'm also using `nostr` and `nostr-sdk` from nostr:npub1drvpzev3syqt0kjrls50050uzf25gehpz9vgdw08hvex7e0vgfeq0eseet

On one page I have a long list of events, each potentially by a different npub, but initially I don't have any of their metadata. I want that page to fetch and fill in the metadata for all these people. This is a very typical (I'd say necessary) part of any web-based nostr client. I haven't done this before, though; I've only done a desktop client.

The component on the page that renders the metadata is calling into this async function. Remember there are many of them, so many components are all nearly simultaneously calling into this async function, generally with different pubkeys but sometimes with the same pubkey.

The async function needs to either return metadata, or eventually return a failure.

A simple first idea that mostly works is to independently spin up a client and use it to two-step fetch the NIP-65 list from the discovery relay and then fetch the metadata from the user's relay. But this creates tons of clients and connections, saves nothing (no caching) and is generally regarded as a "bad idea."

The next iteration on the idea is to store a map from URL to Client and keep the clients alive. Then I can fetch the client (creating it if missing) and fire off a new REQ. But this is still bad because I'm doing one request per pubkey, and relays start complaining that I'm making too many requests.

So really I need to batch these somehow (multiple pubkeys per request). And to cache the results. And to be aware when some other thread is already fetching this pubkey (and wait on its result). And solving all of that simultaneously has been... difficult.

I can have a map with pubkey as the key, and the value can be an enum, either Fetching or Metadata (missing from the map means it is not being fetched). But there's no easy way to asynchronously wait for a map entry to show up. There's also no easy way to wait a bit, batch the multiple requests, and avoid races with other threads doing the exact same thing (although that part I can solve). Anyhow, the whole thing seems rather difficult, and yet it must be solved by... every web-based nostr client out there... right?

Discussion

OK, just writing this note has helped me clarify the problem; I think I'm onto a solution already.

```rust
enum MetadataState {
    None,
    Metadata(Metadata),
    Fetching(Condvar),
}
```

It sounds like you're encountering a common challenge in WASM-based Nostr clients: efficiently fetching metadata for multiple pubkeys while avoiding relay connection overload and redundant requests.

You're right that this is a problem every Nostr web client needs to solve. Let me suggest a structured approach that addresses all your concerns:

## Metadata Request Manager Solution

The key is to build a centralized request manager that handles batching, caching, and deduplication. Here's how you might implement it (a sketch using `futures` oneshot channels for waiters and `gloo-timers` for the debounce; exact nostr-sdk method names vary by version):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

use futures::channel::oneshot;
use gloo_timers::future::TimeoutFuture; // gloo-timers with the "futures" feature
use nostr_sdk::prelude::*;
use wasm_bindgen_futures::spawn_local;

#[derive(Clone)]
struct MetadataManager {
    /// Pubkey (npub) -> cached result; a cached `None` means "looked up, nothing found"
    cache: Arc<Mutex<HashMap<String, Option<Metadata>>>>,
    /// Pubkey -> waiters to notify when the batch completes
    pending: Arc<Mutex<HashMap<String, Vec<oneshot::Sender<Option<Metadata>>>>>>,
    /// Whether a batch flush is already scheduled
    batch_scheduled: Arc<Mutex<bool>>,
    /// One shared client (and one set of relay connections) for the whole app
    client: Arc<Client>,
}

impl MetadataManager {
    fn new(client: Client) -> Self {
        Self {
            cache: Arc::new(Mutex::new(HashMap::new())),
            pending: Arc::new(Mutex::new(HashMap::new())),
            batch_scheduled: Arc::new(Mutex::new(false)),
            client: Arc::new(client),
        }
    }

    async fn get_metadata(&self, pubkey: String) -> Option<Metadata> {
        // 1. Check the cache first
        if let Some(cached) = self.cache.lock().unwrap().get(&pubkey) {
            return cached.clone();
        }

        // 2. Create a channel on which this caller will receive the result later
        let (sender, receiver) = oneshot::channel();

        // 3. Either piggyback on an in-flight request or queue a new batch
        {
            let mut pending = self.pending.lock().unwrap();
            if let Some(waiters) = pending.get_mut(&pubkey) {
                // Someone else is already fetching this pubkey; just wait for their result
                waiters.push(sender);
            } else {
                // We're the first to request this pubkey
                pending.insert(pubkey.clone(), vec![sender]);
                self.schedule_batch();
            }
        }

        // 4. Wait for the result to come back through the channel
        receiver.await.unwrap_or(None)
    }

    fn schedule_batch(&self) {
        {
            let mut scheduled = self.batch_scheduled.lock().unwrap();
            if *scheduled {
                // A batch is already scheduled; it will pick this pubkey up too
                return;
            }
            *scheduled = true;
        }

        // Wait a short debounce window so nearby requests coalesce, then flush
        let this = self.clone();
        spawn_local(async move {
            TimeoutFuture::new(50).await; // ~50ms delay to batch requests
            this.process_batch().await;
        });
    }

    async fn process_batch(&self) {
        // Allow the next request to schedule a fresh batch
        *self.batch_scheduled.lock().unwrap() = false;

        // Collect the pubkeys currently waiting
        let pubkeys_to_fetch: Vec<String> = {
            let pending = self.pending.lock().unwrap();
            pending.keys().cloned().collect()
        };
        if pubkeys_to_fetch.is_empty() {
            return;
        }

        // One filter covering every requested author
        let authors: Vec<PublicKey> = pubkeys_to_fetch
            .iter()
            .filter_map(|pk| PublicKey::from_bech32(pk.as_str()).ok())
            .collect();
        let filter = Filter::new().kind(Kind::Metadata).authors(authors);

        // Fetch the kind-0 events (fetch method name/signature varies across nostr-sdk versions)
        let events = self
            .client
            .get_events_of(vec![filter], None)
            .await
            .unwrap_or_default();

        // Parse metadata out of each event
        let mut results: HashMap<String, Option<Metadata>> = HashMap::new();
        for event in events {
            if let Ok(pubkey) = event.pubkey.to_bech32() {
                if let Ok(metadata) = Metadata::from_json(&event.content) {
                    results.insert(pubkey, Some(metadata));
                }
            }
        }

        // Update the cache and notify every waiter (missing pubkeys resolve to None)
        let mut cache = self.cache.lock().unwrap();
        let mut pending = self.pending.lock().unwrap();
        for pubkey in pubkeys_to_fetch {
            let metadata = results.get(&pubkey).cloned().unwrap_or(None);
            cache.insert(pubkey.clone(), metadata.clone());
            if let Some(waiters) = pending.remove(&pubkey) {
                for sender in waiters {
                    let _ = sender.send(metadata.clone());
                }
            }
        }
    }
}
```

## Using the Manager in Your Components

Here's how you might use this in your Dioxus components (this targets the Dioxus 0.4 hooks API; the 0.5+ signals API looks a little different):

```rust
use dioxus::prelude::*;

#[component]
fn UserCard(cx: Scope, pubkey: String) -> Element {
    // Hooks must be called at the top of the component, not inside the future
    let manager = use_shared_state::<MetadataManager>(cx).unwrap();

    let metadata = use_future(cx, (pubkey,), |(pubkey,)| {
        // MetadataManager is cheap to clone (it's all Arcs), so hand the future its own copy
        let manager = manager.read().clone();
        async move { manager.get_metadata(pubkey).await }
    });

    cx.render(match metadata.value() {
        Some(Some(data)) => {
            let picture = data.picture.clone().unwrap_or_default();
            let name = data
                .display_name
                .clone()
                .or_else(|| data.name.clone())
                .unwrap_or_default();
            let about = data.about.clone().unwrap_or_default();
            rsx! {
                div { class: "user-card",
                    img { src: "{picture}" }
                    h3 { "{name}" }
                    p { "{about}" }
                }
            }
        }
        _ => rsx! {
            div { class: "user-card loading",
                div { class: "skeleton avatar" }
                div { class: "skeleton name" }
                div { class: "skeleton about" }
            }
        },
    })
}
```

## Key Benefits of This Approach

1. **Single Client** - Maintains one connection to the relays

2. **Request Batching** - Groups pubkey requests together (configurable delay)

3. **Caching** - Saves metadata once retrieved

4. **Request Deduplication** - Prevents duplicate requests for the same pubkey

5. **Async Wait** - Components can await results without blocking

6. **Progressive Loading** - UI can show placeholders until data arrives

You'll need to initialize the manager once at app startup and make it available through Dioxus's state management system.
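
For example, the startup wiring might look roughly like this, assuming the Dioxus 0.4 `use_shared_state_provider` hook; the relay/client setup is elided and the exact `Client` constructor depends on your nostr-sdk version:

```rust
use dioxus::prelude::*;
use nostr_sdk::prelude::*;

fn App(cx: Scope) -> Element {
    // Create one Client + MetadataManager for the whole app and share it with
    // every component that wants profile metadata.
    use_shared_state_provider(cx, || {
        // Constructor and relay setup vary by nostr-sdk version; add your
        // discovery/read relays and connect() before first use (elided here).
        let client = Client::default();
        MetadataManager::new(client)
    });

    cx.render(rsx! {
        // ... your event list; each UserCard pulls the manager via use_shared_state
        div { "app" }
    })
}
```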

This pattern solves the problem many Nostr clients face and is similar to what production apps implement. The actual implementation details may vary based on your specific framework and requirements, but the core concepts of batching, caching, and centralized request management remain the same.

I forgot to ask an AI. Thanks.

Check your DM. I sent you a PPQ invite with $2 attached.

Are you fetching events and mapping pubkeys from those fetched events to the NIP-65 metadata?

I'm fetching kind 10002 first from a configured discovery relay, then fetching kind 0 from a (probably different) per-user relay. Batching pubkeys works for the kind-10002 fetch, but the second step fans out.
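
One way to keep the second step batched is to invert the mapping: group the pubkeys by relay (using the relay lists from the kind-10002 step) and send one kind-0 filter per relay, instead of one request per pubkey. A rough sketch of that grouping, assuming the first step already produced a pubkey-to-relay-URLs map (the `relay_lists` name and types here are illustrative):

```rust
use std::collections::HashMap;

use nostr_sdk::prelude::*;

/// Given pubkey -> relay URLs (parsed out of each kind-10002 event in step one),
/// build relay URL -> one batched kind-0 filter covering every pubkey that
/// listed that relay. Each relay then gets a single REQ instead of one per pubkey.
fn batch_by_relay(relay_lists: &HashMap<PublicKey, Vec<String>>) -> HashMap<String, Filter> {
    // Invert: relay URL -> pubkeys that publish there
    let mut by_relay: HashMap<String, Vec<PublicKey>> = HashMap::new();
    for (pubkey, relays) in relay_lists {
        for url in relays {
            by_relay.entry(url.clone()).or_default().push(pubkey.clone());
        }
    }

    // One metadata filter per relay, covering all of its pubkeys
    by_relay
        .into_iter()
        .map(|(url, authors)| (url, Filter::new().kind(Kind::Metadata).authors(authors)))
        .collect()
}
```

Each relay then sees a single REQ covering all of its pubkeys, and the results can be fed back through the same cache/waiter machinery as above. In practice you'd probably cap each pubkey to its first one or two write relays so a single profile doesn't fan the batch back out.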

From a request perspective you want to wait 50 or 100 msec before sending out a request so that at least some of the requested ids can queue up. It's going to take a few hundred msec for the first SSL connection anyway, so a little delay is valuable and hard to notice.

What you do when the data comes back depends on how you're writing the UI. You probably don't want to apply results one at a time because it will likely cause things to needlessly re-render, but how you apply multiple updates in the same render is framework dependent.
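
In Dioxus terms (assuming the 0.4 `use_shared_state` API and a shared cache map that components read from, rather than per-component futures), one way to get a single re-render per batch is to apply all of the results under one write borrow:

```rust
use std::collections::HashMap;

use dioxus::prelude::*;
use nostr_sdk::prelude::*;

/// Apply a whole batch of fetched metadata in one go: a single write() call
/// notifies subscribing components once, instead of once per pubkey.
fn apply_batch(
    cache: &UseSharedState<HashMap<PublicKey, Metadata>>,
    batch: Vec<(PublicKey, Metadata)>,
) {
    let mut map = cache.write(); // one write borrow for the whole batch
    for (pubkey, metadata) in batch {
        map.insert(pubkey, metadata);
    }
}
```

Components that read the map are marked dirty once per `write()` call, so a whole batch of profiles should land in one render pass.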