Negentropy messages have a subscription ID. But do we really need to support multiple negentropy syncs within the same session at the same time? I'm thinking of just allowing one. Do you know of reasons to allow multiple in parallel in the same session?
Would be nice to have negentropy support in chorus.
If it helps, here's how I implemented it for nostr-relay-builder: https://github.com/rust-nostr/nostr/blob/25a76ff3d90ea4307dd1078ccc2b910619571971/crates/nostr-relay-builder/src/local/inner.rs#L614
Discussion
I'm happy you're implementing negentropy ❤️ kinda essential for real-world apps
Unlike REQs, negentropy messages can take only one filter, so it may be useful in some cases to allow multiple syncs at the same time. But it's also true that syncs can be executed in sequence, like I'm doing with `Relay::sync_multi`: https://github.com/rust-nostr/nostr/blob/e4dbf484a8d1e57a05f0cc0312dd52c6c43cddd5/crates/nostr-relay-pool/src/relay/inner.rs#L1776
Here is where I use `sync_multi`: https://github.com/rust-nostr/nostr/blob/e4dbf484a8d1e57a05f0cc0312dd52c6c43cddd5/crates/nostr-sdk/src/client/mod.rs#L1850
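Since each sync call takes a single filter, running several syncs in sequence from the client side could look roughly like this (a minimal sketch: the helper name `sync_in_sequence` is purely illustrative, `client` is any connected nostr-sdk `Client`, and `SyncOptions::default()` is used for brevity):
```rust
use nostr_sdk::prelude::*;

// Illustrative helper: sync each filter one after the other over the same client.
async fn sync_in_sequence(client: &Client, filters: Vec<Filter>) -> Result<()> {
    let opts = SyncOptions::default();
    for filter in filters {
        // One negentropy sync per filter, executed sequentially
        let output = client.sync(filter, &opts).await?;
        println!("Received {} event ids", output.received.len());
    }
    Ok(())
}
```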
I'm not sure about your API, so can you verify that this will work (it compiles)?
1) Query the events matching the filter, convert them into `Item`s, and insert them into a `NegentropyStorageVector`. [BUT I copied your `NegentropyStorageVector` and implemented `NegentropyStorageBase` for `&NegentropyStorageVector` so it isn't consumed].
2) Store it in a global HashMap keyed by the subscription ID.
3) Look it up again immediately to get a `&NegentropyStorageVector` and build a `Negentropy` on it.
4) `reconcile()` the incoming bytes to get the outgoing bytes.
5) On the next NEG-MSG, just start again at step (3). A rough sketch of this flow is below.
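Roughly, the bookkeeping looks like this. This is a simplified, self-contained sketch, not the actual chorus code: `NegSessions`, `SealedStorage`, and `reconcile_stub` are stand-in names, and the real item set and reconciliation come from `NegentropyStorageVector` and `Negentropy::reconcile()` in your crate (exact constructors and signatures depend on the crate version):
```rust
use std::collections::HashMap;

// Stand-in for a sealed `NegentropyStorageVector`: (created_at, event id) items.
struct SealedStorage(Vec<(u64, [u8; 32])>);

// Step (2): one sealed item set per negentropy subscription ID.
#[derive(Default)]
struct NegSessions {
    by_sub_id: HashMap<String, SealedStorage>,
}

impl NegSessions {
    // Steps (1)-(2): build the item set for the filter and remember it.
    fn open(&mut self, sub_id: String, items: Vec<(u64, [u8; 32])>) {
        self.by_sub_id.insert(sub_id, SealedStorage(items));
    }

    // Steps (3)-(5): on every NEG-MSG, look the storage up again (borrowed,
    // not consumed), build the reconciler over it, and feed it the incoming bytes.
    fn handle_msg(&self, sub_id: &str, incoming: &[u8]) -> Option<Vec<u8>> {
        let storage = self.by_sub_id.get(sub_id)?;
        // With the real crate this step is roughly:
        //   let mut neg = Negentropy::new(&storage, frame_size_limit)?;
        //   let outgoing = neg.reconcile(incoming)?;
        Some(reconcile_stub(storage, incoming))
    }

    // NEG-CLOSE (or the connection closing) drops the session.
    fn close(&mut self, sub_id: &str) {
        self.by_sub_id.remove(sub_id);
    }
}

// Placeholder for `Negentropy::reconcile()`: returns the outgoing message bytes.
fn reconcile_stub(_storage: &SealedStorage, _incoming: &[u8]) -> Vec<u8> {
    Vec::new()
}
```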
Also I deployed this at wss://chorus.mikedilger.com:444 but I don't have any tools for testing it. If anybody could easily check if negentropy is working there, that would be great. Otherwise I'll have to write some tool.
The steps look good. I'll test it soon. Is the new code with negentropy already public?
I didn't push it public; I wanted to test it first.
I receive an error when I try to connect: "peer closed connection without sending TLS close_notify"
nostr:npub1acg6thl5psv62405rljzkj8spesceyfz2c32udakc2ak0dmvfeyse9p35c, I'm trying with this example:
```toml
[dependencies]
nostr-sdk = { version = "0.38", features = ["lmdb"] }
tokio = { version = "1.42", features = ["full"] }
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
```
```rust
use nostr_sdk::prelude::*;

#[tokio::main]
async fn main() -> Result<()> {
    tracing_subscriber::fmt::init();

    let keys = Keys::parse("nsec1ufnus6pju578ste3v90xd5m2decpuzpql2295m3sknqcjzyys9ls0qlc85")?;
    let database = NostrLMDB::open("./db/nostr-lmdb")?;

    let client: Client = ClientBuilder::default()
        .signer(keys.clone())
        .database(database)
        .build();

    client.add_relay("wss://chorus.mikedilger.com:444").await?;
    client.connect().await;

    // Sync the text notes of the last week
    let since = Timestamp::from_secs(1736726400);
    let until = Timestamp::from_secs(1737331200);
    let filter = Filter::new().kind(Kind::TextNote).since(since).until(until);

    // Negentropy options
    let (tx, mut rx) = SyncProgress::channel();
    let opts = SyncOptions::default().progress(tx);

    // Keep track of the progress
    tokio::spawn(async move {
        while rx.changed().await.is_ok() {
            let progress = *rx.borrow_and_update();
            if progress.total > 0 {
                println!("{:.2}%", progress.percentage() * 100.0);
            }
        }
    });

    // Sync
    let output = client.sync(filter, &opts).await?;

    // Sync output
    println!("Local: {}", output.local.len());
    println!("Remote: {}", output.remote.len());
    println!("Sent: {}", output.sent.len());
    println!("Received: {}", output.received.len());
    println!("Failures:");
    for (url, map) in output.send_failures.iter() {
        println!("* '{url}':");
        for (id, e) in map.iter() {
            println!(" - {id}: {e}");
        }
    }

    Ok(())
}
```