waxwing
675b84fe75e216ab947c7438ee519ca7775376ddf05dadfba6278bd012e1d728
Bitcoin, cryptography, Joinmarket etc.

Well, a poll doesn't *have* to be close. Also is it bad if people are engaged? I guess that heavily depends on your worldview :)

Yes, of course that's not realistic; it's designed only to point out the difference in an obvious way.

In the real world, if the polls are say 60-40 for an extended period close to the election, the pred market is likely to be 95-5 or similar, *if* the market finds the polls trustworthy. It is not going to be anywhere near 60-40. Again, they are not measuring the same thing!

If the poll is 51-49 and people never change their mind and everyone is honest, the prediction market would settle close to 100-0.
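
To make the "not measuring the same thing" point concrete, here's a toy sketch (my own simplification, with an assumed normal error model and made-up sigma values, nothing more):

```python
# A poll estimates vote *share*; a prediction market prices the *probability of winning*.
# Toy model: treat the true share as Normal(poll_share, sigma) and ask P(share > 0.5).
import math

def win_probability(poll_share: float, sigma: float) -> float:
    """P(the polled candidate's true vote share exceeds 50%) under a normal error model."""
    z = (poll_share - 0.5) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# 60-40 poll with a few points of uncertainty -> market near 95-5 or better
print(win_probability(0.60, sigma=0.03))   # ~0.9996
# 51-49 poll, but nobody changes their mind and nobody lies (sigma -> 0):
# the 51% side wins for sure, so the market tends toward 100-0
print(win_probability(0.51, sigma=0.001))  # ~1.0
```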

Is it me or are there countless people out there who can't comprehend the difference between a prediction market and a poll? 😄

#beatsaber

Missed Injections mapped by Nolanimations

93.8% FC

https://replay.beatleader.xyz/?scoreId=18419932

The end of section 3 explains; the math there has a small error but it's easily fixed and doesn't affect the main point. Not only is it easier to reduce the forgery to a discrete log break using their new algo, but more importantly, there's no way to create things like a DoS by deliberately crafting entries in the leaves (the set of keys) that will take an unreasonable amount of time to pre-process, as was technically possible in the old system (not an issue for most use cases, but for their v-cash use case, it would be). More generally, they're claiming security in scenarios where the accumulator is constructed maliciously, though that doesn't apply to the kinds of use cases we care about.

I don't think the proving and verifying speedups are necessarily significant, especially if they only apply to batching. I will check in a bit more detail though. Basically this is important from my pov because it can make the pre-processing step faster and also make the code simpler. The preprocessing of say 0.5M keys is very nontrivial!

Interesting it is :) But practical? I guess maybe not? We need a version of these ideas that doesn't involve non-trivial interaction between the client and the server (relay), right? I'm finding myself drawn back to "HMAC", because that was always the traditional solution to this problem, i.e. only the two parties involved in the conversation can verify. There are so-called "algebraic MACs" that can use EC arithmetic instead of hashes. I'll take another look at that.

Replying to Vitor Pamplona

The idea is indeed to disallow re-sharing.

Picture a company relay. All the information should be strictly contained within that relay.

However, for Nostr clients to work, they need to verify events themselves, which means they receive a full copy of the event and can easily re-broadcast that copy to another relay.

That creates a problem.

We could just delete the signature field and ask Nostr clients to not verify and "trust" that the company or its relay is not modifying the message from its original author. But relying on trust defeats the purpose of using Nostr in the first place.

Since the company relay authenticates whoever connects to it, it could easily modify the event so that only that user can verify it.

My initial solution was simply to encrypt the signature field to the pubkey of the connecting user. Then the client would have to decrypt it before verifying. The issue is that once the user has decrypted it, they have the full signature in plaintext and can put it back into the event and re-share it with another relay.

Which is not really a solution to the problem.

This led me to the question in this post: how do we make a modified event signature that only one user can verify? It could still be possible to allow other people to verify the new event, but that implies making the user's main private key public, and hopefully there is enough sensitive information tied to that private key to deter users from doing so.

Right, thanks, that helps quite a lot. I do get where you're coming from with the "leak private key" concept, that's of course intrinsic/fundamental to Schnorr sigs so it makes sense to at least think about it as a deterrent.

It's pretty whacky, but this combination gives you something like what you want: imagine 2 of 2 musig between user A and relay R. A gives R an adaptor on its partial sig sigma_A' where the adaptor secret is its own private key. Then R gives sigma_R and A can *internally* verify the full signature on the musig aggregated key against the message. If it broadcasts that full signature, it leaks its private key.
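
Toy sketch of that algebra (scalars only, mod the group order; the challenge is just a random stand-in for the hash, and I'm ignoring the actual curve points, MuSig2 key-aggregation coefficients and nonce exchange):

```python
import secrets

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

a, r = secrets.randbelow(N), secrets.randbelow(N)      # A's and R's private keys
k_a, k_r = secrets.randbelow(N), secrets.randbelow(N)  # signing nonces
e = secrets.randbelow(N)                               # challenge (stand-in for the hash)

s_a = (k_a + e * a) % N         # A's real partial signature
adaptor_a = (s_a - a) % N       # adaptor A hands to R; the adaptor secret is a itself
s_r = (k_r + e * r) % N         # R's partial signature

# A completes internally: the valid full signature scalar on the aggregate key
s = (adaptor_a + a + s_r) % N
assert s == (k_a + k_r + e * (a + r)) % N

# The deterrent: anyone holding the adaptor and R's partial sig (i.e. at least the relay)
# can extract A's private key from a published full signature.
assert (s - adaptor_a - s_r) % N == a
```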

Replying to waxwing

Been expecting this, it arrived today:

https://eprint.iacr.org/2024/1647

Curve Trees without permissible points, which I am expecting will significantly improve performance (and have better security). Also some batching/amortization type improvements.

Now renamed 'Curve Forests' :) still reading...

umm batching 😄

First thing is that signatures are publicly verifiable; generally when you want verifiability restricted, you use structures like HMAC, which can only be checked with the/a secret.
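
A couple of lines to illustrate that contrast (standard library only, nothing Nostr-specific):

```python
import hmac, hashlib, secrets

shared_key = secrets.token_bytes(32)   # known only to the two conversing parties
msg = b"hello from A to B"

tag = hmac.new(shared_key, msg, hashlib.sha256).digest()

# Verification needs the same secret; a third party can neither check nor forge the tag,
# which is exactly the restricted verifiability that a signature doesn't give you.
assert hmac.compare_digest(tag, hmac.new(shared_key, msg, hashlib.sha256).digest())
```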

I read your description several times. For the second paragraph (I'll come back to the first), I *think* what you mean is the case where *you* (A) are giving a user (B) a signature, but you don't want them to be able to re-transmit or share it? There are several ways to look at it, but it depends on the details of your use case. First thing to remember is that the non-interactive signature schemes we use are built from *interactive* identification protocols. The latter are *not* transferable, but they are interactive. So, if A wants to convince B that the message being transferred is indeed from A, who owns the private key a, just follow a standard 3-pass commit, challenge, response (sigma protocol).
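
Toy version of that 3-pass exchange, just to pin down its shape (multiplicative group mod a prime purely for illustration, not a recommended group or parameter choice):

```python
import secrets

p = 2**255 - 19                  # a convenient large prime for the toy group
g = 5                            # toy base element; fine for checking the identity below
x = secrets.randbelow(p - 1)     # A's private key
y = pow(g, x, p)                 # A's public key

# 1. commit: A picks a nonce and sends t
k = secrets.randbelow(p - 1)
t = pow(g, k, p)

# 2. challenge: B sends a random e. This interactivity is why the proof is not
#    transferable: B could have produced the whole transcript itself by picking s, e first.
e = secrets.randbelow(p - 1)

# 3. response: A sends s; B checks g^s == t * y^e
s = (k + e * x) % (p - 1)
assert pow(g, s, p) == (t * pow(y, e, p)) % p
```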

Alternatively, it's often the case that you're not trying to keep any data secret, you're just trying to make the protocol disallow reuse. Then it can be fine to just include context in the message being signed. If instead of signing "Hello" I sign "Hello from A to B", then if B tries to send to C, the protocol can disallow it because the message does not contain "B to C".
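
Sketch of that context-binding check (helper names here are just illustrative; the signature verification itself is whatever the protocol already uses, done separately):

```python
def make_msg(body: str, sender: str, recipient: str) -> str:
    return f"{body} from {sender} to {recipient}"

def addressed_to_me(msg: str, my_id: str) -> bool:
    # checked *in addition to* the usual signature verification
    return msg.endswith(f" to {my_id}")

msg = make_msg("Hello", "A", "B")
assert addressed_to_me(msg, "B")        # B accepts
assert not addressed_to_me(msg, "C")    # C refuses the same signed message if B forwards it
```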

Back to the first paragraph, I see three different things being requested: that the signature (s) be tweaked with a pubkey (to s' say), that the signature is verifiable only with that private key, and that s' is not linkable to s.

This feels like asking for 6 impossible things before breakfast :) Signatures can be tweaked easily (see the adaptor concept), but to state the obvious, they can't be tweaked with a scalar you don't know. So if user B has secret key b, s' = s + b is verifiable by people not owning b but knowing B (that's adaptors), but the opposite seems impossible: to have s' be verifiable *only* if you know b, but constructible *without* b. That's kind of opposite to how public key crypto works; the world knows B, not b. That is even setting aside the problem of s not being linkable to s'.
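
For reference, the standard adaptor check that the "verifiable by people not owning b but knowing B" claim rests on (notation mine, nothing beyond the usual construction): with nonce point R, signer pubkey P and challenge e = H(R || P || m), a plain Schnorr sig satisfies s*G = R + e*P, so the tweaked value s' = s + b can be checked by anyone via s'*G = R + e*P + B, using only the public B = b*G. The reverse requirement, something checkable only *with* b but constructible *without* it, has no analogue in that equation.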

Also BIP340 uses key prefixing (pubkey in hash), which means taking existing signatures and malleating them is impossible.

Yes, I was thinking along similar lines. Also, given the scenario requires physical proximity of sender and receiver, the payer already has internet access at that moment, which makes several things possible.

To be fair, iirc, this was the *exact* application of ecash that was imagined by people like Chaum, Brands etc. in their original papers (vendor in meatspace has network access, customer doesn't) so I'm definitely not claiming this doesn't make sense.

True. But any 2 impls are fine here, I guess, because when you search for liquidity ads, even if only 15% of peers or whatever are offering, you would still be motivated to do it to have balanced channels by default. My bet is that the clearing price for liquidity would come down from what the current centralized providers (which are another CPOF for the state to attack) charge.

Dual-funded means, e.g., I contribute 100k sats and my channel counterparty does as well, so that when the channel is opened, it has 200k sats of capacity but is balanced, with 100k inbound and 100k outbound.

Entirely possible that I'm mis{sing,interpret}ing things here but: it's a shame that dual funding in c-lightning is still behind a non-default experimental-dual-fund flag, and that when I search for "option-will-fund" in the listnodes output I only get like 5-10 out of 90K. To be clear, by following fairly simple instructions in an old blog post, I was able to make a well-funded and perfectly balanced channel within a couple of minutes.
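
For anyone wanting to reproduce that search, a rough sketch (assuming lightning-cli is on PATH and that the advertisement shows up as an option_will_fund entry per node in listnodes; field naming may differ across versions):

```python
import json, subprocess

# Ask the local node for its gossip view of the network and count liquidity-ad offers.
out = subprocess.run(["lightning-cli", "listnodes"],
                     capture_output=True, text=True, check=True)
nodes = json.loads(out.stdout)["nodes"]

offering = [n for n in nodes if "option_will_fund" in n]
print(f"{len(offering)} of {len(nodes)} known nodes advertise option_will_fund")
```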

For those interested in coinjoin, note that this *is* coinjoin and arguably one of the best types, since by nature of the offchain payments, this kind of CJ can actually hide flows better. If we want to get the real power of such a thing, we'd ideally start batching *multiple* such channel opens together, but that's putting the cart before the horse here. Dual-funded channels are such an obvious good; why is this not a more widely used system, or am I missing something? #lightning

(Btw, said blog post by nostr:npub1e0z776cpe0gllgktjk54fuzv8pdfxmq6smsmh8xd7t8s7n474n9smk0txy : https://medium.com/blockstream/setting-up-liquidity-ads-in-c-lightning-54e4c59c091d )

Link isn't working here (Amethyst on Android), probably because H not h