Right, and polling is designed (IMO) to perpetually make the race look close to keep people engaged.
Well, a poll doesn't *have* to be close. Also is it bad if people are engaged? I guess that heavily depends on your worldview :)
Yes of course that's not realistic, it's designed only to point out the difference in an obvious way.
In the real world, if the polls are, say, 60-40 for an extended period close to the election, the prediction market is likely to be 95-5 or similar, *if* the market finds the polls trustworthy. It is not going to be anywhere near 60-40. Again, they are not measuring the same thing!
If the poll is 51-49 and people never change their mind and everyone is honest, the prediction market would settle close to 100-0.
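To make that concrete, here's a toy back-of-the-envelope model (mine, nothing rigorous): treat the final vote share as roughly normal around the polled number with a few points of polling error, and price the "market" as the probability of actually winning.

```python
# Toy model only: assume final vote share ~ Normal(polled share, sigma), where
# sigma is a guessed polling / late-swing error in vote-share points.
# A poll reports the mean; a prediction market prices P(win).
from statistics import NormalDist

def win_probability(polled_share: float, sigma: float = 0.03) -> float:
    """P(true share > 50%) in a two-way race, given the polled share."""
    return 1 - NormalDist(mu=polled_share, sigma=sigma).cdf(0.50)

for share in (0.51, 0.55, 0.60):
    print(f"poll {share:.0%} -> win probability ~ {win_probability(share):.0%}")

# With sigma = 3 points: 51% polls -> ~63%, 55% -> ~95%, 60% -> ~100%.
# As sigma -> 0 (nobody changes their mind, everyone answers honestly),
# even a 51-49 poll prices out near 100-0.
```

So a stable 60-40 poll next to a 95-5 market is perfectly consistent; they're just answering different questions.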
Yes but the point is that they are measuring totally different things.
I'm not talking about one being more reliable than the other.
Is it me or are there countless people out there who can't comprehend the difference between a prediction market and a poll? 😄
#beatsaber
Missed Injections mapped by Nolanimations
93.8% FC
The end of section 3 explains; the math there has a small error, but it's easily fixed and doesn't affect the main point. Not only is it easier to reduce a forgery to a discrete log break using their new algo, but more importantly, there's no way to mount a DoS by deliberately crafting entries in the leaves (the set of keys) that take an unreasonable amount of time to pre-process, as was technically possible in the old system (not an issue for most use cases, but for their v-cash use case it would be). More generally, they're claiming security in scenarios where the accumulator is constructed maliciously, though that doesn't apply to the kinds of use cases we care about.
I don't think the proving and verifying speedups are necessarily significant, especially if they only apply to batching. I will check in a bit more detail though. Basically this is important from my pov because it can make the pre-processing step faster and also make the code simpler. The preprocessing of say 0.5M keys is very nontrivial!
Interesting it is :) But practical? I guess maybe not? We need a version of these ideas that doesn't involve non-trivial interaction between the client and the server (relay), right? I'm finding myself drawn back to "HMAC", because that was always the traditional solution to this problem, i.e. only the two parties involved in the conversation can verify. There are so-called "algebraic HMACs" that can use EC arithmetic instead of hashes. I'll take another look at that.
Right, thanks, that helps quite a lot. I do get where you're coming from with the "leak private key" concept, that's of course intrinsic/fundamental to Schnorr sigs so it makes sense to at least think about it as a deterrent.
It's pretty whacky, but this combination gives you something like what you want: imagine 2 of 2 musig between user A and relay R. A gives R an adaptor on its partial sig sigma_A' where the adaptor secret is its own private key. Then R gives sigma_R and A can *internally* verify the full signature on the musig aggregated key against the message. If it broadcasts that full signature, it leaks its private key.
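Purely as a sketch of the above, with toy numbers (this is not real MuSig2, the group parameters are not secure, and real protocols add checks I'm skipping), here's roughly how the pieces fit:

```python
# Toy sketch: 2-of-2 Schnorr-style aggregate between user A and relay R, where
# A hands R an adaptor on its partial signature whose adaptor secret is A's
# *own* private key, so broadcasting the completed signature reveals that key.
# Tiny Schnorr group (p = 2q+1, g of prime order q) -- demo numbers, NOT secure.
import hashlib
import secrets

p, q, g = 23, 11, 4

def H(*parts) -> int:
    data = b"|".join(str(x).encode() for x in parts)
    return int(hashlib.sha256(data).hexdigest(), 16) % q

a = secrets.randbelow(q - 1) + 1           # A's private key
r = secrets.randbelow(q - 1) + 1           # R's private key
X = (pow(g, a, p) * pow(g, r, p)) % p      # aggregate key (plain key sum here)

msg = "some message both sides agree on"

k_a = secrets.randbelow(q - 1) + 1         # nonces
k_r = secrets.randbelow(q - 1) + 1
R_agg = (pow(g, k_a, p) * pow(g, k_r, p)) % p
e = H(R_agg, X, msg)                       # joint challenge

s_a = (k_a + e * a) % q                    # partial signatures
s_r = (k_r + e * r) % q

# A gives R an adaptor on its partial sig, offset by A's own private key a.
# (A real protocol would let R verify this adaptor before co-signing.)
s_a_adaptor = (s_a - a) % q

# R hands over s_r; A can complete and *internally* verify the full signature.
s = (s_a + s_r) % q
assert pow(g, s, p) == (R_agg * pow(X, e, p)) % p

# If A ever broadcasts (R_agg, s), R recovers A's private key from the adaptor:
recovered = (s - s_a_adaptor - s_r) % q
assert recovered == a
print("broadcasting the full signature leaks A's key:", recovered == a)
```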
Been expecting this, it arrived today:
https://eprint.iacr.org/2024/1647
Curve Trees without permissible points, which I'm expecting will significantly improve performance (and have better security). Also some batching/amortization-type improvements.
Now renamed 'Curve Forests' :) still reading...
umm batching 😄
First thing is that signatures are publicly verifiable; generally when you want verifiability restricted, you use structures like HMAC, which can only be checked with the/a secret.
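For concreteness, a minimal Python illustration of that restricted-verifiability point (the key and message are made up for the example):

```python
# An HMAC tag can only be produced *and* verified by holders of the shared
# secret, unlike a signature, which anyone with the public key can verify.
import hashlib
import hmac
import secrets

shared_key = secrets.token_bytes(32)   # known only to the two parties
msg = b"hello from A to B"

tag = hmac.new(shared_key, msg, hashlib.sha256).digest()   # A authenticates
ok = hmac.compare_digest(tag, hmac.new(shared_key, msg, hashlib.sha256).digest())
print("B (who holds the key) verifies:", ok)

# A third party holding only (msg, tag) can't check anything: verification
# requires the shared key, so the authentication is not transferable.
```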
I read your description several times. For the second paragraph (I'll come back to the first), I *think* what you mean is the case where *you* (A) are giving a user (B) a signature, but you don't want them to be able to re-transmit it or share it? There are several ways to look at it, but it depends on the details of your use case. First thing to remember is that the non-interactive signature schemes we use are built from *interactive* identification protocols. The latter are *not* transferable, but they are interactive. So, if A wants to convince B that the message being transferred is indeed from A, who owns the private key a, just follow a standard 3-pass commit/challenge/response (sigma protocol).
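If it helps, here's a rough sketch of that 3-pass flow with toy numbers (tiny, insecure group; the point is just that B's live random challenge is what makes the transcript convincing, and why a copied transcript convinces nobody else, since B could have forged it):

```python
# Sigma-protocol sketch: A proves knowledge of the private key a for A_pub = g^a.
import secrets

p, q, g = 23, 11, 4                     # toy Schnorr group: g has order q mod p
a = secrets.randbelow(q - 1) + 1        # A's private key
A_pub = pow(g, a, p)

# Pass 1 (commit): A picks a fresh nonce and sends the commitment.
k = secrets.randbelow(q - 1) + 1
R = pow(g, k, p)

# Pass 2 (challenge): B replies with a random challenge it chose itself.
e = secrets.randbelow(q)

# Pass 3 (response): A answers; B checks the verification equation.
s = (k + e * a) % q
assert pow(g, s, p) == (R * pow(A_pub, e, p)) % p
print("B is convinced A knows the private key")
```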
Alternatively, it's often the case that you're not trying to keep any data secret, you're just trying to make the protocol disallow reuse. Then it can be fine to just include context in the message being signed. If instead of signing "Hello" I sign "Hello from A to B", then if B tries to send it on to C, the protocol can disallow it because the message does not contain "B to C".
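And a trivial sketch of that context-binding idea (I'm using an HMAC as a stand-in for whatever authentication scheme is actually in play; the only point is what goes into the authenticated message):

```python
# Bind sender and recipient into the message so a tag minted for B
# cannot be replayed toward C.
import hashlib
import hmac

key = b"shared-secret-for-the-example"

def tag_for(sender: str, recipient: str, body: str) -> bytes:
    msg = f"{body} from {sender} to {recipient}".encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

t = tag_for("A", "B", "Hello")
# Reuse toward C fails: the protocol recomputes the tag over
# "Hello from A to C", which does not match what B received.
assert not hmac.compare_digest(t, tag_for("A", "C", "Hello"))
```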
Back to the first paragraph, I see three different things being requested: that the signature (s) be tweaked with a pubkey (to s', say), that the signature be verifiable only with that private key, and that s' not be linkable to s.
This feels like asking for 6 impossible things before breakfast :) Signatures can be tweaked easily (see the adaptor concept), but to state the obvious, they can't be tweaked with a scalar you don't know. So if user B has secret key b, s' = s + b is verifiable by people not owning b but knowing B (that's adaptors), but the opposite seems impossible: to have s' be verifiable *only* if you know b, yet constructible *without* b. That's kind of the opposite of how public key crypto works; the world knows B, not b. And that's even setting aside the problem of s not being linkable to s'.
Also BIP340 uses key prefixing (pubkey in hash), which means taking existing signatures and malleating them is impossible.
Yes i was thinking along similar lines. Also given the scenario requires physical proximity of sender and receiver, the payer already has internet access at that moment, which makes several things possible.
To be fair, iirc, this was the *exact* application of ecash that was imagined by people like Chaum, Brands etc. in their original papers (vendor in meatspace has network access, customer doesn't) so I'm definitely not claiming this doesn't make sense.
Horses are considered healing beings of the 5th dimension, capable of establishing a close emotional connection with human beings.
That is why horse-assisted interaction therapy to promote physical and emotional healing has had excellent results.
#bullishbounty
https://video.nostr.build/b01aa8d83c15c21e11ff20f4287b5d72b7f4cbd6a2fe841ec52b29849fdba535.mp4
How?
Oh yes, I certainly assume liquidity ads; it makes no sense not to include market coordination, especially considering fees are offchain :)
True. But any two impls are fine here, I guess - because when you search for liquidity ads, even if only 15% of peers (or whatever) are offering, you'd still be motivated to do it to have balanced channels by default. My bet is that the clearing price for liquidity would come down from the current centralized providers (which are another CPOF for the state to attack).
1. Non-spec-final features are always behind a flag.
2. Spec final requires two independent implementations which interoperate.
3. Then nostr:nprofile1qqsvh300dvquh50l5t9et2257pxrsk5ndsdgdcdmnnxl9nc0f6l2ejcpz3mhxue69uhhyetvv9ujuerpd46hxtnfduq3samnwvaz7tmjv4kxz7fwwdhx7un59eek7cmfv9kqzrrhwden5te0vfexytnfdu7y9l4j needs to put in PR to make it on by default.
4. Then we need a release!
2 independent - that's what I was missing/forgetting. Thanks!
Dual funded means e.g. I contribute 100k sats and my channel counterparty does as well, so that when the channel is opened, it has 200k sats of capacity but is balanced, with 100k inbound and 100k outbound.
Entirely possible that I'm mis{sing,interpret}ing things here, but: it's a shame that dual funding in c-lightning is still behind a non-default experimental-dual-fund flag, and that when I search for "option-will-fund" in the listnodes output I only get something like 5-10 out of 90K nodes. To be clear, by following fairly simple instructions in an old blog post, I was able to make a well-funded and perfectly balanced channel within a couple of minutes.
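In case anyone wants to reproduce that count, this is roughly the script I'd use (assuming the advertisement shows up as a per-node option_will_fund / option-will-fund key in the listnodes JSON; the exact spelling may differ):

```python
# Usage (hypothetical filename): lightning-cli listnodes | python3 count_liquidity_ads.py
import json
import sys

data = json.load(sys.stdin)
nodes = data.get("nodes", [])
advertisers = [n for n in nodes
               if "option_will_fund" in n or "option-will-fund" in n]
print(f"{len(advertisers)} of {len(nodes)} nodes advertise liquidity ads")
```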
For those interested in coinjoin, note that this *is* coinjoin and arguably one of the best types, since by nature of the offchain payments, this kind of CJ can actually hide flows better. If we want to get the real power of such a thing we'd ideally start batching *multiple* such channel opens together, but that's putting the cart before the horse here. Dual funded channels are such an obvious good, why is it not a more widely used system, or am I missing something? #lightning
(Btw said blog post by nostr:npub1e0z776cpe0gllgktjk54fuzv8pdfxmq6smsmh8xd7t8s7n474n9smk0txy : https://medium.com/blockstream/setting-up-liquidity-ads-in-c-lightning-54e4c59c091d )
Link isn't working here (Amethyst on Android), probably because of an uppercase H instead of a lowercase h.