(continuing on this topic, at possibly absurd length):
The most interesting thing about this write-up is that it's principally advocating for using curve25519 (see the 3rd recommendation at the end of the post) for ECDH and thus encryption, based on the idea that it's been designed to handle tricky adversarial behaviour. For example, the curve is designed to make constant-time implementation easier, to limit/remove side-channel attacks. And one thing in particular it has, which is quite special, is: *any* 32 byte string is an acceptable pubkey; this is done with some clever math magic in the curve's design. DJB (the author) therefore actually tells people to *not* validate input keys; as long as they're 32 bytes, they're to be accepted.
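To make that concrete, here's a minimal sketch of the two relevant pieces from RFC 7748: private-key 'clamping', and the pubkey decoding that makes any 32-byte string acceptable (just mask one bit and read an x-coordinate; no on-curve check exists or is needed):

```python
def clamp_scalar(k: bytes) -> int:
    """X25519 private-key 'clamping' per RFC 7748."""
    assert len(k) == 32
    a = bytearray(k)
    a[0] &= 248    # clear low 3 bits: scalar becomes a multiple of the
                   # cofactor 8, killing small-subgroup leakage
    a[31] &= 127   # clear bit 255
    a[31] |= 64    # set bit 254: a fixed top bit helps constant-time ladders
    return int.from_bytes(a, "little")

def decode_pubkey(u: bytes) -> int:
    """Any 32 bytes are accepted: mask bit 255, read a little-endian x-coordinate."""
    assert len(u) == 32
    a = bytearray(u)
    a[31] &= 127
    return int.from_bytes(a, "little")
```

The Montgomery-ladder scalar multiplication then works on that x-coordinate alone, which is the 'math magic' that makes arbitrary inputs safe.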
Yes indeed. 'Cofactor' is a bit of an obtuse term; I think it's related to the beautiful Lagrange's theorem (it's the ratio of the full group order to the order of the prime-order subgroup you actually work in). Or maybe I only think that because of the term 'coset'. Not sure.
I'm always a bit torn about stuff like this. On the one hand, CRT (the Chinese Remainder Theorem) is cool, and it's even cooler that people have made successful attacks on real-world systems using these so-called 'twist attacks' (basically the fuck-up is not checking whether the "point" you're provided is actually on the curve).
But, on the other hand, calling it a danger when using secp256k1 for encryption seems a bit wrong when the danger is specifically that you *didn't* use secp256k1!
Indeed the substance of this attack is to exploit the fact that if a curve group has small subgroups, you can apply CRT to get info about secret keys. But secp256k1 doesn't have any *small* subgroups: its group order is prime (cofactor 1), so the only proper subgroup is the trivial one. So the attack depends on the victim not checking whether an externally given public key is actually on secp256k1. Combining that key in a DH-type exchange leads to naughtiness.
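The missing check is tiny, which is what makes forgetting it so plausible. A sketch of what 'is this point actually on secp256k1' looks like (curve parameters from the SEC 2 spec; the generator point serves as a test vector):

```python
# secp256k1: y^2 = x^3 + 7 over F_P
P = 2**256 - 2**32 - 977

def on_curve(x: int, y: int) -> bool:
    """True iff (x, y) is an affine point on secp256k1."""
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - x * x * x - 7) % P == 0

# Generator point, from the SEC 2 spec:
GX = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
GY = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
assert on_curve(GX, GY)

# A "point" failing this check lies on some *other* curve (e.g. a twist),
# whose group order may have small factors -- doing DH with it leaks your
# key modulo those small orders, which CRT then stitches together.
assert not on_curve(GX, GY + 1)
```

Production code also has to reject the point at infinity and low-order inputs, but the one-line curve-equation check is the part the twist attacks punish you for skipping.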
Not nothing.
You can't get a beer for 99p any more.
Good point, and notice that this risk also points in the direction of mining centralization being the ultimate weak point.
Finger in the air: 6BTC in fees per block for 'very high' fees.
144×6×365 = 315K BTC per year.
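Spelled out, with the 6 BTC/block figure as the only input (a finger-in-the-air number, as said):

```python
blocks_per_day = 24 * 6      # one block per ~10 minutes
fee_per_block = 6            # BTC: the 'very high' finger-in-the-air figure
annual_fees = blocks_per_day * fee_per_block * 365
print(annual_fees)           # 315360, i.e. ~315K BTC per year
```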
The issue for such an attacker is not the nominal cost in USD, at least initially. It's that this amount of continuous buying might end up being good for Bitcoin's price, supporting 'hodl' mindset/strategy instead of undermining it.
Which has its logic. Bitcoin would be proving its usefulness :)
I also totally understand that this logic is *extremely* shaky.
Meanwhile, the actual reason I've never found this scenario worrying is because bureaucracies will never commit to actions like this, actually buying btc in bulk. They are much more likely to try to attack miners, which is why mining centralization has always felt like the biggest danger to be wary of.
Very good analysis.
I'd like to explain a nuance which some might miss.
When Ben says here 'it was never expected that blocks would reach the 4MB limit', he's *not* saying 'it was never expected that blocks would become permanently full'. The size of a full block depends on the kinds of transactions people are doing. If they're normal payment txs, it might be around 1.7-2.5MB, say. Only if you have weird txs with hugely bloated witnesses do you have the possibility of creating blocks with close to 4MB total size.
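The arithmetic behind that comes from BIP 141's weight formula (non-witness bytes count 4x, witness bytes 1x, block cap 4M weight units). A sketch, with the example tx sizes obviously made up:

```python
MAX_BLOCK_WEIGHT = 4_000_000     # BIP 141 block cap (the "4MB" limit)

def weight(base_size: int, total_size: int) -> int:
    """BIP 141 tx weight: 3 * non-witness size + total size."""
    return 3 * base_size + total_size

# Legacy-style payment tx: no witness data at all, so total == base and
# weight == 4 * size. A block of only these maxes out at 1,000,000 bytes.
assert MAX_BLOCK_WEIGHT // weight(1, 1) == 1_000_000

# Witness-stuffed tx: say 100 non-witness bytes carrying 3,900 witness bytes.
base, total = 100, 4_000
n_txs = MAX_BLOCK_WEIGHT // weight(base, total)
full_block_bytes = n_txs * total     # ~3.7MB of raw block data
```

Normal payment txs sit between the two extremes (witnesses exist but are small), which is where the ~1.7-2.5MB full-block figure comes from.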
'In a graph vs isolation' - yeah that's always been my biggest gripe when people compared the Zerolink or Samourai model to the Joinmarket model. Though obviously, can of worms there.
Ah right, makes sense. But still pronounced the same, I guess?
Tend to agree, but of course it's a very non-trivial question.
Always saw these fingerprint questions as: one of the 2 extremes is always preferable: 100% strict rule following, or complete randomness. Raises the interesting question of whether we should campaign for all wallets to randomize everything all the time :)
(Even though some can't, unfortunately, because they have specific features... but taproot can help with that)
I just realised that, in Spanish, a woman from Argentina is 'una Argentina', which tickles me a lot.
Imagine: 'hey, meet my new girlfriend, she's an England' 😆
Maybe? My reluctance to suggest Tor onion services isn't overcome by what (admittedly very little) I know about I2P for this kind of broader/larger audience application.
In Core 26.0, if you specify a conf file with `-conf=/somewhere/bitcoin.conf` and there is also another `bitcoin.conf` in your datadir, bitcoind refuses to start with a warning, unless you also set another option: `allowignoredconf=1`. This seems like overengineering: how likely is it that I would explicitly set the conf file location and not want to use it?
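For the record, the trigger is the combination of a `bitcoin.conf` sitting in the datadir *plus* an explicit `-conf=` path on the command line; the escape hatch goes in the conf file you specified (or on the command line). A sketch:

```conf
# in /somewhere/bitcoin.conf (the file passed via -conf=):
# suppress the startup error about the ignored datadir bitcoin.conf
allowignoredconf=1
```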
https://stacker.news/items/354028
'even if .... pennies of value'.
Is there any solution outside of a Tor onion service?
Don't think it's the number (2, or > 1) that matters; it's that there are things not yet tested that it successfully predicts (and that other models/theories fail to predict).
I think it's best to stick with Popper's definition of a good scientific theory: it needs to make falsifiable predictions and have them not falsified by future experimental data.
General Relativity, e.g., passed this with flying colours (perihelion of Mercury, atomic clocks on airplanes etc). From what I've heard, string theory, e.g., doesn't.
An interesting example of where it gets tricky is the neutrino. Iirc it was postulated to explain an energy deficit in a nuclear reaction. But then people were able to use it to predict other experimental results. Is dark matter like that? Cosmology is really tough from this point of view.
Has anyone tried out the new payjoin-cli ?
https://github.com/payjoin/rust-payjoin/tree/master/payjoin-cli
Interesting! Especially because it's using Core.
Sure, see https://eprint.iacr.org/2022/756
Read the intro sections for links to some of the prior art. This is not specific to secp/secq; see e.g. Tweedledum and Tweedledee.
Hmm, it looks like the situation has changed since I last looked about a year ago. The Arkworks algebra repo now seems to have it:
https://github.com/arkworks-rs/algebra/tree/master/curves/secq256k1
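Out of curiosity, the 2-cycle claim (secq256k1 is the same y² = x³ + 7 equation over F_n, with group order exactly p, so field size and group order swap between the two curves) is cheap to sanity-check in pure Python. A sketch, with curve constants from the SEC 2 spec and everything else stdlib:

```python
# secp256k1's field prime p and group order n (SEC 2 values)
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def sqrt_mod(a, q):
    """Tonelli-Shanks: a square root of a mod odd prime q, or None."""
    if a % q == 0:
        return 0
    if pow(a, (q - 1) // 2, q) != 1:
        return None                      # non-residue
    s, e = q - 1, 0
    while s % 2 == 0:
        s //= 2; e += 1
    z = 2
    while pow(z, (q - 1) // 2, q) != q - 1:
        z += 1                           # any quadratic non-residue will do
    m, c, t, r = e, pow(z, s, q), pow(a, s, q), pow(a, (s + 1) // 2, q)
    while t != 1:
        i, t2 = 0, t
        while t2 != 1:                   # least i with t^(2^i) == 1
            t2 = t2 * t2 % q; i += 1
        b = pow(c, 1 << (m - i - 1), q)
        m, c = i, b * b % q
        t, r = t * c % q, r * b % q
    return r

def add(P, Q, q):
    """Affine addition on y^2 = x^3 + 7 over F_q; None is the identity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % q == 0:
        return None
    if P == Q:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, q) % q
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, q) % q
    x3 = (lam * lam - x1 - x2) % q
    return (x3, (lam * (x1 - x3) - y1) % q)

def mul(k, P, q):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1:
            R = add(R, P, q)
        P = add(P, P, q); k >>= 1
    return R

# Find any point on secq256k1 (same equation, but over F_n):
x = 1
while (y := sqrt_mod(x**3 + 7, n)) is None:
    x += 1
P_point = (x, y)

# If the group order really is p, then p * P must be the identity.
# (Since p is prime and within the Hasse interval around n, this
# actually pins the order down to exactly p.)
assert mul(p, P_point, n) is None
```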