What is the fastest known vanity address generator?
Thanks man! I'm glad to see the library is going to get some use
Do you love bitcoin smart contracts?
Sick of constantly renewing your DLCs with on-chain transactions?
Wouldn't you prefer your adorable lil' smart contract to live forever?
Now it can.
Apologies, I think I need to retract my link above - that describes the work that clients have to do. Oracles don't have to do all that, though they do sign prefixes of the numeric outcome instead.
I suppose having a tag could work, but it looks like I'm at the limit of my knowledge on DLCs here.
In looking around, I also found this,
https://github.com/nostr-protocol/nips/pull/919
It may have more work in it, but I'm not sure if the PR is still active.
nostr:nprofile1qqs0awzzutnzfj6cudj03a7txc7qxsrma9ge44yrym6337tkkd23qkgpzemhxue69uhk2er9dchxummnw3ezumrpdejz7qgewaehxw309akxjemgw3hxjmn8wfjkccte9e3k7mf0qy88wumn8ghj7mn0wvhxcmmv9uuxfel8 was the last to touch that.
A numeric DLC oracle publishes a set of nonces, one per digit, for some future price/time/asset-pair event. Then when that time rolls around in the future, it signs each digit of the price using one of those pre-announced nonces, plus the oracle's static signing key. I think nostr:nprofile1qy88wumn8ghj7mn0wvhxcmmv9uq3qamnwvaz7tmwdaehgu3wd4hk6tcpz9mhxue69uhkummnw3ezuamfdejj7qghwaehxw309amxjar0wghxummnw3erztnrdakj7qpqgcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqyw43nv 's approach here is to simply sign a price inside by virtue of its inclusion in a signed nostr event object.
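To make the digit-signing idea concrete, here's a toy sketch in Python. It uses a schoolbook Schnorr-style signature over a plain multiplicative group instead of secp256k1, so it is NOT secure or compatible with real DLC oracles; it only illustrates how one pre-announced nonce per digit position lets the oracle later attest to each digit of the price.

```python
import hashlib
import secrets

# Toy discrete-log group for illustration only (NOT secure, NOT secp256k1).
P = 2**127 - 1          # a Mersenne prime modulus
G = 3                   # generator

def H(*parts) -> int:
    """Hash arbitrary values to a challenge scalar."""
    h = hashlib.sha256()
    for p in parts:
        h.update(str(p).encode())
    return int.from_bytes(h.digest(), "big") % (P - 1)

# Oracle's static signing key.
x = secrets.randbelow(P - 1)
X = pow(G, x, P)

# Announcement: one pre-committed secret nonce per digit position,
# with the corresponding nonce points published in advance.
ks = [secrets.randbelow(P - 1) for _ in range(5)]
Rs = [pow(G, k, P) for k in ks]

def attest(price: int):
    """Attestation: sign each digit of a 5-digit price with its nonce."""
    sigs = []
    for i, d in enumerate(f"{price:05d}"):
        e = H(Rs[i], X, i, d)            # challenge binds nonce, key, position, digit
        s = (ks[i] + e * x) % (P - 1)    # Schnorr-style response
        sigs.append((Rs[i], s))
    return sigs

def verify(price: int, sigs) -> bool:
    """Check each per-digit signature against the announced nonces."""
    for i, (R, s) in enumerate(sigs):
        e = H(R, X, i, f"{price:05d}"[i])
        if pow(G, s, P) != (R * pow(X, e, P)) % P:
            return False
    return True
```

Because the nonce points are published before the event, anyone can pre-compute the adaptor points for every possible digit outcome, which is what makes DLC contract construction possible before the oracle attests.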
I really like the NIP-88 proposal you linked; I've done some work on it, and the draft seems like it'd fit most people's needs. I'd be in favor of standardizing the oracle events (kind 88 and 89), but not the DLC offer/accept events (kind 8888, 30088). I think we should spend more time improving the performance and privacy of the DLC specs before standardizing those messages downstream in Nostr.
The oracle messages are important to get started on right away, because without a vibrant market full of trustworthy oracles, nobody is going to use DLCs. Walk before you run, right?
https://github.com/nostr-protocol/nips/pull/919
Perhaps one day, my nostr client can fetch announcements for tomorrow's bitcoin price from 9 different oracles, and use them to create a multi-oracle DLC offer which anyone on the same relays can accept. But for now I'd be happy with simply having nostr as an oracle platform which other apps can build on.
I definitely agree, the Taproot approach is much better. I updated my post to point to yours. DASK is neat, but I don't think we need to be that fancy, and your Taproot approach is more soft-fork friendly.
I do still think using a lighter-weight signature scheme like WOTS for certification would be better and more future proof than committing directly to SPHINCS right now.
If we do end up wanting to use SPHINCS in 20-30 years, it'd still be an option: Just enforce that the WOTS key must sign a SPHINCS pubkey, and verify everything else against the certified SPHINCS key. That would only add a kilobyte to the already-huge SPHINCS signature data.
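As a concrete illustration of the certification idea, here's a minimal Lamport one-time signature in Python. Lamport is the simplest cousin of WOTS, so this is a hedged sketch of the concept rather than an actual WOTS implementation: a hash-based one-time key committed today signs (endorses) whatever PQ pubkey we settle on decades from now.

```python
import hashlib
import secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    """256 pairs of random preimages; the pubkey is their hashes."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg: bytes):
    """Reveal one preimage per bit of H(msg). One-time use only!"""
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return [sk[i][bit] for i, bit in enumerate(bits)]

def verify(pk, msg: bytes, sig) -> bool:
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][bits[i]] for i in range(256))

# The one-time key committed on-chain today certifies a future PQ pubkey
# (placeholder bytes here; the real scheme may not even exist yet).
cert_sk, cert_pk = keygen()
future_pq_pubkey = b"hypothetical SPHINCS or successor pubkey bytes"
endorsement = sign(cert_sk, future_pq_pubkey)
```

Real WOTS compresses this down by signing hash chains instead of single preimages, which is why its signatures are only about a kilobyte, but the delegation structure is the same.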
Think of it this way: if you're correct and this is just an edge-case fallback opcode that only a few people ever end up using, then do we really want the kitchen-sink approach of bringing SPHINCS into the bitcoin consensus layer, just for it to barely ever be used?
On the other hand, if this PQ fallback opcode is used a LOT, then we stand to save a LOT of witness space by biding our time and seeing where the cards fall on the landscape of many-time post-quantum signatures, rather than committing to huge SPHINCS signatures today.
Turns out Taproot is great for post-Quantum in Bitcoin. https://groups.google.com/g/bitcoindev/c/8O857bRSVV8
Hey nostr:npub185h9z5yxn8uc7retm0n6gkm88358lejzparxms5kmy9epr236k2qcswrdp, I can't reply on google groups, but you should check out my article on hash-based signatures and my DASK proposal. It's very similar to your idea in the way clients can opt into future quantum resistance without any consensus changes.
https://conduition.io/cryptography/quantum-hbs/#Upgrading-Bitcoin
I chose a different mechanism for my approach. Instead of an opcode in a tapscript leaf, DASK uses a PQ signature scheme pubkey as a secp256k1 secret key, and assumes future PQ bitcoin clients will validate against that PQ pubkey.
I think your proposal has a major benefit over mine in that it makes soft-fork compliance way easier. My DASK idea seems like it would need a hard fork on Q-Day, but yours would seem to be fully compatible with old clients. Big props
One idea from my post which I think you might want to consider copying is: Instead of committing to a SPHINCS key on-chain, commit to a hash-based certification key with shorter signatures, like WOTS.
Yes, it's a one-time signature scheme so we can't reuse the key, but we can be pretty sure that post-quantum cryptography will progress a long way in the coming decades, and we can use that one-time signature (which we are highly confident is secure today) to endorse a post-quantum key for a signature algorithm that might not exist yet, or that might exist today but whose security we aren't yet sure of, like Dilithium.
Your opcode soft-fork would have to specify the exact validation semantics later, but the WOTS authentication key has already been committed to, so the coins are safe.
What's amazing to me is that despite crossing into six figures of USD per bitcoin, median on-chain transaction fees are still at single-digit sats-per-vbyte rates.
$100k Bitcoin, guys and girls. Inevitability makes it no less awe-inspiring. Clever, purposeful dev teams are the reason why it happened. When people ask me why I'm so confident in Bitcoin, I say it's because I know how many brilliant minds are actively working out the kinks and making Bitcoin practical. With so much brain power we can't help but succeed at least a little bit.
Anyways, tick tock next block, I'll see you tomorrow internet.
Article fixed, now I'm assuming transparent setup ZKPs only (e.g. STARKs). Thanks for pointing this out
Great catch, thank you! Will fix this
ahah! I was sure someone must've thought of this idea before me, but I didn't have the right search terms. "ZKCP". This is great background reading, thank you nostr:npub1vadcfln4ugt2h9ruwsuwu5vu5am4xaka7pw6m7axy79aqyhp6u5q9knuu7!
Interesting to learn that the SNARK setup is more sensitive than I had thought. I'll be sure to correct my article to reference the original idea and fix the flaw which I unwittingly duplicated.
As for performance, so far I've only tested the PTLC-bridging program shown in this section: https://conduition.io/bitcoin/zkpreimage/#Optimizing
With that, it took 3 minutes to generate a proof with the RISC0 STARK prover. Not great, but also not completely impractical. I'd like to test other prover programs too.
"Sell me this preimage."
You can verifiably purchase the solution to any NP-Complete problem using Bitcoin (and Lightning), combined with zero-knowledge proofs.
- Buy the solution to a sudoku puzzle
- Buy the prime factors of an RSA key
- Bridge HTLCs with PTLCs
- Buy a valid proof-of-work
- and more...
That's completely true! The transition to PQ crypto is a slow march across all digital industries. I know for sure OpenSSH is actively working on this. https://linuxiac.com/openssh-9-9-released/
The most important part of the overall migration IMO will be TLS. Almost all TLS traffic today is basically plaintext to a quantum computer (including passwords sent to log into online services, and access keys downloaded over TLS). Cloudflare has a good post about that here: https://blog.cloudflare.com/pq-2024/
First time for me hearing of Dark Skippy, but it sounds like a pretty obvious idea: malicious firmware causes compromise of hardware wallets. That idea applies to pre- and post-quantum signatures of any algorithm.
I went on a deep dive into post-quantum hash-based signatures and tried to apply them to bitcoin. At the end of the article I propose a way to insure today's Bitcoin wallets with a quantum-resistant fallback key, without any consensus changes needed.
nostr:nprofile1qqs0awzzutnzfj6cudj03a7txc7qxsrma9ge44yrym6337tkkd23qkgpzemhxue69uhk2er9dchxummnw3ezumrpdejz7qgewaehxw309akxjemgw3hxjmn8wfjkccte9e3k7mf0qy88wumn8ghj7mn0wvhxcmmv9uuxfel8 Hi!
I have a question about lnaddrd: if I set min_pay_request_sats to 1 in the yaml config file, the server shows 1000, as you can see here
https://ln.fredix.xyz/.well-known/lnurlp/fredix
Any idea?
Just sent you a few sats to test, seems like it's working
nostr:npub1vwldp9k95xxe9mgnzakykpmhmazj5q2veldt6jvju2dvrfkxvlaq6dzfhz That's because LNURL specifies that the min/max sendable fields in the LNURL response are in millisatoshis (thousandths of a sat). 1000 msat = 1 sat, so everything should be working correctly for you there
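Here's a minimal sketch of the unit conversion the LNURL-pay spec (LUD-06) requires. The function name and config field names are hypothetical, not lnaddrd's actual code; the point is just the sats-to-millisats multiplication.

```python
def lnurlp_response(min_pay_request_sats: int, max_pay_request_sats: int) -> dict:
    # LUD-06 expresses minSendable/maxSendable in MILLIsatoshis,
    # so a config value given in sats must be multiplied by 1000.
    return {
        "minSendable": min_pay_request_sats * 1000,
        "maxSendable": max_pay_request_sats * 1000,
    }
```

So a configured minimum of 1 sat correctly appears as `"minSendable": 1000` in the server's JSON response.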
I agree about the DoS and scaling risks for websockets. Publishing data about spent proofs intermittently sounds more realistic to me, but I don't see any point in using Nostr to do that. A cashu mint is itself an always-on server already. What is gained by pushing that data to other people's servers besides offloading the DoS/scaling problems onto someone else? Instead the mint should just make that data available directly to clients via a new API endpoint.
Publishing one data point per spent proof wouldn't scale well - the mint needs some kind of accumulator or filter which compresses the information about spent proofs while allowing client-side validation. This seems like a use-case for Golomb-Coded sets (GCS), as used in BIP158 compact filters.
https://github.com/bitcoin/bips/blob/master/bip-0158.mediawiki
The mint computes a GCS which compresses all proofs spent recently, and updates the filter regularly. If the mint keeps the GCS hash representations sorted in memory, it could easily perform this update atomically for every single swap/melt without much of a performance hit (it'd be O(log n) insertion complexity).
If the filter gets too big, the mint compresses and archives it, and starts building a new one from scratch, or else evicts older GCS members (spent proofs) with a FIFO queue.
Clients can download and check the compact GCS filter at any time to see if their proofs have been spent recently, without revealing which ecash notes they're curious about. There's a chance for false positives but no chance of false negatives, so you might think your token was spent when it wasn't, but if your token was spent you'll always know about it.
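The filter idea above can be sketched like this. This toy version implements only the hash-to-range mapping step from BIP158 (using SHA-256 in place of BIP158's SipHash, and a made-up filter key), plus the sorted-membership check; a real implementation would additionally Golomb-Rice-encode the deltas between sorted values for compression.

```python
import bisect
import hashlib

M = 784931  # inverse false-positive rate, borrowing BIP158's parameter choice

def map_to_range(item: bytes, n: int, key: bytes) -> int:
    """Hash an item uniformly into [0, n*M). BIP158 uses keyed SipHash;
    SHA-256 stands in here for illustration."""
    h = int.from_bytes(hashlib.sha256(key + item).digest()[:8], "big")
    return (h * (n * M)) >> 64  # multiply-shift range reduction

def build_filter(spent_proofs, key: bytes):
    """Map every spent proof into the small range and keep the values sorted."""
    n = len(spent_proofs)
    return n, sorted(map_to_range(p, n, key) for p in spent_proofs)

def might_contain(filt, proof: bytes, key: bytes) -> bool:
    """Binary-search membership: false positives possible, false negatives not."""
    n, values = filt
    v = map_to_range(proof, n, key)
    i = bisect.bisect_left(values, v)
    return i < len(values) and values[i] == v
```

Any proof that was actually inserted always maps to a value present in the filter (no false negatives), while an unrelated proof collides with probability roughly 1/M per query, which is the trade-off described above.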
I made a self-hosted Lightning Address webserver, because for some reason I couldn't find any good ones out there already.
https://github.com/conduition/lnaddrd
This is what powers my snazzy new `conduition@conduition.io` LN address
I just realized my lnaddrd server was configured not to accept zaps less than 500 sats... sorry to anyone who was trying to zap me on nostr for the past few months, I'm not that greedy, I swear!
nostr:note1c4pa7a3jqzhf078msmsy7j3uyduegde967nhqr3e7nyvpj7xc3jqscwl64
nostr:npub10vlhsqm4qar0g42p8g3plqyktmktd8hnprew45w638xzezgja95qapsp42's reply below is correct, FROST with trusted dealer would solve your use-case here.
Of course nostr:npub1vadcfln4ugt2h9ruwsuwu5vu5am4xaka7pw6m7axy79aqyhp6u5q9knuu7's point is also valid: If you want true multisig security you'd need to be certain that you securely erase the distributed shares on your machine afterwards, and of course erase the original secret key (except for maybe keeping a paper backup). Still, it's a valid approach if you are OK with those assumptions.
I'm currently working on a BIP340 compatibility pull request into the Zcash Foundation's FROST implementation (Rust), I believe it should work to produce valid Nostr sigs.
https://github.com/ZcashFoundation/frost/pull/730
If you use a python stack, check out nostr:npub1tr7ndmjkguzupvqhn9clasx4r08q2sklhal4g60844nsnnalw0tqpwnc05's FROST implementation here
https://github.com/jesseposner/FROST-BIP340
nostr:npub160t5zfxalddaccdc7xx30sentwa5lrr3rq4rtm38x99ynf8t0vwsvzyjc9 linked to my FROST BYOK article but that would only be practical if you wanted to create a joint key from existing (static) npubs. I don't think it's useful for splitting an existing key.