nip04 encrypt: 5,000 iter/s

aes-cbc without a shared secret encrypt: 40,000 iter/s

aes-gcm and nip44 basically the same.

The point is: if you're encrypting a message to only yourself, it's roughly 10x faster to skip the Diffie-Hellman step and just BYO secret.

Please note that I don't actually understand all these words, I just ran a few benchmarks, and decided to encrypt secrets in my database with aes-gcm and just one key instead of nip44/nip04.

Discussion

Interesting. I am currently using NIP-44 to encrypt private events. What you are suggesting is that if I have a properly-seeded secret, with AES I can get 10× the performance.

If you are just speaking about symmetric encryption, then yes, of course it's going to be much faster. It's the same reason most of the world (TLS) uses it after doing a key exchange. The goal of nip04 and 44 was to bring "asymmetric" encryption using ECDH. The key derivation part is what's expensive. EC is typically far more computationally expensive than most other forms of asymmetric encryption, obviously depending on the data size. Large blocks can have better performance over RSA because the bulk of the data is handled by a symmetric cipher.

You're still generally going to want to generate IVs and store them with your data.

Finally, I'm unfamiliar with Postgres, but it's generally bad practice to use your own encryption functions if the DB offers encrypted columns. Most SQL implementations do.

https://www.vaughnnugent.com/public//resources/downloads/cms/c/6jzklj4a2ku5cdzyfkl64pgciy.webp

> EC is typically far more computationally expensive than most other forms of asymmetric encryption

There is no EC encryption.

No kidding? The following comment:

> The goal of nip04 and 44 was to bring "asymmetric" encryption using ECDH

Yes. The goal of nip04 and 44 was to bring symmetric encryption using AES derived from a key exchange using ECDH. It's the AES doing the encryption.

If you're using RSA at all, it's usually by generating a random AES key and using RSA to encrypt that AES key, then using the AES key to encrypt the cleartext. So in both cases it's AES doing the encryption.

If you're comparing performance: the computational cost of the AES encryption depends on the size of the cleartext and is the same in both cases, whereas the ECDH key exchange or the RSA encryption of the AES key is a one-off.

You're right. I was being rude for no reason. I'm sorry. I'm assuming you only saw the shared context, not my reply? If it helps, I'm the author of the nip44 C reference implementation.

https://github.com/paulmillr/nip44/tree/main/c

GCM is theoretically less secure with its 12-byte initialization vector (nonce), and you have to roll the key over after deriving 4 GB of keystream.

as far as i know, nip-44 uses the chacha20 stream cipher (with an hmac-sha256 tag rather than poly1305), which is like 4x faster than AES or something like this

there is a CSPRNG library written by Luke Champine called `frand` which uses chacha12 or chacha20... for our needs we'd need to stick with chacha20, but for decent entropy in test environments chacha12 is fast enough to provide extremely fast keystreams or random numbers from a seed (like a timestamp)

it's best to use a fresh secret whenever possible with ECDH key derivation, as the more you reuse the same key pair, the more likely you are to expose yourself to a possible plaintext attack

i have thought about (but not yet benchmarked) using SHA256 as the keystream derivation hash in a CTR-mode cipher for a custom noise protocol encryption mode... maybe i'll get back to that at some point. the reason i bring it up is that there are AVX2 implementations of SHA256, like https://github.com/minio/sha256-simd, and if you deliberately exploit the parallelism available in counter mode, you can have it doing at least 2 hash calculations in parallel if the code is written right, which wouldn't be possible with a feedback mode

What's aes-cbc without a shared secret? You mean without a key exchange step that generates a secret?

The key exchange is a one-off, whereas the encryption cost depends on the amount of data.