I never understood why base64 requires those "=" at the end. They are so annoying to copy and can be easily inferred programmatically.
Is it a delimiter?
how's life bro, you good?
bmV2ZXIgZ29ubmEgZ2l2ZSB5b3UgdXA=
If I'm not mistaken, it's to fill empty space
Why are most of your notes being edited into annoying gibberish?
You’re using Amethyst?
It's making me hate him, but I don't know if it's his fault (it probably is), in which case fuck you nostr:nprofile1qqsrhuxx8l9ex335q7he0f09aej04zpazpl0ne2cgukyawd24mayt8gprfmhxue69uhhq7tjv9kkjepwve5kzar2v9nzucm0d5hszxmhwden5te0wfjkccte9emk2um5v4exucn5vvhxxmmd9uq3xamnwvaz7tmhda6zuat50phjummwv5hsx7c9z9
It's a protest against amethyst edit feature
i'm pretty sure it relates to minimizing the amount of counting and comparing
see this code for an example of a base 10 encoder
https://github.com/mleku/realy/blob/dev/ints/ints.go
you can see the chunking i've used; it accelerates decoding, because you can build an index table of short words, scan it by magnitude, and immediately grab the encoded chunk that corresponds to it
you can also find similar chunking schemes in the go standard library's multiple-precision code, but when you have a fixed precision, you have a fixed symbol unit length, and every single time you parse a chunk you have to make decisions. by adding padding, you eliminate the need for those decisions; they collapse into one size test, a modulus and a comparison
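A minimal sketch of that single size test, reusing the string posted earlier in the thread (`decodePadded` is a hypothetical name, not from the linked repo):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodePadded sketches the one decision padding buys a decoder:
// a single modulus and comparison up front, after which every
// 4-symbol chunk maps to 3 bytes with no further length branching.
func decodePadded(s string) (string, error) {
	if len(s)%4 != 0 {
		return "", fmt.Errorf("length %d is not a multiple of 4", len(s))
	}
	out, err := base64.StdEncoding.DecodeString(s)
	return string(out), err
}

func main() {
	msg, err := decodePadded("bmV2ZXIgZ29ubmEgZ2l2ZSB5b3UgdXA=")
	if err != nil {
		panic(err)
	}
	fmt.Println(msg) // never gonna give you up
}
```

Without padding, the decoder would instead have to branch on len(s) % 4 (tails of 0, 2, or 3 symbols are legal; 1 is impossible).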
base64 provides for storing 48 bits in 64 bits of data, so it has a word length of 64. i think most implementations would entail a lookup table of probably 12 bits per symbol, or 64k of symbols, each two bytes for the code and two bytes for the (must-be-concatenated) fragment of the represented data
this thing about 64 bit words in modern CPUs is a huge part of why there is padding
It's just padding. Seems pretty natural from a decoding perspective to account for the lack of bits. And IIRC it's optional.