Relays should provide both JSON and Binary options.
JSON-only is extremely wasteful for bandwidth-constrained clients.
You won't change my mind.
Easier and better option - relays should support #Reticulum as well as HTTP(S).
Reticulum packets are always compressed at the transport layer.
It's an established standard that "just works", no need to herd cats into a completely new binary protocol.
nostr websockets are generally flate-compressed by default too... for the most part this shrinks the hex-encoded fields almost to binary size, and it greatly compresses long text segments and repeating fields like those in REQ envelopes
Don't forget LXMF
https://github.com/markqvist/lxmf
#Reticulum
Yes, 💯. I keep fighting json crapola on HAMSTR. Ended up compressing and serializing for now.
write a codec. simple
in my nostr codec it decodes all the nasty hexadecimal fields (p tags, e tags, a tags) into raw binary for internal use to save memory and speed up matching, and this is also then transformed to the binary format used by the database
all you have to do is write one good codec and done, modularity solves the problem
JSON is the reason 90% of nostr devs were able to learn nostr easily
don’t need to change your mind.
think what you want.
you need to go study the principles of Unix if you don't understand why having the messages in JSON is better.
the default websockets use Flate compression and this is pretty decent at deduplicating the high entropy pubkeys
if you want to make a relay and client that work with a binary format, go make it, but you will have a bad time if their primary support for JSON is not kept fully up to date; interoperability > all
by obscuring data in a binary format you create all kinds of problems with interoperability, debugging, and so on. i'm not saying don't do it, but it's a low priority compared to fully supporting the easy-to-debug, human-readable format; even if json is shitty in parts of its syntax, it's still readable
here's the principles of unix, summarised by Brave's AI:
-----
Write programs that do one thing and do it well: Focus on simplicity and single-purpose tools.
Write programs to work together: Design tools to be modular and interoperable.
*Write programs to handle text streams: Use text as a universal interface for data exchange and processing.*
Additional Principles
From “The Art of Unix Programming” by Eric S. Raymond:
Rule of Clarity: Clarity is better than cleverness.
Rule of Composition: Design programs to be connected with other programs.
Rule of Separation: Separate policy from mechanism; separate interfaces from engines.
Key Concepts
*Plain text: Store data in plain text files.*
Hierarchical file system: Organize files and directories in a hierarchical structure.
Treat devices and IPC as files: Use a file-like interface for device management and inter-process communication.
Use software tools: Prefer small, specialized programs (tools) over monolithic applications.
Command-line interface: Use a command-line interpreter to string together tools and execute tasks.
this is the right answer
i don't see how anyone could disagree
Out of curiosity: what do you think of systemd?
🫂
i don't really like it, i prefer the old days of rc.d
systemd is too complex
Good text. I'm almost there with our new relay. The last missing step is grouping multiple tools on the command line.
Yeah, this approach allows for "small specialized programs".
Such as: a binary protocol to optimize send and receive speed and bandwidth, primarily over mobile networks.
Protocols have different design goals.
yes, but don't turn it into a waste of dev resources like Wayland or most of Linux
I wrote a NIP for this. Got almost zero engagement. People want baubles.
Lol, it took me several seconds to realize you weren't talking about security options, and was wondering wtf a JSON option was 😆
Json was a mistake.
good luck having fun getting agreement on binary codecs
you have all the CBOR fans, and the Protobuf fans, and the MsgPack fans, and then there are the homegrown custom codecs like the one i made for my database and the one fiatjaf made for his, but at this point these only encode events
in principle a good idea but the only way you are gonna get this to work is getting several clients to adopt it and at least one relay that makes this available, and probably you still have to stick with websockets
CBOR
Gray beard approved.
you see, there is a problem: i don't like cbor after dealing with it in my work with bluesky
so why do you love it so much? why do you prefer it compared to msgpack or protobuf or flatbuffers or capnproto?
keep in mind this needs to connect together apps written in go, rust, javascript, java, c, and c++, and probably needs at least reasonable support for python and c#
from what i've seen, and it's been a while since i've looked that closely, protobuf was the most fully supported across all of these languages
personally, i would want to use flatbuffers. the current Go version of this protocol encoding is shitty, but i could imagine myself reworking it in a month or two into something that is really natural to use with Go, because i have in fact already written an on-demand protocol buffer encoder, twice: a simple TLV thing based on ad-hoc dynamic interface array message descriptors
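A minimal TLV encoder of the kind mentioned can be sketched in a few lines of Go; the field tags here are hypothetical, not from any published spec:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// appendTLV writes one tag-length-value field: a 1-byte tag, a
// 2-byte big-endian length, then the raw value bytes.
func appendTLV(buf []byte, tag byte, val []byte) []byte {
	buf = append(buf, tag)
	buf = binary.BigEndian.AppendUint16(buf, uint16(len(val)))
	return append(buf, val...)
}

func main() {
	// Hypothetical field tags; a real codec would pin these in a spec.
	const tagID, tagContent = 1, 2
	var msg []byte
	msg = appendTLV(msg, tagID, make([]byte, 32))     // raw 32-byte id
	msg = appendTLV(msg, tagContent, []byte("hello")) // utf-8 content
	fmt.Println(len(msg)) // (1+2+32) + (1+2+5) = 43 bytes
}
```

Decoding is the mirror image: read a tag, read a length, slice the value, repeat, which is why TLV formats stay easy to port across languages.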
CBOR is RFC standardized, protobuf is Google's personal pet project and with their schemas usually also too complicated.
🎯
as someone who works with interfaces and not variants, i don't like any protocol that doesn't have static typing, and i honestly don't see the point of complex rpc specification compilers when it's just a friggin function call
in about 10% of cases, partial failures can happen that should really return both an error and a partial value (eg, a result being an array of fields from an array of inputs), but nope, you can't do that with the typical C++/Rust error-handling idiom
this forces you to design APIs as single-shot rather than batched, which is a pretty rigid and performance-limiting way to do things, especially when it is easy to aggregate queries to be handled by, for example, databases with concurrent queuing (especially distributed ones, where the processing can be fanned out to multiple servers)
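In Go the partial-failure pattern being described is straightforward, since a function can return both a slice of successful results and a joined error; `fetchBatch` and its map-backed store are made up for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// fetchBatch looks up several keys at once and returns whatever
// succeeded alongside an error describing the failures, instead of
// failing the whole batch (illustrative sketch; the store is a map).
func fetchBatch(store map[string]string, keys []string) ([]string, error) {
	var out []string
	var errs error
	for _, k := range keys {
		v, ok := store[k]
		if !ok {
			errs = errors.Join(errs, fmt.Errorf("missing %q", k))
			continue
		}
		out = append(out, v)
	}
	return out, errs
}

func main() {
	store := map[string]string{"a": "1", "c": "3"}
	vals, err := fetchBatch(store, []string{"a", "b", "c"})
	fmt.Println(vals, err) // partial values plus an error for "b"
}
```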
i say, if you make the client and build out a relay interface go ahead, make your hipster CBOR encoder API
but for the love of God do not make up your own fancy new RPC API if you do not intend to support a sufficient range of common language bindings, NATIVELY... just take the json and binary encode it, don't fuck around
Yeah, protobuf is gRPC standard, like nostr:npub1wqfzz2p880wq0tumuae9lfwyhs8uz35xd0kr34zrvrwyh3kvrzuskcqsyn suggested.
i think in all this debate people just need to make a relay/client pair that understand the new protocol and done
I don't understand why they want to standardize this. Different devs will want different protocols and it's irrelevant to core interoperability because you can always communicate over json.
All I really care about is signed events that I can parse and verify with a npub. Plus a few conventions around kinds. If signed events can be serialized for more efficient transmission, great, but that’s icing on the cake.
Yup! Cryptographic identity and standardized data shapes are the most fundamental parts of Nostr. It's too hard to get devs to agree on everything else, so implementations will vary widely. If we keep to that core, though, we'll retain basic interoperability.
that's why i say make a binary encoder and runtime format for it like the ones i have designed
https://github.com/mleku/nostrbench
there is no way that anyone can make it faster than what i have done short of writing it in assembler, and it's such a short piece of code that it should be possible to implement it in any language
i am pretty sure that even javascript can deal with the binary 32- and 64-byte fields in the encoding, so performance improvements as dramatic as the ones in those benchmarks should be possible, while also enabling a pure binary encoding with a detached binary hash and signature
just like fiatjaf made nostr json a custom thing instead of jsonrpc2 like bitcoin uses for RPC, we should have a custom binary codec, just like bitcoin's chain data format
the hard part is going to be people who insist on javascript and python or the necessity of it for web apps, but even there, i am pretty sure you can make my codec into wasm modules and done
https://github.com/mleku/realy/blob/dev/event/binarymarshal.go and https://github.com/mleku/realy/blob/dev/event/binarymarshal.go are the in and out for the format, containing the ID and the signature of the json form
it's faster than fiatjafs and it's what i use in my database implementation
messagepack would work fine I think. the biggest gains would be parsing efficiency and battery life. decoding json sucks and is slow.
nostrdb has optimizations to skip parsing altogether when it doesn’t need to (stops parsing at the id field if we already have the note). The performance boost there is nice. messagepack or some other format would be another boost.
The *ideal* way would be something like flatbuffers, where you could just memcpy the note into your db… but it's more complex.
CBOR is basically the RFC standardized version of messagepack.
I recommend CBOR.
I proposed a format for that
my binary codec already does a lot of that memory remapping of fields, as the runtime and database versions of the data are basically the same: it keeps the id/pubkey/signature fields (including in common tags) in raw binary, and unpacking into the runtime format is just a matter of creating pointers
the hex encoding is also done with a SIMD hex encoder, and the sha256 hashing uses a threaded, worker-based SIMD implementation too, so on avx512 and avx2 it runs 2 hashes per CPU thread
keeping the binary fields as binary all the way up to the wire has a massive benefit
it is so close to being a perfectly serviceable wire codec as well; i just didn't design an envelope encoding or a binary serialisation for the filters
but other languages probably won't support this kind of optimization very well certainly not javascript
i don't get how javascript parsing native json (which should be optimized to the max) could really be much slower than making javascript work with foreign binary data formats for these binary fields
but i totally understand why it's hard to make clients that aren't JS/DOM/HTML/CSS based. the whole tech industry has focused on this universal platform and its abomination of a scripting language at the expense of serious languages supporting native apps, and there's a total lack of adequate multi-platform targeting and adequately broad language support for cocoa, win32, gtk and qt. ironically, most of the time it's either electron, putting the front end in a browser engine, or some kind of simple and fairly inadequate immediate-mode or similar direct opengl/vulkan-based thing (eg imgui, egui, gio, fyne, nucular)
Maxim already implemented it in a weekend.
Don’t tempt me!
If you want it, we’ll do it. CBOR.
Not sure about this one. JSON has a lot of advantages, size is negligible compared to media files people transfer all the time.
What relays should provide, next to websockets, is a plain HTTP API.
This lowers the bar even more for new nostr developers and makes things easier for prototypes and lots of non-realtime apps (no need to handle closing sockets, reconnection logic, etc).
You can have both served on the same path by the same process. Imo this should've been part of the spec; it might be too late now.
nostr:note142wvcpyyla0zv4lpnfnaj00yvx5xyh3edj0cxsuwgme83ldeky3sfvva7r
constrained
Fields like event id, sig, pubkey, p & e tags would be 50% smaller in binary vs utf-8 hex. Follow lists are currently quite large and their size would be basically cut in half.
But if we implemented NIP-114, we could save even 90% or more in bandwidth and some processing as well, depending on how many relays you connect to. Hoping to find some time for it again. https://github.com/nostr-protocol/nips/pull/1027
CBOR + NIP-114 + Negentropy 
Oh yes please.
#nostr is a data hog (which I just found out). I normally use unlimited data plans with extremely generous data allowances.
Will have to adjust if I do travel to Australia with shitty internet.
nostr:note142wvcpyyla0zv4lpnfnaj00yvx5xyh3edj0cxsuwgme83ldeky3sfvva7r
JSON should be required, but there's nothing wrong with alternative encodings
Is running both the default setting in strfry?
Why make it a core part of a relay instead of an optional service?
I could spin up a relay with, say, a gRPC service attached to it. JSON is free, gRPC (which is binary over HTTP/2) is for paying clients. Now I've financed my relay and given a high speed, low-overhead option to the people who want it.
and also, nostr 2026 clients that would use it and a pony.
Wtf is gRPC? Binary is binary
It's a well-reputed RPC framework designed for fast inter-service communication.
I don't believe you or know what RPC is to understand you in the first place
Remote Procedure Call (RPC) is a protocol by which one bit of code can invoke functions on a different program, even if that program is running on a different machine, as if it was just another locally defined function. There are a few flavors of RPC, but gRPC is the most common. It uses a language called protobuf to define the inputs and outputs of each function, then uses that definition to automatically generate client code by which a program can invoke that remote procedure.
TL;DR—I could write code on my laptop that seamlessly uses other code living on a server in a data center hundreds of miles away.
In this scenario, gRPC encodes the messages between my computer and the remote server in binary, and transmits up to several at a time to maintain fast processing speeds.
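Go's stdlib `net/rpc` (not gRPC, but the same idea) shows how a remote call reads like a local function call; `net.Pipe` stands in for a real network connection so the sketch is self-contained:

```go
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Arith is a toy service; Multiply becomes remotely callable.
type Arith struct{}

type Args struct{ A, B int }

func (Arith) Multiply(args Args, reply *int) error {
	*reply = args.A * args.B
	return nil
}

func main() {
	// An in-memory pipe replaces the network; a real deployment
	// would listen on a TCP port instead.
	srvConn, cliConn := net.Pipe()
	srv := rpc.NewServer()
	srv.Register(Arith{})
	go srv.ServeConn(srvConn)

	client := rpc.NewClient(cliConn)
	var product int
	// The remote call reads like a local function call.
	client.Call("Arith.Multiply", Args{A: 6, B: 7}, &product)
	fmt.Println(product) // 42
}
```

gRPC adds a schema language (protobuf), HTTP/2 transport, and generated bindings on top of this same request/response shape.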
I don't care. You said something about it being for paying clients. I'm not paying anyone to use binary. When there's a good version of nostr, it won't use json for shit that should be encoded in binary, and I still won't be listening to anyone who pretends I'm supposed to pay them for using binary.
Have you ever heard of noSQL databases?
No and I don't care since SQL is also not binary
All nostr needs at its core is binary + unicode + udp or another method of transfer
Anyone who doesn't understand this is a hecker
I'd love to see your implementation of this concept!
You're looking for someone else then. Criticizing the work of others isn't the same as actually joining in the work oneself. I don't know if I'm on your side since you're a human and humans are the reason I don't know if Digit is safe.
One more note: web browsers and the existing web protocols are bad, hence the need for nostr, and hence how a big flaw in nostr's design is trying to keep compatibility with outdated shit at the core of the protocol, instead of building the core to be future-ready and leaving everything web-browser-compatible as additions or extensions built atop the core protocol
You don't have to pay anyone for anything, on Nostr, as you can run your own or find someone who will let you use theirs for free.
Irrelevant to the point about gRPC.
Scroll up. gRPC is the irrelevant thing here, that's what I've been trying to tell the guy above the whole time. 1s and 0s are called binary, I don't need to hear the term "gRPC" to discuss how nostr shouldn't use javascript for things that should be binary.
Again though, always nice to see you reply
The people who don't understand you are suffering from laziness, specifically a phenomenon I call "inappropriately human-optimized code." They're fixated on treating machine language like human language and trying to make the digital ecosystem do the same, like they don't know what a computer is.