I see this and want to ask....
Why tho
There seems to be a misallocation of resources here, driven by a misguided desire to push various data types through a text protocol.
Sure, you can take anything, encode it, and even document a standard for how it's encoded so others can make use of it. That isn't innovative; it was done decades ago.
But this doesn't make it scale.
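A rough back-of-the-envelope sketch of that overhead, in Python with a made-up 1 MB blob and a hypothetical "content" field, just to show what shoving binary through a JSON text protocol costs before anything even hits the wire:

import base64, json, os

blob = os.urandom(1_000_000)                 # arbitrary 1 MB of binary data
encoded = base64.b64encode(blob).decode()    # base64 alone inflates it by ~33%
wrapped = json.dumps({"content": encoded})   # plus the JSON framing on top

print(len(blob), len(encoded), len(wrapped))
# roughly 1,000,000 -> 1,333,336 -> 1,333,351 bytes

And that cost is paid again by every relay that stores it and every client that downloads and decodes it.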
Either data gets encoded and centralized through relays, where the costs are pushed onto others and performance for end users suffers...
Or data owners just stand up their own servers, and at that point it should be obvious that native data formats and specialized data structures like databases are going to be far superior to layers of encoding.
It's like we're repeating old mistakes in an effort to do stuff just because we can, rather than considering the impacts.
#[0]