I'm not sure how many devs here are familiar with forensics and how it works, but demanding immutable media storage from a privacy- and anonymity-oriented media service is a conflicting request. There are algorithms that can detect media tampering based on the content rather than rigid cryptographic hashing, and those can be used if required. Having to host and deliver exactly what was uploaded makes no sense for media: a mobile user on a shitty internet connection does not need to download a 50-megapixel image when they can only see a tenth of it on screen, and the same goes for video. Compressing content to fit the client's needs is the key to a good user experience, not waiting 10-20 s to download pixels that will be discarded before they are ever seen.
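To make the downscaling point concrete, here is a minimal sketch of serving a display-sized variant instead of the raw upload. It assumes Python with Pillow; the file names and the 1280 px target are illustrative, not our actual pipeline.

```python
# Minimal sketch of serving a downscaled copy instead of the raw 50 MP upload
# (assumes Pillow; "upload.jpg" and the 1280 px width are illustrative).
from PIL import Image

def make_display_variant(src_path: str, dst_path: str, max_width: int = 1280) -> None:
    """Produce a smaller, recompressed copy suited to a phone screen."""
    with Image.open(src_path) as img:
        img = img.convert("RGB")                 # JPEG output has no alpha channel
        img.thumbnail((max_width, max_width))    # keeps aspect ratio, never upscales
        img.save(dst_path, format="JPEG", quality=80, optimize=True)

make_display_variant("upload.jpg", "upload_1280.jpg")
```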
Discussion
Fascinating problem. Would it be possible to embed these media downsampling algorithms in the protocol and expose them as a client-side setting? I saw an earlier mention of the need to separate relay code from client viewer code.
Not sure what you mean. Embedding what into what? And what would the pros and cons be?
Well, I was assuming there was some concern that substitution would happen instead of downsampling. With standard downsampling options written into the protocol, substitution could not take place, only downsampling, and then bloom or whoever only wants to propagate originals could choose the non-downsampled version as one particular option offered by the protocol.
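Purely hypothetically, I'm picturing something like the shape below; none of these field names come from an actual spec.

```python
# Purely hypothetical shape for protocol-level downsampling variants; none of
# these field names come from an actual spec. The uploader declares the
# original plus a fixed set of derived variants, so only downsampling (not
# arbitrary substitution) is on the menu.
media_entry = {
    "original": {"url": "https://example.com/abc/original.png", "width": 8192},
    "variants": [
        {"profile": "downsample-1920", "url": "https://example.com/abc/1920.webp", "width": 1920},
        {"profile": "downsample-640",  "url": "https://example.com/abc/640.webp",  "width": 640},
    ],
}

# A client that only wants originals simply ignores "variants" and fetches "original".
```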
If you want to ensure the integrity of the content, use a content hash, not a bit hash. Downsampling is not the only way to make things better; there are hundreds of ways, and it is not practical to mix them all into the protocol.
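As a rough illustration of the content-hash idea (just a sketch in Python with Pillow, not what we actually run): a SHA-256 changes on any recompression, while a simple difference hash over the downscaled luminance stays stable unless the picture itself is altered.

```python
# Sketch contrasting a bit hash (SHA-256 of the file bytes) with a simple
# content hash (64-bit difference hash over downscaled luminance). The file
# names are illustrative.
import hashlib
from PIL import Image

def bit_hash(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def dhash(path: str, size: int = 8) -> int:
    """64-bit difference hash: compare each pixel to its right-hand neighbour."""
    with Image.open(path) as img:
        gray = img.convert("L").resize((size + 1, size))
    pixels = list(gray.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# A re-encoded or downsampled copy gets a different SHA-256 but a dHash within
# a few bits of the original; a manipulated image drifts much further.
print(bit_hash("original.jpg") == bit_hash("downscaled.jpg"))   # almost certainly False
print(hamming(dhash("original.jpg"), dhash("downscaled.jpg")))  # typically small
```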
For most content I personally think the user experience is more important than delivering a bit-perfect copy. Does nostr.build currently do any sort of adaptive bitrate streaming for video, or is it served exactly as the user uploads it?
Not yet, but if we get support we can implement adaptive bitrate streaming.
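Conceptually it would be something along these lines: transcode each upload into an HLS rendition ladder and let the player pick a bitrate. This is only a sketch; the paths, bitrates, and ladder rungs are assumptions, not our pipeline.

```python
# Sketch of building an HLS rendition ladder with ffmpeg so players can adapt
# their bitrate. Ladder rungs, file names, and encoder settings are assumptions.
import subprocess

LADDER = [  # (height, video bitrate, audio bitrate)
    (1080, "5000k", "128k"),
    (720,  "2800k", "128k"),
    (480,  "1200k", "96k"),
]

def build_hls(src: str) -> None:
    master = ["#EXTM3U"]
    for height, v_bps, a_bps in LADDER:
        playlist = f"{height}p.m3u8"
        subprocess.run([
            "ffmpeg", "-y", "-i", src,
            "-vf", f"scale=-2:{height}",          # keep aspect ratio, even width
            "-c:v", "libx264", "-b:v", v_bps,
            "-c:a", "aac", "-b:a", a_bps,
            "-hls_time", "6", "-hls_playlist_type", "vod",
            playlist,
        ], check=True)
        bandwidth = (int(v_bps.rstrip("k")) + int(a_bps.rstrip("k"))) * 1000
        master.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth}")
        master.append(playlist)
    with open("master.m3u8", "w") as f:
        f.write("\n".join(master) + "\n")

build_hls("upload.mp4")
```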