I've looked into the IPFS multihash docs before and it looks cool, but I'm not sure what problem they solve. I don't see the point of supporting multiple hashing algorithms when all you want is a single universal ID for a file.
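As far as I understand it, a multihash is just a digest with a self-describing prefix, so the ID itself says which algorithm produced it. A minimal sketch in Python (stdlib only; 0x12 is the registered multicodec code for sha2-256):

```python
import hashlib

def multihash_sha256(data: bytes) -> bytes:
    # Multihash layout: <hash-fn-code><digest-length><digest>.
    # 0x12 = sha2-256 in the multicodec table; 0x20 (32) is the digest length.
    digest = hashlib.sha256(data).digest()
    return bytes([0x12, len(digest)]) + digest

mh = multihash_sha256(b"hello")
print(mh[:2].hex())  # prints "1220": fn code 0x12, length 0x20
```

So the "universal ID" stays a single opaque byte string, but a reader can tell sha2-256 output apart from, say, a future blake3 digest without any out-of-band info. That's the algorithm-agility argument, for whatever it's worth.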

I still need to learn more though 😀


Discussion

or file slices! you can add many and get the whole file. I think HTTP supports partial GETs, so a file can be split into many slices ( Web/HTTP/Range_requests ) with a hash for each slice.
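Something like this, I imagine: each slice maps to a byte range a client could fetch with a Range request and verify on its own. A rough sketch (the 4-byte slice size is just for illustration):

```python
import hashlib

def slice_hashes(data: bytes, slice_size: int):
    # One ((start, end), hash) pair per slice, mirroring HTTP Range
    # requests: a client could GET bytes=start-end and check that
    # slice against its hash without downloading the rest.
    out = []
    for start in range(0, len(data), slice_size):
        chunk = data[start:start + slice_size]
        out.append(((start, start + len(chunk) - 1),
                    hashlib.sha256(chunk).hexdigest()))
    return out

for (start, end), h in slice_hashes(b"x" * 10, 4):
    print(f"bytes={start}-{end} sha256={h[:16]}")
```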

i think the thing is that whether the hash is segmented or not is irrelevant at the network layer

true. front end, not backend

the individual hashes in a merkle tree can be numbered since they're sequential, so a query syntax can just add the segment number as an (optional) parameter to the top-level hash, whether that's the actual hash of the file or the root of the merkle tree over the segments. the segments don't need to be separately addressed
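i.e. one root hash addresses everything, and a leaf's index is its address. A toy merkle root over numbered segments (sha256, duplicating the last node on odd levels, which is one common convention):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    # Leaves are numbered 0..n-1 by position; pair them up level by level.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # odd count: duplicate the last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

segments = [b"seg0", b"seg1", b"seg2", b"seg3"]
root = merkle_root(segments)
# A query like <root-hash>?segment=2 would resolve to segments[2];
# the index is just the leaf's position, no separate address needed.
print(root.hex()[:16])
```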

just more complexity for no reason

the number of times hash functions get called in these things is far too low to justify serious optimization

just stick to one hash

and just stick to one network transport and encoding for as long as possible, and let someone else manage that shit

igaf about blake3. it's not that much better, and a plain blake3 implementation isn't faster than a SIMD implementation of sha256

blake3's main selling point is performance, its collision resistance and preimage resistance are considered to be about equal

when your codebase is already using SHA256, every further hash you add is extra interface complexity and execution complexity

without a compelling reason to do it, better to upgrade to a SIMD SHA256 than to ADD blake3
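fwiw it's easy to check whether hashing is even the bottleneck before adding anything. A quick stdlib-only throughput sketch (blake3 isn't in hashlib, so blake2b stands in as the "newer hash" here; numbers will vary by machine):

```python
import hashlib
import time

def throughput_mb_s(name: str, data: bytes, rounds: int = 50) -> float:
    # Rough MB/s for a named hashlib algorithm over `data`.
    t0 = time.perf_counter()
    for _ in range(rounds):
        hashlib.new(name, data).digest()
    dt = time.perf_counter() - t0
    return len(data) * rounds / dt / 1e6

payload = b"\x00" * (1 << 20)  # 1 MiB of zeros
for algo in ("sha256", "blake2b"):  # blake3 would need a third-party package
    print(f"{algo}: {throughput_mb_s(algo, payload):.0f} MB/s")
```

if the profile says hashing is a rounding error, the interface cost of a second algorithm buys you nothing.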