I was thinking about this at one point myself and thought it might make sense to embed multiple URLs for the same media file, so that if one host goes down the file would still be accessible via another. Maybe also with some sort of hint as to how the multiple URLs are handled, e.g. primary/secondary, round robin, etc. The client could then also load from the lowest-latency host to improve image loading performance.
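As a minimal sketch of that fallback idea (all names and the `file://`-style URL handling here are my own assumptions, not part of any NIP): a client could try each mirror in order and return the first response it gets.

```python
# Hypothetical sketch: try each mirror URL for the same media file in
# turn, falling back to the next one when a host is down or unreachable.
from urllib.request import urlopen
from urllib.error import URLError

def fetch_with_fallback(urls, timeout=5):
    """Return the bytes from the first mirror that responds."""
    last_err = None
    for url in urls:
        try:
            with urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except URLError as err:
            last_err = err  # host down/unreachable; try the next mirror
    raise RuntimeError(f"all mirrors failed: {last_err}")
```

A latency-aware client could first sort `urls` by a measured round-trip time before calling this, which would give the "load from the lowest-latency host" behaviour.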

I think the NIPs that push media to relays could work for images if we had special purpose relays specifically for hosting images, but I agree they aren’t the best solution in general.


Discussion

Relay media is not a solution; it's just not efficient and doesn't scale. Embedding multiple URLs also doesn't solve the problem if all of the URLs are dead. I think a good abstraction layer would be a solution here, but I'm not sure whether I've missed something fundamental that would break things in adverse ways 🐶🐾🫡

What’s wrong with URLs?

You don’t own it! 🐶🐾🫡

Because you don’t want to?

A domain name is rented property that you are not guaranteed to keep, that’s all. 🐶🐾🫡

Ah, I see.

A URL works with a bare IP too, but isn’t that the same issue?

You can use @note as an address space; maybe create a NIP that uses @file instead of @note, and suddenly you can store media on multiple relays and retrieve it easily.

No, no media on relays, that’s not smart. Doable, but inefficient, doesn’t scale, and legal issues may also arise. NIP-95 tried to propose just that, and most relay operators refused to accept it (rightfully so). I am talking about resolving a specialized nostr URL to an actual hosted file URL. Then we need a mechanism to resolve it, something like gossip or a torrent magnet link, or something else. 🐶🐾🫡

Existing relays use @note; I was proposing @file.

I wasn’t meaning the same relays serve both.

@file would probably work like CDN.

Ok, now I understand what you mean. Yes, probably a good idea and a useful additional abstraction 🐶🐾🫡

You would ideally embed enough URLs that hopefully they don’t all die, but yeah, it isn’t a perfect solution. I agree that an abstraction layer would be good, but how exactly does the discovery work, and how is the media distributed across multiple hosting services?

Many ways to go about it: host config in the profile, gossip-type discovery, preconfigured mirrors/archives, a special configurable proxy that does the discovery, etc. 🐶🐾🫡
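The "host config in profile" option could be sketched like this (the `media_mirrors` profile field and hash-based path layout are assumptions I'm making for illustration, not an existing NIP): a client reads the mirror list from the author's profile and expands a content hash into candidate URLs on each mirror.

```python
# Hypothetical sketch of profile-based media discovery: a profile lists
# mirror base URLs, and a file identified by its content hash is assumed
# to live at <mirror>/<sha256-hex> on each of them.

def candidate_urls(profile, sha256_hex):
    """Expand a content hash into candidate URLs on every listed mirror."""
    mirrors = profile.get("media_mirrors", [])  # assumed profile field
    return [f"{m.rstrip('/')}/{sha256_hex}" for m in mirrors]
```

The resulting list could then be fed into whatever fallback or latency-ranking logic the client uses.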

I actually thought about a caching-proxy-type layer a while back that would sit in front of the media hosts as an alternative to commercial CDNs. I was thinking about it more from the point of view of improving performance, but the same sort of concept could work here. The proxy would serve content locally if it has it, otherwise fetch it from the media hosts using some sort of discovery mechanism. Clients could keep a list of proxies in the same way they keep a list of relays, so if one goes down they use another.
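The proxy idea above can be sketched in a few lines (the cache layout and the shape of the upstream fetcher are my assumptions; a real proxy would also handle eviction, validation against the content hash, and concurrent requests):

```python
# Hypothetical sketch of a caching media proxy: serve from a local
# on-disk cache on a hit, otherwise fetch from the upstream media hosts
# via a pluggable discovery/fetch callable and cache the result.
import hashlib
import pathlib

class CachingMediaProxy:
    def __init__(self, cache_dir, fetch_upstream):
        self.cache = pathlib.Path(cache_dir)
        self.cache.mkdir(parents=True, exist_ok=True)
        self.fetch_upstream = fetch_upstream  # callable: key -> bytes

    def get(self, key):
        # Hash the key so any string is a safe cache filename.
        path = self.cache / hashlib.sha256(key.encode()).hexdigest()
        if path.exists():                # cache hit: serve locally
            return path.read_bytes()
        data = self.fetch_upstream(key)  # cache miss: ask the media hosts
        path.write_bytes(data)
        return data
```

Because `fetch_upstream` is just a callable, it could wrap any discovery mechanism (profile config, gossip, preconfigured mirrors) without the proxy needing to know which one is in use.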

There can be many ways, and translation layers/methods I guess 🐶🐾🤷‍♂️