Why is it dead?


Discussion

After messing around with it for over 48 hours, I realized it isn't worth trying to spin it up on a weak Raspberry Pi! It's also a pain in the ass to maintain!

From your experience, how quickly did you hit storage capacity for all of the notes being broadcast to your relay? Also, how scalable do you think storing everyone's notes is, now and in the future?

I ran it for only 24 hours before realizing I had installed it on the SD card, not an SSD, so my Pi's storage would eventually run out. I then tried to move the data from the Pi to the SSD, and that attempt failed. After this I started thinking about scaling and the options, and came to the conclusion that this isn't what I thought: it needs a proper server to scale and keep running, and it needs time! This is not like a Bitcoin node that you download and just keep running. You have to spend time maintaining it! Not for me, since I don't have that time or a good server!

Gotcha. To me it seems like it has to be maintained like a Bitcoin node, except one that needs attention all the time instead of every ten minutes. Wondering how this will affect the decentralization of Nostr.

V4V (value-for-value) via paid relays is probably the best solution we have right now.

A paid relay is a good idea, but it should be billed monthly or annually! Also, it shouldn't be a toy!

Can you elaborate on “the attempt failed”? Did you accidentally store the data on an SD card? You might be able to plug that SD card into a serious computer and try to salvage a disk image out of it using ddrescue. https://www.gnu.org/software/ddrescue/
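If it comes to that, imaging the card before doing anything else is the safe move. A rough sketch of a ddrescue invocation — the device path `/dev/mmcblk0` is an assumption, so check `lsblk` first to find the actual SD card device:

```shell
# Identify the SD card device first (do NOT guess!)
lsblk

# Pull a disk image off the (possibly flaky) SD card.
# -d: direct disc access, bypassing the kernel cache
# -r3: retry bad sectors up to 3 times
# sdcard.map lets you resume an interrupted rescue later
sudo ddrescue -d -r3 /dev/mmcblk0 sdcard.img sdcard.map
```

You can then loop-mount `sdcard.img` on another machine and copy the relay's database out of it.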

I mean the data was transferred from the Pi to the SSD! When I started running ./scripts/start I was getting a permission denied error! Docker was refusing to read the yml file.

Yea, the relay software isn't plug and play. It needs some serious tinkering to set up, especially nostream.

24 hours of attempting this made me realize that I don't want to run it anymore and kill my time maintaining it 24/7/365 😂😂

That doesn't seem like data corruption but a config issue. Your container is probably running as its own separate user, and the folder structure exported to the container is somehow owned by a different user. Did you get those scripts from some GitHub project?

Yeah, that script is in the repo. Sounds like a permission error, and those are not trivial to solve for people who haven't used Docker and Docker Compose before.

I tried to eliminate that permission error with chmod +x *
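For what it's worth, `chmod +x` only adds the execute bit, while a "permission denied" when Docker reads the yml file is usually about read permission or ownership after the move to the SSD. A hedged sketch of what I'd check — the paths and UID 1000 are assumptions, not something from this thread; check your own compose file:

```shell
# See who owns the compose file and the scripts after the move
ls -l docker-compose.yml scripts/

# Config files need to be readable; the start script needs execute
chmod 644 docker-compose.yml
chmod +x scripts/start

# If the container runs as a non-root user, hand it the data directory.
# UID/GID 1000 is a guess -- look up the actual user in the image docs.
sudo chown -R 1000:1000 ./data
```

Copying to a differently-formatted SSD (or with `cp` as root) often changes ownership, which is why it worked on the SD card but not after the move.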

I set up my own little filter relay streaming all events from a handful of public relays into my database.

It's roughly 700 MB a day without any spam protection.

Never mind, ~250 MB a day.
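Extrapolating from that figure, here's a back-of-envelope calculation of yearly growth (the 250 MB/day rate is the one observed above; everything else is just arithmetic):

```shell
# Rough yearly storage estimate for a relay ingesting ~250 MB/day
per_day_mb=250
per_year_mb=$((per_day_mb * 365))   # 91250 MB
per_year_gib=$((per_year_mb / 1024))
echo "${per_year_mb} MB/year (~${per_year_gib} GiB)"
```

So roughly 90 GiB a year at today's volume — manageable on an SSD, but not on a Pi's SD card, and it only grows as the network does.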

Which implementation did you use?

In my experience https://github.com/hoytech/strfry is super fast and could easily run on a Raspberry Pi.
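For anyone wanting to try it, the build is a plain C++ compile rather than a Docker Compose stack. This is a rough sketch from memory of the strfry README — treat the exact make targets as assumptions and follow the repo's current instructions:

```shell
# Sketch of building and running strfry (verify against the repo README)
git clone https://github.com/hoytech/strfry
cd strfry
git submodule update --init   # pull in vendored dependencies
make setup-golpe              # set up the golpe framework
make -j4                      # compile (takes a while on a Pi)

./strfry relay                # runs with the strfry.conf in the repo
```

Since it stores events in an embedded LMDB database, there's no separate database container to misconfigure, which sidesteps the Docker permission issues discussed above.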