if you turn them into jsonl with realy's admin listener you can push that much data up in literally seconds on a 100Mbit ethernet connection


Discussion

Yeah, but how to do it without looking like a spammer. 😂

Like, can we forward-signal a larger batch?

I get rate-limited all over, just for typing fast. 😂

it's an admin port, it just eats it as fast as it can swallow the data, literally, that's how i wrote it

at first i had problems with it blowing up memory

implementing import was a great way to bugfix

also, just to be clear, it is a single thread... it can eat about 10MB/s of events, but on my 12 core system i could probably run 4 of these concurrently before hitting the limit of the CPU, and probably stretching memory a lot... it takes in around 2000 events per second

Very nice.

We may need to have a couple of fast-upload relays, only for kind 30040s, geographically spread out on our various servers, and then it can drip drip drip into the other relays.

And we need a couple of different relay implementations. Don't want everything on strfry.

strfry is way overrated, anyway... realy is very new, only just at a level i call stable and complete... i'm sure you'll run into issues, but you'll certainly have fewer problems figuring out how to extend it

It doesn't need to be complete, yet. Gives us a chance to influence the feature set. 😁

i've built it off the base of relayer, which is like a cut-down, simplified form of khatru, and much of whatever is built for those will adapt fairly easily to it... just some differences in api that i made for reasons of potential optimizations impossible with the "standard" json encoder... plus, it's the fastest json encoder for nostr messages, full stop, though i don't know what the numbers are on strfry's codec, i doubt they are as good

one of these days i will try this... i've been following along from my strfry palace... any relay that doesn't default to postgres, i am interested...

i need to implement this new nip for relay mgmt soon... does your relay implement this? (acl mgmt w/ nip98 api) i'd paste the 'nip' but it's a draft (stupid never-merged nips) and stupid github doesn't let me search from my phone. 😂

I care far more about it working, without regression if possible, and being relatively open enough to modify and administrate. Most devs don't care about the administrators which is big sad, because most developers are usually terrible sysadmins.

ah, you see, i was a sysadmin for a long time before i got into actually writing servers... that's also why i like how i don't need containers with Go 😜

the configuration system is also ultra simple, because in my experience of writing servers those things are ugly and complex and have to be redefined multiple times to work, and fuck it, just set the damn environment variables, or write a file named `.env` to the root of your server's data directory and done... if it picks up the file, though, you can't override it with inline or in-environment settings

haven't got an answer to that problem yet, it's a pain in the arse

Where can I find realy's configuration documentation if you have any yet?

just add `help` to the command line and it prints out the information about the environment variables, describes what they do, and gives their defaults (or -h or --h or --help or even just `?`)

https://github.com/mleku/realy/blob/dev/cmd/realy/app/config.go

this is the output from using the CLI help option (this is the only CLI argument it understands, i made it as dumb as possible):

```
Environment variables that configure realy:

APP_NAME string default realy
PROFILE string default /mnt/old/home/mleku/.config/realy root path for all other path configurations (based on APP_NAME and OS specific location)
LISTEN string default 0.0.0.0 network listen address
PORT int default 3334 port to listen on
ADMIN_LISTEN string default 127.0.0.1 admin listen address
ADMIN_PORT int default 3337 admin listen port
LOG_LEVEL string default info debug level: fatal error warn info debug trace
DB_LOG_LEVEL string default info debug level: fatal error warn info debug trace
AUTH_REQUIRED bool default false requires auth for all access
OWNERS []string default [] list of npubs of users in hex format whose follow and mute list dictate accepting requests and events - follows and follows follows are allowed, mutes and follows mutes are rejected
DB_SIZE_LIMIT int default 0 the number of gigabytes (1,000,000,000 bytes) we want to keep the data store from exceeding, 0 means disabled
DB_LOW_WATER int default 60 the percentage of DBSizeLimit a GC run will reduce the used storage down to
DB_HIGH_WATER int default 80 the trigger point at which a GC run should start if exceeded
GC_FREQUENCY int default 3600 the frequency of checks of the current utilisation in minutes
PPROF bool default false enable pprof on 127.0.0.1:6060
MEMLIMIT int default 250000000 set memory limit, default is 250Mb
NWC string default NWC connection string for relay to interact with an NWC enabled wallet

CLI parameter 'help' also prints this information

.env file found at the ROOT_DIR/PROFILE path will be automatically loaded for configuration.
set these two variables for a custom load path, this file will be created on first startup.
environment overrides it and you can also edit the file to set configuration options
use the parameter 'env' to print out the current configuration to the terminal

set the environment using

/tmp/go-build1784177396/b001/exe/realy env>/home/mleku/.config/realy//home/mleku/.config/realy/.env
```

oh, yes, and there is an `env` option which outputs to stdout the configuration it is currently using, which can be the one that is already in the ~/.config/realy/.env file

also, no, NWC is not implemented yet, that's a WIP and not present on the current latest tag and commit on the "dev" branch

also, yes, the DB_SIZE_LIMIT feature works, i've tested it pretty extensively with a layer2 second level store i made that is just the same as the first level without any pruning... well, it seems to work pretty well, anyway, it definitely is pruning out the events... it keeps the index though, i think...
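the low/high watermark scheme from the help output above can be sketched as a pure function: when usage exceeds DB_HIGH_WATER percent of DB_SIZE_LIMIT, a GC run prunes until usage drops to DB_LOW_WATER percent. the numbers mirror realy's documented defaults (60/80), but the function itself is an illustration, not realy's code.

```go
package main

import "fmt"

// gcTarget reports whether a GC run should start, and if so, the byte
// count the run should prune storage down to. limit==0 means the size
// limit is disabled, matching DB_SIZE_LIMIT's documented default.
func gcTarget(used, limit, lowWater, highWater int64) (run bool, target int64) {
	if limit == 0 {
		return false, 0
	}
	high := limit * highWater / 100 // trigger point
	low := limit * lowWater / 100   // prune-down-to point
	if used <= high {
		return false, 0
	}
	return true, low
}

func main() {
	const gb int64 = 1_000_000_000
	// 8.5GB used of a 10GB limit: above the 80% trigger, prune to 60%.
	run, target := gcTarget(8*gb+gb/2, 10*gb, 60, 80)
	fmt.Println(run, target) // true 6000000000
}
```

keeping the target well below the trigger gives each GC run headroom, so the collector isn't re-triggered immediately after it finishes.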

i need to finish working on that with the first and original layer2 event store implementation, which uses the ICP shitcoin chain as data storage... it's not finished without one proper second layer, and i also want to add blossom as a second layer option. in theory you could set it up so it primarily stores indexes, has a tiny low/high water setting, always goes to the L2 for events, and keeps them in the DB only for as long as it takes for users to move on to other data... a day or two worth in cache, everything else on a blossom

I went through the same thing

lol! well, this is why running a relay is nice. in this case with relay.tools there isn't a traditional rate limit (because i try to never limit by IP). the limit that hits first is the CPU usage of event parsing for the access control list (spamblaster). so it should process just fine, it may just slow down while it processes.

one trick that could help, if you're worried your uploader won't know whether the events made it, would be to load them onto a local strfry and then run a negentropy sync. that would be kind of like an rsync: run it a few times and it'll fully sync if some events didn't make it the first time.

why do you think i go on and on about the importance of auth??

relays being dumb communist free-for-alls was never going to expand into anything except yet another gigarelay silo censorship fest, so yeah, auth. that's what auth could let you do... just add a ratelimiter to the relay, where some users have exceptions

and if the relay provider isn't handling this, ask them to upgrade to a relay that actually can

We have paid whitelisting on theforest. Was thinking we could somehow AUTH against that list on fast-uploaders.

So, speedy uploads for relay subscribers.

Have to think about this, some more. 🤔

yeah, main thing is you need to have, basically, nostr auth over HTTP