don't put it past Starbucks to deploy a "make your own" coffee solution and make it hip and trendy
someone canceled their liberapay so it's actually below again, 1996.61 in total, but that's fine. it's close enough
actually really inspired by the push overnight by users. we are over our goal for the first time and it feels good to not have to worry
nostr:npub15fkerqqyp9mlh7n8xd6d5k9s27etuvaarvnp2vqed83dw9c603pqs5j9gr ah shit, i have a survival run on my deck i should finish
nostr:npub1ljy7tr67hsc7qxppskp85ddssawagm2g5plhvavntll4alsetxfqaka0ge nostr:npub1uvhmxs6jecaujnwuqn6l9v40k4nrwu5ktdrp0qh92ywzc3dphhlqxrj7td nostr:npub1f7twcv30qr0jz37sp5kv3qrvx5u2u7qhe44f4ucfhsgw3ksygkrsdg3yrk nostr:npub1tjcqfzl45stvq7ldxag59sxq7xk8gdmr0lp2surjy6kwf3cxx6mstja8ua yeah but i don't want to bug him. if he's fine with it that works for me
nostr:npub108zt8c43ulvdwnax2txurhhr07wdprl0msf608udz9rvpd5l68ascvdkr5 dang, I didn't know repack was a thing :akaheh:
we run it weekly and back up the database twice daily. moving to a replicated setup when it's in the budget. the pleroma database is no joke
thank you friend
nostr:npub108zt8c43ulvdwnax2txurhhr07wdprl0msf608udz9rvpd5l68ascvdkr5 I see, thanks. was the database size a big problem before the split? that's what's biting me in the ass in the long run
up until about late december 2022 we were battling the database server being at 100% load 24/7. turns out you need to be actively repacking your database at least weekly. taking it down and doing a full vacuum helps, but the repack allows you to do it while it's online.
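for anyone wanting to copy this, the weekly job is roughly this shape (a sketch only -- the database name, schedule, and log path are assumptions, and pg_repack has to have its extension enabled in the database first):

```shell
# one-time setup: enable the pg_repack extension in the target db
# (debian/ubuntu package is usually postgresql-<version>-repack)
#   psql -d pleroma -c 'CREATE EXTENSION IF NOT EXISTS pg_repack;'

# weekly crontab entry: rebuilds bloated tables and indexes online,
# reclaiming space without the exclusive table locks that
# VACUUM FULL takes (which is why that one needs downtime)
# m h dom mon dow  command
0 4 * * 0  pg_repack --dbname=pleroma --jobs=4 >> /var/log/pg_repack.log 2>&1
```

`--jobs` just parallelizes the index rebuilds, so tune it to however many spare cores the db box has.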
that being said, there are still some things in pleroma that cause major database issues that i've brought up in the past -- this for example https://gitlab.com/soapbox-pub/rebased/-/issues/137 -- this one particular issue ties up 32-40 database connections at a time, almost 4x what the typical pleroma server has configured for database connections (and would equate to 100% cpu usage on anything <8 threads), for nothing. it's literally trying to update bookmarks from people who have deleted their accounts or are on instances that are unreachable. it's just stupid shit like that
nostr:npub1ljy7tr67hsc7qxppskp85ddssawagm2g5plhvavntll4alsetxfqaka0ge nostr:npub1f7twcv30qr0jz37sp5kv3qrvx5u2u7qhe44f4ucfhsgw3ksygkrsdg3yrk asked in the good matrix room. everyone in there i'd trust, so i'm sure someone will
nostr:npub1zalenxhtqamj4ay3pdh00n7lxa5qntymgyler063glhp9gpzgguq9uf3n9 don't worry about us friend, make sure you take care of yourself
nope, they don't operate in canada, sorry friend
the nice thing about mastodon is you can add external servers by the handful to do the processing, so it scales really well. but that performance increase does not come without major cost increases
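the scale-out being described looks roughly like this on each extra worker box (a sketch under assumptions -- the hostnames and queue list here are made up / typical defaults; the only real requirement is that every worker reaches the same postgres and redis):

```shell
# .env.production on the extra worker machine: point at the shared
# postgres and redis instead of localhost (hostnames are hypothetical)
#   DB_HOST=db.internal
#   REDIS_HOST=redis.internal

# then run only sidekiq on this box -- no web or streaming process.
# each extra machine adds another pool of federation/delivery threads
cd /home/mastodon/live
RAILS_ENV=production bundle exec sidekiq -c 25 -q default -q push -q pull -q ingress -q mailers
```

the catch, as noted above, is that every one of those boxes is another monthly bill, and the shared database still has to keep up with all of them.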
absolutely not. mastodon is a fucking pig
nostr:npub108zt8c43ulvdwnax2txurhhr07wdprl0msf608udz9rvpd5l68ascvdkr5 that sucks, sorry
genuine question, what were you expecting at the start? did the costs spiral out way too much?
well i certainly wasn't expecting 30 thousand users. the unfortunate part with pleroma, specifically its database, is that it doesn't scale well at all. if you aren't actively maintaining it, even an instance with only a handful of people will grind to a halt.
we started poast out on mastodon on a shitty 2c/2gb VPS in January 2021 and garnered 1000 users in only a handful of hours because some friends and i were advertising it on twitter. we outgrew that box on the first day and ended up moving to a dedicated server that was 50$/mo for a few months. we got up to about 3000 users and outgrew that server too (4c/8t 16GB shitty E3 Xeon from dedipath). during this time cloudflare, despite us paying monthly, suspended CDN services for us and accused us of "abusing the free tier", so overnight one night poast had all of its images replaced with some "User is abusing Cloudflare's Free Tier" image. so I had to build our own CDN. I started doing that in March of 2021 and had it deployed by the end of the month, which initially cost us about 50$ per month but now accounts for a much larger amount. the majority of the costs poast incurs is making sure we have the bandwidth we need tbh.
anyway, in march 2021 we moved from that shitty e3 to a dual xeon 24c/48t 64GB dedicated server from a company based in Texas called Spin Servers. this was fine until one day a switch in their backend disabled our 10G network because of a bandwidth overage, despite us being unmetered, and it happened on a day they decided to take a siesta I guess -- wasn't a national holiday or anything but nobody at all was in the office. it was 11 hours before the owner of the company reached out to me to apologize, but by then we had already made arrangements to move.
at this point, because the server was always taxed, we decided it was time to split pleroma and the database onto separate servers -- this is where the jump in cost came from, as we were renting a partial rack at another datacenter and had several machines hosted there (still have one that's being used for revolver iirc)
i cut the cost down about 400$ from its peak of about 2400$/mo by switching to where our hardware is now, but yeah. that's the history of poast and why it costs so much. tldr: multiple servers powering the backend and having to run our own CDN. using something like bunny.net for CDN, our costs would be ~4-5k/mo alone
nostr:npub18csv5swjm5vakees3kts0tlktdcrtjyqtqaqhksde2jy7erhhlnqx6tnx4 poa.st/about/donate
i was very frustrated last night friend. people have helped since then, but this was over the line and i am sorry that i said it
i lived in new york city for the better part of a year. you don't want to be around in the spring when the hudson river ice thaw happens because it smells like rotting flesh for like a whole ass week. no idea why anybody would get in that water