Do you see a lot of image bots in the logs, perhaps? Might want to grep the logs for user agents. Well-behaved bots you can block nicely with robots.txt; crawlers that ignore it can be blocked with AWS Network Firewall or WAF. Could be someone trying to mirror your content by scraping, so look for the offending IPs and rate limit them. You can run https://goaccess.io/ against the logs if you don't already have something for this.
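If you just want a quick tally before setting up goaccess, a rough sketch like this works against a standard nginx/Apache combined-format access log (the path is only an example, adjust to your setup):

import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # example path, adjust to your setup

# In the combined log format the request, referrer and user agent are the
# double-quoted fields; the user agent is the last one on each line.
quoted = re.compile(r'"([^"]*)"')

agents = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        fields = quoted.findall(line)
        if fields:
            agents[fields[-1]] += 1

# Print the 20 most frequent user agents, most common first.
for agent, count in agents.most_common(20):
    print(f"{count:8d}  {agent}")

Anything bot-like that shows up near the top is a good candidate for a robots.txt rule or a WAF block.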
Added the robots.txt earlier this month, yeah, trying to limit all scraping. Also increased compression and reduced the maximum video upload size.
The new load balancing architecture will likely solve these problems.
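(For reference, the blanket robots.txt that asks every compliant crawler to stay out is just the standard two-liner; it only helps against bots that actually honor it:

User-agent: *
Disallow: /
)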