from benagain@lemmy.ml to selfhosted@lemmy.world on 26 Nov 13:13
https://lemmy.ml/post/39501313
Got a warning for my blog going over 100GB in bandwidth this month… which sounded incredibly unusual. My blog is text and a couple images and I haven’t posted anything to it in ages… like how would that even be possible?
Turns out it’s possible when you have crawlers going apeshit on your server. Am I even reading this right? 12,181 with 181 zeros at the end for ‘Unknown robot’? This is actually bonkers.
Edit: As Thunraz points out below, there’s a footnote that reads “Numbers after + are successful hits on ‘robots.txt’ files” and not scientific notation.
Edit 2: After doing more digging, the culprit is a post where I shared a few wallpapers for download. The bots have been downloading these wallpapers over and over, using 100GB of bandwidth in the first 12 days of November. That’s when my account was suspended for exceeding bandwidth (it’s an artificial limit I put on there a while back and forgot about…). That’s also why the ‘last visit’ for all the bots is November 12th.
#selfhosted
threaded - newest
Fucking hell.
Yeah and that’s why people are using cloudflare so much.
One corporation DDOS’s your server to death so that you need the other corporations’ protection.
basically protection racket
That’s a nice website you gots there, would be ashame if something weres to happen to it.
We accidentally the whole config file
Somebody set up us the bomb
A friend (works in IT, but asks me about server related things) of a friend (not in tech at all) runs an incredibly low-traffic niche forum. It was running really slow (on shared hosting) due to bots. The forum software counts unique visitors per 15 minutes, and it was about 15k/15 min for over a week. I told him to add Cloudflare. It dropped to about 6k/15 min. We experimented with turning Cloudflare off/on and it was pretty consistent. So then I put Anubis on a server I have and they pointed the domain to my server. Traffic dropped to fewer than 10/15 min. I’ve been experimenting with toggling Anubis/Cloudflare on and off for a couple of months now with this forum. I have no idea how the bots haven’t scraped all of the content by now.
TLDR: in my single isolated test, Cloudflare blocks 60% of crawlers. Anubis blocks presumably all of them.
Also if anyone active on Lemmy runs a low traffic personal site and doesn’t know how or can’t run Anubis (eg shared hosting), I have plenty of excess resources I can run Anubis for you off one of my servers (in a data center) at no charge (probably should have some language about it not being perpetual, I have the right to terminate without cause for any reason and without notice, no SLA, etc). Be aware that it does mean HTTPS is terminated at my Anubis instance, so I could log/monitor your traffic if I wanted as well, so that’s a risk you should be aware of.
It’s interesting that anubis has worked so well for you in practice.
What do you think of this guy’s take?
https://lock.cmpxchg8b.com/anubis.html
I wouldn’t be surprised if most bots just don’t run any JavaScript so the check always fails
It could be, but they seem to get through Cloudflare’s JS. I don’t know if that’s because Cloudflare is failing to flag them for JS verification or if they specifically implement support for Cloudflare’s JS verification since it’s so prevalent. I think it’s probably due to an effective CPU time budget. For example, Google Bot (for search indexing) runs JS for a few seconds and then snapshots the page and indexes it in that snapshot state, so if your JS doesn’t load and run fast enough, you can get broken pages / missing data indexed. At least that’s how it used to work. Anyway, it could be that rather than a time cap, the crawlers have a CPU time cap and Anubis exceeds it whereas Cloudflare’s JS doesn’t – if they did use a cap, they probably set it high enough to bypass Cloudflare given Cloudflare’s popularity.
Is there a particular piece? I’ll comment on what I think are the key points from his article:
Wasted energy.
It interferes with legitimate human visitors in certain situations. Simple example would be wanting to download a bash script via curl/wget from a repo that’s using Anubis.
3A) It doesn’t strictly meet the requirement of a CAPTCHA (which should be something a human can do easily, but a computer cannot) and the theoretical solution to blocking bots is a CAPTCHA.
and very related
3B) It is actually not that computationally intensive and there’s no reason a bot couldn’t do it.
Maybe there were more, but those are my main takeaways from the article and they’re all legit. The design of Anubis is in many respects awful. It burns energy, breaks (some) functionality for legitimate users, unnecessarily challenges everyone, and probably the worst of it, it is trivial for the implementer of a crawling system to defeat.
I’ll cover wasted energy quickly – I suspect Anubis wastes less electricity than the site would waste servicing bot requests, granted this is site specific as it depends on the resources required to service a request and the rate of bot requests vs legitimate user requests. Still it’s a legitimate criticism.
So why does it work and why am I a fan? It works simply because crawlers haven’t implemented support to break it. It would be quite easy to do so. I’m actually shocked that Anubis isn’t completely ineffective already. I was actually holding off on bothering to test it out because I had assumed that it would be adopted rather quickly by sites and, given the simplicity with which it can be defeated, that it would be defeated and therefore useless.
I’m quite surprised for a few reasons that it hasn’t been rendered ineffective, but perhaps the crawler operators have decided that it doesn’t make economic sense. I mean if you’re losing say 0.01% (I have no idea) of web content, does that matter for your LLMs? Probably if it was concentrated in niche topic domains where a large amount of that niche content was inaccessible, then they would care, but I suspect that’s not the case. Anyway while defeating Anubis is trivial, it’s not without a (small) cost and even if it is small, it simply might not be worth it.
I think there may also be a legal element. At a certain point, I don’t see how these crawlers aren’t in violation of various laws related to computer access. What I mean is, these crawlers are in fact accessing computer systems without authorization. Granted, you can take the point of view that the act of connecting a computer to the internet implies consent, but that’s not the way the laws are written, at least in the countries I’m familiar with. Things like robots.txt can sort of be used to inform what is/isn’t allowed to be accessed, but it’s a separate request and mostly used to help with search engine indexing, not all sites use it, etc. Something like Anubis is very clear and in your face, and I think it would be difficult for a crawler operator who specifically bypassed Anubis to claim that their access wasn’t unauthorized.
I’ve dealt with crawlers as part of devops tasks for years, and years ago it was almost trivial to block bots with a few heuristics that would need to be updated from time to time or temporarily added. This has become quite difficult and not really practical for people running small sites, and probably even for a lot of open source projects that are short on people. Cloudflare is great, but I assure you, it doesn’t stop everything. Even in commercial environments years ago we used Cloudflare enterprise and it absolutely blocked some, but we’d get tons of bot traffic that wasn’t being blocked by Cloudflare. So what do you do if you run a non-profit, FOSS project, or some personal niche site that doesn’t have the money or volunteer time to deal with bots as they come up, when those bots are using legitimate user agents and coming from thousands of random IPs (including residential! – it used to be that you could just block some data center ASNs in a particular country until the traffic stopped)?
I guess the summary is, bot blocking could be done substantially better than what Anubis does and with less down side for legitimate users, but it works (for now), so maybe we should only concern ourselves with the user hostile aspect of it at this time – preventing legitimate users from doing legitimate things. With existing tools, I don’t know how else someone running a small site can deal with this easily, cheaply, without introducing things like account sign ups, and without violating people’s privacy. I have some ideas related to this that could offer some big improvements, but I have a lot of other projects I’m bouncing between.
AI scrapers are the new internet DDoS.
Might want to throw something in front of your blog to ward them off, like Anubis or a tarpit.
the one with the quadrillion hits is this bad boy: www.babbar.tech/crawler
Babbar.tech is operating a crawler service named Barkrowler which fuels and update our graph representation of the world wide web. This database and all the metrics we compute with are used to provide a set of online marketing and referencing tools for the SEO community.
we?
It’s a quote from the website
It is common custom to indicate quotes with either “quotes” or, for a longer quote, a blockquote.
The latter can be done by prefixing the line with a `>` here on lemmy (which uses the common markdown syntax).
Doing either of these helps avoid ambiguity.
Thanks for taking the time. I always find it hard to follow up and point out the ambiguity / alternative without coming across in some unwelcome way.
You replied to the wrong person. I already know this, but clearly the person who posted the quote doesn’t ;)
Metrics on what - how much beating can a server take before it commits ritual Sudoku and fries itself?
Unknown Robot is your biggest fan.
It’s 12,181 hits, and the number behind the plus sign is robots.txt hits. See the footnote at the bottom of your screenshot.
Phew, so I’m a dumbass and not reading it right. I wonder how they’ve managed to use 3MB per visit?
The robots are a problem, but luckily we’re not into the hepamegaquintogilarillions… Yet.
12,000 visits, with 181 of those to the robots.txt file makes way, way more sense. The ‘Not viewed traffic’ adds up to 136,957 too - so I should have figured it out sooner.
I couldn’t wrap my head around how large the number was and how many visits that would actually entail to reach that number in 25 days. Turns out that would be roughly 5.64 quinquinquagintillion visits per nanosecond. Call it a hunch, but I suspect my server might not handle that.
The traffic is really suspicious. Do you by any chance have a health or heartbeat endpoint that provides continuous output? That would explain why so few hits cause so much traffic.
It’s super weird for sure. I’m not sure how the bots have managed to use so much more bandwidth with only 30k more hits than regular traffic, I guess they probably don’t rely on any caching and fetch each page from scratch?
Still going through my stats, but it doesn’t look like I’ve gotten much traffic via any API endpoint (running WordPress). I had a few wallpapers available for download and it looks like for whatever reason the bots have latched onto those.
I run an ecommerce site and lately they’ve latched onto one very specific product with attempts to hammer its page and any of those branching from it for no readily identifiable reason, at the rate of several hundred times every second. I found out pretty quickly, because suddenly our view stats for that page in particular rocketed into the millions.
I had to insert a little script to IP ban these fuckers, which kicks in if I see a malformed user agent string or if you try to hit this page specifically more than 100 times. Through this I discovered that the requests are coming from hundreds of thousands of individual random IP addresses, many of which are located in Singapore, Brazil, and India, and mostly resolve down into those owned by local ISPs and cell phone carriers.
Of course they ignore your robots.txt as well. This smells like some kind of botnet thing to me.
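A rough sketch of that kind of ban script (the threshold, hot path, and user-agent heuristic here are stand-ins, not the commenter’s actual implementation):

```python
from collections import defaultdict

# Hypothetical thresholds: ban an IP outright on a malformed user-agent
# string, or once it hits the hammered page more than 100 times.
HIT_LIMIT = 100
HOT_PATH = "/products/hot-item"   # stand-in for the one hammered product page

hits = defaultdict(int)
banned = set()

def looks_malformed(user_agent: str) -> bool:
    # Stand-in heuristic; the real check would be site-specific.
    return not user_agent or len(user_agent) < 10

def check_request(ip: str, path: str, user_agent: str) -> str:
    if ip in banned:
        return "blocked"
    if looks_malformed(user_agent):
        banned.add(ip)            # malformed UA: ban immediately
        return "blocked"
    if path == HOT_PATH:
        hits[ip] += 1
        if hits[ip] > HIT_LIMIT:
            banned.add(ip)        # hammered the hot page too often
            return "blocked"
    return "allowed"
```

In practice the `banned` set would feed a firewall drop list rather than living in application memory.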
I don’t really get those bots.
Like, there are bots that are trying to scrape product info, or prices, or scan for quantity fields. But why the hell do some of these bots behave the way they do?
Do you use Shopify by chance? With Shopify the bots could be scraping the product.json endpoint unless it’s disabled in your theme. Shopify just seems to show the updated at timestamp from the db in their headers+product data, so inventory quantity changes actually result in a timestamp change that can be used to estimate your sales.
There are companies that do that and sell sales numbers to competitors.
No idea why they have inventory info on their products table, it’s probably a performance optimization.
I haven’t really done much scraping work in a while, not since before these new stupid scrapers started proliferating.
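The polling described above is simple to sketch. The sample payload and timestamps below are invented; on a real Shopify storefront the data would come from the `/products.json` endpoint:

```python
import json

# Invented sample of a /products.json response. Because inventory changes
# touch the product row, a changed `updated_at` hints at a sale.
payload = json.loads("""
{"products": [
  {"id": 1, "title": "Widget", "updated_at": "2025-11-01T10:00:00-05:00"},
  {"id": 2, "title": "Gadget", "updated_at": "2025-11-12T09:30:00-05:00"}
]}
""")

# Timestamps recorded on the previous poll (also invented).
previous = {1: "2025-11-01T10:00:00-05:00", 2: "2025-11-10T08:00:00-05:00"}

# Products whose rows changed since the last poll.
changed = [p["id"] for p in payload["products"]
           if previous.get(p["id"]) != p["updated_at"]]
print(changed)  # → [2]
```

Repeat on an interval and you can estimate a competitor’s sales velocity without ever seeing their order data.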
Negative. Our solution is completely home grown. All artisanal-like, from scratch. I can’t imagine I reveal anything anyone would care about much except product specs, and our inventory and pricing really doesn’t change very frequently.
Even so, you think someone bothering to run a botnet to hound our site would distribute page loads across all of our products, right? Not just one. It’s nonsensical.
Yeah, that’s the kind of weird shit I don’t understand. Someone on the other hand is paying for servers and a residential proxy to send that traffic too. Why?
Could it be a competitor for that particular product? Hired some foreign entity to hit anything related to their own product?
Maybe, but I also carry literally hundreds of other products from that same brand including several that are basically identical with trivial differences, and they’re only picking on that one particular SKU.
Have you googled the SKU and see if anything else happens to share the number?
I have and there’s nothing noteworthy, other than tons of other retailers selling the same thing of course.
Can you just move that product to a new URL? What happens?
It doesn’t quite work that way, since the URL is also the model number/SKU which comes from the manufacturer. I suppose I could write an alias for just that product but it would become rather confusing.
What I did experiment with was temporarily deleting the product altogether for a day or two. (We barely ever sell it. Maybe 1 or 2 units of it a year. This is no great loss in the name of science.) This causes our page to return a 404 when you try to request it. The bots blithely ignored this, and continued attempting to hammer that nonexistent page all the same. Puzzling.
This is far beyond my limited coding experience but I do enjoy a good puzzle. In your opinion, do you think it could be some gen AI scraper? Like the gen AI is deciding what page to scrape and, because it’s stupid, it keeps selecting your page?
Alternatively, I wonder if the product page just happens to have an unusual combination of keywords the bot is looking for. Maybe it’s looking for cheap RAM prices and the page has some RAM-related keywords?
Good luck, I hope you’re able to get them to stop hammering that page.
In my case the pattern appears to be some manner of DDoS botnet, probably not an AI scraper. The request origins are way too widespread and none of them resolve down to anything that’s obviously datacenters or any sort of commercial enterprise. It seems to be a horde of devices in consumer IP ranges that have probably been compromised by some malware package or another, and whoever is controlling it directed it at our site for some reason. It’s possible that some bad actor is using a similar malware/bot-farm arrangement to scrape for AI training, but I’d doubt it. It doesn’t fit the pattern of that sort of thing from what I’ve seen.
Anyway, my script’s been playing automated whack-a-mole with their addresses and steadily filtering them all out, and I geoblocked the countries where the largest numbers of offenders were. (“This is a bad practice!” I hear the hue and cry from specific strains of bearded louts on the Internet. That may be, but I don’t ship to Brazil or Singapore or India, so I don’t particularly care. If someone insists on connecting through a VPN from one of those regions for some reason, that’s their own lookout.)
They seem to have more or less run out of compromised devices to throw at our server, so now I only see one such request every few minutes rather than hundreds per second. I shudder to think how long my firewall’s block list is by now.
Have you ever tried writing a scraper? I have, for offline reference material. You’ll make a mistake like that a few times and know it, but there are sure to be other times you don’t notice. I usually only want a relatively small site (say, a Khan Academy lesson, which doesn’t save text offline, just videos) and put in a large delay between requests, but I’ll still come back after thinking I have it down and find it’s thrashed something.
I see the same thing but hitting my lemmy instance. Not much you can do other than start up banning or geoip banning.
does your blog have a blackhole in it somewhere you forgot about 😄
Check out Anubis. If you have a reverse proxy it is very easy to add, and the bots stopped spamming mine after I added it.
I also recommend it.
<img alt="lol" src="https://sh.itjust.works/pictrs/image/aa5f5daf-b154-4e06-95b7-e6100eed7a84.png">
It’s interesting that anubis has worked so well for you in practice.
What do you think of this guy’s take?
https://lock.cmpxchg8b.com/anubis.html
I don’t think the author understands the point of Anubis. The point isn’t to block bots completely from your site, bots can still get in. The point is to put up a problem at the door to the site. This problem, as the author states, is relatively trivial for the average device to solve, it’s meant to be solved by a phone or any consumer device.
The actual protection mechanism is scale: solving the challenge at scale is costly. Bot farms aren’t one single host or machine; they’re thousands, tens of thousands of VMs running in clusters constantly trying to scrape sites. So for them, calculating something that trivial is simple once, but very very costly at scale. Say calculating the hash once takes about 5 seconds. Easy for a phone. Multiply that by 1,000 scrapes of your site and that’s now 5,000 seconds, roughly an hour and a half. Now we’re talking about real dollars and cents lost. Scraping does have a cost, and having worked at a company that professionally scrapes content, I know they know this. Most companies will back off after trying to load a page that takes too long or is too intensive, and that is why we see the dropoff in bot attacks: it’s not worth it for them to scrape the site anymore.
So with Anubis they’re “judging your value” by asking: “Are you willing to put your money where your mouth is to access this site?” For a consumer it’s a fraction of a fraction of a penny in electricity spent for that one page load, barely noticeable. For large bot farms it’s real dollars wasted on my little lemmy instance/blog, and thankfully they’ve stopped caring.
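The challenge itself is a hashcash-style proof of work. A minimal sketch of the general scheme (the difficulty and challenge string here are illustrative, not Anubis’s actual parameters):

```python
import hashlib

# Find a nonce so that sha256(challenge + nonce) starts with `difficulty`
# zero hex digits. Solving takes ~16^difficulty attempts on average;
# verifying takes a single hash.
def solve(challenge: str, difficulty: int = 4) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the whole point: the client burns CPU searching for the nonce, while the server spends one hash checking it.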
The author demonstrated that the challenge can be solved in 17ms however, and that is only necessary once every 7 days per site. They need less than a second of compute time, per site, to be able to send unlimited requests 365 days a year.
The deterrent might work temporarily until the challenge pattern is recognised, but there’s no actual protection here, just obscurity. The downside is real, however, for the user on an old phone who must wait 30 seconds, or, like the blogger, a user of a text browser not running JavaScript. The very need to support old phones is what defeats this compute-based approach, as the required compute is always a trivial amount for a data center.
Please tell me how you’re gonna un-obscure a proof-of-work challenge requiring calculation of hashes.
And since the challenge is adjustable, you can make it take as long as you want.
You just solve it as per the blog post, because it’s trivial to solve, as your browser is literally doing so in a slow language on a potentially slow CPU. It’s only solving 5 digits of the hash by default.
If a phone running JavaScript in the browser has to be able to solve it you can’t just crank up the complexity. Real humans will only wait tens of seconds, if that, before giving up.
This here is the implementation of SHA-256 in the slow language JavaScript:

```javascript
const msgUint8 = new TextEncoder().encode(message);
const hashBuffer = await window.crypto.subtle.digest("SHA-256", msgUint8);
const hashHex = new Uint8Array(hashBuffer).toHex();
```

You imagined that JS had to have that done from scratch, with sticks and mud? Every OS has cryptographic facilities, and every major browser supplies an API to them.
As for using it to filter out bots, Anubis does in fact get it a bit wrong: you should have to incur this cost on every page hit, not once a week. So you can’t just put Anubis in front of the site; you need the JS on every page, and if the challenge isn’t solved by the next hit, you pop up the full page saying ‘nuh-uh’, probably make the browser do a harder challenge, and also check a bunch of heuristics like go-away does.
It’s still debatable whether it will stop bots who would just have to crank sha256 24/7 in between page downloads, but it does add cost that bot owners have to eat.
That’s counting on one machine using the same cookie session continuously, or on them coding up a way to share the tokens across machines. That’s not how the bot farms work.
It will obviously depend heavily on the type of bot crawling, but that is not hard coordination for harvesting data for LLMs, as they will already have strategies to prevent nodes from all crawling the same thing; a simple valkey cache can store a solved JWT.
But the vast majority of crawlers don’t care to do that. That’s a very specific implementation for this one problem. I actually did work at a big scraping farm, and if they encounter something like this, they just give up. It’s not worth it to them. That’s where the “worthiness” check is: you didn’t bother to do anything to gain access.
I recently added Anubis and its validation rate is under 40%. In other words, 60% of the incoming requests are likely bots and are now getting blocked. Definitely recommend.
I was on a single server with only me and 2 others or so, and then saw that I had thousands of requests per minute at times! Absolutely nuts! My cloud bill was way higher. I added Anubis and it dropped down to just our requests, and the bills dropped too. Very very strong proponent now.
You have to grow spikes and make it painful for bots to crawl your site. It sucks, and it costs a lot of extra bandwidth for a few months, but eventually they all blacklist your site and leave you alone.
I just geo-restrict my server to my country, certain services I’ll run an ip-blacklist and only whitelist the known few networks.
Works okay I suppose, kills the need for a WAF, haven’t had any issues with it.
Can you just turn the robots.txt into a click-wrap agreement to charge robots high fees for access above a certain threshold?
Puts the full EU regulations in robots.txt
why do an agreement when you can serve a zip bomb :D
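The trick behind a zip bomb is just that highly repetitive data compresses absurdly well, so a tiny response decompresses into something huge on the client. A quick sketch:

```python
import gzip

# 10 MB of zeros compresses down to roughly 10 KB with gzip, so serving
# the "bomb" costs the server almost nothing while the client that
# naively decompresses it eats the full 10 MB (scale up as desired).
raw = b"\0" * 10_000_000
bomb = gzip.compress(raw, compresslevel=9)
print(len(bomb), "compressed bytes")
```

Servers can send this as a `Content-Encoding: gzip` response body, letting the client’s own HTTP stack do the inflating.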
This is why I use CloudFlare. They block the worst and cache for me to reduce the load of the rest. It’s not 100% but it does help.
LOL Someone took exception to your use of Cloudflare. Hilarious. Anyways, yeah: what Cloudflare doesn’t get, pfSense does.
Looks to me like the actions of AI agents.
What is that log analysis tool you are using in the picture? Looks pretty neat.
It’s a mix, I put two screenshots together. On the left is my monthly bandwidth usage from cPanel, on the right is AWStats (though I hid some sections so the Robots/Spiders section was closer to the top).
I thought I recognized it. Hell of a blast from the past, haven’t seen it in fifteen years at least.
I think they’re winding down the project unfortunately, so I might have to get with the times…
I mean, I thought it was long dead. It’s twenty-five years old, and the web has changed quite a bit in that time. No one uses Perl anymore, for starters. I used Open Web Analytics, Webalizer, or somesuch by 2008 or so. I remember Webalizer being snappy as heck.
I tinkered with log analysis myself back then, peeping into the source of AWStats and others. Learned that a humongous regexp with like two hundred alternative matches for the user-agent string was way faster than trying to match them individually — which of course makes sense seeing as regexps work as state-machines in a sort of a very specialized VM. My first attempts, in comparison, were laughably naive and slow. Ah, what a time.
Sure enough, working on a high-traffic site taught me that it’s way more efficient to prepare data for reading at the moment of change instead of when it’s being read — which translates to analyzing visits on the fly and writing to an optimized database like ElasticSearch.
You can also use crowdsec on your server to stop similar BS. They use a community based blacklist. You choose what you want to block. Check it out.
github.com/crowdsecurity/crowdsec
I’m going to try and implement crowdsec for all my Proxmox containers over Cloudflare tunnels. Wish me luck, and that my wife and kids let me do this without constantly making shit up for me to do.
Good luck and if you need help drop by their discord. They have an active community.
https://discord.gg/crowdsec
Can they help me keep my wife and kids at bay too? That’s what I need the most help with 😂
I don’t think asking help about domestic issues on the Internet is healthy… However, who knows maybe they can ( ͡~ ͜ʖ ͡°)
They also have a plugin for opnsense (if you use that)
I used to, but moved on to a full Unifi infrastructure about 2 years ago.
Yeah, then you need to implement it at the webhost level.
Had the same thing happen on one of my servers. Got up one day a few weeks ago and the server was suspended (luckily the hosting provider unsuspended it for me quickly).
It’s mostly business sites, but we do have an old personal blog on there with a lot of travel pictures on it, and 4 or 5 AI bots were just pounding it. Went from a 300GB per month average to 5TB in August, and 10 or 11 TB in September and October.
What is the blog about? It may be increased interest as search providers use them for normal searches now… or it could be a couple of already sentient doombots.
Please don’t be a blog about von Neumann probes. Please don’t be a blog about von Neumann probes. Please don’t be a blog about von Neumann probes…
What’s wrong with blogs about von Neumann probes? Genuinely curious!
I want to search for a blog on this now…
If an AI read it several thousand times… I thought it was a too-on-the-nose joke, sorry.
lol that’s funny. I guess I’m just slow
AI bots killing the internet again? You don’t say
I had to pull an all nighter to fix some unoptimized query because I had just launched a new website with barely any visitors and hadn’t implemented caching yet for something that I thought no one uses anyway, but a bot found it and broke my entire DB through hitting the endpoint again and again until nothing worked anymore
Hydrogen bomb vs coughing baby type shit
fracking clankers.
Downloading your wallpapers? Lol, what for?
I don’t know what “12,181+181” means (edit: thanks @Thunraz@feddit.org, see Edit 1), but it’s absolutely not 1.2181 × 10^185^. That many requests can’t be made within the 39 × 10^9^ bytes of bandwidth; in fact, they exceed the number of atoms on Earth times its age in microseconds (that’s close to 10^70^). Also, “0+57” in another row would be dubious exponential notation: the exponent should be 0 (or omitted) if the mantissa (and thus the value represented) is 0.
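A quick back-of-the-envelope in Python, using the figures above (12,181 with 181 zeros, and the ~39 × 10^9^ bytes of bandwidth):

```python
# Why the "scientific notation" reading can't be right:
claimed = int("12181" + "0" * 181)   # 12,181 followed by 181 zeros
bandwidth_bytes = 39 * 10**9         # roughly the bandwidth shown, in bytes
requests_per_byte = claimed // bandwidth_bytes
print(len(str(requests_per_byte)))   # still a ~175-digit count of requests per byte
```

Even at one request per byte of bandwidth, the claimed number is off by about 175 orders of magnitude.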
That’s insane… Can’t a website owner require bots (at least those who are identifying themselves as such) to prove at least they’re affiliated with a certain domain?
It’s a shame we don’t have those banner ad schemes anymore. Cybersquatting could be a viable income stream if you could convince the crawlers to click banner ads for a fraction of a penny each.