- cross-posted to:
- technology@beehaw.org
Put something in robots.txt that isn’t supposed to be hit and is hard to hit by non-robots. Log and ban all IPs that hit it.
Imperfect, but I can't think of a better solution.
Good old honeytrap. I’m not sure, but I think that it’s doable.
Have a honeytrap page somewhere in your website. Make sure that legit users won’t access it. Disallow crawling the honeytrap page through robots.txt.
Then if some crawler still accesses it, you could record+ban it as you said… or you could be even nastier and let it through. Fill the honeytrap page with poison - nonsensical text that would look like something a human would write.
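A minimal sketch of that poison idea in Python - the phrase pools and names here are invented, and real poison text would want something statistically richer, like Markov chains seeded with human prose:

```python
import random

# Invented phrase pools; seed these from real prose for more convincing output.
SUBJECTS = ["The committee", "My neighbor", "That old lighthouse", "A quiet river"]
VERBS = ["reorganized", "repainted", "misplaced", "negotiated with"]
OBJECTS = ["seventeen umbrellas", "its own shadow", "the harbor schedule", "a jar of rumors"]

def poison_paragraph(sentences: int = 8) -> str:
    """Assemble grammatical-looking but meaningless sentences."""
    return " ".join(
        f"{random.choice(SUBJECTS)} {random.choice(VERBS)} {random.choice(OBJECTS)}."
        for _ in range(sentences)
    )

if __name__ == "__main__":
    print(poison_paragraph())
```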
I think I used to do something similar with email spam traps. Not sure if it's still around, but basically you could help build NaCL lists by posting an email address on your website somewhere that was visible in the source code but not visible to normal users, like in a div positioned way off the left side of the screen.
Anyway, spammers that do regular-expression searches for email addresses would email it and get their IPs added to naughty lists.
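The markup trick was roughly this (address and offset invented for illustration):

```html
<!-- Humans never see this; regex scrapers reading the source will -->
<div style="position: absolute; left: -9999px;">
  trap@example.com
</div>
```

Any mail that address receives is spam by definition, so its senders go straight onto the list.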
I’d love to see something similar with robots.
Yup, it's the same approach as email spam traps, just without the naughty list so far. But holy fuck, a shareable bot IP list would be an amazing addition; it would increase the damage to those web crawling businesses.
robots.txt is purely textual; you can't run JavaScript or log anything. Plus, anyone who doesn't intend to follow robots.txt won't query it in the first place.
If it doesn't get queried, that's the fault of the web scraper. You don't need JS built into the robots.txt file either. Just add a line like:
Disallow: /here-there-be-dragons.html
Any client that hits that page (and maybe doesn’t pass a captcha check) gets banned. Or even better, they get a long stream of nonsense.
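With that Disallow entry in place, banning can be as simple as scanning the access log for hits on the trap path. A rough Python sketch, assuming nginx's combined log format (the paths here are assumptions; feed the output into your firewall or fail2ban):

```python
# Collect IPs that requested the robots.txt-forbidden trap page.
TRAP = "/here-there-be-dragons.html"

def banned_ips(log_path: str = "/var/log/nginx/access.log") -> set[str]:
    ips = set()
    with open(log_path) as log:
        for line in log:
            parts = line.split()
            # Combined log format: client IP is field 1, request path is field 7.
            if len(parts) > 6 and parts[6] == TRAP:
                ips.add(parts[0])
    return ips

if __name__ == "__main__":
    for ip in sorted(banned_ips()):
        print(ip)
```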
Nice idea! Better to use /dev/urandom though, as that is non-blocking (unlike /dev/random, which can stall waiting for entropy). See here.
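A minimal sketch of that tarpit, assuming a Flask app and reusing the trap path from above (both illustrative):

```python
from flask import Flask, Response

app = Flask(__name__)

@app.route("/here-there-be-dragons.html")
def tarpit():
    def noise():
        # /dev/urandom never blocks waiting for entropy, so this streams
        # garbage at the crawler for as long as it holds the connection open.
        with open("/dev/urandom", "rb") as rand:
            while True:
                yield rand.read(1024)
    return Response(noise(), mimetype="text/html")

if __name__ == "__main__":
    app.run()
```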
I actually love the data-poisoning approach. I think that sort of strategy is going to be an unfortunately necessary part of the future of the web.
Well, the Trump era has shown that ignoring social contracts and straight-up crime are only met with profit and slavish devotion from a huge community of dipshits. So. Y'know.
Also, by the way, violating a basic social contract to not work towards triggering an intelligence explosion that will likely replace all biological life on Earth with computronium, but who’s counting? :)
That would be a danger if real AI existed. We are very far away from that, and what is being called "AI" today (which is advanced ML) is not the path to actual AI. So don't worry, we're not heading for the singularity.
I request sources :)
https://www.lifewire.com/strong-ai-vs-weak-ai-7508012
Strong AI, also called artificial general intelligence (AGI), possesses the full range of human capabilities, including talking, reasoning, and emoting. So far, strong AI examples exist in sci-fi movies.
Weak AI is easily identified by its limitations, but strong AI remains theoretical since it should have few (if any) limitations.
https://en.m.wikipedia.org/wiki/Artificial_general_intelligence
As of 2023, complete forms of AGI remain speculative.
Boucher, Philip (March 2019). How artificial intelligence works
Today’s AI is powerful and useful, but remains far from speculated AGI or ASI.
https://www.itu.int/en/journal/001/Documents/itu2018-9.pdf
AGI represents a level of power that remains firmly in the realm of speculative fiction as on date
Ah, I understand you now. You don’t believe we’re close to AGI. I don’t know what to tell you. We’re moving at an incredible clip; AGI is the stated goal of the big AI players. Many experts think we are probably just one or two breakthroughs away. You’ve seen the surveys on timelines? Years to decades. Seems wise to think ahead to its implications rather than dismiss its possibility.
See the sources above, and many more. We don't need one or two breakthroughs, we need a complete paradigm shift. We don't even know where to start for AGI. There's a bunch of research, but nothing much has come out of it yet. Weak AI has made impressive bounds in the past few years, but the only connection between weak and strong AI is the name. Weak AI will not become strong AI as it continues to evolve. The two are completely separate avenues of research. Weak AI is still advanced algorithms. You can't get AGI with just code. We'll need a completely new type of hardware for it.
Before Deep Learning recently shifted the AI computing paradigm, I would have written exactly what you wrote. But as of late, the opinion that we need yet another type of hardware to surpass human intelligence seems increasingly rare. Multimodal generative AI is already pretty general. To count as AGI for you, you would like to see the addition of continuous learning and agentification? (Or are you looking for “consciousness”?)
That said, I’m all for a new paradigm, and favor Russell’s “provably beneficial AI” approach!
Deep learning did not shift any paradigm. It’s just more advanced programming. But gen AI is not intelligence. It’s just really well trained ML. ChatGPT can generate text that looks true and relevant. And that’s its goal. It doesn’t have to be true or relevant, it just has to look convincing. And it does. But there’s no form of intelligence at play there. It’s just advanced ML models taking an input and guessing the most likely output.
Here’s another interesting article about this debate: https://ourworldindata.org/ai-timelines
What we have today does not exhibit even the faintest signs of actual intelligence. Gen AI models don’t actually understand the output they are providing, that’s why they so often produce self-contradictory results. And the algorithms will continue to be fine-tuned to produce fewer such mistakes, but that won’t change the core of what gen AI really is. You can’t teach ChatGPT how to play chess or a new language or music. The same model can be trained to do one of those tasks instead of chatting, but that’s not how intelligence works.
Hmm, I thought websites just blocked crawler traffic directly? I know one site in particular has rules about it, and will even go so far as to ban you permanently if you continually ignore them.
Detecting crawlers can be easier said than done 🙁
I mean, yeah, but at a certain point you just have to accept that it's going to be crawled. The obviously negligent ones are easy to block.
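For the negligent ones, matching the self-announced User-Agent already goes a long way; a tiny sketch (the token list is illustrative, and a spoofed browser UA sails right through):

```python
# Known AI-crawler User-Agent tokens; extend as new bots appear.
BAD_BOTS = ("GPTBot", "CCBot", "Bytespider")

def is_negligent_crawler(user_agent: str) -> bool:
    """True if the client announces itself as a known crawler."""
    ua = user_agent.lower()
    return any(bot.lower() in ua for bot in BAD_BOTS)

print(is_negligent_crawler("Mozilla/5.0 (compatible; GPTBot/1.0)"))  # True
print(is_negligent_crawler("Mozilla/5.0 (Windows NT 10.0; Win64)"))  # False
```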