A big problem with this kind of TOS change is: what happens if we ever end up with a sentient AI that can think on its own?

How would you stop that sentient AI from scraping your site, when it can go straight to your article, copy it word for word, and feed it into its own training pipeline? You couldn't, short of blocking access for everyone.

Paywalls can be bypassed, and AI has been shown to be better than humans at solving the CAPTCHA puzzles meant to stop it, so there isn't a good solution I can think of that doesn't endanger the whole internet.
There’s no need to wait for a sentient AI for that. The currently publicized method for blocking these bots is robots.txt, which is only a very polite way of asking bots to duck off; they have no real reason to respect it if they don’t want to. OpenAI (or anyone else) could also scrape through multiple public proxy servers, so websites wouldn’t be able to point fingers at them.

Even if the bot makers avoided proxies, they could still get the content indirectly by scraping other sites that repost it, such as archive.org, or just ordinary sites that republish stuff. Heck, they could even scrape, say, Lemmy: we’ve got the AutoTLDR bot here, and combined with comments and quotes from several people, any competent LLM could easily learn the content of the original article without ever touching it directly.
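To make the robots.txt point concrete, here’s a minimal sketch in Python using only the standard library (example.com is just a placeholder): the parser tells you what the file asks for, but nothing stops the client from fetching the page anyway.

```python
import urllib.robotparser
import urllib.request

# Read the site's robots.txt (example.com is a stand-in for a real site).
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

url = "https://example.com/"
allowed = rp.can_fetch("GPTBot", url)  # GPTBot is OpenAI's crawler user agent
print("robots.txt says:", "allowed" if allowed else "disallowed")

# And yet this fetch succeeds regardless of the answer above:
# robots.txt is a convention, not an access control mechanism.
html = urllib.request.urlopen(url).read()
print("fetched", len(html), "bytes anyway")
```

Whether a crawler ever calls `can_fetch` at all is entirely up to whoever wrote it.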
So unless the site has posted a 100% unique piece of information that hasn’t been published anywhere else, AND has implemented a strict “no reproduction in any form” rule that also extends to prohibiting any discussion of the source material, it would be near-impossible to stop the bot creators, or to blame them for bypassing the ToS. And we all know what happens when you go to great lengths to silence a subject on the internet…
Listen, buddy: if we get artificial general intelligence, the last thing we’ve gotta worry about is it reading the paper.