Opinionated article by Alexander Hanff, a computer scientist and privacy technologist who helped develop Europe’s GDPR (General Data Protection Regulation) and ePrivacy rules.
We cannot allow Big Tech to continue to ignore our fundamental human rights. Had such an approach been taken 25 years ago in relation to privacy and data protection, arguably we would not have the situation we have today, where some platforms routinely ignore their legal obligations to the detriment of society.
Legislators did not understand the impact of weak laws or weak enforcement 25 years ago, but we have enough hindsight now to ensure we don’t make the same mistakes moving forward. The time to regulate unlawful AI training is now, and we must learn from past mistakes to ensure that we provide effective deterrents and consequences for such ubiquitous lawbreaking in the future.
I’m still not understanding the logic. Here is a copyrighted picture. I can search for it, download it, view it, see it with my own eyeballs. My browser already downloaded the image for me, in order for me to see it in the browser. I can take that image and edit it in a photo editor. I can do whatever I want with the image on my own computer, as long as I don’t publish the image elsewhere on the internet. All of that is legal. None of it infringes on copyright.
Hell, it could be argued that if I transform the image to a significant degree, I can still publish it under Fair Use. But that still gets into a gray area for each use case.
What is not a gray area is what AI training does. They download the image and use it in training, which is like me looking at a picture in a browser. The image isn’t republished, stored in the published model, or represented in any way that could be reconstructed back to the source image in any reasonable form. It just changes a bunch of weights in an LLM. It’s mathematically impossible for a 4GB model to somehow store the many, many terabytes of images on the internet.
Where is the copyright infringement?
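To put rough numbers on the storage claim above, here is a back-of-envelope sketch. The model size matches the 4GB figure in the comment; the training-set size is an illustrative assumption (roughly LAION-scale, ~2 billion images), not a measured value:

```python
# Back-of-envelope arithmetic for the "a model can't store its training
# data" claim. The image count is an illustrative assumption.

model_size_bytes = 4 * 1024**3        # 4 GB of weights, per the comment above
num_training_images = 2_000_000_000   # assumed ~2B images (LAION-scale)

bytes_per_image = model_size_bytes / num_training_images
print(f"{bytes_per_image:.2f} bytes of model capacity per training image")
# -> ~2.15 bytes per image: nowhere near enough to reconstruct any source
#    image, even before the weights have to encode anything else.
```

Under those assumptions there are only a couple of bytes of weight capacity per training image, which is the intuition behind calling verbatim storage mathematically impossible.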
You want to use the same bullshit tactics and unreasonable math that the RIAA used in their court cases?
I agree that the models themselves are clearly transformative. That doesn’t mean it’s legal for Meta to pirate everything on earth to use for training. THAT’S where the infringement is. And they admitted they used pirated material: https://www.techspot.com/news/101507-meta-admits-using-pirated-books-train-ai-but.html
I would enjoy seeing megacorps held to at least the same standards as individuals. I would prefer for those standards to be reasonable across the board, but that’s not really on the table here.
If you take that image, copy it, and then try to resell it for profit, you’ll find you’re quickly in breach of copyright.
The LLM is, in most cases, licensed out to users for profit, built on input data without which it could not exist in its current form.
You could see it akin to plagiarism if you think ctrl+c, ctrl+v is too extreme.
That’s not what’s happening. Did you even read my comment?
OK, if you ignore the hyperbole of my pre-Christmas-stress aggressive start, how much of the rest do you disagree with?
Less combatively, I’m of the stance that simply making AI-generated materials exempt from copyright would at least limit mass adoption in public-facing things by big money. It doesn’t address all the issues, though.
AI-generated materials are already exempt from copyright. They fall under the same arguments as the monkey selfie. Which is great.
Crack copyright like a fucking egg. It only benefited the rich, anyway.
That’s good, and I’m glad to have been informed of it.
Thank you.
My copyright change would be 17 years from first publication. That feels maybe still a little long, but it’s much better than what we have now.