The proposal was only for photos stored on iCloud. Apple has a legitimate interest in not actually hosting abuse material on their servers. The plan was also calibrated for a one-in-one-trillion false-positive rate per account (it would require multiple matches before an account could be flagged), followed by a manual review by an employee before anything was reported to authorities. It was very carefully designed.
Do you happen to know a good source for information on this? I don’t want to hijack this discussion, since it’s not that closely related to the original subject… But I’d be interested in more technical information. Most news articles seem to be a bit biased, and I get it: both privacy and the protection of children are sensitive topics, and there are feelings involved.
One in a trillion sounds like the probability of a hash collision. So that would just be checking whether they already have the specific image in their database. It would trigger if someone downloaded an already existing image, but it wouldn’t detect new images taken with a camera. I’m somewhat fine with that.
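Something like this minimal sketch is what I’m picturing, with an exact SHA-256 lookup standing in for whatever hash they’d actually use (as I understand it, the real proposal used a perceptual hash, so near-duplicates would also match, but the database-lookup idea is the same):

```python
import hashlib

# Purely illustrative: a set of hashes of already-known images.
# Placeholder entries only; in practice this would be a database of
# hashes provided by a child-safety organization.
known_image_hashes = {
    "<hex digest of a known image>",
}

def matches_known_image(image_bytes: bytes) -> bool:
    """Check whether this exact image is already in the known-hash set."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in known_image_hashes
```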
And I was under the impression that iPhones connected to iCloud sync pictures by default? So “only for photos stored on iCloud” would practically mean every image you take, unless you deliberately changed the settings on your iPhone?
Do you happen to know a good source for information on this?
Apple released detailed whitepapers and technical summaries when it was originally proposed, but since they shelved the plan I don’t think they’re still readily available.
One in a trillion sounds like a probability of a hash collision.
Basically yes, but they’re assuming a much greater likelihood of a single hash collision. The system would upload a receipt of the on-device scan along with each photo. A threshold number of matches would be set to achieve the one in a trillion confidence level. I believe the initial estimate was roughly 30 images. In other words, you’d need to be uploading literally dozens of CSAM images for your account to get flagged. And these accompanying receipts use advanced cryptography so it’s not like they’re seeing “oh this account has 5 potential matches and this one has 10”; anything below the threshold would have zero flags. Only when enough “bad” receipts showed up for the same account would they collectively flag it.
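To put rough numbers on it, here’s a back-of-the-envelope sketch in Python. The only figure taken from the proposal is the ~30-image threshold; the per-image false-match rate and the library size are made-up assumptions, purely to show how a threshold drives the per-account error rate down:

```python
from math import exp, lgamma, log

def log_binom_pmf(k: int, n: int, p: float) -> float:
    """Log-probability of exactly k false matches among n photos."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def prob_account_flagged(threshold: int, n_photos: int, p_false_match: float) -> float:
    """Chance an entirely innocent library of n_photos produces at least
    `threshold` false matches, treating each photo independently."""
    total = 0.0
    for k in range(threshold, n_photos + 1):
        term = exp(log_binom_pmf(k, n_photos, p_false_match))
        total += term
        if term < total * 1e-15:  # remaining tail terms are negligible
            break
    return total

# Hypothetical inputs: 20,000 innocent photos and a per-image false-match
# rate of 1 in a million. Only the ~30-image threshold comes from Apple.
print(prob_account_flagged(threshold=30, n_photos=20_000, p_false_match=1e-6))
```

Even with those made-up numbers, the per-account probability lands many orders of magnitude below one in a trillion; that’s the job the threshold is doing.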
And I was under the impression that iPhones connected to the iCloud sync the pictures per default?
This is for people who use iCloud Photo Library, which you have to turn on.