Hacker News

Oh that's right, they would basically upload a hash and then flag it using whatever criteria they wanted on the server side. I think some of the outrage, too, was the question of what happens when they detect something: the issue was that it reported content outside of your control at all, which, again, this Google thing doesn't seem to do.


> Oh that's right, they would upload a hash basically then flag it using whatever criteria they wanted on the server side.

No, that’s not right either. You should really read their white paper, it’s very interesting and not at all what people assumed it was like.


That's fair, but the point is still that "The Worst Thing Ever" was the online part, which this doesn't do, so the comparison makes little to no sense. I'll check out their write-up though, sounds interesting.


It was protected by the magic of mathematics.

You had to provide multiple confirmed matches to the cloud before the human verifier could decrypt any of the images sent. They weren't scp'ing images to a share for all employees to see, ffs.
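The threshold property described above (no single match reveals anything; only once enough matches accumulate can the server decrypt) is the behavior of a threshold secret-sharing scheme. As a rough illustration only, not Apple's actual protocol, here is a minimal Shamir secret sharing sketch in Python: a decryption key split into shares such that any `threshold` shares reconstruct it, while fewer reveal nothing. All names and parameters are illustrative.

```python
import random

PRIME = 2**127 - 1  # prime modulus; all share arithmetic is done in this field

def make_shares(secret, threshold, n):
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the polynomial's constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

key = 123456789
shares = make_shares(key, threshold=10, n=30)
assert reconstruct(shares[:10]) == key          # any 10 shares recover the key
assert reconstruct(shares[:9]) != key           # 9 shares yield garbage (w.h.p.)
```

In this analogy, each flagged upload would carry one share; the server can only assemble the decryption key, and hence show anything to a human reviewer, after the threshold number of matches arrives.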


The proposal was to scan photos people were uploading to iCloud, not all photos[1]. The panic on HN was a) misunderstanding or deliberately misrepresenting the proposal as if it were scanning all offline photos, b) fantasising about what if they don't do what they announced and instead scan all offline photos, c) realising that all Silicon Valley tech companies can change client software through updates, therefore Apple bad. [There were non-panicky comments about whether it's a legally significant move, whether it's a wedge change, whether it's abusable by governments in other ways, etc.; the panicky ones were not those.]

> "it was the issue of it reporting content outside of your control at all, which again, this Google thing doesnt seem to do."

All the big cloud providers [Google, Microsoft, Facebook] report abusive imagery sent to their clouds to the authorities (search for annual NCMEC reports), except Apple. The others slurp up unencrypted data (Facebook photos, Google Drive, Microsoft OneDrive), scan it, and report on it, and nobody [on HN] says anything. Apple was trying a more privacy-preserving approach, and it seemed like they might be pushing it to the client upload code so they could offer fully encrypted storage where they couldn't scan photos on their side. A couple of years later they did: they announced optional Advanced Data Protection[2], which fully encrypts iCloud photos among other things.

Dark patterns aside, it's in your control whether to upload data to companies, so 'reporting content outside your control' is deliberately misrepresenting it.

[1] https://www.wired.com/story/apple-photo-scanning-csam-commun...

[2] https://support.apple.com/en-gb/guide/security/sec973254c5f/...


> There were non-panicky comments about whether it's a legally significant move, whether it's a wedge change, whether it's abusable by governments in other ways, etc. the panicky ones were not those

That’s what I find most frustrating about the reaction to this. This was a sophisticated, privacy-preserving process that represented a genuine step forward in the state of the art and there was an interesting argument to be had about whether it struck the right balance. But it was impossible to have that argument because it was drowned out by an overwhelming amount of nonsense from people guessing incorrectly about how it worked and then getting angry about their fantasies.



