I'm pretty sure what Apple proposed was an offline scanning system, but any matches would be submitted to them. This is just the offline part, and apps can choose to use it to detect nudity or other NSFW images; there's no reporting going on at all.
This is not correct. This is how most people assumed the Apple proposal worked, but it actually worked in a very different way. The device never knew if an image matched. Matches could only be determined via the combination of a receipt calculated on the device, plus information on the server, plus meeting a threshold of many matches. It was not offline scanning and uploading matches.
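For the curious, the threshold part can be sketched with a toy (k, n) Shamir secret-sharing scheme in Python. This only illustrates the general threshold idea; Apple's actual design combined private set intersection with threshold secret sharing over encrypted "safety vouchers", and the field, names, and parameters below are made up:

```python
import random

P = 2**61 - 1  # toy prime field modulus

def make_shares(secret, threshold, n):
    # Random polynomial of degree threshold-1 with constant term = secret.
    # Each share is one point on the polynomial.
    coeffs = [secret] + [random.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret),
    # but only works if you hold at least `threshold` distinct points.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

Below the threshold, the points reveal essentially nothing about the secret (in Apple's design, a decryption key); only once enough match "shares" have accumulated can the server interpolate it.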
It was offline scanning that reported matches to law enforcement; this was very clearly laid out in their plans (which they eventually rolled back after massive uproar).
Of course, you could argue that some people who choose not to use iCloud would not get that last bit, but considering they turn it on by default (and even turn it back on when you switch devices, even if you restore from a backup with it off), I'd say that's a tiny minority of their customers.
Also, since we're on the subject of "unannounced scanning of all photos", Apple went and did it anyway, same as Google turning it on by default but claiming it's only to look for "landmarks" LOL :) https://news.ycombinator.com/item?id=42533685
It was not. The device was incapable of determining matches and the process only applied to photos being uploaded to iCloud.
> Also, since we're on the subject of "unannounced scanning of all photos", Apple went and did it anyway, same as Google turning it on by default but claiming it's only to look for "landmarks" LOL :) https://news.ycombinator.com/item?id=42533685
They did not. The landmark process works in an entirely different way for an entirely different purpose.
Oh that's right, they would basically upload a hash and then flag it using whatever criteria they wanted on the server side. I think some of the outrage was also the question of what happens when they detect something; the issue was it reporting content outside of your control at all, which, again, this Google thing doesn't seem to do.
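Side note on "upload a hash": the hash in question was a perceptual hash (Apple's NeuralHash), designed so that visually similar images produce similar hashes, unlike a cryptographic hash. A toy average-hash in Python shows the idea (this is a stand-in illustration of perceptual hashing, not NeuralHash):

```python
def average_hash(pixels):
    # pixels: an 8x8 grid of grayscale values (0-255), i.e. a heavily
    # downscaled image. Each bit records whether a pixel is above the mean.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # 64-bit perceptual hash

def hamming(a, b):
    # Number of differing bits; small distance means "looks similar".
    return bin(a ^ b).count("1")
```

Slightly altered images land within a small Hamming distance of the original, while unrelated images tend to differ in roughly half the bits, which is why matching is done by distance threshold rather than exact equality.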
That's fair, but the point is still that "The Worst Thing Ever" was the online part, which this doesn't do, so the comparison makes little to no sense. I'll check out their write-up though; sounds interesting.
You had to hit a threshold of multiple confirmed matches in the cloud before the human verifier could decrypt any of the images sent. They weren't scp'ing images to a share for all employees to see, ffs.
The proposal was to scan photos people were uploading to iCloud not all photos[1]. The panic on HN was a) misunderstanding or deliberately misrepresenting the proposal as if it was scanning all offline photos, b) fantasising what if they don't do what they announced and instead scan all offline photos, c) realising that all Silicon Valley tech companies can change client software through updates, therefore Apple bad. [There were non-panicky comments about whether it's a legally significant move, whether it's a wedge change, whether it's abusable by governments in other ways, etc. the panicky ones were not those].
> "it was the issue of it reporting content outside of your control at all, which again, this Google thing doesnt seem to do."
All the big cloud providers [Google, Microsoft, Facebook] report abusive imagery sent to their clouds to the authorities (search for the annual NCMEC reports), except Apple. The others slurp up unencrypted data (Facebook photos, Google Drive, Microsoft OneDrive), scan it, and report on it, and nobody [on HN] says anything. Apple was trying a more privacy-preserving approach, and it looked like they were pushing scanning into the client upload code so they could offer fully encrypted storage where they couldn't scan photos on their side. A couple of years later they did exactly that: they announced optional Advanced Data Protection[2], which fully encrypts iCloud photos among other things.
Dark patterns aside, it's in your control whether to upload data to companies, so 'reporting content outside your control' is deliberately misrepresenting it.
> There were non-panicky comments about whether it's a legally significant move, whether it's a wedge change, whether it's abusable by governments in other ways, etc. the panicky ones were not those
That’s what I find most frustrating about the reaction to this. This was a sophisticated, privacy-preserving process that represented a genuine step forward in the state of the art and there was an interesting argument to be had about whether it struck the right balance. But it was impossible to have that argument because it was drowned out by an overwhelming amount of nonsense from people guessing incorrectly about how it worked and then getting angry about their fantasies.
This is true of literally any software with auto updates.
You’re criticising them for something they did not do and did not intend to do; they designed a system that worked in an entirely different way. The complaint boils down to the fact that they could have done it differently to what they actually proposed.
If they wanted to do it that other way, they could have just done it that other way in the first place and saved themselves a lot of effort.
In general, the less trust you need to place in a company, the better. Free software lets you reduce trust in the vendor by inspecting the code and forking whenever you have to.