Daring Fireball: Apple’s New ‘Child Safety’ Initiatives, and the Slippery Slope:
All of these features are fairly grouped together under a “child safety” umbrella, but I can’t help but wonder if it was a mistake to announce them together. Many people are clearly conflating them, including those reporting on the initiative for the news media. E.g., The Washington Post’s “never met an Apple story that couldn’t be painted in the worst possible light” Reed Albergotti’s report, the first three paragraphs of which are simply wrong and the headline for which is grossly misleading (“Apple Is Prying Into iPhones to Find Sexual Predators, but Privacy Activists Worry Governments Could Weaponize the Feature”).
Not surprisingly, this is the first really good, non-hyperbolic summary of everything Apple announced they’re doing on the topic.
Likewise, there are on-device updates to Siri and Search around sensitive content, with the same kind of parental opt-in notifications for users under 12, or warnings shown just to the user otherwise, similar to the above.
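For illustration only, here is a rough sketch of the kind of age-and-opt-in gating described above. The names, the exact cutoff, and the Swift code itself are hypothetical; the actual rules and flow are Apple’s.

```swift
// Hypothetical names for illustration; not Apple's actual API or rules.
enum SensitiveContentResponse {
    case warnChildAndNotifyParents   // younger children, with parental opt-in
    case warnUserOnly                // everyone else
}

func response(forUserAge age: Int, parentsOptedIn: Bool) -> SensitiveContentResponse {
    // Parental notification applies only to younger children whose parents opted in;
    // all other users just get an on-device warning.
    if age < 12 && parentsOptedIn {
        return .warnChildAndNotifyParents
    }
    return .warnUserOnly
}

print(response(forUserAge: 10, parentsOptedIn: true))   // warnChildAndNotifyParents
print(response(forUserAge: 15, parentsOptedIn: true))   // warnUserOnly
```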
Most misunderstood: the CSAM image fingerprint comparisons. Apple is not sending images, not even scanning the content of images, but creating a verifiable hash of each image that can be compared against fingerprints in the National Center for Missing and Exploited Children (NCMEC) systems. Only if enough of those hashes match the NCMEC database does the system trigger a human review of those fingerprints for confirmation, before finally, potentially, raising further alarms. These hashes, depending on the algorithm, should be effectively unique to any given image, so the odds of even a single false positive, a photo in your library matching a sensitive image in the NCMEC database, should be worse than lottery odds, much less enough matches to trigger further action.
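To make the threshold idea concrete, here is a minimal sketch of that kind of fingerprint matching. It is not Apple’s actual implementation; the types, names, and threshold value below are hypothetical stand-ins, and the real system does the comparison with far more cryptographic machinery.

```swift
// Hypothetical types for illustration; not Apple's hash format or database.
struct ImageFingerprint: Hashable {
    let digest: String   // stand-in for an image's fingerprint/hash
}

struct FingerprintMatcher {
    let knownFingerprints: Set<ImageFingerprint>  // stand-in for the NCMEC-derived list
    let reviewThreshold: Int                      // matches required before human review

    /// No single match does anything on its own; only crossing the
    /// threshold escalates to human review.
    func shouldEscalateForHumanReview(library: [ImageFingerprint]) -> Bool {
        let matches = library.filter { knownFingerprints.contains($0) }.count
        return matches >= reviewThreshold
    }
}

// Usage: one incidental match stays well below the (hypothetical) threshold.
let matcher = FingerprintMatcher(
    knownFingerprints: [ImageFingerprint(digest: "a1b2c3")],
    reviewThreshold: 30
)
let library = [ImageFingerprint(digest: "ffee99"), ImageFingerprint(digest: "a1b2c3")]
print(matcher.shouldEscalateForHumanReview(library: library))  // false
```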
These seem to be extremely well thought out, best-compromise answers to really difficult problems, and by far the most privacy-forward answers of anyone in the tech world so far.