Here is something I sent out to my mailing list, and I wanted to share my thoughts here as well.
Apple announced that it will start scanning photos uploaded to iCloud in an effort to identify child pornography. This has set off a firestorm of objections and no small amount of misinformation. The situation is still unfolding, but here is my take based on a close examination of all the publicly available information:
** First, here is Apple’s announcement:
Personally, I applaud the fact that Apple has taken a strong and sincere stand on the problem of child pornography. But the devil of course lies in the details, and like any complex issue there are pros and cons to Apple’s proposal, which I detail below.
** How the Apple system will work:
Apple is adding CSAM (Child Sexual Abuse Material) detection to the next version of iOS, the iPhone’s operating system, in order to fight the distribution of child pornography. The CSAM detection software will only examine photos stored on your phone before they are uploaded to iCloud.
** A user can effectively opt out of the hash scanning by turning off use of iCloud.
The software also will not actually “look” at the images. Instead, it will compute a mathematical code, a “hash,” from the content of each photo and compare it with the hashes stored in a database of known CSAM images. Getting a bit more technical, both the pictures in the database and the pictures headed for iCloud are reduced to a hash: a long string of characters that identifies an image without revealing its content. Apple’s hash is a “perceptual” hash, designed so that visually identical copies of an image (resized, re-compressed, and so on) still produce a match.
Apple’s CSAM software will compare the hashes of pictures on your iPhone with the hashes of known pornographic images. More specifically, when you take or import new images, the software will create hashes and compare them to the database of known CSAM images. If there is no match between the hash of a new image and the CSAM database, nothing happens. If a match is detected, an alert will be sent and the relevant image will be manually reviewed by someone at Apple. If the match is confirmed, the account will likely be disabled and reported to the NCMEC (National Center for Missing and Exploited Children), which in turn may report it to law enforcement.
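To make the matching idea concrete, here is a minimal sketch in Python. Everything about it is simplified: I use an ordinary SHA-256 hash as a stand-in for Apple’s perceptual “NeuralHash,” and a plain set as a stand-in for the encrypted database shipped with iOS; the real system also wraps the comparison in cryptographic protections (private set intersection, threshold secret sharing) that this toy example ignores. The hash value and function names below are hypothetical.

```python
import hashlib

# Hypothetical stand-in for the NCMEC-derived database of known-image hashes.
# In the real system these are perceptual (NeuralHash) digests, shipped to
# the device in encrypted form; here it is just a set of hex strings.
KNOWN_CSAM_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def image_hash(image_bytes: bytes) -> str:
    """Digest an image. SHA-256 is used purely for illustration: unlike a
    perceptual hash, it changes completely if even one pixel changes, so it
    would miss resized or re-encoded copies of a known image."""
    return hashlib.sha256(image_bytes).hexdigest()

def flag_before_upload(image_bytes: bytes) -> bool:
    """Mimic the on-device check: hash the photo and test membership in the
    known-image database before the photo is uploaded to iCloud."""
    return image_hash(image_bytes) in KNOWN_CSAM_HASHES

# No match: the photo uploads normally and nothing is recorded or reported.
# A match would route the image into Apple's human-review pipeline.
if flag_before_upload(b"...photo bytes..."):
    print("Match: hold for human review")
else:
    print("No match: upload proceeds")
```

Note how this sketch also previews the limitation discussed next: a brand-new image can never match, because its hash is not in anyone’s database yet.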
** Unfortunately it will have almost no impact on the creation and spread of these despicable images.
The problem is that as long as an image’s hash isn’t in the database, it will not be recognized. Detection applies only to existing, known content that a user stores in the Photos app with iCloud enabled. Newly created images will not be detected!
Frankly, I think this entire system exists mainly to protect Apple from liability for distributing CSAM images via its servers.
Apple is not alone when it comes to scanning for hashes of offending images; Google, Facebook, Microsoft and others are doing the same, but only after the images have been uploaded to their servers. Note that Apple’s CSAM software scans the hashes of your pictures locally on your iPhone before the images go into the cloud. More on this rather confusing distinction below.
** What are the risks of Apple’s proposal?
Risk of false positives:
Hashes are not always absolute, and false hits (called “collisions”) can happen, but they are extremely rare. In fact, the only actual collisions found to date have come from proof-of-concept lab experiments deliberately set up to produce them.
In the event of a spurious collision, it (hopefully) will be caught and corrected when the image is referred for human inspection at Apple. The risk of a collision is thus vanishingly small, and the risk of a collision slipping past a human inspector undetected is smaller still. The possibility of an error getting past both Apple’s software and its human inspectors is so remote that it is hard to come up with an adequate metaphor, but my non-technical take is that it would be comparable to being attacked by a great white shark while flying in a commercial airliner at 35,000 ft at the very moment the plane is also struck by an incoming meteor.
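To put rough numbers on that intuition, here is a back-of-the-envelope calculation. The two rates below are my own illustrative assumptions, not figures published by Apple:

```python
# Assumed, illustrative rates -- not Apple's published figures.
p_collision = 1e-12      # chance one benign photo collides with the database
p_reviewer_miss = 1e-3   # chance a human reviewer fails to catch the error

# Both failures must occur for a false report to reach NCMEC.
p_false_report = p_collision * p_reviewer_miss
print(f"Probability per photo: {p_false_report:.0e}")  # 1e-15 under these assumptions
```

Even if my assumed rates are off by several orders of magnitude, the combined probability stays negligible, which is the point of the shark-and-meteor metaphor above.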
** The risk of surveillance creep:
A bigger concern is surveillance creep. The news is full of stories of software intended for limited law-enforcement use (like Pegasus) being abused. It is thus worrisome that Apple’s CSAM software could be abused by governments seeking to hash-match images unrelated to child pornography in order to identify and silence dissidents, journalists, and others.
It is clear that something needs to be done to cut off the flow of child pornography. I thus agree with the general principle of scanning hashes to look for offending images. What makes me uneasy is Apple’s specific approach of scanning files on our phones themselves rather than in the cloud. Apple states that it will not scan images of users who opt out of using iCloud, but the technical ability to do so exists, and thus there is real potential for abuse by Apple, law enforcement authorities, foreign agents – or criminals.
** And then there is the slippery slope:
What happens with pictures today could happen tomorrow with conversations, or with any of the other sensor inputs the iPhone maintains. Mobile phone location data is already being used in law-enforcement investigations to track down and question people whose phones were in the vicinity of a crime they might not even be aware of.
The success of existing detection schemes will create a strong temptation for law enforcement to expand what it wants to inspect. Will voice mails be scanned? Will caller and address book metadata be analyzed?
Will smartphone mics always be listening for indications of domestic violence and call the police? Or will smartphones be turned, without their owners’ consent, into a “shot-spotter” network, with microphones listening for gunshots and cameras watching for muzzle flashes?
This all borders on paranoid improbability, but recent history is replete with examples of theoretical risks becoming actual problems. This is why the metaphor of the slippery slope is so compelling – and so vulnerable. As Cicero observed centuries ago, events once underway “… glide readily down the path of ruin once they have taken a start.” Or more simply, once one steps upon the slippery slope, one inevitably ends up at the bottom.
** Two more initiatives that are under-reported!
The same on-device image-scanning technology is also being used in ways that I believe will have a much bigger impact on the protection of children.
Apple is adding image monitoring to iMessage. The idea is that when a child is about to send or receive sexually explicit images, iMessage will first flash a warning on their device, and if “family sharing” is switched on, the alert will also be sent to the child’s parents. Offending images will be blurred, and the software will display an additional warning if the child attempts to view the un-blurred image. If the child opens the image, the parent(s) will also receive an additional notification.
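To visualize the flow just described, here is a hypothetical sketch of the decision logic; Apple has published no API for this feature, so every name and structure below is my own invention:

```python
from dataclasses import dataclass

@dataclass
class IncomingImage:
    is_explicit: bool  # verdict of the assumed on-device classifier

def handle_image(img: IncomingImage, family_sharing: bool,
                 child_opens_it: bool) -> list[str]:
    """Hypothetical sketch of the iMessage child-safety flow described above.
    Everything runs on the child's device; nothing is uploaded or reported."""
    events: list[str] = []
    if not img.is_explicit:
        return events  # ordinary images pass through untouched
    events.append("blur the image and warn the child")
    if family_sharing:
        events.append("alert the parents")
    if child_opens_it:
        events.append("show an additional warning")
        events.append("send an additional notification to the parents")
    return events

# Example: explicit image, family sharing on, child taps through the blur.
print(handle_image(IncomingImage(is_explicit=True), True, True))
```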
** This technology will be much more effective at protecting children!
This feature will be implemented as an opt-in-only feature running locally on the child’s iPhone. There will be no uploads to the cloud, and only family members specified in the Family Plan settings will receive alerts. Law enforcement will not be involved in any way, and parents will have full control. I believe this feature will be of far greater benefit to watchful parents than the CSAM image monitoring, especially as both settings and notifications are left fully in parents’ control.
Siri & Search
Siri and Search are also being updated to intervene when users search for queries related to child pornography. Apple hasn’t provided details of how this will work, but whatever the final configuration, there remains the possibility (and, for law enforcement, the temptation) of feature creep. And once the feature exists, there is the risk that certain state actors will demand Apple extend it to detect what others consider protected political speech. For example, it is easy to imagine a foreign government demanding such an extension as a condition of selling Apple products in its country.
** Looking ahead:
It is still early in this process and much remains to be learned about Apple’s plans. Whatever one thinks of the details, Apple is clearly motivated by a sincere desire to counter what it sees as critical risks in the rapidly evolving digital world. But even good intentions and a sincere desire to help are not without risk.
** Is this a self-inflicted Catch-22 for Apple?
Failure to stem the spread of child pornography on its platforms goes against Apple’s long-standing commitment to providing a safe, family-friendly walled garden for its customers. However, adding protections like those contemplated in its recent announcement opens Apple to accusations that it is watering down its promise of strong privacy protections.
** This is a slippery slope!
Apple is the industry’s role model for protecting the privacy of its users. Lowering the bar, even for this noble cause, will lower an industry standard that already leaves much to be desired – look at Google, Microsoft and Facebook for starters…
** As users, we are inevitably caught in the middle
As Apple customers, we have little say in what Apple does. Our only real choice is whether to use iCloud or not – and to watch out that we don’t somehow innocently touch a topic that might later be added to the list of things to scan for and report back to Apple and the authorities.
** My bottom line…
Don’t get me wrong – I love Apple’s iCloud service. It’s the glue that ties our Macs, iPhones and iPads together and synchronizes essential data. On balance, I much prefer living in Apple’s walled garden to the wild west of Android and Windows. And I believe the same applies to my clients: more than once I have saved a client’s digital life simply by enabling iCloud services and restoring lost or corrupted data after a Mac was stolen or stopped working.
However, I have adopted a posture of constant vigilance regarding the future. Given the ever more capable artificial-intelligence functions intruding into our phones and computers, prudence dictates that we be prepared to take more dramatic steps in the event that current trends mature into a serious invasion of our privacy.
I am thus constantly evaluating alternatives. So far, the available options come at considerable cost in terms of reduced functionality and increased technical hassle compared to Apple’s offerings. But in a worst-case scenario, trading convenience for privacy might be a price worth paying.
Finally, as this post suggests, as deeply as I have explored the current state of this issue, I am only beginning what I am certain will be a long and complex journey of discovery.
If you are interested in learning more as I deepen my understanding, let me know by signing up for my newsletter dedicated to this topic.