WhatsApp boss objects to Apple's child safety features

 


Apple's move to implement a new child safety system in its products has drawn criticism. One critic is WhatsApp head Will Cathcart.
In a Twitter thread, Cathcart called Apple's approach to the system very concerning and said WhatsApp would not adopt the same system.





I read the information Apple released yesterday and I'm concerned. I think this is the wrong approach and a setback for people's privacy all over the world.



People have asked if we will adopt this system for WhatsApp. The answer is no.



- Will Cathcart (@wcathcart) Aug 6, 2021



According to him, WhatsApp has a different system for combating child exploitation, one that relies on user reports and works without breaking encryption, unlike Apple's approach.



Using that system, WhatsApp reported more than 400,000 cases to the National Center for Missing and Exploited Children (NCMEC) in 2020.



Cathcart has his own reason for criticizing Apple's system: user privacy. Apple will scan every photo that is about to be uploaded to iCloud, looking for images suspected of depicting child exploitation.



It should also be noted that Facebook, WhatsApp's parent company, has clashed with Apple before, most recently when Apple changed its ad-tracking rules in iOS 14.5, a change Facebook argued would hurt small businesses.



Cathcart isn't the only one criticizing Apple's new system. Matthew Green, an associate professor at Johns Hopkins University, says the feature could be abused by governments or by criminals who manage to compromise Apple's systems.



This tool will let Apple scan your iPhone photos for photos that match a particular perceptual hash, and report them to Apple's servers if too many appear.



- Matthew Green (@matthew_d_green) Aug 5, 2021



As reported earlier, Apple built a special feature to detect child abuse material by scanning photos bound for iCloud. Any offending content that is found will be reported to law enforcement.



The photo scanning relies on special cryptographic techniques to detect signs of child abuse in the images, and the scan only runs when a photo is uploaded to iCloud.



If a scanned photo is flagged as child abuse based on that cryptographic data, it will be reported to Apple once it meets the criteria for child sexual abuse material (CSAM).
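To make the matching and threshold idea concrete, here is a minimal conceptual sketch of perceptual-hash matching with a reporting threshold. This is not Apple's NeuralHash or CSAM-detection code; the hash values, match distance, and threshold below are made-up placeholders for illustration only.

# Conceptual sketch: threshold-based perceptual-hash matching (hypothetical values).

# Hypothetical database of known 64-bit perceptual hashes.
KNOWN_HASHES = {0xA5A5F0F0C3C3E1E1, 0x123456789ABCDEF0}

MATCH_DISTANCE = 5      # max Hamming distance counted as a "match" (assumed)
REPORT_THRESHOLD = 30   # matches required before anything is reported (assumed)


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")


def is_match(photo_hash: int) -> bool:
    """A photo 'matches' if it is close to any hash in the known database."""
    return any(hamming_distance(photo_hash, h) <= MATCH_DISTANCE for h in KNOWN_HASHES)


def should_report(photo_hashes: list[int]) -> bool:
    """Report only when the number of matching photos crosses the threshold."""
    matches = sum(1 for h in photo_hashes if is_match(h))
    return matches >= REPORT_THRESHOLD


if __name__ == "__main__":
    # Example: an upload queue of photos (hashes are invented for the demo).
    queue = [0xA5A5F0F0C3C3E1E0] * 31  # one bit away from a known hash
    print("Flag account:", should_report(queue))  # True

The key point the sketch illustrates is that photos are only compared against a fixed database of known hashes, and nothing is reported until the number of matches crosses a threshold.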



For years, Apple has used a hash system to scan for child abuse images sent by email, much like the systems used by Gmail and other email providers.
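For comparison, that email-style approach can be illustrated with a plain hash-database lookup. The sketch below uses an exact SHA-256 digest check purely for simplicity; real deployments typically use robust image hashes such as PhotoDNA, and the digest set here is illustrative only.

import hashlib

# Hypothetical set of SHA-256 digests of known prohibited files (example value only:
# this particular digest is simply SHA-256 of the bytes b"test").
KNOWN_DIGESTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def sha256_of(data: bytes) -> str:
    """Compute the hex digest of an attachment's raw bytes."""
    return hashlib.sha256(data).hexdigest()


def is_known_file(attachment: bytes) -> bool:
    """Exact-match check: flag only if the digest appears in the database."""
    return sha256_of(attachment) in KNOWN_DIGESTS


if __name__ == "__main__":
    print(is_known_file(b"test"))   # True
    print(is_known_file(b"other"))  # False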



With this new cryptographic technology, however, Apple can scan for child abuse photos before they are sent to other users, or even if they are never sent to anyone at all.



Apple says it will not receive photos that do not match the CSAM database, nor any metadata or other visual data from non-matching photos.