At the beginning of this month, Reuters reported that Apple is officially going to start scanning photos and messages stored in its iCloud service, on the grounds that the system will detect Child Sexual Abuse Material (CSAM). This was already mentioned in the thread linked below, but I think it is better to create a thread specific to this subject and related topics.
Link: Coronavirus Pandemic: Apocalypse Now! Or exaggerated scare story? (cassiopaea.org)
This is their official statement:
Link: Child Safety
Expanded Protections for Children
At Apple, our goal is to create technology that empowers people and enriches their lives — while helping them stay safe. We want to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM).
Apple is introducing new child safety features in three areas, developed in collaboration with child safety experts. First, new communication tools will enable parents to play a more informed role in helping their children navigate communication online. The Messages app will use on-device machine learning to warn about sensitive content, while keeping private communications unreadable by Apple.
Next, iOS and iPadOS will use new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy. CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos.
Finally, updates to Siri and Search provide parents and children expanded information and help if they encounter unsafe situations. Siri and Search will also intervene when users try to search for CSAM-related topics.
These features are coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.*
This program is ambitious, and protecting children is an important responsibility. These efforts will evolve and expand over time.
Communication safety in Messages
The Messages app will add new tools to warn children and their parents when receiving or sending sexually explicit photos.
When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo. As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it. Similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it.
Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages.
This feature is coming in an update later this year to accounts set up as families in iCloud for iOS 15, iPadOS 15, and macOS Monterey.*
Messages will warn children and their parents when receiving or sending sexually explicit photos.
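Just to make the flow described above a bit more concrete, here is a very rough sketch of the kind of on-device decision it implies. The names and structure are entirely hypothetical (Apple has not published an API for this), and the classifier verdict is assumed to come from the on-device machine-learning model they mention.

```python
# Hypothetical sketch of the Messages warning flow described above.
# None of these names are real Apple APIs; the explicit/not-explicit verdict
# is assumed to come from an on-device machine-learning classifier.
from dataclasses import dataclass

@dataclass
class FamilySettings:
    child_account: bool            # account set up as a child in Family Sharing
    parental_alerts_enabled: bool  # parents opted in to notifications

def handle_incoming_photo(is_explicit: bool, settings: FamilySettings) -> dict:
    """Decide what the Messages UI should do with an incoming photo,
    given the on-device classifier's verdict."""
    if not settings.child_account or not is_explicit:
        return {"blur": False, "warn": False, "notify_parents_if_viewed": False}
    # Sensitive content on a child account: blur the photo, warn the child,
    # and (if enabled) tell them that viewing it will notify their parents.
    return {
        "blur": True,
        "warn": True,
        "notify_parents_if_viewed": settings.parental_alerts_enabled,
    }

print(handle_incoming_photo(True, FamilySettings(True, True)))
# {'blur': True, 'warn': True, 'notify_parents_if_viewed': True}
```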
CSAM detection
Another important concern is the spread of Child Sexual Abuse Material (CSAM) online. CSAM refers to content that depicts sexually explicit activities involving a child.
To help address this, new technology in iOS and iPadOS* will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States.
Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.
Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image.
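To picture what that per-image step roughly amounts to, here is a toy sketch. Big caveat: the real system uses a perceptual hash (NeuralHash) and private set intersection, so the device never learns on its own whether an image matched; the toy below skips all of that cryptography and only shows the "hash it, compare it, attach a voucher to the upload" shape. All names are made up.

```python
# Toy sketch of the per-image step: hash locally, compare against the known
# database, attach a "safety voucher" to the upload. Made-up names; the real
# design uses NeuralHash and private set intersection so the device cannot
# read the match result, which this toy does NOT implement.
import hashlib

def perceptual_hash(image_bytes: bytes) -> bytes:
    # Placeholder: a real perceptual hash is robust to resizing/recompression;
    # SHA-256 of the raw bytes is only used here to keep the toy runnable.
    return hashlib.sha256(image_bytes).digest()

def make_safety_voucher(image_bytes: bytes, known_hashes: set[bytes]) -> dict:
    matched = perceptual_hash(image_bytes) in known_hashes
    # In the actual protocol this result is encrypted inside the voucher and
    # only becomes readable server-side after the account crosses the threshold.
    return {"matched": matched, "payload": "<encrypted image metadata>"}

known = {perceptual_hash(b"known-bad-image")}
print(make_safety_voucher(b"holiday-photo", known))    # {'matched': False, ...}
print(make_safety_voucher(b"known-bad-image", known))  # {'matched': True, ...}
```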
Using another technology called threshold secret sharing, the system ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content. The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
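The "threshold secret sharing" part can be illustrated with a textbook Shamir-style scheme: a key is split into shares, one share rides along with each matching voucher, and the server can only reconstruct the key (and therefore read the vouchers) once it has collected at least t of them. This is a generic toy, not Apple's actual construction or parameters.

```python
# Toy Shamir threshold secret sharing, illustrating the idea that the vouchers
# only become readable once at least t of them have been collected.
# Generic textbook construction; not Apple's actual scheme or parameters.
import random

P = 2**127 - 1  # prime modulus for the toy finite field

def split(secret: int, n: int, t: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]

    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = 123456789
shares = split(key, n=10, t=3)         # e.g. one share per safety voucher
assert reconstruct(shares[:3]) == key  # threshold reached: key recoverable
# With fewer than t shares, the key remains information-theoretically hidden.
```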
Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images. Apple then manually reviews each report to confirm there is a match, disables the user’s account, and sends a report to NCMEC. If a user feels their account has been mistakenly flagged they can file an appeal to have their account reinstated.
This innovative new technology allows Apple to provide valuable and actionable information to NCMEC and law enforcement regarding the proliferation of known CSAM. And it does so while providing significant privacy benefits over existing techniques since Apple only learns about users’ photos if they have a collection of known CSAM in their iCloud Photos account. Even in these cases, Apple only learns about images that match known CSAM.
Oh, come on! This goes into the category of 'how to tell you that we're going to be spying on you without telling you that we're going to be spying on you.'
The thing is, Apple, along with every other major tech company, has always spied on us (perhaps on a small group of us without our really knowing it), but the main difference this time is that they are making it official! And with an excuse that sounds like "good intentions", though we (in this forum) know it is just another tactic for the PTB to officially expand their totalitarianism by convincing the population that this is a good thing for us... that it will "protect us"; in this case, "protect" our children.
The next most logical step is COVID: with new variants coming up, it would of course sound logical to them to exploit a new variant in the news, and, for vaccination purposes, this scanning system could start checking photos or messages for "misinformation" and make you a target for the authorities. After that, they could easily turn it into a system to detect messages or photos that go against the government. Perhaps something like a new false-flag attack happens and they extend the scanning to detect "terrorism" as well, so they can officially label you a terrorist if you express a version different from the official one. You know how all this can go.
Anyway, this is how they claim they will not be watching your photos directly (spoiler alert: they will):
Link: Apple Outlines Security and Privacy of CSAM Detection System in New Document
The system is designed so that a user need not trust Apple, any other single entity, or even any set of possibly-colluding entities from the same sovereign jurisdiction (that is, under the control of the same government) to be confident that the system is functioning as advertised. This is achieved through several interlocking mechanisms, including the intrinsic auditability of a single software image distributed worldwide for execution on-device, a requirement that any perceptual image hashes included in the on-device encrypted CSAM database are provided independently by two or more child safety organizations from separate sovereign jurisdictions, and lastly, a human review process to prevent any errant reports.
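In plain terms, the "two or more child safety organizations from separate sovereign jurisdictions" requirement means a hash only makes it into the on-device database if it was supplied independently by organizations under different governments, so no single government can slip its own entries in unilaterally. A trivial sketch of that gate, with hypothetical data and names:

```python
# Sketch of the "independent sources" gate described above: a hash only enters
# the shipped database if organizations from at least two distinct
# jurisdictions supplied it independently. Hypothetical data and names.
from collections import defaultdict

def build_shipped_database(sources: dict[str, set[str]],
                           jurisdiction: dict[str, str]) -> set[str]:
    """sources: org name -> set of perceptual hashes it provided.
    jurisdiction: org name -> country/jurisdiction that org operates under."""
    jurisdictions_per_hash = defaultdict(set)
    for org, hashes in sources.items():
        for h in hashes:
            jurisdictions_per_hash[h].add(jurisdiction[org])
    # Keep only hashes vouched for by organizations in >= 2 distinct jurisdictions.
    return {h for h, js in jurisdictions_per_hash.items() if len(js) >= 2}

db = build_shipped_database(
    sources={"NCMEC": {"a1", "b2", "c3"}, "OrgEU": {"b2", "c3", "d4"}},
    jurisdiction={"NCMEC": "US", "OrgEU": "EU"},
)
print(db)  # {'b2', 'c3'} -- entries supplied by only one jurisdiction are dropped
```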
So, in the end, yes: they will look at your photos directly, OFFICIALLY.
Even Apple's employees are concerned about this feature:
Apple employees have flooded an Apple internal Slack channel with more than 800 messages on the plan announced a week ago, workers who asked not to be identified told Reuters. Many expressed worries that the feature could be exploited by repressive governments looking to find other material for censorship or arrests, according to workers who saw the days-long thread.
Past security changes at Apple have also prompted concern among employees, but the volume and duration of the new debate is surprising, the workers said. Some posters worried that Apple is damaging its leading reputation for protecting privacy.
At least the good thing is that people are noticing; the more moves like this they make official, the more people will realize how things really are. I don't have much hope that this will backfire on them completely, but I can wish.
Could governments force Apple to add non-CSAM images to the hash list?
Apple will refuse any such demands. Apple's CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government's request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.
Hmmm... Right! Until something happens and governments pass a bill requiring companies to share the data officially (that's the key word here, isn't it?).
So let's see how the other major companies (I don't even want to start with Android and Google) implement this as well. Expect the news very soon, and we can track how it evolves into other types of surveillance beyond CSAM: COVID, then government rules, etc.
In my case, I'm not going to lie: I'm one of those people who often prefer Apple products over other brands because of how simply everything works for me in their ecosystem. But I don't use their cloud services at all (I manually back up everything to a hard disk), and on their products I always use third-party apps instead of the first-party ones. I know this won't stop them from looking at my stuff, but it might make it just a tiny bit harder for them, or anyone else, to get much info about me. (Wishful thinking on my part, I know.)