eSafety lays out industry standards enforcing online message and file scans

Platforms like Google Drive and Facebook Messenger would be scanned for illegal material under enforceable standards proposed by Australia’s eSafety Commissioner.

The requirements for detecting, disrupting and removing child sexual abuse and pro-terror content cover most websites and apps, including messaging, file storage and open-source AI generation services.

Stakeholders have until December 21 to submit responses outlining costs, practicality, privacy risks or other concerns with the online safety standards.

eSafety Commissioner Julie Inman Grant said in a statement that detecting the illegal material does not require service providers “to monitor the content of private emails, instant messages, SMS, MMS, online chats and other private communications.”

Automated detection technologies like Microsoft’s hash-matching tool PhotoDNA do not amount to government surveillance, according to the Commissioner.

“PhotoDNA is not only extremely accurate, with a false positive rate of 1 in 50 billion, but is also privacy protecting as it only matches and flags known child sexual abuse imagery,” she said.
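
To make that distinction concrete, hash matching of this kind reduces each image to a fingerprint and checks it against fingerprints of already-verified material; nothing else is inspected. Below is a minimal sketch, with SHA-256 standing in for PhotoDNA's proprietary perceptual hash (which, unlike SHA-256, tolerates resizing and re-encoding) and a placeholder database entry:

```python
import hashlib

# Stand-in for a perceptual hash such as PhotoDNA. PhotoDNA itself is
# proprietary; real deployments use perceptual hashes that survive
# resizing and re-encoding, which SHA-256 does not.
def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# Fingerprints of known, previously verified abuse imagery (in practice
# supplied by bodies such as NCMEC). The entry below is a placeholder.
KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def is_known_material(image_bytes: bytes) -> bool:
    # Only matches against the known set are flagged; novel content is
    # never classified or inspected by this mechanism.
    return fingerprint(image_bytes) in KNOWN_HASHES
```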

Some platforms, such as Meta, also detect illegal content using classifiers trained on verified examples.
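
Classifier-based detection works differently from hash matching: a model scores previously unseen content, and only high-confidence hits are queued for human review. The sketch below is illustrative only, since Meta's models and features are not public; the feature extractor is replaced with random data and the review threshold is an assumed value:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in training data: in practice features would come from an image
# or text encoder, and labels from human-verified examples.
X_train = rng.normal(size=(200, 8))
y_train = rng.integers(0, 2, size=200)

model = LogisticRegression().fit(X_train, y_train)

def flag_for_review(features: np.ndarray, threshold: float = 0.9) -> bool:
    # The classifier outputs a probability; only high-confidence hits
    # are escalated to human review rather than removed automatically.
    return model.predict_proba(features.reshape(1, -1))[0, 1] >= threshold
```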

Inman Grant also said the distribution of illegal content could be disrupted by blocking accounts based on suspicious metadata or on material posted to unencrypted surfaces such as user profiles.

“Meta’s end-to-end encrypted WhatsApp messaging service already scans the non-encrypted parts of its service including profile and group chat names and pictures that might indicate accounts are providing or sharing child sexual abuse material.”
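
A rough sketch of what screening those unencrypted surfaces might look like follows; the Profile fields and the watchlist of indicator terms are hypothetical, and message bodies, being E2EE, are never touched:

```python
from dataclasses import dataclass

# Hypothetical watchlist; real services use curated indicator lists.
INDICATOR_TERMS = {"example-indicator-1", "example-indicator-2"}

@dataclass
class Profile:
    display_name: str
    group_names: list[str]

def surface_signals(profile: Profile) -> list[str]:
    """Return unencrypted fields containing watchlisted terms.

    Only metadata the service can already read (profile and group
    names) is screened; encrypted message content is never accessed.
    """
    texts = [profile.display_name, *profile.group_names]
    return [t for t in texts
            if any(term in t.lower() for term in INDICATOR_TERMS)]
```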

Standards still require E2EE services to be scanned

Unlike the two industry-written codes eSafety rejected in February, the standards do not seek to make a separate category for end-to-end encrypted (E2EE) services.

“Operating an end-to-end encrypted service does not absolve companies of responsibility and cannot serve as a free pass to do nothing about criminal acts [performed over these services],” Inman Grant said.

Inman Grant said that the standards would not require “companies to design systematic vulnerabilities or weaknesses into any of their end-to-end encrypted services.”

However, eSafety disputes the claim that E2EE and automated detection technologies are intrinsically incompatible.

To be granted an exemption, service providers will have to “demonstrate” that detection is “technically infeasible in the circumstances”, according to materials published on complying with the standards [pdf].

“Technical feasibility” depends on “whether it is reasonable for service providers to incur the costs of taking action, having regard to the level of risk to the online safety of end-users.”

Making E2EE and content scanning interoperable 

In October, the eSafety Office’s ‘Updated Position Statement’ [pdf] on E2EE outlined examples of how material sent to a cloud storage or messaging platform could be scanned for illegal content before it is encrypted.

One of eSafety’s proposals was “E2EE communication being sent from a user’s device, and then checked…by a dedicated server before the communication is sent on to the recipient.”

“Such a system can be audited to ensure it fulfills only one function, and that no access is provided to the service provider or other third parties.”
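
The statement does not specify a mechanism, but one way to realise that flow is for the sending client to derive a fingerprint of the content before encryption, so the checking server sees only the fingerprint, never plaintext or keys. A hedged sketch under those assumptions, with SHA-256 again standing in for a perceptual hash and the encrypt/forward steps passed in as callables:

```python
import hashlib

# Hash list the dedicated checking server holds; loaded in practice
# from a verified database of known illegal material.
SERVER_KNOWN_HASHES: set[str] = set()

def server_check(fp: str) -> bool:
    # The server sees only a fingerprint, never plaintext or keys, so
    # it "fulfils only one function" and can be audited as such.
    return fp not in SERVER_KNOWN_HASHES

def send_message(plaintext: bytes, encrypt, forward) -> bool:
    fp = hashlib.sha256(plaintext).hexdigest()  # perceptual hash in practice
    if not server_check(fp):
        return False  # delivery disrupted; content never leaves the device
    forward(encrypt(plaintext))  # normal E2EE path is untouched
    return True
```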

eSafety also cited two examples from Apple: a child safety feature for iMessage and a discontinued tool that would have scanned content before it was uploaded to iCloud.

The iCloud tool would have scanned content on users’ devices before it was uploaded to their backups; police would have been alerted when known illegal material was detected.

Although digital rights groups opposed the idea, the solution [pdf] – which Apple abandoned in December last year – was designed with privacy in mind.

“Instead of scanning images in the cloud, the system performs on-device matching using a database of known…image hashes.

“Apple further transforms this database into an unreadable set of hashes, which is securely stored on users’ devices.”
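
Stripped of its cryptography (NeuralHash, private set intersection and threshold secret sharing, none of which is reproduced here), the design reduces to on-device set membership plus a match threshold, roughly 30 matches in Apple’s published design, below which no party learns anything. A heavily simplified sketch, with placeholder hash function and database entry:

```python
import hashlib

# Opaque stand-in for Apple's "unreadable set of hashes" stored on device.
BLINDED_DB = {hashlib.sha256(b"placeholder-entry").hexdigest()}

# Apple's published design reported nothing below roughly 30 matches.
REPORT_THRESHOLD = 30

def scan_before_upload(images: list[bytes]) -> bool:
    # Match each image locally against the on-device database.
    matches = sum(
        hashlib.sha256(img).hexdigest() in BLINDED_DB for img in images
    )
    # Below the threshold no report is generated, so isolated matches
    # (and false positives) reveal nothing to anyone.
    return matches >= REPORT_THRESHOLD
```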

Apple’s director of user privacy and child safety Erik Neuenschwander said in an email [pdf] obtained by Wired that the project was ditched over concerns it could create new “threat vectors for data thieves to find and exploit” and lead to authoritarian surveillance through function creep.

“How can users be assured that a tool for one type of surveillance has not been reconfigured to surveil for other content such as political activity or religious persecution?” the email – sent in August to a child rights group that supported Apple readopting the solution – said.

eSafety’s other example of device-side scanning – Apple’s parental control feature that detects nudity in iMessages that children receive or are about to send – remains operational.

When explicit images are detected, child users are reminded that they do not have to participate and are shown other safety resources.

“Because the photos and videos are analysed on your child’s device, Apple doesn’t receive an indication that nudity was detected and doesn’t get access to the photos or videos as a result,” the company has said.
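
The privacy property Apple describes is simply that the entire decision happens on the device, with no network call at any point. A trivial sketch of that shape, with the on-device classifier stubbed out:

```python
# `detects_nudity` stands in for Apple's private on-device model.
def detects_nudity(media: bytes) -> bool:
    return False  # placeholder for an on-device classifier

def receive_media(media: bytes) -> str:
    if detects_nudity(media):
        # Intervene locally; note the absence of any network call, so
        # the service "doesn't receive an indication that nudity was
        # detected" and never sees the photo or video.
        return "blurred-with-safety-guidance"
    return "shown-normally"
```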

eSafety’s position statement said that the feature does not go far enough because it cannot “prevent the sharing of illegal material or activity, or enable accounts to be banned.” 

However, it still “demonstrates – at scale – that device side tools can be used alongside E2EE, without weakening encryption and while protecting privacy.”

https://www.itnews.com.au/news/esafety-lays-out-industry-standards-enforcing-online-message-and-file-scans-602600
