CSAM EU Reporting obligations
Last update: 31 January 2025
Background
Twelve-App publishes this report in compliance with Regulation (EU) 2021/1232 of 14 July 2021.
Our platform operates internationally, with users in many countries worldwide. We are committed to ensuring the same level of protection for everyone against child sexual exploitation and abuse (“CSEA”). For this reason, we have implemented comprehensive policies to address this critical issue.
Our policies are carefully designed to address child safety violations, including child solicitation and child sexual abuse material (“CSAM”), which may occur through our platform.
We have adopted a zero-tolerance approach to such behavior. As a result, we regularly remove content and ban accounts whenever we become aware of such activity, whether through user reports or our own proactive detection (by technology or by safety specialists). Some violations may also be reported to the competent authorities.
This report covers the period from January to December 2024.
1. The type and volume of data processed;
We use automated technologies on a voluntary basis to detect CSEA in the user-generated content on the platform. These technologies can flag suspicious content and generate reports, which safety specialists then review. Such reports may include user account information, metadata, and user-generated content (such as profile photos, messages, or comments).
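For illustration only, such a report could be modeled as in the sketch below; the field names and structure are assumptions made for this example, not Yubo's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical shape of a detection report queued for human review.
# Field names and types are illustrative assumptions, not Yubo's actual schema.
@dataclass
class DetectionReport:
    report_id: str
    created_at: datetime
    account_id: str         # user account information
    content_type: str       # e.g. "profile_photo", "message", "comment"
    content_ref: str        # reference to the flagged user-generated content
    metadata: dict = field(default_factory=dict)  # e.g. upload time, device info
    reviewed: bool = False  # set once a safety specialist has reviewed the case
```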
2. The specific ground relied on for the processing pursuant to Regulation (EU) 2016/679;
Depending on the case, the technology used to detect CSEA may rely on one or more of the following legal grounds:
- The protection of the vital interests of any natural person who is a victim of child abuse (Article 6(1)(d) of the GDPR);
- Acting in the public interest against child sexual abuse (Article 6(1)(e) of the GDPR);
- Twelve-App’s legitimate interest in proactively combating illicit or dangerous content, as well as any activity that violates its Community Guidelines, in order to ensure user safety, maintain trust, and preserve the integrity of its services (Article 6(1)(f) of the GDPR).
More information on legal bases can be found in the Privacy policy available here: https://www.yubo.live/legal/privacy-policy.
3. The ground relied on for transfers of personal data outside the Union pursuant to Chapter V of Regulation (EU) 2016/679, where applicable;
In cases where Twelve-App transfers personal data outside the European Union for the purpose of combating CSEA, it relies on the transfer mechanisms provided under Chapter V of the GDPR, including the European Commission’s Standard Contractual Clauses, the Data Privacy Framework, or the protection of the vital interests of any natural person who is a victim of abuse.
4. The number of cases of online child sexual abuse identified, differentiating between online child sexual abuse material and solicitation of children;
In 2024, we identified 4,484 CSEA-related cases in the EU, broken down as follows:
- 742 cases of confirmed CSAM;
- 3,742 cases of confirmed child solicitation.
5. The number of cases in which a user has lodged a complaint with the internal redress mechanism or with a judicial authority and the outcome of such complaints;
In 2024, we received approximately 31 complaints in the EU concerning ban decisions related to child safety. Of these, no accounts were reinstated.
6. The numbers and ratios of errors (false positives) of the different technologies used;
Below are our statistics for the year 2024:
Images:
- We actioned 0 cases in the EU using the industry hash databases.
- Our technologies can detect and remove content depicting nudity, but they cannot, on their own, classify content as CSAM; content is only classified as CSAM after manual review by a human specialist.
Text:
The accuracy rate for grooming detection averaged approximately 87%. This means that in about 13% of cases flagged as suspicious, human specialists did not take action. Safety specialists are themselves subject to review and quality assurance.
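As a simple illustration of how such a ratio is derived, consider the sketch below; the counts are hypothetical and chosen only to reproduce the 87%/13% split described above.

```python
# Hypothetical illustration of how a false-positive ratio is computed.
# The counts are made up for this example; they are not Yubo's actual figures.
flagged = 1000   # cases the grooming-detection technology marked as suspicious
actioned = 870   # cases where a human specialist confirmed and took action

accuracy = actioned / flagged          # share of flags confirmed by humans
false_positive_rate = 1 - accuracy     # share of flags where no action was taken

print(f"accuracy: {accuracy:.0%}")                    # -> accuracy: 87%
print(f"false positives: {false_positive_rate:.0%}")  # -> false positives: 13%
```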
7. The measures applied to limit the error rate and the error rate achieved;
The media-matching technologies and hash databases are maintained by reputable child safety organizations (such as NCMEC and the IWF). These databases are populated by industry contributors. When media published on Yubo matches the hash of previously identified CSAM in those databases, a human safety specialist reviews the case and confirms whether or not it is a false positive.
Our safety algorithms are subject to continuous improvement. First, safety specialists review suspicious cases that are flagged automatically and routed to dedicated queues; because moderators are trained for this task, they can spot false positives and refrain from taking action where appropriate. Second, Quality Assurance (QA) procedures are in place, under which human specialists review the quality of the work carried out by moderators.
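The general hash-matching pattern described above can be sketched as follows. This is a generic illustration: production systems typically rely on perceptual hashes (such as PhotoDNA) supplied by organizations like NCMEC or the IWF, whereas this sketch uses a plain cryptographic hash, and the database and queue contents are placeholders.

```python
import hashlib

# Generic sketch of hash-based media matching against a CSAM hash database.
# Real deployments use perceptual hashes (e.g. PhotoDNA) from organizations
# such as NCMEC or the IWF; the SHA-256 digests and queue here are placeholders.
known_csam_hashes: set[str] = set()  # would be loaded from an industry hash database
review_queue: list[str] = []         # matches awaiting human review

def check_media(media_bytes: bytes, media_id: str) -> None:
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest in known_csam_hashes:
        # A match is never actioned automatically: a human safety specialist
        # reviews it and confirms whether it is a false positive.
        review_queue.append(media_id)
```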
8. The retention policy and the data protection safeguards applied pursuant to Regulation (EU) 2016/679;
Yubo has security measures in place, such as encryption in transit (using the TLS 1.3 protocol), access controls, logging, and data protection governance, including data protection impact assessments and limited retention periods. Data related to content moderation is kept for up to 12 months; the exact retention period depends on the type of content, the type of violation, and storage conditions.
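As an illustration of how a time-bound retention rule of this kind can be enforced, consider the sketch below; the 12-month cap comes from the policy above, while the function and record handling are assumptions made for the example.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention check: moderation data older than 12 months becomes
# eligible for deletion. The record handling is an assumption for this sketch.
RETENTION_PERIOD = timedelta(days=365)  # "up to 12 months", per the policy above

def is_expired(stored_at: datetime, now: datetime | None = None) -> bool:
    now = now or datetime.now(timezone.utc)
    return now - stored_at > RETENTION_PERIOD

# Example: a record stored 13 months ago should be purged.
old_record = datetime.now(timezone.utc) - timedelta(days=400)
print(is_expired(old_record))  # -> True
```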
9. The names of the organisations acting in the public interest against child sexual abuse with which data has been shared pursuant to this Regulation;
- National Center for Missing and Exploited Children (NCMEC), for the U.S. and Canadian markets;
- Internet Watch Foundation (IWF);
- PHAROS (Plateforme d'harmonisation, d'analyse, de recoupement et d'orientation des signalements), for France.