How do we make moderation decisions on Yubo?
Because we care about your safety and privacy, we’d like to be transparent about how we make moderation decisions and how we enforce our Community Guidelines.
An ‘Automated individual decision’ is a decision made by an algorithm rather than by a human, and one that may affect an individual’s interests, rights, or freedoms.
Our moderation process combines human moderators and AI technology. We may automatically remove certain unacceptable content to prevent users from being exposed to unsafe or inappropriate behavior. For example, a live video that shows violence or nudity will be closed automatically if our algorithm detects it. We also automatically filter inappropriate or unlawful content sent in the chat.
When we detect content that we suspect violates our Community Guidelines or applicable laws, we generate an internal report. This report is escalated to our human moderators, who review it and take the appropriate action. The reported account remains flagged for the duration of the investigation.
Moderators also review all reports made by users themselves.
Our automated detection and filtering tools remain under the supervision of our teams and moderators, which helps us limit errors (such as false positives) and unfair moderation decisions.
If you believe you have been moderated by mistake, contact support here.