Because we care about your safety and privacy, we’d like to be transparent about how we make moderation decisions and how we enforce our Community Guidelines.
An ‘automated individual decision’ is a decision made not by a human but by an algorithm, and one that may affect an individual’s interests, rights, or freedoms.
At YUBO, we do not track your online activity to serve you things like tailored advertisements. For example, you can indicate in your profile that you love cats and like to play tennis; we will not use this information to serve other companies' marketing agendas.
Our moderation process is based on a combination of human moderators and AI technology. We may automatically remove certain unacceptable content to prevent users from being exposed to unsafe or inappropriate behavior. For example, a live video will be automatically closed if our algorithm detects violence or nudity. Likewise, we automatically filter inappropriate or unlawful content sent in the chat.
When we detect content that we suspect violates our Community Guidelines or applicable laws, we generate an internal report. This report is escalated to our human moderators, who review it and take the appropriate action. The reported account is flagged for the duration of the investigation.
All of our automated detection and filtering tools remain under the supervision of our teams and moderators. This way, we limit errors (such as false positives) and unfair moderation decisions.
If you believe a moderation decision was made in error, contact support here.