Trust & Safety

Content Moderation Statement

Effective date: May 12, 2026
Last updated: May 12, 2026

This Content Moderation Statement describes how Favor reviews and acts on content shared on the platform. It complements our Community Guidelines (which describe what's allowed and not allowed) by explaining how those rules are enforced — the systems and people behind the decisions.

We publish this statement for the same reason we publish everything else: members deserve to know how the platform protects them, and platforms that take moderation seriously should be willing to describe their work in detail.

For specific concerns about a moderation decision affecting you, or to report a violation, contact support@favorconnect.com.

Section 01

Our moderation philosophy

Moderation is about protecting members — both from harmful content and from the friction of low-quality interactions. We aim to be fast where speed matters (urgent safety), fair where accuracy matters (judgment calls), and transparent across the board.

Three principles

  • Prevention over punishment. The best moderation removes problems before they reach anyone. We invest more in upstream prevention — verification, automated content review, photo gating — than in downstream consequences.
  • Humans, not just algorithms. Automation handles scale; humans handle judgment. Every consequential decision involves human review. Algorithms flag; humans decide.
  • Consistent standards, not personal preferences. Our Community Guidelines exist precisely so moderation doesn't depend on individual taste. Reviewers apply the same rules to every member, regardless of who they are or how active they've been.

What we believe about content

Members shouldn't have to wade through low-quality, harmful, or fraudulent content to find the people they're here for. The quality of any community is a function of what it tolerates. We choose to maintain a high floor — even when that means saying no to content or members we'd otherwise welcome.

What moderation can and can't do

No moderation system catches everything. Determined bad actors will always find new approaches. We work to minimize the gap between policy and practice — and to act quickly when something slips through — but we don't promise perfection. We promise effort, transparency, and responsiveness.

Section 02

What we moderate

We moderate every kind of user-generated content that appears on Favor. The specific surfaces and the methods we use are listed below.

Profile content

  • Profile photos — every photo, before it appears in any feed.
  • Public gallery photos — every photo, on upload.
  • Selective (private) gallery photos — every photo, on upload.
  • Profile bios, headers, and free-text fields.
  • Lifestyle interests and other structured fields.

Messaging content

  • Text messages — scanned in real time as you send them.
  • Voice messages — flagged for review when detection patterns are triggered.
  • Photos sent in chat — every photo, before delivery.
  • Location shares — checked for suspicious patterns.

Voice and video calls

We don't record voice or video calls. They're not stored on our servers, and we cannot review their content after the fact. What we can do is take action based on member reports if someone behaves inappropriately during a call. Reports about calls receive the same priority as text-based reports.

Behavioral content

Beyond what's posted, we look at patterns — how members interact with the Service, who they message and how often, who reports them and for what. Behavioral signals supplement (not replace) content-based review.

Section 03

How automated systems work

Automated review is the first layer of moderation. It runs continuously, at scale, and catches the vast majority of obvious violations before they reach another member.

Photo content scanning

Every photo uploaded to Favor — without exception — passes through automated content review. The scan checks for:

  • Nudity, including partial nudity and sexually suggestive imagery.
  • Sexually explicit content of any kind.
  • Faces that appear underage.
  • Graphic violence, weapons used to threaten, blood and gore.
  • Hate symbols and extremist imagery.
  • Drugs, paraphernalia, and content glorifying drug use.
  • Embedded text containing contact information, social handles, or external links.
  • Photos that appear to be of someone other than the account holder (when checked against the verification selfie).

Photos that fail review are either blocked outright (clear violations) or held for human review (ambiguous cases). Blocked photos generate an in-app notice to the uploader with a brief explanation.
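
As an illustration of that triage, here is a minimal Python sketch. The labels, threshold values, and function names are hypothetical and not drawn from Favor's actual systems; the sketch only shows the routing logic: clear violations are blocked outright, ambiguous results are held for a human reviewer.

```python
from dataclasses import dataclass
from enum import Enum


class PhotoOutcome(Enum):
    APPROVED = "approved"          # passes automated review
    BLOCKED = "blocked"            # clear violation, never shown to other members
    HELD_FOR_REVIEW = "held"       # ambiguous, queued for a human reviewer


@dataclass
class ScanResult:
    label: str          # e.g. "nudity", "hate_symbol", "embedded_contact_info"
    confidence: float   # 0.0 - 1.0 from the classifier


# Illustrative thresholds only; real systems tune these per label.
BLOCK_THRESHOLD = 0.90
REVIEW_THRESHOLD = 0.50


def triage_photo(results: list[ScanResult]) -> PhotoOutcome:
    """Route a photo based on the highest-confidence violation signal."""
    worst = max(results, key=lambda r: r.confidence, default=None)
    if worst is None or worst.confidence < REVIEW_THRESHOLD:
        return PhotoOutcome.APPROVED
    if worst.confidence >= BLOCK_THRESHOLD:
        return PhotoOutcome.BLOCKED          # uploader gets an in-app notice
    return PhotoOutcome.HELD_FOR_REVIEW      # a moderator decides


if __name__ == "__main__":
    print(triage_photo([ScanResult("embedded_contact_info", 0.97)]))  # BLOCKED
    print(triage_photo([ScanResult("nudity", 0.62)]))                 # HELD_FOR_REVIEW
```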

Message scanning

Text messages are scanned for indicators of common policy violations, including:

  • Romance and investment scam patterns.
  • Pig-butchering scam scripts.
  • Solicitation language (sex work, escorting, transactional language).
  • Personal contact information when shared too early in a conversation.
  • Threats, harassment, and hate speech.
  • References to age that suggest a minor.
  • Mass-messaging patterns (the same or similar messages sent to many recipients).

Scanning happens server-side at the time messages are processed. Messages that trigger high-confidence violations are blocked from delivery. Messages that trigger lower-confidence flags are delivered, but the sender is added to a moderation review queue.
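
The split between blocking and queue-for-review can be sketched the same way. Everything in the snippet below is illustrative: the signal names, the two confidence cut-offs, and the ModerationVerdict structure are assumptions, since the real thresholds are not published.

```python
from dataclasses import dataclass, field


@dataclass
class ModerationVerdict:
    deliver: bool                    # does the message reach the recipient?
    queue_sender: bool               # is the sender added to the review queue?
    reasons: list[str] = field(default_factory=list)


# Hypothetical confidence cut-offs; actual thresholds are not published.
BLOCK_AT = 0.95
FLAG_AT = 0.60


def triage_message(signals: dict[str, float]) -> ModerationVerdict:
    """Server-side triage at send time: block high-confidence violations,
    deliver lower-confidence flags but queue the sender for human review."""
    hits = {label: score for label, score in signals.items() if score >= FLAG_AT}
    if not hits:
        return ModerationVerdict(deliver=True, queue_sender=False)
    if any(score >= BLOCK_AT for score in hits.values()):
        return ModerationVerdict(deliver=False, queue_sender=True, reasons=sorted(hits))
    return ModerationVerdict(deliver=True, queue_sender=True, reasons=sorted(hits))
```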

Behavioral pattern detection

Beyond individual content, we look at patterns across accounts — bursts of similar messages, rapid creation of multiple accounts from the same device, accounts that receive disproportionate reports, login patterns consistent with automation. Pattern detection helps us identify coordinated abuse that wouldn't be obvious from any single piece of content.
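
A simple example of one such pattern check, written as a hypothetical sketch: a sliding-window counter that flags an account performing too many similar actions in a short period. The window size and limit shown are placeholders, not production values.

```python
from collections import defaultdict, deque
import time


class BurstDetector:
    """Flag an account when it performs too many similar actions in a short
    window, e.g. near-identical first messages to many different recipients."""

    def __init__(self, limit: int = 20, window_seconds: int = 600):
        self.limit = limit
        self.window = window_seconds
        self.events: dict[str, deque[float]] = defaultdict(deque)

    def record(self, account_id: str, now: float | None = None) -> bool:
        """Record one action; return True if the account should be flagged."""
        now = time.time() if now is None else now
        q = self.events[account_id]
        q.append(now)
        while q and now - q[0] > self.window:   # drop events outside the window
            q.popleft()
        return len(q) > self.limit
```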

Selfie verification automation

Selfie verification combines liveness detection (confirming the selfie is a live person, not a photo of a photo) with face matching (comparing the selfie to the submitted profile photos). The system uses industry-standard biometric analysis and is regularly evaluated for accuracy across different demographic groups.
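
As a rough sketch of how the two checks combine, the snippet below assumes (as with photo review) that ambiguous results go to a human rather than being auto-rejected; the threshold values and outcome labels are illustrative only, not the system's actual parameters.

```python
from dataclasses import dataclass


@dataclass
class SelfieCheck:
    liveness_score: float   # confidence the selfie is a live capture, not a photo of a photo
    match_score: float      # similarity between the selfie and the submitted profile photos


# Illustrative thresholds only; the real values are not published.
LIVENESS_MIN = 0.80
MATCH_MIN = 0.75
CLEAR_FAIL = 0.40


def verification_outcome(check: SelfieCheck) -> str:
    """Both checks must pass for automatic verification; borderline results
    are escalated to a human rather than auto-rejected (an assumption here)."""
    if check.liveness_score >= LIVENESS_MIN and check.match_score >= MATCH_MIN:
        return "verified"
    if check.liveness_score < CLEAR_FAIL or check.match_score < CLEAR_FAIL:
        return "rejected"
    return "human_review"
```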

What automation doesn't do

Automation never makes the final decision to permanently terminate an account. Automation can suspend accounts pending review and block individual pieces of content, but ban-level decisions involve human review. This is intentional — to limit the impact of any false positive in our automated systems.

Section 04

How human moderation works

Human moderators handle every decision that affects an account's standing on Favor. They also handle ambiguous content cases that automation can't resolve, and they review every member-submitted report.

Who moderates

Our moderation team consists of trained professionals who specialize in trust and safety for online platforms. They receive ongoing training on:

  • Our Community Guidelines and how to apply them consistently.
  • Common scam patterns, romance fraud, and emerging threats.
  • Child safety, including identification of grooming behavior and exploitative content.
  • Cultural and linguistic context across the markets where Favor operates.
  • Mental health support for the wellbeing of moderators themselves.

Moderators are bound by strict confidentiality requirements and have access only to the information necessary for their work.

Review workflow

When a report or flag arrives in the moderation queue, the workflow is:

  • Categorize the report by type (safety, content, billing, etc.).
  • Prioritize by severity (urgent safety reports surface to the top).
  • Investigate using the auto-attached context (last 50 chat messages, profile snapshots, timestamps).
  • Apply our standards consistently — moderators reference the same rule documents you can read on our website.
  • Decide on appropriate action, from no-action to permanent ban.
  • Document the decision so it's reviewable on appeal.
  • Notify the affected parties (where appropriate).
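
One way to picture a case moving through this workflow is the hypothetical data model below: a ReviewCase carries the auto-attached context, the documented decision, and the notification record, and urgent items surface first. None of the names reflect our internal tooling.

```python
from dataclasses import dataclass, field
from enum import Enum


class Action(Enum):
    NO_ACTION = "no_action"
    CONTENT_REMOVAL = "content_removal"
    WARNING = "warning"
    SUSPENSION = "suspension"
    PERMANENT_BAN = "permanent_ban"


@dataclass
class ReviewCase:
    report_id: str
    category: str                    # safety / content / billing / ...
    severity: int                    # higher values surface earlier in the queue
    context: dict                    # auto-attached chat excerpt, profile snapshots, timestamps
    decision: Action | None = None
    rationale: str = ""              # documented so the case is reviewable on appeal
    notified: list[str] = field(default_factory=list)


def next_case(queue: list[ReviewCase]) -> ReviewCase:
    """Urgent safety reports surface to the top of the queue."""
    return max(queue, key=lambda case: case.severity)
```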

Service level commitments

We commit to the following response times:

  • Urgent safety reports (suspected minors, credible threats, non-consensual imagery): within 4 hours.
  • Standard reports of policy violations: within 48 hours.
  • Appeals: within 7 days.
  • Grievance Officer matters: as described in our Grievance Officer page.

Volume and complexity can sometimes affect timing. When we can't meet a target, we provide an update with revised timing rather than going silent.

Section 05

Reports and how they flow

Where reports come from

  • In-app reporting — the three-dot menu on any profile, chat, or message.
  • Email reports — to support@favorconnect.com.
  • Internal flags — from our automated systems.
  • Law enforcement requests — from authorities providing valid legal process.
  • Trusted-flagger referrals — from organizations focused on online safety, child protection, or other areas relevant to platform moderation.

What happens to a report

Every report follows the same general flow:

  • Acknowledgement — automatic confirmation that the report was received.
  • Triage — the report is categorized and prioritized.
  • Auto-block — the reporter is automatically blocked from the reported account (for in-app reports) so further contact is impossible during review.
  • Investigation — moderators review the context, the reported content, and the involved accounts.
  • Decision — appropriate action is taken (from no-action to permanent ban).
  • Outcome notification — the reporter receives a general confirmation that action was taken; specifics are not shared for privacy reasons.
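
Modeled as a simple pipeline, the flow looks roughly like the sketch below. The state names and the one-step-forward transition table are illustrative, not a description of our actual tooling; in this sketch the auto-block for in-app reports is applied at the triage step.

```python
from enum import Enum


class ReportState(Enum):
    RECEIVED = 1        # automatic acknowledgement sent to the reporter
    TRIAGED = 2         # categorized, prioritized; in-app reports trigger the auto-block here
    INVESTIGATING = 3   # moderator reviews content, context, and involved accounts
    DECIDED = 4         # action chosen, from no-action to permanent ban
    CLOSED = 5          # reporter receives a general outcome notification


# Each state has exactly one forward step; CLOSED is terminal.
NEXT_STATE = {
    ReportState.RECEIVED: ReportState.TRIAGED,
    ReportState.TRIAGED: ReportState.INVESTIGATING,
    ReportState.INVESTIGATING: ReportState.DECIDED,
    ReportState.DECIDED: ReportState.CLOSED,
}


def advance(state: ReportState) -> ReportState:
    """Move a report one step forward through the pipeline."""
    return NEXT_STATE.get(state, state)
```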

What reporters see

Reporters receive a general acknowledgement that their report was reviewed and that action was taken — but they don't see the specific outcome. This is deliberate: we don't share whether a member was warned, suspended, or banned, because doing so would compromise the privacy of the reported member. Reporters who want more transparency about an outcome can write to support@favorconnect.com for case-specific follow-up.

Confidentiality of reporters

Reporter identities are never shared with the reported member. The reported member sees no information that suggests who reported them, when, or why. The only exception is when a report becomes part of a law enforcement investigation and disclosure is legally compelled.

Section 06

Actions we take

When we find a violation, the action we take is calibrated to severity. The table below summarizes our typical approach.

Severity | Action taken | Example violations
Minor / first offense | Content removal, in-app warning with policy reminder. | Off-topic profile bio, mild rudeness, low-quality photos.
Repeat / moderate | Profile hidden from discovery, 7–14 day account suspension. | Repeated impersonation reports, persistent rudeness, soft contact-info pushing.
Serious | 30-day suspension or permanent account termination, cross-account ban applied. | Harassment, scam attempts, photos that fail content rules, fake profiles, threats.
Most severe | Immediate permanent termination, cross-account ban, reported to law enforcement. | Sexual content involving minors, credible threats of violence, doxxing, sextortion.
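
The ladder in the table can be expressed as a simple lookup. The sketch below is hypothetical; the tier keys and action names are made up for illustration and do not mirror internal configuration.

```python
# Hypothetical encoding of the severity ladder above; tier keys and action
# names are illustrative only.
ENFORCEMENT_LADDER = {
    "minor_first_offense": ["remove_content", "in_app_warning"],
    "repeat_moderate": ["hide_from_discovery", "suspend_7_to_14_days"],
    "serious": ["suspend_30_days_or_terminate", "cross_account_ban"],
    "most_severe": ["terminate_immediately", "cross_account_ban",
                    "report_to_law_enforcement"],
}


def actions_for(tier: str) -> list[str]:
    """Look up typical actions for a tier; unknown tiers go to a human."""
    return ENFORCEMENT_LADDER.get(tier, ["escalate_to_human_review"])
```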

Cross-account ban mechanics

When we permanently terminate an account for a serious violation, we apply a cross-account ban that prevents the same person from creating a new account. The ban uses multiple identifiers:

  • Email address(es) associated with the account.
  • Phone number(s) associated with the account.
  • Payment instruments used for subscriptions.
  • Device fingerprint identifiers.

A determined person could potentially evade some of these — by using a new device, a new phone number, a new payment method — but the combined ban catches casual attempts. Repeated evasion attempts trigger additional review and may be referred to law enforcement.
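
Mechanically, this kind of ban list amounts to matching identifiers at signup. The sketch below is a simplified illustration under assumed details (SHA-256 with a fixed salt, an in-memory set); it is not a description of our actual storage or matching scheme.

```python
import hashlib


def _digest(value: str) -> str:
    """Store identifiers as salted hashes rather than raw values (illustrative;
    the real salting and storage scheme is not published)."""
    return hashlib.sha256(("ban-list|" + value.strip().lower()).encode()).hexdigest()


class CrossAccountBanList:
    """Match new signups against identifiers tied to previously banned accounts."""

    def __init__(self) -> None:
        self.banned_digests: set[str] = set()

    def add_banned(self, emails: list[str], phones: list[str],
                   payment_fingerprints: list[str], device_ids: list[str]) -> None:
        for value in [*emails, *phones, *payment_fingerprints, *device_ids]:
            self.banned_digests.add(_digest(value))

    def matches(self, *identifiers: str) -> bool:
        """True if any identifier from a new signup matches the ban list."""
        return any(_digest(v) in self.banned_digests for v in identifiers)
```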

What happens to the data of banned accounts

When an account is permanently terminated, the personal data associated with it is handled according to our Privacy Policy:

  • Active account data is purged within 30 days of termination.
  • Some data may be retained longer for safety investigations or to enforce the cross-account ban.
  • Verification documents are deleted with the rest of the account data.
  • Anonymized records of the violation are retained indefinitely to inform future moderation decisions.

Section 07

Appeals

If you believe a moderation decision was wrong, you can appeal. Appeals are reviewed by a different person than the one who made the original decision.

How to appeal

Write to support@favorconnect.com within 30 days of the action. Include:

  • Your registered email and account information.
  • A clear description of the action you're appealing.
  • Why you believe the decision was wrong.
  • Any context that would help a reviewer understand.

How appeals are reviewed

A separate moderator reviews:

  • The original report or flag.
  • The evidence used in the original decision.
  • Your appeal and the context you've provided.
  • Our Community Guidelines and policies as they applied at the time of the action.

If the appeal reviewer reaches the same conclusion as the original decision, the action stands and you'll receive a final response. If the appeal reviewer disagrees, the action is reversed and your account is restored to its prior state.

Appeals we do not consider

Some categories of action are not subject to appeal because of their severity:

  • Permanent termination for sexual content involving minors.
  • Permanent termination for adults who interacted inappropriately with minors.
  • Permanent termination for credible threats of violence carried out off-platform.
  • Account terminations resulting from law enforcement requests or court orders.

Timing

Standard appeals are reviewed within 7 days. Complex appeals (involving multiple parties, ongoing investigations, or external coordination) may take longer; you'll be told if so.

Section 08

Transparency commitments

We commit to being transparent about how moderation works — including its limitations.

What we publish

  • This Content Moderation Statement, kept up to date as our practices evolve.
  • Our Community Guidelines, with the full list of what's allowed and prohibited.
  • Our Safety Center, with guidance for members on staying safe.
  • Our Underage User Policy, with explicit commitments around child safety.
  • Our Grievance Officer details, for formal complaints under Indian law.

What we may publish in the future

As Favor grows, we may publish additional materials such as:

  • Periodic transparency reports on the volume and types of moderation actions taken.
  • Information about government and law enforcement requests we've received.
  • Updates on emerging threats we're seeing on the platform.
  • Year-over-year changes in our moderation practices and infrastructure.

What we don't publish

Some information is deliberately kept private, including:

  • Specific signals our automated systems use to detect violations (publishing these would let bad actors evade them).
  • Details of individual moderation decisions (publishing these would compromise the privacy of the members involved).
  • The exact thresholds our systems use for confidence scoring (also evasion-related).
  • The internal training materials our moderators use (these contain examples drawn from real cases).

Section 09

How this connects to your privacy

Moderation requires access to content. We've built our systems to balance the necessary access against your reasonable expectation of privacy.

Content we access for moderation

  • Profile content — visible to all members anyway, so moderation review doesn't extend access.
  • Messages — accessed when reports involve them, or when automated systems flag specific patterns. Not browsed for any other purpose.
  • Photos — every photo is reviewed by automated systems before reaching another member; human review applies to flagged photos only.
  • Verification documents — reviewed during verification, then stored encrypted and accessed only if verification needs to be re-checked or if law enforcement requests access through valid legal process.

Content we don't access

  • Voice and video calls (not recorded, not stored).
  • Messages in conversations not flagged by automation or members.
  • Anything that has been deleted (subject to backup retention as described in our Privacy Policy).

Who at Favor can access what

Different roles have different levels of access:

  • Moderators see content related to the reports or flags they're investigating.
  • Engineers can access systems-level data needed to operate the Service but not specific content unless investigating a specific incident.
  • Support staff can access basic account information needed to help members but not message content.
  • Leadership reviews aggregated data and specific high-priority cases.

All access is logged. Access to sensitive content (verification photos, IDs, intimate content reported by members) is additionally audited.
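
As a rough illustration of role-scoped access with logging, the sketch below uses hypothetical role names, resource labels, and an in-memory audit list; the real access-control and audit systems are more involved.

```python
from datetime import datetime, timezone

# Hypothetical role-to-scope mapping reflecting the roles described above.
ACCESS_SCOPES = {
    "moderator": {"reported_content", "profile_snapshot", "chat_context",
                  "verification_photo", "reported_intimate_content"},
    "engineer": {"system_metrics"},              # content only via incident process
    "support": {"account_basics"},               # no message content
    "leadership": {"aggregates", "high_priority_cases"},
}

# Resources whose access is additionally audited.
SENSITIVE = {"verification_photo", "id_document", "reported_intimate_content"}


def access(role: str, resource: str, case_id: str, audit_log: list[dict]) -> bool:
    """Allow access only within the role's scope, and log every attempt."""
    allowed = resource in ACCESS_SCOPES.get(role, set())
    audit_log.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "resource": resource,
        "case_id": case_id,
        "allowed": allowed,
        "extra_audit": resource in SENSITIVE,
    })
    return allowed
```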

Section 10

Working with external partners

Some moderation is done in partnership with external organizations specialized in specific areas of safety.

Child safety

We report content depicting child sexual exploitation to the National Center for Missing and Exploited Children (NCMEC) through their CyberTipline. We use industry-standard hashing tools such as PhotoDNA to identify known harmful imagery. We participate in coordinated efforts among technology companies to remove this content quickly across platforms.

Fraud and scams

We use commercial fraud detection services to identify patterns of romance scams, financial fraud, and account takeover attempts. These services don't have access to your message content beyond what's necessary for fraud detection.

Trust and safety industry

We participate in the broader trust and safety community — sharing information about emerging threats, learning from peers, and contributing to best practices for online safety. Where applicable, we exchange threat intelligence with other platforms about coordinated bad actors operating across services.

Legal and regulatory

We cooperate with law enforcement and regulators in India and other jurisdictions where Favor is available, following applicable legal process. See our Privacy Policy for more on how we handle law enforcement requests.

Section 11

Limitations and honest caveats

To be straight with you about what we can and can't do:

We can't prevent every bad outcome

No moderation system catches everything. Sometimes harmful content reaches members before we remove it. Sometimes legitimate content gets removed in error. Sometimes patient bad actors find ways through our checks. We work continuously to narrow these gaps, but we don't pretend they don't exist.

Automated systems make mistakes

Our automated systems have false positives (flagging content that's actually fine) and false negatives (missing content that's actually a violation). We tune them toward catching more violations even at the cost of some false positives, because the harm from missed violations is generally greater. The appeals process exists specifically to correct false positives.

Human reviewers make mistakes too

Reviewers are professionals applying our policies as consistently as they can, but they're also human. They have busy days, ambiguous cases, and tough judgment calls. The appeals process catches a portion of these mistakes; we keep working to improve consistency over time.

The threat landscape evolves

New scam scripts, new evasion tactics, new categories of harm emerge regularly. Some of what we catch today wasn't on our radar a year ago. Some of what we'll catch a year from now we haven't seen yet. Continuous adaptation is part of the work.