Content Moderation Policy

Effective Date: 30 March 2025

Last Updated: 30 March 2025

Fortune Finders Limited (hereinafter referred to as “Swark.ai”) is committed to creating a safe, respectful, and inclusive platform for all users. To that end, we enforce a clear and transparent content moderation policy that outlines what is and is not acceptable on our platform.

1. Scope of Policy

This policy applies to all content submitted to Swark.ai, including but not limited to:

- Text, images, videos, and audio

- Usernames and profiles

- Comments, posts, messages, and AI-generated content

2. Prohibited Content

Users may not upload, generate, share, or otherwise distribute content that includes:

a. Illegal Content

- Violations of any local, national, or international laws

- Child sexual abuse material (CSAM)

- Promotion of terrorism or violent extremism

- Intellectual property violations (copyright/trademark infringement)

b. Hate Speech and Harassment

- Content that promotes violence or hatred against individuals or groups based on race, ethnicity, religion, disability, gender, age, nationality, sexual orientation, or gender identity

- Bullying, targeted harassment, doxxing, or threats

c. Violence and Harm

- Graphic violence or gore

- Threats or incitement to harm oneself or others

- Suicide encouragement or instructions for self-harm

d. Misinformation and Manipulation

- Health or election misinformation that may cause public harm

- Deepfakes or manipulated media intended to deceive

e. Spam and Scams

- Malicious links, phishing attempts, or financial scams

- Repetitive or misleading content used to manipulate search or engagement

f. Sexually Explicit Content

- Pornography or sexually explicit material

- Non-consensual intimate imagery or revenge porn

3. Moderation Approach

Swark.ai uses a combination of:

- Automated moderation to flag potentially violating content using machine learning and heuristic tools.

- Human review by trained moderators for nuanced or complex cases.

- Community reporting mechanisms to empower users to flag violations.

Moderation decisions are based on this policy and are applied fairly and consistently.

4. User Responsibilities

- Abide by this policy and the Terms of Service at all times

- Report violations using our in-platform tools

- Respect appeals processes and community standards

5. Enforcement Actions

Violations of this policy may result in:

- Content removal

- Warning notices

- Temporary or permanent account suspension

- Reporting to law enforcement when legally required

6. Appeals Process

Users may appeal moderation decisions by contacting [email protected]. Appeals will be reviewed by a separate moderation team, and a response will be provided within a reasonable time frame.