Media Check
The media check system automatically analyzes stickers and custom emojis sent by users in the chat and applies configured moderation rules based on the detected content.
How it Works
The bot uses AI-based classification to determine the type of content in media files. Each sticker or emoji is automatically checked for unwanted content.
Types of Content Checked
The system recognizes the following categories of potentially harmful content:
Category: Blood
- blood-blood - images of blood
- blood-killing - scenes of violence with blood
Category: Epileptic Content
- epileptic-scare - alarming flashing content
- epileptic-seizure - content that can trigger an epileptic seizure
Category: Politics
- politics-flags - political flags
- politics-nazi - Nazi symbolism
- politics-presidents - images of politicians
Category: Pornography
- porn-child - child pornography
- porn-nudity - nudity
- porn-sex - sexual content
- porn-sexualize - sexualized content
Direct Categories
- swearing - profanity
- animal_abuse - animal abuse
- crash - content that can cause application crashes
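The taxonomy above can be represented as a simple mapping. This is an illustrative sketch, not the bot's actual data model; all names are assumptions.

```python
# Hypothetical representation of the media-check taxonomy described above.
# Grouped categories map to their subcategory labels; direct categories
# have no subcategories and are matched on the label itself.
CATEGORIES = {
    "blood": ["blood-blood", "blood-killing"],
    "epileptic": ["epileptic-scare", "epileptic-seizure"],
    "politics": ["politics-flags", "politics-nazi", "politics-presidents"],
    "porn": ["porn-child", "porn-nudity", "porn-sex", "porn-sexualize"],
}
DIRECT_CATEGORIES = ["swearing", "animal_abuse", "crash"]

def expand(label: str) -> list[str]:
    """Return every concrete label a rule on `label` covers."""
    if label in CATEGORIES:        # whole category -> all its subcategories
        return CATEGORIES[label]
    return [label]                 # a subcategory or a direct category
```

For example, a rule on "porn" expands to all four porn-* subcategories, while a rule on "swearing" covers only itself.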
Checking Modes
1. Checking Individual Stickers
When a user sends a standalone sticker that does not belong to a sticker pack, the bot checks only that file. If the content matches a rule configured with the "sent" condition, a penalty is applied.
2. Checking Sticker Packs
When a user sends a sticker from a sticker pack:
- Checking the current sticker - the sent sticker is first checked with the "sent" condition.
- Analyzing the entire pack - the entire sticker pack is then analyzed.
- Calculating the percentage - the percentage of potentially harmful content in the pack is determined.
- Applying rules - if the percentage exceeds the threshold, a penalty is applied.
Example: If a sticker pack contains 50 stickers, and 15 of them contain profanity, the percentage is 30%. If a rule is configured to "delete at 20%", the sticker will be deleted.
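The arithmetic in the example above can be sketched as follows; the function and variable names are hypothetical, not part of the bot.

```python
def pack_violation_percent(flags: list[bool]) -> float:
    """Percentage of stickers in a pack flagged by the classifier."""
    return 100.0 * sum(flags) / len(flags)

# Numbers from the example: 15 flagged stickers in a pack of 50.
flags = [True] * 15 + [False] * 35
percent = pack_violation_percent(flags)   # 30.0
threshold = 20.0                          # a rule configured as "delete at 20%"
should_delete = percent > threshold       # True: 30% exceeds the 20% threshold
```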
3. Checking Custom Emojis
The bot checks all custom emojis in a message. If any emoji violates the rules with the "sent" condition, a penalty is applied.
Rule Configuration
For each content category or subcategory, you can configure moderation rules.
Trigger Conditions
- sent - the rule triggers when any file with this type of content is sent.
- X% - the rule triggers when the percentage of potentially harmful content in a sticker pack exceeds X%.
Penalty Types
- notify - sends a notification to the trigger channel.
- delete - deletes the message.
- warn - issues a warning to the user and deletes the message.
- mute - blocks the user from sending messages for a specified period.
- ban - removes the user from the chat for a specified period or permanently.
Rule Aggressiveness
When multiple rules are triggered simultaneously, the most aggressive one is applied. Penalties are ordered from weakest to strongest: ignore → notify → delete → warn → mute → ban
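Selecting the most aggressive penalty amounts to a maximum over this ordering. A minimal sketch, with illustrative names:

```python
# Penalty severity order described above, from weakest to strongest.
SEVERITY = ["ignore", "notify", "delete", "warn", "mute", "ban"]

def most_aggressive(penalties: list[str]) -> str:
    """Pick the strongest penalty among simultaneously triggered rules."""
    return max(penalties, key=SEVERITY.index)
```

For instance, if rules with "notify", "delete", and "mute" all trigger on the same sticker, only the mute is applied.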
Usage Examples
Example 1: Blocking pornography
- Category: porn
- Condition: sent
- Penalty: ban permanently
With this setting, any sticker with pornographic content will result in an immediate ban.
Example 2: Warning for profanity
- Category: swearing
- Condition: sent
- Penalty: warn
The user will receive a warning when sending a sticker with profanity.
Example 3: Controlling sticker packs with political content
- Category: politics
- Condition: 30%
- Penalty: delete
If more than 30% of the stickers in the pack contain political content, the sent sticker will be deleted.
Example 4: Notification about violence
- Subcategory: blood-killing
- Condition: sent
- Penalty: notify
Administrators will receive notifications about stickers with scenes of violence in the trigger channel.
Rule Management
Adding a Rule
- Select a category or subcategory of content.
- Click "Add".
- Specify the trigger condition (sent or percentage).
- Select the penalty type.
- Specify the duration (for mute and ban).
Deleting a Rule
Click an existing rule to delete it.
Viewing Rules
All configured rules are displayed in the format:
[condition] → [penalty] (duration)
For example:
- sent → ban (permanently)
- 25% → delete
- sent → mute (1 day)
Important Notes
- Moderators are exempt from media file checks.
- Official Telegram service messages (from user ID 777000) are not checked.
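Both exemptions reduce to a simple pre-check before classification. A hedged sketch; the function name and signature are assumptions:

```python
TELEGRAM_SERVICE_ID = 777000  # Telegram's official service account

def is_exempt(user_id: int, is_moderator: bool) -> bool:
    """Moderators and official Telegram messages skip the media check."""
    return is_moderator or user_id == TELEGRAM_SERVICE_ID
```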
Categories and Subcategories
When configuring rules, you can choose:
- The entire category (e.g., "porn") - the rule will apply to all subcategories.
- A specific subcategory (e.g., "porn-nudity") - the rule will only apply to that type.
If rules are configured for both a category and a subcategory, they are combined, and the most aggressive penalty is applied.