A multilingual profanity detection and filtering engine for modern applications — by GLINCKER
Glin-Profanity is a high-performance, cross-platform library built to detect, filter, and sanitize profane or harmful language in user-generated content. Available for both JavaScript/TypeScript and Python, it provides unified APIs with support for 20+ languages, configurable severity levels, obfuscation detection, and seamless framework integration.
Whether you're moderating chat messages, community forums, or content input forms, Glin-Profanity empowers you to:
- 🛡️ Filter text with real-time or batch processing
- 🗣️ Detect offensive terms in 20+ human languages
- 💬 Catch obfuscated profanity like `sh1t`, `f*ck`, `a$hole`
- 🎚️ Adjust severity thresholds (`Exact`, `Fuzzy`, `Merged`)
- 🔁 Replace bad words with symbols or emojis
- 🧩 Works in any JavaScript environment
- 🛡️ Add custom word lists or ignore specific terms
- ⚡ Enjoy identical APIs across JavaScript and Python
- 🌍 Multi-language Support: 20+ languages including English, Spanish, French, German, Arabic, Chinese, and more
- 🎯 Context-Aware Filtering: Advanced context analysis to reduce false positives
- ⚙️ Highly Configurable: Customize word lists, severity levels, and filtering behavior
- 🚀 High Performance: Optimized algorithms for speed and efficiency
- 🔧 Easy Integration: Simple APIs that work with any JavaScript/TypeScript or Python application
- 📝 Unified API: Identical functionality across both languages with consistent naming
- 🧪 Well Tested: Comprehensive test suite ensuring reliability and cross-language parity
- ⚛️ React Hook: Built-in `useProfanityChecker` hook for React applications
- 🔍 Obfuscation Detection: Advanced pattern matching for disguised profanity
- 🎚️ Severity Levels: Configurable severity detection and filtering
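To make the word-boundary filtering and replacement features above concrete, here is a minimal, self-contained Python sketch. It is illustrative only — the hypothetical `filter_text` helper is defined here, not imported from the library, and Glin-Profanity's real matching adds obfuscation handling, severity levels, and multilingual dictionaries on top of this kind of core logic:

```python
import re

def filter_text(text, banned, replace_with="***"):
    """Naive word-boundary filter: replace each banned word with a mask.

    Illustrative sketch only -- not Glin-Profanity's implementation.
    """
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(w) for w in banned) + r")\b",
        re.IGNORECASE,
    )
    found = pattern.findall(text)
    return pattern.sub(replace_with, text), [w.lower() for w in found]

cleaned, words = filter_text("This is a damn example", ["damn", "heck"])
print(cleaned)  # This is a *** example
print(words)    # ['damn']
```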
- 🚀 Key Features
- 📦 Installation
- 🌍 Supported Languages
- ⚙️ Quick Start
- 🧠 API Reference
- 🔧 Advanced Usage
- 📁 Monorepo Structure
- 🛠 Use Cases
- ⚠️ Important Notes
- 📄 License
```bash
# JavaScript/TypeScript (pick one)
npm install glin-profanity
yarn add glin-profanity
pnpm add glin-profanity
```

```bash
# Python (pick one)
pip install glin-profanity
poetry add glin-profanity
```
Glin-Profanity includes comprehensive profanity dictionaries for 23 languages:
🇸🇦 Arabic • 🇨🇳 Chinese • 🇨🇿 Czech • 🇩🇰 Danish • 🇬🇧 English • 🌍 Esperanto • 🇫🇮 Finnish • 🇫🇷 French • 🇩🇪 German • 🇮🇳 Hindi • 🇭🇺 Hungarian • 🇮🇹 Italian • 🇯🇵 Japanese • 🇰🇷 Korean • 🇳🇴 Norwegian • 🇮🇷 Persian • 🇵🇱 Polish • 🇵🇹 Portuguese • 🇷🇺 Russian • 🇪🇸 Spanish • 🇸🇪 Swedish • 🇹🇭 Thai • 🇹🇷 Turkish
Note: The JavaScript and Python packages maintain cross-language parity, ensuring consistent profanity detection across both ecosystems.
```js
const { checkProfanity } = require('glin-profanity');

// Basic usage
const result = checkProfanity("This is a damn example", {
  languages: ['english', 'spanish'],
  replaceWith: '***'
});

console.log(result.containsProfanity); // true
console.log(result.profaneWords);      // ['damn']
console.log(result.processedText);     // "This is a *** example"
```
```ts
import { checkProfanity, ProfanityCheckerConfig } from 'glin-profanity';

const config: ProfanityCheckerConfig = {
  languages: ['english', 'spanish'],
  severityLevels: true,
  autoReplace: true,
  replaceWith: '***'
};

const result = checkProfanity("inappropriate text", config);
```
```python
from glin_profanity import Filter, SeverityLevel

# Basic usage
filter_instance = Filter()

# Check if text contains profanity
if filter_instance.is_profane("This is a damn example"):
    print("Profanity detected!")

# Get detailed results
result = filter_instance.check_profanity("This is a damn example")
print(result["profane_words"])       # ['damn']
print(result["contains_profanity"])  # True
print(result["processed_text"])      # "This is a **** example" (if replace_with is set)

# Advanced configuration
advanced_filter = Filter({
    "languages": ["english", "spanish"],
    "case_sensitive": False,
    "replace_with": "***",
    "severity_levels": True,
    "allow_obfuscated_match": True,
    "custom_words": ["badword", "anotherbad"],
    "ignore_words": ["exception"],
})
```
```tsx
import React, { useState } from 'react';
import { useProfanityChecker, SeverityLevel } from 'glin-profanity';

const ChatModerator = () => {
  const [message, setMessage] = useState('');

  const { result, checkText } = useProfanityChecker({
    languages: ['english', 'spanish'],
    severityLevels: true,
    autoReplace: true,
    replaceWith: '***',
    minSeverity: SeverityLevel.EXACT
  });

  const handleSubmit = () => {
    checkText(message);
    if (result && !result.containsProfanity) {
      sendMessage(message);
    } else {
      alert('Please keep your message clean!');
    }
  };

  return (
    <div>
      <input
        value={message}
        onChange={(e) => setMessage(e.target.value)}
        placeholder="Type your message..."
      />
      <button onClick={handleSubmit}>Send</button>
      {result && result.containsProfanity && (
        <div style={{ color: 'red' }}>
          ⚠️ Inappropriate content detected: {result.profaneWords.join(', ')}
        </div>
      )}
    </div>
  );
};
```
```vue
<template>
  <div>
    <input v-model="text" @input="checkContent" />
    <p v-if="hasProfanity">{{ cleanedText }}</p>
  </div>
</template>

<script setup>
import { ref } from 'vue';
import { checkProfanity } from 'glin-profanity';

const text = ref('');
const hasProfanity = ref(false);
const cleanedText = ref('');

const checkContent = () => {
  const result = checkProfanity(text.value, {
    languages: ['english'],
    autoReplace: true,
    replaceWith: '***'
  });
  hasProfanity.value = result.containsProfanity;
  cleanedText.value = result.autoReplaced;
};
</script>
```
```ts
import { Component } from '@angular/core';
import { checkProfanity, ProfanityCheckResult } from 'glin-profanity';

@Component({
  selector: 'app-comment',
  template: `
    <textarea [(ngModel)]="comment" (ngModelChange)="validateComment()"></textarea>
    <div *ngIf="profanityResult?.containsProfanity" class="error">
      Please remove inappropriate language
    </div>
  `
})
export class CommentComponent {
  comment = '';
  profanityResult: ProfanityCheckResult | null = null;

  validateComment() {
    this.profanityResult = checkProfanity(this.comment, {
      languages: ['english', 'spanish'],
      severityLevels: true
    });
  }
}
```
Both packages provide identical functionality with language-appropriate naming conventions:
| JavaScript | Python | Description |
|---|---|---|
| `checkProfanity(text, config)` | `check_profanity(text, config)` | Framework-agnostic profanity detection |
| `isProfane(text)` | `is_profane(text)` | Check if text contains profanity |
| `checkProfanityAsync(text, config)` | `check_profanity_async(text, config)` | Async profanity detection |
| `isWordProfane(word, config)` | `is_word_profane(word, config)` | Check a single word |
| `Filter` class | `Filter` class | Low-level filter class for advanced usage |
| `useProfanityChecker` hook | N/A | React-specific hook |
```ts
interface ProfanityCheckerConfig {
  languages?: Language[];           // Specific languages to check
  allLanguages?: boolean;           // Check all available languages
  caseSensitive?: boolean;          // Case-sensitive matching
  wordBoundaries?: boolean;         // Enforce word boundaries
  replaceWith?: string;             // Replacement text for profane words
  severityLevels?: boolean;         // Enable severity level detection
  customWords?: string[];           // Add custom profane words
  ignoreWords?: string[];           // Words to ignore
  allowObfuscatedMatch?: boolean;   // Detect obfuscated profanity
  fuzzyToleranceLevel?: number;     // Fuzzy matching tolerance (0-1)
  minSeverity?: SeverityLevel;      // Minimum severity to flag
  autoReplace?: boolean;            // Auto-replace profanity
  customActions?: (result) => void; // Custom callback
}
```
```python
from typing import TypedDict, List, Optional

class FilterConfig(TypedDict, total=False):
    languages: Optional[List[str]]          # Specific languages to check
    all_languages: Optional[bool]           # Check all available languages
    case_sensitive: Optional[bool]          # Case-sensitive matching
    word_boundaries: Optional[bool]         # Enforce word boundaries
    replace_with: Optional[str]             # Replacement text for profane words
    severity_levels: Optional[bool]         # Enable severity level detection
    custom_words: Optional[List[str]]       # Add custom profane words
    ignore_words: Optional[List[str]]       # Words to ignore
    allow_obfuscated_match: Optional[bool]  # Detect obfuscated profanity
    fuzzy_tolerance_level: Optional[float]  # Fuzzy matching tolerance (0-1)
    enable_context_aware: Optional[bool]    # Context-aware filtering
    context_window: Optional[int]           # Context analysis window size
    confidence_threshold: Optional[float]   # Context confidence threshold
    log_profanity: Optional[bool]           # Enable debug logging
```
JavaScript/TypeScript:

```ts
interface ProfanityCheckResult {
  containsProfanity: boolean; // Whether profanity was detected
  profaneWords: string[];     // List of detected profane words
  processedText?: string;     // Text with replacements
  severityMap?: Record<string, SeverityLevel>; // Word-to-severity mapping
  filteredWords: string[];    // Words filtered by minSeverity
  autoReplaced: string;       // Text with auto-replacements
}
```
Python:

```python
from typing import TypedDict, List, Optional, Dict

class CheckProfanityResult(TypedDict):
    contains_profanity: bool                          # Whether profanity was detected
    profane_words: List[str]                          # List of detected profane words
    processed_text: Optional[str]                     # Text with replacements
    severity_map: Optional[Dict[str, SeverityLevel]]  # Word-to-severity mapping
    matches: Optional[List[Match]]                    # Detailed match information
    context_score: Optional[float]                    # Context analysis score
    reason: Optional[str]                             # Analysis explanation
```
```js
// JavaScript
import { checkProfanity, SeverityLevel } from 'glin-profanity';

const result = checkProfanity("This sh1t is damn bad", {
  customWords: ['companyname', 'competitorname'],
  ignoreWords: ['assassin', 'classical'], // False positives
  severityLevels: true,
  minSeverity: SeverityLevel.EXACT,
  fuzzyToleranceLevel: 0.7
});

console.log(result.filteredWords); // Only exact matches
```
```python
# Python
from glin_profanity import Filter, SeverityLevel

filter_instance = Filter({
    "custom_words": ["companyname", "competitorname"],
    "ignore_words": ["assassin", "classical"],  # False positives
    "severity_levels": True,
    "fuzzy_tolerance_level": 0.7,
})

# Filter by minimum severity
result = filter_instance.check_profanity_with_min_severity(
    "This sh1t is damn bad",
    SeverityLevel.EXACT,
)
```
```js
// JavaScript
import { checkProfanity } from 'glin-profanity';

const result = checkProfanity("What the f*ck is this sh1t?", {
  allowObfuscatedMatch: true,
  wordBoundaries: false, // Required for obfuscation detection
  fuzzyToleranceLevel: 0.8
});

// Detects: f*ck, sh1t, a$$hole, etc.
console.log(result.containsProfanity); // true
```
```python
# Python
filter_instance = Filter({
    "allow_obfuscated_match": True,
    "word_boundaries": False,  # Required for obfuscation detection
    "fuzzy_tolerance_level": 0.8,
})

# Detects: f*ck, sh1t, a$$hole, etc.
filter_instance.is_profane("What the f*ck is this sh1t?")  # True
```
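At its core, obfuscation matching amounts to undoing common character substitutions before comparing against the dictionary. A simplified stdlib-only sketch — the `matches_obfuscated` helper is hypothetical and far cruder than the library's pattern matching:

```python
import re

# Common leetspeak substitutions mapped back to letters
LEET = str.maketrans({"1": "i", "3": "e", "4": "a", "0": "o", "$": "s", "@": "a"})

def matches_obfuscated(word, entry):
    """True if `word` matches dictionary `entry` after undoing common
    substitutions; '*' is treated as a single-character wildcard.
    Toy sketch, not Glin-Profanity's algorithm."""
    normalized = word.lower().translate(LEET)
    pattern = "^" + re.escape(normalized).replace(r"\*", ".") + "$"
    return re.match(pattern, entry) is not None

print(matches_obfuscated("sh1t", "shit"))      # True
print(matches_obfuscated("f*ck", "fuck"))      # True
print(matches_obfuscated("a$$hole", "asshole"))  # True
print(matches_obfuscated("nice", "shit"))      # False
```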
```js
// JavaScript
const result = checkProfanity("This is merde and puta content", {
  languages: ['english', 'spanish', 'french'],
  // or use: allLanguages: true
});

console.log(result.profaneWords); // ['merde', 'puta']
```
```python
# Python
multi_lang_filter = Filter({
    "languages": ["english", "spanish", "french"],
    # or use: "all_languages": True
})

# Detects profanity in multiple languages
text = "This is merde and puta content"
result = multi_lang_filter.check_profanity(text)
```
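Conceptually, multi-language checking unions the word lists for the requested languages before scanning the text. A toy sketch with placeholder single-word dictionaries (the shipped word lists are far larger, and `check_multi` is a hypothetical helper, not the library's API):

```python
import re

# Placeholder dictionaries -- stand-ins for the shipped JSON word lists
DICTIONARIES = {
    "english": {"damn"},
    "spanish": {"puta"},
    "french": {"merde"},
}

def check_multi(text, languages):
    """Scan tokens against the union of the requested languages' word sets."""
    vocab = set().union(*(DICTIONARIES[lang] for lang in languages))
    tokens = re.findall(r"\w+", text.lower())
    return [t for t in tokens if t in vocab]

print(check_multi("This is merde and puta content",
                  ["english", "spanish", "french"]))
# ['merde', 'puta']
```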
```text
glin-profanity/
├── packages/
│   ├── js/                      # JavaScript/TypeScript package
│   │   ├── src/                 # TypeScript source code
│   │   │   ├── core/            # Framework-agnostic functions
│   │   │   ├── filters/         # Filter class implementation
│   │   │   ├── hooks/           # React hooks
│   │   │   ├── types/           # TypeScript definitions
│   │   │   └── utils/           # Utility functions
│   │   ├── lib/                 # Built CJS + ESM outputs
│   │   ├── tests/               # Jest test suite
│   │   └── package.json         # npm package configuration
│   └── py/                      # Python package
│       ├── glin_profanity/      # Python source code
│       │   ├── __init__.py      # Package exports
│       │   ├── filters/         # Filter implementation
│       │   ├── data/            # Dictionary loader
│       │   ├── types/           # Type definitions
│       │   └── nlp/             # NLP utilities
│       ├── tests/               # pytest test suite
│       └── pyproject.toml       # Python package configuration
├── shared/
│   └── dictionaries/            # JSON word lists (20+ languages)
│       ├── english.json
│       ├── spanish.json
│       ├── french.json
│       └── ...
├── tests/                       # Cross-language parity tests
├── scripts/                     # Build and release utilities
├── .github/workflows/           # CI/CD pipelines
└── assets/                      # Documentation assets
```
- 🔐 Chat Moderation: Real-time filtering in messaging applications
- 🧼 Content Sanitization: Clean user-generated content for blogs and forums
- 🕹️ Gaming: Moderate player communications in multiplayer games
- 🤖 AI Content Filters: Pre-process input before AI model training
- 📱 Social Media: Automated content moderation at scale
- 🎓 Educational Platforms: Maintain appropriate learning environments
- 💼 Corporate Communications: Filter internal chat and collaboration tools
- ⚠️ Best Effort Tool: Glin-Profanity is a best-effort solution. Language evolves constantly, and no filter is 100% perfect.
- 👥 Human Moderation: Always supplement automated filtering with human moderation for high-risk or sensitive platforms.
- 🔄 Regular Updates: Keep the library updated to benefit from new language patterns and improved detection algorithms.
- ⚖️ Context Matters: Consider enabling context-aware filtering to reduce false positives in legitimate discussions.
- 🌍 Cultural Sensitivity: Different cultures have varying standards - configure accordingly for your audience.
This software is available under a dual license:
This project is primarily licensed under the MIT License - see the LICENSE file for details. You are free to use, modify, and distribute this software for both personal and commercial purposes.
This software is also available under the GLINCKER LLC proprietary license for enterprise use cases requiring:
- Commercial support and guarantees
- Custom feature development
- Enhanced SLA commitments
- Dedicated technical consultation
For proprietary licensing inquiries, contact GLINCKER.
Glin-Profanity is developed and maintained by GLINCKER, a technology company focused on building developer tools and content moderation solutions.
- 🌐 Website: glincker.com/tools/glin-profanity
- 📖 Documentation: GitHub Repository
- 🐛 Report Issues: GitHub Issues
- 💬 Community: GitHub Discussions
- 📧 Contact: [email protected]
We welcome contributions from the community! Please see our Contributing Guidelines for details on:
- 🐛 Reporting bugs and issues
- 💡 Suggesting new features
- 🔧 Submitting code changes
- 📖 Improving documentation
- 🌍 Adding new language support
- 🌟 Community contributors who helped expand language support
- 🔧 Open source libraries that inspired the architecture
- 🗣️ Users who provide valuable feedback and report issues
- 🌍 Linguistic experts who helped improve detection accuracy