# CyberGuard

A cyberbullying detection and prevention browser extension that helps create safer online spaces.
CyberGuard is a Chrome extension designed to identify and prevent cyberbullying through advanced text analysis and machine learning techniques. It provides real-time monitoring of Reddit comments and automatically censors potentially harmful content.
## Features

- Real-time comment analysis
- Automatic censoring of harmful content
- Support for Reddit comments
- Privacy-focused design
- Fast parallel processing
- Dynamic content detection
## Model Training

Prerequisites:

- Python 3.6 or higher
- Jupyter Notebook
- Required Python packages (listed in `requirements.txt`)
1. Install the required packages:

   ```bash
   pip install -r requirements.txt
   ```
2. Open the training notebook:
   - Navigate to the project directory
   - Open `train.ipynb` in Jupyter Notebook or JupyterLab:

     ```bash
     jupyter notebook train.ipynb
     ```
3. Prepare the dataset:
   - The notebook uses a cyberbullying dataset
   - Ensure the dataset is at the path specified in the notebook
   - The data should contain text samples and their corresponding labels
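The exact schema depends on the dataset you use. As a minimal sketch, assuming a CSV file with `text` and `label` columns (hypothetical names — match them to your actual file), loading the (text, label) pairs might look like:

```python
import csv
from io import StringIO

def load_samples(csv_text):
    """Parse (text, label) pairs from CSV content.

    Assumes columns named 'text' and 'label'; adjust these
    names to whatever your dataset actually uses.
    """
    reader = csv.DictReader(StringIO(csv_text))
    return [(row["text"], row["label"]) for row in reader]

# Tiny inline sample standing in for the real dataset file:
sample = "text,label\nyou are great,not_cyberbullying\n"
pairs = load_samples(sample)
```

In the notebook you would read the file from disk instead of a string; the point is only that each row pairs one text sample with one label.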
4. Run the notebook:
   - Execute each cell in sequence
   - The notebook will:
     - Load and preprocess the data
     - Train the model
     - Save the trained model
     - Deploy it to ModelBit
5. Deploy to ModelBit:
   - Create a ModelBit account if you don't have one
   - Follow the notebook instructions to deploy your model
   - Save the API endpoint URL for use in the extension
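ModelBit deploys an ordinary Python function as an API endpoint. The sketch below is a hypothetical stand-in, not the notebook's trained model: `predict_toxicity` here scores by a toy word list purely to show the shape of the deployed function, and the `modelbit` calls in the comments follow its documented login/deploy pattern.

```python
def predict_toxicity(text: str) -> float:
    """Stand-in scorer: fraction of flagged words in the text.

    In the real notebook this function would wrap the trained model
    and return its toxicity probability in [0, 1].
    """
    flagged = {"hate", "stupid", "ugly"}  # toy list for illustration only
    words = text.lower().split()
    return sum(w in flagged for w in words) / max(len(words), 1)

# In the notebook, after training (requires a ModelBit account):
#   import modelbit
#   mb = modelbit.login()
#   mb.deploy(predict_toxicity)
```

Whatever function you deploy, its signature (text in, probability out) is what the extension's API calls rely on.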
## Extension Installation

Prerequisites:

- Google Chrome browser
- Internet connection (for API access)
1. Clone the repository:

   ```bash
   git clone https://github.com/BlazingPh0enix/cyberguard
   ```
2. Open Chrome and go to the Extensions page:
   - Type `chrome://extensions` in the address bar, or
   - Click the three-dots menu → More Tools → Extensions
3. Enable Developer Mode:
   - Toggle the "Developer mode" switch in the top-right corner
4. Load the extension:
   - Click "Load unpacked"
   - Navigate to the CyberGuard directory
   - Select the folder containing `manifest.json`
The extension icon should now appear in your Chrome toolbar.
## Usage

- Visit any Reddit page
- The extension will automatically:
  - Monitor comments as they load
  - Analyze them for harmful content
  - Censor potentially harmful comments with asterisks (`*`)
## How It Works

- Comments are analyzed by a machine learning model served through the ModelBit API
- If a comment's toxicity probability exceeds 50%, the comment is censored
- Censored text is replaced with asterisks matching the original length
- New comments are processed automatically as they load
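The censoring rule above is simple to state precisely. Here is a sketch of that logic, written in Python for clarity — `content.js` implements the equivalent in JavaScript:

```python
def censor(text: str, toxicity: float, threshold: float = 0.5) -> str:
    """Replace the whole comment with asterisks of equal length
    when the model's toxicity probability exceeds the threshold."""
    return "*" * len(text) if toxicity > threshold else text

censor("mean comment", 0.92)  # -> '************' (same length as the original)
censor("mean comment", 0.10)  # -> unchanged
```

Matching the original length keeps the page layout stable while still hiding the content.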
## Troubleshooting

If the extension isn't working:

- Check that it's enabled in Chrome Extensions
- Ensure you have an active internet connection
- Try refreshing the Reddit page
- If problems persist, try disabling and re-enabling the extension
## Project Structure

- `manifest.json`: Extension configuration
- `content.js`: Main extension logic
- `icon.png`: Extension icon
- `train.ipynb`: Model training notebook
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
## License

This project is licensed under the MIT License - see the LICENSE file for details.