AI-Powered Technical Writing Assistant with Local Ollama Integration
Transform your technical documentation with comprehensive style analysis, readability scoring, and AI-powered iterative rewriting. Designed for technical writers targeting 9th-11th grade readability standards.
Python 3.12+ (download from https://www.python.org/downloads/)
⚠️ CRITICAL: This project requires Python 3.12 or higher. Older versions WILL NOT WORK.

Quick Check:
python3.12 --version # Should show: Python 3.12.x
Windows:
# Navigate to project folder
cd C:\path\to\style-guide-ai
# Create and activate virtual environment (ensure Python 3.12 is installed)
python -m venv venv
venv\Scripts\activate
Linux (Fedora/RHEL-based):
# Update system and install Python 3.12
sudo dnf clean all
sudo dnf update
sudo dnf install python3.12
# Navigate to project folder
cd ~/path/to/style-guide-ai
# Create and activate virtual environment
python3.12 -m venv venv
source venv/bin/activate
macOS:
# Navigate to project folder
cd ~/path/to/style-guide-ai
# Create and activate virtual environment (ensure Python 3.12 is installed)
python3.12 -m venv venv
source venv/bin/activate
✅ You should see (venv) at the start of your command prompt.
# Upgrade pip first
pip install --upgrade pip
# Install all Python packages (conflict-free!)
pip install -r requirements.txt
python main.py
Then visit: http://localhost:5000 🚀
✨ That's it! The application will auto-setup everything on first run:
- ✅ SpaCy language models
- ✅ NLTK data downloads
- ✅ Directory creation
- ✅ Dependency verification
- ✅ Ollama detection & guidance
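Under the hood, a first-run bootstrap along these lines can verify that the language resources are present. This is an illustrative sketch only, not the project's actual startup code; the helper names `missing_packages` and `first_run_setup` are hypothetical:

```python
from importlib.util import find_spec
from pathlib import Path

def missing_packages(names):
    # Return the import names that are not installed in this environment
    return [n for n in names if find_spec(n) is None]

def first_run_setup():
    # Hypothetical bootstrap: check core NLP packages, create work dirs
    missing = missing_packages(["spacy", "nltk"])
    for d in ("uploads", "logs"):
        Path(d).mkdir(exist_ok=True)
    return missing
```

If `first_run_setup()` returns a non-empty list, the listed packages still need to be installed into the virtual environment.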
Always activate your virtual environment first:
Windows:
cd C:\path\to\style-guide-ai
venv\Scripts\activate
python main.py
Linux/macOS:
cd ~/path/to/style-guide-ai
source venv/bin/activate
python main.py
- Two-Pass Process: AI reviews and refines its own output
- Local Ollama Integration: Privacy-first with Llama models
- Real-Time Progress: Watch the AI improvement process step-by-step
- Smart Confidence Scoring: Know how much the AI improved your text
- Grade Level Assessment: Targets 9th-11th grade readability
- Multiple Readability Scores: Flesch, Gunning Fog, SMOG, Coleman-Liau, ARI
- Style Issues Detection: Passive voice, sentence length, wordiness
- Technical Writing Metrics: Custom scoring for documentation
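As a rough illustration of how such scores are computed, the Flesch Reading Ease formula is 206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words). The sketch below uses a crude vowel-group syllable heuristic and is not the library-grade implementation the app relies on:

```python
import re

def count_syllables(word):
    # Crude heuristic: each run of vowels counts as one syllable
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Higher scores mean easier text; scores around 60-70 roughly correspond to the 9th-11th grade band this tool targets.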
- Text Files: .txt, .md (Markdown)
- Documents: .docx (Microsoft Word)
- Technical Formats: .adoc (AsciiDoc), .dita (DITA)
- PDFs: Extract and analyze existing documents
- Direct Input: Paste text directly into the interface
- Real-Time Analysis: Instant feedback on text quality
- Interactive Error Highlighting: Click to see specific issues
- Progress Transparency: No fake spinners - see actual AI work
- Responsive Design: Works on desktop, tablet, and mobile
For the best AI rewriting experience, install Ollama with our recommended model:
- Download from: https://ollama.com/download/windows
- Run installer
- Open Command Prompt:
ollama pull llama3:8b
- Download from: https://ollama.com/download/mac
- Install .dmg file
- Open Terminal:
ollama pull llama3:8b
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3:8b
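On all three platforms, Ollama serves the same local REST API on port 11434 by default. A minimal sketch of calling it from Python is shown below; this is illustrative only, and the function names are not the project's actual client code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(model, prompt):
    # stream=False asks Ollama for one JSON response instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}

def rewrite(text, model="llama3:8b"):
    payload = build_payload(model, f"Rewrite for clarity: {text}")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Running `rewrite("In order to utilize the system...")` with the Ollama daemon up returns the model's rewritten text.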
Recommended Model:
- llama3:8b - Superior writing quality and reasoning (4.7GB) ⭐ Recommended
Alternative Models (if needed):
- llama3.2:3b - Good balance of speed and quality (2GB)
Why llama3:8b?
- ✅ Superior writing quality and reasoning capabilities
- ✅ Excellent for complex technical writing improvements
- ✅ Better understanding of context and nuance
- ✅ Optimal performance with our two-pass iterative process
If you prefer to use a different model than our recommended llama3:8b, you can easily customize it:
Step 1: Update Configuration
Edit config.py and change line 45:
# Change this line:
OLLAMA_MODEL = os.getenv('OLLAMA_MODEL', 'llama3:8b')
# To your preferred model, for example:
OLLAMA_MODEL = os.getenv('OLLAMA_MODEL', 'llama3.2:3b')
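Because the setting reads the OLLAMA_MODEL environment variable before falling back to the default, you can also override the model per run without editing config.py at all:

```python
import os

# The env var takes precedence; the second argument is only a fallback.
# e.g. on Linux/macOS: OLLAMA_MODEL=llama3.2:3b python main.py
model = os.getenv("OLLAMA_MODEL", "llama3:8b")
```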
Step 2: Pull Your Chosen Model
# For llama3.2:3b (faster, smaller)
ollama pull llama3.2:3b
# Or any other compatible model
ollama pull your-chosen-model
Step 3: Restart the Application
python main.py
Popular Alternative Models:
- llama3.2:3b - Good balance of speed and quality (2GB)
- llama3.2:1b - Very fast, basic quality (1.3GB)
- llama3:70b - Highest quality, requires powerful hardware (40GB)
- codellama:7b - Optimized for technical/code documentation (3.8GB)
For AsciiDoc document parsing and analysis, install the asciidoctor Ruby gem:
Windows:
- Download from: https://rubyinstaller.org/
- Run the installer (includes Ruby + DevKit)
- Open Command Prompt and verify:
ruby --version
macOS:
# Ruby is usually pre-installed. If not:
brew install ruby
# Verify installation
ruby --version
Linux:
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install ruby-full
# Fedora/RHEL
sudo dnf install ruby ruby-devel
# Verify installation
ruby --version
# Install asciidoctor
gem install asciidoctor
# Or with sudo if needed
sudo gem install asciidoctor
# Verify installation
asciidoctor --version
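Once the gem is installed, converting a document from Python is straightforward. The project itself uses a persistent Ruby server for speed, but a simple subprocess fallback looks roughly like this (an illustrative sketch; the function names are not the project's API):

```python
import subprocess

def asciidoctor_cmd(path):
    # -b html5: HTML backend; -o -: write converted output to stdout
    return ["asciidoctor", "-b", "html5", "-o", "-", path]

def convert(path):
    # Raises CalledProcessError if asciidoctor reports a failure
    result = subprocess.run(
        asciidoctor_cmd(path), capture_output=True, text=True, check=True
    )
    return result.stdout
```

Spawning a fresh Ruby process per document is why the subprocess approach is slow; the persistent-server design amortizes interpreter startup across requests.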
- ✅ High-Performance Parsing: Uses persistent Ruby server (15x faster than subprocess)
- ✅ Full AsciiDoc Support: Complete parsing of admonitions, tables, includes, etc.
- ✅ Accurate Structure Analysis: Proper block-level content analysis
- ✅ Document Title Detection: Correctly identifies and displays document titles
Without Asciidoctor:
- ⚠️ AsciiDoc parsing will be limited to basic text extraction
- ⚠️ Document structure analysis may be incomplete
- ⚠️ Style analysis won't recognize AsciiDoc-specific elements
For comprehensive troubleshooting, see: SETUP_TROUBLESHOOTING.md
This detailed guide covers:
- ✅ Python version conflicts and installation
- ✅ Virtual environment problems
- ✅ Package installation failures
- ✅ SpaCy and NLTK setup issues
- ✅ Ollama/Llama installation and configuration
- ✅ OS-specific solutions (Windows/macOS/Linux)
- ✅ Complete setup verification script
Python Version Issues:
# CRITICAL: You must use Python 3.12+
python3.12 --version # Should show 3.12.x
# Create venv with EXACT Python version
python3.12 -m venv venv
Virtual Environment Issues:
# Always activate venv first (you should see (venv) in prompt)
source venv/bin/activate # Linux/macOS
venv\Scripts\activate # Windows
# Verify Python version in venv
python --version # Should show Python 3.12.x
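The same version check can be done programmatically; a tiny sketch your own setup scripts could reuse (the helper name is illustrative):

```python
import sys

def python_ok(min_version=(3, 12)):
    # Compare only major.minor, so 3.12.0 and 3.12.5 both pass
    return sys.version_info[:2] >= min_version
```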
Package Installation Issues:
# Nuclear option: fresh reinstall
rm -rf venv
python3.12 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt --no-cache-dir
Quick Setup Verification:
# Verify everything is working
python -c "import flask, spacy, nltk; print('✅ Core packages OK')"
python -c "import spacy; spacy.load('en_core_web_sm'); print('✅ SpaCy model OK')"
Input Text:
"In order to facilitate the implementation of the new system, it was decided by the team that the best approach would be to utilize a modular architecture."
AI Analysis Detects:
- ❌ Passive voice: "it was decided"
- ❌ Wordy phrases: "in order to", "utilize"
- ❌ Long sentence: 27 words (target: 15-20)
- ❌ Grade level: 14th (target: 9th-11th)
AI Rewrite (Pass 1):
"To implement the new system, the team decided to use a modular architecture."
AI Rewrite (Pass 2 - Final):
"The team chose a modular architecture to implement the new system."
Improvements:
- ✅ Reduced from 27 to 11 words
- ✅ Converted to active voice
- ✅ Removed wordy phrases
- ✅ Lowered to 9th grade level
- ✅ Improved clarity and flow
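A few of the checks illustrated above can be approximated in a handful of lines. This is a simplified sketch; the app's real rules are SpaCy-based and far more robust:

```python
import re

# Tiny sample of wordy phrases and their shorter replacements
WORDY = {"in order to": "to", "utilize": "use"}

def find_issues(text, max_words=20):
    issues = []
    for phrase, shorter in WORDY.items():
        if phrase in text.lower():
            issues.append(f'wordy: "{phrase}" -> "{shorter}"')
    # Crude passive-voice cue: a form of "be" followed by an -ed participle
    if re.search(r"\b(was|were|is|are|been)\s+\w+ed\b", text, re.I):
        issues.append("passive voice")
    for s in re.split(r"[.!?]+", text):
        words = s.split()
        if len(words) > max_words:
            issues.append(f"long sentence: {len(words)} words")
    return issues
```

Running it on the input sentence above flags both wordy phrases, the passive construction, and the 27-word sentence length.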
style-guide-ai/
├── main.py              # Main Flask application (auto-setup included!)
├── requirements.txt     # Clean, conflict-free dependencies
├── config.py            # Main application configuration
├── style_analyzer/      # Analysis modules
├── rewriter/            # AI rewriting components
├── rules/               # Style rules and checks
├── models/              # AI model management
├── ui/                  # User interface files
│   ├── templates/       # HTML templates
│   └── static/          # CSS, JS, images
├── uploads/             # File upload storage
└── logs/                # Application logs
- Fork the repository
- Create your feature branch:
git checkout -b feature/amazing-feature
- Commit your changes:
git commit -m 'Add amazing feature'
- Push to the branch:
git push origin feature/amazing-feature
- Open a Pull Request
This project is licensed under the Apache License 2.0
- SpaCy for advanced NLP processing
- Ollama for local AI model serving
- Flask for the web framework
- Bootstrap for responsive UI components
Made with ❤️ for technical writers who value privacy and quality.