QA Tool is a powerful command-line tool that leverages AI to assist with various QA tasks. It integrates shell utilities with LLM backends (OpenAI or local models) to provide intelligent QA assistance.
- Test Case Generation: Automatically generates comprehensive test cases from source code
- Code Analysis: Identifies bugs, code smells, and testability issues
- Log Analysis: Summarizes test results and error logs
- Assertion Generation: Suggests meaningful test assertions for functions
- Multiple AI Backends: Works with OpenAI API or local models (via Ollama)
- Markdown Output: Saves all analysis in well-formatted markdown files
- Configurable: Customize behavior through environment variables
- jq - For JSON processing
- ollama - For local AI model support (recommended)
- bat - For better source file previews (optional but recommended)
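A quick way to check which of these are already available on your system (a minimal sketch using standard shell built-ins; adjust the list if you skip the optional tools):

```bash
# Report which of the required/recommended tools are on PATH
for cmd in jq ollama bat; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "ok:      $cmd"
  else
    echo "missing: $cmd"
  fi
done
```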
- Clone this repository:
git clone https://github.com/yourusername/qa-tool.git
cd qa-tool
- Make the script executable:
chmod +x qa-tool.sh
- Install dependencies:
# For macOS
brew install jq bat ollama
# For Ubuntu/Debian
sudo apt-get install jq bat
curl -fsSL https://ollama.com/install.sh | sh
- Start Ollama and pull the model:
# Start Ollama service
brew services start ollama # macOS
sudo systemctl start ollama # Linux
# Pull the llama2 model
ollama pull llama2
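Before running QA Tool, it's worth confirming that the service is up and the model was pulled (a small sanity check; `11434` is Ollama's default API port):

```bash
# llama2 should appear in the list of locally available models
ollama list

# Optionally, confirm the Ollama HTTP API is reachable (requires jq)
curl -s http://localhost:11434/api/tags | jq '.models[].name'
```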
./qa-tool.sh [command] <file>
# Examples:
./qa-tool.sh testcases src/app.js
./qa-tool.sh analyze src/userController.py
./qa-tool.sh summarize logs/test_output.log
./qa-tool.sh assertions api/handler.go
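Each invocation takes a single file, so batch runs are just a shell loop. For example, to analyze every Python file under `src/` (a sketch; the path is illustrative):

```bash
# Run the analyze command on each Python file found under src/
find src -name '*.py' -print0 | while IFS= read -r -d '' file; do
  ./qa-tool.sh analyze "$file"
done
```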
Let's look at a real example using a Python function (saved as `test_function.py`):
def calculate_discount(price, discount_percent, is_premium_user=False):
    """
    Calculate the final price after applying a discount.
    """
    if price < 0:
        return 0
    if discount_percent < 0:
        discount_percent = 0
    elif discount_percent > 100:
        discount_percent = 100
    discount = price * (discount_percent / 100)
    final_price = price - discount
    if is_premium_user:
        final_price *= 0.9  # Additional 10% off for premium users
    return round(final_price, 2)
When running `./qa-tool.sh analyze test_function.py`, you'll get a detailed analysis:
### Potential Bugs and Edge Cases
- The function doesn't handle negative prices correctly
- The discount percentage validation could be improved
- Premium user discount logic needs review
### Code Smells and Maintainability
- Complex logic chain with multiple conditions
- Parameter documentation could be improved
### Testability Concerns
- Need unit tests for edge cases
- Input validation testing required
### Security and Performance
- Input validation needed
- Performance optimization possible
When running `./qa-tool.sh testcases test_function.py`, you'll get structured test cases:
### Unit Test Cases
| Input | Expected Output | Notes |
|-------|-----------------|-------|
| price = 10.0, discount_percent = 5.0 | final_price = 9.5 | Standard discount calculation |
| price = 10.0, discount_percent = -5.0 | final_price = 10.0 | Negative discount clamped to 0 |
| price = 10.0, discount_percent = 0.0, is_premium_user = True | final_price = 9.0 | Premium discount applied |
### Integration Test Scenarios
- Regular user discount calculation
- Premium user with various discounts
- Edge cases with negative values
Customize QA Tool behavior using environment variables:
# Use a different AI model (default: llama2)
export QA_MODEL="llama2"
# Change output directory (default: ./qa-tool-output)
export QA_OUTPUT_DIR="./my-qa-reports"
# Set log level (DEBUG|INFO|WARN|ERROR)
export QA_LOG_LEVEL="DEBUG"
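Variables can also be set for a single run rather than exported. For example (the `codellama` model name is illustrative; any model you have pulled with Ollama is assumed to work):

```bash
# One-off run with a different model and output directory
QA_MODEL="codellama" QA_OUTPUT_DIR="./reports" ./qa-tool.sh analyze src/app.js
```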
All analysis results are saved in the `qa-tool-output` directory with the following naming pattern:
`<filename>_<analysis_type>_<timestamp>.md`
Example:
- `test_function.py_analyze_20250513_180613.md`
- `test_function.py_testcases_20250513_180641.md`
Each output file includes:
- File metadata (path, analysis type, timestamp)
- AI model used
- Detailed analysis or test cases
- Recommendations and improvements
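Reports accumulate over time; a quick way to open the most recent one (assuming `bat` from the prerequisites, otherwise substitute `cat` or `less`):

```bash
# Open the newest report (ls -t sorts by modification time, newest first)
ls -t qa-tool-output/*.md | head -n 1 | xargs bat
```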
`testcases`: Generates comprehensive test cases from source code, including:
- Unit test cases with edge conditions
- Integration test scenarios
- Error handling test cases
- Performance test considerations
`analyze`: Analyzes code for:
- Potential bugs and edge cases
- Code smells and maintainability issues
- Testability concerns
- Security vulnerabilities
- Performance bottlenecks
`summarize`: Analyzes test logs or error logs (example workflow shown below) to provide:
- Concise summary of key findings
- Patterns in failures or errors
- Critical issues needing attention
- Improvement recommendations
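A typical workflow is to capture the test runner's output to a log file and then summarize it (a sketch; the `pytest` command and log path are illustrative):

```bash
# Capture test output (stdout and stderr), then summarize failures and patterns
mkdir -p logs
pytest > logs/test_output.log 2>&1 || true
./qa-tool.sh summarize logs/test_output.log
```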
`assertions`: Generates test assertions for functions, including:
- Input validation assertions
- Output verification assertions
- Edge case assertions
- Error condition assertions
- Performance assertions
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.
- Ollama for providing local AI model support
- The open-source community for the amazing tools that make this possible