Conversation


@BhargaviGudi BhargaviGudi commented Oct 31, 2025

What this PR does / why we need it:

This PR adds a new testing plugin that automates comprehensive test case generation for features.

Key capabilities:

  • Generates detailed test cases across 6 categories (Functional, Regression, Smoke, Edge Cases, Security, Performance)
  • Supports priority filtering (--priority high|medium|low) for focused test suites
  • Component tagging (--component name) for test organization
  • Multiple output formats: Markdown (default) or DOCX for stakeholders
  • Includes critical test case summary section for quick validation
  • Creates test cases in the current working directory - run the command from your project directory

Why we need it:

  • Automates time-consuming manual test case creation
  • Ensures consistent, comprehensive test coverage
  • Provides structured documentation for QA teams
  • Supports different output formats for technical and non-technical audiences

Command:

/testing:generate-test-case-doc <feature_name> [--priority high|medium|low] [--component name] [--format markdown|docx]

Usage pattern:
The command creates test case files in your current working directory. Navigate to your project directory first, then run the command.

Example:
cd ~/projects/openshift/security-profiles-operator
/testing:generate-test-case-doc "audit logging"
# Creates: ~/projects/openshift/security-profiles-operator/testcases-audit-logging.md

Which issue(s) this PR fixes:

N/A - New feature addition

Special notes for your reviewer:

Plugin Structure

  • Located in plugins/testing/ following repository conventions
  • Passed all linter validations (make lint)
  • Documentation auto-generated via make update

Key Files

  • plugins/testing/commands/generate-test-case-doc.md - Command definition in man page format
  • plugins/testing/skills/testcase-doc-generator/generate_docx.py - DOCX conversion helper
  • plugins/testing/README.md - User documentation

File Location Behavior

  • Test case files are created in the current working directory where the command is invoked
  • File naming pattern: testcases-{sanitized-feature-name}.md or .docx
  • Example: Feature "Audit Logging" → testcases-audit-logging.md
  • Allows users to organize test cases alongside project code
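For illustration, the naming behavior above can be sketched as a small helper. This is a hypothetical sketch (`sanitize_feature_name` is not the plugin's actual code), assuming the sanitization lowercases the name and collapses non-alphanumeric runs into hyphens, which matches the "Audit Logging" → testcases-audit-logging.md example:

```python
import re

def sanitize_feature_name(feature: str, ext: str = "md") -> str:
    """Lowercase the feature name, collapse runs of non-alphanumeric
    characters into single hyphens, and build the output filename."""
    slug = re.sub(r"[^a-z0-9]+", "-", feature.lower()).strip("-")
    return f"testcases-{slug}.{ext}"
```

Under these assumptions, `sanitize_feature_name("Audit Logging")` yields testcases-audit-logging.md.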

Dependencies

  • Markdown output: No dependencies
  • DOCX output: Requires pip install python-docx (optional)
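Since python-docx is optional, a tool like this typically probes for it before attempting DOCX output. A minimal sketch (the `docx_available` helper is hypothetical, not part of the plugin):

```python
import importlib.util
import sys

def docx_available() -> bool:
    """Return True if the optional python-docx package is importable."""
    return importlib.util.find_spec("docx") is not None

if not docx_available():
    # Mirrors the dependency note above: Markdown works without extras,
    # DOCX output needs the optional package.
    print("DOCX output requires python-docx: pip install python-docx",
          file=sys.stderr)
```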

Usage Examples
# Basic usage - creates testcases-user-authentication.md in current directory
/testing:generate-test-case-doc "User Authentication"

# High-priority tests with DOCX format
/testing:generate-test-case-doc "Payment Gateway" --priority high --format docx

# With component tagging
/testing:generate-test-case-doc "API Gateway" --component api,security

# Real-world example in a specific project
cd ~/projects/openshift/security-profiles-operator
/testing:generate-test-case-doc "audit logging"
# Creates: ~/projects/openshift/security-profiles-operator/testcases-audit-logging.md

Testing details:

> /testing:generate-test-case-doc is running… "json enricher" 

● I'll generate comprehensive test cases for the "json enricher" feature. Let me start by analyzing the codebase to gather context.

● 🔍 Analyzing codebase for 'json enricher'...

● Search(pattern: "json.?enricher")
  ⎿  Found 35 files (ctrl+o to expand)

● Search(pattern: "enricher", type: "go")
  ⎿  Found 55 files (ctrl+o to expand)

● Search(pattern: "**/*enricher*.go")
  ⎿  Found 6 files (ctrl+o to expand)

● Search(pattern: "**/README*.md")
  ⎿  Found 100 files (ctrl+o to expand)

● Perfect! I found the JSON enricher implementation and existing test files. Let me read the key files to understand the feature better.

● Read(internal/pkg/daemon/enricher/jsonenricher.go)
  ⎿  Read 524 lines

● Read(test/tc_json_enricher_test.go)
  ⎿  Read 250 lines

● Read(internal/pkg/daemon/enricher/json_enricher_test.go)
  ⎿  Read 371 lines

● Read(doc/audit-logging-guide.md)
  ⎿  Read 156 lines

● Let me now look at the README to understand the project better and search for configuration examples.

● Read(README.md)
  ⎿  Read 91 lines

● Search(pattern: "enableJsonEnricher", output_mode: "content", head_limit: 20)
  ⎿  Found 20 lines (ctrl+o to expand)

● 📋 Looking for existing test files...

● Read(installation-usage.md)
  ⎿  Read 200 lines

● Perfect! Now let me gather more details about the types and APIs.

● Read(internal/pkg/daemon/enricher/types/types.go)
  ⎿  Read 102 lines

● Search(pattern: "**/Makefile")
  ⎿  Found 39 files (ctrl+o to expand)

● ✓ Context analysis complete

  Now I have gathered comprehensive context about the JSON enricher feature. Let me update the todo and generate the test cases.

● Now let me generate comprehensive test cases based on the context I've gathered:

● Write(testcases-json-enricher.md)
  ⎿  Wrote 1795 lines to testcases-json-enricher.md   
     # Test Cases: JSON Enricher
     **Generated**: 2025-11-03T00:00:00Z
     **Feature**: JSON Enricher (Audit Log Enrichment)
     **Component**: enricher, audit-logging
     **Priority Filter**: all
     **Total Test Cases**: 42
     ---
     … +1785 lines (ctrl+o to expand)

● ✓ Test cases generated successfully!

  Feature: JSON Enricher
  Total Test Cases: 35
  Priority Filter: all
  Component: enricher, audit-logging

  Breakdown:
  - High Priority: 10
  - Medium Priority: 19
  - Low Priority: 6

  Test Types:
  - Functional: 11
  - Regression: 5
  - Smoke: 3
  - Edge Cases: 8
  - Security: 4
  - Performance: 4

  Output saved to: /home/bgudi/work/src/github.com/openshift/security-profiles-operator/testcases-json-enricher.md
  Format: markdown

  Summary

  I've generated comprehensive test cases for the JSON Enricher feature based on thorough analysis of the Security Profiles Operator codebase. The test cases are specifically tailored to this project and
  include:

  Key Features Tested:

  1. Core Functionality: Enabling enricher, capturing audit logs, enriching with metadata
  2. Configuration Options: File output, log rotation, audit intervals, verbosity
  3. Advanced Features: Enricher filters, eBPF fallback, webhook integration
  4. Edge Cases: Missing container info, short-lived processes, special characters
  5. Security: No secrets leakage, RBAC permissions, input sanitization
  6. Performance: High volume handling, startup time, cache eviction

  Context-Driven Test Generation:

  The test cases are based on actual codebase analysis:
  - ✅ Existing test files: Learned patterns from test/tc_json_enricher_test.go and unit tests
  - ✅ Implementation code: Analyzed jsonenricher.go, enricher types, and filters
  - ✅ Documentation: Extracted setup steps from audit-logging-guide.md and installation-usage.md
  - ✅ Real commands: Used actual kubectl commands, file paths, and configuration from the repo
  - ✅ Platform-specific: Noted Ubuntu ephemeral /proc issue and CRI-O privileged seccomp requirements

  Critical Test Cases (Quick Validation - ~40 minutes):

  10 high-priority test cases covering smoke tests, core functionality, regression, and security are highlighted for rapid validation.

  All test cases include detailed preconditions, step-by-step commands (from the actual codebase), and expected results based on the implementation.

Test case document created by slash command
testcases-json-enricher.md

Checklist:

  • Subject and description added to both commit and PR.
  • Relevant issues have been referenced.
  • This change includes docs.

Summary by CodeRabbit

  • New Features
    • Introduced Testing plugin for comprehensive test case generation and QA automation.
    • Added test case document generator with support for functional, regression, smoke, edge, security, and performance test categories.
    • Enabled multi-format output (Markdown and DOCX) with priority filtering and component tagging capabilities.

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Oct 31, 2025

openshift-ci bot commented Oct 31, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: BhargaviGudi
Once this PR has been reviewed and has the lgtm label, please assign dgoodwin for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


openshift-ci bot commented Oct 31, 2025

Hi @BhargaviGudi. Thanks for your PR.

I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@openshift-ci openshift-ci bot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Oct 31, 2025
@BhargaviGudi BhargaviGudi force-pushed the bgudi-claude-slash-cmd branch from 7eb396c to 0e3b85e on October 31, 2025 09:21
@stbenjam
Member

/ok-to-test

@openshift-ci openshift-ci bot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Oct 31, 2025
@openshift-merge-robot openshift-merge-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 31, 2025
@BhargaviGudi BhargaviGudi force-pushed the bgudi-claude-slash-cmd branch from 0e3b85e to 3cd0f46 on November 3, 2025 05:44
@openshift-merge-robot openshift-merge-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 3, 2025
@BhargaviGudi BhargaviGudi marked this pull request as ready for review November 3, 2025 06:32
@openshift-ci openshift-ci bot requested review from bentito and stleerh November 3, 2025 06:32
@BhargaviGudi BhargaviGudi changed the title WIP : Add testing plugin for comprehensive test case generation Add testing plugin for comprehensive test case generation Nov 3, 2025
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 3, 2025

openshift-ci bot commented Nov 3, 2025

@ngopalak-redhat: changing LGTM is restricted to collaborators

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

…priority filtering and DOCX export

enhance: Add codebase analysis to make testing plugin context-aware and repository-specific

Immediately proceed to generating test cases (no confirmation prompt)

Renamed slash command from create-testcases to generate-test-case-doc
@BhargaviGudi BhargaviGudi force-pushed the bgudi-claude-slash-cmd branch from 4b4f2a0 to e620814 on November 5, 2025 07:18

coderabbitai bot commented Nov 5, 2025

Walkthrough

This PR introduces a new Testing plugin that provides test case generation and QA automation capabilities. The plugin includes a generate-test-case-doc command that generates test case documents across multiple categories (functional, regression, smoke, edge, security, performance) in Markdown or DOCX formats. Supporting infrastructure includes plugin manifests, documentation, command definitions, a skill for DOCX conversion, and a Python script implementing Markdown-to-DOCX conversion logic.

Changes

| Cohort | File(s) | Change Summary |
| --- | --- | --- |
| Plugin Registry | .claude-plugin/marketplace.json, plugins/testing/.claude-plugin/plugin.json | Register new testing plugin with name, source path, and description; define plugin metadata including version 0.0.1 and author information. |
| Documentation Registry | PLUGINS.md, docs/data.json | Add Testing plugin entry to documentation TOC; register plugin with command metadata (generate-test-case-doc), skill metadata (Test Case Document Generator), and version info. |
| Plugin Documentation | plugins/testing/README.md, plugins/testing/commands/generate-test-case-doc.md, plugins/testing/skills/testcase-doc-generator/SKILL.md | Introduce comprehensive plugin README, command documentation with usage examples and implementation outline, and skill documentation for the DOCX generation workflow. |
| Implementation | plugins/testing/skills/testcase-doc-generator/generate_docx.py | Add Python script for Markdown-to-DOCX conversion with document styling, table parsing, and support for headings, code blocks, bold text, and lists. |

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI
    participant Command
    participant Generator
    participant DOCX_Skill
    participant Output
    
    User->>CLI: /testing:generate-test-case-doc --priority high --format DOCX
    CLI->>Command: Parse arguments & validate options
    Command->>Generator: Analyze repo (docs, tests, code, config, deps)
    Generator->>Generator: Create test cases (functional, regression, smoke, edge, security, performance)
    Generator->>Generator: Filter by priority & component
    Generator->>Output: Generate test_cases.md
    
    alt Format == DOCX
        Command->>DOCX_Skill: convert_markdown_to_docx(test_cases.md)
        DOCX_Skill->>DOCX_Skill: Parse Markdown (headings, tables, code, lists)
        DOCX_Skill->>DOCX_Skill: Apply document styling (fonts, colors)
        DOCX_Skill->>Output: Generate test_cases.docx
    end
    
    Output-->>User: Returns output file paths & summary

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Python script complexity: generate_docx.py contains moderate logic for Markdown parsing (tables, code blocks, lists, frontmatter handling) and DOCX document construction with styled elements. Review should verify parsing edge cases and DOCX API usage correctness.
  • Documentation completeness: Multiple documentation files introduce new command and skill; verify consistency between README, command docs, and skill docs regarding usage syntax and parameters.
  • Plugin metadata accuracy: Cross-check plugin registry entries (marketplace.json, plugin.json, docs/data.json) for consistency in name, description, version, and command/skill definitions.

Pre-merge checks and finishing touches

✅ Passed checks (6 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The PR title 'Add testing plugin for comprehensive test case generation' directly and specifically describes the main change: introducing a new testing plugin for test case generation. |
| No Real People Names In Style References | ✅ Passed | No real people names found in style references, plugin commands, skill documentation, example prompts, or instructions across all PR files. |
| No Assumed Git Remote Names | ✅ Passed | No hardcoded git remote names found. The testing plugin files do not assume git remotes like 'origin' or 'upstream' without discovery. |
| Git Push Safety Rules | ✅ Passed | No git push operations or autonomous git commands detected in the PR changes. |
| No Untrusted Mcp Servers | ✅ Passed | Pull request introduces a testing plugin with local files only; no MCP servers or untrusted external dependencies are installed. |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
plugins/testing/skills/testcase-doc-generator/generate_docx.py (1)

277-281: Consider more specific exception handling.

The broad Exception catch is flagged by Ruff. While acceptable for a CLI tool, consider catching more specific exceptions for better error diagnosis.

Apply this diff for more specific error handling:

     # Convert
     try:
         convert_markdown_to_docx(args.input, args.output, args.title)
-    except Exception as e:
+    except (OSError, ValueError, UnicodeDecodeError) as e:
         print(f"Error converting markdown to DOCX: {e}", file=sys.stderr)
         sys.exit(1)
+    except Exception as e:
+        print(f"Unexpected error: {e}", file=sys.stderr)
+        sys.exit(1)

Based on learnings

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Cache: Disabled due to data retention organization setting

Knowledge base: Disabled due to data retention organization setting

📥 Commits

Reviewing files that changed from the base of the PR and between f71e9be and e620814.

📒 Files selected for processing (8)
  • .claude-plugin/marketplace.json (1 hunks)
  • PLUGINS.md (2 hunks)
  • docs/data.json (1 hunks)
  • plugins/testing/.claude-plugin/plugin.json (1 hunks)
  • plugins/testing/README.md (1 hunks)
  • plugins/testing/commands/generate-test-case-doc.md (1 hunks)
  • plugins/testing/skills/testcase-doc-generator/SKILL.md (1 hunks)
  • plugins/testing/skills/testcase-doc-generator/generate_docx.py (1 hunks)
🧰 Additional context used
🪛 LanguageTool
plugins/testing/README.md

[uncategorized] ~183-~183: Did you mean the formatting language “Markdown” (= proper noun)?
Context: ...rsion Control**: Maintain test cases in markdown format alongside code - **Test Manageme...

(MARKDOWN_NNP)

plugins/testing/commands/generate-test-case-doc.md

[uncategorized] ~93-~93: Did you mean the formatting language “Markdown” (= proper noun)?
Context: ..., docx` - If invalid, default to markdown with warning 3. **Analyze Feature Cont...

(MARKDOWN_NNP)


[grammar] ~616-~616: Use a hyphen to join words.
Context: ...est summary section ### Example 2: High priority test cases only ``` /testing:ge...

(QB_NEW_EN_HYPHEN)


[uncategorized] ~622-~622: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ... high ``` Output: - Generates only High priority test cases - Useful for critical path t...

(EN_COMPOUND_ADJECTIVE_INTERNAL)


[uncategorized] ~660-~660: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...curity --format docx ``` Output: - High priority test cases only - Tagged with payment a...

(EN_COMPOUND_ADJECTIVE_INTERNAL)


[grammar] ~665-~665: Use a hyphen to join words.
Context: ...ecurity scenarios ### Example 7: Medium priority test cases for regression suite...

(QB_NEW_EN_HYPHEN)


[uncategorized] ~671-~671: If this is a compound adjective that modifies the following noun, use a hyphen.
Context: ...ium --component cart ``` Output: - Medium priority test cases - Suitable for extended regr...

(EN_COMPOUND_ADJECTIVE_INTERNAL)

plugins/testing/skills/testcase-doc-generator/SKILL.md

[uncategorized] ~114-~114: Did you mean the formatting language “Markdown” (= proper noun)?
Context: ...error message} ``` Solution: Check markdown formatting, ensure valid UTF-8 encoding...

(MARKDOWN_NNP)


[uncategorized] ~257-~257: Did you mean the formatting language “Markdown” (= proper noun)?
Context: ...in DOCX Common causes: - Malformed markdown tables - Unclosed code blocks - Invalid...

(MARKDOWN_NNP)


[uncategorized] ~261-~261: Did you mean the formatting language “Markdown” (= proper noun)?
Context: ...haracters Solution: Review and fix markdown formatting ### Issue: Script not found...

(MARKDOWN_NNP)

🪛 markdownlint-cli2 (0.18.1)
plugins/testing/commands/generate-test-case-doc.md

16-16: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


337-337: Bare URL used

(MD034, no-bare-urls)


338-338: Bare URL used

(MD034, no-bare-urls)


339-339: Bare URL used

(MD034, no-bare-urls)

plugins/testing/skills/testcase-doc-generator/SKILL.md

16-16: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🪛 Ruff (0.14.3)
plugins/testing/skills/testcase-doc-generator/generate_docx.py

279-279: Do not catch blind exception: Exception

(BLE001)

🔇 Additional comments (12)
plugins/testing/.claude-plugin/plugin.json (1)

1-8: LGTM! Clean plugin manifest.

The plugin manifest follows the established structure with appropriate metadata for a new plugin release (v0.0.1).

PLUGINS.md (2)

18-18: LGTM! TOC entry added correctly.

The Testing plugin is properly added to the table of contents in alphabetical order.


176-183: LGTM! Plugin documentation section is well-structured.

The Testing plugin section follows the established documentation pattern with clear command description and synopsis.

.claude-plugin/marketplace.json (1)

82-86: LGTM! Marketplace entry is correct.

The testing plugin is properly registered in the marketplace with consistent metadata.

plugins/testing/skills/testcase-doc-generator/SKILL.md (1)

1-280: LGTM! Comprehensive skill documentation.

The skill documentation clearly explains the DOCX generation workflow, prerequisites, usage examples, and troubleshooting steps. The integration with the main command is well documented.

plugins/testing/README.md (1)

1-358: LGTM! Excellent plugin README.

The README provides comprehensive documentation covering all aspects of the plugin: overview, installation, usage examples, prerequisites, use cases for different roles, output formats, best practices, and troubleshooting. This will greatly help users adopt the plugin.

docs/data.json (1)

607-627: LGTM! Plugin registry entry is correct.

The testing plugin is properly registered in the data.json with complete metadata including commands, skills, and version information.

plugins/testing/skills/testcase-doc-generator/generate_docx.py (5)

27-35: LGTM! Proper dependency check with helpful error message.

The ImportError handling provides clear guidance to users on how to install the required python-docx library.


64-86: LGTM! Comprehensive document styling.

The styling setup creates a professional document appearance with consistent fonts, colors, and sizes for all heading levels and normal text.


88-99: LGTM! Robust markdown table parsing.

The table parser correctly handles markdown table syntax and filters out separator rows. Good use of set comparison to identify separator lines.
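For illustration, separator-row detection via set comparison might look like the following sketch (illustrative only, not the script's actual code):

```python
def is_separator_row(line: str) -> bool:
    """True for markdown separator rows like '| --- | :---: |', which
    contain only pipes, hyphens, colons, and spaces."""
    stripped = line.strip()
    return stripped.startswith("|") and set(stripped) <= set("|-: ")

def parse_row(line: str) -> list[str]:
    """Split a markdown table row into trimmed cell strings."""
    return [cell.strip() for cell in line.strip().strip("|").split("|")]
```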


102-131: LGTM! Well-formatted table generation.

The table creation applies professional styling with bold headers, blue header background, and proper table formatting using the Light Grid Accent 1 style.


133-263: LGTM! Comprehensive markdown-to-DOCX conversion.

The conversion function handles all common markdown elements: frontmatter, code blocks, tables, headings, lists, bold text, and horizontal rules. The state machine approach with flags for code blocks and tables ensures proper processing.
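The flag-based dispatch described here can be illustrated with a toy version (a sketch of the general technique, not generate_docx.py's actual logic, which also tracks tables and frontmatter):

```python
def classify_lines(lines):
    """Toy state machine over markdown lines: a fence line toggles the
    in_code flag, so lines are routed as fence, code, or plain text."""
    in_code = False
    events = []
    for line in lines:
        if line.startswith("```"):
            in_code = not in_code
            events.append("fence")
        elif in_code:
            events.append("code")
        else:
            events.append("text")
    return events
```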

Comment on lines +1 to +734
---
description: Generate comprehensive test cases for a feature with priority filtering and multiple output formats
argument-hint: <feature_name> [--priority high|medium|low] [--component name] [--format markdown|docx]
---

## Name
testing:generate-test-case-doc

## Synopsis
```
/testing:generate-test-case-doc <feature_name> [--priority high|medium|low] [--component name] [--format markdown|docx]
```

## Description

The `testing:generate-test-case-doc` command generates comprehensive, detailed test cases for any new feature or functionality. It analyzes the feature requirements, generates multiple test scenarios covering different aspects (functional, regression, smoke, edge cases), and outputs a well-structured document that can be used by QA teams.

This command automates the creation of:
- Detailed test cases with clear steps and expected results
- Priority-based categorization (High/Medium/Low)
- Test type tagging (Regression, Smoke, Functional, Integration, etc.)
- Critical test case summary for quick validation
- Support for multiple output formats (Markdown, DOCX)

The command is designed for:
- QA engineers creating test plans for new features
- Developers documenting testing requirements
- Product teams validating feature completeness
- CI/CD integration for automated test documentation

## Implementation

### Process Flow

1. **Parse Arguments and Flags**:
- **$1** (feature_name): Required - The name or description of the feature to test
- Example: "User Authentication with OAuth2"
- **--priority**: Optional filter to generate only test cases of specific priority
- Values: `high`, `medium`, `low`, `all` (default: all)
- Example: `--priority high` generates only high-priority test cases
- **--component**: Optional component/module tag for organizing test cases
- Example: `--component auth` tags all tests with the auth component
- Multiple components: `--component auth,api,ui`
- **--format**: Output format
- Values: `markdown` (default), `docx`
- Example: `--format docx` generates a Word document

Parse these arguments using bash parameter parsing:
```bash
FEATURE_NAME="$1"
PRIORITY_FILTER="all"
COMPONENT=""
FORMAT="markdown"

shift # Remove feature_name from arguments

while [[ $# -gt 0 ]]; do
  case "$1" in
    --priority)
      PRIORITY_FILTER="$2"
      shift 2
      ;;
    --component)
      COMPONENT="$2"
      shift 2
      ;;
    --format)
      FORMAT="$2"
      shift 2
      ;;
    *)
      echo "Unknown option: $1"
      exit 1
      ;;
  esac
done
```

2. **Validate Inputs**:
- Check if feature_name is provided:
```bash
if [ -z "$FEATURE_NAME" ]; then
  echo "Error: Feature name is required"
  echo "Usage: /testing:generate-test-case-doc <feature_name> [options]"
  exit 1
fi
```
- Validate priority filter (if provided):
- Must be one of: `high`, `medium`, `low`, `all`
- If invalid, display error and exit
- Validate format:
- Must be one of: `markdown`, `docx`
- If invalid, default to markdown with warning
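
For illustration, the priority and format rules above can be sketched in bash (a sketch only, not a mandated implementation; variable names follow the parsing snippet earlier in this document, with standalone defaults added):
```bash
# Illustrative validation sketch; defaults let it run standalone.
PRIORITY_FILTER="${PRIORITY_FILTER:-all}"
FORMAT="${FORMAT:-markdown}"

# Priority must be one of the documented values; otherwise exit with error.
case "$PRIORITY_FILTER" in
  high|medium|low|all) ;;
  *)
    echo "Error: --priority must be one of: high, medium, low, all" >&2
    exit 1
    ;;
esac

# An invalid format falls back to markdown with a warning, per the rule above.
case "$FORMAT" in
  markdown|docx) ;;
  *)
    echo "Warning: invalid --format '$FORMAT', defaulting to markdown" >&2
    FORMAT="markdown"
    ;;
esac
```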

3. **Analyze Feature Context and Codebase** (CRITICAL - DO NOT SKIP):

**IMPORTANT**: This step is essential for generating relevant, accurate test cases. Spend time gathering context from the actual codebase.

**A. Search for Feature Documentation**

Display message: "🔍 Analyzing codebase for '{feature_name}'..."

- Find documentation files mentioning the feature:
```bash
# Search all markdown files
grep -r -i "{feature_name}" --include="*.md" --include="*.txt" --include="*.rst"
```

- Look for key documentation files:
- README.md (installation, setup, usage)
- CONTRIBUTING.md (development setup)
- docs/ directory
- Design documents, RFCs, proposals
- CHANGELOG.md (feature additions)

- Extract from documentation:
- Installation/setup steps
- Prerequisites (tools, versions, dependencies)
- Configuration requirements
- Usage examples
- Known limitations

**B. Find Existing Test Files** (Learn from existing patterns)

Display message: "📋 Looking for existing test files..."

- Search for test files in common locations:
```bash
# Find test files
find . -type f \( -name "*test*.go" -o -name "*test*.py" -o -name "*_test.js" -o -name "test_*.py" -o -name "*_spec.rb" \)
# Search test files for feature mentions
grep -r -i "{feature_name}" test/ tests/ spec/ --include="*test*" --include="*spec*"
```

- Look for test files containing the feature name
- Read relevant test files to understand:
- Test structure and patterns used in this project
- How tests are organized (by feature, by component, etc.)
- Setup/teardown procedures
- Assertion styles
- Mock/stub patterns
- Test data examples

- **IMPORTANT**: If existing test files are found for this feature:
- Read them completely
- Learn the test case format used
- Identify what scenarios are already tested
- Use similar naming conventions
- Follow the same structure

**C. Search for Implementation Code**

Display message: "💻 Searching implementation files..."

- Find source code related to the feature:
```bash
# Search source code
grep -r -i "{feature_name}" --include="*.go" --include="*.py" --include="*.js" --include="*.java" --include="*.rb" --include="*.ts" | head -50
```

- Identify key implementation files
- Understand:
- Main components involved
- APIs or interfaces
- Configuration options
- Dependencies on external systems
- Entry points (CLI commands, API endpoints, etc.)

**D. Identify Setup and Configuration Requirements**

Display message: "⚙️ Identifying setup requirements..."

- Search for configuration files:
```bash
# Find config files
find . -type f \( -name "*.yaml" -o -name "*.yml" -o -name "*.json" -o -name "*.conf" -o -name "*.toml" -o -name "*.ini" \) | grep -v vendor | grep -v node_modules | head -20
```

- Look for:
- Deployment manifests (Kubernetes, Docker Compose)
- Configuration examples
- Environment variables
- Command-line flags
- Default settings

- Check for installation/setup scripts:
- Makefile targets
- install.sh, setup.sh
- Package managers (package.json, requirements.txt, go.mod)

**E. Analyze Integration Points and Dependencies**

Display message: "🔗 Analyzing integrations..."

- Identify external dependencies:
- Check README for prerequisites
- Look for mentions of:
- Container runtimes (Docker, containerd, CRI-O)
- Kubernetes/OpenShift
- Databases
- Message queues
- External APIs or services

- Understand platform-specific requirements:
- Operating system requirements
- Kernel features needed
- Network requirements
- Storage requirements

**F. Extract Commands and Tools Used**

- Find command-line usage:
- Search for CLI commands in docs
- Look for kubectl, oc, docker commands
- Identify custom tools or scripts

- Extract from code/docs:
- Actual commands users run
- API endpoints
- Configuration values
- File paths

**G. Summarize Context Gathered**

Display message: "✓ Context analysis complete"

Create a context summary with:
- Repository type (Go project, Python project, K8s operator, etc.)
- Feature location (which files implement it)
- Existing test file(s) found (if any)
- Setup/installation steps identified
- Key tools/commands involved
- Platform-specific requirements
- Integration dependencies

**IMPORTANT Decision Point**:
- If existing test files found for this feature → Use them as primary reference
- If similar test files found → Learn patterns and adapt
- If no test files found → Generate from documentation and code analysis

This context will be used to generate accurate, repository-specific test cases.

**IMPORTANT**: Do NOT ask for user confirmation after gathering context. Proceed directly to Step 4 (Generate Comprehensive Test Cases) using the gathered context.

4. **Generate Comprehensive Test Cases**:

**IMPORTANT**: Use the context gathered in Step 3 to create relevant, repository-specific test cases.

**Context-Driven Test Generation**:

- **If existing test files were found**:
  - Use their structure and format as the primary template
  - Follow their naming conventions (e.g., TC-001, Test-01, etc.)
  - Match their level of detail and specificity
  - Learn from scenarios already tested
  - Extend with additional scenarios not covered

- **Use discovered setup/installation steps**:
  - In preconditions, reference actual installation steps from README
  - Include actual configuration files found (yaml, json, conf)
  - Reference specific tools/versions from prerequisites

- **Use actual commands and tools found**:
  - In test steps, use real CLI commands discovered (kubectl, oc, docker, etc.)
  - Reference actual API endpoints from code
  - Use actual configuration values from examples
  - Include real file paths from the repository

- **Reference platform-specific requirements**:
  - Include platform requirements discovered (K8s version, CRI-O config, etc.)
  - Reference container runtime specifics
  - Mention OS or kernel requirements found

Create test cases covering these categories:

**A. Functional Test Cases** (Core feature functionality):
- Happy path scenarios (using actual commands from docs)
- Alternative flows (based on code analysis)
- User workflows (from README usage examples)
- Data validation scenarios (based on implementation details)

**B. Regression Test Cases** (Ensure existing functionality works):
- Related feature interactions (from integration points found)
- Backward compatibility checks (if version info found)
- Integration with existing modules (from dependency analysis)

**C. Smoke Test Cases** (Critical path validation):
- Core functionality quick checks (based on critical paths in code)
- Basic feature availability (from installation validation)
- Critical user journeys (from documentation examples)

**D. Edge Cases and Negative Test Cases**:
- Boundary value testing (based on code constraints)
- Invalid input handling (from error handling in code)
- Error message validation (using actual error messages from code)
- Timeout and failure scenarios (from configuration limits)

**E. Security Test Cases** (if applicable):
- Authentication/Authorization checks (if auth found in code)
- Data privacy validations (based on security requirements)
- Input sanitization tests (from injection points in code)

**F. Performance Test Cases** (if applicable):
- Load testing scenarios (based on resource limits in config)
- Response time validations (from SLO/SLA docs if found)
- Resource usage checks (from deployment manifests)

For each test case, generate:
```
TC-{NUMBER}: {Test Case Title}
**Priority**: High | Medium | Low
**Component**: {component_name}
**Tags**: [Functional, Regression, Smoke, etc.]
**Preconditions**:
- List of setup requirements
**Test Steps**:
1. Step one with clear action
2. Step two with clear action
3. ...
**Expected Result**:
- Clear, measurable expected outcome
- Verification criteria
**Test Data** (if applicable):
- Input data specifications
- Test user accounts
- Configuration values
**Notes**:
- Additional considerations
- Related test cases
```

5. **Apply Filters**:
- If `--priority` filter is specified:
  - Include only test cases matching the specified priority
  - Maintain all other metadata
- If `--component` is specified:
  - Tag all test cases with the specified component
  - Can be comma-separated for multiple components
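The filtering behavior in this step can be sketched as follows (the test-case dict shape is illustrative, not the plugin's actual data model):

```python
def apply_filters(cases, priority="all", components=None):
    """Filter by priority and tag with components, per the step-5 rules."""
    if priority != "all":
        # Priority match is case-insensitive; other metadata is untouched.
        cases = [c for c in cases if c["priority"].lower() == priority.lower()]
    if components:
        # "auth,api" -> ["auth", "api"]; every surviving case gets the tags.
        tags = [t.strip() for t in components.split(",")]
        for c in cases:
            c["component"] = tags
    return cases

cases = [
    {"id": "TC-001", "priority": "High"},
    {"id": "TC-002", "priority": "Low"},
]
print(apply_filters(cases, priority="high", components="auth,api"))
```

With no flags supplied, the defaults (`all`, no components) leave the generated set unchanged.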

6. **Create Document Structure**:

Generate document with the following sections:

```markdown
# Test Cases: {Feature Name}
**Generated**: {Current Date and Time}
**Feature**: {Feature Name}
**Component**: {Component Name(s)}
**Priority Filter**: {Priority Filter Applied}
**Total Test Cases**: {Count}
---
## Table of Contents
1. Overview
2. Setup and Installation
3. Test Environment Requirements
4. Test Cases
   - 4.1 Functional Tests
   - 4.2 Regression Tests
   - 4.3 Smoke Tests
   - 4.4 Edge Cases
   - 4.5 Security Tests (if applicable)
   - 4.6 Performance Tests (if applicable)
5. Critical Test Cases Summary
6. Test Execution Notes
---
## 1. Overview
**Feature Description**: {Brief description of the feature based on docs/code analysis}
**Scope**: {What is being tested - derived from feature context}
**Out of Scope**: {What is not covered}
**Project**: {Repository name if identifiable}
---
## 2. Setup and Installation
**IMPORTANT**: Populate this section with actual setup steps discovered in Step 3.
**Installation Steps**:
{Extract from README.md, INSTALL.md, or installation scripts found}
- Include actual commands with full paths
- Reference specific versions if found
- Include prerequisite installations
**Configuration**:
{Extract from configuration files and setup documentation}
- Include actual configuration file snippets
- Reference environment variables needed
- Include platform-specific configuration (e.g., CRI-O setup)
**Verification**:
{Include verification steps from documentation}
- Commands to verify installation success
- Expected output examples
- Health check procedures
---
## 3. Test Environment Requirements
**Prerequisites**:
{Populate with actual requirements discovered in Step 3}
- Specific tools and versions (from README/package files)
- Platform requirements (OS, kernel version from docs)
- Access requirements (cluster admin, specific RBAC)
- External dependencies (databases, message queues from code analysis)
**Test Data**:
{Reference actual test data from existing test files or examples}
- Test configuration files (from test/ directory)
- Sample input data (from examples/ or test fixtures)
- Test accounts/credentials needed
**Dependencies**:
{List actual dependencies discovered}
- Runtime dependencies (Kubernetes, OpenShift from manifests)
- External services (from integration points in code)
- Network requirements (from deployment configs)
---
## 4. Test Cases
### 4.1 Functional Tests
{Generated functional test cases - use context from Step 3}
### 4.2 Regression Tests
{Generated regression test cases - use context from Step 3}
### 4.3 Smoke Tests
{Generated smoke test cases - use context from Step 3}
### 4.4 Edge Cases
{Generated edge case test cases - use context from Step 3}
### 4.5 Security Tests
{Generated security test cases if applicable - use context from Step 3}
### 4.6 Performance Tests
{Generated performance test cases if applicable - use context from Step 3}
---
## 5. Critical Test Cases Summary
This section lists all **High Priority** and **Smoke** test cases for quick validation:
| TC ID | Title | Priority | Type | Expected Result |
|-------|-------|----------|------|----------------|
| TC-001 | ... | High | Smoke | ... |
| TC-003 | ... | High | Functional | ... |
| ... | ... | ... | ... | ... |
**Quick Validation Steps**:
1. Execute all Smoke tests (TC-XXX, TC-YYY)
2. Execute all High Priority tests
3. Verify critical user journeys
---
## 6. Test Execution Notes
**Execution Order**:
- Recommended order for test execution (based on test dependencies discovered)
**Known Issues**:
- Any known limitations or issues discovered in Step 3 analysis
- Reference to existing issues in issue tracker if found
**Reporting**:
- How to report test results
- Defect tracking information (reference actual project tools if found)
---
## Appendix
**Test Case Statistics**:
- Total: {total_count}
- High Priority: {high_count}
- Medium Priority: {medium_count}
- Low Priority: {low_count}
- Smoke Tests: {smoke_count}
- Regression Tests: {regression_count}
- Functional Tests: {functional_count}
**Context Analysis**:
- Existing test files found: {count or "None"}
- Documentation files analyzed: {count}
- Implementation files analyzed: {count}
- Setup steps extracted: {Yes/No}
**Generated by**: Claude Code `/testing:generate-test-case-doc` command
**Timestamp**: {ISO 8601 timestamp}
**Working Directory**: {pwd output - the repository/directory being analyzed}
**Command**: `/testing:generate-test-case-doc "{feature_name}" {flags if any}`
```
7. **Generate Output File**:
**A. For Markdown format (default)**:
- Filename: `testcases-{sanitized_feature_name}.md`
- Sanitize feature name: lowercase, replace spaces with hyphens
- Example: "User Authentication" → `testcases-user-authentication.md`
- Save to current working directory
- Use Write tool to create the file
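The sanitization rule above can be sketched as follows (stripping characters beyond lowercase/hyphen replacement is an added assumption for filename safety):

```python
import re

def sanitize_feature_name(name: str) -> str:
    """Lowercase, spaces -> hyphens, then drop other unsafe filename chars."""
    name = name.strip().lower().replace(" ", "-")
    name = re.sub(r"[^a-z0-9-]", "", name)  # assumption: keep only safe chars
    return re.sub(r"-{2,}", "-", name)      # collapse runs of hyphens

print(sanitize_feature_name("User Authentication"))  # -> user-authentication
```

The result is then interpolated into `testcases-{sanitized_feature_name}.md`.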
**B. For DOCX format**:
- Filename: `testcases-{sanitized_feature_name}.docx`
- Use the helper script: `python3 plugins/testing/skills/testcase-doc-generator/generate_docx.py`
- Script usage:

  ```bash
  python3 plugins/testing/skills/testcase-doc-generator/generate_docx.py \
    --input testcases-{sanitized_feature_name}.md \
    --output testcases-{sanitized_feature_name}.docx \
    --title "Test Cases: {Feature Name}"
  ```

- The script converts markdown to properly formatted DOCX with:
  - Styled headings (Heading 1, 2, 3)
  - Tables for test case summaries
  - Proper spacing and formatting
  - Table of contents (if supported)
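The helper script itself is not reproduced here; as a rough illustration of the heading/body classification such a converter performs, here is a dependency-free sketch (function name and tuple shape are hypothetical — `generate_docx.py` also handles tables, fences, and styling via `python-docx`):

```python
def classify_markdown_lines(md_text):
    """Map markdown lines to (kind, text) pairs a DOCX writer could consume."""
    out = []
    for line in md_text.splitlines():
        if line.startswith("#"):
            # Count leading '#' to get the heading level, capped at 3.
            level = len(line) - len(line.lstrip("#"))
            out.append((f"heading{min(level, 3)}", line.lstrip("# ").rstrip()))
        elif line.strip():
            out.append(("paragraph", line.rstrip()))
    return out

sample = "# Test Cases: Demo\n\n## 1. Overview\nScope text."
print(classify_markdown_lines(sample))
```

A real writer would feed `heading1`/`heading2` pairs to styled Word headings and `paragraph` entries to body text.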
8. **Display Results to User**:
```
✓ Test cases generated successfully!
Feature: {Feature Name}
Total Test Cases: {count}
Priority Filter: {filter if applied}
Component: {component if specified}
Breakdown:
- High Priority: {count}
- Medium Priority: {count}
- Low Priority: {count}
Test Types:
- Functional: {count}
- Regression: {count}
- Smoke: {count}
- Edge Cases: {count}
- Security: {count}
- Performance: {count}
Output saved to: {file_path}
Format: {markdown/docx}
Critical test cases ({count}) are highlighted in Section 5 for quick validation.
Next steps:
- Review the generated test cases
- Customize test data and preconditions
- Execute smoke tests first
- Report any issues found
```
9. **Post-Generation Options**:
Ask the user if they would like to:
- Generate additional test cases for specific scenarios
- Export to a different format
- Create a filtered version (e.g., only smoke tests)
- Add custom test cases to the document
## Return Value
- **Success**:
  - File path of generated test cases document
  - Summary statistics of test cases created
  - Breakdown by priority and type
- **Error**:
  - Clear error message if feature name missing
  - Validation errors for invalid flags
  - File write errors with troubleshooting steps
- **Format**: Structured summary with:
  - Generated file location
  - Test case counts and categories
  - Critical test case highlights
  - Next steps for the user
## Examples
### Example 1: Basic usage (all test cases, markdown)
```
/testing:generate-test-case-doc "User Authentication with OAuth2"
```
**Output**:
- Generates `testcases-user-authentication-with-oauth2.md`
- Includes all priority levels (High, Medium, Low)
- All test types (Functional, Regression, Smoke, Edge Cases, Security)
- Critical test summary section
### Example 2: High-priority test cases only
```
/testing:generate-test-case-doc "User Authentication with OAuth2" --priority high
```
**Output**:
- Generates only high-priority test cases
- Useful for critical path testing
- Faster test execution planning
### Example 3: With component tagging
```
/testing:generate-test-case-doc "User Authentication with OAuth2" --component auth
```
**Output**:
- All test cases tagged with `Component: auth`
- Helps organize test cases by module
### Example 4: Multiple components
```
/testing:generate-test-case-doc "API Gateway Rate Limiting" --component api,gateway,security
```
**Output**:
- Test cases tagged with multiple components
- Useful for cross-functional features
### Example 5: DOCX format for sharing
```
/testing:generate-test-case-doc "User Authentication with OAuth2" --format docx
```
**Output**:
- Generates `testcases-user-authentication-with-oauth2.docx`
- Professional Word document with proper formatting
- Easy to share with non-technical stakeholders
### Example 6: Filtered high-priority DOCX for specific component
```
/testing:generate-test-case-doc "Payment Processing" --priority high --component payment,security --format docx
```
**Output**:
- High-priority test cases only
- Tagged with payment and security components
- DOCX format for stakeholder review
- Focused on critical payment security scenarios
### Example 7: Medium-priority test cases for regression suite
```
/testing:generate-test-case-doc "Shopping Cart Updates" --priority medium --component cart
```
**Output**:
- Medium-priority test cases
- Suitable for extended regression testing
- Component-specific test organization
## Arguments
- **$1** (feature_name): The name or description of the feature to generate test cases for (required)
  - Example: "User Authentication with OAuth2"
  - Can be a brief description or full feature name
  - Spaces and special characters are supported
- **--priority** (filter): Filter test cases by priority level (optional)
  - Values: `high`, `medium`, `low`, `all`
  - Default: `all` (generates all priority levels)
  - Example: `--priority high` generates only critical test cases
- **--component** (name): Tag test cases with component/module name(s) (optional)
  - Can be a single component: `--component auth`
  - Can be multiple components: `--component auth,api,ui`
  - Helps organize test cases by system module
  - Default: no component tag
- **--format** (type): Output file format (optional)
  - Values: `markdown`, `docx`
  - Default: `markdown`
  - `markdown`: Creates `.md` file (text-based, version control friendly)
  - `docx`: Creates Microsoft Word document (professional formatting, easy sharing)
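The flag validation described above might look like the following sketch (function name is hypothetical; falling back to markdown on an invalid `--format` follows the behavior this command documents for argument parsing):

```python
VALID_PRIORITIES = {"high", "medium", "low", "all"}
VALID_FORMATS = {"markdown", "docx"}

def validate_flags(priority="all", fmt="markdown"):
    """Validate flag values; an unknown format degrades to markdown with a warning."""
    if priority not in VALID_PRIORITIES:
        raise ValueError(f"--priority must be one of {sorted(VALID_PRIORITIES)}")
    if fmt not in VALID_FORMATS:
        print(f"warning: unknown --format {fmt!r}, defaulting to markdown")
        fmt = "markdown"
    return priority, fmt

print(validate_flags("high", "pdf"))
```

An invalid priority is a hard error, while an invalid format degrades gracefully, matching the defaults listed above.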
## Notes
- **Test Case Quality**: Generated test cases are comprehensive but should be reviewed and customized based on specific requirements
- **Component Tagging**: Use consistent component names across projects for better organization
- **Priority Guidance**:
  - **High**: Critical functionality, blocking issues, smoke tests
  - **Medium**: Important features, common user scenarios, regression coverage
  - **Low**: Edge cases, optional features, nice-to-have validations
- **DOCX Generation**: Requires Python with `python-docx` library. The helper script will notify if dependencies are missing
- **File Location**: Test cases are saved in the current working directory. Use absolute paths if needed
- **Version Control**: Markdown format is recommended for version-controlled test cases
- **Customization**: Review and enhance generated test cases with:
  - Specific test data values
  - Environment-specific configurations
  - Team-specific testing conventions
- **Integration**: Generated test cases can be imported into test management tools (TestRail, Zephyr, etc.)
## Troubleshooting
- **Missing dependencies for DOCX**:

  ```bash
  pip install python-docx
  ```

- **Invalid priority filter**: Ensure value is one of: `high`, `medium`, `low`, `all`
- **File write errors**:
  - Check write permissions in current directory
  - Ensure disk space is available
  - Verify filename doesn't contain invalid characters
- **Empty test cases**:
  - Provide more context about the feature
  - Check if feature name is too vague
  - Manually add feature description in the prompt
⚠️ Potential issue | 🟠 Major

Critical: PR description has incorrect command name.

The PR description and summary mention the command as /testing:create-testcases, but the actual implemented command is /testing:generate-test-case-doc. This inconsistency will confuse users.

Please update the PR description to reflect the correct command name: /testing:generate-test-case-doc.

The command documentation itself is comprehensive and well-structured with excellent examples and troubleshooting guidance.



Labels

ok-to-test Indicates a non-member PR verified by an org member that is safe to test.
