
Conversation

@pUrGe12 (Contributor) commented Jul 3, 2025

Proposed change

Test cases for the module.py file.

Type of change

  • New core framework functionality
  • Bugfix (non-breaking change which fixes an issue)
  • Code refactoring without any functionality changes
  • New or existing module/payload change
  • Documentation/localization improvement
  • Test coverage improvement
  • Dependency upgrade
  • Other improvement (best practice, cleanup, optimization, etc)

Checklist

  • I've followed the contributing guidelines
  • I've run make pre-commit, it didn't generate any changes
  • I've run make test, all tests passed locally

@coderabbitai bot (Contributor) commented Jul 30, 2025

Summary by CodeRabbit

  • Tests
    • Added a comprehensive test suite covering initialization, loading, step sorting, execution flow, service/port discovery, concurrency, and unsupported-library handling.
    • Utilizes mocks to simulate runtime behavior, validating warnings, thread creation, and event handling across scenarios.
    • Improves coverage and regression protection for core execution paths, enhancing overall stability and user confidence.

Walkthrough

Adds a new test suite at tests/core/test_module.py for nettacker.core.module.Module, introducing fixtures, a DummyOptions helper, and multiple tests covering initialization, loading, sorting, and start behavior with extensive mocking of loaders, events, threading, and imports.

Changes

Cohort / File(s) Summary
Tests: Module core behavior
tests/core/test_module.py
New test file adding DummyOptions, pytest fixtures, and tests for Module init, service discovery, payload loading, loop sorting, start execution paths, unsupported libraries, thread creation, and port-append logic, using mocks for TemplateLoader, event discovery, threading, and import mechanisms.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Suggested reviewers

  • arkid15r

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 62a9a3b and 64dd555.

📒 Files selected for processing (1)
  • tests/core/test_module.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/core/test_module.py

@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/core/test_module.py (1)

73-106: Consider adding specific sorting assertions.

While the test validates that sort_loops doesn't raise exceptions, it would be more valuable to assert the actual sorting results to ensure the dependency logic works correctly.

Add assertions to verify the sorting order:

 module.load()  # Should not raise
+
+ # Verify sorting results
+ steps = module.module_content["payloads"][0]["steps"]
+ # Add specific assertions about step ordering based on dependencies
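A concrete ordering assertion could take the following shape. This is only a sketch: the `loop` and `method` field names and the stand-in sort function are illustrative assumptions, not the actual `sort_loops` schema, which should be checked against `nettacker/core/module.py`.

```python
# Hypothetical sketch: non-loop steps are expected to come before loop
# steps. "loop" and "method" are illustrative field names, not the
# real step schema from nettacker.core.module.
def sort_steps(steps):
    # Stable sort: False (non-loop) sorts before True (loop),
    # preserving the original relative order within each group.
    return sorted(steps, key=lambda step: step.get("loop", False))

steps = [
    {"method": "loop_step", "loop": True},
    {"method": "plain_step", "loop": False},
]
assert [s["method"] for s in sort_steps(steps)] == ["plain_step", "loop_step"]
```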
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 630de62 and 934f835.

📒 Files selected for processing (1)
  • tests/core/test_module.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: CodeQL (javascript)
  • GitHub Check: CodeQL (python)
🔇 Additional comments (12)
tests/core/test_module.py (12)

1-15: LGTM - Well-structured test setup.

The imports are appropriate for comprehensive unit testing, and the DummyOptions class provides a clean test double with sensible defaults for module configuration.


17-31: LGTM - Clean fixture design.

The fixtures provide consistent test data and follow pytest best practices for reusable test components.


33-43: LGTM - Focused initialization test.

The test properly validates service discovery signature extraction during Module initialization with appropriate mocking.


45-71: LGTM - Comprehensive load method test.

The test thoroughly validates the load() method's service discovery functionality with proper mocking of all dependencies and meaningful assertions.


108-150: LGTM - Comprehensive start method test.

The test thoroughly validates the start() method's execution flow with appropriate mocking of all external dependencies and proper verification of thread management.


152-180: LGTM - Good error handling test.

The test properly validates that the start() method handles unsupported libraries gracefully by returning None.


182-217: LGTM - Well-designed helper function.

The helper function provides clean mock behavior for different test scenarios and promotes code reuse with clear documentation.


219-244: LGTM - Improved sorting validation.

This test provides much better validation of the sort_loops behavior with specific assertions about the resulting step structure and dependency handling.


246-270: LGTM - Clean helper function.

The helper function provides appropriate mock behavior with good error handling for unexpected inputs.


272-315: LGTM - Focused thread creation test.

The test provides excellent validation of thread creation behavior with precise control over the test scenario and thorough assertions about thread arguments and startup.


317-374: LGTM - Good edge case coverage.

The test properly validates warning logging for unsupported libraries with appropriate mocking and assertion of log calls.


376-414: LGTM - Thorough port aggregation test.

The test effectively validates that multiple ports for the same protocol are correctly aggregated in the discovered_services dictionary.
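The aggregation behavior under test can be sketched independently of the Module class; the resulting dict shape mirrors the `{"http": [80, 443]}` expectation described above, while the variable names are illustrative.

```python
# Minimal sketch of port aggregation: ports discovered for the same
# protocol accumulate into a single list keyed by protocol name.
discovered_services = {}
for protocol, port in [("http", 80), ("http", 443)]:
    discovered_services.setdefault(protocol, []).append(port)

assert discovered_services == {"http": [80, 443]}
```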

@arkid15r enabled auto-merge August 30, 2025 04:05
@coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/core/test_module.py (1)

414-415: Missing assertion for payload modification.

The test verifies that discovered services are collected correctly but doesn't verify that the payloads' steps are modified with the discovered ports.

Add assertions to verify the complete behavior:

     module.load()
     assert module.discovered_services == {"http": [80, 443]}
+    # Verify that the ports were injected into the steps
+    assert mock_parse.called
+    # Verify parse was called with the discovered ports
+    for call in mock_parse.call_args_list:
+        if "port" in call[0][1]:
+            assert call[0][1]["port"] == [80, 443]
🧹 Nitpick comments (7)
tests/core/test_module.py (7)

9-15: Consider adding __dict__ property for more accurate behavior.

The Module.__init__ method accesses options.__dict__, but DummyOptions doesn't have this property defined explicitly. While Python provides it automatically, the test should properly simulate all accessed attributes.

Add __dict__ property to ensure the test accurately reflects the expected behavior:

 class DummyOptions:
     def __init__(self):
         self.modules_extra_args = {"foo": "bar"}
         self.skip_service_discovery = False
         self.thread_per_host = 2
         self.time_sleep_between_requests = 0
+    
+    @property
+    def __dict__(self):
+        return {
+            "modules_extra_args": self.modules_extra_args,
+            "skip_service_discovery": self.skip_service_discovery,
+            "thread_per_host": self.thread_per_host,
+            "time_sleep_between_requests": self.time_sleep_between_requests,
+            "foo": "bar"  # This gets added by Module.__init__ when processing modules_extra_args
+        }

45-46: Mock should return more representative file list.

The mock returns only ["http.py"], but the actual implementation would likely have more protocol files. Consider making the mock more realistic.

-@patch("os.listdir", return_value=["http.py"])
+@patch("os.listdir", return_value=["http.py", "https.py", "tcp.py", "__init__.py", "base.py"])

182-217: Test helper function should not have misleading comment.

The comment # NOT A TEST CASE on line 183 is unnecessary and could be confusing. Helper functions in test files are common and don't need such disclaimers.

 def template_loader_side_effect(name, inputs):
-    # NOT A TEST CASE
     mock_instance = MagicMock()

246-270: Redundant helper function.

This loader_side_effect helper function is very similar to template_loader_side_effect defined earlier. Consider consolidating these helper functions.

Consider creating a single reusable helper function at the module level:

def create_mock_loader(name, payloads_config):
    """Create a mock TemplateLoader instance with specified payloads configuration."""
    mock_inst = MagicMock()
    
    if name == "port_scan":
        mock_inst.load.return_value = {
            "payloads": [{"steps": [{"response": {"conditions": {"service": {"http": {}}}}}]}]
        }
    elif name in payloads_config:
        mock_inst.load.return_value = payloads_config[name]
    else:
        raise ValueError(f"Unexpected module name: {name}")
    
    return mock_inst

393-393: Inconsistent comment about test data structure.

The comment states .load() requires no [] but this is misleading - the actual Module code expects steps to be nested lists.

-                        {"response": {"conditions": {"service": {}}}}  # .load() requires no []
+                        {"response": {"conditions": {"service": {}}}}  # Single step without nesting for this test

1-415: Consider adding tests for error cases and edge conditions.

The test suite covers the main functionality well but is missing some important test cases:

  1. Error handling when TemplateLoader fails
  2. Behavior when no services are discovered
  3. Thread interruption/termination scenarios
  4. Module initialization with missing required parameters

Would you like me to generate additional test cases for these scenarios to improve the overall test coverage?
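As a sketch of the first suggested case, a bare `MagicMock` with a `side_effect` shows the shape such a test could take. In the real suite the mock would be installed via `@patch("nettacker.core.module.TemplateLoader")`; the standalone mock here is just an assumption-free way to illustrate the pattern.

```python
from unittest.mock import MagicMock

# Stand-in for a failing TemplateLoader; in the real test this mock
# would be patched in at its import site in nettacker.core.module.
failing_loader = MagicMock(side_effect=ValueError("bad template"))

# Calling the mock raises the configured exception, which is what a
# test for loader-failure handling would arrange and then assert on.
raised = False
try:
    failing_loader("port_scan", {})
except ValueError:
    raised = True
assert raised, "loader failures should surface (or be handled) in load()"
```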


373-373: Assert full warning format in tests
In tests/core/test_module.py:373, replace the substring check with an exact match against the localized message, for example:

assert mock_log.warn.call_args[0][0] == _('library_not_supported').format('unsupported_lib')
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 934f835 and d693fc1.

📒 Files selected for processing (1)
  • tests/core/test_module.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/core/test_module.py (2)
nettacker/core/module.py (3)
  • Module (18-200)
  • load (76-109)
  • sort_loops (120-140)
nettacker/core/lib/http.py (1)
  • HttpEngine (138-225)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: CodeQL (javascript)
  • GitHub Check: CodeQL (python)
  • GitHub Check: Run tests
🔇 Additional comments (3)
tests/core/test_module.py (3)

1-7: LGTM! Imports are appropriate.

The imports are well-organized and include all necessary testing dependencies.


222-244: Good coverage of sort_loops functionality.

This test properly verifies the sorting behavior of steps based on their dependencies.


309-312: Use direct argument access instead of kwargs pattern.

The Thread mock is called with positional arguments, but the test accesses them through kwargs["args"]. This is incorrect - they should be accessed through call_args[0] for positional arguments.

-    for _, kwargs in mock_thread_cls.call_args_list:
-        sub_step = kwargs["args"][0]
+    for call_args in mock_thread_cls.call_args_list:
+        sub_step = call_args[1]["args"][0]  # call_args is (args, kwargs)
         actual_ids.add(sub_step["id"])

Likely an incorrect or invalid review comment.
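Indeed, the original pattern is correct whenever `Thread` is invoked with keyword arguments (`Thread(target=..., args=...)`), because the payload then lands in the kwargs half of `call_args` rather than the positional half. A quick standalone check:

```python
from unittest.mock import MagicMock

# threading.Thread is conventionally called with keyword arguments,
# so the step payload appears under kwargs["args"], not positionally.
mock_thread_cls = MagicMock()
mock_thread_cls(target=print, args=({"id": "step-1"},))

# call_args unpacks as a (positional, kwargs) pair.
positional, kwargs = mock_thread_cls.call_args
assert positional == ()                     # nothing passed positionally
assert kwargs["args"][0]["id"] == "step-1"  # kwargs["args"] is valid here
```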

@arkid15r (Collaborator) left a comment

It doesn't seem this test has ever passed in CI/CD. Could you check it and the bot's comments too, @pUrGe12?

Thank you

auto-merge was automatically disabled September 2, 2025 12:19

Head branch was pushed to by a user without write access

@coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between cb4891d and 62a9a3b.

📒 Files selected for processing (1)
  • nettacker/config.py (2 hunks)
🔇 Additional comments (1)
nettacker/config.py (1)

10-10: Tests-only PR: current ROOT_PATH usage is appropriate
This change only affects tests and resolves data paths under the project root, which is writable; adjusting ROOT_PATH for a production install is out of scope for this PR.

Likely an incorrect or invalid review comment.

Comment on lines 118 to 120
data_dir = ROOT_PATH / ".nettacker/data"
new_database_file = ROOT_PATH / ".nettacker/data/nettacker.db"
old_database_file = ROOT_PATH / ".data/nettacker.db"
@coderabbitai bot left a comment

💡 Verification agent

🧩 Analysis chain

Base-path drift: PathConfig moved to ROOT_PATH, but DbConfig and ApiConfig still use CWD — likely producing multiple DB/log locations.

  • Line 102 still points DB to CWD/.nettacker/data/nettacker.db while Lines 118–120 point “new_database_file” to ROOT_PATH/.... You can end up creating/migrating to one path and reading from another.
  • Line 66 keeps logs under CWD/.nettacker, while Lines 129–131 relocate results/tmp under ROOT_PATH/....
  • Migration path (Line 120 “old_database_file”) won’t find legacy DBs if they were previously under CWD.

Action: unify all runtime paths through one base (the fixed ROOT_PATH above or revert these changes). For example:

# outside-diff illustration:
ApiConfig.api_access_log = str(PathConfig.data_dir / "nettacker.log")
DbConfig.name = str(PathConfig.new_database_file)

Also ensure directories exist before use:

# outside-diff illustration:
for d in (PathConfig.data_dir, PathConfig.results_dir, PathConfig.tmp_dir):
    d.mkdir(parents=True, exist_ok=True)

To locate other hard-coded CWD usages:

Also applies to: 122-122, 129-131


🏁 Script executed:

#!/bin/bash
rg -nP 'Path\.cwd\(\)|\bCWD\b|\.nettacker\/data' -g '!**/tests/**'

Length of output: 3471


Unify all Nettacker runtime paths to ROOT_PATH
Replace every CWD / ".nettacker/data/…" usage in nettacker/config.py (lines 66 & 102) with the corresponding PathConfig.* under ROOT_PATH. Ensure the following are consistent:

  • api_access_log → PathConfig.data_dir / "nettacker.log"
  • name (DB) → PathConfig.new_database_file
  • old_database_file (if migrating legacy DBs) points to the previous CWD location (CWD / ".nettacker/data/nettacker.db")
  • results_dir → PathConfig.results_dir
  • tmp_dir → PathConfig.tmp_dir

Also ensure all target directories exist before use:

for d in (PathConfig.data_dir, PathConfig.results_dir, PathConfig.tmp_dir):
    d.mkdir(parents=True, exist_ok=True)
🤖 Prompt for AI Agents
In nettacker/config.py around lines 66, 102 and 118-120, replace any use of CWD
/ ".nettacker/data/…" with the corresponding PathConfig properties under
ROOT_PATH: set api_access_log to PathConfig.data_dir / "nettacker.log", set the
DB name to PathConfig.new_database_file, set old_database_file to the legacy CWD
/ ".nettacker/data/nettacker.db" for migration, and point results_dir and
tmp_dir to PathConfig.results_dir and PathConfig.tmp_dir respectively; finally
ensure the three runtime directories exist by creating PathConfig.data_dir,
PathConfig.results_dir and PathConfig.tmp_dir with mkdir(parents=True,
exist_ok=True) before they are used.
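A minimal sketch of the unified layout this comment asks for, assuming a single `ROOT_PATH` base. The attribute names mirror those mentioned in the review and are not verified against the actual `nettacker/config.py`; a temp dir stands in for the project root so the sketch is runnable anywhere.

```python
import tempfile
from pathlib import Path

# ROOT_PATH stands in for the project root used by nettacker's config.
ROOT_PATH = Path(tempfile.mkdtemp())

class PathConfig:
    # All runtime paths derive from the one base, so the DB, logs,
    # results, and tmp locations can never drift apart.
    data_dir = ROOT_PATH / ".nettacker" / "data"
    results_dir = ROOT_PATH / ".nettacker" / "results"
    tmp_dir = ROOT_PATH / ".nettacker" / "tmp"
    new_database_file = data_dir / "nettacker.db"

# Ensure every runtime directory exists before first use.
for d in (PathConfig.data_dir, PathConfig.results_dir, PathConfig.tmp_dir):
    d.mkdir(parents=True, exist_ok=True)

assert PathConfig.data_dir.is_dir()
assert PathConfig.new_database_file.parent == PathConfig.data_dir
```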
