test cases for core/module #1095
base: master
Conversation
Summary by CodeRabbit
Walkthrough
Adds a new test suite at tests/core/test_module.py for nettacker.core.module.Module, introducing fixtures, a DummyOptions helper, and multiple tests covering initialization, loading, sorting, and start behavior, with extensive mocking of loaders, events, threading, and imports.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Suggested reviewers
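For orientation, a minimal sketch of the kind of test double and fixture the walkthrough describes is shown below. The field names are taken from the DummyOptions snippet quoted later in this review; the real tests/core/test_module.py in the PR may differ.

```python
# Illustrative sketch only, not the actual tests/core/test_module.py from this PR.
# DummyOptions field names follow the snippet quoted in the review comments below.
import pytest


class DummyOptions:
    """Minimal stand-in for the parsed CLI options object handed to Module."""

    def __init__(self):
        self.modules_extra_args = {"foo": "bar"}
        self.skip_service_discovery = False
        self.thread_per_host = 2
        self.time_sleep_between_requests = 0


@pytest.fixture
def options():
    """Fresh DummyOptions instance per test, so tests cannot leak state."""
    return DummyOptions()
```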
📜 Recent review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
✨ Finishing Touches
🧪 Generate unit tests
Actionable comments posted: 0
🧹 Nitpick comments (1)
tests/core/test_module.py (1)
73-106: Consider adding specific sorting assertions.
While the test validates that sort_loops doesn't raise exceptions, it would be more valuable to assert the actual sorting results to ensure the dependency logic works correctly. Add assertions to verify the sorting order:

```diff
 module.load()  # Should not raise
+
+# Verify sorting results
+steps = module.module_content["payloads"][0]["steps"]
+# Add specific assertions about step ordering based on dependencies
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/core/test_module.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: CodeQL (javascript)
- GitHub Check: CodeQL (python)
🔇 Additional comments (12)
tests/core/test_module.py (12)
1-15: LGTM - Well-structured test setup. The imports are appropriate for comprehensive unit testing, and the DummyOptions class provides a clean test double with sensible defaults for module configuration.
17-31: LGTM - Clean fixture design. The fixtures provide consistent test data and follow pytest best practices for reusable test components.
33-43: LGTM - Focused initialization test. The test properly validates service discovery signature extraction during Module initialization with appropriate mocking.
45-71: LGTM - Comprehensive load method test. The test thoroughly validates the load() method's service discovery functionality with proper mocking of all dependencies and meaningful assertions.
108-150: LGTM - Comprehensive start method test. The test thoroughly validates the start() method's execution flow with appropriate mocking of all external dependencies and proper verification of thread management.
152-180: LGTM - Good error handling test. The test properly validates that the start() method handles unsupported libraries gracefully by returning None.
182-217: LGTM - Well-designed helper function. The helper function provides clean mock behavior for different test scenarios and promotes code reuse with clear documentation.
219-244: LGTM - Improved sorting validation. This test provides much better validation of the sort_loops behavior with specific assertions about the resulting step structure and dependency handling.
246-270: LGTM - Clean helper function. The helper function provides appropriate mock behavior with good error handling for unexpected inputs.
272-315: LGTM - Focused thread creation test. The test provides excellent validation of thread creation behavior with precise control over the test scenario and thorough assertions about thread arguments and startup.
317-374: LGTM - Good edge case coverage. The test properly validates warning logging for unsupported libraries with appropriate mocking and assertion of log calls.
376-414: LGTM - Thorough port aggregation test. The test effectively validates that multiple ports for the same protocol are correctly aggregated in the discovered_services dictionary.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tests/core/test_module.py (1)
414-415: Missing assertion for payload modification.
The test verifies that discovered services are collected correctly but doesn't verify that the payloads' steps are modified with the discovered ports. Add assertions to verify the complete behavior:

```diff
 module.load()
 assert module.discovered_services == {"http": [80, 443]}
+# Verify that the ports were injected into the steps
+assert mock_parse.called
+# Verify parse was called with the discovered ports
+for call in mock_parse.call_args_list:
+    if "port" in call[0][1]:
+        assert call[0][1]["port"] == [80, 443]
```
🧹 Nitpick comments (7)
tests/core/test_module.py (7)
9-15: Consider adding a __dict__ property for more accurate behavior.
The Module.__init__ method accesses options.__dict__, but DummyOptions doesn't have this property defined explicitly. While Python provides it automatically, the test should properly simulate all accessed attributes. Add a __dict__ property to ensure the test accurately reflects the expected behavior:

```diff
 class DummyOptions:
     def __init__(self):
         self.modules_extra_args = {"foo": "bar"}
         self.skip_service_discovery = False
         self.thread_per_host = 2
         self.time_sleep_between_requests = 0
+
+    @property
+    def __dict__(self):
+        return {
+            "modules_extra_args": self.modules_extra_args,
+            "skip_service_discovery": self.skip_service_discovery,
+            "thread_per_host": self.thread_per_host,
+            "time_sleep_between_requests": self.time_sleep_between_requests,
+            "foo": "bar",  # This gets added by Module.__init__ when processing modules_extra_args
+        }
```
45-46: Mock should return a more representative file list.
The mock returns only ["http.py"], but the actual implementation would likely have more protocol files. Consider making the mock more realistic:

```diff
-@patch("os.listdir", return_value=["http.py"])
+@patch("os.listdir", return_value=["http.py", "https.py", "tcp.py", "__init__.py", "base.py"])
```
182-217: Test helper function should not have a misleading comment.
The comment # NOT A TEST CASE on line 183 is unnecessary and could be confusing. Helper functions in test files are common and don't need such disclaimers.

```diff
 def template_loader_side_effect(name, inputs):
-    # NOT A TEST CASE
     mock_instance = MagicMock()
```
246-270: Redundant helper function.
This loader_side_effect helper function is very similar to template_loader_side_effect defined earlier. Consider consolidating them into a single reusable helper function at the module level:

```python
def create_mock_loader(name, payloads_config):
    """Create a mock TemplateLoader instance with the specified payloads configuration."""
    mock_inst = MagicMock()
    if name == "port_scan":
        mock_inst.load.return_value = {
            "payloads": [{"steps": [{"response": {"conditions": {"service": {"http": {}}}}}]}]
        }
    elif name in payloads_config:
        mock_inst.load.return_value = payloads_config[name]
    else:
        raise ValueError(f"Unexpected module name: {name}")
    return mock_inst
```
393-393: Inconsistent comment about test data structure.
The comment states .load() requires no [] but this is misleading - the actual Module code expects steps to be nested lists.

```diff
-{"response": {"conditions": {"service": {}}}}  # .load() requires no []
+{"response": {"conditions": {"service": {}}}}  # Single step without nesting for this test
```
1-415: Consider adding tests for error cases and edge conditions.
The test suite covers the main functionality well but is missing some important test cases:
- Error handling when TemplateLoader fails
- Behavior when no services are discovered
- Thread interruption/termination scenarios
- Module initialization with missing required parameters
Would you like me to generate additional test cases for these scenarios to improve the overall test coverage?
373-373: Assert full warning format in tests.
In tests/core/test_module.py:373, replace the substring check with an exact match against the localized message, for example:

```python
assert mock_log.warn.call_args[0][0] == _('library_not_supported').format('unsupported_lib')
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
tests/core/test_module.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
tests/core/test_module.py (2)
- nettacker/core/module.py (3): Module (18-200), load (76-109), sort_loops (120-140)
- nettacker/core/lib/http.py (1): HttpEngine (138-225)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: CodeQL (javascript)
- GitHub Check: CodeQL (python)
- GitHub Check: Run tests
🔇 Additional comments (3)
tests/core/test_module.py (3)
1-7: LGTM! Imports are appropriate. The imports are well-organized and include all necessary testing dependencies.
222-244: Good coverage of sort_loops functionality. This test properly verifies the sorting behavior of steps based on their dependencies.
309-312: Use direct argument access instead of kwargs pattern.
The Thread mock is called with positional arguments, but the test accesses them through kwargs["args"]. This is incorrect - they should be accessed through call_args[0] for positional arguments.

```diff
-    for _, kwargs in mock_thread_cls.call_args_list:
-        sub_step = kwargs["args"][0]
+    for call_args in mock_thread_cls.call_args_list:
+        sub_step = call_args[1]["args"][0]  # call_args is (args, kwargs)
         actual_ids.add(sub_step["id"])
```

Likely an incorrect or invalid review comment.
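As a generic illustration of the point under discussion (not code from this PR): unittest.mock records each call as an (args, kwargs) pair, so where a value lands depends on whether the Thread was invoked with keyword or positional arguments.

```python
# Generic unittest.mock illustration; the Thread-like mock and the step dicts are
# made up for this example and are not taken from the PR's test file.
from unittest.mock import MagicMock

mock_thread_cls = MagicMock()
mock_thread_cls(target=print, args=({"id": 1},))  # keyword-style call
mock_thread_cls(print, ({"id": 2},))              # positional-style call

first, second = mock_thread_cls.call_args_list
# Keyword arguments are reached through .kwargs (equivalently call_args[1]).
assert first.kwargs["args"][0]["id"] == 1
# Positional arguments are reached through .args (equivalently call_args[0]).
assert second.args[1][0]["id"] == 2
```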
It doesn't seem this test has ever passed in CI/CD. Could you check it and the bot's comments too @pUrGe12
Thank you
Head branch was pushed to by a user without write access
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
nettacker/config.py (2 hunks)
🔇 Additional comments (1)
nettacker/config.py (1)
10-10: Tests-only PR: current ROOT_PATH usage is appropriate.
This change only affects tests and resolves data paths under the project root, which is writable; adjusting ROOT_PATH for a production install is out of scope for this PR.
Likely an incorrect or invalid review comment.
nettacker/config.py (Outdated)

```python
data_dir = ROOT_PATH / ".nettacker/data"
new_database_file = ROOT_PATH / ".nettacker/data/nettacker.db"
old_database_file = ROOT_PATH / ".data/nettacker.db"
```
💡 Verification agent
🧩 Analysis chain
Base-path drift: PathConfig moved to ROOT_PATH, but DbConfig and ApiConfig still use CWD — likely producing multiple DB/log locations.
- Line 102 still points DB to CWD/.nettacker/data/nettacker.db while Lines 118–120 point "new_database_file" to ROOT_PATH/...; you can end up creating/migrating to one path and reading from another.
- Line 66 keeps logs under CWD/.nettacker, while Lines 129–131 relocate results/tmp under ROOT_PATH/...
- Migration path (Line 120 "old_database_file") won't find legacy DBs if they were previously under CWD.

Action: unify all runtime paths through one base (the fixed ROOT_PATH above or revert these changes). For example:

```python
# outside-diff illustration:
ApiConfig.api_access_log = str(PathConfig.data_dir / "nettacker.log")
DbConfig.name = str(PathConfig.new_database_file)
```
Also ensure directories exist before use:
```python
# outside-diff illustration:
for d in (PathConfig.data_dir, PathConfig.results_dir, PathConfig.tmp_dir):
    d.mkdir(parents=True, exist_ok=True)
```
Also applies to: 122-122, 129-131

To locate other hard-coded CWD usages:
🏁 Script executed:

```bash
#!/bin/bash
rg -nP 'Path\.cwd\(\)|\bCWD\b|\.nettacker\/data' -g '!**/tests/**'
```
Length of output: 3471
Unify all Nettacker runtime paths to ROOT_PATH
Replace every CWD / ".nettacker/data/…" usage in nettacker/config.py (lines 66 & 102) with the corresponding PathConfig.* under ROOT_PATH. Ensure the following are consistent:
- api_access_log → PathConfig.data_dir / "nettacker.log"
- name (DB) → PathConfig.new_database_file
- old_database_file (if migrating legacy DBs) points to the previous CWD location (CWD / ".nettacker/data/nettacker.db")
- results_dir → PathConfig.results_dir
- tmp_dir → PathConfig.tmp_dir
Also ensure all target directories exist before use:
```python
for d in (PathConfig.data_dir, PathConfig.results_dir, PathConfig.tmp_dir):
    d.mkdir(parents=True, exist_ok=True)
```
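Putting the pieces of this suggestion together, a rough sketch of a single-base configuration could look like the following; ROOT_PATH's definition and the exact results/tmp sub-paths are assumptions, since only the attribute names appear in the comments above, and this is not the real nettacker/config.py.

```python
# Rough sketch under the assumptions stated above; not the real nettacker/config.py.
from pathlib import Path

ROOT_PATH = Path(__file__).parent  # assumed project root


class PathConfig:
    data_dir = ROOT_PATH / ".nettacker/data"
    results_dir = ROOT_PATH / ".nettacker/results"  # assumed sub-path
    tmp_dir = ROOT_PATH / ".nettacker/tmp"          # assumed sub-path
    new_database_file = data_dir / "nettacker.db"
    # Legacy location kept only so an existing DB can be migrated.
    old_database_file = Path.cwd() / ".nettacker/data/nettacker.db"


class DbConfig:
    name = str(PathConfig.new_database_file)


class ApiConfig:
    api_access_log = str(PathConfig.data_dir / "nettacker.log")


# Create the runtime directories before anything reads or writes them.
for d in (PathConfig.data_dir, PathConfig.results_dir, PathConfig.tmp_dir):
    d.mkdir(parents=True, exist_ok=True)
```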
🤖 Prompt for AI Agents
In nettacker/config.py around lines 66, 102 and 118-120, replace any use of CWD / ".nettacker/data/…" with the corresponding PathConfig properties under ROOT_PATH: set api_access_log to PathConfig.data_dir / "nettacker.log", set the DB name to PathConfig.new_database_file, set old_database_file to the legacy CWD / ".nettacker/data/nettacker.db" for migration, and point results_dir and tmp_dir to PathConfig.results_dir and PathConfig.tmp_dir respectively; finally, ensure the three runtime directories exist by creating PathConfig.data_dir, PathConfig.results_dir and PathConfig.tmp_dir with mkdir(parents=True, exist_ok=True) before they are used.
Proposed change
Test cases for the module.py file.
Type of change
Checklist
- make pre-commit, it didn't generate any changes
- make test, all tests passed locally