
Feature Test Summary - InstantOn support for Connectors Inbound Security #30382

Open

anjumfatima90 opened this issue Dec 10, 2024 · 1 comment

anjumfatima90 commented Dec 10, 2024

Test Strategy

Describe the test strategy & approach for this feature, and describe how the approach verifies the functions delivered by this feature.

This is the test summary for the epic InstantOn Connectors Inbound Security support.

The test strategy involves running FATs on test machines that are enabled for InstantOn (checkpoint) testing. Both the SOE builds and the orchestrator builds invoked by PR #build run the checkpoint test buckets on machines/environments that support Liberty InstantOn. We also maintain a continuous InstantOn build that runs daily against the Open Liberty integration branch with the checkpoint tests running in FULL mode. The checkpoint tests run on Java 11, 17, and 21 in both the continuous InstantOn build and the SOE runs. InstantOn checkpoint FAT tests are identified by the @CheckpointTest annotation, which ensures a test executes only on host systems that support checkpoint (i.e., Linux systems with a suitable kernel version, CRIU installed, and running OpenJ9).
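As an illustration, a checkpoint FAT test class gated by @CheckpointTest looks roughly like the following. This is a hedged sketch against the Open Liberty FAT framework; the class name, server name, and test body here are hypothetical.

```java
// Hypothetical sketch of a checkpoint FAT test class; names are illustrative.
// @CheckpointTest restricts execution to hosts that support checkpoint/restore
// (Linux with a suitable kernel, CRIU installed, running OpenJ9).
import org.junit.Test;
import org.junit.runner.RunWith;

import componenttest.annotation.CheckpointTest;
import componenttest.annotation.Server;
import componenttest.custom.junit.runner.FATRunner;
import componenttest.topology.impl.LibertyServer;

@RunWith(FATRunner.class)
@CheckpointTest
public class InboundSecurityCheckpointTest {

    @Server("com.ibm.ws.jca.fat.security") // hypothetical server name
    public static LibertyServer server;

    @Test
    public void testIdentityPropagationOnRestore() throws Exception {
        // Assertions here run against the restored server instance.
    }
}
```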

List of FAT projects affected

  • com.ibm.ws.jca_fat

Test strategy

  • What functionality is new or modified by this feature?

The following features have been enabled for Liberty InstantOn by adding the WLP-InstantOn-Enabled: true feature manifest header. This header allows the features to be enabled when performing a Liberty InstantOn checkpoint action; without it, the checkpoint action fails when one or more of the following features is enabled.

  1. jcaInboundSecurity-1.0
  2. connectorsInboundSecurity-2.0
  3. connectors-2.1
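For illustration, the manifest header described above sits alongside the other headers in each feature's manifest, roughly as follows. This is a sketch: the symbolic name shown is hypothetical and the surrounding headers are abbreviated.

```
Subsystem-SymbolicName: io.openliberty.connectorsInboundSecurity-2.0; visibility:=public
WLP-InstantOn-Enabled: true
```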
  • What are the positive and negative tests for that functionality? (Tell me the specific scenarios you tested. What kind of tests do you have for when everything ends up working (positive tests)? What about tests that verify we fail gracefully when things go wrong (negative tests)? See the Positive and negative tests section of the Feature Test Summary Process wiki for more detail.)
  • What manual tests are there (if any)? (Note: Automated testing is expected for all features with manual testing considered an exception to the rule.)

Confidence Level

Collectively as a team you need to assess your confidence in the testing delivered based on the values below. This should be done as a team and not an individual to ensure more eyes are on it and that pressures to deliver quickly are absorbed by the team as a whole.

Please indicate your confidence in the testing (up to and including FAT) delivered with this feature by selecting one of these values:

0 - No automated testing delivered

1 - We have minimal automated coverage of the feature including golden paths. There is a relatively high risk that defects or issues could be found in this feature.

2 - We have delivered a reasonable automated coverage of the golden paths of this feature but are aware of gaps and extra testing that could be done here. Error/outlying scenarios are not really covered. There are likely risks that issues may exist in the golden paths.

3 - We have delivered all automated testing we believe is needed for the golden paths of this feature and minimal coverage of the error/outlying scenarios. There is a risk when the feature is used outside the golden paths however we are confident on the golden path. Note: This may still be a valid end state for a feature... things like Beta features may well suffice at this level.

4 - We have delivered all automated testing we believe is needed for the golden paths of this feature and have good coverage of the error/outlying scenarios. While more testing of the error/outlying scenarios could be added we believe there is minimal risk here and the cost of providing these is considered higher than the benefit they would provide.

5 - We have delivered all automated testing we believe is needed for this feature. The testing covers all golden path cases as well as all the error/outlying scenarios that make sense. We are not aware of any gaps in the testing at this time. No manual testing is required to verify this feature.

Based on your answer above, for any answer other than a 4 or 5 please provide details of what drove your answer. Please be aware, it may be perfectly reasonable in some scenarios to deliver with any value above. We may accept no automated testing is needed for some features, we may be happy with low levels of testing on samples for instance, so please don't feel the need to drive to a 5. We need your honest assessment as a team and the reasoning for why you believe shipping at that level is valid. What are the gaps, what is the risk, etc.? Please also provide links to the follow-on work that is needed to close the gaps (should you deem it needed).

Overall, we are not experts on this feature, so we rate our confidence level at 3.5.

anjumfatima90 commented Dec 10, 2024

General test behavior

The tests cover the AFTER_APP_START phase of checkpoint. This means the tests primarily verify the behavior and functionality of the application on restore, with the checkpoint taken after the application has been initialized and is in a running state, but before the ports are opened.
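For reference, an AFTER_APP_START checkpoint and its restore correspond roughly to the following Liberty commands (the server name is hypothetical, and in the FATs this sequence is driven automatically by the framework):

```
# Take a checkpoint after the application starts but before ports are opened
bin/server checkpoint myServer --at=afterAppStart

# Running the server again resumes it from the checkpoint image
bin/server run myServer
```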

In positive tests, the aim is to verify the successful completion of each step (checkpoint/restore) as expected. This involves ensuring that the expected response and status are received. The tests verify that the WorkManager establishes the expected, authenticated caller identity from the security context propagated by the resource adapter, ensuring correct and accurate identity propagation.

The negative tests focus on validating the proper handling of error scenarios. They verify that the WorkManager rejects a Work instance if the inflown security context returns a caller principal that is not in the application realm. Additionally, the tests check that the server logs contain the expected exceptions and, where applicable, that any expected FFDC events occur.

The tests are repeated for the EE8, EE9, EE10, and EE11 features. The test classes use a @ClassRule (CheckpointRule) to repeat each test class, performing checkpoint/restore with each EE version.
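As a hedged sketch of that repeat setup (the exact package and configuration methods of CheckpointRule in the Open Liberty FAT framework may differ from what is shown; the method references are hypothetical):

```java
// Illustrative only: repeats the test class, performing checkpoint/restore
// for each EE version being tested.
@ClassRule
public static CheckpointRule checkpointRule = new CheckpointRule()
        .setConsoleLogName(InboundSecurityTest.class.getSimpleName())
        .setServerStart(InboundSecurityTest::serverStart)       // hypothetical
        .setServerTearDown(InboundSecurityTest::serverTearDown); // hypothetical
```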

List of FAT projects affected

  1. com.ibm.ws.jca_fat

The tests below already exist in the FAT and are repeated to perform checkpoint/restore at the different EE levels. Features tested include jcaInboundSecurity-1.0, connectorsInboundSecurity-2.0, and connectors-2.1.

  • InboundSecurityTestRapid

  • InboundSecurityTest

Both of these test classes verify caller identity propagation by the resource adapter in different scenarios.
The tests ensure that authenticated identities are propagated correctly using CallerPrincipalCallback under the different work-submission methods (doWork, startWork, and scheduleWork). The test cases also verify the rejection of invalid security contexts, such as caller principals from non-application realms, multiple CallerPrincipalCallback instances, and authenticated subjects with conflicting credentials. They also cover scenarios where the security context lacks valid principals or provides null/empty values, and ensure proper handling of unauthenticated identities. Finally, the tests verify that the WorkManager uses the execution subject when no CallerPrincipalCallback is present, ensuring correct principal resolution.
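To make the mechanism concrete, a resource adapter inflows the caller identity by attaching a SecurityContext to a Work instance and supplying a CallerPrincipalCallback. The sketch below uses the standard Jakarta Connectors and Jakarta Authentication APIs, but the class name, user name, and error handling are hypothetical simplifications.

```java
import javax.security.auth.Subject;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;

import jakarta.resource.spi.work.SecurityContext;
import jakarta.security.auth.message.callback.CallerPrincipalCallback;

// Simplified sketch of a SecurityContext a resource adapter might attach
// to a Work instance to inflow the caller identity "joe" (hypothetical user).
public class SampleSecurityContext extends SecurityContext {
    @Override
    public void setupSecurityContext(CallbackHandler handler,
                                     Subject executionSubject,
                                     Subject serviceSubject) {
        try {
            // The WorkManager-supplied handler establishes "joe" as the
            // caller; the Work is rejected if the principal is not in the
            // application realm.
            handler.handle(new Callback[] {
                new CallerPrincipalCallback(executionSubject, "joe")
            });
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```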
