
Update quality-checks.md #265

Open
wants to merge 2 commits into base: main

Conversation

AndrewBell81
Added BrowserStack and suggested config for endorsed accessibility test tools

@AndrewBell81 AndrewBell81 requested a review from a team as a code owner July 15, 2022 15:42
@sonarqubecloud

Kudos, SonarCloud Quality Gate passed!

0 Bugs (A)
0 Vulnerabilities (A)
0 Security Hotspots (A)
0 Code Smells (A)

No Coverage information
No Duplication information

@andyblundell (Contributor) left a comment

Hi - can we clarify the conditions we think should apply to tests passing or failing? (Not the contents of your specific tests, but the things you're testing against?)

For example, are we expecting tests to fail if certain UI elements don't exist under some browsers, or for tests to fail if buttons don't take you to the correct destination, etc, etc?

I think this is what teams are missing: a clear steer on what "good" really means for cross-browser testing.

Thanks!

@AndrewBell81 (Author)

> Hi - can we clarify the conditions we think should apply to tests passing or failing? (Not the contents of your specific tests, but the things you're testing against?)
>
> For example, are we expecting tests to fail if certain UI elements don't exist under some browsers, or for tests to fail if buttons don't take you to the correct destination, etc, etc?
>
> I think this is what teams are missing: a clear steer on what "good" really means for cross-browser testing.
>
> Thanks!

Hi Andy, in NHSUK we use Chrome as the default browser when running our tests. We expect the same test to pass on almost any BrowserStack combination of device, OS or browser. This is because we use the NHSUK front-end library, so the UI presented should not differ between devices/browsers other than screen size/scale etc. The same applies to the elements that appear on the page and to end-to-end journeys.

Our automation tests still run via our framework; BrowserStack just gives us the ability to run them against different browsers/devices. BrowserStack itself isn't actually running any tests. We just use their interface to run our own tests.
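
For context, this is roughly what that setup can look like. A minimal sketch, assuming Selenium 4 with Python; the capability matrix, the environment variable names and the use of ChromeOptions as a generic capability carrier are illustrative assumptions, not the team's actual configuration:

```python
import os

from selenium import webdriver
from selenium.webdriver.chrome.options import Options as ChromeOptions

# Example device/OS/browser combinations we would expect the same tests to pass on.
BROWSER_MATRIX = [
    {"browserName": "Chrome", "os": "Windows", "osVersion": "11"},
    {"browserName": "Safari", "os": "OS X", "osVersion": "Monterey"},
    {"browserName": "Edge", "os": "Windows", "osVersion": "10"},
]


def make_driver(combo=None):
    """Return a local Chrome driver by default, or a BrowserStack remote
    driver for the given device/OS/browser combination."""
    if combo is None:
        # Default run: plain local Chrome.
        return webdriver.Chrome()

    options = ChromeOptions()
    options.set_capability("browserName", combo["browserName"])
    options.set_capability(
        "bstack:options",
        {
            "os": combo["os"],
            "osVersion": combo["osVersion"],
            "userName": os.environ["BROWSERSTACK_USERNAME"],
            "accessKey": os.environ["BROWSERSTACK_ACCESS_KEY"],
        },
    )
    # BrowserStack only hosts the browser; the test framework still drives it
    # through the remote WebDriver endpoint.
    return webdriver.Remote(
        command_executor="https://hub-cloud.browserstack.com/wd/hub",
        options=options,
    )
```

Each entry in the matrix can then be looped over (or parameterised in the test runner) so the identical test body runs against every browser/device pair.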

@stefaniuk (Contributor) commented Jul 20, 2022

There are probably a couple of things bundled in this PR, both good I think. They are as follows:

  1. Accessibility testing using BrowserStack
  2. Recommended browser compatibility testing

With the above in mind:

  • Do we have an opinion on the tooling that can be integrated with the BrowserStack subscription to establish clear passing/failing quality gates? E.g. Axe. If so, it sounds like a good candidate for a recipe book (see the sketch below)
  • Do we have a list of browsers that we could name explicitly and that we would like/need to test against? E.g. any further reference to NHSD policy etc.
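
A rough sketch of the kind of pass/fail gate the first bullet points at, assuming axe-core is driven through the axe-selenium-python package; the tool choice, the severity threshold and the example URL are assumptions, not an endorsed configuration:

```python
from axe_selenium_python import Axe
from selenium import webdriver


def assert_page_is_accessible(driver, url):
    """Fail the calling test if axe-core finds serious/critical violations."""
    driver.get(url)

    axe = Axe(driver)
    axe.inject()          # inject the axe-core script into the page
    results = axe.run()   # run the audit in the browser

    blocking = [
        v for v in results["violations"]
        if v.get("impact") in ("serious", "critical")
    ]
    assert not blocking, axe.report(blocking)


if __name__ == "__main__":
    # Works against a local Chrome session or a BrowserStack remote driver alike.
    driver = webdriver.Chrome()
    try:
        assert_page_is_accessible(driver, "https://www.nhs.uk/")
    finally:
        driver.quit()
```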
