Create list of supported browsers to test for each repo #136

cfarm opened this issue Sep 26, 2017 · 7 comments


cfarm commented Sep 26, 2017

Cfgov-refresh and other projects have this list in the config; we could add this to the readme or other docs (M), as long as it's up to date!

cfarm changed the title from "List of supported browsers to test in a repo - M" to "List of supported browsers to test in a repo" on Sep 26, 2017
cfarm changed the title from "List of supported browsers to test in a repo" to "Create list of supported browsers to test for each repo" on Nov 14, 2017
Scotchester (Contributor) commented:

Suggestion: Point to a repo's .browserslistrc from its CONTRIBUTING.md file, and include an updated description of how we expect contributions to be tested.

ErieMeyer commented:

I realize this is old, but would love to peg the standards of support to the usage data here: https://analytics.usa.gov/

Is anyone aware of a current policy on this?


anselmbradford commented Apr 4, 2022

@ErieMeyer I'm so glad you flagged this. A pipe dream I've had for a while is that we base our browser support on our actual visitor metrics, rather than on intermittent, ad-hoc scrutiny of our visitor data.

For some background, we have traditionally written our code to use modern features and then run it through a build process that transpiles it down to older syntax to support older browsers. At various stages through the years we have decided to drop support for legacy browsers. We have a "no-JS" state we deliver to these browsers, which provides simplified experiences for various parts of the site, depending on the complexity of the app or feature being delivered. (One caveat is that analytics JavaScript is still delivered in the no-JS state, so we still need to handle how that's written for legacy browsers.)

The traditional way to handle this workflow has been to use Babel in our build system and feed it a browserslist file that specifies which browsers we support for a particular script or app on the site. These files are a relatively clean, human-readable format for specifying browser support, with queries such as last 2 versions, which targets the latest two versions of every major browser.
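For illustration only (not our actual config), a minimal .browserslistrc along those lines might look like:

# Hypothetical .browserslistrc: cover the two most recent versions of each
# major browser, skipping browsers that are effectively dead.
last 2 versions
not dead

Each line is a query, and the resulting file doubles as documentation of the support policy.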

Taking this idea further, I looked into using actual browser usage metrics in coordination with a browserslist file. In this setup, a data file of browser visitor percentages (like this browserslist-stats file) can be referenced from a browserslist file with a query like:

> 0.5% in @cfpb/browserslist-config

which says: aim to support whichever browsers account for more than 0.5% of visitors in our stats.
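If it helps picture it, my understanding is that such a stats file is just a JSON map of browser, to version, to percentage of our visitors, roughly like this (the numbers here are invented):

{
  "chrome": { "96": 22.1, "97": 14.8 },
  "safari": { "14.1": 6.3, "15.2": 9.7 },
  "ie": { "11": 0.5 }
}

A "> 0.5%" query evaluated against that data would then include or exclude browser versions based on our own traffic rather than global usage.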

So as things currently stand, our support continues to be ad-hoc, driven by the overall team's appetite to drop support based on where the trends appear to be heading, hopefully with the backing of some real data. Our continued support of legacy browsers, beyond their reasonable real-world usage, carries a potentially significant maintenance burden. We have forked whole third-party projects (e.g. https://github.com/cfpb/britecharts/) in order to support legacy IE, and we will be blocked from using the next version of React for high-profile apps if we continue to support IE.

Wyatt @wpears has recently done work to modernize our build system (here). This is perhaps an opportunity to re-evaluate how that system handles our browser support. Fundamentally, I think basing our support and maintenance efforts on actual visitor metrics makes profoundly good sense. In an ideal setup, I think this would require a mechanism for getting our visitor data into a machine-readable format we can access and parse, plus the staffing to build out such a system. The team can correct me, but my guess is that, as a whole, the team has very little insight into what our browser metrics currently are or where to easily find that info.


wpears commented Apr 5, 2022

My thinking in this area has recently evolved, and I no longer think we should be overly concerned with live browser stats, particularly if we decide to serve IE11 the no-JS experience. Let me explain:

Recently, I got access to our Digital Analytics Program account, which is the analytics account that feeds CFPB data to the government-wide stats page at analytics.usa.gov. Here's what our browser usage looks like for all of 2022:

[screenshot: 2022 browser usage breakdown from the Digital Analytics Program]

The USWDS follows the 2% rule, meaning they officially support browsers with >=2% usage. Looking at the above stats, for us that would definitively be Chrome, Safari, and Edge, with Samsung Internet and Firefox just sneaking in. Notably absent is Internet Explorer, with total usage in 2022 of 0.77%.

Now, the picture is more complicated if you look closer, because each of the browsers above with >=2% visitor share is actually a constellation of browser versions, each with its own feature-support matrix. Crucially, though, all of the browsers in the >=2% support window are auto-updating, meaning users will get bumped to browsers with good support for modern features (and even when IT departments pin people to, e.g., slightly older versions of Chrome, those are far enough ahead in the support space that this doesn't present practical problems). This means we can count on ECMAScript 6 support in all our supported browsers.

My proposal, then, is to use esbuild with its target set to ES6, transforming especially modern features back to still-quite-modern JavaScript, and dropping JS support for IE11. This would allow us to remove several polyfills that currently ship in our bundles by default and weigh down pages and slow down startup times for the >99% of users who don't need them. We'd then mark IE11 as no-JS based on feature detection and ship it the more basic experience.
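To make that concrete, a rough sketch of what the esbuild call could look like (the entry point and output paths here are made up, not our real ones):

// Rough sketch: bundle with esbuild, transpiling newer syntax down to an
// ES2015 (ES6) floor. IE11 is intentionally not a target.
const esbuild = require('esbuild');

esbuild
  .build({
    entryPoints: ['unprocessed/js/main.js'], // hypothetical entry point
    bundle: true,
    minify: true,
    target: 'es2015', // the "es6" floor described above
    outdir: 'static_built/js', // hypothetical output directory
  })
  .catch(() => process.exit(1));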

A very important corollary here is that this means more app-like pages will need a decent no-JS experience. Off the top of my head, the rental assistance finder should be updated to ship a simple, unfilterable list of programs on no-JS instead of the current red box of sadness. This isn't trivial given the current implementation, but it's worthwhile and will also benefit existing users who already get no-JS (IE10 and below, and people who simply have JS off).

In general, I think this simplifies our browser support story to two very tangible poles, "latest & greatest" and "simple", reducing the mental, testing, and build-machinery overhead of supporting a wider array of browsers at a wider array of support levels. It also unlocks a more classic "progressive enhancement" approach, where we build the simple experience first and then load the interactivity on top of it. Given our thick Django backend, I think we're leaving something on the table by not doing this, as that's one of its primary benefits (imo) over a static site.
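As one possible shape for the feature-detection piece (purely illustrative; the specific checks and bundle path are made up), a small inline "cut the mustard" script could decide whether a visitor gets the enhanced bundle or stays on the simple experience:

// Illustrative "cut the mustard" check. Symbol and fetch are modern features
// IE11 lacks, so it (and anything older) never loads the enhanced bundle and
// simply keeps the no-JS experience.
if (typeof Symbol !== 'undefined' && 'fetch' in window) {
  var script = document.createElement('script');
  script.src = '/static/js/main.js'; // hypothetical bundle path
  script.defer = true;
  document.head.appendChild(script);
}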

This overall approach can be summed up as: everyone gets something useful, and most people get the same rigorously designed and tested experience that devs have on their laptops. That seems like not only a more pleasant experience for devs but also, crucially, a better experience for all our users, while leaving no one who needs help out in the cold.

Thoughts?


wpears commented Apr 5, 2022

And something I didn't specifically address above, but which is germane to your two comments: Internet Explorer's share very likely won't increase over time, so tying support to specific, real-time visitor stats doesn't buy us a whole lot in a world of ES6+ and auto-updating browsers. I'd say we can periodically re-evaluate the esbuild target, but that will only slightly affect bundle size, so I don't think it's something we need automated machinery to enable.

anselmbradford (Member) commented:

I think having the dichotomy of a JS/no-JS experience is great for simplifying the burden. My affinity for the live data comes down to this: is there a way we can move from repeatedly discussing support to a set of rules we can refer to over time?

Say we follow the 2% rule (though USWDS appears to support a wider set). We would then need to know the matrix of browsers the site supports as currently built, and the matrix of visitor browsers above 2% (averaged over a quarter, a year, whatever). Our aim for the JS experience would then be to have those two matrices overlap as closely as possible.

A thing I like about Babel is that you can output the targets at build time (via the debug flag), e.g.:

Using targets:
{
  "android": "97",
  "chrome": "96",
  "edge": "96",
  "firefox": "78",
  "ie": "11",
  "ios": "12.2",
  "opera": "82",
  "safari": "14.1",
  "samsung": "15"
}

This also outputs which polyfills would be required to satisfy the full set of targets (note that in this case we do not automatically include those polyfills, but having the list available is helpful for catching potential bugs).
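For reference, that output comes from preset-env's debug option; a minimal babel.config.js sketch (illustrative, not our actual config) would be:

// Minimal babel.config.js sketch: with debug on, @babel/preset-env logs the
// resolved targets (the "Using targets:" block above) and its plugin/polyfill
// decisions at build time.
module.exports = {
  presets: [
    [
      '@babel/preset-env',
      {
        debug: true, // print resolved targets and polyfill info during the build
        useBuiltIns: false, // matches the note above: don't inject polyfills automatically
      },
    ],
  ],
};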

esbuild likely has an analogous way to do this.

The part I'm missing is that I'm not sure we have similarly accessible output of visitor data available to everyone who is working on the codebase.

When we get bugs, we should be able to answer whether they fall within our support or not. We will (and do) get error reports from visitors in the less-than-2% bucket, but the question there should be how we funnel them into the no-JS experience, rather than whether the bug should be fixed in the JS experience.

We should also be able to answer whether our documented browser support (linked in several places) is current at any given time. As it stands, that documentation is a snapshot in time; our real support is whatever the build spits out against actual visitors, not what we say it is.


chosak commented Apr 5, 2022

"I'm not sure we have similarly accessible output of visitor data available to everyone who is working on the codebase"

Visitor browser data is available via New Relic, which should be accessible to all developers via a query like

SELECT count(*) FROM PageView WHERE (entityGuid = ) AND (deviceType = 'Desktop') FACET userAgentName, userAgentVersion, userAgentOS SINCE 30 days AGO EXTRAPOLATE

This gives you results like the following, which can also be exported as JSON:

[screenshot: query results broken down by user agent name, version, and OS]

@anselmbradford we've discussed in the past creating an automated job that pulls these statistics and puts them somewhere; would that be sufficient for your use case?
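For what it's worth, a rough sketch of what that job might look like (the account ID, API key handling, and output destination are all assumptions; the NRQL mirrors the query above minus the entity filter):

// Hypothetical scheduled job: pull browser usage from New Relic via NerdGraph
// and dump it somewhere the team (or a browserslist-stats-style file) can use.
// Assumes NEW_RELIC_API_KEY and NEW_RELIC_ACCOUNT_ID env vars, and Node 18+ for fetch.
const NRQL =
  'SELECT count(*) FROM PageView FACET userAgentName, userAgentVersion, userAgentOS SINCE 30 days AGO';

async function fetchBrowserStats() {
  const response = await fetch('https://api.newrelic.com/graphql', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'API-Key': process.env.NEW_RELIC_API_KEY,
    },
    body: JSON.stringify({
      query: `{
        actor {
          account(id: ${process.env.NEW_RELIC_ACCOUNT_ID}) {
            nrql(query: "${NRQL}") { results }
          }
        }
      }`,
    }),
  });
  const json = await response.json();
  return json.data.actor.account.nrql.results;
}

fetchBrowserStats().then((results) => {
  // e.g. write to a stats file or publish somewhere the whole team can see
  console.log(JSON.stringify(results, null, 2));
});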
