Create list of supported browsers to test for each repo #136
Suggestion: Point to a repo's
I realize this is old, but I would love to peg the standards of support to the usage data here: https://analytics.usa.gov/. Anyone aware of this policy currently?
@ErieMeyer I'm so glad you flagged this. A pipe dream I've had for a while is that we base our browser support on our actual visitor metrics, rather than on ad-hoc, intermittent scrutiny of our visitor data.

For some background, we have traditionally written our code to use modern features and then run it through a build process that transpiles it down to older syntax to support older browsers. At various stages through the years we have decided to drop support for legacy browsers. We have a "no-JS" state we deliver to these browsers, which provides simplified experiences for various parts of the site, depending on the complexity of the app or feature being delivered. (One caveat: analytics JavaScript is delivered even in the no-JS state, so we still need to handle how that's written for legacy browsers.)

The traditional way to handle this workflow has been to use Babel in our build system and feed it a browserslist file that specifies what browsers we support for a particular script or app on the site. These files look like this and provide a relatively clean, human-readable format for specifying browser support.

Taking this idea further, I looked into using actual browser usage metrics in coordination with a browserslist file. In this setup, a data file of browser visitor percentages (like this browserslist-stats file) can be referenced from a browserslist file, like this.

So as things currently stand, our support continues to be ad-hoc, driven by the overall team's appetite to drop support based on where the trends appear to be heading, hopefully with the backing of some real data. Our continued support of legacy browsers, beyond their reasonable real-world usage, potentially carries a significant maintenance burden. We have forked whole 3rd-party projects (e.g. https://github.com/cfpb/britecharts/) in order to support legacy IE, and we will be blocked from using the next version of React for high-profile apps if we continue to support IE.
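The actual browserslist files linked above weren't captured in this thread. As an illustration only (these queries are hypothetical, not cf.gov's real config), such a file generally looks like:

```
# .browserslistrc (illustrative; not the actual cf.gov config)
last 2 versions
not dead
not ie 11

# With a browserslist-stats.json usage-data file alongside it,
# support can instead be pegged to real visitor share:
# > 0.5% in my stats
```

The `in my stats` form is what lets a browserslist query read from a custom usage-data file rather than global caniuse statistics.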
Wyatt @wpears has recently done work to modernize our build system (here). This is perhaps an opportunity to re-evaluate how that system handles our browser support.

Fundamentally, I think basing our support and maintenance efforts on actual visitor metrics makes profoundly good sense. In an ideal setup, this would require a mechanism for getting our visitor data into a machine-readable format that we could access and parse, plus the staffing to build out such a system. The team can correct me, but my guess would be that, as a holistic unit, the team has very little insight into what our browser metrics currently are or where to easily find that info.
My thinking in this area has recently evolved, and I no longer think we should be overly concerned with live browser stats, particularly if we decide to serve IE11.

Recently, I got access to our Digital Analytics Program account, which is the analytics account that feeds CFPB data to the government-wide stats page at analytics.usa.gov. Here's what our browser usage looked like for all of 2022:

The USWDS follows the 2% rule, meaning they officially support browsers with >=2% usage. Looking at the above stats, for us that would be definitively Chrome, Safari, and Edge, with Samsung Internet and Firefox just sneaking in. Notably absent is Internet Explorer, with total 2022 usage of 0.77%.

Now, the picture is more complicated if you look more closely, because each of the above browsers with >=2% visitor share comprises a constellation of browser versions, each with its own feature support matrix. Crucially, though, all of the browsers in the >=2% support window are auto-updating, meaning users will get bumped to browsers with good support for modern features. (Even when IT departments pin people to, e.g., slightly older versions of Chrome, Chrome is far enough ahead in the support space that this doesn't present practical problems.) This means we can count on ECMAScript 6 support in all our supported browsers.

My proposal, then, is to use

A very important corollary here is that this means more app-like pages will need a decent

In general, I think this simplifies our browser support story to two very tangible poles: "latest & greatest" and "simple", reducing the mental, testing, and build-machinery overhead of supporting a wider array of browsers across a wider array of support levels. This also unlocks a more classic "progressive enhancement" approach, where we can build the simple experience first and then load the interactivity on top of it.
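The concrete mechanism behind the proposal was elided above. One well-known way to draw the support line at ES-module-capable (roughly ES6+) browsers, offered here purely as an assumed illustration rather than the actual proposal, is the `type="module"` script tag, which legacy browsers such as IE11 simply ignore:

```html
<!-- Served to everyone, but only ES-module-capable (ES6+) browsers
     will fetch and execute it; IE11 skips it entirely. -->
<script type="module" src="/static/js/app.js"></script>

<!-- Everything needed for the "simple" experience lives in the
     server-rendered markup itself, so legacy browsers fall back to
     the no-JS state by default. -->
```

This gives exactly the two poles described: one untranspiled-ish modern bundle, and a markup-only simple experience, with no intermediate legacy builds.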
Due to our thick Django backend, I think we're leaving something on the table by not doing this, as progressive enhancement is one of its primary benefits (imo) over a static site. The overall approach can be summed up as: everyone gets something useful, and most people get the same rigorously designed and tested experience that devs have on their laptops. This seems like not only a more pleasant experience for devs but also, crucially, a better experience for all our users, while leaving no one who needs help out in the cold. Thoughts?
And something I didn't specifically address above, but that is germane to your two comments: Internet Explorer's share very likely won't increase over time, meaning tying support to specific, real-time visitor stats doesn't buy us a whole lot in a world of ES6+ and auto-updating browsers. I'd say we can periodically reevaluate the
Having the dichotomy of a JS/no-JS experience, I think, is great for simplifying the burden. My affinity for the live data comes from this: is there a way we can move from discussing support case by case toward a set of rules we can refer to over time? Say we follow the 2% rule (though USWDS appears to support a wider set); we would then need to know the matrix of support for the site as it is currently built and the matrix of visitors above 2% (averaged over a quarter, a year, whatever). Our JS-experience aim would then be to have those two matrices overlap as nearly as possible.

A thing I like in Babel is that you can output the targets at build time (through the debug flag). E.g.:
This also outputs what polyfills would be required to satisfy the full set of targets. (Note that in this case we do not automatically include those polyfills, but having the list available is helpful for catching potential bugs.) esbuild likely has an analogous way to do this.

The part I'm missing is that I'm not sure we have similarly accessible output on visitor data, available to everyone working on the codebase. When we get bugs, we should be able to answer whether they fall within our support or not. We will (and do) get error reports from visitors in the less-than-2% bucket, but the question there should be how we funnel them into the no-JS experience, not whether the bug should be fixed in the JS experience. We should also be able to answer whether our documented browser support (i.e. here here here) is current at any given time. As it is, that's a snapshot in time: our support is whatever the build spits out against actual visitors, not what we say it is.
Visitor browser data is available via New Relic, which should be accessible to all developers via a query like
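The query itself was elided here. As an assumed illustration (the event and attribute names below are New Relic Browser's standard `PageView` attributes, not necessarily the exact query used), an NRQL query of this general shape might look like:

```sql
-- Illustrative NRQL; attribute names assumed, not the original query.
SELECT count(*) FROM PageView
FACET userAgentName, userAgentVersion
SINCE 3 months ago
LIMIT 100
```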
This gives you results like this, which can also be exported as JSON.

@anselmbradford we've discussed in the past creating an automated job that pulls these statistics and puts them somewhere; would that be sufficient for your use case?
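A job like the one proposed could feed straight into the browserslist-stats.json format mentioned earlier in the thread. As a hedged sketch (the input shape and function name are hypothetical, not New Relic's actual export format), converting raw visit counts into that percentage format might look like:

```javascript
// Convert raw browser/version visit counts (e.g. exported from an
// analytics query) into the percentage shape browserslist-stats.json
// expects. The input shape here is an assumption for illustration.
function toBrowserslistStats(counts) {
  // counts: { "Chrome": { "110": 3400, "109": 1200 }, ... }
  let total = 0;
  for (const versions of Object.values(counts)) {
    for (const n of Object.values(versions)) total += n;
  }
  const stats = {};
  for (const [browser, versions] of Object.entries(counts)) {
    // browserslist uses lowercase browser ids, e.g. "chrome", "ie".
    const id = browser.toLowerCase();
    stats[id] = {};
    for (const [version, n] of Object.entries(versions)) {
      // Percentage of total visits, rounded to two decimal places.
      stats[id][version] = +((n / total) * 100).toFixed(2);
    }
  }
  return stats;
}

// Example: 75 Chrome 110 visits and 25 Firefox 109 visits.
console.log(toBrowserslistStats({ Chrome: { "110": 75 }, Firefox: { "109": 25 } }));
```

Writing that output to `browserslist-stats.json` on a schedule would let `in my stats` queries track real visitor share automatically, which is roughly the automation being discussed.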
cfgov-refresh and other projects have this list in the config; we could add this to the README or other docs, as long as it's up to date!