Fenced frames with local unpartitioned data access #975

Closed

shivanigithub opened this issue Jul 10, 2024 · 5 comments

@shivanigithub

I'm requesting a TAG review of Fenced Frames with local unpartitioned data access.

Overview of proposal
There are situations in which it is helpful to personalize content on pages with cross-site data, such as knowing whether a user has an account with a third-party service, whether a user is logged in, displaying the last few digits of a user’s credit card to give them confidence that the check-out process will be seamless, or a personalized sign-in button. These sorts of use cases will be broken by third-party cookie deprecation (3PCD). Fenced frames are a natural fit for such use cases, as they allow for frames with cross-site data to be visually composed within a page of another partition but are generally kept isolated from each other.
The idea proposed here is to allow fenced frames to have access to the cross-site data stored for the given origin within shared storage. In other words, a payment site could add the user’s payment data to shared storage when the user visits the payment site, and then read it in third-party fenced frames to decorate their payment button.
Today’s fenced frames prevent direct communication with the embedding page via the web platform, but they have network access, allowing for data joins to occur between colluding servers. Since the fenced frame in this proposal would have unfettered access to the user’s cross-site data, we cannot allow it to talk to untrusted networks at all once it has been granted that access. Therefore, we require that the fenced frame call window.fence.disableUntrustedNetwork() before it can read from shared storage.
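A rough sketch of that flow, based on the explainer (the key name, element ID, and exact promise semantics here are illustrative rather than normative):

```js
// On the payment provider's own site (first-party visit):
// write the data that will later personalize the button.
await window.sharedStorage.set('card-last4', '1234');

// Later, inside the provider's fenced frame embedded on a merchant page:
// untrusted network access must be revoked before unpartitioned reads are allowed.
await window.fence.disableUntrustedNetwork();

// Only once the promise above resolves can shared storage be read directly.
const last4 = await window.sharedStorage.get('card-last4');
document.getElementById('pay-button-label').textContent =
    last4 ? `Pay with card ending in ${last4}` : 'Pay';
```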
The driving motivation for this variant of fenced frames is customized payment buttons for third-party payment service providers (as discussed in this issue), but the proposal is not restricted to payments, and we anticipate that many other content personalization use cases will be found over time.

Further details:

  • [✓] I have reviewed the TAG's Web Platform Design Principles
  • The group where the incubation/design work on this is being done (or is intended to be done in the future): WICG
  • The group where standardization of this work is intended to be done ("unknown" if not known): WHATWG HTML Standard
  • Existing major pieces of multi-implementer review or discussion of this design: None yet
  • Major unresolved issues with or opposition to this design: None
  • This work is being funded by: Google Privacy Sandbox

Security and Privacy questionnaire based on https://www.w3.org/TR/security-privacy-questionnaire/

  1. What information might this feature expose to Web sites or other parties, and for what purposes is that exposure necessary?

    Fenced frames can be viewed as a more private and restricted iframe. A fenced frame with unpartitioned data access is allowed to read unpartitioned data from shared storage in order to show personalized information to the user, e.g. a personalized payment button as described in the explainer. Existing fenced frame functionality already disables communication from the fenced frame to the embedding context; to access the unpartitioned data, the fenced frame is additionally required to disable network communications, with exceptions such as Private Aggregation reports, as described in the explainer.

  2. Do features in your specification expose the minimum amount of information necessary to enable their intended uses?

    Yes, see above answer for ways information exposure is minimized.

  3. How do the features in your specification deal with personal information, personally-identifiable information (PII), or information derived from them?

    Any unpartitioned data that the fenced frame reads, if it contains PII, is not exfiltrated out of the fenced frame.

  4. How do the features in your specification deal with sensitive information?

    Same answer as # 3.

  5. Do the features in your specification introduce a new state for an origin that persists across browsing sessions?

    No.

  6. Do the features in your specification expose information about the underlying platform to origins?

    No

  7. Does this specification allow an origin to send data to the underlying platform?

    No

  8. Do features in this specification allow an origin access to sensors on a user’s device

    No

  9. What data do the features in this specification expose to an origin? Please also document what data is identical to data exposed by other features, in the same or different contexts.

    Same answer as # 1.

  10. Do features in this specification enable new script execution/loading mechanisms?

    No

  11. Do features in this specification allow an origin to access other devices?

    No

  12. Do features in this specification allow an origin some measure of control over a user agent’s native UI?

    No

  13. What temporary identifiers do the features in this specification create or expose to the web?

    None.

  14. How does this specification distinguish between behavior in first-party and third-party contexts?

    Fenced frames are always embedded frames; they never act as a top-level (first-party) context.

  15. How do the features in this specification work in the context of a browser’s Private Browsing or Incognito mode?

    No difference from regular (non-Incognito) browsing mode.

  16. Does this specification have both "Security Considerations" and "Privacy Considerations" sections?

    Yes, privacy considerations and security considerations.

  17. Do features in your specification enable origins to downgrade default security protections?

    No

  18. How does your feature handle non-"fully active" documents?

Based on https://www.w3.org/TR/design-principles/#support-non-fully-active:

  • There is no user interaction with the fenced frame in a non-fully-active document.
  • There is no cross-document interaction/resource sharing possible (e.g. holding locks) in a fenced frame.
  • There is no expectation that the unpartitioned data read from within a fenced frame should be available when the document is restored.
  19. What should this questionnaire have asked?

    N/A

@hober
Contributor

hober commented Aug 20, 2024

Recent discussion on #838 (the overall design review of Fenced Frames) covered the "last 4 digits" payment use case. Would you like us to move discussion of that particular use case to this issue? If so, how would you like us to down-scope issue #838?

torgo added the Topic: privacy, privacy-tracker, and Focus: Privacy (pending) labels on Aug 29, 2024
torgo assigned torgo, martinthomson, and jyasskin and unassigned torgo on Aug 29, 2024
@shivanigithub
Author

> Recent discussion on #838 (the overall design review of Fenced Frames) covered the "last 4 digits" payment use case. Would you like us to move discussion of that particular use case to this issue? If so, how would you like us to down-scope issue #838?

Yes, I think focusing this issue on "local unpartitioned data access" makes sense, since that's a major piece of added functionality on top of existing fenced frames.
With that, #838 can then be scoped to the fenced frame API and its core design pieces, including but not limited to its treatment as a separate browsing context, its navigation using the FencedFrameConfig object, etc.
Please let me know if any more clarification is needed on this, thanks!

@shivanigithub
Author

To give some more context on the 3 fenced frames TAG reviews so far:

Two of those reviews were created for the cases where fenced frames support use cases requiring a src that should be hidden from the embedder. Examples of these use cases are Protected Audience and selectURL.

And this TAG review specifically focuses on rendering with local unpartitioned data access.

If it is recommended to instead converge this issue with #838 to review all use cases together, we could do that.

@jyasskin
Contributor

The TAG agrees that it would be useful to enable the user-focused use case here. Specifically, the web is in the situation that sites show a list of third-party providers, each of which might or might not be able to help the user sign in, pay, or perform some other function on the main site. The user may not remember which of those providers they've stored the relevant data with, and it's frustrating to click one of these buttons only to find that it can't help you. In a browser with 3p cookies, those buttons can give an indication of what data they have access to, but as browsers phase that out, this sort of button can no longer provide these capabilities. It seems useful to try to prevent that frustration, if doing so is possible without confusing users about what identity they've already presented to the main site.

The explainer is very unclear about whether that use case is actually the fundamental goal of this proposal. If the explainer is literally correct that the goal is to "decorate a third-party widget with cross-site information about the user", we think that's very likely to be a harmful goal and incompatible with our work on privacy principles for the web.

Even if we've correctly understood the use case, we think the proposed solution in this feature makes it too hard for users to correctly infer who already has access to that information. If a user incorrectly infers that the containing site already knows their identity, they're more likely to then "agree" to share their identity with the site, violating the privacy principle on identity. The two concrete examples of when to use this feature appear to be causing this sort of mistake in practice, whether or not their designers intended the deception.

For example, Google Accounts presents a login chip on a number of websites (such as Reddit). Some versions of this chip show your Google account name, profile image, and email address. Several members of the TAG have concluded from this UI that they had already used Google to log into a site, even though they hadn't. We then clicked through the login chip, creating a connection between Google and the site that we hadn't intended or wanted. Even if it wasn't intentional on the part of the UI designers, this had the effect of reducing our autonomy. FedCM seems like a better solution for login than letting the providers embed cross-site data.

Google Pay implemented a button that presents the last four digits of a credit card, taken from the last transaction with that service, even if the transaction was elsewhere on the web. This greatly improved the rates at which people completed a purchase. However, we're concerned that, like in the login case, this increase in purchases might be happening because users incorrectly concluded that they'd already bought something from the active site, and we haven't seen UX research that explored users' beliefs in this case. Further work on Payment Handlers might be a better way to expose this sort of hint.

We don't mean to imply that Google is unusual in these practices. These techniques lead to better business outcomes for websites and their service providers, and it's perhaps unsurprising that neither group has checked what fraction of users are getting the outcomes they want. But we need that evidence before considering this UI in user agents. And these are just the relatively benign cases: once a browser removes 3p cookies, truly malicious actors have a much stronger incentive to find ways to trick users into joining their identities (see some ideas in WICG/turtledove#990). This proposal doesn't analyze or protect against that risk.

One might argue that the proposal is ok because it just allows websites to give their users false beliefs, and a user has to still separately consent before their private information is released, but as far as we can tell, identities can be joined as soon as the user clicks, which isn't sufficient for the browser to know they've consented. Even if there were a separate consent screen, its task seems very difficult, needing to both explain what the user's being asked to consent to, and override anything the user's been convinced to believe about what information the site already has.

We suspect that embedding information from a different context inherently enables deception about the surrounding site's knowledge. Certainly embedded sites could do the work of explaining what information their embedders already have, but we've seen that the default behavior is not to do that, and we haven't seen either what malicious actors could do with this when motivated, or a creative analysis of the worst abuse cases. If you intend to pursue something akin to this general approach, a thorough analysis of the ways in which this might be abused and mitigated is essential.

We want to reiterate that the core use case seems valuable to solve, and we encourage you to keep trying to solve it. To do this safely, we suspect that you'll need to show the available information in browser UI, rather than inside the content area.

jyasskin added the Progress: review complete and Resolution: object (The TAG has resolved this should not proceed towards Recommendation) labels and removed the Progress: in progress label on Nov 27, 2024
@shivanigithub
Author

> One might argue that the proposal is ok because it just allows websites to give their users false beliefs, and a user has to still separately consent before their private information is released, but as far as we can tell, identities can be joined as soon as the user clicks, which isn't sufficient for the browser to know they've consented. Even if there were a separate consent screen, its task seems very difficult, needing to both explain what the user's being asked to consent to, and override anything the user's been convinced to believe about what information the site already has.

Could you elaborate on the understanding that a click would be sufficient for identities to be joined with the fenced frames solution?
It's an explicit design goal that the data read within the fenced frame cannot be exfiltrated to the embedding page or to the network. Post-click, the embedding context only knows that there was a click, and the action it takes would be independent of what was displayed in the fenced frame, e.g. opening a new pop-up or invoking a Payment Handler, etc.
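A minimal sketch of that post-click flow, assuming the click-notification mechanism described in the fenced frames explainers (window.fence.notifyEvent() inside the frame, surfaced to the embedder as a fencedtreeclick event); the element IDs and pop-up URL are illustrative:

```js
// Inside the fenced frame: forward only the fact that a click happened.
// The personalized content (e.g. the card digits) never leaves the frame.
document.getElementById('pay-button').addEventListener('click', (e) => {
  window.fence.notifyEvent(e);
});

// In the embedding page: react to the notification without learning
// anything about what was rendered inside the fenced frame.
document.querySelector('fencedframe').addEventListener('fencedtreeclick', () => {
  // e.g. open a checkout pop-up or invoke a Payment Handler here.
  window.open('https://payments.example/checkout', '_blank', 'popup');
});
```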
