From 30f7f9ef8caab26c3a3dde1b90a22eab482fc8dc Mon Sep 17 00:00:00 2001
From: "Dean H. Saxe - AWS Identity" <33666281+dhs-aws@users.noreply.github.com>
Date: Thu, 6 Jun 2024 15:20:25 -0700
Subject: [PATCH] Continued writing on the intro

---
 ...token-exchange-and-translation-protocol.md | 37 +++++++++++++++----
 1 file changed, 29 insertions(+), 8 deletions(-)

diff --git a/draft-saxe-wimse-token-exchange-and-translation-protocol.md b/draft-saxe-wimse-token-exchange-and-translation-protocol.md
index 68e2cb5..b4d0b10 100644
--- a/draft-saxe-wimse-token-exchange-and-translation-protocol.md
+++ b/draft-saxe-wimse-token-exchange-and-translation-protocol.md
@@ -59,34 +59,51 @@ informative:

 --- abstract

-The specification defines the processes of token exchange and token translation for workloads. Token exchange is well defined for OAuth 2.0 in RFC8693, allowing the exchange of access tokens, refresh tokens, id_tokens, and SAML assertions for new OAuth access or refresh tokens. However, for workloads, there exist a broad array of input and output token types which must be considered beyond the input types supported by RFC8693. These token types include, but are not limited to, SPIFFE SVIDs, x.509 certificates, Amazon sigv4A, macaroons, <...>. Further, these tokens may be encoded in formats including JWT, CBOR, and protocol buffers (protobufs). Given the variety and complexity of input and output token types and encoding, a strict token exchange that maintains all of the contextual information from the input token to the output token may not be possible. We define these potentially lossy conversions as "token translation" (e.g. information may be lost in translation). In this document we describe the process and mechanisms for token exchange, using the existing mechanisms in RFC8693, and a new set of translations between arbitrary token types. Additionally, we define mechanisms to enrich tokens during translation to support the use cases defined in .
+The specification defines the processes of token exchange and token translation for workloads. Token exchange is well defined for OAuth 2.0 in RFC8693, allowing the exchange of access tokens, refresh tokens, id_tokens, and SAML assertions for new OAuth access or refresh tokens. However, for workloads there exists a broad array of input and output token types which must be considered beyond the input types supported by RFC8693. These token types include, but are not limited to, SPIFFE SVIDs, X.509 certificates, Amazon sigv4A, macaroons, <...>. Further, these tokens may be encoded in formats including JWT, CBOR, and protocol buffers (protobufs). Given the variety and complexity of input and output token types and encodings, a strict token exchange that maintains all of the contextual information from the input token to the output token may not be possible. We define these non-RFC8693 use cases with potentially lossy conversions as "token translation" (i.e., information may be lost in translation). In this document we describe a workload profile for token exchange, using the mechanisms in RFC8693, and a new set of translations between arbitrary token types. Additionally, we define mechanisms to enrich tokens during translation to support the use cases defined in .

 --- middle

 # Introduction

-Define the need for token exchange & translation - refer to the use cases.
+TODO: What is a security token? What is an STS? (see https://datatracker.ietf.org/doc/html/rfc8693, the intro has great definitions)

-## Token Translation Endpoint
+TODO: Define the need for token exchange & translation - refer to the use cases.
+
+This specification defines a protocol for converting from one security token to another, with support for high-fidelity and lossy conversions. We refer to the high-fidelity exchange as "token exchange", as embodied in OAuth 2.0 Token Exchange (RFC8693).
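As a non-normative illustration of the RFC8693 exchange the intro refers to, the sketch below form-encodes a token exchange request body. The `grant_type` value and the parameter names (`subject_token`, `subject_token_type`, `requested_token_type`, `audience`) are defined by RFC8693; the helper name, token value, and audience URL are placeholders invented for this example, and a real client would POST this body to the authorization server's token endpoint.

```python
# Non-normative sketch of an RFC 8693 token exchange request body.
# The helper name, token value, and audience are illustrative placeholders.
from urllib.parse import urlencode

def build_exchange_request(subject_token: str, audience: str) -> str:
    """Form-encode the token exchange parameters defined in RFC 8693."""
    params = {
        # Fixed grant type identifier registered by RFC 8693.
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        # The security token the client wants to exchange.
        "subject_token": subject_token,
        # Identifier for the subject token's type (here, a JWT).
        "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
        # The type of token the client would like back.
        "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        # Logical name of the service where the new token will be used.
        "audience": audience,
    }
    return urlencode(params)

body = build_exchange_request("eyJhbGciOi...", "https://backend.example.com")
print(body)
```

In a real deployment this body would be sent as `application/x-www-form-urlencoded` to the token endpoint, which responds with the new token in a JSON document.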
We profile RFC8693 to enable OAuth token exchange for workloads where the output is an OAuth Access Token or Refresh Token. "Token translation" describes all other conversions, including those where data loss may occur during conversion. This protocol does not define the specifics of token translation between arbitrary token types; profiles must be defined to describe token translations between different token types, including any loss of context during translation. Where the input and output token are of the same type, the protocol herein is sufficient to meet the use cases defined in .

 ## Token Exchange vs. Token Translation

 TODO - define exchange vs. translation in terms of RFC8693 and WS-Trust. Translation may be perfect or introduce lost context

-## Lossy Translation
+Token translation fills a gap that workloads must reinvent today. For example, a common SPIFFE use case is for a Kubernetes workload to assume an AWS IAM role in order to access an S3 bucket.

-TODO - define what we mean by lossy. What's lost? Does this mean that some token translations lose valuable information?
-TODO - provide a specific lossy scenario and use case.
+Token translation accounts for different token types, formats, encodings, and encryption, allowing for translation between most, but not all, token types using token translation profiles. Profiles are not required when the input and output token are the same type. Not all token input/output pairs are expected to be profiled. During translation, the token translation service (TTS) may add, replace, or remove contextual data, including attestations, validity constraints, and subjects. Cryptographic operations on the tokens may be replaced or supplemented, such as by adding PQC algorithms to a token encrypted and signed with classical algorithms. For each use case defined in , this document defines the protocol requirements.
+
+Different token types, formats, encodings, crypto and public key
+Additional context or attestation added
+Replaced context or attestation
+Context or attestation removal
+Transaction tokens
+Validity constraints
+Identity change between domains
+
+
+## Token Translation Endpoint
+
+TODO - Define a new translation endpoint.

 ## Token Context Enrichment

 TODO - what context do we enrich tokens with during translation? Embedding tokens, attestations, assertions, validity, change/add subject, sender constraints. This doc can give specific guidance on adding context to a scoped set of token types that are common.

-## Token Translation Profiles
+## Lossy Translation

-TODO - this draft does not define normative specs for translating from arbitrary format to another arbitrary format. Profiles describing specific token translations must be developed and their names (possibly?) registered with IANA. Profiles will define any losses that may occur during translation and the risks associated with the loss of context. Not all token pairs can be translated, some may only be translatable in one direction.
+TODO - define what we mean by lossy. What's lost? Does this mean that some token translations lose valuable information?
+TODO - provide a specific lossy scenario and use case.

+## Token Translation Profiles
+
+TODO - this draft does not define normative specs for translating from one arbitrary format to another arbitrary format. Profiles describing specific token translations must be developed and their names (possibly?) registered with IANA. Profiles will define any losses that may occur during translation and the risks associated with the loss of context. Not all token pairs can be translated; some may only be translatable in one direction.

 # Conventions and Definitions

@@ -102,6 +119,10 @@ TODO Security

 This document has no IANA actions.

+# Appendices
+
+## Appendix 1 - Non-normative token exchange examples
+
 --- back
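The patch stubs out an appendix of non-normative examples and leaves "lossy" undefined pending the TODOs above. Purely to illustrate the concept the introduction describes, the toy sketch below models a translation profile as a mapping from input claims to the claims an output token type can carry, with anything the output type cannot represent recorded as lost context. Every name here (`translate`, `TranslationResult`, the claim names, the supported set) is hypothetical and not defined by the draft.

```python
# Hypothetical illustration of a lossy token translation (names invented
# for this sketch; the draft defers real semantics to translation profiles).
from dataclasses import dataclass, field

@dataclass
class TranslationResult:
    output_claims: dict
    dropped: dict = field(default_factory=dict)  # context lost in translation

def translate(input_claims: dict, supported: set) -> TranslationResult:
    """Copy claims the output token type can carry; record the rest as lost."""
    out, dropped = {}, {}
    for name, value in input_claims.items():
        (out if name in supported else dropped)[name] = value
    return TranslationResult(out, dropped)

# Example: an output token type with no way to carry an attestation claim.
result = translate(
    {"sub": "spiffe://example.org/workload", "aud": "s3", "attestation": "tpm"},
    supported={"sub", "aud"},
)
print(result.dropped)  # non-empty, so this translation was lossy
```

A translation profile for a real token pair would additionally specify how the surviving claims are re-encoded and re-signed, and would document the risk of the dropped context, per the Token Translation Profiles section.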