Workstream: Hardware Attested Build Environments #975
I think this isn't really specific to the Build platform. Maybe just a "Platform" or "Platform Operations" or "Platform Security" track? For example, once we have a Source track, the same requirements will likely apply there, too.
@MarkLodato I'm trying to reason through what you mean by that but I'm not sure I follow. The doc describes a handful of cryptographic operations that can be executed in the build environment, not only by the platform to provide their own attestations about the environment but also by a consumer who wants to perform their own validation. I think this property is unique to CI because the consumer can execute whatever they want inside the build environment. We are also looking at fleshing out some requirements that can't be validated in the same way - the distinction we have been drawing there is that some requirements are "verifiable" whereas others are "auditable" (e.g. requiring a third-party investigation and attestation that certain requirements are being met). Is that where you see the overlap with things like the Source track?
I'm sorry, I saw "Build Platform Operations" but didn't read the actual proposal or your questions. Upon further review, I agree that it's not "Build Platform Operations" but more of a "Hardware Attested Build". I'm also not sure it's necessarily a new track. I'll update this issue and create a separate one for the Operations track.
This also seems related to the reproducible builds discussion in #977. In both cases, we want stronger guarantees that the build platform is not compromised. With reproducible builds, we do that by having independent parties run the build multiple times; with trusted computing, we rely on hardware built into modern CPUs. In both cases, we increase the cost of attack. It also seems to overlap with the Operations track in #985. Again, it's about preventing an operator from influencing the build. Maybe it makes sense to merge all three ideas into a new Build level or two (L4 or L5) that describes this higher property: that an operator of a single build platform has no way of influencing the build, even if they collude with colleagues?
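To make the reproducible-builds side of this concrete, the "independent parties run the build" idea reduces to a quorum check on artifact digests: accept only if enough independent rebuilders produced bit-for-bit identical output. A minimal sketch (function names and the threshold are illustrative, not from any SLSA spec):

```python
import hashlib

def rebuild_digests(artifacts: list[bytes]) -> set[str]:
    """Hash each independently rebuilt artifact."""
    return {hashlib.sha256(a).hexdigest() for a in artifacts}

def verify_reproducible(artifacts: list[bytes], min_rebuilders: int = 2) -> bool:
    """Accept only if at least `min_rebuilders` independent rebuilds
    produced bit-for-bit identical output (i.e., a single digest)."""
    if len(artifacts) < min_rebuilders:
        return False
    return len(rebuild_digests(artifacts)) == 1

# Two independent rebuilders agree; a tampered rebuild breaks consensus.
good = b"artifact-bytes"
assert verify_reproducible([good, good]) is True
assert verify_reproducible([good, b"tampered"]) is False
```

The attack-cost framing falls out directly: an attacker must compromise every rebuilder in the quorum, not just one build platform.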
I'm going to push back a little bit on the notion that requirements on platform operations (implemented by operators) should be merged with the Build track (implemented by developers). I do think that choosing a verifiable build platform (whether that's reproducible and/or HW-attested etc.) should correspond to a higher Build track level, but from a separation of responsibilities standpoint, I think that defining integrity properties and requirements for build platform operators does warrant a separate track.

The other comment I'll make is that from a platform operator view, implementing a verifiable build platform will require us to dive one layer deeper into the different components. Even though we tend to think of build platforms as a single unit, this isn't really true in practice. For instance, one challenge we've come across is defining requirements for cases in which the build platform doesn't necessarily own/provide all of its own infrastructure (e.g., in GH's case, the build VMs run on Azure, introducing additional operators from third parties). So, we've found that we'll have to identify these trust boundaries and components within a build platform so that SLSA may define requirements for these various pieces. We've made a first attempt at this in our current build platform model.

We've laid out a number of requirements for build platform operators in our Draft Doc, though they are currently guided by what is feasible through hardware-based mechanisms. I'd be very interested in hearing others' thoughts on whether those requirements are 1) sufficient and 2) generalizable to other verifiable platform approaches.

EDIT: In case this wasn't clear from my comments above, I want to distinguish between HW-attested builds and HW-attested platforms. HW attestations don't attest to application behavior, they attest to platform integrity and need to be enabled by the platform operator, so I think we need to be careful not to conflate the two concepts.
@MarkLodato's earlier comment got me thinking if this could be generalized to something like "Hardware Attested Supply Chain Steps". That's definitely a terrible name, so please don't actually use that. 😅 I know that SLSA is primarily concerned with Build today, but I could see additional tracks with recommendations for static analysis, testing, or vulnerability scanning. Even in the case of a Source track, many organizations run and host their own source code repositories (GitLab, GitHub Enterprise, etc.) and will be concerned with tampering of those systems. Or a developer's laptop could be tampered with upon committing code. From the threats in the document I could see a generalization towards something like:
Modification of or tampering with source code, an SBOM, a vulnerability database, vulnerability scan results, static/dynamic analysis findings, or many other supply chain steps could certainly compromise a software supply chain.
@marcelamelara Sorry, I don't follow. What is the difference between "HW-attested builds" and "HW-attested platforms"? Can you rephrase in terms of threats that are being addressed? I think that might help me better understand. (https://slsa.dev/threats may provide some prior art.) In my mind, the threat model is this: the attacker intends to get the victim to accept an artifact Threats addressed at each level:
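As a concrete framing of the threat model above, a consumer's verifier ultimately decides whether to accept an artifact by checking its provenance against a policy. A heavily simplified sketch (the record layout, trusted-builder URL, and field names are hypothetical; real SLSA provenance is a signed in-toto statement, and the signature check is elided here):

```python
import hashlib

# Hypothetical allow-list of builder identities the verifier trusts.
TRUSTED_BUILDERS = {"https://example.com/trusted-builder"}

def verify(artifact: bytes, provenance: dict) -> bool:
    # 1. Provenance must come from a builder the verifier trusts
    #    (in reality established by verifying a signature).
    if provenance.get("builder_id") not in TRUSTED_BUILDERS:
        return False
    # 2. Provenance must describe this exact artifact.
    digest = hashlib.sha256(artifact).hexdigest()
    return provenance.get("subject_sha256") == digest

art = b"hello"
prov = {
    "builder_id": "https://example.com/trusted-builder",
    "subject_sha256": hashlib.sha256(art).hexdigest(),
}
assert verify(art, prov)
assert not verify(b"evil", prov)  # substituted artifact is rejected
```

The open question in the thread is how much the verifier must trust the builder behind `builder_id`; hardware attestation and reproducible builds are both ways to shrink that residual trust.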
I thought that last bullet was what both Hardware Attested Builds and Reproducible Builds protect against. In both cases, it is a reduction in the size of the Trusted Computing Base. At Build L3, you need to trust hundreds/thousands of employees, software, and hardware with privileged access. With Hardware Attested Builds, we can rely on Intel/AMD's hardware; with reproducible builds, we can require the attacker to compromise multiple independent parties. Either way, the threat seems similar. Am I misunderstanding? Is the threat actually something else?
@jkjell Yeah, I think it can generalize well. We might still consider it a "build" in the generic sense, taking some inputs and transforming to some output, with the provenance describing that transformation process. If you identify everything by hash and have trustworthy provenance, then perhaps we could cover a lot? |
Updated the title to "Hardware Attested Platforms" as per @marcelamelara's comment at #981 (comment). |
@MarkLodato I appreciate the detailed threat model. In this framing, I'd add that the
Ah, this may be the difference in our mental models. In my mind, the process
Isn't that the purpose of the signature on the provenance?
In my mind, it's about shrinking the size of the trusted computing base (TCB)
So I see both of these as gaining greater trust in the provenance claiming that inputs
I agree that the two are complementary; I figure that a hardware-attested build can strengthen the claims made by any individual reproduced build. While reproducible builds and hardware-attested builds are targeting a similar problem, I think there is a particular niche that reproducible builds do not solve well. While SLSA is heavily influenced by desires to secure the distribution of open-source software, the framework (and the guarantees provided by it) maps well onto a closed-source organization if given a suitable private Sigstore implementation. I have already heard from a number of our highly regulated customers that they are working to ensure all of their CI workflows meet SLSA L2 and eventually L3, despite the fact that their code will never be publicly visible. Security benefits aside, there are also ecosystem-level reasons for us to encourage closed-source adoption of SLSA, since greater organizational adherence is likely to drive more open-source adoption. For closed-source organizations, reproducible builds are still a desirable goal, but the act of actually reproducing them on other build providers is likely to be cost-prohibitive. At the very least, there will always exist cost-based negative incentives to doing so. Hardware attestation of build environments provides a lower-cost option to reduce the amount of trust required in the builder.
@chkimes I agree with everything you said (except the bit about SLSA being influenced by the desire to secure open source; see https://cloud.google.com/docs/security/binary-authorization-for-borg 😄.) That's why I am suggesting that SLSA focus on the problem/outcome rather than a specific solution. If hardware-attested builds and reproducible builds indeed target roughly the same problem, then my inclination is to describe the desired outcome as a single level rather than having one level/track for one solution and another level/track for another solution. But I don't think we yet have agreement that they are targeting the same problem. 😁 |
To update this thread: given the discussions we've had with the SLSA spec community, we've landed on including HW-attested build platforms as part of a higher level of the Build track. The main reasoning is that the Build track already covers both producer and build platform requirements.
The attestation function of commercially available TEEs isn't, itself, implemented in hardware. It's typically implemented in software that is provided by a hardware provider (e.g., the SGX quoting enclave). I think that the important property is that there is a trusted third party standing behind the attestation, not that it's "hardware attested." They may use a combination of proprietary hardware, software, and business processes to provide a high-assurance attestation. I would encourage adopting a broader definition / terminology so that many trusted / confidential computing designs can "qualify" if the parties trust the attestations made.
@mswilson the draft document references vTPMs and Confidential Virtual Machines. Often, those technologies are implemented at some level in hardware (e.g., specific instruction sets for virtualization). Are there other technologies implementing trusted / confidential computing designs that you think should be included?
I disagree with this. This definition would allow anyone to provide any sort of attestation, with no ability to verify anything, and we would take it on trust. This would be akin to a signature on a container image: a black-box representation of trust. I see the important property as the increase in transparency about what is being attested to, and the ability of an external party to verify it. In this way, we reduce the trust placed in a third party to the smallest scope possible. That scope is detailed in the threat model.
The proposal explicitly allows this if a user wants to achieve it. In a very brief summary, the proposal requires:
This is perhaps a quibble in wording? The hardware is attesting the current state of the machine depending on what has been written into its attestable measurement registers (e.g. PCRs). A first or third party is necessarily involved in that attestation dance and can then create their own attestation that the state of the hardware is valid. The proposal title perhaps makes it seem focused on the former, when both are actually in scope. Do you have a recommendation for a concise title that would make this nuance more clear? |
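The "attestable measurement registers" point can be made concrete: TPM-style PCRs are only ever updated by the extend operation, new = H(old || measurement), so a verifier can replay an event log and compare the result against the quoted register value. A minimal sketch using SHA-256 PCRs (the event log contents are illustrative):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def replay_event_log(event_log: list[bytes]) -> bytes:
    pcr = b"\x00" * 32  # SHA-256 PCRs start zeroed at reset
    for measurement in event_log:
        pcr = pcr_extend(pcr, measurement)
    return pcr

# The verifier replays the log and compares with the quoted PCR value.
# Because extend is order-dependent and one-way, the log cannot be
# reordered or edited without the final value changing.
log = [hashlib.sha256(b"firmware").digest(), hashlib.sha256(b"kernel").digest()]
golden = replay_event_log(log)
assert replay_event_log(log) == golden
assert replay_event_log(list(reversed(log))) != golden
```

This is the "attestation dance" in miniature: the hardware vouches for the register value, and a first or third party then decides whether the replayed state is an acceptable one.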
Any signature produced has to be taken on trust. How trust is established can be done many ways. If you are trusting a signature rooted in SGX, or SEV, or a TPM, or something else, you have to trust that the full system has a sound design, and that the implementation maintains all of the properties required to have confidence in the attestation. You have to trust that the materials used to produce the cryptographic attestation are sufficiently protected, and in practice we've seen systems where this has not held true over time. https://www.youtube.com/watch?v=mqma65eRYbo Nitro Enclaves provide an ability for AWS to make an attestation about the image and configuration used by an instance that provisions an enclave. Some would argue that Nitro Enclave attestations are not "hardware attested" because that attestation does not surface a "hardware root of trust" to the user. Builds performed in a Nitro Enclave are more naturally hermetically sealed (there's no I/O other than what is permitted via a vsock connection) and have a minimized TCB. I think it would be a shame if Nitro Enclave attestations weren't considered "a high bar" merely because they are not generally marketed as "hardware attested."
Perhaps "trusted independently attested compute environments"? Too wordy? A point I'm trying to make is that there are compute environments where hardware details are abstracted away from the user of the resource, and that is not a bad thing when it comes to building a high-trust system. Using Nitro Enclaves requires that you trust AWS to dutifully implement the isolation, protection, and attestation features of the product, "as advertised". Using Intel SGX requires the same "leap of trust" from my perspective. |
@mswilson I appreciate your perspective on the nuance of AWS Nitro vs an Intel SGX, for example. One of the challenges we're trying to address in this proposal is the variety of TEEs, so I would really like to be able to make sure we're capturing AWS Nitro in our model and requirements as well. I do want to note that this proposal primarily targets implementers/deployers of build infrastructure, so in the ideal case, the tenant of the build platform shouldn't have to directly interact with the specific underlying compute platform in any case, whether it's an Intel TDX TD or an AWS Nitro Enclave.
I might be amenable to renaming the proposed Build Track level to something like "Attested Build Platforms" (i.e., dropping the "hardware"), but do want to emphasize that we are seeking to reduce trust in the build platform, and have much more than just the compute platform be verifiable via (hardware-rooted) attestation. |
Sorry for only now successfully following @pdxjohnny's shiny crumbs ✨ Better late than never. I read the comments on this issue, and as there is a lot here, I'll start eclectically. @MarkLodato "reducing" the burden of proving a single @chkimes I assume with "validate build environments" you mean to produce trusted Attestation Results that reflect the believability w.r.t. the authenticity of a produced In general, what I read from this issue is that the intent is to create "stronger guarantees that the build platform is not compromised" (I'd use "assurances" instead of "guarantees"). Just including Evidence (in this issue often referred to as "attestation") produced by a
Are there any tangible plans how to realize that? The CCC Attestation SIG might be a good place to start as there are a lot of similar interested parties active there. |
Remote Attestation is based on endorsed (that means NIST's 1st-party and 3rd-party Attestation) roots of trust. Trusting a RoT's trustworthiness is a decision (based to a large extent on its accompanying Endorsements). If that trust relationship can be established via policy, then it becomes unnecessary to "have to trust that the full system has a sound design", as remote attestation procedures exist to provide you with the outcome of exactly that appraisal, As soon as a trustworthiness assessment of a
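The policy-based appraisal described above is the core of the RATS model: a Verifier compares Evidence claims against Reference Values under an appraisal policy and emits an Attestation Result. A toy sketch (the claim names and reference values are invented for illustration, not taken from any profile):

```python
# Reference Values the Verifier trusts (e.g., supplied via Endorsements).
REFERENCE_VALUES = {"fw_version": "1.2.3", "secure_boot": True}

def appraise(evidence: dict, policy_claims: set[str]) -> dict:
    """Compare Evidence against Reference Values for every claim the
    appraisal policy requires; return an Attestation Result."""
    failed = [
        claim for claim in sorted(policy_claims)
        if evidence.get(claim) != REFERENCE_VALUES.get(claim)
    ]
    return {"trustworthy": not failed, "failed_claims": failed}

ok = appraise({"fw_version": "1.2.3", "secure_boot": True},
              {"fw_version", "secure_boot"})
assert ok["trustworthy"]

bad = appraise({"fw_version": "0.9.0", "secure_boot": True},
               {"fw_version", "secure_boot"})
assert not bad["trustworthy"]
```

The point being made in the comment follows: a Relying Party need only trust the Attestation Result and the policy, not re-derive the soundness of the whole platform itself.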
Please mind that "attestation" has two very different meanings and is pretty much always confused now that NIST came up with a second definition: |
@marcelamelara , what is the latest on this track? The proposal in the initial document is out of date as I think I recall the most recent discussion in a SLSA call being that there would be a new track with only one level (well, L0 and L1). Is there a reference to the proposal in advance of making a PR to SLSA? |
Thanks for the ping on this @arewm . Yes, we haven't updated the Doc; we've been quite bogged down with prepping a talk for OSS NA '24 on this topic. Given that our Google Doc is in a similar place as the Source Track's, with many comments that aren't immediately actionable, we're likely going to freeze the Doc and open that PR directly.
POC tracking issue: #1110 |
This PR introduces the following spec changes associated with #975. The spec enhancements are being proposed as the new "Build Environment track". Spec changes: adds new high-level build environment terminology and levels. Part 1 of #975 CC @paveliak --------- Signed-off-by: Marcela Melara <[email protected]> Co-authored-by: Tom Hennen <[email protected]> Co-authored-by: Dionna Amalie Glaze <[email protected]> Co-authored-by: Andrew McNamara <[email protected]>
With the merge of #1115, I think it's time to close this issue! |
This is a tracking issue for incorporating Hardware Attested Platforms, aka Trusted Computing into SLSA. The main idea is to provide greater trust in the build by using trusted computing features like Trusted Execution Environments (TEEs) of modern CPUs to reduce the risk of tampering and to increase transparency.
Workstream shepherd: Marcela Melara (@marcelamelara), Pavel Iakovenko (@paveliak)
Working proposal: #1051
Proposal doc: here
Related: We might want to merge with #977 (Build L4, discussing reproducible builds) and/or #985 (about hardening operations) as discussed below.
Sub-issues:
In the 2023-09-13 Supply Chain Integrity meeting, @marcelamelara and I presented on a potential new SLSA track, using cryptographic primitives provided by hardware to validate build environments.
Slides: https://docs.google.com/presentation/d/11cycDxYaoZpuG144pR6atI1_zk2CfZOWlNO_f_HhhyE
Doc: https://docs.google.com/document/d/1l7IKAli-K-uof8VkLuiqV5-hMGS_ecDmBcuc07-ILeQ/edit
Recording: TBD pending upload to YouTube
Some points for discussion, seeding some from the SCI meeting: