v1.0: Define how systems are verified (manually) #508

Closed
2 of 3 tasks
Tracked by #130
MarkLodato opened this issue Oct 17, 2022 · 18 comments

@MarkLodato
Member

MarkLodato commented Oct 17, 2022

For v1.0, we need to define how systems (particularly build systems) are verified to meet the SLSA requirements. These are the requirements that cannot be automated because they depend on the design of the system.

Sub-issues:

Context:

@MarkLodato MarkLodato added the spec-change Modification to the spec (requirements, schema, etc.) label Oct 17, 2022
@MarkLodato MarkLodato added this to the SLSA spec v1.0 milestone Oct 17, 2022
@MarkLodato MarkLodato changed the title v1.0: Define how systems are verified to meet the SLSA spec v1.0: Define how systems are verified (manually) Oct 17, 2022
@marcelamelara
Contributor

marcelamelara commented Oct 25, 2022

Taking a stab at this. It seems that we're really trying to answer two questions here:
(1) How can each build requirement be assessed?
(2) Which, if any, evidence can we obtain to demonstrate a given requirement is met?

Here are all the build requirements, as of right now:

  • Scripted build (All build steps were fully defined in some sort of “build script”. The only manual command, if any, was to invoke the build script.)
  • Build service (All build steps ran using some build service, not on a developer’s workstation.)
  • Build as code (The build definition and configuration executed by the build service is verifiably derived from text file definitions stored in a version control system.) -- this is essentially scripted build + build service
  • Ephemeral environment (The build service ensured that the build steps ran in an ephemeral environment, such as a container or VM, provisioned solely for this build, and not reused from a prior build.)
  • Isolated (The build service ensured that the build steps ran in an isolated environment free of influence from other build instances, whether prior or concurrent.)

Answering the questions above, looking at the requirements in order of increasing difficulty to assess automatically:

  • For "scripted build" and "build as code", the existence of these files can be assessed and demonstrated by including a pointer/URI to the appropriate Github repo or file. The buildConfig field in the SLSA provenance seems appropriate for this, and could be described as a requirement for L1-L3 builds.

  • "Build service" is a bit tougher to assess, but I think including (a pointer to) the corresponding GHA or Jenkins logs may demonstrate that the specified build script was indeed run by a build service. There are two challenges here. First, the CI logs tend to be private, so project owners may be hesitant to expose these. Second, CI logs get huge quite quickly, so they cannot simply be downloaded and included in SLSA provenance as is. That said, I don't know if it should be within the scope of SLSA to address these issues, only to require/recommend that the build service logs be made available somehow for L2+ builds.

  • For the "Isolated" requirement (and sub-requirements), I think a newly proposed in-toto predicate for build run-time tracing may help here: Add runtime trace predicate type in-toto/attestation#111.
    Now, the "Isolated" requirements state that "it MUST NOT be possible" for a build to access certain resources. In the case of assessing whether a build accessed the provenance signing key, for instance, I still think it would be valuable for the build service to record such a trace for L3 builds and show that the build indeed did not access the keys, even if the trace does not prove that it is never possible. I can envision run-time traces being useful evidence for showing whether two builds overlapped on the same machine.

  • "Ephemeral environment" seems to be the most difficult to assess, even manually. I think clarifying the requirement text will be helpful here: what do we mean by "not reused from a prior build"? One way to interpret that is that a fresh container/VM image is spun up and back down for each build, and not somehow kept running longer-term. I wonder if build service logs, or more specifically Docker logs in the case of containers, could provide some evidence for L3 builds in this case. The container image hash would have to be included in the SLSA provenance along with a pointer to the logs, so we do potentially run into the same privacy issues as with the "build service" logs.

Final notes: My goal is really to start thinking about the evidence that would be needed to assess/demonstrate that a given build requirement has been met. I know that my suggestions don't always go all the way, especially for the L3 requirements, which include a number of sub-requirements that may warrant a more fine-grained breakdown. But I do hope they can provide some concrete starting points. I'd love to hear other folks' thoughts.

cc @MarkLodato @TomHennen

@kpk47
Contributor

kpk47 commented Oct 26, 2022

I've been thinking about build system verification, and I like your suggestions. I don't think they'll all work for closed source projects, though. The easiest (at least conceptually) closed source solution is probably formalizing the SLSA conformance program (#515). We can augment provenance fields that carry build info with fields that would hold proof for open source projects and attestations from a third party for closed source projects.

Going by level and requirement, I think that looks like:

  • L1: The provenance should contain a pointer to the evidence.

    1. Scripted build: In this case, the evidence is either a pointer to the build script in version control or a third-party attestation. My read of the provenance schema is that the invocation field is a better fit than the buildConfig field, but I'm happy to be convinced otherwise. In either case, we should make the field mandatory. If we use invocation then we'll need to add a field to hold third-party attestations. If we use buildConfig then we could use a new buildType to signal that buildConfig holds a third party attestation.
  • L2: The provenance should contain a pointer to the evidence.

    1. Build service: I like Marcela's suggestion of using CI logs to tie a build to a specific CI run. We could populate buildType with the name of the build system and put a pointer to the logs in the buildConfig field. In the closed source case, we could have a buildType value that indicates closed/proprietary build systems and put a third-party attestation in buildConfig. (See the sketch after this list.)
  • L3: This level is where human verification is really necessary since the requirements are all about properties of the build system (as opposed to its existence). SLSA Compliance Program #515 mentions using a survey to gather security evidence, so I'm going to work in that direction. A builder could either publish its answers to the questionnaire publicly or have a third party attest that their answers meet the L3 requirements. The provenance would contain either a pointer to public survey responses or a third-party attestation, perhaps in the metadata field.

    1. Build as code: This requirement is analogous to the "scripted build" requirement for L1, so we should replace the build script in invocation/buildConfig with the build config here.
    2. Isolated: We'll need to wordsmith the questionnaire for this requirement. I would prefer to maintain that "it MUST NOT be possible" for a build to access builder resources (e.g. keys) and require that the isolation be maintained through an access control mechanism. We're having a human assess this requirement, and it's hard for humans to assess trace logs. I also don't know if it's practical to assess individual builds as opposed to assessing build systems periodically.
    3. Ephemeral: We'll need to wordsmith the questionnaire for this requirement.
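For the closed/proprietary case sketched under L2 above, the idea might look something like the following in v0.2 terms; the buildType value and the thirdPartyAttestation object are purely hypothetical and not part of any current schema:

```json
{
  "builder": { "id": "https://proprietary-builder.example.com" },
  "buildType": "https://slsa.example.com/build-types/undisclosed@v1",
  "invocation": {},
  "buildConfig": {
    "thirdPartyAttestation": {
      "uri": "https://conformance.example.com/attestations/proprietary-builder.intoto.jsonl",
      "digest": { "sha256": "ccc..." }
    }
  }
}
```

A verifier seeing that buildType would know to look in buildConfig for an attestation reference rather than for build steps.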

@marcelamelara
Contributor

Thanks for your input @kpk47 ! I'll also have to look into the Conformance Program.

Here are some specific comments.

> I don't think they'll all work for closed source projects, though. The easiest (at least conceptually) closed source solution is probably formalizing the SLSA conformance program (#515).

Totally agree here. Even releasing CI logs for a given run has this issue. So, I'm hugely in favor of being able to include third-party attestations as evidence for a build system's properties.

That said, I think there are some assumptions in the closed-source case that aren't fully clear to me yet.

(0) What information would builders/producers want to keep private in the closed-source case: Do they want to keep information about the type of build script or build service they used private (in addition to the pointers to the build script/CI logs)?
(1) If the answer to Q0 is yes, would we expect an attestation each for the "scripted build" and "build service" requirement?
(2) Thinking about in-toto predicates, what format would these attestations take? Should SLSA recommend/require a set of acceptable formats for these?

> My read of the provenance schema is that the invocation field is a better fit than the buildConfig field, but I'm happy to be convinced otherwise. In either case, we should make the field mandatory. If we use invocation then we'll need to add a field to hold third-party attestations. If we use buildConfig then we could use a new buildType to signal that buildConfig holds a third party attestation.

Also totally agree that whichever field we use needs to be mandatory given the "scripted build" requirement is needed starting at L1.

The current provenance spec is kind of ambiguous about the purpose of the invocation.configSource field vs buildConfig. According to https://slsa.dev/provenance/v0.2#fields:

  • `buildConfig`: Lists the steps in the build. If `invocation.configSource` is not available, `buildConfig` can be used to verify information about the build.
  • `invocation.configSource`: Describes where the config file that kicked off the build came from. This is effectively a pointer to the source where `buildConfig` came from.

So, invocation.configSource should be the more suitable field to point to the build script for the "scripted build" requirement. Then, it makes sense to add a field to hold the third-party attestation.

> We could populate buildType with the name of the build system and put a pointer to the logs in the buildConfig field. In the closed source case, we could have a buildType value that indicates close/proprietary build systems and put a third-party attestation in buildConfig.

Without answers to some of my questions above about assumptions in the closed-source case, I hesitate to agree to introduce new buildTypes for the closed-source scenarios in L2 (and L1), because I think that could lead to more confusion or ambiguity. I think having a type field in the evidence object(s) themselves would make the semantics of the evidence much clearer to verifiers, keeping that information much closer to the data that it relates to. Now, if a builder does want to conceal information about the actual build script type or build service used (Q0 above), then there clearly would have to be a way for them to specify the L1/L2 buildType that is accompanied by third-party attestation evidence.
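To illustrate the "type field in the evidence object(s)" point, here is a sketch with entirely hypothetical field names (only invocation.configSource exists in v0.2 today):

```json
{
  "invocation": {
    "configSource": {
      "uri": "git+https://github.com/example/libfoo@refs/heads/main",
      "entryPoint": "Makefile"
    },
    "evidence": [
      {
        "type": "third-party-attestation",
        "requirement": "build-service",
        "uri": "https://conformance.example.com/attestations/awesome-builder.json",
        "digest": { "sha256": "ddd..." }
      },
      {
        "type": "ci-log-pointer",
        "requirement": "build-service",
        "uri": "https://awesome-builder.example.com/logs/run/1234"
      }
    ]
  }
}
```

The type and requirement fields keep the semantics of each piece of evidence next to the evidence itself, regardless of whether the build script or service is disclosed.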

That said, I would even go as far as to suggest renaming buildConfig to something like buildService to make it abundantly clear that the information in that field is needed for L2, specifically, while invocation can hold evidence for the build script.

> A builder could either publish its answers to the questionnaire publicly or have a third party attest that their answers meet the L3 requirements. The provenance would contain either a pointer to public survey responses or a third-party attestation, perhaps in the metadata field.

This makes sense to me.

> We're having a human assess this [isolated] requirement, and it's hard for humans to assess trace logs.

My point here was more to suggest that parts of this assessment could be automated, especially if a third-party auditor is involved. Is there a reason for having the requirement be as strict as "MUST NOT be possible"? Not that builders shouldn't strive for this, but I do wonder whether it's too difficult to assess. Put differently, I wonder whether the jump between L2 and L3 (as currently written, at least) is too big or difficult to achieve.

> I also don't know if it's practical to assess individual builds as opposed to assessing build systems periodically.

Great point. I might even suggest that assessment frequency, even if it's a range like "once a month to once a quarter" should be something that is codified into the spec for L3 and above, or at least require that the frequency be specified in the provenance, if SLSA doesn't want to be too prescriptive.

@kpk47
Contributor

kpk47 commented Oct 28, 2022

> Thanks for your input @kpk47 ! I'll also have to look into the Conformance Program.
>
> Here are some specific comments.
>
>> I don't think they'll all work for closed source projects, though. The easiest (at least conceptually) closed source solution is probably formalizing the SLSA conformance program (#515).
>
> Totally agree here. Even releasing CI logs for a given run has this issue. So, I'm hugely in favor of being able to include third-party attestations as evidence for a build system's properties.
>
> That said, I think there are some assumptions in the closed-source case that aren't fully clear to me yet.
>
> (0) What information would builders/producers want to keep private in the closed-source case: Do they want to keep information about the type of build script or build service they used private (in addition to the pointers to the build script/CI logs)?

Good point. There's probably a range of preferences for how much information an organization is willing to share about their internal processes, and it probably doesn't actually map very well to the open/closed source dichotomy I was using. We'd probably be better off thinking of a spectrum between minimally and maximally public processes. Even with that framing, I think that third party attestations offer an elegant solution. If you trust the third party completely, then you don't need any supporting metadata other than their attestation. If you trust them partially, then you might want additional metadata.

> (1) If the answer to Q0 is yes, would we expect an attestation each for the "scripted build" and "build service" requirement?

I'm not aware of any build services that don't require some sort of build script, so I think we can accept either "scripted build" or "build service" with the understanding that "build service" implies "scripted build".

> (2) Thinking about in-toto predicates, what format would these attestations take? Should SLSA recommend/require a set of acceptable formats for these?

It makes sense for SLSA to recommend a format for the attestations, though now may be a bit premature. We'll have to flesh out the Conformance program a bit more before we know what exactly the requirements are for the attestations. For example, if the attestations rely on public key cryptography, then we'll have to figure out what to do about certificate expiration and whether we need to encode it in the attestation.
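As a strawman for the format question, a third-party conformance attestation could be wrapped in a standard in-toto Statement with a new predicate type. Everything below except the Statement envelope is hypothetical, and even what the subject should be (the builder? its public key?) is an open question:

```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "subject": [
    { "name": "https://awesome-builder.example.com", "digest": { "sha256": "eee..." } }
  ],
  "predicateType": "https://slsa.example.com/conformance-assessment/v0.1",
  "predicate": {
    "builderId": "https://awesome-builder.example.com",
    "assessedRequirements": ["build-service", "build-as-code", "isolated", "ephemeral-environment"],
    "assessedLevel": 3,
    "assessor": "https://auditor.example.com",
    "questionnaire": "https://auditor.example.com/reports/awesome-builder-2022.pdf",
    "assessedOn": "2022-10-28T00:00:00Z",
    "expiresOn": "2023-10-28T00:00:00Z"
  }
}
```

An expiresOn-style field is where the certificate/attestation expiration question above would surface.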

>> My read of the provenance schema is that the invocation field is a better fit than the buildConfig field, but I'm happy to be convinced otherwise. In either case, we should make the field mandatory. If we use invocation then we'll need to add a field to hold third-party attestations. If we use buildConfig then we could use a new buildType to signal that buildConfig holds a third party attestation.
>
> Also totally agree that whichever field we use needs to be mandatory given the "scripted build" requirement is needed starting at L1.
>
> The current provenance spec is kind of ambiguous about the purpose of the invocation.configSource field vs buildConfig. According to https://slsa.dev/provenance/v0.2#fields:
>
> * `buildConfig`: Lists the steps in the build. If `invocation.configSource` is not available, `buildConfig` can be used to verify information about the build.
> * `invocation.configSource`: Describes where the config file that kicked off the build came from. This is effectively a pointer to the source where `buildConfig` came from.
>
> So, invocation.configSource should be the more suitable field to point to the build script for the "scripted build" requirement. Then, it makes sense to add a field to hold the third-party attestation.

+1

>> We could populate buildType with the name of the build system and put a pointer to the logs in the buildConfig field. In the closed source case, we could have a buildType value that indicates closed/proprietary build systems and put a third-party attestation in buildConfig.
>
> Without answers to some of my questions above about assumptions in the closed-source case, I hesitate to agree to introduce new buildTypes for the closed-source scenarios in L2 (and L1), because I think that could lead to more confusion or ambiguity. I think having a type field in the evidence object(s) themselves would make the semantics of the evidence much clearer to verifiers, keeping that information much closer to the data that it relates to. Now, if a builder does want to conceal information about the actual build script type or build service used (Q0 above), then there clearly would have to be a way for them to specify the L1/L2 buildType that is accompanied by third-party attestation evidence.

I like the idea of having a type field in the attestations, or maybe a field indicating which SLSA requirement the third party is attesting to the builder meeting.

> That said, I would even go as far as to suggest renaming buildConfig to something like buildService to make it abundantly clear that the information in that field is needed for L2, specifically, while invocation can hold evidence for the build script.

+1

>> A builder could either publish its answers to the questionnaire publicly or have a third party attest that their answers meet the L3 requirements. The provenance would contain either a pointer to public survey responses or a third-party attestation, perhaps in the metadata field.
>
> This makes sense to me.

>> We're having a human assess this [isolated] requirement, and it's hard for humans to assess trace logs.
>
> My point here was more to suggest that parts of this assessment could be automated, especially if a third-party auditor is involved. Is there a reason for having the requirement be as strict as "MUST NOT be possible"? Not that builders shouldn't strive for this, but I do wonder whether it's too difficult to assess. Put differently, I wonder whether the jump between L2 and L3 (as currently written, at least) is too big or difficult to achieve.

My mental model of the difference between an L2 builder and an L3 one is that an L3 builder treats the user-supplied build as an adversary and an L2 builder does not. If you want to forge L3 provenance, you'll need to exploit a vulnerability in the build platform. Assuming that model is correct, then at a high level the assessment boils down to "is the build sandboxed?" and "are any secrets stored on disk or in memory accessible to the sandbox?". I wonder if we would consider a build system that uses a weak sandbox to meet L3. What about strong sandboxes that are compromised? Do builds drop from L3 to L2 when impacted by a CVE?

I agree that the gap between L2 and L3 is large, but I don't see an obvious way to narrow it.

>> I also don't know if it's practical to assess individual builds as opposed to assessing build systems periodically.
>
> Great point. I might even suggest that assessment frequency, even if it's a range like "once a month to once a quarter" should be something that is codified into the spec for L3 and above, or at least require that the frequency be specified in the provenance, if SLSA doesn't want to be too prescriptive.

+1 to encoding frequency in the spec, though the shape of conformance program will determine what a reasonable value is.

It looks like we're starting to converge, so I see the following action items:

  1. Add fields for attestations to invocation and buildConfig.
  2. Rename buildConfig to buildService.
  3. Draft a sample questionnaire for assessing a builder's L2/L3 status.

Did I miss any?

@marcelamelara
Contributor

@kpk47 Your list of recommended tasks looks good to me. Thanks!

Now that #525 is up, we should also reconsider how these recommended tasks fit with the refactored provenance.

@marcelamelara
Contributor

@kpk47 Per https://slsa.dev/spec/v1.0/requirements#build-levels, it seems like the evidence/attestation fields we recommended would now apply to the "Producer" requirements?

@kpk47
Contributor

kpk47 commented Dec 6, 2022

@marcelamelara I'm not sure the attestations are part of the Producers requirements. My read is that Producers are the organization that provides the artifact to downstream consumers, not the organization that creates the build system. For example, the OpenSSF releases slsa-verifier using GitHub Actions as a builder. In this example, the OpenSSF is the Producer, not GitHub. I'm not quite sure of the best place to add builder evidence in the new provenance format. It may be simplest just to add a dedicated field for it.
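If the "dedicated field" route were taken, it might look roughly like this in something like the refactored provenance shape from #525 (buildDefinition + runDetails). The builderEvidence object is invented purely for illustration and does not exist in the draft:

```json
{
  "buildDefinition": {
    "buildType": "https://awesome-builder.example.com/build-types/workflow@v1",
    "externalParameters": { "source": "git+https://github.com/example/libfoo" }
  },
  "runDetails": {
    "builder": {
      "id": "https://awesome-builder.example.com",
      "builderEvidence": {
        "uri": "https://awesome-builder.example.com/slsa/questionnaire.json",
        "digest": { "sha256": "fff..." }
      }
    },
    "metadata": { "invocationId": "run-1234" }
  }
}
```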

I've been working on a draft of the L3 builder specification and the questionnaire for assessment. Please take a look and give feedback: https://docs.google.com/document/d/1CdSi1qF-uYM00_LYO-216NKLi4Hu6teCKRLmc23g9IE/edit?usp=sharing

@marcelamelara
Contributor

@kpk47 I've reviewed the Doc, thanks so much for putting that together. I've left a number of comments.

One question I still have is: How are L1-L2 build system requirements verified? I believe an evidence field in the provenance for the L2 "build service" requirement, as we discussed previously, is still needed regardless of the format of the L3+ questionnaire. Right now, the process for demonstrating L1-L2 compliance seems very disjoint from the L3+ process, and I would be concerned that that would cause confusion among SLSA users.

cc'ing @MarkLodato to see if you have any thoughts on this question.

@kpk47
Contributor

kpk47 commented Dec 20, 2022

I agree that L1-2 verification and L3 verification are disjoint processes, and I think we can avoid any user confusion by having appropriate tooling for provenance verification. My mental model is that L1 verification is trivial, L2 verification requires an attestation, and L3 requires an attestation that is signed by a trusted party. If the verifier doesn't trust the attestation signer, then L3 can be downgraded to L2. I don't have a strong opinion on how a builder should attest to being L2 but not L3, but I suspect we don't need much more than a level enum field in the attestation.

@marcelamelara
Contributor

marcelamelara commented Dec 20, 2022

That makes a lot of sense to me, and agree that tooling will address the verification issues. I'm actually concerned about the generation process from the producer's perspective (sorry that wasn't clear). It seems like L1/L2 provenance generation can be automated in large part, while L3+ provenance generation will by nature be considerably more manual. In addition to streamlining the generation process to lower the barrier to entry for L3+, I'm wondering how we can provide consistency and integrity for the L3+ provenance generation process? Maybe this determination is up to the consumer of the provenance, but we should be stating these types of assumptions as part of the generation process.

@MarkLodato
Member Author

> It seems like L1/L2 provenance generation can be automated in large part, while L3+ provenance generation will by nature be considerably more manual.

Sorry, I don't follow. Could you elaborate? I had intended that provenance generation is fully automatic at all levels. From the producer's perspective, not much changes between L2 and L3; almost all of the changes are in the builder. (To clarify, by "producer" I mean a tenant of the build service.) But I suspect I'm misunderstanding what you're saying.

@kpk47
Contributor

kpk47 commented Dec 21, 2022

> It seems like L1/L2 provenance generation can be automated in large part, while L3+ provenance generation will by nature be considerably more manual.

I agree with Mark. While it is a manual process to verify an L3 builder, once that verification happens (and assuming it's up-to-date, etc) generating provenance is automatic. Would you mind explaining which part of L3 provenance generation is manual?

@marcelamelara
Contributor

marcelamelara commented Dec 21, 2022

@MarkLodato @kpk47 So I think where my gap in understanding lies at this point is how the information in the questionnaire relates to the provenance. My understanding was that the questionnaire would be the way in which builders gather information about and enable others to verify the build system. The producer of the provenance is the tenant of the build system, but they need to somehow obtain the information gathered through the questionnaire to include it in the provenance. Otherwise, who is the consumer of the questionnaire? And how is information about the build system conveyed to a provenance consumer? Is my understanding correct? Hopefully this clarifies my comments a bit.

@MarkLodato
Member Author

Ah, I see the misunderstanding now. Here's what I've been picturing:

Suppose libfoo is built on Awesome Builder, which is a public CI/CD service, and I am a consumer of libfoo. Using the terminology from the current v1.0 draft, libfoo is the "project", Awesome Builder is an "infrastructure provider," and I am the consumer.

The maintainers of libfoo just use Awesome Builder's built-in feature to generate provenance. That's all they really need to do.¹ Awesome Builder's provenance API may be the same between L2 and L3, or it might be different (e.g. enable some L3 mode). But other than that, the maintainers of libfoo have to do almost nothing to get from L2 to L3.

It is the build service that does the work to increase level.

By default, everything is L1. If provenance exists, it's L1.

To become L2, Awesome Builder just needs to convince me that it's a "service" and get me the public key. That's a very low bar, so I trivially say, sure, it's L2. (Probably I don't do this myself but instead rely on some other party to make that determination, such as the slsa-verifier project.)

To become L3, Awesome Builder needs to (1) lock down the service sufficiently to cover the threats included in L3 and (2) convince me that they really are L3. The survey / questionnaire / document is designed to help with both purposes. For (1), it can guide the Awesome Builder team to figure out what the gaps are and how to mitigate them. For (2), it can help Awesome Builder convince me that they really are deserving of my trust, by describing in detail the design and operation of Awesome Builder. If they convince me, I configure my SLSA tool to register Awesome Builder as L3. If they don't, I leave it as L2.

In any event, the provenance is independent of the survey.

Does that help?

Footnotes

  1. Ignoring the packaging ecosystem requirements around setting expectations and propagation of provenance, which are irrelevant to this discussion.

@marcelamelara
Contributor

Thanks for this detailed walk-through. It helps a lot.

So, just to make sure that I've understood correctly: We're assuming that the infrastructure provider will (a) generate the provenance per build, and (b) deliver the generated provenance and questionnaire information to you (the consumer). Then, for L3, it's up to you to verify both documents. The second option is that the infrastructure provider relies on a trusted delegate/verifier service to check that they meet the build level the provider claims via the questionnaire.

Does this sound right?

@MarkLodato
Member Author

Almost. The provenance and the questionnaire are separate processes.

  • Up front, the infrastructure provider (Awesome Builder) publishes the questionnaire and a public key, and it's up to me or my tooling vendor to configure the tool to accept that public key at a given level, using the questionnaire as part of the decision making process. For example, the slsa-verifier project might decide to add Awesome Builder's public key at SLSA Build L3. There are not many builders, so this does not happen often. (See the sketch after this list.)

  • On every build, the producer (libfoo) builds and generates provenance using the infrastructure provider (Awesome Builder), then the producer publishes provenance through the packaging ecosystem's conventions, and finally my tooling verifies the provenance using the configuration from the first step. This happens automatically every time I install or upgrade a package.
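A minimal sketch of what that "up front" configuration could amount to on the verifier side, assuming a hypothetical trust-map format (slsa-verifier's real configuration is not necessarily structured this way):

```json
{
  "trustedBuilders": [
    {
      "builderId": "https://awesome-builder.example.com",
      "publicKey": "keys/awesome-builder-2023.pub",
      "acceptedSlsaBuildLevel": 3,
      "basis": "https://awesome-builder.example.com/slsa/questionnaire"
    },
    {
      "builderId": "https://other-ci.example.com",
      "publicKey": "keys/other-ci.pub",
      "acceptedSlsaBuildLevel": 2
    }
  ]
}
```

The per-build step then reduces to checking the provenance signature against this map and comparing the accepted level to the consumer's policy.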

@marcelamelara
Contributor

Got it, thanks. The questionnaire and provenance being generated in two separate processes makes total sense. The part I was still having an issue with here (which led to my comment in #525) was if/how the consumer of libfoo ends up verifying the infrastructure provider's claim of Build L3, when the provenance does not contain any information about the infrastructure. But I guess your point is that the verification flow is really: provenance consumer → verifies → provenance producer → verifies → infrastructure provider.

@kpk47
Contributor

kpk47 commented Mar 20, 2023

Closing because the conformance program is out of scope for 1.0.

@kpk47 kpk47 closed this as completed Mar 20, 2023