[DPE-5350][DPE-5416] Add sbom generation in CI #107

Merged (7 commits) on Sep 17, 2024
23 changes: 21 additions & 2 deletions .github/workflows/trivy.yml
@@ -3,12 +3,13 @@ on:
   push:
     branches:
       - 3.4-22.04/edge
+      - dpe-5350-3.4 # tmp to test new action.
   pull_request:
 jobs:
   build:
     uses: ./.github/workflows/build.yaml
   scan:
-    name: Trivy scan
+    name: Trivy scan and sbom generation
     needs: build
     runs-on: ubuntu-20.04
     steps:
@@ -46,4 +47,22 @@ jobs:
         uses: github/codeql-action/upload-sarif@v2
         if: always()
         with:
-          sarif_file: 'trivy-results.sarif'
+          sarif_file: 'trivy-results.sarif'
+
+      - name: Run Trivy in GitHub SBOM mode and submit results to Dependency Graph
+        uses: aquasecurity/[email protected]
+        with:
+          scan-type: 'image'
+          format: 'spdx-json'
+          output: 'dependency-results.sbom.json'
+          image-ref: 'trivy/charmed-spark:test'
+          github-pat: ${{ secrets.GITHUB_TOKEN }}
+          severity: "MEDIUM,HIGH,CRITICAL"
+          scanners: "vuln"
+
+      - name: Upload trivy report as a Github artifact
+        uses: actions/upload-artifact@v4
+        with:
+          name: trivy-sbom-report
+          path: '${{ github.workspace }}/dependency-results.sbom.json'
+          retention-days: 90
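
For reference, the new SBOM step can be approximated locally with the Trivy CLI. The image reference below mirrors the workflow and is illustrative only; a local run produces the SPDX document but does not submit anything to the GitHub Dependency Graph, which the action does via the github-pat input.

# Rough local equivalent of the new CI step; assumes the Trivy CLI and jq are installed
# and that the image tag used in the workflow is available locally.
trivy image --format spdx-json --output dependency-results.sbom.json trivy/charmed-spark:test

# Sanity check: count the packages recorded in the SPDX document.
jq '.packages | length' dependency-results.sbom.json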
37 changes: 32 additions & 5 deletions tests/integration/setup-aws-cli.sh
@@ -2,14 +2,41 @@
 
 # Install AWS CLI
 sudo snap install aws-cli --classic
 
 set -x
 
 get_s3_endpoint(){
-  # Get S3 endpoint from MinIO
-  kubectl get service minio -n minio-operator -o jsonpath='{.spec.clusterIP}'
+  # Print the endpoint where the S3 bucket is exposed on.
+  kubectl get service minio -n minio-operator -o jsonpath='{.spec.clusterIP}'
 }
 
 
+get_s3_access_key(){
+  # Print the S3 Access Key by reading it from K8s secret or by outputting the default value
+  kubectl get secret -n minio-operator microk8s-user-1 &> /dev/null
+  if [ $? -eq 0 ]; then
+    # echo "Use access-key from secret"
+    access_key=$(kubectl get secret -n minio-operator microk8s-user-1 -o jsonpath='{.data.CONSOLE_ACCESS_KEY}' | base64 -d)
+  else
+    # echo "use default access-key"
+    access_key="minio"
+  fi
+  echo "$access_key"
+}
+
+
+get_s3_secret_key(){
+  # Print the S3 Secret Key by reading it from K8s secret or by outputting the default value
+  kubectl get secret -n minio-operator microk8s-user-1 &> /dev/null
+  if [ $? -eq 0 ]; then
+    # echo "Use access-key from secret"
+    secret_key=$(kubectl get secret -n minio-operator microk8s-user-1 -o jsonpath='{.data.CONSOLE_SECRET_KEY}' | base64 -d)
+  else
+    # echo "use default access-key"
+    secret_key="minio123"
+  fi
+  echo "$secret_key"
+}
+
 wait_and_retry(){
   # Retry a command for a number of times by waiting a few seconds.
 
@@ -37,8 +64,8 @@ wait_and_retry get_s3_endpoint
 
 S3_ENDPOINT=$(get_s3_endpoint)
 DEFAULT_REGION="us-east-2"
-ACCESS_KEY=$(kubectl get secret -n minio-operator microk8s-user-1 -o jsonpath='{.data.CONSOLE_ACCESS_KEY}' | base64 -d)
-SECRET_KEY=$(kubectl get secret -n minio-operator microk8s-user-1 -o jsonpath='{.data.CONSOLE_SECRET_KEY}' | base64 -d)
+ACCESS_KEY=$(get_s3_access_key)
+SECRET_KEY=$(get_s3_secret_key)
 
 # Configure AWS CLI credentials
 aws configure set aws_access_key_id $ACCESS_KEY
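
Once this script has configured the credentials, the setup can be smoke-tested against the MinIO endpoint. The commands below are illustrative only: the bucket name is made up, and they assume the MinIO cluster IP is reachable from the host running the tests.

# Hypothetical smoke test of the configured AWS CLI credentials against MinIO
# (cluster IP lookup copied from the script above).
S3_ENDPOINT=$(kubectl get service minio -n minio-operator -o jsonpath='{.spec.clusterIP}')
aws s3 mb "s3://smoke-test-bucket" --endpoint-url "http://$S3_ENDPOINT"
aws s3 ls --endpoint-url "http://$S3_ENDPOINT"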
24 changes: 20 additions & 4 deletions tests/integration/utils/s3-utils.sh
@@ -20,14 +20,30 @@ get_s3_endpoint(){
 
 
 get_s3_access_key(){
-  # Print the S3 Access Key by reading it from K8s secret
-  kubectl get secret -n minio-operator microk8s-user-1 -o jsonpath='{.data.CONSOLE_ACCESS_KEY}' | base64 -d
+  # Print the S3 Access Key by reading it from K8s secret or by outputting the default value
+  kubectl get secret -n minio-operator microk8s-user-1 &> /dev/null
+  if [ $? -eq 0 ]; then
+    # echo "Use access-key from secret"
+    access_key=$(kubectl get secret -n minio-operator microk8s-user-1 -o jsonpath='{.data.CONSOLE_ACCESS_KEY}' | base64 -d)
+  else
+    # echo "use default access-key"
+    access_key="minio"
+  fi
+  echo "$access_key"
 }
 
 
 get_s3_secret_key(){
-  # Print the S3 Secret Key by reading it from K8s secret
-  kubectl get secret -n minio-operator microk8s-user-1 -o jsonpath='{.data.CONSOLE_SECRET_KEY}' | base64 -d
+  # Print the S3 Secret Key by reading it from K8s secret or by outputting the default value
+  kubectl get secret -n minio-operator microk8s-user-1 &> /dev/null
+  if [ $? -eq 0 ]; then
+    # echo "Use access-key from secret"
+    secret_key=$(kubectl get secret -n minio-operator microk8s-user-1 -o jsonpath='{.data.CONSOLE_SECRET_KEY}' | base64 -d)
+  else
+    # echo "use default access-key"
+    secret_key="minio123"
+  fi
+  echo "$secret_key"
 }
 
 
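A minimal usage sketch for the updated helpers (the echo labels are illustrative): sourcing the script and calling the functions prints either the credentials read from the microk8s-user-1 secret or, when that secret is absent, the new defaults.

# Illustrative only: source the helpers and show which credentials would be used.
# Falls back to "minio"/"minio123" when the microk8s-user-1 secret does not exist.
source tests/integration/utils/s3-utils.sh
echo "S3 endpoint:   $(get_s3_endpoint)"
echo "S3 access key: $(get_s3_access_key)"
echo "S3 secret key: $(get_s3_secret_key)"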