This repo holds configuration for the Jenkins testing infrastructure used by CORD.
The best way to work with this repo is to check it out with `repo`, per the
instructions in "Downloading testing and QA repositories".
NOTE: This repo uses git submodules. If you get an error like this when testing:

```
jenkins_jobs.errors.JenkinsJobsException: Failed to find suitable template named '{project-name}-ci-jobs'
```

or have trouble with the other tasks, please run:

```shell
git submodule init && git submodule update
```

to obtain these submodules, as a plain clone of the repo won't automatically check them out.
For help, try the LF mailing list for release engineering, or the #lf-releng channel on Freenode IRC, which is usually well attended.
When writing jobs, there are some things that JJB should be used to handle, and some things that should be put in external scripts or pipeline jobs.
Some things that are good to put in a JJB job:
- Perform all SCM tasks (checkout code, etc.)
- Specify the executor type and size (don't hardcode this in a pipeline)
JJB Jobs should not:
- Have complicated (more than 2-5 lines) scripts inline - instead, include
  these using `!include-escape` or `!include-raw-escape`.
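As an illustration, a job-template that pulls in an external script with `!include-raw-escape` might look like this (a sketch only; the template id, name, and script path are hypothetical):

```yaml
- job-template:
    id: 'example-shell-job'
    name: '{project}-example'

    builders:
      - shell:
          !include-raw-escape: shell/example-test.sh
```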
When adding a new git repo that needs tests:
- Create a new file in `jjb/verify` named `<reponame>.yaml`
- Create a project using the name of the repo, and a `job-group` section with a
  list of `job-template` ids to invoke.
- Optional: If you have more than one job that applies to the repo, add a
  `dependency-jobs` variable to each item in the `job-group` `jobs` list to
  control the order of jobs to invoke. Note that this is a string with the name
  of the job as created in Jenkins, not the `job-template` id.
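Putting these steps together, a minimal `jjb/verify/<reponame>.yaml` might look like this sketch (the project name, job-template ids, and Jenkins job name here are all hypothetical, not real jobs in this repo):

```yaml
- project:
    name: exampleproject
    project: '{name}'

    jobs:
      - 'example-verify-jobs'

- job-group:
    name: 'example-verify-jobs'
    jobs:
      - 'verify-licensed'
      - 'example-api-test':
          # name of the licensing job as it appears in Jenkins,
          # not its job-template id
          dependency-jobs: 'verify_exampleproject_licensed'
```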
To create jobs that are usable by multiple repos, create a job-template that can be shared between them.
Most job-templates are kept in `jjb/*.yaml`. See `lint.yaml` or
`api-test.yaml` for examples.
Every job-template must have at least a `name` (which creates the name of the
job in Jenkins) and an `id` item (referred to in the `job-group`), as well as
several modules that invoke Jenkins functionality, or macros (see below, and in
the docs) that customize or provide defaults for those modules.
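A skeletal job-template showing these required items might look like the following (the id, names, and parameter wiring are illustrative; the `cord-infra-properties` macro and its parameters are described below):

```yaml
- job-template:
    id: 'example-lint'
    name: 'lint_{project}'
    description: 'Run lint checks on {project}'

    # macro invocation - note that all parameters must be passed explicitly
    properties:
      - cord-infra-properties:
          build-days-to-keep: '{build-days-to-keep}'
          artifact-num-to-keep: '{artifact-num-to-keep}'

    builders:
      - shell: |
          echo "linting {project}"
```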
Default values can be found in `jjb/defaults.yaml`. These can be used in
projects, jobs, and job-templates.
NOTE: Defaults don't work with macros - all parameters must be passed to every
macro invocation.
If you need to customize how a Jenkins module is run, consider creating a
reusable macro. These are generally put in `jjb/cord-macros.yaml`, and have
names matching `cord-infra-*`.
See also `global-jjb/jjb/lf-macros.yaml` for more macros to use (these have
names matching `lf-infra-*`).
There are a few useful macros defined in `jjb/cord-macros.yaml`:

- `cord-infra-properties` - sets build discarder settings
- `cord-infra-gerrit-repo-scm` - checks out the entire source tree with the
  `repo` tool
- `cord-infra-gerrit-repo-patch` - checks out a patch to a git repo within a
  checked-out `repo` source tree (WIP, doesn't work yet)
- `cord-infra-gerrit-trigger-patchset` - triggers build on gerrit new patchset,
  draft publishing, comments, etc.
- `cord-infra-gerrit-trigger-merge` - triggers build on gerrit merge
JJB job definitions can be tested by running `make test`, which will create a
python virtualenv, install jenkins-job-builder in it, then try building all the
job files, which are put in `job-configs` and can be inspected.
The output of this is somewhat difficult to decipher, sometimes requiring you to go through the python backtrace to figure out where the error occurred in the jenkins-job-builder source code.
There is also a `make lint` target which will run `yamllint` on all the JJB
YAML files, which can catch a variety of formatting errors.
If you're writing a new shell script, it's a good idea to test it with
`shellcheck` before including it - failing to heed those messages and then
using `!include-escape` to add it to the job may lead to hard-to-debug problems
with the job definition.
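As an illustration of the kind of problem shellcheck catches, here is a common bug (SC2086): an unquoted variable expansion splits on whitespace, which can silently change behavior once the script is embedded in a job:

```shell
#!/usr/bin/env bash
# A value containing a space, as might come from a job parameter
greeting="hello world"

# Unquoted expansion splits on whitespace into two words (shellcheck SC2086)
unquoted_count=$(printf '%s\n' $greeting | wc -l)

# Quoted expansion keeps it as a single word
quoted_count=$(printf '%s\n' "$greeting" | wc -l)

echo "unquoted=$unquoted_count quoted=$quoted_count"
```

Running shellcheck over the script before inclusion flags the unquoted expansion, which is much easier to fix here than inside a generated Jenkins job.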
Another way of creating jobs in Jenkins is to use the Pipeline method, which is
done by creating a Groovy script that describes a job, traditionally stored in
a `Jenkinsfile`. It is recommended that you use the Declarative Pipeline
syntax, which can be linted with the `shell/jjb/jflint.sh` script; this
verifies the pipeline syntax against the Jenkins server. The script may run
automatically on commits in the future, so please verify your scripts with it.
The recommended way of creating a pipeline job is to create a pipeline script
in `jjb/pipeline` with an extension of `.groovy`, and a job-template that
calls it and uses the JJB parameters to configure the pipeline job. One
necessary parameter is `executorNode`, which should be defined in the job
or job-template, but is used to specify the agent in the pipeline script (the
executor the job runs on).
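A minimal declarative pipeline using such a parameter might look like this sketch (the stage content is illustrative; `executorNode` is the JJB-supplied parameter described above):

```groovy
pipeline {
  /* executorNode is set as a parameter by the JJB job-template,
     and selects the agent (executor) the job runs on */
  agent {
    label "${params.executorNode}"
  }

  stages {
    stage('test') {
      steps {
        sh 'make test'
      }
    }
  }
}
```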
For help writing pipeline jobs, see the Pipeline steps documentation for the available syntax.
The Jenkins executors are spun up automatically in EC2, and torn down after jobs have completed. Some are "one shot" and others (usually static or lint checks) are re-used to run multiple jobs.
The AMI images used for these executors are built with
Packer, and most of the local configuration happens in
`packer/provision/basebuild.sh`. If you need a new tool installed in the
executor, add the steps to install it there. Changes are verified, and when
merged a new AMI image is generated.
NOTE: Future builds won't automatically use the new AMI - you have to manually
set the instance `AMI ID` on Jenkins in Global Config > Cloud > Amazon EC2. The
new AMI ID can be found near the end of the logs of the run of
ci-management-packer-merge--basebuild.
Source OS images published by upstream projects like Ubuntu and CentOS need to be well specified, so that the correct images are used. Anyone can list images in the marketplace, so care should be taken to select the official ones.
This is done in Packer using `source_ami_filter`, which is parameterized on the
image name, owner, and product-code within the `packer/vars/<os_name>.json`
files that define the source images.
Upstream docs that specify AMIs:
Unfortunately these filter parameters have conflicts - images with the official
Ubuntu owner (`099720109477`) don't specify a `product-code` field.
As an alternative, the `aws-marketplace` owner is used, which has the same
images. To find the product code, go to the AWS
Marketplace and find the image you want,
then click the button to launch the image. In the URL there will be a
`productId` UUID parameter - find this, and then use it to search for a product
code using the aws command line:
```shell
aws ec2 describe-images \
  --owners aws-marketplace \
  --filters "Name=name,Values=*d83d0782-cb94-46d7-8993-f4ce15d1a484*"
```
Then look at the output for the `ProductCodeId` - this is what should go in the
OS json file in the `source_ami_filter_product_code` field.
Once you've determined the correct settings, the Packer filter can be tested with this command:
```shell
aws ec2 describe-images \
  --owners aws-marketplace \
  --filters "Name=name,Values=*ubuntu*16.04*" \
            "Name=product-code,Values=csv6h7oyg29b7epjzg7qdr7no" \
            "Name=architecture,Values=x86_64" \
            "Name=virtualization-type,Values=hvm" \
            "Name=root-device-type,Values=ebs"
```
If you create a new cloud instance type, make sure to set both the `Security group names` and `Subnet ID for VPC` fields, or it will fail to instantiate.