From 3f2387acc2782570afcd48125e28d086331888a3 Mon Sep 17 00:00:00 2001
From: edp-bot
Date: Thu, 12 Oct 2023 15:35:18 +0000
Subject: [PATCH] Update documentation

---
 operator-guide/sonarqube/index.html |   2 +-
 search/search_index.json            |   2 +-
 sitemap.xml                         | 294 ++++++++++++++--------------
 sitemap.xml.gz                      | Bin 1462 -> 1462 bytes
 4 files changed, 149 insertions(+), 149 deletions(-)

diff --git a/operator-guide/sonarqube/index.html b/operator-guide/sonarqube/index.html
index 33f220724..3e2b6f272 100644
--- a/operator-guide/sonarqube/index.html
+++ b/operator-guide/sonarqube/index.html
@@ -1,4 +1,4 @@
- SonarQube - EPAM Delivery Platform

SonarQube Integration⚓︎

This documentation guide provides comprehensive instructions for integrating SonarQube with the EPAM Delivery Platform.

Info

In EDP release 3.5, we changed the deployment strategy for the sonarqube-operator component: it is no longer installed by default. The sonarURL parameter management has been transferred from the values.yaml file to Kubernetes secrets.

Prerequisites⚓︎

Before proceeding, ensure that you have the following prerequisites:

  • Kubectl version 1.26.0 is installed.
  • Helm version 3.12.0+ is installed.

Installation⚓︎

To install SonarQube with pre-defined templates, use the sonar-operator installed via the Cluster Add-Ons approach.

Configuration⚓︎

To establish robust authentication and precise access control, generating a SonarQube token is essential. This token is a distinct identifier, enabling effortless integration between SonarQube and EDP. To generate the SonarQube token, proceed with the following steps:

  1. Open the SonarQube UI and navigate to Administration -> Security -> User. Create a new user or select an existing one. Click the Options List icon to create a token:

    SonarQube user settings

  2. Type the ci-user username, define an expiration period, and click the Generate button to create the token:

    SonarQube create token

  3. Click the Copy button to copy the generated <Sonarqube-token>:

    SonarQube token

  4. Provision secrets using a Manifest, the EDP Portal, or the External Secrets Operator:

Go to EDP Portal -> EDP -> Configuration -> SonarQube. Update or fill in the URL and Token fields and click the Save button:

SonarQube update manual secret

apiVersion: v1
+ SonarQube - EPAM Delivery Platform       

apiVersion: v1
 kind: Secret
 metadata:
   name: ci-sonarqube
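For reference, a complete Secret manifest might look like the sketch below. The key names url and token, the edp namespace, and the example URL are assumptions based on the truncated snippet above; verify them against the EDP documentation before use.

    # Hypothetical ci-sonarqube secret; key names, namespace, and values are assumptions.
    apiVersion: v1
    kind: Secret
    metadata:
      name: ci-sonarqube
      namespace: edp
    type: Opaque
    stringData:
      url: https://sonarqube.example.com   # SonarQube URL (assumed key name)
      token: <Sonarqube-token>             # token generated in the steps above (assumed key name)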
diff --git a/search/search_index.json b/search/search_index.json
index 52d8a4812..6166b245f 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"faq/","title":"FAQ","text":""},{"location":"faq/#how-do-i-set-parallel-reconciliation-for-a-number-of-codebase-branches","title":"How Do I Set Parallel Reconciliation for a Number of Codebase Branches?","text":"

Set the CODEBASE_BRANCH_MAX_CONCURRENT_RECONCILES environment variable in the codebase-operator by updating its Deployment template. For example:

          ...\n          env:\n            - name: WATCH_NAMESPACE\n          ...\n\n            - name: CODEBASE_BRANCH_MAX_CONCURRENT_RECONCILES\n              value: 10\n...\n

It's not recommended to set the value above 10.
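A minimal sketch of the corresponding Deployment fragment is shown below; the container name and the value are illustrative assumptions only.

    # Hypothetical fragment of the codebase-operator Deployment spec.
    spec:
      template:
        spec:
          containers:
            - name: codebase-operator       # assumed container name
              env:
                - name: WATCH_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: CODEBASE_BRANCH_MAX_CONCURRENT_RECONCILES
                  value: '10'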

"},{"location":"faq/#how-to-change-the-lifespan-of-an-access-token-that-is-used-for-edp-portal-and-oidc-login-plugin","title":"How To Change the Lifespan of an Access Token That Is Used for EDP Portal and 'oidc-login' Plugin?","text":"

Change the Access Token Lifespan: go to your Keycloak and select the Openshift realm > Realm settings > Tokens > Access Token Lifespan > set a new value in the field and save the change.

By default, the \"Access Token Lifespan\" value is 5 minutes.

Access Token Lifespan

"},{"location":"features/","title":"Basic Concepts","text":"

Consult the EDP Glossary section for definitions mentioned on this page and the EDP Toolset for a full list of tools used with the Platform. The table below contains a full list of features provided by EDP.

Features Description Cloud Agnostic EDP runs on Kubernetes cluster, so any Public Cloud Provider which provides Kubernetes can be used. Kubernetes clusters deployed on-premises work as well CI/CD for Microservices EDP is initially designed to support CI/CD for Microservices running as containerized applications inside Kubernetes Cluster. EDP also supports CI for:- Terraform Modules, - Open Policy Rules,- Workflows for Java (8,11,17), JavaScript (React, Vue, Angular, Express, Antora), C# (.NET 6.0), Python (FastAPI, Flask, 3.8), Go (Beego, Operator SDK) Version Control System (VCS) EDP installs Gerrit as a default Source Code Management (SCM) tool. EDP also supports GitHub and GitLab integration Branching Strategy EDP supports Trunk-based development as well as GitHub/GitLab flow. EDP creates two Pipelines per each codebase branch (see Pipeline Framework): Code Review and Build Repository Structure EDP provides separate Git repository per each Codebase and doesn't work with Monorepo. However, EDP does support customization and runs helm-lint, dockerfile-lint steps using Monorepo approach. Artifacts Versioning EDP supports two approaches for Artifacts versioning: - default (BRANCH-[TECH_STACK_VERSION]-BUILD_ID)- EDP (MAJOR.MINOR.PATCH-BUILD_ID), which is SemVer.Custom versioning can be created by implementing get-version stage Application Library EDP provides baseline codebase templates for Microservices, Libraries, within create strategy while onboarding new Codebase Stages Library Each EDP Pipeline consists of pre-defined steps (stages). Consult library documentation for more details CI Pipelines EDP provides CI Pipelines for first-class citizens: - Applications (Microservices) based on Java (8,11,17), JavaScript (React, Vue, Angular, Express, Antora), C# (.NET 6.0), Python (FastAPI, Flask, 3.8), Go (Beego, Operator SDK)- Libraries based on Java (8,11,17), JavaScript (React, Vue, Angular, Express), Python (FastAPI, Flask, 3.8), Groovy Pipeline (Codenarc), Terraform, Rego (OPA), Container (Docker), Helm (Pipeline), C#(.NET 6.0)- Autotests based on Java8, Java11, Java17 CD Pipelines EDP provides capabilities to design CD Pipelines (in Admin Console) for Microservices and defines logic for artifacts flow (promotion) from env to env. Artifacts promotion is performed automatically (Autotests), manually (User Approval) or combining both approaches Autotests EDP provides CI pipeline for autotest implemented in Java. Autotests can be used as Quality Gates in CD Pipelines Custom Pipeline Library EDP can be extended by introducing Custom Pipeline Library Dynamic Environments Each EDP CD Pipeline creates/destroys environment upon user requests"},{"location":"getting-started/","title":"Quick Start","text":""},{"location":"getting-started/#software-requirements","title":"Software Requirements","text":"
  • Kubernetes cluster 1.23+ or OpenShift 4.9+;
  • Kubectl tool;
  • Helm 3.10.x+;
  • Keycloak 18.0+;
  • Kiosk 0.2.11.
"},{"location":"getting-started/#minimal-hardware-requirements","title":"Minimal Hardware Requirements","text":"

The system should have the following specifications to run properly:

  • CPU: 8 Core
  • Memory: 32 Gb
"},{"location":"getting-started/#edp-toolset","title":"EDP Toolset","text":"

EPAM Delivery Platform supports the following tools:

Domain Related Tools/Solutions Artifacts Management Nexus Repository, Jfrog Artifactory AWS IRSA, AWS ECR, AWS EFS, Parameter Store, S3, ALB/NLB, Route53 Build .NET, Go, Apache Gradle, Apache Maven, NPM Cluster Backup Velero Code Review Gerrit, GitLab, GitHub Container Registry AWS ECR, OpenShift Registry, Harbor, DockerHub Containers Hadolint, Kaniko, Crane Documentation as Code MkDocs, Antora (AsciiDoc) Infrastructure as Code Terraform, TFLint, Terraform Docs, Crossplane, AWS Controllers for Kubernetes Kubernetes Deployment Kubectl, Helm, Helm Docs, Chart Testing, Argo CD, Argo Rollout Kubernetes Multitenancy Kiosk Logging OpenSearch, EFK, ELK, Loki, Splunk Monitoring Prometheus, Grafana, VictoriaMetrics Pipeline Orchestration Tekton, Jenkins Policies/Rules Open Policy Agent Secrets Management External Secret Operator, Vault Secure Development SonarQube, DefectDojo, Dependency Track, Semgrep, Grype, Trivy, Clair, GitLeaks, CycloneDX Generator, tfsec, checkov SSO Keycloak, oauth2-proxy Test Report Tool ReportPortal, Allure Tracing OpenTelemetry, Jaeger"},{"location":"getting-started/#install-edp","title":"Install EDP","text":"

To install EDP with the necessary parameters, please refer to the Install EDP section of the Operator Guide. Mind the parameters in the EDP installation chart. For details, please refer to the values.yaml.

Find below the example of the installation command:

    helm install edp epamedp/edp-install --wait --timeout=900s \\\n    --version <edp_version> \\\n    --set global.dnsWildCard=<cluster_DNS_wildcard> \\\n    --set global.platform=<platform_type> \\\n    --set awsRegion=<region> \\\n    --set global.dockerRegistry.url=<aws_account_id>.dkr.ecr.<region>.amazonaws.com \\\n    --set keycloak-operator.keycloak.url=<keycloak_endpoint> \\\n    --set global.gerritSSHPort=<gerrit_ssh_port> \\\n    --namespace edp\n

Warning

Please be aware that the command above is an example.

"},{"location":"getting-started/#related-articles","title":"Related Articles","text":"

Getting Started

"},{"location":"glossary/","title":"Glossary","text":"

Get familiar with the definitions and context for the most useful EDP terms presented in the table below.

Terms Details EDP Component - an item used in CI/CD process EDP Portal UI - an EDP component that helps to manage, set up, and control the business entities. Artifactory - an EDP component that stores all the binary artifacts. NOTE: Nexus is used as a possible implementation of a repository. CI/CD Server - an EDP component that launches pipelines that perform the build, QA, and deployment code logic. NOTE: Jenkins is used as a possible implementation of a CI/CD server. Code Review tool - an EDP component that collaborates with the changes in the codebase. NOTE: Gerrit is used as a possible implementation of a code review tool. Identity Server - an authentication server providing a common way to verify requests to all of the applications. NOTE: Keycloak is used as a possible implementation of an identity server. Security Realm Tenant - a realm in identity server (e.g Keycloak) where all users' accounts and their access permissions are managed. The realm is unique for the identity server instance. Static Code Analyzer - an EDP component that inspects continuously a code quality before the necessary changes appear in a master branch. NOTE: SonarQube is used as a possible implementation of a static code analyzer. VCS (Version Control System) - a replication of the Gerrit repository that displays all the changes made by developers. NOTE: GitHub and GitLab are used as the possible implementation of a repository with the version control system. EDP Business Entity - a part of the CI/CD process (the integration, delivery, and deployment of any codebase changes) Application - a codebase type that is built as the binary artifact and deployable unit with the code that is stored in VCS. As a result, the application becomes a container and can be deployed in an environment. Autotests - a codebase type that inspects a product (e.g. an application set) on a stage. Autotests are not deployed to any container and launched from the respective code stage. CD Pipeline (Continuous Delivery Pipeline) - an EDP business entity that describes the whole delivery process of the selected application set via the respective stages. The main idea of the CD pipeline is to promote the application version between the stages by applying the sequential verification (i.e. the second stage will be available if the verification on the first stage is successfully completed). NOTE: The CD pipeline can include the essential set of applications with its specific stages as well. CD Pipeline Stage - an EDP business entity that is presented as the logical gate required for the application set inspection. Every stage has one OpenShift project where the selected application set is deployed. All stages are sequential and promote applications one-by-one. Codebase - an EDP business entity that possesses a code. Codebase Branch - an EDP business entity that represents a specific version in a Git branch. Every codebase branch has a Codebase Docker Stream entity. Codebase Docker Stream - a deployable component that leads to the application build and displays that the last build was verified on the specific stage. Every CD pipeline stage accepts a set of Codebase Docker Streams (CDS) that are input and output. SAMPLE: if an application1 has a master branch, the input CDS will be named as [app name]-[pipeline name]-[stage name]-[master] and the output after the passing of the DEV stage will be as follows: [app name]-[pipeline name]-[stage name]-[dev]-[verified]. Library - a codebase type that is built as the binary artifact, i.e. 
it`s stored in the Artifactory and can be uploaded by other applications, autotests or libraries. Quality Gate - an EDP business entity that represents the minimum acceptable results after the testing. Every stage has a quality gate that should be passed to promote the application. The stage quality gate can be a manual approve from a QA specialist OR a successful autotest launch. Quality Gate Type - this value defines trigger type that promotes artifacts (images) to the next environment in CD Pipeline. There are manual and automatic types of quality gates. The manual type means that the promoting process should be confirmed in Jenkins. The automatic type promotes the images automatically in case there are no errors in the Allure Report. NOTE: If any of the test types is not passed, the CD pipeline will fail. Trigger Type - a value that defines a trigger type used for the CD pipeline triggering. There are manual and automatic types of triggering. The manual type means that the CD pipeline should be triggered manually. The automatic type triggers the CD pipeline automatically as soon as the Codebase Docker Stream was changed. EDP CI/CD Pipelines Framework - a library that allows extending the Jenkins pipelines and stages to develop an application. Pipelines are presented as the shared library that can be connected in Jenkins. The library is connected using the Git repository link (a public repository that is supported by EDP) on the GitHub. Allure Report- a tool that represents test results in one brief report in a clear form. Automated Tests - different types of automated tests that can be run on the environment for a specific stage. Build Pipeline - a Jenkins pipeline that builds a corresponding codebase branch in the Codebase. Build Stage - a stage that takes place after the code has been submitted/merged to the repository of the main branch (the pull request from the feature branch is merged to the main one, the Patch set is submitted in Gerrit). Code Review Pipeline - a Jenkins pipeline that inspects the code candidate in the Code Review tool. Code Review Stage - a stage where code is reviewed before it goes to the main branch repository of the version control system (the commit to the feature branch is pushed, the Patch set is created in Gerrit). Deploy Pipeline - a Jenkins pipeline that is responsible for the CD Pipeline Stage deployment with the full set of applications and autotests. Deployment Stage - a part of the Continuous Delivery where artifacts are being deployed to environments. EDP CI/CD Pipelines - an orchestrator for stages that is responsible for the common technical events, e.g. initialization, in Jenkins pipeline. The set of stages for the pipeline is defined as an input JSON file for the respective Jenkins job. NOTE: There is the ability to create the necessary realization of the library pipeline on your own as well. EDP CI/CD Stages - a repository that is launched in the Jenkins pipeline. Every stage is presented as an individual Groovy file in a corresponding repository. Such single responsibility realization allows rewriting of one essential stage without changing the whole pipeline. Environment - a part of the stage where the built and packed into an image application are deployed for further testing. It`s possible to deploy several applications to several environments (Team and Integration environments) within one stage. 
Integration Environment - an environment type that is always deployed as soon as the new application version is built in order to launch the integration test and promote images to the next stages. The Integration Environment can be triggered manually or in case a new image appears in the Docker registry. Jenkinsfile - a text file that keeps the definition of a Jenkins Pipeline and is checked into source control. Every Job has its Jenkinsfile that is stored in the specific application repository and in Jenkins as the plain text. Jenkins Node - a machine that is a part of the Jenkins environment that is capable of executing a pipeline. Jenkins Pipeline - a user-defined model of a CD pipeline. The pipeline code defines the entire build process. Jenkins Stage - a part of the whole CI/CD process that should pass the source code in order to be released and deployed on the production. Team Environment - an environment type that can be deployed at any time by the manual trigger of the Deploy pipeline where team or developers can check out their applications. NOTE: The promotion from such kind of environment is prohibited and developed only for the local testing. OpenShift / Kubernetes (K8S) ConfigMap - a resource that stores configuration data and processes the strings that do not contain sensitive information. Docker Container - is a lightweight, standalone, and executable package. Docker Registry - a store for the Docker Container that is created for the application after the Build pipeline performance. OpenShift Web Console - a web console that enables to view, manage, and change OpenShift / K8S resources using browser. Operator Framework - a deployable unit in OpenShift that is responsible for one or a set of resources and performs its life circle (adding, displaying, and provisioning). Path - a route component that helps to find a specified path (e.g. /api) at once and skip the other. Pod - the smallest deployable unit of the large microservice application that is responsible for the application launch. The pod is presented as the one launched Docker container. When the Docker container is collected, it will be kept in Docker Registry and then saved as Pod in the OpenShift project. NOTE: The Deployment Config is responsible for the Pod push, restart, and stop processes. PV (Persistent Volume) - a cluster resource that captures the details of the storage implementation and has an independent lifecycle of any individual pod. PVC (Persistent Volume Claim) - a user request for storage that can request specific size and access mode. PV resources are consumed by PVCs. Route - a resource in OpenShift that allows getting the external access to the pushed application. Secret - an object that stores and manages all the sensitive information (e.g. passwords, tokens, and SSH keys). Service - an external connection point with Pod that is responsible for the network. A specific Service is connected to a specific Pod using labels and redirects all the requests to Pod as well. Site - a route component (link name) that is created from the indicated application name and applies automatically the project name and a wildcard DNS record."},{"location":"overview/","title":"Overview","text":"

EPAM Delivery Platform (EDP) is an open-source cloud-agnostic SaaS/PaaS solution for software development, licensed under Apache License 2.0. It provides a pre-defined set of CI/CD patterns and tools, which allow a user to start product development quickly with established code review, release, versioning, branching, build processes. These processes include static code analysis, security checks, linters, validators, dynamic feature environments provisioning. EDP consolidates the top Open-Source CI/CD tools by running them on Kubernetes/OpenShift, which enables web/app development either in isolated (on-prem) or cloud environments.

EPAM Delivery Platform, which is also called \"The Rocket\", allows shortening the time before active development can start from several months to several hours.

EDP consists of the following:

  • The platform based on managed infrastructure and container orchestration
  • Security covering authentication, authorization, and SSO for platform services
  • Development and testing toolset
  • Well-established engineering process and EPAM practices (EngX) reflected in CI/CD pipelines, and delivery analytics
  • Local development with debug capabilities
"},{"location":"overview/#features","title":"Features","text":"
  • Deployed and configured CI/CD toolset (Tekton, ArgoCD, Jenkins, Nexus, SonarQube, DefectDojo)
  • Gerrit, GitLab or GitHub as a version control system for your code
  • Tekton is a default pipeline orchestrator
  • Jenkins is an optional pipeline orchestrator
  • CI pipelines

    Tekton (by default)Jenkins (optional) Language Framework Build Tool Application Library Autotest Java Java 8, Java 11, Java 17 Gradle, Maven Python Python 3.8, FastAPI, Flask Python C# .Net 3.1, .Net 6.0 .Net Go Beego, Gin, Operator SDK Go JavaScript React, Vue, Angular, Express, Next.js, Antora NPM HCL Terraform Terraform Helm Helm, Pipeline Helm Groovy Codenarc Codenarc Rego OPA OPA Container Docker Kaniko Language Framework Build Tool Application Library Autotest Java Java 8, Java 11 Gradle, Maven Python Python 3.8 Python .Net .Net 3.1 .Net Go Beego, Operator SDK Go JavaScript React NPM HCL Terraform Terraform Groovy Codenarc Codenarc Rego OPA OPA Container Docker Kaniko
  • Portal UI as a single entry point
  • CD pipeline for Microservice Deployment
  • Kubernetes native approach (CRD, CR) to declare CI/CD pipelines
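As an illustration of this Kubernetes-native approach, a CI run can be declared as a cluster resource. The sketch below uses a generic Tekton PipelineRun with placeholder pipeline and parameter names; it is not an EDP-specific manifest.

    # Generic Tekton PipelineRun sketch; pipeline and parameter names are placeholders.
    apiVersion: tekton.dev/v1
    kind: PipelineRun
    metadata:
      generateName: sample-app-build-
    spec:
      pipelineRef:
        name: sample-build-pipeline        # assumed Pipeline name
      params:
        - name: git-url
          value: https://github.com/example/sample-app.git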
"},{"location":"overview/#whats-inside","title":"What's Inside","text":"

EPAM Delivery Platform (EDP) is suitable for all aspects of delivery, starting from development and including the capability to deploy a production environment. The EDP architecture is represented in the diagram below.

Architecture

EDP consists of four cross-cutting concerns:

  1. Infrastructure as a Service;
  2. GitOps approach;
  3. Container orchestration and centralized services;
  4. Security.

On top of these concerns, EDP adds several blocks that include:

  • EDP CI/CD Components. An EDP component enables a feature in CI/CD, for instance, artifacts storage and distribution (Nexus or Artifactory), static code analysis (Sonar), etc.;
  • EDP Artifacts. This element represents an artifact that is being delivered through EDP and presented as a code.

    Artifact samples: frontend, backend, mobile, applications, functional and non-functional autotests, workloads for 3rd party components that can be deployed together with applications.

  • EDP development and production environments that share the same logic. Environments wrap a set of artifacts with a specific version, and allow performing SDLC routines in order to be sure of the artifacts quality;
  • Pipelines. Pipelines cover CI/CD process, production rollout and updates. They also connect three elements indicated above via automation allowing SDLC routines to be non-human;
"},{"location":"overview/#technology-stack","title":"Technology Stack","text":"

Explore the EDP technology stack diagram

Technology stack

The EDP IaaS layer supports the most popular public clouds (AWS, Azure, and GCP), keeping the capability to be deployed on private/hybrid clouds based on OpenStack. EDP containers are based on Docker technology and orchestrated by Kubernetes-compatible solutions.

There are two main options for Kubernetes provided by EDP:

  • Managed Kubernetes in Public Clouds, to avoid the installation and management of a Kubernetes cluster and to get all the scaling and reliability benefits of this solution;
  • OpenShift, which is a Platform as a Service on top of Kubernetes from Red Hat. OpenShift is the default option for on-premise installation, and it can be considered when the solution built on top of EDP should be cloud-agnostic or requires enterprise support;

There is no limitation to run EDP on vanilla Kubernetes.

"},{"location":"overview/#related-articles","title":"Related Articles","text":"
  • Quick Start
  • Basic Concepts
  • Glossary
  • Supported Versions and Compatibility
"},{"location":"roadmap/","title":"RoadMap","text":"

RoadMap consists of five streams:

  • Community
  • Architecture
  • Building Blocks
  • Admin Console
  • Documentation
"},{"location":"roadmap/#i-community","title":"I. Community","text":"

Goals:

  • Innovation Through Collaboration
  • Improve OpenSource Adoption
  • Build Community around technology solutions EDP is built on
"},{"location":"roadmap/#deliver-operators-on-operatorhub","title":"Deliver Operators on OperatorHub","text":"

OperatorHub is the de facto leading solution which consolidates the Kubernetes community around Operators. EDP follows the best practices of delivering Operators in a quick and reliable way. We want to improve the deployment and management experience for our customers by publishing all EDP operators on this hub.

Another artifact aggregator used by EDP is ArtifactHub, which holds descriptions for both stable and under-development components.

OperatorHub. Keycloak Operator

EDP Keycloak Operator is now available from OperatorHub both for Upstream (Kubernetes) and OpenShift deployments.

"},{"location":"roadmap/#ii-architecture","title":"II. Architecture","text":"

Goals:

  • Improve reusability for EDP components
  • Integrate Kubernetes Native Deployment solutions
  • Introduce abstraction layer for CI/CD components
  • Build processes around the GitOps approach
  • Introduce secrets management
"},{"location":"roadmap/#kubernetes-multitenancy","title":"Kubernetes Multitenancy","text":"

Multiple instances of EDP are run in a single Kubernetes cluster. One way to achieve this is to use multitenancy. Initially, Kiosk was selected as the tool that provides this capability. An alternative option that the EDP team took into consideration is Capsule. Another tool which goes far beyond multitenancy is vcluster, making it a good candidate for e2e testing scenarios where one needs a simple, lightweight Kubernetes cluster in CI pipelines.

"},{"location":"roadmap/#microservice-reference-architecture-framework","title":"Microservice Reference Architecture Framework","text":"

EDP provides basic Application Templates for a number of technology stacks (Java, .Net, NPM, Python), and Helm is used as a deployment tool. The goal is to extend this library and provide Application Templates built on pre-defined architecture patterns (e.g., Microservice, API Gateway, Circuit Breaker, CQRS, Event Driven) and Deployment Approaches: Canary, Blue/Green. This also requires installing additional tools on the cluster.

"},{"location":"roadmap/#policy-enforcement-for-kubernetes","title":"Policy Enforcement for Kubernetes","text":"

Running workloads in Kubernetes calls for extra effort from cluster administrators to ensure those workloads follow best practices or specific requirements defined at the organization level. Those requirements can be formalized in policies and integrated into CI pipelines and the Kubernetes cluster (through the Admission Controller approach) to guarantee proper resource management during the development and runtime phases. EDP uses Open Policy Agent (from version 2.8.0), since it supports compliance checks for more use cases: Kubernetes workloads, Terraform and Java code, HTTP APIs and many others. Kyverno is another option being evaluated in the scope of this activity.

"},{"location":"roadmap/#secrets-management","title":"Secrets Management","text":"

EDP should provide secrets management as a part of the platform. There are multiple tools providing secrets management capabilities. The aim is to be aligned with the GitOps and Operator Pattern approaches, so HashiCorp Vault, Banzaicloud Bank Vaults, and Bitnami Sealed Secrets are currently used for internal projects, and some of them should be made publicly available as a part of the EDP deployment.

EDP Release 2.12.x

External Secret Operator is a recommended secret management tool for the EDP components.
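As an illustration of that approach, the sketch below shows an ExternalSecret resource from the External Secrets Operator; the secret store name and the remote key/property are assumptions, not EDP defaults.

    # Illustrative ExternalSecret; store name and remote references are assumptions.
    apiVersion: external-secrets.io/v1beta1
    kind: ExternalSecret
    metadata:
      name: ci-sonarqube
    spec:
      refreshInterval: 1h
      secretStoreRef:
        name: vault-backend            # assumed SecretStore name
        kind: SecretStore
      target:
        name: ci-sonarqube             # Kubernetes Secret to create
      data:
        - secretKey: token
          remoteRef:
            key: sonarqube             # assumed path in the external store
            property: token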

"},{"location":"roadmap/#release-management","title":"Release Management","text":"

Conventional Commits and Conventional Changelog are two approaches to be used as part of release process. Today EDP provides only capabilities to manage Release Branches. This activity should address this gap by formalizing and implementing Release Process as a part of EDP. Topics to be covered: Versioning, Tagging, Artifacts Promotion.

"},{"location":"roadmap/#kubernetes-native-cicd-pipelines","title":"Kubernetes Native CI/CD Pipelines","text":"

EDP uses Jenkins as a Pipeline Orchestrator. Jenkins runs the workload for the CI and CD parts. There is also basic support for GitLab CI, but it provides Docker image build functionality only. EDP is working on providing an alternative to Jenkins and using a Kubernetes-native approach for pipeline management. There are a number of tools which provide such capability:

  • Argo CD
  • Argo Workflows
  • Argo Rollouts
  • Tekton
  • Drone
  • Flux

This list is under investigation and the solution is going to be implemented in two steps:

  1. Introduce a tool that provides the Continuous Delivery/Deployment approach. Argo CD is one of the best to go with.
  2. Integrate EDP with a tool that provides Continuous Integration capabilities.

EDP Release 2.12.x

Argo CD is suggested as a solution providing the Continuous Delivery capabilities.

EDP Release 3.0

Tekton is used as a CI/CD pipelines orchestration tool on the platform. Review edp-tekton GitHub repository that keeps all the logic behind this solution on the EDP (Pipelines, Tasks, TriggerTemplates, Interceptors, etc). Get acquainted with the series of publications on our Medium Page.

"},{"location":"roadmap/#advanced-edp-role-based-model","title":"Advanced EDP Role-based Model","text":"

EDP has a number of base roles which are used across EDP. In some cases it is necessary to provide more granular permissions for specific users. It is possible to do this using Kubernetes Native approach.

"},{"location":"roadmap/#notifications-framework","title":"Notifications Framework","text":"

EDP has a number of components which need to report their statuses: Build/Code Review/Deploy Pipelines, changes in Environments, updates with artifacts. The goal for this activity is to onboard Kubernetes Native approach which provides Notification capabilities with different sources/channels integration (e.g. Email, Slack, MS Teams). Some of these tools are Argo Events, Botkube.

"},{"location":"roadmap/#reconciler-component-retirement","title":"Reconciler Component Retirement","text":"

The persistence layer, which is based on edp-db (PostgreSQL), and the reconciler component should be retired in favour of Kubernetes Custom Resources (CR). The latest features in EDP are implemented using the CR approach.

EDP Release 3.0

Reconciler component is deprecated and is no longer supported. All the EDP components are migrated to Kubernetes Custom Resources (CR).

"},{"location":"roadmap/#iii-building-blocks","title":"III. Building Blocks","text":"

Goals:

  • Introduce best practices from Microservice Reference Architecture deployment and observability using Kubernetes Native Tools
  • Enable integration with the Centralized Test Reporting Frameworks
  • Onboard SAST/DAST tool as a part of CI pipelines and Non-Functional Testing activities

EDP Release 2.12.x

SAST is introduced as a mandatory part of the CI Pipelines. The list of currently supported SAST scanners and the instruction on how to add them are also available.

"},{"location":"roadmap/#infrastructure-as-code","title":"Infrastructure as Code","text":"

The EDP target tool for Infrastructure as Code (IaC) is Terraform. EDP sees two CI/CD scenarios while working with IaC: Module Development and Live Environment Deployment. Today, EDP provides basic capabilities (CI Pipelines) for Terraform Module Development. At the same time, EDP currently doesn't provide deployment pipelines for Live Environments; this feature is under development. Terragrunt is an option to use in Live Environment deployment. Another Kubernetes-native approach to provision infrastructure components is Crossplane.

"},{"location":"roadmap/#database-schema-management","title":"Database Schema Management","text":"

One of the challenges for applications running in Kubernetes is managing the database schema. There are a number of tools which provide such capabilities, e.g. Liquibase and Flyway. Both tools provide version control for database schemas. There are different approaches to running migration scripts in Kubernetes: in an init container, as a separate Job, or as a separate CD stage. The purpose of this activity is to provide a database schema management solution in Kubernetes as a part of EDP. The EDP team investigates the SchemaHero tool and use cases which suit the Kubernetes-native approach for database schema migrations.
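For illustration only, the sketch below shows one of these approaches: running migrations in an init container before the application starts. The image, tag, and arguments are placeholders and not an EDP feature; Flyway can be used in a similar way.

    # Generic sketch: run schema migrations in an init container (image/args are placeholders).
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sample-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: sample-app
      template:
        metadata:
          labels:
            app: sample-app
        spec:
          initContainers:
            - name: db-migrate
              image: liquibase/liquibase:4.23   # assumed image and tag
              args: [ update ]                  # assumes a changelog and DB URL are configured
          containers:
            - name: app
              image: registry.example.com/sample-app:latest   # placeholder image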

"},{"location":"roadmap/#open-policy-agent","title":"Open Policy Agent","text":"

Open Policy Agent is introduced in version 2.8.0. EDP now supports CI for Rego Language, so you can develop your own policies. The next goal is to provide pipeline steps for running compliance policies check for Terraform, Java, Helm Chart as a part of CI process.

"},{"location":"roadmap/#report-portal","title":"Report Portal","text":"

EDP uses Allure Framework as a Test Report tool. Another option is to integrate Report Portal into EDP ecosystem.

EDP Release 3.0

Use ReportPortal to consolidate and analyze your Automation tests results. Consult our pages on how to perform reporting and Keycloak integration.

"},{"location":"roadmap/#carrier","title":"Carrier","text":"

Carrier provides Non-functional testing capabilities.

"},{"location":"roadmap/#java-17","title":"Java 17","text":"

EDP supports two LTS versions of Java: 8 and 11. The goal is to provide Java 17 (LTS) support.

EDP Release 3.2.1

CI Pipelines for Java 17 are available in EDP.

"},{"location":"roadmap/#velero","title":"Velero","text":"

Velero is used as a cluster backup tool and is deployed as a part of the Platform. Currently, Multitenancy/On-premise support for backup capabilities is in progress.

"},{"location":"roadmap/#istio","title":"Istio","text":"

Istio is to be used as a Service Mesh and to address challenges for Microservice or Distributed Architectures.

"},{"location":"roadmap/#kong","title":"Kong","text":"

Kong is one of the tools planned to be used as an API Gateway solution provider. Another possible candidate for investigation is Ambassador API Gateway.

"},{"location":"roadmap/#openshift-4x","title":"OpenShift 4.X","text":"

EDP supports the OpenShift 4.9 platform.

EDP Release 2.12.x

EDP Platform runs on the latest OKD versions: 4.9 and 4.10. Creating the IAM Roles for Service Account is a recommended way to work with AWS Resources from the OKD cluster.

"},{"location":"roadmap/#iv-admin-console-ui","title":"IV. Admin Console (UI)","text":"

Goals:

  • Improve UX for different user types to address their concerns in the delivery model
  • Introduce user management capabilities
  • Enrich with traceability metrics for products

EDP Release 2.12.x

EDP Team has introduced a new UI component called EDP Headlamp, which will replace the EDP Admin Console in future releases. EDP Headlamp is based on the Kinvolk Headlamp UI Client.

EDP Release 3.0

EDP Headlamp is used as a Control Plane UI on the platform.

EDP Release 3.4

Since EDP v3.4.0, Headlamp UI has been renamed to EDP Portal.

"},{"location":"roadmap/#users-management","title":"Users Management","text":"

EDP uses Keycloak as an Identity and Access provider. EDP roles/groups are managed inside the Keycloak realm, then these changes are propagated across the EDP Tools. We plan to provide this functionality in EDP Portal using the Kubernetes-native approach (Custom Resources).

"},{"location":"roadmap/#the-delivery-pipelines-dashboard","title":"The Delivery Pipelines Dashboard","text":"

The CD Pipeline section in EDP Portal provides basic information, such as environments, artifact versions deployed per each environment, and direct links to the namespaces. One option is to enrich this panel with metrics from the Prometheus, custom resources, or events. Another option is to use the existing dashboards and expose EDP metrics to them, for example, plugin for Lens or OpenShift UI Console.

"},{"location":"roadmap/#split-jira-and-commit-validation-sections","title":"Split Jira and Commit Validation Sections","text":"

The Commit Validate step was initially designed to be aligned with Jira Integration and cannot be used as a standalone feature. The target state is to ensure that the CommitMessage Validation and Jira Integration features can both be used independently. We also want to add support for Conventional Commits.

EDP Release 3.2.0

EDP Portal has separate sections for Jira Integration and CommitMessage Validation step.

"},{"location":"roadmap/#v-documentation-as-code","title":"V. Documentation as Code","text":"

Goal:

  • Transparent documentation and clear development guidelines for EDP customization.

Consolidate documentation in a single repository, edp-install, use the mkdocs tool to generate the docs, and use GitHub Pages as a hosting solution.

"},{"location":"supported-versions/","title":"Supported Versions and Compatibility","text":"

EPAM Delivery Platform supports only the last three versions. For stable performance, the EDP team recommends installing the corresponding Kubernetes and OpenShift versions as indicated in the table below.

Get acquainted with the list of the latest releases and component versions on which the platform is tested and verified:

EDP Release Version Release Date EKS Version OpenShift Version 3.4 Aug 18, 2023 1.26 4.12 3.3 May 25, 2023 1.26 4.12 3.2 Mar 26, 2023 1.23 4.10 3.1 Jan 24, 2023 1.23 4.10 3.0 Dec 19, 2022 1.23 4.10 2.12 Aug 30, 2022 1.23 4.10"},{"location":"developer-guide/","title":"Overview","text":"

The EDP Developer guide is intended for developers and provides details on the necessary actions to extend the EDP functionality.

"},{"location":"developer-guide/edp-workflow/","title":"EDP Project Rules. Working Process","text":"

This page contains the details on the project rules and working process for the EDP team and contributors. Explore the main points about working with Gerrit, following the main commit flow, as well as the details about commit types and messages below.

"},{"location":"developer-guide/edp-workflow/#project-rules","title":"Project Rules","text":"

Before starting the development, please check the project rules:

  1. It is highly recommended to become familiar with the Gerrit flow. For details, please refer to the Gerrit official documentation and pay attention to the main points:

    a. Voting in Gerrit.

    b. Resolution of Merge Conflict.

    c. Comments resolution.

    d. One Jira task should have one Merge Request (MR). If there are many changes within one MR, add the next patch set to the open MR by selecting the Amend commit check box.

  2. Only the Assignee is responsible for the MR merge and Jira task status.

  3. Every MR should be merged in a timely manner.

  4. Log time to Jira ticket.

"},{"location":"developer-guide/edp-workflow/#working-process","title":"Working Process","text":"

With EDP, the main workflow is based on getting a Jira task and creating a Merge Request according to the rules described below.

Workflow

Get Jira task \u2192 implement and verify the results yourself \u2192 create Merge Request (MR) \u2192 send for review \u2192 resolve comments/add changes, ask colleagues for the final review \u2192 track the MR merge \u2192 verify the results yourself \u2192 change the status in the Jira ticket to CODE COMPLETE or RESOLVED \u2192 share the necessary links with a QA specialist in the QA Verification channel \u2192 the QA specialist closes the Jira task after verification \u2192 the Jira task should be CLOSED.

Commit Flow

  1. Get a task in the Jira/GitHub dashboard. Please be aware of the following points:

    JiraGitHub

    a. Every task has a reporter who can provide more details in case something is not clear.

    b. The responsible person for the task and code implementation is the assignee who tracks the following:

    • Actual Jira task status.
    • Time logging.
    • Add comments, attach necessary files.
    • In comments, add link that refers to the merged MR (optional, if not related to many repositories).
    • Code review and the final merge.
    • MS Teams chats - ping other colleagues, answer questions, etc.
    • Verification by a QA specialist.
    • Bug fixing.

    c. Pay attention to the task Status that differs in different entities, the workflow will help to see the whole task processing:

    View Jira workflow

    d. There are several entities that are used on the EDP project: Story, Improvement, Task, Bug.

    a. Every task has a reporter who can provide more details in case something is not clear.

    b. The responsible person for the task and code implementation is the assignee who tracks the following:

    • Actual GitHub task status.
    • Add comments, attach necessary files.
    • In comments, add link that refers to the merged MR (optional, if not related to many repositories).
    • Code review and the final merge.
    • MS Teams chats - ping other colleagues, answer questions, etc.
    • Verification by a QA specialist.
    • Bug fixing.

    c. If the task is created on your own, make sure it is populated completely. See an example below:

    GitHub issue

  2. Implement feature, improvement, fix and check the results on your own. If it is impossible to check the results of your work before the merge, verify all later.

  3. Create a Merge Request, for details, please refer to the Code Review Process.

  4. When committing, use the pattern: commit type: Commit message (#GitHub ticket number).

    a. commit type:

    feat: (new feature for the user, not a new feature for build script)

    fix: (bug fix for the user, not a fix to a build script)

    docs: (changes to the documentation)

    style: (formatting, missing semicolons, etc; no production code change)

    refactor: (refactoring production code, eg. renaming a variable)

    test: (adding missing tests, refactoring tests; no production code change)

    chore: (updating grunt tasks etc; no production code change)

    !: (added to other commit types to mark breaking changes) For example:

    feat!: Job provisioner is responsible for the formation of Jenkinsfile (#26)\n\nBREAKING CHANGE: Job provisioner creates Jenkinsfile and configures it in Jenkins pipeline as a pipeline script.\n

    b. Commit message:

    • brief, for example:

      fix: Fix Gerrit plugin for Jenkins provisioning (#62)

      or

    • descriptive, for example:

      feat: Provide the ability to configure hadolint check (#88)\n\n*Add configuration files .hadolint.yaml and .hadolint.yml to stash\n

      Note

      It is mandatory to start a commit message with a capital letter.

    c. GitHub tickets are typically identified using a number preceded by the # sign and enclosed in parentheses.

Note

Make sure there is a descriptive commit message for a breaking change Merge Request. For example:

feat!: Job provisioner is responsible for the formation of Jenkinsfile

BREAKING CHANGE: Job provisioner creates Jenkinsfile and configures it in Jenkins pipeline as a pipeline script.

Note

If a Merge Request contains both new functionality and breaking changes, make sure the functionality description is placed before the breaking changes. For example:

feat!: Update Gerrit to improve access

  • Implement Developers group creation process
  • Align group permissions

BREAKING CHANGES: Update Gerrit config according to groups

"},{"location":"developer-guide/edp-workflow/#related-articles","title":"Related Articles","text":"
  • Conventional Commits
  • Karma
"},{"location":"developer-guide/local-development/","title":"Workspace Setup Manual","text":"

This page is intended for developers with the aim to share details on how to set up the local environment and start coding in Go language for EPAM Delivery Platform.

"},{"location":"developer-guide/local-development/#prerequisites","title":"Prerequisites","text":"
  • Git is installed;
  • One of our repositories where you would like to contribute is cloned locally;
  • Docker is installed;
  • Kubectl is set up;
  • Local Kubernetes cluster (Kind is recommended) is installed;
  • Helm is installed;
  • Any IDE (GoLand is used here as an example) is installed;
  • GoLang stable version is installed.

Note

Make sure the GOPATH and GOROOT environment variables are added to PATH.

"},{"location":"developer-guide/local-development/#environment-setup","title":"Environment Setup","text":"

Set up your environment by following the steps below.

"},{"location":"developer-guide/local-development/#set-up-your-ide","title":"Set Up Your IDE","text":"

We recommend using GoLand and enabling the Kubernetes plugin. Before installing plugins, make sure to save your work because the IDE may require restarting.

"},{"location":"developer-guide/local-development/#set-up-your-operator","title":"Set Up Your Operator","text":"

To set up the cloned operator, follow the three steps below:

  1. Configure Go Build Option. Open folder in GoLand, click the button and select the Go Build option:

    Add configuration

  2. Fill in the variables in Configuration tab:

    • In the Files field, indicate the path to the main.go file;
    • In the Working directory field, indicate the path to the operator;
    • In the Environment field, specify the namespace to watch by setting WATCH_NAMESPACE variable. It should equal default but it can be any other if required by the cluster specifications.
    • In the Environment field, also specify the platform type by setting PLATFORM_TYPE. It should equal either kubernetes or openshift.

    Build config

  3. Check cluster connectivity and variables. Local development implies working within a local Kubernetes cluster. Kind (Kubernetes in Docker) is recommended, so set up this or another environment before running the code; see the sketch below.
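A minimal Kind cluster configuration is sketched below; the cluster name and node layout are arbitrary examples.

    # kind-config.yaml: a minimal local cluster for operator development (illustrative).
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    name: edp-dev            # arbitrary cluster name
    nodes:
      - role: control-plane

Create the cluster with kind create cluster --config kind-config.yaml and verify connectivity with kubectl cluster-info before running the operator.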

"},{"location":"developer-guide/local-development/#pre-commit-activities","title":"Pre-commit Activities","text":"

Before making a commit and sending a pull request, take precautionary measures to avoid breaking other parts of the code.

"},{"location":"developer-guide/local-development/#testing-and-linting","title":"Testing and Linting","text":"

Testing and linting must be used before every single commit with no exceptions. The instructions for the commands below are written here.

It is mandatory to run test and lint to make sure the code passes the tests and meets acceptance criteria. Most operators are covered by tests so just run them by issuing the commands \"make test\" and \"make lint\":

  make test\n

The command \"make test\" should give the output similar to the following:

\"make test\" command

  make lint\n

The command \"make lint\" should give the output similar to the following:

\"make lint\" command

"},{"location":"developer-guide/local-development/#observe-auto-generated-docs-api-and-manifests","title":"Observe Auto-Generated Docs, API and Manifests","text":"

The commands below are especially essential when making changes to API. The code is unsatisfactory if these commands fail.

  • Generate documentation in the .MD file format so the developer can read it:

    make api-docs\n

    The command \"make api-docs\" should give the output similar to the following:

\"make api-docs\" command with the file contents

  • There are also manifests within the operator that generate the zz_generated.deepcopy.go file in the /api/v1 directory. This file is necessary for the platform to work, but it is time-consuming to fill it in yourself, so there is a mechanism that does it automatically. Update it using the following command and check if it looks correct:

    make generate\n

    The command \"make generate\" should give the output similar to the following:

\"make generate\" command

  • Refresh custom resource definitions for Kubernetes, thus allowing the cluster to know what resources it deals with.

    make manifests\n

    The command \"make manifests\" should give the output similar to the following:

\"make manifests\" command

At the end of the procedure, you can push your code confidently to your branch and create a pull request.

That's it, you're all set! Good luck in coding!

"},{"location":"developer-guide/local-development/#related-articles","title":"Related Articles","text":"
  • EDP Project Rules. Working Process
"},{"location":"developer-guide/mk-docs-development/","title":"Documentation Flow","text":"

This section defines necessary steps to start developing the EDP documentation in the MkDocs Framework. The framework presents a static site generator with documentation written in Markdown. All the docs are configured with a YAML configuration file.
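For orientation, a minimal mkdocs.yml sketch is shown below; the site name, theme, and navigation entries are placeholders rather than the actual EDP configuration.

    # Minimal illustrative mkdocs.yml; values are placeholders.
    site_name: EPAM Delivery Platform
    theme:
      name: material           # assumed theme
    nav:
      - Overview: index.md
      - Developer Guide: developer-guide/index.md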

Note

For more details on the framework, please refer to the MkDocs official website.

There are two options for working with MkDocs:

  • Work with MkDocs if Docker is installed
  • Work with MkDocs if Docker is not installed

Please see below the detailed description of each option and choose the one that suits you.

"},{"location":"developer-guide/mk-docs-development/#mkdocs-with-docker","title":"MkDocs With Docker","text":"

Prerequisites:

  • Docker is installed.
  • make utility is installed.
  • Git is installed. Please refer to the Git downloads.

To work with MkDocs, take the following steps:

  1. Clone the edp-install repository to your local folder.

  2. Run the following command:

    make docs

  3. Enter the localhost:8000 address in the browser and check that documentation pages are available.

  4. Open the file editor, navigate to edp-install->docs and make necessary changes. Check all the changes at localhost:8000.

  5. Create a merge request with changes.

"},{"location":"developer-guide/mk-docs-development/#mkdocs-without-docker","title":"MkDocs Without Docker","text":"

Prerequisites:

  • Git is installed. Please refer to the Git downloads.
  • Python 3.9.5 is installed.

To work with MkDocs without Docker, take the following steps:

  1. Clone the edp-install repository to your local folder.

  2. Run the following command:

    pip install -r  hack/mkdocs/requirements.txt\n
  3. Run the local development command:

    mkdocs serve --dev-addr 0.0.0.0:8000\n

    Note

    This command may not work on Windows, so a quick solution is:

    python -m mkdocs serve --dev-addr 0.0.0.0:8000\n

  4. Enter the localhost:8000 address in the browser and check that documentation pages are available.

  5. Open the file editor, navigate to edp-install->docs and make necessary changes. Check all the changes at localhost:8000.

  6. Create a merge request with changes.

"},{"location":"operator-guide/","title":"Overview","text":"

The EDP Operator guide is intended for DevOps and provides information on EDP installation, configuration and customization, as well as the platform support. Inspect the documentation to adjust the EPAM Delivery Platform according to your business needs:

  • The Installation section provides the prerequisites for EDP installation, including Kubernetes or OpenShift cluster setup, Keycloak, DefectDojo, Kiosk, and Ingress-nginx setup as well as the subsequent deployment of EPAM Delivery Platform.
  • The Configuration section indicates the options to set up the project, such as adding a code language, backing up, integrating VCS with Jenkins or Tekton, managing Jenkins pipelines, and logging.
  • The Integration section comprises the AWS, GitHub, GitLab, Jira, and Logsight integration options.
  • The Tutorials section provides information on working with various aspects, for example, using cert-manager in OpenShift, deploying AWS EKS cluster, deploying OKD 4.9 cluster, deploying OKD 4.10 cluster, managing Jenkins agent, and upgrading Keycloak v.17.0.x-legacy to v.19.0.x on Kubernetes.
"},{"location":"operator-guide/add-jenkins-agent/","title":"Manage Jenkins Agent","text":"

Inspect the main steps to add and update Jenkins agent.

"},{"location":"operator-guide/add-jenkins-agent/#createupdate-jenkins-agent","title":"Create/Update Jenkins Agent","text":"

Every Jenkins agent is based on epamedp/edp-jenkins-base-agent. Check DockerHub for the latest version. Use it to create a new agent (or update an old one). See the example of the gradle-java11-agent Dockerfile below:

View: Dockerfile
    # Copyright 2021 EPAM Systems.\n    # Licensed under the Apache License, Version 2.0 (the \"License\");\n    # you may not use this file except in compliance with the License.\n    # You may obtain a copy of the License at\n    # http://www.apache.org/licenses/LICENSE-2.0\n    # Unless required by applicable law or agreed to in writing, software\n    # distributed under the License is distributed on an \"AS IS\" BASIS,\n    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n    # See the License for the specific language governing permissions and\n    # limitations under the License.\n\n    FROM epamedp/edp-jenkins-base-agent:1.0.1\n    SHELL [\"/bin/bash\", \"-o\", \"pipefail\", \"-c\"]\n    ENV GRADLE_VERSION=7.1 \\\n        PATH=$PATH:/opt/gradle/bin\n\n    # Install Gradle\n    RUN curl -skL -o /tmp/gradle-bin.zip https://services.gradle.org/distributions/gradle-$GRADLE_VERSION-bin.zip && \\\n        mkdir -p /opt/gradle && \\\n        unzip -q /tmp/gradle-bin.zip -d /opt/gradle && \\\n        ln -sf /opt/gradle/gradle-$GRADLE_VERSION/bin/gradle /usr/local/bin/gradle\n\n    RUN yum install java-11-openjdk-devel.x86_64 -y && \\\n        rpm -V java-11-openjdk-devel.x86_64 && \\\n        yum clean all -y\n\n    WORKDIR $HOME/.gradle\n\n    RUN chown -R \"1001:0\" \"$HOME\" && \\\n        chmod -R \"g+rw\" \"$HOME\"\n\n    USER 1001\n

After the Docker agent update/creation, build and load the image into the project registry (e.g. DockerHub, AWS ECR, etc.).

"},{"location":"operator-guide/add-jenkins-agent/#add-jenkins-agent-configuration","title":"Add Jenkins Agent Configuration","text":"

To add a new Jenkins agent, take the steps below:

  1. Run the following command. Please be aware that edp is the name of the EDP tenant.

      kubectl edit configmap jenkins-slaves -n edp\n

    Note

    On an OpenShift cluster, run the oc command instead of kubectl one.

    Add a new agent template. View: ConfigMap jenkins-slaves

      data:\n    docker-template: |-\n     <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n<inheritFrom></inheritFrom>\n<name>docker</name>\n<namespace></namespace>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<instanceCap>2147483647</instanceCap>\n<slaveConnectTimeout>100</slaveConnectTimeout>\n<idleMinutes>5</idleMinutes>\n<activeDeadlineSeconds>0</activeDeadlineSeconds>\n<label>docker</label>\n<serviceAccount>jenkins</serviceAccount>\n<nodeSelector>beta.kubernetes.io/os=linux</nodeSelector>\n<nodeUsageMode>NORMAL</nodeUsageMode>\n<workspaceVolume class=\"org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume\">\n<memory>false</memory>\n</workspaceVolume>\n<volumes/>\n<containers>\n<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n<name>jnlp</name>\n<image>IMAGE_NAME:IMAGE_TAG</image>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<workingDir>/tmp</workingDir>\n<command></command>\n<args>${computer.jnlpmac} ${computer.name}</args>\n<ttyEnabled>false</ttyEnabled>\n<resourceRequestCpu></resourceRequestCpu>\n<resourceRequestMemory></resourceRequestMemory>\n<resourceLimitCpu></resourceLimitCpu>\n<resourceLimitMemory></resourceLimitMemory>\n<envVars>\n<org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n<key>JAVA_TOOL_OPTIONS</key>\n<value>-XX:+UnlockExperimentalVMOptions -Dsun.zip.disableMemoryMapping=true</value>\n</org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n</envVars>\n<ports/>\n</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n</containers>\n<envVars/>\n<annotations/>\n<imagePullSecrets/>\n<podRetention class=\"org.csanchez.jenkins.plugins.kubernetes.pod.retention.Default\"/>\n</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n

    Note

    The name and label properties should be unique (docker in the example above). Insert the image name and tag instead of IMAGE_NAME:IMAGE_TAG.

  2. Open Jenkins to ensure that everything is added correctly. Click the Manage Jenkins option, navigate to the Manage Nodes and Clouds->Configure Clouds->Kubernetes->Pod Templates..., and scroll down to find the new Jenkins agent Pod Template details...:

    Jenkins pod template

    As a result, the newly added Jenkins agent will be available in the Advanced Settings block of the Admin Console tool during the codebase creation:

    Advanced settings

  3. "},{"location":"operator-guide/add-jenkins-agent/#modify-existing-agent-configuration","title":"Modify Existing Agent Configuration","text":"

    If your application is integrated with EDP, take the steps below to change an existing agent configuration:

    1. Run the following command. Please be aware that edp is the name of the EDP tenant.

        kubectl edit configmap jenkins-slaves -n edp\n

      Note

      On an OpenShift cluster, run the oc command instead of the kubectl one.

    2. Find the agent template in use and change the parameters.

    3. Open Jenkins and check that the changes have been applied correctly. Click the Manage Jenkins option, navigate to the Manage Nodes and Clouds->Configure Clouds->Kubernetes->Pod Templates..., and scroll down to the Pod Template details... with the necessary data.

    "},{"location":"operator-guide/add-ons-overview/","title":"Cluster Add-Ons Overview","text":"

    This page describes the Cluster Add-Ons for the EPAM Delivery Platform, as well as their purpose, benefits, and usage.

    "},{"location":"operator-guide/add-ons-overview/#what-are-add-ons","title":"What Are Add-Ons","text":"

    EDP Add-Ons are a Kubernetes-based structure that enables users to quickly install additional components for the platform using Argo CD applications.

    Add-Ons have been introduced into EDP starting from version 3.4.0. They enable users to seamlessly integrate the platform with various additional components, such as SonarQube, Nexus, Keycloak, Jira, and more. This eliminates the need for manual installations, as outlined in the Install EDP page.

    In a nutshell, Add-Ons are separate Helm Charts that are meant to be installed with one click using the Argo CD tool.

    "},{"location":"operator-guide/add-ons-overview/#add-ons-repository-structure","title":"Add-Ons Repository Structure","text":"

    All the Add-Ons are stored in our public GitHub repository. Apart from the default Helm and Git files, it contains Argo CD custom resources called Applications as well as the application source code. The repository adheres to the GitOps approach, which enables Add-Ons to roll back changes when needed. The repository structure is the following:

      \u251c\u2500\u2500 CHANGELOG.md\n  \u251c\u2500\u2500 LICENSE\n  \u251c\u2500\u2500 Makefile\n  \u251c\u2500\u2500 README.md\n  \u251c\u2500\u2500 add-ons\n  \u2514\u2500\u2500 chart\n
    • add-ons - the directory that contains Helm charts of the applications that can be integrated with EDP using Add-Ons.
    • chart - the directory that contains Helm charts with application templates that will be used to create custom resources called Applications for Argo CD.
    "},{"location":"operator-guide/add-ons-overview/#enable-edp-add-ons","title":"Enable EDP Add-Ons","text":"

    To enable EDP Add-Ons, it is necessary to have Argo CD configured, and to connect and synchronize the forked repository. To do this, follow the guidelines below:

    1. Fork the Add-Ons repository to your personal account.

    2. Provide the parameter values for the values.yaml files of the desired Add-Ons you are going to install.

    3. Navigate to Argo CD -> Settings -> Repositories. Connect your forked repository where you have the values.yaml files changed by clicking the + Connect repo button:

      Connect the forked repository

    4. In the appeared window, fill in the following fields and click the Connect button:

      • Name - select the namespace where the project is going to be deployed;
      • Choose your connection method - choose Via SSH;
      • Type - choose Helm;
      • Repository URL - enter the URL of your forked repository.

      Repository parameters

    5. As soon as the repository is connected, the new item in the repository list will appear:

      Connected repository

    6. Navigate to Argo CD -> Applications. Click the + New app button:

      Adding Argo CD application

    7. Fill in the required fields (an equivalent Application manifest is sketched after this list):

      • Application Name - addons-demo;
      • Project name - select the namespace where the project is going to be deployed;
      • Sync policy - select Manual;
      • Repository URL - enter the URL of your forked repository;
      • Revision - HEAD;
      • Path - select chart;
      • Cluster URL - enter the URL of your cluster;
      • Namespace - enter the namespace which must be equal to the Project name field.
    8. As soon as the repository is synchronized, the list of applications that can be installed by Add-Ons will be shown:

      Add-Ons list

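    For reference, the UI fields from step 7 roughly correspond to the following Argo CD Application manifest. This is an illustrative sketch: the project, repository URL, and cluster URL are placeholders to be replaced with your values.

      kubectl apply -f - <<'EOF'\n# Argo CD Application equivalent to the addons-demo application created via the UI in step 7\napiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n  name: addons-demo\n  namespace: argocd\nspec:\n  project: <project-name>            # the project/namespace selected in step 7\n  source:\n    repoURL: <forked-add-ons-repo-url>\n    targetRevision: HEAD\n    path: chart\n  destination:\n    server: <cluster-url>\n    namespace: <project-name>        # must be equal to the Project name field\n  # no syncPolicy.automated block, so the application is synchronized manually\nEOF\n
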
    "},{"location":"operator-guide/add-ons-overview/#install-edp-add-ons","title":"Install EDP Add-Ons","text":"

    Now that Add-Ons are enabled in Argo CD, they can be installed by following the steps below:

    1. Choose the Add-On to install.

    2. On the chosen Add-On, click the \u22ee button and then Details:

      Open Add-Ons

    3. To install the Add-On, click the \u22ee button -> Sync:

      Install Add-Ons

    4. Once the Add-On is installed, the Sync OK message will appear in the Add-On status bar:

      Sync OK message

    5. Open the application details by clicking on the little square with an arrow underneath the Add-On name:

      Open details

    6. Track application resources and status in the App details menu:

      Application details

    As you can see, Argo CD offers great observability and monitoring tools for its resources, which come in handy when using EDP Add-Ons.

    "},{"location":"operator-guide/add-ons-overview/#available-add-ons-list","title":"Available Add-Ons List","text":"

    The list of the available Add-Ons:

    Name Description Default Argo CD A GitOps continuous delivery tool that helps automate the deployment, configuration, and lifecycle management of applications in Kubernetes clusters. false AWS EFS CSI Driver A Container Storage Interface (CSI) driver that enables the dynamic provisioning of Amazon Elastic File System (EFS) volumes in Kubernetes clusters. true Cert Manager A native Kubernetes certificate management controller that automates the issuance and renewal of TLS certificates. true DefectDojo A security vulnerability management tool that allows tracking and managing security findings in applications. true DependencyTrack A Software Composition Analysis (SCA) platform that helps identify and manage open-source dependencies and their associated vulnerabilities. true EDP An internal platform created by EPAM to enhance software delivery processes using DevOps principles and tools. false Extensions OIDC EDP Helm chart to provision OIDC clients for different Add-Ons using EDP Keycloak Operator. true External Secrets A Kubernetes Operator that fetches secrets from external secret management systems and injects them as Kubernetes Secrets. true Fluent Bit A lightweight and efficient log processor and forwarder that collects and routes logs from various sources in Kubernetes clusters. false Harbor A cloud-native container image registry that provides support for vulnerability scanning, policy-based image replication, and more. true Nginx ingress An Ingress controller that provides external access to services running within a Kubernetes cluster using Nginx as the underlying server. true Jaeger Operator An operator for deploying and managing Jaeger, an end-to-end distributed tracing system, in Kubernetes clusters. true Keycloak An open-source Identity and Access Management (IAM) solution that enables authentication, authorization, and user management in Kubernetes clusters. true Keycloak PostgreSQL A PostgreSQL database operator that simplifies the deployment and management of PostgreSQL instances in Kubernetes clusters. false MinIO Operator An operator that simplifies the deployment and management of MinIO, a high-performance object storage server compatible with Amazon S3, in Kubernetes clusters. true OpenSearch A community-driven, open-source search and analytics engine that provides scalable and distributed search capabilities for Kubernetes clusters. true OpenTelemetry Operator An operator for automating the deployment and management of OpenTelemetry, a set of observability tools for capturing, analyzing, and exporting telemetry data. true PostgreSQL Operator An operator for running and managing PostgreSQL databases in Kubernetes clusters with high availability and scalability. true Prometheus Operator An operator that simplifies the deployment and management of Prometheus, a monitoring and alerting toolkit, in Kubernetes clusters. true Redis Operator An operator for managing Redis, an in-memory data structure store, in Kubernetes clusters, providing high availability and horizontal scalability. true StorageClass A Kubernetes resource that provides a way to define different classes of storage with different performance characteristics for persistent volumes. true Tekton A flexible and cloud-native framework for building, testing, and deploying applications using Kubernetes-native workflows. true Vault An open-source secrets management solution that provides secure storage, encryption, and access control for sensitive data in Kubernetes clusters. 
true"},{"location":"operator-guide/add-other-code-language/","title":"Add Other Code Language","text":"

    There is an ability to extend the default set of code languages when creating a codebase with the Clone or Import strategy.

    Other code language

    Warning

    The Create strategy does not allow customizing the default code language set.

    To customize the Build Tool list, perform the following:

    • Edit the edp-admin-console deployment by adding the necessary code language to the BUILD_TOOLS environment variable:

       kubectl edit deployment edp-admin-console -n edp\n

      Note

      On an OpenShift cluster, run the oc command instead of the kubectl one.

      Info

      edp is the name of the EDP tenant here and in all the following steps.

      View: edp-admin-console deployment
      ...\nspec:\ncontainers:\n- env:\n...\n- name: BUILD_TOOLS\nvalue: docker # List of custom build tools in Admin Console, e.g. 'docker,helm';\n...\n...\n
    • Add the Jenkins agent by following the instruction.
    • Add the Custom CI pipeline provisioner by following the instruction.
    • As a result, the newly added Jenkins agent will be available in the Select Jenkins Slave dropdown list of the Advanced Settings block during the codebase creation:

      Advanced settings

    If it is necessary to create Code Review and Build pipelines, add the corresponding entries (e.g. stages['Build-application-docker'] and stages['Code-review-application-docker']). See the example below:

    ...\nstages['Code-review-application-docker'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" + ',{\"name\": \"sonar\"}]'\nstages['Build-application-docker'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sonar\"},' +\n'{\"name\": \"build-image-kaniko\"}' + ',{\"name\": \"git-tag\"}]'\n...\n

    Jenkins job provisioner

    Note

    Application is one of the available options. Another option might be to add a library. Please refer to the Add Library page for details.

    "},{"location":"operator-guide/add-other-code-language/#related-articles","title":"Related Articles","text":"
    • Add Application
    • Add Library
    • Manage Jenkins Agent
    • Manage Jenkins CI Pipeline Job Provisioner
    "},{"location":"operator-guide/add-security-scanner/","title":"Add Security Scanner","text":"

    In order to add a new security scanner, perform the steps below:

    1. Select a pipeline customization option from the Customize CI Pipeline article and follow the steps described there to create a new repository.

      Note

      This tutorial focuses on adding a new stage using a shared library via the custom global pipeline libraries.

    2. Open the new repository and create a directory named /src/com/epam/edp/customStages/impl/ci/impl/stageName/ in the library repository, for example: /src/com/epam/edp/customStages/impl/ci/impl/security/. After that, add a Groovy file with a custom name to the same stages directory, for example: CustomSAST.groovy.

    3. Copy the logic from SASTMavenGradleGoApplication.groovy stage into the new CustomSAST.groovy stage.

    4. Add a new runGoSecScanner function to the stage:

      @Stage(name = \"sast-custom\", buildTool = [\"maven\",\"gradle\",\"go\"], type = [ProjectType.APPLICATION])\nclass CustomSAST {\n...\ndef runGoSecScanner(context) {\ndef edpName = context.platform.getJsonPathValue(\"cm\", \"edp-config\", \".data.edp_name\")\ndef reportData = [:]\nreportData.active = \"true\"\nreportData.verified = \"false\"\nreportData.path = \"sast-gosec-report.json\"\nreportData.type = \"Gosec Scanner\"\nreportData.productTypeName = \"Tenant\"\nreportData.productName = \"${edpName}\"\nreportData.engagementName = \"${context.codebase.name}-${context.git.branch}\"\nreportData.autoCreateContext = \"true\"\nreportData.closeOldFindings = \"true\"\nreportData.pushToJira = \"false\"\nreportData.environment = \"Development\"\nreportData.testTitle = \"SAST\"\nscript.sh(script: \"\"\"\n                set -ex\n                gosec -fmt=json -out=${reportData.path} ./...\n        \"\"\")\nreturn reportData\n}\n...\n}\n
    5. Add function calls for the runGoSecScanner and publishReport functions:

      ...\nscript.node(\"sast\") {\nscript.dir(\"${testDir}\") {\nscript.unstash 'all-repo'\n...\ndef dataFromGoSecScanner = runGoSecScanner(context)\npublishReport(defectDojoCredentials, dataFromGoSecScanner)\n}\n}\n...\n
    6. The Gosec scanner will be installed on the Jenkins SAST agent, which is based on epamedp/edp-jenkins-base-agent. Please check DockerHub for its latest version.

      See below an example of the edp-jenkins-sast-agent Dockerfile:

      View: Default Dockerfile
       # Copyright 2022 EPAM Systems.\n\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n # You may obtain a copy of the License at\n # http://www.apache.org/licenses/LICENSE-2.0\n\n # Unless required by applicable law or agreed to in writing, software\n # distributed under the License is distributed on an \"AS IS\" BASIS,\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n\n # See the License for the specific language governing permissions and\n # limitations under the License.\n\n FROM epamedp/edp-jenkins-base-agent:1.0.31\n\n SHELL [\"/bin/bash\", \"-o\", \"pipefail\", \"-c\"]\n\n USER root\n\n ENV SEMGREP_SCANNER_VERSION=0.106.0 \\\n     GOSEC_SCANNER_VERSION=2.12.0\n\n RUN apk --no-cache add \\\n     curl=7.79.1-r2 \\\n     build-base=0.5-r3 \\\n     python3-dev=3.9.5-r2 \\\n     py3-pip=20.3.4-r1 \\\n     go=1.16.15-r0\n\n # hadolint ignore=DL3059\n RUN pip3 install --no-cache-dir --upgrade --ignore-installed \\\n     pip==22.2.1 \\\n     ruamel.yaml==0.17.21 \\\n     semgrep==${SEMGREP_SCANNER_VERSION}\n\n # Install GOSEC\n RUN curl -Lo /tmp/gosec.tar.gz https://github.com/securego/gosec/releases/download/v${GOSEC_SCANNER_VERSION}/gosec_${GOSEC_SCANNER_VERSION}_linux_amd64.tar.gz && \\\n     tar xf /tmp/gosec.tar.gz && \\\n     rm -f /tmp/gosec.tar.gz && \\\n     mv gosec /bin/gosec\n\n RUN chown -R \"1001:0\" \"$HOME\" && \\\n     chmod -R \"g+rw\" \"$HOME\"\n\n USER 1001\n
    "},{"location":"operator-guide/add-security-scanner/#related-articles","title":"Related Articles","text":"
    • Customize CI Pipeline
    • Static Application Security Testing Overview
    • Semgrep
    "},{"location":"operator-guide/argocd-integration/","title":"Argo CD Integration","text":"

    EDP uses Jenkins Pipeline as a part of the Continuous Delivery/Continuous Deployment implementation. Another approach is to use the Argo CD tool as an alternative to Jenkins. Argo CD follows the best GitOps practices, uses a Kubernetes-native approach for deployment management, and has a rich UI and the required RBAC capabilities.

    "},{"location":"operator-guide/argocd-integration/#argo-cd-deployment-approach-in-edp","title":"Argo CD Deployment Approach in EDP","text":"

    Argo CD can be installed using two different approaches:

    • Cluster-wide scope with the cluster-admin access
    • Namespaced scope with the single namespace access

    Both approaches can be deployed with High Availability (HA) or Non High Availability (non HA) installation manifests.

    EDP uses the HA deployment with the cluster-admin permissions to minimize cluster resource consumption by sharing a single Argo CD instance across multiple EDP Tenants. Please follow the installation instructions to deploy Argo CD.

    "},{"location":"operator-guide/argocd-integration/#edp-argo-cd-integration","title":"EDP Argo CD Integration","text":"

    See a diagram below for the details:

    Argo CD Diagram

    • Argo CD is deployed in a separate argocd namespace.
    • Argo CD uses a cluster-admin role for managing cluster-scope resources.
    • The control-plane application is created using the App of Apps approach, and its code is managed by the control-plane members.
    • The control-plane is used to onboard new Argo CD Tenants (Argo CD Projects - AppProject).
    • The EDP Tenant Member manages Argo CD Applications using kind: Application in the edpTenant namespace.

    The App Of Apps approach is used to manage the EDP Tenants. Inspect the edp-grub repository structure that is used to provide the EDP Tenants for the Argo CD Projects:

    edp-grub\n\u251c\u2500\u2500 LICENSE\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 apps                      ### All Argo CD Applications are stored here\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 grub-argocd.yaml      # Application that provisions Argo CD Resources - Argo Projects (EDP Tenants)\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 grub-keycloak.yaml    # Application that provisions Keycloak Resources - Argo CD Groups (EDP Tenants)\n\u251c\u2500\u2500 apps-configs\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 grub\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 argocd            ### Argo CD resources definition\n\u2502\u00a0\u00a0     \u2502\u00a0\u00a0 \u251c\u2500\u2500 team-bar.yaml\n\u2502\u00a0\u00a0     \u2502\u00a0\u00a0 \u2514\u2500\u2500 team-foo.yaml\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 keycloak          ### Keycloak resources definition\n\u2502\u00a0\u00a0         \u251c\u2500\u2500 team-bar.yaml\n\u2502\u00a0\u00a0         \u2514\u2500\u2500 team-foo.yaml\n\u251c\u2500\u2500 bootstrap\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 root.yaml             ### Root application in App of Apps, which provision Applications from /apps\n\u2514\u2500\u2500 examples                  ### Examples\n\u2514\u2500\u2500 tenant\n        \u2514\u2500\u2500 foo-petclinic.yaml\n

    The Root Application must be created under the control-plane scope.
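    For illustration, a sketch of such a root Application is shown below. The repository URL, project, and target revision are placeholders and depend on your control-plane setup; the path points to the /apps directory of the edp-grub structure above.

      kubectl apply -f - <<'EOF'\n# Root application of the App of Apps: it provisions the Applications stored in /apps of the edp-grub repository\napiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n  name: grub-root\n  namespace: argocd\nspec:\n  project: <control-plane-project>\n  source:\n    repoURL: <edp-grub-repository-url>\n    targetRevision: <branch>\n    path: apps\n  destination:\n    server: https://kubernetes.default.svc\n    namespace: argocd\n  syncPolicy:\n    automated:\n      selfHeal: true\n      prune: true\nEOF\n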

    "},{"location":"operator-guide/argocd-integration/#configuration","title":"Configuration","text":"

    Note

    Make sure that both EDP and Argo CD are installed, and that SSO is enabled.

    To start using Argo CD with EDP, perform the following steps:

    "},{"location":"operator-guide/argocd-integration/#keycloak","title":"Keycloak","text":"
    1. Create a Keycloak Group.

      apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealmGroup\nmetadata:\nname: argocd-team-foo-users\nspec:\nname: ArgoCD-team-foo-users\nrealm: main\n
    2. In Keycloak, add users to the ArgoCD-team-foo-users Keycloak Group.

    "},{"location":"operator-guide/argocd-integration/#argo-cd","title":"Argo CD","text":"
    1. Add a credential template for Gerrit, GitHub, GitLab integrations. The credential template must be created for each Git server.

      GerritGitHub/GitLab

      Copy existing SSH private key for Gerrit to Argo CD namespace

      EDP_NAMESPACE=<EPD_NAMESPACE>\nGERRIT_PORT=$(kubectl get gerrit gerrit -n ${EDP_NAMESPACE} -o jsonpath='{.spec.sshPort}')\nGERRIT_ARGOCD_SSH_KEY_NAME=\"gerrit-argocd-sshkey\"\nGERRIT_URL=$(echo \"ssh://argocd@gerrit.${EDP_NAMESPACE}:${GERRIT_PORT}\" | base64)\nkubectl get secret ${GERRIT_ARGOCD_SSH_KEY_NAME} -n ${EDP_NAMESPACE} -o json | jq 'del(.data.username,.metadata.annotations,.metadata.creationTimestamp,.metadata.labels,.metadata.resourceVersion,.metadata.uid,.metadata.ownerReferences)' | jq '.metadata.namespace = \"argocd\"' | jq --arg name \"${EDP_NAMESPACE}\" '.metadata.name = $name' | jq --arg url \"${GERRIT_URL}\" '.data.url = $url' | jq '.data.sshPrivateKey = .data.id_rsa' | jq 'del(.data.id_rsa,.data.\"id_rsa.pub\")' | kubectl apply -f -\nkubectl label --overwrite secret ${EDP_NAMESPACE} -n argocd \"argocd.argoproj.io/secret-type=repo-creds\"\n

      Generate an SSH key pair and add a public key to GitLab or GitHub account.

      Warning

      Use an additional GitHub/GitLab User to access a repository. For example: - GitHub, add a User to a repository with a \"Read\" role. - GitLab, add a User to a repository with a \"Guest\" role.

      ssh-keygen -t ed25519 -C \"email@example.com\" -f argocd\n

      Copy SSH private key to Argo CD namespace

      EDP_NAMESPACE=<EDP_NAMESPACE>\nVCS_HOST=\"<github.com_or_gitlab.com>\"\nACCOUNT_NAME=\"<ACCOUNT_NAME>\"\nURL=\"ssh://git@${VCS_HOST}:22/${ACCOUNT_NAME}\"\n\nkubectl create secret generic ${EDP_NAMESPACE} -n argocd \\\n--from-file=sshPrivateKey=argocd \\\n--from-literal=url=\"${URL}\"\nkubectl label --overwrite secret ${EDP_NAMESPACE} -n argocd \"argocd.argoproj.io/secret-type=repo-creds\"\n

      Add public SSH key to GitHub/GitLab account.

    2. Add SSH Known hosts for Gerrit, GitHub, GitLab integration.

      GerritGitHub/GitLab

      Add Gerrit host to Argo CD config map with known hosts

      EDP_NAMESPACE=<EDP_NAMESPACE>\nKNOWN_HOSTS_FILE=\"/tmp/ssh_known_hosts\"\nARGOCD_KNOWN_HOSTS_NAME=\"argocd-ssh-known-hosts-cm\"\nGERRIT_PORT=$(kubectl get gerrit gerrit -n ${EDP_NAMESPACE} -o jsonpath='{.spec.sshPort}')\n\nrm -f ${KNOWN_HOSTS_FILE}\nkubectl get cm ${ARGOCD_KNOWN_HOSTS_NAME} -n argocd -o jsonpath='{.data.ssh_known_hosts}' > ${KNOWN_HOSTS_FILE}\nkubectl exec -it deployment/gerrit -n ${EDP_NAMESPACE} -- ssh-keyscan -p ${GERRIT_PORT} gerrit.${EDP_NAMESPACE} >> ${KNOWN_HOSTS_FILE}\nkubectl create configmap ${ARGOCD_KNOWN_HOSTS_NAME} -n argocd --from-file ${KNOWN_HOSTS_FILE} -o yaml --dry-run=client | kubectl apply -f -\n

      Add GitHub/GitLab host to Argo CD config map with known hosts

      EDP_NAMESPACE=<EPD_NAMESPACE>\nVCS_HOST=\"<VCS_HOST>\"\nKNOWN_HOSTS_FILE=\"/tmp/ssh_known_hosts\"\nARGOCD_KNOWN_HOSTS_NAME=\"argocd-ssh-known-hosts-cm\"\n\nrm -f ${KNOWN_HOSTS_FILE}\nkubectl get cm ${ARGOCD_KNOWN_HOSTS_NAME} -n argocd -o jsonpath='{.data.ssh_known_hosts}' > ${KNOWN_HOSTS_FILE}\nssh-keyscan ${VCS_HOST} >> ${KNOWN_HOSTS_FILE}\nkubectl create configmap ${ARGOCD_KNOWN_HOSTS_NAME} -n argocd --from-file ${KNOWN_HOSTS_FILE} -o yaml --dry-run=client | kubectl apply -f -\n
    3. Create an Argo CD Project (EDP Tenant), for example, with the team-foo name:

      AppProject
      apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\nname: team-foo\nnamespace: argocd\n# Finalizer that ensures that project is not deleted until it is not referenced by any application\nfinalizers:\n- resources-finalizer.argocd.argoproj.io\nspec:\ndescription: CD pipelines for team-foo\nroles:\n- name: developer\ndescription: Users for team-foo tenant\npolicies:\n- p, proj:team-foo:developer, applications, create, team-foo/*, allow\n- p, proj:team-foo:developer, applications, delete, team-foo/*, allow\n- p, proj:team-foo:developer, applications, get, team-foo/*, allow\n- p, proj:team-foo:developer, applications, override, team-foo/*, allow\n- p, proj:team-foo:developer, applications, sync, team-foo/*, allow\n- p, proj:team-foo:developer, applications, update, team-foo/*, allow\n- p, proj:team-foo:developer, repositories, create, team-foo/*, allow\n- p, proj:team-foo:developer, repositories, delete, team-foo/*, allow\n- p, proj:team-foo:developer, repositories, update, team-foo/*, allow\n- p, proj:team-foo:developer, repositories, get, team-foo/*, allow\ngroups:\n# Keycloak Group name\n- ArgoCD-team-foo-users\ndestinations:\n# ensure we can deploy to ns with tenant prefix\n- namespace: 'team-foo-*'\n# allow to deploy to specific server (local in our case)\nserver: https://kubernetes.default.svc\n# Deny all cluster-scoped resources from being created, except for Namespace\nclusterResourceWhitelist:\n- group: ''\nkind: Namespace\n# Allow all namespaced-scoped resources to be created, except for ResourceQuota, LimitRange, NetworkPolicy\nnamespaceResourceBlacklist:\n- group: ''\nkind: ResourceQuota\n- group: ''\nkind: LimitRange\n- group: ''\nkind: NetworkPolicy\n# we are ok to create any resources inside namespace\nnamespaceResourceWhitelist:\n- group: '*'\nkind: '*'\n# enable access only for specific git server. The example below 'team-foo' - it is namespace where EDP deployed\nsourceRepos:\n- ssh://argocd@gerrit.team-foo:30007/*\n# enable capability to deploy objects from namespaces\nsourceNamespaces:\n- team-foo\n
    4. Optional: if the Argo CD controller has not been enabled to manage the Application resources in the specific namespaces (team-foo, in our case) during the Install Argo CD procedure, modify the argocd-cmd-params-cm ConfigMap in the Argo CD namespace and add the application.namespaces parameter to the data subsection:

      argocd-cmd-params-cm
      ...\ndata:\napplication.namespaces: team-foo\n...\n
      values.yaml file
      ...\nconfigs:\nparams:\napplication.namespaces: team-foo\n...\n
    5. Check that your new Repository, Known Hosts, and AppProject are added to the Argo CD UI.

    Once Argo CD is successfully integrated, an EDP user can utilize Argo CD to deploy CD pipelines.

    "},{"location":"operator-guide/argocd-integration/#check-argo-cd-integration-optional","title":"Check Argo CD Integration (Optional)","text":"

    This section provides information on how to test the integration with Argo CD; following it is optional.

    1. Follow the Add Application instruction to deploy a test EDP application named demo, which should be stored in a private Gerrit repository:

      Example: Argo CD Application
      apiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\nname: demo\nspec:\nproject: team-foo\ndestination:\nnamespace: team-foo-demo\nserver: https://kubernetes.default.svc\nsource:\nhelm:\nparameters:\n- name: image.tag\nvalue: master-0.1.0-1\n- name: image.repository\nvalue: image-repo\npath: deploy-templates\nrepoURL: ssh://argocd@gerrit.team-foo:30007/demo.git\ntargetRevision: master\nsyncPolicy:\nsyncOptions:\n- CreateNamespace=true\nautomated:\nselfHeal: true\nprune: true\n
    2. Check that your new Application is added to the Argo CD UI under the team-foo Project scope.

    "},{"location":"operator-guide/argocd-integration/#related-articles","title":"Related Articles","text":"
    • Install Argo CD
    "},{"location":"operator-guide/aws-marketplace-install/","title":"Install via AWS Marketplace","text":"

    This documentation provides detailed instructions on how to install the EPAM Delivery Platform via the AWS Marketplace.

    To initiate the installation process, navigate to our dedicated AWS Marketplace page and commence the deployment of EPAM Delivery Platform.

    Disclaimer

    EDP is aligned with industry standards for storing and managing sensitive data, ensuring optimal security. However, the use of custom solutions introduces uncertainties, so the responsibility for the safety of your data rests entirely with the platform administrator.

    "},{"location":"operator-guide/aws-marketplace-install/#prerequisites","title":"Prerequisites","text":"

    Please familiarize yourself with the Prerequisites page before deploying the product. To perform a minimal installation, ensure that you meet the following requirements:

    • The domain name is available and associated with the ingress object in the cluster.
    • Cluster administrator access.
    • The Tekton resources are deployed.
    • Access to the cluster via Service Account token is available.
    "},{"location":"operator-guide/aws-marketplace-install/#deploy-epam-delivery-platform","title":"Deploy EPAM Delivery Platform","text":"

    To deploy the platform, follow the steps below:

    1. To apply the Tekton stack, deploy Tekton resources by executing the commands below:

       kubectl create ns tekton-pipelines\n kubectl create ns tekton-chains\n kubectl create ns tekton-pipelines-resolvers\n kubectl apply --filename https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml\n kubectl apply --filename https://storage.googleapis.com/tekton-releases/triggers/latest/interceptors.yaml\n kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml\n kubectl apply --filename https://storage.googleapis.com/tekton-releases/chains/latest/release.yaml\n

    2. Define the mandatory parameters you would like to use for installation using the following command:

       kubectl create ns edp\n helm install edp-install \\\n--namespace edp ./* \\\n--set global.dnsWildCard=example.com \\\n--set awsRegion=<AWS_REGION>\n
    3. (Optional) Provide a token to sign in to EDP Portal. Run the following command to create a Service Account with cluster admin permissions:

      kubectl create serviceaccount edp-admin -n edp\nkubectl create clusterrolebinding edp-cluster-admin --clusterrole=cluster-admin --serviceaccount=edp:edp-admin\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: edp-admin-token\n  namespace: edp\n  annotations:\n    kubernetes.io/service-account.name: edp-admin\ntype: kubernetes.io/service-account-token\nEOF\n
    4. (Optional) To get access to EDP Portal, run the port-forwarding command:

       kubectl port-forward service/edp-headlamp 59480:80 -n edp\n

    5. (Optional) To open EDP Portal, navigate to http://localhost:59480.

    6. (Optional) To get the admin token to sign in to EDP Portal, run:

      kubectl get secrets -o jsonpath=\"{.items[?(@.metadata.annotations['kubernetes\\.io/service-account\\.name']=='edp-admin')].data.token}\" -n edp|base64 --decode\n

    As a result, you will get access to EPAM Delivery Platform components via EDP Portal UI. Navigate to our Use Cases to try out EDP functionality. Visit other subsections of the Operator Guide to figure out how to configure EDP and integrate it with various tools.

    "},{"location":"operator-guide/aws-marketplace-install/#related-articles","title":"Related Articles","text":"
    • EPAM Delivery Platform on AWS Marketplace
    • Integrate GitHub/GitLab in Tekton
    • Set Up Kubernetes
    • Set Up OpenShift
    • EDP Installation Prerequisites Overview
    • Headlamp OIDC Integration
    "},{"location":"operator-guide/capsule/","title":"Capsule Integration","text":"

    This documentation guide provides comprehensive instructions for integrating Capsule with the EPAM Delivery Platform to enhance security and resource management.

    Note

     When integrating the EPAM Delivery Platform with Capsule, it's essential to understand that the platform needs administrative rights to create and manage resources. This requirement might raise security concerns, but it only pertains to the deployment process within the platform. As an alternative, you can manually create permissions for each deployment flow to address and lessen these security concerns; an illustrative sketch is provided below.
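     For instance, a minimal sketch of such a manually created permission, assuming the deployment namespace is team-a-dev and the platform acts through an edp-cd-pipeline-operator service account in the edp namespace (both names are hypothetical and must be adjusted to your installation):

      # Grant the platform service account admin rights in a single deployment namespace only\n# (the service account and namespace names below are hypothetical)\nkubectl create rolebinding edp-deployer \\\n  --clusterrole=admin \\\n  --serviceaccount=edp:edp-cd-pipeline-operator \\\n  -n team-a-dev\n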

    "},{"location":"operator-guide/capsule/#installation","title":"Installation","text":"

    To install the Capsule tool, use the Cluster Add-Ons approach. For more details, please refer to the Capsule official page.

    "},{"location":"operator-guide/capsule/#configuration","title":"Configuration","text":"

    To use Capsule in EDP, follow the steps below:

    1. Run the command below to upgrade EDP with Capsule capabilities:

      helm upgrade --install edp epamedp/edp-install -n edp --values values.yaml --set cd-pipeline-operator.tenancyEngine=capsule\n
    2. Open the CapsuleConfiguration custom resource called default:

      kubectl edit CapsuleConfiguration default\n

      Add the tenant name (by default, it's the EDP namespace name) to the manifest's spec section as follows:

      spec:\nuserGroups:\n- system:serviceaccounts:edp\n

    As a result, EDP will use Capsule capabilities to manage tenants, thus providing better access management.

    "},{"location":"operator-guide/capsule/#related-articles","title":"Related Articles","text":"
    • Install EDP With Values File
    • Cluster Add-Ons Overview
    • Set Up Kiosk
    • EDP Kiosk Usage
    "},{"location":"operator-guide/configure-keycloak-oidc-eks/","title":"EKS OIDC With Keycloak","text":"

    This article provides instructions for configuring Keycloak as an OIDC Identity Provider for EKS. The example is written in Terraform (HCL).

    "},{"location":"operator-guide/configure-keycloak-oidc-eks/#prerequisites","title":"Prerequisites","text":"

    To follow the instructions, ensure that the following prerequisites are met:

    1. terraform 0.14.10
    2. hashicorp/aws = 4.8.0
    3. mrparkers/keycloak >= 3.0.0
    4. hashicorp/kubernetes ~> 2.9.0
    5. kubectl = 1.22
    6. kubelogin >= v1.25.1
    7. Ensure that Keycloak is reachable from AWS (i.e. it is not located in a private network).

    Note

     To connect OIDC with a cluster, install and configure the kubelogin plugin. On Windows, it is recommended to download kubelogin as a binary and add it to your PATH.
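     For reference, kubelogin (the oidc-login kubectl plugin used in the kubeconfig below) is commonly installed via krew or Homebrew; verify the exact command for your platform in the plugin documentation:

      # Install the kubelogin (oidc-login) plugin via krew\nkubectl krew install oidc-login\n# or via Homebrew on macOS/Linux\nbrew install int128/kubelogin/kubelogin\n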

    "},{"location":"operator-guide/configure-keycloak-oidc-eks/#solution-overview","title":"Solution Overview","text":"

     The solution includes three types of resources: AWS (EKS), Keycloak, and Kubernetes. The Keycloak resources shown on the left of the diagram remain unchanged after creation, which allows associating a claim with a user's group membership. Other resources can be created, deleted, or changed as needed. The most crucial Kubernetes resources for permissions are RoleBindings and ClusterRoles/Roles: Roles represent a set of permissions, while RoleBindings map a Kubernetes Role to the corresponding Keycloak group, so a group member gets exactly the appropriate permissions.

    EKS Keycloak OIDC

    "},{"location":"operator-guide/configure-keycloak-oidc-eks/#keycloak-configuration","title":"Keycloak Configuration","text":"

    To configure Keycloak, follow the steps described below.

    • Create a client:
    resource \"keycloak_openid_client\" \"openid_client\" {\nrealm_id                                  = \"openshift\"\nclient_id                                 = \"kubernetes\"\naccess_type                               = \"CONFIDENTIAL\"\nstandard_flow_enabled                     = true\nimplicit_flow_enabled                     = false\ndirect_access_grants_enabled              = true\nservice_accounts_enabled                  = true\noauth2_device_authorization_grant_enabled = true\nbackchannel_logout_session_required       = true\n\nroot_url    = \"http://localhost:8000/\"\nbase_url    = \"http://localhost:8000/\"\nadmin_url   = \"http://localhost:8000/\"\nweb_origins = [\"*\"]\n\nvalid_redirect_uris = [\n\"http://localhost:8000/*\"\n]\n}\n
    • Create the client scope:
    resource \"keycloak_openid_client_scope\" \"openid_client_scope\" {\nrealm_id               = <realm_id>\nname                   = \"groups\"\ndescription            = \"When requested, this scope will map a user's group memberships to a claim\"\ninclude_in_token_scope = true\nconsent_screen_text    = false\n}\n
    • Add scope to the client by selecting all default client scope:
    resource \"keycloak_openid_client_default_scopes\" \"client_default_scopes\" {\nrealm_id  = <realm_id>\nclient_id = keycloak_openid_client.openid_client.id\n\ndefault_scopes = [\n\"profile\",\n\"email\",\n\"roles\",\n\"web-origins\",\nkeycloak_openid_client_scope.openid_client_scope.name,\n]\n}\n
    • Add the following mapper to the client scope:
    resource \"keycloak_openid_group_membership_protocol_mapper\" \"group_membership_mapper\" {\nrealm_id            = <realm_id>\nclient_scope_id     = keycloak_openid_client_scope.openid_client_scope.id\nname                = \"group-membership-mapper\"\nadd_to_id_token     = true\nadd_to_access_token = true\nadd_to_userinfo     = true\nfull_path           = false\n\nclaim_name = \"groups\"\n}\n
    • As a result, the authorization token will contain the groups claim with the list of the user's group memberships in the realm:
      ...\n\"email_verified\": false,\n\"name\": \"An User\",\n\"groups\": [\n\"<env_prefix_name>-oidc-viewers\",\n\"<env_prefix_name>-oidc-cluster-admins\"\n],\n\"preferred_username\": \"an_user@example.com\",\n\"given_name\": \"An\",\n\"family_name\": \"User\",\n\"email\": \"an_user@example.com\"\n...\n
    • Create a group or groups, e.g. an admin group:
    resource \"keycloak_group\" \"oidc_tenant_admin\" {\nrealm_id = <realm_id>\nname     = \"kubernetes-oidc-admins\"\n}\n
    "},{"location":"operator-guide/configure-keycloak-oidc-eks/#eks-configuration","title":"EKS Configuration","text":"

     To configure EKS, follow the steps described below. In the AWS Console, open the EKS home page -> Choose a cluster -> Configuration tab -> Authentication tab.

    The Terraform code for association with Keycloak:

    • terraform.tfvars
      ...\ncluster_identity_providers = {\nkeycloak = {\nclient_id                     = <keycloak_client_id>\nidentity_provider_config_name = \"Keycloak\"\nissuer_url                    = \"https://<keycloak_url>/auth/realms/<realm_name>\"\ngroups_claim                  = \"groups\"\n}\n...\n
    • the resource code
      resource \"aws_eks_identity_provider_config\" \"keycloak\" {\nfor_each = { for k, v in var.cluster_identity_providers : k => v if true }\n\ncluster_name = var.platform_name\n\noidc {\nclient_id                     = each.value.client_id\ngroups_claim                  = lookup(each.value, \"groups_claim\", null)\ngroups_prefix                 = lookup(each.value, \"groups_prefix\", null)\nidentity_provider_config_name = try(each.value.identity_provider_config_name, each.key)\nissuer_url                    = each.value.issuer_url\nrequired_claims               = lookup(each.value, \"required_claims\", null)\nusername_claim                = lookup(each.value, \"username_claim\", null)\nusername_prefix               = lookup(each.value, \"username_prefix\", null)\n}\n\ntags = var.tags\n}\n

    Note

     The resource creation takes around 20-30 minutes. The resource doesn't support updating, so each change will lead to the deletion of the old instance and the creation of a new one instead.

    "},{"location":"operator-guide/configure-keycloak-oidc-eks/#kubernetes-configuration","title":"Kubernetes Configuration","text":"

    To connect the created Keycloak resources with permissions, it is necessary to create Kubernetes Roles and RoleBindings:

    • ClusterRole
      resource \"kubernetes_cluster_role_v1\" \"oidc_tenant_admin\" {\nmetadata {\nname = \"oidc-admin\"\n}\nrule {\napi_groups = [\"*\"]\nresources  = [\"*\"]\nverbs      = [\"*\"]\n}\n}\n
    • ClusterRoleBinding
      resource \"kubernetes_cluster_role_binding_v1\" \"oidc_cluster_rb\" {\nmetadata {\nname = \"oidc-cluster-admin\"\n}\nrole_ref {\napi_group = \"rbac.authorization.k8s.io\"\nkind      = \"ClusterRole\"\nname      = kubernetes_cluster_role_v1.oidc_tenant_admin.metadata[0].name\n}\nsubject {\nkind      = \"Group\"\nname      = keycloak_group.oidc_tenant_admin.name\napi_group = \"rbac.authorization.k8s.io\"\n    # work-around due https://github.com/hashicorp/terraform-provider-kubernetes/issues/710\nnamespace = \"\"\n}\n}\n

    Note

     When creating the Keycloak group, ClusterRole, and ClusterRoleBinding, a user receives cluster admin permissions. There is also an option to grant admin permissions only within a particular namespace, or to a different resource set in another namespace, as sketched below. For details, please refer to the Mixing Kubernetes Roles page.
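     For example, a sketch of the namespace-scoped alternative expressed as plain Kubernetes manifests, equivalent to the Terraform resources above; the namespace and group names are illustrative:

      kubectl apply -f - <<'EOF'\n# Bind the Keycloak group to the built-in namespaced admin ClusterRole in a single namespace only\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: oidc-namespace-admin\n  namespace: team-foo                  # access is limited to this namespace\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: admin                          # built-in role with full access inside a namespace\nsubjects:\n  - kind: Group\n    name: kubernetes-oidc-admins       # Keycloak group delivered in the groups claim\n    apiGroup: rbac.authorization.k8s.io\nEOF\n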

    "},{"location":"operator-guide/configure-keycloak-oidc-eks/#kubeconfig","title":"Kubeconfig","text":"

    Template for kubeconfig:

    apiVersion: v1\npreferences: {}\nkind: Config\n\nclusters:\n- cluster:\nserver: https://<eks_url>.eks.amazonaws.com\ncertificate-authority-data: <certificate_authtority_data>\nname: <cluster_name>\n\ncontexts:\n- context:\ncluster: <cluster_name>\nuser: <keycloak_user_email>\nname: <cluster_name>\n\ncurrent-context: <cluster_name>\n\nusers:\n- name: <keycloak_user_email>\nuser:\nexec:\napiVersion: client.authentication.k8s.io/v1beta1\ncommand: kubectl\nargs:\n- oidc-login\n- get-token\n- -v1\n- --oidc-issuer-url=https://<keycloak_url>/auth/realms/<realm>\n- --oidc-client-id=<keycloak_client_id>\n- --oidc-client-secret=<keycloak_client_secret>\n
    The -v1 flag can be used for debugging; in most cases it is not needed and can be removed.

    To find the client secret:

    1. Open Keycloak
    2. Choose realm
    3. Find keycloak_client_id that was previously created
    4. Open Credentials tab
    5. Copy Secret
    "},{"location":"operator-guide/configure-keycloak-oidc-eks/#testing","title":"Testing","text":"

    Before testing, ensure that a user is a member of the correct Keycloak group. To add a user to a Keycloak group:

    1. Open Keycloak
    2. Choose realm
    3. Open user screen with search field
    4. Find a user and open the configuration
    5. Open Groups tab
    6. In Available Groups, choose an appropriate group
    7. Click the Join button
    8. The group should appear in the Group Membership list

    Follow the steps below to test the configuration:

    • Run a kubectl command; it is important to specify the correct kubeconfig:
      KUBECONFIG=<path_to_oidc_kubeconfig> kubectl get ingresses -n <namespace_name>\n
    • On the first run, you will be redirected to the Keycloak login page; log in using credentials (login:password) or an SSO provider. In case of a successful login, you will receive the following notification, which can be closed:

    OIDC Successful Login

    • As a result, if the user is configured correctly and is a member of the correct group with the appropriate Roles/RoleBindings, the corresponding response from Kubernetes will appear in the console.
    • If something is not set up correctly, the following error will be displayed:
      Error from server (Forbidden): ingresses.networking.k8s.io is forbidden:\nUser \"https://<keycloak_url>/auth/realms/<realm>#<keycloak_user_id>\"\ncannot list resource \"ingresses\" in API group \"networking.k8s.io\" in the namespace \"<namespace_name>\"\n
    "},{"location":"operator-guide/configure-keycloak-oidc-eks/#session-update","title":"Session Update","text":"

    To update the session, clear the cache. The default location of the login cache:

    rm -rf ~/.kube/cache\n
    "},{"location":"operator-guide/configure-keycloak-oidc-eks/#access-cluster-via-lens","title":"Access Cluster via Lens","text":"

    To access the Kubernetes cluster via Lens, follow the steps below to configure it:

    • Add a new kubeconfig to the location where Lens has access. The default location of the kubeconfig is ~/.kube/config but it can be changed by navigating to File -> Preferences -> Kubernetes -> Kubeconfig Syncs;
    • (Optional) On Windows, it is recommended to reboot the system after adding a new kubeconfig.
    • Authenticate on the Keycloak login page to be able to access the cluster;

    Note

     Lens does not add the project namespaces automatically, so it is necessary to add them manually: go to Settings -> Namespaces and add the namespaces of the project.

    "},{"location":"operator-guide/configure-keycloak-oidc-eks/#related-articles","title":"Related Articles","text":"
    • Headlamp OIDC Configuration
    "},{"location":"operator-guide/container-registry-harbor-integration-tekton-ci/","title":"Integrate Harbor With EDP Pipelines","text":"

    Harbor serves as a tool for storing images and artifacts. This documentation contains instructions on how to create a project in Harbor and set up a robot account for interacting with the registry from CI pipelines.

    "},{"location":"operator-guide/container-registry-harbor-integration-tekton-ci/#overview","title":"Overview","text":"

    Harbor integration with Tekton enables the centralized storage of container images within the cluster, eliminating the need for external services. By leveraging Harbor as the container registry, users can manage and store their automation results and reports in one place.

    "},{"location":"operator-guide/container-registry-harbor-integration-tekton-ci/#integration-procedure","title":"Integration Procedure","text":"

    The integration process involves two steps:

    1. Creating a project to store application images.

    2. Creating two accounts with different permissions to push (read/write) and pull (read-only) project images.

    "},{"location":"operator-guide/container-registry-harbor-integration-tekton-ci/#create-new-project","title":"Create New Project","text":"

    The process of creating new projects is the following:

    1. Log in to the Harbor console using your credentials.
    2. Navigate to the Projects menu, click the New Project button:

      Projects menu

    3. On the New Project menu, enter a project name that matches your EDP namespace in the Project Name field. Keep other fields as default and click OK to continue:

      New Project menu

    "},{"location":"operator-guide/container-registry-harbor-integration-tekton-ci/#set-up-robot-account","title":"Set Up Robot Account","text":"

    To make EDP and Harbor project interact with each other, set up a robot account:

    1. Navigate to your newly created project, select Robot Accounts menu and choose New Robot Account:

      Create Robot Account menu

    2. In the pop-up window, fill in the fields as follows:

      • Name - edp-push;
      • Expiration time - set the value which is aligned with your organization policy;
      • Description - read/write permissions;
      • Permissions - Pull Repository and Push Repository.

      To proceed, click the ADD button:

      Robot Accounts menu

    3. In the appeared window, copy the robot account credentials or click the Export to file button to save the secret and account name locally:

      New credentials for Robot Account

    4. Provision the kaniko-docker-config secrets using kubectl, EDP Portal or with the externalSecrets operator:

      Example

      The auth string can be generated by this command:

      echo -n \"robot\\$edp-project+edp:secret\" | base64\n

      kubectlManual SecretExternal Secrets Operator
        apiVersion: v1\nkind: Secret\nmetadata:\nname: kaniko-docker-config\nnamespace: edp\nlabels:\napp.edp.epam.com/secret-type: registry\ntype: kubernetes.io/dockerconfigjson\nstringData:\n.dockerconfigjson: |\n{\n\"auths\" : {\n\"harbor-registry.com\":\n{\n\"username\":\"registry-username\",\n\"password\":\"registry-password\",\n\"auth\": \"secret-string\"\n}\n}\n}\n

      Navigate to EDP Portal UI -> EDP -> Configuration -> Registry. Fill in the required fields and click Save.

      Registry update manual secret

      \"kaniko-docker-config\":\n{\"auths\" : \"harbor-registry.com\":\n{\n\"username\":\"registry-username\",\n\"password\":\"registry-password\",\n\"auth\": \"secret-string\"\n}\n}\n

      Navigate to EDP Portal UI -> EDP -> Configuration -> Registry. Here, you will observe the Managed by ExternalSecret message:

      Registry managed by external secret operator

      Note

      More details of External Secrets Operator Integration can be found in the External Secrets Operator Integration page.

    5. Repeat steps 2-3 with values below:

      • Name - edp-pull;
      • Expiration time - set the value which is aligned with your organization policy;
      • Description - read-only permissions;
      • Permissions - Pull Repository.
    6. Provision the regcred secrets using kubectl, EDP Portal or with the externalSecrets operator:

      Example

      The auth string can be generated by this command:

      echo -n \"robot\\$edp-project+edp-push:secret\" | base64\n

      kubectlManual SecretExternal Secrets Operator
      apiVersion: v1\nkind: Secret\nmetadata:\nname: regcred\nnamespace: edp\nlabels:\napp.edp.epam.com/secret-type: registry\ntype: kubernetes.io/dockerconfigjson\nstringData:\n.dockerconfigjson: |\n{\n\"auths\" : {\n\"harbor-registry.com\":\n{\n\"username\":\"registry-username\",\n\"password\":\"registry-password\",\n\"auth\": \"secret-string\"\n}\n}\n}\n

      Navigate to EDP Portal UI -> EDP -> Configuration -> Registry. Fill in the required fields and click Save.

      Registry update manual secret

      \"regcred\":\n{\"auths\" : \"harbor-registry.com\":\n{\n\"username\":\"registry-username\",\n\"password\":\"registry-password\",\n\"auth\": \"secret-string\"\n}\n}\n

      Navigate to EDP Portal UI -> EDP -> Configuration -> Registry. Here, you will observe the Managed by ExternalSecret message:

      Registry managed by external secret operator

      Note

      More details of External Secrets Operator Integration can be found in the External Secrets Operator Integration page.

    7. In the values.yaml file for the edp-install Helm chart, set the following values for the specified fields:

      Manual SecretExternal Secrets Operator

      If the kaniko-docker-config secret has been created manually:

      values.yaml
      ...\nkaniko:\nexistingDockerConfig: \"kaniko-docker-config\"\nglobal:\ndockerRegistry:\nurl: harbor-registry.com\ntype: \"harbor\"\n...\n

      If the kaniko-docker-config secret has been created via External Secrets Operator:

      values.yaml
      ...\nkaniko:\nexistingDockerConfig: \"kaniko-docker-config\"\nexternalSecrets:\nenabled: true\nglobal:\ndockerRegistry:\nurl: harbor-registry.com\ntype: \"harbor\"\n...\n
    8. (Optional) If you've already deployed the EDP Helm chart, you can update it using the following command:

      helm upgrade --install edp epamedp/edp-install \\\n--values values.yaml \\\n--namespace edp\n

    As a result, application images built in EDP Portal will be stored in the Harbor project and deployed from the Harbor registry.

    Harbor projects can be added and retained with a retention policy generated through the EDP script in edp-cluster-add-ons.

    "},{"location":"operator-guide/container-registry-harbor-integration-tekton-ci/#related-articles","title":"Related Articles","text":"
    • Install EDP
    • Install Harbor
    • Adjust Jira Integration
    • Custom SonarQube Integration
    "},{"location":"operator-guide/delete-edp/","title":"Uninstall EDP","text":"

    This tutorial provides detailed instructions on the optimal method to uninstall the EPAM Delivery Platform.

    "},{"location":"operator-guide/delete-edp/#deletion-procedure","title":"Deletion Procedure","text":"

    To uninstall EDP, perform the following steps:

    1. It is highly recommended to delete all the resources created via the EDP Portal UI first. These can be:

      • Applications;
      • Libraries;
      • Autotests;
      • Infrastructures;
      • CD Pipelines.

      We recommend deleting them via the EDP Portal UI, although it is also possible to delete all the EDP Portal resources using the kubectl delete command (see the sketch after this list).

    2. Delete the application namespaces. They should be named according to the edp-<cd-pipeline>-<stage-name> pattern.

    3. Uninstall EDP the same way it was installed.

    4. Run the script that deletes the rest of the custom resources:

      View: CleanEDP.sh
      #!/bin/sh\n\n###################################################################\n# A POSIX script to remove EDP Kubernetes Custom Resources        #\n#                                                                 #\n# PREREQUISITES                                                   #\n#     kubectl>=1.23.x, awscli (for EKS authentication)            #\n#                                                                 #\n# TESTED                                                          #\n#     OS: Ubuntu, FreeBSD, Windows (GitBash)                      #\n#     Shells: zsh, bash, dash                                     #\n###################################################################\n\n[ -n \"${DEBUG}\" ] && set -x\n\nset -e\n\nexit_err() {\nprintf '%s\\n' \"$1\" >&2\nexit 1\n}\n\ncheck_kubectl() {\nif ! hash kubectl; then\nexit_err \"Error: kubectl is not installed\"\nfi\n}\n\nget_script_help() {\nself_name=\"$(basename \"$0\")\"\necho \"\\\n${self_name} deletes EDP Kubernetes Custom Resources\n\nUsage: ${self_name}\n\nOptions:\n${self_name} [OPTION] [FILE]\n\n-h, --help          Print Help\n-k, --kubeconfig    Pass Kubeconfig file\n\nDebug:\nDEBUG=true ${self_name}\n\nExamples:\n${self_name} --kubeconfig ~/.kube/custom_config\"\n}\n\nyellow_fg() {\ntput setaf 3 || true\n}\n\nno_color_out() {\ntput sgr0 || true\n}\n\nget_current_context() {\nkubectl config current-context\n}\n\nget_context_ns() {\nkubectl config view \\\n--minify --output jsonpath='{..namespace}' 2> /dev/null\n}\n\nget_ns() {\nkubectl get ns \"${edp_ns}\" --output name --request-timeout='5s'\n}\n\ndelete_ns() {\nkubectl delete ns \"${edp_ns}\" --timeout='30s'\n}\n\nget_edp_crds() {\nkubectl get crds --no-headers=true | awk '/edp.epam.com/ {print $1}'\n}\n\nget_all_edp_crs_manif() {\nkubectl get \"${edp_crds_comma_list}\" -n \"${edp_ns}\" \\\n--output yaml --ignore-not-found --request-timeout='15s'\n}\n\ndel_all_edp_crs() {\nkubectl delete --all \"${edp_crds_comma_list}\" -n \"${edp_ns}\" \\\n--ignore-not-found --timeout='15s'\n}\n\niterate_edp_crs() {\nedp_crds_comma_list=\"$(printf '%s' \"${edp_crds}\" | tr -s '\\n' ',')\"\nget_all_edp_crs_manif \\\n| sed '/finalizers:/,/.*:/{//!d;}' \\\n| kubectl replace -f - || true\ndel_all_edp_crs || true\n}\n\niterate_edp_crds() {\nn=0\nwhile [ \"$n\" -lt 2 ]; do\nn=$((n + 1))\n\nif [ \"$n\" -eq 2 ]; then\n# Delete remaining resources\nedp_crds=\"keycloakclients,codebasebranches,jenkinsfolders\"\niterate_edp_crs\necho \"EDP Custom Resources in NS ${color_ns} have been deleted.\"\nbreak\nfi\n\necho \"Replacing EDP CR Manifests. Wait for output (may take 2min)...\"\nedp_crds=\"$(get_edp_crds)\"\niterate_edp_crs\ndone\n}\n\nselect_ns() {\nis_context=\"$(get_current_context)\" || exit 1\nprintf '%s' \"Current cluster: \"\nprintf '%s\\n' \"$(yellow_fg)${is_context}$(no_color_out)\"\n\ncurrent_ns=\"$(get_context_ns)\" || true\n\nprintf '%s\\n' \"Enter EDP namespace\"\nprintf '%s' \"Skip to use [$(yellow_fg)${current_ns}$(no_color_out)]: \"\nread -r edp_ns\n\nif [ -z \"${edp_ns}\" ]; then\nedp_ns=\"${current_ns}\"\necho \"${edp_ns}\"\nif [ -z \"${edp_ns}\" ]; then\nexit_err \"Error: namespace is not specified\"\nfi\nelse\nget_ns || exit 1\nfi\n\ncolor_ns=\"$(yellow_fg)${edp_ns}$(no_color_out)\"\n}\n\nchoose_delete_ns() {\nprintf '%s\\n' \"Do you want to delete namespace ${color_ns} as well? 
(y/n)?\"\nprintf '%s' \"Skip or enter [N/n] to keep the namespace: \"\nread -r answer\nif [ \"${answer}\" != \"${answer#[Yy]}\" ]; then\ndelete_edp_ns=true\necho \"Namespace ${color_ns} is marked for deletion.\"\nelse\necho \"Skipped. Deleting EDP Custom Resources only.\"\nfi\n}\n\ndelete_ns_if_true() {\nif [ \"${delete_edp_ns}\" = true ]; then\necho \"Deleting ${color_ns} namespace...\"\ndelete_ns || exit 1\nfi\n}\n\ninvalid_option() {\nexit_err \"Invalid option '$1'. Use -h, --help for details\"\n}\n\nmain_func() {\ncheck_kubectl\nselect_ns\nchoose_delete_ns\niterate_edp_crds\ndelete_ns_if_true\n}\n\nwhile [ \"$#\" -gt 0 ]; do\ncase \"$1\" in\n-h | --help)\nget_script_help\nexit 0\n;;\n-k | --kubeconfig)\nshift\n[ $# = 0 ] && exit_err \"No Kubeconfig file specified\"\nexport KUBECONFIG=\"$1\"\n;;\n--)\nbreak\n;;\n-k* | --k*)\necho \"Did you mean '--kubeconfig'?\"\ninvalid_option \"$1\"\n;;\n-* | *)\ninvalid_option \"$1\"\n;;\nesac\nshift\ndone\n\nmain_func\n

    The script prompts the user to specify the namespace where EDP was deployed and to choose whether that namespace should be deleted as well. It then deletes the EDP custom resources in the specified namespace (see the verification sketch after these steps).

    5. In Keycloak, delete the edp-main realm, and also delete the client named according to the edp-main pattern in the openshift realm.
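    The CleanEDP.sh script above filters custom resource definitions by the edp.epam.com group. As a quick verification sketch (not part of the official procedure), the same filter can be reused to confirm that nothing is left after the cleanup; the edp namespace below is an assumption and should be replaced with the namespace used during installation:

      # List EDP-related CRDs that are still registered in the cluster
      kubectl get crds --no-headers=true | awk '/edp.epam.com/ {print $1}'

      # Check that no EDP custom resources remain in the (assumed) "edp" namespace
      kubectl api-resources --api-group=edp.epam.com --verbs=list -o name \
        | xargs -r -n 1 kubectl get -n edp --ignore-not-found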

    "},{"location":"operator-guide/delete-edp/#related-articles","title":"Related Articles","text":"
    • Install EDP
    • Install EDP via Helmfile
    • Keycloak Integration
    "},{"location":"operator-guide/delete-jenkins-job-provision/","title":"Delete Jenkins Job Provision","text":"

    To delete the job provisioner, take the following steps:

    1. Delete the job provisioner from Jenkins. Navigate to the Admin Console -> Jenkins -> jobs -> job-provisions folder, select the necessary provisioner, and click the drop-down to the right of the provisioner name. Select Delete project.

      Delete job provisioner

    "},{"location":"operator-guide/dependency-track/","title":"Install DependencyTrack","text":"

    This documentation guide provides comprehensive instructions for installing and integrating DependencyTrack with the EPAM Delivery Platform.

    "},{"location":"operator-guide/dependency-track/#prerequisites","title":"Prerequisites","text":"
    • Kubectl version 1.26.0 is installed.
    • Helm version 3.12.0+ is installed.
    "},{"location":"operator-guide/dependency-track/#installation","title":"Installation","text":"

    To install DependencyTrack, use the EDP Cluster Add-Ons approach.

    "},{"location":"operator-guide/dependency-track/#configuration","title":"Configuration","text":"
    1. Open Administration -> Access Management -> Teams. Click Create Team, enter the Automation name, and click Create.

    2. Click + in Permissions and add:

      BOM_UPLOAD\nPROJECT_CREATION_UPLOAD\nVIEW_PORTFOLIO\n
    3. Click + in API keys to create a token:

    DependencyTrack settings

    4. Provision secrets using a manifest, the EDP Portal, or the External Secrets Operator:
    Manifest | EDP Portal UI | External Secrets Operator
    apiVersion: v1\nkind: Secret\nmetadata:\nname: ci-dependency-track\nnamespace: <edp>\nlabels:\napp.edp.epam.com/secret-type: dependency-track\nstringData:\ntoken: <dependency-track-token>\nurl: <dependency-track-api-url>\ntype: Opaque\n

    Go to the EDP Portal UI -> EDP -> Configuration -> DependencyTrack. Fill in the Token and URL fields and click the Save button.

    DependencyTrack update manual secret

    Store the DependencyTrack URL and Token in the AWS Parameter Store in the following format:

    \"ci-dependency-track\":\n{\n\"token\": \"XXXXXXXXXXXX\",\n\"url\": \"https://dependency-track.example.com\"\n}\n

    Go to the EDP Portal UI -> EDP -> Configuration -> DependencyTrack to see that the secret is Managed by External Secret.

    DependencyTrack managed by external secret operator

    More details on the External Secrets Operator integration can be found on the External Secrets Operator Integration page.
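    For illustration, the Parameter Store entry shown above could be created with the AWS CLI. This is only a sketch: the parameter name /edp/ci-dependency-track and the region are assumptions and must match the SecretStore configured for the External Secrets Operator in your installation.

      # Sketch: store the DependencyTrack credentials in the AWS Parameter Store
      aws ssm put-parameter \
        --name "/edp/ci-dependency-track" \
        --type "SecureString" \
        --region "eu-central-1" \
        --value '{"ci-dependency-track":{"token":"XXXXXXXXXXXX","url":"https://dependency-track.example.com"}}'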

    After following the instructions provided, you should be able to integrate your DependencyTrack with the EPAM Delivery Platform.

    "},{"location":"operator-guide/dependency-track/#related-articles","title":"Related Articles","text":"
    • Install External Secrets Operator
    • External Secrets Operator Integration
    • Cluster Add-Ons Overview
    "},{"location":"operator-guide/deploy-aws-eks/","title":"Deploy AWS EKS Cluster","text":"

    This instruction provides detailed information on the Amazon Elastic Kubernetes Service cluster deployment and contains the additional setup necessary for the managed infrastructure.

    "},{"location":"operator-guide/deploy-aws-eks/#prerequisites","title":"Prerequisites","text":"

    Before the EKS cluster deployment and configuration, make sure to check the prerequisites.

    "},{"location":"operator-guide/deploy-aws-eks/#required-tools","title":"Required Tools","text":"

    Install the required tools listed below:

    • Git
    • tfenv
    • AWS CLI
    • kubectl
    • helm
    • lens (optional)

    To check the correct tools installation, run the following commands:

    $ git --version\n$ tfenv --version\n$ aws --version\n$ kubectl version\n$ helm version\n
    "},{"location":"operator-guide/deploy-aws-eks/#aws-account-and-iam-roles","title":"AWS Account and IAM Roles","text":"
    • Make sure the AWS account is active.
    • Create the AWS IAM role EKSDeployerRole to deploy the EKS cluster on the project side. The provided resources allow cross-account deployment by assuming the created EKSDeployerRole from the root AWS account (see the sketch after these steps). Take the following steps:

      1. Clone the edp-terraform-aws-platform.git repository with the iam-deployer project, and rename it according to the project name.

        clone project

        $ git clone https://github.com/epmd-edp/edp-terraform-aws-platform.git\n$ mv edp-terraform-aws-platform edp-terraform-aws-platform-<PROJECT_NAME>\n$ cd edp-terraform-aws-platform-<PROJECT_NAME>/iam-deployer\n

        where:

        • <PROJECT_NAME> - is a project name or a unique platform identifier, for example, shared or test-eks.
      2. Fill in the input variables for the Terraform run in the iam-deployer/terraform.tfvars file. Use iam-deployer/template.tfvars as an example. Please find the detailed description of the variables in the iam-deployer/variables.tf file.

        terraform.tfvars file example

        aws_profile = \"aws_user\"\n\nregion = \"eu-central-1\"\n\ntags = {\n\"SysName\"      = \"EKS\"\n\"SysOwner\"     = \"owner@example.com\"\n\"Environment\"  = \"EKS-TEST-CLUSTER\"\n\"CostCenter\"   = \"0000\"\n\"BusinessUnit\" = \"BU\"\n\"Department\"   = \"DEPARTMENT\"\n}\n
      3. Initialize the backend and apply the changes by running the terraform init and terraform apply commands.

        apply the changes

        $ terraform init\n$ terraform apply\n...\nDo you want to perform these actions?\nTerraform will perform the actions described above.\nOnly 'yes' will be accepted to approve.\n\nEnter a value: yes\n\naws_iam_role.deployer: Creating...\naws_iam_role.deployer: Creation complete after 4s [id=EKSDeployerRole]\n\nApply complete! Resources: 1 added, 0 changed, 0 destroyed.\n\nOutputs:\n\ndeployer_iam_role_arn = \"arn:aws:iam::012345678910:role/EKSDeployerRole\"\ndeployer_iam_role_id = \"EKSDeployerRole\"\ndeployer_iam_role_name = \"EKSDeployerRole\"\n
      4. Commit the local state. At this run, Terraform uses the local backend to store the state on the local filesystem, locks that state using system APIs, and performs operations locally. It is not mandatory to store the resulting state file in Git, but this option can be used since the file data is not sensitive. Optionally, commit the state of the iam-deployer project.

        $ git add iam-deployer/terraform.tfstate iam-deployer/terraform.tfvars\n$ git commit -m \"Terraform state for IAM deployer role\"\n
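      Once the role exists, cross-account deployment works by assuming EKSDeployerRole from the root AWS account, as mentioned above. The command below is only a sketch of such a check; the account ID and session name are placeholders taken from the example output above.

        # Sketch: verify that the deployer role can be assumed from the root account
        aws sts assume-role \
          --role-arn "arn:aws:iam::012345678910:role/EKSDeployerRole" \
          --role-session-name "eks-deployer-check" \
          --query 'Credentials.Expiration'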
        • Create the AWS IAM role: ServiceRoleForEKSWorkerNode to connect to the EKS cluster. Take the following steps:

          1. Use the local state file or the AWS S3 bucket for saving the state file. The AWS S3 bucket creation is described in the Terraform Backend section.

          2. Go to the folder with the iam-workernode role in the edp-terraform-aws-platform.git repository renamed earlier according to the project name.

            go to the iam-workernode folder

            $ cd edp-terraform-aws-platform-<PROJECT_NAME>/iam-workernode\n

            where:

            • <PROJECT_NAME> - is a project name or a unique platform identifier, for example, shared or test-eks.
          3. Fill in the input variables for the Terraform run in the iam-workernode/terraform.tfvars file, using iam-workernode/template.tfvars as an example. Please find the detailed description of the variables in the iam-workernode/variables.tf file.

            terraform.tfvars file example

            role_arn = \"arn:aws:iam::012345678910:role/EKSDeployerRole\"\n\nplatform_name = \"<PROJECT_NAME>\"\n\niam_permissions_boundary_policy_arn = \"arn:aws:iam::012345678910:policy/some_role_boundary\"\n\nregion = \"eu-central-1\"\n\ntags = {\n\"SysName\"      = \"EKS\"\n\"SysOwner\"     = \"owner@example.com\"\n\"Environment\"  = \"EKS-TEST-CLUSTER\"\n\"CostCenter\"   = \"0000\"\n\"BusinessUnit\" = \"BU\"\n\"Department\"   = \"DEPARTMENT\"\n}\n
          4. Initialize the backend and apply the changes by running the terraform init and terraform apply commands.

            apply the changes

            $ terraform init\n$ terraform apply\n...\nDo you want to perform these actions?\nTerraform will perform the actions described above.\nOnly 'yes' will be accepted to approve.\n\nEnter a value: yes\n
            • Create the AWS IAM role: ServiceRoleForEKSShared for the EKS cluster. Take the following steps:

              1. Create the AWS IAM role: ServiceRoleForEKSShared

              2. Attach the following policies: AmazonEKSClusterPolicy and AmazonEKSServicePolicy
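              These two steps can also be performed with the AWS CLI. The snippet below is a sketch only and assumes that the standard eks.amazonaws.com trust policy is acceptable for your setup.

                # Sketch: create the EKS cluster role and attach the managed policies
                aws iam create-role \
                  --role-name ServiceRoleForEKSShared \
                  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"eks.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

                aws iam attach-role-policy \
                  --role-name ServiceRoleForEKSShared \
                  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

                aws iam attach-role-policy \
                  --role-name ServiceRoleForEKSShared \
                  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy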

            • Configure an AWS profile for deployment from the local node. Please refer to the AWS documentation for a detailed guide on configuring profiles.
            • Create an AWS key pair for EKS cluster node access. Please refer to the AWS documentation for a detailed guide on creating a key pair.
            • Create a public Hosted Zone for the EKS cluster deployment if none exists. Please refer to the AWS documentation for a detailed guide on creating a Hosted Zone.
            "},{"location":"operator-guide/deploy-aws-eks/#terraform-backend","title":"Terraform Backend","text":"

            The Terraform configuration for EKS cluster deployment has a backend block, which defines where and how the operations are performed, and where the state snapshots are stored. Currently, the best practice is to store the state as a given key in a given bucket on Amazon S3.

            This backend also supports state locking and consistency checking via DynamoDB, which can be enabled by setting the dynamodb_table field to an existing DynamoDB table name.

            In the following configuration a single DynamoDB table can be used to lock multiple remote state files. Terraform generates key names that include the values of the bucket and key variables.

            The edp-terraform-aws-platform.git repository provides an optional project that creates the initial resources required to start using Terraform from scratch.

            The provided resources allow using the following Terraform options:

            • to store Terraform states remotely in the Amazon S3 bucket;
            • to manage remote state access with S3 bucket policy;
            • to support state locking and consistency checking via DynamoDB.

            After the Terraform run, the following AWS resources will be created:

            • S3 bucket: terraform-states-<AWS_ACCOUNT_ID>
            • S3 bucket policy: terraform-states-<AWS_ACCOUNT_ID>
            • DynamoDB lock table: terraform_locks

            Please, skip this section if you already have the listed resources for further Terraform remote backend usage.

            To create the required resources, do the following:

            1. Clone the edp-terraform-aws-platform.git repository with the s3-backend project and rename it according to the project name.

              clone project

                $ git clone https://github.com/epmd-edp/edp-terraform-aws-platform.git\n\n  $ mv edp-terraform-aws-platform edp-terraform-aws-platform-<PROJECT_NAME>\n\n  $ cd edp-terraform-aws-platform-<PROJECT_NAME>/s3-backend\n

              where:

              <PROJECT_NAME> - is a project name, a unique platform identifier, e.g. shared, test-eks, etc.

            2. Fill in the input variables for the Terraform run in the s3-backend/terraform.tfvars file; refer to s3-backend/template.tfvars as an example.

              terraform.tfvars file example

                region = \"eu-central-1\"\n\ns3_states_bucket_name = \"terraform-states\"\n\ntable_name = \"terraform_locks\"\n\ntags = {\n\"SysName\"      = \"EKS\"\n\"SysOwner\"     = \"owner@example.com\"\n\"Environment\"  = \"EKS-TEST-CLUSTER\"\n\"CostCenter\"   = \"0000\"\n\"BusinessUnit\" = \"BU\"\n\"Department\"   = \"DEPARTMENT\"\n}\n

              Find the detailed description of the variables in the s3-backend/variables.tf file.

            3. Initialize the backend and apply the changes by running the terraform init and terraform apply commands.

              apply the changes

                $ terraform init\n$ terraform apply\n...\n  Do you want to perform these actions?\n  Terraform will perform the actions described above.\n  Only 'yes' will be accepted to approve.\n\n  Enter a value: yes\n\naws_dynamodb_table.terraform_lock_table: Creating...\n  aws_s3_bucket.terraform_states: Creating...\n  aws_dynamodb_table.terraform_lock_table: Creation complete after 27s [id=terraform-locks-test]\n  aws_s3_bucket.terraform_states: Creation complete after 1m10s [id=terraform-states-test-012345678910]\n  aws_s3_bucket_policy.terraform_states: Creating...\n  aws_s3_bucket_policy.terraform_states: Creation complete after 1s [id=terraform-states-test-012345678910]\n\n  Apply complete! Resources: 3 added, 0 changed, 0 destroyed.\n\n  Outputs:\n\n  terraform_lock_table_dynamodb_id = \"terraform_locks\"\nterraform_states_s3_bucket_name = \"terraform-states-012345678910\"\n
            4. Commit the local state. At this run, Terraform uses the local backend to store the state on the local filesystem, locks that state using system APIs, and performs operations locally. There is no strong requirement to store the resulting state file in Git, but it is possible since the file contains no sensitive data. Commit the state of the s3-backend project at your discretion.

                $ git add s3-backend/terraform.tfstate\n\n$ git commit -m \"Terraform state for s3-backend\"\n

              As a result, the projects that run Terraform can use the following definition for remote state configuration:

              providers.tf - terraform backend configuration block
              terraform {\n  backend \"s3\" {\n    bucket         = \"terraform-states-<AWS_ACCOUNT_ID>\"\n    key            = \"<PROJECT_NAME>/<REGION>/terraform/terraform.tfstate\"\n    region         = \"<REGION>\"\n    acl            = \"bucket-owner-full-control\"\n    dynamodb_table = \"terraform_locks\"\n    encrypt        = true\n  }\n}\n

              where:

              • AWS_ACCOUNT_ID - is the AWS account ID, e.g. 012345678910,
              • REGION - is the AWS region, e.g. eu-central-1,
              • PROJECT_NAME - is a project name, a unique platform identifier, e.g. shared, test-eks, etc.
              View: providers.tf - terraform backend configuration example
              terraform {\n  backend \"s3\" {\n    bucket         = \"terraform-states-012345678910\"\n    key            = \"test-eks/eu-central-1/terraform/terraform.tfstate\"\n    region         = \"eu-central-1\"\n    acl            = \"bucket-owner-full-control\"\n    dynamodb_table = \"terraform_locks\"\n    encrypt        = true\n  }\n}\n
            Note

              At the moment, it is recommended to use a common S3 bucket and DynamoDB table in the root EDP account for both Shared and Standalone cluster deployments.

              "},{"location":"operator-guide/deploy-aws-eks/#deploy-eks-cluster","title":"Deploy EKS Cluster","text":"

              To deploy the EKS cluster, make sure that all the above-mentioned Prerequisites are ready to be used.

              "},{"location":"operator-guide/deploy-aws-eks/#eks-cluster-deployment-with-terraform","title":"EKS Cluster Deployment with Terraform","text":"
              1. Clone the edp-terraform-aws-platform.git repository with the Terraform project for the EKS infrastructure and rename it according to the project name, if not done yet.

                clone project

                  $ git clone https://github.com/epmd-edp/edp-terraform-aws-platform.git\n  $ mv edp-terraform-aws-platform edp-terraform-aws-platform-<PROJECT_NAME>\n  $ cd edp-terraform-aws-platform-<PROJECT_NAME>\n

                where:

                • <PROJECT_NAME> - is a project name, a unique platform identifier, e.g. shared, test-eks, etc.
              2. Configure the Terraform backend according to your project needs, or follow the instructions from the Terraform Backend section.

              3. Fill in the input variables for the Terraform run in the terraform.tfvars file; refer to the template.tfvars file as an example, then apply the changes. See the details below. Be sure to put the correct values for the variables created in the Prerequisites section. Find the detailed description of the variables in the variables.tf file.

                Warning

                Please do not use uppercase characters in the input variables, as this can lead to unexpected issues.

                template.tfvars file template
                # Check out all the inputs based on the comments below and fill the gaps instead <...>\n  # More details on each variable can be found in the variables.tf file\n\n  create_elb = true # set to true if you'd like to create ELB for Gerrit usage\n\n  region   = \"<REGION>\"\n  role_arn = \"<ROLE_ARN>\"\n\n  platform_name        = \"<PLATFORM_NAME>\"        # the name of the cluster and AWS resources\n  platform_domain_name = \"<PLATFORM_DOMAIN_NAME>\" # must be created as a prerequisite\n\n  # The following will be created or used existing depending on the create_vpc value\n  subnet_azs    = [\"<SUBNET_AZS1>\", \"<SUBNET_AZS2>\"]\n  platform_cidr = \"<PLATFORM_CIDR>\"\n  private_cidrs = [\"<PRIVATE_CIDRS1>\", \"<PRIVATE_CIDRS2>\"]\n  public_cidrs  = [\"<PUBLIC_CIDRS1>\", \"<PUBLIC_CIDRS2>\"]\n\n  infrastructure_public_security_group_ids = [\n    \"<INFRASTRUCTURE_PUBLIC_SECURITY_GROUP_IDS1>\",\n    \"<INFRASTRUCTURE_PUBLIC_SECURITY_GROUP_IDS2>\",\n  ]\n\n  ssl_policy = \"<SSL_POLICY>\"\n\n  # EKS cluster configuration\n  cluster_version = \"1.22\"\n  key_name        = \"<AWS_KEY_PAIR_NAME>\" # must be created as a prerequisite\n  enable_irsa     = true\n\n  cluster_iam_role_name            = \"<SERVICE_ROLE_FOR_EKS>\"\n  worker_iam_instance_profile_name = \"<SERVICE_ROLE_FOR_EKS_WORKER_NODE\"\n\n  add_userdata = <<EOF\n  export TOKEN=$(aws ssm get-parameter --name <PARAMETER_NAME> --query 'Parameter.Value' --region <REGION> --output text)\n  cat <<DATA > /var/lib/kubelet/config.json\n  {\n    \"auths\":{\n      \"https://index.docker.io/v1/\":{\n        \"auth\":\"$TOKEN\"\n      }\n    }\n  }\n  DATA\n  EOF\n\n  map_users = [\n    {\n      \"userarn\" : \"<IAM_USER_ARN1>\",\n      \"username\" : \"<IAM_USER_NAME1>\",\n      \"groups\" : [\"system:masters\"]\n    },\n    {\n      \"userarn\" : \"<IAM_USER_ARN2>\",\n      \"username\" : \"<IAM_USER_NAME2>\",\n      \"groups\" : [\"system:masters\"]\n    }\n  ]\n\n  map_roles = [\n    {\n      \"rolearn\" : \"<IAM_ROLE_ARN1>\",\n      \"username\" : \"<IAM_ROLE_NAME1>\",\n      \"groups\" : [\"system:masters\"]\n    },\n  ]\n\n  tags = {\n    \"SysName\"      = \"<SYS_NAME>\"\n    \"SysOwner\"     = \"<SYSTEM_OWNER>\"\n    \"Environment\"  = \"<ENVIRONMENT>\"\n    \"CostCenter\"   = \"<COST_CENTER>\"\n    \"BusinessUnit\" = \"<BUSINESS_UNIT>\"\n    \"Department\"   = \"<DEPARTMENT>\"\n    \"user:tag\"     = \"<PLATFORM_NAME>\"\n  }\n\n  # Variables for demand pool\n  demand_instance_types      = [\"r5.large\"]\n  demand_max_nodes_count     = 0\n  demand_min_nodes_count     = 0\n  demand_desired_nodes_count = 0\n\n  // Variables for spot pool\n  spot_instance_types      = [\"r5.xlarge\", \"r5.large\", \"r4.large\"] # need to ensure we use nodes with more memory\n  spot_max_nodes_count     = 2\n  spot_desired_nodes_count = 2\n  spot_min_nodes_count     = 2\n

                Note

                The file above is an example. Please find the latest version in the project repo in the terraform.tfvars file.

                There are the following possible scenarios to deploy the EKS cluster:

                Case 1: Create new VPC and deploy the EKS cluster, terraform.tfvars file example
                create_elb     = true # set to true if you'd like to create ELB for Gerrit usage\n\nregion   = \"eu-central-1\"\nrole_arn = \"arn:aws:iam::012345678910:role/EKSDeployerRole\"\n\nplatform_name        = \"test-eks\"\nplatform_domain_name = \"example.com\" # must be created as a prerequisite\n\n# The following will be created or used existing depending on the create_vpc value\nsubnet_azs    = [\"eu-central-1a\", \"eu-central-1b\"]\nplatform_cidr = \"172.31.0.0/16\"\nprivate_cidrs = [\"172.31.0.0/20\", \"172.31.16.0/20\"]\npublic_cidrs  = [\"172.31.32.0/20\", \"172.31.48.0/20\"]\n\n# Use this parameter the second time you apply the code to specify new AWS Security Groups\ninfrastructure_public_security_group_ids = [\n  #  \"sg-00000000000000000\",\n  #  \"sg-00000000000000000\",\n]\n\n# EKS cluster configuration\ncluster_version = \"1.22\"\nkey_name        = \"test-kn\" # must be created as a prerequisite\nenable_irsa     = true\n\n# Define if IAM roles should be created during the deployment or used existing ones\ncluster_iam_role_name            = \"ServiceRoleForEKSShared\"\nworker_iam_instance_profile_name = \"ServiceRoleForEksSharedWorkerNode0000000000000000000000\"\n\nadd_userdata = <<EOF\nexport TOKEN=$(aws ssm get-parameter --name edprobot --query 'Parameter.Value' --region eu-central-1 --output text)\ncat <<DATA > /var/lib/kubelet/config.json\n{\n  \"auths\":{\n    \"https://index.docker.io/v1/\":{\n      \"auth\":\"$TOKEN\"\n    }\n  }\n}\nDATA\nEOF\n\nmap_users = [\n  {\n    \"userarn\" : \"arn:aws:iam::012345678910:user/user_name1@example.com\",\n    \"username\" : \"user_name1@example.com\",\n    \"groups\" : [\"system:masters\"]\n  },\n  {\n    \"userarn\" : \"arn:aws:iam::012345678910:user/user_name2@example.com\",\n    \"username\" : \"user_name2@example.com\",\n    \"groups\" : [\"system:masters\"]\n  }\n]\n\nmap_roles = [\n  {\n    \"rolearn\" : \"arn:aws:iam::012345678910:role/EKSClusterAdminRole\",\n    \"username\" : \"eksadminrole\",\n    \"groups\" : [\"system:masters\"]\n  },\n]\n\ntags = {\n  \"SysName\"      = \"EKS\"\n  \"SysOwner\"     = \"owner@example.com\"\n  \"Environment\"  = \"EKS-TEST-CLUSTER\"\n  \"CostCenter\"   = \"2020\"\n  \"BusinessUnit\" = \"BU\"\n  \"Department\"   = \"DEPARTMENT\"\n  \"user:tag\"     = \"test-eks\"\n}\n\n# Variables for spot pool\nspot_instance_types      = [\"r5.large\", \"r4.large\"] # need to ensure we use nodes with more memory\nspot_max_nodes_count     = 1\nspot_desired_nodes_count = 1\nspot_min_nodes_count     = 1\n
              4. Initialize the backend and apply the changes by running the terraform init and terraform apply commands.

                apply the changes
                   $ terraform init\n   $ terraform apply\n   ...\n\n   Do you want to perform these actions?\n   Terraform will perform the actions described above.\n   Only 'yes' will be accepted to approve.\n   Enter a value: yes\n   ...\n
              5. "},{"location":"operator-guide/deploy-aws-eks/#check-eks-cluster-deployment","title":"Check EKS cluster deployment","text":"

                As a result, the <PLATFORM_NAME> EKS cluster is deployed to the specified AWS account.

                Make sure you have all the required tools listed in the Required Tools section.

                To connect to the cluster, find the kubeconfig_<PLATFORM_NAME> file in the project folder, which is an output of the last terraform apply run, and move it to the ~/.kube/ folder.

                    $ mv kubeconfig_<PLATFORM_NAME> ~/.kube/\n

                Run the following commands to ensure the EKS cluster is up and has the required node count:

                    $ kubectl config get-contexts\n    $ kubectl get nodes\n

                Note

                If there are any authorization issues, make sure the users section in the kubeconfig_<PLATFORM_NAME> file has all the required parameters based on your AWS CLI version. Find more details in the create kubeconfig AWS user guide, and pay attention to the kubeconfig_aws_authenticator Terraform input variables.
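                As an alternative to fixing the generated file by hand, the kubeconfig entry can be regenerated with the AWS CLI. This is a sketch only; it assumes your local AWS profile is allowed to access the cluster.

                  # Sketch: regenerate the kubeconfig entry for the cluster
                  aws eks update-kubeconfig \
                    --name <PLATFORM_NAME> \
                    --region <REGION> \
                    --alias <PLATFORM_NAME>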

                Optionally, the Lens tool can be installed and used for further work with the Kubernetes cluster. Refer to the Lens documentation to add and manage the cluster.

                "},{"location":"operator-guide/deploy-okd-4.10/","title":"Deploy OKD 4.10 Cluster","text":"

                This instruction provides detailed information on the OKD 4.10 cluster deployment in the AWS Cloud and contains the additional setup necessary for the managed infrastructure.

                A full description of the cluster deployment can be found in the official documentation.

                "},{"location":"operator-guide/deploy-okd-4.10/#prerequisites","title":"Prerequisites","text":"

                Before the OKD cluster deployment and configuration, make sure to check the prerequisites.

                "},{"location":"operator-guide/deploy-okd-4.10/#required-tools","title":"Required Tools","text":"
                1. Install the following tools listed below:

                  • AWS CLI
                  • OpenShift CLI
                  • Lens (optional)
                2. Create the AWS IAM user with the required permissions. Make sure the AWS account is active, and the user doesn't have a permission boundary. Remove any Service Control Policy (SCP) restrictions from the AWS account.

                3. Generate a key pair for cluster node SSH access. Please perform the steps below:

                  • Generate the SSH key. Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If there is an existing key pair, ensure that the public key is in the ~/.ssh directory.
                    ssh-keygen -t ed25519 -N '' -f <path>/<file_name>\n
                  • Start the ssh-agent process as a background task for a local user if it has not already been started:
                    eval \"$(ssh-agent -s)\"\n
                  • Add the SSH private key to the ssh-agent:
                    ssh-add <path>/<file_name>\n
                4. Build the ccoctl tool:

                  • Clone the cloud-credential-operator repository.
                    git clone https://github.com/openshift/cloud-credential-operator.git\n
                  • Move to the cloud-credential-operator folder and build the ccoctl tool.
                    cd cloud-credential-operator && git checkout release-4.10\nGO_PACKAGE='github.com/openshift/cloud-credential-operator'\ngo build -ldflags \"-X $GO_PACKAGE/pkg/version.versionFromGit=$(git describe --long --tags --abbrev=7 --match 'v[0-9]*')\" ./cmd/ccoctl\n
                "},{"location":"operator-guide/deploy-okd-4.10/#prepare-for-the-deployment-process","title":"Prepare for the Deployment Process","text":"

                Before deploying the OKD cluster, please perform the steps below:

                "},{"location":"operator-guide/deploy-okd-4.10/#create-aws-resources","title":"Create AWS Resources","text":"

                Create the AWS resources with the Cloud Credential Operator utility (the ccoctl tool):

                1. Generate the public and private RSA key files that are used to set up the OpenID Connect identity provider for the cluster:

                  ./ccoctl aws create-key-pair\n
                2. Create an OpenID Connect identity provider and an S3 bucket on AWS:

                  ./ccoctl aws create-identity-provider \\\n--name=<NAME> \\\n--region=<AWS_REGION> \\\n--public-key-file=./serviceaccount-signer.public\n

                  where:

                  • NAME - is the name used to tag any cloud resources created for tracking,
                  • AWS_REGION - is the AWS region in which cloud resources will be created.
                3. Create the IAM roles for each component in the cluster:

                  • Extract the list of the CredentialsRequest objects from the OpenShift Container Platform release image:

                    oc adm release extract \\\n--credentials-requests \\\n--cloud=aws \\\n--to=./credrequests \\\nquay.io/openshift-release-dev/ocp-release:4.10.25-x86_64\n

                    Note

                    A version of the openshift-release-dev docker image can be found in the Quay registry.

                  • Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory:
                    ccoctl aws create-iam-roles \\\n--name=<NAME> \\\n--region=<AWS_REGION> \\\n--credentials-requests-dir=./credrequests \\\n--identity-provider-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<NAME>-oidc.s3.<AWS_REGION>.amazonaws.com\n
                "},{"location":"operator-guide/deploy-okd-4.10/#create-okd-manifests","title":"Create OKD Manifests","text":"

                Before deploying the OKD cluster, please perform the steps below:

                1. Download the OKD installer.

                2. Extract the installation program:

                  tar -xvf openshift-install-linux.tar.gz\n
                3. Download the installation pull secret for any private registry. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components. For example, here is a pull secret for Docker Hub:

                  The pull secret for the private registry
                  {\n\"auths\":{\n\"https://index.docker.io/v1/\":{\n\"auth\":\"$TOKEN\"\n}\n}\n}\n
                4. Create a deployment directory and the install-config.yaml file:

                  mkdir okd-deployment\ntouch okd-deployment/install-config.yaml\n

                  To specify more details about the OKD cluster platform or to modify the values of the required parameters, customize the install-config.yaml file for AWS. Please see an example of the customized file below:

                  install-config.yaml - OKD cluster's platform installation configuration file
                  apiVersion: v1\nbaseDomain: <YOUR_DOMAIN>\ncredentialsMode: Manual\ncompute:\n- architecture: amd64\nhyperthreading: Enabled\nname: worker\nplatform:\naws:\nrootVolume:\nsize: 30\nzones:\n- eu-central-1a\ntype: r5.large\nreplicas: 3\ncontrolPlane:\narchitecture: amd64\nhyperthreading: Enabled\nname: master\nplatform:\naws:\nrootVolume:\nsize: 50\nzones:\n- eu-central-1a\ntype: m5.xlarge\nreplicas: 3\nmetadata:\ncreationTimestamp: null\nname: 4-10-okd-sandbox\nnetworking:\nclusterNetwork:\n- cidr: 10.128.0.0/14\nhostPrefix: 23\nmachineNetwork:\n- cidr: 10.0.0.0/16\nnetworkType: OVNKubernetes\nserviceNetwork:\n- 172.30.0.0/16\nplatform:\naws:\nregion: eu-central-1\nuserTags:\nuser:tag: 4-10-okd-sandbox\npublish: External\npullSecret: <PULL_SECRET>\nsshKey: |\n<SSH_KEY>\n

                  where:

                  • YOUR_DOMAIN - is a base domain,
                  • PULL_SECRET - is a created pull secret for a private registry,
                  • SSH_KEY - is a created SSH key.
                5. Create the required OpenShift Container Platform installation manifests:

                  ./openshift-install create manifests --dir okd-deployment\n
                6. Copy the manifests generated by the ccoctl tool to the manifests directory created by the installation program:

                  cp ./manifests/* ./okd-deployment/manifests/\n
                7. Copy the private key generated in the tls directory by the ccoctl tool to the installation directory:

                  cp -a ./tls ./okd-deployment\n
                "},{"location":"operator-guide/deploy-okd-4.10/#deploy-the-cluster","title":"Deploy the Cluster","text":"

                To initialize the cluster deployment, run the following command:

                ./openshift-install create cluster --dir okd-deployment --log-level=info\n

                Note

                If the cloud provider account configured on the host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

                When the cluster deployment is completed, directions for accessing the cluster are displayed in the terminal, including a link to the web console and credentials for the kubeadmin user. The kubeconfig for the cluster will be located in okd-deployment/auth/kubeconfig.

                Example output
                ...\nINFO Install complete!\nINFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\nINFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\nINFO Login to the console with the user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\"\nINFO Time elapsed: 36m22s:\n

                Warning

                The Ignition config files contain certificates that expire after 24 hours, which are then renewed at that time. Do not turn off the cluster for this time, or you will have to update the certificates manually. See OpenShift Container Platform documentation for more information.

                "},{"location":"operator-guide/deploy-okd-4.10/#log-into-the-cluster","title":"Log Into the Cluster","text":"

                To log into the cluster, export the kubeconfig:

                  export KUBECONFIG=<installation_directory>/auth/kubeconfig\n

                Optionally, use the Lens tool for further work with the Kubernetes cluster.

                Note

                To install and manage the cluster, refer to Lens documentation.

                "},{"location":"operator-guide/deploy-okd-4.10/#manage-okd-cluster-without-the-inbound-rules","title":"Manage OKD Cluster Without the Inbound Rules","text":"

                In order to manage the OKD cluster without the 0.0.0.0/0 inbound rules, please perform the steps below:

                1. Create a Security Group with a list of your external IPs:

                  aws ec2 create-security-group --group-name <SECURITY_GROUP_NAME> --description \"<DESCRIPTION_OF_SECURITY_GROUP>\" --vpc-id <VPC_ID>\naws ec2 authorize-security-group-ingress \\\n--group-id '<SECURITY_GROUP_ID>' \\\n--ip-permissions 'IpProtocol=all,PrefixListIds=[{PrefixListId=<PREFIX_LIST_ID>}]'\n
                2. Manually attach this new Security Group to all master nodes of the cluster.

                3. Create another Security Group with an Elastic IP of the Cluster VPC:

                  aws ec2 create-security-group --group-name custom-okd-4-10 --description \"Cluster Ip to 80, 443\" --vpc-id <VPC_ID>\naws ec2 authorize-security-group-ingress \\\n--group-id '<SECURITY_GROUP_ID>' \\\n--protocol all \\\n--port 80 \\\n--cidr <ELASTIC_IP_OF_CLUSTER_VPC>\naws ec2 authorize-security-group-ingress \\\n--group-id '<SECURITY_GROUP_ID>' \\\n--protocol all \\\n--port 443 \\\n--cidr <ELASTIC_IP_OF_CLUSTER_VPC>\n
                4. Modify the cluster load balancer via the router-default svc in the openshift-ingress namespace by adding the two Security Groups created in the previous steps:

                  The router-default service with the Security Group annotations
                  apiVersion: v1\nkind: Service\nmetadata:\n  name: router-default\n  namespace: openshift-ingress\n  annotations:\n    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: \"tag_name=some_value\"\n    service.beta.kubernetes.io/aws-load-balancer-security-groups: \"<SECURITY_GROUP_IDs>\"\n    ...\n
                "},{"location":"operator-guide/deploy-okd-4.10/#optimize-spot-instances-usage","title":"Optimize Spot Instances Usage","text":"

                In order to optimize the usage of Spot Instances on AWS, add the following line under the providerSpec field in the MachineSet of the worker nodes:

                providerSpec:\nvalue:\nspotMarketOptions: {}\n
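                The same change can be applied without editing the manifest by hand. The command below is a sketch only; the MachineSet name is a placeholder, and already running machines are not recreated automatically, so scale the MachineSet or delete its machines for the change to take effect.

                  # Sketch: enable Spot Instances for an existing worker MachineSet
                  oc -n openshift-machine-api patch machineset <worker-machineset-name> \
                    --type merge \
                    -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"spotMarketOptions":{}}}}}}}'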
                "},{"location":"operator-guide/deploy-okd-4.10/#related-articles","title":"Related Articles","text":"
                • Deploy AWS EKS Cluster
                • Manage Jenkins Agent
                • Associate IAM Roles With Service Accounts
                • Deploy OKD 4.9 Cluster
                "},{"location":"operator-guide/deploy-okd/","title":"Deploy OKD 4.9 Cluster","text":"

                This instruction provides detailed information on the OKD 4.9 cluster deployment in the AWS Cloud and contains the additional setup necessary for the managed infrastructure.

                A full description of the cluster deployment can be found in the official documentation.

                "},{"location":"operator-guide/deploy-okd/#prerequisites","title":"Prerequisites","text":"

                Before the OKD cluster deployment and configuration, make sure to check the prerequisites.

                "},{"location":"operator-guide/deploy-okd/#required-tools","title":"Required Tools","text":"
                1. Install the following tools listed below:

                  • AWS CLI
                  • OpenShift CLI
                  • Lens (optional)
                2. Create the AWS IAM user with the required permissions. Make sure the AWS account is active, and the user doesn't have a permission boundary. Remove any Service Control Policy (SCP) restrictions from the AWS account.

                3. Generate a key pair for cluster node SSH access. Please perform the steps below:

                  • Generate the SSH key. Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If there is an existing key pair, ensure that the public key is in the ~/.ssh directory.
                     ssh-keygen -t ed25519 -N '' -f <path>/<file_name>\n
                  • Start the ssh-agent process as a background task for a local user if it has not already been started:
                     eval \"$(ssh-agent -s)\"\n
                  • Add the SSH private key to the ssh-agent:
                     ssh-add <path>/<file_name>\n
                "},{"location":"operator-guide/deploy-okd/#prepare-for-the-deployment-process","title":"Prepare for the Deployment Process","text":"

                Before deploying the OKD cluster, please perform the steps below:

                1. Download the OKD installer.

                2. Extract the installation program:

                  tar -xvf openshift-install-linux.tar.gz\n
                3. Download the installation pull secret for any private registry.

                  This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves container images for OKD components. For example, here is a pull secret for Docker Hub:

                  The pull secret for the private registry
                  {\n  \"auths\":{\n    \"https://index.docker.io/v1/\":{\n      \"auth\":\"$TOKEN\"\n    }\n  }\n}\n
                4. Create the deployment directory and the install-config.yaml file:

                  mkdir okd-deployment\ntouch okd-deployment/install-config.yaml\n

                  To specify more details about the OKD cluster platform or to modify the values of the required parameters, customize the install-config.yaml file for AWS. Please see an example of the customized file below:

                  install-config.yaml - OKD cluster's platform installation configuration file
                  apiVersion: v1\nbaseDomain: <YOUR_DOMAIN>\ncompute:\n- architecture: amd64\n  hyperthreading: Enabled\n  name: worker\n  platform:\n    aws:\n      zones:\n        - eu-central-1a\n      rootVolume:\n        size: 50\n      type: r5.large\n  replicas: 3\ncontrolPlane:\n  architecture: amd64\n  hyperthreading: Enabled\n  name: master\n  platform:\n    aws:\n      rootVolume:\n        size: 50\n      zones:\n        - eu-central-1a\n      type: m5.xlarge\n  replicas: 3\nmetadata:\n  creationTimestamp: null\n  name: 4-9-okd-sandbox\nplatform:\n  aws:\n    region: eu-central-1\n    userTags:\n      user:tag: 4-9-okd-sandbox\npublish: External\npullSecret: <PULL_SECRET>\nsshKey: |\n  <SSH_KEY>\n

                  where:

                  • YOUR_DOMAIN - is a base domain,
                  • PULL_SECRET - is a created pull secret for a private registry,
                  • SSH_KEY - is a created SSH key.
                "},{"location":"operator-guide/deploy-okd/#deploy-the-cluster","title":"Deploy the Cluster","text":"

                To initialize the cluster deployment, run the following command:

                ./openshift-install create cluster --dir <installation_directory> --log-level=info\n

                Note

                If the cloud provider account configured on the host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

                When the cluster deployment is completed, directions for accessing the cluster are displayed in the terminal, including a link to the web console and credentials for the kubeadmin user. The kubeconfig for the cluster will be located in okd-deployment/auth/kubeconfig.

                Example output
                ...\nINFO Install complete!\nINFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\nINFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\nINFO Login to the console with the user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\"\nINFO Time elapsed: 36m22s:\n

                Warning

                The Ignition config files contain certificates that expire after 24 hours, which are then renewed at that time. Do not turn off the cluster for this time, or you will have to update the certificates manually. See OpenShift Container Platform documentation for more information.

                "},{"location":"operator-guide/deploy-okd/#log-into-the-cluster","title":"Log Into the Cluster","text":"

                To log into the cluster, export the kubeconfig:

                  export KUBECONFIG=<installation_directory>/auth/kubeconfig\n

                Optionally, use the Lens tool for further work with the Kubernetes cluster.

                Note

                To install and manage the cluster, refer to Lens documentation.

                "},{"location":"operator-guide/deploy-okd/#related-articles","title":"Related Articles","text":"
                • Deploy AWS EKS Cluster
                • Manage Jenkins Agent
                • Deploy OKD 4.10 Cluster
                "},{"location":"operator-guide/ebs-csi-driver/","title":"Install Amazon EBS CSI Driver","text":"

                The Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver allows Amazon Elastic Kubernetes Service (Amazon EKS) clusters to manage the lifecycle of Amazon EBS volumes for Kubernetes Persistent Volumes.

                "},{"location":"operator-guide/ebs-csi-driver/#prerequisites","title":"Prerequisites","text":"

                An existing AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider for your cluster. To determine whether you already have an OIDC provider or to create a new one, see Creating an IAM OIDC provider for your cluster.
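                As a sketch, the check can also be done with the AWS CLI; the my-cluster name below is a placeholder for your cluster name.

                  # Get the OIDC issuer ID of the cluster
                  aws eks describe-cluster --name my-cluster \
                    --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5

                  # Check whether an IAM OIDC provider with that ID already exists
                  aws iam list-open-id-connect-providers | grep <issuer_id>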

                To add an Amazon EBS CSI add-on, please follow the steps below:

                1. Check your cluster details (the random value in the cluster name will be required in the next step):

                  kubectl cluster-info\n
                2. Create Kubernetes IAM Trust Policy for Amazon EBS CSI Driver. Replace AWS_ACCOUNT_ID with your account ID, AWS_REGION with your AWS Region, and EXAMPLED539D4633E53DE1B71EXAMPLE with the value that was returned in the previous step. Save this Trust Policy into a file aws-ebs-csi-driver-trust-policy.json.

                  aws-ebs-csi-driver-trust-policy.json
                    {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Effect\": \"Allow\",\n\"Principal\": {\n\"Federated\": \"arn:aws:iam::AWS_ACCOUNT_ID:oidc-provider/oidc.eks.AWS_REGION.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE\"\n},\n\"Action\": \"sts:AssumeRoleWithWebIdentity\",\n\"Condition\": {\n\"StringEquals\": {\n\"oidc.eks.AWS_REGION.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud\": \"sts.amazonaws.com\",\n\"oidc.eks.AWS_REGION.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub\": \"system:serviceaccount:kube-system:ebs-csi-controller-sa\"\n}\n}\n}\n]\n}\n

                  For details on the IAM Role creation, please refer to the official documentation.

                3. Create the IAM role, for example:

                  aws iam create-role \\\n--role-name AmazonEKS_EBS_CSI_DriverRole \\\n--assume-role-policy-document file://\"aws-ebs-csi-driver-trust-policy.json\"\n
                4. Attach the required AWS Managed Policy AmazonEBSCSIDriverPolicy to the role with the following command:

                  aws iam attach-role-policy \\\n--policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \\\n--role-name AmazonEKS_EBS_CSI_DriverRole\n
                5. Add the Amazon EBS CSI add-on using the AWS CLI. Replace my-cluster with the name of your cluster, AWS_ACCOUNT_ID with your account ID, and AmazonEKS_EBS_CSI_DriverRole with the name of the role that was created earlier:

                  aws eks create-addon --cluster-name my-cluster --addon-name aws-ebs-csi-driver \\\n--service-account-role-arn arn:aws:iam::AWS_ACCOUNT_ID:role/AmazonEKS_EBS_CSI_DriverRole\n

                  Note

                  When the plugin is deployed, it creates the ebs-csi-controller-sa service account. The service account is bound to a Kubernetes ClusterRole with the required Kubernetes permissions. The ebs-csi-controller-sa service account should already be annotated with arn:aws:iam::AWS_ACCOUNT_ID:role/AmazonEKS_EBS_CSI_DriverRole. To check the annotation, please run:

                  kubectl get sa ebs-csi-controller-sa -n kube-system -o=jsonpath='{.metadata.annotations}'\n

                  In case pods have errors, restart the ebs-csi-controller deployment:

                  kubectl rollout restart deployment ebs-csi-controller -n kube-system\n
                "},{"location":"operator-guide/ebs-csi-driver/#related-articles","title":"Related Articles","text":"
                • Creating an IAM OIDC provider for your cluster
                • Creating the Amazon EBS CSI driver IAM role for service accounts
                • Managing the Amazon EBS CSI driver as an Amazon EKS add-on
                "},{"location":"operator-guide/edp-access-model/","title":"EDP Access Model","text":"

                EDP uses two different methods to regulate access to resources, each tailored to specific scenarios:

                • The first method involves roles and groups in Keycloak and is used for SonarQube, Jenkins, and partly for Nexus.
                • The second method of resource access control in EDP involves EDP custom resources. This approach requires modifying custom resources that outline the required access privileges for every user or group and is used to govern access to Gerrit, Nexus, EDP Portal, EKS Cluster and Argo CD.

                Info

                These two approaches are not interchangeable, as each has its unique capabilities.

                "},{"location":"operator-guide/edp-access-model/#keycloak","title":"Keycloak","text":"

                This section explains what realm roles and realm groups are and how they function within Keycloak.

                "},{"location":"operator-guide/edp-access-model/#realm-roles","title":"Realm Roles","text":"

                The edp Keycloak realm has two realm roles of a composite type, named administrator and developer:

                • The administrator realm role is designed for users who need administrative access to the tools used in the project. This realm role contains two roles: jenkins-administrators and sonar-administrators. Users who are assigned the administrator realm role will be granted these two roles automatically.
                • The developer realm role, on the other hand, is designed for users who need access to the development tools used in the project. This realm role also contains two roles: jenkins-users and sonar-developers. Users who are assigned the developer realm role will be granted these two roles automatically.

                These realm roles have been defined to make it easier to assign groups of rights to users.

                The table below shows the realm roles and the composite types they relate to.

                Realm Role Name - Role Type:
                • administrator - composite role
                • developer - composite role
                • jenkins-administrators - regular role
                • jenkins-users - regular role
                • sonar-administrators - regular role
                • sonar-developers - regular role
                "},{"location":"operator-guide/edp-access-model/#realm-groups","title":"Realm Groups","text":"

                EDP uses two different realms for group management, edp and openshift:

                • The edp realm contains two groups that are specifically used for controlling access to Argo CD. These groups are named ArgoCDAdmins and ArgoCD-edp-users.
                • The openshift realm contains five groups that are used for access control in both the EDP Portal and the EKS cluster. These groups are named edp-oidc-admins, edp-oidc-builders, edp-oidc-deployers, edp-oidc-developers, and edp-oidc-viewers.
                Realm Group Name - Realm Name:
                • ArgoCDAdmins - edp
                • ArgoCD-edp-users - edp
                • edp-oidc-admins - openshift
                • edp-oidc-builders - openshift
                • edp-oidc-deployers - openshift
                • edp-oidc-developers - openshift
                • edp-oidc-viewers - openshift
                "},{"location":"operator-guide/edp-access-model/#sonarqube","title":"SonarQube","text":"

                In the case of SonarQube, there are two ways to manage access: via Keycloak and via the EDP approach. This section describes both approaches.

                "},{"location":"operator-guide/edp-access-model/#manage-access-via-keycloak","title":"Manage Access via Keycloak","text":"

                SonarQube access is managed using Keycloak roles in the edp realm. The sonar-developers and sonar-administrators realm roles are the two available roles that determine user access levels. To grant access, the corresponding role must be added to the user in Keycloak.

                For example, a user who needs developer access to SonarQube should be assigned the sonar-developers or developer composite role in Keycloak.
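                Roles can be assigned either in the Keycloak admin console or with the Keycloak admin CLI. The command below is a sketch only; it assumes kcadm.sh is already authenticated against the Keycloak server and that the user exists in the edp realm.

                  # Sketch: grant developer access to SonarQube via the Keycloak admin CLI
                  kcadm.sh add-roles -r edp \
                    --uusername user_1@example.com \
                    --rolename developer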

                "},{"location":"operator-guide/edp-access-model/#edp-approach-for-managing-access","title":"EDP Approach for Managing Access","text":"

                EDP provides its own SonarQube Permission Template, which is used to manage user access and permissions for SonarQube projects.

                The template is stored in a custom resource of the SonarQube operator; an example of such a custom resource can be found below.

                SonarPermissionTemplate

                apiVersion: v2.edp.epam.com/v1\nkind: SonarPermissionTemplate\nmetadata:\nname: edp-default\nspec:\ndescription: EDP permission templates (DO NOT REMOVE)\ngroupPermissions:\n- groupName: non-interactive-users\npermissions:\n- user\n- groupName: sonar-administrators\npermissions:\n- admin\n- user\n- groupName: sonar-developers\npermissions:\n- codeviewer\n- issueadmin\n- securityhotspotadmin\n- user\nname: edp-default\nprojectKeyPattern: .+\nsonarOwner: sonar\n

                The SonarQube Permission Template contains three groups: non-interactive-users, sonar-administrators and sonar-developers:

                • non-interactive-users are users who do not require direct access to the SonarQube project but need to be informed about the project's status and progress. This group has read-only access to the project, which means that they can view the project's data and metrics but cannot modify or interact with it in any way.
                • sonar-administrators are users who have full control over the SonarQube project. They have the ability to create, modify, and delete projects, as well as manage user access and permissions. This group also has the ability to configure SonarQube settings and perform other administrative tasks.
                • sonar-developers are users who are actively working on the SonarQube project. They have read and write access to the project, which means that they can modify the project's data and metrics. This group also has the ability to configure project-specific settings and perform other development tasks.

                These groups are designed to provide different levels of access to the SonarQube project, depending on the user's role and responsibilities.

                Info

                If a user is not assigned to any group, they will be in the sonar-users group by default. This group does not have any permissions in the edp-default Permission Template.

                The permissions that are attached to each of the groups are described below in the table:

                Group Name - Permissions:
                • non-interactive-users - user
                • sonar-administrators - admin, user
                • sonar-developers - codeviewer, issueadmin, securityhotspotadmin, user
                • sonar-users - (no permissions)
                "},{"location":"operator-guide/edp-access-model/#nexus","title":"Nexus","text":"

                Users authenticate to Nexus using their Keycloak credentials.

                During the authentication process, the OAuth2-Proxy receives the user's role from Keycloak.

                Info

                Only users with either the administrator or developer role in Keycloak can access Nexus.

                Nexus has four distinct roles available, including edp-admin, edp-viewer, nx-admin and nx-anonymous. To grant the user access to one or more of these roles, an entry must be added to the custom Nexus resource.

                For instance, in the context of the custom Nexus resource, the user user_1@example.com has been assigned the nx-admin role. An example can be found below:

                Nexus

                apiVersion: v2.edp.epam.com/v1\nkind: Nexus\nmetadata:\nname: nexus\nspec:\nbasePath: /\nedpSpec:\ndnsWildcard: example.com\nkeycloakSpec:\nenabled: false\nroles:\n- developer\n- administrator\nusers:\n- roles:\n- nx-admin\nusername: user_1@example.com\n
                "},{"location":"operator-guide/edp-access-model/#gerrit","title":"Gerrit","text":"

                The user should use their credentials from Keycloak when authenticating to Gerrit.

                After logging into Gerrit, the user is not automatically attached to any groups. To add a user to a group, the GerritGroupMember custom resource must be created. This custom resource specifies the user's email address and the name of the group to which they should be added.

                The manifest below is an example of the GerritGroupMember resource:

                GerritGroupMember

                apiVersion: v2.edp.epam.com/v1\nkind: GerritGroupMember\nmetadata:\nname: user-admins\nspec:\naccountId: user@user.com\ngroupId: Administrators\n

                After the GerritGroupMember resource is created, the user will have the permissions and access levels associated with that group.

                "},{"location":"operator-guide/edp-access-model/#edp-portal-and-eks-cluster","title":"EDP Portal and EKS Cluster","text":"

                Both Portal and EKS Cluster use Keycloak groups for controlling access. Users need to be added to the required group in Keycloak to get access. The groups that are used for access control are in the openshift realm.

                Note

                The openshift realm is used because a Keycloak client for OIDC is in this realm.

                "},{"location":"operator-guide/edp-access-model/#keycloak-groups","title":"Keycloak Groups","text":"

                There are two types of groups provided for users:

                • Independent group: provides the minimum required permission set.
                • Extension group: extends the rights of an independent group.

                For example, the edp-oidc-viewers group can be extended with rights from the edp-oidc-builders group.

                Group Name / Independent Group / Extension Group:

                • edp-oidc-admins
                • edp-oidc-developers
                • edp-oidc-viewers
                • edp-oidc-builders
                • edp-oidc-deployers

                Name / Action List:

                • View: getting of all namespaced resources
                • Build: starting a PipelineRun from EDP Portal UI
                • Deploy: deploying a new version of an application via Argo CD Application

                Group Name / View / Build / Deploy / Full Namespace Access:

                • edp-oidc-admins
                • edp-oidc-developers
                • edp-oidc-viewers
                • edp-oidc-builders
                • edp-oidc-deployers
                "},{"location":"operator-guide/edp-access-model/#cluster-rbac-resources","title":"Cluster RBAC Resources","text":"

                The edp namespace has five role bindings that provide the necessary permissions for the Keycloak groups described above.

                Role Binding Name / Role Name / Groups:

                • tenant-admin: cluster-admin (group: edp-oidc-admins)
                • tenant-builder: tenant-builder (group: edp-oidc-builders)
                • tenant-deployer: tenant-deployer (group: edp-oidc-deployers)
                • tenant-developer: tenant-developer (group: edp-oidc-developers)
                • tenant-viewer: view (groups: edp-oidc-viewers, edp-oidc-developers)

                Note

                EDP provides an aggregated ClusterRole with permissions to view custom EDP resources. The ClusterRole is named edp-aggregate-view-edp.

                Info

                The tenant-admin RoleBinding is created in every namespace provisioned by the cd-pipeline-operator. This RoleBinding assigns the admin role to the edp-oidc-admins and edp-oidc-developers groups.

                "},{"location":"operator-guide/edp-access-model/#grant-user-access-to-the-created-namespaces","title":"Grant User Access to the Created Namespaces","text":"

                To provide users with admin or developer privileges for project namespaces, they need to be added to the edp-oidc-admins and edp-oidc-developers groups in Keycloak.
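
                As an illustration only, a user can also be added to these groups declaratively with the keycloak-operator. The sketch below is not taken from this guide: the user name, email, and the realm reference (assumed to be the KeycloakRealm custom resource that manages the openshift realm) are placeholders to adjust for your installation:

                apiVersion: v1.edp.epam.com/v1
                kind: KeycloakRealmUser
                metadata:
                  name: project-admin
                  namespace: edp
                spec:
                  realm: openshift                 # assumption: name of the KeycloakRealm custom resource for the openshift realm
                  username: "project-admin"
                  email: "project-admin@example.com"
                  enabled: true
                  keepResource: true
                  groups:
                    - edp-oidc-admins              # admin privileges in the created namespaces
                    - edp-oidc-developers          # developer privileges in the created namespaces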

                "},{"location":"operator-guide/edp-access-model/#argo-cd","title":"Argo CD","text":"

                In Argo CD, groups are specified when creating an AppProject to restrict access to deployed applications. To gain access to deployed applications within a project, the user must be added to their corresponding Argo CD group in Keycloak. This ensures that only authorized users can access and modify applications within the project.

                Info

                By default, only the ArgoCDAdmins group is automatically created in Keycloak.
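
                For illustration, the sketch below shows where such groups are referenced in an AppProject definition. The project name, destinations, and the ArgoCD-my-project-users group are assumptions; the point is that Keycloak group names are listed under spec.roles[].groups:

                apiVersion: argoproj.io/v1alpha1
                kind: AppProject
                metadata:
                  name: my-project
                  namespace: argocd
                spec:
                  sourceRepos:
                    - '*'
                  destinations:
                    - namespace: 'my-project-*'
                      server: https://kubernetes.default.svc
                  roles:
                    - name: developers
                      description: Access to the applications of my-project
                      policies:
                        - p, proj:my-project:developers, applications, *, my-project/*, allow
                      groups:
                        - ArgoCD-my-project-users   # Keycloak group that is granted access to this project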

                "},{"location":"operator-guide/edp-access-model/#related-articles","title":"Related Articles","text":"
                • EDP Portal Overview
                • EKS OIDC With Keycloak
                • Argo CD Integration
                "},{"location":"operator-guide/edp-kiosk-usage/","title":"EDP Kiosk Usage","text":"

                Explore the way Kiosk, a multi-tenancy extension for Kubernetes, is used in EDP.

                "},{"location":"operator-guide/edp-kiosk-usage/#prerequisites","title":"Prerequisites","text":"
                • Installed Kiosk 0.2.11.
                "},{"location":"operator-guide/edp-kiosk-usage/#diagram-of-using-kiosk-by-edp","title":"Diagram of using Kiosk by EDP","text":"

                Kiosk usage

                Agenda

                • blue - created by Helm chart;
                • grey - created manually
                "},{"location":"operator-guide/edp-kiosk-usage/#usage","title":"Usage","text":"
                • The EDP installation area on the diagram is described by the following link;
                • Once the above step is executed, the edp-cd-pipeline-operator service account is linked to the kiosk-edit ClusterRole so that it can manage Kiosk-specific resources (e.g. Space);
                • Each newly created stage in the edp installation of EDP generates a new Kiosk Space resource that is linked to the edp Kiosk Account;
                • According to the Kiosk documentation, the Space resource creates a namespace with a RoleBinding that links the service account associated with the Kiosk Account to the kiosk-space-admin ClusterRole. As the cd-pipeline-operator ServiceAccount is linked to the Account, it has admin permissions in all namespaces it generates.
                "},{"location":"operator-guide/edp-kiosk-usage/#related-articles","title":"Related Articles","text":"
                • Install EDP
                • Set Up Kiosk
                "},{"location":"operator-guide/eks-oidc-integration/","title":"EKS OIDC Integration","text":"

                This page is a detailed guide on integrating Keycloak with the edp-keycloak-operator to serve as an identity provider for AWS Elastic Kubernetes Service (EKS). It provides step-by-step instructions for creating necessary realms, users, roles, and client configurations for a seamless Keycloak-EKS collaboration. Additionally, it includes guidelines on installing the edp-keycloak-operator using Helm charts.

                "},{"location":"operator-guide/eks-oidc-integration/#prerequisites","title":"Prerequisites","text":"
                • EKS Configuration is performed;
                • Helm v3.10.0 is installed;
                • Keycloak is installed.
                "},{"location":"operator-guide/eks-oidc-integration/#configure-keycloak","title":"Configure Keycloak","text":"

                To prepare Keycloak for integration with the edp-keycloak-operator, follow the steps below:

                1. Ensure that the openshift realm is created.

                2. Create the orchestrator user and set the password in the Master realm.

                3. In the Role Mapping tab, assign the proper roles to the user:

                  • Realm Roles:

                    • create-realm;
                    • offline_access;
                    • uma_authorization.
                  • Client Roles openshift-realm:

                    • impersonation;
                    • manage-authorization;
                    • manage-clients;
                    • manage-users.

                Role mappings

                "},{"location":"operator-guide/eks-oidc-integration/#install-keycloak-operator","title":"Install Keycloak Operator","text":"

                To install the Keycloak operator, follow the steps below:

                1. Add the epamedp Helm chart to a local client:

                  helm repo add epamedp https://epam.github.io/edp-helm-charts/stable\nhelm repo update\n
                2. Install the Keycloak operator:

                  helm install keycloak-operator epamedp/keycloak-operator --namespace security --set name=keycloak-operator\n
                "},{"location":"operator-guide/eks-oidc-integration/#connect-keycloak-operator-to-keycloak","title":"Connect Keycloak Operator to Keycloak","text":"

                The next stage after installing Keycloak is to integrate it with the Keycloak operator. It can be implemented with the following steps:

                1. Create the keycloak secret that will contain username and password to perform the integration. Set your own password. The username must be orchestrator:

                  kubectl -n security create secret generic keycloak \\\n--from-literal=username=orchestrator \\\n--from-literal=password=<password>\n
                2. Create the Keycloak Custom Resource with the Keycloak instance URL and the secret created in the previous step:

                  apiVersion: v1.edp.epam.com/v1\nkind: Keycloak\nmetadata:\nname: main\nnamespace: security\nspec:\nsecret: keycloak                   # Secret name\nurl: https://keycloak.example.com  # Keycloak URL\n
                3. Create the KeycloakRealm Custom Resource:

                  apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealm\nmetadata:\nname: control-plane\nnamespace: security\nspec:\nrealmName: control-plane\nkeycloakOwner: main\n
                4. Create the KeycloakRealmGroup Custom Resource for both administrators and developers:

                  administratorsdevelopers
                  apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealmGroup\nmetadata:\nname: administrators\nnamespace: security\nspec:\nrealm: control-plane\nname: eks-oidc-administrator\n
                  apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealmGroup\nmetadata:\nname: developers\nnamespace: security\nspec:\nrealm: control-plane\nname: eks-oidc-developers\n
                5. Create the KeycloakClientScope Custom Resource:

                  apiVersion: v1.edp.epam.com/v1\nkind: KeycloakClientScope\nmetadata:\nname: groups-keycloak-eks\nnamespace: security\nspec:\nname: groups\nrealm: control-plane\ndescription: \"Group Membership\"\nprotocol: openid-connect\nprotocolMappers:\n- name: groups\nprotocol: openid-connect\nprotocolMapper: \"oidc-group-membership-mapper\"\nconfig:\n\"access.token.claim\": \"true\"\n\"claim.name\": \"groups\"\n\"full.path\": \"false\"\n\"id.token.claim\": \"true\"\n\"userinfo.token.claim\": \"true\"\n
                6. Create the KeycloakClient Custom Resource:

                  apiVersion: v1.edp.epam.com/v1\nkind: KeycloakClient\nmetadata:\nname: eks\nnamespace: security\nspec:\nadvancedProtocolMappers: true\nclientId: eks\ndirectAccess: true\npublic: false\ndefaultClientScopes:\n- groups\ntargetRealm: control-plane\nwebUrl: \"http://localhost:8000\"\n
                7. Create the KeycloakRealmUser Custom Resource for both administrator and developer roles:

                  administrator roledeveloper role
                  apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealmUser\nmetadata:\nname: keycloakrealmuser-sample\nnamespace: security\nspec:\nrealm: control-plane\nusername: \"administrator\"\nfirstName: \"John\"\nlastName: \"Snow\"\nemail: \"administrator@example.com\"\nenabled: true\nemailVerified: true\npassword: \"12345678\"\nkeepResource: true\nrequiredUserActions:\n- UPDATE_PASSWORD\ngroups:\n- eks-oidc-administrator\n
                  apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealmUser\nmetadata:\nname: keycloakrealmuser-sample\nnamespace: security\nspec:\nrealm: control-plane\nusername: \"developers\"\nfirstName: \"John\"\nlastName: \"Snow\"\nemail: \"developers@example.com\"\nenabled: true\nemailVerified: true\npassword: \"12345678\"\nkeepResource: true\nrequiredUserActions:\n- UPDATE_PASSWORD\ngroups:\n- eks-oidc-developers\n
                8. As a result, Keycloak is integrated with the AWS Elastic Kubernetes Service. This integration enables users to log in to the EKS cluster effortlessly using their kubeconfig files while managing permissions through Keycloak.
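
                  A rough sketch of such a kubeconfig user entry is shown below. It assumes the kubelogin (kubectl oidc-login) plugin is installed and reuses the issuer and client from the resources above; adjust the values to your setup:

                  # users section of a kubeconfig file; cluster and context entries are omitted
                  users:
                  - name: keycloak-oidc
                    user:
                      exec:
                        apiVersion: client.authentication.k8s.io/v1beta1
                        command: kubectl
                        args:
                          - oidc-login
                          - get-token
                          - --oidc-issuer-url=https://keycloak.example.com/auth/realms/control-plane
                          - --oidc-client-id=eks
                          - --oidc-client-secret=<client-secret>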

                "},{"location":"operator-guide/eks-oidc-integration/#related-articles","title":"Related Articles","text":"
                • Keycloak Installation
                • EKS OIDC With Keycloak
                "},{"location":"operator-guide/enable-irsa/","title":"Associate IAM Roles With Service Accounts","text":"

                This page describes how to associate an IAM role with a service account (IRSA) in the EPAM Delivery Platform.

                Get acquainted with the AWS Official Documentation on the subject before proceeding.

                "},{"location":"operator-guide/enable-irsa/#common-configuration-of-iam-roles-with-service-accounts","title":"Common Configuration of IAM Roles With Service Accounts","text":"

                To successfully associate the IAM role with the service account, follow the steps below:

                1. Create an IAM role that will further be associated with the service account. This role must have the following trust policy:

                  IAM Role

                  {\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Principal\": {\n        \"Federated\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>\"\n      },\n      \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n      \"Condition\": {\n        \"StringEquals\": {\n          \"<OIDC_PROVIDER>:sub\": \"system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>\"\n        }\n      }\n    }\n  ]\n}\n

                  View the cluster's <OIDC_PROVIDER> URL.

                    aws eks describe-cluster --name <CLUSTER_NAME> --query \"cluster.identity.oidc.issuer\" --output text\n

                  Example output:

                    https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E\n

                  <OIDC_PROVIDER> in this example will be:

                    oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E\n
                2. Deploy the amazon-eks-pod-identity-webhook v0.2.0.

                  Note

                  The amazon-eks-pod-identity-webhook functionality is provided out of the box in EKS v1.21 and higher, unless the cluster has been upgraded from an older version. If it is provided, skip step 2 and continue from step 3 in this documentation.

                  2.1. Provide the stable (ed8c41f) version of the Docker image in the deploy/deployment-base.yaml file.

                  2.2. Provide ${CA_BUNDLE} in the deploy/mutatingwebhook.yaml file:

                    secret_name=$(kubectl -n default get sa default -o jsonpath='{.secrets[0].name}') \\\n  CA_BUNDLE=$(kubectl -n default get secret/$secret_name -o jsonpath='{.data.ca\\.crt}' | tr -d '\\n')\n

                  2.3. Deploy the Webhook:

                    kubectl apply -f deploy/\n

                  2.4. Approve the csr:

                    csr_name=$(kubectl get csr -o jsonpath='{.items[?(@.spec.username==\"system:serviceaccount:default:pod-identity-webhook\")].metadata.name}')\n  kubectl certificate approve $csr_name\n
                3. Annotate the created service account with the IAM role:

                  Service Account

                    apiVersion: v1\n  kind: ServiceAccount\n  metadata:\n    name: <SERVICE_ACCOUNT_NAME>\n    namespace: <NAMESPACE>\n    annotations:\n      eks.amazonaws.com/role-arn: \"arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>\"\n
                4. All newly launched pods with this service account will be modified and then use the associated IAM role. Find below the pod specification template:

                  Pod Template

                    apiVersion: v1\n  kind: Pod\n  metadata:\n    name: irsa-test\n    namespace: <POD_NAMESPACE>\n  spec:\n    serviceAccountName: <SERVICE_ACCOUNT_NAME>\n    securityContext:\n      fsGroup: 65534\n    containers:\n    - name: terraform\n      image: epamedp/edp-jenkins-terraform-agent:3.0.9\n      command: ['sh', '-c', 'aws sts \"get-caller-identity\" && sleep 3600']\n
                5. Check the logs of the created pod from the template above.
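
                  For example, assuming the pod template above was applied as is, the logs can be fetched with:

                    kubectl logs irsa-test -n <POD_NAMESPACE>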

                  Example output:

                    {\n  \"UserId\": \"XXXXXXXXXXXXXXXXXXXXX:botocore-session-XXXXXXXXXX\",\n  \"Account\": \"XXXXXXXXXXXX\",\n  \"Arn\": \"arn:aws:sts::XXXXXXXXXXXX:assumed-role/AWSIRSATestRole/botocore-session-XXXXXXXXXX\"\n  }\n

                  As a result, it is possible to perform actions in AWS under the AWSIRSATestRole role.

                "},{"location":"operator-guide/enable-irsa/#related-articles","title":"Related Articles","text":"
                • Use Terraform Library in EDP
                "},{"location":"operator-guide/external-secrets-operator-integration/","title":"External Secrets Operator Integration","text":"

                External Secrets Operator (ESO) can be integrated with EDP.

                There are multiple Secrets Providers that can be used within ESO. EDP is integrated with two major providers:

                • Kubernetes Secrets
                • AWS Systems Manager Parameter Store

                EDP uses a number of secrets to integrate with various applications. Below is a list of the secrets used in the EDP platform and their descriptions.

                Secret Name / Field / Description:

                • keycloak / username: Admin username for keycloak, used by keycloak operator
                • keycloak / password: Admin password for keycloak, used by keycloak operator
                • defectdojo-ciuser-token / token: Defectdojo token with admin permissions
                • defectdojo-ciuser-token / url: Defectdojo url
                • kaniko-docker-config / registry.com: Change to registry url
                • kaniko-docker-config / username: Registry username
                • kaniko-docker-config / password: Registry password
                • kaniko-docker-config / auth: Base64 encoded 'user:secret' string
                • regcred / registry.com: Change to registry url
                • regcred / username: Registry username
                • regcred / password: Registry password
                • regcred / auth: Base64 encoded 'user:secret' string
                • github-config / id_rsa: Private key from github repo in base64
                • github-config / token: Api token
                • github-config / secretString: Random string
                • gitlab-config / id_rsa: Private key from gitlab repo in base64
                • gitlab-config / token: Api token
                • gitlab-config / secretString: Random string
                • jira-user / username: Jira username in base64
                • jira-user / password: Jira password in base64
                • sonar-ciuser-token / username: Sonar service account username
                • sonar-ciuser-token / secret: Sonar service account secret
                • nexus-ci-user / username: Nexus service account username
                • nexus-ci-user / password: Nexus service account password
                • oauth2-proxy-cookie-secret / cookie-secret: Secret key for keycloak client in base64
                • nexus-proxy-cookie-secret / cookie-secret: Secret key for keycloak client in base64
                • keycloak-client-headlamp-secret: Secret key for keycloak client in base64
                • keycloak-client-argo-secret: Secret key for keycloak client in base64
                "},{"location":"operator-guide/external-secrets-operator-integration/#kubernetes-provider","title":"Kubernetes Provider","text":"

                All secrets are stored in Kubernetes in pre-defined namespaces. EDP suggests using the following approach for secrets management:

                • EDP_NAMESPACE-vault, where EDP_NAMESPACE is the name of the namespace where EDP is deployed, such as edp-vault. This namespace is used by the EDP platform. Access to secrets in edp-vault is permitted only for EDP administrators.
                • EDP_NAMESPACE-cicd-vault, where EDP_NAMESPACE is the name of the namespace where EDP is deployed, such as edp-cicd-vault. The development team uses secrets in edp-cicd-vault for microservices development.

                See a diagram below for more details:

                In order to install EDP, a list of passwords must be created. Secrets are provided automatically when using ESO.

                1. Create a common namespace for secrets and EDP:

                  kubectl create namespace edp-vault\nkubectl create namespace edp\n
                2. Create secrets in the edp-vault namespace:

                  apiVersion: v1\nkind: Secret\nmetadata:\nname: keycloak\nnamespace: edp-vault\ndata:\npassword: cGFzcw==  # pass in base64\nusername: dXNlcg==  # user in base64\ntype: Opaque\n
                3. In the edp-vault namespace, create a Role with a permission to read secrets:

                  apiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\nnamespace: edp-vault\nname: external-secret-store\nrules:\n- apiGroups: [\"\"]\nresources:\n- secrets\nverbs:\n- get\n- list\n- watch\n- apiGroups:\n- authorization.k8s.io\nresources:\n- selfsubjectrulesreviews\nverbs:\n- create\n
                4. In the edp-vault namespace, create a ServiceAccount used by SecretStore:

                  apiVersion: v1\nkind: ServiceAccount\nmetadata:\nname: secret-manager\nnamespace: edp\n
                5. Connect the Role from the edp-vault namespace with the ServiceAccount in the edp namespace:

                  apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\nname: eso-from-edp\nnamespace: edp-vault\nsubjects:\n- kind: ServiceAccount\nname: secret-manager\nnamespace: edp\nroleRef:\napiGroup: rbac.authorization.k8s.io\nkind: Role\nname: external-secret-store\n
                6. Create a SecretStore in the edp namespace, and use ServiceAccount for authentication:

                  apiVersion: external-secrets.io/v1beta1\nkind: SecretStore\nmetadata:\nname: edp-vault\nnamespace: edp\nspec:\nprovider:\nkubernetes:\nremoteNamespace: edp-vault  # namespace with secrets\nauth:\nserviceAccount:\nname: secret-manager\nserver:\ncaProvider:\ntype: ConfigMap\nname: kube-root-ca.crt\nkey: ca.crt\n
                7. Each secret must be defined by the ExternalSecret object. A code example below creates the keycloak secret in the edp namespace based on a secret with the same name in the edp-vault namespace:

                  apiVersion: external-secrets.io/v1beta1\nkind: ExternalSecret\nmetadata:\nname: keycloak\nnamespace: edp\nspec:\nrefreshInterval: 1h\nsecretStoreRef:\nkind: SecretStore\nname: edp-vault\n# target:\n#   name: secret-to-be-created  # name of the k8s Secret to be created. metadata.name used if not defined\ndata:\n- secretKey: username       # key to be created\nremoteRef:\nkey: keycloak           # remote secret name\nproperty: username      # value will be fetched from this field\n- secretKey: password       # key to be created\nremoteRef:\nkey: keycloak           # remote secret name\nproperty: password      # value will be fetched from this field\n

                Apply the same approach for enabling secrets management in the namespaces used for microservices development, such as sit and qa on the diagram above.
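
                For example, a SecretStore for a development namespace might look like the sketch below. The sit namespace and the secret-manager ServiceAccount are placeholders, and an analogous Role and RoleBinding are assumed to exist in the edp-cicd-vault namespace:

                apiVersion: external-secrets.io/v1beta1
                kind: SecretStore
                metadata:
                  name: edp-cicd-vault
                  namespace: sit                       # development namespace
                spec:
                  provider:
                    kubernetes:
                      remoteNamespace: edp-cicd-vault  # namespace with the development secrets
                      auth:
                        serviceAccount:
                          name: secret-manager         # ServiceAccount bound to a read-only Role in edp-cicd-vault
                      server:
                        caProvider:
                          type: ConfigMap
                          name: kube-root-ca.crt
                          key: ca.crt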

                "},{"location":"operator-guide/external-secrets-operator-integration/#aws-systems-manager-parameter-store","title":"AWS Systems Manager Parameter Store","text":"

                AWS SSM Parameter Store can be used as a Secret Provider for ESO. For EDP, it is recommended to use the IAM Roles For Service Accounts approach (see a diagram below).

                "},{"location":"operator-guide/external-secrets-operator-integration/#aws-parameter-store-in-edp-scenario","title":"AWS Parameter Store in EDP Scenario","text":"

                In order to install EDP, a list of passwords must be created. Follow the steps below to get secrets from the SSM:

                1. In the AWS, create an AWS IAM policy and an IAM role used by ServiceAccount in SecretStore. The IAM role must have permissions to get values from the SSM Parameter Store.

                  a. Create an IAM policy that allows getting values from the Parameter Store under the edp/ path. Use your AWS Region and AWS Account Id:

                  {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Sid\": \"VisualEditor0\",\n\"Effect\": \"Allow\",\n\"Action\": \"ssm:GetParameter*\",\n\"Resource\": \"arn:aws:ssm:eu-central-1:012345678910:parameter/edp/*\"\n}\n]\n}\n

                  b. Create an AWS IAM role with trust relationships (defined below) and attach the IAM policy. Put your string for Federated value (see more on IRSA enablement for EKS Cluster) and AWS region.

                  {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Effect\": \"Allow\",\n\"Principal\": {\n\"Federated\": \"arn:aws:iam::012345678910:oidc-provider/oidc.eks.eu-central-1.amazonaws.com/id/XXXXXXXXXXXXXXXXXX\"\n},\n\"Action\": \"sts:AssumeRoleWithWebIdentity\",\n\"Condition\": {\n\"StringLike\": {\n\"oidc.eks.eu-central-1.amazonaws.com/id/XXXXXXXXXXXXXXXXXX:sub\": \"system:serviceaccount:edp:*\"\n}\n}\n}\n]\n}\n
                2. Create a secret in the AWS Parameter Store with the name /edp/my-json-secret. This secret is represented as a parameter of type string within the AWS Parameter Store:

                  View: Parameter Store JSON
                  {\n\"keycloak\":\n{\n\"username\": \"keycloak-username\",\n\"password\": \"keycloak-password\"\n},\n\"defectdojo-ciuser-token\":\n{\n\"token\": \"XXXXXXXXXXXX\",\n\"url\": \"https://defectdojo.example.com\"\n},\n\"kaniko-docker-config\":\n{\n\"auths\" :\n{\n\"registry.com\":\n{\n\"username\":\"registry-username\",\n\"password\":\"registry-password\",\n\"auth\": \"<base64 encoded 'user:secret' string>\"\n}\n}},\n\"regcred\":\n{\n\"auths\":\n{\n\"registry.com\":\n{\n\"username\":\"registry-username\",\n\"password\":\"registry-password\",\n\"auth\":\"<base64 encoded 'user:secret' string>\"\n}\n}},\n\"github-config\":\n{\n\"id_rsa\": \"id-rsa-key\",\n\"token\": \"github-token\",\n\"secretString\": \"XXXXXXXXXXXX\"\n},\n\"gitlab-config\":\n{\n\"id_rsa\": \"id-rsa-key\",\n\"token\": \"gitlab-token\",\n\"secretString\": \"XXXXXXXXXXXX\"\n},\n\"jira-user\":\n{\n\"username\": \"jira-username\",\n\"password\": \"jira-password\"\n},\n\"sonar-ciuser-token\": { \"username\": \"<ci-user>\",  \"secret\": \"<secret>\" },\n\"nexus-ci-user\": { \"username\": \"<ci.user>\",  \"password\": \"<secret>\" },\n\"oauth2-proxy-cookie-secret\": { \"cookie-secret\": \"XXXXXXXXXXXX\" },\n\"nexus-proxy-cookie-secret\": { \"cookie-secret\": \"XXXXXXXXXXXX\" },\n\"keycloak-client-headlamp-secret\":  \"XXXXXXXXXXXX\",\n\"keycloak-client-argo-secret\":  \"XXXXXXXXXXXX\"\n}\n
                3. Enable the External Secrets Operator by updating the values.yaml file:

                  EDP install values.yaml
                  externalSecrets:\nenabled: true\n
                4. Install/upgrade edp-install:

                  helm upgrade --install edp epamedp/edp-install --wait --timeout=900s \\\n--version <edp_version> \\\n--values values.yaml \\\n--namespace edp \\\n--atomic\n
                "},{"location":"operator-guide/external-secrets-operator-integration/#related-articles","title":"Related Articles","text":"
                • Install External Secrets Operator
                "},{"location":"operator-guide/github-debug-webhooks/","title":"Debug GitHub Webhooks in Jenkins","text":"

                A webhook enables third-party services like GitHub to send real-time updates to an application. Updates are triggered by an event or action on the webhook provider side (for example, a push to a repository or a Pull Request creation) and are pushed to the application, in this case Jenkins, via HTTP requests. The GitHub Jenkins job provisioner creates a webhook in the GitHub repository during the Create release pipeline once Integrate GitHub/GitLab in Jenkins is enabled and the GitHub Webhook Configuration is completed.

                The Jenkins setup in EDP uses the following plugins responsible for listening on GitHub webhooks:

                • GitHub plugin is configured to listen on Push events.
                • GitHub Pull Request Builder is configured to listen on Pull Request events.

                In case of any issues with webhooks, try the following solutions:

                1. Check that the firewalls are configured to accept the incoming traffic from the IP address range that is described in the GitHub documentation.

                2. Check that GitHub Personal Access Token is correct and has sufficient scope permissions.

                3. Check that the job has run at least once before using the hook (once an application is created in EDP, the build job should be run automatically in Jenkins).

                4. Check that both the Push and issue comment webhook and the Pull Request webhook are created on the GitHub side (unlike GitLab, GitHub does not need separate webhooks for each branch):

                  • Go to the GitHub repository -> Settings -> Webhooks.

                  Webhooks settings

                5. Click each webhook and check if the event delivery is successful:

                  • The webhook Payload URL must be https://jenkins-the-host.com/github-webhook/ for the GitHub plugin and https://jenkins-the-host.com/ghprbhook/ for the GitHub Pull Request Builder.
                  • The content type must be application/json for Push events and application/x-www-form-urlencoded for Pull Request events.
                  • The html_url in the Payload request must match the repository URL and be without .git at the end of the URL.
                6. Check that the X-Hub-Signature secret is verified. It is provided by the Jenkins GitHub plugin for Push events and by the GitHub Pull Request Builder plugin for Pull Request events. The Secret field is optional. Nevertheless, if incorrect, it can prevent webhook events.

                  For the GitHub plugin (Push events):

                  • Go to Jenkins -> Manage Jenkins -> Configure System, and find the GitHub plugin section.
                  • Select Advanced -> Shared secrets to add the secret via the Jenkins Credentials Provider.

                  For the GitHub Pull Request Builder (Pull Request events):

                  • Go to Jenkins -> Manage Jenkins -> Configure System, and find the GitHub Pull Request Builder plugin section.
                  • Check Shared secret that can be added manually.
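
                  If in doubt, the expected signature can be recomputed locally and compared with the X-Hub-Signature header (sha1=<digest>). A quick check, assuming the delivery payload is saved to payload.json and the shared secret is exported as SECRET:

                    # prints the HMAC-SHA1 digest that GitHub sends as X-Hub-Signature: sha1=<digest>
                    openssl dgst -sha1 -hmac "$SECRET" < payload.json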
                7. Redeliver events by clicking the Redeliver button and check the Response body.

                  Manage webhook

                  Note

                  Use Postman to debug webhooks. Add all headers to Postman from the webhook Request -> Headers field and send the payload (Request body) using the appropriate content type.

                  Examples for Push and Pull Request events:

                  Postman push event payload headers GitHub plugin push events

                  The response in the Jenkins log:

                  Jan 17, 2022 8:51:14 AM INFO org.jenkinsci.plugins.github.webhook.subscriber.PingGHEventSubscriber onEvent\nPING webhook received from repo <https://github.com/user-profile/user-repo>!\n

                  Postman pull request event payload headers GitHub pull request builder

                  The response in the Jenkins log:

                  Jan 17, 2022 8:17:53 AM FINE org.jenkinsci.plugins.ghprb.GhprbRootAction\nGot payload event: ping\n
                8. Check that the repo pushing to Jenkins, the GitHub project URL in the project configuration, and the repos in the pipeline Job are lined up.

                9. Enable the GitHub hook trigger for GITScm polling for the Build job.

                  GitHub hook trigger

                10. Enable the GitHub Pull Request Builder for the Code Review job.

                  GitHub pull request builder

                11. Filter through Jenkins log by using Jenkins custom log recorder:

                  • Go to Manage Jenkins -> System log -> Add new log recorder.
                  • The Push events for the GitHub:

                    Logger / Log Level:

                    • org.jenkinsci.plugins.github.webhook.subscriber.DefaultPushGHEventSubscriber: ALL
                    • com.cloudbees.jenkins.GitHubPushTrigger: ALL
                    • com.cloudbees.jenkins.GitHubWebHook: ALL
                    • org.jenkinsci.plugins.github.webhook.WebhookManager: ALL
                    • org.jenkinsci.plugins.github.webhook.subscriber.PingGHEventSubscriber: ALL
                  • The Pull Request events for the GitHub Pull Request Builder:

                    Logger / Log Level:

                    • org.jenkinsci.plugins.ghprb.GhprbRootAction: ALL
                    • org.jenkinsci.plugins.ghprb.GhprbTrigger: ALL
                    • org.jenkinsci.plugins.ghprb.GhprbPullRequest: ALL
                    • org.jenkinsci.plugins.ghprb.GhprbRepository: ALL

                  Note

                  Below is an example of using the Pipeline script with webhooks for the GitHub plugin implemented in the EDP pipelines:

                  properties([pipelineTriggers([githubPush()])])\n\nnode {\n    git credentialsId: 'github-sshkey', url: 'https://github.com/someone/something.git', branch: 'master'\n}\n

                  Push events may not work correctly with the Job Pipeline script from SCM option in the current version of the GitHub plugin 1.34.1.

                "},{"location":"operator-guide/github-debug-webhooks/#related-articles","title":"Related Articles","text":"
                • GitHub Webhooks
                • Integrate GitHub/GitLab in Jenkins
                • Integrate GitHub/GitLab in Tekton
                • GitHub Webhook Configuration
                • Manage Jenkins CI Pipeline Job Provision
                • GitHub Plugin
                • GitHub Pull Request Builder
                "},{"location":"operator-guide/github-integration/","title":"GitHub Webhook Configuration","text":"

                Follow the steps below to automatically integrate Jenkins with GitHub webhooks.

                Note

                Before applying the GitHub integration, make sure you have already visited the Integrate GitHub/GitLab in Jenkins page.

                1. Ensure the new job provisioner is created, as well as Secret with SSH key and GitServer custom resources.

                2. Ensure the access token for GitHub is created.

                3. Navigate to Dashboard -> Manage Jenkins -> Manage Credentials -> Global -> Add Credentials, and create new credentials with the Secret text kind. In the Secret field, provide the GitHub API token, fill in the ID field with the github-access-token value:

                  Jenkins github credentials

                4. Navigate to Jenkins -> Manage Jenkins -> Configure system -> GitHub, and configure the GitHub server:

                  GitHub plugin config GitHub plugin Shared secrets config

                  Note

                  Keep the Manage hooks checkbox clear since the Job Provisioner automatically creates webhooks in the repository regardless of the checkbox selection. Select Advanced to see the shared secrets that can be used in a webhook Secret field to authenticate payloads from GitHub to Jenkins. The Secret field is optional.

                5. Configure the GitHub Pull Request Builder plugin. This plugin is responsible for listening on Pull Request webhook events and triggering Code Review jobs:

                  Note

                  The Secret field is optional and is used in a webhook Secret field to authenticate payloads from GitHub to Jenkins. For details, please refer to the official GitHub pull request builder plugin documentation.

                  GitHub pull plugin config

                "},{"location":"operator-guide/github-integration/#related-articles","title":"Related Articles","text":"
                • Integrate GitHub/GitLab in Jenkins
                • Integrate GitHub/GitLab in Tekton
                • Adjust Jira Integration
                • Manage Jenkins CI Pipeline Job Provision
                "},{"location":"operator-guide/gitlab-debug-webhooks/","title":"Debug GitLab Webhooks in Jenkins","text":"

                A webhook enables third-party services like GitLab to send real-time updates to the application. Updates are triggered by an event or action on the webhook provider side (for example, a push to a repository or a Merge Request creation) and are pushed to the application, in this case Jenkins, via HTTP requests. The GitLab Jenkins job provisioner creates a webhook in the GitLab repository during the Create release pipeline once the Integrate GitHub/GitLab in Jenkins is enabled and the GitLab Integration is completed.

                The Jenkins setup in EDP uses the GitLab plugin responsible for listening on GitLab webhook Push and Merge Request events.

                In case of any issues with webhooks, try the following solutions:

                1. Check that the firewalls are configured to accept incoming traffic from the IP address range that is described in the GitLab documentation.

                2. Check that GitLab Personal Access Token is correct and has the api scope. If you have used the Project Access Token, make sure that the role is Owner or Maintainer, and it has the api scope.

                3. Check that the job has run at least once before using the hook (once an application is created in EDP, the build job should be run automatically in Jenkins).

                4. Check that both webhooks, one with Push Events and Note Events and one with Merge Requests Events and Note Events, are created on the GitLab side for each branch (unlike GitHub, GitLab must have separate webhooks for each branch).

                  • Go to the GitLab repository -> Settings -> Webhooks:

                  Webhooks list

                5. Click Edit next to each webhook and check if the event delivery is successful. If the webhook is sent, the Recent Deliveries list becomes available. Click View details.

                  Webhooks settings

                  • The URL payload must be similar to the job URL on Jenkins. For example: https://jenkins-server.com/project/project-name/MAIN-Build-job is for the Push events. https://jenkins-server.com/project/project-name/MAIN-Code-review-job is for the Merge Request events.
                  • The content type must be application/json for both events.
                  • The \"web_url\" in the Request body must match the repository URL.
                  • Project \"web_url\", \"path_with_namespace\", \"homepage\" links must be without .git at the end of the URL.
                6. Verify the Secret token (X-Gitlab-Token). This token is taken from the Jenkins job settings (provided by the Jenkins GitLab plugin) and is created by the Job Provisioner:

                  • Go to the Jenkins job and select Configure.
                  • Select Advanced under the Build Triggers and check the Secret token.

                  Secret token is optional and can be empty. Nevertheless, if incorrect, it can prevent webhook events.

                7. Redeliver events by clicking the Resend Request button and check the Response body.

                  Note

                  Use Postman to debug webhooks. Add all headers to Postman from the webhook Request Headers field and send the payload (Request body) using the appropriate content type.

                  Examples for Push and Merge Request events:

                  Postman push request payload headers Push request build pipeline

                  The response in the Jenkins log:

                  Jan 17, 2022 11:26:34 AM INFO com.dabsquared.gitlabjenkins.webhook.GitLabWebHook getDynamic\nWebHook called with url: /project/project-name/MAIN-Build-job\nJan 17, 2022 11:26:34 AM INFO com.dabsquared.gitlabjenkins.trigger.handler.AbstractWebHookTriggerHandler handle\nproject-name/MAIN-Build-job triggered for push.\n

                  Postman merge request payload headers Merge request code review pipeline

                  The response in the Jenkins log:

                  Jan 17, 2022 11:14:58 AM INFO com.dabsquared.gitlabjenkins.webhook.GitLabWebHook getDynamic\nWebHook called with url: /project/project-name/MAIN-Code-review-job\n
                8. Check that the repository pushing to Jenkins and the repository(ies) in the pipeline Job are lined up. GitLab Connection must be defined in the job settings.

                9. Check that the settings in the Build Triggers for the Build job are as follows:

                  Build triggers build pipeline

                10. Check that the settings in the Build Triggers for the Code Review job are as follows:

                  Build triggers code review pipeline

                11. Filter through Jenkins log by using Jenkins custom log recorder:

                  • Go to Manage Jenkins -> System Log -> Add new log recorder.
                  • The Push and Merge Request events for the GitLab:

                    Logger / Log Level:

                    • com.dabsquared.gitlabjenkins.webhook.GitLabWebHook: ALL
                    • com.dabsquared.gitlabjenkins.trigger.handler.AbstractWebHookTriggerHandler: ALL
                    • com.dabsquared.gitlabjenkins.trigger.handler.merge.MergeRequestHookTriggerHandlerImpl: ALL
                    • com.dabsquared.gitlabjenkins.util.CommitStatusUpdater: ALL
                "},{"location":"operator-guide/gitlab-debug-webhooks/#related-articles","title":"Related Articles","text":"
                • GitLab Webhooks
                • Integrate GitHub/GitLab in Jenkins
                • Integrate GitHub/GitLab in Tekton
                • Jenkins Integration With GitLab
                • GitLab Integration
                • Manage Jenkins CI Pipeline Job Provision
                • GitLab Plugin
                "},{"location":"operator-guide/gitlab-integration/","title":"GitLab Webhook Configuration","text":"

                Follow the steps below to automatically create and integrate Jenkins GitLab webhooks.

                Note

                Before applying the GitLab integration, make sure to enable Integrate GitHub/GitLab in Jenkins. For details, please refer to the Integrate GitHub/GitLab in Jenkins page.

                1. Ensure the new job provisioner is created, as well as Secret with SSH key and GitServer custom resources.

                2. Ensure the access token for GitLab is created.

                3. Create the Jenkins Credential ID by navigating to Dashboard -> Manage Jenkins -> Manage Credentials -> Global -> Add Credentials:

                  • Select the Secret text kind.
                  • Select the Global scope.
                  • Secret is the access token that was created earlier.
                  • ID is the gitlab-access-token ID.
                  • Use the description of the current Credential ID.

                  Jenkins credential

                  Warning

                  When using the GitLab integration, a webhook is automatically created. After the removal of the application, the webhook stops working but is not deleted. If necessary, it must be deleted manually.

                  Note

                  The next step is necessary if it is needed to see the status of Jenkins Merge Requests builds in the GitLab CI/CD Pipelines section.

                4. In order to see the status of Jenkins Merge Requests builds in the GitLab CI/CD Pipelines section, configure the GitLab plugin by navigating to Manage Jenkins -> Configure System and filling in the GitLab plugin settings:

                  • Connection name is gitlab.
                  • GitLab host URL is a host URL to GitLab.
                  • Use the gitlab-access-token credentials.

                  GitLab plugin configuration

                  Find below an example of the Merge Requests build statuses in the GitLab CI/CD Pipelines section:

                  GitLab pipelines statuses

                "},{"location":"operator-guide/gitlab-integration/#related-articles","title":"Related Articles","text":"
                • Adjust Jira Integration
                • Integrate GitHub/GitLab in Jenkins
                • Integrate GitHub/GitLab in Tekton
                • Grant Jenkins Access to the Gitlab Project
                • Manage Jenkins CI Pipeline Job Provision
                "},{"location":"operator-guide/gitlabci-integration/","title":"Adjust GitLab CI Tool","text":"

                EDP allows selecting one of two available CI (Continuous Integration) tools, namely: Jenkins or GitLab. The Jenkins tool is available by default. To use the GitLab CI tool, it is required to make it available first.

                Follow the steps below to adjust the GitLab CI tool:

                1. In GitLab, add the environment variables to the project.

                  • To add variables, navigate to Settings -> CI/CD -> Expand Variables -> Add Variable:

                    Gitlab ci environment variables

                  • Apply the necessary variables as they differ in accordance with the cluster OpenShift / Kubernetes, see below:

                    OpenShift environment variables:

                    • DOCKER_REGISTRY_URL: URL to the OpenShift docker registry
                    • DOCKER_REGISTRY_PASSWORD: Service Account token that has access to the registry
                    • DOCKER_REGISTRY_USER: user name
                    • OPENSHIFT_SA_TOKEN: token that can be used to log in to OpenShift

                    Info

                    In order to get access to the Docker registry and OpenShift, use the gitlab-ci ServiceAccount; note that the SA description contains the credentials and secrets:

                    Service account

                    Kubernetes environment variables:

                    • DOCKER_REGISTRY_URL: URL to Amazon ECR
                    • AWS_ACCESS_KEY_ID: auto IAM user access key
                    • AWS_SECRET_ACCESS_KEY: auto IAM user secret access key
                    • K8S_SA_TOKEN: token that can be used to log in to Kubernetes

                    Note

                    To get the access to ECR, it is required to have an auto IAM user that has rights to push/create a repository.

                2. In Admin Console, select the CI tool in the Advanced Settings menu during the codebase creation:

                  Advanced settings

                  Note

                  The selection of the CI tool is available only with the Import strategy.

                3. As soon as the codebase is provisioned, the .gitlab-ci.yml file that describes the pipeline's stages and logic will be created in the repository:

                  .gitlab-ci.yml file presented in repository

                "},{"location":"operator-guide/harbor-oidc/","title":"Harbor OIDC Configuration","text":"

                This page provides instructions for configuring OIDC authorization for Harbor. This enables the use of Single Sign-On (SSO) for authorization in Harbor and allows centralized control over user access and rights through a single configuration point.

                "},{"location":"operator-guide/harbor-oidc/#prerequisites","title":"Prerequisites","text":"

                Before the beginning, ensure your cluster meets the following requirements:

                • Keycloak is installed;
                • EPAM Delivery Platform is installed.
                "},{"location":"operator-guide/harbor-oidc/#configure-keycloak","title":"Configure Keycloak","text":"

                To start, configure Keycloak by creating two Kubernetes resources. Follow the steps below:

                1. Generate the keycloak-client-harbor-secret for Keycloak using either the commands below or using the External Secrets Operator:

                  keycloak_client_harbor_secret=$(openssl rand -base64 32 | head -c 32)\n
                  kubectl -n edp create secret generic keycloak-client-harbor-secret \\\n--from-literal=cookie-secret=${keycloak_client_harbor_secret}\n
                2. Create the KeycloakClient custom resource by applying the HarborKeycloakClient.yaml file in the edp namespace. This custom resource uses the keycloak-client-harbor-secret for the harbor client. As a result, the harbor client is created in Keycloak, and its password is the value of the Kubernetes secret from step 1:

                  View: HarborKeycloakClient.yaml
                  apiVersion: v1.edp.epam.com/v1\nkind: KeycloakClient\nmetadata:\nname: harbor\nspec:\nadvancedProtocolMappers: true\nclientId: harbor\ndirectAccess: true\npublic: false\nsecret: keycloak-client-harbor-secret\ndefaultClientScopes:\n- profile\n- email\n- roles\ntargetRealm: control-plane\nwebUrl: <harbor_endpoint>\nprotocolMappers:\n- name: roles\nprotocol: openid-connect\nprotocolMapper: oidc-usermodel-realm-role-mapper\nconfig:\naccess.token.claim: true\nclaim.name: roles\nid.token.claim: true\nuserinfo.token.claim: true\nmultivalued: true\n
                "},{"location":"operator-guide/harbor-oidc/#configure-harbor","title":"Configure Harbor","text":"

                The next stage is to configure Harbor. Proceed with following the steps below:

                1. Log in to Harbor UI with an account that has Harbor system administrator privileges. To get the administrator password, execute the command below:

                  kubectl get secret harbor -n harbor -o jsonpath='{.data.HARBOR_ADMIN_PASSWORD}' | base64 --decode\n
                2. Navigate to Administration -> Configuration -> Authentication. Configure OIDC using the parameters below:

                  auth_mode: oidc_auth\noidc_name: keycloak\noidc_endpoint: <keycloak_endpoint>/auth/realms/control-plane\noidc_client_id: harbor\noidc_client_secret: <keycloak-client-harbor-secret>\noidc_groups_claim: roles\noidc_admin_group: administrator\noidc_scope: openid,email,profile,roles\nverify_certificate: true\noidc_auto_onboard: true\noidc_user_claim: preferred_username\n

                  Harbor Authentication Configuration

                As a result, users will be prompted to authenticate themselves when logging in to Harbor UI.

                "},{"location":"operator-guide/harbor-oidc/#related-articles","title":"Related Articles","text":"
                • Configure Access Token Lifetime
                • EKS OIDC With Keycloak
                • External Secrets Operator Integration
                • Integrate Harbor With EDP Pipelines
                "},{"location":"operator-guide/headlamp-oidc/","title":"Headlamp OIDC Configuration","text":"

                This page provides instructions for configuring OIDC authorization for the EDP Portal UI. This enables SSO for authorization in the Portal and allows controlling user access and rights from a single configuration point.

                "},{"location":"operator-guide/headlamp-oidc/#prerequisites","title":"Prerequisites","text":"

                Ensure the following values are set first before starting the Portal OIDC configuration:

                1. realm_id = openshift

                2. client_id = kubernetes

                3. keycloak_client_key= keycloak_client_secret_key (received from: Openshift realm -> clients -> kubernetes -> Credentials -> Client secret)

                4. group = edp-oidc-admins, edp-oidc-builders, edp-oidc-deployers, edp-oidc-developers, edp-oidc-viewers (Should be created manually in the realm from point 1)

                Note

                The values indicated above are the result of the Keycloak configuration as an OIDC identity provider. To receive them, follow the instructions on the Keycloak OIDC EKS Configuration page.
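
                The groups from point 4 can be created manually in the Keycloak UI or, alternatively, declaratively with the keycloak-operator. A minimal sketch for a single group is shown below; the realm value is an assumption and must reference the KeycloakRealm custom resource that manages the openshift realm in your installation:

                apiVersion: v1.edp.epam.com/v1
                kind: KeycloakRealmGroup
                metadata:
                  name: edp-oidc-admins
                  namespace: edp
                spec:
                  realm: openshift        # assumption: name of the KeycloakRealm custom resource for the openshift realm
                  name: edp-oidc-admins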

                "},{"location":"operator-guide/headlamp-oidc/#configure-keycloak","title":"Configure Keycloak","text":"

                To proceed with the Keycloak configuration, perform the following:

                1. Add the URL of the Headlamp to the valid_redirect_uris variable in Keycloak:

                  View: keycloak_openid_client
                    valid_redirect_uris = [\n\"https://edp-headlamp-edp.<dns_wildcard>/*\"\n\"http://localhost:8000/*\"\n]\n

                  Make sure to define the following Keycloak client values as indicated:

                  Keycloak client configuration

                2. Configure the Keycloak client key in Kubernetes using the Kubernetes secrets or the External Secrets Operator:

                  apiVersion: v1\nkind: Secret\nmetadata:\nname: keycloak-client-headlamp-secret\nnamespace: edp\ntype: Opaque\nstringData:\nclientSecret: <keycloak_client_secret_key>\n
                3. Assign user to one or more groups in Keycloak.

                "},{"location":"operator-guide/headlamp-oidc/#integrate-headlamp-with-kubernetes","title":"Integrate Headlamp With Kubernetes","text":"

                Headlamp can be integrated in Kubernetes in three steps:

                1. Update the values.yaml file by enabling OIDC:

                  View: values.yaml
                  edp-headlamp:\nconfig:\noidc:\nenabled: true\n
                2. Navigate to Headlamp and log in by clicking the Sign In button:

                  Headlamp login page

                3. Go to EDP section -> Account -> Settings, and set up a namespace:

                  Headlamp namespace settings

                As a result, it is possible to control access and rights from the Keycloak endpoint.

                "},{"location":"operator-guide/headlamp-oidc/#related-articles","title":"Related Articles","text":"
                • Configure Access Token Lifetime
                • EKS OIDC With Keycloak
                • External Secrets Operator
                "},{"location":"operator-guide/import-strategy-jenkins/","title":"Integrate GitHub/GitLab in Jenkins","text":"

                This page describes how to integrate EDP with GitLab or GitHub in case of following the Jenkins deploy scenario.

                "},{"location":"operator-guide/import-strategy-jenkins/#integration-procedure","title":"Integration Procedure","text":"

                To start, it is required to add both the Secret with the SSH key and the GitServer custom resource by taking the steps below:

                1. Generate an SSH key pair and add a public key to GitLab or GitHub account.

                  ssh-keygen -t ed25519 -C \"email@example.com\"\n
                2. Generate access token for GitLab or GitHub account with read/write access to the API. Both personal and project access tokens are applicable.

                  GitHubGitLab

                  To create access token in GitHub, follow the steps below:

                  • Log in to GitHub.
                  • Click the profile account and navigate to Settings -> Developer Settings.
                  • Select Personal access tokens (classic) and generate a new token with the following parameters:

                  Repo permission

                  Note

                  The access below is required for the GitHub Pull Request Builder plugin to get Pull Request commits, their status, and author info.

                  Admin permission User permission

                  Warning

                  Make sure to save the new personal access token because it won't be displayed later.

                  To create access token in GitLab, follow the steps below:

                  • Log in to GitLab.
                  • In the top-right corner, click the avatar and select Settings.
                  • On the User Settings menu, select Access Tokens.
                  • Choose a name and an optional expiry date for the token.
                  • In the Scopes block, select the api scope for the token.

                  Personal access tokens

                  • Click the Create personal access token button.

                  Note

                  Make sure to save the access token as there will not be any ability to access it once again.

                  In case you want to create a project access token instead of a personal one, the GitLab Jenkins plugin will be able to accept payloads from webhooks for the project only:

                  • Log in to GitLab and navigate to the project.
                  • On the User Settings menu, select Access Tokens.
                  • Choose a name and an optional expiry date for the token.
                  • Choose a role: Owner or Maintainer.
                  • In the Scopes block, select the api scope for the token.

                  Project access tokens

                  • Click the Create project access token button.
                3. Create a secret in the edp namespace for the Git account with the id_rsa, username, and token fields. We recommend using EDP Portal for this:

                  • Open EDP Portal URL. Use the Sign-In option:

                    Logging screen

                  • In the top right corner, enter the Cluster settings and set the Default namespace. The Allowed namespaces field is optional. All the resources created via EDP Portal are created in the Default namespace whereas Allowed namespaces means the namespaces you are allowed to access in this cluster:

                    Cluster settings

                  • Log into EDP Portal UI, select EDP -> Git Servers -> + to see the Create Git Server menu:

                    Git Servers overview

                  • Choose your Git provider, insert Host, Access token, Private SSH key. Adjust SSH port, User and HTTPS port if needed and click Apply:

                    Note

                    Do not forget to press enter at the very end of the private key to have the last row empty.

                    Create Git Servers menu

                  • After performing the steps above, two Kubernetes custom resources will be created in the default namespace: secret and GitServer. EDP Portal appends random symbols to both the secret and the GitServer to provide names with uniqueness. Also, the attempt to connect to your actual Git server will be performed. If the connection with the server is established, the Git server status should be green:

                    Git server status

                    Note

                    The value of the nameSshKeySecret property is the name of the Secret that is indicated in the first step above.

                4. Create the JenkinsServiceAccount custom resource with the credentials field that corresponds to the nameSshKeySecret property above:

                  apiVersion: v2.edp.epam.com/v1\nkind: JenkinsServiceAccount\nmetadata:\nname: gitlab # It can also be github.\nnamespace: edp\nspec:\ncredentials: <nameSshKeySecret>\nownerName: ''\ntype: ssh\n
                5. Double-check that the new SSH credentials called gitlab/github are created in Jenkins using the SSH key. Navigate to Jenkins -> Manage Jenkins -> Manage Credentials -> (global):

                  Jenkins credentials

                6. Create a new job provisioner by following the instructions for GitHub or GitLab. The job provisioner creates a job suite for an application added to EDP and also creates webhooks for the project in GitLab using a GitLab token.

                7. Configure GitHub or GitLab plugins in Jenkins.

                "},{"location":"operator-guide/import-strategy-jenkins/#related-articles","title":"Related Articles","text":"
                • Add Git Server
                • Add Application
                • GitHub Webhook Configuration
                • GitLab Webhook Configuration
                "},{"location":"operator-guide/import-strategy-tekton/","title":"Integrate GitHub/GitLab in Tekton","text":"

                This page describes how to integrate EDP with GitLab or GitHub Version Control System.

                "},{"location":"operator-guide/import-strategy-tekton/#integration-procedure","title":"Integration Procedure","text":"

                To start, add a Secret with the SSH key and API token, as well as a GitServer resource, by taking the steps below.

                1. Generate an SSH key pair and add a public key to GitLab or GitHub account.

                  ssh-keygen -t ed25519 -C \"email@example.com\"\n
                2. Generate an access token for the GitLab or GitHub account with read/write access to the API. Both personal and project access tokens are applicable.

                  GitHubGitLab

                  To create an access token in GitHub, follow the steps below:

                  • Log in to GitHub.
                  • Click the profile account and navigate to Settings -> Developer Settings.
                  • Select Personal access tokens (classic) and generate a new token with the following parameters:

                  Repo permission

                  Note

                  The access below is required for the GitHub Pull Request Builder plugin to get Pull Request commits, their status, and author info.

                  Admin permission User permission

                  Warning

                  Make sure to save the new personal access token because it won't be displayed later.

                  To create an access token in GitLab, follow the steps below:

                  • Log in to GitLab.
                  • In the top-right corner, click the avatar and select Settings.
                  • On the User Settings menu, select Access Tokens.
                  • Choose a name and an optional expiry date for the token.
                  • In the Scopes block, select the api scope for the token.

                  Personal access tokens

                  • Click the Create personal access token button.

                  Note

                  Make sure to save the access token, as you will not be able to view it again.

                  In case you want to create a project access token instead of a personal one, take the following steps:

                  • Log in to GitLab and navigate to the project.
                  • On the User Settings menu, select Access Tokens.
                  • Choose a name and an optional expiry date for the token.
                  • Choose a role: Owner or Maintainer.
                  • In the Scopes block, select the api scope for the token.

                  Project access tokens

                  • Click the Create project access token button.
                3. Create a secret in the edp namespace for the Git account with the id_rsa, username, and token fields. Take the following template as an example (use ci-github instead of ci-gitlab for GitHub):

                  kubectl create secret generic ci-gitlab -n edp \\\n--from-file=id_rsa=id_rsa \\\n--from-literal=username=git \\\n--from-literal=token=your_gitlab_access_token\n
                "},{"location":"operator-guide/import-strategy-tekton/#related-articles","title":"Related Articles","text":"
                • Add Git Server
                • Add Application
                • GitHub WebHook Configuration
                • GitLab WebHook Configuration
                "},{"location":"operator-guide/import-strategy/","title":"Enable VCS Import Strategy","text":"

                Enabling the VCS Import strategy is a prerequisite to integrate EDP with GitLab or GitHub.

                "},{"location":"operator-guide/import-strategy/#general-steps","title":"General Steps","text":"

                In order to use the Import strategy, it is required to add both Secret with SSH key and GitServer custom resources by taking the steps below.

                1. Generate an SSH key pair and add a public key to GitLab or GitHub account.

                  ssh-keygen -t ed25519 -C \"email@example.com\"\n
                2. Generate an access token for the GitLab or GitHub account with read/write access to the API. Both personal and project access tokens are applicable.

                GitHubGitLab

                To create an access token in GitHub, follow the steps below:

                • Log in to GitHub.
                • Click the profile account and navigate to Settings -> Developer Settings.
                • Select Personal access tokens (classic) and generate a new token with the following parameters:

                Repo permission

                Note

                The access below is required for the GitHub Pull Request Builder plugin to get Pull Request commits, their status, and author info.

                Admin permission User permission

                Warning

                Make sure to save the new personal access token because it won't be displayed later.

                To create an access token in GitLab, follow the steps below:

                • Log in to GitLab.
                • In the top-right corner, click the avatar and select Settings.
                • On the User Settings menu, select Access Tokens.
                • Choose a name and an optional expiry date for the token.
                • In the Scopes block, select the api scope for the token.

                Personal access tokens

                • Click the Create personal access token button.

                Note

                Make sure to save the access token, as you will not be able to view it again.

                If you create a project access token instead of a personal one, the GitLab Jenkins plugin will only be able to accept webhook payloads for that project:

                • Log in to GitLab and navigate to the project.
                • On the User Settings menu, select Access Tokens.
                • Choose a name and an optional expiry date for the token.
                • Choose a role: Owner or Maintainer.
                • In the Scopes block, select the api scope for the token.

                Project access tokens

                • Click the Create project access token button.
                "},{"location":"operator-guide/import-strategy/#ci-tool-specific-steps","title":"CI Tool Specific Steps","text":"

                The further steps depend on the CI tool used.

                Tekton CI toolJenkins CI tool
                1. Create a secret in the edp-project namespace for the Git account with the id_rsa, username, and token fields. Take the following template as an example (use github instead of gitlab for GitHub):

                  kubectl create secret generic gitlab -n edp \\\n--from-file=id_rsa=id_rsa \\\n--from-literal=username=git \\\n--from-literal=token=your_gitlab_access_token\n
                2. After completing the steps above, you can go back and continue installing EDP.

                1. Create a secret in the edp namespace for the Git account with the id_rsa, username, and token fields. We recommend using EDP Portal to implement this:

                  Open EDP Portal URL. Use the Sign-In option:

                  Logging screen

                  In the top right corner, enter the Cluster settings and set the Default namespace. The Allowed namespaces field is optional. All the resources created via EDP Portal are created in the Default namespace whereas Allowed namespaces means the namespaces you are allowed to access in this cluster:

                  Cluster settings

                  Log into EDP Portal UI, select EDP -> Git Servers -> + to see the Create Git Server menu:

                  Git Servers overview

                  Choose your Git provider, insert Host, Access token, Private SSH key. Adjust SSH port, User and HTTPS port if needed and click Apply:

                  Note

                  Do not forget to press enter at the very end of the private key to have the last row empty.

                  Create Git Servers menu

                  When everything is done, two resources are created in the default namespace: a Secret and a GitServer custom resource. EDP Portal appends random symbols to both names to keep them unique. EDP Portal also attempts to connect to your Git server; if everything is correct, the Git server status should be green:

                  Git server status

                  Note

                  The value of the nameSshKeySecret property is the name of the Secret that is indicated in the first step above.

                2. Create the JenkinsServiceAccount custom resource with the credentials field that corresponds to the nameSshKeySecret property above:

                  apiVersion: v2.edp.epam.com/v1\nkind: JenkinsServiceAccount\nmetadata:\nname: gitlab # It can also be github.\nnamespace: edp\nspec:\ncredentials: <nameSshKeySecret>\nownerName: ''\ntype: ssh\n
                3. Double-check that the new SSH credentials called gitlab/github are created in Jenkins using the SSH key. Navigate to Jenkins -> Manage Jenkins -> Manage Credentials -> (global):

                  Jenkins credentials

                4. The next step is to create a new job provisioner by following the instructions for GitHub or GitLab. The job provisioner will create a job suite for an application added to EDP. It will also create webhooks for the project in GitLab using a GitLab token.

                5. The next step is to integrate Jenkins with GitHub or GitLab by setting their plugins.

                "},{"location":"operator-guide/import-strategy/#related-articles","title":"Related Articles","text":"
                • Add Git Server
                • Add Application
                • GitHub Webhook Configuration
                • GitLab Webhook Configuration
                "},{"location":"operator-guide/install-argocd/","title":"Install Argo CD","text":"

                Inspect the prerequisites and the main steps to perform for enabling Argo CD in EDP.

                "},{"location":"operator-guide/install-argocd/#prerequisites","title":"Prerequisites","text":"

                The following tools must be installed:

                • Keycloak
                • EDP
                • Kubectl version 1.23.0
                • Helm version 3.10.0
                "},{"location":"operator-guide/install-argocd/#installation","title":"Installation","text":"

                Argo CD enablement for EDP consists of two major steps:

                • Argo CD integration with EDP (SSO enablement, codebase onboarding, etc.)
                • Argo CD installation

                Info

                It is also possible to install Argo CD using the Helmfile. For details, please refer to the Install via Helmfile page.

                "},{"location":"operator-guide/install-argocd/#integrate-with-edp","title":"Integrate With EDP","text":"

                To enable Argo CD integration, ensure that the argocd.enabled flag in the values.yaml file is set to true:
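
                For example, the corresponding fragment of the EDP values.yaml may look as follows (a minimal sketch; keep the rest of your values unchanged):

                ...\nargocd:\n  enabled: true\n  # -- Optional. ArgoCD URL in format schema://URI; by default https://argocd.{{ .Values.global.dnsWildCard }}\n  url: \"\"\n...\n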

                "},{"location":"operator-guide/install-argocd/#install-with-helm","title":"Install With Helm","text":"

                Argo CD can be installed in several ways; please follow the official documentation for more details.

                Follow the steps below to install Argo CD using Helm:

                For the OpenShift users:

                When using the OpenShift platform, apply the SecurityContextConstraints resource. Change the namespace in the users section if required.

                View: argocd-scc.yaml

                allowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegeEscalation: true\nallowPrivilegedContainer: false\nallowedCapabilities: null\napiVersion: security.openshift.io/v1\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- min: 99\nmax: 65543\ngroups: []\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: argo-redis-ha\npriority: 1\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities:\n- KILL\n- MKNOD\n- SETUID\n- SETGID\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMin: 1\nuidRangeMax: 65543\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nseccompProfiles:\n- '*'\nusers:\n- system:serviceaccount:argocd:argo-redis-ha\n- system:serviceaccount:argocd:argo-redis-ha-haproxy\n- system:serviceaccount:argocd:argocd-notifications-controller\n- system:serviceaccount:argocd:argo-argocd-repo-server\n- system:serviceaccount:argocd:argocd-server\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n

                1. Check out the values.yaml file sample of the Argo CD customization, which is based on the HA mode without autoscaling:

                  View: kubernetes-values.yaml
                  redis-ha:\nenabled: true\n\ncontroller:\nenableStatefulSet: true\n\nserver:\nreplicas: 2\nextraArgs:\n- \"--insecure\"\nenv:\n- name: ARGOCD_API_SERVER_REPLICAS\nvalue: '2'\ningress:\nenabled: true\nhosts:\n- \"argocd.<Values.global.dnsWildCard>\"\nconfig:\n# required when SSO is enabled\nurl: \"https://argocd.<.Values.global.dnsWildCard>\"\napplication.instanceLabelKey: argocd.argoproj.io/instance-edp\noidc.config: |\nname: Keycloak\nissuer: https://<.Values.global.keycloakEndpoint>/auth/realms/edp-main\nclientID: argocd\nclientSecret: $oidc.keycloak.clientSecret\nrequestedScopes:\n- openid\n- profile\n- email\n- groups\nrbacConfig:\n# users may be still be able to login,\n# but will see no apps, projects, etc...\npolicy.default: ''\nscopes: '[groups]'\npolicy.csv: |\n# default global admins\ng, ArgoCDAdmins, role:admin\n\nconfigs:\nparams:\napplication.namespaces: edp\n\nrepoServer:\nreplicas: 2\n\n# we use Keycloak so no DEX is required\ndex:\nenabled: false\n\n# Disabled for multitenancy env with single instance deployment\napplicationSet:\nenabled: false\n
                  View: openshift-values.yaml
                  redis-ha:\nenabled: true\n\ncontroller:\nenableStatefulSet: true\n\nserver:\nreplicas: 2\nextraArgs:\n- \"--insecure\"\nenv:\n- name: ARGOCD_API_SERVER_REPLICAS\nvalue: '2'\nroute:\nenabled: true\nhostname: \"argocd.<.Values.global.dnsWildCard>\"\ntermination_type: edge\ntermination_policy: Redirect\nconfig:\n# required when SSO is enabled\nurl: \"https://argocd.<.Values.global.dnsWildCard>\"\napplication.instanceLabelKey: argocd.argoproj.io/instance-edp\noidc.config: |\nname: Keycloak\nissuer: https://<.Values.global.keycloakEndpoint>/auth/realms/edp-main\nclientID: argocd\nclientSecret: $oidc.keycloak.clientSecret\nrequestedScopes:\n- openid\n- profile\n- email\n- groups\nrbacConfig:\n# users may be still be able to login,\n# but will see no apps, projects, etc...\npolicy.default: ''\nscopes: '[groups]'\npolicy.csv: |\n# default global admins\ng, ArgoCDAdmins, role:admin\n\nconfigs:\nparams:\napplication.namespaces: edp\n\nrepoServer:\nreplicas: 2\n\n# we use Keycloak so no DEX is required\ndex:\nenabled: false\n\n# Disabled for multitenancy env with single instance deployment\napplicationSet:\nenabled: false\n

                  Populate Argo CD values with the values from the EDP values.yaml:

                  • <.Values.global.dnsWildCard> is the EDP DNS WildCard.
                  • <.Values.global.keycloakEndpoint> is the Keycloak Hostname.
                  • We use edp namespace.
                2. Run the installation:

                  kubectl create ns argocd\nhelm repo add argo https://argoproj.github.io/argo-helm\nhelm install argo --version 5.33.1 argo/argo-cd -f values.yaml -n argocd\n
                3. Update the argocd-secret secret in the argocd namespace by providing the correct Keycloak client secret (oidc.keycloak.clientSecret) with the value from the keycloak-client-argocd-secret secret in the EDP namespace. Then restart the deployment:

                  ARGOCD_CLIENT=$(kubectl -n edp get secret keycloak-client-argocd-secret  -o jsonpath='{.data.clientSecret}')\nkubectl -n argocd patch secret argocd-secret -p=\"{\\\"data\\\":{\\\"oidc.keycloak.clientSecret\\\": \\\"${ARGOCD_CLIENT}\\\"}}\" -v=1\nkubectl -n argocd rollout restart deployment argo-argocd-server\n
                "},{"location":"operator-guide/install-argocd/#related-articles","title":"Related Articles","text":"
                • Argo CD Integration
                • Install via Helmfile
                "},{"location":"operator-guide/install-defectdojo/","title":"Install DefectDojo","text":"

                Inspect the main steps to perform for installing DefectDojo via Helm Chart.

                Info

                It is also possible to install DefectDojo using the EDP add-ons approach. For details, please refer to the corresponding add-ons documentation.

                "},{"location":"operator-guide/install-defectdojo/#prerequisites","title":"Prerequisites","text":"
                • Kubectl version 1.26.0 is installed.
                • Helm version 3.12.0+ is installed.
                "},{"location":"operator-guide/install-defectdojo/#installation","title":"Installation","text":"

                Info

                Please refer to the DefectDojo Helm Chart and Deploy DefectDojo into the Kubernetes cluster sections for details.

                To install DefectDojo, follow the steps below:

                1. Check that the defectdojo namespace is created. If not, run the following command to create it:

                  kubectl create namespace defectdojo\n

                  For the OpenShift users:

                  When using the OpenShift platform, install the SecurityContextConstraints resource. If you use a custom namespace for DefectDojo, change the namespace in the users section.

                  View: defectdojo-scc.yaml

                  allowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegeEscalation: true\nallowPrivilegedContainer: false\nallowedCapabilities: null\napiVersion: security.openshift.io/v1\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- min: 999\nmax: 65543\ngroups: []\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: defectdojo\npriority: 1\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities:\n- KILL\n- MKNOD\n- SETUID\n- SETGID\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMin: 1\nuidRangeMax: 65543\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nusers:\n- system:serviceaccount:defectdojo:defectdojo\n- system:serviceaccount:defectdojo:defectdojo-rabbitmq\n- system:serviceaccount:defectdojo:default\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n

                2. Add a chart repository:

                  helm repo add defectdojo 'https://raw.githubusercontent.com/DefectDojo/django-DefectDojo/helm-charts'\nhelm repo update\n
                3. Create PostgreSQL admin secret:

                  kubectl -n defectdojo create secret generic defectdojo-postgresql-specific \\\n--from-literal=postgresql-password=<postgresql_password> \\\n--from-literal=postgresql-postgres-password=<postgresql_postgres_password>\n

                  Note

                  The postgresql_password and postgresql_postgres_password passwords must be 16 characters long.

                4. Create Rabbitmq admin secret:

                  kubectl -n defectdojo create secret generic defectdojo-rabbitmq-specific \\\n--from-literal=rabbitmq-password=<rabbitmq_password> \\\n--from-literal=rabbitmq-erlang-cookie=<rabbitmq_erlang_cookie>\n

                  Note

                  The rabbitmq_password password must be 10 characters long.

                  The rabbitmq_erlang_cookie password must be 32 characters long.

                5. Create DefectDojo admin secret:

                  kubectl -n defectdojo create secret generic defectdojo \\\n--from-literal=DD_ADMIN_PASSWORD=<dd_admin_password> \\\n--from-literal=DD_SECRET_KEY=<dd_secret_key> \\\n--from-literal=DD_CREDENTIAL_AES_256_KEY=<dd_credential_aes_256_key> \\\n--from-literal=METRICS_HTTP_AUTH_PASSWORD=<metric_http_auth_password>\n

                  Note

                  The dd_admin_password password must be 22 characters long.

                  The dd_secret_key password must be 128 characters long.

                  The dd_credential_aes_256_key password must be 128 characters long.

                  The metric_http_auth_password password must be 32 characters long.
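
                  A minimal sketch of generating values of the required lengths, assuming the openssl CLI is available (any other generation method works as well):

                    # each openssl rand -hex N call prints a string of 2*N hexadecimal characters\nDD_ADMIN_PASSWORD=$(openssl rand -hex 11)          # 22 characters\nDD_SECRET_KEY=$(openssl rand -hex 64)              # 128 characters\nDD_CREDENTIAL_AES_256_KEY=$(openssl rand -hex 64)  # 128 characters\nMETRICS_HTTP_AUTH_PASSWORD=$(openssl rand -hex 16) # 32 characters\n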

                6. Install DefectDojo v.2.22.4 using defectdojo/defectdojo Helm chart v.1.6.69:

                  helm upgrade --install \\\ndefectdojo \\\n--version 1.6.69 \\\ndefectdojo/defectdojo \\\n--namespace defectdojo \\\n--values values.yaml\n

                  Check out the values.yaml file sample of the DefectDojo customization:

                  View: values.yaml
                  tag: 2.22.4\nfullnameOverride: defectdojo\nhost: defectdojo.<ROOT_DOMAIN>\nsite_url: https://defectdojo.<ROOT_DOMAIN>\nalternativeHosts:\n- defectdojo-django.defectdojo\n\ninitializer:\n# should be false after initial installation was performed\nrun: true\ndjango:\ningress:\nenabled: true # change to 'false' for OpenShift\nactivateTLS: false\nuwsgi:\nlivenessProbe:\n# Enable liveness checks on uwsgi container. Those values are use on nginx readiness checks as well.\n# default value is 120, so in our case 20 is just fine\ninitialDelaySeconds: 20\n
                7. For the OpenShift platform, install a Route:

                  View: defectdojo-route.yaml
                  kind: Route\napiVersion: route.openshift.io/v1\nmetadata:\nname: defectdojo\nnamespace: defectdojo\nspec:\nhost: defectdojo.<ROOT_DOMAIN>\npath: /\ntls:\ninsecureEdgeTerminationPolicy: Redirect\ntermination: edge\nto:\nkind: Service\nname: defectdojo-django\nport:\ntargetPort: http\nwildcardPolicy: None\n
                "},{"location":"operator-guide/install-defectdojo/#configuration","title":"Configuration","text":"

                  To prepare DefectDojo for integration with EDP, follow the steps below:

                  1. Create a ci user in the DefectDojo UI:

                    • Login to DefectDojo UI using admin credentials:
                      echo \"DefectDojo admin password: $(kubectl \\\nget secret defectdojo \\\n--namespace=defectdojo \\\n--output jsonpath='{.data.DD_ADMIN_PASSWORD}' \\\n| base64 --decode)\"\n
                    • Go to the User section.
                    • Create a new user with write permission: DefectDojo set user permission
                  2. Get a token of the DefectDojo user:

                    • Login to the DefectDojo UI using the credentials from previous steps.
                    • Go to the API v2 key (token).
                    • Copy the API key.
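
                    As an alternative to copying the key from the UI, the token can be requested from the DefectDojo API. A hedged sketch, assuming the ci user credentials created above and the defectdojo.example.com hostname (the endpoint path may differ between DefectDojo versions):

                      curl -X POST \"https://defectdojo.example.com/api/v2/api-token-auth/\" \\\n-H \"Content-Type: application/json\" \\\n-d '{\"username\": \"<ci_user>\", \"password\": \"<ci_user_password>\"}'\n# the JSON response contains the \"token\" field\n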
                  3. Provision the secret using EDP Portal, Manifest or with the externalSecrets operator:

                  EDP PortalManifestExternal Secrets Operator

                  Go to EDP Portal -> EDP -> Configuration -> DefectDojo. Update or fill in the URL and Token and click the Save button.

                  DefectDojo update manual secret

                  apiVersion: v1\nkind: Secret\nmetadata:\nname: ci-defectdojo\nnamespace: edp\nlabels:\napp.edp.epam.com/secret-type: defectdojo\nstringData:\nurl: https://defectdojo.example.com\ntoken: <token>\n

                  Store the DefectDojo URL and token in AWS Parameter Store in the following format:

                  \"ci-defectdojo\":\n{\n\"url\": \"https://defectdojo.example.com\",\n\"token\": \"XXXXXXXXXXXX\"\n}\n
                  Go to EDP Portal -> EDP -> Configuration -> DefectDojo and see the Managed by External Secret message.

                  More details about the External Secrets Operator integration procedure can be found in the External Secrets Operator Integration page.

                  After following the instructions provided, you should be able to integrate DefectDojo with the EPAM Delivery Platform using one of the available scenarios.

                  "},{"location":"operator-guide/install-defectdojo/#related-articles","title":"Related Articles","text":"
                  • Install External Secrets Operator
                  • External Secrets Operator Integration
                  • Install Harbor
                  "},{"location":"operator-guide/install-edp/","title":"Install EDP","text":"

                  Inspect the main steps to install EPAM Delivery Platform. Please check the Prerequisites Overview page before starting the installation. There are two recommended ways to deploy EPAM Delivery Platform:

                  • Using Helm (see below);
                  • Using Helmfile.

                  Note

                  The installation process below is given for a Kubernetes cluster. The steps that differ for an OpenShift cluster are indicated in the notes.

                  Disclaimer

                  EDP is aligned with industry standards for storing and managing sensitive data, ensuring optimal security. However, the use of custom solutions introduces uncertainties, so the responsibility for the safety of your data rests entirely with the platform administrator.

                  1. EDP manages secrets via the External Secrets Operator to integrate with a variety of tools. For insights into the secrets in use and how they are consumed, refer to the External Secrets Operator Integration page.

                  2. Create an edp namespace or a Kiosk space depending on whether Kiosk is used or not.

                    • Without Kiosk, create a namespace:

                      kubectl create namespace edp\n

                      Note

                      For an OpenShift cluster, run the oc command instead of the kubectl one.

                    • With Kiosk, create a relevant space:

                      apiVersion: tenancy.kiosk.sh/v1alpha1\nkind: Space\nmetadata:\nname: edp\nspec:\naccount: edp-admin\n

                    Note

                    Kiosk is mandatory for EDP v.2.8.x. It is not implemented for the previous versions, and is optional for EDP since v.2.9.x.

                  3. EDP requires Keycloak access to perform the integration. To see the details on how to configure Keycloak correctly, please refer to the Install Keycloak page.

                  4. Add the EPAM EDP Helm charts to the local client:

                    helm repo add epamedp https://epam.github.io/edp-helm-charts/stable\n
                  5. Choose the required Helm chart version:

                    helm search repo epamedp/edp-install\nNAME                    CHART VERSION   APP VERSION     DESCRIPTION\nepamedp/edp-install     3.4.1           3.4.1           A Helm chart for EDP Install\n

                    Note

                    It is highly recommended to use the latest released version.

                  6. EDP can be integrated with the following version control systems:

                    • Gerrit (by default)
                    • GitHub
                    • GitLab

                    This setting defines in which system application development will be, or is already being, carried out. The global.gitProvider flag in the edp-install chart controls this integration:

                    Gerrit (by default)GitHubGitLab values.yaml
                    ...\nglobal:\ngitProvider: gerrit\n...\n
                    values.yaml
                    ...\nglobal:\ngitProvider: github\n...\n
                    values.yaml
                    ...\nglobal:\ngitProvider: gitlab\n...\n

                    By default, the internal Gerrit server is deployed as a result of EDP deployment. For more details on how to integrate EDP with GitLab or GitHub instead of Gerrit, please refer to the Integrate GitHub/GitLab in Tekton page.

                  7. Configure SonarQube integration. EDP provides two ways to work with SonarQube:

                    • External SonarQube - any SonarQube that is installed separately from EDP. For example, SonarQube that is installed using edp-cluster-add-ons or another public SonarQube server. For more details on how EDP recommends configuring SonarQube to work with the platform, please refer to the SonarQube Integration page.
                    • Internal SonarQube - SonarQube that is installed along with EDP.
                    External SonarQubeInternal SonarQube values.yaml
                    ...\nglobal:\n# -- Optional parameter. Link to use custom sonarqube. Format: http://<service-name>.<sonarqube-namespace>:9000 or http(s)://<endpoint>\nsonarUrl: \"http://sonar.example.com\"\nsonar-operator:\nenabled: false\n...\n

                    This scenario is pre-configured by default; all the necessary values are already pre-defined.

                  8. It is also mandatory to have Nexus configured to run the platform. EDP provides two ways to work with Nexus:

                    • External Nexus - any Nexus that is installed separately from EDP. For example, Nexus that is installed using edp-cluster-add-ons or another public Nexus server. For more details on how EDP recommends configuring Nexus to work with the platform, please refer to the Nexus Sonatype Integration page.
                    • Internal Nexus - Nexus that is installed along with EDP.
                    External NexusInternal Nexus values.yaml
                    ...\nglobal:\n# -- Optional parameter. Link to use custom nexus. Format: http://<service-name>.<nexus-namespace>:8081 or http://<ip-address>:<port>\nnexusUrl: \"http://nexus.example.com\"\nnexus-operator:\nenabled: false\n...\n

                    This scenario is pre-configured by default; all the necessary values are already pre-defined.

                  9. (Optional) Configure Container Registry for image storage.

                    Since EDP v3.4.0, users can configure the Harbor registry instead of AWS ECR and the OpenShift registry. We recommend installing Harbor using our edp-cluster-add-ons, although you can install it in any other way. To integrate EDP with Harbor, see the Harbor integration page.

                    To enable Harbor as a registry storage, use the values below:

                    global:\ndockerRegistry:\ntype: \"harbor\"\nurl: \"harbor.example.com\"\n

                  10. Check the parameters in the EDP installation chart. For details, please refer to the values.yaml file.

                  11. Install EDP in the edp namespace with the Helm tool:

                    helm install edp epamedp/edp-install --wait --timeout=900s \\\n--version <edp_version> \\\n--values values.yaml \\\n--namespace edp\n

                    See the details on the parameters below:

                    Example values.yaml file
                    global:\n# -- platform type that can be either \"kubernetes\" or \"openshift\"\nplatform: \"kubernetes\"\n# DNS wildcard for routing in the Kubernetes cluster;\ndnsWildCard: \"example.com\"\n# -- Administrators of your tenant\nadmins:\n- \"stub_user_one@example.com\"\n# -- Developers of your tenant\ndevelopers:\n- \"stub_user_one@example.com\"\n- \"stub_user_two@example.com\"\n# -- Can be gerrit, github or gitlab. By default: gerrit\ngitProvider: gerrit\n# -- Gerrit SSH node port\ngerritSSHPort: \"22\"\n# Keycloak address with which the platform will be integrated\nkeycloakUrl: \"https://keycloak.example.com\"\ndockerRegistry:\n# -- Docker Registry endpoint\nurl: \"<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com\"\ntype: \"ecr\"\n\n# AWS Region, e.g. \"eu-central-1\"\nawsRegion:\n\nargocd:\n# -- Enable ArgoCD integration\nenabled: true\n# -- ArgoCD URL in format schema://URI\n# -- By default, https://argocd.{{ .Values.global.dnsWildCard }}\nurl: \"\"\n\n# Kaniko configuration section\nkaniko:\n# -- AWS IAM role to be used for kaniko pod service account (IRSA). Format: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<AWS_IAM_ROLE_NAME>\nroleArn:\n\nedp-tekton:\n# Tekton Kaniko configuration section\nkaniko:\n# -- AWS IAM role to be used for kaniko pod service account (IRSA). Format: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<AWS_IAM_ROLE_NAME>\nroleArn:\n\nedp-headlamp:\nconfig:\noidc:\nenabled: false\n

                    Note

                    Set global.platform=openshift while deploying EDP in OpenShift.

                    Info

                    The full installation with integration between tools will take at least 10 minutes.

                  12. To check if the installation is successful, run the command below:

                    helm status <edp-release> -n edp\n
                    You can also check ingress endpoints to get EDP Portal endpoint to enter EDP Portal UI:
                    kubectl describe ingress -n edp\n

                  13. Once EDP is successfully installed, you can navigate to our Use Cases to try out EDP functionality.

                  "},{"location":"operator-guide/install-edp/#related-articles","title":"Related Articles","text":"
                  • Quick Start
                  • Install EDP via Helmfile
                  • Integrate GitHub/GitLab in Jenkins
                  • Integrate GitHub/GitLab in Tekton
                  • GitHub Webhook Configuration
                  • GitLab Webhook Configuration
                  • Set Up Kubernetes
                  • Set Up OpenShift
                  • EDP Installation Prerequisites Overview
                  • Headlamp OIDC Integration
                  "},{"location":"operator-guide/install-external-secrets-operator/","title":"Install External Secrets Operator","text":"

                  Inspect the prerequisites and the main steps to perform for enabling External Secrets Operator in EDP.

                  "},{"location":"operator-guide/install-external-secrets-operator/#prerequisites","title":"Prerequisites","text":"
                  • Kubectl version 1.23.0 is installed. Please refer to the Kubernetes official website for details.
                  • Helm version 3.10.0+ is installed. Please refer to the Helm page on GitHub for details.
                  "},{"location":"operator-guide/install-external-secrets-operator/#installation","title":"Installation","text":"

                  To install External Secrets Operator with Helm, run the following commands:

                  helm repo add external-secrets https://charts.external-secrets.io\n\nhelm install external-secrets \\\nexternal-secrets/external-secrets \\\n--version 0.8.3 \\\n-n external-secrets \\\n--create-namespace\n

                  Info

                  It is also possible to install External Secrets Operator using the Helmfile or Operator Lifecycle Manager (OLM).

                  "},{"location":"operator-guide/install-external-secrets-operator/#related-articles","title":"Related Articles","text":"
                  • External Secrets Operator Integration
                  • Install Harbor
                  "},{"location":"operator-guide/install-harbor/","title":"Install Harbor","text":"

                  EPAM Delivery Platform uses Harbor as storage for application images that are created during application builds.

                  Inspect the prerequisites and the main steps to perform for enabling Harbor in EDP.

                  "},{"location":"operator-guide/install-harbor/#prerequisites","title":"Prerequisites","text":"
                  • Kubectl version 1.26.0 is installed.
                  • Helm version 3.12.0+ is installed.
                  "},{"location":"operator-guide/install-harbor/#installation","title":"Installation","text":"

                  To install Harbor with Helm, follow the steps below:

                  1. Create a namespace for Harbor:

                    kubectl create namespace harbor\n
                  2. Create a secret for administrator user and registry:

                    ManuallyExternal Secret Operator
                    kubectl create secret generic harbor \\\n--from-literal=HARBOR_ADMIN_PASSWORD=<secret> \\\n--from-literal=REGISTRY_HTPASSWD=<secret> \\\n--from-literal=REGISTRY_PASSWD=<secret> \\\n--from-literal=secretKey=<secret> \\\n--namespace harbor\n
                    apiVersion: external-secrets.io/v1beta1\nkind: ExternalSecret\nmetadata:\nname: harbor\nnamespace: harbor\nspec:\nrefreshInterval: 1h\nsecretStoreRef:\nkind: SecretStore\nname: aws-parameterstore\ndata:\n- secretKey: HARBOR_ADMIN_PASSWORD\nremoteRef:\nconversionStrategy: Default\ndecodingStrategy: None\nkey: /control-plane/deploy-secrets\nproperty: harbor.HARBOR_ADMIN_PASSWORD\n- secretKey: secretKey\nremoteRef:\nconversionStrategy: Default\ndecodingStrategy: None\nkey: /control-plane/deploy-secrets\nproperty: harbor.secretKey\n- secretKey: REGISTRY_HTPASSWD\nremoteRef:\nconversionStrategy: Default\ndecodingStrategy: None\nkey: /control-plane/deploy-secrets\nproperty: harbor.REGISTRY_HTPASSWD\n- secretKey: REGISTRY_PASSWD\nremoteRef:\nconversionStrategy: Default\ndecodingStrategy: None\nkey: /control-plane/deploy-secrets\nproperty: harbor.REGISTRY_PASSWD\n

                    Note

                    The HARBOR_ADMIN_PASSWORD is the initial password of the Harbor admin. The secretKey is the secret key used for encryption; it must be 16 characters long. The REGISTRY_PASSWD is the Harbor registry password. The REGISTRY_HTPASSWD is the login and password in htpasswd string format. This value is the string in the password file generated by the htpasswd command, where the username is harbor_registry_user and the encryption type is bcrypt. See the example below:

                    htpasswd -bBc passwordfile harbor_registry_user harbor_registry_password\n
                    The username must be harbor_registry_user. The password must be the value from REGISTRY_PASSWD.

                  3. Add the Helm Harbor Charts for the local client.

                    helm repo add harbor https://helm.goharbor.io\n
                  4. Check the parameters in the Harbor installation chart. For details, please refer to the values.yaml file.

                  5. Install Harbor in the harbor namespace with the Helm tool.

                    helm install harbor harbor/harbor \\\n--version 1.12.2 \\\n--namespace harbor \\\n--values values.yaml\n

                    See the details on the parameters below:

                    Example values.yaml file

                    # we use Harbor secret to consolidate all the Harbor secrets\nexistingSecretAdminPassword: harbor\nexistingSecretAdminPasswordKey: HARBOR_ADMIN_PASSWORD\nexistingSecretSecretKey: harbor\n\ncore:\n# The XSRF key. Will be generated automatically if it isn't specified\nxsrfKey: \"\"\njobservice:\n# Secret is used when job service communicates with other components.\n# If a secret key is not specified, Helm will generate one.\n# Must be a string of 16 chars.\nsecret: \"\"\nregistry:\n# Secret is used to secure the upload state from client\n# and registry storage backend.\n# If a secret key is not specified, Helm will generate one.\n# Must be a string of 16 chars.\nsecret: \"\"\ncredentials:\nusername: harbor_registry_user\nexistingSecret: harbor\nfullnameOverride: harbor\n# If Harbor is deployed behind the proxy, set it as the URL of proxy\nexternalURL: https://core.harbor.domain\nipFamily:\nipv6:\nenabled: false\nexpose:\ntls:\nenabled: false\ningress:\nhosts:\ncore: core.harbor.domain\nnotary: notary.harbor.domain\nupdateStrategy:\ntype: Recreate\npersistence:\npersistentVolumeClaim:\nregistry:\nsize: 30Gi\njobservice:\njobLog:\nsize: 1Gi\ndatabase:\nsize: 2Gi\nredis:\nsize: 1Gi\ntrivy:\nsize: 5Gi\ndatabase:\ninternal:\n# The initial superuser password for internal database\npassword: \"changeit\"\n
                  6. To check if the installation is successful, run the command below:

                    helm status <harbor-release> -n harbor\n
                    You can also check ingress endpoints to get Harbor endpoint to enter Harbor UI:
                    kubectl describe ingress <harbor_ingress> -n harbor\n

                  "},{"location":"operator-guide/install-harbor/#related-articles","title":"Related Articles","text":"
                  • Install EDP
                  • Integrate Harbor With EDP Pipelines
                  "},{"location":"operator-guide/install-ingress-nginx/","title":"Install NGINX Ingress Controller","text":"

                  Inspect the prerequisites and the main steps to perform for installing the NGINX Ingress Controller on Kubernetes.

                  "},{"location":"operator-guide/install-ingress-nginx/#prerequisites","title":"Prerequisites","text":"
                  • Kubectl version 1.23.0 is installed. Please refer to the Kubernetes official website for details.
                  • Helm version 3.10.2 is installed. Please refer to the Helm page on GitHub for details.
                  "},{"location":"operator-guide/install-ingress-nginx/#installation","title":"Installation","text":"

                  Info

                  It is also possible to install NGINX Ingress Controller using the Helmfile. For details, please refer to the Install via Helmfile page.

                  To install the ingress-nginx chart, follow the steps below:

                  1. Create an ingress-nginx namespace:

                    kubectl create namespace ingress-nginx\n
                  2. Add a chart repository:

                    helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx\nhelm repo update\n
                  3. Install the ingress-nginx chart:

                    helm install ingress ingress-nginx/ingress-nginx \\\n--version 4.7.0 \\\n--values values.yaml \\\n--namespace ingress-nginx\n

                    Check out the values.yaml file sample of the ingress-nginx chart customization:

                  View: values.yaml
                  controller:\naddHeaders:\nX-Content-Type-Options: nosniff\nX-Frame-Options: SAMEORIGIN\nresources:\nlimits:\nmemory: \"256Mi\"\nrequests:\ncpu: \"50m\"\nmemory: \"128M\"\nconfig:\nssl-redirect: 'true'\nclient-header-buffer-size: '64k'\nhttp2-max-field-size: '64k'\nhttp2-max-header-size: '64k'\nlarge-client-header-buffers: '4 64k'\nupstream-keepalive-timeout: '120'\nkeep-alive: '10'\nuse-forwarded-headers: 'true'\nproxy-real-ip-cidr: '172.32.0.0/16'\nproxy-buffer-size: '8k'\n\n# To watch Ingress objects without the ingressClassName field set parameter value to true.\n# https://kubernetes.github.io/ingress-nginx/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do\nwatchIngressWithoutClass: true\n\nservice:\ntype: NodePort\nnodePorts:\nhttp: 32080\nhttps: 32443\nupdateStrategy:\nrollingUpdate:\nmaxUnavailable: 1\ntype: RollingUpdate\nmetrics:\nenabled: true\ndefaultBackend:\nenabled: true\nserviceAccount:\ncreate: true\nname: nginx-ingress-service-account\n

                  Warning

                  Align value controller.config.proxy-real-ip-cidr with AWS VPC CIDR.

                  "},{"location":"operator-guide/install-keycloak/","title":"Install Keycloak","text":"

                  Inspect the prerequisites and the main steps to perform for installing Keycloak.

                  Info

                  The installation process below is given for a Kubernetes cluster. The steps that differ for an OpenShift cluster are indicated in the warnings blocks.

                  "},{"location":"operator-guide/install-keycloak/#prerequisites","title":"Prerequisites","text":"
                  • Kubectl version 1.23.0 is installed. Please refer to the Kubernetes official website for details.
                  • Helm version 3.10.0+ is installed. Please refer to the Helm page on GitHub for details.

                  Info

                  The EDP team uses the Keycloakx Helm chart from the codecentric repository, but other repositories can be used as well (e.g., Bitnami). Before installing Keycloak, it is necessary to install a PostgreSQL database.

                  Info

                  It is also possible to install Keycloak using the Helmfile. For details, please refer to the Install via Helmfile page.

                  "},{"location":"operator-guide/install-keycloak/#postgresql-installation","title":"PostgreSQL Installation","text":"

                  To install PostgreSQL, follow the steps below:

                  1. Check that a security namespace is created. If not, run the following command to create it:

                    kubectl create namespace security\n

                    Warning

                    On the OpenShift platform, apply the SecurityContextConstraints resource. Change the namespace in the users section if required.

                    View: keycloak-scc.yaml
                    allowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegeEscalation: true\nallowPrivilegedContainer: false\nallowedCapabilities: null\napiVersion: security.openshift.io/v1\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- min: 999\nmax: 65543\ngroups: []\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: keycloak\npriority: 1\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities:\n- KILL\n- MKNOD\n- SETUID\n- SETGID\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMin: 1\nuidRangeMax: 65543\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nusers:\n- system:serviceaccount:security:keycloakx\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n
                    View: postgresql-keycloak-scc.yaml
                    allowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegeEscalation: true\nallowPrivilegedContainer: false\nallowedCapabilities: null\napiVersion: security.openshift.io/v1\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- min: 999\nmax: 65543\ngroups: []\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: postgresql-keycloak\npriority: 1\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities:\n- KILL\n- MKNOD\n- SETUID\n- SETGID\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMin: 1\nuidRangeMax: 65543\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nusers:\n- system:serviceaccount:security:default\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n
                  2. Create PostgreSQL admin secret:

                    kubectl -n security create secret generic keycloak-postgresql \\\n--from-literal=password=<postgresql_password> \\\n--from-literal=postgres-password=<postgresql_postgres_password>\n
                  3. Add a helm chart repository:

                    helm repo add bitnami https://charts.bitnami.com/bitnami\nhelm repo update\n
                  4. Install PostgreSQL v15.2.0 using bitnami/postgresql Helm chart v12.1.15:

                    Info

                    The PostgreSQL can be deployed in production ready mode. For example, it may include multiple replicas, persistent storage, autoscaling, and monitoring. For details, please refer to the official Chart documentation.

                    helm install postgresql bitnami/postgresql \\\n--version 12.1.15 \\\n--values values.yaml \\\n--namespace security\n

                    Check out the values.yaml file sample of the PostgreSQL customization:

                    View: values.yaml
                    # PostgreSQL read only replica parameters\nreadReplicas:\n# Number of PostgreSQL read only replicas\nreplicaCount: 1\n\nimage:\ntag: 15.2.0-debian-11-r0\n\nglobal:\npostgresql:\nauth:\nusername: admin\nexistingSecret: keycloak-postgresql\ndatabase: keycloak\n\nprimary:\npersistence:\nenabled: true\nsize: 3Gi\n
                  "},{"location":"operator-guide/install-keycloak/#keycloak-installation","title":"Keycloak Installation","text":"

                  To install Keycloak, follow the steps below:

                  1. Use the security namespace from the PostgreSQL installation.

                  2. Add a chart repository:

                    helm repo add codecentric https://codecentric.github.io/helm-charts\nhelm repo update\n
                  3. Create Keycloak admin secret:

                    kubectl -n security create secret generic keycloak-admin-creds \\\n--from-literal=username=<keycloak_admin_username> \\\n--from-literal=password=<keycloak_admin_password>\n
                  4. Install Keycloak 20.0.3 using codecentric/keycloakx Helm chart:

                    Info

                    Keycloak can be deployed in production ready mode. For example, it may include multiple replicas, persistent storage, autoscaling, and monitoring. For details, please refer to the official Chart documentation.

                    helm install keycloakx codecentric/keycloakx \\\n--version 2.2.1 \\\n--values values.yaml \\\n--namespace security\n

                    Check out the values.yaml file sample of the Keycloak customization:

                    View: values.yaml
                    replicas: 1\n\n# Deploy the latest version\nimage:\ntag: \"20.0.3\"\n\n# start: create OpenShift realm which is required by EDP\nextraInitContainers: |\n- name: realm-provider\nimage: busybox\nimagePullPolicy: IfNotPresent\ncommand:\n- sh\nargs:\n- -c\n- |\necho '{\"realm\": \"openshift\",\"enabled\": true}' > /opt/keycloak/data/import/openshift.json\nvolumeMounts:\n- name: realm\nmountPath: /opt/keycloak/data/import\n\n# The following parameter is unrecommended to expose. Exposed health checks lead to an unnecessary attack vector.\nhealth:\nenabled: false\n# The following parameter is unrecommended to expose. Exposed metrics lead to an unnecessary attack vector.\nmetrics:\nenabled: false\n\nextraVolumeMounts: |\n- name: realm\nmountPath: /opt/keycloak/data/import\n\nextraVolumes: |\n- name: realm\nemptyDir: {}\n\ncommand:\n- \"/opt/keycloak/bin/kc.sh\"\n- \"--verbose\"\n- \"start\"\n- \"--auto-build\"\n- \"--http-enabled=true\"\n- \"--http-port=8080\"\n- \"--hostname-strict=false\"\n- \"--hostname-strict-https=false\"\n- \"--spi-events-listener-jboss-logging-success-level=info\"\n- \"--spi-events-listener-jboss-logging-error-level=warn\"\n- \"--import-realm\"\n\nextraEnv: |\n- name: KC_PROXY\nvalue: \"passthrough\"\n- name: KEYCLOAK_ADMIN\nvalueFrom:\nsecretKeyRef:\nname: keycloak-admin-creds\nkey: username\n- name: KEYCLOAK_ADMIN_PASSWORD\nvalueFrom:\nsecretKeyRef:\nname: keycloak-admin-creds\nkey: password\n- name: JAVA_OPTS_APPEND\nvalue: >-\n-XX:+UseContainerSupport\n-XX:MaxRAMPercentage=50.0\n-Djava.awt.headless=true\n-Djgroups.dns.query={{ include \"keycloak.fullname\" . }}-headless\n\n# This block should be uncommented if you install Keycloak on Kubernetes\ningress:\nenabled: true\nannotations:\nkubernetes.io/ingress.class: nginx\ningress.kubernetes.io/affinity: cookie\n# The following parameter is unrecommended to expose. Admin paths lead to an unnecessary attack vector.\nconsole:\nenabled: false\nrules:\n- host: keycloak.<ROOT_DOMAIN>\npaths:\n- path: '{{ tpl .Values.http.relativePath $ | trimSuffix \"/\" }}/'\npathType: Prefix\n\n# This block should be uncommented if you set Keycloak to OpenShift and change the host field\n# route:\n#   enabled: false\n#   # Path for the Route\n#   path: '/'\n#   # Host name for the Route\n#   host: \"keycloak.<ROOT_DOMAIN>\"\n#   # TLS configuration\n#   tls:\n#     enabled: true\n\nresources:\nlimits:\nmemory: \"2048Mi\"\nrequests:\ncpu: \"50m\"\nmemory: \"512Mi\"\n\n# Check database readiness at startup\ndbchecker:\nenabled: true\n\ndatabase:\nvendor: postgres\nexistingSecret: keycloak-postgresql\nhostname: postgresql\nport: 5432\nusername: admin\ndatabase: keycloak\n
                  "},{"location":"operator-guide/install-keycloak/#configuration","title":"Configuration","text":"

                  To prepare Keycloak for integration with EDP, follow the steps below:

                  1. Ensure that the openshift realm is created.

                  2. Create the edp_<EDP_PROJECT> user and set the password in the Master realm.

                    Note

                    This user should be used by EDP to access Keycloak. Please refer to the Install EDP and Install EDP via Helmfile sections for details.

                  3. In the Role Mapping tab, assign the proper roles to the user (a Keycloak Admin CLI alternative is sketched after this list):

                    • Realm Roles:

                      • create-realm,
                      • offline_access,
                      • uma_authorization
                    • Client Roles openshift-realm:

                      • impersonation,
                      • manage-authorization,
                      • manage-clients,
                      • manage-users

                    Role mappings
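
                  If you prefer to script steps 2-3 instead of using the UI, the Keycloak Admin CLI can be used. This is a rough sketch, not the documented EDP procedure; it assumes kcadm.sh is run inside the Keycloak pod with the credentials from the keycloak-admin-creds secret, and the server URL (including a relative path such as /auth, if configured) must be adjusted to your deployment:

                    # log in to the master realm\n/opt/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080 --realm master --user <keycloak_admin_username> --password <keycloak_admin_password>\n# create the edp user and set its password\n/opt/keycloak/bin/kcadm.sh create users -r master -s username=edp_<EDP_PROJECT> -s enabled=true\n/opt/keycloak/bin/kcadm.sh set-password -r master --username edp_<EDP_PROJECT> --new-password <password>\n# assign the realm roles\n/opt/keycloak/bin/kcadm.sh add-roles -r master --uusername edp_<EDP_PROJECT> --rolename create-realm --rolename offline_access --rolename uma_authorization\n# assign the client roles of the openshift-realm client\n/opt/keycloak/bin/kcadm.sh add-roles -r master --uusername edp_<EDP_PROJECT> --cclientid openshift-realm --rolename impersonation --rolename manage-authorization --rolename manage-clients --rolename manage-users\n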

                  "},{"location":"operator-guide/install-keycloak/#related-articles","title":"Related Articles","text":"
                  • Install EDP
                  • Install via Helmfile
                  • Install Harbor
                  "},{"location":"operator-guide/install-kiosk/","title":"Set Up Kiosk","text":"

                  Kiosk is a multi-tenancy extension for managing tenants and namespaces in a shared Kubernetes cluster. Within EDP, Kiosk is used to separate resources and enables the following options (see more details):

                  • Access to the EDP tenants in a Kubernetes cluster;
                  • Multi-tenancy access at the service account level for application deploy.

                  Inspect the main steps to set up Kiosk for the subsequent EDP installation.

                  Note

                  Kiosk deployment is mandatory for EDP v.2.8.x. In earlier versions, Kiosk is not implemented. Since EDP v.2.9.0, integration with Kiosk is an optional feature. If you do not want to use it, simply skip those steps and disable it in the Helm parameters during EDP deployment:

                  # global.kioskEnabled: <true/false>\n
                  "},{"location":"operator-guide/install-kiosk/#prerequisites","title":"Prerequisites","text":"
                  • Kubectl version 1.18.0 is installed. Please refer to the Kubernetes official website for details.
                  • Helm version 3.6.0 is installed. Please refer to the Helm page on GitHub for details.
                  "},{"location":"operator-guide/install-kiosk/#installation","title":"Installation","text":"
                  • Deploy Kiosk version 0.2.11 in the cluster. To install it, run the following command:
                      # Install kiosk with helm v3\n\n  helm repo add kiosk https://charts.devspace.sh/\n  kubectl create namespace kiosk\n  helm install kiosk --version 0.2.11 kiosk/kiosk -n kiosk --atomic\n

                  For more details, please refer to the Kiosk page on the GitHub.

                  "},{"location":"operator-guide/install-kiosk/#configuration","title":"Configuration","text":"

                  To provide access to the EDP tenant, follow the steps below.

                  • Check that a security namespace is created. If not, run the following command to create it:
                      kubectl create namespace security\n

                  Note

                  On an OpenShift cluster, run the oc command instead of the kubectl one.

                  • Add a service account to the security namespace.
                      kubectl -n security create sa edp\n

                  Info

                  Please note that edp is the name of the EDP tenant here and in all the following steps.

                  • Apply the Account template to the cluster. Please check the sample below:
                    apiVersion: tenancy.kiosk.sh/v1alpha1\nkind: Account\nmetadata:\nname: edp-admin\nspec:\nspace:\nclusterRole: kiosk-space-admin\nsubjects:\n- kind: ServiceAccount\nname: edp\nnamespace: security\n
                  • Apply the ClusterRoleBinding for the 'kiosk-edit' cluster role (this role is added during the Kiosk installation). Please check the sample below:
                    apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\nname: edp-kiosk-edit\nsubjects:\n- kind: ServiceAccount\nname: edp\nnamespace: security\nroleRef:\nkind: ClusterRole\nname: kiosk-edit\napiGroup: rbac.authorization.k8s.io\n
                  • To provide access to the EDP tenant, generate a kubeconfig with the permissions of the edp Service Account. The edp account created earlier is located in the security namespace.
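                    The following is a minimal sketch of generating such a kubeconfig from the edp service account token. Here <CLUSTER_API_URL> is a placeholder for the Kubernetes API endpoint, and the exact procedure may differ in your environment (on Kubernetes 1.24+ the token can be issued with kubectl -n security create token edp instead):

                    # Sketch only: extract the service account token (pre-1.24 clusters create a token secret automatically)\nSA_SECRET=$(kubectl -n security get sa edp -o jsonpath='{.secrets[0].name}')\nSA_TOKEN=$(kubectl -n security get secret \"$SA_SECRET\" -o jsonpath='{.data.token}' | base64 -d)\n# Write a separate kubeconfig file with these credentials\nkubectl config set-cluster edp-cluster --server=<CLUSTER_API_URL> --kubeconfig=edp-kubeconfig\nkubectl config set-credentials edp --token=\"$SA_TOKEN\" --kubeconfig=edp-kubeconfig\nkubectl config set-context edp --cluster=edp-cluster --user=edp --kubeconfig=edp-kubeconfig\nkubectl config use-context edp --kubeconfig=edp-kubeconfig\n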
                  "},{"location":"operator-guide/install-loki/","title":"Install Grafana Loki","text":"

                  EDP configures logging with the help of the Grafana Loki aggregation system.

                  "},{"location":"operator-guide/install-loki/#installation","title":"Installation","text":"

                  To install Loki, follow the steps below:

                  1. Create logging namespace:

                      kubectl create namespace logging\n

                    Note

                    On the OpenShift cluster, run the oc command instead of the kubectl command.

                  2. Add a chart repository:

                      helm repo add grafana https://grafana.github.io/helm-charts\n  helm repo update\n

                    Note

                    It is possible to use Amazon Simple Storage Service (Amazon S3) as an object storage for Loki. To configure access, please refer to the IRSA for Loki documentation.

                  3. Install Loki v.2.6.0:

                      helm install loki grafana/loki \\\n  --version 2.6.0 \\\n  --values values.yaml \\\n  --namespace logging\n

                    Check out the values.yaml file sample of the Loki customization:

                    View: values.yaml
                    image:\nrepository: grafana/loki\ntag: 2.3.0\nconfig:\nauth_enabled: false\nschema_config:\nconfigs:\n- from: 2021-06-01\nstore: boltdb-shipper\nobject_store: s3\nschema: v11\nindex:\nprefix: loki_index_\nperiod: 24h\nstorage_config:\naws:\ns3: s3://<AWS_REGION>/loki-<CLUSTER_NAME>\nboltdb_shipper:\nactive_index_directory: /data/loki/index\ncache_location: /data/loki/boltdb-cache\nshared_store: s3\nchunk_store_config:\nmax_look_back_period: 24h\nresources:\nlimits:\nmemory: \"128Mi\"\nrequests:\ncpu: \"50m\"\nmemory: \"128Mi\"\nserviceAccount:\ncreate: true\nname: edp-loki\nannotations:\neks.amazonaws.com/role-arn: \"arn:aws:iam::<AWS_ACCOUNT_ID>:role/AWSIRSA\u2039CLUSTER_NAME\u203a\u2039LOKI_NAMESPACE\u203aLoki\"\npersistence:\nenabled: false\n

                    Note

                    In case of using cluster scheduling and amazon-eks-pod-identity-webhook, it is necessary to restart the Loki pod after the cluster is up and running. Please refer to the Schedule Pods Restart documentation.

                  4. Configure a custom S3 bucket lifecycle policy to delete the old data.
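                    A hedged sketch of such a lifecycle rule using the AWS CLI is shown below; the bucket name follows the loki-<CLUSTER_NAME> pattern from the sample values.yaml, and the 30-day expiration is only an example:

                    aws s3api put-bucket-lifecycle-configuration \\\n--bucket loki-<CLUSTER_NAME> \\\n--lifecycle-configuration '{\n\"Rules\": [\n{\n\"ID\": \"delete-old-loki-data\",\n\"Status\": \"Enabled\",\n\"Filter\": {\"Prefix\": \"\"},\n\"Expiration\": {\"Days\": 30}\n}\n]\n}'\n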

                  "},{"location":"operator-guide/install-reportportal/","title":"Install ReportPortal","text":"

                  Inspect the prerequisites and the main steps to perform for installing ReportPortal.

                  Info

                  It is also possible to install ReportPortal using the Helmfile. For details, please refer to the Install via Helmfile page.

                  "},{"location":"operator-guide/install-reportportal/#prerequisites","title":"Prerequisites","text":"
                  • Kubectl version 1.23.0 is installed. Please refer to the Kubernetes official website for details.
                  • Helm version 3.10.2 is installed. Please refer to the Helm page on GitHub for details.

                  Info

                  Please refer to the ReportPortal Helm Chart section for details.

                  "},{"location":"operator-guide/install-reportportal/#minio-installation","title":"MinIO Installation","text":"

                  To install MinIO, follow the steps below:

                  1. Check that edp namespace is created. If not, run the following command to create it:

                    kubectl create namespace edp\n

                    For the OpenShift users:

                    When using the OpenShift platform, install the SecurityContextConstraints resources. In case of using a custom namespace for ReportPortal, change the namespace in the users section.

                    View: report-portal-third-party-resources-scc.yaml
                    apiVersion: security.openshift.io/v1\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: report-portal-minio-rabbitmq-postgresql\nallowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegeEscalation: true\nallowPrivilegedContainer: false\nallowedCapabilities: null\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- min: 999\nmax: 65543\ngroups: []\npriority: 1\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities:\n- KILL\n- MKNOD\n- SETUID\n- SETGID\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMin: 1\nuidRangeMax: 65543\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nusers:\n- system:serviceaccount:report-portal:minio\n- system:serviceaccount:report-portal:rabbitmq\n- system:serviceaccount:report-portal:postgresql\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n
                    View: report-portal-elasticsearch-scc.yaml
                    apiVersion: security.openshift.io/v1\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: report-portal-elasticsearch\nallowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegedContainer: true\nallowedCapabilities: []\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- max: 1000\nmin: 1000\ngroups: []\npriority: 0\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities: []\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMax: 1000\nuidRangeMin: 0\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nusers:\n- system:serviceaccount:report-portal:elasticsearch-master\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n
                  2. Add a chart repository:

                    helm repo add bitnami https://charts.bitnami.com/bitnami\nhelm repo update\n
                  3. Create MinIO admin secret:

                    kubectl -n edp create secret generic reportportal-minio-creds \\\n--from-literal=root-password=<root_password> \\\n--from-literal=root-user=<root_user>\n
                  4. Install MinIO v.11.10.3 using bitnami/minio Helm chart v.11.10.3:

                    helm install minio bitnami/minio \\\n--version 11.10.3 \\\n--values values.yaml \\\n--namespace edp\n

                    Check out the values.yaml file sample of the MinIO customization:

                    View: values.yaml
                    auth:\nexistingSecret: reportportal-minio-creds\npersistence:\nsize: 1Gi\n
                  "},{"location":"operator-guide/install-reportportal/#rabbitmq-installation","title":"RabbitMQ Installation","text":"

                  To install RabbitMQ, follow the steps below:

                  1. Use edp namespace from the MinIO installation.

                  2. Use bitnami chart repository from the MinIO installation.

                  3. Create RabbitMQ admin secret:

                    kubectl -n edp create secret generic reportportal-rabbitmq-creds \\\n--from-literal=rabbitmq-password=<rabbitmq_password> \\\n--from-literal=rabbitmq-erlang-cookie=<rabbitmq_erlang_cookie>\n

                    Warning

                    The rabbitmq_password password must be 10 characters long. The rabbitmq_erlang_cookie password must be 32 characters long.

                  4. Install RabbitMQ v.10.3.8 using bitnami/rabbitmq Helm chart v.10.3.8:

                    helm install rabbitmq bitnami/rabbitmq \\\n--version 10.3.8 \\\n--values values.yaml \\\n--namespace edp\n

                    Check out the values.yaml file sample of the RabbitMQ customization:

                    View: values.yaml
                    auth:\nexistingPasswordSecret: reportportal-rabbitmq-creds\nexistingErlangSecret: reportportal-rabbitmq-creds\npersistence:\nsize: 1Gi\n
                  5. After the rabbitmq pod gets the Running status, configure the RabbitMQ memory threshold:

                    kubectl -n edp exec -it rabbitmq-0 -- rabbitmqctl set_vm_memory_high_watermark 0.8\n
                  "},{"location":"operator-guide/install-reportportal/#elasticsearch-installation","title":"Elasticsearch Installation","text":"

                  To install Elasticsearch, follow the steps below:

                  1. Use edp namespace from the MinIO installation.

                  2. Add a chart repository:

                    helm repo add elastic https://helm.elastic.co\nhelm repo update\n
                  3. Install Elasticsearch v.7.17.3 using elastic/elasticsearch Helm chart v.7.17.3:

                    helm install elasticsearch elastic/elasticsearch \\\n--version 7.17.3 \\\n--values values.yaml \\\n--namespace edp\n

                    Check out the values.yaml file sample of the Elasticsearch customization:

                    View: values.yaml
                    replicas: 1\n\nextraEnvs:\n- name: discovery.type\nvalue: single-node\n- name: cluster.initial_master_nodes\nvalue: \"\"\n\nrbac:\ncreate: true\n\nresources:\nrequests:\ncpu: \"100m\"\nmemory: \"2Gi\"\n\nvolumeClaimTemplate:\nresources:\nrequests:\nstorage: 3Gi\n
                  "},{"location":"operator-guide/install-reportportal/#postgresql-installation","title":"PostgreSQL Installation","text":"

                  To install PostgreSQL, follow the steps below:

                  1. Use edp namespace from the MinIO installation.

                  2. Add a chart repository:

                    helm repo add bitnami-archive https://raw.githubusercontent.com/bitnami/charts/archive-full-index/bitnami\nhelm repo update\n
                  3. Create PostgreSQL admin secret:

                    kubectl -n edp create secret generic reportportal-postgresql-creds \\\n--from-literal=postgresql-password=<postgresql_password> \\\n--from-literal=postgresql-postgres-password=<postgresql_postgres_password>\n

                    Warning

                    The postgresql_password and postgresql_postgres_password passwords must be 16 characters long.

                  4. Install PostgreSQL v.10.9.4 using Helm chart v.10.9.4:

                    helm install postgresql bitnami-archive/postgresql \\\n--version 10.9.4 \\\n--values values.yaml \\\n--namespace edp\n

                    Check out the values.yaml file sample of the PostgreSQL customization:

                    View: values.yaml
                    persistence:\nsize: 1Gi\nresources:\nrequests:\ncpu: \"100m\"\nserviceAccount:\nenabled: true\npostgresqlUsername: \"rpuser\"\npostgresqlDatabase: \"reportportal\"\nexistingSecret: \"reportportal-postgresql-creds\"\ninitdbScripts:\ninit_postgres.sh: |\n#!/bin/sh\n/opt/bitnami/postgresql/bin/psql -U postgres -d ${POSTGRES_DB} -c 'CREATE EXTENSION IF NOT EXISTS ltree; CREATE EXTENSION IF NOT EXISTS pgcrypto; CREATE EXTENSION IF NOT EXISTS pg_trgm;'\n
                  "},{"location":"operator-guide/install-reportportal/#reportportal-installation","title":"ReportPortal Installation","text":"

                  To install ReportPortal, follow the steps below:

                  1. Use edp namespace from the MinIO installation.

                    For the OpenShift users:

                    When using the OpenShift platform, install the SecurityContextConstraints resource. In case of using a custom namespace for ReportPortal, change the namespace in the users section.

                    View: report-portal-reportportal-scc.yaml
                    apiVersion: security.openshift.io/v1\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: report-portal\nallowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegedContainer: true\nallowedCapabilities: []\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- max: 1000\nmin: 1000\ngroups: []\npriority: 0\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities: []\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMax: 1000\nuidRangeMin: 0\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nusers:\n- system:serviceaccount:report-portal:reportportal\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n
                  2. Add a chart repository:

                    helm repo add report-portal \"https://reportportal.github.io/kubernetes\"\nhelm repo update\n
                  3. Install ReportPortal v.5.8.0 using Helm chart v.5.8.0:

                    helm install report-portal report-portal/reportportal \\\n--values values.yaml \\\n--namespace edp\n

                    Check out the values.yaml file sample of the ReportPortal customization:

                    View: values.yaml
                    serviceindex:\nresources:\nrequests:\ncpu: 50m\nuat:\nresources:\nrequests:\ncpu: 50m\nserviceui:\nresources:\nrequests:\ncpu: 50m\nserviceAccountName: \"reportportal\"\nsecurityContext:\nrunAsUser: 0\nserviceapi:\nresources:\nrequests:\ncpu: 50m\nserviceanalyzer:\nresources:\nrequests:\ncpu: 50m\nserviceanalyzertrain:\nresources:\nrequests:\ncpu: 50m\n\nrabbitmq:\nSecretName: \"reportportal-rabbitmq-creds\"\nendpoint:\naddress: rabbitmq.<EDP_PROJECT>.svc.cluster.local\nuser: user\napiuser: user\n\npostgresql:\nSecretName: \"reportportal-postgresql-creds\"\nendpoint:\naddress: postgresql.<EDP_PROJECT>.svc.cluster.local\n\nelasticsearch:\nendpoint: http://elasticsearch-master.<EDP_PROJECT>.svc.cluster.local:9200\n\nminio:\nsecretName: \"reportportal-minio-creds\"\nendpoint: http://minio.<EDP_PROJECT>.svc.cluster.local:9000\nendpointshort: minio.<EDP_PROJECT>.svc.cluster.local:9000\naccesskeyName: \"root-user\"\nsecretkeyName: \"root-password\"\n\ningress:\n# IF YOU HAVE SOME DOMAIN NAME SET INGRESS.USEDOMAINNAME to true\nusedomainname: true\nhosts:\n- report-portal-<EDP_PROJECT>.<ROOT_DOMAIN>\n
                  4. For the OpenShift platform, install a Gateway with Route:

                    View: gateway-config-cm.yaml
                    kind: ConfigMap\nmetadata:\nname: gateway-config\nnamespace: report-portal\napiVersion: v1\ndata:\ntraefik-dynamic-config.yml: |\nhttp:\nmiddlewares:\nstrip-ui:\nstripPrefix:\nprefixes:\n- \"/ui\"\nforceSlash: false\nstrip-api:\nstripPrefix:\nprefixes:\n- \"/api\"\nforceSlash: false\nstrip-uat:\nstripPrefix:\nprefixes:\n- \"/uat\"\nforceSlash: false\n\nrouters:\nindex-router:\nrule: \"Path(`/`)\"\nservice: \"index\"\nui-router:\nrule: \"PathPrefix(`/ui`)\"\nmiddlewares:\n- strip-ui\nservice: \"ui\"\nuat-router:\nrule: \"PathPrefix(`/uat`)\"\nmiddlewares:\n- strip-uat\nservice: \"uat\"\napi-router:\nrule: \"PathPrefix(`/api`)\"\nmiddlewares:\n- strip-api\nservice: \"api\"\n\nservices:\nuat:\nloadBalancer:\nservers:\n- url: \"http://report-portal-reportportal-uat:9999/\"\n\nindex:\nloadBalancer:\nservers:\n- url: \"http://report-portal-reportportal-index:8080/\"\n\napi:\nloadBalancer:\nservers:\n- url: \"http://report-portal-reportportal-api:8585/\"\n\nui:\nloadBalancer:\nservers:\n- url: \"http://report-portal-reportportal-ui:8080/\"\ntraefik.yml: |\nentryPoints:\nhttp:\naddress: \":8081\"\nmetrics:\naddress: \":8082\"\n\nmetrics:\nprometheus:\nentryPoint: metrics\naddEntryPointsLabels: true\naddServicesLabels: true\nbuckets:\n- 0.1\n- 0.3\n- 1.2\n- 5.0\n\nproviders:\nfile:\nfilename: /etc/traefik/traefik-dynamic-config.yml\n
                    View: gateway-deployment.yaml
                    apiVersion: apps/v1\nkind: Deployment\nmetadata:\nlabels:\napp: reportportal\nname: gateway\nnamespace: report-portal\nspec:\nreplicas: 1\nselector:\nmatchLabels:\ncomponent: gateway\ntemplate:\nmetadata:\nlabels:\ncomponent: gateway\nspec:\ncontainers:\n- image: quay.io/waynesun09/traefik:2.3.6\nname: traefik\nports:\n- containerPort: 8080\nprotocol: TCP\nresources: {}\nvolumeMounts:\n- mountPath: /etc/traefik/\nname: config\nreadOnly: true\nvolumes:\n- name: config\nconfigMap:\ndefaultMode: 420\nname: gateway-config\n
                    View: gateway-route.yaml
                    kind: Route\napiVersion: route.openshift.io/v1\nmetadata:\nlabels:\napp: reportportal\nname: reportportal\nnamespace: report-portal\nspec:\nhost: report-portal.<CLUSTER_DOMAIN>\nport:\ntargetPort: http\ntls:\ninsecureEdgeTerminationPolicy: Redirect\ntermination: edge\nto:\nkind: Service\nname: gateway\nweight: 100\nwildcardPolicy: None\n
                    View: gateway-service.yaml
                    apiVersion: v1\nkind: Service\nmetadata:\nlabels:\napp: reportportal\ncomponent: gateway\nname: gateway\nnamespace: report-portal\nspec:\nports:\n# use 8081 to allow for usage of the dashboard which is on port 8080\n- name: http\nport: 8081\nprotocol: TCP\ntargetPort: 8081\nselector:\ncomponent:  gateway\nsessionAffinity: None\ntype: ClusterIP\n

                  Note

                  For user access: default/1q2w3e. For admin access: superadmin/erebus. Please refer to the ReportPortal.io page for details.

                  "},{"location":"operator-guide/install-reportportal/#related-articles","title":"Related Articles","text":"
                  • Install via Helmfile
                  "},{"location":"operator-guide/install-tekton/","title":"Install Tekton","text":"

                  EPAM Delivery Platform uses Tekton resources, such as Tasks, Pipelines, Triggers, and Interceptors, for running the CI/CD pipelines.

                  Inspect the main steps to perform for installing the Tekton resources via the Tekton release files.

                  "},{"location":"operator-guide/install-tekton/#prerequisites","title":"Prerequisites","text":"
                  • Kubectl version 1.24.0 or higher is installed. Please refer to the Kubernetes official website for details.
                  • For Openshift/OKD, the latest version of the oc utility is required. Please refer to the OKD page on GitHub for details.
                  • An AWS ECR repository is created for the Kaniko cache. By default, the Kaniko cache repository name is kaniko-cache, and this parameter is located in our Tekton common-library (a possible CLI command is shown below).
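                  If the repository does not exist yet, one hedged way to create it with the AWS CLI is the following; the region and repository name are examples and should match your setup:

                  aws ecr create-repository --repository-name kaniko-cache --region <AWS_REGION>\n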
                  "},{"location":"operator-guide/install-tekton/#installation-on-kubernetes-cluster","title":"Installation on Kubernetes Cluster","text":"

                  To install Tekton resources, follow the steps below:

                  Info

                  Please refer to the Install Tekton Pipelines and Install and set up Tekton Triggers sections for details.

                  1. Install Tekton pipelines v0.51.0 using the release file:

                    Note

                    Tekton Pipeline resources are used for managing and running EDP Tekton Pipelines and Tasks. Please refer to the EDP Tekton Pipelines and EDP Tekton Tasks pages for details.

                    kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.51.0/release.yaml\n
                  2. Install Tekton Triggers v0.25.0 using the release file:

                    Note

                    Tekton Trigger resources are used for managing and running EDP Tekton EventListeners, Triggers, TriggerBindings and TriggerTemplates. Please refer to the EDP Tekton Triggers page for details.

                    kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.25.0/release.yaml\n
                  3. Install Tekton Interceptors v0.25.0 using the release file:

                    Note

                    EPAM Delivery Platform uses GitLab and GitHub ClusterInterceptors for managing requests from webhooks.

                    kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.25.0/interceptors.yaml\n
                  "},{"location":"operator-guide/install-tekton/#installation-on-okd-cluster","title":"Installation on OKD cluster","text":"

                  To install Tekton resources, follow the steps below:

                  Info

                  Please refer to the Install Tekton Operator documentation for details.

                  Note

                  Tekton Operator also deploys Pipelines as Code CI that requires OpenShift v4.11 (based on Kubernetes v1.24) or higher. This feature is optional and its deployments can be scaled to zero replicas.

                  Install Tekton Operator v0.67.0 using the release file:

                  kubectl apply -f https://github.com/tektoncd/operator/releases/download/v0.67.0/openshift-release.yaml\n

                  After the installation, the Tekton Operator will install the following components: Pipeline, Trigger, and Addons.
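                  To verify the result, the operator-managed custom resources can be listed. This is a hedged check that assumes the upstream Tekton Operator CRD names:

                  kubectl get tektonconfigs,tektonpipelines,tektontriggers,tektonaddons\n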

                  Note

                  If there is the following error in the openshift-operators namespace for openshift-pipelines-operator and tekton-operator-webhook deployments:

                  Error: container has runAsNonRoot and image will run as root\n

                  Patch the deployments with the following commands:

                  kubectl -n openshift-operators patch deployment openshift-pipelines-operator -p '{\"spec\": {\"template\": {\"spec\": {\"securityContext\": {\"runAsUser\": 1000}}}}}'\nkubectl -n openshift-operators patch deployment tekton-operator-webhook -p '{\"spec\": {\"template\": {\"spec\": {\"securityContext\": {\"runAsUser\": 1000}}}}}'\n

                  Grant access for Tekton Service Accounts in the openshift-pipelines namespace to the Privileged SCC:

                  oc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-operators-proxy-webhook\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-pipelines-controller\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-pipelines-resolvers\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-pipelines-webhook\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-triggers-controller\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-triggers-core-interceptors\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-triggers-webhook\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:pipelines-as-code-controller\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:pipelines-as-code-watcher\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:pipelines-as-code-webhook\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:default\n
                  "},{"location":"operator-guide/install-tekton/#related-articles","title":"Related Articles","text":"
                  • Install via Helmfile
                  "},{"location":"operator-guide/install-velero/","title":"Install Velero","text":"

                  Velero is an open source tool to safely back up, recover, and migrate Kubernetes clusters and persistent volumes. It works both on premises and in a public cloud. Velero consists of a server process running as a deployment in your Kubernetes cluster and a command-line interface (CLI) with which DevOps teams and platform operators configure scheduled backups, trigger ad-hoc backups, perform restores, and more.

                  "},{"location":"operator-guide/install-velero/#installation","title":"Installation","text":"

                  To install Velero, follow the steps below:

                  1. Create velero namespace:

                      kubectl create namespace velero\n

                    Note

                    On an OpenShift cluster, run the oc command instead of the kubectl one.

                  2. Add a chart repository:

                      helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts\n  helm repo update\n

                    Note

                    Velero AWS Plugin requires access to AWS resources. To configure access, please refer to the IRSA for Velero documentation.

                  3. Install Velero v.2.14.13:

                      helm install velero vmware-tanzu/velero \\\n  --version 2.14.13 \\\n  --values values.yaml \\\n  --namespace velero\n

                    Check out the values.yaml file sample of the Velero customization:

                    View: values.yaml
                    image:\nrepository: velero/velero\ntag: v1.5.3\nsecurityContext:\nfsGroup: 65534\nrestic:\nsecurityContext:\nfsGroup: 65534\nserviceAccount:\nserver:\ncreate: true\nname: edp-velero\nannotations:\neks.amazonaws.com/role-arn: \"arn:aws:iam::<AWS_ACCOUNT_ID>:role/AWSIRSA\u2039CLUSTER_NAME\u203a\u2039VELERO_NAMESPACE\u203aVelero\"\ncredentials:\nuseSecret: false\nconfiguration:\nprovider: aws\nbackupStorageLocation:\nname: default\nbucket: velero-<CLUSTER_NAME>\nconfig:\nregion: eu-central-1\nvolumeSnapshotLocation:\nname: default\nconfig:\nregion: <AWS_REGION>\ninitContainers:\n- name: velero-plugin-for-aws\nimage: velero/velero-plugin-for-aws:v1.1.0\nvolumeMounts:\n- mountPath: /target\nname: plugins\n

                    Note

                    In case of using cluster scheduling and amazon-eks-pod-identity-webhook, it is necessary to restart the Velero pod after the cluster is up and running. Please refer to the Schedule Pods Restart documentation.

                  4. Install the client side (velero cli) according to the official documentation.
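                    A hedged example of installing the CLI on Linux is shown below; the version is chosen to match the server image tag from the sample values.yaml and may need to be adjusted:

                    VELERO_VERSION=v1.5.3\ncurl -fsSL -o velero.tar.gz https://github.com/vmware-tanzu/velero/releases/download/${VELERO_VERSION}/velero-${VELERO_VERSION}-linux-amd64.tar.gz\ntar -xzf velero.tar.gz\nsudo mv velero-${VELERO_VERSION}-linux-amd64/velero /usr/local/bin/velero\nvelero version --client-only\n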

                  "},{"location":"operator-guide/install-velero/#configuration","title":"Configuration","text":"
                  1. Create backup for all components in the namespace:

                      velero backup create <BACKUP_NAME> --include-namespaces <NAMESPACE>\n
                  2. Create a daily backup of the namespace:

                      velero schedule create <BACKUP_NAME>  --schedule \"0 10 * * MON-FRI\" --include-namespaces <NAMESPACE> --ttl 120h0m0s\n
                  3. To restore from backup, use the following command:

                      velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME>\n
                  "},{"location":"operator-guide/install-via-helmfile/","title":"Install via Helmfile","text":"

                  This article provides the instructions on how to deploy EDP and its components in Kubernetes using Helmfile, a tool intended for deploying Helm charts. Helmfile templates are available in the GitHub repository.

                  Important

                  The Helmfile installation method for EPAM Delivery Platform (EDP) is currently not actively maintained. We strongly recommend exploring alternative installation options for the most up-to-date and well-supported deployment experience. You may consider using the Add-Ons approach or opting for installation via the AWS Marketplace to ensure a reliable and secure deployment of EDP.

                  "},{"location":"operator-guide/install-via-helmfile/#prerequisites","title":"Prerequisites","text":"

                  The following tools and plugins must be installed:

                  • Kubectl version 1.23.0;
                  • Helm version 3.10.0+;
                  • Helmfile version 0.144.0;
                  • Helm diff plugin version 3.6.0.
                  "},{"location":"operator-guide/install-via-helmfile/#helmfile-structure","title":"Helmfile Structure","text":"
                  • The envs/common.yaml file contains the specification for the environments pattern, the list of Helm repositories from which the Helm charts are fetched, and additional Helm parameters.
                  • The envs/platform.yaml file contains global parameters that are used in various Helmfiles.
                  • The releases/envs/ directory contains symbolic links to the environment files.
                  • The releases/*.yaml files contain the description of parameters that are used when deploying a Helm chart.
                  • The helmfile.yaml file defines components to be installed by defining a path to Helm releases files.
                  • The envs/ci.yaml file contains stub parameters for CI linter.
                  • The test/lint-ci.sh script runs the CI linter with the debug log level and stub parameters.
                  • The resources/*.yaml files contain additional resources for the OpenShift platform.
                  "},{"location":"operator-guide/install-via-helmfile/#operate-helmfile","title":"Operate Helmfile","text":"

                  Before applying the Helmfile, please fill in the global parameters in the envs/platform.yaml (check the examples in the envs/ci.yaml) and releases/*.yaml files for every Helm deploy.

                  Pay attention to the following recommendations while working with the Helmfile:

                  • To launch Lint, run the test/lint-ci.sh script.
                  • Display the difference between the deployed and environment state (helm diff):
                    helmfile --environment platform -f helmfile.yaml diff\n
                  • Apply the deployment:
                    helmfile  --selector component=ingress --environment platform -f helmfile.yaml apply\n
                  • Modify the deployment and apply the changes:
                    helmfile  --selector component=ingress --environment platform -f helmfile.yaml sync\n
                  • To deploy the components according to the label, use the selector to target a subset of releases when running the Helmfile. It can be useful for large Helmfiles with the releases that are logically grouped together. For example, to display the difference only for the nginx-ingress file, use the following command:
                    helmfile  --selector component=ingress --environment platform -f helmfile.yaml diff\n
                  • To destroy the release, run the following command:
                    helmfile  --selector component=ingress --environment platform -f helmfile.yaml destroy\n
                  "},{"location":"operator-guide/install-via-helmfile/#deploy-components","title":"Deploy Components","text":"

                  Using the Helmfile, the following components can be installed:

                  • NGINX Ingress Controller
                  • Keycloak
                  • EPAM Delivery Platform
                  • Argo CD
                  • External Secrets Operator
                  • DefectDojo
                  • Moon
                  • ReportPortal
                  • Kiosk
                  • Monitoring stack, including Prometheus, Alertmanager, Grafana, and Prometheus Operator
                  • Logging ELK stack, including Elasticsearch, Fluent-bit, and Kibana
                  • Logging Grafana/Loki stack, including Grafana, Loki, Promtail, Logging Operator, and Logging Operator Logging
                  "},{"location":"operator-guide/install-via-helmfile/#deploy-nginx-ingress-controller","title":"Deploy NGINX Ingress Controller","text":"

                  Info

                  Skip this step for the OpenShift platform, because it has its own Ingress Controller.

                  To install NGINX Ingress controller, follow the steps below:

                  1. In the releases/nginx-ingress.yaml file, set the proxy-real-ip-cidr parameter according to the AWS VPC IPv4 CIDR value.

                  2. Install NGINX Ingress controller:

                    helmfile  --selector component=ingress --environment platform -f helmfile.yaml apply\n
                  "},{"location":"operator-guide/install-via-helmfile/#deploy-keycloak","title":"Deploy Keycloak","text":"

                  Keycloak requires a database deployment, so it has two charts: releases/keycloak.yaml and releases/postgresql-keycloak.yaml.

                  To install Keycloak, follow the steps below:

                  1. Create a security namespace:

                    Note

                    For the OpenShift users: This namespace is also indicated as users in the following custom SecurityContextConstraints resources: resources/keycloak-scc.yaml and resources/postgresql-keycloak-scc.yaml. Change the namespace name when using a custom namespace.

                    kubectl create namespace security\n
                  2. Create PostgreSQL admin secret:

                    kubectl -n security create secret generic keycloak-postgresql \\\n--from-literal=password=<postgresql_password> \\\n--from-literal=postgres-password=<postgresql_postgres_password>\n
                  3. In the envs/platform.yaml file, set the dnsWildCard parameter.

                  4. Create Keycloak admin secret:

                    kubectl -n security create secret generic keycloak-admin-creds \\\n--from-literal=username=<keycloak_admin_username> \\\n--from-literal=password=<keycloak_admin_password>\n
                  5. Install Keycloak:

                    helmfile  --selector component=sso --environment platform -f helmfile.yaml apply\n
                  "},{"location":"operator-guide/install-via-helmfile/#deploy-external-secrets-operator","title":"Deploy External Secrets Operator","text":"

                  To install External Secrets Operator, follow the steps below:

                  helmfile --selector component=secrets --environment platform -f helmfile.yaml apply\n
                  "},{"location":"operator-guide/install-via-helmfile/#deploy-kiosk","title":"Deploy Kiosk","text":"

                  To install Kiosk, follow the steps below:

                  helmfile --selector component=kiosk --environment platform -f helmfile.yaml apply\n
                  "},{"location":"operator-guide/install-via-helmfile/#deploy-epam-delivery-platform","title":"Deploy EPAM Delivery Platform","text":"

                  To install EDP, follow the steps below:

                  1. Create a platform namespace:

                    kubectl create namespace platform\n
                  2. For EDP, Keycloak access is required to perform the integration. Create a secret with the user and password provisioned in step 2 of the Keycloak Configuration section.

                    kubectl -n platform create secret generic keycloak \\\n  --from-literal=username=<username> \\\n  --from-literal=password=<password>\n
                  3. In the envs/platform.yaml file, set the edpName and keycloakEndpoint parameters.

                  4. In the releases/edp-install.yaml file, check and fill in all values.

                  5. Install EDP:

                    helmfile  --selector component=edp --environment platform -f helmfile.yaml apply\n
                  "},{"location":"operator-guide/install-via-helmfile/#deploy-argo-cd","title":"Deploy Argo CD","text":"

                  Before Argo CD deployment, install the following tools:

                  • Keycloak
                  • EDP

                  To install Argo CD, follow the steps below:

                  1. Install Argo CD:

                    For the OpenShift users:

                    When using a custom namespace for Argo CD, the argocd namespace is also indicated as users in the resources/argocd-scc.yaml custom SecurityContextConstraints resource. Change it there as well.

                    helmfile --selector component=argocd --environment platform -f helmfile.yaml apply\n
                  2. Update the argocd-secret secret in the Argo CD namespace by providing the correct Keycloak client secret (oidc.keycloak.clientSecret) with the value from the keycloak-client-argocd-secret secret in the EDP namespace. Then restart the deployment:

                    ARGOCD_CLIENT=$(kubectl -n platform get secret keycloak-client-argocd-secret  -o jsonpath='{.data.clientSecret}')\nkubectl -n argocd patch secret argocd-secret -p=\"{\\\"data\\\":{\\\"oidc.keycloak.clientSecret\\\": \\\"${ARGOCD_CLIENT}\\\"}}\" -v=1\nkubectl -n argocd rollout restart deployment argo-argocd-server\n
                  "},{"location":"operator-guide/install-via-helmfile/#deploy-defectdojo","title":"Deploy DefectDojo","text":"

                  Prerequisites

                  1. Before the DefectDojo deployment, first make sure to have the Keycloak configuration in place.

                  Info

                  It is also possible to install DefectDojo via Helm Chart. For details, please refer to the Install DefectDojo page.

                  To install DefectDojo via Helmfile, follow the steps below:

                  1. Create a DefectDojo namespace:

                    For the OpenShift users:

                    This namespace is also indicated as users in the resources/defectdojo-scc.yaml custom SecurityContextConstraints resource. Change it when using a custom namespace. Also, change the namespace in the resources/defectdojo-route.yaml file.

                    kubectl create namespace defectdojo\n
                  2. Modify the host in resources/defectdojo-route.yaml (only for OpenShift).

                  3. Create a PostgreSQL admin secret:

                    kubectl -n defectdojo create secret generic defectdojo-postgresql-specific \\\n--from-literal=postgresql-password=<postgresql_password> \\\n--from-literal=postgresql-postgres-password=<postgresql_postgres_password>\n

                    Note

                    The postgresql_password and postgresql_postgres_password passwords must be 16 characters long.

                  4. Create a RabbitMQ admin secret:

                    kubectl -n defectdojo create secret generic defectdojo-rabbitmq-specific \\\n--from-literal=rabbitmq-password=<rabbitmq_password> \\\n--from-literal=rabbitmq-erlang-cookie=<rabbitmq_erlang_cookie>\n

                    Note

                    The rabbitmq_password password must be 10 characters long.

                    The rabbitmq_erlang_cookie password must be 32 characters long.

                  5. Create a DefectDojo admin secret:

                    kubectl -n defectdojo create secret generic defectdojo \\\n--from-literal=DD_ADMIN_PASSWORD=<dd_admin_password> \\\n--from-literal=DD_SECRET_KEY=<dd_secret_key> \\\n--from-literal=DD_CREDENTIAL_AES_256_KEY=<dd_credential_aes_256_key> \\\n--from-literal=METRICS_HTTP_AUTH_PASSWORD=<metric_http_auth_password>\n

                    Note

                    The dd_admin_password password must be 22 characters long.

                    The dd_secret_key password must be 128 characters long.

                    The dd_credential_aes_256_key password must be 128 characters long.

                    The metric_http_auth_password password must be 32 characters long.

                  6. Create a Keycloak client secret for DefectDojo:

                    Note

                    The keycloak_client_secret value is received from: edpName-main realm -> clients -> defectdojo -> Credentials -> Client secret

                    kubectl -n defectdojo create secret generic defectdojo-extrasecrets \\\n--from-literal=DD_SOCIAL_AUTH_KEYCLOAK_SECRET=<keycloak_client_secret>\n
                  7. In the envs/platform.yaml file, set the dnsWildCard parameter.

                  8. In the releases/defectdojo.yaml file, check and fill in all values.

                  9. Install DefectDojo:

                    helmfile  --selector component=defectdojo --environment platform -f helmfile.yaml apply\n
                  "},{"location":"operator-guide/install-via-helmfile/#deploy-reportportal","title":"Deploy ReportPortal","text":"

                  Info

                  It is also possible to install ReportPortal via Helm Chart. For details, please refer to the Install ReportPortal page.

                  ReportPortal requires third-party deployments: RabbitMQ, ElasticSearch, PostgreSQL, MinIO.

                  To install third-party resources, follow the steps below:

                  1. Create a RabbitMQ admin secret:

                    kubectl -n report-portal create secret generic reportportal-rabbitmq-creds \\\n--from-literal=rabbitmq-password=<rabbitmq_password> \\\n--from-literal=rabbitmq-erlang-cookie=<rabbitmq_erlang_cookie>\n

                    Warning

                    The rabbitmq_password password must be 10 characters long.

                    The rabbitmq_erlang_cookie password must be 32 characters long.

                  2. Create a PostgreSQL admin secret:

                    kubectl -n report-portal create secret generic reportportal-postgresql-creds \\\n--from-literal=postgresql-password=<postgresql_password> \\\n--from-literal=postgresql-postgres-password=<postgresql_postgres_password>\n

                    Warning

                    The postgresql_password and postgresql_postgres_password passwords must be 16 characters long.

                  3. Create a MinIO admin secret:

                    kubectl -n report-portal create secret generic reportportal-minio-creds \\\n--from-literal=root-password=<root_password> \\\n--from-literal=root-user=<root_user>\n
                  4. In the envs/platform.yaml file, set the dnsWildCard and edpName parameters.

                    For the OpenShift users:

                    The namespace is also indicated as users in the following custom SecurityContextConstraints resources: resources/report-portal-elasticsearch-scc.yaml and resources/report-portal-third-party-resources-scc.yaml. Change the namespace name when using a custom namespace.

                  5. Install third-party resources:

                    helmfile --selector component=report-portal-third-party-resources --environment platform -f helmfile.yaml apply\n
                  6. After the rabbitmq pod gets the Running status, configure the RabbitMQ memory threshold:

                    kubectl -n report-portal exec -it rabbitmq-0 -- rabbitmqctl set_vm_memory_high_watermark 0.8\n

                  To install ReportPortal via Helmfile, follow the steps below:

                  For the OpenShift users:

                  1. The namespace is also indicated as users in the resources/report-portal-reportportal-scc.yaml custom SecurityContextConstraints resource. Change it when using a custom namespace.
                  2. Change the namespace in the following files: resources/report-portal-gateway/gateway-config-cm, resources/report-portal-gateway/gateway-deployment, resources/report-portal-gateway/gateway-route, and resources/report-portal-gateway/gateway-service.
                  3. Modify the host in resources/report-portal-gateway/gateway-route
                  helmfile --selector component=report-portal --environment platform -f helmfile.yaml apply\n

                  Note

                  For user access: default/1q2w3e. For admin access: superadmin/erebus. Please refer to the ReportPortal.io page for details.

                  "},{"location":"operator-guide/install-via-helmfile/#deploy-moon","title":"Deploy Moon","text":"

                  Moon is a browser automation solution compatible with Selenium, Cypress, Playwright, and Puppeteer that uses Kubernetes or OpenShift to launch browsers.

                  Note

                  Aerokube/Moon does not require third-party deployments.

                  Follow the steps below to deploy Moon:

                  1. Use the following command to install Moon:

                    helmfile --selector component=moon --environment platform -f helmfile.yaml apply\n
                  2. After the installation, open the Ingress Dashboard and check that SELENOID and SSE have the CONNECTED status.

                    Main board

                  3. In Moon, use the following command with the Ingress rule, for example, wd/hub:

                        curl -X POST 'http://<INGRESS_LINK>/wd/hub/session' -d '{\n                \"desiredCapabilities\":{\n                    \"browserName\":\"firefox\",\n                    \"version\": \"79.0\",\n                    \"platform\":\"ANY\",\n                    \"enableVNC\": true,\n                    \"name\": \"edp\",\n                    \"sessionTimeout\": \"480s\"\n                }\n            }'\n

                    See below the list of Moon Dashboard Ingress rules:

                    Moon Dashboard Ingress rules

                    After using the command above, the container will start, and the VNC viewer will be displayed on the Moon Dashboard:

                    VNC viewer with the container starting

                  "},{"location":"operator-guide/install-via-helmfile/#deploy-monitoring","title":"Deploy Monitoring","text":"

                  The monitoring stack includes Grafana, Prometheus, Alertmanager, and Karma-dashboard. To deploy it, follow the steps below:

                  1. Generate a token for Keycloak client:

                    Note

                    The token must be 32 characters long and include alphabetic and numeric symbols. For example, use the following command:

                    keycloak_client_secret=$(date +%s | sha256sum | base64 | head -c 32 ; echo)\n
                  2. Create a secret for the Keycloak client:

                    kubectl -n platform create secret generic keycloak-client-grafana \\\n--from-literal=clientSecret=<keycloak_client_secret>\n
                  3. Create a secret for Grafana:

                    kubectl -n monitoring create secret generic keycloak-client-grafana \\\n--from-literal=GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=<keycloak_client_secret>\n
                  4. Create a custom resource for the Keycloak client:

                    View: keycloak_client
                    apiVersion: v1.edp.epam.com/v1\nkind: KeycloakClient\nmetadata:\nname: grafana\nnamespace: platform\nspec:\nclientId: grafana\ndirectAccess: true\nserviceAccount:\nenabled: true\ntargetRealm: platform-main\nwebUrl: https://grafana-monitoring.<dnsWildCard>\nsecret: keycloak-client.grafana\n
                  5. Run command:

                    helmfile --selector component=monitoring --environment platform -f helmfile.yaml apply\n
                  "},{"location":"operator-guide/install-via-helmfile/#deploy-logging","title":"Deploy Logging","text":"ELK stackGrafana, Loki, Promtail stack

                  To install Elasticsearch, Kibana and Fluentbit, run command:

                  helmfile --selector component=logging-elastic --environment platform -f helmfile.yaml apply\n

                  To install Grafana, Loki, Promtail, follow the steps below:

                  1. Make sure that appropriate resources are created:

                    • Secret for the Keycloak client
                    • Secret for Grafana
                  2. Create a custom resource for the Keycloak client:

                    View: keycloak_client
                    apiVersion: v1.edp.epam.com/v1\nkind: KeycloakClient\nmetadata:\nname: grafana\nnamespace: platform\nspec:\nclientId: grafana-logging\ndirectAccess: true\nserviceAccount:\nenabled: true\ntargetRealm: platform-main\nwebUrl: https://grafana-logging.<dnsWildCard>\nsecret: keycloak-client.grafana\n
                  3. Run command:

                    helmfile --selector component=logging --environment platform -f helmfile.yaml apply\n
                  "},{"location":"operator-guide/install-via-helmfile/#related-articles","title":"Related Articles","text":"
                  • Install EDP
                  • Install NGINX Ingress Controller
                  • Install Keycloak
                  • Install DefectDojo
                  • Install ReportPortal
                  • Install Argo CD
                  "},{"location":"operator-guide/jira-gerrit-integration/","title":"Adjust VCS Integration With Jira","text":"

                  In order to adjust the Version Control System integration with Jira Server, first make sure you have the following prerequisites:

                  • VCS Server
                  • Jira
                  • Crucible

                  Once the prerequisites are checked, follow the steps below to proceed with the integration:

                  1. Integrate every project in VCS Server with every project in Crucible by creating a corresponding request in EPAM Support Portal. Add the repository links and fill in the Keep Informed field, as this request must be approved.

                    Request example

                  2. Provide additional details to the support team. If the VCS is Gerrit, inspect the sample of its integration below:

                    2.1 Create a new \"crucible-\" user in Gerrit with SSH key and add a new user to the \"Non-Interactive Users\" Gerrit group;

                    2.2 Create a new group in Gerrit \"crucible-watcher-group\" and add the \"crucible-\" user;

                    2.3 Provide access to All-Projects for the \"crucible-watcher-group\" group:

                    Gerrit All-Projects configuration

                    Gerrit All-Projects configuration

                  3. To link commits with a Jira ticket in Gerrit, enter the Jira ticket ID in the commit message using the following format:

                    [PROJECT-CODE-1234]: commit message

                    where PROJECT-CODE is the specific code of a project, 1234 is the ticket ID number, and the rest is the commit message.
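                    For example, a hypothetical commit could look like this (the project code and message are illustrative only):

                    git commit -m \"[PROJECT-CODE-1234]: Update the login form validation\"\n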

                  4. As a result, all Gerrit commits will be displayed on Crucible:

                    Crucible project

                  5. "},{"location":"operator-guide/jira-gerrit-integration/#related-articles","title":"Related Articles","text":"
                    • Adjust Jira Integration
                    "},{"location":"operator-guide/jira-integration/","title":"Adjust Jira Integration","text":"

                    This documentation guide provides step-by-step instructions for enabling the Jira integration option in the EDP Portal UI for EPAM Delivery Platform. Jira integration allows including useful metadata in Jira tickets.

                    "},{"location":"operator-guide/jira-integration/#overview","title":"Overview","text":"

                    Integrating Jira can provide a number of benefits, such as increased visibility and traceability, automatic linking of code changes to relevant Jira issues, and streamlined management and tracking of development progress.

                    By linking CI pipelines to Jira issues, teams can get a better understanding of the status of their work and how it relates to the overall development process. This can help to improve communication and collaboration, and ultimately lead to faster and more efficient delivery of software.

                    Enabling Jira integration allows for the automatic population of three fields in Jira tickets: Fix Versions, Components, and Labels. Each of these fields provides distinct benefits:

                    • Fix Versions: helps track progress against release schedules;
                    • Components: allows grouping related issues together;
                    • Labels: enables identification of specific types of work.

                    Teams can utilize these fields to enhance their work prioritization, identify dependencies, improve collaboration, and ultimately achieve faster software delivery.

                    "},{"location":"operator-guide/jira-integration/#integration-procedure","title":"Integration Procedure","text":"

                    In order to adjust the Jira server integration, add the JiraServer CR by performing the following:

                    1. Provision the secret using EDP Portal, Manifest or with the externalSecrets operator:

                      EDP PortalManifestExternal Secrets Operator

                      Go to EDP Portal -> EDP -> Configuration -> Jira. Update or fill in the URL, User, Password fields and click the Save button:

                      Jira update manual secret

                      apiVersion: v1\nkind: Secret\nmetadata:\nname: ci-jira\nnamespace: edp\nlabels:\napp.edp.epam.com/secret-type: jira\nstringData:\nurl: https://jira.example.com\nusername: username\npassword: password\n
                      \"ci-jira\":\n{\n\"url\": \"https://jira.example.com\",\n\"username\": \"username\",\n\"password\": \"password\"\n}\n
                    2. Create JiraServer CR in the OpenShift/Kubernetes namespace with the apiUrl, credentialName and rootUrl fields:

                      apiVersion: v2.edp.epam.com/v1\nkind: JiraServer\nmetadata:\nname: jira-server\nspec:\napiUrl: 'https://jira-api.example.com'\ncredentialName: ci-jira\nrootUrl: 'https://jira.example.com'\n

                      Note

                      The value of the credentialName property is the name of the Secret, which is indicated in the first point above.

                    3. In the EDP Portal UI, navigate to the Advanced Settings menu to check that the Integrate with Jira server check box appeared:

                      Advanced settings

                      Note

                      There are four predefined variables with the respective values that can be specified singly or as a combination:

                      • EDP_COMPONENT \u2013 returns application-name
                      • EDP_VERSION \u2013 returns 0.0.0-SNAPSHOT or 0.0.0-RC
                      • EDP_SEM_VERSION \u2013 returns 0.0.0
                      • EDP_GITTAG \u2013 returns build/0.0.0-SNAPSHOT.2 or build/0.0.0-RC.2

                      There are no character restrictions when combining the variables, combination samples: EDP_SEM_VERSION-EDP_COMPONENT or EDP_COMPONENT-hello-world/EDP_VERSION, etc.

                      As a result of successful Jira integration, the additional information will be added to tickets.

                    "},{"location":"operator-guide/jira-integration/#related-articles","title":"Related Articles","text":"
                    • Adjust VCS Integration With Jira
                    • Add Application
                    "},{"location":"operator-guide/kaniko-irsa/","title":"IAM Roles for Kaniko Service Accounts","text":"

                    Note

                    The information below is relevant in case ECR is used as Docker container registry. Make sure that IRSA is enabled and amazon-eks-pod-identity-webhook is deployed according to the Associate IAM Roles With Service Accounts documentation.

                    The \"build-image-kaniko\" stage manages ECR through IRSA that should be available on the cluster. Follow the steps below to create a required role:

                    1. Create AWS IAM Policy \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039EDP_NAMESPACE\u203aKaniko_policy\":

                      {\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n        \"Effect\": \"Allow\",\n        \"Action\": [\n            \"ecr:*\",\n            \"cloudtrail:LookupEvents\"\n        ],\n        \"Resource\": \"arn:aws:ecr:<AWS_REGION>:<AWS_ACCOUNT_ID>:repository/<EDP_NAMESPACE>/*\"\n    },\n    {\n        \"Effect\": \"Allow\",\n        \"Action\": \"ecr:GetAuthorizationToken\",\n        \"Resource\": \"*\"\n    },\n    {\n        \"Effect\": \"Allow\",\n        \"Action\": [\n            \"ecr:DescribeRepositories\",\n            \"ecr:CreateRepository\"\n        ],\n        \"Resource\": \"arn:aws:ecr:<AWS_REGION>:<AWS_ACCOUNT_ID>:repository/*\"\n    }\n  ]\n}\n
                    2. Create AWS IAM Role \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039EDP_NAMESPACE\u203aKaniko\" with trust relationships:

                      {\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Principal\": {\n        \"Federated\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>\"\n      },\n      \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n      \"Condition\": {\n        \"StringEquals\": {\n          \"<OIDC_PROVIDER>:sub\": \"system:serviceaccount:edp:edp-kaniko\"\n        }\n      }\n    }\n  ]\n}\n
                    3. Attach the \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039EDP_NAMESPACE\u203aKaniko_policy\" policy to the \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039EDP_NAMESPACE\u203aKaniko\" role.

                    4. Define the resulting role ARN value in the kaniko.roleArn parameter in values.yaml during the EDP installation.
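                      A hedged sketch of steps 3 and 4 is shown below; the account ID, cluster name, and namespace are placeholders, and the values.yaml fragment only illustrates where the kaniko.roleArn parameter goes:

                      # Step 3: attach the policy to the role\naws iam attach-role-policy \\\n--role-name AWSIRSA<CLUSTER_NAME><EDP_NAMESPACE>Kaniko \\\n--policy-arn arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSIRSA<CLUSTER_NAME><EDP_NAMESPACE>Kaniko_policy\n\n# Step 4: values.yaml fragment for the EDP installation\n# kaniko:\n#   roleArn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/AWSIRSA<CLUSTER_NAME><EDP_NAMESPACE>Kaniko\n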

                    "},{"location":"operator-guide/kaniko-irsa/#related-articles","title":"Related Articles","text":"
                    • Associate IAM Roles With Service Accounts
                    • Install EDP
                    "},{"location":"operator-guide/kibana-ilm-rollover/","title":"Aggregate Application Logs Using EFK Stack","text":"

                    This documentation describes the advantages of the EFK stack over the traditional ELK stack, explains the value this stack brings to EDP, and instructs how to set up the EFK stack to integrate the advanced logging system with your application.

                    "},{"location":"operator-guide/kibana-ilm-rollover/#elk-stack-overview","title":"ELK Stack Overview","text":"

                    The ELK (Elasticsearch, Logstash and Kibana) stack gives the ability to aggregate logs from all the managed systems and applications, analyze these logs and create visualizations for application and infrastructure monitoring, faster troubleshooting, security analytics and more.

                    Here is a brief description of the ELK stack default components:

                    • Beats family - The log shipping tools that convey logs from the source locations, such as Filebeat, Metricbeat, Packetbeat, etc. Beats can work instead of Logstash or along with it.
                    • Logstash - The log processing framework for collecting, processing, storing and searching logs.
                    • Elasticsearch - The distributed search and analytics engine based on the Lucene Java library.
                    • Kibana - The visualization engine that queries the data from Elasticsearch.

                    ELK Stack

                    "},{"location":"operator-guide/kibana-ilm-rollover/#efk-stack-overview","title":"EFK Stack Overview","text":"

                    We use the FEK (also called EFK) stack (Fluent Bit, Elasticsearch, Kibana) in Kubernetes instead of ELK because this stack provides support for Logsight for Stage Verification and Incident Detection. In addition, Fluent Bit has a smaller memory footprint than Logstash. Fluent Bit has Inputs, Parsers, Filters and Outputs plugins, similar to Logstash.

                    FEK Stack

                    "},{"location":"operator-guide/kibana-ilm-rollover/#automate-elasticsearch-index-rollover-with-ilm","title":"Automate Elasticsearch Index Rollover With ILM","text":"

                    In this guide, index rollover is automated with Index Lifecycle Management (ILM) in the FEK stack.

                    The resources can be created via the API using curl, Postman, or the Kibana Dev Tools console, or via the GUI. In this guide, they are created using Kibana Dev Tools.

                    1. Go to Management \u2192 Dev Tools in the Kibana dashboard:

                      Dev Tools

                    2. Create index lifecycle policy with the index rollover:

                      Note

                      This policy can also be created in GUI in Management \u2192 Stack Management \u2192 Index Lifecycle Policies.

                      Index Lifecycle has several phases: Hot, Warm, Cold, Frozen, Delete. Indices also have different priorities in each phase. The warmer the phase, the higher the priority is supposed to be, e.g., 100 for the hot phase, 50 for the warm phase, and 0 for the cold phase.

                      In this use case, only the Hot and Delete phases are configured. An index will be created, rolled over to a new index once it reaches 1 GB in size or one day in age, and deleted after 7 days. The rollover may not happen at exactly 1 GB because it depends on how often the rollover conditions are checked. By default, ILM checks the conditions every 10 minutes; this interval can be changed with the indices.lifecycle.poll_interval cluster setting.
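                      For reference, this interval can be adjusted via the cluster settings API, for example (a sketch; pick a value that suits your needs):

                      PUT _cluster/settings\n{\n\"persistent\": {\n\"indices.lifecycle.poll_interval\": \"10m\"\n}\n}\n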

                      The index lifecycle policy example:

                      Index Lifecycle Policy
                      PUT _ilm/policy/fluent-bit-policy\n{\n\"policy\": {\n\"phases\": {\n\"hot\": {\n\"min_age\": \"0ms\",\n\"actions\": {\n\"set_priority\": {\n\"priority\": 100\n},\n\"rollover\": {\n\"max_size\": \"1gb\",\n\"max_primary_shard_size\": \"1gb\",\n\"max_age\": \"1d\"\n}\n}\n},\n\"delete\": {\n\"min_age\": \"7d\",\n\"actions\": {\n\"delete\": {\n\"delete_searchable_snapshot\": true\n}\n}\n}\n}\n}\n}\n

                      Insert the code above into the Dev Tools and click the arrow to send the PUT request.

                    3. Create an index template so that a new index is created according to this template after the rollover:

                      Note

                      This policy can also be created in GUI in Management \u2192 Stack Management \u2192 Index Management \u2192 Index Templates.

                      Expand the menu below to see the index template example:

                      Index Template
                      PUT /_index_template/fluent-bit\n{\n\"index_patterns\": [\"fluent-bit-kube-*\"],\n\"template\": {\n\"settings\": {\n\"index\": {\n\"lifecycle\": {\n\"name\": \"fluent-bit-policy\",\n\"rollover_alias\": \"fluent-bit-kube\"\n},\n\"number_of_shards\": \"1\",\n\"number_of_replicas\": \"0\"\n}\n}\n}\n}\n

                      Note

                      • index.lifecycle.rollover_alias is required when using a policy containing the rollover action and specifies which alias to rollover on behalf of this index. The intention here is that the rollover alias is also defined on the index.
                      • number_of_shards is the quantity of the primary shards. Elasticsearch index is really just a logical grouping of one or more physical shards, where each shard is actually a self-contained index. By distributing the documents in an index across multiple shards and distributing those shards across multiple nodes, Elasticsearch can ensure redundancy, which both protects against hardware failures and increases query capacity as nodes are added to a cluster. As the cluster grows (or shrinks), Elasticsearch automatically migrates shards to re-balance the cluster. Please refer to the official documentation here.
                      • number_of_replicas is the number of replica shards. A replica shard is a copy of a primary shard. Elasticsearch will never assign a replica to the same node as the primary shard, so make sure you have more than one node in your Elasticsearch cluster if you need to use replica shards. The Elasticsearch cluster details and the quantity of nodes can be checked with:

                        GET _cluster/health\n

                      Since we use one node, the number_of_shards is 1 and number_of_replicas is 0. If you put more replicas within one node, your index will get the yellow status in Kibana, yet it will still work.

                    4. Create an empty index with write permissions:

                      Note

                      This index can also be created in GUI in Management \u2192 Stack Management \u2192 Index Management \u2192 Indices.

                      Index example with the date math format:

                      Index
                      # URI encoded /<fluent-bit-kube-{now/d}-000001>\nPUT /%3Cfluent-bit-kube-%7Bnow%2Fd%7D-000001%3E\n{\n\"aliases\": {\n\"fluent-bit-kube\": {\n\"is_write_index\": true\n}\n}\n}\n

                      The code above will create an index in the {index_name}-{current_date}-{rollover_index_increment} format. For example: fluent-bit-kube-2023.03.17-000001.

                      Please refer to the official documentation on the index rollover with Date Math here.

                      Note

                      It is also possible to use index pattern below if the date math format does not seem applicable:

                      Index

                      PUT fluent-bit-kube-000001\n{\n\"aliases\": {\n\"fluent-bit-kube\": {\n\"is_write_index\": true\n}\n}\n}\n

                      Check the status of the created index:

                      GET fluent-bit-kube*-000001/_ilm/explain\n
                    5. Configure Fluent Bit. Pay attention to the Elasticsearch Output plugin configuration.

                      The important fields in the [OUTPUT] section are Index fluent-bit-kube, since the index must have the same name as the Rollover Alias in Kibana, and Logstash_Format Off, since we use the rollover index pattern in Kibana that increments by 1.

                      ConfigMap example with Configuration Variables for HTTP_User and HTTP_Passwd:

                      ConfigMap fluent-bit
                      data:\nfluent-bit.conf: |\n[SERVICE]\nDaemon Off\nFlush 10\nLog_Level info\nParsers_File parsers.conf\nParsers_File custom_parsers.conf\nHTTP_Server On\nHTTP_Listen 0.0.0.0\nHTTP_Port 2020\nHealth_Check On\n\n[INPUT]\nName tail\nTag kube.*\nPath /var/log/containers/*.log\nParser docker\nMem_Buf_Limit 5MB\nSkip_Long_Lines Off\nRefresh_Interval 10\n[INPUT]\nName systemd\nTag host.*\nSystemd_Filter _SYSTEMD_UNIT=kubelet.service\nRead_From_Tail On\nStrip_Underscores On\n\n[FILTER]\nName                kubernetes\nMatch               kube.*\nKube_Tag_Prefix     kube.var.log.containers.\nKube_URL            https://kubernetes.default.svc:443\nKube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\nKube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token\nMerge_Log           Off\nMerge_Log_Key       log_processed\nK8S-Logging.Parser  On\nK8S-Logging.Exclude On\n[FILTER]\nName nest\nMatch kube.*\nOperation lift\nNested_under kubernetes\nAdd_prefix kubernetes.\n[FILTER]\nName modify\nMatch kube.*\nCopy kubernetes.container_name tags.container\nCopy log message\nCopy kubernetes.container_image tags.image\nCopy kubernetes.namespace_name tags.namespace\n[FILTER]\nName nest\nMatch kube.*\nOperation nest\nWildcard tags.*\nNested_under tags\nRemove_prefix tags.\n\n[OUTPUT]\nName            es\nMatch           kube.*\nIndex           fluent-bit-kube\nHost            elasticsearch-master\nPort            9200\nHTTP_User       ${ES_USER}\nHTTP_Passwd     ${ES_PASSWORD}\nLogstash_Format Off\nTime_Key       @timestamp\nType            flb_type\nReplace_Dots    On\nRetry_Limit     False\nTrace_Error     Off\n
                    6. Create index pattern (Data View starting from Kibana v8.0):

                      Go to Management \u2192 Stack Management \u2192 Kibana \u2192 Index patterns and create an index pattern with the fluent-bit-kube-* pattern:

                      Index Pattern

                    7. Check logs in Kibana. Navigate to Analytics \u2192 Discover:

                      Logs in Kibana

                      Note

                      In addition, in the top-right corner of the Discover window, there is a button called Inspect. Clicking on it will reveal the query that Kibana is sending to Elasticsearch. These queries can be used in Dev Tools.

                    8. Monitor the created indices:

                      GET _cat/indices/fluent-bit-kube-*\n

                      Note

                      Physically, the indices are located on the elasticsearch Kubernetes pod in /usr/share/elasticsearch/data/nodes/0/indices. It is recommended to back up indices only via snapshots.
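                      For reference, a snapshot repository can be registered via the Dev Tools console before taking snapshots. A minimal sketch, assuming a shared filesystem path that is registered in the path.repo setting of each Elasticsearch node (the repository name and location below are placeholders):

                      PUT _snapshot/fluent-bit-backup\n{\n\"type\": \"fs\",\n\"settings\": {\n\"location\": \"/usr/share/elasticsearch/backup\"\n}\n}\n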

                    We've configured the index rollover process. Now the index will be rolled over to a new one once it reaches the indicated size or time in the policy, and old indices will be removed according to the policy as well.

                    When you create an empty index that matches the pattern indicated in the index template, the index template attaches the rollover_alias with the fluent-bit-kube name, the policy, and other configured data. Then the Fluent Bit Elasticsearch output plugin sends logs to the fluent-bit-kube rollover alias. The index rollover process is managed by ILM, which increments the indices united by the rollover_alias and routes the log data to the latest index.

                    "},{"location":"operator-guide/kibana-ilm-rollover/#ilm-without-rollover-policy","title":"ILM Without Rollover Policy","text":"

                    It is also possible to manage the index lifecycle without rollover indicated in the policy. In this case, this section explains how to configure the indices so that they look like this: fluent-bit-kube-2023.03.18.

                    Note

                    The main drawback of this method is that the indices can be managed only by their creation date.

                    To manage index lifecycle without rollover policy, follow the steps below:

                    1. Create a Policy without rollover but with indices deletion:

                      Index Lifecycle Policy
                      PUT _ilm/policy/fluent-bit-policy\n{\n\"policy\": {\n\"phases\": {\n\"hot\": {\n\"min_age\": \"0ms\",\n\"actions\": {\n\"set_priority\": {\n\"priority\": 100\n}\n}\n},\n\"delete\": {\n\"min_age\": \"7d\",\n\"actions\": {\n\"delete\": {\n\"delete_searchable_snapshot\": true\n}\n}\n}\n}\n}\n}\n
                    2. Create an index template with the rollover_alias parameter:

                      Index Template
                      PUT /_index_template/fluent-bit\n{\n\"index_patterns\": [\"fluent-bit-kube-*\"],\n\"template\": {\n\"settings\": {\n\"index\": {\n\"lifecycle\": {\n\"name\": \"fluent-bit-policy\",\n\"rollover_alias\": \"fluent-bit-kube\"\n},\n\"number_of_shards\": \"1\",\n\"number_of_replicas\": \"0\"\n}\n}\n}\n}\n
                    3. Change the Fluent Bit [OUTPUT] config to this one:

                      ConfigMap fluent-bit
                      [OUTPUT]\nName            es\nMatch           kube.*\nHost            elasticsearch-master\nPort            9200\nHTTP_User       ${ES_USER}\nHTTP_Passwd     ${ES_PASSWORD}\nLogstash_Format On\nLogstash_Prefix fluent-bit-kube\nLogstash_DateFormat %Y.%m.%d\nTime_Key        @timestamp\nType            flb_type\nReplace_Dots    On\nRetry_Limit     False\nTrace_Error     On\n
                    4. Restart Fluent Bit pods.

                    Fluent Bit will produce a new index every day with the date in its name, for example, fluent-bit-kube-2023.03.18. Indices will be deleted according to the policy.

                    "},{"location":"operator-guide/kibana-ilm-rollover/#tips-on-fluent-bit-debugging","title":"Tips on Fluent Bit Debugging","text":"

                    If you experience a lot of difficulties when dealing with Fluent Bit, this section may help you.

                    Fluent Bit has docker images labelled -debug, e.g., cr.fluentbit.io/fluent/fluent-bit:2.0.9-debug.

                    Change that image in the Kubernetes Fluent Bit DaemonSet and add the Trace_Error On parameter to the [OUTPUT] section in the Fluent Bit configmap:

                    [OUTPUT]\nTrace_Error On\n

                    After adding the parameter above, you will start seeing more informative logs that will probably help you find the root cause of the problem.
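                    If you also need to switch the Fluent Bit DaemonSet to the debug image mentioned above, a command like the following can be used (a sketch; the DaemonSet name, container name, and namespace are assumptions, adjust them to your deployment):

                    kubectl -n logging set image daemonset/fluent-bit fluent-bit=cr.fluentbit.io/fluent/fluent-bit:2.0.9-debug\n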

                    "},{"location":"operator-guide/kibana-ilm-rollover/#related-articles","title":"Related Articles","text":"
                    • Index Lifecycle Management
                    • Elasticsearch Output
                    "},{"location":"operator-guide/kubernetes-cluster-settings/","title":"Set Up Kubernetes","text":"

                    Make sure the cluster meets the following conditions:

                    1. Kubernetes cluster is installed with a minimum of 2 worker nodes with a total capacity of 8 cores and 32 GB RAM.

                    2. A machine with kubectl is installed and has cluster-admin access to the Kubernetes cluster.

                    3. Ingress controller is installed in the cluster, for example, ingress-nginx.

                    4. Ingress controller is configured with the HTTP/2 protocol disabled and with support for a 64k header size.

                      Find below an example of the Config Map for the NGINX Ingress controller:

                      kind: ConfigMap\napiVersion: v1\nmetadata:\nname: nginx-configuration\nnamespace: ingress-nginx\nlabels:\napp.kubernetes.io/name: ingress-nginx\napp.kubernetes.io/part-of: ingress-nginx\ndata:\nclient-header-buffer-size: 64k\nlarge-client-header-buffers: 4 64k\nuse-http2: \"false\"\n
                    5. Load balancer (if any exists in front of the Ingress controller) is configured with session stickiness, the HTTP/2 protocol disabled, and support for a 32k header size.

                    6. Cluster nodes and pods have access to the cluster via external URLs. For instance, in AWS, add the VPC NAT gateway elastic IP to the security group of the cluster external load balancers.

                    7. Keycloak instance is installed. To get accurate information on how to install Keycloak, please refer to the Install Keycloak instruction.

                    8. Helm 3.10 or higher is installed on the installation machine with the help of the Installing Helm instruction.

                    9. Storage classes are used with the Retain Reclaim Policy and Delete Reclaim Policy.

                    10. We recommend using our storage class as the default storage class.

                      Info

                      By default, EDP uses the default Storage Class in a cluster. The EDP development team recommends using the following Storage Classes. See an example below.

                      Storage class templates with the Retain and Delete Reclaim Policies:

                      ebs-sc / gp3 / gp3-retain
                      apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\nname: ebs-sc\nannotations:\nstorageclass.kubernetes.io/is-default-class: 'true'\nallowedTopologies: []\nmountOptions: []\nprovisioner: ebs.csi.aws.com\nreclaimPolicy: Retain\nvolumeBindingMode: Immediate\n
                      kind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\nname: gp3\nannotations:\nstorageclass.kubernetes.io/is-default-class: 'true'\nallowedTopologies: []\nmountOptions: []\nprovisioner: ebs.csi.aws.com\nreclaimPolicy: Delete\nvolumeBindingMode: Immediate\nallowVolumeExpansion: true\n
                      kind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\nname: gp3-retain\nallowedTopologies: []\nmountOptions: []\nprovisioner: ebs.csi.aws.com\nreclaimPolicy: Retain\nvolumeBindingMode: Immediate\nallowVolumeExpansion: true\n
                    "},{"location":"operator-guide/kubernetes-cluster-settings/#related-articles","title":"Related Articles","text":"
                    • Install Amazon EBS CSI Driver
                    • Install NGINX Ingress Controller
                    • Install Keycloak
                    "},{"location":"operator-guide/logsight-integration/","title":"Logsight Integration","text":"

                    Logsight can be integrated with the CI/CD pipeline. It connects to log data sources, analyses collected logs, and evaluates deployment risk scores.

                    "},{"location":"operator-guide/logsight-integration/#overview","title":"Overview","text":"

                    To understand whether a microservice or a component is ready for deployment, EDP suggests analysing logs via Logsight to decide whether the deployment is risky.

                    Please find more about Logsight in the official documentation:

                    • Logsight key features and workflow
                    • Log analysis
                    • Stage verification
                    "},{"location":"operator-guide/logsight-integration/#logsight-as-a-quality-gate","title":"Logsight as a Quality Gate","text":"

                    Integration with Logsight allows enhancing and optimizing software releases by creating an additional quality gate.

                    Logsight can be configured in two ways:

                    • SaaS - an online system; a connection string is required for this solution.
                    • Self-deployment - local installation.

                    To work with Logsight, a new Deployment Risk stage must be added to the pipeline. On this stage, the logs are analysed with the help of Logsight mechanisms.

                    On the verification screen of Logsight, continuous verification of the application deployment can be monitored, and tests can be compared for detecting test flakiness.

                    For example, two versions of a microservice can be compared in order to detect critical differences. Risk score will be calculated for the state reached by version A and version B. Afterwards, the deployment risk will be calculated based on individual risk scores.

                    If the deployment failure risk is greater than a predefined threshold, the verification gate blocks the deployment from going to the target environment. In such case, the Deployment Risk stage of the pipeline is not passed, and additional attention is required. The exact log messages can be displayed in the Logsight verification screen, to help debug the problem.

                    "},{"location":"operator-guide/logsight-integration/#use-logsight-for-edp-development","title":"Use Logsight for EDP Development","text":"

                    Please find below the detailed description of Logsight integration with EDP.

                    "},{"location":"operator-guide/logsight-integration/#deployment-approach","title":"Deployment Approach","text":"

                    EDP uses Logsight in the self-deployment mode.

                    Logsight provides a deployment approach using Helm charts. Please find below the stack of components that must be deployed:

                    • logsight\u00a0- the core component.
                    • logsight-backend\u00a0- the backend that provides all necessary APIs and user management.
                    • logsight-frontend\u00a0- the frontend that provides the user interface.
                    • logsight-result-api\u00a0- responsible for obtaining results, for example, during the verification.

                    Below is a diagram of interaction when integrating the components:

                    Logsight Structure

                    "},{"location":"operator-guide/logsight-integration/#configure-fluentbit-for-sending-log-data","title":"Configure FluentBit for Sending Log Data","text":"

                    Logsight is integrated with the EDP logging stack. The integration is based on top of the EFK (ElasticSearch-FluentBit-Kibana) stack. It is necessary to deploy the stack with security support, namely, with certificate support enabled.

                    A FluentBit config indicates the namespace from which the logs will be received for further analysis. Below is an example of the FluentBit config for getting logs from the edp-delivery-edp-delivery-sit namespace:

                    View: fluent-bit.conf
                    [INPUT]\nName              tail\nTag               kube.sit.*\nPath              /var/log/containers/*edp-delivery-edp-delivery-sit*.log\nParser            docker\nMem_Buf_Limit     5MB\nSkip_Long_Lines   Off\nRefresh_Interval  10\n\n[FILTER]\nName                kubernetes\nMatch               kube.sit.*\nKube_URL            https://kubernetes.default.svc:443\nKube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\nKube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token\nKube_Tag_Prefix     kube.sit.var.log.containers.\nMerge_Log           Off\nK8S-Logging.Parser  On\nK8S-Logging.Exclude On\n\n[FILTER]\nName nest\nMatch kube.sit.*\nOperation lift\nNested_under kubernetes\nAdd_prefix kubernetes.\n\n[FILTER]\nName modify\nMatch kube.sit.*\nCopy kubernetes.container_name tags.container\nCopy log message\nCopy kubernetes.container_image tags.image\nCopy kubernetes.namespace_name tags.namespace\n\n[FILTER]\nName nest\nMatch kube.sit.*\nOperation nest\nWildcard kubernetes.*\nNested_under kubernetes\nRemove_prefix kubernetes.\n\n[OUTPUT]\nName            es\nMatch           kube.sit.*\nHost            elasticsearch-master\nPort            9200\nHTTP_User elastic\nHTTP_Passwd *****\nLogstash_Format On\nLogstash_Prefix sit\nTime_Key        @timestamp\nType            flb_type\nReplace_Dots    On\nRetry_Limit     False\n\n[OUTPUT]\nMatch kube.sit.*\nName  http\nHost logsight-backend\nPort 8080\nhttp_User logsight@example.com\nhttp_Passwd *****\nuri /api/v1/logs/singles\nFormat json\njson_date_format iso8601\njson_date_key timestamp\n
                    "},{"location":"operator-guide/logsight-integration/#deployment-risk-analysis","title":"Deployment Risk Analysis","text":"

                    A deployment-risk stage is added to the EDP CD pipeline.

                    Deployment Risk

                    If the deployment risk is above 70%, the red state of the pipeline is expected.

                    EDP consists of a set of containerized components. For the convenience of tracking the deployment risk trend for each component, this data is stored as Jenkins artifacts.

                    If the deployment risk is higher than the 70% threshold, the EDP promotion of artifacts to the next environments does not pass. The deployment risk report can be analysed in order to avoid potential problems with updating the components.

                    To study the report in detail, use the link from the Jenkins pipeline to the Logsight verification screen:

                    Logsight Insights Logsight Insights

                    In this example, logs from different versions of the gerrit-operator were analyzed. As can be seen from the report, a large number of new messages appeared in the logs, and the output frequency of other notifications has also changed, which led to the high deployment risk.

                    The environment on which the analysis is performed can exist for different periods of time. Logsight only processes the minimum total number of logs since the creation of the environment.

                    "},{"location":"operator-guide/logsight-integration/#related-articles","title":"Related Articles","text":"
                    • Customize CD Pipeline
                    • Adjust Jira Integration
                    "},{"location":"operator-guide/loki-irsa/","title":"IAM Roles for Loki Service Accounts","text":"

                    Note

                    Make sure that IRSA is enabled and amazon-eks-pod-identity-webhook is deployed according to the Associate IAM Roles With Service Accounts documentation.

                    It is possible to use Amazon Simple Storage Service (Amazon S3) as object storage for Loki. In this case, Loki requires access to AWS resources. Follow the steps below to create the required role:

                    1. Create AWS IAM Policy \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039LOKI_NAMESPACE\u203aLoki_policy\":

                      {\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"s3:ListObjects\",\n                \"s3:ListBucket\",\n                \"s3:PutObject\",\n                \"s3:GetObject\",\n                \"s3:DeleteObject\"\n            ],\n            \"Resource\": [\n                \"arn:aws:s3:::loki-*\"\n            ]\n        },\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"s3:ListBucket\"\n            ],\n            \"Resource\": [\n                \"arn:aws:s3:::loki-*\"\n            ]\n        }\n    ]\n}\n
                    2. Create AWS IAM Role \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039LOKI_NAMESPACE\u203aLoki\" with trust relationships:

                      {\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Principal\": {\n        \"Federated\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>\"\n      },\n      \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n      \"Condition\": {\n        \"StringEquals\": {\n          \"<OIDC_PROVIDER>:sub\": \"system:serviceaccount:<LOKI_NAMESPACE>:edp-loki\"\n       }\n     }\n   }\n ]\n}\n
                    3. Attach the \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039LOKI_NAMESPACE\u203aLoki_policy\" policy to the \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039LOKI_NAMESPACE\u203aLoki\" role.

                    4. Make sure that Amazon S3 bucket with name loki-\u2039CLUSTER_NAME\u203a exists.
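                      If the bucket does not exist yet, it can be created, for example, with the AWS CLI (a sketch; the region is a placeholder, and for us-east-1 the --create-bucket-configuration flag must be omitted):

                      aws s3api create-bucket --bucket loki-<CLUSTER_NAME> --region <AWS_REGION> \\\n--create-bucket-configuration LocationConstraint=<AWS_REGION>\n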

                    5. Provide key value eks.amazonaws.com/role-arn: \"arn:aws:iam:::role/AWSIRSA\u2039CLUSTER_NAME\u203a\u2039LOKI_NAMESPACE\u203aLoki\" into the serviceAccount.annotations parameter in values.yaml during the Loki Installation."},{"location":"operator-guide/loki-irsa/#related-articles","title":"Related Articles","text":"

                      • Associate IAM Roles With Service Accounts
                      • Install Grafana Loki
                      "},{"location":"operator-guide/manage-custom-certificate/","title":"Manage Custom Certificates","text":"

                      Familiarize yourself with the detailed instructions on adding certificates to EDP resources as well as with the respective setup for Keycloak.

                      EDP components that support custom certificates can be found in the table below:

                      Helm Chart Sub Resources admin-console-operator admin-console gerrit-operator edp-gerrit jenkins-operator jenkins-operator, edp-jenkins, jenkins agents sonar-operator sonar-operator, edp-sonar keycloak-operator keycloak-operator nexus-operator oauth2-proxy edp-install oauth2-proxy edp-headlamp edp-headlamp"},{"location":"operator-guide/manage-custom-certificate/#prerequisites","title":"Prerequisites","text":"
                      • The certificate in the *.crt format is used;
                      • Kubectl version 1.23.0 is installed;
                      • Helm version 3.10.2 is installed;
                      • Java with the keytool command inside;
                      • jq is installed.
                      "},{"location":"operator-guide/manage-custom-certificate/#enable-the-spi-truststore-of-keycloak","title":"Enable the SPI Truststore of Keycloak","text":"

                      To import custom certificates to Keycloak, follow the steps below:

                      1. Generate the cacerts local keystore and import the certificate there using the keytool tool:

                        keytool -importcert -file CA.crt \\\n-alias CA.crt -keystore ./cacerts \\\n-storepass changeit -trustcacerts \\\n-noprompt\n
                      2. Create the custom-keycloak-keystore keystore secret from the cacerts file in the security namespace:

                        kubectl -n security create secret generic custom-keycloak-keystore \\\n--from-file=./cacerts\n
                      3. Create the spi-truststore-data SPI truststore secret in the security namespace:

                        kubectl -n security create secret generic spi-truststore-data \\\n--from-literal=KC_SPI_TRUSTSTORE_FILE_FILE=/opt/keycloak/spi-certs/cacerts \\\n--from-literal=KC_SPI_TRUSTSTORE_FILE_PASSWORD=changeit\n
                      4. Update the Keycloak values.yaml file from the Install Keycloak page.

                        View: values.yaml
                        ...\nextraVolumeMounts: |\n...\n# Use the Keycloak truststore for SPI connection over HTTPS/TLS\n- name: spi-certificates\nmountPath: /opt/keycloak/spi-certs\nreadOnly: true\n...\n\nextraVolumes: |\n...\n# Use the Keycloak truststore for SPI connection over HTTPS/TLS\n- name: spi-certificates\nsecret:\nsecretName: custom-keycloak-keystore\ndefaultMode: 420\n...\n\n...\nextraEnvFrom: |\n- secretRef:\nname: spi-truststore-data\n...\n
                      "},{"location":"operator-guide/manage-custom-certificate/#enable-custom-certificates-in-edp-components","title":"Enable Custom Certificates in EDP Components","text":"

                      Creating custom certificates is necessary but not sufficient for them to take effect; the certificates must also be enabled in the components.

                      1. Create the custom-ca-certificates secret in the EDP namespace.

                        kubectl -n edp create secret generic custom-ca-certificates \\\n--from-file=CA.crt\n
                      2. Add the certificate by mounting the custom-ca-certificates secret to the operator pod as a volume.

                        Example of specifying custom certificates for the keycloak-operator:

                        ...\nkeycloak-operator:\nenabled: true\n\n# -- Additional volumes to be added to the pod\nextraVolumes:\n- name: custom-ca\nsecret:\ndefaultMode: 420\nsecretName: custom-ca-certificates\n\n# -- Additional volumeMounts to be added to the container\nextraVolumeMounts:\n- name: custom-ca\nmountPath: /etc/ssl/certs/CA.crt\nreadOnly: true\nsubPath: CA.crt\n...\n
                      3. For Sonar, Jenkins and Gerrit, change the flag in the caCerts.enabled field to true. Also, change the name of the secret in the caCerts.secret field to custom-ca-certificates.

                        Example of specifying custom certificates for Gerrit via the gerrit-operator helm chart values:

                        ...\ngerrit-operator:\nenabled: true\ngerrit:\ncaCerts:\n# -- Flag for enabling additional CA certificates\nenabled: true\n# -- Change init CA certificates container image\nimage: adoptopenjdk/openjdk11:alpine\n# -- Name of the secret containing additional CA certificates\nsecret: custom-ca-certificates\n...\n
                      "},{"location":"operator-guide/manage-custom-certificate/#integrate-custom-certificates-into-jenkins-agents","title":"Integrate Custom Certificates Into Jenkins Agents","text":"

                      This section describes how to add custom certificates to Jenkins agents to use them from Java applications.

                      Info

                      For example, curl doesn't use keystore files specified in this part of the documentation.

                      EDP Jenkins agents keep keystore files in two places:

                      • /etc/ssl/certs/java folder with the cacerts file;
                      • /opt/java/openjdk/lib/security folder with the blocked.certs, cacerts, default.policy and public_suffix_list.dat files.
                      1. Copy the files in the /etc/ssl/certs/java and /opt/java/openjdk/lib/security directories from the Jenkins agent pod to the local tmp folder. The copy_certs.sh script below can manage this. It copies the files from the /etc/ssl/certs/java and /opt/java/openjdk/lib/security directories of the Jenkins agent pod to the local tmp folder and imports the custom certificate into the keystore files, after which it creates the jenkins-agent-opt-java-openjdk-lib-security-cacerts and jenkins-agent-etc-ssl-certs-java-cacerts secrets from the updated keystore files in the EDP namespace. The jenkins-agent-opt-java-openjdk-lib-security-cacerts secret also contains three additional files: blocked.certs, default.policy and public_suffix_list.dat, which are managed by the copy_certs.sh script as well. Expand the drop-down button below to see the contents of the copy_certs.sh script.

                        View: copy_certs.sh
                        # Fill in the variables `ns` and `ca_file`\nns=\"edp-project\"\nca_file=\"/tmp/CA.crt\"\n\nimages=$(kubectl get -n \"${ns}\" cm jenkins-slaves -ojson | jq -r \".data[]\" | grep image\\> | sed 's/\\s*<.*>\\(.*\\)<.*>/\\1/')\n\nimage=$(for i in ${images[@]}; do echo $i; done | grep maven-java8)\npod_name=$(echo \"${image}\" | tr '.:/' '-')\n\noverrides=\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"name\\\":\\\"${pod_name}\\\", \\\"namespace\\\": \\\"${ns}\\\"},\n\\\"spec\\\":{\\\"containers\\\":[{\\\"name\\\":\\\"${pod_name}\\\",\\\"image\\\":\\\"${image}\\\",\n\\\"command\\\":[\\\"sh\\\",\\\"-c\\\",\\\"while true;do sleep 30;done;\\\"]}]}}\"\n\nkubectl run -n \"${ns}\" \"${pod_name}\" --image \"${image}\" --overrides=\"${overrides}\"\n\nkubectl wait --for=condition=ready pod \"${pod_name}\" -n \"${ns}\"\n\ncacerts_location=$(kubectl exec -n \"${ns}\" \"${pod_name}\" \\\n-- find / -name cacerts -exec ls -la \"{}\" \\; 2>/dev/null | grep -v ^l | awk '{print $9}')\n\nfor cacerts in ${cacerts_location[@]}; do echo $(dirname \"${cacerts}\"); kubectl exec -n \"${ns}\" \"${pod_name}\" -- ls $(dirname \"${cacerts}\"); done\n\nfor cacerts in ${cacerts_location[@]}; do \\\necho $(dirname \"${cacerts}\"); \\\nmkdir -p \"/tmp$(dirname \"${cacerts}\")\"; \\\nfrom_files=''; \\\nfor file in $(kubectl exec -n \"${ns}\" \"${pod_name}\" -- ls $(dirname \"${cacerts}\")); do \\\nkubectl exec -n \"${ns}\" \"${pod_name}\" -- cat \"$(dirname \"${cacerts}\")/${file}\" > \"/tmp$(dirname \"${cacerts}\")/${file}\"; \\\nfrom_files=\"${from_files} --from-file=/tmp$(dirname \"${cacerts}\")/${file}\"\ndone ; \\\nkeytool -import -storepass changeit -alias kubernetes -file ${ca_file} -noprompt -keystore \"/tmp${cacerts}\"; \\\nkubectl -n \"${ns}\" create secret generic \"jenkins-agent${cacerts//\\//-}\" $from_files \\\ndone\n\nkubectl delete -n \"${ns}\" pod \"${pod_name}\" --force --grace-period=0\n

                        Before using the copy_certs.sh script, keep in mind the following:

                        • assign actual values to the variables ns and ca_file;
                        • the script collects all the images from the jenkins-slaves ConfigMap and uses the image of the maven-java8 agent as the base image of the temporary pod to get the keystore files;
                        • custom certificate is imported using the keytool application;
                        • the jenkins-agent-opt-java-openjdk-lib-security-cacerts and jenkins-agent-etc-ssl-certs-java-cacerts secrets will be created in the EDP namespace.
                      2. Run the copy_certs.sh script from the previous point after the requirements are met.

                      3. Manually update the jenkins-slaves ConfigMap.

                        Add this block with the mount of secrets to the <volumes></volumes> block of each Jenkins agent:

                        ...\n        <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n<mountPath>/etc/ssl/certs/java</mountPath>\n<secretName>jenkins-agent-etc-ssl-certs-java-cacerts</secretName>\n</org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n<org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n<mountPath>/opt/java/openjdk/lib/security</mountPath>\n<secretName>jenkins-agent-opt-java-openjdk-lib-security-cacerts</secretName>\n</org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n...\n

                        As an example, the template of gradle-java11-template is shown below:

                        ...\n      </workspaceVolume>\n<volumes>\n<org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n<mountPath>/etc/ssl/certs/java</mountPath>\n<secretName>jenkins-agent-etc-ssl-certs-java-cacerts</secretName>\n</org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n<org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n<mountPath>/opt/java/openjdk/lib/security</mountPath>\n<secretName>jenkins-agent-opt-java-openjdk-lib-security-cacerts</secretName>\n</org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n</volumes>\n<containers>\n...\n
                      4. Reload the Jenkins pod:

                        ns=\"edp\"\nkubectl rollout restart -n \"${ns}\" deployment/jenkins\n
                      "},{"location":"operator-guide/manage-custom-certificate/#related-articles","title":"Related Articles","text":"
                      • Install EDP
                      • Install Keycloak
                      "},{"location":"operator-guide/manage-jenkins-cd-job-provision/","title":"Manage Jenkins CD Pipeline Job Provisioner","text":"

                      The Jenkins CD job provisioner (or seed-job) is used to create and manage the cd-pipeline folder, and its Deploy pipelines. There is a special job-provisions/cd folder in Jenkins for these provisioners. Explore the steps for managing different provisioner types below.

                      "},{"location":"operator-guide/manage-jenkins-cd-job-provision/#default","title":"Default","text":"

                      During the EDP deployment, a default provisioner is created to deploy applications with the container and custom deployment types.

                      1. Find the configuration in job-provisions/cd/default.

                      2. Default template is presented below:

                        View: Default template
                        /* Copyright 2022 EPAM Systems.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n\nSee the License for the specific language governing permissions and\nlimitations under the License. */\n\nimport groovy.json.*\nimport jenkins.model.Jenkins\n\nJenkins jenkins = Jenkins.instance\n\ndef pipelineName = \"${PIPELINE_NAME}-cd-pipeline\"\ndef stageName = \"${STAGE_NAME}\"\ndef qgStages = \"${QG_STAGES}\"\ndef gitServerCrVersion = \"${GIT_SERVER_CR_VERSION}\"\ndef gitCredentialsId = \"${GIT_CREDENTIALS_ID}\"\ndef sourceType = \"${SOURCE_TYPE}\"\ndef libraryURL = \"${LIBRARY_URL}\"\ndef libraryBranch = \"${LIBRARY_BRANCH}\"\ndef isAutoDeploy = \"${AUTODEPLOY}\"\ndef scriptPath = \"Jenkinsfile\"\ndef containerDeploymentType = \"container\"\ndef deploymentType = \"${DEPLOYMENT_TYPE}\"\ndef codebaseFolder = jenkins.getItem(pipelineName)\n\ndef autoDeploy = '{\"name\":\"auto-deploy-input\",\"step_name\":\"auto-deploy-input\"}'\ndef manualDeploy = '{\"name\":\"manual-deploy-input\",\"step_name\":\"manual-deploy-input\"}'\ndef runType = isAutoDeploy.toBoolean() ? autoDeploy : manualDeploy\n\ndef stages = buildStages(deploymentType, containerDeploymentType, qgStages, runType)\n\nif (codebaseFolder == null) {\nfolder(pipelineName)\n}\n\nif (deploymentType == containerDeploymentType) {\ncreateContainerizedCdPipeline(pipelineName, stageName, stages, scriptPath, sourceType,\nlibraryURL, libraryBranch, gitCredentialsId, gitServerCrVersion,\nisAutoDeploy)\n} else {\ncreateCustomCdPipeline(pipelineName, stageName)\n}\n\ndef buildStages(deploymentType, containerDeploymentType, qgStages, runType) {\nreturn deploymentType == containerDeploymentType\n? '[{\"name\":\"init\",\"step_name\":\"init\"},' + runType + ',{\"name\":\"deploy\",\"step_name\":\"deploy\"},' + qgStages + ',{\"name\":\"promote-images\",\"step_name\":\"promote-images\"}]'\n: ''\n}\n\ndef createContainerizedCdPipeline(pipelineName, stageName, stages, pipelineScript, sourceType, libraryURL, libraryBranch, libraryCredId, gitServerCrVersion, isAutoDeploy) {\npipelineJob(\"${pipelineName}/${stageName}\") {\nif (sourceType == \"library\") {\ndefinition {\ncpsScm {\nscm {\ngit {\nremote {\nurl(libraryURL)\ncredentials(libraryCredId)\n}\nbranches(\"${libraryBranch}\")\nscriptPath(\"${pipelineScript}\")\n}\n}\n}\n}\n} else {\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\nDeploy()\")\nsandbox(true)\n}\n}\n}\nproperties {\ndisableConcurrentBuilds()\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${stages}\", \"Consequence of stages in JSON format to be run during execution\")\n\nif (isAutoDeploy?.trim() && isAutoDeploy.toBoolean()) {\nstringParam(\"CODEBASE_VERSION\", null, \"Codebase versions to deploy.\")\n}\n}\n}\n}\n\ndef createCustomCdPipeline(pipelineName, stageName) {\npipelineJob(\"${pipelineName}/${stageName}\") {\nproperties {\ndisableConcurrentBuilds()\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\n}\n}\n}\n
                      "},{"location":"operator-guide/manage-jenkins-cd-job-provision/#custom","title":"Custom","text":"

                      In some cases, it is necessary to modify or update the job provisioner logic. For example, adding a new stage requires a custom job provisioner created on the basis of an existing out-of-the-box one. Take the steps below to add a custom job provisioner.

                      1. Navigate to the Jenkins main page and open the job-provisions/cd folder, click New Item and type the name of job provisions, for example - custom.

                        CD provisioner name

                        Scroll down to the Copy from field, enter \"/job-provisions/cd/default\", and click OK: Copy CD provisioner

                      2. Update the required parameters in the new provisioner. For example, if it is necessary to implement a new stage clean, add the following code to the provisioner:

                           def buildStages(deploymentType, containerDeploymentType, qgStages) {\n       return deploymentType == containerDeploymentType\n? '[{\"name\":\"init\",\"step_name\":\"init\"},{\"name\":\"clean\",\"step_name\":\"clean\"},{\"name\":\"deploy\",\"step_name\":\"deploy\"},' + qgStages + ',{\"name\":\"promote-images-ecr\",\"step_name\":\"promote-images\"}]'\n: ''\n}\n

                        Note

                        Make sure the support for the above mentioned logic is implemented. Please refer to the How to Redefine or Extend the EDP Pipeline Stages Library section of the guide.

                        After the steps above are performed, the new custom job provisioner will be available in Adding Stage during the CD pipeline creation in Admin Console.

                        Custom CD provision

                      "},{"location":"operator-guide/manage-jenkins-ci-job-provision/","title":"Manage Jenkins CI Pipeline Job Provisioner","text":"

                      The Jenkins CI job provisioner (or seed-job) is used to create and manage the application folder, and its Code Review, Build and Create Release pipelines. Depending on the version control system, different job provisioners are used. EDP supports integration with the following version control systems:

                      • Gerrit (default)
                      • GitHub (github)
                      • GitLab (gitlab)

                      By default, the Jenkins operator creates pipelines for several types of applications and libraries. There is a special job-provisions/ci folder in Jenkins for these provisioners. During the EDP deployment, a default provisioner is created for integration with the Gerrit version control system. To configure integration with other version control systems, add the required job provisioners to the job-provisions/ci folder in Jenkins.

                      "},{"location":"operator-guide/manage-jenkins-ci-job-provision/#create-custom-provisioner-custom-defaultgithubgitlab","title":"Create Custom Provisioner (custom-default/github/gitlab)","text":"

                      In some cases, it is necessary to modify or update the job provisioner logic; for example, adding another code language requires creating a custom job provisioner on the basis of an existing out-of-the-box one. Take the steps below to add a custom job provisioner:

                      1. Navigate to the Jenkins main page and open the job-provisions/ci folder, click New Item and type the name of job-provisions, for example - custom-github.

                        CI provisioner name

                        Scroll down to the Copy from field and enter \"/job-provisions/ci/github\", and click OK: Copy ci provisioner

                      2. Update the required parameters in the new provisioner. For example, if it is necessary to implement a new docker build tool, several parameters have to be updated. Add the following stages to the Code Review and Build pipelines for a docker application:

                        stages['Code-review-application-docker'] = '[{\"name\": \"checkout\"},{\"name\": \"lint\"},{\"name\": \"build\"}]'\n...\nstages['Build-application-docker'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"lint\"},{\"name\": \"build\"},{\"name\": \"push\"},{\"name\": \"git-tag\"}]'\n...\ndef getStageKeyName(buildTool) {\n    ...\n    if (buildTool.toString().equalsIgnoreCase('docker')) {\n    return \"Code-review-application-docker\"\n}\n    ...\n}\n

                        Note

                        Make sure the support for the above mentioned logic is implemented. Please refer to the How to Redefine or Extend the EDP Pipeline Stages Library section of the guide.

                        Note

                        The default template should be changed if there is another creation logic for the Code Review, Build and Create Release pipelines. Furthermore, all pipeline types should have the necessary stages as well.

                        After the steps above are performed, the new custom job provisioner will be available in Advanced Settings during the application creation in the EDP Portal UI:

                        Custom ci provision

                      "},{"location":"operator-guide/manage-jenkins-ci-job-provision/#gerrit-default","title":"Gerrit (default)","text":"

                      During the EDP deployment, a default provisioner is created for integration with Gerrit version control system.

                      1. Find the configuration in job-provisions/ci/default.

                      2. Default template is presented below:

                        View: Default template
                        /* Copyright 2022 EPAM Systems.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n\nSee the License for the specific language governing permissions and\nlimitations under the License. */\n\nimport groovy.json.*\nimport jenkins.model.Jenkins\n\nJenkins jenkins = Jenkins.instance\ndef stages = [:]\ndef jiraIntegrationEnabled = Boolean.parseBoolean(\"${JIRA_INTEGRATION_ENABLED}\" as String)\ndef commitValidateStage = jiraIntegrationEnabled ? ',{\"name\": \"commit-validate\"}' : ''\ndef createJIMStage = jiraIntegrationEnabled ? ',{\"name\": \"create-jira-issue-metadata\"}' : ''\ndef platformType = \"${PLATFORM_TYPE}\"\ndef buildStage = platformType.toString() == \"kubernetes\" ? ',{\"name\": \"build-image-kaniko\"}' : ',{\"name\": \"build-image-from-dockerfile\"}'\ndef buildTool = \"${BUILD_TOOL}\"\ndef goBuildStage = buildTool.toString() == \"go\" ? ',{\"name\": \"build\"}' : ',{\"name\": \"compile\"}'\n\nstages['Code-review-application'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" + goBuildStage +\n',{\"name\": \"tests\"},[{\"name\": \"sonar\"},{\"name\": \"dockerfile-lint\"},{\"name\": \"helm-lint\"}]]'\nstages['Code-review-library'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"compile\"},{\"name\": \"tests\"},{\"name\": \"sonar\"}]'\nstages['Code-review-autotests'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"},{\"name\": \"sonar\"}' + ']'\nstages['Code-review-default'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" + ']'\nstages['Code-review-library-terraform'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"terraform-lint\"}]'\nstages['Code-review-library-opa'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"}]'\nstages['Code-review-library-codenarc'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"sonar\"},{\"name\": \"build\"}]'\nstages['Code-review-library-kaniko'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"dockerbuild-verify\"}]'\n\nstages['Build-library-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-npm'] = stages['Build-library-maven']\nstages['Build-library-gradle'] = stages['Build-library-maven']\nstages['Build-library-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-terraform'] = '[{\"name\": \"checkout\"},{\"name\": 
\"get-version\"},{\"name\": \"terraform-lint\"}' +\n',{\"name\": \"terraform-plan\"},{\"name\": \"terraform-apply\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-opa'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"tests\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-codenarc'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' +\n\"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-kaniko'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"build-image-kaniko\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Build-autotests-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-autotests-gradle'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Build-application-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sast\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildStage}\" +\n',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-npm'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildStage}\" +\n',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-gradle'] = stages['Build-application-maven']\nstages['Build-application-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"}' + \"${buildStage}\" +\n',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-go'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sast\"},{\"name\": \"tests\"}' +\n',{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildStage}\" +\n\"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"}' +\n\"${buildStage}\" + ',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Create-release'] = '[{\"name\": \"checkout\"},{\"name\": \"create-branch\"},{\"name\": \"trigger-job\"}]'\n\ndef defaultBuild = '[{\"name\": \"checkout\"}' + \"${createJIMStage}\" + ']'\n\ndef codebaseName = \"${NAME}\"\ndef gitServerCrName = \"${GIT_SERVER_CR_NAME}\"\ndef gitServerCrVersion = \"${GIT_SERVER_CR_VERSION}\"\ndef gitCredentialsId = \"${GIT_CREDENTIALS_ID ? 
GIT_CREDENTIALS_ID : 'gerrit-ciuser-sshkey'}\"\ndef repositoryPath = \"${REPOSITORY_PATH}\"\ndef defaultBranch = \"${DEFAULT_BRANCH}\"\n\ndef codebaseFolder = jenkins.getItem(codebaseName)\nif (codebaseFolder == null) {\nfolder(codebaseName)\n}\n\ncreateListView(codebaseName, \"Releases\")\ncreateReleasePipeline(\"Create-release-${codebaseName}\", codebaseName, stages[\"Create-release\"], \"CreateRelease\",\nrepositoryPath, gitCredentialsId, gitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, platformType, defaultBranch)\n\nif (buildTool.toString().equalsIgnoreCase('none')) {\nreturn true\n}\n\nif (BRANCH) {\ndef branch = \"${BRANCH}\"\ndef formattedBranch = \"${branch.toUpperCase().replaceAll(/\\//, \"-\")}\"\ncreateListView(codebaseName, formattedBranch)\n\ndef type = \"${TYPE}\"\ndef crKey = getStageKeyName(buildTool)\ncreateCiPipeline(\"Code-review-${codebaseName}\", codebaseName, stages[crKey], \"CodeReview\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion)\n\ndef buildKey = \"Build-${type}-${buildTool.toLowerCase()}\".toString()\nif (type.equalsIgnoreCase('application') || type.equalsIgnoreCase('library') || type.equalsIgnoreCase('autotests')) {\ndef jobExists = false\nif(\"${formattedBranch}-Build-${codebaseName}\".toString() in Jenkins.instance.getAllItems().collect{it.name})\njobExists = true\n\ncreateCiPipeline(\"Build-${codebaseName}\", codebaseName, stages.get(buildKey, defaultBuild), \"Build\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion)\n\nif(!jobExists)\nqueue(\"${codebaseName}/${formattedBranch}-Build-${codebaseName}\")\n}\n}\n\ndef createCiPipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId, watchBranch, gitServerCrName, gitServerCrVersion) {\npipelineJob(\"${codebaseName}/${watchBranch.toUpperCase().replaceAll(/\\//, \"-\")}-${pipelineName}\") {\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\ntriggers {\ngerrit {\nevents {\nif (pipelineName.contains(\"Build\"))\nchangeMerged()\nelse\npatchsetCreated()\n}\nproject(\"plain:${codebaseName}\", [\"plain:${watchBranch}\"])\n}\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${codebaseStages}\", \"Consequence of stages in JSON format to be run during execution\")\nstringParam(\"GERRIT_PROJECT_NAME\", \"${codebaseName}\", \"Gerrit project name(Codebase name) to be build\")\nstringParam(\"BRANCH\", \"${watchBranch}\", \"Branch to build artifact from\")\n}\n}\n}\n}\n\ndef getStageKeyName(buildTool) {\nif (buildTool.toString().equalsIgnoreCase('terraform')) {\nreturn \"Code-review-library-terraform\"\n}\nif (buildTool.toString().equalsIgnoreCase('opa')) {\nreturn \"Code-review-library-opa\"\n}\nif (buildTool.toString().equalsIgnoreCase('codenarc')) {\nreturn \"Code-review-library-codenarc\"\n}\nif (buildTool.toString().equalsIgnoreCase('kaniko')) {\nreturn \"Code-review-library-kaniko\"\n}\ndef buildToolsOutOfTheBox = [\"maven\",\"npm\",\"gradle\",\"dotnet\",\"none\",\"go\",\"python\"]\ndef supBuildTool = buildToolsOutOfTheBox.contains(buildTool.toString())\nreturn supBuildTool ? 
\"Code-review-${TYPE}\" : \"Code-review-default\"\n}\n\ndef createReleasePipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId,\ngitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, platformType, defaultBranch) {\npipelineJob(\"${codebaseName}/${pipelineName}\") {\nlogRotator {\nnumToKeep(14)\ndaysToKeep(30)\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"STAGES\", \"${codebaseStages}\", \"\")\nif (pipelineName.contains(\"Create-release\")) {\nstringParam(\"JIRA_INTEGRATION_ENABLED\", \"${jiraIntegrationEnabled}\", \"Is Jira integration enabled\")\nstringParam(\"PLATFORM_TYPE\", \"${platformType}\", \"Platform type\")\nstringParam(\"GERRIT_PROJECT\", \"${codebaseName}\", \"\")\nstringParam(\"RELEASE_NAME\", \"\", \"Name of the release(branch to be created)\")\nstringParam(\"COMMIT_ID\", \"\", \"Commit ID that will be used to create branch from for new release. If empty, DEFAULT_BRANCH will be used\")\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"REPOSITORY_PATH\", \"${repository}\", \"Full repository path\")\nstringParam(\"DEFAULT_BRANCH\", \"${defaultBranch}\", \"Default repository branch\")\n}\n}\n}\n}\n}\n\ndef createListView(codebaseName, branchName) {\nlistView(\"${codebaseName}/${branchName}\") {\nif (branchName.toLowerCase() == \"releases\") {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^Create-release.*\")\n}\n}\n} else {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^${branchName}-(Code-review|Build).*\")\n}\n}\n}\ncolumns {\nstatus()\nweather()\nname()\nlastSuccess()\nlastFailure()\nlastDuration()\nbuildButton()\n}\n}\n}\n

                        Job Provision Pipeline Parameters

                        The job-provisions pipeline uses the following parameters of type string (a sample manual trigger request is sketched after the list):

                      • NAME - the application name;
                      • TYPE - the codebase type (application / library / autotests);
                      • BUILD_TOOL - a tool that is used to build the application;
                      • BRANCH - a branch name;
                      • GIT_SERVER_CR_NAME - the name of the application Git server custom resource;
                      • GIT_SERVER_CR_VERSION - the version of the application Git server custom resource;
                      • GIT_CREDENTIALS_ID - the secret name where Git server credentials are stored (default 'gerrit-ciuser-sshkey');
                      • REPOSITORY_PATH - the full repository path;
                      • JIRA_INTEGRATION_ENABLED - indicates whether the Jira integration is enabled;
                      • PLATFORM_TYPE - the type of platform (kubernetes or openshift);
                      • DEFAULT_BRANCH - the default repository branch.
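
                        In a typical flow, these parameters are passed automatically when a codebase is created, so the job-provisions job is not usually started by hand. For manual testing, however, it can also be triggered through the Jenkins REST API. The sketch below is hypothetical: the Jenkins URL, credentials, provisioner name, and parameter values are placeholders, not values defined in this guide.

                          # Manually trigger a job provisioner with the parameters described above (all values are placeholders)
                          curl -X POST "https://jenkins.example.com/job/job-provisions/job/ci/job/<provisioner_name>/buildWithParameters" \
                            --user "<jenkins-user>:<api-token>" \
                            --data-urlencode "NAME=my-app" \
                            --data-urlencode "TYPE=application" \
                            --data-urlencode "BUILD_TOOL=maven" \
                            --data-urlencode "BRANCH=master" \
                            --data-urlencode "GIT_SERVER_CR_NAME=gerrit" \
                            --data-urlencode "GIT_SERVER_CR_VERSION=v2" \
                            --data-urlencode "GIT_CREDENTIALS_ID=gerrit-ciuser-sshkey" \
                            --data-urlencode "REPOSITORY_PATH=ssh://jenkins@gerrit:29418/my-app" \
                            --data-urlencode "JIRA_INTEGRATION_ENABLED=false" \
                            --data-urlencode "PLATFORM_TYPE=kubernetes" \
                            --data-urlencode "DEFAULT_BRANCH=master"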
                      "},{"location":"operator-guide/manage-jenkins-ci-job-provision/#github-github","title":"GitHub (github)","text":"

                      To create a new job provision for work with GitHub, take the following steps:

                      1. Navigate to the Jenkins main page and open the job-provisions/ci folder.

                      2. Click New Item and type the job provision name: github.

                      3. Select the Freestyle project option and click OK.

                      4. Select the Discard old builds check box and configure a few parameters:

                        Strategy: Log Rotation

                        Days to keep builds: 10

                        Max # of builds to keep: 10

                      5. Select the This project is parameterized check box and add the following input parameters (of type string):

                        • NAME;
                        • TYPE;
                        • BUILD_TOOL;
                        • BRANCH;
                        • GIT_SERVER_CR_NAME;
                        • GIT_SERVER_CR_VERSION;
                        • GIT_CREDENTIALS_ID;
                        • REPOSITORY_PATH;
                        • JIRA_INTEGRATION_ENABLED;
                        • PLATFORM_TYPE;
                        • DEFAULT_BRANCH.
                      6. Check the Execute concurrent builds if necessary option.

                      7. Check the Restrict where this project can be run option.

                      8. Fill in the Label Expression field by typing master to ensure the job runs on the Jenkins master node.

                      9. In the Build section, perform the following:

                        • Select DSL Script;
                        • Select the Use the provided DSL script check box:

                        DSL script check box

                      10. As soon as all the steps above are performed, insert the code:

                        View: Template
                        import groovy.json.*\nimport jenkins.model.Jenkins\nimport javaposse.jobdsl.plugin.*\nimport com.cloudbees.hudson.plugins.folder.*\n\nJenkins jenkins = Jenkins.instance\ndef stages = [:]\ndef jiraIntegrationEnabled = Boolean.parseBoolean(\"${JIRA_INTEGRATION_ENABLED}\" as String)\ndef commitValidateStage = jiraIntegrationEnabled ? ',{\"name\": \"commit-validate\"}' : ''\ndef createJIMStage = jiraIntegrationEnabled ? ',{\"name\": \"create-jira-issue-metadata\"}' : ''\ndef platformType = \"${PLATFORM_TYPE}\"\ndef buildStage = platformType == \"kubernetes\" ? ',{\"name\": \"build-image-kaniko\"}' : ',{\"name\": \"build-image-from-dockerfile\"}'\ndef buildTool = \"${BUILD_TOOL}\"\ndef goBuildStage = buildTool.toString() == \"go\" ? ',{\"name\": \"build\"}' : ',{\"name\": \"compile\"}'\n\nstages['Code-review-application'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" + goBuildStage +\n',{\"name\": \"tests\"},[{\"name\": \"sonar\"},{\"name\": \"dockerfile-lint\"},{\"name\": \"helm-lint\"}]]'\nstages['Code-review-library'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"compile\"},{\"name\": \"tests\"},' +\n'{\"name\": \"sonar\"}]'\nstages['Code-review-autotests'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"},{\"name\": \"sonar\"}' + ']'\nstages['Code-review-default'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" + ']'\nstages['Code-review-library-terraform'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"terraform-lint\"}]'\nstages['Code-review-library-opa'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"}]'\nstages['Code-review-library-codenarc'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"sonar\"},{\"name\": \"build\"}]'\nstages['Code-review-library-kaniko'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"dockerbuild-verify\"}]'\n\nstages['Build-library-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-npm'] = stages['Build-library-maven']\nstages['Build-library-gradle'] = stages['Build-library-maven']\nstages['Build-library-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-terraform'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"terraform-lint\"}' +\n',{\"name\": \"terraform-plan\"},{\"name\": \"terraform-apply\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-opa'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"tests\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-codenarc'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' +\n\"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-kaniko'] = '[{\"name\": 
\"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"build-image-kaniko\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Build-autotests-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-autotests-gradle'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Build-application-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sast\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildStage}\" +\n',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-npm'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildStage}\" +\n',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-gradle'] = stages['Build-application-maven']\nstages['Build-application-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"}' + \"${buildStage}\" +\n',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-go'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sast\"},{\"name\": \"tests\"},{\"name\": \"sonar\"},' +\n'{\"name\": \"build\"}' + \"${buildStage}\" + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},{\"name\": \"tests\"},{\"name\": \"sonar\"}' +\n\"${buildStage}\" + ',{\"name\":\"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Create-release'] = '[{\"name\": \"checkout\"},{\"name\": \"create-branch\"},{\"name\": \"trigger-job\"}]'\n\ndef buildToolsOutOfTheBox = [\"maven\",\"npm\",\"gradle\",\"dotnet\",\"none\",\"go\",\"python\"]\ndef defaultStages = '[{\"name\": \"checkout\"}' + \"${createJIMStage}\" + ']'\n\ndef codebaseName = \"${NAME}\"\ndef gitServerCrName = \"${GIT_SERVER_CR_NAME}\"\ndef gitServerCrVersion = \"${GIT_SERVER_CR_VERSION}\"\ndef gitCredentialsId = \"${GIT_CREDENTIALS_ID ? 
GIT_CREDENTIALS_ID : 'gerrit-ciuser-sshkey'}\"\ndef repositoryPath = \"${REPOSITORY_PATH.replaceAll(~/:\\d+\\\\//,\"/\")}\"\ndef githubRepository = \"https://${repositoryPath.split(\"@\")[1]}\"\ndef defaultBranch = \"${DEFAULT_BRANCH}\"\n\ndef codebaseFolder = jenkins.getItem(codebaseName)\nif (codebaseFolder == null) {\n    folder(codebaseName)\n}\n\ncreateListView(codebaseName, \"Releases\")\ncreateReleasePipeline(\"Create-release-${codebaseName}\", codebaseName, stages[\"Create-release\"], \"CreateRelease\",\n        repositoryPath, gitCredentialsId, gitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, platformType, defaultBranch)\n\nif (buildTool.toString().equalsIgnoreCase('none')) {\n    return true\n}\n\nif (BRANCH) {\n    def branch = \"${BRANCH}\"\n    def formattedBranch = \"${branch.toUpperCase().replaceAll(/\\\\//, \"-\")}\"\ncreateListView(codebaseName, formattedBranch)\n\ndef type = \"${TYPE}\"\ndef supBuildTool = buildToolsOutOfTheBox.contains(buildTool.toString())\ndef crKey = getStageKeyName(buildTool).toString()\ncreateCodeReviewPipeline(\"Code-review-${codebaseName}\", codebaseName, stages.get(crKey, defaultStages), \"CodeReview\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion, githubRepository)\nregisterWebHook(repositoryPath)\n\n\ndef buildKey = \"Build-${type}-${buildTool.toLowerCase()}\".toString()\n\nif (type.equalsIgnoreCase('application') || type.equalsIgnoreCase('library') || type.equalsIgnoreCase('autotests')) {\ndef jobExists = false\nif(\"${formattedBranch}-Build-${codebaseName}\".toString() in Jenkins.instance.getAllItems().collect{it.name})\njobExists = true\ncreateBuildPipeline(\"Build-${codebaseName}\", codebaseName, stages.get(buildKey, defaultStages), \"Build\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion, githubRepository)\nregisterWebHook(repositoryPath, 'build')\n\nif(!jobExists)\nqueue(\"${codebaseName}/${formattedBranch}-Build-${codebaseName}\")\n}\n}\n\ndef getStageKeyName(buildTool) {\nif (buildTool.toString().equalsIgnoreCase('terraform')) {\nreturn \"Code-review-library-terraform\"\n}\nif (buildTool.toString().equalsIgnoreCase('opa')) {\nreturn \"Code-review-library-opa\"\n}\nif (buildTool.toString().equalsIgnoreCase('codenarc')) {\nreturn \"Code-review-library-codenarc\"\n}\nif (buildTool.toString().equalsIgnoreCase('kaniko')) {\nreturn \"Code-review-library-kaniko\"\n}\ndef buildToolsOutOfTheBox = [\"maven\",\"npm\",\"gradle\",\"dotnet\",\"none\",\"go\",\"python\"]\ndef supBuildTool = buildToolsOutOfTheBox.contains(buildTool.toString())\nreturn supBuildTool ? 
\"Code-review-${TYPE}\" : \"Code-review-default\"\n}\n\ndef createCodeReviewPipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId, defaultBranch, gitServerCrName, gitServerCrVersion, githubRepository) {\npipelineJob(\"${codebaseName}/${defaultBranch.toUpperCase().replaceAll(/\\\\//, \"-\")}-${pipelineName}\") {\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${codebaseStages}\", \"Consequence of stages in JSON format to be run during execution\")\nstringParam(\"GERRIT_PROJECT_NAME\", \"${codebaseName}\", \"Gerrit project name(Codebase name) to be build\")\nif (pipelineName.contains(\"Build\"))\nstringParam(\"BRANCH\", \"${defaultBranch}\", \"Branch to build artifact from\")\nelse\nstringParam(\"BRANCH\", \"\\${ghprbActualCommit}\", \"Branch to build artifact from\")\n}\n}\ntriggers {\ngithubPullRequest {\ncron('')\nonlyTriggerPhrase(false)\nuseGitHubHooks(true)\npermitAll(true)\nautoCloseFailedPullRequests(false)\ndisplayBuildErrorsOnDownstreamBuilds(false)\nwhiteListTargetBranches([defaultBranch.toString()])\nextensions {\ncommitStatus {\ncontext('Jenkins Code-Review')\ntriggeredStatus('Build is Triggered')\nstartedStatus('Build is Started')\naddTestResults(true)\ncompletedStatus('SUCCESS', 'Verified')\ncompletedStatus('FAILURE', 'Failed')\ncompletedStatus('PENDING', 'Penging')\ncompletedStatus('ERROR', 'Error')\n}\n}\n}\n}\nproperties {\ngithubProjectProperty {\nprojectUrlStr(\"${githubRepository}\")\n}\n}\n}\n}\n\ndef createBuildPipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId, defaultBranch, gitServerCrName, gitServerCrVersion, githubRepository) {\npipelineJob(\"${codebaseName}/${defaultBranch.toUpperCase().replaceAll(/\\\\//, \"-\")}-${pipelineName}\") {\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\nnode {\\n    git credentialsId: \\'${credId}\\', url: \\'${repository}\\', branch: \\'${BRANCH}\\'\\n}\\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${codebaseStages}\", \"Consequence of stages in JSON format to be run during execution\")\nstringParam(\"GERRIT_PROJECT_NAME\", \"${codebaseName}\", \"Gerrit project name(Codebase name) to be build\")\nstringParam(\"BRANCH\", \"${defaultBranch}\", \"Branch to run from\")\n}\n}\ntriggers {\ngitHubPushTrigger()\n}\nproperties {\ngithubProjectProperty {\nprojectUrlStr(\"${githubRepository}\")\n}\n}\n}\n}\n\n\ndef createListView(codebaseName, branchName) {\nlistView(\"${codebaseName}/${branchName}\") {\nif (branchName.toLowerCase() == \"releases\") {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^Create-release.*\")\n}\n}\n} else {\njobFilters {\nregex 
{\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^${branchName}-(Code-review|Build).*\")\n}\n}\n}\ncolumns {\nstatus()\nweather()\nname()\nlastSuccess()\nlastFailure()\nlastDuration()\nbuildButton()\n}\n}\n}\n\ndef createReleasePipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId,\ngitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, platformType, defaultBranch) {\npipelineJob(\"${codebaseName}/${pipelineName}\") {\nlogRotator {\nnumToKeep(14)\ndaysToKeep(30)\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"STAGES\", \"${codebaseStages}\", \"\")\nif (pipelineName.contains(\"Create-release\")) {\nstringParam(\"JIRA_INTEGRATION_ENABLED\", \"${jiraIntegrationEnabled}\", \"Is Jira integration enabled\")\nstringParam(\"PLATFORM_TYPE\", \"${platformType}\", \"Platform type\")\nstringParam(\"GERRIT_PROJECT\", \"${codebaseName}\", \"\")\nstringParam(\"RELEASE_NAME\", \"\", \"Name of the release(branch to be created)\")\nstringParam(\"COMMIT_ID\", \"\", \"Commit ID that will be used to create branch from for new release. If empty, DEFAULT_BRANCH will be used\")\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"REPOSITORY_PATH\", \"${repository}\", \"Full repository path\")\nstringParam(\"DEFAULT_BRANCH\", \"${defaultBranch}\", \"Default repository branch\")\n}\n}\n}\n}\n}\n\ndef registerWebHook(repositoryPath, type = 'code-review') {\ndef url = repositoryPath.split('@')[1].split('/')[0]\ndef owner = repositoryPath.split('@')[1].split('/')[1]\ndef repo = repositoryPath.split('@')[1].split('/')[2]\ndef apiUrl = 'https://api.' 
+ url + '/repos/' + owner + '/' + repo + '/hooks'\ndef webhookUrl = ''\ndef webhookConfig = [:]\ndef config = [:]\ndef events = []\n\nif (type.equalsIgnoreCase('build')) {\nwebhookUrl = System.getenv('JENKINS_UI_URL') + \"/github-webhook/\"\nevents = [\"push\"]\nconfig[\"url\"] = webhookUrl\nconfig[\"content_type\"] = \"json\"\nconfig[\"insecure_ssl\"] = 0\nwebhookConfig[\"name\"] = \"web\"\nwebhookConfig[\"config\"] = config\nwebhookConfig[\"events\"] = events\nwebhookConfig[\"active\"] = true\n\n} else {\nwebhookUrl = System.getenv('JENKINS_UI_URL') + \"/ghprbhook/\"\nevents = [\"issue_comment\",\"pull_request\"]\nconfig[\"url\"] = webhookUrl\nconfig[\"content_type\"] = \"form\"\nconfig[\"insecure_ssl\"] = 0\nwebhookConfig[\"name\"] = \"web\"\nwebhookConfig[\"config\"] = config\nwebhookConfig[\"events\"] = events\nwebhookConfig[\"active\"] = true\n}\n\ndef requestBody = JsonOutput.toJson(webhookConfig)\ndef http = new URL(apiUrl).openConnection() as HttpURLConnection\nhttp.setRequestMethod('POST')\nhttp.setDoOutput(true)\nprintln(apiUrl)\nhttp.setRequestProperty(\"Accept\", 'application/json')\nhttp.setRequestProperty(\"Content-Type\", 'application/json')\nhttp.setRequestProperty(\"Authorization\", \"token ${getSecretValue('github-access-token')}\")\nhttp.outputStream.write(requestBody.getBytes(\"UTF-8\"))\nhttp.connect()\nprintln(http.responseCode)\n\nif (http.responseCode == 201) {\nresponse = new JsonSlurper().parseText(http.inputStream.getText('UTF-8'))\n} else {\nresponse = new JsonSlurper().parseText(http.errorStream.getText('UTF-8'))\n}\n\nprintln \"response: ${response}\"\n}\n\ndef getSecretValue(name) {\ndef creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(\ncom.cloudbees.plugins.credentials.common.StandardCredentials.class,\nJenkins.instance,\nnull,\nnull\n)\n\ndef secret = creds.find { it.properties['id'] == name }\nreturn secret != null ? secret['secret'] : null\n}\n

                        After the steps above are performed, the new custom job-provision will be available in Advanced Settings during the application creation in the EDP Portal UI:

                        Github job provision

                      "},{"location":"operator-guide/manage-jenkins-ci-job-provision/#gitlab-gitlab","title":"GitLab (gitlab)","text":"

                      To create a new job provision for work with GitLab, take the following steps:

                      1. Navigate to the Jenkins main page and open the job-provisions/ci folder.

                      2. Click New Item and type the job provision name: gitlab.

                      3. Select the Freestyle project option and click OK.

                      4. Select the Discard old builds check box and configure a few parameters:

                        Strategy: Log Rotation

                        Days to keep builds: 10

                        Max # of builds to keep: 10

                      5. Select the This project is parameterized check box and add the following input parameters (of type string):

                        • NAME;
                        • TYPE;
                        • BUILD_TOOL;
                        • BRANCH;
                        • GIT_SERVER_CR_NAME;
                        • GIT_SERVER_CR_VERSION;
                        • GIT_SERVER;
                        • GIT_SSH_PORT;
                        • GIT_USERNAME;
                        • GIT_CREDENTIALS_ID;
                        • REPOSITORY_PATH;
                        • JIRA_INTEGRATION_ENABLED;
                        • PLATFORM_TYPE;
                        • DEFAULT_BRANCH;
                      6. Check the Execute concurrent builds if necessary option.

                      7. Check the Restrict where this project can be run option.

                      8. Fill in the Label Expression field by typing master to ensure the job runs on the Jenkins master node.

                      9. In the Build Steps section, perform the following:

                        • Select Add build step;
                        • Choose Process Job DSLs;
                        • Select the Use the provided DSL script check box:

                        DSL script check box

                      10. As soon as all the steps above are performed, insert the code:

                        View: Template
                        import groovy.json.*\nimport jenkins.model.Jenkins\n\nJenkins jenkins = Jenkins.instance\ndef stages = [:]\ndef jiraIntegrationEnabled = Boolean.parseBoolean(\"${JIRA_INTEGRATION_ENABLED}\" as String)\ndef commitValidateStage = jiraIntegrationEnabled ? ',{\"name\": \"commit-validate\"}' : ''\ndef createJIMStage = jiraIntegrationEnabled ? ',{\"name\": \"create-jira-issue-metadata\"}' : ''\ndef platformType = \"${PLATFORM_TYPE}\"\ndef buildTool = \"${BUILD_TOOL}\"\ndef buildImageStage = platformType == \"kubernetes\" ? ',{\"name\": \"build-image-kaniko\"},' : ',{\"name\": \"build-image-from-dockerfile\"},'\ndef goBuildImageStage = platformType == \"kubernetes\" ? ',{\"name\": \"build-image-kaniko\"}' : ',{\"name\": \"build-image-from-dockerfile\"}'\ndef goBuildStage = buildTool.toString() == \"go\" ? ',{\"name\": \"build\"}' : ',{\"name\": \"compile\"}'\n\nstages['Code-review-application'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" + goBuildStage +\n',{\"name\": \"tests\"},[{\"name\": \"sonar\"},{\"name\": \"dockerfile-lint\"},{\"name\": \"helm-lint\"}]]'\nstages['Code-review-library'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"compile\"},{\"name\": \"tests\"},' +\n'{\"name\": \"sonar\"}]'\nstages['Code-review-autotests'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"},{\"name\": \"sonar\"}' + ']'\nstages['Code-review-default'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" + ']'\nstages['Code-review-library-terraform'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"terraform-lint\"}]'\nstages['Code-review-library-opa'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"}]'\nstages['Code-review-library-codenarc'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"sonar\"},{\"name\": \"build\"}]'\nstages['Code-review-library-kaniko'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"dockerbuild-verify\"}]'\n\nstages['Build-library-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-npm'] = stages['Build-library-maven']\nstages['Build-library-gradle'] = stages['Build-library-maven']\nstages['Build-library-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-terraform'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"terraform-lint\"}' +\n',{\"name\": \"terraform-plan\"},{\"name\": \"terraform-apply\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-opa'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"tests\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-codenarc'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' +\n\"${createJIMStage}\" + 
',{\"name\": \"git-tag\"}]'\nstages['Build-library-kaniko'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"build-image-kaniko\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Build-autotests-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-autotests-gradle'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Build-application-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sast\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildImageStage}\" +\n'{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},{\"name\": \"tests\"},{\"name\": \"sonar\"}' +\n\"${buildImageStage}\" + '{\"name\":\"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-npm'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildImageStage}\" +\n'{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-gradle'] = stages['Build-application-maven']\nstages['Build-application-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"}' + \"${buildImageStage}\" +\n'{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-terraform'] = '[{\"name\": \"checkout\"},{\"name\": \"tool-init\"},' +\n'{\"name\": \"lint\"},{\"name\": \"git-tag\"}]'\nstages['Build-application-helm'] = '[{\"name\": \"checkout\"},{\"name\": \"lint\"}]'\nstages['Build-application-docker'] = '[{\"name\": \"checkout\"},{\"name\": \"lint\"}]'\nstages['Build-application-go'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sast\"},{\"name\": \"tests\"},{\"name\": \"sonar\"},' +\n'{\"name\": \"build\"}' + \"${goBuildImageStage}\" + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Create-release'] = '[{\"name\": \"checkout\"},{\"name\": \"create-branch\"},{\"name\": \"trigger-job\"}]'\n\ndef defaultStages = '[{\"name\": \"checkout\"}' + \"${createJIMStage}\" + ']'\n\n\ndef codebaseName = \"${NAME}\"\ndef gitServerCrName = \"${GIT_SERVER_CR_NAME}\"\ndef gitServerCrVersion = \"${GIT_SERVER_CR_VERSION}\"\ndef gitServer = \"${GIT_SERVER ? GIT_SERVER : 'gerrit'}\"\ndef gitSshPort = \"${GIT_SSH_PORT ? GIT_SSH_PORT : '29418'}\"\ndef gitUsername = \"${GIT_USERNAME ? GIT_USERNAME : 'jenkins'}\"\ndef gitCredentialsId = \"${GIT_CREDENTIALS_ID ? GIT_CREDENTIALS_ID : 'gerrit-ciuser-sshkey'}\"\ndef defaultRepoPath = \"ssh://${gitUsername}@${gitServer}:${gitSshPort}/${codebaseName}\"\ndef repositoryPath = \"${REPOSITORY_PATH ? 
REPOSITORY_PATH : defaultRepoPath}\"\ndef defaultBranch = \"${DEFAULT_BRANCH}\"\n\ndef codebaseFolder = jenkins.getItem(codebaseName)\nif (codebaseFolder == null) {\nfolder(codebaseName)\n}\n\ncreateListView(codebaseName, \"Releases\")\ncreateReleasePipeline(\"Create-release-${codebaseName}\", codebaseName, stages[\"Create-release\"], \"CreateRelease\",\nrepositoryPath, gitCredentialsId, gitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, platformType, defaultBranch)\n\nif (BRANCH) {\ndef branch = \"${BRANCH}\"\ndef formattedBranch = \"${branch.toUpperCase().replaceAll(/\\\\//, \"-\")}\"\ncreateListView(codebaseName, formattedBranch)\n\ndef type = \"${TYPE}\"\ndef crKey = getStageKeyName(buildTool).toString()\ncreateCiPipeline(\"Code-review-${codebaseName}\", codebaseName, stages.get(crKey, defaultStages), \"CodeReview\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion)\n\ndef buildKey = \"Build-${type}-${buildTool.toLowerCase()}\".toString()\n\nif (type.equalsIgnoreCase('application') || type.equalsIgnoreCase('library') || type.equalsIgnoreCase('autotests')) {\ndef jobExists = false\nif(\"${formattedBranch}-Build-${codebaseName}\".toString() in Jenkins.instance.getAllItems().collect{it.name}) {\njobExists = true\n}\ncreateCiPipeline(\"Build-${codebaseName}\", codebaseName, stages.get(buildKey, defaultStages), \"Build\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion)\nif(!jobExists) {\nqueue(\"${codebaseName}/${formattedBranch}-Build-${codebaseName}\")\n}\n}\n}\n\n\ndef createCiPipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId, defaultBranch, gitServerCrName, gitServerCrVersion) {\ndef jobName = \"${defaultBranch.toUpperCase().replaceAll(/\\\\//, \"-\")}-${pipelineName}\"\ndef existingJob = Jenkins.getInstance().getItemByFullName(\"${codebaseName}/${jobName}\")\ndef webhookToken = null\nif (existingJob) {\ndef triggersMap = existingJob.getTriggers()\ntriggersMap.each { key, value ->\nwebhookToken = value.getSecretToken()\n}\n} else {\ndef random = new byte[16]\nnew java.security.SecureRandom().nextBytes(random)\nwebhookToken = random.encodeHex().toString()\n}\npipelineJob(\"${codebaseName}/${jobName}\") {\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\nproperties {\ngitLabConnection {\ngitLabConnection('gitlab')\n}\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${codebaseStages}\", \"Consequence of stages in JSON format to be run during execution\")\nstringParam(\"GERRIT_PROJECT_NAME\", \"${codebaseName}\", \"Gerrit project name(Codebase name) to be build\")\nif (pipelineName.contains(\"Build\"))\nstringParam(\"BRANCH\", \"${defaultBranch}\", \"Branch to build artifact from\")\nelse\nstringParam(\"BRANCH\", \"\\${gitlabMergeRequestLastCommit}\", \"Branch to build artifact from\")\n}\n}\ntriggers {\ngitlabPush {\nbuildOnMergeRequestEvents(pipelineName.contains(\"Build\") ? false : true)\nbuildOnPushEvents(pipelineName.contains(\"Build\") ? true : false)\nenableCiSkip(false)\nsetBuildDescription(true)\nrebuildOpenMergeRequest(pipelineName.contains(\"Build\") ? 
'never' : 'source')\ncommentTrigger(\"Build it please\")\nskipWorkInProgressMergeRequest(true)\ntargetBranchRegex(\"${defaultBranch}\")\n}\n}\nconfigure {\nit / triggers / 'com.dabsquared.gitlabjenkins.GitLabPushTrigger' << secretToken(webhookToken)\nit / triggers / 'com.dabsquared.gitlabjenkins.GitLabPushTrigger' << triggerOnApprovedMergeRequest(pipelineName.contains(\"Build\") ? false : true)\nit / triggers / 'com.dabsquared.gitlabjenkins.GitLabPushTrigger' << pendingBuildName(pipelineName.contains(\"Build\") ? \"\" : \"Jenkins\")\n}\n}\nregisterWebHook(repository, codebaseName, jobName, webhookToken)\n}\n\ndef getStageKeyName(buildTool) {\nif (buildTool.toString().equalsIgnoreCase('terraform')) {\nreturn \"Code-review-library-terraform\"\n}\nif (buildTool.toString().equalsIgnoreCase('opa')) {\nreturn \"Code-review-library-opa\"\n}\nif (buildTool.toString().equalsIgnoreCase('codenarc')) {\nreturn \"Code-review-library-codenarc\"\n}\nif (buildTool.toString().equalsIgnoreCase('kaniko')) {\nreturn \"Code-review-library-kaniko\"\n}\ndef buildToolsOutOfTheBox = [\"maven\",\"npm\",\"gradle\",\"dotnet\",\"none\",\"go\",\"python\"]\ndef supBuildTool = buildToolsOutOfTheBox.contains(buildTool.toString())\nreturn supBuildTool ? \"Code-review-${TYPE}\" : \"Code-review-default\"\n}\n\ndef createReleasePipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId,\ngitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, platformType, defaultBranch) {\npipelineJob(\"${codebaseName}/${pipelineName}\") {\nlogRotator {\nnumToKeep(14)\ndaysToKeep(30)\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"STAGES\", \"${codebaseStages}\", \"\")\nif (pipelineName.contains(\"Create-release\")) {\nstringParam(\"JIRA_INTEGRATION_ENABLED\", \"${jiraIntegrationEnabled}\", \"Is Jira integration enabled\")\nstringParam(\"PLATFORM_TYPE\", \"${platformType}\", \"Platform type\")\nstringParam(\"GERRIT_PROJECT\", \"${codebaseName}\", \"\")\nstringParam(\"RELEASE_NAME\", \"\", \"Name of the release(branch to be created)\")\nstringParam(\"COMMIT_ID\", \"\", \"Commit ID that will be used to create branch from for new release. 
If empty, DEFAULT_BRANCH will be used\")\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"REPOSITORY_PATH\", \"${repository}\", \"Full repository path\")\nstringParam(\"DEFAULT_BRANCH\", \"${defaultBranch}\", \"Default repository branch\")\n}\n}\n}\n}\n}\n\ndef createListView(codebaseName, branchName) {\nlistView(\"${codebaseName}/${branchName}\") {\nif (branchName.toLowerCase() == \"releases\") {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^Create-release.*\")\n}\n}\n} else {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^${branchName}-(Code-review|Build).*\")\n}\n}\n}\ncolumns {\nstatus()\nweather()\nname()\nlastSuccess()\nlastFailure()\nlastDuration()\nbuildButton()\n}\n}\n}\n\ndef registerWebHook(repositoryPath, codebaseName, jobName, webhookToken) {\ndef apiUrl = 'https://' + repositoryPath.replaceAll(\"ssh://\", \"\").split('@')[1].replace('/', \"%2F\").replaceAll(~/:\\d+%2F/, '/api/v4/projects/') + '/hooks'\ndef jobWebhookUrl = \"${System.getenv('JENKINS_UI_URL')}/project/${codebaseName}/${jobName}\"\ndef gitlabToken = getSecretValue('gitlab-access-token')\n\nif (checkWebHookExist(apiUrl, jobWebhookUrl, gitlabToken)) {\nprintln(\"[JENKINS][DEBUG] Webhook for job ${jobName} is already exist\\r\\n\")\nreturn\n}\n\nprintln(\"[JENKINS][DEBUG] Creating webhook for job ${jobName}\")\ndef webhookConfig = [:]\nwebhookConfig[\"url\"] = jobWebhookUrl\nwebhookConfig[\"push_events\"] = jobName.contains(\"Build\") ? \"true\" : \"false\"\nwebhookConfig[\"merge_requests_events\"] = jobName.contains(\"Build\") ? \"false\" : \"true\"\nwebhookConfig[\"issues_events\"] = \"false\"\nwebhookConfig[\"confidential_issues_events\"] = \"false\"\nwebhookConfig[\"tag_push_events\"] = \"false\"\nwebhookConfig[\"note_events\"] = \"true\"\nwebhookConfig[\"job_events\"] = \"false\"\nwebhookConfig[\"pipeline_events\"] = \"false\"\nwebhookConfig[\"wiki_page_events\"] = \"false\"\nwebhookConfig[\"enable_ssl_verification\"] = \"true\"\nwebhookConfig[\"token\"] = webhookToken\ndef requestBody = JsonOutput.toJson(webhookConfig)\ndef httpConnector = new URL(apiUrl).openConnection() as HttpURLConnection\nhttpConnector.setRequestMethod('POST')\nhttpConnector.setDoOutput(true)\n\nhttpConnector.setRequestProperty(\"Accept\", 'application/json')\nhttpConnector.setRequestProperty(\"Content-Type\", 'application/json')\nhttpConnector.setRequestProperty(\"PRIVATE-TOKEN\", \"${gitlabToken}\")\nhttpConnector.outputStream.write(requestBody.getBytes(\"UTF-8\"))\nhttpConnector.connect()\n\nif (httpConnector.responseCode == 201)\nprintln(\"[JENKINS][DEBUG] Webhook for job ${jobName} has been created\\r\\n\")\nelse {\nprintln(\"[JENKINS][ERROR] Responce code - ${httpConnector.responseCode}\")\ndef response = new JsonSlurper().parseText(httpConnector.errorStream.getText('UTF-8'))\nprintln(\"[JENKINS][ERROR] Failed to create webhook for job ${jobName}. 
Response - ${response}\")\n}\n}\n\ndef checkWebHookExist(apiUrl, jobWebhookUrl, gitlabToken) {\nprintln(\"[JENKINS][DEBUG] Checking if webhook ${jobWebhookUrl} exists\")\ndef httpConnector = new URL(apiUrl).openConnection() as HttpURLConnection\nhttpConnector.setRequestMethod('GET')\nhttpConnector.setDoOutput(true)\n\nhttpConnector.setRequestProperty(\"Accept\", 'application/json')\nhttpConnector.setRequestProperty(\"Content-Type\", 'application/json')\nhttpConnector.setRequestProperty(\"PRIVATE-TOKEN\", \"${gitlabToken}\")\nhttpConnector.connect()\n\nif (httpConnector.responseCode == 200) {\ndef response = new JsonSlurper().parseText(httpConnector.inputStream.getText('UTF-8'))\nreturn response.find { it.url == jobWebhookUrl } ? true : false\n}\n}\n\ndef getSecretValue(name) {\ndef creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(\ncom.cloudbees.plugins.credentials.common.StandardCredentials.class,\nJenkins.instance,\nnull,\nnull\n)\n\ndef secret = creds.find { it.properties['id'] == name }\nreturn secret != null ? secret['secret'] : null\n}\n

                        After the steps above are performed, the new custom job-provision will be available in Advanced Settings during the application creation in the EDP Portal UI:

                        Gitlab job provision

                      "},{"location":"operator-guide/manage-jenkins-ci-job-provision/#related-articles","title":"Related Articles","text":"
                      • CI Pipeline for Container
                      • GitLab Webhook Configuration
                      • GitHub Webhook Configuration
                      • Integrate GitHub/GitLab in Jenkins
                      • Integrate GitHub/GitLab in Tekton
                      "},{"location":"operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/","title":"Migrate CI Pipelines From Jenkins to Tekton","text":"

                      To migrate the CI pipelines for a codebase from Jenkins to Tekton, follow the steps below:

                      • Migrate CI Pipelines From Jenkins to Tekton
                      • Deploy a Custom EDP Scenario With Tekton and Jenkins CI Tools
                      • Disable Jenkins Triggers
                      • Manage Tekton Triggers the Codebase(s)
                      • Switch CI Tool for Codebase(s)
                      "},{"location":"operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/#deploy-a-custom-edp-scenario-with-tekton-and-jenkins-ci-tools","title":"Deploy a Custom EDP Scenario With Tekton and Jenkins CI Tools","text":"

                      Make sure that the Tekton stack is deployed according to the documentation. Enable Tekton as an EDP subcomponent:

                      values.yaml
                      edp-tekton:\nenabled: true\n
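
                        A hedged sketch of applying the updated values, assuming EDP is installed as the edp release from the epamedp/edp-install chart in the edp namespace (the release, chart, and namespace names are assumptions; adjust them to your setup):

                          helm upgrade --install edp epamedp/edp-install -n edp -f values.yaml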
                      "},{"location":"operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/#disable-jenkins-triggers","title":"Disable Jenkins Triggers","text":"

                      To disable Jenkins Triggers for the codebase, add the following code to the provisioner:

                      job-provisioner
                      def tektonCodebaseList = [\"<codebase_name>\"]\nif (!tektonCodebaseList.contains(codebaseName.toString())){\ntriggers {\ngerrit {\nevents {\nif (pipelineName.contains(\"Build\"))\nchangeMerged()\nelse\npatchsetCreated()\n}\nproject(\"plain:${codebaseName}\", [\"plain:${watchBranch}\"])\n}\n}\n}\n

                      Note

                      The sample above shows the usage of the Gerrit VCS, where <codebase_name> is your codebase name.

                      • If using GitHub or GitLab, additionally remove the webhook from the relevant repository.
                      • If webhook generation for new codebase(s) is not required, adjust the code above so that the job provisioner does not create webhooks for them.
                      • To recreate the pipeline in Jenkins, trigger the job-provisioner.
                      • Check that the new pipeline is created without triggering Gerrit events.
                      "},{"location":"operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/#manage-tekton-triggers-the-codebases","title":"Manage Tekton Triggers the Codebase(s)","text":"

                      By default, each Gerrit project inherits configuration from the All-Projects repository.

                      To avoid triggering both the Jenkins and Tekton CI tools simultaneously, edit the configuration in the All-Projects repository or in the project which inherits rights from your project.

                      Edit the webhooks.config file in refs/meta/config and remove all content from this configuration.
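
                        A minimal sketch of editing this file over SSH; the Gerrit host, user, and project names below are placeholders:

                          # Fetch the project configuration ref and check it out
                          git clone ssh://<user>@gerrit.example.com:29418/<project_name>
                          cd <project_name>
                          git fetch origin refs/meta/config
                          git checkout FETCH_HEAD
                          # Edit (or clear) webhooks.config, then push the change back
                          git add webhooks.config
                          git commit -m "Update webhooks.config"
                          git push origin HEAD:refs/meta/config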

                      Warning

                      Clearing the webhooks.config file will disable the pipeline trigger in Tekton.

                      To use Tekton pipelines, add the configuration to the corresponding Gerrit project (the webhooks.config file in refs/meta/config):

                      webhooks.config
                      [remote \"changemerged\"]\nurl = http://el-gerrit-listener:8080\nevent = change-merged\n[remote \"patchsetcreated\"]\nurl = http://el-gerrit-listener:8080\nevent = patchset-created\n
                      "},{"location":"operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/#switch-ci-tool-for-codebases","title":"Switch CI Tool for Codebase(s)","text":"

                      Go to the codebase Custom Resource and change the spec.ciTool field from jenkins to tekton, for example:
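
                        A hedged sketch, assuming a codebase named my-app in the edp namespace (both names are placeholders):

                          # Switch the CI tool of the codebase Custom Resource to Tekton
                          kubectl -n edp patch codebase my-app --type merge -p '{"spec":{"ciTool":"tekton"}}'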

                      "},{"location":"operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/#related-articles","title":"Related Articles","text":"
                      • Install EDP
                      • Install Tekton
                      "},{"location":"operator-guide/multitenant-logging/","title":"Multitenant Logging","text":"

                      Get acquainted with the multitenant logging components and the project logs location in the Shared cluster.

                      "},{"location":"operator-guide/multitenant-logging/#logging-components","title":"Logging Components","text":"

                      To configure the multitenant logging, it is necessary to deploy the following components:

                      • Grafana
                      • Loki
                      • Logging-operator
                      • Logging-operator stack-fluentbit

                      In Grafana, every tenant is represented by an organization, i.e. it is necessary to create an organization for every namespace in the cluster (a sample API call is sketched after the diagram). For more details regarding the Logging Operator architecture, please review Diagram 1.

                      Logging operator scheme
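
                        As a hedged sketch, an organization per project namespace can be created through the Grafana HTTP API; the Grafana URL and admin credentials below are placeholders:

                          # Create a Grafana organization named after the project namespace
                          curl -X POST "https://grafana.example.com/api/orgs" \
                            -u "admin:<admin-password>" \
                            -H "Content-Type: application/json" \
                            -d '{"name": "<project_namespace>"}'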

                      Note

                      It is necessary to deploy Loki with the auth_enabled: true flag to ensure that the logs are separated per tenant. For authentication, Loki requires the X-Scope-OrgID HTTP header; a sample direct query with this header is sketched below.
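
                        A hedged sketch of querying Loki directly with the tenant header; the Loki service URL, tenant ID, and namespace label are placeholders:

                          # Query logs for one tenant by passing the X-Scope-OrgID header
                          curl -G "http://loki.logging:3100/loki/api/v1/query_range" \
                            -H "X-Scope-OrgID: <tenant_id>" \
                            --data-urlencode 'query={namespace="<project_namespace>"}' \
                            --data-urlencode 'limit=10'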

                      "},{"location":"operator-guide/multitenant-logging/#review-project-logs-in-grafana","title":"Review Project Logs in Grafana","text":"

                      To find the project logs, navigate to Grafana and follow the steps below:

                      Note

                      Grafana is a shared service for different customers, where each customer works in their own separate Grafana organization and has no access to other projects.

                      1. Choose the organization by clicking the Current Organization drop-down list. If a user is assigned to several organizations, switch easily by using the Switch button.

                        Current organization

                      2. Navigate to the left-side menu and click the Explore button to see the Log Browser:

                        Grafana explore

                      3. Click the Log Browser button to see the labels that can be used to filter logs (e.g., hostname, namespace, application name, pod, etc.):

                        Note

                        Enable the correct data source, select the relevant logging data in the top left corner, and note that the data source name always follows the \u2039project_name\u203a-logging pattern.

                        Log browser

                      4. Filter the logs by clicking the Show logs button, or write a query and click the Run query button.

                      5. Review the results, including the number of logs over time; see the example below:

                        Logs example

                        • Expand the logs to get detailed information about the object entry:

                        Expand logs

                        • Use the following buttons to include or remove the labels from the query:

                        Addition button

                        • See the ad-hoc statistics for a particular label:

                        Ad-hoc stat example

                      "},{"location":"operator-guide/multitenant-logging/#related-articles","title":"Related Articles","text":"
                      • Grafana Documentation
                      "},{"location":"operator-guide/namespace-management/","title":"Manage Namespace","text":"

                      EDP provides the ability to deploy services to namespaces. By default, EDP creates these namespaces automatically. This chapter describes an alternative way of creating and managing namespaces.

                      "},{"location":"operator-guide/namespace-management/#overview","title":"Overview","text":"

                      Namespaces are typically created by the platform when running CD Pipelines. The operator creates them according to a specific format: edp-<application-name>-<stage-name>. The cd-pipeline-operator must have permissions to automatically create namespaces when deploying applications and to delete them when uninstalling applications.

                      "},{"location":"operator-guide/namespace-management/#disable-automatic-namespace-creation","title":"Disable Automatic Namespace Creation","text":"

                      Occasionally, automatic creation of namespaces is not allowed, for example, due to project security requirements. This behavior is controlled by the manageNamespace parameter located in the values.yaml file. The manageNamespace parameter is set to true by default, but it can be changed to false. After that, users will not be able to deploy applications in the EDP Portal UI because of permission restrictions:

                      Namespace creation error

                      The error message shown above says that the user needs to create the namespace in the edp-<application-name>-<stage-name> format before creating stages. In addition, the cd-pipeline-operator must be granted administrator permissions for this namespace. The manual namespace creation procedure does not depend on the deployment scenario, whether Jenkins or Tekton is used. To create the namespace manually, follow the steps below:

                      1. Create the namespace by running the command below:

                         kubectl create namespace edp-<pipelineName>-<stageName>\n
                      2. Create the administrator RoleBinding resource by applying the file below with the kubectl apply -f grant_admin_permissions.yaml command:

                        View: grant_admin_permissions.yaml
                         kind: RoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\nname: edp-cd-pipeline-operator-admin\nnamespace: edp-<pipelineName>-<stageName>\nsubjects:\n- kind: ServiceAccount\nname: edp-cd-pipeline-operator\nnamespace: edp\nroleRef:\napiGroup: rbac.authorization.k8s.io\nkind: ClusterRole\nname: admin\n
                      3. Restart the cd-pipeline-operator pod, so that the change takes effect without waiting for the operator reconciliation.
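
                        A hedged example, assuming the operator runs as the cd-pipeline-operator Deployment in the edp namespace (both names are assumptions):

                          # Restart the operator so it picks up the new RoleBinding immediately
                          kubectl -n edp rollout restart deployment cd-pipeline-operator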

                      "},{"location":"operator-guide/namespace-management/#cd-pipeline-operator-rbac-model","title":"CD Pipeline Operator RBAC Model","text":"

                      The manageNamespace parameter also defines which resources will be created, depending on whether the cluster is OpenShift or Kubernetes. The scheme below displays the nesting of the operator input parameters:

                      CD Pipeline Operator Input Parameter Scheme

                      Note

                      When deploying an application on an OpenShift cluster, the registry-view RoleBinding is created in the main namespace.

                      "},{"location":"operator-guide/namespace-management/#related-articles","title":"Related Articles","text":"
                      • EDP Access Model
                      • EKS OIDC With Keycloak
                      "},{"location":"operator-guide/nexus-sonatype/","title":"Nexus Sonatype Integration","text":"

                      This documentation guide provides comprehensive instructions for integrating Nexus with the EPAM Delivery Platform.

                      Info

                      In EDP release 3.5, we have changed the deployment strategy for the nexus-operator component, now it is not installed by default. The nexusURL parameter management has been transferred from the values.yaml file to Kubernetes secrets.

                      "},{"location":"operator-guide/nexus-sonatype/#prerequisites","title":"Prerequisites","text":"

                      Before proceeding, ensure that you have the following prerequisites:

                      • Kubectl version 1.26.0 is installed.
                      • Helm version 3.12.0+ is installed.
                      "},{"location":"operator-guide/nexus-sonatype/#installation","title":"Installation","text":"

                      To install Nexus with pre-defined templates, use the nexus-operator installed via Cluster Add-Ons approach.

                      "},{"location":"operator-guide/nexus-sonatype/#configuration","title":"Configuration","text":"

                      To ensure strong authentication and accurate access control, creating a Nexus Sonatype service account with the name ci.user is crucial. This user serves as a unique identifier, facilitating connection with the EDP ecosystem.

                      To create the Nexus ci.user and define repository parameters, follow the steps below:

                      1. Open the Nexus UI and navigate to Server administration and configuration -> Security -> User. Click the Create local user button to create a new user:

                        Nexus user settings

                      2. Type the ci-user username, define an expiration period, and click the Generate button to create the token:

                        Nexus create user

                      3. EDP relies on a predetermined repository naming convention, so all repository names are predefined. Navigate to Server administration and configuration -> Repository -> Repositories in Nexus. You can create only the repositories for the required language.

                        Nexus repository list

                        JavaJavaScriptDotnetPython

                        a) Click Create a repository by selecting \"maven2(proxy)\" and set the name as \"edp-maven-proxy\". Enter the remote storage URL as \"https://repo1.maven.org/maven2/\". Save the configuration.

                        b) Click Create a repository by selecting \"maven2(hosted)\" and set the name as \"edp-maven-snapshot\". Change the Version policy to \"snapshot\". Save the configuration.

                        c) Click Create a repository by selecting \"maven2(hosted)\" and set the name as \"edp-maven-releases\". Change the Version policy to \"release\". Save the configuration.

                        d) Click Create a repository by selecting \"maven2(group)\" and set the name as \"edp-maven-group\". Change the Version policy to \"release\". Add repository to group. Save the configuration.

                        a) Click Create a repository by selecting \"npm(proxy)\" and set the name as \"edp-npm-proxy\". Enter the remote storage URL as \"https://registry.npmjs.org\". Save the configuration.

                        b) Click Create a repository by selecting \"npm(hosted)\" and set the name as \"edp-npm-snapshot\". Save the configuration.

                        c) Click Create a repository by selecting \"npm(hosted)\" and set the name as \"edp-npm-releases\". Save the configuration.

                        d) Click Create a repository by selecting \"npm(hosted)\" and set the name as \"edp-npm-hosted\". Save the configuration.

                        e) Click Create a repository by selecting \"npm(group)\" and set the name as \"edp-npm-group\". Add repository to group. Save the configuration.

                        a) Click Create a repository by selecting \"nuget(proxy)\" and set the name as \"edp-dotnet-proxy\". Select Protocol version NuGet V3. Enter the remote storage URL as \"https://api.nuget.org/v3/index.json\". Save the configuration.

                        b) Click Create a repository by selecting \"nuget(hosted)\" and set the name as \"edp-dotnet-snapshot\". Save the configuration.

                        c) Click Create a repository by selecting \"nuget(hosted)\" and set the name as \"edp-dotnet-releases\". Save the configuration.

                        d) Click Create a repository by selecting \"nuget(hosted)\" and set the name as \"edp-dotnet-hosted\". Save the configuration.

                        e) Click Create a repository by selecting \"nuget(group)\" and set the name as \"edp-dotnet-group\". Add repository to group. Save the configuration.

                        a) Click Create a repository by selecting \"pypi(proxy)\" and set the name as \"edp-python-proxy\". Enter the remote storage URL as \"https://pypi.org\". Save the configuration.

                        b) Click Create a repository by selecting \"pypi(hosted)\" and set the name as \"edp-python-snapshot\". Save the configuration.

                        c) Click Create a repository by selecting \"pypi(hosted)\" and set the name as \"edp-python-releases\". Save the configuration.

                        d) Click Create a repository by selecting \"pypi(group)\" and set the name as \"edp-python-group\". Add repository to group. Save the configuration.

                      4. Provision secrets using a manifest, the EDP Portal, or the External Secrets Operator:

                      EDP Portal | Manifest | External Secrets Operator

                      Go to EDP Portal -> EDP -> Configuration -> Nexus. Update or fill in the URL, nexus-user-id, and nexus-user-password fields, then click the Save button:

                      Nexus update manual secret

                      apiVersion: v1\nkind: Secret\nmetadata:\nname: ci-nexus\nnamespace: edp\nlabels:\napp.edp.epam.com/secret-type: nexus\ntype: Opaque\nstringData:\nurl: https://nexus.example.com\nusername: <nexus-user-id>\npassword: <nexus-user-password>\n
                      \"ci-nexus\":\n{\n\"url\": \"https://nexus.example.com\",\n\"username\": \"XXXXXXX\",\n\"password\": \"XXXXXXX\"\n},\n

                      Go to EDP Portal -> EDP -> Configuration -> Nexus and see the Managed by External Secret message:

                      Nexus managed by external secret operator

                      More details about External Secrets Operator integration can be found on the External Secrets Operator Integration page.
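                      As referenced in step 3 above, the repositories can also be created with a script instead of the UI. The following is a minimal, hedged sketch: it assumes a Nexus 3.x instance with the REST API enabled, an admin credential at hand, and JSON attributes that follow the public /service/rest/v1/repositories API; field values may need adjusting for your Nexus version.

                        NEXUS_URL=https://nexus.example.com
                        # Create the Maven proxy repository pointing to Maven Central (step 3a for Java).
                        curl -u admin:<admin-password> -X POST "${NEXUS_URL}/service/rest/v1/repositories/maven/proxy" \
                          -H "Content-Type: application/json" \
                          -d '{
                                "name": "edp-maven-proxy",
                                "online": true,
                                "storage": {"blobStoreName": "default", "strictContentTypeValidation": true},
                                "maven": {"versionPolicy": "RELEASE", "layoutPolicy": "STRICT"},
                                "proxy": {"remoteUrl": "https://repo1.maven.org/maven2/", "contentMaxAge": 1440, "metadataMaxAge": 1440},
                                "negativeCache": {"enabled": true, "timeToLive": 1440},
                                "httpClient": {"blocked": false, "autoBlock": true}
                              }'
                        # The hosted and group repositories can be created the same way via the
                        # .../repositories/maven/hosted and .../repositories/maven/group endpoints.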

                      "},{"location":"operator-guide/nexus-sonatype/#related-articles","title":"Related Articles","text":"
                      • Install EDP With Values File
                      • Install External Secrets Operator
                      • External Secrets Operator Integration
                      • Cluster Add-Ons Overview
                      "},{"location":"operator-guide/notification-msteams/","title":"Microsoft Teams Notification","text":"

                      This section describes how to set up and add notification status to Tekton pipelines by sending pipeline status to the Microsoft Teams channel.

                      "},{"location":"operator-guide/notification-msteams/#create-incoming-webhook","title":"Create Incoming WebHook","text":"

                      To create a link to Incoming Webhook for the Microsoft Teams channel, follow the steps below:

                      1. Open the channel that will receive notifications and click the ••• button in the upper-right corner. Select Connectors in the dropdown menu: Microsoft Teams menu

                      2. In the search field, type Incoming Webhook and click Configure: Connectors

                      3. Provide a name and upload an image for the webhook if necessary. Click Create: Connectors setup

                      4. Copy and save the unique WebHookURL presented in the dialog. Click Done (a quick way to test the URL is shown after these steps): WebHookURL

                      5. Create a secret with the webhook URL within the edp namespace.

                        kubectl -n edp create secret generic microsoft-teams-webhook-url \\\n--from-literal=url=<webhookURL>\n

                      6. Add the notification task to the pipeline: insert the code below into the finally block of the pipeline and save:

                      {{ include \"send-to-microsoft-teams-build\" . | nindent 4 }}\n
                      "},{"location":"operator-guide/notification-msteams/#customize-notification-message","title":"Customize Notification Message","text":"

                      To make the notification message informative, add relevant text to it. Here are the steps to implement this:

                      1. Create a new pipeline with a unique name or modify your custom pipeline created before.

                      2. Add the task below in the finally block with a unique name. Edit the params.message value if necessary:

                      View: Task send-to-microsoft-teams
                      - name: 'microsoft-teams-pipeline-status-notification-failed'\nparams:\n- name: webhook-url-secret\nvalue: microsoft-teams-webhook-url\n- name: webhook-url-secret-key\nvalue: url\n- name: message\nvalue: >-\nBuild Failed project: $(params.CODEBASE_NAME)<br> branch: $(params.git-source-revision)<br> pipeline: <a href=$(params.pipelineUrl)>$(context.pipelineRun.name)</a><br> commit message: $(params.COMMIT_MESSAGE)\ntaskRef:\nkind: Task\nname: send-to-microsoft-teams\nwhen:\n- input: $(tasks.status)\noperator: in\nvalues:\n- Failed\n- PipelineRunTimeout\n

                      After customization, a message like the following should appear in the channel when a pipeline fails:

                      Notification example

                      "},{"location":"operator-guide/notification-msteams/#related-articles","title":"Related Articles","text":"
                      • Install EDP
                      • Install Tekton
                      "},{"location":"operator-guide/oauth2-proxy/","title":"Protect Endpoints","text":"

                      OAuth2-Proxy is a versatile tool that serves as a reverse proxy, using the OAuth 2.0 protocol with various providers like Google, GitHub, and Keycloak to provide both authentication and authorization. This guide explains how to protect your applications' endpoints with OAuth2-Proxy. By following these steps, users can strengthen their endpoints' security without modifying their current application code. In the context of EDP, OAuth2-Proxy is integrated with the Keycloak OIDC provider, enabling it to protect any component that lacks built-in authentication.

                      Note

                      OAuth2-Proxy is disabled by default when installing EDP.

                      "},{"location":"operator-guide/oauth2-proxy/#prerequisites","title":"Prerequisites","text":"
                      • Keycloak with OIDC authentication is installed.
                      "},{"location":"operator-guide/oauth2-proxy/#enable-oauth2-proxy","title":"Enable OAuth2-Proxy","text":"

                      Enabling OAuth2-Proxy implies the following general steps:

                      1. Update your EDP deployment with the --set 'oauth2_proxy.enabled=true' flag, or enable the oauth2_proxy parameter in your --values file.
                      2. Check that OAuth2-Proxy is deployed successfully (see the sketch after this list).
                      3. Enable authentication for your Ingress by adding the auth-signin and auth-url annotations of OAuth2-Proxy to it.

                      This will deploy and connect OAuth2-Proxy to your application endpoint.
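                      A minimal sketch of step 2; the resource name oauth2-proxy is an assumption and may differ depending on how the chart names its workloads:

                        kubectl -n edp get deployment,pods | grep oauth2-proxy
                        # Assumes the deployment is named oauth2-proxy; adjust if your release names it differently.
                        kubectl -n edp rollout status deployment/oauth2-proxy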

                      "},{"location":"operator-guide/oauth2-proxy/#enable-oauth2-proxy-on-tekton-dashboard","title":"Enable OAuth2-Proxy on Tekton Dashboard","text":"

                      The example below illustrates how to use OAuth2-Proxy in practice when using the Tekton dashboard:

                      Kubernetes | Openshift
                      1. Run helm upgrade to update edp-install release:
                        helm upgrade --version <version> --set 'oauth2_proxy.enabled=true' edp-install --namespace edp\n
                      2. Check that OAuth2-Proxy is deployed successfully.
                      3. Edit the Tekton dashboard Ingress annotation by adding auth-signin and auth-url of oauth2-proxy by kubectl command:
                        kubectl annotate ingress <application-ingress-name> nginx.ingress.kubernetes.io/auth-signin='https://<oauth-ingress-host>/oauth2/start?rd=https://$host$request_uri' nginx.ingress.kubernetes.io/auth-url='http://oauth2-proxy.edp.svc.cluster.local:8080/oauth2/auth'\n
                      1. Generate a cookie-secret for proxy with the following command:
                        tekton_dashboard_cookie_secret=$(openssl rand -base64 32 | head -c 32)\n
                      2. Create tekton-dashboard-proxy-cookie-secret in the edp namespace:
                        kubectl -n edp create secret generic tekton-dashboard-proxy-cookie-secret \\\n--from-literal=cookie-secret=${tekton_dashboard_cookie_secret}\n
                      3. Run helm upgrade to update edp-install release:
                        helm upgrade --version <version> --set 'edp-tekton.dashboard.openshift_proxy.enabled=true' edp-install --namespace edp\n
                      "},{"location":"operator-guide/oauth2-proxy/#related-articles","title":"Related Articles","text":"

                      • Keycloak Installation
                      • Keycloak OIDC Installation
                      • Tekton Installation

                      "},{"location":"operator-guide/openshift-cluster-settings/","title":"Set Up OpenShift","text":"

                      Make sure the cluster meets the following conditions:

                      1. OpenShift cluster is installed with a minimum of 2 worker nodes with a total capacity of 8 cores and 32 GB RAM.

                      2. Load balancer (if any exists in front of the OpenShift router or ingress controller) is configured with session stickiness, the HTTP/2 protocol disabled, and support for a 64k header size.

                        Find below an example of the Config Map for the NGINX Ingress Controller:

                        kind: ConfigMap\napiVersion: v1\nmetadata:\nname: nginx-configuration\nnamespace: ingress-nginx\nlabels:\napp.kubernetes.io/name: ingress-nginx\napp.kubernetes.io/part-of: ingress-nginx\ndata:\nclient-header-buffer-size: 64k\nlarge-client-header-buffers: 4 64k\nuse-http2: \"false\"\n
                      3. Cluster nodes and pods have access to the cluster via external URLs. For instance, in AWS, add the VPC NAT gateway elastic IP to the cluster external load balancer's security group (a hedged AWS CLI sketch is shown after this list).

                      4. Keycloak instance is installed. To get accurate information on how to install Keycloak, please refer to the Install Keycloak instruction.

                      5. The installation machine has oc installed and cluster-admin access to the OpenShift cluster.

                      6. Helm 3.10 is installed on the installation machine with the help of the Installing Helm instruction.

                      7. Storage classes are used with the Retain Reclaim Policy and Delete Reclaim Policy.

                      8. We recommend using our storage class as the default storage class.

                        Info

                        By default, EDP uses the default Storage Class in a cluster. The EDP development team recommends using the following Storage Classes. See an example below.

                        Storage class templates with the Retain and Delete Reclaim Policies:

                        ebs-sc | gp3 | gp3-retain
                        apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\nname: ebs-sc\nannotations:\nstorageclass.kubernetes.io/is-default-class: 'true'\nallowedTopologies: []\nmountOptions: []\nprovisioner: ebs.csi.aws.com\nreclaimPolicy: Retain\nvolumeBindingMode: Immediate\n
                        kind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\nname: gp3\nannotations:\nstorageclass.kubernetes.io/is-default-class: 'true'\nallowedTopologies: []\nmountOptions: []\nprovisioner: ebs.csi.aws.com\nreclaimPolicy: Delete\nvolumeBindingMode: Immediate\nallowVolumeExpansion: true\n
                        kind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\nname: gp3-retain\nallowedTopologies: []\nmountOptions: []\nprovisioner: ebs.csi.aws.com\nreclaimPolicy: Retain\nvolumeBindingMode: WaitForFirstConsumer\nallowVolumeExpansion: true\n
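                      A minimal sketch for item 3 above, assuming an AWS environment; the security group ID, port, and elastic IP are placeholders you need to replace:

                        # Allow HTTPS traffic from the VPC NAT gateway elastic IP to the external load balancer.
                        aws ec2 authorize-security-group-ingress \
                          --group-id <load-balancer-security-group-id> \
                          --protocol tcp \
                          --port 443 \
                          --cidr <nat-gateway-elastic-ip>/32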
                      "},{"location":"operator-guide/openshift-cluster-settings/#related-articles","title":"Related Articles","text":"
                      • Install Amazon EBS CSI Driver
                      • Install Keycloak
                      "},{"location":"operator-guide/overview-devsecops/","title":"Secure Delivery on the Platform","text":"

                      The EPAM Delivery Platform emphasizes the importance of incorporating security practices into the software development lifecycle through the DevSecOps approach. By integrating a diverse range of open-source and enterprise security tools tailored to specific functionalities, organizations can ensure efficient and secure software development. These tools, combined with fundamental DevSecOps principles such as collaboration, continuous security, and automation, contribute to the identification and remediation of vulnerabilities early in the process, minimize risks, and foster a security-first culture across the organization.

                      The EPAM Delivery Platform enables seamless integration with various security tools and vulnerability management systems, enhancing the security of source code and ensuring compliance.

                      "},{"location":"operator-guide/overview-devsecops/#supported-solutions","title":"Supported Solutions","text":"

                      The below table categorizes various open-source and enterprise security tools based on their specific functionalities. It provides a comprehensive view of the available options for each security aspect. This classification facilitates informed decision-making when selecting and integrating security tools into a development pipeline, ensuring an efficient and robust security stance. EDP supports the integration of both open-source and enterprise security tools, providing a flexible and versatile solution for security automation. See table below for more details.

                      | Functionality | Open-Source Tools (integrated in Pipelines) | Enterprise Tools (available for Integration) |
                      | --- | --- | --- |
                      | Hardcoded Credentials Scanner | TruffleHog, GitLeaks, GitSecrets | GitGuardian, SpectralOps, Bridgecrew |
                      | Static Application Security Testing | SonarQube, Semgrep CLI | Veracode, Checkmarx, Coverity |
                      | Software Composition Analysis | OWASP Dependency-Check, cdxgen, Nancy | Black Duck Hub, Mend, Snyk |
                      | Container Security | Trivy, Grype, Clair | Aqua Security, Sysdig Secure, Snyk |
                      | Infrastructure as Code Security | Checkov, Tfsec | Bridgecrew, Prisma Cloud, Snyk |
                      | Dynamic Application Security Testing | OWASP Zed Attack Proxy | Fortify WebInspect, Rapid7 InsightAppSec, Checkmarx |
                      | Continuous Monitoring and Logging | ELK Stack, OpenSearch, Loki | Splunk, Datadog |
                      | Security Audits and Assessments | OpenVAS | Tenable Nessus, QualysGuard, BurpSuite Professional |
                      | Vulnerability Management and Reporting | DefectDojo, OWASP Dependency-Track | - |

                      For obtaining and managing report post scanning, deployment of various vulnerability management systems and security tools is required. These include:

                      "},{"location":"operator-guide/overview-devsecops/#defectdojo","title":"DefectDojo","text":"

                      DefectDojo is a comprehensive vulnerability management and security orchestration platform facilitating the handling of uploaded security reports. Examine the prerequisites and fundamental instructions for installing DefectDojo on Kubernetes or OpenShift platforms.

                      "},{"location":"operator-guide/overview-devsecops/#owasp-dependency-track","title":"OWASP Dependency Track","text":"

                      Dependency Track is an intelligent Software Composition Analysis (SCA) platform that provides a comprehensive solution for managing vulnerabilities in third-party and open-source components.

                      "},{"location":"operator-guide/overview-devsecops/#gitleaks","title":"Gitleaks","text":"

                      Gitleaks is a versatile SAST tool used to scan Git repositories for hardcoded secrets, such as passwords and API keys, to prevent potential data leaks and unauthorized access.

                      "},{"location":"operator-guide/overview-devsecops/#trivy","title":"Trivy","text":"

                      Trivy is a simple and comprehensive vulnerability scanner for containers and other artifacts, providing insight into potential security issues across multiple ecosystems.

                      "},{"location":"operator-guide/overview-devsecops/#grype","title":"Grype","text":"

                      Grype is a fast and reliable vulnerability scanner for container images and filesystems, maintaining an up-to-date vulnerability database for efficient and accurate scanning.

                      "},{"location":"operator-guide/overview-devsecops/#tfsec","title":"Tfsec","text":"

                      Tfsec is an effective Infrastructure as Code (IaC) security scanner, tailored specifically for reviewing Terraform templates. It helps identify potential security issues related to misconfigurations and non-compliant practices, enabling developers to address vulnerabilities and ensure secure infrastructure deployment.

                      "},{"location":"operator-guide/overview-devsecops/#checkov","title":"Checkov","text":"

                      Checkov is a robust static code analysis tool designed for IaC security, supporting various IaC frameworks such as Terraform, CloudFormation, and Kubernetes. It assists in detecting and mitigating security and compliance misconfigurations, promoting best practices and adherence to industry standards across the infrastructure.

                      "},{"location":"operator-guide/overview-devsecops/#cdxgen","title":"Cdxgen","text":"

                      Cdxgen is a lightweight and efficient tool for generating Software Bill of Materials (SBOM) using CycloneDX, a standard format for managing component inventory. It helps organizations maintain an up-to-date record of all software components, their versions, and related vulnerabilities, streamlining monitoring and compliance within the software supply chain.

                      "},{"location":"operator-guide/overview-devsecops/#semgrep-cli","title":"Semgrep CLI","text":"

                      Semgrep CLI is a versatile and user-friendly command-line interface for the Semgrep security scanner, enabling developers to perform Static Application Security Testing (SAST) for various programming languages. It focuses on detecting and preventing potential security vulnerabilities, code quality issues, and custom anti-patterns, ensuring secure and efficient code development.

                      "},{"location":"operator-guide/overview-manage-jenkins-pipelines/","title":"Overview","text":"

                      Jenkins job provisioners are responsible for creating and managing pipelines in Jenkins. In other words, provisioners configure all Jenkins pipelines and bring them to the state described in the provisioners' code. Two types of provisioners are available in EDP:

                      • CI-provisioner - manages the application folder, and its Code Review, Build and Create Release pipelines.
                      • CD-provisioner - manages the Deploy pipelines.

                      The subsections describe the creation/update process of provisioners and their content depending on EDP customization.

                      "},{"location":"operator-guide/overview-sast/","title":"Static Application Security Testing Overview","text":"

                      EPAM Delivery Platform provides built-in Static Application Security Testing support, allowing you to work with the Semgrep security scanner and the DefectDojo vulnerability management system to check the source code for known vulnerabilities.

                      "},{"location":"operator-guide/overview-sast/#supported-languages","title":"Supported Languages","text":"

                      EDP SAST supports a number of languages and package managers.

                      | Language (Package Managers) | Scan Tool | Build Tool |
                      | --- | --- | --- |
                      | Java | Semgrep | Maven, Gradle |
                      | Go | Semgrep | Go |
                      | React | Semgrep | Npm |
                      "},{"location":"operator-guide/overview-sast/#supported-vulnerability-management-system","title":"Supported Vulnerability Management System","text":"

                      To get and then manage a SAST report after scanning, it is necessary to deploy the vulnerability management system, for instance, DefectDojo.

                      "},{"location":"operator-guide/overview-sast/#defectdojo","title":"DefectDojo","text":"

                      DefectDojo is a vulnerability management and security orchestration platform that allows managing the uploaded security reports.

                      Inspect the prerequisites and the main steps for installing DefectDojo on Kubernetes or OpenShift platforms.

                      "},{"location":"operator-guide/overview-sast/#related-articles","title":"Related Articles","text":"
                      • Add Security Scanner
                      • Semgrep
                      "},{"location":"operator-guide/perf-integration/","title":"Perf Server Integration","text":"

                      Integration with Perf Server allows connecting to the PERF Board (Project Performance Board) and monitoring the overall team performance as well as setting up necessary metrics.

                      Note

                      To adjust the PERF Server integration, make sure that PERF Operator is deployed. To get more information about the PERF Operator installation and architecture, please refer to the PERF Operator page.

                      For integration, take the following steps:

                      1. Create Secret in the OpenShift/Kubernetes namespace for Perf Server account with the username and password fields:

                        apiVersion: v1\ndata:\npassword: passwordInBase64\nusername: usernameInBase64\nkind: Secret\nmetadata:\nname: epam-perf-user\ntype: kubernetes.io/basic-auth\n
                      2. In the edp-config config map, enable the perf_integration flag and click Save (a kubectl sketch is shown after these steps):

                         perf_integration_enabled: 'true'\n
                      3. In the Admin Console, navigate to the Advanced Settings menu and check that the Integrate with Perf Server check box has appeared:

                        Advanced settings
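                        A minimal sketch of step 2 done from the command line instead of the UI, assuming the edp-config config map lives in the edp namespace:

                          kubectl -n edp patch configmap edp-config \
                            --type merge \
                            -p '{"data":{"perf_integration_enabled":"true"}}'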

                      "},{"location":"operator-guide/perf-integration/#related-articles","title":"Related Articles","text":"
                      • Add Application
                      • Add Autotest
                      • Add Library
                      "},{"location":"operator-guide/prerequisites/","title":"EDP Installation Prerequisites Overview","text":"

                      Before installing EDP:

                      • Install and configure a Kubernetes or OpenShift cluster.
                      • Install EDP components for the selected EDP installation scenario.
                      "},{"location":"operator-guide/prerequisites/#edp-installation-scenarios","title":"EDP Installation Scenarios","text":"

                      There are two EDP installation scenarios based on the selected CI tool: Tekton (default) or Jenkins.

                      Scenario 1: Tekton CI tool. By default, EDP uses Tekton as a CI tool and EDP Portal as a UI tool.

                      Scenario 2: Jenkins CI tool. To use Jenkins as a CI tool, it is required to install the deprecated Admin Console UI tool. Admin Console is used only as a dependency for Jenkins, and Portal will still be used as a UI tool.

                      Note

                      Starting from version 3.0.0, all new enhancements and functionalities will be introduced only for the Tekton deploy scenario. The Jenkins deploy scenario will be supported at the bug-fix and security-fix level only. We understand that some users may need additional functionality in Jenkins, so if any is required, please create your request here. To stay up-to-date with all the updates, please check the Release Notes page.

                      Find below the list of the components to be installed for each scenario:

                      | Component | Tekton CI tool | Jenkins CI tool |
                      | --- | --- | --- |
                      | Cluster | | |
                      | Tekton | Mandatory | - |
                      | NGINX Ingress Controller1 | Mandatory | Mandatory |
                      | Keycloak | Mandatory | Mandatory |
                      | DefectDojo | Mandatory | Mandatory |
                      | Argo CD | Mandatory | Optional |
                      | ReportPortal | Optional | Optional |
                      | Kiosk | Optional | Optional |
                      | External Secrets | Optional | Optional |
                      | Harbor | Optional | Optional |

                      Note

                      Alternatively, use Helmfiles to install the EDP components.

                      After setting up the cluster and installing EDP components according to the selected scenario, proceed to the EDP installation.

                      "},{"location":"operator-guide/prerequisites/#related-articles","title":"Related Articles","text":"
                      • Set Up Kubernetes
                      • Set Up OpenShift
                      • Install EDP
                      1. OpenShift cluster uses Routes to provide access to pods from external resources. ↩

                      "},{"location":"operator-guide/report-portal-integration-tekton/","title":"Integration With Tekton","text":"

                      ReportPortal integration with Tekton allows managing all automation results and reports in one place, visualizing metrics and analytics, and collaborating as a team on the resulting statistics.

                      For integration, take the following steps:

                      1. Log in to the ReportPortal console and navigate to the User Profile menu:

                        ReportPortal profile

                      2. Copy the Access token and use it as a value while creating a kubernetes secret for the ReportPortal credentials:

                        apiVersion: v1\nkind: Secret\ntype: Opaque\nmetadata:\nname: rp-credentials\nnamespace: edp\nstringData:\nrp_uuid: <access-token>\n
                      3. In the Configuration examples section of the ReportPortal User Profile menu, copy the following REQUIRED fields: rp.endpoint, rp.launch and rp.project. Insert these fields into the pytest.ini file in the root directory of your project:

                        [pytest]\naddopts = -rsxX -l --tb=short --junitxml test-report.xml\nrp_endpoint = <endpoint>\nrp_launch = <launch>\nrp_project = <project>\n
                      4. In the root directory of the project, create or update the requirements.txt file with the following content. It is mandatory to install the ReportPortal Python library (the version may vary):

                        pytest-reportportal == 5.1.2\n

                      5. Create a custom Tekton task:

                        View: Custom Tekton task
                        apiVersion: tekton.dev/v1beta1\nkind: Task\nmetadata:\nlabels:\napp.kubernetes.io/version: '0.1'\nname: pytest-reportportal\nnamespace: edp\nspec:\ndescription: |-\nThis task can be used to run pytest integrated with report portal.\nparams:\n- default: .\ndescription: The path where package.json of the project is defined.\nname: PATH_CONTEXT\ntype: string\n- name: EXTRA_COMMANDS\ntype: string\n- default: python:3.8-alpine3.16\ndescription: The python image you want to use.\nname: BASE_IMAGE\ntype: string\n- default: rp-credentials\ndescription: name of the secret holding the rp token\nname: rp-secret\ntype: string\nsteps:\n- env:\n- name: HOME\nvalue: $(workspaces.source.path)\n- name: RP_UUID\nvalueFrom:\nsecretKeyRef:\nkey: rp_uuid\nname: $(params.rp-secret)\nimage: $(params.BASE_IMAGE)\nname: pytest\nresources: {}\nscript: >\n#!/usr/bin/env sh\nset -e\nexport PATH=$PATH:$HOME/.local/bin\n$(params.EXTRA_COMMANDS)\n# tests are being run from ./test directory in the project\npytest ./tests --reportportal\nworkingDir: $(workspaces.source.path)/$(params.PATH_CONTEXT)\nworkspaces:\n- name: source\n
                      6. Add this task ref to your Tekton pipeline after tasks:

                        View: Tekton pipeline
                        - name: pytest\nparams:\n- name: BASE_IMAGE\nvalue: $(params.image)\n- name: EXTRA_COMMANDS\nvalue: |\nset -ex\npip3 install -r requirements.txt\n[ -f run_service.py ] && python run_service.py &\nrunAfter:\n- compile\ntaskRef:\nkind: Task\nname: pytest-reportportal\nworkspaces:\n- name: source\nworkspace: shared-workspace\n
                      7. Launch your Tekton pipeline and check that the custom task has been successfully executed:

                        Tekton task successfully executed

                      8. Test reports will be displayed in the Launches section of the ReportPortal:

                        Test report results

                      "},{"location":"operator-guide/report-portal-integration-tekton/#related-articles","title":"Related Articles","text":"
                      • ReportPortal Installation
                      • Keycloak Integration
                      • Pytest Integration With ReportPortal
                      "},{"location":"operator-guide/reportportal-keycloak/","title":"Keycloak Integration","text":"

                      Follow the steps below to integrate the ReportPortal with Keycloak.

                      "},{"location":"operator-guide/reportportal-keycloak/#prerequisites","title":"Prerequisites","text":"
                      • Installed Keycloak. Please follow the instruction for details.
                      • Installed ReportPortal. Please follow the instruction to install it from Helmfile or using the Helm Chart.
                      "},{"location":"operator-guide/reportportal-keycloak/#keycloak-configuration","title":"Keycloak Configuration","text":"
                      1. Navigate to Client Scopes > Create client scope and create a new scope with the SAML protocol type.

                      2. Navigate to Client Scopes > your_scope_name > Mappers > Configure a new mapper > select the User Attribute mapper type. Add three mappers for the email, first name, and last name by typing lastName, firstName, and email in the User Attribute field:

                        • Name is a display name in Keycloak.
                        • User Attribute is a user property for mapping.
                        • SAML Attribute Name is an attribute used for requesting information in the ReportPortal configuration.
                        • SAML Attribute NameFormat: Basic.
                        • Aggregate attribute values: Off.

                        User mapper sample Scope mappers

                      3. Navigate to Clients > Create client and fill in the following fields:

                        • Client type: SAML.
                        • Client ID: report.portal.sp.id.

                        Warning

                        The report.portal.sp.id Client ID is a constant value.

                      4. Navigate to Client > your_client > Settings and add https://<report-portal-url\\>/* to the Valid redirect URIs.

                      5. Navigate to Client > your_client > Keys and disable Client signature required.

                        Client keys

                      6. Navigate to Client > your_client > Client scopes and add the scope created in step 1 with the default Assigned type.

                        Client scopes

                      "},{"location":"operator-guide/reportportal-keycloak/#reportportal-configuration","title":"ReportPortal Configuration","text":"
                      1. Log in to the ReportPortal with the admin permissions.

                      2. Navigate to Client > Administrate > Plugins and select the SAML plugin.

                        Plugins menu

                      3. To add a new integration, fill in the following fields:

                        Add SAML configuration

                        • Provider name is the display name in the ReportPortal login page.
                        • Metadata URL: https://<keycloak_url\\>/auth/realms/<realm\\>/protocol/saml/descriptor.
                        • Email is the value from the SAML Attribute Name field in the Keycloak mapper.
                        • RP callback URL: https://<report_portal_url\\>/uat.
                        • Name attributes mode is the first & last name (type based on your mapper).
                        • First name is the value from the SAML Attribute Name field in the Keycloak mapper.
                        • Last name is the value from the SAML Attribute Name field in the Keycloak mapper.
                      4. Log in to the ReportPortal.

                        Note

                        By default, after the first login, ReportPortal creates the <your_email>_personal project and adds an account with the Project manager role.

                        Report portal login page

                      "},{"location":"operator-guide/reportportal-keycloak/#related-articles","title":"Related Articles","text":"
                      • ReportPortal Installation
                      • Integration With Tekton
                      "},{"location":"operator-guide/restore-edp-with-velero/","title":"Restore EDP Tenant With Velero","text":"

                      You can use the Velero tool to restore an EDP tenant. Explore the main steps for backup and restore below.

                      1. Delete all related entities in Keycloak: the realm and clients from the master/openshift realms. Navigate to the entities list in Keycloak, select the necessary ones, and click the deletion icon on the entity overview page. If there are customized configs in Keycloak, save them before making a backup.

                        Remove keycloak realm

                      2. To restore EDP, install and configure the Velero tool. Please refer to the Install Velero documentation for details.

                      3. Remove all locks for operators. Delete all config maps that have ‹OPERATOR_NAME›-operator-lock names. Then restart all pods with operators, or simply run the following command:

                             kubectl -n edp delete cm $(kubectl -n edp get cm | grep 'operator-lock' | awk '{print $1}')\n
                      4. Recreate the admin password and delete the Jenkins pod, or change the script to update the admin password in Jenkins every time the pod is updated (a minimal deletion sketch is shown below).
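                      A minimal sketch of the Jenkins pod deletion in step 4, assuming the pod carries an app=jenkins label in the edp namespace; the label is an assumption, so adjust the selector to match your deployment:

                        # Check the actual labels first, then delete the matching pod so it gets recreated.
                        kubectl -n edp get pods --show-labels | grep jenkins
                        kubectl -n edp delete pod -l app=jenkins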

                      "},{"location":"operator-guide/sast-scaner-semgrep/","title":"Semgrep","text":"

                      Semgrep is an open-source static source code analyzer for finding bugs and enforcing code standards.

                      Semgrep scanner is installed on the EDP Jenkins SAST agent and runs on the sast pipeline stage. For details, please refer to the edp-library-stages repository.

                      "},{"location":"operator-guide/sast-scaner-semgrep/#supported-languages","title":"Supported Languages","text":"

                      Semgrep supports more than 20 languages, see the full list in the official documentation. EDP uses Semgrep to scan Java, JavaScript and Go languages.

                      "},{"location":"operator-guide/sast-scaner-semgrep/#related-articles","title":"Related Articles","text":"
                      • Add Security Scanner
                      "},{"location":"operator-guide/schedule-pods-restart/","title":"Schedule Pods Restart","text":"

                      In case it is necessary to restart pods, use a CronJob according to the following template:

                      View: template
                      ---\nkind: Role\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\nnamespace: <NAMESPACE>\nname: apps-restart\nrules:\n- apiGroups: [\"apps\"]\nresources:\n- deployments\n- statefulsets\nverbs:\n- 'get'\n- 'list'\n- 'patch'\n---\nkind: RoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\nname: apps-restart\nnamespace: <NAMESPACE>\nsubjects:\n- kind: ServiceAccount\nname: apps-restart-sa\nnamespace: <NAMESPACE>\nroleRef:\nkind: Role\nname: apps-restart\napiGroup: \"\"\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\nname: apps-restart-sa\nnamespace: <NAMESPACE>\n---\napiVersion: batch/v1beta1\nkind: CronJob\nmetadata:\nname: apps-rollout-restart\nnamespace: <NAMESPACE>\nspec:\nschedule: \"0 9 * * MON-FRI\"\njobTemplate:\nspec:\ntemplate:\nspec:\nserviceAccountName: apps-restart-sa\ncontainers:\n- name: kubectl-runner\nimage: bitnami/kubectl\ncommand:\n- /bin/sh\n- -c\n- kubectl get -n <NAMESPACE> -o name deployment,statefulset | grep <NAME_PATTERN>| xargs kubectl -n <NAMESPACE> rollout restart\nrestartPolicy: Never\n

                      Modify the Cron expression in the CronJob manifest if needed (see the sketch below).
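                      For example, a minimal sketch of changing the schedule of the already-applied CronJob from weekdays at 09:00 to every day at 01:30, without re-applying the whole template; the namespace is a placeholder:

                        kubectl -n <NAMESPACE> patch cronjob apps-rollout-restart \
                          --type merge \
                          -p '{"spec":{"schedule":"30 1 * * *"}}'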

                      "},{"location":"operator-guide/sonarqube/","title":"SonarQube Integration","text":"

                      This documentation guide provides comprehensive instructions for integrating SonarQube with the EPAM Delivery Platform.

                      Info

                      In EDP release 3.5, we have changed the deployment strategy for the sonarqube-operator component, now it is not installed by default. The sonarURL parameter management has been transferred from the values.yaml file to Kubernetes secrets.

                      "},{"location":"operator-guide/sonarqube/#prerequisites","title":"Prerequisites","text":"

                      Before proceeding, ensure that you have the following prerequisites:

                      • Kubectl version 1.26.0 is installed.
                      • Helm version 3.12.0+ is installed.
                      "},{"location":"operator-guide/sonarqube/#installation","title":"Installation","text":"

                      To install SonarQube with pre-defined templates, use the sonar-operator installed via Cluster Add-Ons approach.

                      "},{"location":"operator-guide/sonarqube/#configuration","title":"Configuration","text":"

                      To establish robust authentication and precise access control, generating a SonarQube token is essential. This token is a distinct identifier, enabling effortless integration between SonarQube and EDP. To generate the SonarQube token, proceed with the following steps:

                      1. Open the SonarQube UI and navigate to Administration -> Security -> User. Create a new user or select an existing one. Click the Options List icon to create a token:

                        SonarQube user settings

                      2. Type the ci-user username, define an expiration period, and click the Generate button to create the token:

                        SonarQube create token

                      3. Click the Copy button to copy the generated <Sonarqube-token>:

                        SonarQube token

                      4. Provision secrets using Manifest, EDP Portal or with the externalSecrets operator:

                      EDP Portal | Manifest | External Secrets Operator

                      Go to EDP Portal -> EDP -> Configuration -> SonarQube. Update or fill in the URL and Token fields and click the Save button:

                      SonarQube update manual secret

                      apiVersion: v1\nkind: Secret\nmetadata:\nname: ci-sonarqube\nnamespace: edp\nlabels:\napp.edp.epam.com/secret-type: sonar\ntype: Opaque\nstringData:\nurl: https://sonarqube.example.com\ntoken: <sonarqube-token>\n
                      \"ci-sonarqube\":\n{\n\"url\": \"https://sonarqube.example.com\",\n\"token\": \"XXXXXXXXXXXX\"\n},\n

                      Go to EDP Portal -> EDP -> Configuration -> SonarQube and see the Managed by External Secret message:

                      SonarQube managed by external secret operator

                      More details about External Secrets Operator integration can be found in the External Secrets Operator Integration page.

                      "},{"location":"operator-guide/sonarqube/#related-articles","title":"Related Articles","text":"
                      • Install EDP With Values File
                      • Install External Secrets Operator
                      • External Secrets Operator Integration
                      • Cluster Add-Ons Overview
                      "},{"location":"operator-guide/ssl-automation-okd/","title":"Use Cert-Manager in OpenShift","text":"

                      The following material covers Let's Encrypt certificate automation with cert-manager using AWS Route53.

                      The cert-manager is a Kubernetes/OpenShift operator that allows issuing and automatically renewing SSL certificates. In this tutorial, the steps to secure a DNS name will be demonstrated.

                      Below is an instruction on how to automatically issue and install wildcard certificates on OpenShift Ingress Controller and API Server covering all cluster Routes. To secure separate OpenShift Routes, please refer to the OpenShift Route Support project for cert-manager.

                      "},{"location":"operator-guide/ssl-automation-okd/#prerequisites","title":"Prerequisites","text":"
                      • The cert-manager;
                      • OpenShift v4.7 - v4.11;
                      • Connection to the OpenShift Cluster;
                      • Enabled AWS IRSA;
                      • The latest oc utility. The kubectl tool can also be used for most of the commands.
                      "},{"location":"operator-guide/ssl-automation-okd/#install-cert-manager-operator","title":"Install Cert-Manager Operator","text":"

                      Install the cert-manager operator via OpenShift OperatorHub that uses Operator Lifecycle Manager (OLM):

                      1. Go to the OpenShift Admin Console → OperatorHub, search for the cert-manager, and click Install:

                        Cert-Manager Installation

                      2. Modify the ClusterServiceVersion OLM resource by selecting Update approval → Manual. If Update approval → Automatic is selected, the parameters in the ClusterServiceVersion will be reset to default after an automatic operator update.

                        Note

                        Installing an operator with Manual approval causes all operators installed in the openshift-operators namespace to use the manual approval strategy. In case Manual approval is chosen, review the manual installation plan and approve it.

                        Cert-Manager Installation

                      3. Navigate to Operators → Installed Operators and check that the operator status is Succeeded:

                        Cert-Manager Installation

                      4. In case of errors, troubleshoot the Operator issues:

                        oc describe operator cert-manager -n openshift-operators\noc describe sub cert-manager -n openshift-operators\n
                      "},{"location":"operator-guide/ssl-automation-okd/#create-aws-role-for-route53","title":"Create AWS Role for Route53","text":"

                      The cert-manager should be configured to validate Wildcard certificates using the DNS-based method.

                      1. Check the DNS Hosted zone ID in AWS Route53 for your domain.

                        Hosted Zone ID

                      2. Create Route53 Permissions policy in AWS for cert-manager to be able to create DNS TXT records for the certificate validation. In this example, cert-manager permissions are given for a particular DNS zone only. Replace Hosted zone ID XXXXXXXX in the \"Resource\": \"arn:aws:route53:::hostedzone/XXXXXXXXXXXX\".

                        {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Effect\": \"Allow\",\n\"Action\": \"route53:GetChange\",\n\"Resource\": \"arn:aws:route53:::change/*\"\n},\n{\n\"Effect\": \"Allow\",\n\"Action\": [\n\"route53:ChangeResourceRecordSets\",\n\"route53:ListResourceRecordSets\"\n],\n\"Resource\": \"arn:aws:route53:::hostedzone/XXXXXXXXXXXX\"\n}\n]\n}\n
                      3. Create an AWS Role with a Custom trust policy for the cert-manager service account to use the AWS IRSA feature and then attach the created policy (a hedged AWS CLI sketch follows this list). Replace the following:

                        • ${aws-account-id} with the AWS account ID of the EKS cluster.
                        • ${aws-region} with the region where the EKS cluster is located.
                        • ${eks-hash} with the hash in the EKS API URL; this will be a random 32 character hex string, for example, 45DABD88EEE3A227AF0FA468BE4EF0B5.
                        • ${namespace} with the namespace where cert-manager is running.
                        • ${service-account-name} with the name of the ServiceAccount object created by cert-manager.
                        • By default, it is \"system:serviceaccount:openshift-operators:cert-manager\" if cert-manager is installed via OperatorHub.
                        • Attach the created Permission policy for Route53 to the Role.
                        • Optionally, add Permissions boundary to the Role.

                          {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Effect\": \"Allow\",\n\"Action\": \"sts:AssumeRoleWithWebIdentity\",\n\"Principal\": {\n\"Federated\": \"arn:aws:iam::* ${aws-account-id}:oidc-provider/oidc.eks.${aws-region}.amazonaws.com/id/${eks-hash}\"\n},\n\"Condition\": {\n\"StringEquals\": {\n\"oidc.eks.${aws-region}.amazonaws.com/id/${eks-hash}:sub\": \"system:serviceaccount:${namespace}:${service-account-name}\"\n}\n}\n}\n]\n}\n
                      4. Copy the created Role ARN.
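                      A minimal, hedged sketch of steps 2-4 using the AWS CLI; it assumes the permissions policy and the trust policy above are saved locally as route53-policy.json and trust-policy.json (the file and resource names are placeholders):

                        aws iam create-policy --policy-name cert-manager-route53 --policy-document file://route53-policy.json
                        aws iam create-role --role-name cert-manager --assume-role-policy-document file://trust-policy.json
                        aws iam attach-role-policy --role-name cert-manager \
                          --policy-arn arn:aws:iam::<aws-account-id>:policy/cert-manager-route53
                        # Step 4: print the Role ARN to copy it.
                        aws iam get-role --role-name cert-manager --query 'Role.Arn' --output text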

                      "},{"location":"operator-guide/ssl-automation-okd/#configure-cert-manager-integration-with-aws-route53","title":"Configure Cert-Manager Integration With AWS Route53","text":"
                      1. Annotate the ServiceAccount created by cert-manager (required for AWS IRSA), and restart the cert-manager pod.

                      2. Replace the eks.amazonaws.com/role-arn annotation value with your own Role ARN.

                        oc edit sa cert-manager -n openshift-operators\n
                        apiVersion: v1\nkind: ServiceAccount\nmetadata:\nannotations:\neks.amazonaws.com/role-arn: arn:aws:iam::XXXXXXXXXXXX:role/cert-manager\n
                      3. Modify the cert-manager Deployment with the correct file system permissions fsGroup: 1001, so that the ServiceAccount token can be read.

                        Note

                        In case the ServiceAccount token cannot be read and the operator is installed using the OperatorHub, add fsGroup: 1001 via OpenShift ClusterServiceVersion OLM resource. It should be a cert-manager controller spec. These actions are not required for OpenShift v4.10.

                        oc get csv\noc edit csv cert-manager.${VERSION}\n
                        spec:\ntemplate:\nspec:\nsecurityContext:\nfsGroup: 1001\nserviceAccountName: cert-manager\n

                        Cert-Manager System Permissions

                        Info

                        A mutating admission controller will automatically modify all pods running with the service account:

                        cert-manager controller pod

                        apiVersion: v1\nkind: Pod\n# ...\nspec:\n# ...\nserviceAccountName: cert-manager\nserviceAccount: cert-manager\ncontainers:\n- name: ...\n# ...\nenv:\n- name: AWS_ROLE_ARN\nvalue: >-\narn:aws:iam::XXXXXXXXXXX:role/cert-manager\n- name: AWS_WEB_IDENTITY_TOKEN_FILE\nvalue: /var/run/secrets/eks.amazonaws.com/serviceaccount/token\nvolumeMounts:\n- name: aws-iam-token\nreadOnly: true\nmountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount\nvolumes:\n- name: aws-iam-token\nprojected:\nsources:\n- serviceAccountToken:\naudience: sts.amazonaws.com\nexpirationSeconds: 86400\npath: token\ndefaultMode: 420\n

                      4. If you have separate public and private DNS zones for the same domain (split-horizon DNS), modify the cert-manager Deployment in order to validate DNS TXT records via public recursive nameservers.

                        Note

                        Otherwise, you will get an error during record validation:

                        Waiting for DNS-01 challenge propagation: NS ns-123.awsdns-00.net.:53 returned REFUSED for _acme-challenge.\n
                        To avoid the error, add --dns01-recursive-nameservers-only --dns01-recursive-nameservers=8.8.8.8:53,1.1.1.1:53 as ARGs to the cert-manager controller Deployment.

                        oc get csv\noc edit csv cert-manager.${VERSION}\n
                          labels:\napp: cert-manager\napp.kubernetes.io/component: controller\napp.kubernetes.io/instance: cert-manager\napp.kubernetes.io/name: cert-manager\napp.kubernetes.io/version: v1.9.1\nspec:\ncontainers:\n- args:\n- '--v=2'\n- '--cluster-resource-namespace=$(POD_NAMESPACE)'\n- '--leader-election-namespace=kube-system'\n- '--dns01-recursive-nameservers-only'\n- '--dns01-recursive-nameservers=8.8.8.8:53,1.1.1.1:53'\n

                        Note

                        The Deployment must be modified via OpenShift ClusterServiceVersion OLM resource if the operator was installed using the OperatorHub. The OpenShift ClusterServiceVersion OLM resource includes several Deployments, and the ARGs must be modified only for the cert-manager controller.

                        • Save the resource. After that, OLM will try to reload the resource automatically and save it to the YAML file. If OLM resets the config file, double-check the entered values.

                        Cert-Manager Nameservers

                      "},{"location":"operator-guide/ssl-automation-okd/#configure-clusterissuers","title":"Configure ClusterIssuers","text":"

                      ClusterIssuer is available on the whole cluster.

                      1. Create the ClusterIssuer resource for Let's Encrypt Staging and Prod environments that signs a Certificate using cert-manager.

                        Note

                        Let's Encrypt has a limit of duplicate certificates in the Prod environment. Therefore, a ClusterIssuer has been created for Let's Encrypt Staging environment. By default, Let's Encrypt Staging certificates will not be trusted in your browser. The certificate validation cannot be tested in the Let's Encrypt Staging environment.

                        • Change user@example.com with your contact email.
                        • Replace hostedZoneID XXXXXXXXXXX with the DNS Hosted zone ID in AWS for your domain.
                        • Replace the region value ${region}.
                        • The secret under privateKeySecretRef will be created automatically by the cert-manager operator.
                        apiVersion: cert-manager.io/v1\nkind: ClusterIssuer\nmetadata:\nname: letsencrypt-staging\nspec:\nacme:\nemail: user@example.com\nserver: https://acme-staging-v02.api.letsencrypt.org/directory\nprivateKeySecretRef:\nname: letsencrypt-staging-issuer-account-key\nsolvers:\n- dns01:\nroute53:\nregion: ${region}\nhostedZoneID: XXXXXXXXXXX\n
                        apiVersion: cert-manager.io/v1\nkind: ClusterIssuer\nmetadata:\nname: letsencrypt-prod\nspec:\nacme:\nemail: user@example.com\nserver: https://acme-v02.api.letsencrypt.org/directory\nprivateKeySecretRef:\nname: letsencrypt-prod-issuer-account-key\nsolvers:\n- dns01:\nroute53:\nregion: ${region}\nhostedZoneID: XXXXXXXXXXX\n

                        Cert-Manager ClusterIssuer

                      2. Check the ClusterIssuer status:

                        Cert-Manager ClusterIssuer

                        oc describe clusterissuer letsencrypt-prod\noc describe clusterissuer letsencrypt-staging\n
                      3. If the ClusterIssuer state is not ready, investigate cert-manager controller pod logs:

                        oc get pod -n openshift-operators | grep 'cert-manager'\noc logs -f cert-manager-${replica_set}-${random_string} -n openshift-operators\n
                      "},{"location":"operator-guide/ssl-automation-okd/#configure-certificates","title":"Configure Certificates","text":"
                      1. In two different namespaces, create a Certificate resource for the OpenShift Router (Ingress controller for OpenShift) and for the OpenShift APIServer.

                        • OpenShift Router supports a single wildcard certificate for Ingress/Route resources in different namespaces (so called, default SSL certificate). The Ingress controller expects the certificates in a Secret to be created in the openshift-ingress namespace; the API Server, in the openshift-config namespace. The cert-manager operator will automatically create these secrets from the Certificate resource.
                        • Replace ${DOMAIN} with your domain name. It can be checked with oc whoami --show-server. Put domain names in quotes.
                        The certificate for OpenShift Router in the `openshift-ingress` namespace
                        apiVersion: cert-manager.io/v1\nkind: Certificate\nmetadata:\nname: router-certs\nnamespace: openshift-ingress\nlabels:\napp: cert-manager\nspec:\nsecretName: router-certs\nsecretTemplate:\nlabels:\napp: cert-manager\nduration: 2160h # 90d\nrenewBefore: 360h # 15d\nsubject:\norganizations:\n- Org Name\ncommonName: '*.${DOMAIN}'\nprivateKey:\nalgorithm: RSA\nencoding: PKCS1\nsize: 2048\nrotationPolicy: Always\nusages:\n- server auth\n- client auth\ndnsNames:\n- '*.${DOMAIN}'\n- '*.apps.${DOMAIN}'\nissuerRef:\nname: letsencrypt-staging\nkind: ClusterIssuer\n
                        The certificate for OpenShift APIServer in the `openshift-config` namespace
                        apiVersion: cert-manager.io/v1\nkind: Certificate\nmetadata:\nname: api-certs\nnamespace: openshift-config\nlabels:\napp: cert-manager\nspec:\nsecretName: api-certs\nsecretTemplate:\nlabels:\napp: cert-manager\nduration: 2160h # 90d\nrenewBefore: 360h # 15d\nsubject:\norganizations:\n- Org Name\ncommonName: '*.${DOMAIN}'\nprivateKey:\nalgorithm: RSA\nencoding: PKCS1\nsize: 2048\nrotationPolicy: Always\nusages:\n- server auth\n- client auth\ndnsNames:\n- '*.${DOMAIN}'\n- '*.apps.${DOMAIN}'\nissuerRef:\nname: letsencrypt-staging\nkind: ClusterIssuer\n

                        Info

                        • cert-manager supports ECDSA key pairs in the Certificate resource. To use it, change RSA privateKey to ECDSA:

                          privateKey:\nalgorithm: ECDSA\nencoding: PKCS1\nsize: 256\nrotationPolicy: Always\n
                        • rotationPolicy: Always is highly recommended since cert-manager does not rotate private keys by default.
                        • Full Certificate spec is described in the cert-manager API documentation.
                      2. Check that the certificates in the namespaces are ready:

                        Cert-Manager Certificate Status

                        Cert-Manager Certificate Status

                      3. Check the details of the certificates via CLI:

                        oc describe certificate api-certs -n openshift-config\noc describe certificate router-certs -n openshift-ingress\n
                      4. Check the cert-manager controller pod logs if the Staging Certificate condition is not ready for more than 7 minutes:

                        oc get pod -n openshift-operators | grep 'cert-manager'\noc logs -f cert-manager-${replica_set}-${random_string} -n openshift-operators\n
                      5. When the certificate is ready, its private key will be put into the OpenShift Secret in the namespace indicated in the Certificate resource:

                        oc describe secret api-certs -n openshift-config\noc describe secret router-certs -n openshift-ingress\n
                      "},{"location":"operator-guide/ssl-automation-okd/#modify-openshift-router-and-api-server-custom-resources","title":"Modify OpenShift Router and API Server Custom Resources","text":"
                      1. Update the Custom Resource of your Router (Ingress controller). Patch the defaultCertificate object value with { \"name\": \"router-certs\" }:

                        oc patch ingresscontroller default -n openshift-ingress-operator --type=merge --patch='{\"spec\": { \"defaultCertificate\": { \"name\": \"router-certs\" }}}' --insecure-skip-tls-verify\n

                        Info

                        After updating the IngressController object, the OpenShift Ingress operator redeploys the router.

                      2. Update the Custom Resource for the OpenShift API Server:

                        • Export the name of APIServer:

                          export OKD_API=$(oc whoami --show-server --insecure-skip-tls-verify | cut -f 2 -d ':' | cut -f 3 -d '/' | sed 's/-api././')\n
                        • Patch the servingCertificate object value with { \"name\": \"api-certs\" }:

                          oc patch apiserver cluster --type merge --patch=\"{\\\"spec\\\": {\\\"servingCerts\\\": {\\\"namedCertificates\\\": [ { \\\"names\\\": [  \\\"$OKD_API\\\"  ], \\\"servingCertificate\\\": {\\\"name\\\": \\\"api-certs\\\" }}]}}}\" --insecure-skip-tls-verify\n
                      "},{"location":"operator-guide/ssl-automation-okd/#move-from-lets-encrypt-staging-environment-to-prod","title":"Move From Let's Encrypt Staging Environment to Prod","text":"
                      1. Test the Staging certificate on the OpenShift Admin Console. The --insecure flag is used because Let's Encrypt Staging certificates are not trusted in browsers by default:

                        curl -v --insecure https://console-openshift-console.apps.${DOMAIN}\n
                      2. Change issuerRef to letsencrypt-prod in both Certificate resources:

                        oc edit certificate api-certs -n openshift-config\noc edit certificate router-certs -n openshift-ingress\n
                        issuerRef:\nname: letsencrypt-prod\nkind: ClusterIssuer\n

                        Note

                        In case the certificate reissue is not triggered after that, try to force the certificate renewal with cmctl:

                        cmctl renew router-certs -n openshift-ingress\ncmctl renew api-certs -n openshift-config\n

                        If this does not work, delete the api-certs and router-certs secrets. This should trigger the issuance of the Prod certificates:

                        oc delete secret router-certs -n openshift-ingress\noc delete secret api-certs -n openshift-config\n

                        Please note that these actions will log you out of the OpenShift Admin Console, since the certificates will be deleted. Accept the certificate warning in the browser and log in again afterwards.

                      3. Check the status of the Prod certificates:

                        oc describe certificate api-certs -n openshift-config\noc describe certificate router-certs -n openshift-ingress\n
                        cmctl status certificate api-certs -n openshift-config\ncmctl status certificate router-certs -n openshift-ingress\n
                      4. Check the web console and make sure it has a secure connection:

                        curl -v https://console-openshift-console.apps.${DOMAIN}\n
                      "},{"location":"operator-guide/ssl-automation-okd/#troubleshoot-certificates","title":"Troubleshoot Certificates","text":"

                      Below is an example of the DNS TXT challenge record created by the cert-manager operator:

                      DNS Validation

                      Use nslookup or dig tools to check if the DNS propagation for the TXT record is complete:

                      nslookup -type=txt _acme-challenge.${DOMAIN}\ndig txt _acme-challenge.${DOMAIN}\n

                      Otherwise, use web tools like Google Admin Toolbox:

                      DNS Validation

                      If the correct TXT value is shown (that is, it matches the current TXT value in the DNS zone), the DNS propagation is complete and Let's Encrypt can access the record to validate it and issue a trusted certificate.

                      Note

                      If the DNS validation challenge self check fails, cert-manager will retry the self check with a fixed 10-second retry interval. Challenges that do not ever complete the self check will continue retrying until the user intervenes by either retrying the Order (by deleting the Order resource) or amending the associated Certificate resource to resolve any configuration errors.

                      As soon as the domain ownership has been verified, cert-manager cleans up the validation TXT records it created in the AWS Route53 DNS zone.

                      The issues that may occur and the ways to troubleshoot them are listed below:

                      • When certificates are not issued for a long time, or a cert-manager resource is not in a Ready state, describing a resource may show the reason for the error.
                      • Basically, during a Certificate issuance cert-manager creates the following resources: CertificateRequest, Order, and Challenge. Investigate each of them in case of errors (example commands are given at the end of this list).
                      • Use the cmctl tool to show the state of a Certificate and its associated resources.
                      • Check the cert-manager controller pod logs:

                        oc get pod -n openshift-operators | grep 'cert-manager'\noc logs -f cert-manager-${replica_set}-${random_string} -n openshift-operators\n
                      • Certificate error debugging: a. Decode the certificate chain stored in the secrets:

                        oc get secret router-certs -n openshift-ingress -o 'go-template={{index .data \"tls.crt\"}}' | base64 -d | while openssl x509 -noout -text; do :; done 2>/dev/null\noc get secret api-certs -n openshift-config -o 'go-template={{index .data \"tls.crt\"}}' | base64 -d | while openssl x509 -noout -text; do :; done 2>/dev/null\n
                        cmctl inspect secret router-certs -n openshift-ingress\ncmctl inspect secret api-certs -n openshift-config\n

                        b. Check the SSL RSA private key consistency:

                        oc get secret router-certs -n openshift-ingress -o 'go-template={{index .data \"tls.key\"}}' | base64 -d | openssl rsa -check -noout\noc get secret api-certs -n openshift-config -o 'go-template={{index .data \"tls.key\"}}' | base64 -d | openssl rsa -check -noout\n

                        c. Match the SSL certificate public key against its RSA private key. Their modulus must be identical:

                        diff <(oc get secret api-certs -n openshift-config -o 'go-template={{index .data \"tls.crt\"}}' | base64 -d | openssl x509 -noout -modulus | openssl md5) <(oc get secret api-certs -n openshift-config -o 'go-template={{index .data \"tls.key\"}}' | base64 -d | openssl rsa -noout -modulus | openssl md5)\ndiff <(oc get secret router-certs -n openshift-ingress -o 'go-template={{index .data \"tls.crt\"}}' | base64 -d | openssl x509 -noout -modulus | openssl md5) <(oc get secret router-certs -n openshift-ingress -o 'go-template={{index .data \"tls.key\"}}' | base64 -d | openssl rsa -noout -modulus | openssl md5)\n
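                      For example, the intermediate resources mentioned above can be listed and inspected with their fully qualified names (a sketch; run the same commands against the openshift-config namespace for api-certs):

                        oc get certificaterequests.cert-manager.io,orders.acme.cert-manager.io,challenges.acme.cert-manager.io -n openshift-ingress\noc describe challenges.acme.cert-manager.io -n openshift-ingress\n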
                      "},{"location":"operator-guide/ssl-automation-okd/#remove-obsolete-certificate-authority-data-from-kubeconfig","title":"Remove Obsolete Certificate Authority Data From Kubeconfig","text":"

                      After updating the certificates, access to the cluster via Lens or CLI will be denied because of untrusted certificate errors:

                      $ oc whoami\nUnable to connect to the server: x509: certificate signed by unknown authority\n

                      This happens because the oc tool references the old CA data in the kubeconfig file.

                      Note

                      Examine the Certificate Authority data using the following command:

                      oc config view --minify --raw -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 -d | openssl x509 -text\n

                      This certificate has the CA:TRUE parameter, which means that this is a self-signed root CA certificate.

                      To fix the error, remove the old CA data from your OpenShift kubeconfig file:

                      sed -i \"/certificate-authority-data/d\" $KUBECONFIG\n

                      Since this field will be absent from the kubeconfig file, the system root SSL certificates will be used to validate the cluster certificate trust chain. On Ubuntu, Let's Encrypt OpenShift cluster certificates will be validated against the Internet Security Research Group (ISRG) root in /etc/ssl/certs/ca-certificates.crt.
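                      To double-check that the certificate chain now validates against the system CA bundle, the router certificate can be tested with openssl (a sketch, assuming the Ubuntu bundle path mentioned above):

                      openssl s_client -connect console-openshift-console.apps.${DOMAIN}:443 -servername console-openshift-console.apps.${DOMAIN} -CAfile /etc/ssl/certs/ca-certificates.crt </dev/null 2>/dev/null | grep 'Verify return code'\n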

                      "},{"location":"operator-guide/ssl-automation-okd/#certificate-renewals","title":"Certificate Renewals","text":"

                      The cert-manager automatically renews the certificates based on the X.509 certificate's duration and the renewBefore value. The minimum value for the spec.duration is 1 hour; for spec.renewBefore, 5 minutes. It is also required that spec.duration > spec.renewBefore.
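                      For illustration, the fields that control renewal in a Certificate resource might look like this (a sketch consistent with the constraints above, mirroring the 90d/15d values from the earlier example):

                      spec:\n  duration: 2160h # 90d\n  renewBefore: 360h # 15d\n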

                      Use the cmctl tool to manually trigger a single instant certificate renewal:

                      cmctl renew router-certs -n openshift-ingress\ncmctl renew api-certs -n openshift-config\n

                      Otherwise, manually renew all certificates in all namespaces with the app=cert-manager label:

                      cmctl renew --all-namespaces -l app=cert-manager\n

                      Run the cmctl renew --help command to get more details.

                      "},{"location":"operator-guide/ssl-automation-okd/#related-articles","title":"Related Articles","text":"
                      • Cert-Manager Official Documentation
                      • Installing the Cert-Manager Operator for Red Hat OpenShift
                      • Checking Issued Certificate Details
                      "},{"location":"operator-guide/tekton-monitoring/","title":"Monitoring","text":"

                      This documentation describes how to integrate tekton-pipelines metrics with the Prometheus and Grafana monitoring stack.

                      "},{"location":"operator-guide/tekton-monitoring/#prerequisites","title":"Prerequisites","text":"

                      Ensure the following requirements are met before moving ahead:

                      • Kube Prometheus stack is installed (a typical installation is sketched below);
                      • Tekton pipeline is installed.
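                      If the Kube Prometheus stack is not installed yet, it can be added from the prometheus-community Helm repository (a sketch; the release and namespace names are placeholders):

                        helm repo add prometheus-community https://prometheus-community.github.io/helm-charts\nhelm repo update\nhelm install prometheus prometheus-community/kube-prometheus-stack -n <monitoring-namespace> --create-namespace\n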
                      "},{"location":"operator-guide/tekton-monitoring/#create-and-apply-the-additional-scrape-config","title":"Create and Apply the Additional Scrape Config","text":"

                      To create and apply the additional scrape config, follow the steps below:

                      1. Create the kubernetes secret file with the additional scrape config:

                        additional-scrape-configs.yaml file
                        apiVersion: v1\nkind: Secret\nmetadata:\n  name: additional-scrape-configs\nstringData:\n  prometheus-additional-job.yaml: |\n    - job_name: \"tekton-pipelines\"\n      scrape_interval: 30s\n      static_configs:\n      - targets: [\"tekton-pipelines-controller.<tekton-pipelines-namespace>.svc.cluster.local:9090\"]\n
                      2. Apply the created secret:

                        kubectl apply -f additional-scrape-configs.yaml -n <monitoring-namespace>\n
                      3. Update the Prometheus stack:

                        helm upgrade --install prometheus prometheus-community/kube-prometheus-stack --values values.yaml -n <monitoring-namespace>\n

                        The values.yaml file should have the following contents:

                        values.yaml file
                        prometheus:\nprometheusSpec:\nadditionalScrapeConfigsSecret:\nenabled: true\nname: additional-scrape-configs\nkey: prometheus-additional-job.yaml\n
                      4. Download the EDP Tekton Pipeline dashboard:

                        Import Grafana dashboard

                        a. Click on the dashboard menu;

                        b. In the dropdown menu, click the + Import button;

                        c. Select the downloaded edp-tekton-overview_rev1.json file;

                        Import Grafana dashboard: Options

                        d. Type the name of the dashboard;

                        e. Select the folder for the dashboard;

                        f. Type the UID (set of eight numbers or letters and symbols);

                        g. Click the Import button.

                      As soon as the dashboard import is completed, you can track the incoming metrics in the dashboard menu:

                      Tekton dashboard

                      "},{"location":"operator-guide/tekton-monitoring/#related-articles","title":"Related Articles","text":"
                      • Install Tekton
                      • Install EDP
                      • Install via Helmfile
                      "},{"location":"operator-guide/tekton-overview/","title":"Tekton Overview","text":"

                      EPAM Delivery Platform provides Continuous Integration based on Tekton.

                      Tekton is an open-source Kubernetes native framework for creating CI pipelines, allowing a user to compile, build and test applications.

                      The edp-tekton GitHub repository provides all Tekton implementation logic on the platform. The Helm charts are used to deploy the resources inside the Kubernetes cluster. Tekton logic is decoupled into separate components:

                      Edp-tekton components diagram

                      The diagram above describes the following:

                      • Common-library is the Helm chart of Library type which stores the common logic shareable across all Tekton pipelines. This library contains Helm templates that generate common Tekton resources.
                      • Pipelines-library is the Helm chart of the Application type which stores the core logic for the EDP pipelines. Tekton CRs like Pipelines, Tasks, EventListeners, Triggers, TriggerTemplates, and other resources are delivered with this chart.
                      • Custom-pipelines is the Helm chart of the Application type which implements custom logic running specifically for internal EDP development, for example, CI and Release. It also demonstrates the customization flow on the platform.
                      • Tekton-dashboard is a multitenancy-adopted implementation of the upstream Tekton Dashboard. It is configured to share Tekton resources across a single namespace.
                      • EDP Interceptor is the custom Tekton Interceptor which enriches the payload from the VCS events with EDP data from the Codebase CR specification. This data is used to define the Pipeline logic.

                      Inspect the schema below that describes the logic behind the Tekton functionality on the platform:

                      Component view for the Tekton on EDP

                      The platform logic consists of the following:

                      1. The EventListener exposes a dedicated Pod that runs the sink logic and receives incoming events from the VCSs (Gerrit, GitHub, GitLab) through the Ingress. It contains triggers with filtering and routing rules for incoming requests.

                      2. Upon the Event Payload arrival, the EventListener runs triggers to process information or validate it via different interceptors.

                      3. The EDP Interceptor extracts information from the codebases.v2.edp.epam.com CR and injects the received data into the top-level 'extensions' field of the Event Payload. The Interceptor consists of a running Pod and a Service.

                      4. The Tekton CEL Interceptor performs simple transformations of the resulting data and prepares it for the Pipeline parameter substitution.

                      5. The TriggerTemplate creates a PipelineRun instance with the required parameters extracted from the Event Payload by Interceptors. These parameters are mandatory for Pipelines.

                      6. The PipelineRun has a mapping to the EDP Tekton Pipelines using a template approach which reduces code duplication. Each Pipeline is designed for a specific VCS (Gerrit, GitLab, GitHub), technology stack (such as Java or Python), and type (code-review, build).

                      7. A Pipeline consists of separate EDP Tekton or open-source Tasks. They are arranged in a specific order of execution in the Pipeline.

                      8. Each Task is executed as a Pod on the Kubernetes cluster. Also, Tasks can have a different number of steps, each executed as a Container in the Pod.

                      9. The Kubernetes native approach allows creating a PipelineRun either with the kubectl tool or via the EDP Portal UI (see the sketch below).
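                      As an illustration of the kubectl route, a minimal PipelineRun could look like the sketch below; the pipeline name, namespace, and parameter are placeholders rather than actual EDP resource names:

                      apiVersion: tekton.dev/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: demo-build-\n  namespace: <edp-namespace>\nspec:\n  pipelineRef:\n    name: <pipeline-name>\n  params:\n    - name: git-source-url\n      value: https://git.example.com/demo/app.git\n

                      Such a manifest is submitted with kubectl create -f pipelinerun.yaml, since generateName requires create rather than apply.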

                      "},{"location":"operator-guide/upgrade-edp-2.10/","title":"Upgrade EDP v2.9 to 2.10","text":"

                      This section provides the details on the EDP upgrade to 2.10.2. Explore the actions and requirements below.

                      Note

                      Kiosk is optional for EDP v.2.9.0 and higher, and is enabled by default. To disable it, add the following parameter to the values.yaml file: global.kioskEnabled: false. Please refer to the Set Up Kiosk documentation for the details.
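                      In the values.yaml file, this parameter is nested as follows (a minimal sketch):

                      global:\n  kioskEnabled: false\n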

                      Note

                      In the process of updating EDP, it is necessary to migrate the SonarQube database. Before performing the update procedure, please carefully read step 4 of this guide.

                      1. Before updating EDP to 2.10.2, delete the SonarQube plugins by executing the following command in the SonarQube pod:

                        rm -r /opt/sonarqube/extensions/plugins/*\n
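                        The same can be done from outside the pod with kubectl exec (a sketch; the pod name is a placeholder):

                        kubectl exec <sonarqube-pod-name> -n <edp-namespace> -- sh -c 'rm -r /opt/sonarqube/extensions/plugins/*'\n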
                      2. Update Custom Resource Definitions. Run the following command to apply all the necessary CRDs to the cluster:

                        kubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.10/deploy-templates/crds/v2_v1alpha1_jenkins_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.10/deploy-templates/crds/v1_v1alpha1_keycloakclient_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.10/deploy-templates/crds/v1_v1alpha1_keycloakrealmcomponent_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.10/deploy-templates/crds/v1_v1alpha1_keycloakrealmidentityprovider_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.10/deploy-templates/crds/v1_v1alpha1_keycloakrealmrole_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.10/deploy-templates/crds/v1_v1alpha1_keycloakrealmuser_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.10/deploy-templates/crds/v1_v1alpha1_keycloak_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/release/2.10/deploy-templates/crds/edp_v1alpha1_nexus_crd.yaml\n
                      3. To upgrade EDP to the v.2.10.2, run the following command:

                        helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.10.2\n

                        Note

                        To verify the installation, it is possible to test the deployment before applying it to the cluster with: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.10.2 --dry-run

                      4. Migrate the database for SonarQube according to the official documentation.

                        Note

                        Please be aware that tables may be duplicated to speed up the migration process during the upgrade. Due to this duplication, the database disk usage can temporarily increase to twice the normal usage. Therefore, it is recommended to keep the database disk usage below 50% before starting the migration.

                        • Navigate to the http://SonarQubeServerURL/setup page and follow the setup instructions:

                          Migrate SonarQube database

                        • Click the Upgrade button and wait for the end of the migration process.
                      5. Remove the resources related to the deprecated Sonar Gerrit Plugin that is deleted in EDP 2.10.2:

                        • Remove Sonar Gerrit Plugin from Jenkins (go to Manage Jenkins -> Manage Plugins -> Installed -> Uninstall Sonar Gerrit Plugin).
                        • In Gerrit, clone the All-Projects repository.
                        • Edit the project.config file in the All-Projects repository and remove the Sonar-Verified label declaration:
                          [label \"Sonar-Verified\"]\nfunction = MaxWithBlock\nvalue = -1 Issues found\nvalue = 0 No score\nvalue = +1 Verified\ndefaultValue = 0\n
                        • Also, remove the following permissions for the Sonar-Verified label in the project.config file:
                          label-Sonar-Verified = -1..+1 group Administrators\nlabel-Sonar-Verified = -1..+1 group Project Owners\nlabel-Sonar-Verified = -1..+1 group Service Users\n
                        • Save the changes, and commit and push the repository to HEAD:refs/meta/config bypassing the Gerrit code review:
                          git push origin HEAD:refs/meta/config\n
                      6. Update image versions for the Jenkins agents in the ConfigMap:

                        kubectl edit configmap jenkins-slaves -n <edp-namespace>\n
                        • The versions of the images should be:
                          epamedp/edp-jenkins-codenarc-agent:1.0.1\nepamedp/edp-jenkins-dotnet-21-agent:1.0.5\nepamedp/edp-jenkins-dotnet-31-agent:1.0.4\nepamedp/edp-jenkins-go-agent:1.0.6\nepamedp/edp-jenkins-gradle-java8-agent:1.0.3\nepamedp/edp-jenkins-gradle-java11-agent:2.0.3\nepamedp/edp-jenkins-helm-agent:1.0.10\nepamedp/edp-jenkins-maven-java8-agent:1.0.3\nepamedp/edp-jenkins-maven-java11-agent:2.0.4\nepamedp/edp-jenkins-npm-agent:2.0.3\nepamedp/edp-jenkins-opa-agent:1.0.2\nepamedp/edp-jenkins-python-38-agent:2.0.4\nepamedp/edp-jenkins-terraform-agent:2.0.5\n
                        • Restart the Jenkins pod.
                      7. Since EDP v.2.10.x, the create-release.groovy, code-review.groovy, and build.groovy files are deprecated (the "Pipeline script from SCM" option is replaced with "Pipeline script", see below).

                        • Pipeline script from SCM: Pipeline script from scm example
                        • Pipeline script: Pipeline script example
                        • Update the job-provisioner code and restart the codebase-operator pod. Consult the default job-provisioners code section.
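                        Restarting the operator can be done by restarting its deployment (a sketch; the deployment name may differ in a particular installation):

                          kubectl rollout restart deployment codebase-operator -n <edp-namespace>\n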
                      "},{"location":"operator-guide/upgrade-edp-2.10/#related-articles","title":"Related Articles","text":"
                      • Manage Jenkins CI Pipeline Job Provisioner
                      • Set Up Kiosk
                      • SonarQube Upgrade Guide
                      "},{"location":"operator-guide/upgrade-edp-2.11/","title":"Upgrade EDP v2.10 to 2.11","text":"

                      This section provides the details on the EDP upgrade to 2.11. Explore the actions and requirements below.

                      1. Update Custom Resource Definitions. Run the following command to apply all the necessary CRDs to the cluster:

                        kubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.12/deploy-templates/crds/edp_v1alpha1_cd_stage_deploy_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.11/deploy-templates/crds/v2_v1alpha1_merge_request_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/release/2.11/deploy-templates/crds/edp_v1alpha1_user_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/release/2.11/deploy-templates/crds/edp_v1alpha1_cdpipeline_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.11/deploy-templates/crds/v2_v1alpha1_jenkinssharedlibrary_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.11/deploy-templates/crds/v2_v1alpha1_cdstagejenkinsdeployment_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.11/deploy-templates/crds/v1_v1alpha1_keycloakauthflow_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.11/deploy-templates/crds/v1_v1alpha1_keycloakrealmuser_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.12/deploy-templates/crds/edp_v1alpha1_codebaseimagestream_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.12/deploy-templates/crds/edp_v1alpha1_codebase_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/release/2.11/deploy-templates/crds/edp_v1alpha1_sonar_group_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/release/2.11/deploy-templates/crds/edp_v1alpha1_permission_template_crd.yaml\n
                      2. Back up the kaniko-template ConfigMap and then remove it (see the sketch below). This component will be delivered during the upgrade.
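                        A possible way to back up and remove the ConfigMap with kubectl (a sketch; the namespace is a placeholder):

                        kubectl get configmap kaniko-template -n <edp-namespace> -o yaml > kaniko-template-backup.yaml\nkubectl delete configmap kaniko-template -n <edp-namespace>\n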

                      3. Set the required awsRegion parameter. Note that the kanikoRoleArn parameter has been moved under kaniko.roleArn (a values sketch illustrating the new nesting follows the note below). Check the parameters in the EDP installation chart; for details, please refer to the values.yaml file. To upgrade EDP to v.2.11.x, run the following command:

                        helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.11.x\n

                        Note

                        To verify the installation, it is possible to test the deployment before applying it to the cluster with: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.11.x --dry-run
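                        For reference, the nesting change from step 3 means the Kaniko role ARN now lives under the kaniko key; in values.yaml it might look like this (a sketch; the ARN is a placeholder):

                        kaniko:\n  roleArn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<kaniko-role-name>\n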

                      4. Update Sonar Project Key:

                        Note

                        Avoid using special characters when creating projects in SonarQube. Allowed characters are: letters, numbers, -, _, . and :, with at least one non-digit. For details, please refer to the SonarQube documentation. As a result, the project name will look like project-name-release-0.0 or project-name-branchName.

                        The following actions are required in order to preserve the SonarQube statistics from the previous EDP version:

                        Warning

                        Do not run any pipeline with the updated sonar stage on any existing application before the completion of the first step.

                        4.1. Update the project key in SonarQube from the old to the new format by adding the default branch name.

                        - Navigate to Project Settings -> Update Key: Update SonarQube project key
                        - Enter the default branch name and click Update: Update SonarQube project key

                        4.2. As a result, after the first run, the project name will be changed to the new format and will contain all previous statistics:

                        SonarQube project history activity

                      5. Update image versions for the Jenkins agents in the ConfigMap:

                          kubectl edit configmap jenkins-slaves -n <edp-namespace>\n
                        • The versions of the images should be:
                          epamedp/edp-jenkins-codenarc-agent:3.0.4\nepamedp/edp-jenkins-dotnet-21-agent:3.0.4\nepamedp/edp-jenkins-dotnet-31-agent:3.0.3\nepamedp/edp-jenkins-go-agent:3.0.5\nepamedp/edp-jenkins-gradle-java11-agent:3.0.2\nepamedp/edp-jenkins-gradle-java8-agent:3.0.2\nepamedp/edp-jenkins-helm-agent:3.0.3\nepamedp/edp-jenkins-maven-java11-agent:3.0.3\nepamedp/edp-jenkins-maven-java8-agent:3.0.3\nepamedp/edp-jenkins-npm-agent:3.0.4\nepamedp/edp-jenkins-opa-agent:3.0.2\nepamedp/edp-jenkins-python-38-agent:3.0.2\nepamedp/edp-jenkins-terraform-agent:3.0.3\n
                        • Add Jenkins agent by following the template:

                          View: values.yaml

                          kaniko-docker-template: |-\n<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n<inheritFrom></inheritFrom>\n<name>kaniko-docker</name>\n<namespace></namespace>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<instanceCap>2147483647</instanceCap>\n<slaveConnectTimeout>100</slaveConnectTimeout>\n<idleMinutes>5</idleMinutes>\n<activeDeadlineSeconds>0</activeDeadlineSeconds>\n<label>kaniko-docker</label>\n<serviceAccount>jenkins</serviceAccount>\n<nodeSelector>beta.kubernetes.io/os=linux</nodeSelector>\n<nodeUsageMode>NORMAL</nodeUsageMode>\n<workspaceVolume class=\"org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume\">\n<memory>false</memory>\n</workspaceVolume>\n<volumes/>\n<containers>\n<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n<name>jnlp</name>\n<image>epamedp/edp-jenkins-kaniko-docker-agent:1.0.4</image>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<workingDir>/tmp</workingDir>\n<command></command>\n<args>${computer.jnlpmac} ${computer.name}</args>\n<ttyEnabled>false</ttyEnabled>\n<resourceRequestCpu></resourceRequestCpu>\n<resourceRequestMemory></resourceRequestMemory>\n<resourceLimitCpu></resourceLimitCpu>\n<resourceLimitMemory></resourceLimitMemory>\n<envVars>\n<org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n<key>JAVA_TOOL_OPTIONS</key>\n<value>-XX:+UnlockExperimentalVMOptions -Dsun.zip.disableMemoryMapping=true</value>\n</org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n</envVars>\n<ports/>\n</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n</containers>\n<envVars/>\n<annotations/>\n<imagePullSecrets/>\n<podRetention class=\"org.csanchez.jenkins.plugins.kubernetes.pod.retention.Default\"/>\n</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n
                          • Restart the Jenkins pod.
                        • Update the Jenkins plugins that have 'pipeline' in their name, as well as the 'HTTP Request Plugin'.

                        • Update Jenkins provisioners according to the Manage Jenkins CI Pipeline Job Provisioner and Manage Jenkins CD Pipeline Job Provisioner documentation.

                        • Restart the codebase-operator to recreate the Code-review and Build pipelines for codebases.

                        • Run the CD job-provisioners for every CD pipeline to align the CD stages.
                        • "},{"location":"operator-guide/upgrade-edp-2.12/","title":"Upgrade EDP v2.11 to 2.12","text":"

                          Important

                          We suggest making a backup of the EDP environment before starting the upgrade procedure.

                          This section provides the details on the EDP upgrade to 2.12. Explore the actions and requirements below.

                          Notes

                          • EDP now supports Kubernetes 1.22: Ingress Resources use networking.k8s.io/v1, and Ingress Operators use CustomResourceDefinition apiextensions.k8s.io/v1.
                          • EDP Team now delivers its own Gerrit Docker image: epamedp/edp-gerrit. It is based on the openfrontier Gerrit Docker image.
                          1. EDP now uses DefectDojo as a SAST tool. It is mandatory to deploy DefectDojo before updating EDP to v.2.12.x.

                          2. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                            kubectl apply -f https://raw.githubusercontent.com/epam/edp-admin-console-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_adminconsoles.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_cdpipelines.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_cdstagedeployments.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_cdstagejenkinsdeployments.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_codebasebranches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_codebaseimagestreams.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_codebases.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-component-operator/release/0.12/deploy-templates/crds/v1.edp.epam.com_edpcomponents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerritgroupmembers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerritgroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerritmergerequests.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerritprojectaccesses.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerritprojects.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerritreplicationconfigs.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerrits.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_gitservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_gittags.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_imagestreamtags.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkins.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsagents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsauthorizationrolemappings.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsauthorizationroles.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsfolders.yaml\nkubectl apply -f 
https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsjobbuildruns.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsjobs.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsscripts.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsserviceaccounts.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinssharedlibraries.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_jiraissuemetadatas.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_jiraservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakauthflows.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakclients.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakclientscopes.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealmcomponents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealmgroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealmidentityproviders.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealmrolebatches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealmroles.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealms.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealmusers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloaks.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_nexuses.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_nexususers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_perfdatasourcegitlabs.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_perfdatasourcejenkinses.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_perfdatasourcesonars.yaml\nkubectl apply -f 
https://raw.githubusercontent.com/epam/edp-perf-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_perfservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_sonargroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_sonarpermissiontemplates.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_sonars.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_stages.yaml\n
                          3. Set the required parameters. For details, please refer to the values.yaml file.

                            • In version v.2.12.x, EDP contains Gerrit v3.6.1. According to the Official Gerrit Upgrade flow, a user must initially upgrade to Gerrit v3.5.2, and then upgrade to v3.6.1. Therefore, define the gerrit-operator.gerrit.version=3.5.2 value in the edp-install values.yaml file.
                            • Two more components are available with the new functionality:

                              • edp-argocd-operator
                              • external-secrets
                            • If there is no need to use these new operators, define false values for them in the existing values.yaml file:

                              View: values.yaml

                              gerrit-operator:\ngerrit:\nversion: \"3.5.2\"\nexternalSecrets:\nenabled: false\nargocd:\nenabled: false\n
                            • The edp-jenkins-role is renamed to the jenkins-resources-role. Delete the edp-jenkins-role with the following command:

                                kubectl delete role edp-jenkins-role -n <edp-namespace>\n

                              The jenkins-resources-role role will be created automatically during the EDP upgrade.

                            • Recreate the edp-jenkins-resources-permissions RoleBinding according to the following template:

                              View: jenkins-resources-role

                              apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\nname: edp-jenkins-resources-permissions\nnamespace: <edp-namespace>\nroleRef:\napiGroup: rbac.authorization.k8s.io\nkind: Role\nname: jenkins-resources-role\n
                            • To upgrade EDP to the v.2.12.x, run the following command:

                              helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.12.x\n

                              Note

                              To verify the installation, it is possible to test the deployment before applying it to the cluster with the following command: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.12.x --dry-run

                            • After the update, please remove the gerrit-operator.gerrit.version value. In this case, the default value will be used, and Gerrit will be updated to the v3.6.1 version. Run the following command:

                                helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.12.x\n

                              Note

                              To verify the installation, it is possible to test the deployment before applying it to the cluster with the following command: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.12.x --dry-run

                            • Update image versions for the Jenkins agents in the ConfigMap:

                                kubectl edit configmap jenkins-slaves -n <edp-namespace>\n
                              • The versions of the images must be the following:
                                epamedp/edp-jenkins-codenarc-agent:3.0.8\nepamedp/edp-jenkins-dotnet-21-agent:3.0.7\nepamedp/edp-jenkins-dotnet-31-agent:3.0.7\nepamedp/edp-jenkins-go-agent:3.0.11\nepamedp/edp-jenkins-gradle-java11-agent:3.0.5\nepamedp/edp-jenkins-gradle-java8-agent:3.0.7\nepamedp/edp-jenkins-helm-agent:3.0.8\nepamedp/edp-jenkins-maven-java11-agent:3.0.6\nepamedp/edp-jenkins-maven-java8-agent:3.0.8\nepamedp/edp-jenkins-npm-agent:3.0.7\nepamedp/edp-jenkins-opa-agent:3.0.5\nepamedp/edp-jenkins-python-38-agent:3.0.5\nepamedp/edp-jenkins-terraform-agent:3.0.6\n
                              • Add Jenkins agents by following the template:

                                View: jenkins-slaves

                                  sast-template: |\n<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n<inheritFrom></inheritFrom>\n<name>sast</name>\n<namespace></namespace>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<instanceCap>2147483647</instanceCap>\n<slaveConnectTimeout>100</slaveConnectTimeout>\n<idleMinutes>5</idleMinutes>\n<activeDeadlineSeconds>0</activeDeadlineSeconds>\n<label>sast</label>\n<serviceAccount>jenkins</serviceAccount>\n<nodeSelector>beta.kubernetes.io/os=linux</nodeSelector>\n<nodeUsageMode>NORMAL</nodeUsageMode>\n<workspaceVolume class=\"org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume\">\n<memory>false</memory>\n</workspaceVolume>\n<volumes/>\n<containers>\n<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n<name>jnlp</name>\n<image>epamedp/edp-jenkins-sast-agent:0.1.3</image>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<workingDir>/tmp</workingDir>\n<command></command>\n<args>${computer.jnlpmac} ${computer.name}</args>\n<ttyEnabled>false</ttyEnabled>\n<resourceRequestCpu></resourceRequestCpu>\n<resourceRequestMemory></resourceRequestMemory>\n<resourceLimitCpu></resourceLimitCpu>\n<resourceLimitMemory></resourceLimitMemory>\n<envVars>\n<org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n<key>JAVA_TOOL_OPTIONS</key>\n<value>-XX:+UnlockExperimentalVMOptions -Dsun.zip.disableMemoryMapping=true</value>\n</org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n</envVars>\n<ports/>\n</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n</containers>\n<envVars/>\n<annotations/>\n<imagePullSecrets/>\n<podRetention class=\"org.csanchez.jenkins.plugins.kubernetes.pod.retention.Default\"/>\n</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n
                                • If required, update the requests and limits for the following Jenkins agents:

                                  • edp-jenkins-codenarc-agent
                                  • edp-jenkins-go-agent
                                  • edp-jenkins-gradle-java11-agent
                                  • edp-jenkins-gradle-java8-agent
                                  • edp-jenkins-maven-java11-agent
                                  • edp-jenkins-maven-java8-agent
                                  • edp-jenkins-npm-agent
                                  • edp-jenkins-dotnet-21-agent
                                  • edp-jenkins-dotnet-31-agent

                                  EDP requires starting with the following values:

                                  View: jenkins-slaves

                                    <resourceRequestCpu>500m</resourceRequestCpu>\n<resourceRequestMemory>1Gi</resourceRequestMemory>\n<resourceLimitCpu>2</resourceLimitCpu>\n<resourceLimitMemory>5Gi</resourceLimitMemory>\n
                                  • Restart the Jenkins pod.
                                • Update Jenkins provisioners according to the Manage Jenkins CI Pipeline Job Provisioner instruction.

                                • Restart the codebase-operator to recreate the Code Review and Build pipelines for the codebases.

                                • Warning

                                  In case there are different EDP versions on one cluster, the following error may occur on the init stage of Jenkins Groovy pipeline: java.lang.NumberFormatException: For input string: \"\". To fix this issue, please run the following command using kubectl v1.24.4+:

                                  kubectl patch codebasebranches.v2.edp.epam.com <codebase-branch-name>  -n <edp-namespace>  '--subresource=status' '--type=merge' -p '{\"status\": {\"build\": \"0\"}}'\n
                                  "},{"location":"operator-guide/upgrade-edp-2.12/#upgrade-edp-to-2122","title":"Upgrade EDP to 2.12.2","text":"

                                  Important

                                  We suggest making a backup of the EDP environment before starting the upgrade procedure.

                                  This section provides the details on the EDP upgrade to 2.12.2. Explore the actions and requirements below.

                                  1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                    kubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.12.2/deploy-templates/crds/v2.edp.epam.com_jenkins.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.12.1/deploy-templates/crds/v2.edp.epam.com_gerrits.yaml\n
                                  2. To upgrade EDP to 2.12.2, run the following command:

                                    helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.12.2\n

                                    Note

                                    To verify the installation, it is possible to test the deployment before applying it to the cluster with the following command: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.12.2 --dry-run

                                  "},{"location":"operator-guide/upgrade-edp-2.8/","title":"Upgrade EDP v2.7 to 2.8","text":"

                                  This section provides the details on the EDP upgrade to 2.8.4. Explore the actions and requirements below.

                                  Note

                                  Kiosk is implemented and mandatory for EDP v.2.8.4 and is optional for EDP v.2.9.0 and higher.

                                  To upgrade EDP to 2.8.4, take the following steps:

                                  1. Deploy and configure Kiosk (create a Service Account, Account, and ClusterRoleBinding) according to the Set Up Kiosk documentation.

                                    • Update the spec field in the Kiosk space:
                                      apiVersion: tenancy.kiosk.sh/v1alpha1\nkind: Space\nmetadata:\nname: <edp-project>\nspec:\naccount: <edp-project>-admin\n
                                    • Create RoleBinding (required for namespaces created before using Kiosk):

                                      Note

                                      In the uid field under ownerReferences in the Kubernetes manifest, indicate the Account Custom Resource ID from accounts.config.kiosk.sh, which can be obtained with: kubectl get account <edp-project>-admin -o=custom-columns=NAME:.metadata.uid --no-headers=true

                                      View: rolebinding-kiosk.yaml

                                      apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\ngenerateName: <edp-project>-admin-\nnamespace: <edp-project>\nownerReferences:\n- apiVersion: config.kiosk.sh/v1alpha1\nblockOwnerDeletion: true\ncontroller: true\nkind: Account\nname: <edp-project>-admin\nuid: ''\nroleRef:\napiGroup: rbac.authorization.k8s.io\nkind: ClusterRole\nname: kiosk-space-admin\nsubjects:\n- kind: ServiceAccount\nname: <edp-project>\nnamespace: security\n
                                      kubectl create -f rolebinding-kiosk.yaml\n
                                    • If Amazon Elastic Container Registry is used to store the images, there are two options:

                                      • Enable IRSA and create AWS IAM Role for Kaniko image builder. Please refer to the IAM Roles for Kaniko Service Accounts section for the details.
                                      • The Amazon Elastic Container Registry Roles can be stored in an instance profile.
                                    • Update Custom Resource Definitions by applying all the necessary CRD to the cluster with the command below:

                                      kubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/release/2.8/deploy-templates/crds/edp_v1alpha1_cdpipeline_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.8/deploy-templates/crds/edp_v1alpha1_codebase_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.8/deploy-templates/crds/edp_v1alpha1_cd_stage_deploy_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.8/deploy-templates/crds/v2_v1alpha1_jenkinsjobbuildrun_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.8/deploy-templates/crds/v2_v1alpha1_cdstagejenkinsdeployment_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.8/deploy-templates/crds/v2_v1alpha1_jenkinsjob_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/release/2.8/deploy-templates/crds/edp_v1alpha1_nexus_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.8/deploy-templates/crds/v1_v1alpha1_keycloakauthflow_crd.yaml\n
                                    • If Amazon Elastic Container Registry is used to store the images and Kaniko to build them, add the kanikoRoleArn parameter to the values before starting the update process. This parameter appears in AWS Roles once IRSA is enabled and the AWS IAM Role is created for Kaniko. The value should look as follows:

                                      kanikoRoleArn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/AWSIRSA<CLUSTER_NAME><EDP_NAMESPACE>Kaniko\n
                                    • To upgrade EDP to the v.2.8.4, run the following command:

                                      helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.8.4\n

                                      Note

                                      To verify the installation, it is possible to test the deployment before applying it to the cluster with: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.8.4 --dry-run

                                    • Optionally, remove the following Kubernetes resources left over from the previous EDP installation:

                                      kubectl delete cm luminatesec-conf -n <edp-namespace>\nkubectl delete sa edp edp-perf-operator -n <edp-namespace>\nkubectl delete deployment perf-operator -n <edp-namespace>\nkubectl delete clusterrole edp-<edp-namespace> edp-perf-operator-<edp-namespace>\nkubectl delete clusterrolebinding edp-<edp-namespace> edp-perf-operator-<edp-namespace>\nkubectl delete rolebinding edp-<edp-namespace> edp-perf-operator-<edp-namespace>-admin -n <edp-namespace>\nkubectl delete perfserver epam-perf -n <edp-namespace>\nkubectl delete services.v2.edp.epam.com postgres rabbit-mq -n <edp-namespace>\n
                                    • Update the CI and CD Jenkins job provisioners:

                                      Note

                                      Please refer to the Manage Jenkins CI Pipeline Job Provisioner section for the details.

                                      View: Default CI provisioner template for EDP 2.8.4
                                      /* Copyright 2021 EPAM Systems.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n\nSee the License for the specific language governing permissions and\nlimitations under the License. */\n\nimport groovy.json.*\nimport jenkins.model.Jenkins\nimport hudson.model.*\n\nJenkins jenkins = Jenkins.instance\ndef stages = [:]\ndef jiraIntegrationEnabled = Boolean.parseBoolean(\"${JIRA_INTEGRATION_ENABLED}\" as String)\ndef commitValidateStage = jiraIntegrationEnabled ? ',{\"name\": \"commit-validate\"}' : ''\ndef createJIMStage = jiraIntegrationEnabled ? ',{\"name\": \"create-jira-issue-metadata\"}' : ''\ndef buildTool = \"${BUILD_TOOL}\"\ndef goBuildStage = buildTool.toString() == \"go\" ? ',{\"name\": \"build\"}' : ',{\"name\": \"compile\"}'\n\nstages['Code-review-application'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" + goBuildStage +\n',{\"name\": \"tests\"},[{\"name\": \"sonar\"},{\"name\": \"dockerfile-lint\"},{\"name\": \"helm-lint\"}]]'\nstages['Code-review-library'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"compile\"},{\"name\": \"tests\"},' +\n'{\"name\": \"sonar\"}]'\nstages['Code-review-autotests'] = '[{\"name\": \"gerrit-checkout\"},{\"name\": \"get-version\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"},{\"name\": \"sonar\"}' + \"${createJIMStage}\" + ']'\nstages['Code-review-default'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" + ']'\nstages['Code-review-library-terraform'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"terraform-lint\"}]'\nstages['Code-review-library-opa'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"}]'\nstages['Code-review-library-codenarc'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"sonar\"},{\"name\": \"build\"}]'\n\nstages['Build-library-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-npm'] = stages['Build-library-maven']\nstages['Build-library-gradle'] = stages['Build-library-maven']\nstages['Build-library-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-terraform'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"terraform-lint\"}' +\n',{\"name\": \"terraform-plan\"},{\"name\": \"terraform-apply\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-opa'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"tests\"}' + 
\"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-codenarc'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' +\n\"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\n\nstages['Build-application-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},[{\"name\": \"sonar\"}],{\"name\": \"build\"},{\"name\": \"build-image-kaniko\"},' +\n'{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-npm'] = stages['Build-application-maven']\nstages['Build-application-gradle'] = stages['Build-application-maven']\nstages['Build-application-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},[{\"name\": \"sonar\"}],{\"name\": \"build-image-kaniko\"},' +\n'{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-go'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"tests\"},{\"name\": \"sonar\"},' +\n'{\"name\": \"build\"},{\"name\": \"build-image-kaniko\"}' +\n\"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},' +\n'{\"name\": \"build-image-kaniko\"},{\"name\": \"push\"}' + \"${createJIMStage}\" +\n',{\"name\": \"git-tag\"}]'\n\nstages['Create-release'] = '[{\"name\": \"checkout\"},{\"name\": \"create-branch\"},{\"name\": \"trigger-job\"}]'\n\ndef defaultBuild = '[{\"name\": \"checkout\"}' + \"${createJIMStage}\" + ']'\n\ndef codebaseName = \"${NAME}\"\ndef gitServerCrName = \"${GIT_SERVER_CR_NAME}\"\ndef gitServerCrVersion = \"${GIT_SERVER_CR_VERSION}\"\ndef gitCredentialsId = \"${GIT_CREDENTIALS_ID ? 
GIT_CREDENTIALS_ID : 'gerrit-ciuser-sshkey'}\"\ndef repositoryPath = \"${REPOSITORY_PATH}\"\ndef defaultBranch = \"${DEFAULT_BRANCH}\"\n\ndef codebaseFolder = jenkins.getItem(codebaseName)\nif (codebaseFolder == null) {\nfolder(codebaseName)\n}\n\ncreateListView(codebaseName, \"Releases\")\ncreateReleasePipeline(\"Create-release-${codebaseName}\", codebaseName, stages[\"Create-release\"], \"create-release.groovy\",\nrepositoryPath, gitCredentialsId, gitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, defaultBranch)\n\nif (buildTool.toString().equalsIgnoreCase('none')) {\nreturn true\n}\n\nif (BRANCH) {\ndef branch = \"${BRANCH}\"\ndef formattedBranch = \"${branch.toUpperCase().replaceAll(/\\\\//, \"-\")}\"\ncreateListView(codebaseName, formattedBranch)\n\ndef type = \"${TYPE}\"\ndef crKey = getStageKeyName(buildTool)\ncreateCiPipeline(\"Code-review-${codebaseName}\", codebaseName, stages[crKey], \"code-review.groovy\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion)\n\ndef buildKey = \"Build-${type}-${buildTool.toLowerCase()}\".toString()\nif (type.equalsIgnoreCase('application') || type.equalsIgnoreCase('library')) {\ndef jobExists = false\nif(\"${formattedBranch}-Build-${codebaseName}\".toString() in Jenkins.instance.getAllItems().collect{it.name})\njobExists = true\n\ncreateCiPipeline(\"Build-${codebaseName}\", codebaseName, stages.get(buildKey, defaultBuild), \"build.groovy\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion)\n\nif(!jobExists)\nqueue(\"${codebaseName}/${formattedBranch}-Build-${codebaseName}\")\n}\n}\n\ndef createCiPipeline(pipelineName, codebaseName, codebaseStages, pipelineScript, repository, credId, watchBranch, gitServerCrName, gitServerCrVersion) {\npipelineJob(\"${codebaseName}/${watchBranch.toUpperCase().replaceAll(/\\\\//, \"-\")}-${pipelineName}\") {\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\ntriggers {\ngerrit {\nevents {\nif (pipelineName.contains(\"Build\"))\nchangeMerged()\nelse\npatchsetCreated()\n}\nproject(\"plain:${codebaseName}\", [\"plain:${watchBranch}\"])\n}\n}\ndefinition {\ncpsScm {\nscm {\ngit {\nremote {\nurl(repository)\ncredentials(credId)\n}\nbranches(\"${watchBranch}\")\nscriptPath(\"${pipelineScript}\")\n}\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${codebaseStages}\", \"Consequence of stages in JSON format to be run during execution\")\nstringParam(\"GERRIT_PROJECT_NAME\", \"${codebaseName}\", \"Gerrit project name(Codebase name) to be build\")\nstringParam(\"BRANCH\", \"${watchBranch}\", \"Branch to build artifact from\")\n}\n}\n}\n}\n}\n\ndef getStageKeyName(buildTool) {\nif (buildTool.toString().equalsIgnoreCase('terraform')) {\nreturn \"Code-review-library-terraform\"\n}\nif (buildTool.toString().equalsIgnoreCase('opa')) {\nreturn \"Code-review-library-opa\"\n}\nif (buildTool.toString().equalsIgnoreCase('codenarc')) {\nreturn \"Code-review-library-codenarc\"\n}\ndef buildToolsOutOfTheBox = [\"maven\",\"npm\",\"gradle\",\"dotnet\",\"none\",\"go\",\"python\"]\ndef supBuildTool = buildToolsOutOfTheBox.contains(buildTool.toString())\nreturn supBuildTool ? 
\"Code-review-${TYPE}\" : \"Code-review-default\"\n}\n\ndef createReleasePipeline(pipelineName, codebaseName, codebaseStages, pipelineScript, repository, credId,\ngitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, defaultBranch) {\npipelineJob(\"${codebaseName}/${pipelineName}\") {\nlogRotator {\nnumToKeep(14)\ndaysToKeep(30)\n}\ndefinition {\ncpsScm {\nscm {\ngit {\nremote {\nurl(repository)\ncredentials(credId)\n}\nbranches(\"${defaultBranch}\")\nscriptPath(\"${pipelineScript}\")\n}\n}\nparameters {\nstringParam(\"STAGES\", \"${codebaseStages}\", \"\")\nif (pipelineName.contains(\"Create-release\")) {\nstringParam(\"JIRA_INTEGRATION_ENABLED\", \"${jiraIntegrationEnabled}\", \"Is Jira integration enabled\")\nstringParam(\"GERRIT_PROJECT\", \"${codebaseName}\", \"\")\nstringParam(\"RELEASE_NAME\", \"\", \"Name of the release(branch to be created)\")\nstringParam(\"COMMIT_ID\", \"\", \"Commit ID that will be used to create branch from for new release. If empty, HEAD of master will be used\")\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"REPOSITORY_PATH\", \"${repository}\", \"Full repository path\")\nstringParam(\"DEFAULT_BRANCH\", \"${defaultBranch}\", \"Default repository branch\")\n}\n}\n}\n}\n}\n}\n\ndef createListView(codebaseName, branchName) {\nlistView(\"${codebaseName}/${branchName}\") {\nif (branchName.toLowerCase() == \"releases\") {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^Create-release.*\")\n}\n}\n} else {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^${branchName}-(Code-review|Build).*\")\n}\n}\n}\ncolumns {\nstatus()\nweather()\nname()\nlastSuccess()\nlastFailure()\nlastDuration()\nbuildButton()\n}\n}\n}\n

                                      Note

                                      Please refer to the Manage Jenkins CD Pipeline Job Provisioner page for the details.

                                      View: Default CD provisioner template for EDP 2.8.4
                                      /* Copyright 2021 EPAM Systems.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n\nSee the License for the specific language governing permissions and\nlimitations under the License. */\n\nimport groovy.json.*\nimport jenkins.model.Jenkins\n\nJenkins jenkins = Jenkins.instance\n\ndef pipelineName = \"${PIPELINE_NAME}-cd-pipeline\"\ndef stageName = \"${STAGE_NAME}\"\ndef qgStages = \"${QG_STAGES}\"\ndef gitServerCrVersion = \"${GIT_SERVER_CR_VERSION}\"\ndef gitCredentialsId = \"${GIT_CREDENTIALS_ID}\"\ndef sourceType = \"${SOURCE_TYPE}\"\ndef libraryURL = \"${LIBRARY_URL}\"\ndef libraryBranch = \"${LIBRARY_BRANCH}\"\ndef autodeploy = \"${AUTODEPLOY}\"\ndef scriptPath = \"Jenkinsfile\"\ndef containerDeploymentType = \"container\"\ndef deploymentType = \"${DEPLOYMENT_TYPE}\"\n\ndef stages = buildStages(deploymentType, containerDeploymentType, qgStages)\n\ndef codebaseFolder = jenkins.getItem(pipelineName)\nif (codebaseFolder == null) {\nfolder(pipelineName)\n}\n\nif (deploymentType == containerDeploymentType) {\ncreateContainerizedCdPipeline(pipelineName, stageName, stages, scriptPath, sourceType,\nlibraryURL, libraryBranch, gitCredentialsId, gitServerCrVersion,\nautodeploy)\n} else {\ncreateCustomCdPipeline(pipelineName, stageName)\n}\n\ndef buildStages(deploymentType, containerDeploymentType, qgStages) {\nreturn deploymentType == containerDeploymentType\n? '[{\"name\":\"init\",\"step_name\":\"init\"},{\"name\":\"deploy\",\"step_name\":\"deploy\"},' + qgStages + ',{\"name\":\"promote-images-ecr\",\"step_name\":\"promote-images\"}]'\n: ''\n}\n\ndef createContainerizedCdPipeline(pipelineName, stageName, stages, pipelineScript, sourceType, libraryURL, libraryBranch, libraryCredId, gitServerCrVersion, autodeploy) {\npipelineJob(\"${pipelineName}/${stageName}\") {\nif (sourceType == \"library\") {\ndefinition {\ncpsScm {\nscm {\ngit {\nremote {\nurl(libraryURL)\ncredentials(libraryCredId)\n}\nbranches(\"${libraryBranch}\")\nscriptPath(\"${pipelineScript}\")\n}\n}\n}\n}\n} else {\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\nDeploy()\")\nsandbox(true)\n}\n}\n}\nproperties {\ndisableConcurrentBuilds()\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${stages}\", \"Consequence of stages in JSON format to be run during execution\")\n\nif (autodeploy?.trim() && autodeploy.toBoolean()) {\nstringParam(\"AUTODEPLOY\", \"${autodeploy}\", \"Is autodeploy enabled?\")\nstringParam(\"CODEBASE_VERSION\", null, \"Codebase versions to deploy.\")\n}\n}\n}\n}\n\ndef createCustomCdPipeline(pipelineName, stageName) {\npipelineJob(\"${pipelineName}/${stageName}\") {\nproperties {\ndisableConcurrentBuilds()\n}\n}\n}\n
                                      • It is also necessary to add the string parameter DEPLOYMENT_TYPE to the CD provisioner:
                                        • Go to job-provisions -> cd -> default -> configure;
                                        • Add Parameter -> String parameter;
                                        • Name -> DEPLOYMENT_TYPE
                                    • Update Jenkins pipelines and stages to the new release tag:

                                      • In Jenkins, go to Manage Jenkins -> Configure system -> Find the Global Pipeline Libraries menu.
                                      • Change the Default version for edp-library-stages from build/2.8.0-RC.6 to build/2.9.0-RC.5
                                      • Change the Default version for edp-library-pipelines from build/2.8.0-RC.4 to build/2.9.0-RC.3
                                    • Update the edp-admin-console Custom Resource in the KeycloakClient Custom Resource Definition:

                                      View: keycloakclient.yaml
                                      kind: KeycloakClient\napiVersion: v1.edp.epam.com/v1alpha1\nmetadata:\nname: edp-admin-console\nnamespace: <edp-namespace>\nspec:\nadvancedProtocolMappers: false\nattributes: null\naudRequired: true\nclientId: admin-console-client\ndirectAccess: true\npublic: false\nsecret: admin-console-client\nserviceAccount:\nenabled: true\nrealmRoles:\n- developer\ntargetRealm: <keycloak-edp-realm>\nwebUrl: >-\nhttps://edp-admin-console-example.com\n
                                      kubectl apply -f keycloakclient.yaml\n
                                    • Remove the admin-console-client client ID in the edp-namespace-main realm in Keycloak, restart the keycloak-operator pod, and check that the new KeycloakClient is created with the confidential access type (a restart and verification sketch follows the note below).

                                      Note

                                      If \"Internal error\" occurs, regenerate the admin-console-client secret in the Credentials tab in Keycloak and update the admin-console-client secret key \"clientSecret\" and \"password\".

                                    • Update image versions for the Jenkins agents in the ConfigMap:

                                      kubectl edit configmap jenkins-slaves -n <edp-namespace>\n
                                      • The versions of the images should be:
                                        epamedp/edp-jenkins-dotnet-21-agent:1.0.2\nepamedp/edp-jenkins-dotnet-31-agent:1.0.2\nepamedp/edp-jenkins-go-agent:1.0.3\nepamedp/edp-jenkins-gradle-java11-agent:2.0.2\nepamedp/edp-jenkins-gradle-java8-agent:1.0.2\nepamedp/edp-jenkins-helm-agent:1.0.6\nepamedp/edp-jenkins-maven-java11-agent:2.0.3\nepamedp/edp-jenkins-maven-java8-agent:1.0.2\nepamedp/edp-jenkins-npm-agent:2.0.2\nepamedp/edp-jenkins-python-38-agent:2.0.3\nepamedp/edp-jenkins-terraform-agent:2.0.4\n
                                      • Add new Jenkins agents under the data field:
                                      View
                                      data:\ncodenarc-template: |-\n<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n<inheritFrom></inheritFrom>\n<name>codenarc</name>\n<namespace></namespace>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<instanceCap>2147483647</instanceCap>\n<slaveConnectTimeout>100</slaveConnectTimeout>\n<idleMinutes>5</idleMinutes>\n<activeDeadlineSeconds>0</activeDeadlineSeconds>\n<label>codenarc</label>\n<serviceAccount>jenkins</serviceAccount>\n<nodeSelector>beta.kubernetes.io/os=linux</nodeSelector>\n<nodeUsageMode>NORMAL</nodeUsageMode>\n<workspaceVolume class=\"org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume\">\n<memory>false</memory>\n</workspaceVolume>\n<volumes/>\n<containers>\n<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n<name>jnlp</name>\n<image>epamedp/edp-jenkins-codenarc-agent:1.0.0</image>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<workingDir>/tmp</workingDir>\n<command></command>\n<args>${computer.jnlpmac} ${computer.name}</args>\n<ttyEnabled>false</ttyEnabled>\n<resourceRequestCpu></resourceRequestCpu>\n<resourceRequestMemory></resourceRequestMemory>\n<resourceLimitCpu></resourceLimitCpu>\n<resourceLimitMemory></resourceLimitMemory>\n<envVars>\n<org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n<key>JAVA_TOOL_OPTIONS</key>\n<value>-XX:+UnlockExperimentalVMOptions -Dsun.zip.disableMemoryMapping=true</value>\n</org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n</envVars>\n<ports/>\n</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n</containers>\n<envVars/>\n<annotations/>\n<imagePullSecrets/>\n<podRetention class=\"org.csanchez.jenkins.plugins.kubernetes.pod.retention.Default\"/>\n</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\nopa-template: |-\n<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n<inheritFrom></inheritFrom>\n<name>opa</name>\n<namespace></namespace>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<instanceCap>2147483647</instanceCap>\n<slaveConnectTimeout>100</slaveConnectTimeout>\n<idleMinutes>5</idleMinutes>\n<activeDeadlineSeconds>0</activeDeadlineSeconds>\n<label>opa</label>\n<serviceAccount>jenkins</serviceAccount>\n<nodeSelector>beta.kubernetes.io/os=linux</nodeSelector>\n<nodeUsageMode>NORMAL</nodeUsageMode>\n<workspaceVolume class=\"org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume\">\n<memory>false</memory>\n</workspaceVolume>\n<volumes/>\n<containers>\n<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n<name>jnlp</name>\n<image>epamedp/edp-jenkins-opa-agent:1.0.1</image>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<workingDir>/tmp</workingDir>\n<command></command>\n<args>${computer.jnlpmac} ${computer.name}</args>\n<ttyEnabled>false</ttyEnabled>\n<resourceRequestCpu></resourceRequestCpu>\n<resourceRequestMemory></resourceRequestMemory>\n<resourceLimitCpu></resourceLimitCpu>\n<resourceLimitMemory></resourceLimitMemory>\n<envVars>\n<org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n<key>JAVA_TOOL_OPTIONS</key>\n<value>-XX:+UnlockExperimentalVMOptions -Dsun.zip.disableMemoryMapping=true</value>\n</org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n</envVars>\n<ports/>\n</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n</containers>\n<envVars/>\n<annotations/>\n<imagePullSecrets/>\n<podRetention 
class=\"org.csanchez.jenkins.plugins.kubernetes.pod.retention.Default\"/>\n</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n
                                      • Restart the Jenkins pod.
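                                      After updating the ConfigMap, a hedged way to review the resulting agent images and restart Jenkins, assuming Jenkins runs as a Deployment named jenkins:
                                        kubectl -n <edp-namespace> get configmap jenkins-slaves -o yaml | grep 'epamedp/'\nkubectl -n <edp-namespace> rollout restart deployment jenkins\n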
                                    • Update compatible plugins in Jenkins and install additional plugins:

                                      • Go to Manage Jenkins -> Manage Plugins -> Select Compatible -> Click Download now and install after restart
                                      • Install the following additional plugins (click the Available plugins tab in Jenkins):
                                        • Groovy Postbuild
                                        • CloudBees AWS Credentials
                                        • Badge
                                        • Timestamper
                                    • Add the annotation deploy.edp.epam.com/previous-stage-name: '' (the value should be empty if the CD pipeline contains only one stage) to each Custom Resource of the Stage Custom Resource Definition (an annotate one-liner is sketched after this step), for example:

                                      • List all Custom Resources in Stage: kubectl get stages.v2.edp.epam.com -n <edp-namespace>
                                      • Edit resources: kubectl edit stages.v2.edp.epam.com <cd-stage-name> -n <edp-namespace>
                                        apiVersion: v2.edp.epam.com/v1alpha1\nkind: Stage\nmetadata:\nannotations:\ndeploy.edp.epam.com/previous-stage-name: ''\n

                                      Note

                                      If a pipeline contains several stages, add a previous stage name indicated in the EDP Admin Console to the annotation, for example: deploy.edp.epam.com/previous-stage-name: 'dev'.
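                                      Instead of editing each resource manually, a hedged one-liner that sets the annotation on a single Stage (set the value to the previous stage name for pipelines with several stages):
                                        kubectl -n <edp-namespace> annotate stages.v2.edp.epam.com <cd-stage-name> deploy.edp.epam.com/previous-stage-name='' --overwrite\n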

                                    • Execute the script below to align CDPipeline resources to the new API (the jq command-line JSON processor is required):

                                      pipelines=$( kubectl get cdpipelines -n <edp-namespace> -ojson | jq -c '.items[]' )\nfor p in $pipelines; do\necho \"$p\" | \\\n    jq '. | .spec.inputDockerStreams = .spec.input_docker_streams | del(.spec.input_docker_streams) | .spec += { \"deploymentType\": \"container\" } ' | \\\n    kubectl apply -f -\ndone\n
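                                      A hedged check that the migration succeeded; every CDPipeline should now report the container deployment type:
                                        kubectl get cdpipelines -n <edp-namespace> -o jsonpath='{.items[*].spec.deploymentType}'\n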
                                    • Update the database in the edp-db pod in the edp-namespace:

                                      • Log in to the pod:
                                        kubectl exec -i -t -n <edp-namespace> edp-db-<pod> -c edp-db \"--\" sh -c \"(bash || ash || sh)\"\n
                                        • Log in to the Postgres DB (where \"admin\" is the user the secret was created for):
                                        psql edp-db <admin>;\nSET search_path to '<edp-namespace>';\nUPDATE cd_pipeline SET deployment_type = 'container';\n
                                    • Add \"AUTODEPLOY\":\"true/false\",\"DEPLOYMENT_TYPE\":\"container\" to every Custom Resource in jenkinsjobs.v2.edp.epam.com:

                                      • Edit Kubernetes resources:
                                        kubectl get jenkinsjobs.v2.edp.epam.com -n <edp-namespace>\n\nkubectl edit jenkinsjobs.v2.edp.epam.com <cd-pipeline-name> -n <edp-namespace>\n
                                      • Alternatively, use this script to update all the necessary jenkinsjobs Custom Resources:
                                          edp_namespace=<edp_namespace>\nfor stages in $(kubectl get jenkinsjobs -o=name -n $edp_namespace); do kubectl get $stages -n $edp_namespace -o yaml | grep -q \"container\" && echo -e \"\\n$stages is already updated\" || kubectl get $stages -n $edp_namespace -o yaml | sed 's/\"GIT_SERVER_CR_VERSION\"/\"AUTODEPLOY\":\"false\",\"DEPLOYMENT_TYPE\":\"container\",\"GIT_SERVER_CR_VERSION\"/g' | kubectl apply -f -; done\n
                                      • Make sure the edited resource looks as follows:
                                        job:\nconfig: '{\"AUTODEPLOY\":\"false\",\"DEPLOYMENT_TYPE\":\"container\",\"GIT_SERVER_CR_VERSION\":\"v2\",\"PIPELINE_NAME\":\"your-pipeline-name\",\"QG_STAGES\":\"{\\\"name\\\":\\\"manual\\\",\\\"step_name\\\":\\\"your-step-name\\\"}\",\"SOURCE_TYPE\":\"default\",\"STAGE_NAME\":\"your-stage-name\"}'\nname: job-provisions/job/cd/job/default\n
                                      • Restart the Jenkins operator pod and wait until the CD job provisioner in Jenkins creates the updated pipelines.
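                                      A rough, hedged verification that the jenkinsjobs resources now carry the new keys (the count should match the number of CD pipelines):
                                        kubectl get jenkinsjobs -n <edp-namespace> -o yaml | grep -c 'DEPLOYMENT_TYPE'\n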
                                    • "},{"location":"operator-guide/upgrade-edp-2.8/#possible-issues","title":"Possible Issues","text":"
                                      1. SonarQube fails during the CI pipeline run. Previous SonarQube builds used the latest version of the OpenID Connect Authentication for SonarQube plugin. Version 2.1.0 of this plugin may have connection issues, so downgrade it to get rid of errors in the pipeline. Take the following steps:

                                        • Log in to the Sonar pod:
                                          kubectl exec -i -t -n <edp-namespace> sonar-<pod> -c sonar \"--\" sh -c \"(bash || ash || sh)\"\n
                                        • Run the command in the Sonar container:
                                          rm extensions/plugins/sonar-auth-oidc-plugin*\n
                                        • Install the OpenID Connect Authentication for SonarQube plugin v2.0.0:
                                          curl -L  https://github.com/vaulttec/sonar-auth-oidc/releases/download/v2.0.0/sonar-auth-oidc-plugin-2.0.0.jar --output extensions/plugins/sonar-auth-oidc-plugin-2.0.0.jar\n
                                        • Restart the SonarQube pod.
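                                        After the pod restarts, a hedged way to confirm that only the 2.0.0 plugin jar remains (assuming the container's working directory is the SonarQube home, as in the steps above):
                                          kubectl exec -n <edp-namespace> sonar-<pod> -c sonar -- ls extensions/plugins/ | grep sonar-auth-oidc\n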
                                      2. The Helm lint checker in EDP 2.8.4 has some additional rules. It can cause issues during the Code Review pipeline in Jenkins for applications that were transferred from previous EDP versions to EDP 2.8.4. To fix this, add the following fields to the Chart.yaml file:

                                        • Go to the Git repository -> Choose the application -> Edit the deploy-templates/Chart.yaml file.
                                        • It is necessary to add the following lines to the bottom of the Chart.yaml file:
                                          home: https://github.com/your-repo.git\nsources:\n- https://github.com/your-repo.git\nmaintainers:\n- name: DEV Team\n
                                        • Add a newline character at the end of the last line. Please be aware that this is important (a check is sketched below).
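                                        A small sketch, run from the repository root in a POSIX shell, to confirm the trailing newline is present:
                                          test -z \"$(tail -c 1 deploy-templates/Chart.yaml)\" && echo trailing newline present || echo add a trailing newline\n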
                                      "},{"location":"operator-guide/upgrade-edp-2.8/#related-articles","title":"Related Articles","text":"
                                      • Set Up Kiosk
                                      • IAM Roles for Kaniko Service Accounts
                                      • Manage Jenkins CI Pipeline Job Provisioner
                                      • Manage Jenkins CD Pipeline Job Provisioner
                                      "},{"location":"operator-guide/upgrade-edp-2.9/","title":"Upgrade EDP v2.8 to 2.9","text":"

                                      This section provides the details on the EDP upgrade to 2.9.0. Explore the actions and requirements below.

                                      Note

                                      Kiosk is optional for EDP v.2.9.0 and higher, and enabled by default. To disable it, add the following parameter to the values.yaml file: kioskEnabled: false. Please refer to the Set Up Kiosk documentation for the details.

                                      1. If Amazon Elastic Container Registry is used to store the images, there are two options:

                                        • Enable IRSA and create AWS IAM Role for Kaniko image builder. Please refer to the IAM Roles for Kaniko Service Accounts section for the details.
                                        • The Amazon Elastic Container Registry Roles can be stored in an instance profile.
                                      2. Before updating EDP to 2.9.0, update the gerrit-is-credentials secret by adding the new clientSecret key with the value from gerrit-is-credentials.client_secret:

                                        kubectl edit secret gerrit-is-credentials -n <edp-namespace>\n
                                        • Make sure it looks as follows (replace with the necessary key value):
                                          data:\nclient_secret: example\nclientSecret: example\n
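                                        Instead of editing the secret manually, a hedged one-liner that copies the existing client_secret value into the new clientSecret key:
                                          kubectl -n <edp-namespace> patch secret gerrit-is-credentials \\\n --patch=\"{\\\"data\\\": { \\\"clientSecret\\\": \\\"$(kubectl -n <edp-namespace> get secret gerrit-is-credentials -o jsonpath='{.data.client_secret}')\\\" }}\"\n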
                                      3. Update Custom Resource Definitions. The following commands will apply all the necessary CRDs to the cluster:

                                        kubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_gerritgroupmember_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_gerritgroup_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_gerritprojectaccess_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_gerritproject_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_jenkins_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_jenkinsagent_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_jenkinsauthorizationrolemapping_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_jenkinsauthorizationrole_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.9/deploy-templates/crds/v1_v1alpha1_keycloakclientscope_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.9/deploy-templates/crds/v1_v1alpha1_keycloakrealmuser_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/release/2.9/deploy-templates/crds/edp_v1alpha1_nexus_crd.yaml\n
                                      4. If Amazon Elastic Container Registry is used to store the images and Kaniko to build them, add the kanikoRoleArn parameter to the values before starting the update process. This parameter is indicated in AWS Roles once IRSA is enabled and an AWS IAM Role is created for Kaniko. The value should look as follows:

                                        kanikoRoleArn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/AWSIRSA\u2039CLUSTER_NAME\u203a\u2039EDP_NAMESPACE\u203aKaniko\n
                                      5. To upgrade EDP to the v.2.9.0, run the following command:

                                        helm upgrade --install edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.9.0\n

                                        Note

                                        To verify the installation, it is possible to test the deployment before applying it to the cluster with: helm upgrade --install edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.9.0 --dry-run

                                      6. Optionally, remove the following Kubernetes resources left over from the previous EDP installation:

                                        kubectl delete rolebinding edp-cd-pipeline-operator-<edp-namespace>-admin -n <edp-namespace>\n
                                      7. After the EDP update, restart the 'sonar-operator' pod to ensure the proper Sonar plugin versioning. After 'sonar-operator' is restarted, check the list of installed plugins in the corresponding SonarQube menu.
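                                        A restart sketch, assuming 'sonar-operator' runs as a Deployment with that name:
                                          kubectl -n <edp-namespace> rollout restart deployment sonar-operator\n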

                                      8. Update Jenkins pipelines and stages to the new release tag:

                                        • Restart the Jenkins pod
                                        • In Jenkins, go to Manage Jenkins -> Configure system -> Find the Global Pipeline Libraries menu
                                        • Make sure that the Default version for edp-library-stages is build/2.10.0-RC.1
                                        • Make sure that the Default version for edp-library-pipelines is build/2.10.0-RC.1
                                      9. Update image versions for the Jenkins agents in the ConfigMap:

                                        kubectl edit configmap jenkins-slaves -n <edp-namespace>\n
                                        • The versions of the images should be:
                                          epamedp/edp-jenkins-codenarc-agent:1.0.1\nepamedp/edp-jenkins-dotnet-21-agent:1.0.3\nepamedp/edp-jenkins-dotnet-31-agent:1.0.3\nepamedp/edp-jenkins-go-agent:1.0.4\nepamedp/edp-jenkins-gradle-java8-agent:1.0.3\nepamedp/edp-jenkins-gradle-java11-agent:2.0.3\nepamedp/edp-jenkins-helm-agent:1.0.7\nepamedp/edp-jenkins-maven-java8-agent:1.0.3\nepamedp/edp-jenkins-maven-java11-agent:2.0.4\nepamedp/edp-jenkins-npm-agent:2.0.3\nepamedp/edp-jenkins-opa-agent:1.0.2\nepamedp/edp-jenkins-python-38-agent:2.0.4\nepamedp/edp-jenkins-terraform-agent:2.0.5\n
                                        • Restart the Jenkins pod.
                                      10. Update the compatible plugins in Jenkins:

                                        • Go to Manage Jenkins -> Manage Plugins -> Select Compatible -> Click Download now and install after restart
                                      "},{"location":"operator-guide/upgrade-edp-2.9/#related-articles","title":"Related Articles","text":"
                                      • Set Up Kiosk
                                      • IAM Roles for Kaniko Service Accounts
                                      "},{"location":"operator-guide/upgrade-edp-3.0/","title":"Upgrade EDP v2.12 to 3.0","text":"

                                      Important

                                      • Before starting the upgrade procedure, please make the necessary backups.
                                      • Kiosk integration is disabled by default. With EDP below v.3.0.x, define the global.kioskEnabled parameter in the values.yaml file. For details, please refer to the Set Up Kiosk page.
                                      • The gerrit-ssh-port parameter is moved from gerrit-operator.gerrit.sshport to global.gerritSSHPort in the values.yaml file.
                                      • In edp-gerrit-operator, the gitServer.user value in the values.yaml file is changed from jenkins to edp-ci.

                                      This section provides the details on upgrading EDP to 3.0. Explore the actions and requirements below.

                                      1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                        kubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/d9a4d15244c527ef6d1d029af27574282a281b98/deploy-templates/crds/v2.edp.epam.com_gerrits.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_cdstagedeployments.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_codebasebranches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_codebaseimagestreams.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_codebases.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_gitservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_gittags.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_imagestreamtags.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_jiraissuemetadatas.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_jiraservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakauthflows.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakclients.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakclientscopes.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealmcomponents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealmgroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealmidentityproviders.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealmrolebatches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealmroles.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealms.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealmusers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloaks.yaml\n
                                      2. Set the required parameters. For more details, please refer to the values.yaml file.

                                        View: values.yaml
                                        edp-tekton:\nenabled: false\nadmin-console-operator:\nenabled: true\njenkins-operator:\nenabled: true\n
                                      3. Add proper Helm annotations and labels as indicated below. This step is necessary starting from the release v.3.0.x as custom resources are managed by Helm and removed from the Keycloak Controller logic.

                                          kubectl label EDPComponent main-keycloak app.kubernetes.io/managed-by=Helm -n <edp-namespace>\n  kubectl annotate EDPComponent main-keycloak meta.helm.sh/release-name=<edp-release-name> -n <edp-namespace>\n  kubectl annotate EDPComponent main-keycloak meta.helm.sh/release-namespace=<edp-namespace> -n <edp-namespace>\n  kubectl label KeycloakRealm main app.kubernetes.io/managed-by=Helm -n <edp-namespace>\n  kubectl annotate KeycloakRealm main meta.helm.sh/release-name=<edp-release-name> -n <edp-namespace>\n  kubectl annotate KeycloakRealm main meta.helm.sh/release-namespace=<edp-namespace> -n <edp-namespace>\n

                                      4. To upgrade EDP to 3.0, run the following command:

                                        helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=3.0.x\n

                                        Note

                                        To verify the installation, it is possible to test the deployment before applying it to the cluster with the following command: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=3.0.x --dry-run

                                      5. Update image versions for the Jenkins agents in the ConfigMap:

                                          kubectl edit configmap jenkins-slaves -n <edp-namespace>\n
                                        • The versions of the images must be the following:
                                          epamedp/edp-jenkins-codenarc-agent:3.0.10\nepamedp/edp-jenkins-dotnet-31-agent:3.0.9\nepamedp/edp-jenkins-go-agent:3.0.17\nepamedp/edp-jenkins-gradle-java11-agent:3.0.7\nepamedp/edp-jenkins-gradle-java8-agent:3.0.10\nepamedp/edp-jenkins-helm-agent:3.0.11\nepamedp/edp-jenkins-kaniko-docker-agent:1.0.9\nepamedp/edp-jenkins-maven-java11-agent:3.0.7\nepamedp/edp-jenkins-maven-java8-agent:3.0.10\nepamedp/edp-jenkins-npm-agent:3.0.9\nepamedp/edp-jenkins-opa-agent:3.0.7\nepamedp/edp-jenkins-python-38-agent:3.0.8\nepamedp/edp-jenkins-sast-agent:0.1.5\nepamedp/edp-jenkins-terraform-agent:3.0.9\n
                                        • Remove the edp-jenkins-dotnet-21-agent agent manifest.
                                        • Restart the Jenkins pod.
                                      6. Attach the id_rsa.pub SSH public key from the gerrit-ciuser-sshkey secret to the edp-ci Gerrit user in the gerrit pod:

                                        ssh -p <gerrit_ssh_port> <host> gerrit set-account --add-ssh-key ~/id_rsa.pub\n

                                        Notes

                                        • For this operation, use the gerrit-admin SSH key from secrets.
                                        • <host> is admin@localhost or any other user with permissions.
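                                        If needed, the public key can first be extracted from the secret to a local file; a sketch assuming the secret stores it under the id_rsa.pub data key:
                                          kubectl -n <edp-namespace> get secret gerrit-ciuser-sshkey -o jsonpath='{.data.id_rsa\\.pub}' | base64 -d > id_rsa.pub\n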
                                      7. Change the username from jenkins to edp-ci in the gerrit-ciuser-sshkey secret:

                                        kubectl -n <edp-namespace> patch secret gerrit-ciuser-sshkey\\\n --patch=\"{\\\"data\\\": { \\\"username\\\": \\\"$(echo -n edp-ci |base64 -w0)\\\" }}\" -oyaml\n
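                                        A quick, hedged check that the username was updated:
                                          kubectl -n <edp-namespace> get secret gerrit-ciuser-sshkey -o jsonpath='{.data.username}' | base64 -d\n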

                                      Warning

                                      In EDP v.3.0.x, Admin Console is deprecated, and the EDP interface is available only via EDP Portal.

                                      "},{"location":"operator-guide/upgrade-edp-3.0/#related-articles","title":"Related Articles","text":"
                                      • Migrate CI Pipelines From Jenkins to Tekton
                                      "},{"location":"operator-guide/upgrade-edp-3.1/","title":"Upgrade EDP v3.0 to 3.1","text":"

                                      Important

                                      We suggest making a backup of the EDP environment before starting the upgrade procedure.

                                      This section provides the details on the EDP upgrade to v3.1. Explore the actions and requirements below.

                                      1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                        kubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.13.2/deploy-templates/crds/v2.edp.epam.com_jenkins.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.13.4/deploy-templates/crds/v2.edp.epam.com_gerrits.yaml\n
                                      2. To upgrade EDP to the v3.1, run the following command:

                                        helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=3.1.0\n

                                        Note

                                        To verify the installation, it is possible to test the deployment before applying it to the cluster with the following command: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=3.1.0 --dry-run

                                      "},{"location":"operator-guide/upgrade-edp-3.2/","title":"Upgrade EDP v3.1 to 3.2","text":"

                                      Important

                                      We suggest making a backup of the EDP environment before starting the upgrade procedure.

                                      This section provides the details on the EDP upgrade to v3.2.2. Explore the actions and requirements below.

                                      1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                        kubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_cdstagedeployments.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_codebasebranches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_codebaseimagestreams.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_codebases.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_gitservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_gittags.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_imagestreamtags.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_jiraissuemetadatas.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_jiraservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_cdstagejenkinsdeployments.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkins.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsagents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsauthorizationrolemappings.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsauthorizationroles.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsfolders.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsjobbuildruns.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsjobs.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsscripts.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsserviceaccounts.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinssharedlibraries.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-component-operator/v0.13.0/deploy-templates/crds/v1.edp.epam.com_edpcomponents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/v2.14.1/deploy-templates/crds/v2.edp.epam.com_cdpipelines.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/v2.14.1/deploy-templates/crds/v2.edp.epam.com_stages.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/v2.14.1/deploy-templates/crds/v2.edp.epam.com_nexuses.yaml\nkubectl 
apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/v2.14.1/deploy-templates/crds/v2.edp.epam.com_nexususers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_sonargroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_sonarpermissiontemplates.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_sonars.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerritgroupmembers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerritgroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerritmergerequests.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerritprojectaccesses.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerritprojects.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerritreplicationconfigs.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerrits.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/v2.13.0/deploy-templates/crds/v2.edp.epam.com_perfdatasourcegitlabs.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/v2.13.0/deploy-templates/crds/v2.edp.epam.com_perfdatasourcejenkinses.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/v2.13.0/deploy-templates/crds/v2.edp.epam.com_perfdatasourcesonars.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/v2.13.0/deploy-templates/crds/v2.edp.epam.com_perfservers.yaml\n
                                      2. Generate a cookie-secret for proxy with the following command:

                                        nexus_proxy_cookie_secret=$(openssl rand -base64 32 | head -c 32)\n
                                        Create nexus-proxy-cookie-secret in the namespace:
                                        kubectl -n <edp-project> create secret generic nexus-proxy-cookie-secret \\\n--from-literal=cookie-secret=${nexus_proxy_cookie_secret}\n
                                      3. EDP 3.2.2 features OIDC configuration for EDP Portal. If this functionality is required, create keycloak-client-headlamp-secret as described in this article:

                                        kubectl -n <edp-project> create secret generic keycloak-client-edp-portal-secret \\\n--from-literal=clientSecret=<keycloak_client_secret_key>\n
                                      4. Delete the following resources:

                                        kubectl -n <edp-project> delete KeycloakClient nexus\nkubectl -n <edp-project> delete EDPComponent nexus\nkubectl -n <edp-project> delete Ingress nexus\nkubectl -n <edp-project> delete deployment edp-tekton-dashboard\n
                                      5. EDP release 3.2.2 uses the default cluster storageClass, so check the storageClass parameters used previously (a kubectl sketch follows this step). If required, align the storageClassName in the EDP values.yaml file with the one used by the EDP PVCs. For example:

                                        edp-tekton:\nbuildTool:\ngo:\ncache:\npersistentVolume:\n# -- Specifies storageClass type. If not specified, a default storageClass for go-cache volume is used\nstorageClass: ebs-sc\n\njenkins-operator:\nenabled: true\njenkins:\nstorage:\n# -- Storageclass for Jenkins data volume\nclass: gp2\n\nsonar-operator:\nsonar:\nstorage:\ndata:\n# --  Storageclass for Sonar data volume\nclass: gp2\ndatabase:\n# --  Storageclass for database data volume\nclass: gp2\n\ngerrit-operator:\ngerrit:\nstorage:\n# --  Storageclass for Gerrit data volume\nclass: gp2\n\nnexus-operator:\nnexus:\nstorage:\n# --  Storageclass for Nexus data volume\nclass: gp2\n
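                                        A hedged way to list the storage classes currently used by the EDP PVCs before aligning values.yaml:
                                          kubectl -n <edp-namespace> get pvc -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName\n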
                                      6. To upgrade EDP to the v3.2.2, run the following command:

                                        helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=3.2.2\n

                                        Note

                                        To verify the installation, it is possible to test the deployment before applying it to the cluster with the following command: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=3.2.2 --dry-run

                                      7. "},{"location":"operator-guide/upgrade-edp-3.3/","title":"Upgrade EDP v3.2 to 3.3","text":"

                                        Important

                                        We suggest making a backup of the EDP environment before starting the upgrade procedure.

                                        Note

                                        Cache volumes for go and npm are currently disabled in the EDP 3.3 release.

                                        This section provides the details on the EDP upgrade to v3.3.0. Explore the actions and requirements below.

                                        1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                          kubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.16.0/deploy-templates/crds/v2.edp.epam.com_codebases.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_jenkins.yaml\n
                                        2. If you use Gerrit VCS, delete the corresponding resource due to changes in annotations:

                                          kubectl -n edp delete EDPComponent gerrit\n
                                          The deployment will create a new EDPComponent called gerrit instead.

                                        3. To upgrade EDP to the v3.3.0, run the following command:

                                          helm upgrade edp epamedp/edp-install -n edp --values values.yaml --version=3.3.0\n

                                          Note

                                          To verify the installation, it is possible to test the deployment before applying it to the cluster with the --dry-run tag: helm upgrade edp epamedp/edp-install -n edp --values values.yaml --version=3.3.0 --dry-run

                                        4. In EDP v3.3.0, a new feature was introduced allowing manual pipeline re-triggering by sending a comment with /recheck. To enable the re-trigger feature for applications that were added before the upgrade, please proceed with the following:

                                          4.1 For Gerrit VCS, add the following event to the webhooks.config configuration file in the All-Projects repository:

                                          [remote \"commentadded\"]\n  url = http://el-gerrit-listener:8080\n  event = comment-added\n

                                          4.2 For GitHub VCS, check the Issue comments permission for each webhook in every application added before the EDP upgrade to 3.3.0.

                                          4.3 For GitLab VCS, check the Comments permission for each webhook in every application added before the EDP upgrade to 3.3.0.

                                        "},{"location":"operator-guide/upgrade-edp-3.4/","title":"Upgrade EDP v3.3 to 3.4","text":"

                                        Important

                                        We suggest making a backup of the EDP environment before starting the upgrade procedure.

                                        Note

                                        Please note that the following components are deprecated: perf-operator, edp-admin-console, edp-admin-console-operator, and edp-jenkins-operator. They should be additionally migrated in order to avoid their deletion. For migration details, please refer to the Migrate CI Pipelines From Jenkins to Tekton instruction.

                                        This section provides the details on the EDP upgrade to v3.4.1. Explore the actions and requirements below.

                                        1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                          kubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_cdpipelines.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_stages.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_clusterkeycloakrealms.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_clusterkeycloaks.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakauthflows.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakclients.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakclientscopes.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealmcomponents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealmgroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealmidentityproviders.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealmrolebatches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealmroles.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealms.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealmusers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloaks.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.17.0/deploy-templates/crds/v2.edp.epam.com_templates.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.17.0/deploy-templates/crds/v2.edp.epam.com_codebasebranches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.17.0/deploy-templates/crds/v2.edp.epam.com_codebaseimagestreams.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.17.0/deploy-templates/crds/v2.edp.epam.com_codebases.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.17.0/deploy-templates/crds/v2.edp.epam.com_gitservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.17.0/deploy-templates/crds/v2.edp.epam.com_jiraservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.16.0/deploy-templates/crds/v2.edp.epam.com_gerrits.yaml\n
                                        2. Remove deprecated components:

                                          View: values.yaml

                                          perf-operator:\nenabled: false\nadmin-console-operator:\nenabled: false\njenkins-operator:\nenabled: false\n

                                        3. Since the values.yaml file structure has been modified, move the dockerRegistry subsection to the global section:

                                          The dockerRegistry value has been moved to the global section:

                                          global:\ndockerRegistry:\n# -- Define Image Registry that will to be used in Pipelines. Can be ecr (default), harbor\ntype: \"ecr\"\n# -- Docker Registry endpoint\nurl: \"<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com\"\n
                                        4. (Optional) To integrate EDP with Jira, rename the default secret name from epam-jira-user to jira-user. If Jira is already integrated, it will continue working.

                                          codebase-operator:\njira:\ncredentialName: \"jira-user\"\n
                                        5. (Optional) To switch to the Harbor registry, change the secret format for the external secret from kaniko-docker-config v3.3.0 to kaniko-docker-config v3.4.1:

                                          View: old format
                                           \"kaniko-docker-config\": {\"secret-string\"} //base64 format\n
                                          View: new format
                                          \"kaniko-docker-config\": {\n\"auths\" : {\n\"registry.com\" :\n{\"username\":\"<registry-username>\",\"password\":\"<registry-password>\",\"auth\":\"secret-string\"}\n}\n}\n
                                        6. To upgrade EDP to v3.4.1, run the following command:

                                          helm upgrade edp epamedp/edp-install -n edp --values values.yaml --version=3.4.1\n

                                          Note

                                          To verify the installation, it is possible to test the deployment before applying it to the cluster with the --dry-run flag: helm upgrade edp epamedp/edp-install -n edp --values values.yaml --version=3.4.1 --dry-run

                                        7. "},{"location":"operator-guide/upgrade-edp-3.5/","title":"Upgrade EDP v3.4 to 3.5","text":"

                                          Important

                                          We suggest making a backup of the EDP environment before starting the upgrade procedure.
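                                          For example, if Velero is installed in the cluster (see the Install Velero article), a namespace-level backup can be created with a command similar to the one below (the backup name and namespace are illustrative):

                                          velero backup create edp-pre-upgrade --include-namespaces edp\n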

                                          This section provides detailed instructions for upgrading EPAM Delivery Platform to version 3.5.3. Follow the steps and requirements outlined below:

                                          1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                            kubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.19.0/deploy-templates/crds/v2.edp.epam.com_gitservers.yaml\n

                                            Danger

                                            Codebase-operator v2.19.0 is not compatible with the previous versions. Please become familiar with the breaking change in Git Server Custom Resource Definition.

                                          2. Familiarize yourself with the updated file structure of the values.yaml file and adjust your values.yaml file accordingly:

                                            1. By default, the deployment of subcomponents such as edp-sonar-operator, edp-nexus-operator, edp-gerrit-operator, and keycloak-operator has been disabled. Set them back to true if they are needed (see the example below), or manually deploy external tools, such as SonarQube, Nexus, and Gerrit, and integrate them with the EPAM Delivery Platform.
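                                              A hedged values.yaml fragment for re-enabling these subcomponents (the key names are assumptions derived from the component names above; verify them against the edp-install chart for your version):

                                              edp-sonar-operator:\nenabled: true\nedp-nexus-operator:\nenabled: true\nedp-gerrit-operator:\nenabled: true\nkeycloak-operator:\nenabled: true\n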

                                            2. The default Git provider has been changed from Gerrit to GitHub:

                                              Old format:

                                              global:\ngitProvider: gerrit\ngerritSSHPort: \"22\"\n

                                              New format:

                                              global:\ngitProvider: github\n#gerritSSHPort: \"22\"\n
                                            3. The sonarUrl and nexusUrl parameters have been deprecated. All the URLs from external components are stored in integration secrets:

                                              global:\n# -- Optional parameter. Link to use custom sonarqube. Format: http://<service-name>.<sonarqube-namespace>:9000 or http://<ip-address>:9000\nsonarUrl: \"\"\n# -- Optional parameter. Link to use custom nexus. Format: http://<service-name>.<nexus-namespace>:8081 or http://<ip-address>:<port>\nnexusUrl: \"\"\n
                                            4. Keycloak integration has been moved from the global section to the sso section. Update the parameters accordingly:

                                              Old format:

                                              global:\n# -- Keycloak URL\nkeycloakUrl: https://keycloak.example.com\n# -- Administrators of your tenant\nadmins:\n- \"stub_user_one@example.com\"\n# -- Developers of your tenant\ndevelopers:\n- \"stub_user_one@example.com\"\n- \"stub_user_two@example.com\"\n

                                              New format:

                                              sso:\nenabled: true\n# -- Keycloak URL\nkeycloakUrl: https://keycloak.example.com\n# -- Administrators of your tenant\nadmins:\n- \"stub_user_one@example.com\"\n# -- Developers of your tenant\ndevelopers:\n- \"stub_user_one@example.com\"\n- \"stub_user_two@example.com\"\n
                                            5. (Optional) The default secret name for Jira integration has been changed from jira-user to ci-jira. Please adjust the secret name in the parameters accordingly:

                                              codebase-operator:\njira:\ncredentialName: \"ci-jira\"\n
                                          3. The secret naming and format have been refactored. Below are patterns of the changes for various components:

                                            SonarQubeNexusDependency-TrackDefectDojoJiraGitLabGitHub

                                            Old format:

                                            \"sonar-ciuser-token\": {\n\"username\": \"xxxxx\",\n\"secret\": \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\n}\n
                                            New format:
                                            \"ci-sonarqube\": {\n\"token\": \"xxxxxxxxxxxxxxxxxxxxxxx\",\n\"url\":\"https://sonar.example.com\"\n}\n

                                            Old format:

                                            \"nexus-ci-user\": {\n\"username\": \"xxxxx\",\n\"password\": \"xxxxxxxxxxxxxxxxxx\"\n}\n

                                            New format:

                                            \"ci-nexus\": {\n\"username\": \"xxxxx\",\n\"password\": \"xxxxx\",\n\"url\": \"http://nexus.example.com\"\n}\n

                                            Old format:

                                            \"ci-dependency-track\": {\n\"token\": \"xxxxxxxxxxxxxxxxxx\"\n}\n

                                            New format:

                                            \"ci-dependency-track\": {\n\"token\": \"xxxxxxxxxxxxxxxxxx\",\n\"url\": \"http://dependency-track.example.com\"}\n

                                            Old format:

                                            \"defectdojo-ciuser-token\": {\n\"token\": \"xxxxxxxxxxxxxxxxxx\"\n\"url\": \"http://defectdojo.example.com\"\n}\n

                                            New format:

                                            \"ci-defectdojo\": {\n\"token\": \"xxxxxxxxxxxxxxxxxx\",\n\"url\": \"http://defectdojo.example.com\"\n}\n

                                            Old format:

                                            \"jira-user\": {\n\"username\": \"xxxxx\",\n\"password\": \"xxxxx\"\n}\n

                                            New format:

                                            \"ci-jira\": {\n\"username\": \"xxxxx\",\n\"password\": \"xxxxx\"\n}\n

                                            Old format:

                                            \"gitlab\": {\n\"id_rsa\": \"xxxxxxxxxxxxxx\",\n\"token\": \"xxxxxxxxxxxxxx\",\n\"secretString\": \"xxxxxxxxxxxxxx\"\n}\n

                                            New format:

                                            \"ci-gitlab\": {\n\"id_rsa\": \"xxxxxxxxxxxxxx\",\n\"token\": \"xxxxxxxxxxxxxx\",\n\"secretString\": \"xxxxxxxxxxxxxx\"\n}\n

                                            Old format:

                                            \"github\": {\n\"id_rsa\": \"xxxxxxxxxxxxxx\",\n\"token\": \"xxxxxxxxxxxxxx\",\n\"secretString\": \"xxxxxxxxxxxxxx\"\n}\n

                                            New format:

                                            \"ci-github\": {\n\"id_rsa\": \"xxxxxxxxxxxxxx\",\n\"token\": \"xxxxxxxxxxxxxx\",\n\"secretString\": \"xxxxxxxxxxxxxx\"\n}\n

                                            The tables below illustrate the difference between the old and new format:

                                            Old format

                                            Secret Name             | Username | Password | Token | Secret | URL
                                            jira-user               | *        | *        |       |        |
                                            nexus-ci.user           | *        | *        |       |        |
                                            sonar-ciuser-token      | *        |          |       | *      |
                                            defectdojo-ciuser-token |          |          | *     |        | *
                                            ci-dependency-track     |          |          | *     |        |

                                            New format

                                            Secret Name         | Username | Password | Token | URL
                                            ci-jira             | *        | *        |       |
                                            ci-nexus            | *        | *        |       | *
                                            ci-sonarqube        |          |          | *     | *
                                            ci-defectdojo       |          |          | *     | *
                                            ci-dependency-track |          |          | *     | *
                                          4. To upgrade EDP to v3.5.3, run the following command:

                                            helm upgrade edp epamedp/edp-install -n edp --values values.yaml --version=3.5.3\n

                                            Note

                                            To verify the installation, it is possible to test the deployment before applying it to the cluster with the --dry-run flag: helm upgrade edp epamedp/edp-install -n edp --values values.yaml --version=3.5.3 --dry-run

                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/","title":"Upgrade Keycloak v17.0 to 19.0","text":"

                                          Starting from Keycloak v.18.x.x, the Keycloak server has been moved from the Wildfly (JBoss) Application Server to the Quarkus framework and is called Keycloak.X.

                                          There are two ways to upgrade Keycloak v.17.0.x-legacy to v.19.0.x on Kubernetes. Perform the steps described in the Prerequisites section of this tutorial, and then select a suitable upgrade strategy for your environment:

                                          • Upgrade Postgres database to a minor release v.11.17
                                          • Migrate Postgres database from Postgres v.11.x to v.14.5
                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#prerequisites","title":"Prerequisites","text":"

                                          Before upgrading Keycloak, please perform the steps below:

                                          1. Create a backup/snapshot of the Keycloak database volume. Locate the AWS volumeID and then create its snapshot on AWS:

                                            • Find the PVC name attached to the Postgres pod. It can be similar to data-keycloak-postgresql-0 if the Postgres StatefulSet name is keycloak-postgresql:

                                              kubectl get pods keycloak-postgresql-0 -n security -o jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}{\"\\n\"}'\n
                                            • Locate the PV volumeName in the data-keycloak-postgresql-0 Persistent Volume Claim:

                                              kubectl get pvc data-keycloak-postgresql-0 -n security -o jsonpath='{.spec.volumeName}{\"\\n\"}'\n
                                            • Get volumeID in the Persistent Volume:

                                              kubectl get pv ${pv_name} -n security -o jsonpath='{.spec.awsElasticBlockStore.volumeID}{\"\\n\"}'\n
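                                            With the volumeID in hand, the snapshot itself can be created, for example, with the AWS CLI (the description and region values are illustrative):

                                              aws ec2 create-snapshot --volume-id <VOLUME_ID> --description \"keycloak-postgresql pre-upgrade backup\" --region <AWS_REGION>\n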
                                          2. Add two additional keys, password and postgres-password, to the keycloak-postgresql secret in the Keycloak namespace.

                                            Note

                                            • The password key must have the same value as the postgresql-password key.
                                            • The postgres-password key must have the same value as the postgresql-postgres-password key.

                                            The latest chart for Keycloak.X does not have an option to override Postgres password and admin password keys in the secret, and it uses the Postgres defaults, therefore, a new secret scheme must be implemented:

                                            kubectl -n security edit secret keycloak-postgresql\n
                                            data:\npostgresql-password: XXXXXX\npostgresql-postgres-password: YYYYYY\npassword: XXXXXX\npostgres-password: YYYYYY\n
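                                            Alternatively, the two keys can be copied from the existing ones without manual base64 handling; a hedged one-liner, assuming the key names shown above:

                                            PASSWORD=$(kubectl -n security get secret keycloak-postgresql -o jsonpath='{.data.postgresql-password}')\nPOSTGRES_PASSWORD=$(kubectl -n security get secret keycloak-postgresql -o jsonpath='{.data.postgresql-postgres-password}')\nkubectl -n security patch secret keycloak-postgresql -p \"{\\\"data\\\":{\\\"password\\\":\\\"${PASSWORD}\\\",\\\"postgres-password\\\":\\\"${POSTGRES_PASSWORD}\\\"}}\"\n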
                                          3. Save Keycloak StatefulSet names, for example, keycloak and keycloak-postgresql. These names will be used in the new Helm deployments:

                                            $ kubectl get statefulset -n security\nNAME                  READY   AGE\nkeycloak              1/1     18h\nkeycloak-postgresql   1/1     18h\n
                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#upgrade-postgres-database-to-a-minor-release-v1117","title":"Upgrade Postgres Database to a Minor Release v.11.17","text":"

                                          To upgrade Keycloak by upgrading Postgres Database to a minor release v.11.17, perform the steps described in the Prerequisites section of this tutorial, and then perform the following steps:

                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#delete-keycloak-resources","title":"Delete Keycloak Resources","text":"
                                          1. Delete the Keycloak and Postgres StatefulSets:

                                            kubectl delete statefulset keycloak keycloak-postgresql -n security\n
                                          2. Delete the Keycloak Ingress object to prevent hostname duplication issues:

                                            kubectl delete ingress keycloak -n security\n
                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#upgrade-keycloak","title":"Upgrade Keycloak","text":"
                                          1. Make sure the Keycloak chart repository is added:

                                            helm repo add codecentric https://codecentric.github.io/helm-charts\nhelm repo update\n
                                          2. Create values for Keycloak:

                                            Note

                                            Since the Keycloak.X release, Keycloak and Postgres database charts are separated. Upgrade Keycloak, and then install the Postgres database.

                                            Note

                                            • nameOverride: \"keycloak\" sets the name of the Keycloak pod. It must be the same Keycloak name as in the previous StatefulSet.
                                            • Change Ingress host name to the Keycloak host name.
                                            • hostname: keycloak-postgresql is the hostname of the pod with the Postgres database that is the same as Postgres StatefulSet name, for example, keycloak-postgresql.
                                            • \"/opt/keycloak/bin/kc.sh start --auto-build\" was used in the legacy Keycloak version. However, it is no longer required in the new Keycloak version since it is deprecated and used by default.
                                            • Optionally, use the following command for applying the old Keycloak theme:

                                              bin/kc.sh start --features-disabled=admin2\n

                                            View: keycloak-values.yaml
                                            nameOverride: \"keycloak\"\n\nreplicas: 1\n\n# Deploy the latest verion\nimage:\ntag: \"19.0.1\"\n\n# start: create OpenShift realm which is required by EDP\nextraInitContainers: |\n- name: realm-provider\nimage: busybox\nimagePullPolicy: IfNotPresent\ncommand:\n- sh\nargs:\n- -c\n- |\necho '{\"realm\": \"openshift\",\"enabled\": true}' > /opt/keycloak/data/import/openshift.json\nvolumeMounts:\n- name: realm\nmountPath: /opt/keycloak/data/import\n\nextraVolumeMounts: |\n- name: realm\nmountPath: /opt/keycloak/data/import\n\nextraVolumes: |\n- name: realm\nemptyDir: {}\n\ncommand:\n- \"/opt/keycloak/bin/kc.sh\"\n- \"--verbose\"\n- \"start\"\n- \"--http-enabled=true\"\n- \"--http-port=8080\"\n- \"--hostname-strict=false\"\n- \"--hostname-strict-https=false\"\n- \"--spi-events-listener-jboss-logging-success-level=info\"\n- \"--spi-events-listener-jboss-logging-error-level=warn\"\n- \"--import-realm\"\n\nextraEnv: |\n- name: KC_PROXY\nvalue: \"passthrough\"\n- name: KEYCLOAK_ADMIN\nvalueFrom:\nsecretKeyRef:\nname: keycloak-admin-creds\nkey: username\n- name: KEYCLOAK_ADMIN_PASSWORD\nvalueFrom:\nsecretKeyRef:\nname: keycloak-admin-creds\nkey: password\n- name: JAVA_OPTS_APPEND\nvalue: >-\n-XX:+UseContainerSupport\n-XX:MaxRAMPercentage=50.0\n-Djava.awt.headless=true\n-Djgroups.dns.query={{ include \"keycloak.fullname\" . }}-headless\n\n# This block should be uncommented if you install Keycloak on Kubernetes\ningress:\nenabled: true\nannotations:\nkubernetes.io/ingress.class: nginx\ningress.kubernetes.io/affinity: cookie\nrules:\n- host: keycloak.<ROOT_DOMAIN>\npaths:\n- path: '{{ tpl .Values.http.relativePath $ | trimSuffix \"/\" }}/'\npathType: Prefix\n\n# This block should be uncommented if you set Keycloak to OpenShift and change the host field\n# route:\n#   enabled: false\n#   # Path for the Route\n#   path: '/'\n#   # Host name for the Route\n#   host: \"keycloak.<ROOT_DOMAIN>\"\n#   # TLS configuration\n#   tls:\n#     enabled: true\n\nresources:\nlimits:\nmemory: \"2048Mi\"\nrequests:\ncpu: \"50m\"\nmemory: \"512Mi\"\n\n# Check database readiness at startup\ndbchecker:\nenabled: true\n\ndatabase:\nvendor: postgres\nexistingSecret: keycloak-postgresql\nhostname: keycloak-postgresql\nport: 5432\nusername: admin\ndatabase: keycloak\n
                                          3. Upgrade the Keycloak Helm chart:

                                            Note

                                            • The Helm chart is substituted with the new Keycloak.X instance.
                                            • Change the namespace and the values file name if required.
                                            helm upgrade keycloak codecentric/keycloakx --version 1.6.0 --values keycloak-values.yaml -n security\n

                                            Note

                                            If there are error messages when upgrading via Helm, make sure that StatefulSets are removed. If they are removed and the error still persists, try to add the --force flag to the Helm command:

                                            helm upgrade keycloak codecentric/keycloakx --version 1.6.0 --values keycloak-values.yaml -n security --force\n
                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#install-postgres","title":"Install Postgres","text":"
                                          1. Add Bitnami chart repository and update Helm repos:

                                            helm repo add bitnami https://charts.bitnami.com/bitnami\nhelm repo update\n
                                          2. Create values for Postgres:

                                            Note

                                            • Postgres v.11 and Postgres v.14.5 are not compatible.
                                            • Postgres image will be upgraded to a minor release v.11.17.
                                            • fullnameOverride: \"keycloak-postgresql\" sets the name of the Postgres StatefulSet. It must be the same as in the previous StatefulSet.
                                            View: postgres-values.yaml
                                            fullnameOverride: \"keycloak-postgresql\"\n\n# PostgreSQL read only replica parameters\nreadReplicas:\n# Number of PostgreSQL read only replicas\nreplicaCount: 1\n\nglobal:\npostgresql:\nauth:\nusername: admin\nexistingSecret: keycloak-postgresql\nsecretKeys:\nadminPasswordKey: postgres-password\nuserPasswordKey: password\ndatabase: keycloak\n\nimage:\nregistry: docker.io\nrepository: bitnami/postgresql\ntag: 11.17.0-debian-11-r3\n\nauth:\nexistingSecret: keycloak-postgresql\nsecretKeys:\nadminPasswordKey: postgres-password\nuserPasswordKey: password\n\nprimary:\npersistence:\nenabled: true\nsize: 3Gi\n# If the StorageClass with reclaimPolicy: Retain is used, install an additional StorageClass before installing PostgreSQL\n# (the code is given below).\n# If the default StorageClass will be used - change \"gp2-retain\" to \"gp2\"\nstorageClass: \"gp2-retain\"\n
                                          3. Install the Postgres database chart:

                                            Note

                                            Change the namespace and the values file name if required.

                                            helm install postgresql bitnami/postgresql \\\n--version 11.7.6 \\\n--values postgres-values.yaml \\\n--namespace security\n
                                          4. Log in to Keycloak and check that everything works as expected.

                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#clean-and-analyze-database","title":"Clean and Analyze Database","text":"

                                          Optionally, run the vacuumdb application on the database, to recover space occupied by \"dead tuples\" in the tables, analyze the contents of database tables, and collect statistics for PostgreSQL query engine to improve performance:

                                          PGPASSWORD=\"${postgresql_postgres-password}\" vacuumdb --analyze --verbose -d keycloak -U postgres\n
                                          For all databases, run the following command:

                                          PGPASSWORD=\"${postgresql_postgres-password}\" vacuumdb --analyze --verbose --all -U postgres\n
                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#migrate-postgres-database-from-postgres-v11x-to-v145","title":"Migrate Postgres Database From Postgres v.11.x to v.14.5","text":"

                                          Info

                                          There is a Postgres database migration script at the end of this tutorial. Please read the section below before using the script.

                                          To upgrade Keycloak by migrating Postgres database from Postgres v.11.x to v.14.5, perform the steps described in the Prerequisites section of this tutorial, and then perform the following steps:

                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#export-postgres-databases","title":"Export Postgres Databases","text":"
                                          1. Log in to the current Keycloak Postgres pod and create a logical backup of all roles and databases using the pg_dumpall application. If there is no access to the Postgres Superuser, backup the Keycloak database with the pg_dump application:

                                            Note

                                            • The secret key postgresql-postgres-password is for the postgres Superuser and postgresql-password is for admin user. The admin user is indicated by default in the Postgres Helm chart. The admin user may not have enough permissions to dump all Postgres databases and roles, so the preferred option for exporting all objects is using the pg_dumpall tool with the postgres Superuser.
                                            • If the PGPASSWORD variable is not specified before using the pg_dumpall tool, you will be prompted to enter a password for each database during the export.
                                            • If the -l keycloak parameter is specified, pg_dumpall will connect to the keycloak database for dumping global objects and discovering what other databases should be dumped. By default, pg_dumpall will try to connect to postgres or template1 databases. This parameter is optional.
                                            • The pg_dumpall --clean option adds SQL commands to the dumped file for dropping databases before recreating them during import, as well as DROP commands for roles and tablespaces (pg_dump also has this option). If the --clean parameter is specified, connect to the postgres database initially during import via psql. The psql script will attempt to drop other databases immediately, and that will fail for the database you are connected to. This flag is optional, and it is not included into this tutorial.
                                            PGPASSWORD=\"${postgresql_postgres-password}\" pg_dumpall -h localhost -p 5432 -U postgres -l keycloak > /tmp/keycloak_wildfly_db_dump.sql\n

                                            Note

                                            If there is no working password for the postgres Superuser, try the admin user using the pg_dump tool to export the keycloak database without global roles:

                                            PGPASSWORD=\"${postgresql_password}\" pg_dump -h localhost -p 5432 -U admin -d keycloak > /tmp/keycloak_wildfly_db_dump.sql\n

                                            Info

                                            Double-check that the dumped file is not empty. It usually contains more than 4000 lines.

                                          2. Copy the file with the database dump to a local machine. Since tar may not be present in the pod and kubectl cp will not work without tar, use the following command:

                                            kubectl exec -n security ${postgresql_pod} -- cat /tmp/keycloak_wildfly_db_dump.sql  > keycloak_wildfly_db_dump.sql\n

                                            Note

                                            Please find below the alternative commands for exporting the database directly to the local machine, without first creating the dump file on the pod, for the postgres and admin users:

                                            kubectl exec -n security ${postgresql_pod} \"--\" sh -c \"PGPASSWORD='\"${postgresql_postgres-password}\"' pg_dumpall -h localhost -p 5432 -U postgres\" > keycloak_wildfly_db_dump.sql\nkubectl exec -n security ${postgresql_pod} \"--\" sh -c \"PGPASSWORD='\"${postgresql_password}\"' pg_dump -h localhost -p 5432 -U admin -d keycloak\" > keycloak_wildfly_db_dump.sql\n
                                          3. Delete the dumped file from the pod for security reasons:

                                            kubectl exec -n security ${postgresql_pod} \"--\" sh -c \"rm /tmp/keycloak_wildfly_db_dump.sql\"\n
                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#delete-keycloak-resources_1","title":"Delete Keycloak Resources","text":"
                                          1. Delete all previous Keycloak resources along with the Postgres database and keycloak StatefulSets, Ingress, and custom resources via Helm, or via the tool used for their deployment.

                                            helm list -n security\nhelm delete keycloak -n security\n

                                            Warning

                                            Don't delete the whole namespace. Keep the keycloak-postgresql and keycloak-admin-creds secrets.
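                                            To double-check that the required secrets are still in place after removing the Helm release:

                                            kubectl get secret keycloak-postgresql keycloak-admin-creds -n security\n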

                                          2. Delete the volume in AWS, from which a snapshot has been created. Then delete the PVC:

                                            kubectl delete pvc data-keycloak-postgresql-0 -n security\n
                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#install-postgres_1","title":"Install Postgres","text":"
                                          1. Add Bitnami chart repository and update Helm repos:

                                            helm repo add bitnami https://charts.bitnami.com/bitnami\nhelm repo update\n
                                          2. Create Postgres values:

                                            Note

                                            fullnameOverride: \"keycloak-postgresql\" sets the name of the Postgres StatefulSet. It must be the same as in the previous StatefulSet.

                                            View: postgres-values.yaml
                                            nameOverride: \"keycloak-postgresql\"\n\n# PostgreSQL read only replica parameters\nreadReplicas:\n# Number of PostgreSQL read only replicas\nreplicaCount: 1\n\nglobal:\npostgresql:\nauth:\nusername: admin\nexistingSecret: keycloak-postgresql\nsecretKeys:\nadminPasswordKey: postgres-password\nuserPasswordKey: password\ndatabase: keycloak\n\nauth:\nexistingSecret: keycloak-postgresql\nsecretKeys:\nadminPasswordKey: postgres-password\nuserPasswordKey: password\n\nprimary:\npersistence:\nenabled: true\nsize: 3Gi\n# If the StorageClass with reclaimPolicy: Retain is used, install an additional StorageClass before installing PostgreSQL\n# (the code is given below).\n# If the default StorageClass will be used - change \"gp2-retain\" to \"gp2\"\nstorageClass: \"gp2-retain\"\n
                                          3. Install the Postgres database:

                                            Note

                                            Change the namespace and the values file name if required.

                                            helm install postgresql bitnami/postgresql \\\n--version 11.7.6 \\\n--values postgres-values.yaml \\\n--namespace security\n
                                          4. Wait for the database to be ready.
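                                            For example, readiness can be checked with kubectl wait (assuming the default pod name keycloak-postgresql-0):

                                              kubectl wait --for=condition=Ready pod/keycloak-postgresql-0 -n security --timeout=300s\n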

                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#import-postgres-databases","title":"Import Postgres Databases","text":"
                                          1. Upload the database dump to the new Keycloak Postgres pod:

                                            cat keycloak_wildfly_db_dump.sql | kubectl exec -i -n security ${postgresql_pod} \"--\" sh -c \"cat > /tmp/keycloak_wildfly_db_dump.sql\"\n

                                            Warning

                                            Database import must be done before deploying Keycloak, because Keycloak writes its own data to the database during startup, which would make the import partially fail. If that happens, scale down the keycloak StatefulSet and try to drop the Keycloak database in the Postgres pod:

                                            dropdb -i -e keycloak -p 5432 -h localhost -U postgres\n

                                            If there still are some conflicting objects like roles, drop them via the DROP ROLE command.
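                                            For example, the existing roles can be listed and a conflicting one dropped from inside the Postgres pod (the role name below is a placeholder):

                                            psql -U postgres -c '\\du'\npsql -U postgres -c 'DROP ROLE IF EXISTS <role_name>;'\n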

                                            If the previous steps do not help, downscale the Keycloak and Postgres StatefulSets and delete the attached PVC (save the volumeID before removing), and delete the volume on AWS if using gp2-retain. In case of using gp2, the volume will be deleted automatically after removing PVC. After that, redeploy the Postgres database, so that the new PVC is automatically created.

                                          2. Import the SQL dump file to the Postgres database cluster:

                                            Info

                                            Since the databases were exported in the sql format, the psql tool will be used to restore (reload) them. pg_restore does not support this plain-text format.

                                            • If the entire Postgres database cluster was migrated with the postgres Superuser using pg_dumpall, use the import command without indicating the database:

                                              psql -U postgres -f /tmp/keycloak_wildfly_db_dump.sql\n
                                            • If the database was migrated with the admin user using pg_dump, the postgres Superuser still can be used to restore it, but, in this case, a database must be indicated:

                                              Warning

                                              If the database name was not indicated during the import for the file dumped with pg_dump, the psql tool will import this database to a default Postgres database called postgres.

                                              psql -U postgres -d keycloak -f /tmp/keycloak_wildfly_db_dump.sql\n
                                            • If the postgres Superuser is not accessible in the Postgres pod, run the command under the admin or any other user that has the database permissions. In this case, indicate the database as well:

                                              psql -U admin -d keycloak -f /tmp/keycloak_wildfly_db_dump.sql\n
                                          3. After a successful import, delete the dump file from the pod for security reasons:

                                            kubectl exec -n security ${postgresql_pod} \"--\" sh -c \"rm /tmp/keycloak_wildfly_db_dump.sql\"\n

                                            Note

                                            Please find below the alternative commands for importing the database from the local machine to the pod without storing the backup on a pod for postgres or admin users:

                                            cat \"keycloak_wildfly_db_dump.sql\" | kubectl exec -i -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" sh -c \"cat | PGPASSWORD='\"${postgresql_superuser_password}\"' psql -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\"\"\ncat \"keycloak_wildfly_db_dump.sql\" | kubectl exec -i -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" sh -c \"cat | PGPASSWORD='\"${postgresql_superuser_password}\"' psql -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\" -d \"${database_name}\"\"\ncat \"keycloak_wildfly_db_dump.sql\" | kubectl exec -i -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" sh -c \"cat | PGPASSWORD='\"${postgresql_admin_password}\"' psql -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\" -d \"${database_name}\"\"\n
                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#install-keycloak","title":"Install Keycloak","text":"
                                          1. Make sure the Keycloak chart repository is added:

                                            helm repo add codecentric https://codecentric.github.io/helm-charts\nhelm repo update\n
                                          2. Create Keycloak values:

                                            Note

                                            • nameOverride: \"keycloak\" sets the name of the Keycloak pod. It must be the same Keycloak name as in the previous StatefulSet.
                                            • Change Ingress host name to the Keycloak host name.
                                            • hostname: keycloak-postgresql is the hostname of the pod with the Postgres database that is the same as Postgres StatefulSet name, for example, keycloak-postgresql.
                                            • \"/opt/keycloak/bin/kc.sh start --auto-build\" was used in the legacy Keycloak version. However, it is no longer required in the new Keycloak version since it is deprecated and used by default.
                                            • Optionally, use the following command for applying the old Keycloak theme:

                                              bin/kc.sh start --features-disabled=admin2\n

                                            Info

                                            Automatic database migration will start after the Keycloak installation.

                                            View: keycloak-values.yaml
                                            nameOverride: \"keycloak\"\n\nreplicas: 1\n\n# Deploy the latest verion\nimage:\ntag: \"19.0.1\"\n\n# start: create OpenShift realm which is required by EDP\nextraInitContainers: |\n- name: realm-provider\nimage: busybox\nimagePullPolicy: IfNotPresent\ncommand:\n- sh\nargs:\n- -c\n- |\necho '{\"realm\": \"openshift\",\"enabled\": true}' > /opt/keycloak/data/import/openshift.json\nvolumeMounts:\n- name: realm\nmountPath: /opt/keycloak/data/import\n\nextraVolumeMounts: |\n- name: realm\nmountPath: /opt/keycloak/data/import\n\nextraVolumes: |\n- name: realm\nemptyDir: {}\n\ncommand:\n- \"/opt/keycloak/bin/kc.sh\"\n- \"--verbose\"\n- \"start\"\n- \"--http-enabled=true\"\n- \"--http-port=8080\"\n- \"--hostname-strict=false\"\n- \"--hostname-strict-https=false\"\n- \"--spi-events-listener-jboss-logging-success-level=info\"\n- \"--spi-events-listener-jboss-logging-error-level=warn\"\n- \"--import-realm\"\n\nextraEnv: |\n- name: KC_PROXY\nvalue: \"passthrough\"\n- name: KEYCLOAK_ADMIN\nvalueFrom:\nsecretKeyRef:\nname: keycloak-admin-creds\nkey: username\n- name: KEYCLOAK_ADMIN_PASSWORD\nvalueFrom:\nsecretKeyRef:\nname: keycloak-admin-creds\nkey: password\n- name: JAVA_OPTS_APPEND\nvalue: >-\n-XX:+UseContainerSupport\n-XX:MaxRAMPercentage=50.0\n-Djava.awt.headless=true\n-Djgroups.dns.query={{ include \"keycloak.fullname\" . }}-headless\n\n# This block should be uncommented if you install Keycloak on Kubernetes\ningress:\nenabled: true\nannotations:\nkubernetes.io/ingress.class: nginx\ningress.kubernetes.io/affinity: cookie\nrules:\n- host: keycloak.<ROOT_DOMAIN>\npaths:\n- path: '{{ tpl .Values.http.relativePath $ | trimSuffix \"/\" }}/'\npathType: Prefix\n\n# This block should be uncommented if you set Keycloak to OpenShift and change the host field\n# route:\n#   enabled: false\n#   # Path for the Route\n#   path: '/'\n#   # Host name for the Route\n#   host: \"keycloak.<ROOT_DOMAIN>\"\n#   # TLS configuration\n#   tls:\n#     enabled: true\n\nresources:\nlimits:\nmemory: \"2048Mi\"\nrequests:\ncpu: \"50m\"\nmemory: \"512Mi\"\n\n# Check database readiness at startup\ndbchecker:\nenabled: true\n\ndatabase:\nvendor: postgres\nexistingSecret: keycloak-postgresql\nhostname: keycloak-postgresql\nport: 5432\nusername: admin\ndatabase: keycloak\n
                                          3. Deploy Keycloak:

                                            Note

                                            Change the namespace and the values file name if required.

                                            helm install keycloak codecentric/keycloakx --version 1.6.0 --values keycloak-values.yaml -n security\n
                                          4. Log in to Keycloak and check if everything has been imported correctly.

                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#clean-and-analyze-database_1","title":"Clean and Analyze Database","text":"

                                          Optionally, run the vacuumdb application on the database, to analyze the contents of database tables and collect statistics for the Postgres query optimizer:

                                          PGPASSWORD=\"${postgresql_postgres-password}\" vacuumdb --analyze --verbose -d keycloak -U postgres\n
                                          For all databases, run the following command:

                                          PGPASSWORD=\"${postgresql_postgres-password}\" vacuumdb --analyze --verbose --all -U postgres\n
                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#postgres-database-migration-script","title":"Postgres Database Migration Script","text":"

                                          Info

                                          Please read the Migrate Postgres Database From Postgres v.11.x to v.14.5 section of this tutorial before using the script.

                                          Note

                                          • The kubectl tool is required for using this script.
                                          • This script will likely work for any other Postgres database besides Keycloak after some adjustments. It uses the pg_dump, pg_dumpall, psql, and vacuumdb commands under the hood.

                                          The following script can be used for exporting and importing Postgres databases as well as optimizing them with the vacuumdb application. Please examine the code and make the adjustments if required.

                                          • By default, the following command exports Keycloak Postgres databases from a Kubernetes pod to a local machine:

                                            ./script.sh\n

                                            After running the command, please follow the prompt.

                                          • To import a database backup to a newly created Postgres Kubernetes pod, pass a database dump sql file to the script:
                                            ./script.sh path-to/db_dump.sql\n
                                          • The -h flag prints help, and -c|-v runs the vacuumdb garbage collector and analyzer.
                                          View: keycloak_db_migration.sh
                                          #!/bin/bash\n\n# set -x\n\ndb_migration_help(){\necho \"Keycloak Postgres database migration\"\necho\necho \"Usage:\"\necho \"------------------------------------------\"\necho \"Export Keycloak Postgres database from pod\"\necho \"Run without parameters:\"\necho \"      $0\"\necho \"------------------------------------------\"\necho \"Import Keycloak Postgres database to pod\"\necho \"Pass filename to script:\"\necho \"      $0 path/to/db_dump.sql\"\necho \"------------------------------------------\"\necho \"Additional options: \"\necho \"      $0 [OPTIONS...]\"\necho \"Options:\"\necho \"h     Print Help.\"\necho \"c|v   Run garbage collector and analyzer.\"\n}\n\nkeycloak_ns(){\nprintf '%s\\n' 'Enter keycloak namespace: '\nread -r keycloak_namespace\n\n    if [ -z \"${keycloak_namespace}\" ]; then\necho \"Don't skip namespace\"\nexit 1\nfi\n}\n\npostgres_pod(){\nprintf '%s\\n' 'Enter postgres pod name: '\nread -r postgres_pod_name\n\n    if [ -z \"${postgres_pod_name}\" ]; then\necho \"Don't skip pod name\"\nexit 1\nfi\n}\n\npostgres_user(){\nprintf '%s\\n' 'Enter postgres username: '\nprintf '%s' \"Skip to use [postgres] superuser: \"\nread -r postgres_username\n\n    if [ -z \"${postgres_username}\" ]; then\npostgres_username='postgres'\nfi\n}\n\npgdb_host_info(){\ndatabase_name='keycloak'\ndb_host='localhost'\ndb_port='5432'\n}\n\npostgresql_admin_pass(){\npostgresql_password='POSTGRES_PASSWORD'\npostgresql_admin_password=\"$(kubectl exec -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"printenv ${postgresql_password}\")\"\n}\n\npostgresql_su_pass(){\npostgresql_postgres_password='POSTGRES_POSTGRES_PASSWORD'\npostgresql_superuser_password=\"$(kubectl exec -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"printenv ${postgresql_postgres_password}\")\"\n\nif [ -z \"${postgresql_superuser_password}\" ]; then\necho \"SuperUser password variable does not exist. Using user password instead...\"\npostgresql_admin_pass\n        postgresql_superuser_password=\"${postgresql_admin_password}\"\nfi\n}\n\nkeycloak_pgdb_export(){\ncurrent_cluster=\"$(kubectl config current-context | tr -dc '[:alnum:]-')\"\nexported_db_name=\"keycloak_db_dump_${current_cluster}_${keycloak_namespace}_${postgres_username}_$(date +\"%Y%m%d%H%M\").sql\"\n\nif [ \"${postgres_username}\" == 'postgres' ]; then\n# call a function to get a pass for postgres user\npostgresql_su_pass\n        kubectl exec -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"PGPASSWORD='\"${postgresql_superuser_password}\"' pg_dumpall -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\"\" > \"${exported_db_name}\"\nelse\n# call a function to get a pass for admin user\npostgresql_admin_pass\n        kubectl exec -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"PGPASSWORD='\"${postgresql_admin_password}\"' pg_dump -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\" -d \"${database_name}\"\" > \"${exported_db_name}\"\nfi\n\nseparate_lines=\"---------------\"\n\nif [ ! -s \"${exported_db_name}\" ]; then\nrm -f \"${exported_db_name}\"\necho \"${separate_lines}\"\necho \"Something went wrong. The database dump file is empty and was not saved.\"\nelse\necho \"${separate_lines}\"\ngrep 'Dumped' \"${exported_db_name}\" | sort -u\n        echo \"Database has been exported to $(pwd)/${exported_db_name}\"\nfi\n}\n\nkeycloak_pgdb_import(){\necho \"Preparing Import\"\necho \"----------------\"\n\nif [ ! 
-f \"$1\" ]; then\necho \"The file $1 does not exist.\"\nexit 1\nfi\n\nkeycloak_ns\n    postgres_pod\n    postgres_user\n    pgdb_host_info\n\n    if [ \"${postgres_username}\" == 'postgres' ]; then\n# restore full backup with all databases and roles as superuser or a single database\npostgresql_su_pass\n        if [ -n \"$(cat \"$1\" | grep 'CREATE ROLE')\" ]; then\ncat \"$1\" | kubectl exec -i -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"cat | PGPASSWORD='\"${postgresql_superuser_password}\"' psql -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\"\"\nelse\ncat \"$1\" | kubectl exec -i -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"cat | PGPASSWORD='\"${postgresql_superuser_password}\"' psql -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\" -d \"${database_name}\"\"\nfi\nelse\n# restore a single database\npostgresql_admin_pass\n        cat \"$1\" | kubectl exec -i -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"cat | PGPASSWORD='\"${postgresql_admin_password}\"' psql -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\" -d \"${database_name}\"\"\nfi\n}\n\nvacuum_pgdb(){\necho \"Preparing garbage collector and analyzer\"\necho \"----------------------------------------\"\n\nkeycloak_ns\n    postgres_pod\n    postgres_user\n    pgdb_host_info\n\n    if [ \"${postgres_username}\" == 'postgres' ]; then\npostgresql_su_pass\n        kubectl exec -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"PGPASSWORD='\"${postgresql_superuser_password}\"' vacuumdb --analyze --all -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\"\"\nelse\npostgresql_admin_pass\n        kubectl exec -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"PGPASSWORD='\"${postgresql_admin_password}\"' vacuumdb --analyze -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\" -d \"${database_name}\"\"\nfi\n}\n\nwhile [ \"$#\" -eq 1 ]; do\ncase \"$1\" in\n-h | --help)\ndb_migration_help\n            exit 0\n;;\n-c | --clean | -v | --vacuum)\nvacuum_pgdb\n            exit 0\n;;\n--)\nbreak\n;;\n-*)\necho \"Invalid option '$1'. Use -h|--help to see the valid options\" >&2\nexit 1\n;;\n*)\nkeycloak_pgdb_import \"$1\"\nexit 0\n;;\nesac\nshift\ndone\n\nif [ \"$#\" -gt 1 ]; then\necho \"Please pass a single file to the script\"\nexit 1\nfi\n\necho \"Preparing Export\"\necho \"----------------\"\nkeycloak_ns\npostgres_pod\npostgres_user\npgdb_host_info\nkeycloak_pgdb_export\n
                                          "},{"location":"operator-guide/upgrade-keycloak-19.0/#related-articles","title":"Related Articles","text":"
                                          • Deploy OKD 4.10 Cluster
                                          "},{"location":"operator-guide/vcs/","title":"Overview","text":"

                                          The Version Control Systems (VCS) section is dedicated to delivering comprehensive information on VCS within the EPAM Delivery Platform. This section comprises detailed descriptions of all the deployment strategies, along with valuable recommendations for their optimal usage, and the list of supported VCS, facilitating seamless integration with EDP.

                                          "},{"location":"operator-guide/vcs/#supported-vcs","title":"Supported VCS","text":"

                                          EDP can be integrated with the following Version Control Systems:

                                          • Gerrit (used by default);
                                          • GitHub;
                                          • GitLab.

                                          Note

                                          So far, EDP doesn't support authorization mechanisms in the upstream GitLab.

                                          "},{"location":"operator-guide/vcs/#vcs-deployment-strategies","title":"VCS Deployment Strategies","text":"

                                          EDP offers the following strategies to work with repositories:

                                          • Create from template \u2013 creates a project from a template according to the application language, build tool, and framework selected while creating the application. This strategy is recommended for projects that start developing their applications from scratch.

                                          Note

                                          Under the hood, all the built-in application templates for the supported languages, build tools, and frameworks are stored in our public GitHub repository.

                                          • Import project - enables working with a repository located in an already added Git server. This scenario is preferred when users already have an application stored in their own pre-configured repository and intend to continue working with it while also utilizing EDP.

                                          Note

                                          To use the Import project strategy, make sure to configure it according to the Integrate GitHub/GitLab in Jenkins or Integrate GitHub/GitLab in Tekton page. The Import project strategy is not applicable for Gerrit. Also, the Empty project option cannot be selected when using the Import project strategy while creating an application, since it is implied that you already have a ready-to-work application in your own repository, whereas the \"Empty project\" option creates a repository but doesn't put anything in it.

                                          • Clone project \u2013 clones the indicated repository into EPAM Delivery Platform. In this scenario, the application repository is forked from the original application repository to EDP. Since EDP doesn't support multiple VCS integration for now, this strategy is recommended when the user has several applications located in several repositories.
                                          "},{"location":"operator-guide/vcs/#related-articles","title":"Related Articles","text":"
                                          • Add Git Server
                                          • Add Application
                                          • Integrate GitHub/GitLab in Jenkins
                                          • Integrate GitHub/GitLab in Tekton
                                          "},{"location":"operator-guide/velero-irsa/","title":"IAM Roles for Velero Service Accounts","text":"

                                          Note

                                          Make sure that IRSA is enabled and amazon-eks-pod-identity-webhook is deployed according to the Associate IAM Roles With Service Accounts documentation.

                                          Velero AWS plugin requires access to AWS resources. Follow the steps below to create a required role:

                                          1. Create AWS IAM Policy \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039VELERO_NAMESPACE\u203aVelero_policy\":

                                            {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Effect\": \"Allow\",\n\"Action\": [\n\"ec2:DescribeVolumes\",\n\"ec2:DescribeSnapshots\",\n\"ec2:CreateTags\",\n\"ec2:CreateVolume\",\n\"ec2:CreateSnapshot\",\n\"ec2:DeleteSnapshot\"\n],\n\"Resource\": \"*\"\n},\n{\n\"Effect\": \"Allow\",\n\"Action\": [\n\"s3:GetObject\",\n\"s3:DeleteObject\",\n\"s3:PutObject\",\n\"s3:AbortMultipartUpload\",\n\"s3:ListMultipartUploadParts\"\n],\n\"Resource\": [\n\"arn:aws:s3:::velero-*/*\"\n]\n},\n{\n\"Effect\": \"Allow\",\n\"Action\": [\n\"s3:ListBucket\"\n],\n\"Resource\": [\n\"arn:aws:s3:::velero-*\"\n]\n}\n]\n}\n
                                          2. Create AWS IAM Role \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039VELERO_NAMESPACE\u203aVelero\" with trust relationships:

                                            {\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Principal\": {\n        \"Federated\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>\"\n      },\n      \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n      \"Condition\": {\n        \"StringEquals\": {\n          \"<OIDC_PROVIDER>:sub\": \"system:serviceaccount:<VELERO_NAMESPACE>:edp-velero\"\n       }\n     }\n   }\n ]\n}\n
                                          3. Attach the \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039VELERO_NAMESPACE\u203aVelero_policy\" policy to the \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039VELERO_NAMESPACE\u203aVelero\" role.

                                          4. Make sure that Amazon S3 bucket with name velero-\u2039CLUSTER_NAME\u203a exists.
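                                            The bucket can be verified or, if missing, created with the AWS CLI, for example:

                                            aws s3api head-bucket --bucket velero-<CLUSTER_NAME> || aws s3 mb s3://velero-<CLUSTER_NAME> --region <AWS_REGION>\n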

                                          5. Provide the key-value pair eks.amazonaws.com/role-arn: \"arn:aws:iam:::role/AWSIRSA\u2039CLUSTER_NAME\u203a\u2039VELERO_NAMESPACE\u203aVelero\" in the serviceAccount.server.annotations parameter in values.yaml during the Velero Installation."},{"location":"operator-guide/velero-irsa/#related-articles","title":"Related Articles","text":"

                                            • Associate IAM Roles With Service Accounts
                                            • Install Velero
                                            "},{"location":"operator-guide/waf-tf-configuration/","title":"Configure AWS WAF With Terraform","text":"

                                            This page describes how to configure AWS WAF using Terraform in order to secure exposed traffic and prevent Host Header vulnerabilities.

                                            "},{"location":"operator-guide/waf-tf-configuration/#prerequisites","title":"Prerequisites","text":"

                                            To follow the instruction, check the following prerequisites:

                                            1. Deployed infrastructure includes Nginx Ingress Controller
                                            2. Deployed services for testing
                                            3. Separate and exposed AWS ALB
                                            4. terraform 0.14.10
                                            5. hashicorp/aws = 4.8.0
                                            "},{"location":"operator-guide/waf-tf-configuration/#solution-overview","title":"Solution Overview","text":"

                                            The solution includes two parts:

                                            1. Prerequisites (mostly the left part of the scheme) - AWS ALB, Compute Resources (EC2, EKS, etc.).
                                            2. WAF configuration (the right part of the scheme).

                                            The WAF ACL resource is the main resource used for the configuration; the default web ACL action is Block.

                                            Overview WAF Solution

                                            The ACL includes three managed AWS rules that secure the exposed traffic:

                                            • AWS-AWSManagedRulesCommonRuleSet
                                            • AWS-AWSManagedRulesLinuxRuleSet
                                            • AWS-AWSManagedRulesKnownBadInputsRuleSet

                                            AWS provides many rules, such as baseline and use-case-specific rules; for details, please refer to the Baseline rule groups.

                                            There is also the PreventHostInjections rule that prevents Host Header vulnerabilities. This rule includes one statement declaring that the Host Header must match the Regex Pattern Set; only in this case is the request passed.

                                            The Regex Pattern Set is another resource that helps to organize regexes; in fact, it is a set of regexes. All regexes added to a single set are combined with the OR operator, i.e. when exposing several URLs, add the corresponding regular expression to the set and refer to the set in the rule.

                                            "},{"location":"operator-guide/waf-tf-configuration/#waf-acl-configuration","title":"WAF ACL Configuration","text":"

                                            To create the Regex Pattern Set, inspect the following code:

                                            resource \"aws_wafv2_regex_pattern_set\" \"common\" {\nname  = \"Common\"\nscope = \"REGIONAL\"\n\nregular_expression {\nregex_string = \"^.*(some-url).*((.edp-epam)+)\\\\.com$\"\n}\n\n  #  Add here additional regular expressions for other endpoints, they are merging with OR operator, e.g.\n\n  /*\n   regular_expression {\n      regex_string = \"^.*(jenkins).*((.edp-epam)+)\\\\.com$\"\n   }\n   */\n\ntags = var.tags\n}\n

                                             It includes 'regex_string' that matches, for example, the URL some-url.edp-epam.com. In addition, it is possible to add other links to the same resource using additional regular_expression elements.

                                            There is the Terraform code for the aws_wafv2_web_acl resource:

                                            resource \"aws_wafv2_web_acl\" \"external\" {\nname  = \"ExternalACL\"\nscope = \"REGIONAL\"\n\ndefault_action {\nblock {}\n}\n\nrule {\nname     = \"AWS-AWSManagedRulesCommonRuleSet\"\npriority = 1\n\noverride_action {\nnone {}\n}\n\nstatement {\nmanaged_rule_group_statement {\nname        = \"AWSManagedRulesCommonRuleSet\"\nvendor_name = \"AWS\"\n}\n}\n\nvisibility_config {\ncloudwatch_metrics_enabled = true\nmetric_name                = \"AWS-AWSManagedRulesCommonRuleSet\"\nsampled_requests_enabled   = true\n}\n}\n\nrule {\nname     = \"AWS-AWSManagedRulesLinuxRuleSet\"\npriority = 2\n\nstatement {\nmanaged_rule_group_statement {\nname        = \"AWSManagedRulesLinuxRuleSet\"\nvendor_name = \"AWS\"\n}\n}\n\noverride_action {\nnone {}\n}\n\nvisibility_config {\ncloudwatch_metrics_enabled = true\nmetric_name                = \"AWS-AWSManagedRulesLinuxRuleSet\"\nsampled_requests_enabled   = true\n}\n}\n\nrule {\nname     = \"AWS-AWSManagedRulesKnownBadInputsRuleSet\"\npriority = 3\n\noverride_action {\nnone {}\n}\n\nstatement {\nmanaged_rule_group_statement {\nname        = \"AWSManagedRulesKnownBadInputsRuleSet\"\nvendor_name = \"AWS\"\n}\n}\n\nvisibility_config {\ncloudwatch_metrics_enabled = true\nmetric_name                = \"AWS-AWSManagedRulesKnownBadInputsRuleSet\"\nsampled_requests_enabled   = true\n}\n}\n\nrule {\nname     = \"PreventHostInjections\"\npriority = 0\n\nstatement {\nregex_pattern_set_reference_statement {\narn = aws_wafv2_regex_pattern_set.common.arn\n\nfield_to_match {\nsingle_header {\nname = \"host\"\n}\n}\n\ntext_transformation {\npriority = 0\ntype     = \"NONE\"\n}\n}\n}\n\naction {\nallow {}\n}\n\nvisibility_config {\ncloudwatch_metrics_enabled = true\nmetric_name                = \"PreventHostInjections\"\nsampled_requests_enabled   = true\n}\n}\n\nvisibility_config {\ncloudwatch_metrics_enabled = true\nmetric_name                = \"ExternalACL\"\nsampled_requests_enabled   = true\n}\n\ntags = var.tags\n}\n

                                             As mentioned previously, the ACL includes three managed AWS rule groups, with sampling and CloudWatch metrics enabled in their visibility configuration. The 'PreventHostInjections' custom rule refers to the created pattern set, inspects the Host Header, and sets the 'Action' to 'Allow' when it matches.

                                            "},{"location":"operator-guide/waf-tf-configuration/#associate-aws-resource","title":"Associate AWS Resource","text":"

                                             For the created ACL to take effect, it is necessary to associate an AWS resource with it, in this case the AWS ALB:

                                            resource \"aws_wafv2_web_acl_association\" \"waf_alb\" {\nresource_arn = aws_lb.<aws_alb_for_waf>.arn\nweb_acl_arn  = aws_wafv2_web_acl.external.arn\n}\n

                                            Note

                                             The AWS ALB can be created within the scope of this Terraform code or created beforehand. When creating an ALB to expose links, the ALB should have a security group that allows external traffic.

                                             Once the ALB is associated with the WAF ACL, direct the traffic to the ALB with a Route53 CNAME record:

                                            module \"some_url_exposure\" {\nsource  = \"terraform-aws-modules/route53/aws//modules/records\"\nversion = \"2.0.0\"\n\nzone_name = \"edp-epam.com\"\n\nrecords = [\n{\nname    = \"some-url\"\ntype    = \"CNAME\"\nttl     = 300\nrecords = [aws_lb.<aws_alb_for_waf>.dns_name]\n}\n]\n}\n

                                            In the sample above, the module is used, but it is also possible to use a Terraform resource.

                                            "},{"location":"use-cases/","title":"Overview","text":"

                                             The Use Cases section provides useful recommendations on how to operate the EPAM Delivery Platform tools and manage the custom resources. Get acquainted with the descriptions of technical scenarios and solutions.

                                            • Scaffold and Deploy FastAPI Application
                                            • Deploy Application With Custom Build Tool/Framework
                                            • Secured Secrets Management for Application Deployment
                                            • Autotest as a Quality Gate
                                            "},{"location":"use-cases/application-scaffolding/","title":"Scaffold and Deploy FastAPI Application","text":""},{"location":"use-cases/application-scaffolding/#overview","title":"Overview","text":"

                                            This use case describes the creation and deployment of a FastAPI application to enable a developer to quickly generate a functional code structure for a FastAPI web application (with basic read functionality), customize it to meet specific requirements, and deploy it to a development environment. By using a scaffolding tool and a standardized process for code review, testing and deployment, developers can reduce the time and effort required to build and deploy a new application while improving the quality and reliability of the resulting code. Ultimately, the goal is to enable the development team to release new features and applications more quickly and efficiently while maintaining high code quality and reliability.

                                            "},{"location":"use-cases/application-scaffolding/#roles","title":"Roles","text":"

                                            This documentation is tailored for the Developers and Team Leads.

                                            "},{"location":"use-cases/application-scaffolding/#goals","title":"Goals","text":"
                                            • Create a new FastAPI application quickly.
                                            • Deploy the initial code to the DEV environment.
                                            • Check CI pipelines.
                                            • Perform code review.
                                             • Deliver an update by deploying the new version.
                                            "},{"location":"use-cases/application-scaffolding/#preconditions","title":"Preconditions","text":"
                                            • EDP instance is configured with Gerrit, Tekton and Argo CD.
                                            • Developer has access to the EDP instances using the Single-Sign-On approach.
                                            • Developer has the Administrator role (to perform merge in Gerrit).
                                            "},{"location":"use-cases/application-scaffolding/#scenario","title":"Scenario","text":"

                                            To scaffold and deploy FastAPI Application, follow the steps below.

                                            "},{"location":"use-cases/application-scaffolding/#scaffold-the-new-fastapi-application","title":"Scaffold the New FastAPI Application","text":"
                                            1. Open EDP Portal URL. Use the Sign-In option.

                                              Logging screen

                                            2. Ensure Namespace value in the User Settings tab points to the namespace with the EDP installation.

                                              Settings button

                                            3. Create the new Codebase with the Application type using the Create strategy. To do this, open EDP tab.

                                              Cluster overview

                                            4. Select the Components Section under the EDP tab and push the create + button.

                                              Components tab

                                            5. Select the Application Codebase type because we are going to deliver our application as a container and deploy it inside the Kubernetes cluster. Choose the Create strategy to scaffold our application from the template provided by the EDP and press the Proceed button.

                                              Step codebase info

                                            6. On the Application Info tab, define the following values and press the Proceed button:

                                              • Application name: fastapi-demo
                                              • Default branch: main
                                              • Application code language: Python
                                              • Language version/framework: FastAPI
                                              • Build tool: Python

                                              Application info

                                             7. On the Advanced Settings tab, define the below values and push the Apply button:

                                              • CI tool: Tekton
                                              • Codebase versioning type: edp
                                              • Start version from: 0.0.1 and SNAPSHOT

                                              Advanced settings

                                            8. Check the application status. It should be green:

                                              Application status

                                            "},{"location":"use-cases/application-scaffolding/#deploy-the-application-to-the-development-environment","title":"Deploy the Application to the Development Environment","text":"

                                            This section describes the application deployment approach from the latest branch commit. The general steps are:

                                            • Build the initial version (generated from the template) of the application from the last commit of the main branch.
                                            • Create a CD Pipeline to establish continuous delivery to the development environment.
                                            • Deploy the initial version to the development env.

                                            To succeed with the steps above, follow the instructions below:

                                            1. Build Container from the latest branch commit. To build the initial version of the application's main branch, go to the fastapi-demo application -> branches -> main and select the Build menu.

                                              Application building

                                            2. Build pipeline for the fastapi-demo application starts.

                                              Pipeline building

                                             3. Track the Pipeline's status in the Tekton Dashboard by clicking the fastapi-demo-main-build-lb57m application link.

                                              Console logs

                                            4. Ensure that Build Pipeline was successfully completed.

                                             5. Create CD Pipeline. To enable application deployment, create a CD Pipeline with a single environment - Development (with the name dev).

                                             6. Go to EDP Portal -> EDP -> CD Pipelines tab and push the + button to create a pipeline. In the Create CD Pipeline dialog, define the below values:

                                              • Pipeline tab:

                                                • Pipeline name: mypipe
                                                • Deployment type: Container, since we are going to deploy containers

                                                Pipeline tab with parameters

                                              • Applications tab. Add fastapi-demo application, select main branch, and leave Promote in pipeline unchecked:

                                                Applications tab with parameters

                                              • Stages tab. Add the dev stage with the values below:

                                                • Stage name: dev
                                                • Description: Development Environment
                                                • Trigger type: Manual. We plan to deploy applications to this environment manually
                                                • Quality gate type: Manual
                                                • Step name: approve
                                                • Push the Apply button

                                                Stages tab with parameters

                                            7. Deploy the initial version of the application to the development environment:

                                              • Open CD Pipeline with the name mypipe.
                                              • Select the dev stage from the Stages tab.
                                              • In the Image stream version select version 0.0.1-SNAPSHOT.1 and push the Deploy button.

                                              CD Pipeline deploy

                                            "},{"location":"use-cases/application-scaffolding/#check-the-application-status","title":"Check the Application Status","text":"

                                            To ensure the application is deployed successfully, follow the steps below:

                                            1. Ensure application status is Healthy and Synced, and the Deployed version points to 0.0.1-SNAPSHOT.1:

                                              Pipeline health status

                                             2. Check that the selected version of the container is deployed on the dev environment. ${EDP_ENV} is the EDP namespace name:

                                              # Check the deployment status of fastapi-demo application\n$ kubectl get deployments -n ${EDP_ENV}-mypipe-dev\nNAME                 READY   UP-TO-DATE   AVAILABLE   AGE\nfastapi-demo-dl1ft   1/1     1            1           30m\n\n# Check the image version of fastapi-demo application\n$ kubectl get pods -o jsonpath=\"{.items[*].spec.containers[*].image}\" -n ${EDP_ENV}-mypipe-dev\n012345678901.dkr.ecr.eu-central-1.amazonaws.com/${EDP_ENV}/fastapi-demo:0.0.1-SNAPSHOT.1\n
                                            "},{"location":"use-cases/application-scaffolding/#deliver-new-code","title":"Deliver New Code","text":"

                                             This section describes the Code Review process for new code. We need to deploy a new version of our fastapi-demo application that deploys an Ingress object to expose the API outside the Kubernetes cluster.

                                             Perform the steps below to merge new code (Pull Request) that passes the Code Review flow. We use the Gerrit UI, but the same actions can be performed using the command line and the git tool:

                                            1. Login to Gerrit UI, select fastapi-demo project, and create a change request.

                                            2. Browse Gerrit Repositories and select fastapi-demo project.

                                              Browse Gerrit repositories

                                            3. In the Commands section of the project, push the Create Change button.

                                              Create Change request

                                            4. In the Create Change dialog, provide the branch main and the Description (commit message):

                                              Enable ingress for application\n\nCloses: #xyz\n
                                            5. Push the Create button.

                                              Create Change

                                            6. Push the Edit button of the merge request and add deployment-templates/values.yaml for modification.

                                              Update values.yaml file

                                             7. Review the deployment-templates/values.yaml file and change the ingress.enabled flag from false to true, as shown in the sketch below. Then push the SAVE & PUBLISH button. As soon as you get Verified +1 from CI, you are ready for review: push the Mark as Active button.

                                              Review Change
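
                                               A minimal sketch of the resulting fragment in deployment-templates/values.yaml (only the flag changed in this step is shown; the other values generated by the template are omitted):

                                               ingress:\n  enabled: true\n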

                                            8. You can always check your pipelines status from:

                                              • Gerrit UI.

                                              Pipeline Status Gerrit

                                              • EDP Portal.

                                              Pipeline Status EDP Portal

                                             9. With no Code Review Pipeline issues, set Code-Review +2 for the patchset and push the Submit button. Your code is then merged to the main branch, triggering the Build Pipeline. The Build Pipeline produces the new artifact version 0.0.1-SNAPSHOT.2, which is available for deployment.

                                              Gerrit Code Review screen

                                             10. Deliver the New Version to the Environment. Before deploying the new version, check the ingress object in the dev namespace:

                                              $ kubectl get ingress -n ${EDP_ENV}-mypipe-dev\nNo resources found in ${EDP_ENV}-mypipe-dev namespace.\n

                                              No ingress object exists as expected.

                                             11. Deploy the new version 0.0.1-SNAPSHOT.2, which has the ingress object in place. Since we use the Manual deployment approach, we perform the version upgrade by hand.

                                              • Go to the CD Pipelines section of the EDP Portal, select mypipe pipeline and choose dev stage.
                                              • In the Image stream version select the new version 0.0.1-SNAPSHOT.2 and push the Update button.
                                              • Check that the new version is deployed: application status is Healthy and Synced, and the Deployed version points to 0.0.1-SNAPSHOT.2.

                                              CD Pipeline Deploy New Version

                                            12. Check that the new version with Ingress is deployed:

                                              # Check the version of the deployed image\nkubectl get pods -o jsonpath=\"{.items[*].spec.containers[*].image}\" -n ${EDP_ENV}-mypipe-dev\n012345678901.dkr.ecr.eu-central-1.amazonaws.com/edp-delivery-tekton-dev/fastapi-demo:0.0.1-SNAPSHOT.2\n\n# Check Ingress object\nkubectl get ingress -n ${EDP_ENV}-mypipe-dev\nNAME                 CLASS    HOSTS                            ADDRESS          PORTS   AGE\nfastapi-demo-ko1zs   <none>   fastapi-demo-ko1zs-example.com   12.123.123.123   80      115s\n\n# Check application external URL\ncurl https://your-hostname-appeared-in-hosts-column-above.example.com/\n{\"Hello\":\"World\"}\n
                                            "},{"location":"use-cases/application-scaffolding/#related-articles","title":"Related Articles","text":"
                                            • Use Cases
                                            "},{"location":"use-cases/autotest-as-quality-gate/","title":"Autotest as a Quality Gate","text":"

                                             This use case describes the flow of adding an autotest as a quality gate to a newly created CD pipeline with a selected build version of an application to be promoted. The purpose of autotests is to check whether the application meets predefined criteria for stability and functionality, ensuring that only reliable versions are promoted. The promotion feature allows users to implement complicated testing, thus improving application stability.

                                            "},{"location":"use-cases/autotest-as-quality-gate/#roles","title":"Roles","text":"

                                            This documentation is tailored for the Developers and Quality Assurance specialists.

                                            "},{"location":"use-cases/autotest-as-quality-gate/#goals","title":"Goals","text":"
                                            • Create several applications and autotests quickly.
                                            • Create a pipeline for Continuous Deployment.
                                            • Perform testing.
                                             • Update the delivery by deploying the new version.
                                            "},{"location":"use-cases/autotest-as-quality-gate/#preconditions","title":"Preconditions","text":"
                                            • EDP instance is configured with Gerrit, Tekton and Argo CD.
                                            • Developer has access to the EDP instances using the Single-Sign-On approach.
                                            • Developer has the Administrator role (to perform merge in Gerrit).
                                            "},{"location":"use-cases/autotest-as-quality-gate/#create-applications","title":"Create Applications","text":"

                                            To implement autotests as Quality Gates, follow the steps below:

                                            1. Ensure the namespace is specified in the cluster settings. Click the Settings icon in the top right corner and select Cluster settings:

                                              Cluster settings

                                            2. Enter the name of the default namespace, then enter your default namespace in the Allowed namespaces field and click the + button. You can also add other namespaces to the Allowed namespaces:

                                              Specify namespace

                                            3. Create several applications using the Create strategy. Navigate to the EDP tab, choose Components, click the + button:

                                              Add component

                                            4. Select Application and Create from template:

                                              Create new component menu

                                              Note

                                              Please refer to the Add Application section for details.

                                            5. On the Codebase info tab, define the following values and press the Proceed button:

                                              • Git server: gerrit
                                              • Git repo relative path: js-application
                                              • Component name: js-application
                                              • Description: js application
                                              • Application code language: JavaScript
                                              • Language version/Provider: Vue
                                              • Build tool: NPM

                                              Codebase info tab

                                            6. On the Advanced settings tab, define the below values and push the Apply button:

                                              • Default branch: main
                                              • Codebase versioning type: default

                                              Advanced settings tab

                                            7. Repeat the procedure twice to create the go-application and python-application applications. These applications will have the following parameters:

                                              go-application:

                                              • Git server: gerrit
                                              • Git repo relative path: go-application
                                              • Component name: go-application
                                              • Description: go application
                                              • Application code language: Go
                                              • Language version/Provider: Gin
                                              • Build tool: Go
                                              • Default branch: main
                                              • Codebase versioning type: default

                                              python-application:

                                              • Git server: gerrit
                                              • Git repo relative path: python-application
                                              • Component name: python-application
                                              • Description: python application
                                              • Application code language: Python
                                              • Language version/Provider: FastAPI
                                              • Build tool: Python
                                              • Default branch: main
                                              • Codebase versioning type: default
                                             8. In the Components tab, click one of the application names to enter the application menu:

                                              Components list

                                            9. Click the three dots (\u22ee) button, select Build:

                                              Application menu

                                            10. Click the down arrow (v) to observe and wait for the application to be built:

                                              Application building

                                            11. Click the application run name to watch the building logs in Tekton:

                                              Tekton pipeline run

                                            12. Wait till the build is successful:

                                              Successful build

                                            13. Repeat steps 8-12 for the rest of the applications.

                                            "},{"location":"use-cases/autotest-as-quality-gate/#create-autotests","title":"Create Autotests","text":"

                                             The steps below describe how to create autotests in EDP:

                                             1. Create a couple of autotests using the Clone strategy. Navigate to the EDP tab, choose Components, click on the + button. Select Autotest and Clone project:

                                              Add autotest

                                              Note

                                              Please refer to the Add Autotest section for details.

                                            2. On the Codebase info tab, define the following values and press the Proceed button:

                                              • Repository URL: https://github.com/SergK/autotests.git
                                              • Git server: gerrit
                                              • Git repo relative path: demo-autotest-gradle
                                              • Component name: demo-autotest-gradle
                                              • Description: demo-autotest-gradle
                                              • Autotest code language: Java
                                              • Language version/framework: Java11
                                              • Build tool: Gradle
                                              • Autotest report framework: Allure

                                              Codebase info tab for autotests

                                            3. On the Advanced settings tab, leave the settings as is and click the Apply button:

                                              Advanced settings tab for autotests

                                            4. Repeat the steps 1-3 to create one more autotest with the parameters below:

                                              • Repository URL: https://github.com/Rolika4/autotests.git
                                              • Git server: gerrit
                                              • Git repo relative path: demo-autotest-maven
                                              • Component name: demo-autotest-maven
                                              • Description: demo-autotest-maven
                                              • Autotest code language: Java
                                              • Language version/framework: Java11
                                              • Build tool: Maven
                                              • Autotest report framework: Allure
                                            "},{"location":"use-cases/autotest-as-quality-gate/#create-cd-pipeline","title":"Create CD Pipeline","text":"

                                             Now that applications and autotests are created, create a pipeline for them by following the steps below:

                                            1. Navigate to the CD Pipelines tab and click the + button:

                                              CD pipelines tab

                                            2. On the Pipeline tab, in the Pipeline name field, enter demo-pipeline:

                                              Pipeline tab

                                             3. On the Applications tab, add all three applications, specify the main branch for all of them, and check Promote in pipeline for the Go and JavaScript applications:

                                              Applications tab

                                            4. On the Stages tab, click the Add stage button to open the Create stage menu:

                                              Stages tab

                                            5. In the Create stage menu, specify the following parameters and click Apply:

                                              • Cluster: In cluster
                                              • Stage name: dev
                                              • Description: dev
                                              • Trigger type: manual
                                              • Quality gate type: Autotests
                                              • Step name: dev
                                              • Autotest: demo-autotest-gradle
                                              • Autotest branch: main

                                              Create stage menu

                                            6. After the dev stage is added, click Apply:

                                              Create stage menu

                                            7. After the pipeline is created, click its name to open the pipeline details page:

                                              Enter pipeline

                                            8. In the pipeline details page, click the Create button to create a new stage:

                                              Create a new stage

                                            9. In the Create stage menu, specify the following parameters:

                                              • Cluster: In cluster
                                              • Stage name: sit
                                              • Description: sit
                                              • Trigger type: manual
                                              • Quality gate type: Autotests
                                              • Step name: dev
                                              • Autotest: demo-autotest-maven
                                              • Autotest branch: main
                                            "},{"location":"use-cases/autotest-as-quality-gate/#run-autotests","title":"Run Autotests","text":"

                                            After the CD pipeline is created, deploy applications and run autotests by following the steps below:

                                            1. Click the dev stage name to expand its details, specify image versions for each of the applications in the Image stream version field and click Deploy:

                                              Deploy applications

                                            2. Once applications are built, scroll down to Quality Gates and click Promote:

                                              Promote in pipeline

                                             3. Once the promotion procedure is finished, the promoted applications become available in the Sit stage, and you will be able to select image stream versions for them. The non-promoted application stays grey in the stage and cannot be deployed:

                                              Sit stage

                                            "},{"location":"use-cases/autotest-as-quality-gate/#related-articles","title":"Related Articles","text":"
                                            • Add Application
                                            • Add Autotest
                                            • Add CD Pipeline
                                            • Add Quality Gate
                                            "},{"location":"use-cases/external-secrets/","title":"Secured Secrets Management for Application Deployment","text":"

                                             This Use Case demonstrates how to securely manage sensitive data, such as passwords, API keys, and other credentials, that are consumed by the application during development or at runtime in production. The approach involves storing sensitive data in an external secret store located in a \"vault\" namespace (it can be Vault, AWS Secret Store, or any other provider). The process implies transmitting confidential information from the vault namespace to the deployment namespace in order to establish a connection to a database.

                                            "},{"location":"use-cases/external-secrets/#roles","title":"Roles","text":"

                                            This documentation is tailored for the Developers and Team Leads.

                                            "},{"location":"use-cases/external-secrets/#goals","title":"Goals","text":"
                                            • Make confidential information usage secure in the deployment environment.
                                            "},{"location":"use-cases/external-secrets/#preconditions","title":"Preconditions","text":"
                                            • EDP instance is configured with Gerrit, Tekton and Argo CD;
                                            • External Secrets is installed;
                                            • Developer has access to the EDP instances using the Single-Sign-On approach;
                                            • Developer has the Administrator role (to perform merge in Gerrit);
                                            • Developer has access to manage secrets in demo-vault namespace.
                                            "},{"location":"use-cases/external-secrets/#scenario","title":"Scenario","text":"

                                             To use the External Secrets approach in EDP, follow the steps below:

                                            "},{"location":"use-cases/external-secrets/#add-application","title":"Add Application","text":"

                                             To begin, you will need an application. Here are the steps to create it:

                                            1. Open EDP Portal URL. Use the Sign-In option:

                                              Logging screen

                                            2. In the top right corner, enter the Cluster settings and ensure that both Default namespace and Allowed namespace are set:

                                              Cluster settings

                                            3. Create the new Codebase with the Application type using the Create strategy. To do this, click the EDP tab:

                                              Cluster overview

                                            4. Select the Components section under the EDP tab and push the + button:

                                              Components tab

                                             5. Select the Application Codebase type because we are going to deliver our application as a container and deploy it inside the Kubernetes cluster. Select the Create strategy to use the predefined template:

                                              Step codebase info

                                            6. On the Application Info tab, define the following values and press the Proceed button:

                                              • Application name: es-usage
                                              • Default branch: master
                                              • Application code language: Java
                                              • Language version/framework: Java 17
                                              • Build tool: Maven

                                              Step application info

                                            7. On the Advanced Settings tab, define the below values and push the Apply button:

                                              • CI tool: Tekton
                                              • Codebase versioning type: default

                                              Step application info

                                            8. Check the application status. It should be green:

                                              Application status

                                            "},{"location":"use-cases/external-secrets/#create-cd-pipeline","title":"Create CD Pipeline","text":"

                                            This section outlines the process of establishing a CD pipeline within EDP Portal. There are two fundamental steps in this procedure:

                                            • Build the application from the last commit of the master branch;
                                            • Create a CD Pipeline to establish continuous delivery to the SIT environment.

                                            To succeed with the steps above, follow the instructions below:

                                            1. Create CD Pipeline. To enable application deployment, create a CD Pipeline with a single environment - System Integration Testing (SIT for short). Select the CD Pipelines section under the EDP tab and push the + button:

                                              CD-Pipeline tab

                                            2. On the Pipeline tab, define the following values and press the Proceed button:

                                              • Pipeline name: deploy
                                              • Deployment type: Container

                                              Pipeline tab

                                            3. On the Applications tab, add es-usage application, select master branch, leave Promote in pipeline unchecked and press the Proceed button:

                                              Pipeline tab

                                            4. On the Stage tab, add the sit stage with the values below and push the Apply button:

                                              • Stage name: sit
                                              • Description: System integration testing
                                              • Trigger type: Manual. We plan to deploy applications to this environment manually
                                              • Quality gate type: Manual
                                              • Step name: approve

                                                Stage tab

                                            "},{"location":"use-cases/external-secrets/#configure-rbac-for-external-secret-store","title":"Configure RBAC for External Secret Store","text":"

                                            Note

                                             In this scenario, three namespaces are used: demo, which is the namespace where EDP is deployed; demo-vault, which is the vault where developers store secrets; and demo-deploy-sit, which is the namespace used for deploying the application. The target namespace name for deploying the application is formed with the pattern: <edp_namespace>-<cd_pipeline_name>-<stage_name>.

                                             To make the system function properly, it is imperative to create the following resources:

                                            1. Create namespace demo-vault to store secrets:

                                               kubectl create namespace demo-vault\n
                                            2. Create Secret:

                                              apiVersion: v1\nkind: Secret\nmetadata:\nname: mongo\nnamespace: demo-vault\nstringData:\npassword: pass\nusername: user\ntype: Opaque\n
                                            3. Create Role to access the secret:

                                              apiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\nnamespace: demo-vault\nname: external-secret-store\nrules:\n- apiGroups: [\"\"]\nresources:\n- secrets\nverbs:\n- get\n- list\n- watch\n- apiGroups:\n- authorization.k8s.io\nresources:\n- selfsubjectrulesreviews\nverbs:\n- create\n
                                            4. Create RoleBinding:

                                              apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\nname: eso-from-edp\nnamespace: demo-vault\nsubjects:\n- kind: ServiceAccount\nname: secret-manager\nnamespace: demo-deploy-sit\nroleRef:\napiGroup: rbac.authorization.k8s.io\nkind: Role\nname: external-secret-store\n
                                            "},{"location":"use-cases/external-secrets/#add-external-secret-to-helm-chart","title":"Add External Secret to Helm Chart","text":"

                                             Now that RBAC is configured properly, it is time to add the external secrets templates to the application Helm chart. Follow the instructions provided below:

                                            1. Navigate to EDP Portal -> EDP -> Overview, and push the Gerrit link:

                                              Overview page

                                            2. Log in to Gerrit UI, select Repositories and select es-usage project:

                                              Browse Gerrit repositories

                                            3. In the Commands section of the project, push the Create Change button:

                                              Create Change request

                                            4. In the Create Change dialog, provide the branch master and fill in the Description (commit message) field and push the Create button:

                                              Add external secrets templates\n

                                              Create Change

                                            5. Push the Edit button of the merge request and then the ADD/OPEN/UPLOAD button and add files:

                                              Add files to repository

                                               Once the file menu is opened, edit each of the files listed below and click SAVE after each edit:

                                              1. deploy-templates/templates/sa.yaml:

                                                apiVersion: v1\nkind: ServiceAccount\nmetadata:\nname: secret-manager\nnamespace: demo-deploy-sit\n
                                              2. deploy-templates/templates/secret-store.yaml:

                                                apiVersion: external-secrets.io/v1beta1\nkind: SecretStore\nmetadata:\nname: demo\nnamespace: demo-deploy-sit\nspec:\nprovider:\nkubernetes:\nremoteNamespace: demo-vault\nauth:\nserviceAccount:\nname: secret-manager\nserver:\ncaProvider:\ntype: ConfigMap\nname: kube-root-ca.crt\nkey: ca.crt\n
                                              3. deploy-templates/templates/external-secret.yaml:

                                                apiVersion: external-secrets.io/v1beta1\nkind: ExternalSecret\nmetadata:\nname: mongo                            # target secret name\nnamespace: demo-deploy-sit    # target namespace\nspec:\nrefreshInterval: 1h\nsecretStoreRef:\nkind: SecretStore\nname: demo\ndata:\n- secretKey: username                   # target value property\nremoteRef:\nkey: mongo                          # remote secret key\nproperty: username                  # value will be fetched from this field\n- secretKey: password                   # target value property\nremoteRef:\nkey: mongo                          # remote secret key\nproperty: password                  # value will be fetched from this field\n
                                               4. deploy-templates/templates/deployment.yaml. Add the environment variables for MongoDB that consume the secret to the existing deployment configuration:

                                                          env:\n- name: MONGO_USERNAME\nvalueFrom:\nsecretKeyRef:\nname: mongo\nkey: username\n- name: MONGO_PASSWORD\nvalueFrom:\nsecretKeyRef:\nname: mongo\nkey: password\n
                                            6. Push the Publish Edit button.

                                             7. As soon as the review pipeline is finished and you get Verified +1 from CI, you are ready for review. Click Mark as Active -> Code-Review +2 -> Submit:

                                              Apply change

                                            "},{"location":"use-cases/external-secrets/#deploy-application","title":"Deploy Application","text":"

                                            Deploy the application by following the steps provided below:

                                             1. When the build pipeline is finished, navigate to EDP Portal -> EDP -> CD-Pipeline and select the deploy pipeline.

                                            2. Deploy the initial version of the application to the SIT environment:

                                              • Select the sit stage from the Stages tab;
                                              • In the Image stream version, select latest version and push the Deploy button.
                                            3. Ensure application status is Healthy and Synced:

                                              CD-Pipeline status

                                            "},{"location":"use-cases/external-secrets/#check-application-status","title":"Check Application Status","text":"

                                            To ensure the application is deployed successfully, do the following:

                                            1. Check that the resources are deployed:

                                              kubectl get secretstore -n demo-deploy-sit\nNAME                           AGE     STATUS   READY\ndemo                           5m57s   Valid    True\n
                                              kubectl get externalsecret -n demo-deploy-sit\nNAME    STORE                          REFRESH INTERVAL   STATUS         READY\nmongo   demo                           1h                 SecretSynced   True\n
                                            2. In the top right corner, enter the Cluster settings and add demo-deploy-sit to the Allowed namespace.

                                             3. Navigate to EDP Portal -> Configuration -> Secrets and ensure that the secret was created, as illustrated below:

                                              Secrets
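
                                               For reference, the synchronized secret should look roughly like this (a sketch reconstructed from the ExternalSecret above; labels and annotations added by the External Secrets Operator are omitted):

                                               apiVersion: v1\nkind: Secret\nmetadata:\n  name: mongo                  # target secret created by the ExternalSecret\n  namespace: demo-deploy-sit\ntype: Opaque\ndata:\n  username: dXNlcg==           # base64-encoded \"user\"\n  password: cGFzcw==           # base64-encoded \"pass\"\n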

                                             4. Navigate to EDP Portal -> Workloads -> Pods and select the deployed application:

                                              Pod information

                                            "},{"location":"use-cases/external-secrets/#related-articles","title":"Related Articles","text":"
                                            • Use Cases
                                            • Add Application
                                            • CD Pipeline
                                            "},{"location":"use-cases/tekton-custom-pipelines/","title":"Deploy Application With Custom Build Tool/Framework","text":"

                                             This Use Case describes the procedure of adding custom Tekton libraries that include pipelines with tasks. In addition, the process of modifying custom pipelines and tasks is covered as well.

                                            "},{"location":"use-cases/tekton-custom-pipelines/#goals","title":"Goals","text":"
                                            • Add custom Tekton pipeline library;
                                            • Modify existing pipelines and tasks in a custom Tekton library.
                                            "},{"location":"use-cases/tekton-custom-pipelines/#preconditions","title":"Preconditions","text":"
                                            • EDP instance with Gerrit and Tekton inside is configured;
                                            • Developer has access to the EDP instances using the Single-Sign-On approach;
                                            • Developer has the Administrator role to perform merge in Gerrit.
                                            "},{"location":"use-cases/tekton-custom-pipelines/#scenario","title":"Scenario","text":"

                                            Note

                                            This case is based on our predefined repository and application. Your case may be different.

                                            To create and then modify a custom Tekton library, please follow the steps below:

                                            "},{"location":"use-cases/tekton-custom-pipelines/#add-custom-application-to-edp","title":"Add Custom Application to EDP","text":"
                                            1. Open EDP Portal URL. Use the Sign-In option:

                                              Logging screen

                                            2. In the top right corner, enter the Cluster settings and ensure that both Default namespace and Allowed namespace are set:

                                              Cluster settings

                                            3. Create the new Codebase with the Application type using the Clone strategy. To do this, click the EDP tab:

                                              Cluster overview

                                            4. Select the Components section under the EDP tab and push the create + button:

                                              Components tab

                                             5. Select the Application codebase type because it is meant to be delivered as a container and deployed inside the Kubernetes cluster. Choose the Clone strategy and this example repository:

                                              Step codebase info

                                            6. In the Application Info tab, define the following values and click the Proceed button:

                                              • Application name: tekton-hello-world
                                              • Default branch: master
                                              • Application code language: Other
                                              • Language version/framework: go
                                              • Build tool: shell

                                              Application info

                                              Note

                                              These application details are required to match the Pipeline name gerrit-shell-go-app-build-default.

                                              The PipelineRun name is formed with the help of TriggerTemplates in pipelines-library so the Pipeline name should correspond to the following structure:

                                                pipelineRef:\n    name: gerrit-$(tt.params.buildtool)-$(tt.params.framework)-$(tt.params.cbtype)-build-$(tt.params.versioning-type)\n
                                              The PipelineRun is created as soon as Gerrit (or, if configured, GitHub, GitLab) sends a payload during Merge Request events.
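
                                               For the application created above, these parameters would presumably resolve to the Pipeline name mentioned in this note (a sketch, assuming buildtool=shell, framework=go, cbtype=app and versioning-type=default):

                                                 pipelineRef:\n  name: gerrit-shell-go-app-build-default\n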

                                             7. In the Advanced Settings tab, define the below values and click the Apply button:

                                              • CI tool: Tekton
                                              • Codebase versioning type: default
                                              • Leave Specify the pattern to validate a commit message empty.

                                              Advanced settings

                                            8. Check the application status. It should be green:

                                              Application status

                                              Now that the application is created successfully, proceed to adding the Tekton library.

                                            "},{"location":"use-cases/tekton-custom-pipelines/#add-tekton-library","title":"Add Tekton Library","text":"
                                            1. Select the Components section under the EDP tab and push the create + button:

                                              Components tab

                                            2. Create a new Codebase with the Library type using the Create strategy:

                                              Step codebase info

                                              Note

                                              The EDP Create strategy will automatically pull the code for the Tekton Helm application from here.

                                            3. In the Application Info tab, define the following values and click the Proceed button:

                                              • Application name: custom-tekton-chart
                                              • Default branch: master
                                              • Application code language: Helm
                                              • Language version/framework: Pipeline
                                              • Build tool: Helm

                                              Step codebase info

                                             4. In the Advanced Settings tab, define the below values and click the Apply button:

                                              • CI tool: Tekton
                                              • Codebase versioning type: default
                                              • Leave Specify the pattern to validate a commit message empty.

                                              Advanced settings

                                            5. Check the codebase status:

                                              Codebase status

                                            "},{"location":"use-cases/tekton-custom-pipelines/#modify-tekton-pipeline","title":"Modify Tekton Pipeline","text":"

                                            Note

                                            Our recommendation is to avoid modifying the default Tekton resources. Instead, we suggest creating and modifying your own custom Tekton library.

                                            Now that the Tekton Helm library is created, it is time to clone, modify and then apply it to the Kubernetes cluster.

                                            1. Generate SSH key to work with Gerrit repositories:

                                              ssh-keygen -t ed25519 -C \"your_email@example.com\"\n
                                            2. Log into Gerrit UI.

                                            3. Go to Gerrit Settings -> SSH keys, paste your generated public SSH key to the New SSH key field and click ADD NEW SSH KEY:

                                              Gerrit settings Gerrit settings

                                            4. Browse Gerrit Repositories and select custom-tekton-chart project:

                                              Browse Gerrit repositories

                                            5. Clone the repository with SSH using Clone with commit-msg hook command:

                                              Gerrit clone

                                              Note

                                              In case of strict firewall configurations, please use the HTTP protocol to pull, and configure the HTTP credentials in Gerrit.

                                            6. Examine the repository structure. By default, it should look as follows:

                                              custom-tekton-chart\n  \u251c\u2500\u2500 Chart.yaml\n  \u251c\u2500\u2500 chart_schema.yaml\n  \u251c\u2500\u2500 ct.yaml\n  \u251c\u2500\u2500 lintconf.yaml\n  \u251c\u2500\u2500 templates\n  \u2502\u00a0\u00a0 \u251c\u2500\u2500 pipelines\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 hello-world\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gerrit-build-default.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gerrit-build-edp.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gerrit-build-lib-default.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gerrit-build-lib-edp.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gerrit-review-lib.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gerrit-review.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 github-build-default.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 github-build-edp.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 github-build-lib-default.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 github-build-lib-edp.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 github-review-lib.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 github-review.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gitlab-build-default.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gitlab-build-edp.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gitlab-build-lib-default.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gitlab-build-lib-edp.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gitlab-review-lib.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u2514\u2500\u2500 gitlab-review.yaml\n  \u2502\u00a0\u00a0 \u2514\u2500\u2500 tasks\n  \u2502\u00a0\u00a0     \u2514\u2500\u2500 task-hello-world.yaml\n  \u2514\u2500\u2500 values.yaml\n

                                              Note

                                              Change the values in the values.yaml file.

                                              The gitProvider parameter is the Git hosting provider, Gerrit in this example. The same approach can be applied to GitHub or GitLab.

                                              The dnsWildCard parameter is the cluster DNS address.

                                              The gerritSSHPort parameter is the SSH port of the Gerrit service on Kubernetes. Check the Gerrit port in the global section of your EDP installation.

                                              Note

                                              Our custom Helm chart includes edp-tekton-common-library dependencies in the Chart.yaml file. This library allows using our predefined code snippets.
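
                                              For reference, a minimal sketch of how this dependency might be declared in the Chart.yaml file is shown below; the chart version and repository URL are placeholders and should be taken from your EDP setup:

                                              apiVersion: v2\nname: custom-tekton-chart\ndescription: Custom EDP Tekton pipelines and tasks\ntype: application\nversion: 0.1.0\ndependencies:\n  - name: edp-tekton-common-library\n    version: \"x.y.z\"                          # placeholder, pin to the version used by your EDP release\n    repository: \"<edp-helm-charts-repo-url>\"  # placeholder repository URL\n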

                                              Here is an example of the filled-in values.yaml file:

                                              nameOverride: \"\"\nfullnameOverride: \"\"\n\nglobal:\n  gitProvider: gerrit\n  dnsWildCard: \"example.domain.com\"\n  gerritSSHPort: \"30009\"\n
                                            7. Modify and add tasks or pipelines.

                                              As an example, let's assume that we need to add the helm-lint pipeline task to the review pipeline. To implement this, insert the code below into the gerrit-review.yaml file underneath the hello task:

                                                  - name: hello\n      taskRef:\n        name: hello\n      runAfter:\n      - init-values\n      params:\n      - name: BASE_IMAGE\n        value: \"$(params.shell-image-version)\"\n      - name: username\n        value: \"$(params.username)\"\n      workspaces:\n        - name: source\n          workspace: shared-workspace\n\n    - name: helm-lint\n      taskRef:\n        kind: Task\n        name: helm-lint\n      runAfter:\n        - hello\n      params:\n        - name: EXTRA_COMMANDS\n          value: |\n            ct lint --validate-maintainers=false --charts deploy-templates/\n      workspaces:\n        - name: source\n          workspace: shared-workspace\n

                                              Note

                                              The helm-lint task references the default pipeline-library Helm chart, which is applied to the cluster during EDP installation.

                                              The runAfter parameter shows that this Pipeline task will be run after the hello pipeline task.

                                            8. Build Helm dependencies in the custom chart:

                                              helm dependency update .\n
                                            9. Ensure that the chart is valid and all the indentations are fine:

                                              helm lint .\n

                                              To validate if the values are substituted in the templates correctly, render the templated YAML files with the values using the following command. It generates and displays all the manifest files with the substituted values:

                                              helm template .\n
                                            10. Install the custom chart with the command below. You can also use the --dry-run flag to simulate the chart installation and catch possible errors:

                                              helm upgrade --install edp-tekton-custom . -n edp --dry-run\n
                                              helm upgrade --install edp-tekton-custom . -n edp\n
                                            11. Check the created pipelines and tasks in the cluster:

                                              kubectl get tasks -n edp\nkubectl get pipelines -n edp\n
                                            12. Commit and push the modified Tekton Helm chart to Gerrit:

                                              git add .\ngit commit -m \"Add Helm chart testing for go-shell application\"\ngit push origin HEAD:refs/for/master\n
                                            13. Check the Gerrit code review for the custom Helm chart pipelines repository in Tekton:

                                              Gerrit code review status

                                            14. Go to Changes -> Open, click CODE-REVIEW and submit the merge request:

                                              Gerrit merge Gerrit merge

                                            15. Check the build Pipeline status for the custom Pipelines Helm chart repository in Tekton:

                                              Tekton status

                                            "},{"location":"use-cases/tekton-custom-pipelines/#create-application-merge-request","title":"Create Application Merge Request","text":"

                                            Since we applied the Tekton library to the Kubernetes cluster in the previous step, let's test the review and build pipelines for our tekton-hello-world application.

                                            Perform the below steps to merge new code (Merge Request) that passes the Code Review flow. For the steps below, we use Gerrit UI but the same actions can be performed using the command line and Git tool:

                                            1. Log into Gerrit UI, select tekton-hello-world project, and create a change request.

                                            2. Browse Gerrit Repositories and select tekton-hello-world project:

                                              Browse Gerrit repositories

                                            3. Clone the tekton-hello-world repository to make the necessary changes or click the Create Change button in the Commands section of the project to make changes via Gerrit GUI:

                                              Create Change request

                                            4. In the Create Change dialog, provide the branch master, write some text in the Description (commit message) and click the Create button:

                                              Create Change

                                            5. Click the Edit button of the merge request and add deployment-templates/values.yaml to modify it and change the ingress.enabled flag from false to true:

                                              Update values.yaml file Update values.yaml file

                                            6. Check the Review Pipeline status. The helm-lint pipeline task should be displayed there:

                                              Review Change

                                            7. Review the deployment-templates/values.yaml file and push the SAVE & PUBLISH button. As soon as you get Verified +1 from CI bot, the change is ready for review. Click the Mark as Active and Code-review buttons:

                                              Review Change

                                            8. Click the Submit button. Then, your code is merged to the main branch, triggering the Build Pipeline.

                                              Review Change

                                              Note

                                              If the build pipeline has push steps added and configured, it will produce a new artifact version, which will be available for deployment in EDP Portal.

                                            9. Check the pipelines in the Tekton dashboard:

                                              Tekton custom pipelines Tekton custom pipelines

                                            What happens under the hood: 1) Gerrit sends a payload to the Tekton EventListener during a Merge Request event; 2) the EventListener catches it with the help of an Interceptor; 3) the TriggerTemplate creates a PipelineRun.

                                            The detailed scheme is shown below:

                                            graph LR;\n    A[Gerrit events] --> |Payload| B(Tekton EventListener) --> C(Tekton Interceptor CEL filter) --> D(TriggerTemplate)--> E(PipelineRun)

                                            This chart uses the core of the common-library and pipelines-library, with custom resources on top of them.

                                            "},{"location":"use-cases/tekton-custom-pipelines/#related-articles","title":"Related Articles","text":"
                                            • Tekton Overview
                                            • Add Application using EDP Portal
                                            "},{"location":"user-guide/","title":"Overview","text":"

                                            The EDP Portal user guide is intended for developers and provides details on working with EDP Portal, different codebase types, and EDP CI/CD flow.

                                            "},{"location":"user-guide/#edp-portal","title":"EDP Portal","text":"

                                            EDP Portal is a central management tool in the EDP ecosystem that provides the ability to define pipelines, project resources and new technologies in a simple way. Using EDP Portal enables you to manage business entities:

                                            • Create such codebase types as Applications, Libraries, Autotests and Infrastructures;
                                            • Create/Update CD Pipelines;
                                            • Add external Git servers and Clusters.

                                            Overview page

                                            • Navigation bar \u2013 consists of the following sections: Overview, Marketplace, Components, CD Pipelines, and Configuration.
                                            • Top panel bar \u2013 contains documentation link, notifications, EDP Portal settings, and cluster settings, such as default and allowed namespaces.
                                            • Main links \u2013 displays the corresponding links to the major adjusted toolset, to the management tool and to the OpenShift cluster.
                                            • Filters \u2013 used for searching and filtering the namespaces.

                                            EDP Portal is a complete tool allowing you to manage and control the codebases (applications, autotests, libraries and infrastructures) added to the environment as well as to create a CD pipeline.

                                            Inspect the main features available in EDP Portal by following the corresponding link:

                                            • Add Application
                                            • Add Autotest
                                            • Add Library
                                            • Add Git Server
                                            • Add CD Pipeline
                                            • Add Quality Gate
                                            "},{"location":"user-guide/add-application/","title":"Add Application","text":"

                                            Portal allows you to create, clone or import an application and add it to the environment. It can also be deployed in Gerrit (if the Clone or Create strategy is used) with the Code Review and Build pipelines built in Jenkins/Tekton.

                                            To add an application, navigate to the Components section on the navigation bar and click Create (the plus sign icon in the lower-right corner of the screen). Once clicked, the Create new component dialog will appear, then select Application and choose one of the strategies which will be described later in this page. You can create an Application in YAML or via the two-step menu in the dialog.

                                            "},{"location":"user-guide/add-application/#create-application-in-yaml","title":"Create Application in YAML","text":"

                                            Click Edit YAML in the upper-right corner of the Create Application dialog to open the YAML editor and create the Application.

                                            Edit YAML

                                            To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create Application dialog.

                                            To save the changes, select the Save & Apply button.
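
                                            For illustration, below is a minimal sketch of an Application manifest that could be pasted into the YAML editor. The field names follow the Codebase custom resource; the apiVersion, resource name, namespace, and exact field values are assumptions and may differ in your EDP version:

                                            apiVersion: v2.edp.epam.com/v1\nkind: Codebase\nmetadata:\n  name: my-go-app           # hypothetical application name\n  namespace: edp            # assumed EDP namespace\nspec:\n  type: application\n  strategy: create\n  defaultBranch: main\n  lang: go\n  framework: go\n  buildTool: go\n  ciTool: tekton            # assumed value\n  versioning:\n    type: default\n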

                                            "},{"location":"user-guide/add-application/#create-application-via-ui","title":"Create Application via UI","text":"

                                            The Create Application dialog contains the two steps:

                                            • The Codebase Info Menu
                                            • The Advanced Settings Menu
                                            "},{"location":"user-guide/add-application/#codebase-info-menu","title":"Codebase Info Menu","text":"

                                            Follow the instructions below to fill in the fields of the Codebase Info menu:

                                            1. In the Create new component menu, select Application:

                                              Application info

                                            2. Select the necessary configuration strategy. There are three configuration strategies:

                                            • Create from template \u2013 creates a project from a template in accordance with the application language, build tool, and framework. This strategy is recommended for projects that start developing their applications from scratch.
                                            • Import project - allows configuring a replication from the Git server. While importing the existing repository, select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                              Note

                                              In order to use the Import project strategy, make sure to adjust it with the Integrate GitLab/GitHub With Jenkins or Integrate GitLab/GitHub With Tekton page.

                                            • Clone project \u2013 clones the indicated repository into EPAM Delivery Platform. While cloning the existing repository, it is required to fill in the Repository URL field as well:

                                              Clone application

                                              In our example, we will use the Create from template strategy:

                                              Create application

                                              1. Select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                              2. Type the name of the application in the Component name field by entering at least two characters and by using the lower-case letters, numbers and inner dashes.

                                              3. Type the application description.

                                              4. To create an application with an empty repository in Gerrit, select the Empty project check box.

                                              5. Select any of the supported application languages with their providers in the Application Code Language field:

                                                • Java \u2013 selecting specific Java version (8,11,17 are available).
                                                • JavaScript - selecting JavaScript allows using React, Vue, Angular, Express, Next.js and Antora frameworks.
                                                • Python - selecting Python allows using the Python v.3.8, FastAPI, Flask frameworks.
                                                • Go - selecting Go allows using the Beego, Gin and Operator SDK frameworks.
                                                • C# - selecting C# allows using the .Net v.3.1 and .Net v.6.0 frameworks.
                                                • Helm - selecting Helm allows using the Helm framework.
                                                • Other - selecting Other allows extending the default code languages when creating a codebase with the clone/import strategy. To add another code language, inspect the Add Other Code Language section.

                                                Note

                                                The Create from template strategy does not allow customizing the default code language set.

                                              6. Select necessary Language version/framework depending on the Application code language field.

                                              7. Choose the necessary build tool in the Build Tool field:

                                                • Java - selecting Java allows using the Gradle or Maven tool.
                                                • JavaScript - selecting JavaScript allows using the NPM tool.
                                                • C# - selecting C# allows using the .Net tool.
                                                • Python - selecting Python allows using Python tool.
                                                • Go - selecting Go allows using Go tool.
                                                • Helm - selecting Helm allows using Helm tool.

                                                Note

                                                The Select Build Tool field is prefilled with the default tools and can be changed in accordance with the selected code language.

                                                Note

                                                Tekton pipelines offer built-in support for Java Maven Multi-Module projects. These pipelines are capable of recognizing Java deployable modules based on the information in the pom.xml file and performing relevant deployment actions. It's important to note that although the Dockerfile is typically located in the root directory, Kaniko, the tool used for building container images, uses the targets folder within the deployable module's context. For a clear illustration of a Multi-Module project structure, please refer to this example on GitHub, which showcases a commonly used structure for Java Maven Multi-Module projects.

                                            "},{"location":"user-guide/add-application/#advanced-settings-menu","title":"Advanced Settings Menu","text":"

                                            The Advanced Settings menu should look similar to the picture below:

                                            Advanced settings

                                            Follow the instructions below to fill in the fields of the Advanced Setting menu:

                                            a. Specify the name of the Default branch where you want the development to be performed.

                                            Note

                                            The default branch cannot be deleted. For the Clone project and Import project strategies: if you want to use the existing branch, enter its name into this field.

                                            b. Select the necessary codebase versioning type:

                                            • default - using the default versioning type, in order to specify the version of the current artifacts, images, and tags in the Version Control System, a developer should navigate to the corresponding file and change the version manually.
                                            • edp - using the edp versioning type, a developer indicates the version number that will be used for all the artifacts stored in the artifactory: binaries, pom.xml, metadata, etc. The version stored in the repository (e.g. pom.xml) will not be affected or used. Using this versioning overrides any version stored in the repository files without changing the actual file.

                                              When selecting the edp versioning type, the extra field will appear:

                                              Edp versioning

                                            Type the version number from which you want the artifacts to be versioned.

                                            Note

                                            The Start Version From field should be filled out in compliance with the semantic versioning rules, e.g. 1.2.3 or 10.10.10. Please refer to the Semantic Versioning page for details.
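
                                            In terms of the underlying Codebase resource, selecting the edp versioning type roughly corresponds to a spec fragment like the one below; the startFrom field name is an assumption based on the Start Version From field:

                                            spec:\n  versioning:\n    type: edp\n    startFrom: \"1.2.3\"   # placeholder, must follow semantic versioning\n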

                                            c. Specify the pattern to validate a commit message. Use regular expression to indicate the pattern that is followed on the project to validate a commit message in the code review pipeline. An example of the pattern: ^[PROJECT_NAME-d{4}]:.*$.

                                            JIRA integration

                                            d. Select the Integrate with Jira Server check box in case it is required to connect Jira tickets with the commits and have a respective label in the Fix Version field.

                                            Note

                                            To adjust the Jira integration functionality, first apply the necessary changes described on the Adjust Jira Integration page, and set up the VCS integration as described on the Adjust VCS Integration With Jira page. Pay attention that the Jira integration feature is not available when using the GitLab CI tool.

                                            e. In the Jira Server field, select the Jira server.

                                            f. Specify the pattern to find a Jira ticket number in a commit message. Based on this pattern, the value from EDP will be displayed in Jira. Combine several variables to obtain the desired value.

                                            Note

                                            The GitLab CI tool is available only with the Import strategy and makes the Jira integration feature unavailable.

                                            Mapping fields

                                            g. In the Mapping field name section, specify the names of the Jira fields that should be filled in with attributes from EDP:

                                            1. Select the name of the field in a Jira ticket from the Mapping field name drop-down menu. The available fields are the following: Fix Version/s, Component/s and Labels.

                                            2. Click the Add button to add the mapping field name.

                                            3. Enter Jira pattern for the field name:

                                              • For the Fix Version/s field, select the EDP_VERSION variable that represents an EDP upgrade version, as in 2.7.0-SNAPSHOT. Combine variables to make the value more informative. For example, the pattern EDP_VERSION-EDP_COMPONENT will be displayed as 2.7.0-SNAPSHOT-nexus-operator in Jira.
                                              • For the Component/s field, select the EDP_COMPONENT variable that defines the name of the existing repository. For example, nexus-operator.
                                              • For the Labels field, select the EDP_GITTAG variable that defines a tag assigned to the commit in GitHub. For example, build/2.7.0-SNAPSHOT.59.
                                            4. Click the bin icon to remove the Jira field name.

                                            h. Click the Apply button to add the application to the Applications list.

                                            Note

                                            After the application is added, inspect the Application Overview part.

                                            Note

                                            Since EDP v3.3.0, the CI tool field has been hidden. Now EDP Portal automatically defines the CI tool depending on which one is deployed with EDP. If both Jenkins and Tekton are deployed, EDP Portal chooses Tekton by default. To define the CI tool manually, operate with the spec.ciTool parameter.

                                            "},{"location":"user-guide/add-application/#related-articles","title":"Related Articles","text":"
                                            • Manage Applications
                                            • Add CD Pipeline
                                            • Add Other Code Language
                                            • Adjust GitLab CI Tool
                                            • Adjust Jira Integration
                                            • Adjust VCS Integration With Jira
                                            • Integrate GitHub/GitLab in Jenkins
                                            • Integrate GitHub/GitLab in Tekton
                                            • Manage Jenkins CI Pipeline Job Provisioner
                                            • Manage Jenkins Agent
                                            • Perf Server Integration
                                            "},{"location":"user-guide/add-autotest/","title":"Add Autotest","text":"

                                            Portal enables you to clone or import an autotest, add it to the environment with its subsequent deployment in Gerrit (in case the Clone strategy is used) and the building of the Code Review pipeline in Jenkins/Tekton, as well as to use it for working with an application under development. It is also possible to use autotests as quality gates in a newly created CD pipeline.

                                            Info

                                            Please refer to the Add Application section for the details on how to add an application codebase type. For the details on how to use autotests as quality gates, please refer to the Stages Menu section of the Add CD Pipeline documentation.

                                            To add an autotest, navigate to the Components section on the navigation bar and click Create (the plus sign icon in the lower-right corner of the screen). Once clicked, the Create new component dialog will appear, then select Autotest and choose one of the strategies which will be described later in this page. You can create an autotest in YAML or via the two-step menu in the dialog.

                                            "},{"location":"user-guide/add-autotest/#create-autotest-in-yaml","title":"Create Autotest in YAML","text":"

                                            Click Edit YAML in the upper-right corner of the Create Autotest dialog to open the YAML editor and create an autotest:

                                            Edit YAML

                                            To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create Autotest dialog.

                                            To save the changes, select the Save & Apply button.
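
                                            For illustration, below is a minimal sketch of an Autotest manifest for the YAML editor; the apiVersion, names, and field spellings (in particular gitUrlPath and testReportFramework) are assumptions and may differ in your EDP version:

                                            apiVersion: v2.edp.epam.com/v1\nkind: Codebase\nmetadata:\n  name: my-autotests         # hypothetical autotest name\n  namespace: edp             # assumed EDP namespace\nspec:\n  type: autotest\n  strategy: clone\n  gitUrlPath: \"/epmd-edp/examples/basic/edp-auto-tests-simple-example\"\n  defaultBranch: master\n  lang: java\n  framework: java11\n  buildTool: maven\n  testReportFramework: allure   # assumed field for the Autotest Report Framework setting\n  versioning:\n    type: default\n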

                                            "},{"location":"user-guide/add-autotest/#create-autotest-via-ui","title":"Create Autotest via UI","text":"

                                            The Create Autotest dialog contains the two steps:

                                            • The Codebase Info Menu
                                            • The Advanced Settings Menu
                                            "},{"location":"user-guide/add-autotest/#the-codebase-info-menu","title":"The Codebase Info Menu","text":"

                                            There are two available strategies: clone and import.

                                            1. The Create new component menu should look like the picture below:

                                              Create new component menu

                                            2. In the Repository onboarding strategy field, select the necessary configuration strategy:

                                              • Clone project \u2013 clones the indicated repository into EPAM Delivery Platform. While cloning the existing repository, it is required to fill in the Repository URL field as well.
                                              • Import project - allows configuring a replication from the Git server. While importing the existing repository, select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                                Note

                                                In order to use the Import project strategy, make sure to adjust it with the Integrate GitLab/GitHub With Jenkins or Integrate GitLab/GitHub With Tekton page.

                                                In our example, we will use the Clone project strategy:

                                                Clone autotest

                                                1. While cloning the existing repository, it is required to fill in the Repository URL field.

                                                2. Select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                                3. Select the Repository credentials check box in case you clone the private repository, and fill in the repository login and password/access token.

                                                4. Fill in the Component name field by entering at least two characters and by using the lower-case letters, numbers and inner dashes.

                                                5. Type the necessary description in the Description field.

                                                6. In the Autotest code language field, select the Java code language with its framework (specify Java 8 or Java 11 to be used) and get the default Maven build tool, OR add another code language. Selecting Other allows extending the default code languages and getting the necessary build tool; for details, inspect the Add Other Code Language section.

                                                  Note

                                                  Using the Create strategy does not allow customizing the default code language set.

                                                7. Select the Java framework if Java is selected above.

                                                8. The Build Tool field offers the default Maven tool, Gradle, or another build tool in accordance with the selected code language.

                                                9. All the autotest reports will be created in the Allure framework that is available in the Autotest Report Framework field by default.

                                            3. Click the Proceed button to switch to the next menu.

                                            The Advanced Settings menu should look like the picture below:

                                            Advanced settings

                                            a. Specify the name of the default branch where you want the development to be performed.

                                            Note

                                            The default branch cannot be deleted.

                                            b. Select the necessary codebase versioning type:

                                            • default: Using the default versioning type, in order to specify the version of the current artifacts, images, and tags in the Version Control System, a developer should navigate to the corresponding file and change the version manually.
                                            • edp: Using the edp versioning type, a developer indicates the version number from which all the artifacts will be versioned and, as a result, automatically registered in the corresponding file (e.g. pom.xml).

                                              When selecting the edp versioning type, the extra field will appear:

                                              Edp versioning

                                              Type the version number from which you want the artifacts to be versioned.

                                            Note

                                            The Start Version From field must be filled out in compliance with the semantic versioning rules, e.g. 1.2.3 or 10.10.10. Please refer to the Semantic Versioning page for details.

                                            c. Specify the pattern to validate a commit message. Use regular expression to indicate the pattern that is followed on the project to validate a commit message in the code review pipeline. An example of the pattern: ^[PROJECT_NAME-d{4}]:.*$

                                            Jira integration

                                            d. Select the Integrate with Jira Server check box in case it is required to connect Jira tickets with the commits and have a respective label in the Fix Version field.

                                            Note

                                            To adjust the Jira integration functionality, first apply the necessary changes described on the Adjust Jira Integration page, and set up the VCS integration as described on the Adjust VCS Integration With Jira page. Pay attention that the Jira integration feature is not available when using the GitLab CI tool.

                                            e. As soon as the Jira server is set, select it in the Jira Server field.

                                            f. Specify the pattern to find a Jira ticket number in a commit message. Based on this pattern, the value from EDP will be displayed in Jira.

                                            Mapping field name

                                            g. In the Advanced Mapping section, specify the names of the Jira fields that should be filled in with attributes from EDP:

                                            1. Select the name of the field in a Jira ticket. The available fields are the following: Fix Version/s, Component/s and Labels.

                                            2. Click the Add button to add the mapping field name.

                                            3. Enter Jira pattern for the field name:

                                              • For the Fix Version/s field, select the EDP_VERSION variable that represents an EDP upgrade version, as in 2.7.0-SNAPSHOT. Combine variables to make the value more informative. For example, the pattern EDP_VERSION-EDP_COMPONENT will be displayed as 2.7.0-SNAPSHOT-nexus-operator in Jira.
                                              • For the Component/s field, select the EDP_COMPONENT variable that defines the name of the existing repository. For example, nexus-operator.
                                              • For the Labels field, select the EDP_GITTAG variable that defines a tag assigned to the commit in GitHub. For example, build/2.7.0-SNAPSHOT.59.
                                            4. Click the bin icon to remove the Jira field name.

                                            h. Click the Apply button to add the autotest to the Autotests list.

                                            Note

                                            After the autotest is added, inspect the Autotest Overview part.

                                            Note

                                            Since EDP v3.3.0, the CI tool field has been hidden. Now EDP Portal automatically defines the CI tool depending on which one is deployed with EDP. If both Jenkins and Tekton are deployed, EDP Portal chooses Tekton by default. To define the CI tool manually, operate with the spec.ciTool parameter.

                                            "},{"location":"user-guide/add-autotest/#the-advanced-settings-menu","title":"The Advanced Settings Menu","text":""},{"location":"user-guide/add-autotest/#related-articles","title":"Related Articles","text":"
                                            • Manage Autotests
                                            • Add Application
                                            • Add CD Pipelines
                                            • Add Other Code Language
                                            • Adjust GitLab CI Tool
                                            • Adjust Jira Integration
                                            • Adjust VCS Integration With Jira
                                            • Integrate GitHub/GitLab in Jenkins
                                            • Integrate GitHub/GitLab in Tekton
                                            • Manage Jenkins CI Pipeline Job Provisioner
                                            • Manage Jenkins Agent
                                            • Perf Server Integration
                                            "},{"location":"user-guide/add-cd-pipeline/","title":"Add CD Pipeline","text":"

                                            Portal provides the ability to deploy an environment on your own and specify the essential components.

                                            Navigate to the CD Pipelines section on the navigation bar and click Create (the plus sign icon in the lower-right corner of the screen). Once clicked, the Create CD Pipeline dialog will appear.

                                            The creation of the CD pipeline becomes available as soon as an application is created, including its provisioning in a branch and the necessary entities for the environment. You can create the CD pipeline in YAML or via the three-step menu in the dialog.

                                            "},{"location":"user-guide/add-cd-pipeline/#create-cd-pipeline-in-yaml","title":"Create CD Pipeline in YAML","text":"

                                            Click Edit YAML in the upper-right corner of the Create CD Pipeline dialog to open the YAML editor and create the CD Pipeline.

                                            Edit YAML

                                            To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create CD Pipeline dialog.

                                            To save the changes, select the Save & Apply button.
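
                                            For illustration, below is a minimal sketch of a CDPipeline manifest for the YAML editor; the apiVersion and exact field names (applications, inputDockerStreams, applicationsToPromote) are assumptions and may differ in your EDP version:

                                            apiVersion: v2.edp.epam.com/v1\nkind: CDPipeline\nmetadata:\n  name: mypipeline            # hypothetical pipeline name\n  namespace: edp              # assumed EDP namespace\nspec:\n  deploymentType: container\n  applications:\n    - my-go-app\n  inputDockerStreams:\n    - my-go-app-main          # codebase image stream of the selected branch\n  applicationsToPromote:\n    - my-go-app               # listed here when Promote in pipeline is selected\n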

                                            "},{"location":"user-guide/add-cd-pipeline/#create-cd-pipeline-in-the-dialog","title":"Create CD Pipeline in the Dialog","text":"

                                            The Create CD Pipeline dialog contains the three steps:

                                            • The Pipeline Menu
                                            • The Applications Menu
                                            • The Stages Menu
                                            "},{"location":"user-guide/add-cd-pipeline/#the-pipeline-menu","title":"The Pipeline Menu","text":"

                                            The Pipeline tab of the Create CD Pipeline menu should look like the picture below:

                                            Create CD pipeline

                                            1. Type the name of the pipeline in the Pipeline Name field by entering at least two characters and by using the lower-case letters, numbers and inner dashes.

                                              Note

                                              The namespace created by the CD pipeline has the following pattern combination: [edp namespace]-[cd pipeline name]-[stage name]. Please be aware that the namespace length should not exceed 63 symbols.

                                            2. Select the deployment type from the drop-down list:

                                              • Container - the pipeline will be deployed in a Docker container;
                                              • Custom - this mode allows deploying non-container applications and customizing the Init stage of the CD pipeline.
                                            3. Click the Proceed button to switch to the next menu.

                                            "},{"location":"user-guide/add-cd-pipeline/#the-applications-menu","title":"The Applications Menu","text":"

                                            The Applications tab of the Create CD Pipeline menu should look like the picture below:

                                            CD pipeline applications

                                            1. Select the necessary application from the Mapping field name drop-down menu.
                                            2. Select the plus sign icon near the selected application to specify the necessary codebase Docker branch for the application (the output for the branch and other stages from other CD pipelines).
                                            3. Select the application branch from the drop-down menu.
                                            4. Select the Promote in pipeline check box in order to transfer the application from one stage to another by the specified codebase Docker branch. If the Promote in pipeline check box is not selected, the same codebase Docker stream will be deployed regardless of the stage, i.e. the codebase Docker stream input, which was selected for the pipeline, will always be used.

                                              Note

                                              The newly created CD pipeline has the following pattern combination: [pipeline name]-[branch name]. If there is another deployed CD pipeline stage with the respective codebase Docker stream (= image stream as an OpenShift term), the pattern combination will be as follows: [pipeline name]-[stage name]-[application name]-[verified].

                                            5. Click the Proceed button to switch to the next menu.

                                            "},{"location":"user-guide/add-cd-pipeline/#the-stages-menu","title":"The Stages Menu","text":"
                                            1. Click the plus sign icon in the Stages menu and fill in the necessary fields in the Adding Stage window:

                                              CD stages

                                              Adding stage

                                              a. Type the stage name;

                                              Note

                                              The namespace created by the CD pipeline has the following pattern combination: [cluster name]-[cd pipeline name]-[stage name]. Please be aware that the namespace length should not exceed 63 symbols.

                                              b. Enter the description for this stage;

                                              c. Select the trigger type. The key benefit of the automatic deploy feature is to keep environments up-to-date. The available trigger types are Manual and Auto. When the Auto trigger type is chosen, the CD pipeline will initiate automatically once the image is built. Manual implies that the user has to deploy manually by clicking the Deploy button in the CD Pipeline menu. Please refer to the Architecture Scheme of CD Pipeline Operator page for additional details.

                                              Note

                                              In the Tekton deploy scenario, automatic deploy starts working only after the first manual deploy.

                                              d. Select the job provisioner. In case of working with non-container-based applications, there is an option to use a custom job provisioner. Please refer to the Manage Jenkins CD Job Provision page for details.

                                              e. Select the groovy-pipeline library;

                                              f. Select the branch;

                                              g. Add an unlimited number of quality gates by clicking a corresponding plus sign icon and remove them as well by clicking the recycle bin icon;

                                              h. Type the step name, which will be displayed in Jenkins/Tekton, for every quality gate;

                                              i. Select the quality gate type:

                                              • Manual - means that the promoting process should be confirmed in Jenkins/Tekton manually;
                                              • Autotests - means that the promoting process should be confirmed by the successful passing of the autotests.

                                              In the additional fields, select the previously created autotest name (j) and specify its branch for the autotest that will be launched on the current stage (k).

                                              Note

                                              Execution sequence. The image promotion and execution of the pipelines depend on the sequence in which the environments are added.

                                              l. Click the Apply button to display the stage in the Stages menu.

                                              Continuous delivery menu

                                            2. Edit the stage by clicking its name and applying changes, and remove the added stage by clicking the recycle bin icon next to its name.

                                            3. Click the Apply button to start the provisioning of the pipeline. After the CD pipeline is added, the new project with the stage name will be created in OpenShift.
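
                                            The stage settings described above map onto the Stage custom resource. A hypothetical sketch is shown below; the apiVersion and field names (cdPipeline, triggerType, qualityGates) are assumptions and may differ in your EDP version:

                                            apiVersion: v2.edp.epam.com/v1\nkind: Stage\nmetadata:\n  name: mypipeline-dev        # hypothetical stage resource name\n  namespace: edp              # assumed EDP namespace\nspec:\n  cdPipeline: mypipeline\n  name: dev\n  order: 0\n  triggerType: Manual          # or Auto\n  qualityGates:\n    - qualityGateType: manual\n      stepName: approve\n    - qualityGateType: autotests\n      stepName: smoke\n      autotestName: my-autotests\n      branchName: master\n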

                                            "},{"location":"user-guide/add-cd-pipeline/#manage-cd-pipeline","title":"Manage CD Pipeline","text":"

                                            As soon as the CD pipeline is provisioned and added to the CD Pipelines list, it is possible to:

                                            CD pipeline page

                                            1. Create another CD pipeline by clicking the plus sign icon in the lower-right corner of the screen and performing the same steps as described in the Add CD Pipeline section.

                                            2. Open CD pipeline data by clicking its link name. Once clicked, the following blocks will be displayed:

                                              • General Info - displays common information about the CD pipeline, such as name and deployment type.
                                              • Applications - displays the CD pipeline applications to promote.
                                              • Stages - displays the CD pipeline stages and stage metadata (by selecting the information icon near the stage name); allows adding, editing and deleting stages, as well as deploying or uninstalling image stream versions of the related applications for a stage.
                                              • Metadata - displays the CD pipeline name, namespace, creation date, finalizers, generation, resource version, and UID. Open this block by selecting the information icon near the options icon next to the CD pipeline name.
                                            3. Edit the CD pipeline by selecting the options icon next to its name in the CD Pipelines list, and then selecting Edit. For details see the Edit Existing CD Pipeline section.

                                            4. Delete the added CD pipeline by selecting the options icon next to its name in the CD Pipelines list, and then selecting Delete.

                                              Info

                                              In OpenShift, if the deployment fails with the ImagePullBackOff error, delete the POD.

                                            5. Sort the existing CD pipelines in a table by clicking the sorting icons in the table header. When sorting by name, the CD pipelines will be displayed in alphabetical order. You can also sort the CD pipelines by their status.

                                            6. Search the necessary CD pipeline by the namespace or by entering the corresponding name, language or the build tool into the Filter tool.

                                            7. Select a number of CD pipelines displayed per page (15, 25 or 50 rows) and navigate between pages if the number of CD pipelines exceeds the capacity of a single page.

                                            "},{"location":"user-guide/add-cd-pipeline/#edit-existing-cd-pipeline","title":"Edit Existing CD Pipeline","text":"

                                            Edit the CD pipeline directly from the CD Pipelines overview page or when viewing the CD Pipeline data:

                                            1. Select Edit in the options icon menu next to the CD pipeline name:

                                              Edit CD pipeline on the CD Pipelines overview page

                                              Edit CD pipeline when viewing the CD pipeline data

                                            2. Apply the necessary changes (edit the list of applications for deploy, application branches, and promotion in the pipeline). Add new extra stages by clicking the plus sign icon and filling in the application branch and promotion in the pipeline.

                                              Edit CD pipeline dialog

                                            3. Select the Apply button to confirm the changes.

                                            "},{"location":"user-guide/add-cd-pipeline/#add-a-new-stage","title":"Add a New Stage","text":"

                                            In order to create a new stage for the existing CD pipeline, follow the steps below:

                                            1. Navigate to the Stages block by clicking the CD pipeline name link in the CD Pipelines list.

                                              Add CD pipeline stage

                                            2. Select Create to open the Create stage dialog.

                                            3. Click Edit YAML in the upper-right corner of the Create stage dialog to open the YAML editor and add a stage. Otherwise, fill in the required fields in the dialog. Please see the Stages Menu section for details.

                                            4. Click the Apply button.

                                            "},{"location":"user-guide/add-cd-pipeline/#edit-stage","title":"Edit Stage","text":"

                                            In order to edit a stage for the existing CD pipeline, follow the steps below:

                                            1. Navigate to the Stages block by clicking the CD pipeline name link in the CD Pipelines list.

                                              Edit CD pipeline stage

                                            2. Select the options icon related to the necessary stage and then select Edit.

                                              Edit CD pipeline stage dialog

                                            3. In the Edit Stage dialog, change the stage trigger type. See more about this field in the Stages Menu section.

                                            4. Click the Apply button.

                                            "},{"location":"user-guide/add-cd-pipeline/#delete-stage","title":"Delete Stage","text":"

                                            Note

                                            You cannot remove the last stage, as the CD pipeline does not exist without stages.

                                            In order to delete a stage for the existing CD pipeline, follow the steps below:

                                            1. Navigate to the Stages block by clicking the CD pipeline name link in the CD Pipelines list.

                                              Delete CD pipeline stage

                                            2. Select the options icon related to the necessary stage and then select Delete. After the confirmation, the CD stage is deleted with all its components: database record, Jenkins/Tekton pipeline, and cluster namespace.

                                            "},{"location":"user-guide/add-cd-pipeline/#view-stage-data","title":"View Stage Data","text":"

                                            To view the CD pipeline stage data for the existing CD pipeline, follow the steps below:

                                            1. Navigate to the Stages block by clicking the CD pipeline name link in the CD Pipelines list.

                                              Expand CD pipeline stage

                                            2. Select the expand icon near the stage name. The following blocks will be displayed:

                                              CD pipeline stage overview

                                            • Applications - displays the status of the applications related to the stage and allows deploying the applications. Applications health and sync statuses are returned from the Argo CD tool.
                                            • General Info - displays the stage status, CD pipeline, description, job provisioning, order, trigger type, and source.
                                            • Quality Gates - displays the stage quality gate type, step name, autotest name, and branch name.
                                            "},{"location":"user-guide/add-cd-pipeline/#deploy-application","title":"Deploy Application","text":"

                                            Navigate to the Applications block of the stage and select an application. Select the image stream version from the drop-down list and click Deploy. The application will be deployed in the Argo CD tool as well.

                                            Deploy the promoted application

                                            To update or uninstall the application, select Update or Uninstall.

                                            Update or uninstall the application

                                            After this, the application will be updated or uninstalled in the Argo CD tool as well.

                                            Note

                                            In a nutshell, the Update button updates your image version in the Helm chart, whereas the Uninstall button deletes the Helm chart from the namespace where the pipeline is deployed.

                                            "},{"location":"user-guide/add-cd-pipeline/#related-articles","title":"Related Articles","text":"
                                            • Manage Jenkins CD Pipeline Job Provision
                                            "},{"location":"user-guide/add-cluster/","title":"Add Cluster","text":"

Adding other clusters allows deploying applications to several clusters when creating a CD pipeline stage in EDP Portal.

                                            To add a cluster, follow the steps below:

                                            1. Navigate to the Configuration section on the navigation bar and select Clusters. The appearance differs depending on the chosen display option:

List option / Tiled option

                                              Configuration menu (List option)

                                              Configuration menu (Tiled option)

                                            2. Click the + button to enter the Create new cluster menu:

                                              Add Cluster

                                            3. Once clicked, the Create new cluster dialog will appear. You can create a Cluster in YAML or via UI:

Add cluster in YAML / Add cluster via UI

To add a cluster in YAML, follow the steps below:

                                            • Click the Edit YAML button in the upper-right corner of the Create New Cluster dialog to open the YAML editor and create a Kubernetes secret.

                                            Edit YAML

                                            • To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create new cluster dialog.
                                            • To save the changes, select the Save & Apply button.

To add a cluster via the UI, follow the steps below:

                                            • To add a new cluster via the dialog menu, fill in the following fields in the Create New Cluster dialog:

                                              • Cluster Name - enter a cluster name;
                                              • Cluster Host - enter a cluster host;
                                              • Cluster Token - enter a cluster token;
                                              • Cluster Certificate - enter a cluster certificate.

                                            Add Cluster

                                            • Click the Apply button to add the cluster to the clusters list.

                                            As a result, the Kubernetes secret will be created for further integration.

Currently, EDP uses a shared Argo CD instance, so the secret needs to be copied to the namespace where Argo CD is installed.
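For reference, a minimal sketch of the kind of Kubernetes secret this step produces is shown below. It simply mirrors the dialog fields described above; the exact key names, secret name, and namespace are illustrative assumptions and may differ between EDP versions.

apiVersion: v1\nkind: Secret\nmetadata:\n  name: <cluster-name>-cluster\n  namespace: <edp-namespace>\ntype: Opaque\nstringData:\n  # illustrative keys mirroring the dialog fields above\n  clusterName: <cluster-name>\n  clusterHost: <cluster-host>\n  clusterToken: <cluster-token>\n  clusterCertificate: <cluster-certificate>\n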

                                            "},{"location":"user-guide/add-cluster/#related-articles","title":"Related Articles","text":"
                                            • Add Application
                                            • Add Library
                                            • Add Autotest
                                            • Add CD Pipeline
                                            "},{"location":"user-guide/add-custom-global-pipeline-lib/","title":"Add a Custom Global Pipeline Library","text":"

                                            In order to add a new custom global pipeline library, perform the steps below:

                                            1. Navigate to Jenkins and go to Manage Jenkins -> Configure System -> Global Pipeline Libraries.

                                              Note

                                              It is possible to configure as many libraries as necessary. Since these libraries will be globally usable, any pipeline in the system can utilize the functionality implemented in these libraries.

                                            2. Specify the following values:

                                              Add custom library

                                              a. Library name: The name of a custom library.

b. Default version: The version to load by default; it can be a branch name, a tag, or a commit hash.

                                              c. Load implicitly: If checked, scripts will automatically have access to this library without needing to request it via @Library. It means that there is no need to upload the library manually because it will be downloaded automatically during the build for each job.

                                              d. Allow default version to be overridden: If checked, scripts may select a custom version of the library by appending @someversion in the @Library annotation. Otherwise, they are restricted to using the version selected here.

                                              e. Include @Library changes in job recent changes: If checked, any changes in the library will be included in the changesets of a build, and changing the library would cause new builds to run for Pipelines that include this library. This can be overridden in the jenkinsfile: @Library(value=\"name@version\", changelog=true|false).

                                              f. Cache fetched versions on controller for quick retrieval: If checked, versions fetched using this library will be cached on the controller. If a new library version is not downloaded during the build for some reason, remove the previous library version from cache in the Jenkins workspace.

                                              Note

If the Default version is not defined, the pipeline must specify a version, for example, @Library('my-shared-library@master'). If the Allow default version to be overridden check box is enabled in the Shared Library\u2019s configuration, a @Library annotation may also override the default version defined for the library.

                                              Source code management

g. Project repository: The URL of the repository.

                                              h. Credentials: The credentials for the repository.

                                            3. Use the Custom Global Pipeline Libraries on the pipeline, for example:

                                            Pipeline

                                            @Library(['edp-library-stages', 'edp-library-pipelines', 'edp-custom-shared-library-name'])_\n\nBuild()\n

                                            Note

                                            edp-custom-shared-library-name is the name of the Custom Global Pipeline Library that should be added to the Jenkins Global Settings.

                                            "},{"location":"user-guide/add-custom-global-pipeline-lib/#related-articles","title":"Related Articles","text":"
                                            • Jenkins Official Documentation: Extending with Shared Libraries
                                            "},{"location":"user-guide/add-git-server/","title":"Add Git Server","text":"

                                            Important

                                            This article describes how to add a Git Server when deploying EDP with Jenkins. When deploying EDP with Tekton, Git Server is created automatically.

Add Git servers to use the Import strategy for Jenkins and Tekton when creating an application, autotest or library in EDP Portal (the Codebase Info step of the Create Application/Autotest/Library dialog). Enabling the Import strategy is a prerequisite for integrating EDP with GitLab or GitHub.

                                            Note

                                            GitServer Custom Resource can be also created manually. See step 3 for Jenkins import strategy in the Integrate GitHub/GitLab in Jenkins article.

                                            To add a Git server, navigate to the Git servers section on the navigation bar and click Create (the plus sign icon in the lower-right corner of the screen). Once clicked, the Create Git server dialog will appear. You can create a Git server in YAML or via the three-step menu in the dialog.

                                            "},{"location":"user-guide/add-git-server/#create-git-server-in-yaml","title":"Create Git Server in YAML","text":"

                                            Click Edit YAML in the upper-right corner of the Create Git server dialog to open the YAML editor and create a Git server.

                                            Edit YAML

                                            To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create Git server dialog.

                                            To save the changes, select the Save & Apply button.

                                            "},{"location":"user-guide/add-git-server/#create-git-server-in-the-dialog","title":"Create Git Server in the Dialog","text":"

                                            Fill in the following fields:

                                            Create Git server

                                            • Git provider - select Gerrit, GitLab or GitHub.
                                            • Host - enter a Git server endpoint.
                                            • User - enter a user for Git integration.
                                            • SSH port - enter a Git SSH port.
                                            • HTTPS port - enter a Git HTTPS port.
                                            • Private SSH key - enter a private SSH key for Git integration. To generate this key, follow the instructions of the step 1 for Jenkins in the Integrate GitHub/GitLab in Jenkins article.
• Access token - enter an access token for Git integration. To generate this token, go to your GitLab or GitHub account settings and create a personal access token with the required scopes (for GitLab: Settings -> Access Tokens; for GitHub: Settings -> Developer settings -> Personal access tokens).

                                            Click the Apply button to add the Git server to the Git servers list. As a result, the Git Server object and the corresponding secret for further integration will be created.
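For reference, the Git Server created from these fields corresponds to a GitServer custom resource. The sketch below is illustrative only: the apiVersion and field names are assumptions based on the dialog fields above and may differ between EDP versions.

apiVersion: v2.edp.epam.com/v1\nkind: GitServer\nmetadata:\n  name: github\nspec:\n  gitProvider: github        # Gerrit, GitLab or GitHub\n  gitHost: github.com        # Git server endpoint\n  gitUser: git               # user for Git integration\n  sshPort: 22\n  httpsPort: 443\n  nameSshKeySecret: github   # secret holding the private SSH key and access token\n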

                                            "},{"location":"user-guide/add-git-server/#related-articles","title":"Related Articles","text":"
                                            • Integrate GitHub/GitLab in Jenkins
                                            • Integrate GitHub/GitLab in Tekton
                                            • GitHub Webhook Configuration
                                            • GitLab Webhook Configuration
                                            "},{"location":"user-guide/add-infrastructure/","title":"Add Infrastructure","text":"

EDP Portal allows you to create, clone, and import an infrastructure. Infrastructures are used to create resources in a cloud provider.

To add an infrastructure, navigate to the Components section on the navigation bar and click Create (the plus sign icon in the lower-right corner of the screen). Once clicked, the Create new component dialog will appear; select Infrastructure and choose one of the strategies described later on this page. You can create an infrastructure in YAML or via the two-step menu in the dialog.

                                            "},{"location":"user-guide/add-infrastructure/#create-infrastructure-in-yaml","title":"Create Infrastructure in YAML","text":"

                                            Click Edit YAML in the upper-right corner of the Create Infrastructure dialog to open the YAML editor and create the Infrastructure.

                                            Edit YAML

                                            To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create Infrastructure dialog.

                                            To save the changes, select the Save & Apply button.

                                            "},{"location":"user-guide/add-infrastructure/#create-infrastructure-via-ui","title":"Create Infrastructure via UI","text":"

                                            The Create Infrastructure dialog contains the two steps:

                                            • The Codebase Info Menu
                                            • The Advanced Settings Menu
                                            "},{"location":"user-guide/add-infrastructure/#codebase-info-menu","title":"Codebase Info Menu","text":"

                                            Follow the instructions below to fill in the fields of the Codebase Info menu:

                                            1. In the Create new component menu, select Infrastructure:

                                              Infrastructure info

                                            2. Select the necessary configuration strategy:

• Create from template \u2013 creates a project from a template based on an infrastructure code language, a build tool, and a framework.
                                            • Import project - allows configuring a replication from the Git server. While importing the existing repository, select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                              Note

                                              In order to use the Import project strategy, make sure to adjust it with the Integrate GitLab/GitHub With Jenkins or Integrate GitLab/GitHub With Tekton page.

                                            • Clone project \u2013 clones the indicated repository into EPAM Delivery Platform. While cloning the existing repository, it is required to fill in the Repository URL field as well:

                                              In our example, we will use the Create from template strategy:

                                              Create infrastructure

                                              1. Select the Git server from the drop-down list and define the Git repo relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                              2. Type the name of the infrastructure in the Component name field by entering at least two characters and by using the lower-case letters, numbers and inner dashes.

                                              3. Write the description in the Description field.

4. To create an infrastructure with an empty repository in Gerrit, select the Empty project check box.

5. Select one of the supported languages in the Infrastructure code language field. So far, only HCL is supported.

                                                Note

The Create from template strategy does not allow customizing the default code language set.

6. Select the necessary Language version/framework depending on the Infrastructure code language field. So far, only AWS is supported.

7. Choose the necessary build tool in the Build Tool field. So far, only Terraform is supported.

                                                Note

                                                The Select Build Tool field disposes of the default tools and can be changed in accordance with the selected code language.

                                            The Advanced Settings menu should look similar to the picture below:

                                            Advanced settings

Follow the instructions below to fill in the fields of the Advanced Settings menu:

                                            a. Specify the name of the Default branch where you want the development to be performed.

                                            Note

                                            The default branch cannot be deleted. For the Clone project and Import project strategies: if you want to use the existing branch, enter its name into this field.

                                            b. Select the necessary codebase versioning type:

• default - with the default versioning type, to specify the version of the current artifacts, images, and tags in the Version Control System, a developer navigates to the corresponding file and changes the version manually.
• edp - with the edp versioning type, a developer indicates the version number that will be used for all the artifacts stored in the artifact repository: binaries, pom.xml, metadata, etc. The version stored in the repository (e.g. pom.xml) will not be affected or used; this versioning type overrides any version stored in the repository files without changing the actual files.

                                              When selecting the edp versioning type, the extra field will appear:

                                              Edp versioning

                                            Type the version number from which you want the artifacts to be versioned.

                                            Note

                                            The Start Version From field should be filled out in compliance with the semantic versioning rules, e.g. 1.2.3 or 10.10.10. Please refer to the Semantic Versioning page for details.
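For illustration, the versioning settings above end up in the codebase resource roughly as sketched below; the field names (spec.versioning.type, spec.versioning.startFrom) are assumptions and may vary between EDP versions.

spec:\n  versioning:\n    type: edp            # or default\n    startFrom: 1.2.3     # the Start Version From value, following semantic versioning\n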

c. Specify the pattern to validate a commit message. Use a regular expression to indicate the pattern that the project follows to validate a commit message in the Code Review pipeline. An example of the pattern: ^\[PROJECT_NAME-\d{4}\]:.*$. With this pattern, a commit message such as [PROJECT_NAME-0001]: Initial commit passes validation.

                                            JIRA integration

                                            d. Select the Integrate with Jira Server check box in case it is required to connect Jira tickets with the commits and have a respective label in the Fix Version field.

                                            Note

To adjust the Jira integration functionality, first apply the necessary changes described on the Adjust Jira Integration page, and set up the VCS integration as described on the Adjust VCS Integration With Jira page. Pay attention that the Jira integration feature is not available when using the GitLab CI tool.

                                            e. In the Jira Server field, select the Jira server.

                                            f. Specify the pattern to find a Jira ticket number in a commit message. Based on this pattern, the value from EDP will be displayed in Jira. Combine several variables to obtain the desired value.

                                            Note

                                            The GitLab CI tool is available only with the Import strategy and makes the Jira integration feature unavailable.

                                            Mapping fields

                                            g. In the Mapping field name section, specify the names of the Jira fields that should be filled in with attributes from EDP:

                                            1. Select the name of the field in a Jira ticket from the Mapping field name drop-down menu. The available fields are the following: Fix Version/s, Component/s and Labels.

                                            2. Click the Add button to add the mapping field name.

                                            3. Enter Jira pattern for the field name:

                                              • For the Fix Version/s field, select the EDP_VERSION variable that represents an EDP upgrade version, as in 2.7.0-SNAPSHOT. Combine variables to make the value more informative. For example, the pattern EDP_VERSION-EDP_COMPONENT will be displayed as 2.7.0-SNAPSHOT-nexus-operator in Jira.
                                              • For the Component/s field, select the EDP_COMPONENT variable that defines the name of the existing repository. For example, nexus-operator.
                                              • For the Labels field, select the EDP_GITTAG variable that defines a tag assigned to the commit in GitHub. For example, build/2.7.0-SNAPSHOT.59.
                                            4. Click the bin icon to remove the Jira field name.

h. Click the Apply button to add the infrastructure to the Infrastructures list.

                                            Note

After the infrastructure is added, inspect its overview part.

                                            Note

Since EDP v3.3.0, the CI tool field has been hidden. Now EDP Portal automatically defines the CI tool depending on which one is deployed with EDP. If both Jenkins and Tekton are deployed, EDP Portal chooses Tekton by default. To define the CI tool manually, operate with the spec.ciTool parameters.
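For example, the CI tool could be set explicitly in the codebase resource roughly as sketched below; the spec.ciTool field and its values are an assumption based on the note above and may differ between EDP versions.

spec:\n  ciTool: tekton   # or jenkins\n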

                                            "},{"location":"user-guide/add-infrastructure/#advanced-settings-menu","title":"Advanced Settings Menu","text":""},{"location":"user-guide/add-infrastructure/#related-articles","title":"Related Articles","text":"
                                            • Application Overview
                                            • Add CD Pipelines
                                            • Add Other Code Language
                                            • Adjust GitLab CI Tool
                                            • Adjust Jira Integration
                                            • Adjust VCS Integration With Jira
                                            • Enable VCS Import Strategy
                                            • Manage Jenkins CI Pipeline Job Provisioner
                                            • Manage Jenkins Agent
                                            • Perf Server Integration
                                            "},{"location":"user-guide/add-library/","title":"Add Library","text":"

EDP Portal helps to create, clone, and import a library and add it to the environment. A library can also be deployed in Gerrit (if the Clone or Create strategy is used) with the Code Review and Build pipelines built in Jenkins/Tekton.

                                            To add a library, navigate to the Components section on the navigation bar and click Create (the plus sign icon in the lower-right corner of the screen). Once clicked, the Create new component dialog will appear, then select Library and choose one of the strategies which will be described later in this page. You can create a library in YAML or via the two-step menu in the dialog.

                                            Create new component menu

                                            "},{"location":"user-guide/add-library/#create-library-in-yaml","title":"Create Library in YAML","text":"

                                            Click Edit YAML in the upper-right corner of the Create Library dialog to open the YAML editor and create the Library.

                                            Edit YAML

To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create Library dialog.

                                            To save the changes, select the Save & Apply button.

                                            "},{"location":"user-guide/add-library/#create-library-via-ui","title":"Create Library via UI","text":"

                                            The Create Library dialog contains the two steps:

                                            • The Codebase Info Menu
                                            • The Advanced Settings Menu
                                            "},{"location":"user-guide/add-library/#the-codebase-info-menu","title":"The Codebase Info Menu","text":"
                                            1. The Create new component menu should look like the following:

                                              Create new component menu

                                            2. In the Create new component menu, select the necessary configuration strategy. The choice will define the parameters you will need to specify:

• Create from template \u2013 creates a project from a template based on a library code language, a build tool, and a framework.
                                              • Import project - allows configuring a replication from the Git server. While importing the existing repository, select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                                Note

                                                In order to use the Import project strategy, make sure to adjust it with the Integrate GitLab/GitHub With Jenkins or Integrate GitLab/GitHub With Tekton page.

                                              • Clone project \u2013 clones the indicated repository into EPAM Delivery Platform. While cloning the existing repository, it is required to fill in the Repository URL field as well:

                                                Clone library

                                                In our example, we will use the Create from template strategy:

                                                Create library

                                                1. While importing the existing repository, select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example
                                                2. Type the name of the library in the Component name field by entering at least two characters and by using the lower-case letters, numbers and inner dashes.
                                                3. Type the library description.
                                                4. To create a library with an empty repository in Gerrit, select the Empty project check box. The empty repository option is available only for the Create from template strategy.
                                                5. Select any of the supported code languages with its framework in the Library code language field:

                                                  • Java \u2013 selecting specific Java version available.
                                                  • JavaScript - selecting JavaScript allows using the NPM tool.
• Python - selecting Python allows using Python v.3.8, FastAPI, and Flask.
• Groovy-pipeline - selecting Groovy-pipeline allows customizing the stages logic. For details, please refer to the Customize CD Pipeline page.
• Terraform - selecting Terraform allows using different Terraform versions via the Terraform version manager (tfenv). EDP supports all actions available in Terraform, thus providing the ability to modify the virtual infrastructure and launch checks with the help of linters. For details, please refer to the Use Terraform Library in EDP page.
                                                  • Rego - this option allows using Rego code language with an Open Policy Agent (OPA) Library. For details, please refer to the Use Open Policy Agent page.
                                                  • Container - this option allows using the Kaniko tool for building the container images from a Dockerfile. For details, please refer to the CI Pipeline for Container page.
• Helm - this option allows using the chart-testing lint pipeline for Helm charts, or using a Helm chart as a set of other Helm charts organized according to the example.
                                                  • C# - selecting C# allows using .Net v.3.1 and .Net v.6.0.
                                                  • Other - selecting Other allows extending the default code languages when creating a codebase with the Clone/Import strategy. To add another code language, inspect the Add Other Code Language page.

                                                  Note

The Create strategy does not allow customizing the default code language set.

                                                6. Select necessary Language version/framework depending on the Library code language field.

                                                7. The Select Build Tool field disposes of the default tools and can be changed in accordance with the selected code language.

                                            3. Click the Proceed button to switch to the next menu.

                                            "},{"location":"user-guide/add-library/#the-advanced-settings-menu","title":"The Advanced Settings Menu","text":"

                                            The Advanced Settings menu should look like the picture below:

                                            Advanced settings

                                            a. Specify the name of the default branch where you want the development to be performed.

                                            Note

                                            The default branch cannot be deleted.

                                            b. Select the necessary codebase versioning type:

• default: With the default versioning type, to specify the version of the current artifacts, images, and tags in the Version Control System, a developer navigates to the corresponding file and changes the version manually.
                                            • edp: Using the edp versioning type, a developer indicates the version number from which all the artifacts will be versioned and, as a result, automatically registered in the corresponding file (e.g. pom.xml).

                                            When selecting the edp versioning type, the extra field will appear:

                                            EDP versioning

                                            Type the version number from which you want the artifacts to be versioned.

                                            Note

                                            The Start Version From field should be filled out in compliance with the semantic versioning rules, e.g. 1.2.3 or 10.10.10. Please refer to the Semantic Versioning page for details.

c. Specify the pattern to validate a commit message. Use a regular expression to indicate the pattern that the project follows to validate a commit message in the code review pipeline. An example of the pattern: ^\[PROJECT_NAME-\d{4}\]:.*$

                                            Integrate with Jira server

                                            d. Select the Integrate with Jira server check box in case it is required to connect Jira tickets with the commits and have a respective label in the Fix Version field.

                                            Note

To adjust the Jira integration functionality, first apply the necessary changes described on the Adjust Jira Integration and Adjust VCS Integration With Jira pages. Pay attention that the Jira integration feature is not available when using the GitLab CI tool.

                                            e. As soon as the Jira server is set, select it in the Jira Server field.

                                            f. Specify the pattern to find a Jira ticket number in a commit message. Based on this pattern, the value from EDP will be displayed in Jira.

                                            Mapping fields

                                            g. In the Advanced Mapping section, specify the names of the Jira fields that should be filled in with attributes from EDP:

                                            1. Select the name of the field in a Jira ticket. The available fields are the following: Fix Version/s, Component/s and Labels.

                                            2. Click the Add button to add the mapping field name.

                                            3. Enter Jira pattern for the field name:

                                              • For the Fix Version/s field, select the EDP_VERSION variable that represents an EDP upgrade version, as in 2.7.0-SNAPSHOT. Combine variables to make the value more informative. For example, the pattern EDP_VERSION-EDP_COMPONENT will be displayed as 2.7.0-SNAPSHOT-nexus-operator in Jira.
• For the Component/s field, select the EDP_COMPONENT variable that defines the name of the existing repository. For example, nexus-operator.
• For the Labels field, select the EDP_GITTAG variable that defines a tag assigned to the commit in GitHub. For example, build/2.7.0-SNAPSHOT.59.
                                            4. Click the bin icon to remove the Jira field name.

                                            h. Click the Apply button to add the library to the Libraries list.

                                            Note

After the library is added, inspect the Library Overview part.

                                            Note

Since EDP v3.3.0, the CI tool field has been hidden. Now EDP Portal automatically defines the CI tool depending on which one is deployed with EDP. If both Jenkins and Tekton are deployed, EDP Portal chooses Tekton by default. To define the CI tool manually, operate with the spec.ciTool parameters.

                                            "},{"location":"user-guide/add-library/#related-articles","title":"Related Articles","text":"
                                            • Manage Libraries
                                            • Add CD Pipeline
                                            • Add Other Code Language
                                            • Adjust GitLab CI Tool
                                            • Adjust Jira Integration
                                            • Adjust VCS Integration With Jira
                                            • Integrate GitHub/GitLab in Jenkins
                                            • Integrate GitHub/GitLab in Tekton
                                            • Manage Jenkins CI Pipeline Job Provisioner
                                            • Manage Jenkins Agent
                                            • Perf Server Integration
                                            "},{"location":"user-guide/add-marketplace/","title":"Add Component via Marketplace","text":"

                                            With the built-in Marketplace, users can easily create a new application by clicking several buttons. This page contains detailed guidelines on how to create a new component with the help of the Marketplace feature.

                                            "},{"location":"user-guide/add-marketplace/#add-component","title":"Add Component","text":"

                                            To create a component from template, follow the instructions below:

                                            1. Navigate to the Marketplace section on the navigation bar to see the Marketplace overview page.

                                            2. Click the component name to open its details window and click Create from template:

                                              Create from template

                                            3. Fill in the required fields and click Apply:

                                              Creating from template window

4. As a result, a new component will appear in the Components section:

                                              Creating from template window

                                            "},{"location":"user-guide/add-marketplace/#related-articles","title":"Related Articles","text":"
                                            • Marketplace Overview
                                            • Add Application
                                            • Add Library
                                            • Add Infrastructure
                                            "},{"location":"user-guide/add-quality-gate/","title":"Add Quality Gate","text":"

                                            This section describes how to use quality gate in EDP and how to customize the quality gate for the CD pipeline with the selected build version of the promoted application between stages.

                                            "},{"location":"user-guide/add-quality-gate/#apply-new-quality-gate-to-pipelines","title":"Apply New Quality Gate to Pipelines","text":"

A quality gate pipeline is a regular Tekton pipeline with a specific label: app.edp.epam.com/pipelinetype: deploy. To add and apply the quality gate to your pipelines, follow the steps below:

                                            1. To use the Tekton pipeline as a quality gate pipeline, add this label to the pipelines:

                                            metadata:\n  labels:\n    app.edp.epam.com/pipelinetype: deploy\n
                                            2. Insert the value that is the quality gate name displayed in the quality gate drop-down list of the CD pipeline menu:
                                            metadata:\n  name: <name-of-quality-gate>\n
3. Ensure the pipeline contains the steps and logic required for your project, and that its last task is promote-images, whose parameters are mandatory.
                                            spec:\n  params:\n    - default: ''\n      description: Codebases with a tag separated with a space.\n      name: CODEBASE_TAG\n      type: string\n    - default: ''\n      name: CDPIPELINE_CR\n      type: string\n    - default: ''\n      name: CDPIPELINE_STAGE\n      type: string\n  tasks:\n    - name: promote-images\n      params:\n        - name: CODEBASE_TAG\n          value: $(params.CODEBASE_TAG)\n        - name: CDPIPELINE_STAGE\n          value: $(params.CDPIPELINE_STAGE)\n        - name: CDPIPELINE_CR\n          value: $(params.CDPIPELINE_CR)\n      runAfter:\n        - <last-task-name>\n      taskRef:\n        kind: Task\n        name: promote-images\n
                                            4. Create a new pipeline with a unique name or modify your created pipeline with the command below. Please be aware that the \u2039edp-project\u203a value is the name of the EDP tenant:
kubectl apply -f <file>.yaml --namespace \u2039edp-project\u203a\n
                                            Example: file.yaml
                                             apiVersion: tekton.dev/v1beta1\n kind: Pipeline\n metadata:\n   labels:\n     app.edp.epam.com/pipelinetype: deploy\n   name: <name-of-quality-gate>\n   namespace: edp\n spec:\n   params:\n     - default: >-\n         https://<CI-pipeline-provisioner>-edp.<cluster-name>.aws.main.edp.projects.epam.com/#/namespaces/$(context.pipelineRun.namespace)/pipelineruns/$(context.pipelineRun.name)\n       name: pipelineUrl\n       type: string\n     - default: ''\n       description: Codebases with a tag separated with a space.\n       name: CODEBASE_TAG\n       type: string\n     - default: ''\n       name: CDPIPELINE_CR\n       type: string\n     - default: ''\n       name: CDPIPELINE_STAGE\n       type: string\n   tasks:\n     - name: autotests\n       params:\n         - name: BASE_IMAGE\n           value: bitnami/kubectl:1.25.4\n         - name: EXTRA_COMMANDS\n           value: echo \"Hello World\"\n       taskRef:\n         kind: Task\n         name: run-quality-gate\n     - name: promote-images\n       params:\n         - name: CODEBASE_TAG\n           value: $(params.CODEBASE_TAG)\n         - name: CDPIPELINE_STAGE\n           value: $(params.CDPIPELINE_STAGE)\n         - name: CDPIPELINE_CR\n           value: $(params.CDPIPELINE_CR)\n       runAfter:\n         - autotests\n       taskRef:\n         kind: Task\n         name: promote-images\n
                                            "},{"location":"user-guide/add-quality-gate/#run-quality-gate","title":"Run Quality Gate","text":"

Before running the quality gate, ensure that the created CD pipeline is deployed to the environment and that the application is successfully deployed and ready for the quality gate. To run the quality gate, follow the steps below:

1. Check the CD pipeline status. To do this, open the created CD pipeline, select the Image stream version, click the Deploy button, and wait until the Applications, Health and Sync statuses become green. This means the application is successfully deployed and ready for the quality gate.

                                              CD pipeline stage overview

2. Select the <name-of-quality-gate> quality gate from the drop-down list and click the Run button. The execution process will start in the Pipelines menu:

                                              Quality gate pipeline status

                                            "},{"location":"user-guide/add-quality-gate/#add-stage-for-quality-gate","title":"Add Stage for Quality Gate","text":"

                                            For a better understanding of this section, please read the documentation about how to add a new stage for quality gate. The scheme below illustrates two approaches of adding quality gates:

                                            Types of adding quality gate

• The first approach adds a specific quality gate to a specific pipeline stage.
• The second approach is optional and implies activating the Promote in pipelines option while creating a CD pipeline, so that the quality gates are passed in a certain sequence.

As a result, after the quality gate is successfully passed, the application image is promoted to the next stage.

                                            "},{"location":"user-guide/add-quality-gate/#related-articles","title":"Related Articles","text":"
                                            • Add CD Pipeline
                                            "},{"location":"user-guide/application/","title":"Manage Applications","text":"

                                            This section describes the subsequent possible actions that can be performed with the newly added or existing applications.

                                            "},{"location":"user-guide/application/#check-and-remove-application","title":"Check and Remove Application","text":"

                                            As soon as the application is successfully provisioned, the following will be created:

                                            • Code Review and Build pipelines in Jenkins/Tekton for this application. The Build pipeline will be triggered automatically if at least one environment is already added.
                                            • A new project in Gerrit or another VCS.
                                            • SonarQube integration will be available after the Build pipeline in Jenkins/Tekton is passed.
                                            • Nexus Repository Manager will be available after the Build pipeline in Jenkins/Tekton is passed as well.

                                            The added application will be listed in the Applications list allowing you to do the following:

                                            Applications menu

• Application status - displays the application status: Created, Failed, or In progress.
• Application name (clickable) - displays the application name defined during creation.
• Open documentation - opens the documentation that leads to this page.
• Enable filtering - enables filtering by application name and the namespace where the custom resource is located.
                                            • Create new application - displays the Create new component menu.
                                            • Edit application - edit the application by selecting the options icon next to its name in the applications list, and then selecting Edit. For details see the Edit Existing Application section.
                                            • Delete application - remove application by selecting the options icon next to its name in the applications list, and then selecting Delete.

                                              Note

                                              The application that is used in a CD pipeline cannot be removed.

                                            There are also options to sort the applications:

                                            • Sort the existing applications in a table by clicking the sorting icons in the table header. Sort the applications alphabetically by their name, language, build tool, framework, and CI tool. You can also sort the applications by their status: Created, Failed, or In progress.
                                            • Select a number of applications displayed per page (15, 25 or 50 rows) and navigate between pages if the number of applications exceeds the capacity of a single page:

                                              Applications pages

                                            "},{"location":"user-guide/application/#edit-existing-application","title":"Edit Existing Application","text":"

                                            EDP Portal provides the ability to enable, disable or edit the Jira Integration functionality for applications.

                                            1. To edit an application directly from the Applications overview page or when viewing the application data:

                                              • Select Edit in the options icon menu:

                                              Edit application on the Applications overview page

                                              Edit application when viewing the application data

                                              • The Edit Application dialog opens.
                                            2. To enable Jira integration, in the Edit Application dialog do the following:

                                              Edit application

                                              a. Mark the Integrate with Jira server check box and fill in the necessary fields. Please see steps d-h of the Add Application page.

                                              b. Select the Apply button to apply the changes.

                                              c. (Optional) Enable commit validation mechanism by navigating to Jenkins/Tekton and adding the commit-validate stage in the Code Review pipeline to have your commits reviewed.

                                            3. To disable Jira integration, in the Edit Application dialog do the following:

                                              a. Unmark the Integrate with Jira server check box.

                                              b. Select the Apply button to apply the changes.

c. (Optional) Disable the commit validation mechanism by navigating to Jenkins/Tekton and removing the commit-validate stage from the Code Review pipeline.

                                            4. To create, edit and delete application branches, please refer to the Manage Branches page.

                                            "},{"location":"user-guide/application/#related-articles","title":"Related Articles","text":"
                                            • Add Application
                                            • Manage Branches
                                            "},{"location":"user-guide/autotest/","title":"Manage Autotests","text":"

                                            This section describes the subsequent possible actions that can be performed with the newly added or existing autotests.

                                            "},{"location":"user-guide/autotest/#check-and-remove-autotest","title":"Check and Remove Autotest","text":"

                                            As soon as the autotest is successfully provisioned, the following will be created:

                                            • Code Review and Build pipelines in Jenkins/Tekton for this autotest. The Build pipeline will be triggered automatically if at least one environment is already added.
                                            • A new project in Gerrit or another VCS.
                                            • SonarQube integration will be available after the Build pipeline in Jenkins/Tekton is passed.
                                            • Nexus Repository Manager will be available after the Build pipeline in Jenkins/Tekton is passed as well.

                                            Info

                                            To navigate quickly to OpenShift, Jenkins/Tekton, Gerrit, SonarQube, Nexus, and other resources, click the Overview section on the navigation bar and hit the necessary link.

                                            The added autotest will be listed in the Autotests list allowing you to do the following:

                                            Autotests page

• Autotest status - displays the autotest status: Created, Failed, or In progress.
• Autotest name (clickable) - displays the autotest name defined during creation.
• Open documentation - opens the documentation that leads to this page.
• Enable filtering - enables filtering by autotest name and the namespace where the custom resource is located.
                                            • Create new autotest - displays the Create new component menu.
                                            • Edit autotest - edit the autotest by selecting the options icon next to its name in the autotests list, and then selecting Edit. For details see the Edit Existing Autotest section.
                                            • Delete autotest - remove autotest with the corresponding database and Jenkins/Tekton pipelines by selecting the options icon next to its name in the Autotests list, and then selecting Delete:

                                              Note

                                              The autotest that is used in a CD pipeline cannot be removed.

There are also options to sort the autotests:

                                            • Sort the existing autotests in a table by clicking the sorting icons in the table header. Sort the autotests alphabetically by their name, language, build tool, framework, and CI tool. You can also sort the autotests by their status: Created, Failed, or In progress.
                                            • Select a number of autotests displayed per page (15, 25 or 50 rows) and navigate between pages if the number of autotests exceeds the capacity of a single page.
                                            "},{"location":"user-guide/autotest/#edit-existing-autotest","title":"Edit Existing Autotest","text":"

                                            EDP Portal provides the ability to enable, disable or edit the Jira Integration functionality for autotests.

                                            1. To edit an autotest directly from the Autotests overview page or when viewing the autotest data:

                                              • Select Edit in the options icon menu:

                                                Edit autotest on the autotests overview page

                                                Edit autotest when viewing the autotest data

                                              • The Edit Autotest dialog opens.
                                            2. To enable Jira integration, on the Edit Autotest page do the following:

                                              Edit library

                                              a. Mark the Integrate with Jira server check box and fill in the necessary fields. Please see steps d-h on the Add Autotests page.

                                              b. Select the Apply button to apply the changes.

                                              c. Navigate to Jenkins/Tekton and add the create-jira-issue-metadata stage in the Build pipeline. Also add the commit-validate stage in the Code Review pipeline.

                                              Note

                                              Pay attention that the Jira integration feature is not available when using the GitLab CI tool.

                                              Note

                                              To adjust the Jira integration functionality, first apply the necessary changes described on the Adjust Jira Integration and Adjust VCS Integration With Jira pages.

                                            3. To disable Jira integration, in the Edit Autotest dialog do the following:

                                              a. Unmark the Integrate with Jira server check box.

                                              b. Select the Apply button to apply the changes.

                                              c. Navigate to Jenkins/Tekton and remove the create-jira-issue-metadata stage in the Build pipeline. Also remove the commit-validate stage in the Code Review pipeline.

                                              As a result, the necessary changes will be applied.

                                            4. To create, edit and delete autotest branches, please refer to the Manage Branches page.

                                            "},{"location":"user-guide/autotest/#add-autotest-as-a-quality-gate","title":"Add Autotest as a Quality Gate","text":"

                                            In order to add an autotest as a quality gate to a newly added CD pipeline, do the following:

                                            1. Create a CD pipeline with the necessary parameters. Please refer to the Add CD Pipeline section for the details.

                                            2. In the Stages menu, select the Autotest quality gate type. It means the promoting process should be confirmed by the successful passing of the autotests.

                                            3. In the additional fields, select the previously created autotest name and specify its branch.

                                            4. After filling in all the necessary fields, click the Create button to start the provisioning of the pipeline. After the CD pipeline is added, the new namespace containing the stage name will be created in Kubernetes (in OpenShift, a new project will be created) with the following name pattern: [cluster name]-[cd pipeline name]-[stage name].

                                            "},{"location":"user-guide/autotest/#configure-autotest-launch-at-specific-stage","title":"Configure Autotest Launch at Specific Stage","text":"

                                            In order to configure the added autotest launch at the specific stage with necessary parameters, do the following:

                                            1. Add the necessary stage to the CD pipeline. Please refer to the Add CD Pipeline documentation for the details.

                                            2. Navigate to the run.json file and add the stage name and the specific parameters.
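
                                              The exact structure of run.json depends on how your autotest reads it; the snippet below is a purely hypothetical sketch, assuming a Maven-based autotest and two stage names (sit and qa), where each stage name is mapped to the command that should be executed for it:

                                              {\n  \"sit\": \"mvn test -P sit\",\n  \"qa\": \"mvn test -P qa\"\n}\n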

                                            "},{"location":"user-guide/autotest/#launch-autotest-locally","title":"Launch Autotest Locally","text":"

                                            There is an ability to run the autotests locally using an IDE (Integrated Development Environment), such as IntelliJ IDEA, NetBeans, etc. To launch the autotest project for local verification, perform the following steps:

                                            1. Clone the project to the local machine.

                                            2. Open the project in IDEA and find the run.json file to copy out the necessary command value.

                                            3. Paste the copied command value into the Command line field and run it with the necessary values and namespace.

                                            4. As a result, all the launched tests will be executed.

                                            "},{"location":"user-guide/autotest/#related-articles","title":"Related Articles","text":"
                                            • Add Application
                                            • Add Autotests
                                            • Add CD Pipeline
                                            • Adjust Jira Integration
                                            • Adjust VCS Integration With Jira
                                            • Manage Branches
                                            "},{"location":"user-guide/build-pipeline/","title":"Build Pipeline","text":"

                                            This section provides details on the Build pipeline of the EDP CI/CD pipeline framework. Explore below the pipeline purpose, stages and possible actions to perform.

                                            "},{"location":"user-guide/build-pipeline/#build-pipeline-purpose","title":"Build Pipeline Purpose","text":"

                                            The purpose of the Build pipeline contains the following points:

                                            • Check out, test, tag, and build an image from the mainstream branch after a patch set is submitted in order to inspect whether the code integrated with the mainstream fits all quality gates and can be built and tested;
                                            • Be triggered if any new patch set is submitted;
                                            • Tag a specific commit in Gerrit in case the build is successful;
                                            • Build a Docker image with an application that can be afterward deployed using the Jenkins Deploy pipeline.

                                            Find below the functional diagram of the Build pipeline with the default stages:

                                            build-pipeline

                                            "},{"location":"user-guide/build-pipeline/#build-pipeline-for-application-and-library","title":"Build Pipeline for Application and Library","text":"

                                            The Build pipeline is triggered automatically after the Code Review pipeline is completed and the changes are submitted.

                                            To review the Build pipeline, take the following steps:

                                            1. Open Jenkins via the created link in Gerrit or via the Admin Console Overview page.

                                            2. Click the Build pipeline link to open its stages for the application and library codebases:

                                              • Init - initialization of the Build pipeline inputs;
                                              • Checkout - checkout of the application code;
                                              • Get-version - get the version from the pom.xml file and add the build number;
                                              • Compile - code compilation;
                                              • Tests - tests execution;
                                              • Sonar - Sonar launch that checks the whole code;
                                              • Build - artifact building and adding to Nexus;
                                              • Build-image - Docker image building and adding to the Docker Registry. The Build pipeline for the library has the same stages as the application except the Build-image stage, i.e. the Docker image is not built.
                                              • Push - pushing of the artifact to Nexus and of the Docker image to the Docker Registry;
                                              • Ecr-to-docker - the docker image, after being built, is copied from the ECR project registry to DockerHub via the Crane tool. The stage is not the default and can be set for the application codebase type. To set this stage, please refer to the EcrToDocker.groovy file and to the Promote Docker Images From ECR to Docker Hub page.
                                              • Git-tag - adding of the corresponding Git tag of the current commit to relate with the image, artifact, and build version.

                                            Note

                                            For more details on stages, please refer to the Pipeline Stages documentation.

                                            After the Build pipeline runs all the stages successfully, the corresponding tag numbers will be created in Kubernetes/OpenShift and Nexus.

                                            "},{"location":"user-guide/build-pipeline/#check-the-tag-in-kubernetesopenshift-and-nexus","title":"Check the Tag in Kubernetes/OpenShift and Nexus","text":"
                                            1. After the Build pipeline is completed, check that the tag name matches the commit revision. Simply navigate to Gerrit \u2192 Projects \u2192 List \u2192 select the project \u2192 Tags.

                                              Note

                                              For the Import strategy, navigate to the repository from which a codebase is imported \u2192 Tags. This applies to both GitHub and GitLab.

                                            2. Open the Kubernetes/OpenShift Overview page and click the link to Nexus and check the build of a new version.

                                            3. Switch to Kubernetes \u2192 CodebaseImageStream (or OpenShift \u2192 Builds \u2192 Images) \u2192 click the image stream that will be used for deployment.

                                            4. Check the corresponding tag.

                                            "},{"location":"user-guide/build-pipeline/#configure-and-start-pipeline-manually","title":"Configure and Start Pipeline Manually","text":"

                                            The Build pipeline can be started manually. To set the necessary stages and trigger the pipeline manually, take the following steps:

                                            1. Open the Build pipeline for the created library.

                                            2. Click the Build with parameters option from the left-side menu. Modify the stages by removing the whole object {\"name\": \"tests\"} from the stages array, where name is a key and tests is the name of the stage that should be skipped (see the illustrative example after this list).

                                            3. Open Jenkins and check the successful execution of all stages.
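
                                            For illustration only (assuming a Maven application with a default Build pipeline stage set), removing the tests stage means turning the first value below into the second one in the Build with parameters form; the stage names are taken from the default stage list above, and the exact default value in your job may differ:

                                              [{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},{\"name\": \"tests\"},{\"name\": \"build\"},{\"name\": \"git-tag\"}]\n
                                              [{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},{\"name\": \"build\"},{\"name\": \"git-tag\"}]\n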

                                            "},{"location":"user-guide/build-pipeline/#related-articles","title":"Related Articles","text":"
                                            • Add Application
                                            • Add Autotest
                                            • Add Library
                                            • Adjust Jira Integration
                                            • Adjust VCS Integration With Jira
                                            • Autotest as Quality Gate
                                            • Pipeline Stages
                                            "},{"location":"user-guide/cd-pipeline-details/","title":"CD Pipeline Details","text":"

                                            CD Pipeline (Continuous Delivery Pipeline) - an EDP business entity that describes the whole delivery process of the selected application set via the respective stages. The main idea of the CD pipeline is to promote the application build version between the stages by applying sequential verification (i.e. the second stage becomes available only if the verification on the first stage is successfully completed). The CD pipeline can also include an essential set of applications with their specific stages.

                                            In other words, the CD pipeline allows the selected image stream (Docker container in Kubernetes terms) to pass a set of stages for the verification process (SIT - system integration testing with the automatic type of a quality gate, QA - quality assurance, UAT - user acceptance testing with the manual testing).

                                            Note

                                            It is possible to change the image stream for the application in the CD pipeline. Please refer to the Edit CD Pipeline section for the details.

                                            A CI/CD pipeline helps to automate steps in a software delivery process, such as initializing the code build, running automated tests, and deploying to a staging or production environment. Automated pipelines remove manual errors, provide a standardized development feedback cycle, and enable fast product iterations. To get more information on the CI pipeline, please refer to the CI Pipeline Details chapter.

                                            The codebase stream is used as a holder for the output of the stage, i.e. after the Docker container (or an image stream in OpenShift terms) passes the stage verification, it will be placed to the new codebase stream. Every codebase has a branch that has its own codebase stream - a Docker container that is an output of the build for the corresponding branch.

                                            Note

                                            For more information on the main terms used in EPAM Delivery Platform, please refer to the EDP Glossary.

                                            EDP CD pipeline

                                            Explore the details of the CD pipeline below.

                                            "},{"location":"user-guide/cd-pipeline-details/#deploy-pipeline","title":"Deploy Pipeline","text":"

                                            The Deploy pipeline is used by default on any stage of the Continuous Delivery pipeline. It addresses the following concerns:

                                            • Deploying the application(s) to the main STAGE (SIT, QA, UAT) environment in order to run autotests and to promote image build versions to the next environments afterwards.
                                            • Deploying the application(s) to a custom STAGE environment in order to run autotests and check manually that everything is ok with the application.
                                            • Deploying the latest, a stable, or a particular numeric version of an image build that exists in the Docker registry.
                                            • Promoting the image build versions from the main STAGE (SIT, QA, UAT) environment.
                                            • Auto deploying the application(s) version from the passed payload (using the CODEBASE_VERSION job parameter).

                                            Find below the functional diagram of the Deploy pipeline with the default stages:

                                            Note

                                            The input for a CD pipeline depends on the Trigger Type for a deploy stage and can be either Manual or Auto.

                                            Deploy pipeline stages

                                            "},{"location":"user-guide/cd-pipeline-details/#related-articles","title":"Related Articles","text":"
                                            • Add Application
                                            • Add Autotest
                                            • Add CD Pipeline
                                            • Add Library
                                            • CI Pipeline Details
                                            • CI/CD Overview
                                            • EDP Glossary
                                            • EDP Pipeline Framework
                                            • EDP Pipeline Stages
                                            • Prepare for Release
                                            "},{"location":"user-guide/ci-pipeline-details/","title":"CI Pipeline Details","text":"

                                            CI Pipeline (Continuous Integration Pipeline) - an EDP business entity that describes the integration of changes made to a codebase into a single project. The main idea of the CI pipeline is to review the changes in the code submitted through a Version Control System (VCS) and build a new codebase version so that it can be transmitted to the Continuous Delivery Pipeline for the rest of the delivery process.

                                            There are three codebase types in EPAM Delivery Platform:

                                            1. Applications - a codebase that is developed in the Version Control System, has the full lifecycle starting from the Code Review stage to its deployment to the environment;
                                            2. Libraries - this codebase is similar to the Application type, but it is not deployed; instead, it is stored in the Artifactory. The library can be connected to other applications/libraries;
                                            3. Autotests - a codebase that inspects the code and can be used as a quality gate for the CD pipeline stage. The autotest only has the Code Review pipeline and is launched for the stage verification.

                                            Note

                                            For more information on the above mentioned codebase types, please refer to the Add Application, Add Library, Add Autotests and Autotest as Quality Gate pages.

                                            EDP CI pipeline

                                            "},{"location":"user-guide/ci-pipeline-details/#related-articles","title":"Related Articles","text":"
                                            • Add Application
                                            • Add Autotest
                                            • Add Library
                                            • Adjust Jira Integration
                                            • Adjust VCS Integration With Jira
                                            • Autotest as Quality Gate
                                            • Build Pipeline
                                            • Code Review Pipeline
                                            • Pipeline Stages
                                            "},{"location":"user-guide/cicd-overview/","title":"EDP CI/CD Overview","text":"

                                            This chapter provides information on CI/CD basic definitions and flow, as well as its components and process.

                                            "},{"location":"user-guide/cicd-overview/#cicd-basic-definitions","title":"CI/CD Basic Definitions","text":"

                                            The Continuous Integration part means the following:

                                            • all components of the application development are in the same place and perform the same processes for running;
                                            • the results are published in one place and replicated into EPAM GitLab or VCS (version control system);
                                            • the repository also includes a storage tool (e.g. Nexus) for all binary artifacts that are produced by the Jenkins CI server after submitting changes from Code Review tool into VCS;

                                            The Code Review and Build pipelines are used before the code is delivered. An important part of both of them is the integration tests that are launched during the testing stage.

                                            Many applications (SonarQube, Gerrit, etc.) used by the project need databases for their operation.

                                            Continuous Delivery is an approach that allows producing an application in short cycles so that it can be reliably released at any point in time. This part is tightly bound to the usage of the Code Review, Build, and Deploy pipelines.

                                            The Deploy pipelines deploy the applications' configuration and their specific versions, launch automated tests, and control quality gates for the specified environment. As a result of the successfully completed process, the specific versions of images are promoted to the next environment. All environments are sequential and promote the build versions of applications one by one. The logic of each stage is described as code in Jenkins pipelines and stored in the VCS.

                                            During the CI/CD, there are several continuous processes that run in the repository, find below the list of possible actions:

                                            • Review the code with the help of the Gerrit tool;
                                            • Run the static analysis using SonarQube to control the quality of the source code and keep the historical data, which helps to understand the trend and effectiveness of particular teams and members;
                                            • Analyze application source code using SAST, byte code, and binaries for coding/design conditions that are indicative of security vulnerabilities;
                                            • Build the code with Jenkins and run automated tests that are written to make sure the applied changes will not break any functionality.

                                            Note

                                            For the details on autotests, please refer to the Autotest, Add Autotest, and Autotest as Quality Gate pages.

                                            The release process is divided into cycles and provides regular delivery of completed pieces of functionality while continuing the development and integration of new functionality into the product mainline.

                                            Explore the main flow that is displayed on the diagram below:

                                            EDP CI/CD pipeline

                                            "},{"location":"user-guide/cicd-overview/#related-articles","title":"Related Articles","text":"
                                            • Add Application
                                            • Add Library
                                            • Add CD Pipeline
                                            • CI Pipeline Details
                                            • CD Pipeline Details
                                            • Customize CI Pipeline
                                            • EDP Pipeline Framework
                                            • Customize CD Pipeline
                                            • EDP Stages
                                            • Glossary
                                            • Use Terraform Library in EDP
                                            "},{"location":"user-guide/cluster/","title":"Manage Clusters","text":"

                                            This section describes the subsequent possible actions that can be performed with the newly added or existing clusters.

                                            In a nutshell, a cluster in EDP Portal is a Kubernetes secret that stores credentials and an endpoint to connect to another cluster. Adding new clusters allows users to deploy applications in several clusters, thus improving the flexibility of your infrastructure.

                                            The added cluster will be listed in the clusters list allowing you to do the following:

                                            Clusters list

                                            "},{"location":"user-guide/cluster/#view-authentication-data","title":"View Authentication Data","text":"

                                            To view authentication data that is used to log in to the cluster, run the kubectl describe command:

                                            kubectl describe secret cluster_name -n edp\n
                                            "},{"location":"user-guide/cluster/#delete-cluster","title":"Delete Cluster","text":"

                                            To delete cluster, use the kubectl delete command as follows:

                                            kubectl delete secret cluster_name -n edp\n
                                            "},{"location":"user-guide/cluster/#related-articles","title":"Related Articles","text":"
                                            • Add Cluster
                                            • Add Application
                                            "},{"location":"user-guide/code-review-pipeline/","title":"Code Review Pipeline","text":"

                                            This section provides details on the Code Review pipeline of the EDP CI/CD framework. Explore below the pipeline purpose, stages and possible actions to perform.

                                            "},{"location":"user-guide/code-review-pipeline/#code-review-pipeline-purpose","title":"Code Review Pipeline Purpose","text":"

                                            The purpose of the Code Review pipeline contains the following points:

                                            • Check out and test a particular developer's change (Patch Set) in order to inspect whether the code fits all the quality gates and can be built and tested;
                                            • Be triggered if any new Patch Set appears in Gerrit;
                                            • Send feedback about the build process in Jenkins to the review card in Gerrit;
                                            • Send feedback about Sonar violations that have been found during the Sonar stage.

                                            Find below the functional diagram of the Code Review pipeline with the default stages:

                                            Code review pipeline stages

                                            "},{"location":"user-guide/code-review-pipeline/#code-review-pipeline-for-applications-and-libraries","title":"Code Review Pipeline for Applications and Libraries","text":"

                                            Note

                                            Make sure the necessary applications or libraries are added to the Admin Console. For the details on how to add a codebase, please refer to the Add Application or Add Library pages accordingly.

                                            To discover the Code Review pipeline, apply changes that will trigger the Code Review pipeline automatically and take the following steps:

                                            1. Navigate to Jenkins. In Admin Console, go to the Overview section on the left-side navigation bar and click the link to Jenkins.

                                              Link to Jenkins

                                              or

                                              In Gerrit, go to the Patch Set page and click the CI Jenkins link in the Change Log section

                                              Link from Gerrit

                                              Note

                                              The Code Review pipeline starts automatically for every codebase type (Application, Autotests, Library).

                                            2. Check the Code Review pipeline for the application or for the library. Click the application name in Jenkins and switch to the additional release-01 branch that is created with the respective Code Review and Build pipelines.

                                            3. Click the Code Review pipeline link to open the Code Review pipeline stages for the application:

                                              • Init - initialization of the codebase information and loading of the common libraries
                                              • gerrit-checkout / checkout - the checkout of patch sets from Gerrit. The stage is called gerrit-checkout for the Create and Clone strategies of adding a codebase and checkout for the Import strategy.
                                              • compile - the source code compilation
                                              • tests - the launch of the tests
                                              • sonar - the launch of the static code analyzer that checks the whole code
                                              • helm-lint - the launch of the linting tests for deployment charts
                                              • dockerfile-lint - the launch of the linting tests for Dockerfile
                                              • commit-validate - the stage is optional and appears under enabled integration with Jira. Please refer to the Adjust Jira Integration and Adjust VCS Integration With Jira sections for the details.

                                            Note

                                            For more details on EDP pipeline stages, please refer to the Pipeline Stages section.

                                            "},{"location":"user-guide/code-review-pipeline/#code-review-pipeline-for-autotests","title":"Code Review Pipeline for Autotests","text":"

                                            To discover the Code Review pipeline for autotests, first, apply changes to a codebase that will trigger the Code Review pipeline automatically. The flow for autotests is similar to that for applications and libraries; however, there are some differences. Explore them below.

                                            1. Open the run.json file for the created autotest.

                                              Note

                                              Please refer to the Add Autotest page for the details on how to create an autotest.

                                              The run.json file keeps a command that is executed on this stage.

                                            2. Open the Code Review pipeline in Jenkins (via the link in Gerrit or via the Admin Console Overview page) and click the Configure option from the left side. There are only four stages available: Initialization - Gerrit-checkout - tests - sonar (the launch of the static code analyzer that checks the whole code).

                                            3. Open the Code Review pipeline in Jenkins with the successfully passed stages.

                                            "},{"location":"user-guide/code-review-pipeline/#retrigger-code-review-pipeline","title":"Retrigger Code Review Pipeline","text":"

                                            The Code Review pipeline can be retriggered manually, especially if the pipeline failed before. To retrigger it, take the following steps:

                                            1. In Jenkins, click the Retrigger option from the drop-down menu for the specific Code Review pipeline version number. Alternatively, click the Jenkins main page and select the Query and Trigger Gerrit Patches option.

                                            2. Click Search and select the check box of the necessary change and patch set and then click Trigger Selected.

                                            As a result, the Code Review pipeline will be retriggered.

                                            "},{"location":"user-guide/code-review-pipeline/#configure-code-review-pipeline","title":"Configure Code Review Pipeline","text":"

                                            The Configure option allows adding/removing the stage from the Code Review pipeline if needed. To configure the Code Review pipeline, take the following steps:

                                            1. Being in Jenkins, click the Configure option from the left-side menu.

                                            2. Define the stages set that will be executed for the current pipeline.

                                              • To remove a stage, select and remove the whole object {\"name\": \"tests\"} from the stages array, where name is a key and tests is the name of the stage that should no longer be executed.
                                              • To add a stage, add the object {\"name\": \"tests\"} to the stages array, where name is a key and tests is the name of the stage that should be added (see the illustrative example after this list).

                                              Note

                                              All stages are launched from the shared library on GitHub. The list of libraries is located in the edp-library-stages repository.

                                            3. To apply the new stage process, retrigger the Code Review pipeline. For details, please refer to the Retrigger Code Review Pipeline section.

                                            4. Open Jenkins and check that there is no removed stage in the Code Review pipeline.
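
                                            For illustration only (assuming an application codebase added with the Create or Clone strategy, so the gerrit-checkout stage is used), a stages definition for the Code Review pipeline might look as follows; the stage names are taken from the Code Review stage list earlier on this page, and the exact default value in your job may differ:

                                              [{\"name\": \"gerrit-checkout\"},{\"name\": \"compile\"},{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"helm-lint\"},{\"name\": \"dockerfile-lint\"}]\n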

                                            "},{"location":"user-guide/code-review-pipeline/#related-articles","title":"Related Articles","text":"
                                            • Add Application
                                            • Add Autotest
                                            • Add Library
                                            • Adjust Jira Integration
                                            • Adjust VCS Integration With Jira
                                            • Autotest as Quality Gate
                                            • Pipeline Stages
                                            "},{"location":"user-guide/container-stages/","title":"CI Pipeline for Container","text":"

                                            EPAM Delivery Platform provides Container support, allowing you to work with a Dockerfile that is processed by means of stages in the Code Review and Build pipelines. These pipelines are expected to be created after the Container Library is added.

                                            "},{"location":"user-guide/container-stages/#code-review-pipeline-stages","title":"Code Review Pipeline Stages","text":"

                                            In the Code Review pipeline, the following stages are available:

                                            1. checkout stage is a standard step during which all files are checked out from a selected branch of the Git repository.

                                            2. dockerfile-lint stage uses the hadolint tool to perform linting tests for the Dockerfile.

                                            3. dockerbuild-verify stage collects artifacts and builds an image from the Dockerfile without pushing to registry. This stage is intended to check if the image is built.

                                            "},{"location":"user-guide/container-stages/#build-pipeline-stages","title":"Build Pipeline Stages","text":"

                                            In the Build pipeline, the following stages are available:

                                            1. checkout stage is a standard step during which all files are checked out from a master branch of the Git repository.

                                            2. get-version stage where the library version is determined either via:

                                              2.1. EDP versioning functionality.

                                              2.2. Default versioning functionality.

                                            3. dockerfile-lint stage uses the hadolint tool to perform linting tests for Dockerfile.

                                            4. build-image-kaniko stage builds Dockerfile using the Kaniko tool.

                                            5. git-tag stage that is intended for tagging a repository in Git.

                                            "},{"location":"user-guide/container-stages/#tools-for-container-images-building","title":"Tools for Container Images Building","text":"

                                            EPAM Delivery Platform supports both the Kaniko tool and the BuildConfig object. The Kaniko tool allows building container images from a Dockerfile on both the Kubernetes and OpenShift platforms. The BuildConfig object enables building container images only on the OpenShift platform.

                                            EDP uses the BuildConfig object and the Kaniko tool for creating containers from a Dockerfile and pushing them to the internal container image registry. For Kaniko, it is also possible to change the Docker config file and push the containers to different container image registries.

                                            "},{"location":"user-guide/container-stages/#supported-container-image-build-tools","title":"Supported Container Image Build Tools","text":"Platform Build Tools Kubernetes Kaniko OpenShift Kaniko, BuildConfig"},{"location":"user-guide/container-stages/#change-build-tool-in-the-build-pipeline","title":"Change Build Tool in the Build Pipeline","text":"

                                            By default, EPAM Delivery Platform uses the build-image-kaniko stage for building container images on the Kubernetes platform and the build-image-from-dockerfile stage for building container images on the OpenShift platform.

                                            In order to change a build tool for the OpenShift Platform from the default buildConfig object to the Kaniko tool, perform the following steps:

                                            1. Modify or update a job provisioner logic, follow the instructions on the Manage Jenkins CI Pipeline Job Provisioner page.
                                            2. Update the required parameters for the new provisioner. For example, if it is necessary to change the build tool for the Container Build pipeline, update the list of stages by replacing the build-image-from-dockerfile stage (the first snippet below) with the build-image-kaniko stage (the second snippet):
                                              stages['Build-library-kaniko'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"build-image-from-dockerfile\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n
                                              stages['Build-library-kaniko'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"build-image-kaniko\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n
                                            "},{"location":"user-guide/container-stages/#related-articles","title":"Related Articles","text":"
                                            • Use Dockerfile Linters for Code Review Pipeline
                                            • Manage Jenkins CI Pipeline Job Provisioner
                                            "},{"location":"user-guide/copy-shared-secrets/","title":"Copy Shared Secrets","text":"

                                            The Copy Shared Secrets stage provides the ability to copy secrets from the current Kubernetes namespace into a namespace created during the CD pipeline run.

                                            Shared secrets

                                            Please follow the steps described below to copy the secrets:

                                            1. Create a secret in the current Kubernetes namespace that should be used in the deployment. The secret label must be app.edp.epam.com/use: cicd, since the pipeline script will attempt to copy the secret by its label. For example:

                                              apiVersion: v1\nkind: Secret\nmetadata:\n  labels:\n    app.edp.epam.com/use: cicd\n
                                            2. Add the following step to the CD pipeline {\"name\":\"copy-secrets\",\"step_name\":\"copy-secrets\"}. Alternatively, it is possible to create a custom job provisioner with this step.

                                            3. Run the job. The pipeline script will create a secret with the same data in the namespace generated by the CD pipeline.

                                              Note

                                              Service account tokens are not supported.

                                            "},{"location":"user-guide/copy-shared-secrets/#related-articles","title":"Related Articles","text":"
                                            • Customize CD Pipeline
                                            • Manage Jenkins CD Pipeline Job Provisioner
                                            "},{"location":"user-guide/customize-cd-pipeline/","title":"Customize CD Pipeline","text":"

                                            Apart from running CD pipeline stages with the default logic, there is the ability to perform the following:

                                            • Create your own logic for stages;
                                            • Redefine the default EDP stages of a CD pipeline.

                                            In order to have the ability to customize a stage logic, create a CD pipeline stage source as a Library:

                                            1. Navigate to the Libraries section of the Admin Console and create a library with the Groovy-pipeline code language:

                                              Note

                                              If you clone the library, make sure that the correct source branch is selected.

                                              Create library

                                              Select the required fields to build your library:

                                              Advanced settings

                                            2. Go to the Continuous Delivery section of the Admin Console and create a CD pipeline with the library stage source and its branch:

                                              Library source

                                            "},{"location":"user-guide/customize-cd-pipeline/#add-new-stage","title":"Add New Stage","text":"

                                            Follow the steps below to add a new stage:

                                            • Clone the repository with the added library;
                                            • Create a \"stages\" directory in the root;
                                            • Create a Jenkinsfile with default content:
                                              @Library(['edp-library-stages', 'edp-library-pipelines']) _\n\nDeploy()\n
                                            • Create a groovy file with a meaningful name, e.g. NotificationStage.groovy;
                                            • Put the required construction and your own logic into the file:
                                              import com.epam.edp.stages.impl.cd.Stage\n\n@Stage(name = \"notify\")\nclass Notify {\n    Script script\n    void run(context) {\n    --------------- Put your own logic here ------------------\n            script.println(\"Send notification logic\")\n    --------------- Put your own logic here ------------------\n    }\n}\nreturn Notify\n
                                            • Add a new stage to the STAGES parameter of the Jenkins job of your CD pipeline (an illustrative example of the parameter value is given after this list):

                                              Stages parameter

                                              Warning

                                              To make this stage permanently present, please modify the job provisioner.

                                            • Run the job to check that your new stage has been run during the execution.
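
                                            For illustration only, assuming a default Deploy stage set, the STAGES parameter value with the new notify stage (from the NotificationStage.groovy example above) inserted after the deploy stage might look like the line below; it follows the same object format as the copy-secrets step shown in the Copy Shared Secrets section, and the exact default value in your job may differ:

                                              [{\"name\": \"deploy\",\"step_name\": \"deploy\"},{\"name\": \"notify\",\"step_name\": \"notify\"},{\"name\": \"promote-images\",\"step_name\": \"promote-images\"}]\n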
                                            "},{"location":"user-guide/customize-cd-pipeline/#redefine-existing-stage","title":"Redefine Existing Stage","text":"

                                            By default, the following stages are implemented in EDP pipeline framework:

                                            • deploy,
                                            • deploy-helm,
                                            • autotests,
                                            • manual (Manual approve),
                                            • promote-images.

                                            Using one of these names for annotation in your own class will lead to redefining the default logic with your own.

                                            Find below a sample of the possible flow of the redefining deploy stage:

                                            • Clone the repository with the added library;
                                            • Create a \"stages\" directory in the root;
                                            • Create a Jenkinsfile with default content:
                                              @Library(['edp-library-stages', 'edp-library-pipelines']) _\n\nDeploy()\n
                                            • Create a groovy file with a meaningful name, e.g. CustomDeployStage.groovy;
                                            • Put the required construction and your own logic into the file:
                                              import com.epam.edp.stages.impl.cd.Stage\n\n@Stage(name = \"deploy\")\nclass CustomDeployStage {\n    Script script\n\n    void run(context) {\n    --------------- Put your own logic here ------------------\n            script.println(\"Custom deploy stage logic\")\n    --------------- Put your own logic here ------------------\n    }\n}\nreturn CustomDeployStage\n
                                            "},{"location":"user-guide/customize-cd-pipeline/#add-a-new-stage-using-shared-library-via-custom-global-pipeline-libraries","title":"Add a New Stage Using Shared Library via Custom Global Pipeline Libraries","text":"

                                            Note

                                            To add a new Custom Global Pipeline Library, please refer to the Add a New Custom Global Pipeline Library page.

                                            To redefine any stage and add custom logic using global pipeline libraries, perform the steps below:

                                            • Navigate to the Libraries section of the Admin Console and create a library with the Groovy-pipeline code language:

                                              Create library

                                              Select the required fields to build your library:

                                              Advanced settings

                                            • Clone the repository with the added library;
                                            • Create a directory with the name src/com/epam/edp/customStages/impl/cd/impl/ in the library repository;
                                            • Add a Groovy file with a meaningful name to this stages catalog, for instance \u2013 EmailNotify.groovy (its full path will be src/com/epam/edp/customStages/impl/cd/impl/EmailNotify.groovy):
                                              package com.epam.edp.customStages.impl.cd.impl\n\nimport com.epam.edp.stages.impl.cd.Stage\n\n@Stage(name = \"notify\")\nclass Notify {\n    Script script\n    void run(context) {\n    --------------- Put your own logic here ------------------\n    script.println(\"Send notification logic\")\n    --------------- Put your own logic here ------------------\n   }\n}\n
                                            • Create a Jenkinsfile with default content and the added custom library to Jenkins:

                                              @Library(['edp-library-stages', 'edp-library-pipelines', 'edp-custom-shared-library-name']) _\n\nDeploy()\n

                                              Note

                                              edp-custom-shared-library-name is the name of your Custom Global Pipeline Library that should be added to the Jenkins Global Settings.

                                            • Add a new stage to the STAGES parameter of the Jenkins job of your CD pipeline:

                                              Stages parameter

                                              Warning

                                              To make this stage permanently present, please modify the job provisioner.

                                              Note

                                              Pay attention to the appropriate annotation (EDP versions of all stages can be found on GitHub).

                                            • Run the job to check that the new stage has been running during the execution.
                                            "},{"location":"user-guide/customize-cd-pipeline/#redefine-a-default-stage-logic-via-custom-global-pipeline-libraries","title":"Redefine a Default Stage Logic via Custom Global Pipeline Libraries","text":"

                                            Note

                                            To add a new Custom Global Pipeline Library, please refer to the Add a New Custom Global Pipeline Library page.

                                            By default, the following stages are implemented in EDP pipeline framework:

                                            • deploy,
                                            • deploy-helm,
                                            • autotests,
                                            • manual (Manual approve),
                                            • promote-images.

                                            Using one of these names for annotation in your own class will lead to redefining the default logic with your own.

                                            To redefine any stage and add custom logic using global pipeline libraries, perform the steps below:

                                            • Navigate to the Libraries section of the Admin Console and create a library with the Groovy-pipeline code language:

                                              Create library

                                              Select the required fields to build your library:

                                              Advanced settings

                                            • Clone the repository with the added library;
                                            • Create a directory with the name src/com/epam/edp/customStages/impl/cd/impl/ in the library repository;
                                            • Add a Groovy file with a meaningful name to this stages catalog, for instance \u2013 CustomDeployStage.groovy (its full path will be src/com/epam/edp/customStages/impl/cd/impl/CustomDeployStage.groovy):
                                              package com.epam.edp.customStages.impl.cd.impl\n\nimport com.epam.edp.stages.impl.cd.Stage\n\n@Stage(name = \"deploy\")\nclass CustomDeployStage {\n    Script script\n\n    void run(context) {\n    --------------- Put your own logic here ------------------\n            script.println(\"Custom deploy stage logic\")\n    --------------- Put your own logic here ------------------\n    }\n}\n
                                            • Create a Jenkinsfile with default content and the added custom library to Jenkins:

                                              @Library(['edp-library-stages', 'edp-library-pipelines', 'edp-custom-shared-library-name']) _\n\nDeploy()\n

                                              Note

                                              edp-custom-shared-library-name is the name of your Custom Global Pipeline Library that should be added to the Jenkins Global Settings.

                                              Note

                                              Pay attention to the appropriate annotation (EDP versions of all stages can be found on GitHub).

                                            "},{"location":"user-guide/customize-cd-pipeline/#related-articles","title":"Related Articles","text":"
                                            • Add a New Custom Global Pipeline Library
                                            • Manage Jenkins CD Pipeline Job Provisioner
                                            "},{"location":"user-guide/customize-ci-pipeline/","title":"Customize CI Pipeline","text":"

                                            This chapter describes the main steps that should be followed when customizing a CI pipeline.

                                            "},{"location":"user-guide/customize-ci-pipeline/#redefine-a-default-stage-logic-for-a-particular-application","title":"Redefine a Default Stage Logic for a Particular Application","text":"

                                            To redefine any stage and add custom logic, perform the steps below:

                                            1. Open the GitHub repository:

                                              • Create a directory with the name \u201cstages\u201d in the application repository;
                                              • Create a Groovy file with a meaningful name for a custom stage description, for instance: CustomSonar.groovy.
                                            2. Paste the copied skeleton from the reference stage and insert the necessary logic.

                                              Note

                                              Pay attention to the appropriate annotation (EDP versions of all stages can be found on GitHub).

                                              The stage logic structure is the following:

                                              CustomSonar.groovy

                                              import com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"sonar\", buildTool = [\"maven\"], type = [ProjectType.APPLICATION, ProjectType.AUTOTESTS, ProjectType.LIBRARY])\nclass CustomSonar {\n    Script script\n    void run(context) {\n        script.sh \"echo 'Your custom logic of the stage'\"\n    }\n}\nreturn CustomSonar\n

                                              Info

                                              There is the ability to redefine the predefined EDP stage as well as to create it from scratch, it depends on the name that is used in the @Stage annotation. For example, using name = \"sonar\" will redefine an existing sonar stage with the same name, but using name=\"new-sonar\" will create a new stage.

                                              By default, the following stages are implemented in EDP:

                                              • build
                                              • build-image-from-dockerfile
                                              • build-image
                                              • build-image-kaniko
                                              • checkout
                                              • compile
                                              • create-branch
                                              • gerrit-checkout
                                              • get-version
                                              • git-tag
                                              • push
                                              • sonar
                                              • sonar-cleanup
                                              • tests
                                              • trigger-job

                                              Mandatory points:

                                              • Importing classes com.epam.edp.stages.impl.ci.ProjectType and com.epam.edp.stages.impl.ci.Stage;
                                              • Annotating \"Stage\" for class - @Stage(name = \"sonar\", buildTool = [\"maven\"], type = [ProjectType.APPLICATION, ProjectType.AUTOTESTS, ProjectType.LIBRARY]);
                                              • Property with the type \"Script\";
                                              • A void \"run\" method with the \"context\" input parameter;
                                              • Returning the custom class at the end of the file: return CustomSonar.
                                            3. Open Jenkins and make sure that all the changes are correct after the completion of the customized pipeline.

                                            "},{"location":"user-guide/customize-ci-pipeline/#add-a-new-stage-for-a-particular-application","title":"Add a New Stage for a Particular Application","text":"

                                            To add a new stage for a particular application, perform the steps below:

                                            1. In the GitHub repository, add a Groovy file with a distinct name to the same stages directory, for instance: EmailNotify.groovy.
                                            2. Copy the skeleton from the reference stage and add the part of the logic that the pipeline framework cannot predefine; a hedged example implementation is provided after these steps.

                                              The stage logic structure is the following:

                                              EmailNotify.groovy

                                              import com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"email-notify\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass EmailNotify {\n    Script script\n    void run(context) {\n        script.sh \"echo 'Your custom logic here'\"\n    }\n}\nreturn EmailNotify\n
                                            3. Open the default set of stages and add the new one to the Default Value field using the respective entry {\"name\": \"email-notify\"}, then save the changes: Add stage

                                            4. Open Jenkins to check the pipeline; as soon as the checkout stage is passed, the new stage will appear in the pipeline: Check stage

                                              Warning

                                              To make this stage permanently present, please modify the job provisioner.
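
                                            For illustration only, one possible way to fill the placeholder above is to send a notification through the Jenkins Email Extension plugin. The recipient address, subject, and body below are hypothetical, and the plugin is assumed to be installed on the Jenkins instance; this is a sketch, not the EDP implementation:

                                            EmailNotify.groovy (illustrative example)

                                            import com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"email-notify\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass EmailNotify {\n    Script script\n    void run(context) {\n        // hypothetical recipient and subject; emailext is provided by the Email Extension plugin\n        script.emailext(\n            to: 'team@example.com',\n            subject: \"email-notify stage executed\",\n            body: \"The email-notify custom stage has been reached in the Build pipeline.\"\n        )\n    }\n}\nreturn EmailNotify\n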

                                            "},{"location":"user-guide/customize-ci-pipeline/#redefine-a-default-stage-logic-via-custom-global-pipeline-libraries","title":"Redefine a Default Stage Logic via Custom Global Pipeline Libraries","text":"

                                            Note

                                            To add a new Custom Global Pipeline Library, please refer to the Add a New Custom Global Pipeline Library page.

                                            To redefine any stage and add custom logic using global pipeline libraries, perform the steps below:

                                            1. Open the GitHub repository:

                                              • Create a directory with the name /src/com/epam/edp/customStages/impl/ci/impl/stageName/ in the library repository, for instance: /src/com/epam/edp/customStages/impl/ci/impl/sonar/;
                                              • Create a Groovy file with a meaningful name for a custom stage description, for instance \u2013 CustomSonar.groovy.
                                            2. Paste the copied skeleton from the reference stage and insert the necessary logic.

                                              Note

                                              Pay attention to the appropriate annotation (EDP versions of all stages can be found on GitHub).

                                              The stage logic structure is the following:

                                              CustomSonar.groovy

                                              package com.epam.edp.customStages.impl.ci.impl.sonar\n\nimport com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"sonar\", buildTool = [\"maven\"], type = [ProjectType.APPLICATION, ProjectType.AUTOTESTS, ProjectType.LIBRARY])\nclass CustomSonar {\n    Script script\n    void run(context) {\n        script.sh \"echo 'Your custom logic of the stage'\"\n    }\n}\n

                                              Info

                                              There is the ability to redefine a predefined EDP stage as well as to create one from scratch; it depends on the name used in the @Stage annotation. For example, name = \"sonar\" will redefine the existing sonar stage with the same name, while name = \"new-sonar\" will create a new stage.

                                              By default, the following stages are implemented in EDP:

                                              • build
                                              • build-image-from-dockerfile
                                              • build-image
                                              • build-image-kaniko
                                              • checkout
                                              • compile
                                              • create-branch
                                              • gerrit-checkout
                                              • get-version
                                              • git-tag
                                              • push
                                              • sonar
                                              • sonar-cleanup
                                              • tests
                                              • trigger-job

                                              Mandatory points:

                                              • Defining the com.epam.edp.customStages.impl.ci.impl.stageName package;
                                              • Importing the com.epam.edp.stages.impl.ci.ProjectType and com.epam.edp.stages.impl.ci.Stage classes;
                                              • Annotating the class with @Stage, for example: @Stage(name = \"sonar\", buildTool = [\"maven\"], type = [ProjectType.APPLICATION, ProjectType.AUTOTESTS, ProjectType.LIBRARY]);
                                              • Declaring a property of the \"Script\" type;
                                              • Defining a void \"run\" method that takes the context as an input parameter.

                                            3. Open Jenkins and make sure that all the changes are correct after the completion of the customized pipeline.

                                            "},{"location":"user-guide/customize-ci-pipeline/#add-a-new-stage-using-shared-library-via-custom-global-pipeline-libraries","title":"Add a New Stage Using Shared Library via Custom Global Pipeline Libraries","text":"

                                            Note

                                            To add a new Custom Global Pipeline Library, please refer to the Add a New Custom Global Pipeline Library page.

                                            To add a new stage using Custom Global Pipeline Libraries, perform the steps below:

                                            1. Open the GitHub repository:

                                              • Create a directory with the name /src/com/epam/edp/customStages/impl/ci/impl/stageName/ in the library repository, for instance: /src/com/epam/edp/customStages/impl/ci/impl/emailNotify/;
                                              • Add a Groovy file with a distinct name to the same stages directory, for instance \u2013 EmailNotify.groovy.
                                            2. Copy the skeleton from the reference stage and add the part of the logic that the pipeline framework cannot predefine.

                                              Note

                                              Pay attention to the appropriate annotation (EDP versions of all stages can be found on GitHub).

                                              The stage logic structure is the following:

                                              EmailNotify.groovy

                                              package com.epam.edp.customStages.impl.ci.impl.emailNotify\n\nimport com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"email-notify\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass EmailNotify {\n    Script script\n    void run(context) {\n        script.sh \"echo 'Your custom logic here'\"\n    }\n}\n
                                            3. Open the default set of stages and add the new one to the Default Value field using the respective entry {\"name\": \"email-notify\"}, then save the changes: Add stage

                                            4. Open Jenkins to check the pipeline; as soon as the checkout stage is passed, the new stage will appear in the pipeline: Check stage

                                              Warning

                                              To make this stage permanently present, please modify the job provisioner.

                                            "},{"location":"user-guide/customize-ci-pipeline/#related-articles","title":"Related Articles","text":"
                                            • Add a New Custom Global Pipeline Library
                                            • Manage Jenkins CI Pipeline Job Provisioner
                                            • Add Security Scanner
                                            "},{"location":"user-guide/d-d-diagram/","title":"Delivery Dashboard Diagram","text":"

                                            Admin Console provides a general visualization of all the relations between CD pipelines, stages, codebases, branches, and image streams; each element is displayed with its specific icon. To open the current project diagram, navigate to the Delivery Dashboard Diagram section on the navigation bar:

                                            Delivery dashboard

                                            Info

                                            All the requested changes (deletion, creation, adding) are displayed immediately on the Delivery Dashboard Diagram.

                                            Possible actions when using the dashboard:

                                            • To zoom the diagram in or out, scroll up or down.
                                            • To move the diagram, click and drag.
                                            • To move an element, click it and drag to the necessary place.
                                            • To see the relations for one element, click this element.
                                            • To see the whole diagram, click the empty space.
                                            "},{"location":"user-guide/d-d-diagram/#related-articles","title":"Related Articles","text":"
                                            • EDP Admin Console
                                            "},{"location":"user-guide/dockerfile-stages/","title":"Use Dockerfile Linters for Code Review Pipeline","text":"

                                            This section describes the dockerbuild-verify and dockerfile-lint stages that can be used in the Code Review pipeline.

                                            These stages help to obtain quick feedback on the validity of the code in the Code Review pipeline in Kubernetes for all types of applications supported by EDP out of the box.

                                            Add stages

                                            Inspect the functions performed by the following stages:

                                            1. dockerbuild-verify stage collects artifacts and builds an image from the Dockerfile without pushing it to the registry. This stage is intended to check that the image can be built.

                                            2. dockerfile-lint stage launches the hadolint command in order to check the Dockerfile; a hedged sketch of such a stage is provided below.
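
                                            A minimal sketch of what such a stage could look like is shown below. It assumes the hadolint binary is available on the Jenkins agent and uses an illustrative stage name; it is not the EDP implementation:

                                            DockerfileLint.groovy (illustrative sketch)

                                            import com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"dockerfile-lint-example\", buildTool = [\"maven\"], type = [ProjectType.APPLICATION])\nclass DockerfileLint {\n    Script script\n    void run(context) {\n        // lint the Dockerfile in the workspace root; the stage fails if hadolint reports findings\n        script.sh \"hadolint Dockerfile\"\n    }\n}\nreturn DockerfileLint\n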

                                            "},{"location":"user-guide/dockerfile-stages/#related-articles","title":"Related Articles","text":"
                                            • Use Terraform Library in EDP
                                            • EDP Pipeline Framework
                                            • Promote Docker Images From ECR to Docker Hub
                                            • CI Pipeline for Container
                                            "},{"location":"user-guide/ecr-to-docker-stages/","title":"Promote Docker Images From ECR to Docker Hub","text":"

                                            This section contains the description of the ecr-to-docker stage, available in the Build pipeline.

                                            The ecr-to-docker stage pushes Docker images collected in the Amazon ECR storage to Docker Hub repositories, where the images become accessible to everyone who wants to use them. This stage is optional and is designed for working with various EDP components.

                                            Note

                                            When pushing the image from ECR to Docker Hub using crane, the SHA-256 value remains unchanged.

                                            To run the ecr-to-docker stage just once, navigate to the Build with Parameters option, add this stage to the stages list, and click Build. To add the ecr-to-docker stage to the pipeline permanently, modify the job provisioner.

                                            Note

                                            To properly push the Docker image from the ECR storage, the ecr-to-docker stage should follow the build-image-kaniko stage. Add custom lib2

                                            The ecr-to-docker stage contains a specific script that launches the following actions (an illustrative sketch of the promotion step follows the list):

                                            1. Performs authorization in the EDP private AWS ECR storage via awsv2.
                                            2. Performs authorization in the Docker Hub.
                                            3. Checks whether a similar image already exists in the Docker Hub in order to avoid overwriting it.

                                              • If a similar image exists in the Docker Hub, the script will return the message about it and stop the execution. The ecr-to-docker stage in the Build pipeline will be marked in red.
                                              • If there is no similar image, the script will proceed to promote the image using crane.
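
                                            For illustration, the promotion step can conceptually be expressed as a single crane copy call. The sketch below is hypothetical: the image references are placeholders, crane is assumed to be available on the agent, and the real stage derives these values from the EDP configuration:

                                            EcrToDockerExample.groovy (illustrative sketch)

                                            import com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"ecr-to-docker-example\", buildTool = [\"maven\"], type = [ProjectType.APPLICATION])\nclass EcrToDockerExample {\n    Script script\n    void run(context) {\n        // hypothetical image references; replace the placeholders with real values\n        def source = \"<aws_account_id>.dkr.ecr.<region>.amazonaws.com/<application-name>:<tag>\"\n        def target = \"<dockerhub_account_name>/<application-name>:<tag>\"\n        // crane copies the image between registries without changing its SHA-256 digest\n        script.sh \"crane copy ${source} ${target}\"\n    }\n}\nreturn EcrToDockerExample\n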
                                            "},{"location":"user-guide/ecr-to-docker-stages/#create-secret-for-ecr-to-docker-stage","title":"Create Secret for ECR-to-Docker Stage","text":"

                                            The ecr-to-docker stage expects the authorization credentials to be added as a Kubernetes secret to the namespace where EDP is installed. To create the dockerhub-credentials secret, run the following command:

                                              kubectl -n edp create secret generic dockerhub-credentials \\\n  --from-literal=accesstoken=<dockerhub_access_token> \\\n  --from-literal=account=<dockerhub_account_name> \\\n  --from-literal=username=<dockerhub_user_name>\n

                                            Note

                                            • The \u2039dockerhub_access_token\u203a should be created beforehand and in accordance with the official Docker Hub instruction.
                                            • The \u2039dockerhub_account_name\u203a and \u2039dockerhub_user_name\u203a differ for an organization account repository and are identical for a personal account repository.
                                            • Pay attention that the Docker Hub repository for image uploads should be created beforehand and named by the following pattern: \u2039dockerhub_account_name\u203a/\u2039Application Name\u203a, where \u2039Application Name\u203a should match the application name in the EDP Admin Console.
                                            "},{"location":"user-guide/ecr-to-docker-stages/#related-articles","title":"Related Articles","text":"
                                            • EDP Pipeline Framework
                                            • Manage Access Token
                                            • Manage Jenkins CI Pipeline Job Provisioner
                                            "},{"location":"user-guide/git-server-overview/","title":"Manage Git Servers","text":"

                                            Git Server is responsible for integration with the Version Control System, whether it is GitHub, GitLab, or Gerrit.

                                            The Git Server is set via the global.gitProvider parameter of the values.yaml file.

                                            To view the current Git Server, you can open EDP -> Configuration -> Git Servers and inspect the following properties:

                                            Git Server menu

                                            • Git Server status and name - displays the Git Server status, which depends on the Git Server integration status (Success/Failed).
                                            • Git Server properties - displays the Git Server type, its host address, username, SSH/HTTPS port, and name of the secret that contains SSH key.
                                            • Open documentation - opens the \"Manage Git Servers\" documentation page.
                                            "},{"location":"user-guide/git-server-overview/#view-authentication-data","title":"View Authentication Data","text":"

                                            To view the authentication data that is used to connect to the Git server, use the kubectl describe command as follows:

                                            kubectl describe GitServer git_server_name -n edp\n
                                            "},{"location":"user-guide/git-server-overview/#delete-git-server","title":"Delete Git Server","text":"

                                            To remove a Git Server from the Git Servers list, utilize the kubectl delete command as follows:

                                            kubectl delete GitServer git_server_name -n edp\n
                                            "},{"location":"user-guide/git-server-overview/#related-articles","title":"Related Articles","text":"
                                            • Add Git Server
                                            • Integrate GitHub/GitLab in Tekton
                                            "},{"location":"user-guide/helm-release-deletion/","title":"Helm Release Deletion","text":"

                                            The Helm release deletion stage provides the ability to remove Helm releases from the namespace.

                                            Note

                                            Pay attention that this stage will remove all Helm releases from the namespace. To avoid losing important data, make the necessary backups before using this stage.

                                            To remove Helm releases, follow the steps below:

                                            1. Add the following step to the CD pipeline: {\"name\":\"helm-uninstall\",\"step_name\":\"helm-uninstall\"}. Alternatively, it is possible to create a custom job provisioner that includes this step.

                                            2. Run the job. The pipeline script will remove Helm releases from the namespace.

                                            "},{"location":"user-guide/helm-release-deletion/#related-articles","title":"Related Articles","text":"
                                            • Customize CD Pipeline
                                            • Manage Jenkins CD Pipeline Job Provisioner
                                            "},{"location":"user-guide/helm-stages/","title":"Helm Chart Testing and Documentation Tools","text":"

                                            This section contains the description of the helm-lint and helm-docs stages that can be used in the Code Review pipeline.

                                            The stages help to obtain a quick response on the validity of the helm chart code and documentation in the Code Review pipeline.

                                            Inspect the functions performed by the following stages:

                                            1. helm-lint stage launches the ct lint --charts deploy-templates/ command in order to validate the chart.

                                              Helm lint

                                              • chart_schema.yaml - this file contains the rules against which chart validity is checked. These rules are used for the YAML schema validation.

                                              See the current scheme:

                                              View: chart_schema.yaml
                                              name: str()\nhome: str()\nversion: str()\ntype: str()\napiVersion: str()\nappVersion: any(str(), num())\ndescription: str()\nkeywords: list(str(), required=False)\nsources: list(str(), required=True)\nmaintainers: list(include('maintainer'), required=True)\ndependencies: list(include('dependency'), required=False)\nicon: str(required=False)\nengine: str(required=False)\ncondition: str(required=False)\ntags: str(required=False)\ndeprecated: bool(required=False)\nkubeVersion: str(required=False)\nannotations: map(str(), str(), required=False)\n---\nmaintainer:\n  name: str(required=True)\n  email: str(required=False)\n  url: str(required=False)\n---\ndependency:\n  name: str()\n  version: str()\n  repository: str()\n  condition: str(required=False)\n  tags: list(str(), required=False)\n  enabled: bool(required=False)\n  import-values: any(list(str()), list(include('import-value')), required=False)\n  alias: str(required=False)\n
                                              • ct.yaml - this file contains the rules that allow skipping certain validation checks.

                                              To get more information about the chart testing lint, please refer to the ct_lint documentation.

                                            2. helm-docs stage helps to validate the generated documentation for the Helm deployment templates in the Code Review pipeline for all types of applications supported by EDP. This stage launches the helm-docs command in order to check that the chart documentation file exists and is up to date; an illustrative sketch of such a stage is provided at the end of this section.

                                              Requirements: helm-docs v1.10.0

                                              Note

                                              The helm-docs stage is optional. To extend the pipeline with an additional stage, please refer to the Configure Code Review Pipeline page.

                                              Helm docs

                                              Note

                                              The example of the generated documentation.
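
                                            A minimal sketch of a stage that runs the documentation check might look as follows. It assumes the helm-docs binary is available on the agent and that the chart lives in the deploy-templates directory; both are assumptions for this example, not a description of the EDP implementation:

                                            HelmDocsExample.groovy (illustrative sketch)

                                            import com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"helm-docs-example\", buildTool = [\"maven\"], type = [ProjectType.APPLICATION])\nclass HelmDocsExample {\n    Script script\n    void run(context) {\n        // regenerate the chart documentation and fail if the committed README.md is out of date\n        script.sh \"helm-docs --chart-search-root deploy-templates && git diff --exit-code deploy-templates/README.md\"\n    }\n}\nreturn HelmDocsExample\n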

                                            "},{"location":"user-guide/helm-stages/#related-articles","title":"Related Articles","text":"
                                            • EDP Pipeline Framework
                                            "},{"location":"user-guide/infrastructure/","title":"Manage Infrastructures","text":"

                                            This section describes the subsequent possible actions that can be performed with the newly added or existing infrastructures.

                                            "},{"location":"user-guide/infrastructure/#check-and-remove-application","title":"Check and Remove Application","text":"

                                            As soon as the infrastructure is successfully provisioned, the following will be created:

                                            • Code Review and Build pipelines in Jenkins/Tekton for this infrastructure. The Build pipeline will be triggered automatically if at least one environment is already added.
                                            • A new project in Gerrit or another VCS.
                                            • SonarQube integration will be available after the Build pipeline in Jenkins/Tekton is passed.
                                            • Nexus Repository Manager will be available after the Build pipeline in Jenkins/Tekton is passed as well.

                                            The added infrastructure will be listed in the Infrastructures list allowing you to do the following:

                                            Applications menu

                                            • Infrastructure status - displays the infrastructure status. It can be red or green depending on whether the EDP Portal managed to connect to the Git Server with the specified credentials.
                                            • Infrastructure name (clickable) - displays the infrastructure name set during the infrastructure creation.
                                            • Open documentation - opens the documentation that leads to this page.
                                            • Enable filtering - enables filtering by infrastructure name and the namespace where this custom resource is located.
                                            • Create new infrastructure - displays the Create new component menu.
                                            • Edit infrastructure - edit the infrastructure by selecting the options icon next to its name in the infrastructures list, and then selecting Edit. For details, see the Edit Existing Infrastructure section.
                                            • Delete infrastructure - remove the infrastructure by selecting the options icon next to its name in the infrastructures list, and then selecting Delete.

                                            There are also options to sort the infrastructures:

                                            • Sort the existing infrastructures in a table by clicking the sorting icons in the table header. Sort the infrastructures alphabetically by their name, language, build tool, framework, and CI tool. You can also sort the infrastructures by their status: Created, Failed, or In progress.
                                            • Select the number of infrastructures displayed per page (15, 25 or 50 rows) and navigate between pages if the number of infrastructures exceeds the capacity of a single page.
                                            "},{"location":"user-guide/infrastructure/#edit-existing-infrastructure","title":"Edit Existing Infrastructure","text":"

                                            EDP Portal provides the ability to enable, disable or edit the Jira Integration functionality for infrastructures.

                                            1. To edit an infrastructure directly from the infrastructures overview page or when viewing the infrastructure data:

                                              • Select Edit in the options icon menu:

                                              Edit infrastructure on the Infrastructures overview page

                                              Edit infrastructure when viewing the infrastructure data

                                              • The Edit Infrastructure dialog opens.
                                            2. To enable Jira integration, in the Edit Infrastructure dialog do the following:

                                              Edit infrastructure

                                              a. Mark the Integrate with Jira server check box and fill in the necessary fields. Please see steps d-h on the Add Infrastructure page.

                                              b. Select the Apply button to apply the changes.

                                              c. Navigate to Jenkins/Tekton and add the create-jira-issue-metadata stage in the Build pipeline. Also add the commit-validate stage in the Code Review pipeline.

                                            3. To disable Jira integration, in the Edit Infrastructure dialog do the following:

                                              a. Unmark the Integrate with Jira server check box.

                                              b. Select the Apply button to apply the changes.

                                              c. Navigate to Jenkins/Tekton and remove the create-jira-issue-metadata stage in the Build pipeline. Also remove the commit-validate stage in the Code Review pipeline.

                                            4. To create, edit and delete infrastructure branches, please refer to the Manage Branches page.

                                            "},{"location":"user-guide/infrastructure/#related-articles","title":"Related Articles","text":"
                                            • Add Infrastructure
                                            • Manage Branches
                                            "},{"location":"user-guide/library/","title":"Manage Libraries","text":"

                                            This section describes the subsequent possible actions that can be performed with the newly added or existing libraries.

                                            "},{"location":"user-guide/library/#check-and-remove-library","title":"Check and Remove Library","text":"

                                            As soon as the library is successfully provisioned, the following will be created:

                                            • Code Review and Build pipelines in Jenkins/Tekton for this library. The Build pipeline will be triggered automatically if at least one environment is already added.
                                            • A new project in Gerrit or another VCS.
                                            • SonarQube integration will be available after the Build pipeline in Jenkins/Tekton is passed.
                                            • Nexus Repository Manager will be available after the Build pipeline in Jenkins/Tekton is passed as well.

                                            Info

                                            To navigate quickly to OpenShift, Jenkins/Tekton, Gerrit, SonarQube, Nexus, and other resources, click the Overview section on the navigation bar and hit the necessary link.

                                            The added library will be listed in the Libraries list allowing you to do the following:

                                            Library menu

                                            1. Create another library by clicking the plus sign icon in the lower-right corner of the screen and performing the same steps as described on the Add Library page.

                                            2. Open library data by clicking its name link. Once clicked, the following blocks will be displayed:

                                            • Library status - displays the library status. It can be red or green depending on whether the EDP Portal managed to connect to the Git Server with the specified credentials.
                                            • Library name (clickable) - displays the library name set during the library creation.
                                            • Open documentation - opens the documentation that leads to this page.
                                            • Enable filtering - enables filtering by library name and the namespace where this custom resource is located.
                                            • Create new library - displays the Create new component menu.
                                            • Edit library - edit the library by selecting the options icon next to its name in the libraries list, and then selecting Edit. For details, see the Edit Existing Library section.
                                            • Delete Library - remove the library with the corresponding database and Jenkins/Tekton pipelines by selecting the options icon next to its name in the libraries list, and then selecting Delete.

                                              Note

                                              The library that is used in a CD pipeline cannot be removed.

                                            There are also options to sort the libraries:

                                            • Sort the existing libraries in a table by clicking the sorting icons in the table header. Sort the libraries alphabetically by their name, language, build tool, framework, and CI tool. You can also sort the libraries by their status: Created, Failed, or In progress.
                                            • Select a number of libraries displayed per page (15, 25 or 50 rows) and navigate between pages if the number of libraries exceeds the capacity of a single page.
                                            "},{"location":"user-guide/library/#edit-existing-library","title":"Edit Existing Library","text":"

                                            EDP Portal provides the ability to enable, disable or edit the Jira Integration functionality for libraries.

                                            1. To edit a library directly from the Libraries overview page or when viewing the library data:

                                              • Select Edit in the options icon menu:

                                                Edit library on the libraries overview page

                                                Edit library when viewing the library data

                                              • The Edit Library dialog opens.
                                            2. To enable Jira integration, in the Edit Library dialog do the following:

                                              Edit library

                                              a. Mark the Integrate with Jira server check box and fill in the necessary fields. Please see steps d-h on the Add Library page.

                                              b. Select the Apply button to apply the changes.

                                              c. Navigate to Jenkins/Tekton and add the create-jira-issue-metadata stage in the Build pipeline. Also add the commit-validate stage in the Code Review pipeline.

                                            3. To disable Jira integration, in the Edit Library dialog do the following:

                                              a. Unmark the Integrate with Jira server check box.

                                              b. Select the Apply button to apply the changes.

                                              c. Navigate to Jenkins/Tekton and remove the create-jira-issue-metadata stage in the Build pipeline. Also remove the commit-validate stage in the Code Review pipeline.

                                              As a result, the necessary changes will be applied.

                                            4. To create, edit and delete library branches, please refer to the Manage Branches page.

                                            "},{"location":"user-guide/library/#related-articles","title":"Related Articles","text":"
                                            • Add Library
                                            • Manage Branches
                                            "},{"location":"user-guide/manage-branches/","title":"Manage Branches","text":"

                                            This page describes how to manage branches in the created component, whether it is an application, library, autotest or infrastructure.

                                            "},{"location":"user-guide/manage-branches/#add-new-branch","title":"Add New Branch","text":"

                                            Note

                                            When working with libraries, pay attention when specifying the branch name: the branch name is involved in the formation of the library version, so it must comply with the semantic versioning rules for the library.

                                            When adding a component, the default branch is the master branch. In order to add a new branch, follow the steps below:

                                            1. Navigate to the Branches block by clicking the component name link in the Components list.

                                            2. Select the options icon related to the necessary branch and then select Create:

                                              Add branch

                                            3. Click Edit YAML in the upper-right corner of the dialog to open the YAML editor and add a branch. Otherwise, fill in the required fields in the dialog:

                                              New branch

                                              a. Release Branch - select the Release Branch check box if you need to create a release branch.

                                              b. Branch name - type the branch name. Pay attention that this field remains static if you create a release branch. For the Clone and Import strategies: if you want to use the existing branch, enter its name into this field.

                                              c. From Commit Hash - paste the commit hash from which the branch will be created. For the Clone and Import strategies: Note that if the From Commit Hash field is empty, the latest commit from the branch name will be used.

                                              d. Branch version - enter the necessary branch version for the artifact. The Release Candidate (RC) postfix is concatenated to the branch version number.

                                              e. Default branch version - type the branch version that will be used in a master branch after the release creation. The Snapshot postfix is concatenated to the master branch version number.

                                              f. Click the Apply button and wait until the new branch is added to the list.

                                              Info

                                              The branch creation flow described above is shown in the context of the edp versioning type.

                                            The default component repository is cloned and changed to the new indicated version before the build, i.e. the new indicated version will not be committed to the repository; thus, the existing repository will keep the default version.

                                            "},{"location":"user-guide/manage-branches/#build-branch","title":"Build Branch","text":"

                                            In order to build branch from the latest commit, do the following:

                                            1. Navigate to the Branches block by clicking the component name link in the Components list.
                                            2. Select the options icon related to the necessary branch and then select Build:

                                              Build branch

                                            The pipeline run status is displayed near the branch name in the Branches block:

                                            Pipeline run status in EDP Portal

                                            The corresponding item appears on the Tekton Dashboard in the PipelineRuns section:

                                            Pipeline run status in Tekton

                                            "},{"location":"user-guide/manage-branches/#delete-branch","title":"Delete Branch","text":"

                                            Note

                                            The default master branch cannot be removed.

                                            In order to delete the added branch with the corresponding record in the EDP Portal database, do the following:

                                            1. Navigate to the Branches block by clicking the component name link in the Components list.
                                            2. Select the options icon related to the necessary branch and then select Delete:

                                              Delete branch

                                            "},{"location":"user-guide/manage-branches/#related-articles","title":"Related Articles","text":"
                                            • Add Application
                                            • Add Library
                                            • Add Autotest
                                            "},{"location":"user-guide/marketplace/","title":"Marketplace Overview","text":"

                                            The EDP Marketplace offers a range of Templates, predefined tools and settings for creating software. These Templates speed up development, minimize errors, and ensure consistency. A key EDP Marketplace feature is customization. Organizations can create and share their own Templates, finely tuned to their needs. Each Template serves as a tailored blueprint of tools and settings.

                                            These tailored Templates include preset CI/CD pipelines, automating your development workflows. From initial integration to final deployment, these processes are efficiently managed. Whether for new applications or existing ones, these templates enhance processes, save time, and ensure consistency.

                                            To see the Marketplace section, navigate to the Main menu -> EDP -> Marketplace. General look of the Marketplace section is described below:

                                            Marketplace section (listed view)

                                            • Marketplace templates - all the components marketplace can offer;
                                            • Template properties - the item summary that shows the type, category, language, framework, build tool and maturity;
                                            • Enable/disable filters - allows searching by the item name or the namespace it is available in;
                                            • Change view - allows switching from the listed view to the tiled one and vice versa. See the screenshot below for details.

                                            There is also a possibility to switch into the tiled view instead of the listed one:

                                            Marketplace section (tiled view)

                                            To view the details of a marketplace item, simply click on its name:

                                            Item details

                                            The details window shows supplemental information, such as the item's author, keywords, release version, and the link to the repository it is located in. The window also contains the Create from template button that allows users to create the component from the chosen template. The procedure of creating new components is described on the Add Component via Marketplace page.

                                            "},{"location":"user-guide/marketplace/#related-articles","title":"Related Articles","text":"
                                            • Add Component via Marketplace
                                            • Add Application
                                            • Add Library
                                            • Add Infrastructure
                                            "},{"location":"user-guide/opa-stages/","title":"Use Open Policy Agent","text":"

                                            Open Policy Agent (OPA) is a policy engine that provides:

                                            • High-level declarative policy language Rego;
                                            • API and tooling for policy execution.

                                            EPAM Delivery Platform provides Open Policy Agent support, allowing to work with Open Policy Agent bundles that are processed by means of stages in the Code Review and Build pipelines. These pipelines are expected to be created after the Rego OPA Library is added.

                                            "},{"location":"user-guide/opa-stages/#code-review-pipeline-stages","title":"Code Review Pipeline Stages","text":"

                                            In the Code Review pipeline, the following stages are available:

                                            1. checkout stage, a standard step during which all files are checked out from a selected branch of the Git repository.

                                            2. tests stage containing a script that performs the following actions:

                                              2.1. Runs policy tests.

                                              2.2. Converts OPA test results into JUnit format.

                                              2.3. Publishes JUnit-formatted results to Jenkins.

                                            "},{"location":"user-guide/opa-stages/#build-pipeline-stages","title":"Build Pipeline Stages","text":"

                                            In the Build pipeline, the following stages are available:

                                            1. checkout stage, a standard step during which all files are checked out from a selected branch of the Git repository.

                                            2. get-version optional stage, a step where library version is determined either via:

                                              2.1. Standard EDP versioning functionality.

                                              2.2. Manually specified version. In this case, a .manifest file in the root directory MUST be provided. The file must contain a JSON document with a revision field. Minimal example: { \"revision\": \"1.0.0\" }. An illustrative sketch of a stage reading this file is provided after the list.

                                            3. tests stage containing a script that performs the following actions:

                                              3.1. Runs policy tests.

                                              3.2. Converts OPA test results into JUnit format.

                                              3.3. Publishes JUnit-formatted results to Jenkins.

                                            4. git-tag stage, a standard step where git branch is tagged with a version.
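
                                            For illustration, reading the manually specified version from the .manifest file could look like the sketch below. The stage name is illustrative, readJSON is assumed to be provided by the Jenkins Pipeline Utility Steps plugin, and the application version property is described in the EDP Pipeline Framework section; this is not the EDP implementation:

                                            GetVersionFromManifest.groovy (illustrative sketch)

                                            import com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"get-version-manifest-example\", buildTool = [\"maven\"], type = [ProjectType.LIBRARY])\nclass GetVersionFromManifest {\n    Script script\n    void run(context) {\n        // readJSON is assumed to come from the Pipeline Utility Steps plugin\n        def manifest = script.readJSON(file: '.manifest')\n        // pass the manually specified revision to the downstream stages through the context\n        context.application.version = manifest.revision\n    }\n}\nreturn GetVersionFromManifest\n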

                                            "},{"location":"user-guide/opa-stages/#related-articles","title":"Related Articles","text":"
                                            • EDP Pipeline Framework
                                            "},{"location":"user-guide/pipeline-framework/","title":"EDP Pipeline Framework","text":"

                                            This chapter provides detailed information about the EDP pipeline framework concepts and parts, as well as the accurate data about the Code Review, Build and Deploy pipelines with the respective stages.

                                            "},{"location":"user-guide/pipeline-framework/#edp-pipeline-framework-overview","title":"EDP Pipeline Framework Overview","text":"

                                            Note

                                            The whole logic applies to Jenkins as it is the main tool for organizing the CI/CD processes.

                                            EDP pipeline framework basic

                                            The general EDP Pipeline Framework consists of several parts:

                                            • Jenkinsfile - a text file that keeps the definition of a Jenkins Pipeline and is checked into source control. Every job has its Jenkinsfile stored in the specific application repository and in Jenkins as plain text. The behavior logic of the pipelines can be easily customized by modifying the source code, which is always copied to the EDP repository after the EDP installation.

                                            Jenkinsfile example

                                            • Loading Shared Libraries - a part where every job loads libraries with the help of the shared libraries mechanism for Jenkins that allows creating reproducible pipelines, writing them uniformly, and managing the update process. There are two main libraries: the EDP Pipelines library with the common logic for the main pipelines (Code Review, Build, Deploy) and the EDP Stages library that keeps the description of the stages for every pipeline.
                                            • Run Stages - a part where the predefined default stages are launched.

                                            Pipeline script

                                            "},{"location":"user-guide/pipeline-framework/#cicd-jobs-comparison","title":"CI/CD Jobs Comparison","text":"

                                            Explore the CI and CD job comparison below. Please note that the order of the dynamic stages can be changed, whereas the order of the predefined stages in the reference pipeline cannot, i.e. only the predefined set of stages can be run.

                                            CI/CD jobs comparison

                                            "},{"location":"user-guide/pipeline-framework/#context","title":"Context","text":"

                                            Context - a variable that stores and transfers between stages all the parameters used by the pipeline during execution.

                                            1. The context type is \"Map\".
                                            2. Each stage has input and output context.
                                            3. Each stage has a mandatory input context.

                                            Note

                                            If the input context isn't transferred, the stage will fail.
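
                                            As an illustration of how a stage can read from and write to the context, consider the sketch below. The application name and version properties are described in the Code Review Pipeline Overview tables later on this page, while the stage name and the customData key are hypothetical:

                                            PrintInfoExample.groovy (illustrative sketch)

                                            import com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"print-info-example\", buildTool = [\"maven\"], type = [ProjectType.APPLICATION])\nclass PrintInfoExample {\n    Script script\n    void run(context) {\n        // read values that earlier stages have placed into the context map\n        script.echo \"Application: ${context.application.name}, version: ${context.application.version}\"\n        // add a value for downstream stages to consume (the key name is hypothetical)\n        context.customData = [checkedAt: new Date()]\n    }\n}\nreturn PrintInfoExample\n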

                                            "},{"location":"user-guide/pipeline-framework/#annotations-for-cicd-stages","title":"Annotations for CI/CD Stages","text":"

                                            Annotation for CI Stages:

                                            • The annotation type is \"Map\";
                                            • The annotation consists of the name, buildTool, and codebaseType.

                                            Annotation for CD Stages:

                                            • The annotation type is \"Map\";
                                            • The annotation consists of a name.
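
                                            The CI annotation format can be seen in the CustomSonar examples earlier on this page. A hedged sketch of a CD stage annotation is shown below; the com.epam.edp.stages.impl.cd.Stage package path is an assumption made by analogy with the CI package and should be verified against the EDP Stages library sources:

                                            CdStageExample.groovy (illustrative sketch)

                                            // the package path of the CD Stage annotation below is an assumption\nimport com.epam.edp.stages.impl.cd.Stage\n\n@Stage(name = \"cd-stage-example\")\nclass CdStageExample {\n    Script script\n    void run(context) {\n        script.sh \"echo 'CD stage body'\"\n    }\n}\nreturn CdStageExample\n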
                                            "},{"location":"user-guide/pipeline-framework/#code-review-pipeline","title":"Code Review Pipeline","text":"

                                            CodeReview() \u2013 a function that allows using the EDP implementation for the Code Review pipeline.

                                            Note

                                            All values of different parameters that are used during the pipeline execution are stored in the \"Map\" context.

                                            The Code Review pipeline consists of several steps:

                                            On the master:

                                            • Initialization of all objects (Platform, Job, Gerrit, Nexus, Sonar, Application, StageFactory) and loading of the default implementations of EDP stages.

                                            On a particular Jenkins agent that depends on the build tool:

                                            • Creating a workdir for the application sources;
                                            • Loading the build tool implementation for the particular application;
                                            • Running all the stages in a loop, either in parallel or one by one.
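
                                            As an illustration, a minimal Jenkinsfile that delegates to this function might look as follows. The edp-library-pipelines name is taken from the next section; the edp-library-stages name and the versions are assumptions that depend on the concrete EDP installation:

                                            Jenkinsfile (minimal sketch)

                                            @Library(['edp-library-stages', 'edp-library-pipelines']) _\n\nCodeReview()\n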
                                            "},{"location":"user-guide/pipeline-framework/#code-review-pipeline-overview","title":"Code Review Pipeline Overview","text":"

                                            Using in pipelines - @Library(['edp-library-pipelines@version'])

                                            The corresponding enums, interfaces, classes, and their methods can be used separately from the EDP Pipelines library function (please refer to Table 1 and Table 2).

                                            Table 1. Enums and Interfaces with the respective properties, methods, and examples.

                                            Enums Interfaces PlatformType: - OPENSHIFT - KUBERNETES JobType: - CODEREVIEW - BUILD - DEPLOY BuildToolType: - MAVEN - GRADLE - NPM - DOTNET Platform() - contains methods for working with platform CLI. At the moment only OpenShift is supported. Properties: Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Methods: getJsonPathValue(String k8s_kind, String k8s_kind_name, String jsonPath): return String value of specific parameter of particular object using jsonPath utility. Example: context.platform.getJsonPathValue(''cm'',''project-settings'', ''.data.username''). BuildTool() - contains methods for working with different buildTool from ENUM BuildToolType. Should be invoked on Jenkins build agents. Properties: Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Nexus object - Object of class Nexus. Methods: init: return parameters of buildTool that are needed for running stages. Example: context.buildTool = new BuildToolFactory().getBuildToolImpl(context.application.config.build_tool, this,context.nexus) context.buildTool.init().

                                            Table 2. Classes with the respective properties, methods, and examples.

                                            Classes Description (properties, methods, and examples) PlatformFactory() - Class that contains methods getting an implementation of CLI of the platform. At the moment OpenShift and Kubernetes are supported. Methods: getPlatformImpl(PlatformType platform, Script script): return Class Platform. Example: context.platform = new PlatformFactory().getPlatformImpl(PlatformType.OPENSHIFT, this). Application(String name, Platform platform, Script script) - Class that describes the application object. Properties: Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Platform platform - Object of a class Platform(). String name - Name for the application for creating an object. Map config - Map of configuration settings for the particular application that is loaded from config map project-settings. String version - Application version, initially empty. Is set on the get-version step. String deployableModule - The name of the deployable module for multi-module applications, initially empty. String buildVersion - Version of the built artifact, contains build number of Job initially empty. String deployableModuleDir - The name of deployable module directory for multi-module applications, initially empty. Array imageBuildArgs - List of arguments for building an application Docker image. Methods: setConfig(String gerrit_autouser, String gerrit_host, String gerrit_sshPort, String gerrit_project): set the config property with values from config map. Example: context.application = new Application(context.job, context.gerrit.project, context.platform, this) context.application.setConfig(context.gerrit.autouser, context.gerrit.host, context.gerrit.sshPort, context.gerrit.project) Job(type: JobType.value, platform: Platform, script: Script) - Class that describes the Gerrit tool. Properties: Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Platform platform - Object of a class Platform(). JobType.value type. String deployTemplatesDirectory - The name of the directory in application repository where deploy templates are located. It can be set for a particular Job through DEPLOY_TEMPLATES_DIRECTORY parameter. String edpName - The name of the EDP Project. Map stages - Contains all stages in JSON format that is retrieved from Jenkins job env variable. String envToPromote - The name of the environment for promoting images. Boolean promoteImages - Defines whether images should be promoted or not. Methods: getParameterValue(String parameter, String defaultValue = null): return parameter of ENV variable of Jenkins job. init(): set all the properties of the Job object. setDisplayName(String displayName): set display name of the Jenkins job. setDescription(String description, Boolean addDescription = false): set new or add to the existing description of the Jenkins job. printDebugInfo(Map context): print context info to the log of Jenkins' job. runStage(String stage_name, Map context): run the particular stage according to its name. Example: context.job = new Job(JobType.CODEREVIEW.value, context.platform, this) context.job.init() context.job.printDebugInfo(context) context.job.setDisplayName(\"test\") context.job.setDescription(\"Name: ${context.application.config.name}\") Gerrit(Job job, Platform platform, Script script) - Class that describes the Gerrit tool. 
Properties: Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().Job job - Object of a class Job().String credentialsId - Credential Id in Jenkins for Gerrit.String autouser - Username of an auto user in Gerrit for integration with Jenkins.String host - Gerrit host.String project - the project name of the built application.String branch - branch to build the application from.String changeNumber - change number of Gerrit commit.String changeName - change name of Gerrit commit.String refspecName - refspecName of Gerrit commit.String sshPort - Gerrit ssh port number.String patchsetNumber - patchsetNumber of Gerrit commit.Methods: init(): set all the properties of Gerrit object. Example: context.gerrit = new Gerrit(context.job, context.platform, this) context.gerrit.init() Nexus(Job job, Platform platform, Script script) - Class that describes the Nexus tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().Job job - Object of a class Job().String autouser - Username of an auto user in Nexus for integration with Jenkins.String credentialsId - Credential Id in Jenkins for Nexus.String host - Nexus host.String port - Nexus http(s) port.String repositoriesUrl - Base URL of repositories in Nexus.String restUrl - URL of Rest API.Methods:init(): set all the properties of Nexus objectExample: context.nexus = new Nexus(context.job, context.platform, this) context.nexus.init() Sonar(Job job, Platform platform, Script script) - Class that describes the Sonar tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().Job job - Object of a class Job().String route - External route of the sonar application.Methods:init(): set all the properties of Sonar objectExample: context.sonar = new Sonar(context.job, context.platform, this) context.sonar.init()"},{"location":"user-guide/pipeline-framework/#code-review-pipeline-stages","title":"Code Review Pipeline Stages","text":"

                                            Each EDP stage implementation has a run method that takes the \"Map\" context with different keys as an input parameter. Some stages implement the logic for several build tools and application types, while others are specific.

                                            The Code Review pipeline includes the following default stages: Checkout \u2192 Gerrit Checkout \u2192 Compile \u2192 Tests \u2192 Sonar.

                                            Info

                                            To get the full description of every stage, please refer to the EDP Stages Framework section.

                                            "},{"location":"user-guide/pipeline-framework/#how-to-redefine-or-extend-the-edp-pipeline-stages-library","title":"How to Redefine or Extend the EDP Pipeline Stages Library","text":"

                                            Inspect the points below to redefine or extend the EDP Pipeline Stages Library:

                                            • Create a \u201cstage\u201d folder in your application repository.
                                            • Create a Groovy file with a meaningful name for the custom stage description. For instance \u2013 CustomBuildMavenApplication.groovy.
                                            • Describe the stage logic.

                                            Redefinition:

                                            import com.epam.edp.stages.ProjectType\nimport com.epam.edp.stages.Stage\n@Stage(name = \"compile\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass CustomBuildMavenApplication {\nScript script\nvoid run(context) {\nscript.sh \"echo 'Your custom logic of the stage'\"\n}\n}\nreturn CustomBuildMavenApplication\n

                                            Extension:

                                            import com.epam.edp.stages.ProjectType\nimport com.epam.edp.stages.Stage\n@Stage(name = \"new-stage\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass NewStageMavenApplication {\nScript script\nvoid run(context) {\nscript.sh \"echo 'Your custom logic of the stage'\"\n}\n}\nreturn NewStageMavenApplication\n
                                            "},{"location":"user-guide/pipeline-framework/#using-edp-stages-library-in-the-pipeline","title":"Using EDP Stages Library in the Pipeline","text":"

                                            To use the EDP stages, the created pipeline should meet certain requirements; that's why a developer has to do the following:

                                            • import library - @Library(['edp-library-stages'])
                                            • import StageFactory class - import com.epam.edp.stages.StageFactory
                                            • define context Map \u2013 context = [:]
                                            • define stagesFactory instance and load EDP stages:
                                              context.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n

                                            After that, any EDP stage can be run beforehand by defining the necessary context: context.factory.getStage(\"checkout\",\"maven\",\"application\").run(context)

                                            For instance, the pipeline can look like:

                                            @Library(['edp-library-stages']) _\n\nimport com.epam.edp.stages.StageFactory\nimport org.apache.commons.lang.RandomStringUtils\n\ncontext = [:]\n\nnode('maven') {\ncontext.workDir = new File(\"/tmp/${RandomStringUtils.random(10, true, true)}\")\ncontext.workDir.deleteDir()\n\ncontext.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n\ncontext.gerrit = [:]\ncontext.application = [:]\ncontext.application.config = [:]\ncontext.buildTool = [:]\ncontext.nexus = [:]\n\n\n\nstage(\"checkout\") {\ncontext.gerrit.branch = \"master\"\ncontext.gerrit.credentialsId = \"jenkins\"\ncontext.application.config.cloneUrl = \"ssh://jenkins@gerrit:32092/sit-718-cloned-java-maven-project\"\ncontext.factory.getStage(\"checkout\",\"maven\",\"application\").run(context)\n}\n\n\nstage(\"compile\") {\ncontext.buildTool.command = \"mvn\"\ncontext.nexus.credentialsId = \"nexus\"\ncontext.factory.getStage(\"compile\",\"maven\",\"application\").run(context)\n}\n}\n

                                            Or in a declarative way:

                                            @Library(['edp-library-stages']) _\n\nimport com.epam.edp.stages.StageFactory\nimport org.apache.commons.lang.RandomStringUtils\n\ncontext = [:]\n\npipeline {\nagent { label 'maven' }\nstages {\nstage('Init'){\nsteps {\nscript {\ncontext.workDir = new File(\"/tmp/${RandomStringUtils.random(10, true, true)}\")\ncontext.workDir.deleteDir()\ncontext.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n\ncontext.gerrit = [:]\ncontext.application = [:]\ncontext.application.config = [:]\ncontext.buildTool = [:]\ncontext.nexus = [:]\n}\n}\n}\n\nstage(\"Checkout\") {\nsteps {\nscript {\ncontext.gerrit.branch = \"master\"\ncontext.gerrit.credentialsId = \"jenkins\"\ncontext.application.config.cloneUrl = \"ssh://jenkins@gerrit:32092/sit-718-cloned-java-maven-project\"\ncontext.factory.getStage(\"checkout\",\"maven\",\"application\").run(context)\n}\n}\n}\n\nstage('Compile') {\nsteps {\nscript {\ncontext.buildTool.command = \"mvn\"\ncontext.nexus.credentialsId = \"nexus\"\ncontext.factory.getStage(\"compile\",\"maven\",\"application\").run(context)\n}\n}\n}\n}\n}\n
                                            "},{"location":"user-guide/pipeline-framework/#build-pipeline","title":"Build Pipeline","text":"

                                            Build() \u2013 a function that allows using the EDP implementation for the Build pipeline. All parameter values used during the pipeline execution are stored in the \"Map\" context.
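                                            Because Build() encapsulates the whole flow, a typical Build pipeline definition stays minimal. The following sketch is illustrative and assumes the edp-library-stages and edp-library-pipelines shared libraries are configured in Jenkins under these names:

                                            @Library(['edp-library-stages', 'edp-library-pipelines']) _\n\n// Delegate the run to the EDP Build pipeline implementation;\n// all parameter values are resolved from the job environment into the context map.\nBuild()\n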

                                            The Build pipeline consists of several steps:

                                            On the master:

                                            • Initialization of all objects (Platform, Job, Gerrit, Nexus, Sonar, Application, StageFactory) and loading default implementations of EDP stages.

                                            On a particular Jenkins agent that depends on the build tool:

                                            • Creating a workdir for the application sources;
                                            • Loading the build tool implementation for the particular application;
                                            • Running all the stages in a loop, either in parallel or one by one.
                                            "},{"location":"user-guide/pipeline-framework/#build-pipeline-overview","title":"Build Pipeline Overview","text":"

                                            Using in pipelines - @Library(['edp-library-pipelines@version'])

                                            The corresponding enums, interfaces, classes, and their methods can be used separately from the EDP Pipelines library function (please refer to Table 3 and Table 4).

                                            Table 3. Enums and Interfaces with the respective properties, methods, and examples. Enums Interfaces PlatformType:- OPENSHIFT- KUBERNETESJobType:- CODEREVIEW- BUILD- DEPLOYBuildToolType:- MAVEN- GRADLE- NPM- DOTNET Platform() - contains methods for working with platform CLI. At the moment only OpenShift is supported.Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Methods:getJsonPathValue(String k8s_kind, String k8s_kind_name, String jsonPath): return String value of specific parameter of particular object using jsonPath utility.Example:context.platform.getJsonPathValue(\"cm\",\"project-settings\",\".data.username\")BuildTool() - contains methods for working with different buildTool from ENUM BuildToolType. Should be invoked on Jenkins build agents.Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Nexus object - Object of class Nexus. See description below:Methods:init: return parameters of buildTool that are needed for running stages.Example:context.buildTool = new BuildToolFactory().getBuildToolImpl(context.application.config.build_tool, this,context.nexus)context.buildTool.init()
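                                            For readability, the Platform and BuildTool examples from the table above can also be written out as the short sketch below; it assumes it runs inside a pipeline where the context map and its application and nexus entries are already initialized:

                                            // Read a value from the project-settings config map via the jsonPath utility\ndef username = context.platform.getJsonPathValue(\"cm\", \"project-settings\", \".data.username\")\n\n// Resolve the build tool implementation for the application (run on a Jenkins build agent)\ncontext.buildTool = new BuildToolFactory().getBuildToolImpl(context.application.config.build_tool, this, context.nexus)\ncontext.buildTool.init()\n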

                                            Table 4. Classes with the respective properties, methods, and examples.

                                            Classes Description (properties, methods, and examples) PlatformFactory() - Class that contains methods getting an implementation of CLI of the platform. At the moment OpenShift and Kubernetes are supported. Methods:getPlatformImpl(PlatformType platform, Script script): return Class PlatformExample:context.platform = new PlatformFactory().getPlatformImpl(PlatformType.OPENSHIFT, this) Application(String name, Platform platform, Script script) - Class that describes the application object. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().String name - Name for the application for creating an object.Map config - Map of configuration settings for the particular application that is loaded from config map project-settings.String version - Application version, initially empty. Is set on the get-version step.String deployableModule - The name of the deployable module for multi-module applications, initially empty.String buildVersion - Version of the built artifact, contains build number of Job initially empty.String deployableModuleDir - The name of deployable module directory for multi-module applications, initially empty.Array imageBuildArgs - List of arguments for building the application Docker image.Methods:setConfig(String gerrit_autouser, String gerrit_host, String gerrit_sshPort, String gerrit_project): set the config property with values from config map.Example:context.application = new Application(context.job, context.gerrit.project, context.platform, this) context.application.setConfig(context.gerrit.autouser, context.gerrit.host, context.gerrit.sshPort, context.gerrit.project) Job(type: JobType.value, platform: Platform, script: Script) - Class that describes the Gerrit tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().JobType.value type.String deployTemplatesDirectory - The name of the directory in application repository, where deploy templates are located. It can be set for a particular Job through DEPLOY_TEMPLATES_DIRECTORY parameter.String edpName - The name of the EDP Project.Map stages - Contains all stages in JSON format that is retrieved from Jenkins job env variable.String envToPromote - The name of the environment for promoting images.Boolean promoteImages - Defines whether images should be promoted or not.Methods:getParameterValue(String parameter, String defaultValue = null): return parameter of ENV variable of Jenkins job.init(): set all the properties of the Job object.setDisplayName(String displayName): set display name of the Jenkins job.setDescription(String description, Boolean addDescription = false): set new or add to the existing description of the Jenkins job.printDebugInfo(Map context): print context info to the log of Jenkins' job.runStage(String stage_name, Map context): run the particular stage according to its name.Example:context.job = new Job(JobType.CODEREVIEW.value, context.platform, this) context.job.init() context.job.printDebugInfo(context) context.job.setDisplayName(\"test\") context.job.setDescription(\"Name: ${context.application.config.name}\") Gerrit(Job job, Platform platform, Script script) - Class that describes the Gerrit tool. 
Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().Job job - Object of a class Job().String credentialsId - Credentials Id in Jenkins for Gerrit.String autouser - Username of an auto user in Gerrit for integration with Jenkins.String host - Gerrit host.String project - the project name of the built application.String branch - branch to build an application from.String changeNumber - change number of Gerrit commit.String changeName - change name of Gerrit commit.String refspecName - refspecName of Gerrit commit.String sshPort - Gerrit ssh port number.String patchsetNumber - patchsetNumber of Gerrit commit.Methods:init(): set all the properties of Gerrit objectExample: context.gerrit = new Gerrit(context.job, context.platform, this) context.gerrit.init() Nexus(Job job, Platform platform, Script script) - Class that describes the Nexus tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().Job job - Object of a class Job().String autouser - Username of an auto user in Nexus for integration with Jenkins.String credentialsId - Credentials Id in Jenkins for Nexus.String host - Nexus host.String port - Nexus http(s) port.String repositoriesUrl - Base URL of repositories in Nexus.String restUrl - URL of Rest API.Methods:init(): set all the properties of the Nexus object.Example:context.nexus = new Nexus(context.job, context.platform, this) context.nexus.init() Sonar(Job job, Platform platform, Script script) - Class that describes the Sonar tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().Job job - Object of a class Job().String route - External route of the sonar application.Methods:init(): set all the properties of Sonar object.Example:context.sonar = new Sonar(context.job, context.platform, this) context.sonar.init()"},{"location":"user-guide/pipeline-framework/#build-pipeline-stages","title":"Build Pipeline Stages","text":"

                                            Each EDP stage implementation has a run method that takes a context map with different keys as an input parameter. Some stages implement the logic for several build tools and application types, while others are specific.

                                            The Build pipeline includes the following default stages: Checkout \u2192 Gerrit Checkout \u2192 Compile \u2192 Get version \u2192 Tests \u2192 Sonar \u2192 Build \u2192 Build Docker Image \u2192 Push \u2192 Git tag.

                                            Info

                                            To get the full description of every stage, please refer to the EDP Stages Framework section.

                                            "},{"location":"user-guide/pipeline-framework/#how-to-redefine-or-extend-edp-pipeline-stages-library","title":"How to Redefine or Extend EDP Pipeline Stages Library","text":"

                                            Inspect the points below to redefine or extend the EDP Pipeline Stages Library:

                                            • Create a \u201cstage\u201d folder in the application repository.
                                            • Create a Groovy file with a meaningful name for the custom stage description. For instance \u2013 CustomBuildMavenApplication.groovy.
                                            • Describe the stage logic.

                                            Redefinition:

                                            import com.epam.edp.stages.ProjectType\nimport com.epam.edp.stages.Stage\n@Stage(name = \"compile\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass CustomBuildMavenApplication {\nScript script\nvoid run(context) {\nscript.sh \"echo 'Your custom logic of the stage'\"\n}\n}\nreturn CustomBuildMavenApplication\n

                                            Extension:

                                            import com.epam.edp.stages.ProjectType\nimport com.epam.edp.stages.Stage\n@Stage(name = \"new-stage\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass NewStageMavenApplication {\nScript script\nvoid run(context) {\nscript.sh \"echo 'Your custom logic of the stage'\"\n}\n}\nreturn NewStageMavenApplication\n
                                            "},{"location":"user-guide/pipeline-framework/#using-edp-stages-library-in-the-pipeline_1","title":"Using EDP Stages Library in the Pipeline","text":"

                                            To use the EDP stages, the created pipeline should meet certain requirements; that's why a developer has to do the following:

                                            • import library - @Library(['edp-library-stages'])
                                            • import StageFactory class - import com.epam.edp.stages.StageFactory
                                            • define context Map \u2013 context = [:]
                                            • define stagesFactory instance and load EDP stages:
                                            context.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n

                                            After that, any EDP stage can be run beforehand by defining the required context: context.factory.getStage(\"checkout\",\"maven\",\"application\").run(context)

                                            For instance, the pipeline can look like:

                                            @Library(['edp-library-stages']) _\n\nimport com.epam.edp.stages.StageFactory\nimport org.apache.commons.lang.RandomStringUtils\n\ncontext = [:]\n\nnode('maven') {\ncontext.workDir = new File(\"/tmp/${RandomStringUtils.random(10, true, true)}\")\ncontext.workDir.deleteDir()\n\ncontext.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n\ncontext.gerrit = [:]\ncontext.application = [:]\ncontext.application.config = [:]\ncontext.buildTool = [:]\ncontext.nexus = [:]\n\n\n\nstage(\"checkout\") {\ncontext.gerrit.branch = \"master\"\ncontext.gerrit.credentialsId = \"jenkins\"\ncontext.application.config.cloneUrl = \"ssh://jenkins@gerrit:32092/sit-718-cloned-java-maven-project\"\ncontext.factory.getStage(\"checkout\",\"maven\",\"application\").run(context)\n}\n\n\nstage(\"compile\") {\ncontext.buildTool.command = \"mvn\"\ncontext.nexus.credentialsId = \"nexus\"\ncontext.factory.getStage(\"compile\",\"maven\",\"application\").run(context)\n}\n}\n

                                            Or in a declarative way:

                                            @Library(['edp-library-stages']) _\n\nimport com.epam.edp.stages.StageFactory\nimport org.apache.commons.lang.RandomStringUtils\n\ncontext = [:]\n\npipeline {\nagent { label 'maven' }\nstages {\nstage('Init'){\nsteps {\nscript {\ncontext.workDir = new File(\"/tmp/${RandomStringUtils.random(10, true, true)}\")\ncontext.workDir.deleteDir()\ncontext.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n\ncontext.gerrit = [:]\ncontext.application = [:]\ncontext.application.config = [:]\ncontext.buildTool = [:]\ncontext.nexus = [:]\n}\n}\n}\n\nstage(\"Checkout\") {\nsteps {\nscript {\ncontext.gerrit.branch = \"master\"\ncontext.gerrit.credentialsId = \"jenkins\"\ncontext.application.config.cloneUrl = \"ssh://jenkins@gerrit:32092/sit-718-cloned-java-maven-project\"\ncontext.factory.getStage(\"checkout\",\"maven\",\"application\").run(context)\n}\n}\n}\n\nstage('Compile') {\nsteps {\nscript {\ncontext.buildTool.command = \"mvn\"\ncontext.nexus.credentialsId = \"nexus\"\ncontext.factory.getStage(\"compile\",\"maven\",\"application\").run(context)\n}\n}\n}\n}\n}\n
                                            "},{"location":"user-guide/pipeline-framework/#edp-library-stages-description","title":"EDP Library Stages Description","text":"

                                            Using in pipelines - @Library(['edp-library-stages@version'])

                                            The corresponding enums, classes, interfaces and their methods can be used separately from the EDP Stages library function (please refer to Table 5).

                                            Table 5. Enums and Classes with the respective properties, methods, and examples.

                                            Enums Classes ProjectType: - APPLICATION - AUTOTESTS - LIBRARY StageFactory() - Class that contains methods getting an implementation of the particular stage either EDP from shared library or custom from application repository.Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Map stages - Map of stages implementations.Methods:loadEdpStages(): return a list of Classes that describes EDP stages implementations.loadCustomStages(String directory): return a list of Classes that describes EDP custom stages from application repository from \"directory\". The \"directory\" should have an absolute path to files with classes of custom stages implementations. Should be run from a Jenkins agent.add(Class clazz): register class for some particular stage in stages map of StageFactory class.getStage(String name, String buildTool, String type): return an object of the class for a particular stage from stages property based on stage name and buildTool, type of application.Example:context.factory = new StageFactory(script: this)context.factory.loadEdpStages().each() { context.factory.add(it) }context.factory.loadCustomStages(\"${context.workDir}/stages\").each() { context.factory.add(it) }context.factory.getStage(stageName.toLowerCase(),context.application.config.build_tool.toLowerCase(),context.application.config.type).run(context)"},{"location":"user-guide/pipeline-framework/#edp-stages-framework","title":"EDP Stages Framework","text":"

                                            Each EDP stage implementation has a run method that takes a context map with different keys as an input parameter. Some stages implement the logic for several build tools and application types, while others are specific.

                                            Inspect Table 6 and Table 7, which contain the full description of every stage that can be included in the Code Review and Build pipelines: Checkout \u2192 Gerrit Checkout \u2192 Compile \u2192 Get version \u2192 Tests \u2192 Sonar \u2192 Build \u2192 Build Docker Image \u2192 Push \u2192 Git tag.

                                            Table 6. The Checkout, Gerrit Checkout, Compile, Get version, and Tests stages description.

                                            Checkout Gerrit Checkout Compile Get version Tests name = \"checkout\",buildTool = [\"maven\", \"npm\", \"dotnet\",\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- StageFactory context.factory- String context.gerrit.branch- String context.gerrit.credentialsId- String context.application.config.cloneUrl name = \"gerrit-checkout\",buildTool = [\"maven\", \"npm\", \"dotnet\",\"gradle\"]type = [ProjectType.APPLICATION, ProjectType.AUTOTESTS, ProjectType.LIBRARY]context required:- String context.workDir- StageFactory context.factory- String context.gerrit.changeName- String context.gerrit.refspecName- String context.gerrit.credentialsId- String context.application.config.cloneUrl name = \"compile\"buildTool = [\"dotnet\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.buildTool.sln_filenameoutput:- String context.buildTool.sln_filenamebuildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.buildTool.command- String context.nexus.credentialsIdbuildTool = [\"maven\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.buildTool.command- String context.nexus.credentialsIdbuildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.buildTool.groupRepository name = \"get-version\"buildTool = [\"dotnet\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- Map(empty) context.application- String context.gerrit.branch- Job context.joboutput:-String context.application.deplyableModule- String context.application.deplyableModuleDir- String context.application.version- String context.application.buildVersionbuildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.buildTool.command- Job context.job- String context.gerrit.branchoutput:- String context.application.deplyableModuleDir- String context.application.version- String context.application.buildVersionbuildTool = [\"maven\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.buildTool.command- Job context.job- String context.gerrit.branchoutput:- String context.application.deplyableModule- String context.application.deplyableModuleDir- String context.application.version- String context.application.buildVersionbuildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- Job context.job- String context.gerrit.branchoutput:- String context.application.deplyableModuleDir- String context.application.version- String context.application.buildVersion name = \"tests\"buildTool = [\"dotnet\"]type = [ProjectType.APPLICATION]context required:- String context.workDirbuildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.buildTool.commandbuildTool = [\"maven\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.buildTool.commandtype = [ProjectType.AUTOTESTS]context required:- String context.workDir- String context.buildTool.command- String context.application.config.report_frameworkbuildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir

                                            Table 7. The Sonar, Build, Build Docker Image, Push, and Git tag stages description.

                                            Sonar Build Build Docker Image Push Git tag name = \"sonar\"buildTool = [\"dotnet\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.job.type- String context.application.name- String context.buildTool.sln_filename- String context.sonar.route- String context.gerrit.changeName(Only for codereview pipeline)- String context.gerrit.branch(Only for build pipeline)buildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.job.type- String context.nexus.credentialsId- String context.buildTool.command- String context.application.name- String context.sonarRoute- String context.gerrit.changeName(Only for codereview pipeline)- String context.gerrit.branch(Only for build pipeline)buildTool = [\"maven\"]type = [ProjectType.APPLICATION, ProjectType.AUTOTESTS, ProjectType.LIBRARY]context required:- String context.workDir- String context.job.type- String context.nexus.credentialsId- String context.application.name- String context.buildTool.command- String context.sonar.route- String context.gerrit.changeName(Only for codereview pipeline)- String context.gerrit.branch(Only for build pipeline)buildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.job.type- String context.sonar.route- String context.application.name- String context.gerrit.changeName(Only for codereview pipeline)- String context.gerrit.branch(Only for build pipeline) name = \"build\"buildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.buildTool.command- String context.nexus.credentialsIdbuildTool = [\"maven\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.buildTool.command- String context.nexus.credentialsIdbuildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.buildTool.groupRepository name = \"build-image\"buildTool = [\"dotnet\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.application.deployableModule- String context.application.deployableModuleDir- String context.application.name- String context.application.config.language- String context.application.buildVersion- Boolean context.job.promoteImages- String context.job.envToPromotebuildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.application.deployableModule- String context.application.deployableModuleDir- String context.application.name- String context.application.config.language- String context.application.buildVersion- Boolean context.job.promoteImages- String context.job.envToPromotebuildTool = [\"maven\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.application.deployableModule- String context.application.deployableModuleDir- String context.application.name- String context.application.config.language- String context.application.buildVersion- Boolean context.job.promoteImages- String context.job.envToPromotebuildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.application.deployableModule- String context.application.deployableModuleDir- String context.application.name- String context.application.config.language- String context.application.buildVersion- Boolean context.job.promoteImages- String 
context.job.envToPromote name = \"push\"buildTool = [\"dotnet\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.gerrit.project- String context.buildTool.sln_filename- String context.buildTool.snugetApiKey- String context.buildTool.hostedRepositorybuildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.application.version- String context.buildTool.hostedRepository- String context. buildTool.settingsbuildTool = [\"maven\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.application.version- String context.buildTool.hostedRepository- String context.buildTool.commandbuildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.buildTool.hostedRepository- String context.gerrit.autouser name = \"git-tag\"buildTool = [\"maven\", \"npm\", \"dotnet\",\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.gerrit.credentialsId- String context.gerrit.sshPort- String context.gerrit.host- String context.gerrit.autouser- String context.application.buildVersion"},{"location":"user-guide/pipeline-framework/#deploy-pipeline","title":"Deploy Pipeline","text":"

                                            Deploy() \u2013 a function that allows using the EDP implementation for the Deploy pipeline. All parameter values used during the pipeline execution are stored in the \"Map\" context.
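                                            As with Build(), the default Deploy pipeline definition can stay minimal. The following sketch is illustrative and assumes the same shared library names that are used in the examples throughout this guide:

                                            @Library(['edp-library-stages', 'edp-library-pipelines']) _\n\n// Delegate the run to the EDP Deploy pipeline implementation;\n// environment creation, deployment, autotests, and manual gates are handled by the library stages.\nDeploy()\n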

                                            The deploy pipeline consists of several steps:

                                            On the master:

                                            • Initialization of all objects (Platform, Job, Gerrit, Nexus, StageFactory) and loading the default implementations of EDP stages;
                                            • Creating an environment if it doesn't exist;
                                            • Deploying the latest versions of the applications;
                                            • Running predefined manual gates.

                                            On a particular autotest Jenkins agent that depends on the build tool:

                                            • Creating a workdir for the autotest sources;
                                            • Running predefined autotests.
                                            "},{"location":"user-guide/pipeline-framework/#edp-library-pipelines-description","title":"EDP Library Pipelines Description","text":"

                                            Using in pipelines - @Library(['edp-library-pipelines@version'])

                                            The corresponding enums and interfaces with their methods can be used separately from the EDP Pipelines library function (please refer to Table 8 and Table 9).

                                            Table 8. Enums and Interfaces with the respective properties, methods, and examples.

                                            Enums Interfaces PlatformType:- OPENSHIFT- KUBERNETESJobType:- CODEREVIEW- BUILD- DEPLOYBuildToolType:- MAVEN- GRADLE- NPM- DOTNET Platform() - contains methods for working with platform CLI. At the moment only OpenShift is supported.Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Methods:getJsonPathValue(String k8s_kind, String k8s_kind_name, String jsonPath): return String value of specific parameter of particular object using jsonPath utility. Example: context.platform.getJsonPathValue(\"cm\",\"project-settings\",\".data.username\") BuildTool() - contains methods for working with different buildTool from ENUM BuildToolType. (Should be invoked on Jenkins build agents)Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Nexus object - Object of class Nexus.Methods:init: return parameters of buildTool that are needed for running stages. Example:context.buildTool = new BuildToolFactory().getBuildToolImpl(context.application.config.build_tool, this, context.nexus)context.buildTool.init()

                                            Table 9. Classes with the respective properties, methods, and examples.

                                            Classes Description (properties, methods, and examples) PlatformFactory() - Class that contains methods getting implementation of CLI of platform. At the moment OpenShift and Kubernetes are supported. Methods:getPlatformImpl(PlatformType platform, Script script): return Class PlatformExample: context.platform = new PlatformFactory().getPlatformImpl(PlatformType.OPENSHIFT, this) Application(String name, Platform platform, Script script) - Class that describe the application object. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Platform platform - Object of a class Platform()String name - Name for the application for creating objectMap config - Map of configuration settings for particular application that is loaded from config map project-settingsString version - Application version, initially empty. Is set on get-version step.String deployableModule - The name of deployable module for multi module applications, initially empty.String buildVersion - Version of built artifact, contains build number of Job initially emptyString deployableModuleDir - The name of deployable module directory for multi module applications, initially empty.Array imageBuildArgs - List of arguments for building application Docker imageMethods: setConfig(String gerrit_autouser, String gerrit_host, String gerrit_sshPort, String gerrit_project): set the config property with values from config mapExample: context.application = new Application(context.job, context.gerrit.project, context.platform, this) context.application.setConfig(context.gerrit.autouser, context.gerrit.host, context.gerrit.sshPort, context.gerrit.project) Job(type: JobType.value, platform: Platform, script: Script) - Class that describe the Gerrit tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\"Platform platform - Object of a class Platform().JobType.value type.String deployTemplatesDirectory - The name of the directory in application repository, where deploy templates are located. Can be set for particular Job through DEPLOY_TEMPLATES_DIRECTORY parameter.String edpName - The name of the EDP Project.Map stages - Contains all stages in JSON format that is retrieved from Jenkins job env variable.String envToPromote - The name of the environment for promoting images.Boolean promoteImages - Defines whether images should be promoted or not. Methods:getParameterValue(String parameter, String defaultValue = null): return parameter of ENV variable of Jenkins job. init(): set all the properties of Job object. setDisplayName(String displayName): set display name of the Jenkins job. setDescription(String description, Boolean addDescription = false): set new or add to existing description of the Jenkins job. printDebugInfo(Map context): print context info to log of Jenkins job. runStage(String stage_name, Map context): run the particular stage according to its name. Example: context.job = new Job(JobType.DEPLOY.value, context.platform, this) context.job.init() context.job.printDebugInfo(context) context.job.setDisplayName(\"test\") context.job.setDescription(\"Name: ${context.application.config.name}\") Gerrit(Job job, Platform platform, Script script) - Class that describe the Gerrit tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform(). 
Job job - Object of a class Job().String credentialsId - Credential Id in Jenkins for Gerrit. String autouser - Username of autouser in Gerrit for integration with Jenkins. String host - Gerrit host. String project - project name of built application. String branch - branch to build application from. String changeNumber - change number of Gerrit commit. String changeName - change name of Gerrit commit. String refspecName - refspecName of Gerrit commit. String sshPort - gerrit ssh port number. String patchsetNumber - patchsetNumber of Gerrit commit.Methods:init(): set all the properties of Gerrit object. Example:context.gerrit = new Gerrit(context.job, context.platform, this)context.gerrit.init(). Nexus(Job job, Platform platform, Script script) - Class that describe the Nexus tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Platform platform - Object of a class Platform(). Job job - Object of a class Job(). String autouser - Username of autouser in Nexus for integration with Jenkins. String credentialsId - Credential Id in Jenkins for Nexus. String host - Nexus host. String port - Nexus http(s) port. String repositoriesUrl - Base URL of repositories in Nexus. String restUrl - URL of Rest API. Methods:init(): set all the properties of Nexus object. Example: context.nexus = new Nexus(context.job, context.platform, this) context.nexus.init()."},{"location":"user-guide/pipeline-framework/#edp-library-stages-description_1","title":"EDP Library Stages Description","text":"

                                            Using in pipelines - @Library(['edp-library-stages@version'])

                                            The corresponding classes with methods can be used separately from the EDP Pipelines library function (please refer to Table 10).

                                            Table 10. Classes with the respective properties, methods, and examples.

                                            Classes Description (properties, methods, and examples) StageFactory() - Class that contains methods getting implementation of particular stage either EDP from shared library or custom from application repository. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\"Map stages - Map of stages implementationsMethods:loadEdpStages(): return list of Classes that describes EDP stages implementationsloadCustomStages(String directory): return list of Classes that describes EDP custom stages from application repository from \"directory\". The \"directory\" should be absolute path to files with classes of custom stages implementations. Should be run from Jenkins agent.add(Class clazz): register class for some particular stage in stages map of StageFactory classgetStage(String name, String buildTool, String type): return object of the class for particular stage from stages property based on stage name and buildTool, type of applicationExample:context.factory = new StageFactory(script: this)context.factory.loadEdpStages().each() { context.factory.add(it) }context.factory.loadCustomStages(\"${context.workDir}/stages\").each() { context.factory.add(it) }context.factory.getStage(stageName.toLowerCase(),context.application.config.build_tool.toLowerCase(),context.application.config.type).run(context)."},{"location":"user-guide/pipeline-framework/#deploy-pipeline-stages","title":"Deploy Pipeline Stages","text":"

                                            Each EDP stage implementation has a run method that takes a context map with different keys as an input parameter. Some stages implement the logic for several build tools and application types, while others are specific.

                                            The stages of the Deploy pipeline are independent of the build tool and application type. Find the full description of every stage below (see Table 11): Deploy \u2192 Automated tests \u2192 Promote Images.

                                            Table 11. The Deploy, Automated tests, and Promote Images stages description.

                                            Deploy Automated tests Promote Images name = \"deploy\"buildTool = nulltype = nullcontext required:\u2022 String context.workDir\u2022 StageFactory context.factory\u2022 String context.gerrit.autouser\u2022 String context.gerrit.host\u2022 String context.application.config.cloneUrl\u2022 String context.jenkins.token\u2022 String context.job.edpName\u2022 String context.job.buildUrl\u2022 String context.job.jenkinsUrl\u2022 String context.job.metaProject\u2022 List context.job.applicationsList [['name':'application1_name','version':'application1_version],...]\u2022 String context.job.deployTemplatesDirectoryoutput:\u2022 List context.job.updatedApplicaions [['name':'application1_name','version':'application1_version],...] name = \"automation-tests\", buildTool = null, type = nullcontext required:- String context.workDir- StageFactory context.factory- String context.gerrit.credentialsId- String context.autotest.config.cloneUrl- String context.autotest.name- String context.job.stageWithoutPrefixName- String context.buildTool.settings- String context.autotest.config.report_framework name = \"promote-images\"buildTool = nulltype = nullcontext required:- String context.workDir- String context.buildTool.sln_filename- List context.job.updatedApplicaions [['name':'application1_name','version':'application1_version],...]"},{"location":"user-guide/pipeline-framework/#how-to-redefine-or-extend-edp-pipeline-stages-library_1","title":"How to Redefine or Extend EDP Pipeline Stages Library","text":"

                                            Info

                                            Currently, the redefinition of Deploy pipeline stages is prohibited.

                                            "},{"location":"user-guide/pipeline-framework/#using-edp-library-stages-in-the-pipeline","title":"Using EDP Library Stages in the Pipeline","text":"

                                            To use the EDP stages, the created pipeline should meet certain requirements; that's why a developer has to do the following:

                                            • import libraries - @Library(['edp-library-stages', 'edp-library-pipelines']) _
                                            • import reference EDP classes (see the example below)
                                            • define context Map \u2013 context = [:]
                                            • define reference \"init\" stage

                                            After that, any EDP stage can be run beforehand by defining the required context: context.job.runStage(\"Deploy\", context).

                                            For instance, the pipeline can look like:

                                            @Library(['edp-library-stages', 'edp-library-pipelines']) _\n\nimport com.epam.edp.stages.StageFactory\nimport com.epam.edp.platform.PlatformFactory\nimport com.epam.edp.platform.PlatformType\nimport com.epam.edp.JobType\n\ncontext = [:]\n\nnode('master') {\nstage(\"Init\") {\ncontext.platform = new PlatformFactory().getPlatformImpl(PlatformType.OPENSHIFT, this)\ncontext.job = new com.epam.edp.Job(JobType.DEPLOY.value, context.platform, this)\ncontext.job.init()\ncontext.job.initDeployJob()\nprintln(\"[JENKINS][DEBUG] Created object job with type - ${context.job.type}\")\n\ncontext.nexus = new com.epam.edp.Nexus(context.job, context.platform, this)\ncontext.nexus.init()\n\ncontext.jenkins = new com.epam.edp.Jenkins(context.job, context.platform, this)\ncontext.jenkins.init()\n\ncontext.gerrit = new com.epam.edp.Gerrit(context.job, context.platform, this)\ncontext.gerrit.init()\n\ncontext.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n\ncontext.environment = new com.epam.edp.Environment(context.job.deployProject, context.platform, this)\ncontext.job.printDebugInfo(context)\ncontext.job.setDisplayName(\"${currentBuild.displayName}-${context.job.deployProject}\")\n\ncontext.job.generateInputDataForDeployJob()\n}\n\nstage(\"Pre Deploy Custom stage\") {\nprintln(\"Some custom pre deploy logic\")\n}\n\ncontext.job.runStage(\"Deploy\", context)\n\nstage(\"Post Deploy Custom stage\") {\nprintln(\"Some custom post deploy logic\")\n}\n}\n

                                            Or in a declarative way:

                                            @Library(['edp-library-stages', 'edp-library-pipelines']) _\n\nimport com.epam.edp.stages.StageFactory\nimport com.epam.edp.platform.PlatformFactory\nimport com.epam.edp.platform.PlatformType\nimport com.epam.edp.JobType\n\ncontext = [:]\n\npipeline {\nagent { label 'master'}\nstages {\nstage('Init') {\nsteps {\nscript {\ncontext.platform = new PlatformFactory().getPlatformImpl(PlatformType.OPENSHIFT, this)\ncontext.job = new com.epam.edp.Job(JobType.DEPLOY.value, context.platform, this)\ncontext.job.init()\ncontext.job.initDeployJob()\nprintln(\"[JENKINS][DEBUG] Created object job with type - ${context.job.type}\")\n\ncontext.nexus = new com.epam.edp.Nexus(context.job, context.platform, this)\ncontext.nexus.init()\n\ncontext.jenkins = new com.epam.edp.Jenkins(context.job, context.platform, this)\ncontext.jenkins.init()\n\ncontext.gerrit = new com.epam.edp.Gerrit(context.job, context.platform, this)\ncontext.gerrit.init()\n\ncontext.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n\ncontext.environment = new com.epam.edp.Environment(context.job.deployProject, context.platform, this)\ncontext.job.printDebugInfo(context)\ncontext.job.setDisplayName(\"${currentBuild.displayName}-${context.job.deployProject}\")\n\ncontext.job.generateInputDataForDeployJob()\n}\n}\n}\nstage('Deploy') {\nsteps {\nscript {\ncontext.factory.getStage(\"deploy\").run(context)\n}\n}\n}\n\nstage('Custom stage') {\nsteps {\nprintln(\"Some custom logic\")\n}\n}\n}\n}\n
                                            "},{"location":"user-guide/pipeline-framework/#related-articles","title":"Related Articles","text":"
                                            • Add Application
                                            • Add Library
                                            • Add CD Pipeline
                                            • CI Pipeline Details
                                            • CD Pipeline Details
                                            • Customize CI Pipeline
                                            • Customize CD Pipeline
                                            • EDP Stages
                                            • Glossary
                                            • Use Terraform Library in EDP
                                            "},{"location":"user-guide/pipeline-stages/","title":"Pipeline Stages","text":"

                                            Get acquainted with the EDP CI/CD workflow and the stages description.

                                            "},{"location":"user-guide/pipeline-stages/#edp-cicd-workflow","title":"EDP CI/CD Workflow","text":"

                                            Within EDP, the pipeline framework comprises the following pipelines:

                                            • Code Review;
                                            • Build;
                                            • Deploy.

                                            Note

                                            Please refer to the EDP Pipeline Framework page for details.

                                            The diagram below shows the delivery path through these pipelines and the respective stages. Please be aware that stages may differ for different codebase types.

                                            stages

                                            "},{"location":"user-guide/pipeline-stages/#stages-description","title":"Stages Description","text":"

                                            The table below provides the details on all the stages in the EDP pipeline framework:

                                            Name Dependency Description Pipeline Application Library Autotest Source code Documentation init Initiates information gathering Create Release, Code Review, Build + + Build.groovy checkout Performs for all files the checkout from a selected branch of the Git repository. For the main branch - from HEAD, for code review - from the commit Create Release, Build + + Checkout.groovy sast Launches vulnerability testing via Semgrep scanner. Pushes a vulnerability report to the DefectDojo. Build + Security compile Compiles the code, includes individual groovy files for each type of app or lib (NPM, DotNet, Python, Maven, Gradle) Code Review, Build + + Compile tests Launches testing procedure, includes individual groovy files for each type of app or lib Code Review, Build + + + Tests sonar Launches testing via SonarQube scanner and includes individual groovy files for each type of app or lib Code Review, Build + + Sonar build Builds the application, includes individual groovy files for each type of app or lib (Go, Maven, Gradle, NPM) Code Review, Build + Build create-branch EDP create-release process Creates default branch in Gerrit during create and clone strategies Create Release + + + CreateBranch.groovy trigger-job EDP create-release process Triggers \"build\" job Create Release + + + TriggerJob.groovy gerrit-checkout Performs checkout to the current project branch in Gerrit Code Review + + + GerritCheckout.groovy commit-validate Optional in EDP Admin Console Takes Jira parameters, when \"Jira Integration\" is enabled for the project in the Admin Console. Code Review + + CommitValidate.groovy dockerfile-lint Launches linting tests for Dockerfile Code Review + LintDockerApplicationLibrary.groovy Use Dockerfile Linters for Code Review dockerbuild-verify \"Build\" stage (if there are no \"COPY\" layers in Dockerfile) Launches build procedure for Dockerfile without pushing an image to the repository Code Review + BuildDockerfileApplicationLibrary.groovy Use Dockerfile Linters for Code Review helm-lint Launches linting tests for deployment charts Code Review + LintHelmApplicationLibrary.groovy Use helm-lint for Code Review helm-docs Checks generated documentation for deployment charts Code Review + HelmDocsApplication.groovy Use helm-docs for Code Review helm-uninstall Helm release deletion step to clear Helm releases Deploy + HelmUninstall.groovy Helm release deletion semi-auto-deploy-input Provides auto deploy with timeout and manual deploy flow Deploy + SemiAutoDeployInput.groovy Semi Auto Deploy get-version Defines the versioning of the project depending on the versioning schema selected in Admin Console Build + + GetVersion terraform-plan AWS credentials added to Jenkins Checks Terraform version, and installs default version if necessary, and launches terraform init, returns AWS username which used for action, and terraform plan command is called with an output of results to .tfplan file Build + TerraformPlan.groovy Use Terraform library in EDP terraform-apply AWS credentials added to Jenkins, the \"Terraform-plan\" stage Checks Terraform version, and installs default version if necessary, and launches terraform init, launches terraform plan from saves before .tfplan file, asks to approve, and run terraform apply from .tfplan file Build + TerraformApply.groovy Use Terraform library in EDP build-image-from-dockerfile Platform: OpenShift Builds Dockerfile Build + + .groovy files for building Dockerfile image build-image-kaniko Platform: k8s Builds 
Dockerfile using the Kaniko tool Build + BuildImageKaniko.groovy push Pushes an artifact to the Nexus repository Build + + Push create-Jira-issue-metadata \"get-version\" stage Creates a temporary CR in the namespace and after that pushes Jira Integration data to Jira ticket, and delete CR Build + + JiraIssueMetadata.groovy ecr-to-docker DockerHub credentials added to Jenkins Copies the docker image from the ECR project registry to DockerHub via the Crane tool after it is built Build + EcrToDocker.groovy Promote Docker Images From ECR to Docker Hub git-tag \"Get-version\" stage Creates a tag in SCM for the current build Build + + GitTagApplicationLibrary.groovy deploy Deploys the application Deploy + Deploy.groovy manual Works with the manual approve to proceed Deploy + ManualApprove.groovy promote-images Promotes docker images to the registry Deploy + PromoteImage.groovy

                                            Note

                                            The Create Release pipeline is an internal EDP mechanism for adding, importing or cloning a codebase. It is not a part of the pipeline framework.

                                            "},{"location":"user-guide/pipeline-stages/#related-articles","title":"Related Articles","text":"
                                            • Manage Jenkins CI Job Provisioner
                                            • GitLab Webhook Configuration
                                            • GitHub Webhook Configuration
                                            "},{"location":"user-guide/prepare-for-release/","title":"Prepare for Release","text":"

                                            After the necessary applications are added to EDP, they can be managed via the Admin Console. To prepare for the release, create a new branch from a selected commit with a set of CI pipelines (Code Review and Build pipelines), launch the Build pipeline, and add a new CD pipeline as well.

                                            Note

                                            Please refer to the Add Application and Add CD Pipeline for the details on how to add an application or a CD pipeline.

                                            Become familiar with the following preparation steps for release and a CD pipeline structure:

                                            • Create a new branch
                                            • Launch the Build pipeline
                                            • Add a new CD pipeline
                                            • Check CD pipeline structure
                                            "},{"location":"user-guide/prepare-for-release/#create-a-new-branch","title":"Create a New Branch","text":"
                                            1. Open Gerrit via the Admin Console Overview page to have this tab available in a web browser.

                                            2. Being in Admin Console, open the Applications section and click an application from the list to create a new branch.

3. Once the application name is clicked, scroll down to the Branches menu and click the Create button to open the Create New Branch dialog box. Fill in the Branch Name field by typing a branch name. To define the commit from which the new branch will start:

                                              • Open the Gerrit tab in the web browser, navigate to Projects \u2192 List \u2192 select the application \u2192 Branches \u2192 gitweb for a necessary branch.
  • Select the commit that will be the last one included in the new branch (a command-line alternative is sketched after this procedure).
                                              • Copy to clipboard the commit hash.
                                            4. Paste the copied hash to the From Commit Hash field and click Proceed.

                                            Note

                                            If the commit hash is not added to the From Commit Hash field, the new branch will be created from the head of the master branch.
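
For those who prefer the command line to gitweb, the commit hash can also be taken from a local clone of the application repository. This is a hedged sketch rather than an EDP-specific command; the remote name origin and the master branch are placeholders for your actual setup:

git fetch origin\ngit log --oneline -n 10 origin/master   # pick the hash of the last commit to include in the new branch\n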

                                            "},{"location":"user-guide/prepare-for-release/#launch-the-build-pipeline","title":"Launch the Build Pipeline","text":"
                                            1. After the new branches are added, open the details page of every application and click the CI link that refers to Jenkins.

                                              Note

  Adding a new branch may take some time. As soon as the new branch is created, it will be displayed in the list of the Branches menu.

                                            2. To build a new version of a corresponding Docker container (an image stream in OpenShift terms) for the new branch, start the Build pipeline. Being in Jenkins, select the new branch tab and click the link to the Build pipeline.

                                            3. Navigate to the Build with Parameters option and click the Build button to launch the Build pipeline.

                                              Warning

                                              The predefined default parameters should not be changed when triggering the Build pipeline, otherwise, it will lead to the pipeline failure.

                                            "},{"location":"user-guide/prepare-for-release/#add-a-new-cd-pipeline","title":"Add a New CD Pipeline","text":"
                                            1. Add a new CD pipeline and indicate the new release branch using the Admin console tool. Pay attention to the Applications menu, the necessary application(s) should be selected there, as well as the necessary branch(es) from the drop-down list.

                                              Note

                                              For the details on how to add a CD pipeline, please refer to the Add CD Pipeline page.

2. As soon as the Build pipelines successfully pass in Jenkins, the Docker Registry, which is used in EDP by default, will contain the new image stream versions (Docker containers in Kubernetes terms) that correspond to the current branch.

                                            3. Open the Kubernetes/OpenShift page of the project via the Admin Console Overview page \u2192 go to CodebaseImageStream (in OpenShift, go to Builds \u2192 Images) \u2192 check whether the image streams are created under the specific name (the combination of the application and branch names) and the specific tags are added. Click every image stream link.

                                            "},{"location":"user-guide/prepare-for-release/#check-cd-pipeline-structure","title":"Check CD Pipeline Structure","text":"

When the CD pipeline is added through the Admin Console, it becomes available in the CD pipelines list. Every pipeline has a details page with additional information. To explore the CD pipeline structure, follow the steps below:

                                            1. Open Admin Console and navigate to Continuous Delivery section, click the newly created CD pipeline name.

                                            2. Discover the CD pipeline components:

                                              • Applications - the list of applications with the image streams and links to Jenkins for the respective branch;
                                              • Stages - a set of stages with the defined characteristics and links to Kubernetes/OpenShift project;

                                              Note

                                              Initially, an environment is empty and does not have any deployment unit. When deploying the subsequent stages, the artifacts of the selected versions will be deployed to the current project and the environment will display the current stage status. The project has a standard pattern: \u2039edp-name\u203a-\u2039pipeline-name\u203a-\u2039stage-name\u203a.

                                              • Deployed Versions - the deployment status of the specific application and the predefined stage.
                                            "},{"location":"user-guide/prepare-for-release/#launch-cd-pipeline-manually","title":"Launch CD Pipeline Manually","text":"

                                            Follow the steps below to deploy the QA and UAT application stages:

1. As soon as the Build pipelines for both applications successfully pass, the new versions of the Docker containers will appear, allowing the CD pipeline to be launched. Simply navigate to Continuous Delivery and click the pipeline name to open it in Jenkins.

                                            2. Click the QA stage link.

                                            3. Deploy the QA stage by clicking the Build Now option.

4. After the initialization step starts, the Pause for Input option will appear in case another menu is opened. Select the application version in the drop-down list and click Proceed. The pipeline passes the following stages:

  • Init - initialization of the Jenkins pipeline outputs with the stages, which are the Groovy scripts that execute the current code;
  • Deploy - the deployment of the selected versions of the Docker container and third-party services. As soon as the Deploy pipeline stage is completed, the respective environment will be deployed;
  • Approve - the verification stage that allows you to either Proceed or Abort this stage;
  • Promote-images - the creation of the new image streams for the current versions with the pattern combination: [pipeline name]-[stage name]-[application name]-[verified];

                                              After all the stages are passed, the new image streams will be created in the Kubernetes/OpenShift with the new names.

                                            5. Deploy the UAT stage, which takes the versions that were verified during the QA stage, by clicking the Build Now option, and select the necessary application versions. The launch process is the same as for all the deploy pipelines.

                                            6. To get the status of the pipeline deployment, open the CD pipeline details page and check the Deployed versions state.

                                            "},{"location":"user-guide/prepare-for-release/#cd-pipeline-as-a-team-environment","title":"CD Pipeline as a Team Environment","text":"

Admin Console allows creating a CD pipeline with a part of the application set as a team environment. To do this, perform the following steps:

                                            1. Open the Continuous Delivery section \u2192 click the Create button \u2192 enter the pipeline name (e.g. team-a) \u2192 select ONE application and choose the master branch for it \u2192 add one DEV stage.
                                            2. As soon as the CD pipeline is added to the CD pipelines list, its details page will display the links to Jenkins and Kubernetes/OpenShift.
                                            3. Open Jenkins and deploy the DEV stage by clicking the Build Now option.
4. Kubernetes/OpenShift keeps an independent environment that allows checking the new versions, thus speeding up the development process when working with several microservices.

                                            As a result, the team will have the same abilities to verify the code changes when developing and during the release.

                                            "},{"location":"user-guide/prepare-for-release/#related-articles","title":"Related Articles","text":"
                                            • Add Application
                                            • Add CD Pipeline
• Autotest as Quality Gate
                                            • Build Pipeline
                                            • CD Pipeline Details
                                            • Customize CD Pipeline
                                            "},{"location":"user-guide/semi-auto-deploy/","title":"Semi Auto Deploy","text":"

                                            The Semi Auto Deploy stage provides the ability to deploy applications with the custom logic that comprises the following behavior:

                                            • When the build of an application selected for deploy in the CD pipeline is completed, the Deploy pipeline is automatically triggered;
                                            • By default, the deploy stage waits for 5 minutes, and if the user does not interfere with the process (cancels or selects certain versions of the application to deploy), then the deploy stage will deploy the latest versions of all applications;
                                            • The stage can be used in the manual mode.

                                            To enable the Semi Auto Deploy stage during the deploy process, follow the steps below:

                                            1. Create or update the CD pipeline: make sure the trigger type for the stage is set to auto.
                                            2. Replace the {\"name\":\"auto-deploy-input\",\"step_name\":\"auto-deploy-input\"} step to the {\"name\":\"semi-auto-deploy-input\",\"step_name\":\"semi-auto-deploy-input\"} step in the CD pipeline. Alternatively, it is possible to create a custom job provisioner with this step.
                                            3. Run the Build pipeline for any application selected in the CD pipeline.
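
The fragment below is a minimal sketch of how the stages list of a custom CD job provisioner might look after the replacement. The exact provisioner structure and the surrounding stages (init, deploy, approve, promote-images) are assumptions based on the Deploy pipeline description above; consult the Manage Jenkins CD Pipeline Job Provisioner page for the authoritative template:

[\n  {\"name\": \"init\", \"step_name\": \"init\"},\n  {\"name\": \"semi-auto-deploy-input\", \"step_name\": \"semi-auto-deploy-input\"},\n  {\"name\": \"deploy\", \"step_name\": \"deploy\"},\n  {\"name\": \"approve\", \"step_name\": \"approve\"},\n  {\"name\": \"promote-images\", \"step_name\": \"promote-images\"}\n]\n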
                                            "},{"location":"user-guide/semi-auto-deploy/#exceptional-cases","title":"Exceptional Cases","text":"

If the pipeline has been interrupted after the timeout starts, but not from the Input requested menu, the automatic deployment will still proceed. To resolve the issue and stop the pipeline, click the Input requested menu -> Abort or, being on the pipeline UI, click the Abort button.

                                            "},{"location":"user-guide/semi-auto-deploy/#related-articles","title":"Related Articles","text":"
                                            • Add CD Pipeline
                                            • Customize CD Pipeline
                                            • Manage Jenkins CD Pipeline Job Provisioner
                                            "},{"location":"user-guide/terraform-stages/","title":"CI Pipelines for Terraform","text":"

EPAM Delivery Platform ensures the implemented Terraform support by adding a separate component type called Infrastructure. The Infrastructure codebase type allows working with Terraform code that is processed by means of stages in the Code Review and Build pipelines.

                                            "},{"location":"user-guide/terraform-stages/#pipeline-stages-for-terraform","title":"Pipeline Stages for Terraform","text":"

Under the hood, the Infrastructure codebase type, namely Terraform, looks quite similar to other codebase types. The distinguishing characteristic of the Infrastructure codebase type is the stage called terraform-check in both the Code Review and Build pipelines. This stage runs the pre-commit activities, which in turn run the following commands and tools:

                                            1. Terraform fmt - the first step of the stage is basically the terraform fmt command. The terraform fmt command automatically updates the formatting of Terraform configuration files to follow the standard conventions and make the code more readable and consistent.

                                            2. Lock provider versions - locks the versions of the Terraform providers used in the project. This ensures that the project uses specific versions of the providers and prevents unexpected changes from impacting the infrastructure due to newer provider versions.

3. Terraform validate - checks the syntax and validity of the Terraform configuration files and scans them for internal inconsistencies.

                                            4. Terraform docs - generates human-readable documentation for the Terraform project.

                                            5. Tflint - additional validation step using the tflint linter to provide more in-depth checks in addition to what the terraform validate command does.

                                            6. Checkov - runs the checkov command against the Terraform codebase to identify any security misconfigurations or compliance issues.

                                            7. Tfsec - another security-focused validation step using the tfsec command. Tfsec is a security scanner for Terraform templates that detects potential security issues and insecure configurations in the Terraform code.

                                            Note

The commands and their attributes are defined in the .pre-commit-config.yaml file.
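
For illustration, a minimal .pre-commit-config.yaml covering the checks above might look as follows. The repository URLs, revisions, and hook IDs are assumptions based on the commonly used pre-commit-terraform and checkov hook collections, not the exact file shipped with EDP:

repos:\n  - repo: https://github.com/antonbabenko/pre-commit-terraform\n    rev: v1.77.0  # assumed revision\n    hooks:\n      - id: terraform_fmt              # terraform fmt\n      - id: terraform_providers_lock   # lock provider versions\n      - id: terraform_validate         # terraform validate\n      - id: terraform_docs             # generate documentation\n      - id: terraform_tflint           # tflint\n      - id: terraform_tfsec            # tfsec\n  - repo: https://github.com/bridgecrewio/checkov\n    rev: 2.3.0  # assumed revision\n    hooks:\n      - id: checkov                    # checkov security/compliance checks\n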

                                            "},{"location":"user-guide/terraform-stages/#related-articles","title":"Related Articles","text":"
                                            • User Guide Overview
                                            • Add Infrastructure
                                            • Manage Infrastructures
                                            "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"faq/","title":"FAQ","text":""},{"location":"faq/#how-do-i-set-parallel-reconciliation-for-a-number-of-codebase-branches","title":"How Do I Set Parallel Reconciliation for a Number of Codebase Branches?","text":"

Set the CODEBASE_BRANCH_MAX_CONCURRENT_RECONCILES environment variable in the codebase-operator by updating the Deployment template. For example:

                                                      ...\n          env:\n            - name: WATCH_NAMESPACE\n          ...\n\n            - name: CODEBASE_BRANCH_MAX_CONCURRENT_RECONCILES\n              value: 10\n...\n

                                            It's not recommended to set the value above 10.
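
As an alternative to editing the template directly, the same variable can be set with kubectl. This is a sketch that assumes the operator runs as a Deployment named codebase-operator in the edp namespace; adjust both names to your installation:

kubectl -n edp set env deployment/codebase-operator CODEBASE_BRANCH_MAX_CONCURRENT_RECONCILES=10\n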

                                            "},{"location":"faq/#how-to-change-the-lifespan-of-an-access-token-that-is-used-for-edp-portal-and-oidc-login-plugin","title":"How To Change the Lifespan of an Access Token That Is Used for EDP Portal and 'oidc-login' Plugin?","text":"

                                            Change the Access Token Lifespan: go to your Keycloak and select Openshift realm > Realm settings > Tokens > Access Token Lifespan > set a new value to the field and save this change.

                                            By default, \"Access Token Lifespan\" value is 5 minutes.

                                            Access Token Lifespan

                                            "},{"location":"features/","title":"Basic Concepts","text":"

Consult the EDP Glossary section for definitions mentioned on this page and the EDP Toolset section for a full list of tools used with the Platform. The table below contains a full list of features provided by EDP.

                                            Features Description Cloud Agnostic EDP runs on Kubernetes cluster, so any Public Cloud Provider which provides Kubernetes can be used. Kubernetes clusters deployed on-premises work as well CI/CD for Microservices EDP is initially designed to support CI/CD for Microservices running as containerized applications inside Kubernetes Cluster. EDP also supports CI for:- Terraform Modules, - Open Policy Rules,- Workflows for Java (8,11,17), JavaScript (React, Vue, Angular, Express, Antora), C# (.NET 6.0), Python (FastAPI, Flask, 3.8), Go (Beego, Operator SDK) Version Control System (VCS) EDP installs Gerrit as a default Source Code Management (SCM) tool. EDP also supports GitHub and GitLab integration Branching Strategy EDP supports Trunk-based development as well as GitHub/GitLab flow. EDP creates two Pipelines per each codebase branch (see Pipeline Framework): Code Review and Build Repository Structure EDP provides separate Git repository per each Codebase and doesn't work with Monorepo. However, EDP does support customization and runs helm-lint, dockerfile-lint steps using Monorepo approach. Artifacts Versioning EDP supports two approaches for Artifacts versioning: - default (BRANCH-[TECH_STACK_VERSION]-BUILD_ID)- EDP (MAJOR.MINOR.PATCH-BUILD_ID), which is SemVer.Custom versioning can be created by implementing get-version stage Application Library EDP provides baseline codebase templates for Microservices, Libraries, within create strategy while onboarding new Codebase Stages Library Each EDP Pipeline consists of pre-defined steps (stages). Consult library documentation for more details CI Pipelines EDP provides CI Pipelines for first-class citizens: - Applications (Microservices) based on Java (8,11,17), JavaScript (React, Vue, Angular, Express, Antora), C# (.NET 6.0), Python (FastAPI, Flask, 3.8), Go (Beego, Operator SDK)- Libraries based on Java (8,11,17), JavaScript (React, Vue, Angular, Express), Python (FastAPI, Flask, 3.8), Groovy Pipeline (Codenarc), Terraform, Rego (OPA), Container (Docker), Helm (Pipeline), C#(.NET 6.0)- Autotests based on Java8, Java11, Java17 CD Pipelines EDP provides capabilities to design CD Pipelines (in Admin Console) for Microservices and defines logic for artifacts flow (promotion) from env to env. Artifacts promotion is performed automatically (Autotests), manually (User Approval) or combining both approaches Autotests EDP provides CI pipeline for autotest implemented in Java. Autotests can be used as Quality Gates in CD Pipelines Custom Pipeline Library EDP can be extended by introducing Custom Pipeline Library Dynamic Environments Each EDP CD Pipeline creates/destroys environment upon user requests"},{"location":"getting-started/","title":"Quick Start","text":""},{"location":"getting-started/#software-requirements","title":"Software Requirements","text":"
                                            • Kubernetes cluster 1.23+ or OpenShift 4.9+;
                                            • Kubectl tool;
                                            • Helm 3.10.x+;
                                            • Keycloak 18.0+;
                                            • Kiosk 0.2.11.
                                            "},{"location":"getting-started/#minimal-hardware-requirements","title":"Minimal Hardware Requirements","text":"

                                            The system should have the following specifications to run properly:

                                            • CPU: 8 Core
                                            • Memory: 32 Gb
                                            "},{"location":"getting-started/#edp-toolset","title":"EDP Toolset","text":"

                                            EPAM Delivery Platform supports the following tools:

                                            Domain Related Tools/Solutions Artifacts Management Nexus Repository, Jfrog Artifactory AWS IRSA, AWS ECR, AWS EFS, Parameter Store, S3, ALB/NLB, Route53 Build .NET, Go, Apache Gradle, Apache Maven, NPM Cluster Backup Velero Code Review Gerrit, GitLab, GitHub Container Registry AWS ECR, OpenShift Registry, Harbor, DockerHub Containers Hadolint, Kaniko, Crane Documentation as Code MkDocs, Antora (AsciiDoc) Infrastructure as Code Terraform, TFLint, Terraform Docs, Crossplane, AWS Controllers for Kubernetes Kubernetes Deployment Kubectl, Helm, Helm Docs, Chart Testing, Argo CD, Argo Rollout Kubernetes Multitenancy Kiosk Logging OpenSearch, EFK, ELK, Loki, Splunk Monitoring Prometheus, Grafana, VictoriaMetrics Pipeline Orchestration Tekton, Jenkins Policies/Rules Open Policy Agent Secrets Management External Secret Operator, Vault Secure Development SonarQube, DefectDojo, Dependency Track, Semgrep, Grype, Trivy, Clair, GitLeaks, CycloneDX Generator, tfsec, checkov SSO Keycloak, oauth2-proxy Test Report Tool ReportPortal, Allure Tracing OpenTelemetry, Jaeger"},{"location":"getting-started/#install-edp","title":"Install EDP","text":"

                                            To install EDP with the necessary parameters, please refer to the Install EDP section of the Operator Guide. Mind the parameters in the EDP installation chart. For details, please refer to the values.yaml.

                                            Find below the example of the installation command:

    helm install edp epamedp/edp-install --wait --timeout=900s \\\n    --version <edp_version> \\\n    --set global.dnsWildCard=<cluster_DNS_wildcard> \\\n    --set global.platform=<platform_type> \\\n    --set awsRegion=<region> \\\n    --set global.dockerRegistry.url=<aws_account_id>.dkr.ecr.<region>.amazonaws.com \\\n    --set keycloak-operator.keycloak.url=<keycloak_endpoint> \\\n    --set global.gerritSSHPort=<gerrit_ssh_port> \\\n    --namespace edp\n

                                            Warning

                                            Please be aware that the command above is an example.

                                            "},{"location":"getting-started/#related-articles","title":"Related Articles","text":"

                                            Getting Started

                                            "},{"location":"glossary/","title":"Glossary","text":"

Get familiar with the definitions and context for the most useful EDP terms presented in the table below.

                                            Terms Details EDP Component - an item used in CI/CD process EDP Portal UI - an EDP component that helps to manage, set up, and control the business entities. Artifactory - an EDP component that stores all the binary artifacts. NOTE: Nexus is used as a possible implementation of a repository. CI/CD Server - an EDP component that launches pipelines that perform the build, QA, and deployment code logic. NOTE: Jenkins is used as a possible implementation of a CI/CD server. Code Review tool - an EDP component that collaborates with the changes in the codebase. NOTE: Gerrit is used as a possible implementation of a code review tool. Identity Server - an authentication server providing a common way to verify requests to all of the applications. NOTE: Keycloak is used as a possible implementation of an identity server. Security Realm Tenant - a realm in identity server (e.g Keycloak) where all users' accounts and their access permissions are managed. The realm is unique for the identity server instance. Static Code Analyzer - an EDP component that inspects continuously a code quality before the necessary changes appear in a master branch. NOTE: SonarQube is used as a possible implementation of a static code analyzer. VCS (Version Control System) - a replication of the Gerrit repository that displays all the changes made by developers. NOTE: GitHub and GitLab are used as the possible implementation of a repository with the version control system. EDP Business Entity - a part of the CI/CD process (the integration, delivery, and deployment of any codebase changes) Application - a codebase type that is built as the binary artifact and deployable unit with the code that is stored in VCS. As a result, the application becomes a container and can be deployed in an environment. Autotests - a codebase type that inspects a product (e.g. an application set) on a stage. Autotests are not deployed to any container and launched from the respective code stage. CD Pipeline (Continuous Delivery Pipeline) - an EDP business entity that describes the whole delivery process of the selected application set via the respective stages. The main idea of the CD pipeline is to promote the application version between the stages by applying the sequential verification (i.e. the second stage will be available if the verification on the first stage is successfully completed). NOTE: The CD pipeline can include the essential set of applications with its specific stages as well. CD Pipeline Stage - an EDP business entity that is presented as the logical gate required for the application set inspection. Every stage has one OpenShift project where the selected application set is deployed. All stages are sequential and promote applications one-by-one. Codebase - an EDP business entity that possesses a code. Codebase Branch - an EDP business entity that represents a specific version in a Git branch. Every codebase branch has a Codebase Docker Stream entity. Codebase Docker Stream - a deployable component that leads to the application build and displays that the last build was verified on the specific stage. Every CD pipeline stage accepts a set of Codebase Docker Streams (CDS) that are input and output. SAMPLE: if an application1 has a master branch, the input CDS will be named as [app name]-[pipeline name]-[stage name]-[master] and the output after the passing of the DEV stage will be as follows: [app name]-[pipeline name]-[stage name]-[dev]-[verified]. 
Library - a codebase type that is built as the binary artifact, i.e. it`s stored in the Artifactory and can be uploaded by other applications, autotests or libraries. Quality Gate - an EDP business entity that represents the minimum acceptable results after the testing. Every stage has a quality gate that should be passed to promote the application. The stage quality gate can be a manual approve from a QA specialist OR a successful autotest launch. Quality Gate Type - this value defines trigger type that promotes artifacts (images) to the next environment in CD Pipeline. There are manual and automatic types of quality gates. The manual type means that the promoting process should be confirmed in Jenkins. The automatic type promotes the images automatically in case there are no errors in the Allure Report. NOTE: If any of the test types is not passed, the CD pipeline will fail. Trigger Type - a value that defines a trigger type used for the CD pipeline triggering. There are manual and automatic types of triggering. The manual type means that the CD pipeline should be triggered manually. The automatic type triggers the CD pipeline automatically as soon as the Codebase Docker Stream was changed. EDP CI/CD Pipelines Framework - a library that allows extending the Jenkins pipelines and stages to develop an application. Pipelines are presented as the shared library that can be connected in Jenkins. The library is connected using the Git repository link (a public repository that is supported by EDP) on the GitHub. Allure Report- a tool that represents test results in one brief report in a clear form. Automated Tests - different types of automated tests that can be run on the environment for a specific stage. Build Pipeline - a Jenkins pipeline that builds a corresponding codebase branch in the Codebase. Build Stage - a stage that takes place after the code has been submitted/merged to the repository of the main branch (the pull request from the feature branch is merged to the main one, the Patch set is submitted in Gerrit). Code Review Pipeline - a Jenkins pipeline that inspects the code candidate in the Code Review tool. Code Review Stage - a stage where code is reviewed before it goes to the main branch repository of the version control system (the commit to the feature branch is pushed, the Patch set is created in Gerrit). Deploy Pipeline - a Jenkins pipeline that is responsible for the CD Pipeline Stage deployment with the full set of applications and autotests. Deployment Stage - a part of the Continuous Delivery where artifacts are being deployed to environments. EDP CI/CD Pipelines - an orchestrator for stages that is responsible for the common technical events, e.g. initialization, in Jenkins pipeline. The set of stages for the pipeline is defined as an input JSON file for the respective Jenkins job. NOTE: There is the ability to create the necessary realization of the library pipeline on your own as well. EDP CI/CD Stages - a repository that is launched in the Jenkins pipeline. Every stage is presented as an individual Groovy file in a corresponding repository. Such single responsibility realization allows rewriting of one essential stage without changing the whole pipeline. Environment - a part of the stage where the built and packed into an image application are deployed for further testing. It`s possible to deploy several applications to several environments (Team and Integration environments) within one stage. 
Integration Environment - an environment type that is always deployed as soon as the new application version is built in order to launch the integration test and promote images to the next stages. The Integration Environment can be triggered manually or in case a new image appears in the Docker registry. Jenkinsfile - a text file that keeps the definition of a Jenkins Pipeline and is checked into source control. Every Job has its Jenkinsfile that is stored in the specific application repository and in Jenkins as the plain text. Jenkins Node - a machine that is a part of the Jenkins environment that is capable of executing a pipeline. Jenkins Pipeline - a user-defined model of a CD pipeline. The pipeline code defines the entire build process. Jenkins Stage - a part of the whole CI/CD process that should pass the source code in order to be released and deployed on the production. Team Environment - an environment type that can be deployed at any time by the manual trigger of the Deploy pipeline where team or developers can check out their applications. NOTE: The promotion from such kind of environment is prohibited and developed only for the local testing. OpenShift / Kubernetes (K8S) ConfigMap - a resource that stores configuration data and processes the strings that do not contain sensitive information. Docker Container - is a lightweight, standalone, and executable package. Docker Registry - a store for the Docker Container that is created for the application after the Build pipeline performance. OpenShift Web Console - a web console that enables to view, manage, and change OpenShift / K8S resources using browser. Operator Framework - a deployable unit in OpenShift that is responsible for one or a set of resources and performs its life circle (adding, displaying, and provisioning). Path - a route component that helps to find a specified path (e.g. /api) at once and skip the other. Pod - the smallest deployable unit of the large microservice application that is responsible for the application launch. The pod is presented as the one launched Docker container. When the Docker container is collected, it will be kept in Docker Registry and then saved as Pod in the OpenShift project. NOTE: The Deployment Config is responsible for the Pod push, restart, and stop processes. PV (Persistent Volume) - a cluster resource that captures the details of the storage implementation and has an independent lifecycle of any individual pod. PVC (Persistent Volume Claim) - a user request for storage that can request specific size and access mode. PV resources are consumed by PVCs. Route - a resource in OpenShift that allows getting the external access to the pushed application. Secret - an object that stores and manages all the sensitive information (e.g. passwords, tokens, and SSH keys). Service - an external connection point with Pod that is responsible for the network. A specific Service is connected to a specific Pod using labels and redirects all the requests to Pod as well. Site - a route component (link name) that is created from the indicated application name and applies automatically the project name and a wildcard DNS record."},{"location":"overview/","title":"Overview","text":"

                                            EPAM Delivery Platform (EDP) is an open-source cloud-agnostic SaaS/PaaS solution for software development, licensed under Apache License 2.0. It provides a pre-defined set of CI/CD patterns and tools, which allow a user to start product development quickly with established code review, release, versioning, branching, build processes. These processes include static code analysis, security checks, linters, validators, dynamic feature environments provisioning. EDP consolidates the top Open-Source CI/CD tools by running them on Kubernetes/OpenShift, which enables web/app development either in isolated (on-prem) or cloud environments.

EPAM Delivery Platform, which is also called \"The Rocket\", is a platform that shortens the time before active development can be started from several months to several hours.

                                            EDP consists of the following:

                                            • The platform based on managed infrastructure and container orchestration
                                            • Security covering authentication, authorization, and SSO for platform services
                                            • Development and testing toolset
                                            • Well-established engineering process and EPAM practices (EngX) reflected in CI/CD pipelines, and delivery analytics
                                            • Local development with debug capabilities
                                            "},{"location":"overview/#features","title":"Features","text":"
                                            • Deployed and configured CI/CD toolset (Tekton, ArgoCD, Jenkins, Nexus, SonarQube, DefectDojo)
                                            • Gerrit, GitLab or GitHub as a version control system for your code
                                            • Tekton is a default pipeline orchestrator
                                            • Jenkins is an optional pipeline orchestrator
                                            • CI pipelines

                                              Tekton (by default)Jenkins (optional) Language Framework Build Tool Application Library Autotest Java Java 8, Java 11, Java 17 Gradle, Maven Python Python 3.8, FastAPI, Flask Python C# .Net 3.1, .Net 6.0 .Net Go Beego, Gin, Operator SDK Go JavaScript React, Vue, Angular, Express, Next.js, Antora NPM HCL Terraform Terraform Helm Helm, Pipeline Helm Groovy Codenarc Codenarc Rego OPA OPA Container Docker Kaniko Language Framework Build Tool Application Library Autotest Java Java 8, Java 11 Gradle, Maven Python Python 3.8 Python .Net .Net 3.1 .Net Go Beego, Operator SDK Go JavaScript React NPM HCL Terraform Terraform Groovy Codenarc Codenarc Rego OPA OPA Container Docker Kaniko
                                            • Portal UI as a single entry point
                                            • CD pipeline for Microservice Deployment
                                            • Kubernetes native approach (CRD, CR) to declare CI/CD pipelines
                                            "},{"location":"overview/#whats-inside","title":"What's Inside","text":"

EPAM Delivery Platform (EDP) is suitable for all aspects of delivery, starting from development and including the capability to deploy the production environment. The EDP architecture is represented in the diagram below.

                                            Architecture

                                            EDP consists of four cross-cutting concerns:

                                            1. Infrastructure as a Service;
                                            2. GitOps approach;
                                            3. Container orchestration and centralized services;
                                            4. Security.

On top of these indicated concerns, EDP adds several blocks that include:

• EDP CI/CD Components. An EDP component enables a feature in CI/CD or provides an instance, e.g. artifacts storage and distribution (Nexus or Artifactory), static code analysis (Sonar), etc.;
                                            • EDP Artifacts. This element represents an artifact that is being delivered through EDP and presented as a code.

                                              Artifact samples: frontend, backend, mobile, applications, functional and non-functional autotests, workloads for 3rd party components that can be deployed together with applications.

                                            • EDP development and production environments that share the same logic. Environments wrap a set of artifacts with a specific version, and allow performing SDLC routines in order to be sure of the artifacts quality;
• Pipelines. Pipelines cover the CI/CD process, production rollout and updates. They also connect the three elements indicated above via automation, allowing SDLC routines to be non-human.
                                            "},{"location":"overview/#technology-stack","title":"Technology Stack","text":"

                                            Explore the EDP technology stack diagram

                                            Technology stack

The EDP IaaS layer supports the most popular public clouds (AWS, Azure and GCP), keeping the capability to be deployed on private/hybrid clouds based on OpenStack. EDP containers are based on Docker technology, orchestrated by Kubernetes compatible solutions.

                                            There are two main options for Kubernetes provided by EDP:

• Managed Kubernetes in Public Clouds to avoid installation and management of a Kubernetes cluster, and to get all the benefits of scaling and reliability of this solution;
• OpenShift, which is a Platform as a Service on top of Kubernetes from Red Hat. OpenShift is the default option for on-premise installation, and it can be considered when the solution built on top of EDP should be cloud-agnostic or requires enterprise support;

                                            There is no limitation to run EDP on vanilla Kubernetes.

                                            "},{"location":"overview/#related-articles","title":"Related Articles","text":"
                                            • Quick Start
                                            • Basic Concepts
                                            • Glossary
                                            • Supported Versions and Compatibility
                                            "},{"location":"roadmap/","title":"RoadMap","text":"

RoadMap consists of five streams:

                                            • Community
                                            • Architecture
                                            • Building Blocks
                                            • Admin Console
                                            • Documentation
                                            "},{"location":"roadmap/#i-community","title":"I. Community","text":"

                                            Goals:

                                            • Innovation Through Collaboration
                                            • Improve OpenSource Adoption
                                            • Build Community around technology solutions EDP is built on
                                            "},{"location":"roadmap/#deliver-operators-on-operatorhub","title":"Deliver Operators on OperatorHub","text":"

OperatorHub is a de facto leading solution which consolidates the Kubernetes Community around Operators. EDP follows the best practices of delivering Operators in a quick and reliable way. We want to improve the Deployment and Management experience for our Customers by publishing all EDP operators on this HUB.

Another artifact aggregator used by EDP is ArtifactHub, which holds descriptions for both stable and under-development components.

                                            OperatorHub. Keycloak Operator

                                            EDP Keycloak Operator is now available from OperatorHub both for Upstream (Kubernetes) and OpenShift deployments.

                                            "},{"location":"roadmap/#ii-architecture","title":"II. Architecture","text":"

                                            Goals:

                                            • Improve reusability for EDP components
                                            • Integrate Kubernetes Native Deployment solutions
                                            • Introduce abstraction layer for CI/CD components
                                            • Build processes around the GitOps approach
                                            • Introduce secrets management
                                            "},{"location":"roadmap/#kubernetes-multitenancy","title":"Kubernetes Multitenancy","text":"

Multiple instances of EDP are run in a single Kubernetes cluster. One way to achieve this is to use Multitenancy. Initially, Kiosk was selected as the tool that provides this capability. An alternative option that the EDP Team took into consideration is Capsule. Another tool which goes far beyond multitenancy is vcluster, which is a good candidate for e2e testing scenarios where one needs a simple lightweight Kubernetes cluster in CI pipelines.

                                            "},{"location":"roadmap/#microservice-reference-architecture-framework","title":"Microservice Reference Architecture Framework","text":"

EDP provides basic Application Templates for a number of technology stacks (Java, .Net, NPM, Python), and Helm is used as a deployment tool. The goal is to extend this library and provide Application Templates which are built on pre-defined architecture patterns (e.g., Microservice, API Gateway, Circuit Breaker, CQRS, Event Driven) and Deployment Approaches (Canary, Blue/Green). This requires additional tools installation on the cluster as well.

                                            "},{"location":"roadmap/#policy-enforcement-for-kubernetes","title":"Policy Enforcement for Kubernetes","text":"

Running workloads in Kubernetes calls for extra effort from Cluster Administrators to ensure those workloads follow best practices or specific requirements defined at the organization level. Those requirements can be formalized in policies and integrated into CI Pipelines and the Kubernetes Cluster (through the Admission Controller approach) to guarantee proper resource management during the development and runtime phases. EDP uses Open Policy Agent (from version 2.8.0), since it supports compliance checks for more use-cases: Kubernetes Workloads, Terraform and Java code, HTTP APIs and many others. Kyverno is another option being checked in scope of this activity.

                                            "},{"location":"roadmap/#secrets-management","title":"Secrets Management","text":"

EDP should provide secrets management as a part of the platform. There are multiple tools providing secrets management capabilities. The aim is to be aligned with the GitOps and Operator Pattern approaches, so HashiCorp Vault, Banzaicloud Bank Vaults, and Bitnami Sealed Secrets are currently used for internal projects, and some of them should be made publicly available as a part of EDP Deployment.

                                            EDP Release 2.12.x

                                            External Secret Operator is a recommended secret management tool for the EDP components.

                                            "},{"location":"roadmap/#release-management","title":"Release Management","text":"

Conventional Commits and Conventional Changelog are two approaches to be used as part of the release process. Today, EDP only provides capabilities to manage Release Branches. This activity should address this gap by formalizing and implementing the Release Process as a part of EDP. Topics to be covered: Versioning, Tagging, Artifacts Promotion.

                                            "},{"location":"roadmap/#kubernetes-native-cicd-pipelines","title":"Kubernetes Native CI/CD Pipelines","text":"

EDP uses Jenkins as a Pipeline Orchestrator. Jenkins runs workloads for the CI and CD parts. There is also basic support for GitLab CI, but it provides Docker image build functionality only. EDP works on providing an alternative to Jenkins and using the Kubernetes Native Approach for pipeline management. There are a number of tools which provide such capability:

                                            • Argo CD
                                            • Argo Workflows
                                            • Argo Rollouts
                                            • Tekton
                                            • Drone
                                            • Flux

This list is under investigation, and the solution is going to be implemented in two steps:

1. Introduce a tool that provides the Continuous Delivery/Deployment approach. Argo CD is one of the best to go with.
2. Integrate EDP with a tool that provides Continuous Integration capabilities.

                                            EDP Release 2.12.x

                                            Argo CD is suggested as a solution providing the Continuous Delivery capabilities.

                                            EDP Release 3.0

                                            Tekton is used as a CI/CD pipelines orchestration tool on the platform. Review edp-tekton GitHub repository that keeps all the logic behind this solution on the EDP (Pipelines, Tasks, TriggerTemplates, Interceptors, etc). Get acquainted with the series of publications on our Medium Page.

                                            "},{"location":"roadmap/#advanced-edp-role-based-model","title":"Advanced EDP Role-based Model","text":"

                                            EDP has a number of base roles which are used across EDP. In some cases it is necessary to provide more granular permissions for specific users. It is possible to do this using Kubernetes Native approach.

                                            "},{"location":"roadmap/#notifications-framework","title":"Notifications Framework","text":"

EDP has a number of components which need to report their statuses: Build/Code Review/Deploy Pipelines, changes in Environments, updates with artifacts. The goal for this activity is to onboard a Kubernetes Native approach that provides Notification capabilities with integration of different sources/channels (e.g. Email, Slack, MS Teams). Some of these tools are Argo Events and Botkube.

                                            "},{"location":"roadmap/#reconciler-component-retirement","title":"Reconciler Component Retirement","text":"

The persistent layer, which is based on edp-db (PostgreSQL), and the reconciler component should be retired in favour of Kubernetes Custom Resources (CR). The latest features in EDP are implemented using the CR approach.

                                            EDP Release 3.0

                                            Reconciler component is deprecated and is no longer supported. All the EDP components are migrated to Kubernetes Custom Resources (CR).

                                            "},{"location":"roadmap/#iii-building-blocks","title":"III. Building Blocks","text":"

                                            Goals:

                                            • Introduce best practices from Microservice Reference Architecture deployment and observability using Kubernetes Native Tools
                                            • Enable integration with the Centralized Test Reporting Frameworks
                                            • Onboard SAST/DAST tool as a part of CI pipelines and Non-Functional Testing activities

                                            EDP Release 2.12.x

                                            SAST is introduced as a mandatory part of the CI Pipelines. The list of currently supported SAST scanners and the instruction on how to add them are also available.

                                            "},{"location":"roadmap/#infrastructure-as-code","title":"Infrastructure as Code","text":"

The EDP target tool for Infrastructure as Code (IaC) is Terraform. EDP sees two CI/CD scenarios while working with IaC: Module Development and Live Environment Deployment. Today, EDP provides basic capabilities (CI Pipelines) for Terraform Module Development. At the same time, EDP currently doesn't provide Deployment pipelines for Live Environments; this feature is under development. Terragrunt is an option to use in Live Environment deployment. Another Kubernetes Native approach to provision infrastructure components is Crossplane.

                                            "},{"location":"roadmap/#database-schema-management","title":"Database Schema Management","text":"

One of the challenges for an application running in Kubernetes is to manage the database schema. There are a number of tools which provide such capabilities, e.g. Liquibase, Flyway. Both tools provide versioning control for database schemas. There are different approaches on how to run migration scripts in Kubernetes: in an init container, as a separate Job, or as a separate CD stage. The purpose of this activity is to provide a database schema management solution in Kubernetes as a part of EDP. The EDP Team investigates the SchemaHero tool and use-cases which suit the Kubernetes native approach for database schema migrations.

                                            "},{"location":"roadmap/#open-policy-agent","title":"Open Policy Agent","text":"

Open Policy Agent is introduced in version 2.8.0. EDP now supports CI for the Rego Language, so you can develop your own policies. The next goal is to provide pipeline steps for running compliance policy checks for Terraform, Java, and Helm Chart as a part of the CI process.

                                            "},{"location":"roadmap/#report-portal","title":"Report Portal","text":"

                                            EDP uses Allure Framework as a Test Report tool. Another option is to integrate Report Portal into EDP ecosystem.

                                            EDP Release 3.0

                                            Use ReportPortal to consolidate and analyze your Automation tests results. Consult our pages on how to perform reporting and Keycloak integration.

                                            "},{"location":"roadmap/#carrier","title":"Carrier","text":"

                                            Carrier provides Non-functional testing capabilities.

                                            "},{"location":"roadmap/#java-17","title":"Java 17","text":"

                                            EDP supports two LTS versions of Java: 8 and 11. The goal is to provide Java 17 (LTS) support.

                                            EDP Release 3.2.1

CI Pipelines for Java 17 are available in EDP.

                                            "},{"location":"roadmap/#velero","title":"Velero","text":"

Velero is used as a cluster backup tool and is deployed as a part of the Platform. Currently, Multitenancy/On-premise support for backup capabilities is in progress.

                                            "},{"location":"roadmap/#istio","title":"Istio","text":"

                                            Istio is to be used as a Service Mesh and to address challenges for Microservice or Distributed Architectures.

                                            "},{"location":"roadmap/#kong","title":"Kong","text":"

Kong is one of the tools planned to be used as an API Gateway solution provider. Another possible candidate for investigation is Ambassador API Gateway.

                                            "},{"location":"roadmap/#openshift-4x","title":"OpenShift 4.X","text":"

                                            EDP supports the OpenShift 4.9 platform.

                                            EDP Release 2.12.x

                                            The EDP Platform runs on the latest OKD versions: 4.9 and 4.10. Creating IAM Roles for Service Accounts is the recommended way to work with AWS resources from the OKD cluster.

                                            "},{"location":"roadmap/#iv-admin-console-ui","title":"IV. Admin Console (UI)","text":"

                                            Goals:

                                            • Improve UX for different user types to address their concerns in the delivery model
                                            • Introduce user management capabilities
                                            • Enrich with traceability metrics for products

                                            EDP Release 2.12.x

                                            EDP Team has introduced a new UI component called EDP Headlamp, which will replace the EDP Admin Console in future releases. EDP Headlamp is based on the Kinvolk Headlamp UI Client.

                                            EDP Release 3.0

                                            EDP Headlamp is used as a Control Plane UI on the platform.

                                            EDP Release 3.4

                                            Since EDP v3.4.0, Headlamp UI has been renamed to EDP Portal.

                                            "},{"location":"roadmap/#users-management","title":"Users Management","text":"

                                            EDP uses Keycloak as an Identity and Access provider. EDP roles/groups are managed inside the Keycloak realm and then propagated across the EDP tools. We plan to provide this functionality in EDP Portal using the Kubernetes-native approach (Custom Resources).

                                            "},{"location":"roadmap/#the-delivery-pipelines-dashboard","title":"The Delivery Pipelines Dashboard","text":"

                                            The CD Pipeline section in EDP Portal provides basic information, such as environments, artifact versions deployed per environment, and direct links to the namespaces. One option is to enrich this panel with metrics from Prometheus, custom resources, or events. Another option is to use existing dashboards and expose EDP metrics to them, for example, a plugin for Lens or the OpenShift UI Console.

                                            "},{"location":"roadmap/#split-jira-and-commit-validation-sections","title":"Split Jira and Commit Validation Sections","text":"

                                            The Commit Validate step was initially designed to be aligned with the Jira Integration and cannot be used as a standalone feature. The target state is to ensure that the CommitMessage Validation and Jira Integration features can be used independently. We also want to add support for Conventional Commits.

                                            EDP Release 3.2.0

                                            EDP Portal has separate sections for Jira Integration and CommitMessage Validation step.

                                            "},{"location":"roadmap/#v-documentation-as-code","title":"V. Documentation as Code","text":"

                                            Goal:

                                            • Transparent documentation and clear development guidelines for EDP customization.

                                            Consolidate documentation in the single edp-install repository, use the MkDocs tool to generate the docs, and use GitHub Pages as a hosting solution.

                                            "},{"location":"supported-versions/","title":"Supported Versions and Compatibility","text":"

                                            EPAM Delivery Platform supports only the last three versions. For stable performance, the EDP team recommends installing the corresponding Kubernetes and OpenShift versions as indicated in the table below.

                                            Get acquainted with the list of the latest releases and component versions on which the platform is tested and verified:

                                            EDP Release Version | Release Date | EKS Version | OpenShift Version
                                            3.4  | Aug 18, 2023 | 1.26 | 4.12
                                            3.3  | May 25, 2023 | 1.26 | 4.12
                                            3.2  | Mar 26, 2023 | 1.23 | 4.10
                                            3.1  | Jan 24, 2023 | 1.23 | 4.10
                                            3.0  | Dec 19, 2022 | 1.23 | 4.10
                                            2.12 | Aug 30, 2022 | 1.23 | 4.10

                                            "},{"location":"developer-guide/","title":"Overview","text":"

                                            The EDP Developer guide is intended for developers and provides details on the necessary actions to extend the EDP functionality.

                                            "},{"location":"developer-guide/edp-workflow/","title":"EDP Project Rules. Working Process","text":"

                                            This page contains the details on the project rules and working process for the EDP team and contributors. Explore the main points about working with Gerrit, following the main commit flow, as well as the details about commit types and messages below.

                                            "},{"location":"developer-guide/edp-workflow/#project-rules","title":"Project Rules","text":"

                                            Before starting the development, please check the project rules:

                                            1. It is highly recommended to become familiar with the Gerrit flow. For details, please refer to the Gerrit official documentation and pay attention to the main points:

                                              a. Voting in Gerrit.

                                              b. Resolution of Merge Conflict.

                                              c. Comments resolution.

                                              d. One Jira task should have one Merge Request (MR). If there are many changes within one MR, add the next patch set to the open MR by selecting the Amend commit check box.

                                            2. Only the Assignee is responsible for the MR merge and Jira task status.

                                            3. Every MR should be merged in a timely manner.

                                            4. Log time to Jira ticket.

                                            "},{"location":"developer-guide/edp-workflow/#working-process","title":"Working Process","text":"

                                            With EDP, the main workflow is based on getting a Jira task and creating a Merge Request according to the rules described below.

                                            Workflow

                                            Get a Jira task \u2192 implement and verify the results yourself \u2192 create a Merge Request (MR) \u2192 send it for review \u2192 resolve comments/add changes, ask colleagues for the final review \u2192 track the MR merge \u2192 verify the results yourself \u2192 change the status in the Jira ticket to CODE COMPLETE or RESOLVED \u2192 share the necessary links with a QA specialist in the QA Verification channel \u2192 the QA specialist closes the Jira task after verification \u2192 the Jira task should be CLOSED.

                                            Commit Flow

                                            1. Get a task in the Jira/GitHub dashboard. Please be aware of the following points:

                                              Jira

                                              a. Every task has a reporter who can provide more details in case something is not clear.

                                              b. The responsible person for the task and code implementation is the assignee who tracks the following:

                                              • Actual Jira task status.
                                              • Time logging.
                                              • Add comments, attach necessary files.
                                              • In comments, add link that refers to the merged MR (optional, if not related to many repositories).
                                              • Code review and the final merge.
                                              • MS Teams chats - ping other colleagues, answer questions, etc.
                                              • Verification by a QA specialist.
                                              • Bug fixing.

                                              c. Pay attention to the task Status, which differs between entities; the workflow will help you see the whole task processing:

                                              View Jira workflow

                                              d. There are several entities that are used on the EDP project: Story, Improvement, Task, Bug.

                                              GitHub

                                              a. Every task has a reporter who can provide more details in case something is not clear.

                                              b. The responsible person for the task and code implementation is the assignee who tracks the following:

                                              • Actual GitHub task status.
                                              • Add comments, attach necessary files.
                                              • In comments, add link that refers to the merged MR (optional, if not related to many repositories).
                                              • Code review and the final merge.
                                              • MS Teams chats - ping other colleagues, answer questions, etc.
                                              • Verification by a QA specialist.
                                              • Bug fixing.

                                              c. If you created the task yourself, make sure it is populated completely. See an example below:

                                              GitHub issue

                                            2. Implement the feature, improvement, or fix, and check the results on your own. If it is impossible to check the results of your work before the merge, verify everything later.

                                            3. Create a Merge Request, for details, please refer to the Code Review Process.

                                            4. When committing, use the pattern: commit type: Commit message (#GitHub ticket number).

                                              a. commit type:

                                              feat: (new feature for the user, not a new feature for build script)

                                              fix: (bug fix for the user, not a fix to a build script)

                                              docs: (changes to the documentation)

                                              style: (formatting, missing semicolons, etc; no production code change)

                                              refactor: (refactoring production code, e.g. renaming a variable)

                                              test: (adding missing tests, refactoring tests; no production code change)

                                              chore: (updating grunt tasks etc; no production code change)

                                              !: (added to other commit types to mark breaking changes) For example:

                                              feat!: Job provisioner is responsible for the formation of Jenkinsfile (#26)\n\nBREAKING CHANGE: Job provisioner creates Jenkinsfile and configures it in Jenkins pipeline as a pipeline script.\n

                                              b. Commit message:

                                              • brief, for example:

                                                fix: Fix Gerrit plugin for Jenkins provisioning (#62)

                                                or

                                              • descriptive, for example:

                                                feat: Provide the ability to configure hadolint check (#88)\n\n*Add configuration files .hadolint.yaml and .hadolint.yml to stash\n

                                                Note

                                                It is mandatory to start a commit message with a capital letter.

                                              c. GitHub tickets are typically identified using a number preceded by the # sign and enclosed in parentheses.
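
                                              Putting the pattern from this step together, a commit command could look like the following (the ticket number is illustrative):

                                                git commit -m \"docs: Update the local development guide (#123)\"\n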

                                            Note

                                            Make sure there is a descriptive commit message for a breaking change Merge Request. For example:

                                            feat!: Job provisioner is responsible for the formation of Jenkinsfile

                                            BREAKING CHANGE: Job provisioner creates Jenkinsfile and configures it in Jenkins pipeline as a pipeline script.

                                            Note

                                            If a Merge Request contains both new functionality and breaking changes, make sure the functionality description is placed before the breaking changes. For example:

                                            feat!: Update Gerrit to improve access

                                            • Implement Developers group creation process
                                            • Align group permissions

                                            BREAKING CHANGES: Update Gerrit config according to groups

                                            "},{"location":"developer-guide/edp-workflow/#related-articles","title":"Related Articles","text":"
                                            • Conventional Commits
                                            • Karma
                                            "},{"location":"developer-guide/local-development/","title":"Workspace Setup Manual","text":"

                                            This page is intended for developers and shares the details on how to set up the local environment and start coding in Go for the EPAM Delivery Platform.

                                            "},{"location":"developer-guide/local-development/#prerequisites","title":"Prerequisites","text":"
                                            • Git is installed;
                                            • One of our repositories where you would like to contribute is cloned locally;
                                            • Docker is installed;
                                            • Kubectl is set up;
                                            • Local Kubernetes cluster (Kind is recommended) is installed;
                                            • Helm is installed;
                                            • Any IDE (GoLand is used here as an example) is installed;
                                            • GoLang stable version is installed.

                                            Note

                                            Make sure the GOPATH and GOROOT environment variables are added to PATH.
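
                                            One way to check the variables and extend PATH is shown below; this is only a sketch, adjust it to your own setup:

                                              go env GOPATH GOROOT\nexport PATH=\"$PATH:$(go env GOPATH)/bin:$(go env GOROOT)/bin\"\n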

                                            "},{"location":"developer-guide/local-development/#environment-setup","title":"Environment Setup","text":"

                                            Set up your environment by following the steps below.

                                            "},{"location":"developer-guide/local-development/#set-up-your-ide","title":"Set Up Your IDE","text":"

                                            We recommend using GoLand and enabling the Kubernetes plugin. Before installing plugins, make sure to save your work because IDE may require restarting.

                                            "},{"location":"developer-guide/local-development/#set-up-your-operator","title":"Set Up Your Operator","text":"

                                            To set up the cloned operator, follow the three steps below:

                                            1. Configure the Go Build option. Open the folder in GoLand, click the add configuration button, and select the Go Build option:

                                              Add configuration

                                            2. Fill in the variables in Configuration tab:

                                              • In the Files field, indicate the path to the main.go file;
                                              • In the Working directory field, indicate the path to the operator;
                                              • In the Environment field, specify the namespace to watch by setting the WATCH_NAMESPACE variable. It should equal default, but it can be any other value if required by the cluster specifications.
                                              • In the Environment field, also specify the platform type by setting PLATFORM_TYPE. It should equal either kubernetes or openshift.

                                              Build config

                                            3. Check cluster connectivity and variables. Local development implies working within local Kubernetes clusters. Kind (Kubernetes in Docker) is recommended, so set up this or another environment before running the code.
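
                                            For reference, a local Kind cluster can be created and checked as follows (the cluster name is an example):

                                              kind create cluster --name edp-dev\nkubectl cluster-info --context kind-edp-dev\n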

                                            "},{"location":"developer-guide/local-development/#pre-commit-activities","title":"Pre-commit Activities","text":"

                                            Before making a commit and sending a pull request, take precautionary measures to avoid breaking other parts of the code.

                                            "},{"location":"developer-guide/local-development/#testing-and-linting","title":"Testing and Linting","text":"

                                            Testing and linting must be run before every single commit, with no exceptions. The instructions for the commands below are written here.

                                            It is mandatory to run test and lint to make sure the code passes the tests and meets acceptance criteria. Most operators are covered by tests so just run them by issuing the commands \"make test\" and \"make lint\":

                                              make test\n

                                            The command \"make test\" should give the output similar to the following:

                                            \"make test\" command

                                              make lint\n

                                            The command \"make lint\" should give the output similar to the following:

                                            \"make lint\" command

                                            "},{"location":"developer-guide/local-development/#observe-auto-generated-docs-api-and-manifests","title":"Observe Auto-Generated Docs, API and Manifests","text":"

                                            The commands below are especially essential when making changes to API. The code is unsatisfactory if these commands fail.

                                            • Generate documentation in the .MD file format so the developer can read it:

                                              make api-docs\n

                                              The command \"make api-docs\" should give the output similar to the following:

                                            \"make api-docs\" command with the file contents

                                            • There are also manifests within the operator that generate the zz_generated.deepcopy.go file in the /api/v1 directory. This file is necessary for the platform to work, but it is time-consuming to maintain by hand, so there is a mechanism that generates it automatically. Update it using the following command and check that the result looks correct:

                                              make generate\n

                                              The command \"make generate\" should give the output similar to the following:

                                            \"make generate\" command

                                            • Refresh custom resource definitions for Kubernetes, thus allowing the cluster to know what resources it deals with.

                                              make manifests\n

                                              The command \"make manifests\" should give the output similar to the following:

                                            \"make manifests\" command

                                            At the end of the procedure, you can push your code confidently to your branch and create a pull request.

                                            That's it, you're all set! Good luck in coding!

                                            "},{"location":"developer-guide/local-development/#related-articles","title":"Related Articles","text":"
                                            • EDP Project Rules. Working Process
                                            "},{"location":"developer-guide/mk-docs-development/","title":"Documentation Flow","text":"

                                            This section defines necessary steps to start developing the EDP documentation in the MkDocs Framework. The framework presents a static site generator with documentation written in Markdown. All the docs are configured with a YAML configuration file.

                                            Note

                                            For more details on the framework, please refer to the MkDocs official website.

                                            There are two options for working with MkDocs:

                                            • Work with MkDocs if Docker is installed
                                            • Work with MkDocs if Docker is not installed

                                            Please see below the detailed description of each option and choose the one that suits you.

                                            "},{"location":"developer-guide/mk-docs-development/#mkdocs-with-docker","title":"MkDocs With Docker","text":"

                                            Prerequisites:

                                            • Docker is installed.
                                            • make utility is installed.
                                            • Git is installed. Please refer to the Git downloads.

                                            To work with MkDocs, take the following steps:

                                            1. Clone the edp-install repository to your local folder.

                                            2. Run the following command:

                                              make docs

                                            3. Enter the localhost:8000 address in the browser and check that documentation pages are available.

                                            4. Open the file editor, navigate to edp-install->docs and make necessary changes. Check all the changes at localhost:8000.

                                            5. Create a merge request with changes.

                                            "},{"location":"developer-guide/mk-docs-development/#mkdocs-without-docker","title":"MkDocs Without Docker","text":"

                                            Prerequisites:

                                            • Git is installed. Please refer to the Git downloads.
                                            • Python 3.9.5 is installed.

                                            To work with MkDocs without Docker, take the following steps:

                                            1. Clone the edp-install repository to your local folder.

                                            2. Run the following command:

                                              pip install -r  hack/mkdocs/requirements.txt\n
                                            3. Run the local development command:

                                              mkdocs serve --dev-addr 0.0.0.0:8000\n

                                              Note

                                              This command may not work on Windows, so a quick solution is:

                                              python -m mkdocs serve --dev-addr 0.0.0.0:8000\n

                                            4. Enter the localhost:8000 address in the browser and check that documentation pages are available.

                                            5. Open the file editor, navigate to edp-install->docs and make necessary changes. Check all the changes at localhost:8000.

                                            6. Create a merge request with changes.

                                            "},{"location":"operator-guide/","title":"Overview","text":"

                                            The EDP Operator guide is intended for DevOps and provides information on EDP installation, configuration and customization, as well as the platform support. Inspect the documentation to adjust the EPAM Delivery Platform according to your business needs:

                                            • The Installation section provides the prerequisites for EDP installation, including Kubernetes or OpenShift cluster setup, Keycloak, DefectDojo, Kiosk, and Ingress-nginx setup as well as the subsequent deployment of EPAM Delivery Platform.
                                            • The Configuration section describes the options to set up the project, such as adding a code language, backup, integrating VCS with Jenkins or Tekton, managing Jenkins pipelines, and logging.
                                            • The Integration section comprises the AWS, GitHub, GitLab, Jira, and Logsight integration options.
                                            • The Tutorials section provides information on working with various aspects, for example, using cert-manager in OpenShift, deploying AWS EKS cluster, deploying OKD 4.9 cluster, deploying OKD 4.10 cluster, managing Jenkins agent, and upgrading Keycloak v.17.0.x-legacy to v.19.0.x on Kubernetes.
                                            "},{"location":"operator-guide/add-jenkins-agent/","title":"Manage Jenkins Agent","text":"

                                            Inspect the main steps to add and update Jenkins agent.

                                            "},{"location":"operator-guide/add-jenkins-agent/#createupdate-jenkins-agent","title":"Create/Update Jenkins Agent","text":"

                                            Every Jenkins agent is based on epamedp/edp-jenkins-base-agent. Check DockerHub for the latest version. Use it to create a new agent (or update an old one). See the example of the gradle-java11-agent Dockerfile below:

                                            View: Dockerfile
                                                # Copyright 2021 EPAM Systems.\n    # Licensed under the Apache License, Version 2.0 (the \"License\");\n    # you may not use this file except in compliance with the License.\n    # You may obtain a copy of the License at\n    # http://www.apache.org/licenses/LICENSE-2.0\n    # Unless required by applicable law or agreed to in writing, software\n    # distributed under the License is distributed on an \"AS IS\" BASIS,\n    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n    # See the License for the specific language governing permissions and\n    # limitations under the License.\n\n    FROM epamedp/edp-jenkins-base-agent:1.0.1\n    SHELL [\"/bin/bash\", \"-o\", \"pipefail\", \"-c\"]\n    ENV GRADLE_VERSION=7.1 \\\n        PATH=$PATH:/opt/gradle/bin\n\n    # Install Gradle\n    RUN curl -skL -o /tmp/gradle-bin.zip https://services.gradle.org/distributions/gradle-$GRADLE_VERSION-bin.zip && \\\n        mkdir -p /opt/gradle && \\\n        unzip -q /tmp/gradle-bin.zip -d /opt/gradle && \\\n        ln -sf /opt/gradle/gradle-$GRADLE_VERSION/bin/gradle /usr/local/bin/gradle\n\n    RUN yum install java-11-openjdk-devel.x86_64 -y && \\\n        rpm -V java-11-openjdk-devel.x86_64 && \\\n        yum clean all -y\n\n    WORKDIR $HOME/.gradle\n\n    RUN chown -R \"1001:0\" \"$HOME\" && \\\n        chmod -R \"g+rw\" \"$HOME\"\n\n    USER 1001\n

                                            After the Docker agent update/creation, build and load the image into the project registry (e.g. DockerHub, AWS ECR, etc.).
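
                                            For example, assuming a Dockerfile like the one above is in the current directory, the image could be built and pushed as follows; the registry, repository, and tag are placeholders:

                                              docker build -t <registry>/<repository>/gradle-java11-agent:<tag> .\ndocker push <registry>/<repository>/gradle-java11-agent:<tag>\n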

                                            "},{"location":"operator-guide/add-jenkins-agent/#add-jenkins-agent-configuration","title":"Add Jenkins Agent Configuration","text":"

                                            To add a new Jenkins agent, take the steps below:

                                            1. Run the following command. Please be aware that edp is the name of the EDP tenant.

                                                kubectl edit configmap jenkins-slaves -n edp\n

                                              Note

                                              On an OpenShift cluster, run the oc command instead of kubectl one.

                                              Add new agent template. View: ConfigMap jenkins-slaves

                                                data:\n    docker-template: |-\n     <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n<inheritFrom></inheritFrom>\n<name>docker</name>\n<namespace></namespace>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<instanceCap>2147483647</instanceCap>\n<slaveConnectTimeout>100</slaveConnectTimeout>\n<idleMinutes>5</idleMinutes>\n<activeDeadlineSeconds>0</activeDeadlineSeconds>\n<label>docker</label>\n<serviceAccount>jenkins</serviceAccount>\n<nodeSelector>beta.kubernetes.io/os=linux</nodeSelector>\n<nodeUsageMode>NORMAL</nodeUsageMode>\n<workspaceVolume class=\"org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume\">\n<memory>false</memory>\n</workspaceVolume>\n<volumes/>\n<containers>\n<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n<name>jnlp</name>\n<image>IMAGE_NAME:IMAGE_TAG</image>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<workingDir>/tmp</workingDir>\n<command></command>\n<args>${computer.jnlpmac} ${computer.name}</args>\n<ttyEnabled>false</ttyEnabled>\n<resourceRequestCpu></resourceRequestCpu>\n<resourceRequestMemory></resourceRequestMemory>\n<resourceLimitCpu></resourceLimitCpu>\n<resourceLimitMemory></resourceLimitMemory>\n<envVars>\n<org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n<key>JAVA_TOOL_OPTIONS</key>\n<value>-XX:+UnlockExperimentalVMOptions -Dsun.zip.disableMemoryMapping=true</value>\n</org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n</envVars>\n<ports/>\n</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n</containers>\n<envVars/>\n<annotations/>\n<imagePullSecrets/>\n<podRetention class=\"org.csanchez.jenkins.plugins.kubernetes.pod.retention.Default\"/>\n</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n

                                              Note

                                              The name and label properties should be unique (docker in the example above). Insert the image name and tag instead of IMAGE_NAME:IMAGE_TAG.

                                            2. Open Jenkins to ensure that everything is added correctly. Click the Manage Jenkins option, navigate to the Manage Nodes and Clouds->Configure Clouds->Kubernetes->Pod Templates..., and scroll down to find new Jenkins agent Pod Template details...:

                                              Jenkins pod template

                                              As a result, the newly added Jenkins agent will be available in the Advanced Settings block of the Admin Console tool during the codebase creation:

                                              Advanced settings

                                            3. "},{"location":"operator-guide/add-jenkins-agent/#modify-existing-agent-configuration","title":"Modify Existing Agent Configuration","text":"

                                              If your application is integrated with EDP, take the steps below to change an existing agent configuration:

                                              1. Run the following command. Please be aware that edp is the name of the EDP tenant.

                                                  kubectl edit configmap jenkins-slaves -n edp\n

                                                Note

                                                On an OpenShift cluster, run the oc command instead of kubectl one.

                                              2. Find the agent template in use and change the parameters.

                                              3. Open Jenkins and check that the changes are applied correctly. Click the Manage Jenkins option, navigate to the Manage Nodes and Clouds->Configure Clouds->Kubernetes->Pod Templates..., and scroll down to Pod Template details... with the necessary data.

                                              "},{"location":"operator-guide/add-ons-overview/","title":"Cluster Add-Ons Overview","text":"

                                              This page describes the entity of Cluster Add-Ons for EPAM Delivery Platform, as well as their purpose, benefits and usage.

                                              "},{"location":"operator-guide/add-ons-overview/#what-are-add-ons","title":"What Are Add-Ons","text":"

                                              EDP Add-Ons are essentially a Kubernetes-based structure that enables users to quickly install additional components for the platform using Argo CD applications.

                                              Add-Ons have been introduced into EDP starting from version 3.4.0. They empower users to seamlessly integrate the platform with various additional components, such as SonarQube, Nexus, Keycloak, Jira, and more. This eliminates the need for manual installations, as outlined in the Install EDP page.

                                              In a nutshell, Add-Ons are separate Helm Charts that are meant to be installed with one click using the Argo CD tool.

                                              "},{"location":"operator-guide/add-ons-overview/#add-ons-repository-structure","title":"Add-Ons Repository Structure","text":"

                                              All the Add-Ons are stored in our public GitHub repository adhering to the GitOps approach, which provides the capability to roll back changes when needed. Apart from the default Helm and Git files, it contains both custom resources called Applications for Argo CD and application source code. The repository structure is the following:

                                                \u251c\u2500\u2500 CHANGELOG.md\n  \u251c\u2500\u2500 LICENSE\n  \u251c\u2500\u2500 Makefile\n  \u251c\u2500\u2500 README.md\n  \u251c\u2500\u2500 add-ons\n  \u2514\u2500\u2500 chart\n
                                              • add-ons - the directory that contains Helm charts of the applications that can be integrated with EDP using Add-Ons.
                                              • chart - the directory that contains Helm charts with application templates that will be used to create custom resources called Applications for Argo CD.
                                              "},{"location":"operator-guide/add-ons-overview/#enable-edp-add-ons","title":"Enable EDP Add-Ons","text":"

                                              To enable EDP Add-Ons, it is necessary to have Argo CD configured and to connect and synchronize the forked repository. To do this, follow the guidelines below:

                                              1. Fork the Add-Ons repository to your personal account.

                                              2. Provide the parameter values for the values.yaml files of the desired Add-Ons you are going to install.

                                              3. Navigate to Argo CD -> Settings -> Repositories. Connect your forked repository where you changed the values.yaml files by clicking the + Connect repo button:

                                                Connect the forked repository

                                              4. In the window that appears, fill in the following fields and click the Connect button:

                                                • Name - select the namespace where the project is going to be deployed;
                                                • Choose your connection method - choose Via SSH;
                                                • Type - choose Helm;
                                                • Repository URL - enter the URL of your forked repository.

                                                Repository parameters

                                              5. As soon as the repository is connected, the new item in the repository list will appear:

                                                Connected repository

                                              6. Navigate to Argo CD -> Applications. Click the + New app button:

                                                Adding Argo CD application

                                              7. Fill in the required fields (a declarative equivalent of this application is sketched after this list):

                                                • Application Name - addons-demo;
                                                • Project name - select the namespace where the project is going to be deployed;
                                                • Sync policy - select Manual;
                                                • Repository URL - enter the URL of your forked repository;
                                                • Revision - Head;
                                                • Path - select chart;
                                                • Cluster URL - enter the URL of your cluster;
                                                • Namespace - enter the namespace which must be equal to the Project name field.
                                              8. As soon as the repository is synchronized, the list of applications that can be installed by Add-Ons will be shown:

                                                Add-Ons list
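
                                              For teams that prefer a declarative setup, the application created through the UI above can also be described as an Argo CD Application manifest. The sketch below mirrors the fields from step 7; the project, namespace, cluster URL, and repository URL are placeholders, and the empty syncPolicy corresponds to the Manual sync policy:

                                                kubectl apply -f - <<'EOF'\napiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n  name: addons-demo\n  namespace: argocd\nspec:\n  project: <your-project>              # same value as the Project name field\n  source:\n    repoURL: <forked-repository-url>   # your forked Add-Ons repository\n    targetRevision: HEAD\n    path: chart\n  destination:\n    server: <cluster-url>              # e.g. https://kubernetes.default.svc for the in-cluster API\n    namespace: <your-namespace>        # must be equal to the Project name field\n  syncPolicy: {}                       # empty policy = manual sync\nEOF\n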

                                              "},{"location":"operator-guide/add-ons-overview/#install-edp-add-ons","title":"Install EDP Add-Ons","text":"

                                              Now that Add-Ons are enabled in Argo CD, they can be installed by following the steps below:

                                              1. Choose the Add-On to install.

                                              2. On the chosen Add-On, click the \u22ee button and then Details:

                                                Open Add-Ons

                                              3. To install the Add-On, click the \u22ee button -> Sync (a CLI alternative is shown after this list):

                                                Install Add-Ons

                                              4. Once the Add-On is installed, the Sync OK message will appear in the Add-On status bar:

                                                Sync OK message

                                              5. Open the application details by clicking on the little square with an arrow underneath the Add-On name:

                                                Open details

                                              6. Track application resources and status in the App details menu:

                                                Application details
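
                                              If you prefer the command line, the same sync can be triggered with the Argo CD CLI; the server URL and login options depend on your setup, and the application name comes from the example above:

                                                argocd login <argocd-server-url>\nargocd app sync addons-demo   # same effect as the Sync button in the UI\n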

                                              As we see, Argo CD offers great observability and monitoring tools for its resources, which come in handy when using EDP Add-Ons.

                                              "},{"location":"operator-guide/add-ons-overview/#available-add-ons-list","title":"Available Add-Ons List","text":"

                                              The list of the available Add-Ons:

                                              Name | Description | Default
                                              Argo CD | A GitOps continuous delivery tool that helps automate the deployment, configuration, and lifecycle management of applications in Kubernetes clusters. | false
                                              AWS EFS CSI Driver | A Container Storage Interface (CSI) driver that enables the dynamic provisioning of Amazon Elastic File System (EFS) volumes in Kubernetes clusters. | true
                                              Cert Manager | A native Kubernetes certificate management controller that automates the issuance and renewal of TLS certificates. | true
                                              DefectDojo | A security vulnerability management tool that allows tracking and managing security findings in applications. | true
                                              DependencyTrack | A Software Composition Analysis (SCA) platform that helps identify and manage open-source dependencies and their associated vulnerabilities. | true
                                              EDP | An internal platform created by EPAM to enhance software delivery processes using DevOps principles and tools. | false
                                              Extensions OIDC | EDP Helm chart to provision OIDC clients for different Add-Ons using EDP Keycloak Operator. | true
                                              External Secrets | A Kubernetes Operator that fetches secrets from external secret management systems and injects them as Kubernetes Secrets. | true
                                              Fluent Bit | A lightweight and efficient log processor and forwarder that collects and routes logs from various sources in Kubernetes clusters. | false
                                              Harbor | A cloud-native container image registry that provides support for vulnerability scanning, policy-based image replication, and more. | true
                                              Nginx ingress | An Ingress controller that provides external access to services running within a Kubernetes cluster using Nginx as the underlying server. | true
                                              Jaeger Operator | An operator for deploying and managing Jaeger, an end-to-end distributed tracing system, in Kubernetes clusters. | true
                                              Keycloak | An open-source Identity and Access Management (IAM) solution that enables authentication, authorization, and user management in Kubernetes clusters. | true
                                              Keycloak PostgreSQL | A PostgreSQL database operator that simplifies the deployment and management of PostgreSQL instances in Kubernetes clusters. | false
                                              MinIO Operator | An operator that simplifies the deployment and management of MinIO, a high-performance object storage server compatible with Amazon S3, in Kubernetes clusters. | true
                                              OpenSearch | A community-driven, open-source search and analytics engine that provides scalable and distributed search capabilities for Kubernetes clusters. | true
                                              OpenTelemetry Operator | An operator for automating the deployment and management of OpenTelemetry, a set of observability tools for capturing, analyzing, and exporting telemetry data. | true
                                              PostgreSQL Operator | An operator for running and managing PostgreSQL databases in Kubernetes clusters with high availability and scalability. | true
                                              Prometheus Operator | An operator that simplifies the deployment and management of Prometheus, a monitoring and alerting toolkit, in Kubernetes clusters. | true
                                              Redis Operator | An operator for managing Redis, an in-memory data structure store, in Kubernetes clusters, providing high availability and horizontal scalability. | true
                                              StorageClass | A Kubernetes resource that provides a way to define different classes of storage with different performance characteristics for persistent volumes. | true
                                              Tekton | A flexible and cloud-native framework for building, testing, and deploying applications using Kubernetes-native workflows. | true
                                              Vault | An open-source secrets management solution that provides secure storage, encryption, and access control for sensitive data in Kubernetes clusters. | true

                                              "},{"location":"operator-guide/add-other-code-language/","title":"Add Other Code Language","text":"

                                              There is an ability to extend the default code languages when creating a codebase with the Clone or Import strategy.

                                              Other code language

                                              Warning

                                              The Create strategy does not allow customizing the default code language set.

                                              To customize the Build Tool list, perform the following:

                                              • Edit the edp-admin-console deployment by adding the necessary code language to the BUILD_TOOLS environment variable (a one-line alternative is sketched after this list):

                                                 kubectl edit deployment edp-admin-console -n edp\n

                                                Note

                                                Using an OpenShift cluster, run the oc command instead of kubectl one.

                                                Info

                                                edp is the name of the EDP tenant here and in all the following steps.

                                                View: edp-admin-console deployment
                                                ...\nspec:\ncontainers:\n- env:\n...\n- name: BUILD_TOOLS\nvalue: docker # List of custom build tools in Admin Console, e.g. 'docker,helm';\n...\n...\n
                                              • Add the Jenkins agent by following the instruction.
                                              • Add the Custom CI pipeline provisioner by following the instruction.
                                              • As a result, the newly added Jenkins agent will be available in the Select Jenkins Slave dropdown list of the Advanced Settings block during the codebase creation:

                                                Advanced settings
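
                                              As an alternative to editing the deployment manually, the variable can also be set with a single command; the tenant name and the tool list below are examples:

                                                kubectl set env deployment/edp-admin-console -n edp BUILD_TOOLS=\"docker,helm\"\n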

                                              If it is necessary to create Code Review and Build pipelines, add corresponding entries (e.g. stages[Build-application-docker], [Code-review-application-docker]). See the example below:

                                              ...\nstages['Code-review-application-docker'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" + ',{\"name\": \"sonar\"}]'\nstages['Build-application-docker'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sonar\"},' +\n'{\"name\": \"build-image-kaniko\"}' + ',{\"name\": \"git-tag\"}]'\n...\n

                                              Jenkins job provisioner

                                              Note

                                              Application is one of the available options. Another option might be to add a library. Please refer to the Add Library page for details.

                                              "},{"location":"operator-guide/add-other-code-language/#related-articles","title":"Related Articles","text":"
                                              • Add Application
                                              • Add Library
                                              • Manage Jenkins Agent
                                              • Manage Jenkins CI Pipeline Job Provisioner
                                              "},{"location":"operator-guide/add-security-scanner/","title":"Add Security Scanner","text":"

                                              In order to add a new security scanner, perform the steps below:

                                              1. Select a pipeline customization option from the Customize CI Pipeline article. Follow the steps described in this article to create a new repository.

                                                Note

                                                This tutorial will focus on adding a new stage using shared library via the custom global pipeline libraries.

                                              2. Open the new repository and create a directory with the /src/com/epam/edp/customStages/impl/ci/impl/stageName/ name in the library repository, for example: /src/com/epam/edp/customStages/impl/ci/impl/security/. After that, add a Groovy file with a new name to the same stages catalog, for example: CustomSAST.groovy.

                                              3. Copy the logic from SASTMavenGradleGoApplication.groovy stage into the new CustomSAST.groovy stage.

                                              4. Add a new runGoSecScanner function to the stage:

                                                @Stage(name = \"sast-custom\", buildTool = [\"maven\",\"gradle\",\"go\"], type = [ProjectType.APPLICATION])\nclass CustomSAST {\n...\ndef runGoSecScanner(context) {\ndef edpName = context.platform.getJsonPathValue(\"cm\", \"edp-config\", \".data.edp_name\")\ndef reportData = [:]\nreportData.active = \"true\"\nreportData.verified = \"false\"\nreportData.path = \"sast-gosec-report.json\"\nreportData.type = \"Gosec Scanner\"\nreportData.productTypeName = \"Tenant\"\nreportData.productName = \"${edpName}\"\nreportData.engagementName = \"${context.codebase.name}-${context.git.branch}\"\nreportData.autoCreateContext = \"true\"\nreportData.closeOldFindings = \"true\"\nreportData.pushToJira = \"false\"\nreportData.environment = \"Development\"\nreportData.testTitle = \"SAST\"\nscript.sh(script: \"\"\"\n                set -ex\n                gosec -fmt=json -out=${reportData.path} ./...\n        \"\"\")\nreturn reportData\n}\n...\n}\n
                                              5. Add function calls for the runGoSecScanner and publishReport functions:

                                                ...\nscript.node(\"sast\") {\nscript.dir(\"${testDir}\") {\nscript.unstash 'all-repo'\n...\ndef dataFromGoSecScanner = runGoSecScanner(context)\npublishReport(defectDojoCredentials, dataFromGoSecScanner)\n}\n}\n...\n
                                              6. Gosec scanner will be installed on the Jenkins SAST agent. It is based on the epamedp/edp-jenkins-base-agent. Please check DockerHub for its latest version.

                                                See below an example of the edp-jenkins-sast-agent Dockerfile:

                                                View: Default Dockerfile
                                                 # Copyright 2022 EPAM Systems.\n\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n # You may obtain a copy of the License at\n # http://www.apache.org/licenses/LICENSE-2.0\n\n # Unless required by applicable law or agreed to in writing, software\n # distributed under the License is distributed on an \"AS IS\" BASIS,\n # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n\n # See the License for the specific language governing permissions and\n # limitations under the License.\n\n FROM epamedp/edp-jenkins-base-agent:1.0.31\n\n SHELL [\"/bin/bash\", \"-o\", \"pipefail\", \"-c\"]\n\n USER root\n\n ENV SEMGREP_SCANNER_VERSION=0.106.0 \\\n     GOSEC_SCANNER_VERSION=2.12.0\n\n RUN apk --no-cache add \\\n     curl=7.79.1-r2 \\\n     build-base=0.5-r3 \\\n     python3-dev=3.9.5-r2 \\\n     py3-pip=20.3.4-r1 \\\n     go=1.16.15-r0\n\n # hadolint ignore=DL3059\n RUN pip3 install --no-cache-dir --upgrade --ignore-installed \\\n     pip==22.2.1 \\\n     ruamel.yaml==0.17.21 \\\n     semgrep==${SEMGREP_SCANNER_VERSION}\n\n # Install GOSEC\n RUN curl -Lo /tmp/gosec.tar.gz https://github.com/securego/gosec/releases/download/v${GOSEC_SCANNER_VERSION}/gosec_${GOSEC_SCANNER_VERSION}_linux_amd64.tar.gz && \\\n     tar xf /tmp/gosec.tar.gz && \\\n     rm -f /tmp/gosec.tar.gz && \\\n     mv gosec /bin/gosec\n\n RUN chown -R \"1001:0\" \"$HOME\" && \\\n     chmod -R \"g+rw\" \"$HOME\"\n\n USER 1001\n
                                              "},{"location":"operator-guide/add-security-scanner/#related-articles","title":"Related Articles","text":"
                                              • Customize CI Pipeline
                                              • Static Application Security Testing Overview
                                              • Semgrep
                                              "},{"location":"operator-guide/argocd-integration/","title":"Argo CD Integration","text":"

                                              EDP uses Jenkins Pipeline as a part of the Continuous Delivery/Continuous Deployment implementation. Another approach is to use the Argo CD tool as an alternative to Jenkins. Argo CD follows the best GitOps practices, uses the Kubernetes-native approach for Deployment Management, and has a rich UI and the required RBAC capabilities.

                                              "},{"location":"operator-guide/argocd-integration/#argo-cd-deployment-approach-in-edp","title":"Argo CD Deployment Approach in EDP","text":"

                                              Argo CD can be installed using two different approaches:

                                              • Cluster-wide scope with the cluster-admin access
                                              • Namespaced scope with the single namespace access

                                              Both approaches can be deployed with High Availability (HA) or Non High Availability (non HA) installation manifests.

                                              EDP uses the HA deployment with cluster-admin permissions to minimize cluster resource consumption by sharing a single Argo CD instance across multiple EDP Tenants. Please follow the installation instructions to deploy Argo CD.

                                              "},{"location":"operator-guide/argocd-integration/#edp-argo-cd-integration","title":"EDP Argo CD Integration","text":"

                                              See a diagram below for the details:

                                              Argo CD Diagram

                                              • Argo CD is deployed in a separate argocd namespace.
                                              • Argo CD uses a cluster-admin role for managing cluster-scope resources.
                                              • The control-plane application is created using the App of Apps approach, and its code is managed by the control-plane members.
                                              • The control-plane is used to onboard new Argo CD Tenants (Argo CD Projects - AppProject).
                                              • The EDP Tenant Member manages Argo CD Applications using kind: Application in the edpTenant namespace.

                                              The App Of Apps approach is used to manage the EDP Tenants. Inspect the edp-grub repository structure that is used to provide the EDP Tenants for the Argo CD Projects:

                                              edp-grub\n\u251c\u2500\u2500 LICENSE\n\u251c\u2500\u2500 README.md\n\u251c\u2500\u2500 apps                      ### All Argo CD Applications are stored here\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 grub-argocd.yaml      # Application that provisions Argo CD Resources - Argo Projects (EDP Tenants)\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 grub-keycloak.yaml    # Application that provisions Keycloak Resources - Argo CD Groups (EDP Tenants)\n\u251c\u2500\u2500 apps-configs\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 grub\n\u2502\u00a0\u00a0     \u251c\u2500\u2500 argocd            ### Argo CD resources definition\n\u2502\u00a0\u00a0     \u2502\u00a0\u00a0 \u251c\u2500\u2500 team-bar.yaml\n\u2502\u00a0\u00a0     \u2502\u00a0\u00a0 \u2514\u2500\u2500 team-foo.yaml\n\u2502\u00a0\u00a0     \u2514\u2500\u2500 keycloak          ### Keycloak resources definition\n\u2502\u00a0\u00a0         \u251c\u2500\u2500 team-bar.yaml\n\u2502\u00a0\u00a0         \u2514\u2500\u2500 team-foo.yaml\n\u251c\u2500\u2500 bootstrap\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 root.yaml             ### Root application in App of Apps, which provision Applications from /apps\n\u2514\u2500\u2500 examples                  ### Examples\n\u2514\u2500\u2500 tenant\n        \u2514\u2500\u2500 foo-petclinic.yaml\n

                                              The Root Application must be created under the control-plane scope.
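
                                              For illustration only, a root Application in the App of Apps pattern typically looks like the sketch below; the repository URL and project name are placeholders, and the actual bootstrap/root.yaml in edp-grub may differ:

                                                kubectl apply -f - <<'EOF'\napiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n  name: root\n  namespace: argocd\nspec:\n  project: <control-plane-project>   # assumption: the control-plane scoped Argo CD project\n  source:\n    repoURL: <edp-grub-repository-url>\n    targetRevision: HEAD\n    path: apps                       # provisions the Applications stored in /apps\n  destination:\n    server: https://kubernetes.default.svc\n    namespace: argocd\nEOF\n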

                                              "},{"location":"operator-guide/argocd-integration/#configuration","title":"Configuration","text":"

                                              Note

                                              Make sure that both EDP and Argo CD are installed, and that SSO is enabled.

                                              To start using Argo CD with EDP, perform the following steps:

                                              "},{"location":"operator-guide/argocd-integration/#keycloak","title":"Keycloak","text":"
                                              1. Create a Keycloak Group.

                                                apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealmGroup\nmetadata:\nname: argocd-team-foo-users\nspec:\nname: ArgoCD-team-foo-users\nrealm: main\n
                                              2. In Keycloak, add users to the ArgoCD-team-foo-users Keycloak Group.

                                              "},{"location":"operator-guide/argocd-integration/#argo-cd","title":"Argo CD","text":"
                                              1. Add a credential template for Gerrit, GitHub, GitLab integrations. The credential template must be created for each Git server.

                                                GerritGitHub/GitLab

                                                Copy existing SSH private key for Gerrit to Argo CD namespace

EDP_NAMESPACE=<EDP_NAMESPACE>\nGERRIT_PORT=$(kubectl get gerrit gerrit -n ${EDP_NAMESPACE} -o jsonpath='{.spec.sshPort}')\nGERRIT_ARGOCD_SSH_KEY_NAME=\"gerrit-argocd-sshkey\"\nGERRIT_URL=$(echo \"ssh://argocd@gerrit.${EDP_NAMESPACE}:${GERRIT_PORT}\" | base64)\nkubectl get secret ${GERRIT_ARGOCD_SSH_KEY_NAME} -n ${EDP_NAMESPACE} -o json | jq 'del(.data.username,.metadata.annotations,.metadata.creationTimestamp,.metadata.labels,.metadata.resourceVersion,.metadata.uid,.metadata.ownerReferences)' | jq '.metadata.namespace = \"argocd\"' | jq --arg name \"${EDP_NAMESPACE}\" '.metadata.name = $name' | jq --arg url \"${GERRIT_URL}\" '.data.url = $url' | jq '.data.sshPrivateKey = .data.id_rsa' | jq 'del(.data.id_rsa,.data.\"id_rsa.pub\")' | kubectl apply -f -\nkubectl label --overwrite secret ${EDP_NAMESPACE} -n argocd \"argocd.argoproj.io/secret-type=repo-creds\"\n

                                                Generate an SSH key pair and add a public key to GitLab or GitHub account.

                                                Warning

Use an additional GitHub/GitLab user to access a repository. For example: for GitHub, add the user to the repository with the \"Read\" role; for GitLab, add the user to the repository with the \"Guest\" role.

                                                ssh-keygen -t ed25519 -C \"email@example.com\" -f argocd\n

                                                Copy SSH private key to Argo CD namespace

                                                EDP_NAMESPACE=<EDP_NAMESPACE>\nVCS_HOST=\"<github.com_or_gitlab.com>\"\nACCOUNT_NAME=\"<ACCOUNT_NAME>\"\nURL=\"ssh://git@${VCS_HOST}:22/${ACCOUNT_NAME}\"\n\nkubectl create secret generic ${EDP_NAMESPACE} -n argocd \\\n--from-file=sshPrivateKey=argocd \\\n--from-literal=url=\"${URL}\"\nkubectl label --overwrite secret ${EDP_NAMESPACE} -n argocd \"argocd.argoproj.io/secret-type=repo-creds\"\n

                                                Add public SSH key to GitHub/GitLab account.

                                              2. Add SSH Known hosts for Gerrit, GitHub, GitLab integration.

                                                GerritGitHub/GitLab

                                                Add Gerrit host to Argo CD config map with known hosts

                                                EDP_NAMESPACE=<EDP_NAMESPACE>\nKNOWN_HOSTS_FILE=\"/tmp/ssh_known_hosts\"\nARGOCD_KNOWN_HOSTS_NAME=\"argocd-ssh-known-hosts-cm\"\nGERRIT_PORT=$(kubectl get gerrit gerrit -n ${EDP_NAMESPACE} -o jsonpath='{.spec.sshPort}')\n\nrm -f ${KNOWN_HOSTS_FILE}\nkubectl get cm ${ARGOCD_KNOWN_HOSTS_NAME} -n argocd -o jsonpath='{.data.ssh_known_hosts}' > ${KNOWN_HOSTS_FILE}\nkubectl exec -it deployment/gerrit -n ${EDP_NAMESPACE} -- ssh-keyscan -p ${GERRIT_PORT} gerrit.${EDP_NAMESPACE} >> ${KNOWN_HOSTS_FILE}\nkubectl create configmap ${ARGOCD_KNOWN_HOSTS_NAME} -n argocd --from-file ${KNOWN_HOSTS_FILE} -o yaml --dry-run=client | kubectl apply -f -\n

                                                Add GitHub/GitLab host to Argo CD config map with known hosts

EDP_NAMESPACE=<EDP_NAMESPACE>\nVCS_HOST=\"<VCS_HOST>\"\nKNOWN_HOSTS_FILE=\"/tmp/ssh_known_hosts\"\nARGOCD_KNOWN_HOSTS_NAME=\"argocd-ssh-known-hosts-cm\"\n\nrm -f ${KNOWN_HOSTS_FILE}\nkubectl get cm ${ARGOCD_KNOWN_HOSTS_NAME} -n argocd -o jsonpath='{.data.ssh_known_hosts}' > ${KNOWN_HOSTS_FILE}\nssh-keyscan ${VCS_HOST} >> ${KNOWN_HOSTS_FILE}\nkubectl create configmap ${ARGOCD_KNOWN_HOSTS_NAME} -n argocd --from-file ${KNOWN_HOSTS_FILE} -o yaml --dry-run=client | kubectl apply -f -\n
                                              3. Create an Argo CD Project (EDP Tenant), for example, with the team-foo name:

                                                AppProject
                                                apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\nname: team-foo\nnamespace: argocd\n# Finalizer that ensures that project is not deleted until it is not referenced by any application\nfinalizers:\n- resources-finalizer.argocd.argoproj.io\nspec:\ndescription: CD pipelines for team-foo\nroles:\n- name: developer\ndescription: Users for team-foo tenant\npolicies:\n- p, proj:team-foo:developer, applications, create, team-foo/*, allow\n- p, proj:team-foo:developer, applications, delete, team-foo/*, allow\n- p, proj:team-foo:developer, applications, get, team-foo/*, allow\n- p, proj:team-foo:developer, applications, override, team-foo/*, allow\n- p, proj:team-foo:developer, applications, sync, team-foo/*, allow\n- p, proj:team-foo:developer, applications, update, team-foo/*, allow\n- p, proj:team-foo:developer, repositories, create, team-foo/*, allow\n- p, proj:team-foo:developer, repositories, delete, team-foo/*, allow\n- p, proj:team-foo:developer, repositories, update, team-foo/*, allow\n- p, proj:team-foo:developer, repositories, get, team-foo/*, allow\ngroups:\n# Keycloak Group name\n- ArgoCD-team-foo-users\ndestinations:\n# ensure we can deploy to ns with tenant prefix\n- namespace: 'team-foo-*'\n# allow to deploy to specific server (local in our case)\nserver: https://kubernetes.default.svc\n# Deny all cluster-scoped resources from being created, except for Namespace\nclusterResourceWhitelist:\n- group: ''\nkind: Namespace\n# Allow all namespaced-scoped resources to be created, except for ResourceQuota, LimitRange, NetworkPolicy\nnamespaceResourceBlacklist:\n- group: ''\nkind: ResourceQuota\n- group: ''\nkind: LimitRange\n- group: ''\nkind: NetworkPolicy\n# we are ok to create any resources inside namespace\nnamespaceResourceWhitelist:\n- group: '*'\nkind: '*'\n# enable access only for specific git server. The example below 'team-foo' - it is namespace where EDP deployed\nsourceRepos:\n- ssh://argocd@gerrit.team-foo:30007/*\n# enable capability to deploy objects from namespaces\nsourceNamespaces:\n- team-foo\n
4. Optional: if the Argo CD controller has not been enabled to manage Application resources in the specific namespaces (team-foo, in our case) during the Install Argo CD step, modify the argocd-cmd-params-cm ConfigMap in the Argo CD namespace and add the application.namespaces parameter to the data subsection:

                                                argocd-cmd-params-cm
                                                ...\ndata:\napplication.namespaces: team-foo\n...\n
                                                values.yaml file
                                                ...\nconfigs:\nparams:\napplication.namespaces: team-foo\n...\n
                                              5. Check that your new Repository, Known Hosts, and AppProject are added to the Argo CD UI.
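As an alternative to the UI, a quick command-line check might look like this (a sketch, assuming kubectl access to the argocd namespace):

kubectl get secrets -n argocd -l argocd.argoproj.io/secret-type=repo-creds   # credential templates\nkubectl get cm argocd-ssh-known-hosts-cm -n argocd -o jsonpath='{.data.ssh_known_hosts}'   # known hosts entries\nkubectl get appprojects.argoproj.io -n argocd   # AppProjects, e.g. team-foo\n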

Once Argo CD is successfully integrated, EDP users can utilize Argo CD to deploy CD pipelines.

                                              "},{"location":"operator-guide/argocd-integration/#check-argo-cd-integration-optional","title":"Check Argo CD Integration (Optional)","text":"

This section provides information on how to test the integration with Argo CD; following it is optional.

1. Follow the Add Application instruction to deploy a test EDP application named demo, which should be stored in a private Gerrit repository:

                                                Example: Argo CD Application
                                                apiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\nname: demo\nspec:\nproject: team-foo\ndestination:\nnamespace: team-foo-demo\nserver: https://kubernetes.default.svc\nsource:\nhelm:\nparameters:\n- name: image.tag\nvalue: master-0.1.0-1\n- name: image.repository\nvalue: image-repo\npath: deploy-templates\nrepoURL: ssh://argocd@gerrit.team-foo:30007/demo.git\ntargetRevision: master\nsyncPolicy:\nsyncOptions:\n- CreateNamespace=true\nautomated:\nselfHeal: true\nprune: true\n
                                              2. Check that your new Application is added to the Argo CD UI under the team-foo Project scope.
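For a command-line spot check (a sketch; it assumes the Application was created in the team-foo namespace, as implied by the sourceNamespaces setting above, and deploys into team-foo-demo):

kubectl get applications.argoproj.io demo -n team-foo -o wide   # sync and health status of the test Application\nkubectl get pods -n team-foo-demo   # workload created by the synced Application\n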

                                              "},{"location":"operator-guide/argocd-integration/#related-articles","title":"Related Articles","text":"
                                              • Install Argo CD
                                              "},{"location":"operator-guide/aws-marketplace-install/","title":"Install via AWS Marketplace","text":"

                                              This documentation provides the detailed instructions on how to install the EPAM Delivery Platform via the AWS Marketplace.

                                              To initiate the installation process, navigate to our dedicated AWS Marketplace page and commence the deployment of EPAM Delivery Platform.

                                              Disclaimer

EDP is aligned with industry standards for storing and managing sensitive data, ensuring optimal security. However, the use of custom solutions introduces uncertainties, so the responsibility for the safety of your data rests entirely with the platform administrator.

                                              "},{"location":"operator-guide/aws-marketplace-install/#prerequisites","title":"Prerequisites","text":"

Please familiarize yourself with the Prerequisites page before deploying the product. To perform a minimal installation, ensure that you meet the following requirements (a quick command-line check is sketched after the list):

• The domain name is available and associated with the ingress object in the cluster.
                                              • Cluster administrator access.
                                              • The Tekton resources are deployed.
                                              • Access to the cluster via Service Account token is available.
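A minimal command-line sanity check for these requirements might look like the sketch below (the exact resources to verify depend on your environment):

kubectl auth can-i '*' '*' --all-namespaces   # expect \"yes\" for cluster administrator access\nkubectl api-resources | grep tekton   # Tekton CRDs should be listed once the resources are deployed\nkubectl get ingressclass   # confirm an ingress controller is available for the domain name\n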
                                              "},{"location":"operator-guide/aws-marketplace-install/#deploy-epam-delivery-platform","title":"Deploy EPAM Delivery Platform","text":"

                                              To deploy the platform, follow the steps below:

1. To apply the Tekton stack, deploy the Tekton resources by executing the commands below:

                                                 kubectl create ns tekton-pipelines\n kubectl create ns tekton-chains\n kubectl create ns tekton-pipelines-resolvers\n kubectl apply --filename https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml\n kubectl apply --filename https://storage.googleapis.com/tekton-releases/triggers/latest/interceptors.yaml\n kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml\n kubectl apply --filename https://storage.googleapis.com/tekton-releases/chains/latest/release.yaml\n

                                              2. Define the mandatory parameters you would like to use for installation using the following command:

                                                 kubectl create ns edp\n helm install edp-install \\\n--namespace edp ./* \\\n--set global.dnsWildCard=example.com \\\n--set awsRegion=<AWS_REGION>\n
3. (Optional) Provide a token to sign in to the EDP Portal. Run the following command to create a Service Account with cluster admin permissions:

                                                kubectl create serviceaccount edp-admin -n edp\nkubectl create clusterrolebinding edp-cluster-admin --clusterrole=cluster-admin --serviceaccount=edp:edp-admin\nkubectl apply -f - <<EOF\napiVersion: v1\nkind: Secret\nmetadata:\n  name: edp-admin-token\n  namespace: edp\n  annotations:\n    kubernetes.io/service-account.name: edp-admin\ntype: kubernetes.io/service-account-token\nEOF\n
                                              4. (Optional) To get access to EDP Portal, run the port-forwarding command:

                                                 kubectl port-forward service/edp-headlamp 59480:80 -n edp\n

5. (Optional) To open EDP Portal, navigate to http://localhost:59480.

6. (Optional) To get the admin token for signing in to the EDP Portal, run:

                                                kubectl get secrets -o jsonpath=\"{.items[?(@.metadata.annotations['kubernetes\\.io/service-account\\.name']=='edp-admin')].data.token}\" -n edp|base64 --decode\n

                                              As a result, you will get access to EPAM Delivery Platform components via EDP Portal UI. Navigate to our Use Cases to try out EDP functionality. Visit other subsections of the Operator Guide to figure out how to configure EDP and integrate it with various tools.

                                              "},{"location":"operator-guide/aws-marketplace-install/#related-articles","title":"Related Articles","text":"
                                              • EPAM Delivery Platform on AWS Marketplace
                                              • Integrate GitHub/GitLab in Tekton
                                              • Set Up Kubernetes
                                              • Set Up OpenShift
                                              • EDP Installation Prerequisites Overview
                                              • Headlamp OIDC Integration
                                              "},{"location":"operator-guide/capsule/","title":"Capsule Integration","text":"

This documentation guide provides comprehensive instructions for integrating Capsule with the EPAM Delivery Platform to enhance security and resource management.

                                              Note

When integrating the EPAM Delivery Platform with Capsule, it's essential to understand that the platform needs administrative rights to create and manage resources. This requirement might raise security concerns, but it only pertains to the deployment process within the platform. As an alternative, you can manually create the permissions for each deployment flow, which addresses and lessens these security concerns.
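As an illustration of the manual alternative, the permissions for a single deployment flow could be created with a namespace-scoped RoleBinding similar to the sketch below; the stage namespace team-foo-dev and the service account edp-cd-pipeline-operator are assumptions and must be adjusted to your installation:

kubectl create namespace team-foo-dev\nkubectl create rolebinding edp-deployer --clusterrole=admin --serviceaccount=edp:edp-cd-pipeline-operator -n team-foo-dev   # grant the deployment flow admin rights only in its own namespace\n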

                                              "},{"location":"operator-guide/capsule/#installation","title":"Installation","text":"

                                              To install the Capsule tool, use the Cluster Add-Ons approach. For more details, please refer to the Capsule official page.

                                              "},{"location":"operator-guide/capsule/#configuration","title":"Configuration","text":"

                                              To use Capsule in EDP, follow the steps below:

                                              1. Run the command below to upgrade EDP with Capsule capabilities:

                                                helm upgrade --install edp epamedp/edp-install -n edp --values values.yaml --set cd-pipeline-operator.tenancyEngine=capsule\n
                                              2. Open the CapsuleConfiguration custom resource called default:

                                                kubectl edit CapsuleConfiguration default\n

                                                Add the tenant name (by default, it's the EDP namespace name) to the manifest's spec section as follows:

                                                spec:\nuserGroups:\n- system:serviceaccounts:edp\n

As a result, EDP will use Capsule capabilities to manage tenants, thus providing better access management.
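To verify the result (a sketch, assuming the Capsule CRDs are installed in the cluster), list the tenants managed by Capsule:

kubectl get tenants.capsule.clastix.io   # tenants created for the EDP environments should appear here\n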

                                              "},{"location":"operator-guide/capsule/#related-articles","title":"Related Articles","text":"
                                              • Install EDP With Values File
                                              • Cluster Add-Ons Overview
                                              • Set Up Kiosk
                                              • EDP Kiosk Usage
                                              "},{"location":"operator-guide/configure-keycloak-oidc-eks/","title":"EKS OIDC With Keycloak","text":"

This article provides instructions for configuring Keycloak as an OIDC Identity Provider for EKS. The example is written in Terraform (HCL).

                                              "},{"location":"operator-guide/configure-keycloak-oidc-eks/#prerequisites","title":"Prerequisites","text":"

                                              To follow the instruction, check the following prerequisites:

                                              1. terraform 0.14.10
                                              2. hashicorp/aws = 4.8.0
                                              3. mrparkers/keycloak >= 3.0.0
                                              4. hashicorp/kubernetes ~> 2.9.0
                                              5. kubectl = 1.22
                                              6. kubelogin >= v1.25.1
7. Ensure that Keycloak is reachable from AWS over the network (i.e., it is not in a private network).

                                              Note

To connect OIDC with a cluster, install and configure the kubelogin plugin. For Windows, it is recommended to download kubelogin as a binary and add it to your PATH.

                                              "},{"location":"operator-guide/configure-keycloak-oidc-eks/#solution-overview","title":"Solution Overview","text":"

The solution includes three types of resources: AWS (EKS), Keycloak, and Kubernetes. The Keycloak resources shown on the left of the diagram remain unchanged after creation, which allows associating a claim with a user's group membership. The other resources can be created, deleted, or changed when needed. The most important Kubernetes permission objects are RoleBindings and ClusterRoles/Roles: Roles define a set of permissions, while RoleBindings map a Kubernetes Role to the corresponding Keycloak group, so a group member receives only the appropriate permissions.

                                              EKS Keycloak OIDC

                                              "},{"location":"operator-guide/configure-keycloak-oidc-eks/#keycloak-configuration","title":"Keycloak Configuration","text":"

                                              To configure Keycloak, follow the steps described below.

                                              • Create a client:
                                              resource \"keycloak_openid_client\" \"openid_client\" {\nrealm_id                                  = \"openshift\"\nclient_id                                 = \"kubernetes\"\naccess_type                               = \"CONFIDENTIAL\"\nstandard_flow_enabled                     = true\nimplicit_flow_enabled                     = false\ndirect_access_grants_enabled              = true\nservice_accounts_enabled                  = true\noauth2_device_authorization_grant_enabled = true\nbackchannel_logout_session_required       = true\n\nroot_url    = \"http://localhost:8000/\"\nbase_url    = \"http://localhost:8000/\"\nadmin_url   = \"http://localhost:8000/\"\nweb_origins = [\"*\"]\n\nvalid_redirect_uris = [\n\"http://localhost:8000/*\"\n]\n}\n
                                              • Create the client scope:
                                              resource \"keycloak_openid_client_scope\" \"openid_client_scope\" {\nrealm_id               = <realm_id>\nname                   = \"groups\"\ndescription            = \"When requested, this scope will map a user's group memberships to a claim\"\ninclude_in_token_scope = true\nconsent_screen_text    = false\n}\n
                                              • Add scope to the client by selecting all default client scope:
                                              resource \"keycloak_openid_client_default_scopes\" \"client_default_scopes\" {\nrealm_id  = <realm_id>\nclient_id = keycloak_openid_client.openid_client.id\n\ndefault_scopes = [\n\"profile\",\n\"email\",\n\"roles\",\n\"web-origins\",\nkeycloak_openid_client_scope.openid_client_scope.name,\n]\n}\n
                                              • Add the following mapper to the client scope:
                                              resource \"keycloak_openid_group_membership_protocol_mapper\" \"group_membership_mapper\" {\nrealm_id            = <realm_id>\nclient_scope_id     = keycloak_openid_client_scope.openid_client_scope.id\nname                = \"group-membership-mapper\"\nadd_to_id_token     = true\nadd_to_access_token = true\nadd_to_userinfo     = true\nfull_path           = false\n\nclaim_name = \"groups\"\n}\n
• As a result, the authorization token will contain the groups claim with the list of the user's group memberships in the realm:
                                                ...\n\"email_verified\": false,\n\"name\": \"An User\",\n\"groups\": [\n\"<env_prefix_name>-oidc-viewers\",\n\"<env_prefix_name>-oidc-cluster-admins\"\n],\n\"preferred_username\": \"an_user@example.com\",\n\"given_name\": \"An\",\n\"family_name\": \"User\",\n\"email\": \"an_user@example.com\"\n...\n
                                              • Create group/groups, e.g. admin group:
                                              resource \"keycloak_group\" \"oidc_tenant_admin\" {\nrealm_id = <realm_id>\nname     = \"kubernetes-oidc-admins\"\n}\n
                                              "},{"location":"operator-guide/configure-keycloak-oidc-eks/#eks-configuration","title":"EKS Configuration","text":"

                                              To configure EKS, follow the steps described below. In AWS Console, open EKS home page -> Choose a cluster -> Configuration tab -> Authentication tab.

                                              The Terraform code for association with Keycloak:

                                              • terraform.tfvars
                                                ...\ncluster_identity_providers = {\nkeycloak = {\nclient_id                     = <keycloak_client_id>\nidentity_provider_config_name = \"Keycloak\"\nissuer_url                    = \"https://<keycloak_url>/auth/realms/<realm_name>\"\ngroups_claim                  = \"groups\"\n}\n...\n
                                              • the resource code
                                                resource \"aws_eks_identity_provider_config\" \"keycloak\" {\nfor_each = { for k, v in var.cluster_identity_providers : k => v if true }\n\ncluster_name = var.platform_name\n\noidc {\nclient_id                     = each.value.client_id\ngroups_claim                  = lookup(each.value, \"groups_claim\", null)\ngroups_prefix                 = lookup(each.value, \"groups_prefix\", null)\nidentity_provider_config_name = try(each.value.identity_provider_config_name, each.key)\nissuer_url                    = each.value.issuer_url\nrequired_claims               = lookup(each.value, \"required_claims\", null)\nusername_claim                = lookup(each.value, \"username_claim\", null)\nusername_prefix               = lookup(each.value, \"username_prefix\", null)\n}\n\ntags = var.tags\n}\n

                                              Note

                                              The resource creation takes around 20-30 minutes. The resource doesn't support updating, so each change will lead to deletion of the old instance and creation of a new instance instead.

                                              "},{"location":"operator-guide/configure-keycloak-oidc-eks/#kubernetes-configuration","title":"Kubernetes Configuration","text":"

                                              To connect the created Keycloak resources with permissions, it is necessary to create Kubernetes Roles and RoleBindings:

                                              • ClusterRole
                                                resource \"kubernetes_cluster_role_v1\" \"oidc_tenant_admin\" {\nmetadata {\nname = \"oidc-admin\"\n}\nrule {\napi_groups = [\"*\"]\nresources  = [\"*\"]\nverbs      = [\"*\"]\n}\n}\n
                                              • ClusterRoleBinding
                                                resource \"kubernetes_cluster_role_binding_v1\" \"oidc_cluster_rb\" {\nmetadata {\nname = \"oidc-cluster-admin\"\n}\nrole_ref {\napi_group = \"rbac.authorization.k8s.io\"\nkind      = \"ClusterRole\"\nname      = kubernetes_cluster_role_v1.oidc_tenant_admin.metadata[0].name\n}\nsubject {\nkind      = \"Group\"\nname      = keycloak_group.oidc_tenant_admin.name\napi_group = \"rbac.authorization.k8s.io\"\n    # work-around due https://github.com/hashicorp/terraform-provider-kubernetes/issues/710\nnamespace = \"\"\n}\n}\n

                                              Note

When creating the Keycloak group, ClusterRole, and ClusterRoleBinding, a user receives cluster admin permissions. There is also an option to provide admin permissions only for a particular namespace, or for another set of resources in another namespace. For details, please refer to the Mixing Kubernetes Roles page.

                                              "},{"location":"operator-guide/configure-keycloak-oidc-eks/#kubeconfig","title":"Kubeconfig","text":"

                                              Template for kubeconfig:

                                              apiVersion: v1\npreferences: {}\nkind: Config\n\nclusters:\n- cluster:\nserver: https://<eks_url>.eks.amazonaws.com\ncertificate-authority-data: <certificate_authtority_data>\nname: <cluster_name>\n\ncontexts:\n- context:\ncluster: <cluster_name>\nuser: <keycloak_user_email>\nname: <cluster_name>\n\ncurrent-context: <cluster_name>\n\nusers:\n- name: <keycloak_user_email>\nuser:\nexec:\napiVersion: client.authentication.k8s.io/v1beta1\ncommand: kubectl\nargs:\n- oidc-login\n- get-token\n- -v1\n- --oidc-issuer-url=https://<keycloak_url>/auth/realms/<realm>\n- --oidc-client-id=<keycloak_client_id>\n- --oidc-client-secret=<keycloak_client_secret>\n
The -v1 flag can be used for debugging; in most cases it is not needed and can be removed.
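To check the OIDC flow outside of kubectl (a sketch; the issuer URL, client id, and client secret are the same placeholders as in the kubeconfig above), the token can be requested directly with the kubelogin plugin:

kubectl oidc-login get-token --oidc-issuer-url=https://<keycloak_url>/auth/realms/<realm> --oidc-client-id=<keycloak_client_id> --oidc-client-secret=<keycloak_client_secret>   # prints an ExecCredential JSON containing the ID token\n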

                                              To find the client secret:

                                              1. Open Keycloak
                                              2. Choose realm
                                              3. Find keycloak_client_id that was previously created
                                              4. Open Credentials tab
                                              5. Copy Secret
                                              "},{"location":"operator-guide/configure-keycloak-oidc-eks/#testing","title":"Testing","text":"

                                              Before testing, ensure that a user is a member of the correct Keycloak group. To add a user to a Keycloak group:

                                              1. Open Keycloak
                                              2. Choose realm
                                              3. Open user screen with search field
                                              4. Find a user and open the configuration
                                              5. Open Groups tab
                                              6. In Available Groups, choose an appropriate group
                                              7. Click the Join button
                                              8. The group should appear in the Group Membership list

                                              Follow the steps below to test the configuration:

• Run a kubectl command; it is important to specify the correct kubeconfig:
                                                KUBECONFIG=<path_to_oidc_kubeconfig> kubectl get ingresses -n <namespace_name>\n
• On the first run, you will be redirected to the Keycloak login page. Log in using your credentials (login:password) or an SSO provider. After a successful login, you will receive the following notification, which can be closed:

                                              OIDC Successful Login

• As a result, a response from Kubernetes will appear in the console, provided the user is configured correctly and is a member of the correct group with the appropriate Roles/RoleBindings.
• If something is not set up correctly, the following error will be displayed:
                                                Error from server (Forbidden): ingresses.networking.k8s.io is forbidden:\nUser \"https://<keycloak_url>/auth/realms/<realm>#<keycloak_user_id>\"\ncannot list resource \"ingresses\" in API group \"networking.k8s.io\" in the namespace \"<namespace_name>\"\n
                                              "},{"location":"operator-guide/configure-keycloak-oidc-eks/#session-update","title":"Session Update","text":"

To update the session, clear the cache. The default location of the login cache is:

                                              rm -rf ~/.kube/cache\n
                                              "},{"location":"operator-guide/configure-keycloak-oidc-eks/#access-cluster-via-lens","title":"Access Cluster via Lens","text":"

                                              To access the Kubernetes cluster via Lens, follow the steps below to configure it:

• Add a new kubeconfig to the location where Lens has access. The default location of the kubeconfig is ~/.kube/config, but it can be changed by navigating to File -> Preferences -> Kubernetes -> Kubeconfig Syncs.
• (Optional) On Windows, it is recommended to reboot the system after adding a new kubeconfig.
• Authenticate on the Keycloak login page to be able to access the cluster.

                                              Note

Lens does not add the project namespaces automatically, so they must be added manually: go to Settings -> Namespaces and add the namespaces of the project.

                                              "},{"location":"operator-guide/configure-keycloak-oidc-eks/#related-articles","title":"Related Articles","text":"
                                              • Headlamp OIDC Configuration
                                              "},{"location":"operator-guide/container-registry-harbor-integration-tekton-ci/","title":"Integrate Harbor With EDP Pipelines","text":"

                                              Harbor serves as a tool for storing images and artifacts. This documentation contains instructions on how to create a project in Harbor and set up a robot account for interacting with the registry from CI pipelines.

                                              "},{"location":"operator-guide/container-registry-harbor-integration-tekton-ci/#overview","title":"Overview","text":"

                                              Harbor integration with Tekton enables the centralized storage of container images within the cluster, eliminating the need for external services. By leveraging Harbor as the container registry, users can manage and store their automation results and reports in one place.

                                              "},{"location":"operator-guide/container-registry-harbor-integration-tekton-ci/#integration-procedure","title":"Integration Procedure","text":"

                                              The integration process involves two steps:

                                              1. Creating a project to store application images.

                                              2. Creating two accounts with different permissions to push (read/write) and pull (read-only) project images.

                                              "},{"location":"operator-guide/container-registry-harbor-integration-tekton-ci/#create-new-project","title":"Create New Project","text":"

The process of creating a new project is as follows (a command-line alternative via the Harbor API is sketched after these steps):

                                              1. Log in to the Harbor console using your credentials.
                                              2. Navigate to the Projects menu, click the New Project button:

                                                Projects menu

                                              3. On the New Project menu, enter a project name that matches your EDP namespace in the Project Name field. Keep other fields as default and click OK to continue:

                                                New Project menu
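If you prefer automation over the UI, the same project can be created through the Harbor REST API, as sketched below; the registry URL, the admin credentials, and the edp project name are placeholders and assumptions:

curl -u admin:<harbor_admin_password> -X POST \"https://harbor-registry.com/api/v2.0/projects\" -H \"Content-Type: application/json\" -d '{\"project_name\": \"edp\", \"metadata\": {\"public\": \"false\"}}'   # creates a private project named after the EDP namespace\n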

                                              "},{"location":"operator-guide/container-registry-harbor-integration-tekton-ci/#set-up-robot-account","title":"Set Up Robot Account","text":"

To make EDP and the Harbor project interact with each other, set up a robot account:

                                              1. Navigate to your newly created project, select Robot Accounts menu and choose New Robot Account:

                                                Create Robot Account menu

                                              2. In the pop-up window, fill in the fields as follows:

                                                • Name - edp-push;
                                                • Expiration time - set the value which is aligned with your organization policy;
                                                • Description - read/write permissions;
                                                • Permissions - Pull Repository and Push Repository.

                                                To proceed, click the ADD button:

                                                Robot Accounts menu

                                              3. In the appeared window, copy the robot account credentials or click the Export to file button to save the secret and account name locally:

                                                New credentials for Robot Account

                                              4. Provision the kaniko-docker-config secrets using kubectl, EDP Portal or with the externalSecrets operator:

                                                Example

                                                The auth string can be generated by this command:

                                                echo -n \"robot\\$edp-project+edp:secret\" | base64\n

                                                kubectlManual SecretExternal Secrets Operator
                                                  apiVersion: v1\nkind: Secret\nmetadata:\nname: kaniko-docker-config\nnamespace: edp\nlabels:\napp.edp.epam.com/secret-type: registry\ntype: kubernetes.io/dockerconfigjson\nstringData:\n.dockerconfigjson: |\n{\n\"auths\" : {\n\"harbor-registry.com\":\n{\n\"username\":\"registry-username\",\n\"password\":\"registry-password\",\n\"auth\": \"secret-string\"\n}\n}\n}\n

                                                Navigate to EDP Portal UI -> EDP -> Configuration -> Registry. Fill in the required fields and click Save.

                                                Registry update manual secret

                                                \"kaniko-docker-config\":\n{\"auths\" : \"harbor-registry.com\":\n{\n\"username\":\"registry-username\",\n\"password\":\"registry-password\",\n\"auth\": \"secret-string\"\n}\n}\n

                                                Navigate to EDP Portal UI -> EDP -> Configuration -> Registry. Here, you will observe the Managed by ExternalSecret message:

                                                Registry managed by external secret operator

                                                Note

                                                More details of External Secrets Operator Integration can be found in the External Secrets Operator Integration page.

5. Repeat steps 2-3 with the values below:

                                                • Name - edp-pull;
                                                • Expiration time - set the value which is aligned with your organization policy;
                                                • Description - read-only permissions;
                                                • Permissions - Pull Repository.
                                              6. Provision the regcred secrets using kubectl, EDP Portal or with the externalSecrets operator:

                                                Example

                                                The auth string can be generated by this command:

                                                echo -n \"robot\\$edp-project+edp-push:secret\" | base64\n

                                                kubectlManual SecretExternal Secrets Operator
                                                apiVersion: v1\nkind: Secret\nmetadata:\nname: regcred\nnamespace: edp\nlabels:\napp.edp.epam.com/secret-type: registry\ntype: kubernetes.io/dockerconfigjson\nstringData:\n.dockerconfigjson: |\n{\n\"auths\" : {\n\"harbor-registry.com\":\n{\n\"username\":\"registry-username\",\n\"password\":\"registry-password\",\n\"auth\": \"secret-string\"\n}\n}\n}\n

                                                Navigate to EDP Portal UI -> EDP -> Configuration -> Registry. Fill in the required fields and click Save.

                                                Registry update manual secret

                                                \"regcred\":\n{\"auths\" : \"harbor-registry.com\":\n{\n\"username\":\"registry-username\",\n\"password\":\"registry-password\",\n\"auth\": \"secret-string\"\n}\n}\n

                                                Navigate to EDP Portal UI -> EDP -> Configuration -> Registry. Here, you will observe the Managed by ExternalSecret message:

                                                Registry managed by external secret operator

                                                Note

                                                More details of External Secrets Operator Integration can be found in the External Secrets Operator Integration page.

                                              7. In the values.yaml file for the edp-install Helm chart, set the following values for the specified fields:

                                                Manual SecretExternal Secrets Operator

                                                If the kaniko-docker-config secret has been created manually:

                                                values.yaml
                                                ...\nkaniko:\nexistingDockerConfig: \"kaniko-docker-config\"\nglobal:\ndockerRegistry:\nurl: harbor-registry.com\ntype: \"harbor\"\n...\n

                                                If the kaniko-docker-config secret has been created via External Secrets Operator:

                                                values.yaml
                                                ...\nkaniko:\nexistingDockerConfig: \"kaniko-docker-config\"\nexternalSecrets:\nenabled: true\nglobal:\ndockerRegistry:\nurl: harbor-registry.com\ntype: \"harbor\"\n...\n
                                              8. (Optional) If you've already deployed the EDP Helm chart, you can update it using the following command:

helm upgrade --install edp epamedp/edp-install \\\n--values values.yaml \\\n--namespace edp\n

As a result, application images built in EDP Portal will be stored in the Harbor project and will be deployed from the Harbor registry.

Harbor projects can also be created together with a retention policy generated through the EDP script in edp-cluster-add-ons.

                                              "},{"location":"operator-guide/container-registry-harbor-integration-tekton-ci/#related-articles","title":"Related Articles","text":"
                                              • Install EDP
                                              • Install Harbor
                                              • Adjust Jira Integration
                                              • Custom SonarQube Integration
                                              "},{"location":"operator-guide/delete-edp/","title":"Uninstall EDP","text":"

                                              This tutorial provides detailed instructions on the optimal method to uninstall the EPAM Delivery Platform.

                                              "},{"location":"operator-guide/delete-edp/#deletion-procedure","title":"Deletion Procedure","text":"

                                              To uninstall EDP, perform the following steps:

1. It is highly recommended to delete all the resources created via the EDP Portal UI first. These resources can be:

                                                • Applications;
                                                • Libraries;
                                                • Autotests;
                                                • Infrastructures;
                                                • CD Pipelines.

We recommend deleting them via the EDP Portal UI, although it is also possible to delete all the EDP Portal resources using the kubectl delete command.

2. Delete the application namespaces. They should be named according to the edp-<cd-pipeline>-<stage-name> pattern (see the command sketch after this list).

                                              3. Uninstall EDP the same way it was installed.

                                              4. Run the script that deletes the rest of the custom resources:

                                                View: CleanEDP.sh
                                                #!/bin/sh\n\n###################################################################\n# A POSIX script to remove EDP Kubernetes Custom Resources        #\n#                                                                 #\n# PREREQUISITES                                                   #\n#     kubectl>=1.23.x, awscli (for EKS authentication)            #\n#                                                                 #\n# TESTED                                                          #\n#     OS: Ubuntu, FreeBSD, Windows (GitBash)                      #\n#     Shells: zsh, bash, dash                                     #\n###################################################################\n\n[ -n \"${DEBUG}\" ] && set -x\n\nset -e\n\nexit_err() {\nprintf '%s\\n' \"$1\" >&2\nexit 1\n}\n\ncheck_kubectl() {\nif ! hash kubectl; then\nexit_err \"Error: kubectl is not installed\"\nfi\n}\n\nget_script_help() {\nself_name=\"$(basename \"$0\")\"\necho \"\\\n${self_name} deletes EDP Kubernetes Custom Resources\n\nUsage: ${self_name}\n\nOptions:\n${self_name} [OPTION] [FILE]\n\n-h, --help          Print Help\n-k, --kubeconfig    Pass Kubeconfig file\n\nDebug:\nDEBUG=true ${self_name}\n\nExamples:\n${self_name} --kubeconfig ~/.kube/custom_config\"\n}\n\nyellow_fg() {\ntput setaf 3 || true\n}\n\nno_color_out() {\ntput sgr0 || true\n}\n\nget_current_context() {\nkubectl config current-context\n}\n\nget_context_ns() {\nkubectl config view \\\n--minify --output jsonpath='{..namespace}' 2> /dev/null\n}\n\nget_ns() {\nkubectl get ns \"${edp_ns}\" --output name --request-timeout='5s'\n}\n\ndelete_ns() {\nkubectl delete ns \"${edp_ns}\" --timeout='30s'\n}\n\nget_edp_crds() {\nkubectl get crds --no-headers=true | awk '/edp.epam.com/ {print $1}'\n}\n\nget_all_edp_crs_manif() {\nkubectl get \"${edp_crds_comma_list}\" -n \"${edp_ns}\" \\\n--output yaml --ignore-not-found --request-timeout='15s'\n}\n\ndel_all_edp_crs() {\nkubectl delete --all \"${edp_crds_comma_list}\" -n \"${edp_ns}\" \\\n--ignore-not-found --timeout='15s'\n}\n\niterate_edp_crs() {\nedp_crds_comma_list=\"$(printf '%s' \"${edp_crds}\" | tr -s '\\n' ',')\"\nget_all_edp_crs_manif \\\n| sed '/finalizers:/,/.*:/{//!d;}' \\\n| kubectl replace -f - || true\ndel_all_edp_crs || true\n}\n\niterate_edp_crds() {\nn=0\nwhile [ \"$n\" -lt 2 ]; do\nn=$((n + 1))\n\nif [ \"$n\" -eq 2 ]; then\n# Delete remaining resources\nedp_crds=\"keycloakclients,codebasebranches,jenkinsfolders\"\niterate_edp_crs\necho \"EDP Custom Resources in NS ${color_ns} have been deleted.\"\nbreak\nfi\n\necho \"Replacing EDP CR Manifests. Wait for output (may take 2min)...\"\nedp_crds=\"$(get_edp_crds)\"\niterate_edp_crs\ndone\n}\n\nselect_ns() {\nis_context=\"$(get_current_context)\" || exit 1\nprintf '%s' \"Current cluster: \"\nprintf '%s\\n' \"$(yellow_fg)${is_context}$(no_color_out)\"\n\ncurrent_ns=\"$(get_context_ns)\" || true\n\nprintf '%s\\n' \"Enter EDP namespace\"\nprintf '%s' \"Skip to use [$(yellow_fg)${current_ns}$(no_color_out)]: \"\nread -r edp_ns\n\nif [ -z \"${edp_ns}\" ]; then\nedp_ns=\"${current_ns}\"\necho \"${edp_ns}\"\nif [ -z \"${edp_ns}\" ]; then\nexit_err \"Error: namespace is not specified\"\nfi\nelse\nget_ns || exit 1\nfi\n\ncolor_ns=\"$(yellow_fg)${edp_ns}$(no_color_out)\"\n}\n\nchoose_delete_ns() {\nprintf '%s\\n' \"Do you want to delete namespace ${color_ns} as well? 
(y/n)?\"\nprintf '%s' \"Skip or enter [N/n] to keep the namespace: \"\nread -r answer\nif [ \"${answer}\" != \"${answer#[Yy]}\" ]; then\ndelete_edp_ns=true\necho \"Namespace ${color_ns} is marked for deletion.\"\nelse\necho \"Skipped. Deleting EDP Custom Resources only.\"\nfi\n}\n\ndelete_ns_if_true() {\nif [ \"${delete_edp_ns}\" = true ]; then\necho \"Deleting ${color_ns} namespace...\"\ndelete_ns || exit 1\nfi\n}\n\ninvalid_option() {\nexit_err \"Invalid option '$1'. Use -h, --help for details\"\n}\n\nmain_func() {\ncheck_kubectl\nselect_ns\nchoose_delete_ns\niterate_edp_crds\ndelete_ns_if_true\n}\n\nwhile [ \"$#\" -gt 0 ]; do\ncase \"$1\" in\n-h | --help)\nget_script_help\nexit 0\n;;\n-k | --kubeconfig)\nshift\n[ $# = 0 ] && exit_err \"No Kubeconfig file specified\"\nexport KUBECONFIG=\"$1\"\n;;\n--)\nbreak\n;;\n-k* | --k*)\necho \"Did you mean '--kubeconfig'?\"\ninvalid_option \"$1\"\n;;\n-* | *)\ninvalid_option \"$1\"\n;;\nesac\nshift\ndone\n\nmain_func\n

The script will prompt the user to specify the namespace where EDP was deployed and to choose whether that namespace should also be deleted. The script deletes the EDP custom resources in the namespace specified by the user.

5. In Keycloak, delete the edp-main realm, and also delete the client named according to the edp-main pattern in the openshift realm.
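As mentioned in step 2, the application namespaces can be listed and removed with commands like the following sketch (substitute your pipeline and stage names):

kubectl get namespaces | grep \"edp-\"   # candidates following the edp-<cd-pipeline>-<stage-name> pattern\nkubectl delete namespace edp-<cd-pipeline>-<stage-name>\n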

                                              "},{"location":"operator-guide/delete-edp/#related-articles","title":"Related Articles","text":"
                                              • Install EDP
                                              • Install EDP via Helmfile
                                              • Keycloak Integration
                                              "},{"location":"operator-guide/delete-jenkins-job-provision/","title":"Delete Jenkins Job Provision","text":"

                                              To delete the job provisioner, take the following steps:

1. Delete the job provisioner from Jenkins. Navigate to the Admin Console -> Jenkins -> jobs -> job-provisions folder, select the necessary provisioner, and click the drop-down to the right of the provisioner name. Select Delete project.

                                                Delete job provisioner

                                              "},{"location":"operator-guide/dependency-track/","title":"Install DependencyTrack","text":"

                                              This documentation guide provides comprehensive instructions for installing and integrating DependencyTrack with the EPAM Delivery Platform.

                                              "},{"location":"operator-guide/dependency-track/#prerequisites","title":"Prerequisites","text":"
                                              • Kubectl version 1.26.0 is installed.
                                              • Helm version 3.12.0+ is installed.
                                              "},{"location":"operator-guide/dependency-track/#installation","title":"Installation","text":"

To install DependencyTrack, use the Cluster Add-Ons approach.

                                              "},{"location":"operator-guide/dependency-track/#configuration","title":"Configuration","text":"
                                              1. Open Administration -> Access Management -> Teams. Click Create Team -> Automation and click Create.

                                              2. Click + in Permissions and add:

                                                BOM_UPLOAD\nPROJECT_CREATION_UPLOAD\nVIEW_PORTFOLIO\n
                                              3. Click + in API keys to create token:

                                              DependencyTrack settings

4. Provision secrets using manifest, EDP Portal, or with the externalSecrets operator:
                                              manifestEDP Portal UIExternal Secrets Operator
                                              apiVersion: v1\nkind: Secret\nmetadata:\nname: ci-dependency-track\nnamespace: <edp>\nlabels:\napp.edp.epam.com/secret-type: dependency-track\nstringData:\ntoken: <dependency-track-token>\nurl: <dependency-track-api-url>\ntype: Opaque\n

Go to the EDP Portal UI, open EDP -> Configuration -> DependencyTrack, fill in the Token and URL fields, and click the Save button.

                                              DependencyTrack update manual secret

                                              Store DependencyTrack URL and Token in the AWS Parameter Store with the following format:

                                              \"ci-dependency-track\":\n{\n\"token\": \"XXXXXXXXXXXX\",\n\"url\": \"https://dependency-track.example.com\"\n}\n

Go to the EDP Portal UI, open EDP -> Configuration -> DependencyTrack, and see the Managed by External Secret message.

                                              DependencyTrack managed by external secret operator

More details on External Secrets Operator Integration can be found on the following page.

                                              After following the instructions provided, you should be able to integrate your DependencyTrack with the EPAM Delivery Platform.
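To quickly verify the integration and the token permissions (a sketch; the URL and token are the same placeholders as in the secret above), query the DependencyTrack REST API:

curl -s -H \"X-Api-Key: <dependency-track-token>\" https://<dependency-track-api-url>/api/v1/project   # requires VIEW_PORTFOLIO and returns the list of projects\n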

                                              "},{"location":"operator-guide/dependency-track/#related-articles","title":"Related Articles","text":"
                                              • Install External Secrets Operator
                                              • External Secrets Operator Integration
                                              • Cluster Add-Ons Overview
                                              "},{"location":"operator-guide/deploy-aws-eks/","title":"Deploy AWS EKS Cluster","text":"

                                              This instruction provides detailed information on the Amazon Elastic Kubernetes Service cluster deployment and contains the additional setup necessary for the managed infrastructure.

                                              "},{"location":"operator-guide/deploy-aws-eks/#prerequisites","title":"Prerequisites","text":"

                                              Before the EKS cluster deployment and configuration, make sure to check the prerequisites.

                                              "},{"location":"operator-guide/deploy-aws-eks/#required-tools","title":"Required Tools","text":"

                                              Install the required tools listed below:

                                              • Git
                                              • tfenv
                                              • AWS CLI
                                              • kubectl
                                              • helm
                                              • lens (optional)

                                              To check the correct tools installation, run the following commands:

                                              $ git --version\n$ tfenv --version\n$ aws --version\n$ kubectl version\n$ helm version\n
                                              "},{"location":"operator-guide/deploy-aws-eks/#aws-account-and-iam-roles","title":"AWS Account and IAM Roles","text":"
                                              • Make sure the AWS account is active.
• Create the AWS IAM role EKSDeployerRole to deploy the EKS cluster on the project side. The provided resources allow cross-account deployment by assuming the created EKSDeployerRole from the root AWS account. Take the following steps:

1. Clone the git repo with the edp-terraform-aws-platform.git iam-deployer project and rename it according to the project name.

                                                  clone project

                                                  $ git clone https://github.com/epmd-edp/edp-terraform-aws-platform.git\n$ mv edp-terraform-aws-platform edp-terraform-aws-platform-<PROJECT_NAME>\n$ cd edp-terraform-aws-platform-<PROJECT_NAME>/iam-deployer\n

                                                  where:

                                                  • \u2039PROJECT_NAME\u203a - is a project name or a unique platform identifier, for example, shared or test-eks.
                                                2. Fill in the input variables for Terraform run in the \u2039iam-deployer/terraform.tfvars\u203a file. Use the iam-deployer/template.tfvars as an example. Please find the detailed description of the variables in the iam-deployer/variables.tf file.

                                                  terraform.tfvars file example

                                                  aws_profile = \"aws_user\"\n\nregion = \"eu-central-1\"\n\ntags = {\n\"SysName\"      = \"EKS\"\n\"SysOwner\"     = \"owner@example.com\"\n\"Environment\"  = \"EKS-TEST-CLUSTER\"\n\"CostCenter\"   = \"0000\"\n\"BusinessUnit\" = \"BU\"\n\"Department\"   = \"DEPARTMENT\"\n}\n
3. Initialize the backend and apply the changes by running the terraform init and terraform apply commands.

                                                  apply the changes

                                                  $ terraform init\n$ terraform apply\n...\nDo you want to perform these actions?\nTerraform will perform the actions described above.\nOnly 'yes' will be accepted to approve.\n\nEnter a value: yes\n\naws_iam_role.deployer: Creating...\naws_iam_role.deployer: Creation complete after 4s [id=EKSDeployerRole]\n\nApply complete! Resources: 1 added, 0 changed, 0 destroyed.\n\nOutputs:\n\ndeployer_iam_role_arn = \"arn:aws:iam::012345678910:role/EKSDeployerRole\"\ndeployer_iam_role_id = \"EKSDeployerRole\"\ndeployer_iam_role_name = \"EKSDeployerRole\"\n
  4. Commit the local state. During this run, Terraform uses the local backend to store the state on the local filesystem, locks that state using system APIs, and performs operations locally. It is not mandatory to store the resulting state file in Git, but this option can be used since the file data is not sensitive. Optionally, commit the state of the iam-deployer project.

                                                  $ git add iam-deployer/terraform.tfstate iam-deployer/terraform.tfvars\n$ git commit -m \"Terraform state for IAM deployer role\"\n
                                                  • Create the AWS IAM role: ServiceRoleForEKSWorkerNode to connect to the EKS cluster. Take the following steps:

                                                    1. Use the local state file or the AWS S3 bucket for saving the state file. The AWS S3 bucket creation is described in the Terraform Backend section.

    2. Go to the iam-workernode folder in the edp-terraform-aws-platform.git repository renamed according to the project name.

                                                      go to the iam-workernode folder

                                                      $ cd edp-terraform-aws-platform-<PROJECT_NAME>/iam-workernode\n

                                                      where:

                                                      • \u2039PROJECT_NAME\u203a - is a project name or a unique platform identifier, for example, shared or test-eks.
    3. Fill in the input variables for the Terraform run in the \u2039iam-workernode/terraform.tfvars\u203a file. Use the iam-workernode/template.tfvars file as an example. Please find the detailed description of the variables in the iam-workernode/variables.tf file.

                                                      terraform.tfvars file example

                                                      role_arn = \"arn:aws:iam::012345678910:role/EKSDeployerRole\"\n\nplatform_name = \"<PROJECT_NAME>\"\n\niam_permissions_boundary_policy_arn = \"arn:aws:iam::012345678910:policy/some_role_boundary\"\n\nregion = \"eu-central-1\"\n\ntags = {\n\"SysName\"      = \"EKS\"\n\"SysOwner\"     = \"owner@example.com\"\n\"Environment\"  = \"EKS-TEST-CLUSTER\"\n\"CostCenter\"   = \"0000\"\n\"BusinessUnit\" = \"BU\"\n\"Department\"   = \"DEPARTMENT\"\n}\n
    4. Initialize the backend and apply the changes by running the terraform init and terraform apply commands.

                                                      apply the changes

                                                      $ terraform init\n$ terraform apply\n...\nDo you want to perform these actions?\nTerraform will perform the actions described above.\nOnly 'yes' will be accepted to approve.\n\nEnter a value: yes\n
                                                      • Create the AWS IAM role: ServiceRoleForEKSShared for the EKS cluster. Take the following steps:

                                                        1. Create the AWS IAM role: ServiceRoleForEKSShared

                                                        2. Attach the following policies: \"AmazonEKSClusterPolicy\" and \"AmazonEKSServicePolicy\"

• Configure an AWS profile for deployment from the local node. Please refer to the AWS documentation for a detailed guide on configuring profiles.
• Create an AWS key pair for EKS cluster node access. Please refer to the AWS documentation for a detailed guide on creating a key pair.
• Create a public Hosted Zone to provide for the EKS cluster deployment if there is none yet. Please refer to the AWS documentation for a detailed guide on creating a Hosted Zone (see the CLI sketch after this list).
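The following sketch assumes the example profile name aws_user, key pair name test-kn, and domain example.com used elsewhere in this guide; replace them with your own values:

$ aws configure --profile aws_user   # set the access key, secret key, default region and output format\n$ aws ec2 create-key-pair --key-name test-kn --query 'KeyMaterial' --output text > test-kn.pem   # key pair for EKS node access\n$ aws route53 create-hosted-zone --name example.com --caller-reference $(date +%s)   # public Hosted Zone for the cluster domain\n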
                                                      "},{"location":"operator-guide/deploy-aws-eks/#terraform-backend","title":"Terraform Backend","text":"

                                                      The Terraform configuration for EKS cluster deployment has a backend block, which defines where and how the operations are performed, and where the state snapshots are stored. Currently, the best practice is to store the state as a given key in a given bucket on Amazon S3.

This backend also supports state locking and consistency checking via DynamoDB, which can be enabled by setting the dynamodb_table field to an existing DynamoDB table name.

                                                      In the following configuration a single DynamoDB table can be used to lock multiple remote state files. Terraform generates key names that include the values of the bucket and key variables.

The edp-terraform-aws-platform.git repo provides an optional project that creates the initial resources required to start using Terraform from scratch.

The provided resources enable the following Terraform options:

                                                      • to store Terraform states remotely in the Amazon S3 bucket;
                                                      • to manage remote state access with S3 bucket policy;
                                                      • to support state locking and consistency checking via DynamoDB.

After the Terraform run, the following AWS resources will be created:

                                                      • S3 bucket: terraform-states-\u2039AWS_ACCOUNT_ID\u203a
                                                      • S3 bucket policy: terraform-states-\u2039AWS_ACCOUNT_ID\u203a
                                                      • DynamoDB lock table: terraform_locks

                                                      Please, skip this section if you already have the listed resources for further Terraform remote backend usage.

                                                      To create the required resources, do the following:

1. Clone the git repository with the s3-backend project edp-terraform-aws-platform.git and rename it according to the project name.

                                                        clone project

$ git clone https://github.com/epmd-edp/edp-terraform-aws-platform.git\n\n$ mv edp-terraform-aws-platform edp-terraform-aws-platform-<PROJECT_NAME>\n\n$ cd edp-terraform-aws-platform-<PROJECT_NAME>/s3-backend\n

                                                        where:

                                                        \u2039PROJECT_NAME\u203a - is a project name, a unique platform identifier, e.g. shared, test-eks etc.

2. Fill in the input variables for the Terraform run in the \u2039s3-backend/terraform.tfvars\u203a file. Refer to the s3-backend/template.tfvars file as an example.

                                                        terraform.tfvars file example

                                                          region = \"eu-central-1\"\n\ns3_states_bucket_name = \"terraform-states\"\n\ntable_name = \"terraform_locks\"\n\ntags = {\n\"SysName\"      = \"EKS\"\n\"SysOwner\"     = \"owner@example.com\"\n\"Environment\"  = \"EKS-TEST-CLUSTER\"\n\"CostCenter\"   = \"0000\"\n\"BusinessUnit\" = \"BU\"\n\"Department\"   = \"DEPARTMENT\"\n}\n

                                                        Find the detailed description of the variables in the s3-backend/variables.tf file.

3. Initialize the backend and apply the changes by running terraform init and terraform apply.

                                                        apply the changes

                                                          $ terraform init\n$ terraform apply\n...\n  Do you want to perform these actions?\n  Terraform will perform the actions described above.\n  Only 'yes' will be accepted to approve.\n\n  Enter a value: yes\n\naws_dynamodb_table.terraform_lock_table: Creating...\n  aws_s3_bucket.terraform_states: Creating...\n  aws_dynamodb_table.terraform_lock_table: Creation complete after 27s [id=terraform-locks-test]\n  aws_s3_bucket.terraform_states: Creation complete after 1m10s [id=terraform-states-test-012345678910]\n  aws_s3_bucket_policy.terraform_states: Creating...\n  aws_s3_bucket_policy.terraform_states: Creation complete after 1s [id=terraform-states-test-012345678910]\n\n  Apply complete! Resources: 3 added, 0 changed, 0 destroyed.\n\n  Outputs:\n\n  terraform_lock_table_dynamodb_id = \"terraform_locks\"\nterraform_states_s3_bucket_name = \"terraform-states-012345678910\"\n
4. Commit the local state. During this run, Terraform uses the local backend to store the state on the local filesystem, locks that state using system APIs, and performs operations locally. There is no strong requirement to store the resulting state file in Git, but it is possible since the file contains no sensitive data. Commit the state of the s3-backend project at your discretion.

                                                          $ git add s3-backend/terraform.tfstate\n\n$ git commit -m \"Terraform state for s3-backend\"\n

                                                        As a result, the projects that run Terraform can use the following definition for remote state configuration:

                                                        providers.tf - terraform backend configuration block
                                                        terraform {\n  backend \"s3\" {\n    bucket         = \"terraform-states-<AWS_ACCOUNT_ID>\"\n    key            = \"<PROJECT_NAME>/<REGION>/terraform/terraform.tfstate\"\n    region         = \"<REGION>\"\n    acl            = \"bucket-owner-full-control\"\n    dynamodb_table = \"terraform_locks\"\n    encrypt        = true\n  }\n}\n

                                                        where:

                                                        • AWS_ACCOUNT_ID - is AWS account id, e.g. 012345678910,
                                                        • REGION - is AWS region, e.g. eu-central-1,
                                                        • PROJECT_NAME - is a project name, a unique platform identifier, e.g. shared, test-eks etc.
                                                        View: providers.tf - terraform backend configuration example
                                                        terraform {\n  backend \"s3\" {\n    bucket         = \"terraform-states-012345678910\"\n    key            = \"test-eks/eu-central-1/terraform/terraform.tfstate\"\n    region         = \"eu-central-1\"\n    acl            = \"bucket-owner-full-control\"\n    dynamodb_table = \"terraform_locks\"\n    encrypt        = true\n  }\n}\n
Note

At the moment, it is recommended to use a common S3 bucket and DynamoDB table in the root EDP account for both Shared and Standalone cluster deployments.

                                                        "},{"location":"operator-guide/deploy-aws-eks/#deploy-eks-cluster","title":"Deploy EKS Cluster","text":"

                                                        To deploy the EKS cluster, make sure that all the above-mentioned Prerequisites are ready to be used.

                                                        "},{"location":"operator-guide/deploy-aws-eks/#eks-cluster-deployment-with-terraform","title":"EKS Cluster Deployment with Terraform","text":"
1. Clone the git repository with the Terraform project for the EKS infrastructure, edp-terraform-aws-platform.git, and rename it according to the project name if you have not done so yet.

                                                          clone project

                                                            $ git clone https://github.com/epmd-edp/edp-terraform-aws-platform.git\n  $ mv edp-terraform-aws-platform edp-terraform-aws-platform-<PROJECT_NAME>\n  $ cd edp-terraform-aws-platform-<PROJECT_NAME>\n

                                                          where:

                                                          • \u2039PROJECT_NAME\u203a - is a project name, a unique platform identifier, e.g. shared, test-eks etc.
2. Configure the Terraform backend according to your project needs or use the instructions from the Terraform Backend section.

3. Fill in the input variables for the Terraform run in the \u2039terraform.tfvars\u203a file, referring to the template.tfvars file, and apply the changes. See the details below. Be sure to put the correct values for the variables created in the Prerequisites section. Find the detailed description of the variables in the variables.tf file.

                                                          Warning

                                                          Please, do not use upper case in the input variables. It can lead to unexpected issues.

                                                          template.tfvars file template
# Check out all the inputs based on the comments below and fill the gaps instead <...>\n  # More details on each variable can be found in the variables.tf file\n\n  create_elb = true # set to true if you'd like to create ELB for Gerrit usage\n\n  region   = \"<REGION>\"\n  role_arn = \"<ROLE_ARN>\"\n\n  platform_name        = \"<PLATFORM_NAME>\"        # the name of the cluster and AWS resources\n  platform_domain_name = \"<PLATFORM_DOMAIN_NAME>\" # must be created as a prerequisite\n\n  # The following will be created or used existing depending on the create_vpc value\n  subnet_azs    = [\"<SUBNET_AZS1>\", \"<SUBNET_AZS2>\"]\n  platform_cidr = \"<PLATFORM_CIDR>\"\n  private_cidrs = [\"<PRIVATE_CIDRS1>\", \"<PRIVATE_CIDRS2>\"]\n  public_cidrs  = [\"<PUBLIC_CIDRS1>\", \"<PUBLIC_CIDRS2>\"]\n\n  infrastructure_public_security_group_ids = [\n    \"<INFRASTRUCTURE_PUBLIC_SECURITY_GROUP_IDS1>\",\n    \"<INFRASTRUCTURE_PUBLIC_SECURITY_GROUP_IDS2>\",\n  ]\n\n  ssl_policy = \"<SSL_POLICY>\"\n\n  # EKS cluster configuration\n  cluster_version = \"1.22\"\n  key_name        = \"<AWS_KEY_PAIR_NAME>\" # must be created as a prerequisite\n  enable_irsa     = true\n\n  cluster_iam_role_name            = \"<SERVICE_ROLE_FOR_EKS>\"\n  worker_iam_instance_profile_name = \"<SERVICE_ROLE_FOR_EKS_WORKER_NODE>\"\n\n  add_userdata = <<EOF\n  export TOKEN=$(aws ssm get-parameter --name <PARAMETER_NAME> --query 'Parameter.Value' --region <REGION> --output text)\n  cat <<DATA > /var/lib/kubelet/config.json\n  {\n    \"auths\":{\n      \"https://index.docker.io/v1/\":{\n        \"auth\":\"$TOKEN\"\n      }\n    }\n  }\n  DATA\n  EOF\n\n  map_users = [\n    {\n      \"userarn\" : \"<IAM_USER_ARN1>\",\n      \"username\" : \"<IAM_USER_NAME1>\",\n      \"groups\" : [\"system:masters\"]\n    },\n    {\n      \"userarn\" : \"<IAM_USER_ARN2>\",\n      \"username\" : \"<IAM_USER_NAME2>\",\n      \"groups\" : [\"system:masters\"]\n    }\n  ]\n\n  map_roles = [\n    {\n      \"rolearn\" : \"<IAM_ROLE_ARN1>\",\n      \"username\" : \"<IAM_ROLE_NAME1>\",\n      \"groups\" : [\"system:masters\"]\n    },\n  ]\n\n  tags = {\n    \"SysName\"      = \"<SYS_NAME>\"\n    \"SysOwner\"     = \"<SYSTEM_OWNER>\"\n    \"Environment\"  = \"<ENVIRONMENT>\"\n    \"CostCenter\"   = \"<COST_CENTER>\"\n    \"BusinessUnit\" = \"<BUSINESS_UNIT>\"\n    \"Department\"   = \"<DEPARTMENT>\"\n    \"user:tag\"     = \"<PLATFORM_NAME>\"\n  }\n\n  # Variables for demand pool\n  demand_instance_types      = [\"r5.large\"]\n  demand_max_nodes_count     = 0\n  demand_min_nodes_count     = 0\n  demand_desired_nodes_count = 0\n\n  // Variables for spot pool\n  spot_instance_types      = [\"r5.xlarge\", \"r5.large\", \"r4.large\"] # need to ensure we use nodes with more memory\n  spot_max_nodes_count     = 2\n  spot_desired_nodes_count = 2\n  spot_min_nodes_count     = 2\n

                                                          Note

                                                          The file above is an example. Please find the latest version in the project repo in the terraform.tfvars file.

                                                          There are the following possible scenarios to deploy the EKS cluster:

                                                          Case 1: Create new VPC and deploy the EKS cluster, terraform.tfvars file example
                                                          create_elb     = true # set to true if you'd like to create ELB for Gerrit usage\n\nregion   = \"eu-central-1\"\nrole_arn = \"arn:aws:iam::012345678910:role/EKSDeployerRole\"\n\nplatform_name        = \"test-eks\"\nplatform_domain_name = \"example.com\" # must be created as a prerequisite\n\n# The following will be created or used existing depending on the create_vpc value\nsubnet_azs    = [\"eu-central-1a\", \"eu-central-1b\"]\nplatform_cidr = \"172.31.0.0/16\"\nprivate_cidrs = [\"172.31.0.0/20\", \"172.31.16.0/20\"]\npublic_cidrs  = [\"172.31.32.0/20\", \"172.31.48.0/20\"]\n\n# Use this parameter the second time you apply the code to specify new AWS Security Groups\ninfrastructure_public_security_group_ids = [\n  #  \"sg-00000000000000000\",\n  #  \"sg-00000000000000000\",\n]\n\n# EKS cluster configuration\ncluster_version = \"1.22\"\nkey_name        = \"test-kn\" # must be created as a prerequisite\nenable_irsa     = true\n\n# Define if IAM roles should be created during the deployment or used existing ones\ncluster_iam_role_name            = \"ServiceRoleForEKSShared\"\nworker_iam_instance_profile_name = \"ServiceRoleForEksSharedWorkerNode0000000000000000000000\"\n\nadd_userdata = <<EOF\nexport TOKEN=$(aws ssm get-parameter --name edprobot --query 'Parameter.Value' --region eu-central-1 --output text)\ncat <<DATA > /var/lib/kubelet/config.json\n{\n  \"auths\":{\n    \"https://index.docker.io/v1/\":{\n      \"auth\":\"$TOKEN\"\n    }\n  }\n}\nDATA\nEOF\n\nmap_users = [\n  {\n    \"userarn\" : \"arn:aws:iam::012345678910:user/user_name1@example.com\",\n    \"username\" : \"user_name1@example.com\",\n    \"groups\" : [\"system:masters\"]\n  },\n  {\n    \"userarn\" : \"arn:aws:iam::012345678910:user/user_name2@example.com\",\n    \"username\" : \"user_name2@example.com\",\n    \"groups\" : [\"system:masters\"]\n  }\n]\n\nmap_roles = [\n  {\n    \"rolearn\" : \"arn:aws:iam::012345678910:role/EKSClusterAdminRole\",\n    \"username\" : \"eksadminrole\",\n    \"groups\" : [\"system:masters\"]\n  },\n]\n\ntags = {\n  \"SysName\"      = \"EKS\"\n  \"SysOwner\"     = \"owner@example.com\"\n  \"Environment\"  = \"EKS-TEST-CLUSTER\"\n  \"CostCenter\"   = \"2020\"\n  \"BusinessUnit\" = \"BU\"\n  \"Department\"   = \"DEPARTMENT\"\n  \"user:tag\"     = \"test-eks\"\n}\n\n# Variables for spot pool\nspot_instance_types      = [\"r5.large\", \"r4.large\"] # need to ensure we use nodes with more memory\nspot_max_nodes_count     = 1\nspot_desired_nodes_count = 1\nspot_min_nodes_count     = 1\n
4. Initialize the backend and apply the changes by running terraform init and terraform apply.

                                                          apply the changes
                                                             $ terraform init\n   $ terraform apply\n   ...\n\n   Do you want to perform these actions?\n   Terraform will perform the actions described above.\n   Only 'yes' will be accepted to approve.\n   Enter a value: yes\n   ...\n
                                                        5. "},{"location":"operator-guide/deploy-aws-eks/#check-eks-cluster-deployment","title":"Check EKS cluster deployment","text":"

                                                          As a result, the \u2039PLATFORM_NAME\u203a EKS cluster is deployed to the specified AWS account.

Make sure you have all the required tools listed in the Required Tools section.

To connect to the cluster, find the kubeconfig_<PLATFORM_NAME> file in the project folder, which is an output of the last Terraform apply run, and move it to the ~/.kube/ folder.

                                                              $ mv kubeconfig_<PLATFORM_NAME> ~/.kube/\n

Run the following commands to ensure that the EKS cluster is up and has the required node count:

                                                              $ kubectl config get-contexts\n    $ kubectl get nodes\n

                                                          Note

If there are any authorization issues, make sure the users section in the kubeconfig_<PLATFORM_NAME> file has all the required parameters based on your AWS CLI version. Find more details in the create kubeconfig AWS user guide, and pay attention to the kubeconfig_aws_authenticator Terraform input variables.
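As a fallback sketch, assuming your AWS CLI profile can reach the cluster, the kubeconfig can also be regenerated with the AWS CLI instead of editing the users section by hand:

$ aws eks update-kubeconfig --name <PLATFORM_NAME> --region <REGION> --profile aws_user   # writes an exec-based authenticator entry into ~/.kube/config\n$ kubectl get nodes   # verify that authentication works\n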

Optionally, the Lens tool can be installed and used for further work with the Kubernetes cluster. Refer to the original documentation to add and manage the cluster.

                                                          "},{"location":"operator-guide/deploy-okd-4.10/","title":"Deploy OKD 4.10 Cluster","text":"

                                                          This instruction provides detailed information on the OKD 4.10 cluster deployment in the AWS Cloud and contains the additional setup necessary for the managed infrastructure.

                                                          A full description of the cluster deployment can be found in the official documentation.

                                                          "},{"location":"operator-guide/deploy-okd-4.10/#prerequisites","title":"Prerequisites","text":"

                                                          Before the OKD cluster deployment and configuration, make sure to check the prerequisites.

                                                          "},{"location":"operator-guide/deploy-okd-4.10/#required-tools","title":"Required Tools","text":"
1. Install the tools listed below:

                                                            • AWS CLI
                                                            • OpenShift CLI
                                                            • Lens (optional)
                                                          2. Create the AWS IAM user with the required permissions. Make sure the AWS account is active, and the user doesn't have a permission boundary. Remove any Service Control Policy (SCP) restrictions from the AWS account.

                                                          3. Generate a key pair for cluster node SSH access. Please perform the steps below:

                                                            • Generate the SSH key. Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If there is an existing key pair, ensure that the public key is in the ~/.ssh directory.
                                                              ssh-keygen -t ed25519 -N '' -f <path>/<file_name>\n
                                                            • Add the SSH private key identity to the SSH agent for a local user if it has not already been added.
                                                              eval \"$(ssh-agent -s)\"\n
                                                            • Add the SSH private key to the ssh-agent:
                                                              ssh-add <path>/<file_name>\n
                                                          4. Build the ccoctl tool:

                                                            • Clone the cloud-credential-operator repository.
                                                              git clone https://github.com/openshift/cloud-credential-operator.git\n
                                                            • Move to the cloud-credential-operator folder and build the ccoctl tool.
                                                              cd cloud-credential-operator && git checkout release-4.10\nGO_PACKAGE='github.com/openshift/cloud-credential-operator'\ngo build -ldflags \"-X $GO_PACKAGE/pkg/version.versionFromGit=$(git describe --long --tags --abbrev=7 --match 'v[0-9]*')\" ./cmd/ccoctl\n
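To quickly check that the binary was built, you can print its help, for example:

./ccoctl aws --help   # should list the aws subcommands used below, such as create-key-pair and create-identity-provider\n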
                                                          "},{"location":"operator-guide/deploy-okd-4.10/#prepare-for-the-deployment-process","title":"Prepare for the Deployment Process","text":"

                                                          Before deploying the OKD cluster, please perform the steps below:

                                                          "},{"location":"operator-guide/deploy-okd-4.10/#create-aws-resources","title":"Create AWS Resources","text":"

                                                          Create the AWS resources with the Cloud Credential Operator utility (the ccoctl tool):

                                                          1. Generate the public and private RSA key files that are used to set up the OpenID Connect identity provider for the cluster:

                                                            ./ccoctl aws create-key-pair\n
                                                          2. Create an OpenID Connect identity provider and an S3 bucket on AWS:

                                                            ./ccoctl aws create-identity-provider \\\n--name=<NAME> \\\n--region=<AWS_REGION> \\\n--public-key-file=./serviceaccount-signer.public\n

                                                            where:

                                                            • NAME - is the name used to tag any cloud resources created for tracking,
                                                            • AWS_REGION - is the AWS region in which cloud resources will be created.
                                                          3. Create the IAM roles for each component in the cluster:

                                                            • Extract the list of the CredentialsRequest objects from the OpenShift Container Platform release image:

oc adm release extract \\\n--credentials-requests \\\n--cloud=aws \\\n--to=./credrequests \\\nquay.io/openshift-release-dev/ocp-release:4.10.25-x86_64\n

                                                              Note

                                                              A version of the openshift-release-dev docker image can be found in the Quay registry.

                                                            • Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory:
ccoctl aws create-iam-roles \\\n--name=<NAME> \\\n--region=<AWS_REGION> \\\n--credentials-requests-dir=./credrequests \\\n--identity-provider-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<NAME>-oidc.s3.<AWS_REGION>.amazonaws.com\n
                                                          "},{"location":"operator-guide/deploy-okd-4.10/#create-okd-manifests","title":"Create OKD Manifests","text":"

                                                          Before deploying the OKD cluster, please perform the steps below:

                                                          1. Download the OKD installer.

                                                          2. Extract the installation program:

                                                            tar -xvf openshift-install-linux.tar.gz\n
3. Download the installation pull secret for any private registry. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OKD components. For example, here is a pull secret for Docker Hub:

                                                            The pull secret for the private registry
                                                            {\n\"auths\":{\n\"https://index.docker.io/v1/\":{\n\"auth\":\"$TOKEN\"\n}\n}\n}\n
                                                          4. Create a deployment directory and the install-config.yaml file:

                                                            mkdir okd-deployment\ntouch okd-deployment/install-config.yaml\n

To specify more details about the OKD cluster platform or to modify the values of the required parameters, customize the install-config.yaml file for AWS. Please see below an example of the customized file:

                                                            install-config.yaml - OKD cluster\u2019s platform installation configuration file
apiVersion: v1\nbaseDomain: <YOUR_DOMAIN>\ncredentialsMode: Manual\ncompute:\n- architecture: amd64\n  hyperthreading: Enabled\n  name: worker\n  platform:\n    aws:\n      rootVolume:\n        size: 30\n      zones:\n        - eu-central-1a\n      type: r5.large\n  replicas: 3\ncontrolPlane:\n  architecture: amd64\n  hyperthreading: Enabled\n  name: master\n  platform:\n    aws:\n      rootVolume:\n        size: 50\n      zones:\n        - eu-central-1a\n      type: m5.xlarge\n  replicas: 3\nmetadata:\n  creationTimestamp: null\n  name: 4-10-okd-sandbox\nnetworking:\n  clusterNetwork:\n  - cidr: 10.128.0.0/14\n    hostPrefix: 23\n  machineNetwork:\n  - cidr: 10.0.0.0/16\n  networkType: OVNKubernetes\n  serviceNetwork:\n  - 172.30.0.0/16\nplatform:\n  aws:\n    region: eu-central-1\n    userTags:\n      user:tag: 4-10-okd-sandbox\npublish: External\npullSecret: <PULL_SECRET>\nsshKey: |\n  <SSH_KEY>\n

                                                            where:

                                                            • YOUR_DOMAIN - is a base domain,
                                                            • PULL_SECRET - is a created pull secret for a private registry,
                                                            • SSH_KEY - is a created SSH key.
                                                          5. Create the required OpenShift Container Platform installation manifests:

                                                            ./openshift-install create manifests --dir okd-deployment\n
                                                          6. Copy the manifests generated by the ccoctl tool to the manifests directory created by the installation program:

                                                            cp ./manifests/* ./okd-deployment/manifests/\n
                                                          7. Copy the private key generated in the tls directory by the ccoctl tool to the installation directory:

                                                            cp -a ./tls ./okd-deployment\n
                                                          "},{"location":"operator-guide/deploy-okd-4.10/#deploy-the-cluster","title":"Deploy the Cluster","text":"

                                                          To initialize the cluster deployment, run the following command:

                                                          ./openshift-install create cluster --dir okd-deployment --log-level=info\n

                                                          Note

                                                          If the cloud provider account configured on the host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

                                                          When the cluster deployment is completed, directions for accessing the cluster are displayed in the terminal, including a link to the web console and credentials for the kubeadmin user. The kubeconfig for the cluster will be located in okd-deployment/auth/kubeconfig.

                                                          Example output
                                                          ...\nINFO Install complete!\nINFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\nINFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\nINFO Login to the console with the user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\"\nINFO Time elapsed: 36m22s:\n

                                                          Warning

The Ignition config files contain certificates that expire after 24 hours and are then renewed. Do not turn off the cluster during this time, or you will have to update the certificates manually. See the OpenShift Container Platform documentation for more information.

                                                          "},{"location":"operator-guide/deploy-okd-4.10/#log-into-the-cluster","title":"Log Into the Cluster","text":"

                                                          To log into the cluster, export the kubeconfig:

                                                            export KUBECONFIG=<installation_directory>/auth/kubeconfig\n
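To verify that the exported kubeconfig works, a couple of basic OpenShift CLI checks can be run (a sketch; the output depends on your cluster):

oc whoami      # should return system:admin for the installer-generated kubeconfig\noc get nodes   # lists the master and worker nodes\n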

                                                          Optionally, use the Lens tool for further work with the Kubernetes cluster.

                                                          Note

                                                          To install and manage the cluster, refer to Lens documentation.

                                                          "},{"location":"operator-guide/deploy-okd-4.10/#manage-okd-cluster-without-the-inbound-rules","title":"Manage OKD Cluster Without the Inbound Rules","text":"

                                                          In order to manage the OKD cluster without the 0.0.0.0/0 inbound rules, please perform the steps below:

                                                          1. Create a Security Group with a list of your external IPs:

                                                            aws ec2 create-security-group --group-name <SECURITY_GROUP_NAME> --description \"<DESCRIPTION_OF_SECURITY_GROUP>\" --vpc-id <VPC_ID>\naws ec2 authorize-security-group-ingress \\\n--group-id '<SECURITY_GROUP_ID>' \\\n--ip-permissions 'IpProtocol=all,PrefixListIds=[{PrefixListId=<PREFIX_LIST_ID>}]'\n
                                                          2. Manually attach this new Security Group to all master nodes of the cluster.

                                                          3. Create another Security Group with an Elastic IP of the Cluster VPC:

aws ec2 create-security-group --group-name custom-okd-4-10 --description \"Cluster Ip to 80, 443\" --vpc-id <VPC_ID>\naws ec2 authorize-security-group-ingress \\\n--group-id '<SECURITY_GROUP_ID>' \\\n--protocol tcp \\\n--port 80 \\\n--cidr <ELASTIC_IP_OF_CLUSTER_VPC>/32\naws ec2 authorize-security-group-ingress \\\n--group-id '<SECURITY_GROUP_ID>' \\\n--protocol tcp \\\n--port 443 \\\n--cidr <ELASTIC_IP_OF_CLUSTER_VPC>/32\n
4. Modify the cluster load balancer via the router-default svc in the openshift-ingress namespace by pasting the two Security Groups created in the previous steps (an oc-based sketch follows the manifest below):

The router-default service with the Security Groups annotation
                                                            apiVersion: v1\nkind: Service\nmetadata:\n  name: router-default\n  namespace: openshift-ingress\n  annotations:\n    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: \"tag_name=some_value\"\n    service.beta.kubernetes.io/aws-load-balancer-security-groups: \"<SECURITY_GROUP_IDs>\"\n    ...\n
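Assuming the Security Group IDs created in the previous steps, the same change can be applied without editing the manifest by hand:

oc -n openshift-ingress annotate service router-default \\\n  service.beta.kubernetes.io/aws-load-balancer-security-groups=\"<SECURITY_GROUP_ID_1>,<SECURITY_GROUP_ID_2>\" \\\n  --overwrite   # the AWS cloud provider reconciles the load balancer with the listed Security Groups\n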
                                                          "},{"location":"operator-guide/deploy-okd-4.10/#optimize-spot-instances-usage","title":"Optimize Spot Instances Usage","text":"

In order to optimize the usage of Spot Instances on AWS, add the following line under the providerSpec field in the MachineSet of the worker nodes:

providerSpec:\n  value:\n    spotMarketOptions: {}\n
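For example, assuming the worker MachineSets live in the default openshift-machine-api namespace, the field can be added by editing the corresponding MachineSet (a sketch, not tied to a specific MachineSet name):

oc -n openshift-machine-api get machinesets                       # find the worker MachineSet name\noc -n openshift-machine-api edit machineset <WORKER_MACHINESET>   # add spotMarketOptions: {} under providerSpec.value\n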
                                                          "},{"location":"operator-guide/deploy-okd-4.10/#related-articles","title":"Related Articles","text":"
                                                          • Deploy AWS EKS Cluster
                                                          • Manage Jenkins Agent
                                                          • Associate IAM Roles With Service Accounts
                                                          • Deploy OKD 4.9 Cluster
                                                          "},{"location":"operator-guide/deploy-okd/","title":"Deploy OKD 4.9 Cluster","text":"

                                                          This instruction provides detailed information on the OKD 4.9 cluster deployment in the AWS Cloud and contains the additional setup necessary for the managed infrastructure.

                                                          A full description of the cluster deployment can be found in the official documentation.

                                                          "},{"location":"operator-guide/deploy-okd/#prerequisites","title":"Prerequisites","text":"

                                                          Before the OKD cluster deployment and configuration, make sure to check the prerequisites.

                                                          "},{"location":"operator-guide/deploy-okd/#required-tools","title":"Required Tools","text":"
1. Install the tools listed below:

                                                            • AWS CLI
                                                            • OpenShift CLI
                                                            • Lens (optional)
                                                          2. Create the AWS IAM user with the required permissions. Make sure the AWS account is active, and the user doesn't have a permission boundary. Remove any Service Control Policy (SCP) restrictions from the AWS account.

                                                          3. Generate a key pair for cluster node SSH access. Please perform the steps below:

                                                            • Generate the SSH key. Specify the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If there is an existing key pair, ensure that the public key is in the ~/.ssh directory.
                                                               ssh-keygen -t ed25519 -N '' -f <path>/<file_name>\n
                                                            • Add the SSH private key identity to the SSH agent for a local user if it has not already been added.
                                                               eval \"$(ssh-agent -s)\"\n
                                                            • Add the SSH private key to the ssh-agent:
                                                               ssh-add <path>/<file_name>\n
                                                          "},{"location":"operator-guide/deploy-okd/#prepare-for-the-deployment-process","title":"Prepare for the Deployment Process","text":"

                                                          Before deploying the OKD cluster, please perform the steps below:

                                                          1. Download the OKD installer.

                                                          2. Extract the installation program:

                                                            tar -xvf openshift-install-linux.tar.gz\n
                                                          3. Download the installation pull secret for any private registry.

This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves container images for OKD components. For example, here is a pull secret for Docker Hub:

                                                            The pull secret for the private registry
                                                            {\n  \"auths\":{\n    \"https://index.docker.io/v1/\":{\n      \"auth\":\"$TOKEN\"\n    }\n  }\n}\n
                                                          4. Create the deployment directory and the install-config.yaml file:

                                                            mkdir okd-deployment\ntouch okd-deployment/install-config.yaml\n

                                                            To specify more details about the OKD cluster platform or to modify the values of the required parameters, customize the install-config.yaml file for AWS. Please see an example of the customized file below:

                                                            install-config.yaml - OKD cluster\u2019s platform installation configuration file
                                                            apiVersion: v1\nbaseDomain: <YOUR_DOMAIN>\ncompute:\n- architecture: amd64\n  hyperthreading: Enabled\n  name: worker\n  platform:\n    aws:\n      zones:\n        - eu-central-1a\n      rootVolume:\n        size: 50\n      type: r5.large\n  replicas: 3\ncontrolPlane:\n  architecture: amd64\n  hyperthreading: Enabled\n  name: master\n  platform:\n    aws:\n      rootVolume:\n        size: 50\n      zones:\n        - eu-central-1a\n      type: m5.xlarge\n  replicas: 3\nmetadata:\n  creationTimestamp: null\n  name: 4-9-okd-sandbox\nplatform:\n  aws:\n    region: eu-central-1\n    userTags:\n      user:tag: 4-9-okd-sandbox\npublish: External\npullSecret: <PULL_SECRET>\nsshKey: |\n  <SSH_KEY>\n

                                                            where:

                                                            • YOUR_DOMAIN - is a base domain,
                                                            • PULL_SECRET - is a created pull secret for a private registry,
                                                            • SSH_KEY - is a created SSH key.
                                                          "},{"location":"operator-guide/deploy-okd/#deploy-the-cluster","title":"Deploy the Cluster","text":"

                                                          To initialize the cluster deployment, run the following command:

                                                          ./openshift-install create cluster --dir <installation_directory> --log-level=info\n

                                                          Note

                                                          If the cloud provider account configured on the host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed.

                                                          When the cluster deployment is completed, directions for accessing the cluster are displayed in the terminal, including a link to the web console and credentials for the kubeadmin user. The kubeconfig for the cluster will be located in okd-deployment/auth/kubeconfig.

                                                          Example output
                                                          ...\nINFO Install complete!\nINFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\nINFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\nINFO Login to the console with the user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\"\nINFO Time elapsed: 36m22s:\n

                                                          Warning

The Ignition config files contain certificates that expire after 24 hours and are then renewed. Do not turn off the cluster during this time, or you will have to update the certificates manually. See the OpenShift Container Platform documentation for more information.

                                                          "},{"location":"operator-guide/deploy-okd/#log-into-the-cluster","title":"Log Into the Cluster","text":"

                                                          To log into the cluster, export the kubeconfig:

                                                            export KUBECONFIG=<installation_directory>/auth/kubeconfig\n

                                                          Optionally, use the Lens tool for further work with the Kubernetes cluster.

                                                          Note

                                                          To install and manage the cluster, refer to Lens documentation.

                                                          "},{"location":"operator-guide/deploy-okd/#related-articles","title":"Related Articles","text":"
                                                          • Deploy AWS EKS Cluster
                                                          • Manage Jenkins Agent
                                                          • Deploy OKD 4.10 Cluster
                                                          "},{"location":"operator-guide/ebs-csi-driver/","title":"Install Amazon EBS CSI Driver","text":"

                                                          The Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver allows Amazon Elastic Kubernetes Service (Amazon EKS) clusters to manage the lifecycle of Amazon EBS volumes for Kubernetes Persistent Volumes.

                                                          "},{"location":"operator-guide/ebs-csi-driver/#prerequisites","title":"Prerequisites","text":"

                                                          An existing AWS Identity and Access Management (IAM) OpenID Connect (OIDC) provider for your cluster. To determine whether you already have an OIDC provider or to create a new one, see Creating an IAM OIDC provider for your cluster.
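As a quick check, assuming the cluster is named my-cluster, you can compare the cluster's OIDC issuer with the providers already registered in IAM:

aws eks describe-cluster --name my-cluster --query \"cluster.identity.oidc.issuer\" --output text   # OIDC issuer URL of the cluster\naws iam list-open-id-connect-providers                                                            # OIDC providers that already exist in the account\n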

                                                          To add an Amazon EBS CSI add-on, please follow the steps below:

                                                          1. Check your cluster details (the random value in the cluster name will be required in the next step):

                                                            kubectl cluster-info\n
2. Create the Kubernetes IAM trust policy for the Amazon EBS CSI driver. Replace AWS_ACCOUNT_ID with your account ID, AWS_REGION with your AWS Region, and EXAMPLED539D4633E53DE1B71EXAMPLE with the value that was returned in the previous step. Save this trust policy into the aws-ebs-csi-driver-trust-policy.json file.

                                                            aws-ebs-csi-driver-trust-policy.json
                                                              {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Effect\": \"Allow\",\n\"Principal\": {\n\"Federated\": \"arn:aws:iam::AWS_ACCOUNT_ID:oidc-provider/oidc.eks.AWS_REGION.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE\"\n},\n\"Action\": \"sts:AssumeRoleWithWebIdentity\",\n\"Condition\": {\n\"StringEquals\": {\n\"oidc.eks.AWS_REGION.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud\": \"sts.amazonaws.com\",\n\"oidc.eks.AWS_REGION.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub\": \"system:serviceaccount:kube-system:ebs-csi-controller-sa\"\n}\n}\n}\n]\n}\n

For more details on the IAM role creation, please refer to the official documentation.

                                                          3. Create the IAM role, for example:

                                                            aws iam create-role \\\n--role-name AmazonEKS_EBS_CSI_DriverRole \\\n--assume-role-policy-document file://\"aws-ebs-csi-driver-trust-policy.json\"\n
                                                          4. Attach the required AWS Managed Policy AmazonEBSCSIDriverPolicy to the role with the following command:

                                                            aws iam attach-role-policy \\\n--policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \\\n--role-name AmazonEKS_EBS_CSI_DriverRole\n
                                                          5. Add the Amazon EBS CSI add-on using the AWS CLI. Replace my-cluster with the name of your cluster, AWS_ACCOUNT_ID with your account ID, and AmazonEKS_EBS_CSI_DriverRole with the name of the role that was created earlier:

                                                            aws eks create-addon --cluster-name my-cluster --addon-name aws-ebs-csi-driver \\\n--service-account-role-arn arn:aws:iam::AWS_ACCOUNT_ID:role/AmazonEKS_EBS_CSI_DriverRole\n

                                                            Note

                                                            When the plugin is deployed, it creates the ebs-csi-controller-sa service account. The service account is bound to a Kubernetes ClusterRole with the required Kubernetes permissions. The ebs-csi-controller-sa service account should already be annotated with arn:aws:iam::AWS_ACCOUNT_ID:role/AmazonEKS_EBS_CSI_DriverRole. To check the annotation, please run:

                                                            kubectl get sa ebs-csi-controller-sa -n kube-system -o=jsonpath='{.metadata.annotations}'\n

                                                            In case pods have errors, restart the ebs-csi-controller deployment:

                                                            kubectl rollout restart deployment ebs-csi-controller -n kube-system\n
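To verify that the add-on is installed successfully, you can check its status and the controller pods (a sketch, assuming the my-cluster name and the default controller labels):

aws eks describe-addon --cluster-name my-cluster --addon-name aws-ebs-csi-driver --query \"addon.status\"   # expected: ACTIVE\nkubectl get pods -n kube-system -l app=ebs-csi-controller                                                 # controller pods should be Running\n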
                                                          "},{"location":"operator-guide/ebs-csi-driver/#related-articles","title":"Related Articles","text":"
                                                          • Creating an IAM OIDC provider for your cluster
                                                          • Creating the Amazon EBS CSI driver IAM role for service accounts
                                                          • Managing the Amazon EBS CSI driver as an Amazon EKS add-on
                                                          "},{"location":"operator-guide/edp-access-model/","title":"EDP Access Model","text":"

                                                          EDP uses two different methods to regulate access to resources, each tailored to specific scenarios:

                                                          • The initial method involves roles and groups in Keycloak and is used for SonarQube, Jenkins and partly for Nexus.
                                                          • The second method of resource access control in EDP involves EDP custom resources. This approach requires modifying custom resources that outline the required access privileges for every user or group and is used to govern access to Gerrit, Nexus, EDP Portal, EKS Cluster and Argo CD.

                                                          Info

                                                          These two approaches are not interchangeable, as each has its unique capabilities.

                                                          "},{"location":"operator-guide/edp-access-model/#keycloak","title":"Keycloak","text":"

                                                          This section explains what realm roles and realm groups are and how they function within Keycloak.

                                                          "},{"location":"operator-guide/edp-access-model/#realm-roles","title":"Realm Roles","text":"

The edp Keycloak realm has two realm roles of a composite type, named administrator and developer:

                                                          • The administrator realm role is designed for users who need administrative access to the tools used in the project. This realm role contains two roles: jenkins-administrators and sonar-administrators. Users who are assigned the administrator realm role will be granted these two roles automatically.
                                                          • The developer realm role, on the other hand, is designed for users who need access to the development tools used in the project. This realm role also contains two roles: jenkins-users and sonar-developers. Users who are assigned the developer realm role will be granted these two roles automatically.

                                                          These realm roles have been defined to make it easier to assign groups of rights to users.

                                                          The table below shows the realm roles and the composite types they relate to.

Realm Role Name | Regular Role | Composite role
administrator |  | ✓
developer |  | ✓
jenkins-administrators | ✓ |
jenkins-users | ✓ |
sonar-administrators | ✓ |
sonar-developers | ✓ |
"},{"location":"operator-guide/edp-access-model/#realm-groups","title":"Realm Groups","text":"

                                                          EDP uses two different realms for group management, edp and openshift:

                                                          • The edp realm contains two groups that are specifically used for controlling access to Argo CD. These groups are named ArgoCDAdmins and ArgoCD-edp-users.
• The openshift realm contains five groups that are used for access control in both the EDP Portal and EKS cluster. These groups are named edp-oidc-admins, edp-oidc-builders, edp-oidc-deployers, edp-oidc-developers and edp-oidc-viewers.
Realm Group Name | Realm Name
ArgoCDAdmins | edp
ArgoCD-edp-users | edp
edp-oidc-admins | openshift
edp-oidc-builders | openshift
edp-oidc-deployers | openshift
edp-oidc-developers | openshift
edp-oidc-viewers | openshift
"},{"location":"operator-guide/edp-access-model/#sonarqube","title":"SonarQube","text":"

In the case of SonarQube, there are two ways to manage access: via Keycloak and via the EDP approach. This section describes both approaches.

                                                          "},{"location":"operator-guide/edp-access-model/#manage-access-via-keycloak","title":"Manage Access via Keycloak","text":"

                                                          SonarQube access is managed using Keycloak roles in the edp realm. The sonar-developers and sonar-administrators realm roles are the two available roles that determine user access levels. To grant access, the corresponding role must be added to the user in Keycloak.

                                                          For example, a user who needs developer access to SonarQube should be assigned either the sonar-developers role or the developer composite role in Keycloak.
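                                                          The snippet below is a rough sketch of how such an assignment could be scripted with the Keycloak admin CLI (kcadm.sh); the server URL and the username are placeholders, and the same result can be achieved through the Keycloak admin console:

                                                          kcadm.sh config credentials --server https://keycloak.example.com/auth --realm master --user admin\n# Grant developer access to SonarQube in the edp realm\nkcadm.sh add-roles -r edp --uusername developer@example.com --rolename sonar-developers\n# Or assign the composite role instead\nkcadm.sh add-roles -r edp --uusername developer@example.com --rolename developer\n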

                                                          "},{"location":"operator-guide/edp-access-model/#edp-approach-for-managing-access","title":"EDP Approach for Managing Access","text":"

                                                          EDP provides its own SonarQube Permission Template, which is used to manage user access and permissions for SonarQube projects.

                                                          The template is stored in the custom SonarQube resource of the operator; an example of such a custom resource can be found below.

                                                          SonarPermissionTemplate

                                                          apiVersion: v2.edp.epam.com/v1\nkind: SonarPermissionTemplate\nmetadata:\nname: edp-default\nspec:\ndescription: EDP permission templates (DO NOT REMOVE)\ngroupPermissions:\n- groupName: non-interactive-users\npermissions:\n- user\n- groupName: sonar-administrators\npermissions:\n- admin\n- user\n- groupName: sonar-developers\npermissions:\n- codeviewer\n- issueadmin\n- securityhotspotadmin\n- user\nname: edp-default\nprojectKeyPattern: .+\nsonarOwner: sonar\n

                                                          The SonarQube Permission Template contains three groups: non-interactive-users, sonar-administrators and sonar-developers:

                                                          • non-interactive-users are users who do not require direct access to the SonarQube project but need to be informed about the project's status and progress. This group has read-only access to the project, which means that they can view the project's data and metrics but cannot modify or interact with it in any way.
                                                          • sonar-administrators are users who have full control over the SonarQube project. They have the ability to create, modify, and delete projects, as well as manage user access and permissions. This group also has the ability to configure SonarQube settings and perform other administrative tasks.
                                                          • sonar-developers are users who are actively working on the SonarQube project. They have read and write access to the project, which means that they can modify the project's data and metrics. This group also has the ability to configure project-specific settings and perform other development tasks.

                                                          These groups are designed to provide different levels of access to the SonarQube project, depending on the user's role and responsibilities.

                                                          Info

                                                          If a user is not assigned to any group, they fall into the sonar-users group by default. This group does not have any permissions in the edp-default Permission Template.

                                                          The permissions attached to each of the groups are described in the table below:

                                                          Group Name Permissions non-interactive-users user sonar-administrators admin, user sonar-developers codeviewer, issueadmin, securityhotspotadmin, user sonar-users -"},{"location":"operator-guide/edp-access-model/#nexus","title":"Nexus","text":"

                                                          Users authenticate to Nexus using their Keycloak credentials.

                                                          During the authentication process, the OAuth2-Proxy receives the user's role from Keycloak.

                                                          Info

                                                          Only users with either the administrator or developer role in Keycloak can access Nexus.

                                                          Nexus has four distinct roles available: edp-admin, edp-viewer, nx-admin, and nx-anonymous. To grant the user access to one or more of these roles, an entry must be added to the custom Nexus resource.

                                                          For instance, in the custom Nexus resource below, the user \"user_1@example.com\" is assigned the \"nx-admin\" role:

                                                          Nexus

                                                          apiVersion: v2.edp.epam.com/v1\nkind: Nexus\nmetadata:\nname: nexus\nspec:\nbasePath: /\nedpSpec:\ndnsWildcard: example.com\nkeycloakSpec:\nenabled: false\nroles:\n- developer\n- administrator\nusers:\n- roles:\n- nx-admin\nusername: user_1@example.com\n
                                                          "},{"location":"operator-guide/edp-access-model/#gerrit","title":"Gerrit","text":"

                                                          The user should use their credentials from Keycloak when authenticating to Gerrit.

                                                          After logging into Gerrit, the user is not automatically attached to any groups. To add a user to a group, the GerritGroupMember custom resource must be created. This custom resource specifies the user's email address and the name of the group to which they should be added.

                                                          The manifest below is an example of the GerritGroupMember custom resource:

                                                          GerritGroupMember

                                                          apiVersion: v2.edp.epam.com/v1\nkind: GerritGroupMember\nmetadata:\nname: user-admins\nspec:\naccountId: user@user.com\ngroupId: Administrators\n

                                                          After the GerritGroupMember resource is created, the user will have the permissions and access levels associated with that group.

                                                          "},{"location":"operator-guide/edp-access-model/#edp-portal-and-eks-cluster","title":"EDP Portal and EKS Cluster","text":"

                                                          Both Portal and EKS Cluster use Keycloak groups for controlling access. Users need to be added to the required group in Keycloak to get access. The groups that are used for access control are in the openshift realm.

                                                          Note

                                                          The openshift realm is used because a Keycloak client for OIDC is in this realm.

                                                          "},{"location":"operator-guide/edp-access-model/#keycloak-groups","title":"Keycloak Groups","text":"

                                                          There are two types of groups provided for users:

                                                          • Independent group: provides the minimum required permission set.
                                                          • Extension group: extends the rights of an independent group.

                                                          For example, the edp-oidc-viewers group can be extended with rights from the edp-oidc-builders group.

                                                          Group Name Independent Group Extension Group edp-oidc-admins edp-oidc-developers edp-oidc-viewers edp-oidc-builders edp-oidc-deployers Name Action List View Getting of all namespaced resources Build Starting a PipelineRun from EDP Portal UI Deploy Deploying a new version of application via Argo CD Application Group Name View Build Deploy Full Namespace Access edp-oidc-admins edp-oidc-developers edp-oidc-viewers edp-oidc-builders edp-oidc-deployers"},{"location":"operator-guide/edp-access-model/#cluster-rbac-resources","title":"Cluster RBAC Resources","text":"

                                                          The edp namespace has five role bindings that provide the necessary permissions for the Keycloak groups described above.

                                                          Role Binding Name Role Name Groups tenant-admin cluster-admin edp-oidc-admins tenant-builder tenant-builder edp-oidc-builders tenant-deployer tenant-deployer edp-oidc-deployers tenant-developer tenant-developer edp-oidc-developers tenant-viewer view edp-oidc-viewers, edp-oidc-developers
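                                                          For reference, a minimal sketch of what the tenant-viewer role binding from the table may look like is shown below; the group names are taken from the OIDC groups claim, and the actual manifest shipped with EDP may differ:

                                                          apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: tenant-viewer\n  namespace: edp\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: view\nsubjects:\n  - apiGroup: rbac.authorization.k8s.io\n    kind: Group\n    name: edp-oidc-viewers\n  - apiGroup: rbac.authorization.k8s.io\n    kind: Group\n    name: edp-oidc-developers\n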

                                                          Note

                                                          EDP provides an aggregated ClusterRole with permissions to view custom EDP resources. The ClusterRole is named edp-aggregate-view-edp.
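                                                          A simplified sketch of such an aggregated ClusterRole is shown below; the aggregation label and the wildcard resource list are assumptions for illustration, and the actual role may enumerate specific EDP resources:

                                                          apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRole\nmetadata:\n  name: edp-aggregate-view-edp\n  labels:\n    # assumption: rules are aggregated into the built-in \"view\" ClusterRole\n    rbac.authorization.k8s.io/aggregate-to-view: \"true\"\nrules:\n  - apiGroups:\n      - v2.edp.epam.com\n    resources:\n      - \"*\"  # illustrative; a real role would list specific EDP resources\n    verbs:\n      - get\n      - list\n      - watch\n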

                                                          Info

                                                          The tenant-admin RoleBinding is created by the cd-pipeline-operator in each namespace it creates. This RoleBinding assigns the admin role to the edp-oidc-admins and edp-oidc-developers groups.

                                                          "},{"location":"operator-guide/edp-access-model/#grant-user-access-to-the-created-namespaces","title":"Grant User Access to the Created Namespaces","text":"

                                                          To provide users with admin or developer privileges for project namespaces, they need to be added to the edp-oidc-admins and edp-oidc-developers groups in Keycloak.
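                                                          One possible declarative way to do this is sketched below; it assumes that a KeycloakRealm custom resource for the openshift realm is managed by the edp-keycloak-operator, and the user name is a placeholder. Users can equally be added to groups through the Keycloak admin console:

                                                          apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealmUser\nmetadata:\n  name: project-developer\n  namespace: security\nspec:\n  realm: openshift        # name of the KeycloakRealm custom resource (assumption)\n  username: \"developer@example.com\"\n  enabled: true\n  groups:\n    - edp-oidc-developers\n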

                                                          "},{"location":"operator-guide/edp-access-model/#argo-cd","title":"Argo CD","text":"

                                                          In Argo CD, groups are specified when creating an AppProject to restrict access to deployed applications. To gain access to deployed applications within a project, the user must be added to their corresponding Argo CD group in Keycloak. This ensures that only authorized users can access and modify applications within the project.

                                                          Info

                                                          By default, only the ArgoCDAdmins group is automatically created in Keycloak.
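                                                          For illustration, a hedged AppProject sketch with a project role mapped to the ArgoCD-edp-users Keycloak group is shown below; the project name, destinations, and policy line are placeholders rather than the exact resources created by EDP:

                                                          apiVersion: argoproj.io/v1alpha1\nkind: AppProject\nmetadata:\n  name: edp\n  namespace: argocd\nspec:\n  description: EDP applications\n  sourceRepos:\n    - \"*\"\n  destinations:\n    - namespace: edp\n      server: https://kubernetes.default.svc\n  roles:\n    - name: developers\n      description: Access for members of the ArgoCD-edp-users Keycloak group\n      policies:\n        - p, proj:edp:developers, applications, *, edp/*, allow\n      groups:\n        - ArgoCD-edp-users\n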

                                                          "},{"location":"operator-guide/edp-access-model/#related-articles","title":"Related Articles","text":"
                                                          • EDP Portal Overview
                                                          • EKS OIDC With Keycloak
                                                          • Argo CD Integration
                                                          "},{"location":"operator-guide/edp-kiosk-usage/","title":"EDP Kiosk Usage","text":"

                                                          Explore the way Kiosk, a multi-tenancy extension for Kubernetes, is used in EDP.

                                                          "},{"location":"operator-guide/edp-kiosk-usage/#prerequisites","title":"Prerequisites","text":"
                                                          • Kiosk 0.2.11 is installed.
                                                          "},{"location":"operator-guide/edp-kiosk-usage/#diagram-of-using-kiosk-by-edp","title":"Diagram of using Kiosk by EDP","text":"

                                                          Kiosk usage

                                                          Legend

                                                          • blue - created by Helm chart;
                                                          • grey - created manually
                                                          "},{"location":"operator-guide/edp-kiosk-usage/#usage","title":"Usage","text":"
                                                          • The EDP installation area on the diagram is described by the following link;
                                                          • Once the above step is executed, the edp-cd-pipeline-operator service account is linked to the kiosk-edit ClusterRole, which allows it to manage Kiosk-specific resources (e.g. Space);
                                                          • A newly created stage in the edp installation of EDP generates a new Kiosk Space resource that is linked to the edp Kiosk Account;
                                                          • According to the Kiosk documentation, the Space resource creates a namespace with a RoleBinding that links the service account associated with the Kiosk Account to the kiosk-space-admin ClusterRole. As the cd-pipeline-operator ServiceAccount is linked to the Account, it has admin permissions in all namespaces generated by it; a minimal Space sketch is shown below.
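                                                          The manifest below is a minimal Space sketch, assuming the Kiosk 0.2.x API group and a Kiosk Account named edp; the namespace name is a placeholder:

                                                          apiVersion: tenancy.kiosk.sh/v1alpha1\nkind: Space\nmetadata:\n  name: edp-dev   # namespace to be created for the stage (placeholder)\nspec:\n  account: edp    # Kiosk Account linked to the cd-pipeline-operator ServiceAccount\n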
                                                          "},{"location":"operator-guide/edp-kiosk-usage/#related-articles","title":"Related Articles","text":"
                                                          • Install EDP
                                                          • Set Up Kiosk
                                                          "},{"location":"operator-guide/eks-oidc-integration/","title":"EKS OIDC Integration","text":"

                                                          This page is a detailed guide on integrating Keycloak with the edp-keycloak-operator to serve as an identity provider for AWS Elastic Kubernetes Service (EKS). It provides step-by-step instructions for creating necessary realms, users, roles, and client configurations for a seamless Keycloak-EKS collaboration. Additionally, it includes guidelines on installing the edp-keycloak-operator using Helm charts.

                                                          "},{"location":"operator-guide/eks-oidc-integration/#prerequisites","title":"Prerequisites","text":"
                                                          • EKS Configuration is performed;
                                                          • Helm v3.10.0 is installed;
                                                          • Keycloak is installed.
                                                          "},{"location":"operator-guide/eks-oidc-integration/#configure-keycloak","title":"Configure Keycloak","text":"

                                                          To prepare Keycloak for integration with the edp-keycloak-operator, follow the steps below:

                                                          1. Ensure that the openshift realm is created.

                                                          2. Create the orchestrator user and set the password in the Master realm.

                                                          3. In the Role Mapping tab, assign the proper roles to the user:

                                                            • Realm Roles:

                                                              • create-realm;
                                                              • offline_access;
                                                              • uma_authorization.
                                                            • Client Roles openshift-realm:

                                                              • impersonation;
                                                              • manage-authorization;
                                                              • manage-clients;
                                                              • manage-users.

                                                          Role mappings
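                                                          As a rough alternative to the UI steps above, the same user and role assignments could be scripted with the Keycloak admin CLI; the commands below are a sketch, and the kcadm.sh location and the /auth prefix depend on the Keycloak version:

                                                          kcadm.sh config credentials --server https://keycloak.example.com/auth --realm master --user admin\n# Create the orchestrator user and set its password\nkcadm.sh create users -r master -s username=orchestrator -s enabled=true\nkcadm.sh set-password -r master --username orchestrator --new-password <password>\n# Assign the realm roles\nkcadm.sh add-roles -r master --uusername orchestrator --rolename create-realm --rolename offline_access --rolename uma_authorization\n# Assign the client roles of the openshift-realm client\nkcadm.sh add-roles -r master --uusername orchestrator --cclientid openshift-realm --rolename impersonation --rolename manage-authorization --rolename manage-clients --rolename manage-users\n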

                                                          "},{"location":"operator-guide/eks-oidc-integration/#install-keycloak-operator","title":"Install Keycloak Operator","text":"

                                                          To install the Keycloak operator, follow the steps below:

                                                          1. Add the epamedp Helm chart to a local client:

                                                            helm repo add epamedp https://epam.github.io/edp-helm-charts/stable\nhelm repo update\n
                                                          2. Install the Keycloak operator:

                                                            helm install keycloak-operator epamedp/keycloak-operator --namespace security --set name=keycloak-operator\n
                                                          "},{"location":"operator-guide/eks-oidc-integration/#connect-keycloak-operator-to-keycloak","title":"Connect Keycloak Operator to Keycloak","text":"

                                                          The next stage after installing Keycloak is to integrate it with the Keycloak operator. It can be implemented with the following steps:

                                                          1. Create the keycloak secret that will contain username and password to perform the integration. Set your own password. The username must be orchestrator:

                                                            kubectl -n security create secret generic keycloak \\\n--from-literal=username=orchestrator \\\n--from-literal=password=<password>\n
                                                          2. Create the Keycloak Custom Resource with the Keycloak instance URL and the secret created in the previous step:

                                                            apiVersion: v1.edp.epam.com/v1\nkind: Keycloak\nmetadata:\nname: main\nnamespace: security\nspec:\nsecret: keycloak                   # Secret name\nurl: https://keycloak.example.com  # Keycloak URL\n
                                                          3. Create the KeycloakRealm Custom Resource:

                                                            apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealm\nmetadata:\nname: control-plane\nnamespace: security\nspec:\nrealmName: control-plane\nkeycloakOwner: main\n
                                                          4. Create the KeycloakRealmGroup Custom Resource for both administrators and developers:

                                                            administratorsdevelopers
                                                            apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealmGroup\nmetadata:\nname: administrators\nnamespace: security\nspec:\nrealm: control-plane\nname: eks-oidc-administrator\n
                                                            apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealmGroup\nmetadata:\nname: developers\nnamespace: security\nspec:\nrealm: control-plane\nname: eks-oidc-developers\n
                                                          5. Create the KeycloakClientScope Custom Resource:

                                                            apiVersion: v1.edp.epam.com/v1\nkind: KeycloakClientScope\nmetadata:\nname: groups-keycloak-eks\nnamespace: security\nspec:\nname: groups\nrealm: control-plane\ndescription: \"Group Membership\"\nprotocol: openid-connect\nprotocolMappers:\n- name: groups\nprotocol: openid-connect\nprotocolMapper: \"oidc-group-membership-mapper\"\nconfig:\n\"access.token.claim\": \"true\"\n\"claim.name\": \"groups\"\n\"full.path\": \"false\"\n\"id.token.claim\": \"true\"\n\"userinfo.token.claim\": \"true\"\n
                                                          6. Create the KeycloakClient Custom Resource:

                                                            apiVersion: v1.edp.epam.com/v1\nkind: KeycloakClient\nmetadata:\nname: eks\nnamespace: security\nspec:\nadvancedProtocolMappers: true\nclientId: eks\ndirectAccess: true\npublic: false\ndefaultClientScopes:\n- groups\ntargetRealm: control-plane\nwebUrl: \"http://localhost:8000\"\n
                                                          7. Create the KeycloakRealmUser Custom Resource for both administrator and developer roles:

                                                            administrator roledeveloper role
                                                            apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealmUser\nmetadata:\nname: keycloakrealmuser-sample\nnamespace: security\nspec:\nrealm: control-plane\nusername: \"administrator\"\nfirstName: \"John\"\nlastName: \"Snow\"\nemail: \"administrator@example.com\"\nenabled: true\nemailVerified: true\npassword: \"12345678\"\nkeepResource: true\nrequiredUserActions:\n- UPDATE_PASSWORD\ngroups:\n- eks-oidc-administrator\n
                                                            apiVersion: v1.edp.epam.com/v1\nkind: KeycloakRealmUser\nmetadata:\nname: keycloakrealmuser-sample\nnamespace: security\nspec:\nrealm: control-plane\nusername: \"developers\"\nfirstName: \"John\"\nlastName: \"Snow\"\nemail: \"developers@example.com\"\nenabled: true\nemailVerified: true\npassword: \"12345678\"\nkeepResource: true\nrequiredUserActions:\n- UPDATE_PASSWORD\ngroups:\n- eks-oidc-developers\n
                                                          8. As a result, Keycloak is integrated with the AWS Elastic Kubernetes Service. This integration enables users to log in to the EKS cluster effortlessly using their kubeconfig files while managing permissions through Keycloak.
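                                                          For reference, a kubeconfig user entry for such OIDC login might look like the sketch below; it assumes the kubelogin (kubectl oidc-login) plugin is installed, the client ID and realm come from the resources above, and the issuer path depends on the Keycloak version:

                                                          users:\n  - name: keycloak-oidc\n    user:\n      exec:\n        apiVersion: client.authentication.k8s.io/v1beta1\n        command: kubectl\n        args:\n          - oidc-login\n          - get-token\n          - --oidc-issuer-url=https://keycloak.example.com/auth/realms/control-plane\n          - --oidc-client-id=eks\n          - --oidc-client-secret=<client-secret>\n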

                                                          "},{"location":"operator-guide/eks-oidc-integration/#related-articles","title":"Related Articles","text":"
                                                          • Keycloak Installation
                                                          • EKS OIDC With Keycloak
                                                          "},{"location":"operator-guide/enable-irsa/","title":"Associate IAM Roles With Service Accounts","text":"

                                                          This page contains accurate information on how to associate an IAM role with the service account (IRSA) in EPAM Delivery Platform.

                                                          Get acquainted with the AWS Official Documentation on the subject before proceeding.

                                                          "},{"location":"operator-guide/enable-irsa/#common-configuration-of-iam-roles-with-service-accounts","title":"Common Configuration of IAM Roles With Service Accounts","text":"

                                                          To successfully associate the IAM role with the service account, follow the steps below:

                                                          1. Create an IAM role that will further be associated with the service account. This role must have the following trust policy:

                                                            IAM Role

                                                            {\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Principal\": {\n        \"Federated\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>\"\n      },\n      \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n      \"Condition\": {\n        \"StringEquals\": {\n          \"<OIDC_PROVIDER>:sub\": \"system:serviceaccount:<SERVICE_ACCOUNT_NAMESPACE>:<SERVICE_ACCOUNT_NAME>\"\n        }\n      }\n    }\n  ]\n}\n

                                                            View cluster's \u2039OIDC_PROVIDER\u203a URL.

                                                              aws eks describe-cluster --name <CLUSTER_NAME> --query \"cluster.identity.oidc.issuer\" --output text\n

                                                            Example output:

                                                              https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E\n

                                                            \u2039OIDC_PROVIDER\u203a in this example will be:

                                                              oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E\n
                                                          2. Deploy the amazon-eks-pod-identity-webhook v0.2.0.

                                                            Note

                                                            The amazon-eks-pod-identity-webhook functionality is provided out of the box in EKS v1.21 and higher. This does not apply if the cluster has been upgraded from older versions. Therefore, skip step 2 and continue from step 3 in this documentation.

                                                            2.1. Provide the stable (ed8c41f) version of the Docker image in the deploy/deployment-base.yaml file.

                                                            2.2. Provide ${CA_BUNDLE} in the deploy/mutatingwebhook.yaml file:

                                                              secret_name=$(kubectl -n default get sa default -o jsonpath='{.secrets[0].name}') \\\n  CA_BUNDLE=$(kubectl -n default get secret/$secret_name -o jsonpath='{.data.ca\\.crt}' | tr -d '\\n')\n

                                                            2.3. Deploy the Webhook:

                                                              kubectl apply -f deploy/\n

                                                            2.4. Approve the csr:

                                                              csr_name=$(kubectl get csr -o jsonpath='{.items[?(@.spec.username==\"system:serviceaccount:default:pod-identity-webhook\")].metadata.name}')\n  kubectl certificate approve $csr_name\n
                                                          3. Annotate the created service account with the IAM role:

                                                            Service Account

                                                              apiVersion: v1\n  kind: ServiceAccount\n  metadata:\n    name: <SERVICE_ACCOUNT_NAME>\n    namespace: <NAMESPACE>\n    annotations:\n      eks.amazonaws.com/role-arn: \"arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>\"\n
                                                          4. All newly launched pods with this service account will be modified and then use the associated IAM role. Find below the pod specification template:

                                                            Pod Template

                                                              apiVersion: v1\n  kind: Pod\n  metadata:\n    name: irsa-test\n    namespace: <POD_NAMESPACE>\n  spec:\n    serviceAccountName: <SERVICE_ACCOUNT_NAME>\n    securityContext:\n      fsGroup: 65534\n    containers:\n    - name: terraform\n      image: epamedp/edp-jenkins-terraform-agent:3.0.9\n      command: ['sh', '-c', 'aws sts \"get-caller-identity\" && sleep 3600']\n
                                                          5. Check the logs of the created pod from the template above.

                                                            Example output:

                                                              {\n  \"UserId\": \"XXXXXXXXXXXXXXXXXXXXX:botocore-session-XXXXXXXXXX\",\n  \"Account\": \"XXXXXXXXXXXX\",\n  \"Arn\": \"arn:aws:sts::XXXXXXXXXXXX:assumed-role/AWSIRSATestRole/botocore-session-XXXXXXXXXX\"\n  }\n

                                                            As a result, it is possible to perform actions in AWS under the AWSIRSATestRole role.

                                                          "},{"location":"operator-guide/enable-irsa/#related-articles","title":"Related Articles","text":"
                                                          • Use Terraform Library in EDP
                                                          "},{"location":"operator-guide/external-secrets-operator-integration/","title":"External Secrets Operator Integration","text":"

                                                          External Secrets Operator (ESO) can be integrated with EDP.

                                                          There are multiple Secrets Providers that can be used within ESO. EDP is integrated with two major providers:

                                                          • Kubernetes Secrets
                                                          • AWS Systems Manager Parameter Store

                                                          EDP uses a number of secrets to integrate with various applications. Below is a list of secrets that are used in the EDP platform and their description.

                                                          Secret Name Field Description keycloak username Admin username for keycloak, used by keycloak operator keycloak password Admin password for keycloak, used by keycloak operator defectdojo-ciuser-token token Defectdojo token with admin permissions defectdojo-ciuser-token url Defectdojo url kaniko-docker-config registry.com Change to registry url kaniko-docker-config username Registry username kaniko-docker-config password Registry password kaniko-docker-config auth Base64 encoded 'user:secret' string regcred registry.com Change to registry url regcred username Registry username regcred password Registry password regcred auth Base64 encoded 'user:secret' string github-config id_rsa Private key from github repo in base64 github-config token Api token github-config secretString Random string gitlab-config id_rsa Private key from gitlab repo in base64 gitlab-config token Api token gitlab-config secretString Random string jira-user username Jira username in base64 jira-user password Jira password in base64 sonar-ciuser-token username Sonar service account username sonar-ciuser-token secret Sonar service account secret nexus-ci-user username Nexus service account username nexus-ci-user password Nexus service account password oauth2-proxy-cookie-secret cookie-secret Secret key for keycloak client in base64 nexus-proxy-cookie-secret cookie-secret Secret key for keycloak client in base64 keycloak-client-headlamp-secret Secret key for keycloak client in base64 keycloak-client-argo-secret Secret key for keycloak client in base64"},{"location":"operator-guide/external-secrets-operator-integration/#kubernetes-provider","title":"Kubernetes Provider","text":"

                                                          All secrets are stored in Kubernetes in pre-defined namespaces. EDP suggests using the following approach for secrets management:

                                                          • EDP_NAMESPACE-vault, where EDP_NAMESPACE is the name of the namespace where EDP is deployed, for example, edp-vault. This namespace is used by the EDP platform. Access to secrets in edp-vault is permitted only to EDP Administrators.
                                                          • EDP_NAMESPACE-cicd-vault, where EDP_NAMESPACE is the name of the namespace where EDP is deployed, for example, edp-cicd-vault. The development team uses the secrets in edp-cicd-vault for microservices development.

                                                          See a diagram below for more details:

                                                          In order to install EDP, a list of passwords must be created. Secrets are provided automatically when using ESO.

                                                          1. Create a common namespace for secrets and EDP:

                                                            kubectl create namespace edp-vault\nkubectl create namespace edp\n
                                                          2. Create secrets in the edp-vault namespace:

                                                            apiVersion: v1\nkind: Secret\nmetadata:\nname: keycloak\nnamespace: edp-vault\ndata:\npassword: cGFzcw==  # pass in base64\nusername: dXNlcg==  # user in base64\ntype: Opaque\n
                                                          3. In the edp-vault namespace, create a Role with a permission to read secrets:

                                                            apiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\nnamespace: edp-vault\nname: external-secret-store\nrules:\n- apiGroups: [\"\"]\nresources:\n- secrets\nverbs:\n- get\n- list\n- watch\n- apiGroups:\n- authorization.k8s.io\nresources:\n- selfsubjectrulesreviews\nverbs:\n- create\n
                                                          4. In the edp namespace, create a ServiceAccount to be used by the SecretStore:

                                                            apiVersion: v1\nkind: ServiceAccount\nmetadata:\nname: secret-manager\nnamespace: edp\n
                                                          5. Connect the Role from the edp-vault namespace with the ServiceAccount in the edp namespace:

                                                            apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\nname: eso-from-edp\nnamespace: edp-vault\nsubjects:\n- kind: ServiceAccount\nname: secret-manager\nnamespace: edp\nroleRef:\napiGroup: rbac.authorization.k8s.io\nkind: Role\nname: external-secret-store\n
                                                          6. Create a SecretStore in the edp namespace, and use ServiceAccount for authentication:

                                                            apiVersion: external-secrets.io/v1beta1\nkind: SecretStore\nmetadata:\nname: edp-vault\nnamespace: edp\nspec:\nprovider:\nkubernetes:\nremoteNamespace: edp-vault  # namespace with secrets\nauth:\nserviceAccount:\nname: secret-manager\nserver:\ncaProvider:\ntype: ConfigMap\nname: kube-root-ca.crt\nkey: ca.crt\n
                                                          7. Each secret must be defined by the ExternalSecret object. A code example below creates the keycloak secret in the edp namespace based on a secret with the same name in the edp-vault namespace:

                                                            apiVersion: external-secrets.io/v1beta1\nkind: ExternalSecret\nmetadata:\nname: keycloak\nnamespace: edp\nspec:\nrefreshInterval: 1h\nsecretStoreRef:\nkind: SecretStore\nname: edp-vault\n# target:\n#   name: secret-to-be-created  # name of the k8s Secret to be created. metadata.name used if not defined\ndata:\n- secretKey: username       # key to be created\nremoteRef:\nkey: keycloak           # remote secret name\nproperty: username      # value will be fetched from this field\n- secretKey: password       # key to be created\nremoteRef:\nkey: keycloak           # remote secret name\nproperty: password      # value will be fetched from this field\n

                                                          Apply the same approach for enabling secrets management in the namespaces used for microservices development, such as sit and qa on the diagram above.
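                                                          For example, a SecretStore for a sit namespace reading from edp-cicd-vault could look like the sketch below; the accompanying ServiceAccount, Role, and RoleBinding must be created in the same way as in the steps above, and the namespace names are placeholders:

                                                          apiVersion: external-secrets.io/v1beta1\nkind: SecretStore\nmetadata:\n  name: edp-cicd-vault\n  namespace: sit\nspec:\n  provider:\n    kubernetes:\n      remoteNamespace: edp-cicd-vault  # namespace with development secrets\n      auth:\n        serviceAccount:\n          name: secret-manager\n      server:\n        caProvider:\n          type: ConfigMap\n          name: kube-root-ca.crt\n          key: ca.crt\n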

                                                          "},{"location":"operator-guide/external-secrets-operator-integration/#aws-systems-manager-parameter-store","title":"AWS Systems Manager Parameter Store","text":"

                                                          AWS SSM Parameter Store can be used as a Secret Provider for ESO. For EDP, it is recommended to use the IAM Roles For Service Accounts approach (see a diagram below).

                                                          "},{"location":"operator-guide/external-secrets-operator-integration/#aws-parameter-store-in-edp-scenario","title":"AWS Parameter Store in EDP Scenario","text":"

                                                          In order to install EDP, a list of passwords must be created. Follow the steps below to get secrets from the SSM:

                                                          1. In AWS, create an IAM policy and an IAM role to be used by the ServiceAccount in the SecretStore. The IAM role must have permissions to get values from the SSM Parameter Store.

                                                            a. Create an IAM policy that allows getting values from the Parameter Store under the edp/ path. Use your AWS Region and AWS Account ID:

                                                            {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Sid\": \"VisualEditor0\",\n\"Effect\": \"Allow\",\n\"Action\": \"ssm:GetParameter*\",\n\"Resource\": \"arn:aws:ssm:eu-central-1:012345678910:parameter/edp/*\"\n}\n]\n}\n

                                                            b. Create an AWS IAM role with trust relationships (defined below) and attach the IAM policy. Put your string for Federated value (see more on IRSA enablement for EKS Cluster) and AWS region.

                                                            {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Effect\": \"Allow\",\n\"Principal\": {\n\"Federated\": \"arn:aws:iam::012345678910:oidc-provider/oidc.eks.eu-central-1.amazonaws.com/id/XXXXXXXXXXXXXXXXXX\"\n},\n\"Action\": \"sts:AssumeRoleWithWebIdentity\",\n\"Condition\": {\n\"StringLike\": {\n\"oidc.eks.eu-central-1.amazonaws.com/id/XXXXXXXXXXXXXXXXXX:sub\": \"system:serviceaccount:edp:*\"\n}\n}\n}\n]\n}\n
                                                          2. Create a secret in the AWS Parameter Store with the name /edp/my-json-secret. This secret is represented as a parameter of type string within the AWS Parameter Store:

                                                            View: Parameter Store JSON
                                                            {\n\"keycloak\":\n{\n\"username\": \"keycloak-username\",\n\"password\": \"keycloak-password\"\n},\n\"defectdojo-ciuser-token\":\n{\n\"token\": \"XXXXXXXXXXXX\",\n\"url\": \"https://defectdojo.example.com\"\n},\n\"kaniko-docker-config\":\n{\n\"auths\" :\n{\n\"registry.com\":\n{\n\"username\":\"registry-username\",\n\"password\":\"registry-password\",\n\"auth\": \"<base64 encoded 'user:secret' string>\"\n}\n}},\n\"regcred\":\n{\n\"auths\":\n{\n\"registry.com\":\n{\n\"username\":\"registry-username\",\n\"password\":\"registry-password\",\n\"auth\":\"<base64 encoded 'user:secret' string>\"\n}\n}},\n\"github-config\":\n{\n\"id_rsa\": \"id-rsa-key\",\n\"token\": \"github-token\",\n\"secretString\": \"XXXXXXXXXXXX\"\n},\n\"gitlab-config\":\n{\n\"id_rsa\": \"id-rsa-key\",\n\"token\": \"gitlab-token\",\n\"secretString\": \"XXXXXXXXXXXX\"\n},\n\"jira-user\":\n{\n\"username\": \"jira-username\",\n\"password\": \"jira-password\"\n},\n\"sonar-ciuser-token\": { \"username\": \"<ci-user>\",  \"secret\": \"<secret>\" },\n\"nexus-ci-user\": { \"username\": \"<ci.user>\",  \"password\": \"<secret>\" },\n\"oauth2-proxy-cookie-secret\": { \"cookie-secret\": \"XXXXXXXXXXXX\" },\n\"nexus-proxy-cookie-secret\": { \"cookie-secret\": \"XXXXXXXXXXXX\" },\n\"keycloak-client-headlamp-secret\":  \"XXXXXXXXXXXX\",\n\"keycloak-client-argo-secret\":  \"XXXXXXXXXXXX\"\n}\n
                                                          3. Enable the External Secrets operator by updating the values.yaml file (a reference SecretStore sketch is provided after this list):

                                                            EDP install values.yaml
                                                            externalSecrets:\nenabled: true\n
                                                          4. Install/upgrade edp-install:

                                                            helm upgrade --install edp epamedp/edp-install --wait --timeout=900s \\\n--version <edp_version> \\\n--values values.yaml \\\n--namespace edp \\\n--atomic\n
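                                                          For reference, the SecretStore that ESO uses with the AWS Parameter Store and IRSA typically resembles the sketch below. With externalSecrets enabled, the EDP chart is expected to create an equivalent resource, so this is shown for illustration only; the region and ServiceAccount name are placeholders:

                                                          apiVersion: external-secrets.io/v1beta1\nkind: SecretStore\nmetadata:\n  name: aws-parameter-store\n  namespace: edp\nspec:\n  provider:\n    aws:\n      service: ParameterStore\n      region: eu-central-1\n      auth:\n        jwt:\n          serviceAccountRef:\n            name: secret-manager  # ServiceAccount annotated with the IAM role ARN\n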
                                                          "},{"location":"operator-guide/external-secrets-operator-integration/#related-articles","title":"Related Articles","text":"
                                                          • Install External Secrets Operator
                                                          "},{"location":"operator-guide/github-debug-webhooks/","title":"Debug GitHub Webhooks in Jenkins","text":"

                                                          A webhook enables third-party services like GitHub to send real-time updates to an application. Updates are triggered by an event or an action on the webhook provider's side (for example, a push to a repository or a Pull Request creation) and are pushed to the application, in this case Jenkins, via HTTP requests. The GitHub Jenkins job provisioner creates a webhook in the GitHub repository during the Create release pipeline once the Integrate GitHub/GitLab in Jenkins is enabled and the GitHub Webhook Configuration is completed.

                                                          The Jenkins setup in EDP uses the following plugins responsible for listening on GitHub webhooks:

                                                          • GitHub plugin is configured to listen on Push events.
                                                          • GitHub Pull Request Builder is configured to listen on Pull Request events.

                                                          In case of any issues with webhooks, try the following solutions:

                                                          1. Check that the firewalls are configured to accept the incoming traffic from the IP address range that is described in the GitHub documentation.

                                                          2. Check that GitHub Personal Access Token is correct and has sufficient scope permissions.

                                                          3. Check that the job has run at least once before using the hook (once an application is created in EDP, the build job should be run automatically in Jenkins).

                                                          4. Check that both the Push and issue comment webhook and the Pull Request webhook are created on the GitHub side (unlike GitLab, GitHub does not need separate webhooks for each branch):

                                                            • Go to the GitHub repository -> Settings -> Webhooks.

                                                            Webhooks settings

                                                          5. Click each webhook and check if the event delivery is successful:

                                                            • The URL payload must be https://jenkins-the-host.com/github-webhook/ for the GitHub plugin and https://jenkins-the-host.com/ghprbhook/ for the GitHub Pull Request Builder.
                                                            • The content type must be application/json for Push events and application/x-www-form-urlencoded for Pull Request events.
                                                            • The html_url in the Payload request must match the repository URL and be without .git at the end of the URL.
                                                          6. Check that the X-Hub-Signature secret is verified. It is provided by the Jenkins GitHub plugin for Push events and by the GitHub Pull Request Builder plugin for Pull Request events. The Secret field is optional. Nevertheless, if incorrect, it can prevent webhook events.

                                                            For the GitHub plugin (Push events):

                                                            • Go to Jenkins -> Manage Jenkins -> Configure System, and find the GitHub plugin section.
                                                            • Select Advanced -> Shared secrets to add the secret via the Jenkins Credentials Provider.

                                                            For the GitHub Pull Request Builder (Pull Request events):

                                                            • Go to Jenkins -> Manage Jenkins -> Configure System, and find the GitHub Pull Request Builder plugin section.
                                                            • Check Shared secret that can be added manually.
                                                          7. Redeliver events by clicking the Redeliver button and check the Response body.

                                                            Manage webhook

                                                            Note

                                                            Use Postman to debug webhooks. Add all headers to Postman from the webhook Request -> Headers field and send the payload (Request body) using the appropriate content type.

                                                            Examples for Push and Pull Request events:

                                                            Postman push event payload headers GitHub plugin push events

                                                            The response in the Jenkins log:

                                                            Jan 17, 2022 8:51:14 AM INFO org.jenkinsci.plugins.github.webhook.subscriber.PingGHEventSubscriber onEvent\nPING webhook received from repo <https://github.com/user-profile/user-repo>!\n

                                                            Postman pull request event payload headers GitHub pull request builder

                                                            The response in the Jenkins log:

                                                            Jan 17, 2022 8:17:53 AM FINE org.jenkinsci.plugins.ghprb.GhprbRootAction\nGot payload event: ping\n
                                                          8. Check that the repository pushing to Jenkins, the GitHub project URL in the project configuration, and the repositories in the pipeline Job are aligned.

                                                          9. Enable the GitHub hook trigger for GITScm polling for the Build job.

                                                            GitHub hook trigger

                                                          10. Enable the GitHub Pull Request Builder for the Code Review job.

                                                            GitHub pull request builder

                                                          11. Filter through Jenkins log by using Jenkins custom log recorder:

                                                            • Go to Manage Jenkins -> System log -> Add new log recorder.
                                                            • The Push events for the GitHub:

                                                              Logger Log Level org.jenkinsci.plugins.github.webhook.subscriber.DefaultPushGHEventSubscriber ALL com.cloudbees.jenkins.GitHubPushTrigger ALL com.cloudbees.jenkins.GitHubWebHook ALL org.jenkinsci.plugins.github.webhook.WebhookManager ALL org.jenkinsci.plugins.github.webhook.subscriber.PingGHEventSubscriber ALL
                                                            • The Pull Request events for the GitHub Pull Request Builder:

                                                              Logger Log Level org.jenkinsci.plugins.ghprb.GhprbRootAction ALL org.jenkinsci.plugins.ghprb.GhprbTrigger ALL org.jenkinsci.plugins.ghprb.GhprbPullRequest ALL org.jenkinsci.plugins.ghprb.GhprbRepository ALL

                                                            Note

                                                            Below is an example of using the Pipeline script with webhooks for the GitHub plugin implemented in the EDP pipelines:

                                                            properties([pipelineTriggers([githubPush()])])\n\nnode {\n    git credentialsId: 'github-sshkey', url: 'https://github.com/someone/something.git', branch: 'master'\n}\n

                                                            Push events may not work correctly with the Job Pipeline script from SCM option in the current version of the GitHub plugin 1.34.1.

                                                          "},{"location":"operator-guide/github-debug-webhooks/#related-articles","title":"Related Articles","text":"
                                                          • GitHub Webhooks
                                                          • Integrate GitHub/GitLab in Jenkins
                                                          • Integrate GitHub/GitLab in Tekton
                                                          • GitHub Webhook Configuration
                                                          • Manage Jenkins CI Pipeline Job Provision
                                                          • GitHub Plugin
                                                          • GitHub Pull Request Builder
                                                          "},{"location":"operator-guide/github-integration/","title":"GitHub Webhook Configuration","text":"

                                                          Follow the steps below to automatically integrate Jenkins with GitHub webhooks.

                                                          Note

                                                          Before applying the GitHub integration, make sure you have already visited the Integrate GitHub/GitLab in Jenkins page.

                                                          1. Ensure that the new job provisioner is created, as well as the Secret with the SSH key and the GitServer custom resources.

                                                          2. Ensure the access token for GitHub is created.

                                                          3. Navigate to Dashboard -> Manage Jenkins -> Manage Credentials -> Global -> Add Credentials, and create new credentials with the Secret text kind. In the Secret field, provide the GitHub API token, fill in the ID field with the github-access-token value:

                                                            Jenkins github credentials

                                                          4. Navigate to Jenkins -> Manage Jenkins -> Configure system -> GitHub, and configure the GitHub server:

                                                            GitHub plugin config GitHub plugin Shared secrets config

                                                            Note

                                                            Keep the Manage hooks checkbox clear since the Job Provisioner automatically creates webhooks in the repository regardless of the checkbox selection. Select Advanced to see the shared secrets that can be used in a webhook Secret field to authenticate payloads from GitHub to Jenkins. The Secret field is optional.

                                                          5. Configure the GitHub Pull Request Builder plugin. This plugin is responsible for listening on Pull Request webhook events and triggering Code Review jobs:

                                                            Note

                                                            The Secret field is optional and is used in a webhook Secret field to authenticate payloads from GitHub to Jenkins. For details, please refer to the official GitHub pull request builder plugin documentation.

                                                            GitHub pull plugin config

                                                          "},{"location":"operator-guide/github-integration/#related-articles","title":"Related Articles","text":"
                                                          • Integrate GitHub/GitLab in Jenkins
                                                          • Integrate GitHub/GitLab in Tekton
                                                          • Adjust Jira Integration
                                                          • Manage Jenkins CI Pipeline Job Provision
                                                          "},{"location":"operator-guide/gitlab-debug-webhooks/","title":"Debug GitLab Webhooks in Jenkins","text":"

                                                          A webhook enables third-party services like GitLab to send real-time updates to the application. Updates are triggered by an event or an action on the webhook provider's side (for example, a push to a repository or a Merge Request creation) and are pushed to the application, in this case Jenkins, via HTTP requests. The GitLab Jenkins job provisioner creates a webhook in the GitLab repository during the Create release pipeline once the Integrate GitHub/GitLab in Jenkins is enabled and the GitLab Integration is completed.

                                                          The Jenkins setup in EDP uses the GitLab plugin responsible for listening on GitLab webhook Push and Merge Request events.

                                                          In case of any issues with webhooks, try the following solutions:

                                                          1. Check that the firewalls are configured to accept incoming traffic from the IP address range that is described in the GitLab documentation.

                                                          2. Check that GitLab Personal Access Token is correct and has the api scope. If you have used the Project Access Token, make sure that the role is Owner or Maintainer, and it has the api scope.

                                                          3. Check that the job has run at least once before using the hook (once an application is created in EDP, the build job should be run automatically in Jenkins).

                                                          4. Check that both webhooks, one for Push Events and Note Events and another for Merge Requests Events and Note Events, are created on the GitLab side for each branch (unlike GitHub, GitLab must have separate webhooks for each branch).

                                                            • Go to the GitLab repository -> Settings -> Webhooks:

                                                            Webhooks list

                                                          5. Click Edit next to each webhook and check if the event delivery is successful. If the webhook is sent, the Recent Deliveries list becomes available. Click View details.

                                                            Webhooks settings

                                                            • The URL payload must be similar to the job URL on Jenkins. For example: https://jenkins-server.com/project/project-name/MAIN-Build-job is for the Push events. https://jenkins-server.com/project/project-name/MAIN-Code-review-job is for the Merge Request events.
                                                            • The content type must be application/json for both events.
                                                            • The \"web_url\" in the Request body must match the repository URL.
                                                            • Project \"web_url\", \"path_with_namespace\", \"homepage\" links must be without .git at the end of the URL.
                                                          6. Verify the Secret token (X-Gitlab-Token). This token is provided by the Jenkins GitLab plugin and is created by the Job Provisioner:

                                                            • Go to the Jenkins job and select Configure.
                                                            • Select Advanced under the Build Triggers and check the Secret token.

                                                            Secret token is optional and can be empty. Nevertheless, if incorrect, it can prevent webhook events.

                                                          7. Redeliver events by clicking the Resend Request button and check the Response body.

                                                            Note

                                                            Use Postman to debug webhooks. Add all headers to Postman from the webhook Request Headers field and send the payload (Request body) using the appropriate content type.

                                                            Examples for Push and Merge Request events:

                                                            Postman push request payload headers Push request build pipeline

                                                            The response in the Jenkins log:

                                                            Jan 17, 2022 11:26:34 AM INFO com.dabsquared.gitlabjenkins.webhook.GitLabWebHook getDynamic\nWebHook called with url: /project/project-name/MAIN-Build-job\nJan 17, 2022 11:26:34 AM INFO com.dabsquared.gitlabjenkins.trigger.handler.AbstractWebHookTriggerHandler handle\nproject-name/MAIN-Build-job triggered for push.\n

                                                            Postman merge request payload headers Merge request code review pipeline

                                                            The response in the Jenkins log:

                                                            Jan 17, 2022 11:14:58 AM INFO com.dabsquared.gitlabjenkins.webhook.GitLabWebHook getDynamic\nWebHook called with url: /project/project-name/MAIN-Code-review-job\n
8. Check that the repository sending webhooks to Jenkins matches the repository(ies) configured in the pipeline job. The GitLab Connection must be defined in the job settings.

                                                          9. Check that the settings in the Build Triggers for the Build job are as follows:

                                                            Build triggers build pipeline

                                                          10. Check that the settings in the Build Triggers for the Code Review job are as follows:

                                                            Build triggers code review pipeline

11. Filter the Jenkins log by using a Jenkins custom log recorder:

                                                            • Go to Manage Jenkins -> System Log -> Add new log recorder.
• Add the following loggers, each with Log Level ALL, for the GitLab Push and Merge Request events:

  • com.dabsquared.gitlabjenkins.webhook.GitLabWebHook
  • com.dabsquared.gitlabjenkins.trigger.handler.AbstractWebHookTriggerHandler
  • com.dabsquared.gitlabjenkins.trigger.handler.merge.MergeRequestHookTriggerHandlerImpl
  • com.dabsquared.gitlabjenkins.util.CommitStatusUpdater
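If Postman is not available, a webhook delivery can also be replayed from the command line. The sketch below is only an illustration: it assumes the Request body copied from GitLab's Recent Deliveries is saved as push-request-body.json and reuses the example job URL and secret token mentioned above:

curl -X POST \"https://jenkins-server.com/project/project-name/MAIN-Build-job\" \\\n--header \"Content-Type: application/json\" \\\n--header \"X-Gitlab-Event: Push Hook\" \\\n--header \"X-Gitlab-Token: <secret-token>\" \\\n--data @push-request-body.json\n

For Merge Request events, use the MAIN-Code-review-job URL and the X-Gitlab-Event: Merge Request Hook header instead.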
                                                          "},{"location":"operator-guide/gitlab-debug-webhooks/#related-articles","title":"Related Articles","text":"
                                                          • GitLab Webhooks
                                                          • Integrate GitHub/GitLab in Jenkins
                                                          • Integrate GitHub/GitLab in Tekton
                                                          • Jenkins Integration With GitLab
                                                          • GitLab Integration
                                                          • Manage Jenkins CI Pipeline Job Provision
                                                          • GitLab Plugin
                                                          "},{"location":"operator-guide/gitlab-integration/","title":"GitLab Webhook Configuration","text":"

                                                          Follow the steps below to automatically create and integrate Jenkins GitLab webhooks.

                                                          Note

                                                          Before applying the GitLab integration, make sure to enable Integrate GitHub/GitLab in Jenkins. For details, please refer to the Integrate GitHub/GitLab in Jenkins page.

                                                          1. Ensure the new job provisioner is created, as well as Secret with SSH key and GitServer custom resources.

                                                          2. Ensure the access token for GitLab is created.

                                                          3. Create the Jenkins Credential ID by navigating to Dashboard -> Manage Jenkins -> Manage Credentials -> Global -> Add Credentials:

                                                            • Select the Secret text kind.
                                                            • Select the Global scope.
                                                            • Secret is the access token that was created earlier.
                                                            • ID is the gitlab-access-token ID.
                                                            • Use the description of the current Credential ID.

                                                            Jenkins credential

                                                            Warning

When using the GitLab integration, a webhook is automatically created. After the application is removed, the webhook stops working but is not deleted. If necessary, delete it manually, either in the GitLab UI or via the GitLab API (see the sketch at the end of this section).

                                                            Note

The next step is only necessary if you need to see the status of Jenkins Merge Request builds in the GitLab CI/CD Pipelines section.

                                                          4. In order to see the status of Jenkins Merge Requests builds in the GitLab CI/CD Pipelines section, configure the GitLab plugin by navigating to Manage Jenkins -> Configure System and filling in the GitLab plugin settings:

                                                            • Connection name is gitlab.
                                                            • GitLab host URL is a host URL to GitLab.
                                                            • Use the gitlab-access-token credentials.

                                                            GitLab plugin configuration

                                                            Find below an example of the Merge Requests build statuses in the GitLab CI/CD Pipelines section:

                                                            GitLab pipelines statuses
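To remove an orphaned webhook without using the GitLab UI, the GitLab REST API can be used. This is only a sketch; replace the host, project ID, hook ID, and token placeholders with your own values:

# List the project webhooks to find the id of the orphaned hook\ncurl --header \"PRIVATE-TOKEN: <gitlab-access-token>\" \\\n\"https://<gitlab-host>/api/v4/projects/<project-id>/hooks\"\n\n# Delete the orphaned webhook by its id\ncurl --request DELETE --header \"PRIVATE-TOKEN: <gitlab-access-token>\" \\\n\"https://<gitlab-host>/api/v4/projects/<project-id>/hooks/<hook-id>\"\n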

                                                          "},{"location":"operator-guide/gitlab-integration/#related-articles","title":"Related Articles","text":"
                                                          • Adjust Jira Integration
                                                          • Integrate GitHub/GitLab in Jenkins
                                                          • Integrate GitHub/GitLab in Tekton
                                                          • Grant Jenkins Access to the Gitlab Project
                                                          • Manage Jenkins CI Pipeline Job Provision
                                                          "},{"location":"operator-guide/gitlabci-integration/","title":"Adjust GitLab CI Tool","text":"

EDP allows selecting one of two available CI (Continuous Integration) tools: Jenkins or GitLab CI. The Jenkins tool is available by default; to use the GitLab CI tool, it must be made available first.

                                                          Follow the steps below to adjust the GitLab CI tool:

                                                          1. In GitLab, add the environment variables to the project.

                                                            • To add variables, navigate to Settings -> CI/CD -> Expand Variables -> Add Variable:

                                                              Gitlab ci environment variables

• Apply the necessary variables; they differ depending on whether the cluster runs OpenShift or Kubernetes, see below:

OpenShift:

• DOCKER_REGISTRY_URL: URL to the OpenShift Docker registry
• DOCKER_REGISTRY_PASSWORD: Service Account token that has access to the registry
• DOCKER_REGISTRY_USER: user name
• OPENSHIFT_SA_TOKEN: token that can be used to log in to OpenShift

                                                              Info

In order to get access to the Docker registry and OpenShift, use the gitlab-ci ServiceAccount; note that the ServiceAccount description contains the credentials and secrets:

                                                              Service account

Kubernetes:

• DOCKER_REGISTRY_URL: URL to Amazon ECR
• AWS_ACCESS_KEY_ID: auto IAM user access key
• AWS_SECRET_ACCESS_KEY: auto IAM user secret access key
• K8S_SA_TOKEN: token that can be used to log in to Kubernetes

                                                              Note

To get access to ECR, an auto IAM user with rights to push and create repositories is required.

                                                          2. In Admin Console, select the CI tool in the Advanced Settings menu during the codebase creation:

                                                            Advanced settings

                                                            Note

                                                            The selection of the CI tool is available only with the Import strategy.

                                                          3. As soon as the codebase is provisioned, the .gitlab-ci.yml file will be created in the repository that describes the pipeline's stages and logic:

                                                            .gitlab-ci.yml file presented in repository
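For illustration only, a minimal .gitlab-ci.yml with a single Docker build job might look like the hypothetical sketch below. It uses the environment variables defined above; the file actually generated by EDP is codebase-specific and differs from this example:

stages:\n- build\n\nbuild:\nstage: build\nimage: docker:24\nservices:\n- docker:24-dind\nvariables:\nDOCKER_TLS_CERTDIR: \"/certs\"\nscript:\n- docker login -u \"$DOCKER_REGISTRY_USER\" -p \"$DOCKER_REGISTRY_PASSWORD\" \"$DOCKER_REGISTRY_URL\"\n- docker build -t \"$DOCKER_REGISTRY_URL/$CI_PROJECT_NAME:$CI_COMMIT_SHORT_SHA\" .\n- docker push \"$DOCKER_REGISTRY_URL/$CI_PROJECT_NAME:$CI_COMMIT_SHORT_SHA\"\n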

                                                          "},{"location":"operator-guide/harbor-oidc/","title":"Harbor OIDC Configuration","text":"

                                                          This page provides instructions for configuring OIDC authorization for Harbor. This enables the use of Single Sign-On (SSO) for authorization in Harbor and allows centralized control over user access and rights through a single configuration point.

                                                          "},{"location":"operator-guide/harbor-oidc/#prerequisites","title":"Prerequisites","text":"

Before you begin, ensure your cluster meets the following requirements:

                                                          • Keycloak is installed;
                                                          • EPAM Delivery Platform is installed.
                                                          "},{"location":"operator-guide/harbor-oidc/#configure-keycloak","title":"Configure Keycloak","text":"

To begin, configure Keycloak by creating two Kubernetes resources. Follow the steps below:

                                                          1. Generate the keycloak-client-harbor-secret for Keycloak using either the commands below or using the External Secrets Operator:

                                                            keycloak_client_harbor_secret=$(openssl rand -base64 32 | head -c 32)\n
                                                            kubectl -n edp create secret generic keycloak-client-harbor-secret \\\n--from-literal=cookie-secret=${keycloak_client_harbor_secret}\n
2. Create the KeycloakClient custom resource by applying the HarborKeycloakClient.yaml file in the edp namespace. This custom resource uses the keycloak-client-harbor-secret as the client secret for the harbor client. After the resource is applied, the harbor client is created in Keycloak, and its password is the value of the Kubernetes secret from step 1:

                                                            View: HarborKeycloakClient.yaml
                                                            apiVersion: v1.edp.epam.com/v1\nkind: KeycloakClient\nmetadata:\nname: harbor\nspec:\nadvancedProtocolMappers: true\nclientId: harbor\ndirectAccess: true\npublic: false\nsecret: keycloak-client-harbor-secret\ndefaultClientScopes:\n- profile\n- email\n- roles\ntargetRealm: control-plane\nwebUrl: <harbor_endpoint>\nprotocolMappers:\n- name: roles\nprotocol: openid-connect\nprotocolMapper: oidc-usermodel-realm-role-mapper\nconfig:\naccess.token.claim: true\nclaim.name: roles\nid.token.claim: true\nuserinfo.token.claim: true\nmultivalued: true\n
                                                          "},{"location":"operator-guide/harbor-oidc/#configure-harbor","title":"Configure Harbor","text":"

The next stage is to configure Harbor. Follow the steps below:

                                                          1. Log in to Harbor UI with an account that has Harbor system administrator privileges. To get the administrator password, execute the command below:

                                                            kubectl get secret harbor -n harbor -o jsonpath='{.data.HARBOR_ADMIN_PASSWORD}' | base64 --decode\n
                                                          2. Navigate to Administration -> Configuration -> Authentication. Configure OIDC using the parameters below:

                                                            auth_mode: oidc_auth\noidc_name: keycloak\noidc_endpoint: <keycloak_endpoint>/auth/realms/control-plane\noidc_client_id: harbor\noidc_client_secret: <keycloak-client-harbor-secret>\noidc_groups_claim: roles\noidc_admin_group: administrator\noidc_scope: openid,email,profile,roles\nverify_certificate: true\noidc_auto_onboard: true\noidc_user_claim: preferred_username\n

                                                            Harbor Authentication Configuration

                                                          As a result, users will be prompted to authenticate themselves when logging in to Harbor UI.
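Note that the oidc_client_secret value above is the random string generated in the Configure Keycloak section. Assuming the default secret name and the edp namespace, it can be read back with:

kubectl -n edp get secret keycloak-client-harbor-secret \\\n-o jsonpath='{.data.cookie-secret}' | base64 --decode\n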

                                                          "},{"location":"operator-guide/harbor-oidc/#related-articles","title":"Related Articles","text":"
                                                          • Configure Access Token Lifetime
                                                          • EKS OIDC With Keycloak
                                                          • External Secrets Operator Integration
                                                          • Integrate Harbor With EDP Pipelines
                                                          "},{"location":"operator-guide/headlamp-oidc/","title":"Headlamp OIDC Configuration","text":"

This page provides instructions for configuring OIDC authorization for the EDP Portal UI, enabling SSO for authorization in the Portal and allowing user access and rights to be controlled from a single configuration point.

                                                          "},{"location":"operator-guide/headlamp-oidc/#prerequisites","title":"Prerequisites","text":"

Ensure the following values are set before starting the Portal OIDC configuration:

                                                          1. realm_id = openshift

                                                          2. client_id = kubernetes

                                                          3. keycloak_client_key= keycloak_client_secret_key (received from: Openshift realm -> clients -> kubernetes -> Credentials -> Client secret)

                                                          4. group = edp-oidc-admins, edp-oidc-builders, edp-oidc-deployers, edp-oidc-developers, edp-oidc-viewers (Should be created manually in the realm from point 1)

                                                          Note

                                                          The values indicated above are the result of the Keycloak configuration as an OIDC identity provider. To receive them, follow the instructions on the Keycloak OIDC EKS Configuration page.

                                                          "},{"location":"operator-guide/headlamp-oidc/#configure-keycloak","title":"Configure Keycloak","text":"

                                                          To proceed with the Keycloak configuration, perform the following:

                                                          1. Add the URL of the Headlamp to the valid_redirect_uris variable in Keycloak:

                                                            View: keycloak_openid_client
valid_redirect_uris = [\n\"https://edp-headlamp-edp.<dns_wildcard>/*\",\n\"http://localhost:8000/*\"\n]\n

                                                            Make sure to define the following Keycloak client values as indicated:

                                                            Keycloak client configuration

                                                          2. Configure the Keycloak client key in Kubernetes using the Kubernetes secrets or the External Secrets Operator:

                                                            apiVersion: v1\nkind: Secret\nmetadata:\nname: keycloak-client-headlamp-secret\nnamespace: edp\ntype: Opaque\nstringData:\nclientSecret: <keycloak_client_secret_key>\n
                                                          3. Assign user to one or more groups in Keycloak.

                                                          "},{"location":"operator-guide/headlamp-oidc/#integrate-headlamp-with-kubernetes","title":"Integrate Headlamp With Kubernetes","text":"

                                                          Headlamp can be integrated in Kubernetes in three steps:

1. Update the values.yaml file by enabling OIDC (a helm upgrade sketch for rolling out this change is provided after this list):

                                                            View: values.yaml
                                                            edp-headlamp:\nconfig:\noidc:\nenabled: true\n
                                                          2. Navigate to Headlamp and log in by clicking the Sign In button:

                                                            Headlamp login page

                                                          3. Go to EDP section -> Account -> Settings, and set up a namespace:

                                                            Headlamp namespace settings
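To roll out the values.yaml change from step 1, a minimal sketch, assuming EDP was installed from the epamedp/edp-install Helm chart into the edp namespace (adjust the release name, chart, and version to your installation):

helm upgrade --install edp epamedp/edp-install \\\n--namespace edp \\\n--values values.yaml\n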

                                                          As a result, it is possible to control access and rights from the Keycloak endpoint.

                                                          "},{"location":"operator-guide/headlamp-oidc/#related-articles","title":"Related Articles","text":"
                                                          • Configure Access Token Lifetime
                                                          • EKS OIDC With Keycloak
                                                          • External Secrets Operator
                                                          "},{"location":"operator-guide/import-strategy-jenkins/","title":"Integrate GitHub/GitLab in Jenkins","text":"

                                                          This page describes how to integrate EDP with GitLab or GitHub in case of following the Jenkins deploy scenario.

                                                          "},{"location":"operator-guide/import-strategy-jenkins/#integration-procedure","title":"Integration Procedure","text":"

To begin, add both the Secret with the SSH key and the GitServer custom resource by taking the steps below:

                                                          1. Generate an SSH key pair and add a public key to GitLab or GitHub account.

                                                            ssh-keygen -t ed25519 -C \"email@example.com\"\n
                                                          2. Generate access token for GitLab or GitHub account with read/write access to the API. Both personal and project access tokens are applicable.

                                                            GitHubGitLab

                                                            To create access token in GitHub, follow the steps below:

                                                            • Log in to GitHub.
                                                            • Click the profile account and navigate to Settings -> Developer Settings.
                                                            • Select Personal access tokens (classic) and generate a new token with the following parameters:

                                                            Repo permission

                                                            Note

                                                            The access below is required for the GitHub Pull Request Builder plugin to get Pull Request commits, their status, and author info.

                                                            Admin permission User permission

                                                            Warning

Make sure to save the new personal access token because it won't be displayed later.

                                                            To create access token in GitLab, follow the steps below:

                                                            • Log in to GitLab.
                                                            • In the top-right corner, click the avatar and select Settings.
                                                            • On the User Settings menu, select Access Tokens.
                                                            • Choose a name and an optional expiry date for the token.
                                                            • In the Scopes block, select the api scope for the token.

                                                            Personal access tokens

                                                            • Click the Create personal access token button.

                                                            Note

Make sure to save the access token, as you will not be able to access it again.

If you create a project access token instead of a personal one, the GitLab Jenkins plugin will only be able to accept webhook payloads for that project:

                                                            • Log in to GitLab and navigate to the project.
                                                            • On the User Settings menu, select Access Tokens.
                                                            • Choose a name and an optional expiry date for the token.
                                                            • Choose a role: Owner or Maintainer.
                                                            • In the Scopes block, select the api scope for the token.

                                                            Project access tokens

                                                            • Click the Create project access token button.
                                                          3. Create secret in the edp namespace for the Git account with the id_rsa, username, and token fields. We recommend using EDP Portal to implement this:

                                                            • Open EDP Portal URL. Use the Sign-In option:

                                                              Logging screen

                                                            • In the top right corner, enter the Cluster settings and set the Default namespace. The Allowed namespaces field is optional. All the resources created via EDP Portal are created in the Default namespace whereas Allowed namespaces means the namespaces you are allowed to access in this cluster:

                                                              Cluster settings

                                                            • Log into EDP Portal UI, select EDP -> Git Servers -> + to see the Create Git Server menu:

                                                              Git Servers overview

                                                            • Choose your Git provider, insert Host, Access token, Private SSH key. Adjust SSH port, User and HTTPS port if needed and click Apply:

                                                              Note

                                                              Do not forget to press enter at the very end of the private key to have the last row empty.

                                                              Create Git Servers menu

                                                            • After performing the steps above, two Kubernetes custom resources will be created in the default namespace: secret and GitServer. EDP Portal appends random symbols to both the secret and the GitServer to provide names with uniqueness. Also, the attempt to connect to your actual Git server will be performed. If the connection with the server is established, the Git server status should be green:

                                                              Git server status

                                                              Note

                                                              The value of the nameSshKeySecret property is the name of the Secret that is indicated in the first step above.

                                                          4. Create the JenkinsServiceAccount custom resource with the credentials field that corresponds to the nameSshKeySecret property above:

                                                            apiVersion: v2.edp.epam.com/v1\nkind: JenkinsServiceAccount\nmetadata:\nname: gitlab # It can also be github.\nnamespace: edp\nspec:\ncredentials: <nameSshKeySecret>\nownerName: ''\ntype: ssh\n
                                                          5. Double-check that the new SSH credentials called gitlab/github are created in Jenkins using the SSH key. Navigate to Jenkins -> Manage Jenkins -> Manage Credentials -> (global):

                                                            Jenkins credentials

                                                          6. Create a new job provisioner by following the instructions for GitHub or GitLab. The job provisioner will create a job suite for an application added to EDP. The job provisioner will also create webhooks for the project in GitLab using a GitLab token.

                                                          7. Configure GitHub or GitLab plugins in Jenkins.

                                                          "},{"location":"operator-guide/import-strategy-jenkins/#related-articles","title":"Related Articles","text":"
                                                          • Add Git Server
                                                          • Add Application
                                                          • GitHub Webhook Configuration
                                                          • GitLab Webhook Configuration
                                                          "},{"location":"operator-guide/import-strategy-tekton/","title":"Integrate GitHub/GitLab in Tekton","text":"

                                                          This page describes how to integrate EDP with GitLab or GitHub Version Control System.

                                                          "},{"location":"operator-guide/import-strategy-tekton/#integration-procedure","title":"Integration Procedure","text":"

To begin, add the Secret with the SSH key, the API token, and the GitServer resource by taking the steps below.

                                                          1. Generate an SSH key pair and add a public key to GitLab or GitHub account.

                                                            ssh-keygen -t ed25519 -C \"email@example.com\"\n
                                                          2. Generate access token for GitLab or GitHub account with read/write access to the API. Both personal and project access tokens are applicable.

                                                            GitHubGitLab

                                                            To create access token in GitHub, follow the steps below:

                                                            • Log in to GitHub.
                                                            • Click the profile account and navigate to Settings -> Developer Settings.
                                                            • Select Personal access tokens (classic) and generate a new token with the following parameters:

                                                            Repo permission

                                                            Note

                                                            The access below is required for the GitHub Pull Request Builder plugin to get Pull Request commits, their status, and author info.

                                                            Admin permission User permission

                                                            Warning

Make sure to save the new personal access token because it won't be displayed later.

                                                            To create access token in GitLab, follow the steps below:

                                                            • Log in to GitLab.
                                                            • In the top-right corner, click the avatar and select Settings.
                                                            • On the User Settings menu, select Access Tokens.
                                                            • Choose a name and an optional expiry date for the token.
                                                            • In the Scopes block, select the api scope for the token.

                                                            Personal access tokens

                                                            • Click the Create personal access token button.

                                                            Note

Make sure to save the access token, as you will not be able to access it again.

                                                            In case you want to create a project access token instead of a personal one, take the following steps:

                                                            • Log in to GitLab and navigate to the project.
                                                            • On the User Settings menu, select Access Tokens.
                                                            • Choose a name and an optional expiry date for the token.
                                                            • Choose a role: Owner or Maintainer.
                                                            • In the Scopes block, select the api scope for the token.

                                                            Project access tokens

                                                            • Click the Create project access token button.
                                                          3. Create a secret in the edp namespace for the Git account with the id_rsa, username, and token fields. Take the following template as an example (use ci-github instead of ci-gitlab for GitHub):

                                                            kubectl create secret generic ci-gitlab -n edp \\\n--from-file=id_rsa=id_rsa \\\n--from-literal=username=git \\\n--from-literal=token=your_gitlab_access_token\n
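To double-check that the secret exists and contains the expected keys without printing their values, a quick check (assuming the ci-gitlab name from the example above):

kubectl -n edp describe secret ci-gitlab\n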
                                                          "},{"location":"operator-guide/import-strategy-tekton/#related-articles","title":"Related Articles","text":"
                                                          • Add Git Server
                                                          • Add Application
                                                          • GitHub WebHook Configuration
                                                          • GitLab WebHook Configuration
                                                          "},{"location":"operator-guide/import-strategy/","title":"Enable VCS Import Strategy","text":"

                                                          Enabling the VCS Import strategy is a prerequisite to integrate EDP with GitLab or GitHub.

                                                          "},{"location":"operator-guide/import-strategy/#general-steps","title":"General Steps","text":"

In order to use the Import strategy, add both the Secret with the SSH key and the GitServer custom resource by taking the steps below.

                                                          1. Generate an SSH key pair and add a public key to GitLab or GitHub account.

                                                            ssh-keygen -t ed25519 -C \"email@example.com\"\n
                                                          2. Generate access token for GitLab or GitHub account with read/write access to the API. Both personal and project access tokens are applicable.

                                                          GitHubGitLab

                                                          To create access token in GitHub, follow the steps below:

                                                          • Log in to GitHub.
                                                          • Click the profile account and navigate to Settings -> Developer Settings.
                                                          • Select Personal access tokens (classic) and generate a new token with the following parameters:

                                                          Repo permission

                                                          Note

                                                          The access below is required for the GitHub Pull Request Builder plugin to get Pull Request commits, their status, and author info.

                                                          Admin permission User permission

                                                          Warning

Make sure to save the new personal access token because it won't be displayed later.

                                                          To create access token in GitLab, follow the steps below:

                                                          • Log in to GitLab.
                                                          • In the top-right corner, click the avatar and select Settings.
                                                          • On the User Settings menu, select Access Tokens.
                                                          • Choose a name and an optional expiry date for the token.
                                                          • In the Scopes block, select the api scope for the token.

                                                          Personal access tokens

                                                          • Click the Create personal access token button.

                                                          Note

Make sure to save the access token, as you will not be able to access it again.

If you create a project access token instead of a personal one, the GitLab Jenkins plugin will only be able to accept webhook payloads for that project:

                                                          • Log in to GitLab and navigate to the project.
                                                          • On the User Settings menu, select Access Tokens.
                                                          • Choose a name and an optional expiry date for the token.
                                                          • Choose a role: Owner or Maintainer.
                                                          • In the Scopes block, select the api scope for the token.

                                                          Project access tokens

                                                          • Click the Create project access token button.
                                                          "},{"location":"operator-guide/import-strategy/#ci-tool-specific-steps","title":"CI Tool Specific Steps","text":"

                                                          The further steps depend on the CI tool used.

                                                          Tekton CI toolJenkins CI tool
1. Create a secret in the edp namespace for the Git account with the id_rsa, username, and token fields. Take the following template as an example (use github instead of gitlab for GitHub):

                                                            kubectl create secret generic gitlab -n edp \\\n--from-file=id_rsa=id_rsa \\\n--from-literal=username=git \\\n--from-literal=token=your_gitlab_access_token\n
                                                          2. After completing the steps above, you can get back and continue installing EDP.

                                                          1. Create secret in the edp namespace for the Git account with the id_rsa, username, and token fields. We recommend using EDP Portal to implement this:

                                                            Open EDP Portal URL. Use the Sign-In option:

                                                            Logging screen

                                                            In the top right corner, enter the Cluster settings and set the Default namespace. The Allowed namespaces field is optional. All the resources created via EDP Portal are created in the Default namespace whereas Allowed namespaces means the namespaces you are allowed to access in this cluster:

                                                            Cluster settings

                                                            Log into EDP Portal UI, select EDP -> Git Servers -> + to see the Create Git Server menu:

                                                            Git Servers overview

                                                            Choose your Git provider, insert Host, Access token, Private SSH key. Adjust SSH port, User and HTTPS port if needed and click Apply:

                                                            Note

                                                            Do not forget to press enter at the very end of the private key to have the last row empty.

                                                            Create Git Servers menu

                                                            When everything is done, two custom resources will be created in the default namespace: secret and Git server. EDP Portal appends random symbols to both the secret and the server to provide names with uniqueness. Also, the attempt to connect to your Git server will be performed. If everything is correct, the Git server status should be green:

                                                            Git server status

                                                            Note

                                                            The value of the nameSshKeySecret property is the name of the Secret that is indicated in the first step above.

                                                          2. Create the JenkinsServiceAccount custom resource with the credentials field that corresponds to the nameSshKeySecret property above:

                                                            apiVersion: v2.edp.epam.com/v1\nkind: JenkinsServiceAccount\nmetadata:\nname: gitlab # It can also be github.\nnamespace: edp\nspec:\ncredentials: <nameSshKeySecret>\nownerName: ''\ntype: ssh\n
                                                          3. Double-check that the new SSH credentials called gitlab/github are created in Jenkins using the SSH key. Navigate to Jenkins -> Manage Jenkins -> Manage Credentials -> (global):

                                                            Jenkins credentials

                                                          4. The next step is to create a new job provisioner by following the instructions for GitHub or GitLab. The job provisioner will create a job suite for an application added to EDP. It will also create webhooks for the project in GitLab using a GitLab token.

5. The next step is to integrate Jenkins with GitHub or GitLab by configuring their plugins.

                                                          "},{"location":"operator-guide/import-strategy/#related-articles","title":"Related Articles","text":"
                                                          • Add Git Server
                                                          • Add Application
                                                          • GitHub Webhook Configuration
                                                          • GitLab Webhook Configuration
                                                          "},{"location":"operator-guide/install-argocd/","title":"Install Argo CD","text":"

                                                          Inspect the prerequisites and the main steps to perform for enabling Argo CD in EDP.

                                                          "},{"location":"operator-guide/install-argocd/#prerequisites","title":"Prerequisites","text":"

                                                          The following tools must be installed:

                                                          • Keycloak
                                                          • EDP
                                                          • Kubectl version 1.23.0
                                                          • Helm version 3.10.0
                                                          "},{"location":"operator-guide/install-argocd/#installation","title":"Installation","text":"

                                                          Argo CD enablement for EDP consists of two major steps:

                                                          • Argo CD integration with EDP (SSO enablement, codebase onboarding, etc.)
                                                          • Argo CD installation

                                                          Info

                                                          It is also possible to install Argo CD using the Helmfile. For details, please refer to the Install via Helmfile page.

                                                          "},{"location":"operator-guide/install-argocd/#integrate-with-edp","title":"Integrate With EDP","text":"

To enable Argo CD integration, ensure that the argocd.enabled flag in values.yaml is set to true.

                                                          "},{"location":"operator-guide/install-argocd/#install-with-helm","title":"Install With Helm","text":"

                                                          Argo CD can be installed in several ways, please follow the official documentation for more details.

                                                          Follow the steps below to install Argo CD using Helm:

                                                          For the OpenShift users:

                                                          When using the OpenShift platform, apply the SecurityContextConstraints resource. Change the namespace in the users section if required.

                                                          View: argocd-scc.yaml

                                                          allowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegeEscalation: true\nallowPrivilegedContainer: false\nallowedCapabilities: null\napiVersion: security.openshift.io/v1\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- min: 99\nmax: 65543\ngroups: []\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: argo-redis-ha\npriority: 1\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities:\n- KILL\n- MKNOD\n- SETUID\n- SETGID\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMin: 1\nuidRangeMax: 65543\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nseccompProfiles:\n- '*'\nusers:\n- system:serviceaccount:argocd:argo-redis-ha\n- system:serviceaccount:argocd:argo-redis-ha-haproxy\n- system:serviceaccount:argocd:argocd-notifications-controller\n- system:serviceaccount:argocd:argo-argocd-repo-server\n- system:serviceaccount:argocd:argocd-server\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n
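Assuming the manifest above is saved locally as argocd-scc.yaml, it can also be applied directly before the installation (the helm.sh/hook annotation has no effect outside of a Helm release):

kubectl apply -f argocd-scc.yaml\n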

                                                          1. Check out the values.yaml file sample of the Argo CD customization, which is based on the HA mode without autoscaling:

                                                            View: kubernetes-values.yaml
                                                            redis-ha:\nenabled: true\n\ncontroller:\nenableStatefulSet: true\n\nserver:\nreplicas: 2\nextraArgs:\n- \"--insecure\"\nenv:\n- name: ARGOCD_API_SERVER_REPLICAS\nvalue: '2'\ningress:\nenabled: true\nhosts:\n- \"argocd.<Values.global.dnsWildCard>\"\nconfig:\n# required when SSO is enabled\nurl: \"https://argocd.<.Values.global.dnsWildCard>\"\napplication.instanceLabelKey: argocd.argoproj.io/instance-edp\noidc.config: |\nname: Keycloak\nissuer: https://<.Values.global.keycloakEndpoint>/auth/realms/edp-main\nclientID: argocd\nclientSecret: $oidc.keycloak.clientSecret\nrequestedScopes:\n- openid\n- profile\n- email\n- groups\nrbacConfig:\n# users may be still be able to login,\n# but will see no apps, projects, etc...\npolicy.default: ''\nscopes: '[groups]'\npolicy.csv: |\n# default global admins\ng, ArgoCDAdmins, role:admin\n\nconfigs:\nparams:\napplication.namespaces: edp\n\nrepoServer:\nreplicas: 2\n\n# we use Keycloak so no DEX is required\ndex:\nenabled: false\n\n# Disabled for multitenancy env with single instance deployment\napplicationSet:\nenabled: false\n
                                                            View: openshift-values.yaml
                                                            redis-ha:\nenabled: true\n\ncontroller:\nenableStatefulSet: true\n\nserver:\nreplicas: 2\nextraArgs:\n- \"--insecure\"\nenv:\n- name: ARGOCD_API_SERVER_REPLICAS\nvalue: '2'\nroute:\nenabled: true\nhostname: \"argocd.<.Values.global.dnsWildCard>\"\ntermination_type: edge\ntermination_policy: Redirect\nconfig:\n# required when SSO is enabled\nurl: \"https://argocd.<.Values.global.dnsWildCard>\"\napplication.instanceLabelKey: argocd.argoproj.io/instance-edp\noidc.config: |\nname: Keycloak\nissuer: https://<.Values.global.keycloakEndpoint>/auth/realms/edp-main\nclientID: argocd\nclientSecret: $oidc.keycloak.clientSecret\nrequestedScopes:\n- openid\n- profile\n- email\n- groups\nrbacConfig:\n# users may be still be able to login,\n# but will see no apps, projects, etc...\npolicy.default: ''\nscopes: '[groups]'\npolicy.csv: |\n# default global admins\ng, ArgoCDAdmins, role:admin\n\nconfigs:\nparams:\napplication.namespaces: edp\n\nrepoServer:\nreplicas: 2\n\n# we use Keycloak so no DEX is required\ndex:\nenabled: false\n\n# Disabled for multitenancy env with single instance deployment\napplicationSet:\nenabled: false\n

                                                            Populate Argo CD values with the values from the EDP values.yaml:

                                                            • <.Values.global.dnsWildCard> is the EDP DNS WildCard.
                                                            • <.Values.global.keycloakEndpoint> is the Keycloak Hostname.
                                                            • We use edp namespace.
                                                          2. Run the installation:

                                                            kubectl create ns argocd\nhelm repo add argo https://argoproj.github.io/argo-helm\nhelm install argo --version 5.33.1 argo/argo-cd -f values.yaml -n argocd\n
                                                          3. Update the argocd-secret secret in the argocd namespace by providing the correct Keycloak client secret (oidc.keycloak.clientSecret) with the value from the keycloak-client-argocd-secret secret in the EDP namespace. Then restart the deployment:

                                                            ARGOCD_CLIENT=$(kubectl -n edp get secret keycloak-client-argocd-secret  -o jsonpath='{.data.clientSecret}')\nkubectl -n argocd patch secret argocd-secret -p=\"{\\\"data\\\":{\\\"oidc.keycloak.clientSecret\\\": \\\"${ARGOCD_CLIENT}\\\"}}\" -v=1\nkubectl -n argocd rollout restart deployment argo-argocd-server\n
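To confirm that the secret was updated, the value can be read back and compared with the one stored in the edp namespace (both are base64-encoded), for example:

kubectl -n argocd get secret argocd-secret -o yaml | grep 'oidc.keycloak.clientSecret'\necho ${ARGOCD_CLIENT}\n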
                                                          "},{"location":"operator-guide/install-argocd/#related-articles","title":"Related Articles","text":"
                                                          • Argo CD Integration
                                                          • Install via Helmfile
                                                          "},{"location":"operator-guide/install-defectdojo/","title":"Install DefectDojo","text":"

                                                          Inspect the main steps to perform for installing DefectDojo via Helm Chart.

                                                          Info

It is also possible to install DefectDojo using the EDP addons approach. For details, please refer to the EDP addons documentation.

                                                          "},{"location":"operator-guide/install-defectdojo/#prerequisites","title":"Prerequisites","text":"
                                                          • Kubectl version 1.26.0 is installed.
                                                          • Helm version 3.12.0+ is installed.
                                                          "},{"location":"operator-guide/install-defectdojo/#installation","title":"Installation","text":"

                                                          Info

                                                          Please refer to the DefectDojo Helm Chart and Deploy DefectDojo into the Kubernetes cluster sections for details.

                                                          To install DefectDojo, follow the steps below:

1. Check that the namespace for DefectDojo (defectdojo by default) is created. If not, run the following command to create it:

                                                            kubectl create namespace defectdojo\n

                                                            For the OpenShift users:

                                                            When using the OpenShift platform, install the SecurityContextConstraints resource. In case of using a custom namespace for defectdojo, change the namespace in the users section.

                                                            View: defectdojo-scc.yaml

                                                            allowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegeEscalation: true\nallowPrivilegedContainer: false\nallowedCapabilities: null\napiVersion: security.openshift.io/v1\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- min: 999\nmax: 65543\ngroups: []\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: defectdojo\npriority: 1\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities:\n- KILL\n- MKNOD\n- SETUID\n- SETGID\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMin: 1\nuidRangeMax: 65543\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nusers:\n- system:serviceaccount:defectdojo:defectdojo\n- system:serviceaccount:defectdojo:defectdojo-rabbitmq\n- system:serviceaccount:defectdojo:default\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n

                                                          2. Add a chart repository:

                                                            helm repo add defectdojo 'https://raw.githubusercontent.com/DefectDojo/django-DefectDojo/helm-charts'\nhelm repo update\n
                                                          3. Create PostgreSQL admin secret:

                                                            kubectl -n defectdojo create secret generic defectdojo-postgresql-specific \\\n--from-literal=postgresql-password=<postgresql_password> \\\n--from-literal=postgresql-postgres-password=<postgresql_postgres_password>\n

                                                            Note

The postgresql_password and postgresql_postgres_password passwords must be 16 characters long (see the openssl sketch after the installation steps for generating values of the required lengths).

                                                          4. Create Rabbitmq admin secret:

                                                            kubectl -n defectdojo create secret generic defectdojo-rabbitmq-specific \\\n--from-literal=rabbitmq-password=<rabbitmq_password> \\\n--from-literal=rabbitmq-erlang-cookie=<rabbitmq_erlang_cookie>\n

                                                            Note

                                                            The rabbitmq_password password must be 10 characters long.

                                                            The rabbitmq_erlang_cookie password must be 32 characters long.

                                                          5. Create DefectDojo admin secret:

                                                            kubectl -n defectdojo create secret generic defectdojo \\\n--from-literal=DD_ADMIN_PASSWORD=<dd_admin_password> \\\n--from-literal=DD_SECRET_KEY=<dd_secret_key> \\\n--from-literal=DD_CREDENTIAL_AES_256_KEY=<dd_credential_aes_256_key> \\\n--from-literal=METRICS_HTTP_AUTH_PASSWORD=<metric_http_auth_password>\n

                                                            Note

                                                            The dd_admin_password password must be 22 characters long.

                                                            The dd_secret_key password must be 128 characters long.

                                                            The dd_credential_aes_256_key password must be 128 characters long.

                                                            The metric_http_auth_password password must be 32 characters long.

                                                          6. Install DefectDojo v.2.22.4 using defectdojo/defectdojo Helm chart v.1.6.69:

                                                            helm upgrade --install \\\ndefectdojo \\\n--version 1.6.69 \\\ndefectdojo/defectdojo \\\n--namespace defectdojo \\\n--values values.yaml\n

                                                            Check out the values.yaml file sample of the DefectDojo customization:

                                                            View: values.yaml
                                                            tag: 2.22.4\nfullnameOverride: defectdojo\nhost: defectdojo.<ROOT_DOMAIN>\nsite_url: https://defectdojo.<ROOT_DOMAIN>\nalternativeHosts:\n- defectdojo-django.defectdojo\n\ninitializer:\n# should be false after initial installation was performed\nrun: true\ndjango:\ningress:\nenabled: true # change to 'false' for OpenShift\nactivateTLS: false\nuwsgi:\nlivenessProbe:\n# Enable liveness checks on uwsgi container. Those values are use on nginx readiness checks as well.\n# default value is 120, so in our case 20 is just fine\ninitialDelaySeconds: 20\n
                                                          7. For the OpenShift platform, install a Route:

                                                            View: defectdojo-route.yaml
                                                            kind: Route\napiVersion: route.openshift.io/v1\nmetadata:\nname: defectdojo\nnamespace: defectdojo\nspec:\nhost: defectdojo.<ROOT_DOMAIN>\npath: /\ntls:\ninsecureEdgeTerminationPolicy: Redirect\ntermination: edge\nto:\nkind: Service\nname: defectdojo-django\nport:\ntargetPort: http\nwildcardPolicy: None\n
                                                          8. "},{"location":"operator-guide/install-defectdojo/#configuration","title":"Configuration","text":"

                                                            To prepare DefectDojo for integration with EDP, follow the steps below:

                                                            1. Create the ci user in the DefectDojo UI:

                                                              • Login to DefectDojo UI using admin credentials:
                                                                echo \"DefectDojo admin password: $(kubectl \\\nget secret defectdojo \\\n--namespace=defectdojo \\\n--output jsonpath='{.data.DD_ADMIN_PASSWORD}' \\\n| base64 --decode)\"\n
                                                              • Go to the User section
                                                              • Create a new user with write permission: DefectDojo set user permission
                                                            2. Get a token of the DefectDojo user:

                                                              • Login to the DefectDojo UI using the credentials from previous steps.
                                                              • Go to the API v2 key (token).
                                                              • Copy the API key.
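                                                              If a UI-less approach is preferred, the same token can also be requested from the DefectDojo REST API. A minimal sketch, assuming the standard api-token-auth endpoint and the ci user credentials created above:
                                                              curl -s -X POST https://defectdojo.<ROOT_DOMAIN>/api/v2/api-token-auth/ \\\n--data \"username=<ci_user>\" \\\n--data \"password=<ci_user_password>\"\n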
                                                            3. Provision the secret using EDP Portal, Manifest or with the externalSecrets operator:

                                                            EDP PortalManifestExternal Secrets Operator

                                                            Go to EDP Portal -> EDP -> Configuration -> DefectDojo. Update or fill in the URL and Token fields and click the Save button.

                                                            DefectDojo update manual secret

                                                            apiVersion: v1\nkind: Secret\nmetadata:\nname: ci-defectdojo\nnamespace: edp\nlabels:\napp.edp.epam.com/secret-type: defectdojo\nstringData:\nurl: https://defectdojo.example.com\ntoken: <token>\n

                                                            Store the DefectDojo URL and Token in AWS Parameter Store in the following format:

                                                            \"ci-defectdojo\":\n{\n\"url\": \"https://defectdojo.example.com\",\n\"token\": \"XXXXXXXXXXXX\"\n}\n
                                                            Go to EDP Portal -> EDP -> Configuration -> DefectDojo and see the Managed by External Secret message.

                                                            More details about the External Secrets Operator integration procedure can be found in the External Secrets Operator Integration page.
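                                                            For reference, a minimal ExternalSecret that maps the parameter above to the ci-defectdojo secret might look as follows. This is a sketch only: the aws-parameterstore SecretStore name and the key/property layout are assumptions modeled on the Harbor example later in this guide.
                                                            kubectl apply -f - <<'EOF'\napiVersion: external-secrets.io/v1beta1\nkind: ExternalSecret\nmetadata:\n  name: ci-defectdojo\n  namespace: edp\nspec:\n  refreshInterval: 1h\n  secretStoreRef:\n    kind: SecretStore\n    name: aws-parameterstore\n  target:\n    name: ci-defectdojo\n    template:\n      metadata:\n        labels:\n          app.edp.epam.com/secret-type: defectdojo\n  data:\n    - secretKey: url\n      remoteRef:\n        key: ci-defectdojo\n        property: url\n    - secretKey: token\n      remoteRef:\n        key: ci-defectdojo\n        property: token\nEOF\n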

                                                            After following the instructions provided, you should be able to integrate DefectDojo with the EPAM Delivery Platform using one of the available scenarios.

                                                            "},{"location":"operator-guide/install-defectdojo/#related-articles","title":"Related Articles","text":"
                                                            • Install External Secrets Operator
                                                            • External Secrets Operator Integration
                                                            • Install Harbor
                                                            "},{"location":"operator-guide/install-edp/","title":"Install EDP","text":"

                                                            Inspect the main steps to install EPAM Delivery Platform. Please check the Prerequisites Overview page before starting the installation. There are two recommended ways to deploy EPAM Delivery Platform:

                                                            • Using Helm (see below);
                                                            • Using Helmfile.

                                                            Note

                                                            The installation process below is given for a Kubernetes cluster. The steps that differ for an OpenShift cluster are indicated in the notes.

                                                            Disclaimer

                                                            EDP is aligned with industry standards for storing and managing sensitive data, ensuring optimal security. However, the use of custom solutions introduces uncertainties, so the responsibility for the safety of your data rests entirely with the platform administrator.

                                                            1. EDP manages secrets via External Secret Operator to integrate with a multitude of utilities. For insights into the secrets in use and their utilization, refer to the External Secrets Operator Integration page.

                                                            2. Create an edp namespace or a Kiosk space depending on whether Kiosk is used or not.

                                                              • Without Kiosk, create a namespace:

                                                                kubectl create namespace edp\n

                                                                Note

                                                                For an OpenShift cluster, run the oc command instead of the kubectl one.

                                                              • With Kiosk, create a relevant space:

                                                                apiVersion: tenancy.kiosk.sh/v1alpha1\nkind: Space\nmetadata:\nname: edp\nspec:\naccount: edp-admin\n

                                                              Note

                                                              Kiosk is mandatory for EDP v.2.8.x. It is not implemented for previous versions and is optional since EDP v.2.9.x.

                                                            3. For the EDP, it is required to have Keycloak access to perform the integration. To see the details on how to configure Keycloak correctly, please refer to the Install Keycloak page.
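                                                              A hypothetical example of providing these credentials to the platform as a Kubernetes secret (the secret name keycloak and its keys are assumptions; follow the Install Keycloak page for the authoritative steps):
                                                              kubectl -n edp create secret generic keycloak \\\n--from-literal=username=<keycloak_user> \\\n--from-literal=password=<keycloak_password>\n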

                                                            4. Add the Helm EPAMEDP Charts repository for the local client:

                                                              helm repo add epamedp https://epam.github.io/edp-helm-charts/stable\n
                                                            5. Choose the required Helm chart version:

                                                              helm search repo epamedp/edp-install\nNAME                    CHART VERSION   APP VERSION     DESCRIPTION\nepamedp/edp-install     3.4.1           3.4.1           A Helm chart for EDP Install\n

                                                              Note

                                                              It is highly recommended to use the latest released version.
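                                                              To list all published chart versions instead of only the latest one, the --versions flag can be added:
                                                              helm search repo epamedp/edp-install --versions\n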

                                                            6. EDP can be integrated with the following version control systems:

                                                              • Gerrit (by default)
                                                              • GitHub
                                                              • GitLab

                                                              This integration defines the version control system in which application development will be, or is already being, carried out. The global.gitProvider flag in the edp-install chart controls this integration:

                                                              Gerrit (by default)GitHubGitLab values.yaml
                                                              ...\nglobal:\ngitProvider: gerrit\n...\n
                                                              values.yaml
                                                              ...\nglobal:\ngitProvider: github\n...\n
                                                              values.yaml
                                                              ...\nglobal:\ngitProvider: gitlab\n...\n

                                                              By default, the internal Gerrit server is deployed as a result of EDP deployment. For more details on how to integrate EDP with GitLab or GitHub instead of Gerrit, please refer to the Integrate GitHub/GitLab in Tekton page.

                                                            7. Configure SonarQube integration. EDP provides two ways to work with SonarQube:

                                                              • External SonarQube - any SonarQube that is installed separately from EDP. For example, SonarQube that is installed using edp-cluster-add-ons or another public SonarQube server. For more details on how EDP recommends configuring SonarQube to work with the platform, please refer to the SonarQube Integration page.
                                                              • Internal SonarQube - SonarQube that is installed along with EDP.
                                                              External SonarQubeInternal SonarQube values.yaml
                                                              ...\nglobal:\n# -- Optional parameter. Link to use custom sonarqube. Format: http://<service-name>.<sonarqube-namespace>:9000 or http(s)://<endpoint>\nsonarUrl: \"http://sonar.example.com\"\nsonar-operator:\nenabled: false\n...\n

                                                              This scenario is pre-configured by default; all the necessary values are already pre-defined.

                                                            8. It is also mandatory to have Nexus configured to run the platform. EDP provides two ways to work with Nexus:

                                                              • External Nexus - any Nexus that is installed separately from EDP. For example, Nexus that is installed using edp-cluster-add-ons or another public Nexus server. For more details on how EDP recommends configuring Nexus to work with the platform, please refer to the Nexus Sonatype Integration page.
                                                              • Internal Nexus - Nexus that is installed along with EDP.
                                                              External NexusInternal Nexus values.yaml
                                                              ...\nglobal:\n# -- Optional parameter. Link to use custom nexus. Format: http://<service-name>.<nexus-namespace>:8081 or http://<ip-address>:<port>\nnexusUrl: \"http://nexus.example.com\"\nnexus-operator:\nenabled: false\n...\n

                                                              This scenario is pre-configured by default; all the necessary values are already pre-defined.

                                                            9. (Optional) Configure Container Registry for image storage.

                                                              Since EDP v3.4.0, users can configure the Harbor registry instead of AWS ECR and the OpenShift registry. We recommend installing Harbor using our edp-cluster-add-ons, although you can install it in any other way. To integrate EDP with Harbor, see the Harbor integration page.

                                                              To enable Harbor as a registry storage, use the values below:

                                                              global:\ndockerRegistry:\ntype: \"harbor\"\nurl: \"harbor.example.com\"\n

                                                            10. Check the parameters in the EDP installation chart. For details, please refer to the values.yaml file.
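                                                              One way to review the default parameters locally is to render them with Helm, for example:
                                                              helm show values epamedp/edp-install --version <edp_version> > values.yaml\n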

                                                            11. Install EDP in the edp namespace with the Helm tool:

                                                              helm install edp epamedp/edp-install --wait --timeout=900s \\\n--version <edp_version> \\\n--values values.yaml \\\n--namespace edp\n

                                                              See the details on the parameters below:

                                                              Example values.yaml file
                                                              global:\n# -- platform type that can be either \"kubernetes\" or \"openshift\"\nplatform: \"kubernetes\"\n# DNS wildcard for routing in the Kubernetes cluster;\ndnsWildCard: \"example.com\"\n# -- Administrators of your tenant\nadmins:\n- \"stub_user_one@example.com\"\n# -- Developers of your tenant\ndevelopers:\n- \"stub_user_one@example.com\"\n- \"stub_user_two@example.com\"\n# -- Can be gerrit, github or gitlab. By default: gerrit\ngitProvider: gerrit\n# -- Gerrit SSH node port\ngerritSSHPort: \"22\"\n# Keycloak address with which the platform will be integrated\nkeycloakUrl: \"https://keycloak.example.com\"\ndockerRegistry:\n# -- Docker Registry endpoint\nurl: \"<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com\"\ntype: \"ecr\"\n\n# AWS Region, e.g. \"eu-central-1\"\nawsRegion:\n\nargocd:\n# -- Enable ArgoCD integration\nenabled: true\n# -- ArgoCD URL in format schema://URI\n# -- By default, https://argocd.{{ .Values.global.dnsWildCard }}\nurl: \"\"\n\n# Kaniko configuration section\nkaniko:\n# -- AWS IAM role to be used for kaniko pod service account (IRSA). Format: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<AWS_IAM_ROLE_NAME>\nroleArn:\n\nedp-tekton:\n# Tekton Kaniko configuration section\nkaniko:\n# -- AWS IAM role to be used for kaniko pod service account (IRSA). Format: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<AWS_IAM_ROLE_NAME>\nroleArn:\n\nedp-headlamp:\nconfig:\noidc:\nenabled: false\n

                                                              Note

                                                              Set global.platform=openshift while deploying EDP in OpenShift.

                                                              Info

                                                              The full installation with integration between tools will take at least 10 minutes.

                                                            12. To check if the installation is successful, run the command below:

                                                              helm status <edp-release> -n edp\n
                                                              You can also check the Ingress endpoints to get the EDP Portal endpoint and open the EDP Portal UI:
                                                              kubectl describe ingress -n edp\n

                                                            13. Once EDP is successfully installed, you can navigate to our Use Cases to try out EDP functionality.

                                                            "},{"location":"operator-guide/install-edp/#related-articles","title":"Related Articles","text":"
                                                            • Quick Start
                                                            • Install EDP via Helmfile
                                                            • Integrate GitHub/GitLab in Jenkins
                                                            • Integrate GitHub/GitLab in Tekton
                                                            • GitHub Webhook Configuration
                                                            • GitLab Webhook Configuration
                                                            • Set Up Kubernetes
                                                            • Set Up OpenShift
                                                            • EDP Installation Prerequisites Overview
                                                            • Headlamp OIDC Integration
                                                            "},{"location":"operator-guide/install-external-secrets-operator/","title":"Install External Secrets Operator","text":"

                                                            Inspect the prerequisites and the main steps to perform for enabling External Secrets Operator in EDP.

                                                            "},{"location":"operator-guide/install-external-secrets-operator/#prerequisites","title":"Prerequisites","text":"
                                                            • Kubectl version 1.23.0 is installed. Please refer to the Kubernetes official website for details.
                                                            • Helm version 3.10.0+ is installed. Please refer to the Helm page on GitHub for details.
                                                            "},{"location":"operator-guide/install-external-secrets-operator/#installation","title":"Installation","text":"

                                                            To install External Secrets Operator with Helm, run the following commands:

                                                            helm repo add external-secrets https://charts.external-secrets.io\n\nhelm install external-secrets \\\nexternal-secrets/external-secrets \\\n--version 0.8.3 \\\n-n external-secrets \\\n--create-namespace\n
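                                                            To make sure the operator is up before creating SecretStore and ExternalSecret resources, check the pods in the external-secrets namespace (pod names depend on the release name used above):
                                                            kubectl get pods -n external-secrets\n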

                                                            Info

                                                            It is also possible to install External Secrets Operator using the Helmfile or Operator Lifecycle Manager (OLM).

                                                            "},{"location":"operator-guide/install-external-secrets-operator/#related-articles","title":"Related Articles","text":"
                                                            • External Secrets Operator Integration
                                                            • Install Harbor
                                                            "},{"location":"operator-guide/install-harbor/","title":"Install Harbor","text":"

                                                            EPAM Delivery Platform uses Harbor as a storage for application images that are created when building applications.

                                                            Inspect the prerequisites and the main steps to perform for enabling Harbor in EDP.

                                                            "},{"location":"operator-guide/install-harbor/#prerequisites","title":"Prerequisites","text":"
                                                            • Kubectl version 1.26.0 is installed.
                                                            • Helm version 3.12.0+ is installed.
                                                            "},{"location":"operator-guide/install-harbor/#installation","title":"Installation","text":"

                                                            To install Harbor with Helm, follow the steps below:

                                                            1. Create a namespace for Harbor:

                                                              kubectl create namespace harbor\n
                                                            2. Create a secret for administrator user and registry:

                                                              ManuallyExternal Secret Operator
                                                              kubectl create secret generic harbor \\\n--from-literal=HARBOR_ADMIN_PASSWORD=<secret> \\\n--from-literal=REGISTRY_HTPASSWD=<secret> \\\n--from-literal=REGISTRY_PASSWD=<secret> \\\n--from-literal=secretKey=<secret> \\\n--namespace harbor\n
                                                              apiVersion: external-secrets.io/v1beta1\nkind: ExternalSecret\nmetadata:\nname: harbor\nnamespace: harbor\nspec:\nrefreshInterval: 1h\nsecretStoreRef:\nkind: SecretStore\nname: aws-parameterstore\ndata:\n- secretKey: HARBOR_ADMIN_PASSWORD\nremoteRef:\nconversionStrategy: Default\ndecodingStrategy: None\nkey: /control-plane/deploy-secrets\nproperty: harbor.HARBOR_ADMIN_PASSWORD\n- secretKey: secretKey\nremoteRef:\nconversionStrategy: Default\ndecodingStrategy: None\nkey: /control-plane/deploy-secrets\nproperty: harbor.secretKey\n- secretKey: REGISTRY_HTPASSWD\nremoteRef:\nconversionStrategy: Default\ndecodingStrategy: None\nkey: /control-plane/deploy-secrets\nproperty: harbor.REGISTRY_HTPASSWD\n- secretKey: REGISTRY_PASSWD\nremoteRef:\nconversionStrategy: Default\ndecodingStrategy: None\nkey: /control-plane/deploy-secrets\nproperty: harbor.REGISTRY_PASSWD\n

                                                              Note

                                                              The HARBOR_ADMIN_PASSWORD is the initial password of the Harbor admin. The secretKey is the secret key used for encryption; it must be 16 characters long. The REGISTRY_PASSWD is the Harbor registry password. The REGISTRY_HTPASSWD is the login and password in htpasswd string format. This value is the string in the password file generated by the htpasswd command, where the username is harbor_registry_user and the encryption type is bcrypt. See the example below:

                                                              htpasswd -bBc passwordfile harbor_registry_user harbor_registry_password\n
                                                              The username must be harbor_registry_user. The password must be the value from REGISTRY_PASSWD.

                                                            3. Add the Helm Harbor Charts repository for the local client:

                                                              helm repo add harbor https://helm.goharbor.io\n
                                                            4. Check the parameters in the Harbor installation chart. For details, please refer to the values.yaml file.
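                                                              To review the chart defaults locally before overriding them, for example:
                                                              helm show values harbor/harbor --version 1.12.2 > values.yaml\n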

                                                            5. Install Harbor in the harbor namespace with the Helm tool:

                                                              helm install harbor harbor/harbor \\\n--version 1.12.2 \\\n--namespace harbor \\\n--values values.yaml\n

                                                              See the details on the parameters below:

                                                              Example values.yaml file

                                                              # we use Harbor secret to consolidate all the Harbor secrets\nexistingSecretAdminPassword: harbor\nexistingSecretAdminPasswordKey: HARBOR_ADMIN_PASSWORD\nexistingSecretSecretKey: harbor\n\ncore:\n# The XSRF key. Will be generated automatically if it isn't specified\nxsrfKey: \"\"\njobservice:\n# Secret is used when job service communicates with other components.\n# If a secret key is not specified, Helm will generate one.\n# Must be a string of 16 chars.\nsecret: \"\"\nregistry:\n# Secret is used to secure the upload state from client\n# and registry storage backend.\n# If a secret key is not specified, Helm will generate one.\n# Must be a string of 16 chars.\nsecret: \"\"\ncredentials:\nusername: harbor_registry_user\nexistingSecret: harbor\nfullnameOverride: harbor\n# If Harbor is deployed behind the proxy, set it as the URL of proxy\nexternalURL: https://core.harbor.domain\nipFamily:\nipv6:\nenabled: false\nexpose:\ntls:\nenabled: false\ningress:\nhosts:\ncore: core.harbor.domain\nnotary: notary.harbor.domain\nupdateStrategy:\ntype: Recreate\npersistence:\npersistentVolumeClaim:\nregistry:\nsize: 30Gi\njobservice:\njobLog:\nsize: 1Gi\ndatabase:\nsize: 2Gi\nredis:\nsize: 1Gi\ntrivy:\nsize: 5Gi\ndatabase:\ninternal:\n# The initial superuser password for internal database\npassword: \"changeit\"\n
                                                            6. To check if the installation is successful, run the command below:

                                                              helm status <harbor-release> -n harbor\n
                                                              You can also check the Ingress endpoints to get the Harbor endpoint and open the Harbor UI:
                                                              kubectl describe ingress <harbor_ingress> -n harbor\n

                                                            "},{"location":"operator-guide/install-harbor/#related-articles","title":"Related Articles","text":"
                                                            • Install EDP
                                                            • Integrate Harbor With EDP Pipelines
                                                            "},{"location":"operator-guide/install-ingress-nginx/","title":"Install NGINX Ingress Controller","text":"

                                                            Inspect the prerequisites and the main steps to perform for installing NGINX Ingress Controller on Kubernetes.

                                                            "},{"location":"operator-guide/install-ingress-nginx/#prerequisites","title":"Prerequisites","text":"
                                                            • Kubectl version 1.23.0 is installed. Please refer to the Kubernetes official website for details.
                                                            • Helm version 3.10.2 is installed. Please refer to the Helm page on GitHub for details.
                                                            "},{"location":"operator-guide/install-ingress-nginx/#installation","title":"Installation","text":"

                                                            Info

                                                            It is also possible to install NGINX Ingress Controller using the Helmfile. For details, please refer to the Install via Helmfile page.

                                                            To install the ingress-nginx chart, follow the steps below:

                                                            1. Create an ingress-nginx namespace:

                                                              kubectl create namespace ingress-nginx\n
                                                            2. Add a chart repository:

                                                              helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx\nhelm repo update\n
                                                            3. Install the ingress-nginx chart:

                                                              helm install ingress ingress-nginx/ingress-nginx \\\n--version 4.7.0 \\\n--values values.yaml \\\n--namespace ingress-nginx\n

                                                              Check out the values.yaml file sample of the ingress-nginx chart customization:

                                                            View: values.yaml
                                                            controller:\naddHeaders:\nX-Content-Type-Options: nosniff\nX-Frame-Options: SAMEORIGIN\nresources:\nlimits:\nmemory: \"256Mi\"\nrequests:\ncpu: \"50m\"\nmemory: \"128M\"\nconfig:\nssl-redirect: 'true'\nclient-header-buffer-size: '64k'\nhttp2-max-field-size: '64k'\nhttp2-max-header-size: '64k'\nlarge-client-header-buffers: '4 64k'\nupstream-keepalive-timeout: '120'\nkeep-alive: '10'\nuse-forwarded-headers: 'true'\nproxy-real-ip-cidr: '172.32.0.0/16'\nproxy-buffer-size: '8k'\n\n# To watch Ingress objects without the ingressClassName field set parameter value to true.\n# https://kubernetes.github.io/ingress-nginx/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do\nwatchIngressWithoutClass: true\n\nservice:\ntype: NodePort\nnodePorts:\nhttp: 32080\nhttps: 32443\nupdateStrategy:\nrollingUpdate:\nmaxUnavailable: 1\ntype: RollingUpdate\nmetrics:\nenabled: true\ndefaultBackend:\nenabled: true\nserviceAccount:\ncreate: true\nname: nginx-ingress-service-account\n

                                                            Warning

                                                            Align value controller.config.proxy-real-ip-cidr with AWS VPC CIDR.
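                                                            A sketch of how the VPC CIDR can be looked up with the AWS CLI (assumes the CLI is configured and the VPC ID is known):
                                                            aws ec2 describe-vpcs --vpc-ids <vpc_id> --query 'Vpcs[0].CidrBlock' --output text\n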

                                                            "},{"location":"operator-guide/install-keycloak/","title":"Install Keycloak","text":"

                                                            Inspect the prerequisites and the main steps to perform for installing Keycloak.

                                                            Info

                                                            The installation process below is given for a Kubernetes cluster. The steps that differ for an OpenShift cluster are indicated in the warning blocks.

                                                            "},{"location":"operator-guide/install-keycloak/#prerequisites","title":"Prerequisites","text":"
                                                            • Kubectl version 1.23.0 is installed. Please refer to the Kubernetes official website for details.
                                                            • Helm version 3.10.0+ is installed. Please refer to the Helm page on GitHub for details.

                                                            Info

                                                            The EDP team uses the Keycloakx Helm chart from the codecentric repository, but other repositories can be used as well (e.g., Bitnami). Before installing Keycloak, it is necessary to install a PostgreSQL database.

                                                            Info

                                                            It is also possible to install Keycloak using the Helmfile. For details, please refer to the Install via Helmfile page.

                                                            "},{"location":"operator-guide/install-keycloak/#postgresql-installation","title":"PostgreSQL Installation","text":"

                                                            To install PostgreSQL, follow the steps below:

                                                            1. Check that a security namespace is created. If not, run the following command to create it:

                                                              kubectl create namespace security\n

                                                              Warning

                                                              On the OpenShift platform, apply the SecurityContextConstraints resource. Change the namespace in the users section if required.

                                                              View: keycloak-scc.yaml
                                                              allowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegeEscalation: true\nallowPrivilegedContainer: false\nallowedCapabilities: null\napiVersion: security.openshift.io/v1\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- min: 999\nmax: 65543\ngroups: []\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: keycloak\npriority: 1\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities:\n- KILL\n- MKNOD\n- SETUID\n- SETGID\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMin: 1\nuidRangeMax: 65543\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nusers:\n- system:serviceaccount:security:keycloakx\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n
                                                              View: postgresql-keycloak-scc.yaml
                                                              allowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegeEscalation: true\nallowPrivilegedContainer: false\nallowedCapabilities: null\napiVersion: security.openshift.io/v1\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- min: 999\nmax: 65543\ngroups: []\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: postgresql-keycloak\npriority: 1\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities:\n- KILL\n- MKNOD\n- SETUID\n- SETGID\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMin: 1\nuidRangeMax: 65543\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nusers:\n- system:serviceaccount:security:default\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n
                                                            2. Create PostgreSQL admin secret:

                                                              kubectl -n security create secret generic keycloak-postgresql \\\n--from-literal=password=<postgresql_password> \\\n--from-literal=postgres-password=<postgresql_postgres_password>\n
                                                            3. Add a helm chart repository:

                                                              helm repo add bitnami https://charts.bitnami.com/bitnami\nhelm repo update\n
                                                            4. Install PostgreSQL v15.2.0 using bitnami/postgresql Helm chart v12.1.15:

                                                              Info

                                                              PostgreSQL can be deployed in production-ready mode. For example, it may include multiple replicas, persistent storage, autoscaling, and monitoring. For details, please refer to the official Chart documentation.

                                                              helm install postgresql bitnami/postgresql \\\n--version 12.1.15 \\\n--values values.yaml \\\n--namespace security\n

                                                              Check out the values.yaml file sample of the PostgreSQL customization:

                                                              View: values.yaml
                                                              # PostgreSQL read only replica parameters\nreadReplicas:\n# Number of PostgreSQL read only replicas\nreplicaCount: 1\n\nimage:\ntag: 15.2.0-debian-11-r0\n\nglobal:\npostgresql:\nauth:\nusername: admin\nexistingSecret: keycloak-postgresql\ndatabase: keycloak\n\nprimary:\npersistence:\nenabled: true\nsize: 3Gi\n
                                                            "},{"location":"operator-guide/install-keycloak/#keycloak-installation","title":"Keycloak Installation","text":"

                                                            To install Keycloak, follow the steps below:

                                                            1. Use the security namespace from the PostgreSQL installation.

                                                            2. Add a chart repository:

                                                              helm repo add codecentric https://codecentric.github.io/helm-charts\nhelm repo update\n
                                                            3. Create Keycloak admin secret:

                                                              kubectl -n security create secret generic keycloak-admin-creds \\\n--from-literal=username=<keycloak_admin_username> \\\n--from-literal=password=<keycloak_admin_password>\n
                                                            4. Install Keycloak 20.0.3 using codecentric/keycloakx Helm chart:

                                                              Info

                                                              Keycloak can be deployed in production ready mode. For example, it may include multiple replicas, persistent storage, autoscaling, and monitoring. For details, please refer to the official Chart documentation.

                                                              helm install keycloakx codecentric/keycloakx \\\n--version 2.2.1 \\\n--values values.yaml \\\n--namespace security\n

                                                              Check out the values.yaml file sample of the Keycloak customization:

                                                              View: values.yaml
                                                              replicas: 1\n\n# Deploy the latest version\nimage:\ntag: \"20.0.3\"\n\n# start: create OpenShift realm which is required by EDP\nextraInitContainers: |\n- name: realm-provider\nimage: busybox\nimagePullPolicy: IfNotPresent\ncommand:\n- sh\nargs:\n- -c\n- |\necho '{\"realm\": \"openshift\",\"enabled\": true}' > /opt/keycloak/data/import/openshift.json\nvolumeMounts:\n- name: realm\nmountPath: /opt/keycloak/data/import\n\n# The following parameter is unrecommended to expose. Exposed health checks lead to an unnecessary attack vector.\nhealth:\nenabled: false\n# The following parameter is unrecommended to expose. Exposed metrics lead to an unnecessary attack vector.\nmetrics:\nenabled: false\n\nextraVolumeMounts: |\n- name: realm\nmountPath: /opt/keycloak/data/import\n\nextraVolumes: |\n- name: realm\nemptyDir: {}\n\ncommand:\n- \"/opt/keycloak/bin/kc.sh\"\n- \"--verbose\"\n- \"start\"\n- \"--auto-build\"\n- \"--http-enabled=true\"\n- \"--http-port=8080\"\n- \"--hostname-strict=false\"\n- \"--hostname-strict-https=false\"\n- \"--spi-events-listener-jboss-logging-success-level=info\"\n- \"--spi-events-listener-jboss-logging-error-level=warn\"\n- \"--import-realm\"\n\nextraEnv: |\n- name: KC_PROXY\nvalue: \"passthrough\"\n- name: KEYCLOAK_ADMIN\nvalueFrom:\nsecretKeyRef:\nname: keycloak-admin-creds\nkey: username\n- name: KEYCLOAK_ADMIN_PASSWORD\nvalueFrom:\nsecretKeyRef:\nname: keycloak-admin-creds\nkey: password\n- name: JAVA_OPTS_APPEND\nvalue: >-\n-XX:+UseContainerSupport\n-XX:MaxRAMPercentage=50.0\n-Djava.awt.headless=true\n-Djgroups.dns.query={{ include \"keycloak.fullname\" . }}-headless\n\n# This block should be uncommented if you install Keycloak on Kubernetes\ningress:\nenabled: true\nannotations:\nkubernetes.io/ingress.class: nginx\ningress.kubernetes.io/affinity: cookie\n# The following parameter is unrecommended to expose. Admin paths lead to an unnecessary attack vector.\nconsole:\nenabled: false\nrules:\n- host: keycloak.<ROOT_DOMAIN>\npaths:\n- path: '{{ tpl .Values.http.relativePath $ | trimSuffix \"/\" }}/'\npathType: Prefix\n\n# This block should be uncommented if you set Keycloak to OpenShift and change the host field\n# route:\n#   enabled: false\n#   # Path for the Route\n#   path: '/'\n#   # Host name for the Route\n#   host: \"keycloak.<ROOT_DOMAIN>\"\n#   # TLS configuration\n#   tls:\n#     enabled: true\n\nresources:\nlimits:\nmemory: \"2048Mi\"\nrequests:\ncpu: \"50m\"\nmemory: \"512Mi\"\n\n# Check database readiness at startup\ndbchecker:\nenabled: true\n\ndatabase:\nvendor: postgres\nexistingSecret: keycloak-postgresql\nhostname: postgresql\nport: 5432\nusername: admin\ndatabase: keycloak\n
                                                            "},{"location":"operator-guide/install-keycloak/#configuration","title":"Configuration","text":"

                                                            To prepare Keycloak for integration with EDP, follow the steps below:

                                                            1. Ensure that the openshift realm is created.

                                                            2. Create the edp_<EDP_PROJECT> user and set the password in the Master realm.

                                                              Note

                                                              This user should be used by EDP to access Keycloak. Please refer to the Install EDP and Install EDP via Helmfile sections for details.

                                                            3. In the Role Mapping tab, assign the proper roles to the user (a scripted alternative is sketched after this list):

                                                              • Realm Roles:

                                                                • create-realm,
                                                                • offline_access,
                                                                • uma_authorization
                                                              • Client Roles openshift-realm:

                                                                • impersonation,
                                                                • manage-authorization,
                                                                • manage-clients,
                                                                • manage-users

                                                              Role mappings
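                                                              The same user and role configuration can be scripted with the Keycloak admin CLI. A minimal sketch, assuming the keycloakx-0 pod name, the /auth relative path, and the admin credentials created earlier; all of these are assumptions, so adjust them to your deployment:
                                                              kubectl -n security exec keycloakx-0 -- /opt/keycloak/bin/kcadm.sh config credentials \\\n--server http://localhost:8080/auth --realm master \\\n--user <keycloak_admin_username> --password <keycloak_admin_password>\nkubectl -n security exec keycloakx-0 -- /opt/keycloak/bin/kcadm.sh create users \\\n-r master -s username=edp_<EDP_PROJECT> -s enabled=true\nkubectl -n security exec keycloakx-0 -- /opt/keycloak/bin/kcadm.sh set-password \\\n-r master --username edp_<EDP_PROJECT> --new-password <edp_user_password>\nkubectl -n security exec keycloakx-0 -- /opt/keycloak/bin/kcadm.sh add-roles -r master \\\n--uusername edp_<EDP_PROJECT> --rolename create-realm --rolename offline_access --rolename uma_authorization\nkubectl -n security exec keycloakx-0 -- /opt/keycloak/bin/kcadm.sh add-roles -r master \\\n--uusername edp_<EDP_PROJECT> --cclientid openshift-realm \\\n--rolename impersonation --rolename manage-authorization --rolename manage-clients --rolename manage-users\n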

                                                            "},{"location":"operator-guide/install-keycloak/#related-articles","title":"Related Articles","text":"
                                                            • Install EDP
                                                            • Install via Helmfile
                                                            • Install Harbor
                                                            "},{"location":"operator-guide/install-kiosk/","title":"Set Up Kiosk","text":"

                                                            Kiosk is a multi-tenancy extension for managing tenants and namespaces in a shared Kubernetes cluster. Within EDP, Kiosk is used to separate resources and enables the following options (see more details):

                                                            • Access to the EDP tenants in a Kubernetes cluster;
                                                            • Multi-tenancy access at the service account level for application deploy.

                                                            Inspect the main steps to set up Kiosk for the subsequent EDP installation.

                                                            Note

                                                            Kiosk deployment is mandatory for EDP v.2.8.x. In earlier versions, Kiosk is not implemented. Since EDP v.2.9.0, integration with Kiosk is an optional feature. If you do not want to use it, skip those steps and disable it in the Helm parameters during EDP deployment.

                                                            # global.kioskEnabled: <true/false>\n
                                                            "},{"location":"operator-guide/install-kiosk/#prerequisites","title":"Prerequisites","text":"
                                                            • Kubectl version 1.18.0 is installed. Please refer to the Kubernetes official website for details.
                                                            • Helm version 3.6.0 is installed. Please refer to the Helm page on GitHub for details.
                                                            "},{"location":"operator-guide/install-kiosk/#installation","title":"Installation","text":"
                                                            • Deploy Kiosk version 0.2.11 in the cluster. To install it, run the following command:
                                                                # Install kiosk with helm v3\n\n  helm repo add kiosk https://charts.devspace.sh/\n  kubectl create namespace kiosk\n  helm install kiosk --version 0.2.11 kiosk/kiosk -n kiosk --atomic\n

                                                            For more details, please refer to the Kiosk page on GitHub.

                                                            "},{"location":"operator-guide/install-kiosk/#configuration","title":"Configuration","text":"

                                                            To provide access to the EDP tenant, follow the steps below.

                                                            • Check that a security namespace is created. If not, run the following command to create it:
                                                                kubectl create namespace security\n

                                                            Note

                                                            On an OpenShift cluster, run the oc command instead of the kubectl one.

                                                            • Add a service account to the security namespace.
                                                                kubectl -n security create sa edp\n

                                                            Info

                                                            Please note that edp is the name of the EDP tenant here and in all the following steps.

                                                            • Apply the Account template to the cluster. Please check the sample below:
                                                              apiVersion: tenancy.kiosk.sh/v1alpha1\nkind: Account\nmetadata:\nname: edp-admin\nspec:\nspace:\nclusterRole: kiosk-space-admin\nsubjects:\n- kind: ServiceAccount\nname: edp\nnamespace: security\n
                                                            • Apply the ClusterRoleBinding to the 'kiosk-edit' cluster role (this role is added during the Kiosk installation). Please check the sample below:
                                                              apiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\nname: edp-kiosk-edit\nsubjects:\n- kind: ServiceAccount\nname: edp\nnamespace: security\nroleRef:\nkind: ClusterRole\nname: kiosk-edit\napiGroup: rbac.authorization.k8s.io\n
                                                            • To provide access to the EDP tenant, generate a kubeconfig with the edp Service Account permissions. The edp service account created earlier is located in the security namespace.
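                                                            A minimal sketch of extracting the service account token and adding it to the current kubeconfig (assumes the cluster still auto-creates service account token secrets, which holds for the Kubernetes versions listed in the prerequisites; the context name is illustrative and <cluster_name> must match an existing cluster entry):
                                                            SA_SECRET=$(kubectl -n security get sa edp -o jsonpath='{.secrets[0].name}')\nSA_TOKEN=$(kubectl -n security get secret \"${SA_SECRET}\" -o jsonpath='{.data.token}' | base64 -d)\nkubectl config set-credentials edp --token=\"${SA_TOKEN}\"\nkubectl config set-context edp-tenant --cluster=<cluster_name> --user=edp --namespace=edp\nkubectl config use-context edp-tenant\n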
                                                            "},{"location":"operator-guide/install-loki/","title":"Install Grafana Loki","text":"

                                                            EDP configures logging with the help of the Grafana Loki aggregation system.

                                                            "},{"location":"operator-guide/install-loki/#installation","title":"Installation","text":"

                                                            To install Loki, follow the steps below:

                                                            1. Create logging namespace:

                                                                kubectl create namespace logging\n

                                                              Note

                                                              On the OpenShift cluster, run the oc command instead of the kubectl command.

                                                            2. Add a chart repository:

                                                                helm repo add grafana https://grafana.github.io/helm-charts\n  helm repo update\n

                                                              Note

                                                              It is possible to use Amazon Simple Storage Service (Amazon S3) as object storage for Loki. To configure access, please refer to the IRSA for Loki documentation.

                                                            3. Install Loki v.2.6.0:

                                                                helm install loki grafana/loki \\\n  --version 2.6.0 \\\n  --values values.yaml \\\n  --namespace logging\n

                                                              Check out the values.yaml file sample of the Loki customization:

                                                              View: values.yaml
                                                              image:\nrepository: grafana/loki\ntag: 2.3.0\nconfig:\nauth_enabled: false\nschema_config:\nconfigs:\n- from: 2021-06-01\nstore: boltdb-shipper\nobject_store: s3\nschema: v11\nindex:\nprefix: loki_index_\nperiod: 24h\nstorage_config:\naws:\ns3: s3://<AWS_REGION>/loki-<CLUSTER_NAME>\nboltdb_shipper:\nactive_index_directory: /data/loki/index\ncache_location: /data/loki/boltdb-cache\nshared_store: s3\nchunk_store_config:\nmax_look_back_period: 24h\nresources:\nlimits:\nmemory: \"128Mi\"\nrequests:\ncpu: \"50m\"\nmemory: \"128Mi\"\nserviceAccount:\ncreate: true\nname: edp-loki\nannotations:\neks.amazonaws.com/role-arn: \"arn:aws:iam::<AWS_ACCOUNT_ID>:role/AWSIRSA\u2039CLUSTER_NAME\u203a\u2039LOKI_NAMESPACE\u203aLoki\"\npersistence:\nenabled: false\n

                                                              Note

                                                              In case of using cluster scheduling and amazon-eks-pod-identity-webhook, it is necessary to restart the Loki pod after the cluster is up and running. Please refer to the Schedule Pods Restart documentation.

                                                            4. Configure a custom bucket policy to delete old data.
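                                                              For example, with the S3 storage configured above, an S3 lifecycle rule can expire old chunks (a sketch; the bucket name and the retention period are assumptions):
                                                              aws s3api put-bucket-lifecycle-configuration \\\n--bucket loki-<CLUSTER_NAME> \\\n--lifecycle-configuration '{\"Rules\": [{\"ID\": \"expire-old-loki-data\", \"Status\": \"Enabled\", \"Filter\": {\"Prefix\": \"\"}, \"Expiration\": {\"Days\": 30}}]}'\n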

                                                            "},{"location":"operator-guide/install-reportportal/","title":"Install ReportPortal","text":"

                                                            Inspect the prerequisites and the main steps to perform for installing ReportPortal.

                                                            Info

                                                            It is also possible to install ReportPortal using the Helmfile. For details, please refer to the Install via Helmfile page.

                                                            "},{"location":"operator-guide/install-reportportal/#prerequisites","title":"Prerequisites","text":"
                                                            • Kubectl version 1.23.0 is installed. Please refer to the Kubernetes official website for details.
                                                            • Helm version 3.10.2 is installed. Please refer to the Helm page on GitHub for details.

                                                            Info

                                                            Please refer to the ReportPortal Helm Chart section for details.

                                                            "},{"location":"operator-guide/install-reportportal/#minio-installation","title":"MinIO Installation","text":"

                                                            To install MinIO, follow the steps below:

                                                            1. Check that the edp namespace is created. If not, run the following command to create it:

                                                              kubectl create namespace edp\n

                                                              For the OpenShift users:

                                                              When using the OpenShift platform, install the SecurityContextConstraints resources. In case of using a custom namespace for ReportPortal, change the namespace in the users section.

                                                              View: report-portal-third-party-resources-scc.yaml
                                                              apiVersion: security.openshift.io/v1\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: report-portal-minio-rabbitmq-postgresql\nallowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegeEscalation: true\nallowPrivilegedContainer: false\nallowedCapabilities: null\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- min: 999\nmax: 65543\ngroups: []\npriority: 1\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities:\n- KILL\n- MKNOD\n- SETUID\n- SETGID\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMin: 1\nuidRangeMax: 65543\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nusers:\n- system:serviceaccount:report-portal:minio\n- system:serviceaccount:report-portal:rabbitmq\n- system:serviceaccount:report-portal:postgresql\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n
                                                              View: report-portal-elasticsearch-scc.yaml
                                                              apiVersion: security.openshift.io/v1\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: report-portal-elasticsearch\nallowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegedContainer: true\nallowedCapabilities: []\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- max: 1000\nmin: 1000\ngroups: []\npriority: 0\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities: []\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMax: 1000\nuidRangeMin: 0\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nusers:\n- system:serviceaccount:report-portal:elasticsearch-master\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n
                                                            2. Add a chart repository:

                                                              helm repo add bitnami https://charts.bitnami.com/bitnami\nhelm repo update\n
                                                            3. Create MinIO admin secret:

                                                              kubectl -n edp create secret generic reportportal-minio-creds \\\n--from-literal=root-password=<root_password> \\\n--from-literal=root-user=<root_user>\n
                                                            4. Install MinIO v.11.10.3 using bitnami/minio Helm chart v.11.10.3:

                                                              helm install minio bitnami/minio \\\n--version 11.10.3 \\\n--values values.yaml \\\n--namespace edp\n

                                                              Check out the values.yaml file sample of the MinIO customization:

                                                              View: values.yaml
                                                              auth:\nexistingSecret: reportportal-minio-creds\npersistence:\nsize: 1Gi\n
                                                            "},{"location":"operator-guide/install-reportportal/#rabbitmq-installation","title":"RabbitMQ Installation","text":"

                                                            To install RabbitMQ, follow the steps below:

                                                            1. Use the edp namespace from the MinIO installation.

                                                            2. Use the bitnami chart repository from the MinIO installation.

                                                            3. Create RabbitMQ admin secret:

                                                              kubectl -n edp create secret generic reportportal-rabbitmq-creds \\\n--from-literal=rabbitmq-password=<rabbitmq_password> \\\n--from-literal=rabbitmq-erlang-cookie=<rabbitmq_erlang_cookie>\n

                                                              Warning

                                                              The rabbitmq_password password must be 10 characters long. The rabbitmq_erlang_cookie password must be 32 characters long.

                                                            4. Install RabbitMQ v.10.3.8 using bitnami/rabbitmq Helm chart v.10.3.8:

                                                              helm install rabbitmq bitnami/rabbitmq \\\n--version 10.3.8 \\\n--values values.yaml \\\n--namespace edp\n

                                                              Check out the values.yaml file sample of the RabbitMQ customization:

                                                              View: values.yaml
                                                              auth:\nexistingPasswordSecret: reportportal-rabbitmq-creds\nexistingErlangSecret: reportportal-rabbitmq-creds\npersistence:\nsize: 1Gi\n
                                                            5. After the RabbitMQ pod reaches the Running status, configure the RabbitMQ memory threshold:

                                                              kubectl -n edp exec -it rabbitmq-0 -- rabbitmqctl set_vm_memory_high_watermark 0.8\n
                                                            "},{"location":"operator-guide/install-reportportal/#elasticsearch-installation","title":"Elasticsearch Installation","text":"

                                                            To install Elasticsearch, follow the steps below:

                                                            1. Use edp namespace from the MinIO installation.

                                                            2. Add a chart repository:

                                                              helm repo add elastic https://helm.elastic.co\nhelm repo update\n
                                                            3. Install Elasticsearch v.7.17.3 using elastic/elasticsearch Helm chart v.7.17.3:

                                                              helm install elasticsearch elastic/elasticsearch \\\n--version 7.17.3 \\\n--values values.yaml \\\n--namespace edp\n

                                                              Check out the values.yaml file sample of the Elasticsearch customization:

                                                              View: values.yaml
                                                              replicas: 1\n\nextraEnvs:\n- name: discovery.type\nvalue: single-node\n- name: cluster.initial_master_nodes\nvalue: \"\"\n\nrbac:\ncreate: true\n\nresources:\nrequests:\ncpu: \"100m\"\nmemory: \"2Gi\"\n\nvolumeClaimTemplate:\nresources:\nrequests:\nstorage: 3Gi\n
                                                            "},{"location":"operator-guide/install-reportportal/#postgresql-installation","title":"PostgreSQL Installation","text":"

                                                            To install PostgreSQL, follow the steps below:

                                                            1. Use edp namespace from the MinIO installation.

                                                            2. Add a chart repository:

                                                              helm repo add bitnami-archive https://raw.githubusercontent.com/bitnami/charts/archive-full-index/bitnami\nhelm repo update\n
                                                            3. Create PostgreSQL admin secret:

                                                              kubectl -n edp create secret generic reportportal-postgresql-creds \\\n--from-literal=postgresql-password=<postgresql_password> \\\n--from-literal=postgresql-postgres-password=<postgresql_postgres_password>\n

                                                              Warning

                                                              The postgresql_password and postgresql_postgres_password passwords must be 16 characters long.

                                                            4. Install PostgreSQL v.10.9.4 using Helm chart v.10.9.4:

                                                              helm install postgresql bitnami-archive/postgresql \\\n--version 10.9.4 \\\n--values values.yaml \\\n--namespace edp\n

                                                              Check out the values.yaml file sample of the PostgreSQL customization:

                                                              View: values.yaml
                                                              persistence:\nsize: 1Gi\nresources:\nrequests:\ncpu: \"100m\"\nserviceAccount:\nenabled: true\npostgresqlUsername: \"rpuser\"\npostgresqlDatabase: \"reportportal\"\nexistingSecret: \"reportportal-postgresql-creds\"\ninitdbScripts:\ninit_postgres.sh: |\n#!/bin/sh\n/opt/bitnami/postgresql/bin/psql -U postgres -d ${POSTGRES_DB} -c 'CREATE EXTENSION IF NOT EXISTS ltree; CREATE EXTENSION IF NOT EXISTS pgcrypto; CREATE EXTENSION IF NOT EXISTS pg_trgm;'\n
                                                            "},{"location":"operator-guide/install-reportportal/#reportportal-installation","title":"ReportPortal Installation","text":"

                                                            To install ReportPortal, follow the steps below:

                                                            1. Use edp namespace from the MinIO installation.

                                                              For the OpenShift users:

                                                              When using the OpenShift platform, install the SecurityContextConstraints resource. If a custom namespace is used for ReportPortal, change the namespace in the users section.

                                                              View: report-portal-reportportal-scc.yaml
                                                              apiVersion: security.openshift.io/v1\nkind: SecurityContextConstraints\nmetadata:\nannotations:\n\"helm.sh/hook\": \"pre-install\"\nname: report-portal\nallowHostDirVolumePlugin: false\nallowHostIPC: false\nallowHostNetwork: false\nallowHostPID: false\nallowHostPorts: false\nallowPrivilegedContainer: true\nallowedCapabilities: []\nallowedFlexVolumes: []\ndefaultAddCapabilities: []\nfsGroup:\ntype: MustRunAs\nranges:\n- max: 1000\nmin: 1000\ngroups: []\npriority: 0\nreadOnlyRootFilesystem: false\nrequiredDropCapabilities: []\nrunAsUser:\ntype: MustRunAsRange\nuidRangeMax: 1000\nuidRangeMin: 0\nseLinuxContext:\ntype: MustRunAs\nsupplementalGroups:\ntype: RunAsAny\nusers:\n- system:serviceaccount:report-portal:reportportal\nvolumes:\n- configMap\n- downwardAPI\n- emptyDir\n- persistentVolumeClaim\n- projected\n- secret\n
                                                            2. Add a chart repository:

                                                              helm repo add report-portal \"https://reportportal.github.io/kubernetes\"\nhelm repo update\n
                                                            3. Install ReportPortal v.5.8.0 using Helm chart v.5.8.0:

                                                              helm install report-portal report-portal/reportportal \\\n--values values.yaml \\\n--namespace edp\n

                                                              Check out the values.yaml file sample of the ReportPortal customization:

                                                              View: values.yaml
                                                              serviceindex:\nresources:\nrequests:\ncpu: 50m\nuat:\nresources:\nrequests:\ncpu: 50m\nserviceui:\nresources:\nrequests:\ncpu: 50m\nserviceAccountName: \"reportportal\"\nsecurityContext:\nrunAsUser: 0\nserviceapi:\nresources:\nrequests:\ncpu: 50m\nserviceanalyzer:\nresources:\nrequests:\ncpu: 50m\nserviceanalyzertrain:\nresources:\nrequests:\ncpu: 50m\n\nrabbitmq:\nSecretName: \"reportportal-rabbitmq-creds\"\nendpoint:\naddress: rabbitmq.<EDP_PROJECT>.svc.cluster.local\nuser: user\napiuser: user\n\npostgresql:\nSecretName: \"reportportal-postgresql-creds\"\nendpoint:\naddress: postgresql.<EDP_PROJECT>.svc.cluster.local\n\nelasticsearch:\nendpoint: http://elasticsearch-master.<EDP_PROJECT>.svc.cluster.local:9200\n\nminio:\nsecretName: \"reportportal-minio-creds\"\nendpoint: http://minio.<EDP_PROJECT>.svc.cluster.local:9000\nendpointshort: minio.<EDP_PROJECT>.svc.cluster.local:9000\naccesskeyName: \"root-user\"\nsecretkeyName: \"root-password\"\n\ningress:\n# IF YOU HAVE SOME DOMAIN NAME SET INGRESS.USEDOMAINNAME to true\nusedomainname: true\nhosts:\n- report-portal-<EDP_PROJECT>.<ROOT_DOMAIN>\n
                                                            4. For the OpenShift platform, install a Gateway with Route:

                                                              View: gateway-config-cm.yaml
                                                              kind: ConfigMap\nmetadata:\nname: gateway-config\nnamespace: report-portal\napiVersion: v1\ndata:\ntraefik-dynamic-config.yml: |\nhttp:\nmiddlewares:\nstrip-ui:\nstripPrefix:\nprefixes:\n- \"/ui\"\nforceSlash: false\nstrip-api:\nstripPrefix:\nprefixes:\n- \"/api\"\nforceSlash: false\nstrip-uat:\nstripPrefix:\nprefixes:\n- \"/uat\"\nforceSlash: false\n\nrouters:\nindex-router:\nrule: \"Path(`/`)\"\nservice: \"index\"\nui-router:\nrule: \"PathPrefix(`/ui`)\"\nmiddlewares:\n- strip-ui\nservice: \"ui\"\nuat-router:\nrule: \"PathPrefix(`/uat`)\"\nmiddlewares:\n- strip-uat\nservice: \"uat\"\napi-router:\nrule: \"PathPrefix(`/api`)\"\nmiddlewares:\n- strip-api\nservice: \"api\"\n\nservices:\nuat:\nloadBalancer:\nservers:\n- url: \"http://report-portal-reportportal-uat:9999/\"\n\nindex:\nloadBalancer:\nservers:\n- url: \"http://report-portal-reportportal-index:8080/\"\n\napi:\nloadBalancer:\nservers:\n- url: \"http://report-portal-reportportal-api:8585/\"\n\nui:\nloadBalancer:\nservers:\n- url: \"http://report-portal-reportportal-ui:8080/\"\ntraefik.yml: |\nentryPoints:\nhttp:\naddress: \":8081\"\nmetrics:\naddress: \":8082\"\n\nmetrics:\nprometheus:\nentryPoint: metrics\naddEntryPointsLabels: true\naddServicesLabels: true\nbuckets:\n- 0.1\n- 0.3\n- 1.2\n- 5.0\n\nproviders:\nfile:\nfilename: /etc/traefik/traefik-dynamic-config.yml\n
                                                              View: gateway-deployment.yaml
                                                              apiVersion: apps/v1\nkind: Deployment\nmetadata:\nlabels:\napp: reportportal\nname: gateway\nnamespace: report-portal\nspec:\nreplicas: 1\nselector:\nmatchLabels:\ncomponent: gateway\ntemplate:\nmetadata:\nlabels:\ncomponent: gateway\nspec:\ncontainers:\n- image: quay.io/waynesun09/traefik:2.3.6\nname: traefik\nports:\n- containerPort: 8080\nprotocol: TCP\nresources: {}\nvolumeMounts:\n- mountPath: /etc/traefik/\nname: config\nreadOnly: true\nvolumes:\n- name: config\nconfigMap:\ndefaultMode: 420\nname: gateway-config\n
                                                              View: gateway-route.yaml
                                                              kind: Route\napiVersion: route.openshift.io/v1\nmetadata:\nlabels:\napp: reportportal\nname: reportportal\nnamespace: report-portal\nspec:\nhost: report-portal.<CLUSTER_DOMAIN>\nport:\ntargetPort: http\ntls:\ninsecureEdgeTerminationPolicy: Redirect\ntermination: edge\nto:\nkind: Service\nname: gateway\nweight: 100\nwildcardPolicy: None\n
                                                              View: gateway-service.yaml
                                                              apiVersion: v1\nkind: Service\nmetadata:\nlabels:\napp: reportportal\ncomponent: gateway\nname: gateway\nnamespace: report-portal\nspec:\nports:\n# use 8081 to allow for usage of the dashboard which is on port 8080\n- name: http\nport: 8081\nprotocol: TCP\ntargetPort: 8081\nselector:\ncomponent:  gateway\nsessionAffinity: None\ntype: ClusterIP\n

                                                            Note

                                                            For user access: default/1q2w3e. For admin access: superadmin/erebus. Please refer to the ReportPortal.io page for details.

                                                            "},{"location":"operator-guide/install-reportportal/#related-articles","title":"Related Articles","text":"
                                                            • Install via Helmfile
                                                            "},{"location":"operator-guide/install-tekton/","title":"Install Tekton","text":"

                                                            EPAM Delivery Platform uses Tekton resources, such as Tasks, Pipelines, Triggers, and Interceptors, for running the CI/CD pipelines.

                                                            Inspect the main steps to perform for installing the Tekton resources via the Tekton release files.

                                                            "},{"location":"operator-guide/install-tekton/#prerequisites","title":"Prerequisites","text":"
                                                            • Kubectl version 1.24.0 or higher is installed. Please refer to the Kubernetes official website for details.
                                                            • For Openshift/OKD, the latest version of the oc utility is required. Please refer to the OKD page on GitHub for details.
                                                            • An AWS ECR repository is created for the Kaniko cache. By default, the Kaniko cache repository name is kaniko-cache; this parameter is located in our Tekton common-library (see the example command below).
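                                                            If the repository does not exist yet, it can be created with the AWS CLI. This is a sketch: the repository name below is the default kaniko-cache and the region placeholder must be replaced with your own value:
                                                              aws ecr create-repository --repository-name kaniko-cache --region <AWS_REGION>\n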
                                                            "},{"location":"operator-guide/install-tekton/#installation-on-kubernetes-cluster","title":"Installation on Kubernetes Cluster","text":"

                                                            To install Tekton resources, follow the steps below:

                                                            Info

                                                            Please refer to the Install Tekton Pipelines and Install and set up Tekton Triggers sections for details.

                                                            1. Install Tekton pipelines v0.51.0 using the release file:

                                                              Note

                                                              Tekton Pipeline resources are used for managing and running EDP Tekton Pipelines and Tasks. Please refer to the EDP Tekton Pipelines and EDP Tekton Tasks pages for details.

                                                              kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.51.0/release.yaml\n
                                                            2. Install Tekton Triggers v0.25.0 using the release file:

                                                              Note

                                                              Tekton Trigger resources are used for managing and running EDP Tekton EventListeners, Triggers, TriggerBindings and TriggerTemplates. Please refer to the EDP Tekton Triggers page for details.

                                                              kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.25.0/release.yaml\n
                                                            3. Install Tekton Interceptors v0.25.0 using the release file:

                                                              Note

                                                              EPAM Delivery Platform uses GitLab and GitHub ClusterInterceptors for managing requests from webhooks.

                                                              kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.25.0/interceptors.yaml\n
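                                                              As a quick check, the Tekton components deployed by the release files can be listed in the tekton-pipelines namespace (the default namespace used by the upstream release manifests):
                                                              kubectl get pods -n tekton-pipelines\n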
                                                            "},{"location":"operator-guide/install-tekton/#installation-on-okd-cluster","title":"Installation on OKD cluster","text":"

                                                            To install Tekton resources, follow the steps below:

                                                            Info

                                                            Please refer to the Install Tekton Operator documentation for details.

                                                            Note

                                                            Tekton Operator also deploys Pipelines as Code CI that requires OpenShift v4.11 (based on Kubernetes v1.24) or higher. This feature is optional and its deployments can be scaled to zero replicas.

                                                            Install Tekton Operator v0.67.0 using the release file:

                                                            kubectl apply -f https://github.com/tektoncd/operator/releases/download/v0.67.0/openshift-release.yaml\n

                                                            After the installation, the Tekton Operator will install the following components: Pipeline, Trigger, and Addons.

                                                            Note

                                                            If there is the following error in the openshift-operators namespace for openshift-pipelines-operator and tekton-operator-webhook deployments:

                                                            Error: container has runAsNonRoot and image will run as root\n

                                                            Patch the deployments with the following commands:

                                                            kubectl -n openshift-operators patch deployment openshift-pipelines-operator -p '{\"spec\": {\"template\": {\"spec\": {\"securityContext\": {\"runAsUser\": 1000}}}}}'\nkubectl -n openshift-operators patch deployment tekton-operator-webhook -p '{\"spec\": {\"template\": {\"spec\": {\"securityContext\": {\"runAsUser\": 1000}}}}}'\n

                                                            Grant access for Tekton Service Accounts in the openshift-pipelines namespace to the Privileged SCC:

                                                            oc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-operators-proxy-webhook\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-pipelines-controller\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-pipelines-resolvers\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-pipelines-webhook\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-triggers-controller\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-triggers-core-interceptors\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:tekton-triggers-webhook\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:pipelines-as-code-controller\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:pipelines-as-code-watcher\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:pipelines-as-code-webhook\noc adm policy add-scc-to-user privileged system:serviceaccount:openshift-pipelines:default\n
                                                            "},{"location":"operator-guide/install-tekton/#related-articles","title":"Related Articles","text":"
                                                            • Install via Helmfile
                                                            "},{"location":"operator-guide/install-velero/","title":"Install Velero","text":"

                                                            Velero is an open source tool to safely back up, recover, and migrate Kubernetes clusters and persistent volumes. It works both on premises and in a public cloud. Velero consists of a server process running as a deployment in your Kubernetes cluster and a command-line interface (CLI) with which DevOps teams and platform operators configure scheduled backups, trigger ad-hoc backups, perform restores, and more.

                                                            "},{"location":"operator-guide/install-velero/#installation","title":"Installation","text":"

                                                            To install Velero, follow the steps below:

                                                            1. Create velero namespace:

                                                                kubectl create namespace velero\n

                                                              Note

                                                              On an OpenShift cluster, run the oc command instead of the kubectl one.

                                                            2. Add a chart repository:

                                                                helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts\n  helm repo update\n

                                                              Note

                                                              Velero AWS Plugin requires access to AWS resources. To configure access, please refer to the IRSA for Velero documentation.

                                                            3. Install Velero v.2.14.13:

                                                                helm install velero vmware-tanzu/velero \\\n  --version 2.14.13 \\\n  --values values.yaml \\\n  --namespace velero\n

                                                              Check out the values.yaml file sample of the Velero customization:

                                                              View: values.yaml
                                                              image:\nrepository: velero/velero\ntag: v1.5.3\nsecurityContext:\nfsGroup: 65534\nrestic:\nsecurityContext:\nfsGroup: 65534\nserviceAccount:\nserver:\ncreate: true\nname: edp-velero\nannotations:\neks.amazonaws.com/role-arn: \"arn:aws:iam::<AWS_ACCOUNT_ID>:role/AWSIRSA\u2039CLUSTER_NAME\u203a\u2039VELERO_NAMESPACE\u203aVelero\"\ncredentials:\nuseSecret: false\nconfiguration:\nprovider: aws\nbackupStorageLocation:\nname: default\nbucket: velero-<CLUSTER_NAME>\nconfig:\nregion: eu-central-1\nvolumeSnapshotLocation:\nname: default\nconfig:\nregion: <AWS_REGION>\ninitContainers:\n- name: velero-plugin-for-aws\nimage: velero/velero-plugin-for-aws:v1.1.0\nvolumeMounts:\n- mountPath: /target\nname: plugins\n

                                                              Note

                                                              If cluster scheduling and amazon-eks-pod-identity-webhook are used, restart the Velero pod after the cluster is up and running. Please refer to the Schedule Pods Restart documentation.

                                                            4. Install the client side (Velero CLI) according to the official documentation.
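                                                              To verify the installation, print the client version (the server version is also reported when the Velero deployment is reachable), for example:
                                                              velero version\n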

                                                            "},{"location":"operator-guide/install-velero/#configuration","title":"Configuration","text":"
                                                            1. Create backup for all components in the namespace:

                                                                velero backup create <BACKUP_NAME> --include-namespaces <NAMESPACE>\n
                                                            2. Create a daily backup of the namespace:

                                                                velero schedule create <BACKUP_NAME>  --schedule \"0 10 * * MON-FRI\" --include-namespaces <NAMESPACE> --ttl 120h0m0s\n
                                                            3. To restore from backup, use the following command:

                                                                velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME>\n
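                                                            For example, with a hypothetical edp namespace and illustrative backup names, the commands above might look as follows:
                                                              velero backup create edp-backup --include-namespaces edp\nvelero schedule create edp-daily-backup --schedule \"0 10 * * MON-FRI\" --include-namespaces edp --ttl 120h0m0s\nvelero restore create edp-restore --from-backup edp-backup\n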
                                                            "},{"location":"operator-guide/install-via-helmfile/","title":"Install via Helmfile","text":"

                                                            This article provides instructions on how to deploy EDP and its components in Kubernetes using Helmfile, a tool intended for deploying Helm charts. Helmfile templates are available in the GitHub repository.

                                                            Important

                                                            The Helmfile installation method for EPAM Delivery Platform (EDP) is currently not actively maintained. We strongly recommend exploring alternative installation options for the most up-to-date and well-supported deployment experience. You may consider using the Add-Ons approach or opting for installation via the AWS Marketplace to ensure a reliable and secure deployment of EDP.

                                                            "},{"location":"operator-guide/install-via-helmfile/#prerequisites","title":"Prerequisites","text":"

                                                            The following tools and plugins must be installed:

                                                            • Kubectl version 1.23.0;
                                                            • Helm version 3.10.0+;
                                                            • Helmfile version 0.144.0;
                                                            • Helm diff plugin version 3.6.0.
                                                            "},{"location":"operator-guide/install-via-helmfile/#helmfile-structure","title":"Helmfile Structure","text":"
                                                            • The envs/common.yaml file contains the specification for the environments pattern, the list of Helm repositories from which the Helm charts are fetched, and additional Helm parameters.
                                                            • The envs/platform.yaml file contains global parameters that are used in various Helmfiles.
                                                            • The releases/envs/ directory contains symbolic links to the environment files.
                                                            • The releases/*.yaml files contain the description of parameters used when deploying a Helm chart.
                                                            • The helmfile.yaml file defines the components to be installed by pointing to the Helm release files.
                                                            • The envs/ci.yaml file contains stub parameters for the CI linter.
                                                            • The test/lint-ci.sh script runs the CI linter with the debug log level and stub parameters.
                                                            • The resources/*.yaml files contain additional resources for the OpenShift platform.
                                                            "},{"location":"operator-guide/install-via-helmfile/#operate-helmfile","title":"Operate Helmfile","text":"

                                                            Before applying the Helmfile, fill in the global parameters in the envs/platform.yaml file (check the examples in envs/ci.yaml) and in the releases/*.yaml files for every Helm release. A minimal sketch of the global parameters is shown below.
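                                                            An illustrative sketch of the global parameters referenced throughout this article; the values are placeholders and the actual envs/platform.yaml may contain more parameters:
                                                              edpName: edp\ndnsWildCard: example.com\nkeycloakEndpoint: https://keycloak.example.com\n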

                                                            Pay attention to the following recommendations while working with the Helmfile:

                                                            • To launch Lint, run the test/lint-ci.sh script.
                                                            • Display the difference between the deployed state and the environment state (helm diff):
                                                              helmfile --environment platform -f helmfile.yaml diff\n
                                                            • Apply the deployment:
                                                              helmfile  --selector component=ingress --environment platform -f helmfile.yaml apply\n
                                                            • Modify the deployment and apply the changes:
                                                              helmfile  --selector component=ingress --environment platform -f helmfile.yaml sync\n
                                                            • To deploy the components by label, use the selector to target a subset of releases when running the Helmfile. This can be useful for large Helmfiles with releases that are logically grouped together. For example, to display the difference only for the nginx-ingress file, use the following command:
                                                              helmfile  --selector component=ingress --environment platform -f helmfile.yaml diff\n
                                                            • To destroy the release, run the following command:
                                                              helmfile  --selector component=ingress --environment platform -f helmfile.yaml destroy\n
                                                            "},{"location":"operator-guide/install-via-helmfile/#deploy-components","title":"Deploy Components","text":"

                                                            Using the Helmfile, the following components can be installed:

                                                            • NGINX Ingress Controller
                                                            • Keycloak
                                                            • EPAM Delivery Platform
                                                            • Argo CD
                                                            • External Secrets Operator
                                                            • DefectDojo
                                                            • Moon
                                                            • ReportPortal
                                                            • Kiosk
                                                            • Monitoring stack, including Prometheus, Alertmanager, Grafana, and Prometheus Operator
                                                            • Logging ELK stack, including Elasticsearch, Fluent-bit, and Kibana
                                                            • Logging Grafana/Loki stack, including Grafana, Loki, Promtail, Logging Operator, and Logging Operator Logging
                                                            "},{"location":"operator-guide/install-via-helmfile/#deploy-nginx-ingress-controller","title":"Deploy NGINX Ingress Controller","text":"

                                                            Info

                                                            Skip this step for the OpenShift platform, because it has its own Ingress Controller.

                                                            To install NGINX Ingress controller, follow the steps below:

                                                            1. In the releases/nginx-ingress.yaml file, set the proxy-real-ip-cidr parameter to your AWS VPC IPv4 CIDR value (see the sketch after these steps).

                                                            2. Install NGINX Ingress controller:

                                                              helmfile  --selector component=ingress --environment platform -f helmfile.yaml apply\n
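                                                              A minimal sketch of the proxy-real-ip-cidr setting, assuming the release values follow the standard ingress-nginx chart layout; the CIDR below is an example and must be replaced with your AWS VPC IPv4 CIDR:
                                                              controller:\nconfig:\nproxy-real-ip-cidr: \"10.0.0.0/16\"\n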
                                                            "},{"location":"operator-guide/install-via-helmfile/#deploy-keycloak","title":"Deploy Keycloak","text":"

                                                            Keycloak requires a database deployment, so it has two charts: releases/keycloak.yaml and releases/postgresql-keycloak.yaml.

                                                            To install Keycloak, follow the steps below:

                                                            1. Create a security namespace:

                                                              Note

                                                              For the OpenShift users: This namespace is also indicated as users in the following custom SecurityContextConstraints resources: resources/keycloak-scc.yaml and resources/postgresql-keycloak-scc.yaml. Change the namespace name when using a custom namespace.

                                                              kubectl create namespace security\n
                                                            2. Create PostgreSQL admin secret:

                                                              kubectl -n security create secret generic keycloak-postgresql \\\n--from-literal=password=<postgresql_password> \\\n--from-literal=postgres-password=<postgresql_postgres_password>\n
                                                            3. In the envs/platform.yaml file, set the dnsWildCard parameter.

                                                            4. Create Keycloak admin secret:

                                                              kubectl -n security create secret generic keycloak-admin-creds \\\n--from-literal=username=<keycloak_admin_username> \\\n--from-literal=password=<keycloak_admin_password>\n
                                                            5. Install Keycloak:

                                                              helmfile  --selector component=sso --environment platform -f helmfile.yaml apply\n
                                                            "},{"location":"operator-guide/install-via-helmfile/#deploy-external-secrets-operator","title":"Deploy External Secrets Operator","text":"

                                                            To install External Secrets Operator, follow the steps below:

                                                            helmfile --selector component=secrets --environment platform -f helmfile.yaml apply\n
                                                            "},{"location":"operator-guide/install-via-helmfile/#deploy-kiosk","title":"Deploy Kiosk","text":"

                                                            To install Kiosk, follow the steps below:

                                                            helmfile --selector component=kiosk --environment platform -f helmfile.yaml apply\n
                                                            "},{"location":"operator-guide/install-via-helmfile/#deploy-epam-delivery-platform","title":"Deploy EPAM Delivery Platform","text":"

                                                            To install EDP, follow the steps below:

                                                            1. Create a platform namespace:

                                                              kubectl create namespace platform\n
                                                            2. EDP requires Keycloak access to perform the integration. Create a secret with the user and password provisioned in step 2 of the Keycloak Configuration section.

                                                              kubectl -n platform create secret generic keycloak \\\n  --from-literal=username=<username> \\\n  --from-literal=password=<password>\n
                                                            3. In the envs/platform.yaml file, set the edpName and keycloakEndpoint parameters.

                                                            4. In the releases/edp-install.yaml file, check and fill in all values.

                                                            5. Install EDP:

                                                              helmfile  --selector component=edp --environment platform -f helmfile.yaml apply\n
                                                            "},{"location":"operator-guide/install-via-helmfile/#deploy-argo-cd","title":"Deploy Argo CD","text":"

                                                            Before Argo CD deployment, install the following tools:

                                                            • Keycloak
                                                            • EDP

                                                            To install Argo CD, follow the steps below:

                                                            1. Install Argo CD:

                                                              For the OpenShift users:

                                                              When using a custom namespace for Argo CD, the argocd namespace is also indicated as users in the resources/argocd-scc.yaml custom SecurityContextConstraints resource. Change it there as well.

                                                              helmfile --selector component=argocd --environment platform -f helmfile.yaml apply\n
                                                            2. Update the argocd-secret secret in the Argo CD namespace by providing the correct Keycloak client secret (oidc.keycloak.clientSecret) with the value from the keycloak-client-argocd-secret secret in the EDP namespace. Then restart the deployment:

                                                              ARGOCD_CLIENT=$(kubectl -n platform get secret keycloak-client-argocd-secret  -o jsonpath='{.data.clientSecret}')\nkubectl -n argocd patch secret argocd-secret -p=\"{\\\"data\\\":{\\\"oidc.keycloak.clientSecret\\\": \\\"${ARGOCD_CLIENT}\\\"}}\" -v=1\nkubectl -n argocd rollout restart deployment argo-argocd-server\n
                                                            "},{"location":"operator-guide/install-via-helmfile/#deploy-defectdojo","title":"Deploy DefectDojo","text":"

                                                            Prerequisites

                                                            1. Before deploying DefectDojo, make sure the Keycloak configuration is in place.

                                                            Info

                                                            It is also possible to install DefectDojo via Helm Chart. For details, please refer to the Install DefectDojo page.

                                                            To install DefectDojo via Helmfile, follow the steps below:

                                                            1. Create a DefectDojo namespace:

                                                              For the OpenShift users:

                                                              This namespace is also indicated as users in the resources/defectdojo-scc.yaml custom SecurityContextConstraints resource. Change it when using a custom namespace. Also, change the namespace in the resources/defectdojo-route.yaml file.

                                                              kubectl create namespace defectdojo\n
                                                            2. Modify the host in resources/defectdojo-route.yaml (only for OpenShift).

                                                            3. Create a PostgreSQL admin secret:

                                                              kubectl -n defectdojo create secret generic defectdojo-postgresql-specific \\\n--from-literal=postgresql-password=<postgresql_password> \\\n--from-literal=postgresql-postgres-password=<postgresql_postgres_password>\n

                                                              Note

                                                              The postgresql_password and postgresql_postgres_password passwords must be 16 characters long.

                                                            4. Create a RabbitMQ admin secret:

                                                              kubectl -n defectdojo create secret generic defectdojo-rabbitmq-specific \\\n--from-literal=rabbitmq-password=<rabbitmq_password> \\\n--from-literal=rabbitmq-erlang-cookie=<rabbitmq_erlang_cookie>\n

                                                              Note

                                                              The rabbitmq_password password must be 10 characters long.

                                                              The rabbitmq_erlang_cookie password must be 32 characters long.

                                                            5. Create a DefectDojo admin secret:

                                                              kubectl -n defectdojo create secret generic defectdojo \\\n--from-literal=DD_ADMIN_PASSWORD=<dd_admin_password> \\\n--from-literal=DD_SECRET_KEY=<dd_secret_key> \\\n--from-literal=DD_CREDENTIAL_AES_256_KEY=<dd_credential_aes_256_key> \\\n--from-literal=METRICS_HTTP_AUTH_PASSWORD=<metric_http_auth_password>\n

                                                              Note

                                                              The dd_admin_password password must be 22 characters long.

                                                              The dd_secret_key password must be 128 characters long.

                                                              The dd_credential_aes_256_key password must be 128 characters long.

                                                              The metric_http_auth_password password must be 32 characters long.

                                                            6. Create a Keycloak client secret for DefectDojo:

                                                              Note

                                                              The keycloak_client_secret value is obtained from: edpName-main realm -> clients -> defectdojo -> Credentials -> Client secret.

                                                              kubectl -n defectdojo create secret generic defectdojo-extrasecrets \\\n--from-literal=DD_SOCIAL_AUTH_KEYCLOAK_SECRET=<keycloak_client_secret>\n
                                                            7. In the envs/platform.yaml file, set the dnsWildCard parameter.

                                                            8. In the releases/defectdojo.yaml file, check and fill in all values.

                                                            9. Install DefectDojo:

                                                              helmfile  --selector component=defectdojo --environment platform -f helmfile.yaml apply\n
                                                            "},{"location":"operator-guide/install-via-helmfile/#deploy-reportportal","title":"Deploy ReportPortal","text":"

                                                            Info

                                                            It is also possible to install ReportPortal via Helm Chart. For details, please refer to the Install ReportPortal page.

                                                            ReportPortal requires third-party deployments: RabbitMQ, ElasticSearch, PostgreSQL, MinIO.

                                                            To install third-party resources, follow the steps below:

                                                            1. Create a RabbitMQ admin secret:

                                                              kubectl -n report-portal create secret generic reportportal-rabbitmq-creds \\\n--from-literal=rabbitmq-password=<rabbitmq_password> \\\n--from-literal=rabbitmq-erlang-cookie=<rabbitmq_erlang_cookie>\n

                                                              Warning

                                                              The rabbitmq_password password must be 10 characters long.

                                                              The rabbitmq_erlang_cookie password must be 32 characters long.

                                                            2. Create a PostgreSQL admin secret:

                                                              kubectl -n report-portal create secret generic reportportal-postgresql-creds \\\n--from-literal=postgresql-password=<postgresql_password> \\\n--from-literal=postgresql-postgres-password=<postgresql_postgres_password>\n

                                                              Warning

                                                              The postgresql_password and postgresql_postgres_password passwords must be 16 characters long.

                                                            3. Create a MinIO admin secret:

                                                              kubectl -n report-portal create secret generic reportportal-minio-creds \\\n--from-literal=root-password=<root_password> \\\n--from-literal=root-user=<root_user>\n
                                                            4. In the envs/platform.yaml file, set the dnsWildCard and edpName parameters.

                                                              For the OpenShift users:

                                                              The namespace is also indicated as users in the following custom SecurityContextConstraints resources: resources/report-portal-elasticsearch-scc.yaml and resources/report-portal-third-party-resources-scc.yaml. Change the namespace name when using a custom namespace.

                                                            5. Install third-party resources:

                                                              helmfile --selector component=report-portal-third-party-resources --environment platform -f helmfile.yaml apply\n
                                                            6. After the RabbitMQ pod reaches the Running status, configure the RabbitMQ memory threshold:

                                                              kubectl -n report-portal exec -it rabbitmq-0 -- rabbitmqctl set_vm_memory_high_watermark 0.8\n

                                                            To install ReportPortal via Helmfile, follow the steps below:

                                                            For the OpenShift users:

                                                            1. The namespace is also indicated as users in the resources/report-portal-reportportal-scc.yaml custom SecurityContextConstraints resource. Change it when using a custom namespace.
                                                            2. Change the namespace in the following files: resources/report-portal-gateway/gateway-config-cm, resources/report-portal-gateway/gateway-deployment, resources/report-portal-gateway/gateway-route, and resources/report-portal-gateway/gateway-service.
                                                            3. Modify the host in resources/report-portal-gateway/gateway-route.
                                                            helmfile --selector component=report-portal --environment platform -f helmfile.yaml apply\n

                                                            Note

                                                            For user access: default/1q2w3e. For admin access: superadmin/erebus. Please refer to the ReportPortal.io page for details.

                                                            "},{"location":"operator-guide/install-via-helmfile/#deploy-moon","title":"Deploy Moon","text":"

                                                            Moon is a browser automation solution compatible with Selenium, Cypress, Playwright, and Puppeteer that uses Kubernetes or OpenShift to launch browsers.

                                                            Note

                                                            Aerokube/Moon does not require third-party deployments.

                                                            Follow the steps below to deploy Moon:

                                                            1. Use the following command to install Moon:

                                                              helmfile --selector component=moon --environment platform -f helmfile.yaml apply\n
                                                            2. After the installation, open the Ingress Dashboard and check that SELENOID and SSE have the CONNECTED status.

                                                              Main board

                                                            3. In Moon, use the following command with the Ingress rule path, for example, wd/hub:

                                                                  curl -X POST 'http://<INGRESS_LINK>/wd/hub/session' -d '{\n                \"desiredCapabilities\":{\n                    \"browserName\":\"firefox\",\n                    \"version\": \"79.0\",\n                    \"platform\":\"ANY\",\n                    \"enableVNC\": true,\n                    \"name\": \"edp\",\n                    \"sessionTimeout\": \"480s\"\n                }\n            }'\n

                                                              See below the list of Moon Dashboard Ingress rules:

                                                              Moon Dashboard Ingress rules

                                                              After using the command above, the container will start, and the VNC viewer will be displayed on the Moon Dashboard:

                                                              VNC viewer with the container starting

                                                            "},{"location":"operator-guide/install-via-helmfile/#deploy-monitoring","title":"Deploy Monitoring","text":"

                                                            The monitoring stack includes Grafana, Prometheus, Alertmanager, and Karma-dashboard. To deploy it, follow the steps below:

                                                            1. Generate a token for Keycloak client:

                                                              Note

                                                              The token must be 32 characters long and include alphabetic and numeric symbols. For example, use the following command:

                                                              keycloak_client_secret=$(date +%s | sha256sum | base64 | head -c 32 ; echo)\n
                                                            2. Create a secret for the Keycloak client:

                                                              kubectl -n platform create secret generic keycloak-client-grafana \\\n--from-literal=clientSecret=<keycloak_client_secret>\n
                                                            3. Create a secret for the Grafana:

                                                              kubectl -n monitoring create secret generic keycloak-client-grafana \\\n--from-literal=GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET=<keycloak_client_secret> \\\n
                                                            4. Create a custom resource for the Keycloak client:

                                                              View: keycloak_client
                                                              apiVersion: v1.edp.epam.com/v1\nkind: KeycloakClient\nmetadata:\nname: grafana\nnamespace: platform\nspec:\nclientId: grafana\ndirectAccess: true\nserviceAccount:\nenabled: true\ntargetRealm: platform-main\nwebUrl: https://grafana-monitoring.<dnsWildCard>\nsecret: keycloak-client.grafana\n
                                                            5. Run command:

                                                              helmfile --selector component=monitoring --environment platform -f helmfile.yaml apply\n
                                                            "},{"location":"operator-guide/install-via-helmfile/#deploy-logging","title":"Deploy Logging","text":"ELK stackGrafana, Loki, Promtail stack

                                                            To install Elasticsearch, Kibana and Fluentbit, run command:

                                                            helmfile --selector component=logging-elastic --environment platform -f helmfile.yaml apply\n

                                                            To install Grafana, Loki, Promtail, follow the steps below:

                                                            1. Make sure that appropriate resources are created:

                                                              • Secret for the Keycloak client
                                                              • Secret for the Grafana
                                                            2. Create a custom resource for the Keycloak client:

                                                              View: keycloak_client
                                                              apiVersion: v1.edp.epam.com/v1\nkind: KeycloakClient\nmetadata:\nname: grafana\nnamespace: platform\nspec:\nclientId: grafana-logging\ndirectAccess: true\nserviceAccount:\nenabled: true\ntargetRealm: platform-main\nwebUrl: https://grafana-logging.<dnsWildCard>\nsecret: keycloak-client.grafana\n
                                                            3. Run command:

                                                              helmfile --selector component=logging --environment platform -f helmfile.yaml apply\n
                                                            "},{"location":"operator-guide/install-via-helmfile/#related-articles","title":"Related Articles","text":"
                                                            • Install EDP
                                                            • Install NGINX Ingress Controller
                                                            • Install Keycloak
                                                            • Install DefectDojo
                                                            • Install ReportPortal
                                                            • Install Argo CD
                                                            "},{"location":"operator-guide/jira-gerrit-integration/","title":"Adjust VCS Integration With Jira","text":"

                                                            In order to adjust the Version Control System integration with Jira Server, first make sure you have the following prerequisites:

                                                            • VCS Server
                                                            • Jira
                                                            • Crucible

                                                            Once the prerequisites are checked, follow the steps below to proceed with the integration:

                                                            1. Integrate every project in the VCS Server with every project in Crucible by creating a corresponding request in the EPAM Support Portal. Add the repository links and fill in the Keep Informed field, as this request must be approved.

                                                              Request example

                                                            2. Provide additional details to the support team. If the VCS is Gerrit, inspect the sample below of its integration:

                                                              2.1 Create a new \"crucible-\" user in Gerrit with SSH key and add a new user to the \"Non-Interactive Users\" Gerrit group;

                                                              2.2 Create a new \"crucible-watcher-group\" group in Gerrit and add the \"crucible-\" user to it;

                                                              2.3 Provide access to All-Projects for the \"crucible-watcher-group\" group:

                                                              Gerrit All-Projects configuration

                                                              Gerrit All-Projects configuration

                                                            3. To link commits with a Jira ticket in Gerrit, enter the Jira ticket ID in the commit message using the following format:

                                                              [PROJECT-CODE-1234]: commit message

                                                              where PROJECT-CODE is the project code, 1234 is the ticket ID number, and the text after the colon is the commit message.
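                                                              For example, a commit with an illustrative ticket ID and message could be created as follows:
                                                              git commit -m \"[PROJECT-CODE-1234]: Update login form validation\"\n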

                                                            4. As a result, all Gerrit commits will be displayed on Crucible:

                                                              Crucible project

                                                            5. "},{"location":"operator-guide/jira-gerrit-integration/#related-articles","title":"Related Articles","text":"
                                                              • Adjust Jira Integration
                                                              "},{"location":"operator-guide/jira-integration/","title":"Adjust Jira Integration","text":"

                                                              This documentation guide provides step-by-step instructions for enabling the Jira integration option in the EDP Portal UI for EPAM Delivery Platform. Jira integration allows including useful metadata in Jira tickets.

                                                              "},{"location":"operator-guide/jira-integration/#overview","title":"Overview","text":"

                                                              Integrating Jira can provide a number of benefits, such as increased visibility and traceability, automatic linking of code changes to relevant Jira issues, and streamlined management and tracking of development progress.

                                                              By linking CI pipelines to Jira issues, teams can get a better understanding of the status of their work and how it relates to the overall development process. This can help to improve communication and collaboration, and ultimately lead to faster and more efficient delivery of software.

                                                              Enabling Jira integration allows for the automatic population of three fields in Jira tickets: Fix Versions, Components, and Labels. Each of these fields provides distinct benefits:

                                                              • Fix Versions: helps track progress against release schedules;
                                                              • Components: allows grouping related issues together;
                                                              • Labels: enables identification of specific types of work.

                                                              Teams can utilize these fields to enhance their work prioritization, identify dependencies, improve collaboration, and ultimately achieve faster software delivery.

                                                              "},{"location":"operator-guide/jira-integration/#integration-procedure","title":"Integration Procedure","text":"

                                                              In order to adjust the Jira server integration, add the JiraServer CR by performing the following:

                                                              1. Provision the secret using EDP Portal, Manifest or with the externalSecrets operator:

                                                                EDP PortalManifestExternal Secrets Operator

                                                                Go to EDP Portal -> EDP -> Configuration -> Jira. Update or fill in the URL, User, Password fields and click the Save button:

                                                                Jira update manual secret

                                                                apiVersion: v1\nkind: Secret\nmetadata:\nname: ci-jira\nnamespace: edp\nlabels:\napp.edp.epam.com/secret-type: jira\nstringData:\nurl: https://jira.example.com\nusername: username\npassword: password\n
                                                                \"ci-jira\":\n{\n\"url\": \"https://jira.example.com\",\n\"username\": \"username\",\n\"password\": \"password\"\n}\n
                                                              2. Create JiraServer CR in the OpenShift/Kubernetes namespace with the apiUrl, credentialName and rootUrl fields:

                                                                apiVersion: v2.edp.epam.com/v1\nkind: JiraServer\nmetadata:\nname: jira-server\nspec:\napiUrl: 'https://jira-api.example.com'\ncredentialName: ci-jira\nrootUrl: 'https://jira.example.com'\n

                                                                Note

                                                                The value of the credentialName property is the name of the Secret, which is indicated in the first point above.

                                                              3. In the EDP Portal UI, navigate to the Advanced Settings menu to check that the Integrate with Jira server check box appeared:

                                                                Advanced settings

                                                                Note

                                                                There are four predefined variables with the respective values that can be specified singly or as a combination:

                                                                • EDP_COMPONENT – returns application-name
                                                                • EDP_VERSION – returns 0.0.0-SNAPSHOT or 0.0.0-RC
                                                                • EDP_SEM_VERSION – returns 0.0.0
                                                                • EDP_GITTAG – returns build/0.0.0-SNAPSHOT.2 or build/0.0.0-RC.2

                                                                There are no character restrictions when combining the variables. Combination samples: EDP_SEM_VERSION-EDP_COMPONENT or EDP_COMPONENT-hello-world/EDP_VERSION, etc.

                                                                As a result of successful Jira integration, the additional information will be added to tickets.

                                                              "},{"location":"operator-guide/jira-integration/#related-articles","title":"Related Articles","text":"
                                                              • Adjust VCS Integration With Jira
                                                              • Add Application
                                                              "},{"location":"operator-guide/kaniko-irsa/","title":"IAM Roles for Kaniko Service Accounts","text":"

                                                              Note

                                                              The information below is relevant in case ECR is used as Docker container registry. Make sure that IRSA is enabled and amazon-eks-pod-identity-webhook is deployed according to the Associate IAM Roles With Service Accounts documentation.

                                                              The \"build-image-kaniko\" stage manages ECR through IRSA that should be available on the cluster. Follow the steps below to create a required role:

                                                              1. Create AWS IAM Policy \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039EDP_NAMESPACE\u203aKaniko_policy\":

                                                                {\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n        \"Effect\": \"Allow\",\n        \"Action\": [\n            \"ecr:*\",\n            \"cloudtrail:LookupEvents\"\n        ],\n        \"Resource\": \"arn:aws:ecr:<AWS_REGION>:<AWS_ACCOUNT_ID>:repository/<EDP_NAMESPACE>/*\"\n    },\n    {\n        \"Effect\": \"Allow\",\n        \"Action\": \"ecr:GetAuthorizationToken\",\n        \"Resource\": \"*\"\n    },\n    {\n        \"Effect\": \"Allow\",\n        \"Action\": [\n            \"ecr:DescribeRepositories\",\n            \"ecr:CreateRepository\"\n        ],\n        \"Resource\": \"arn:aws:ecr:<AWS_REGION>:<AWS_ACCOUNT_ID>:repository/*\"\n    }\n  ]\n}\n
                                                              2. Create AWS IAM Role \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039EDP_NAMESPACE\u203aKaniko\" with trust relationships:

                                                                {\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Principal\": {\n        \"Federated\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>\"\n      },\n      \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n      \"Condition\": {\n        \"StringEquals\": {\n          \"<OIDC_PROVIDER>:sub\": \"system:serviceaccount:edp:edp-kaniko\"\n        }\n      }\n    }\n  ]\n}\n
                                                              3. Attach the \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039EDP_NAMESPACE\u203aKaniko_policy\" policy to the \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039EDP_NAMESPACE\u203aKaniko\" role.
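                                                                As an optional illustration, the policy can be attached with the AWS CLI; the account ID is a placeholder, and the role and policy names are the ones created above:

                                                                aws iam attach-role-policy \
                                                                  --role-name "AWSIRSA<CLUSTER_NAME><EDP_NAMESPACE>Kaniko" \
                                                                  --policy-arn "arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSIRSA<CLUSTER_NAME><EDP_NAMESPACE>Kaniko_policy"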

                                                              4. Set the resulting role ARN as the kaniko.roleArn parameter in values.yaml during the EDP installation.
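                                                                A minimal values.yaml fragment might look like the sketch below; the ARN is a placeholder to replace with the role created above:

                                                                kaniko:
                                                                  # IAM role assumed by the edp-kaniko service account via IRSA
                                                                  roleArn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/AWSIRSA<CLUSTER_NAME><EDP_NAMESPACE>Kaniko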

                                                              "},{"location":"operator-guide/kaniko-irsa/#related-articles","title":"Related Articles","text":"
                                                              • Associate IAM Roles With Service Accounts
                                                              • Install EDP
                                                              "},{"location":"operator-guide/kibana-ilm-rollover/","title":"Aggregate Application Logs Using EFK Stack","text":"

                                                              This documentation describes the advantages of EFK stack over the traditional ELK stack, explains the value that this stack brings to EDP and instructs how to set up the EFK stack to integrate the advanced logging system with your application.

                                                              "},{"location":"operator-guide/kibana-ilm-rollover/#elk-stack-overview","title":"ELK Stack Overview","text":"

                                                              The ELK (Elasticsearch, Logstash and Kibana) stack gives the ability to aggregate logs from all the managed systems and applications, analyze these logs and create visualizations for application and infrastructure monitoring, faster troubleshooting, security analytics and more.

                                                              Here is a brief description of the ELK stack default components:

                                                              • Beats family - The logs shipping tool that conveys logs from the source locations, such as Filebeat, Metricbeat, Packetbeat, etc. Beats can work instead of Logstash or along with it.
                                                              • Logstash - The log processing framework for log collecting, processing, storing and searching activities.
                                                              • Elasticsearch - The distributed search and analytics engine based on Lucene Java library.
                                                              • Kibana - The visualization engine that queries the data from Elasticsearch.

                                                              ELK Stack

                                                              "},{"location":"operator-guide/kibana-ilm-rollover/#efk-stack-overview","title":"EFK Stack Overview","text":"

                                                              We use the FEK (also called EFK) stack (Fluent Bit, Elasticsearch, Kibana) in Kubernetes instead of ELK because this stack supports Logsight for Stage Verification and Incident Detection. In addition, Fluent Bit has a smaller memory footprint than Logstash. Fluent Bit provides Inputs, Parsers, Filters and Outputs plugins, similar to Logstash.

                                                              FEK Stack

                                                              "},{"location":"operator-guide/kibana-ilm-rollover/#automate-elasticsearch-index-rollover-with-ilm","title":"Automate Elasticsearch Index Rollover With ILM","text":"

                                                              In this guide, index rollover is automated in the FEK stack with the Index Lifecycle Management (ILM) feature.

                                                              The resources can be created via the API using curl, Postman, or the Kibana Dev Tools console, or via the GUI. In this guide, they are created using Kibana Dev Tools.

                                                              1. Go to Management \u2192 Dev Tools in the Kibana dashboard:

                                                                Dev Tools

                                                              2. Create index lifecycle policy with the index rollover:

                                                                Note

                                                                This policy can also be created in GUI in Management \u2192 Stack Management \u2192 Index Lifecycle Policies.

                                                                Index Lifecycle has several phases: Hot, Warm, Cold, Frozen, Delete. Indices also have different priorities in each phase. The warmer the phase, the higher the priority is supposed to be, e.g., 100 for the hot phase, 50 for the warm phase, and 0 for the cold phase.

                                                                In this use case, only the Hot and Delete phases are configured: an index is created, rolled over to a new index when it reaches 1 GB in size or 1 day in age, and deleted after 7 days. The rollover may not happen exactly at 1 GB because it depends on how often Elasticsearch checks the index size. By default, the rollover conditions are checked every 10 minutes, but this can be changed with the indices.lifecycle.poll_interval setting (see the example at the end of this step).

                                                                The index lifecycle policy example:

                                                                Index Lifecycle Policy
                                                                PUT _ilm/policy/fluent-bit-policy\n{\n\"policy\": {\n\"phases\": {\n\"hot\": {\n\"min_age\": \"0ms\",\n\"actions\": {\n\"set_priority\": {\n\"priority\": 100\n},\n\"rollover\": {\n\"max_size\": \"1gb\",\n\"max_primary_shard_size\": \"1gb\",\n\"max_age\": \"1d\"\n}\n}\n},\n\"delete\": {\n\"min_age\": \"7d\",\n\"actions\": {\n\"delete\": {\n\"delete_searchable_snapshot\": true\n}\n}\n}\n}\n}\n}\n

                                                                Insert the code above into the Dev Tools and click the arrow to send the PUT request.
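                                                                If a different check interval is needed, the indices.lifecycle.poll_interval cluster setting can be adjusted from the same Dev Tools console; the 5-minute value below is only an example:

                                                                PUT _cluster/settings
                                                                {
                                                                  "persistent": {
                                                                    "indices.lifecycle.poll_interval": "5m"
                                                                  }
                                                                }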

                                                              3. Create an index template so that a new index is created according to this template after the rollover:

                                                                Note

                                                                This policy can also be created in GUI in Management \u2192 Stack Management \u2192 Index Management \u2192 Index Templates.

                                                                Expand the menu below to see the index template example:

                                                                Index Template
                                                                PUT /_index_template/fluent-bit\n{\n\"index_patterns\": [\"fluent-bit-kube-*\"],\n\"template\": {\n\"settings\": {\n\"index\": {\n\"lifecycle\": {\n\"name\": \"fluent-bit-policy\",\n\"rollover_alias\": \"fluent-bit-kube\"\n},\n\"number_of_shards\": \"1\",\n\"number_of_replicas\": \"0\"\n}\n}\n}\n}\n

                                                                Note

                                                                • index.lifecycle.rollover_alias is required when using a policy containing the rollover action and specifies which alias to rollover on behalf of this index. The intention here is that the rollover alias is also defined on the index.
                                                                • number_of_shards is the quantity of the primary shards. Elasticsearch index is really just a logical grouping of one or more physical shards, where each shard is actually a self-contained index. By distributing the documents in an index across multiple shards and distributing those shards across multiple nodes, Elasticsearch can ensure redundancy, which both protects against hardware failures and increases query capacity as nodes are added to a cluster. As the cluster grows (or shrinks), Elasticsearch automatically migrates shards to re-balance the cluster. Please refer to the official documentation here.
                                                                • number_of_replicas is the number of replica shards. A replica shard is a copy of a primary shard. Elasticsearch will never assign a replica to the same node as the primary shard, so make sure you have more than one node in your Elasticsearch cluster if you need to use replica shards. The Elasticsearch cluster details and the quantity of nodes can be checked with:

                                                                  GET _cluster/health\n

                                                                Since we use one node, the number_of_shards is 1 and number_of_replicas is 0. If you configure replicas with only one node, your index will get a yellow status in Kibana, yet it will still work.

                                                              4. Create an empty index with write permissions:

                                                                Note

                                                                This index can also be created in GUI in Management \u2192 Stack Management \u2192 Index Management \u2192 Indices.

                                                                Index example with the date math format:

                                                                Index
                                                                # URI encoded /<fluent-bit-kube-{now/d}-000001>\nPUT /%3Cfluent-bit-kube-%7Bnow%2Fd%7D-000001%3E\n{\n\"aliases\": {\n\"fluent-bit-kube\": {\n\"is_write_index\": true\n}\n}\n}\n

                                                                The code above will create an index in the {index_name}-{current_date}-{rollover_index_increment} format. For example: fluent-bit-kube-2023.03.17-000001.

                                                                Please refer to the official documentation on the index rollover with Date Math here.

                                                                Note

                                                                It is also possible to use index pattern below if the date math format does not seem applicable:

                                                                Index

                                                                PUT fluent-bit-kube-000001\n{\n\"aliases\": {\n\"fluent-bit-kube\": {\n\"is_write_index\": true\n}\n}\n}\n

                                                                Check the status of the created index:

                                                                GET fluent-bit-kube*-000001/_ilm/explain\n
                                                              5. Configure Fluent Bit. Pay attention to the Elasticsearch Output plugin configuration.

                                                                The important fields in the [OUTPUT] section are Index fluent-bit-kube, since the index must have the same name as the Rollover Alias in Kibana, and Logstash_Format Off, since we use the rollover index pattern in Kibana that increments by 1.

                                                                ConfigMap example with Configuration Variables for HTTP_User and HTTP_Passwd:

                                                                ConfigMap fluent-bit
                                                                data:\nfluent-bit.conf: |\n[SERVICE]\nDaemon Off\nFlush 10\nLog_Level info\nParsers_File parsers.conf\nParsers_File custom_parsers.conf\nHTTP_Server On\nHTTP_Listen 0.0.0.0\nHTTP_Port 2020\nHealth_Check On\n\n[INPUT]\nName tail\nTag kube.*\nPath /var/log/containers/*.log\nParser docker\nMem_Buf_Limit 5MB\nSkip_Long_Lines Off\nRefresh_Interval 10\n[INPUT]\nName systemd\nTag host.*\nSystemd_Filter _SYSTEMD_UNIT=kubelet.service\nRead_From_Tail On\nStrip_Underscores On\n\n[FILTER]\nName                kubernetes\nMatch               kube.*\nKube_Tag_Prefix     kube.var.log.containers.\nKube_URL            https://kubernetes.default.svc:443\nKube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\nKube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token\nMerge_Log           Off\nMerge_Log_Key       log_processed\nK8S-Logging.Parser  On\nK8S-Logging.Exclude On\n[FILTER]\nName nest\nMatch kube.*\nOperation lift\nNested_under kubernetes\nAdd_prefix kubernetes.\n[FILTER]\nName modify\nMatch kube.*\nCopy kubernetes.container_name tags.container\nCopy log message\nCopy kubernetes.container_image tags.image\nCopy kubernetes.namespace_name tags.namespace\n[FILTER]\nName nest\nMatch kube.*\nOperation nest\nWildcard tags.*\nNested_under tags\nRemove_prefix tags.\n\n[OUTPUT]\nName            es\nMatch           kube.*\nIndex           fluent-bit-kube\nHost            elasticsearch-master\nPort            9200\nHTTP_User       ${ES_USER}\nHTTP_Passwd     ${ES_PASSWORD}\nLogstash_Format Off\nTime_Key       @timestamp\nType            flb_type\nReplace_Dots    On\nRetry_Limit     False\nTrace_Error     Off\n
                                                              6. Create index pattern (Data View starting from Kibana v8.0):

                                                                Go to Management \u2192 Stack Management \u2192 Kibana \u2192 Index patterns and create an index with the fluent-bit-kube-* pattern:

                                                                Index Pattern

                                                              7. Check logs in Kibana. Navigate to Analytics \u2192 Discover:

                                                                Logs in Kibana

                                                                Note

                                                                In addition, in the top-right corner of the Discover window, there is a button called Inspect. Clicking on it will reveal the query that Kibana is sending to Elasticsearch. These queries can be used in Dev Tools.

                                                              8. Monitor the created indices:

                                                                GET _cat/indices/fluent-bit-kube-*\n

                                                                Note

                                                                Physically, the indices are located on the elasticsearch Kubernetes pod in /usr/share/elasticsearch/data/nodes/0/indices. It is recommended to back up indices only via snapshots.

                                                              We've configured the index rollover process. Now the index will be rolled over to a new one once it reaches the indicated size or time in the policy, and old indices will be removed according to the policy as well.

                                                              When you create an empty index that corresponds to the pattern indicated in the index template, the index template attaches rollover_alias with the fluent-bit-kube name, policy and other configured data. Then the Fluent Bit Elasticsearch output plugin sends logs to the Index fluent-bit-kube rollover alias. The index rollover process is managed by ILM that increments our indices united by the rollover_alias and distributes the log data to the latest index.

                                                              "},{"location":"operator-guide/kibana-ilm-rollover/#ilm-without-rollover-policy","title":"ILM Without Rollover Policy","text":"

                                                              It is also possible to manage the index lifecycle without a rollover action in the policy. In this case, indices are simply named by their creation date, for example: fluent-bit-kube-2023.03.18. This section explains how to set that up.

                                                              Note

                                                              The main drawback of this method is that the indices can be managed only by their creation date.

                                                              To manage index lifecycle without rollover policy, follow the steps below:

                                                              1. Create a Policy without rollover but with indices deletion:

                                                                Index Lifecycle Policy
                                                                PUT _ilm/policy/fluent-bit-policy\n{\n\"policy\": {\n\"phases\": {\n\"hot\": {\n\"min_age\": \"0ms\",\n\"actions\": {\n\"set_priority\": {\n\"priority\": 100\n}\n}\n},\n\"delete\": {\n\"min_age\": \"7d\",\n\"actions\": {\n\"delete\": {\n\"delete_searchable_snapshot\": true\n}\n}\n}\n}\n}\n}\n
                                                              2. Create an index template with the rollover_alias parameter:

                                                                Index Template
                                                                PUT /_index_template/fluent-bit\n{\n\"index_patterns\": [\"fluent-bit-kube-*\"],\n\"template\": {\n\"settings\": {\n\"index\": {\n\"lifecycle\": {\n\"name\": \"fluent-bit-policy\",\n\"rollover_alias\": \"fluent-bit-kube\"\n},\n\"number_of_shards\": \"1\",\n\"number_of_replicas\": \"0\"\n}\n}\n}\n}\n
                                                              3. Change the Fluent Bit [OUTPUT] config to this one:

                                                                ConfigMap fluent-bit
                                                                [OUTPUT]\nName            es\nMatch           kube.*\nHost            elasticsearch-master\nPort            9200\nHTTP_User       ${ES_USER}\nHTTP_Passwd     ${ES_PASSWORD}\nLogstash_Format On\nLogstash_Prefix fluent-bit-kube\nLogstash_DateFormat %Y.%m.%d\nTime_Key        @timestamp\nType            flb_type\nReplace_Dots    On\nRetry_Limit     False\nTrace_Error     On\n
                                                              4. Restart Fluent Bit pods.
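                                                                For example, assuming Fluent Bit runs as a DaemonSet named fluent-bit in the logging namespace (adjust both names to your installation), the pods can be restarted with:

                                                                kubectl rollout restart daemonset/fluent-bit -n logging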

                                                              Fluent Bit will produce a new index every day with the date in its name, for example fluent-bit-kube-2023.03.18. Index deletion will be performed according to the policy.

                                                              "},{"location":"operator-guide/kibana-ilm-rollover/#tips-on-fluent-bit-debugging","title":"Tips on Fluent Bit Debugging","text":"

                                                              If you experience a lot of difficulties when dealing with Fluent Bit, this section may help you.

                                                              Fluent Bit has docker images labelled -debug, e.g., cr.fluentbit.io/fluent/fluent-bit:2.0.9-debug.

                                                              Change that image in the Kubernetes Fluent Bit DaemonSet and add the Trace_Error On parameter to the [OUTPUT] section in the Fluent Bit configmap:

                                                              [OUTPUT]\nTrace_Error On\n

                                                              After adding the parameter above, you will start seeing more informative logs that will likely help you find the cause of the problem.
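                                                              To switch the DaemonSet to the debug image mentioned above, one option is kubectl set image; the DaemonSet name, container name, and namespace below are assumptions to adjust for your installation:

                                                              kubectl set image daemonset/fluent-bit fluent-bit=cr.fluentbit.io/fluent/fluent-bit:2.0.9-debug -n logging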

                                                              "},{"location":"operator-guide/kibana-ilm-rollover/#related-articles","title":"Related Articles","text":"
                                                              • Index Lifecycle Management
                                                              • Elasticsearch Output
                                                              "},{"location":"operator-guide/kubernetes-cluster-settings/","title":"Set Up Kubernetes","text":"

                                                              Make sure the cluster meets the following conditions:

                                                              1. Kubernetes cluster is installed with a minimum of 2 worker nodes with a total capacity of 8 cores and 32 GB RAM.

                                                              2. A machine with kubectl is installed and has cluster-admin access to the Kubernetes cluster.

                                                              3. Ingress controller is installed in a cluster, for example ingress-nginx.
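                                                                For example, ingress-nginx can be installed from its upstream Helm chart; the release name and namespace below are common defaults shown for illustration only:

                                                                helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
                                                                helm repo update
                                                                helm install ingress-nginx ingress-nginx/ingress-nginx \
                                                                  --namespace ingress-nginx --create-namespace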

                                                              4. Ingress controller is configured with the HTTP/2 protocol disabled and support for a 64k header size.

                                                                Find below an example of the Config Map for the NGINX Ingress controller:

                                                                kind: ConfigMap\napiVersion: v1\nmetadata:\nname: nginx-configuration\nnamespace: ingress-nginx\nlabels:\napp.kubernetes.io/name: ingress-nginx\napp.kubernetes.io/part-of: ingress-nginx\ndata:\nclient-header-buffer-size: 64k\nlarge-client-header-buffers: 4 64k\nuse-http2: \"false\"\n
                                                              5. Load balancer (if any exists in front of the Ingress controller) is configured with session stickiness, the HTTP/2 protocol disabled, and support for a 32k header size.

                                                              6. Cluster nodes and pods have access to the cluster via external URLs. For instance, in AWS, add the VPC NAT gateway elastic IP to the cluster external load balancer's security group.
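                                                                As an illustration, such a rule could be added with the AWS CLI; the security group ID, port, and elastic IP are placeholders:

                                                                aws ec2 authorize-security-group-ingress \
                                                                  --group-id <SECURITY_GROUP_ID> \
                                                                  --protocol tcp --port 443 \
                                                                  --cidr <NAT_GATEWAY_ELASTIC_IP>/32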

                                                              7. Keycloak instance is installed. To get accurate information on how to install Keycloak, please refer to the Install Keycloak instruction.

                                                              8. Helm 3.10 or higher is installed on the installation machine with the help of the Installing Helm instruction.

                                                              9. Storage classes are used with the Retain Reclaim Policy and Delete Reclaim Policy.

                                                              10. We recommend using our storage class as the default storage class.

                                                                Info

                                                                By default, EDP uses the default Storage Class in a cluster. The EDP development team recommends using the following Storage Classes. See an example below.

                                                                Storage class templates with the Retain and Delete Reclaim Policies:

                                                                ebs-scgp3gp3-retain
                                                                apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\nname: ebs-sc\nannotations:\nstorageclass.kubernetes.io/is-default-class: 'true'\nallowedTopologies: []\nmountOptions: []\nprovisioner: ebs.csi.aws.com\nreclaimPolicy: Retain\nvolumeBindingMode: Immediate\n
                                                                kind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\nname: gp3\nannotations:\nstorageclass.kubernetes.io/is-default-class: 'true'\nallowedTopologies: []\nmountOptions: []\nprovisioner: ebs.csi.aws.com\nreclaimPolicy: Delete\nvolumeBindingMode: Immediate\nallowVolumeExpansion: true\n
                                                                kind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\nname: gp3-retain\nallowedTopologies: []\nmountOptions: []\nprovisioner: ebs.csi.aws.com\nreclaimPolicy: Retain\nvolumeBindingMode: Immediate\nallowVolumeExpansion: true\n
                                                              "},{"location":"operator-guide/kubernetes-cluster-settings/#related-articles","title":"Related Articles","text":"
                                                              • Install Amazon EBS CSI Driver
                                                              • Install NGINX Ingress Controller
                                                              • Install Keycloak
                                                              "},{"location":"operator-guide/logsight-integration/","title":"Logsight Integration","text":"

                                                              Logsight can be integrated with the CI/CD pipeline. It connects to log data sources, analyses collected logs, and evaluates deployment risk scores.

                                                              "},{"location":"operator-guide/logsight-integration/#overview","title":"Overview","text":"

                                                              In order to understand whether a microservice or a component is ready for deployment, EDP suggests analysing logs via Logsight to decide whether the deployment is risky.

                                                              Please find more about Logsight in the official documentation:

                                                              • Logsight key features and workflow
                                                              • Log analysis
                                                              • Stage verification
                                                              "},{"location":"operator-guide/logsight-integration/#logsight-as-a-quality-gate","title":"Logsight as a Quality Gate","text":"

                                                              Integration with Logsight allows enhancing and optimizing software releases by creating an additional quality gate.

                                                              Logsight can be configured in two ways:

                                                              • SAAS - online system; for this solution a connection string is required.
                                                              • Self-deployment - local installation.

                                                              To work with Logsight, a new Deployment Risk stage must be added to the pipeline. At this stage, the logs are analysed with the help of Logsight mechanisms.

                                                              On the verification screen of Logsight, continuous verification of the application deployment can be monitored, and tests can be compared for detecting test flakiness.

                                                              For example, two versions of a microservice can be compared in order to detect critical differences. Risk score will be calculated for the state reached by version A and version B. Afterwards, the deployment risk will be calculated based on individual risk scores.

                                                              If the deployment failure risk is greater than a predefined threshold, the verification gate blocks the deployment from going to the target environment. In such a case, the Deployment Risk stage of the pipeline is not passed, and additional attention is required. The exact log messages can be displayed in the Logsight verification screen to help debug the problem.

                                                              "},{"location":"operator-guide/logsight-integration/#use-logsight-for-edp-development","title":"Use Logsight for EDP Development","text":"

                                                              Please find below the detailed description of Logsight integration with EDP.

                                                              "},{"location":"operator-guide/logsight-integration/#deployment-approach","title":"Deployment Approach","text":"

                                                              EDP uses Logsight in a self-deploying mode.

                                                              Logsight provides a deployment approach using Helm charts. Please find below the stack of components that must be deployed:

                                                              • logsight - the core component.
                                                              • logsight-backend - the backend that provides all necessary APIs and user management.
                                                              • logsight-frontend - the frontend that provides the user interface.
                                                              • logsight-result-api - responsible for obtaining results, for example, during the verification.

                                                              Below is a diagram of interaction when integrating the components:

                                                              Logsight Structure

                                                              "},{"location":"operator-guide/logsight-integration/#configure-fluentbit-for-sending-log-data","title":"Configure FluentBit for Sending Log Data","text":"

                                                              Logsight is integrated with the EDP logging stack. The integration is based on top of the EFK (Elasticsearch-FluentBit-Kibana) stack. It is necessary to deploy the stack with security support enabled, namely with certificate support.

                                                              A FluentBit config indicates the namespace from which the logs will be received for further analysis. Below is an example of the FluentBit config for getting logs from the edp-delivery-edp-delivery-sit namespace:

                                                              View: fluent-bit.conf
                                                              [INPUT]\nName              tail\nTag               kube.sit.*\nPath              /var/log/containers/*edp-delivery-edp-delivery-sit*.log\nParser            docker\nMem_Buf_Limit     5MB\nSkip_Long_Lines   Off\nRefresh_Interval  10\n\n[FILTER]\nName                kubernetes\nMatch               kube.sit.*\nKube_URL            https://kubernetes.default.svc:443\nKube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt\nKube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token\nKube_Tag_Prefix     kube.sit.var.log.containers.\nMerge_Log           Off\nK8S-Logging.Parser  On\nK8S-Logging.Exclude On\n\n[FILTER]\nName nest\nMatch kube.sit.*\nOperation lift\nNested_under kubernetes\nAdd_prefix kubernetes.\n\n[FILTER]\nName modify\nMatch kube.sit.*\nCopy kubernetes.container_name tags.container\nCopy log message\nCopy kubernetes.container_image tags.image\nCopy kubernetes.namespace_name tags.namespace\n\n[FILTER]\nName nest\nMatch kube.sit.*\nOperation nest\nWildcard kubernetes.*\nNested_under kubernetes\nRemove_prefix kubernetes.\n\n[OUTPUT]\nName            es\nMatch           kube.sit.*\nHost            elasticsearch-master\nPort            9200\nHTTP_User elastic\nHTTP_Passwd *****\nLogstash_Format On\nLogstash_Prefix sit\nTime_Key        @timestamp\nType            flb_type\nReplace_Dots    On\nRetry_Limit     False\n\n[OUTPUT]\nMatch kube.sit.*\nName  http\nHost logsight-backend\nPort 8080\nhttp_User logsight@example.com\nhttp_Passwd *****\nuri /api/v1/logs/singles\nFormat json\njson_date_format iso8601\njson_date_key timestamp\n
                                                              "},{"location":"operator-guide/logsight-integration/#deployment-risk-analysis","title":"Deployment Risk Analysis","text":"

                                                              A deployment-risk stage is added to the EDP CD pipeline.

                                                              Deployment Risk

                                                              If the deployment risk is above 70%, the red state of the pipeline is expected.

                                                              EDP consists of a set of containerized components. For the convenience of tracking the deployment risk trend for each component, this data is stored as Jenkins artifacts.

                                                              If the deployment risk is higher than the 70% threshold, the EDP promotion of artifacts to the next environments does not pass. The deployment risk report can be analysed in order to avoid potential problems with updating the components.

                                                              To study the report in detail, use the link from the Jenkins pipeline to the Logsight verification screen:

                                                              Logsight Insights Logsight Insights

                                                              In this example, logs from different versions of the gerrit-operator were analyzed. As can be seen from the report, a large number of new messages appeared in the logs, and the output frequency of other notifications has also changed, which led to the high deployment risk.

                                                              The environment on which the analysis is performed can exist for different time periods. Logsight only processes the minimum total number of logs since the creation of the environment.

                                                              "},{"location":"operator-guide/logsight-integration/#related-articles","title":"Related Articles","text":"
                                                              • Customize CD Pipeline
                                                              • Adjust Jira Integration
                                                              "},{"location":"operator-guide/loki-irsa/","title":"IAM Roles for Loki Service Accounts","text":"

                                                              Note

                                                              Make sure that IRSA is enabled and amazon-eks-pod-identity-webhook is deployed according to the Associate IAM Roles With Service Accounts documentation.

                                                              It is possible to use Amazon Simple Storage Service (Amazon S3) as object storage for Loki. In this case, Loki requires access to AWS resources. Follow the steps below to create a required role:

                                                              1. Create AWS IAM Policy \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039LOKI_NAMESPACE\u203aLoki_policy\":

                                                                {\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"s3:ListObjects\",\n                \"s3:ListBucket\",\n                \"s3:PutObject\",\n                \"s3:GetObject\",\n                \"s3:DeleteObject\"\n            ],\n            \"Resource\": [\n                \"arn:aws:s3:::loki-*\"\n            ]\n        },\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                \"s3:ListBucket\"\n            ],\n            \"Resource\": [\n                \"arn:aws:s3:::loki-*\"\n            ]\n        }\n    ]\n}\n
                                                              2. Create AWS IAM Role \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039LOKI_NAMESPACE\u203aLoki\" with trust relationships:

                                                                {\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Principal\": {\n        \"Federated\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>\"\n      },\n      \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n      \"Condition\": {\n        \"StringEquals\": {\n          \"<OIDC_PROVIDER>:sub\": \"system:serviceaccount:<LOKI_NAMESPACE>:edp-loki\"\n       }\n     }\n   }\n ]\n}\n
                                                              3. Attach the \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039LOKI_NAMESPACE\u203aLoki_policy\" policy to the \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039LOKI_NAMESPACE\u203aLoki\" role.
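                                                                As an optional illustration, the policy can be attached with the AWS CLI; the account ID is a placeholder, and the role and policy names are the ones created above:

                                                                aws iam attach-role-policy \
                                                                  --role-name "AWSIRSA<CLUSTER_NAME><LOKI_NAMESPACE>Loki" \
                                                                  --policy-arn "arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSIRSA<CLUSTER_NAME><LOKI_NAMESPACE>Loki_policy"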

                                                              4. Make sure that Amazon S3 bucket with name loki-\u2039CLUSTER_NAME\u203a exists.

                                                              5. Provide key value eks.amazonaws.com/role-arn: \"arn:aws:iam:::role/AWSIRSA\u2039CLUSTER_NAME\u203a\u2039LOKI_NAMESPACE\u203aLoki\" into the serviceAccount.annotations parameter in values.yaml during the Loki Installation."},{"location":"operator-guide/loki-irsa/#related-articles","title":"Related Articles","text":"

                                                                • Associate IAM Roles With Service Accounts
                                                                • Install Grafana Loki
                                                                "},{"location":"operator-guide/manage-custom-certificate/","title":"Manage Custom Certificates","text":"

                                                                Familiarize yourself with the detailed instructions on adding certificates to EDP resources as well as with the respective setup for Keycloak.

                                                                EDP components that support custom certificates can be found in the table below:

                                                                Helm Chart - Sub Resources:
                                                                • admin-console-operator - admin-console
                                                                • gerrit-operator - edp-gerrit
                                                                • jenkins-operator - jenkins-operator, edp-jenkins, jenkins agents
                                                                • sonar-operator - sonar-operator, edp-sonar
                                                                • keycloak-operator - keycloak-operator
                                                                • nexus-operator - oauth2-proxy
                                                                • edp-install - oauth2-proxy
                                                                • edp-headlamp - edp-headlamp
                                                                "},{"location":"operator-guide/manage-custom-certificate/#prerequisites","title":"Prerequisites","text":"
                                                                • The certificate in the *.crt format is used;
                                                                • Kubectl version 1.23.0 is installed;
                                                                • Helm version 3.10.2 is installed;
                                                                • Java with the keytool command inside;
                                                                • jq is installed.
                                                                "},{"location":"operator-guide/manage-custom-certificate/#enable-the-spi-truststore-of-keycloak","title":"Enable the SPI Truststore of Keycloak","text":"

                                                                To import custom certificates to Keycloak, follow the steps below:

                                                                1. Generate the cacerts local keystore and import the certificate there using the keytool tool:

                                                                  keytool -importcert -file CA.crt \\\n-alias CA.crt -keystore ./cacerts \\\n-storepass changeit -trustcacerts \\\n-noprompt\n
                                                                2. Create the custom-keycloak-keystore keystore secret from the cacerts file in the security namespace:

                                                                  kubectl -n security create secret generic custom-keycloak-keystore \\\n--from-file=./cacerts\n
                                                                3. Create the spi-truststore-data SPI truststore secret in the security namespace:

                                                                  kubectl -n security create secret generic spi-truststore-data \\\n--from-literal=KC_SPI_TRUSTSTORE_FILE_FILE=/opt/keycloak/spi-certs/cacerts \\\n--from-literal=KC_SPI_TRUSTSTORE_FILE_PASSWORD=changeit\n
                                                                4. Update the Keycloak values.yaml file from the Install Keycloak page.

                                                                  View: values.yaml
                                                                  ...\nextraVolumeMounts: |\n...\n# Use the Keycloak truststore for SPI connection over HTTPS/TLS\n- name: spi-certificates\nmountPath: /opt/keycloak/spi-certs\nreadOnly: true\n...\n\nextraVolumes: |\n...\n# Use the Keycloak truststore for SPI connection over HTTPS/TLS\n- name: spi-certificates\nsecret:\nsecretName: custom-keycloak-keystore\ndefaultMode: 420\n...\n\n...\nextraEnvFrom: |\n- secretRef:\nname: spi-truststore-data\n...\n
                                                                "},{"location":"operator-guide/manage-custom-certificate/#enable-custom-certificates-in-edp-components","title":"Enable Custom Certificates in EDP Components","text":"

                                                                Creating custom certificates is a necessary but not sufficient condition for applying them; the certificates must also be enabled in the components.

                                                                1. Create the custom-ca-certificates secret in the EDP namespace.

                                                                  kubectl -n edp create secret generic custom-ca-certificates \\\n--from-file=CA.crt\n
                                                                2. Add the certificate by mounting the custom-ca-certificates secret to the operator pod as a volume.

                                                                  Example of specifying custom certificates for the keycloak-operator:

                                                                  ...\nkeycloak-operator:\nenabled: true\n\n# -- Additional volumes to be added to the pod\nextraVolumes:\n- name: custom-ca\nsecret:\ndefaultMode: 420\nsecretName: custom-ca-certificates\n\n# -- Additional volumeMounts to be added to the container\nextraVolumeMounts:\n- name: custom-ca\nmountPath: /etc/ssl/certs/CA.crt\nreadOnly: true\nsubPath: CA.crt\n...\n
                                                                3. For Sonar, Jenkins and Gerrit, change the flag in the caCerts.enabled field to true. Also, change the name of the secret in the caCerts.secret field to custom-ca-certificates.

                                                                  Example of specifying custom certificates for Gerrit via the gerrit-operator helm chart values:

                                                                  ...\ngerrit-operator:\nenabled: true\ngerrit:\ncaCerts:\n# -- Flag for enabling additional CA certificates\nenabled: true\n# -- Change init CA certificates container image\nimage: adoptopenjdk/openjdk11:alpine\n# -- Name of the secret containing additional CA certificates\nsecret: custom-ca-certificates\n...\n
                                                                "},{"location":"operator-guide/manage-custom-certificate/#integrate-custom-certificates-into-jenkins-agents","title":"Integrate Custom Certificates Into Jenkins Agents","text":"

                                                                This section describes how to add custom certificates to Jenkins agents to use them from Java applications.

                                                                Info

                                                                For example, curl doesn't use keystore files specified in this part of the documentation.

                                                                EDP Jenkins agents keep keystore files in two places:

                                                                • /etc/ssl/certs/java folder with the cacerts file;
                                                                • /opt/java/openjdk/lib/security folder with the blocked.certs, cacerts, default.policy and public_suffix_list.dat files.
                                                                1. Copy the files from the /etc/ssl/certs/java and /opt/java/openjdk/lib/security directories of the Jenkins agent pod to the local tmp folder. The copy_certs.sh script below can manage this: it copies the files from both directories to the local tmp folder, imports the custom certificate into the keystore files, and then creates the jenkins-agent-opt-java-openjdk-lib-security-cacerts and jenkins-agent-etc-ssl-certs-java-cacerts secrets from the updated keystore files in the EDP namespace. The jenkins-agent-opt-java-openjdk-lib-security-cacerts secret also contains three additional files: blocked.certs, default.policy and public_suffix_list.dat, which are managed by the copy_certs.sh script as well. Expand the drop-down button below to see the contents of the copy_certs.sh script.

                                                                  View: copy_certs.sh
                                                                  # Fill in the variables `ns` and `ca_file`\nns=\"edp-project\"\nca_file=\"/tmp/CA.crt\"\n\nimages=$(kubectl get -n \"${ns}\" cm jenkins-slaves -ojson | jq -r \".data[]\" | grep image\\> | sed 's/\\s*<.*>\\(.*\\)<.*>/\\1/')\n\nimage=$(for i in ${images[@]}; do echo $i; done | grep maven-java8)\npod_name=$(echo \"${image}\" | tr '.:/' '-')\n\noverrides=\"{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Pod\\\",\\\"metadata\\\":{\\\"name\\\":\\\"${pod_name}\\\", \\\"namespace\\\": \\\"${ns}\\\"},\n\\\"spec\\\":{\\\"containers\\\":[{\\\"name\\\":\\\"${pod_name}\\\",\\\"image\\\":\\\"${image}\\\",\n\\\"command\\\":[\\\"sh\\\",\\\"-c\\\",\\\"while true;do sleep 30;done;\\\"]}]}}\"\n\nkubectl run -n \"${ns}\" \"${pod_name}\" --image \"${image}\" --overrides=\"${overrides}\"\n\nkubectl wait --for=condition=ready pod \"${pod_name}\" -n \"${ns}\"\n\ncacerts_location=$(kubectl exec -n \"${ns}\" \"${pod_name}\" \\\n-- find / -name cacerts -exec ls -la \"{}\" \\; 2>/dev/null | grep -v ^l | awk '{print $9}')\n\nfor cacerts in ${cacerts_location[@]}; do echo $(dirname \"${cacerts}\"); kubectl exec -n \"${ns}\" \"${pod_name}\" -- ls $(dirname \"${cacerts}\"); done\n\nfor cacerts in ${cacerts_location[@]}; do \\\necho $(dirname \"${cacerts}\"); \\\nmkdir -p \"/tmp$(dirname \"${cacerts}\")\"; \\\nfrom_files=''; \\\nfor file in $(kubectl exec -n \"${ns}\" \"${pod_name}\" -- ls $(dirname \"${cacerts}\")); do \\\nkubectl exec -n \"${ns}\" \"${pod_name}\" -- cat \"$(dirname \"${cacerts}\")/${file}\" > \"/tmp$(dirname \"${cacerts}\")/${file}\"; \\\nfrom_files=\"${from_files} --from-file=/tmp$(dirname \"${cacerts}\")/${file}\"\ndone ; \\\nkeytool -import -storepass changeit -alias kubernetes -file ${ca_file} -noprompt -keystore \"/tmp${cacerts}\"; \\\nkubectl -n \"${ns}\" create secret generic \"jenkins-agent${cacerts//\\//-}\" $from_files \\\ndone\n\nkubectl delete -n \"${ns}\" pod \"${pod_name}\" --force --grace-period=0\n

                                                                  Before using the copy_certs.sh script, keep in mind the following:

                                                                  • assign actual values to the variables ns and ca_file;
                                                                  • the script collects all the images from the jenkins-slaves ConfigMap and uses the image of the maven-java8 agent as the base image of the temporary pod to get the keystore files;
                                                                  • custom certificate is imported using the keytool application;
                                                                  • the jenkins-agent-opt-java-openjdk-lib-security-cacerts and jenkins-agent-etc-ssl-certs-java-cacerts secrets will be created in the EDP namespace.
                                                                2. Run the copy_certs.sh script from the previous point after the requirements are met.

                                                                3. Update manually the jenkins-slaves ConfigMap.

                                                                  Add this block with the mount of secrets to the <volumes></volumes> block of each Jenkins agent:

                                                                  ...\n        <org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n<mountPath>/etc/ssl/certs/java</mountPath>\n<secretName>jenkins-agent-etc-ssl-certs-java-cacerts</secretName>\n</org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n<org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n<mountPath>/opt/java/openjdk/lib/security</mountPath>\n<secretName>jenkins-agent-opt-java-openjdk-lib-security-cacerts</secretName>\n</org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n...\n

                                                                  As an example, the template of gradle-java11-template is shown below:

                                                                  ...\n      </workspaceVolume>\n<volumes>\n<org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n<mountPath>/etc/ssl/certs/java</mountPath>\n<secretName>jenkins-agent-etc-ssl-certs-java-cacerts</secretName>\n</org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n<org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n<mountPath>/opt/java/openjdk/lib/security</mountPath>\n<secretName>jenkins-agent-opt-java-openjdk-lib-security-cacerts</secretName>\n</org.csanchez.jenkins.plugins.kubernetes.volumes.SecretVolume>\n</volumes>\n<containers>\n...\n
                                                                4. Reload the Jenkins pod:

                                                                  ns=\"edp\"\nkubectl rollout restart -n \"${ns}\" deployment/jenkins\n
                                                                "},{"location":"operator-guide/manage-custom-certificate/#related-articles","title":"Related Articles","text":"
                                                                • Install EDP
                                                                • Install Keycloak
                                                                "},{"location":"operator-guide/manage-jenkins-cd-job-provision/","title":"Manage Jenkins CD Pipeline Job Provisioner","text":"

                                                                The Jenkins CD job provisioner (or seed-job) is used to create and manage the cd-pipeline folder and its Deploy pipelines. There is a special job-provisions/cd folder in Jenkins for these provisioners. Explore the steps for managing different provisioner types below.
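                                                                The snippet below is a minimal sketch of what the default CD provisioner does with the PIPELINE_NAME, STAGE_NAME, and DEPLOYMENT_TYPE parameters; log rotation, extra parameters, and most of the custom deployment branch are omitted, and the complete template is listed in the Default section below.

                                                                  // Minimal sketch of the CD seed-job logic (see the full default template below).
                                                                  import jenkins.model.Jenkins

                                                                  def pipelineName = "${PIPELINE_NAME}-cd-pipeline"
                                                                  def stageName = "${STAGE_NAME}"
                                                                  def deploymentType = "${DEPLOYMENT_TYPE}"

                                                                  // One folder per CD pipeline.
                                                                  if (Jenkins.instance.getItem(pipelineName) == null) {
                                                                      folder(pipelineName)
                                                                  }

                                                                  // One Deploy pipeline job per stage: container deployments run the shared EDP Deploy() pipeline,
                                                                  // while the custom deployment type gets an empty pipeline job to be filled in manually.
                                                                  pipelineJob("${pipelineName}/${stageName}") {
                                                                      if (deploymentType == "container") {
                                                                          definition {
                                                                              cps {
                                                                                  script("@Library(['edp-library-stages', 'edp-library-pipelines']) _ \n\nDeploy()")
                                                                                  sandbox(true)
                                                                              }
                                                                          }
                                                                      }
                                                                  }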

                                                                "},{"location":"operator-guide/manage-jenkins-cd-job-provision/#default","title":"Default","text":"

                                                                During the EDP deployment, a default provisioner is created to deploy applications with the container and custom deployment types.

                                                                1. Find the configuration in job-provisions/cd/default.

                                                                2. The default template is presented below:

                                                                  View: Default template
                                                                  /* Copyright 2022 EPAM Systems.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n\nSee the License for the specific language governing permissions and\nlimitations under the License. */\n\nimport groovy.json.*\nimport jenkins.model.Jenkins\n\nJenkins jenkins = Jenkins.instance\n\ndef pipelineName = \"${PIPELINE_NAME}-cd-pipeline\"\ndef stageName = \"${STAGE_NAME}\"\ndef qgStages = \"${QG_STAGES}\"\ndef gitServerCrVersion = \"${GIT_SERVER_CR_VERSION}\"\ndef gitCredentialsId = \"${GIT_CREDENTIALS_ID}\"\ndef sourceType = \"${SOURCE_TYPE}\"\ndef libraryURL = \"${LIBRARY_URL}\"\ndef libraryBranch = \"${LIBRARY_BRANCH}\"\ndef isAutoDeploy = \"${AUTODEPLOY}\"\ndef scriptPath = \"Jenkinsfile\"\ndef containerDeploymentType = \"container\"\ndef deploymentType = \"${DEPLOYMENT_TYPE}\"\ndef codebaseFolder = jenkins.getItem(pipelineName)\n\ndef autoDeploy = '{\"name\":\"auto-deploy-input\",\"step_name\":\"auto-deploy-input\"}'\ndef manualDeploy = '{\"name\":\"manual-deploy-input\",\"step_name\":\"manual-deploy-input\"}'\ndef runType = isAutoDeploy.toBoolean() ? autoDeploy : manualDeploy\n\ndef stages = buildStages(deploymentType, containerDeploymentType, qgStages, runType)\n\nif (codebaseFolder == null) {\nfolder(pipelineName)\n}\n\nif (deploymentType == containerDeploymentType) {\ncreateContainerizedCdPipeline(pipelineName, stageName, stages, scriptPath, sourceType,\nlibraryURL, libraryBranch, gitCredentialsId, gitServerCrVersion,\nisAutoDeploy)\n} else {\ncreateCustomCdPipeline(pipelineName, stageName)\n}\n\ndef buildStages(deploymentType, containerDeploymentType, qgStages, runType) {\nreturn deploymentType == containerDeploymentType\n? '[{\"name\":\"init\",\"step_name\":\"init\"},' + runType + ',{\"name\":\"deploy\",\"step_name\":\"deploy\"},' + qgStages + ',{\"name\":\"promote-images\",\"step_name\":\"promote-images\"}]'\n: ''\n}\n\ndef createContainerizedCdPipeline(pipelineName, stageName, stages, pipelineScript, sourceType, libraryURL, libraryBranch, libraryCredId, gitServerCrVersion, isAutoDeploy) {\npipelineJob(\"${pipelineName}/${stageName}\") {\nif (sourceType == \"library\") {\ndefinition {\ncpsScm {\nscm {\ngit {\nremote {\nurl(libraryURL)\ncredentials(libraryCredId)\n}\nbranches(\"${libraryBranch}\")\nscriptPath(\"${pipelineScript}\")\n}\n}\n}\n}\n} else {\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\nDeploy()\")\nsandbox(true)\n}\n}\n}\nproperties {\ndisableConcurrentBuilds()\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${stages}\", \"Consequence of stages in JSON format to be run during execution\")\n\nif (isAutoDeploy?.trim() && isAutoDeploy.toBoolean()) {\nstringParam(\"CODEBASE_VERSION\", null, \"Codebase versions to deploy.\")\n}\n}\n}\n}\n\ndef createCustomCdPipeline(pipelineName, stageName) {\npipelineJob(\"${pipelineName}/${stageName}\") {\nproperties {\ndisableConcurrentBuilds()\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\n}\n}\n}\n
                                                                "},{"location":"operator-guide/manage-jenkins-cd-job-provision/#custom","title":"Custom","text":"

                                                                In some cases, it is necessary to modify or update the job provisioner logic, for example, when adding a new stage requires a custom job provisioner created on the basis of an existing out-of-the-box one. Take the steps below to add a custom job provisioner.

                                                                1. Navigate to the Jenkins main page, open the job-provisions/cd folder, click New Item, and type the name of the new job provisioner, for example - custom.

                                                                  CD provisioner name

                                                                  Scroll down to the Copy from field, enter \"/job-provisions/cd/default\", and click OK: Copy CD provisioner

                                                                2. Update the required parameters in the new provisioner. For example, if it is necessary to implement a new clean stage, add the following code to the provisioner:

                                                                     def buildStages(deploymentType, containerDeploymentType, qgStages) {\n       return deploymentType == containerDeploymentType\n? '[{\"name\":\"init\",\"step_name\":\"init\"},{\"name\":\"clean\",\"step_name\":\"clean\"},{\"name\":\"deploy\",\"step_name\":\"deploy\"},' + qgStages + ',{\"name\":\"promote-images-ecr\",\"step_name\":\"promote-images\"}]'\n: ''\n}\n

                                                                  Note

                                                                  Make sure the support for the above-mentioned logic is implemented. Please refer to the How to Redefine or Extend the EDP Pipeline Stages Library section of the guide.

                                                                  After the steps above are performed, the new custom job provisioner will be available in Adding Stage during the CD pipeline creation in Admin Console.

                                                                  Custom CD provision

                                                                "},{"location":"operator-guide/manage-jenkins-ci-job-provision/","title":"Manage Jenkins CI Pipeline Job Provisioner","text":"

                                                                The Jenkins CI job provisioner (or seed-job) is used to create and manage the application folder, and its Code Review, Build and Create Release pipelines. Depending on the version control system, different job provisioners are used. EDP supports integration with the following version control systems:

                                                                • Gerrit (default)
                                                                • GitHub (github)
                                                                • GitLab (gitlab)

                                                                By default, the Jenkins operator creates pipelines for several types of applications and libraries. There is a special job-provisions/ci folder in Jenkins for these provisioners. During the EDP deployment, a default provisioner is created for integration with the Gerrit version control system. To configure integration with other version control systems, add the required job provisioners to the job-provisions/ci folder in Jenkins.
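                                                                As a rough sketch, derived from the default template shown further in this section, a CI provisioner creates the following jobs for a single codebase; job definitions, parameters, and triggers are intentionally omitted here.

                                                                  import jenkins.model.Jenkins

                                                                  def codebaseName = "${NAME}"
                                                                  def formattedBranch = "${BRANCH}".toString().toUpperCase().replaceAll(/\//, "-")

                                                                  // One folder per codebase.
                                                                  if (Jenkins.instance.getItem(codebaseName) == null) {
                                                                      folder(codebaseName)
                                                                  }

                                                                  // A release pipeline plus per-branch Code Review and Build pipelines.
                                                                  pipelineJob("${codebaseName}/Create-release-${codebaseName}") { }
                                                                  pipelineJob("${codebaseName}/${formattedBranch}-Code-review-${codebaseName}") { }
                                                                  pipelineJob("${codebaseName}/${formattedBranch}-Build-${codebaseName}") { }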

                                                                "},{"location":"operator-guide/manage-jenkins-ci-job-provision/#create-custom-provisioner-custom-defaultgithubgitlab","title":"Create Custom Provisioner (custom-default/github/gitlab)","text":"

                                                                In some cases, it is necessary to modify or update the job provisioner logic, for example, when support for another code language requires a custom job provisioner created on the basis of an existing out-of-the-box one. Take the steps below to add a custom job provisioner:

                                                                1. Navigate to the Jenkins main page, open the job-provisions/ci folder, click New Item, and type the name of the new job provisioner, for example - custom-github.

                                                                  CI provisioner name

                                                                  Scroll down to the Copy from field, enter \"/job-provisions/ci/github\", and click OK: Copy ci provisioner

                                                                2. Update the required parameters in the new provisioner. For example, if it is necessary to implement a new docker build tool, several parameters have to be updated. Add the following stages to the Code Review and Build pipelines for a docker application:

                                                                  stages['Code-review-application-docker'] = '[{\"name\": \"checkout\"},{\"name\": \"lint\"},{\"name\": \"build\"}]'\n...\nstages['Build-application-docker'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"lint\"},{\"name\": \"build\"},{\"name\": \"push\"},{\"name\": \"git-tag\"}]'\n...\ndef getStageKeyName(buildTool) {\n    ...\n    if (buildTool.toString().equalsIgnoreCase('docker')) {\n    return \"Code-review-application-docker\"\n}\n    ...\n}\n

                                                                  Note

                                                                  Make sure the support for the above-mentioned logic is implemented. Please refer to the How to Redefine or Extend the EDP Pipeline Stages Library section of the guide.

                                                                  Note

                                                                  The default template should be changed if the Code Review, Build, and Create Release pipelines require different creation logic. Furthermore, all pipeline types should contain the necessary stages as well.

                                                                  After the steps above are performed, the new custom job provisioner will be available in Advanced Settings during application creation in the EDP Portal UI:

                                                                  Custom ci provision

                                                                "},{"location":"operator-guide/manage-jenkins-ci-job-provision/#gerrit-default","title":"Gerrit (default)","text":"

                                                                During the EDP deployment, a default provisioner is created for integration with the Gerrit version control system.

                                                                1. Find the configuration in job-provisions/ci/default.

                                                                2. The default template is presented below:

                                                                  View: Default template
                                                                  /* Copyright 2022 EPAM Systems.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n\nSee the License for the specific language governing permissions and\nlimitations under the License. */\n\nimport groovy.json.*\nimport jenkins.model.Jenkins\n\nJenkins jenkins = Jenkins.instance\ndef stages = [:]\ndef jiraIntegrationEnabled = Boolean.parseBoolean(\"${JIRA_INTEGRATION_ENABLED}\" as String)\ndef commitValidateStage = jiraIntegrationEnabled ? ',{\"name\": \"commit-validate\"}' : ''\ndef createJIMStage = jiraIntegrationEnabled ? ',{\"name\": \"create-jira-issue-metadata\"}' : ''\ndef platformType = \"${PLATFORM_TYPE}\"\ndef buildStage = platformType.toString() == \"kubernetes\" ? ',{\"name\": \"build-image-kaniko\"}' : ',{\"name\": \"build-image-from-dockerfile\"}'\ndef buildTool = \"${BUILD_TOOL}\"\ndef goBuildStage = buildTool.toString() == \"go\" ? ',{\"name\": \"build\"}' : ',{\"name\": \"compile\"}'\n\nstages['Code-review-application'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" + goBuildStage +\n',{\"name\": \"tests\"},[{\"name\": \"sonar\"},{\"name\": \"dockerfile-lint\"},{\"name\": \"helm-lint\"}]]'\nstages['Code-review-library'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"compile\"},{\"name\": \"tests\"},{\"name\": \"sonar\"}]'\nstages['Code-review-autotests'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"},{\"name\": \"sonar\"}' + ']'\nstages['Code-review-default'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" + ']'\nstages['Code-review-library-terraform'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"terraform-lint\"}]'\nstages['Code-review-library-opa'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"}]'\nstages['Code-review-library-codenarc'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"sonar\"},{\"name\": \"build\"}]'\nstages['Code-review-library-kaniko'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"dockerbuild-verify\"}]'\n\nstages['Build-library-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-npm'] = stages['Build-library-maven']\nstages['Build-library-gradle'] = stages['Build-library-maven']\nstages['Build-library-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-terraform'] = '[{\"name\": 
\"checkout\"},{\"name\": \"get-version\"},{\"name\": \"terraform-lint\"}' +\n',{\"name\": \"terraform-plan\"},{\"name\": \"terraform-apply\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-opa'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"tests\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-codenarc'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' +\n\"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-kaniko'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"build-image-kaniko\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Build-autotests-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-autotests-gradle'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Build-application-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sast\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildStage}\" +\n',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-npm'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildStage}\" +\n',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-gradle'] = stages['Build-application-maven']\nstages['Build-application-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"}' + \"${buildStage}\" +\n',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-go'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sast\"},{\"name\": \"tests\"}' +\n',{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildStage}\" +\n\"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"}' +\n\"${buildStage}\" + ',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Create-release'] = '[{\"name\": \"checkout\"},{\"name\": \"create-branch\"},{\"name\": \"trigger-job\"}]'\n\ndef defaultBuild = '[{\"name\": \"checkout\"}' + \"${createJIMStage}\" + ']'\n\ndef codebaseName = \"${NAME}\"\ndef gitServerCrName = \"${GIT_SERVER_CR_NAME}\"\ndef gitServerCrVersion = \"${GIT_SERVER_CR_VERSION}\"\ndef gitCredentialsId = \"${GIT_CREDENTIALS_ID ? 
GIT_CREDENTIALS_ID : 'gerrit-ciuser-sshkey'}\"\ndef repositoryPath = \"${REPOSITORY_PATH}\"\ndef defaultBranch = \"${DEFAULT_BRANCH}\"\n\ndef codebaseFolder = jenkins.getItem(codebaseName)\nif (codebaseFolder == null) {\nfolder(codebaseName)\n}\n\ncreateListView(codebaseName, \"Releases\")\ncreateReleasePipeline(\"Create-release-${codebaseName}\", codebaseName, stages[\"Create-release\"], \"CreateRelease\",\nrepositoryPath, gitCredentialsId, gitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, platformType, defaultBranch)\n\nif (buildTool.toString().equalsIgnoreCase('none')) {\nreturn true\n}\n\nif (BRANCH) {\ndef branch = \"${BRANCH}\"\ndef formattedBranch = \"${branch.toUpperCase().replaceAll(/\\//, \"-\")}\"\ncreateListView(codebaseName, formattedBranch)\n\ndef type = \"${TYPE}\"\ndef crKey = getStageKeyName(buildTool)\ncreateCiPipeline(\"Code-review-${codebaseName}\", codebaseName, stages[crKey], \"CodeReview\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion)\n\ndef buildKey = \"Build-${type}-${buildTool.toLowerCase()}\".toString()\nif (type.equalsIgnoreCase('application') || type.equalsIgnoreCase('library') || type.equalsIgnoreCase('autotests')) {\ndef jobExists = false\nif(\"${formattedBranch}-Build-${codebaseName}\".toString() in Jenkins.instance.getAllItems().collect{it.name})\njobExists = true\n\ncreateCiPipeline(\"Build-${codebaseName}\", codebaseName, stages.get(buildKey, defaultBuild), \"Build\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion)\n\nif(!jobExists)\nqueue(\"${codebaseName}/${formattedBranch}-Build-${codebaseName}\")\n}\n}\n\ndef createCiPipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId, watchBranch, gitServerCrName, gitServerCrVersion) {\npipelineJob(\"${codebaseName}/${watchBranch.toUpperCase().replaceAll(/\\//, \"-\")}-${pipelineName}\") {\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\ntriggers {\ngerrit {\nevents {\nif (pipelineName.contains(\"Build\"))\nchangeMerged()\nelse\npatchsetCreated()\n}\nproject(\"plain:${codebaseName}\", [\"plain:${watchBranch}\"])\n}\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${codebaseStages}\", \"Consequence of stages in JSON format to be run during execution\")\nstringParam(\"GERRIT_PROJECT_NAME\", \"${codebaseName}\", \"Gerrit project name(Codebase name) to be build\")\nstringParam(\"BRANCH\", \"${watchBranch}\", \"Branch to build artifact from\")\n}\n}\n}\n}\n\ndef getStageKeyName(buildTool) {\nif (buildTool.toString().equalsIgnoreCase('terraform')) {\nreturn \"Code-review-library-terraform\"\n}\nif (buildTool.toString().equalsIgnoreCase('opa')) {\nreturn \"Code-review-library-opa\"\n}\nif (buildTool.toString().equalsIgnoreCase('codenarc')) {\nreturn \"Code-review-library-codenarc\"\n}\nif (buildTool.toString().equalsIgnoreCase('kaniko')) {\nreturn \"Code-review-library-kaniko\"\n}\ndef buildToolsOutOfTheBox = [\"maven\",\"npm\",\"gradle\",\"dotnet\",\"none\",\"go\",\"python\"]\ndef supBuildTool = buildToolsOutOfTheBox.contains(buildTool.toString())\nreturn supBuildTool ? 
\"Code-review-${TYPE}\" : \"Code-review-default\"\n}\n\ndef createReleasePipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId,\ngitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, platformType, defaultBranch) {\npipelineJob(\"${codebaseName}/${pipelineName}\") {\nlogRotator {\nnumToKeep(14)\ndaysToKeep(30)\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"STAGES\", \"${codebaseStages}\", \"\")\nif (pipelineName.contains(\"Create-release\")) {\nstringParam(\"JIRA_INTEGRATION_ENABLED\", \"${jiraIntegrationEnabled}\", \"Is Jira integration enabled\")\nstringParam(\"PLATFORM_TYPE\", \"${platformType}\", \"Platform type\")\nstringParam(\"GERRIT_PROJECT\", \"${codebaseName}\", \"\")\nstringParam(\"RELEASE_NAME\", \"\", \"Name of the release(branch to be created)\")\nstringParam(\"COMMIT_ID\", \"\", \"Commit ID that will be used to create branch from for new release. If empty, DEFAULT_BRANCH will be used\")\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"REPOSITORY_PATH\", \"${repository}\", \"Full repository path\")\nstringParam(\"DEFAULT_BRANCH\", \"${defaultBranch}\", \"Default repository branch\")\n}\n}\n}\n}\n}\n\ndef createListView(codebaseName, branchName) {\nlistView(\"${codebaseName}/${branchName}\") {\nif (branchName.toLowerCase() == \"releases\") {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^Create-release.*\")\n}\n}\n} else {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^${branchName}-(Code-review|Build).*\")\n}\n}\n}\ncolumns {\nstatus()\nweather()\nname()\nlastSuccess()\nlastFailure()\nlastDuration()\nbuildButton()\n}\n}\n}\n

                                                                  Job Provision Pipeline Parameters

                                                                  The job-provisions pipeline consists of the following parameters of type string (a short sketch after the list shows how some of them shape the generated stages):

                                                                • NAME - the application name;
                                                                • TYPE - the codebase type (application / library / autotest);
                                                                • BUILD_TOOL - a tool that is used to build the application;
                                                                • BRANCH - a branch name;
                                                                • GIT_SERVER_CR_NAME - the name of the application Git server custom resource;
                                                                • GIT_SERVER_CR_VERSION - the version of the application Git server custom resource;
                                                                • GIT_CREDENTIALS_ID - the secret name where Git server credentials are stored (default 'gerrit-ciuser-sshkey');
                                                                • REPOSITORY_PATH - the full repository path;
                                                                • JIRA_INTEGRATION_ENABLED - whether the Jira integration is enabled;
                                                                • PLATFORM_TYPE - the type of platform (kubernetes or openshift);
                                                                • DEFAULT_BRANCH - the default repository branch.
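                                                                The sketch below, extracted from the default template above, shows how JIRA_INTEGRATION_ENABLED, PLATFORM_TYPE, and BUILD_TOOL shape the generated STAGES JSON; it assumes the script runs as the seed job with these string parameters bound.

                                                                  def jiraIntegrationEnabled = Boolean.parseBoolean("${JIRA_INTEGRATION_ENABLED}" as String)

                                                                  // Jira integration adds the commit validation and Jira issue metadata stages.
                                                                  def commitValidateStage = jiraIntegrationEnabled ? ',{"name": "commit-validate"}' : ''
                                                                  def createJIMStage = jiraIntegrationEnabled ? ',{"name": "create-jira-issue-metadata"}' : ''

                                                                  // PLATFORM_TYPE selects the image build stage, BUILD_TOOL switches between build and compile.
                                                                  def buildStage = "${PLATFORM_TYPE}".toString() == "kubernetes" ? ',{"name": "build-image-kaniko"}' : ',{"name": "build-image-from-dockerfile"}'
                                                                  def goBuildStage = "${BUILD_TOOL}".toString() == "go" ? ',{"name": "build"}' : ',{"name": "compile"}'

                                                                  // For example, the application Code Review pipeline is assembled as:
                                                                  def codeReviewApplication = '[{"name": "gerrit-checkout"}' + commitValidateStage + goBuildStage +
                                                                      ',{"name": "tests"},[{"name": "sonar"},{"name": "dockerfile-lint"},{"name": "helm-lint"}]]'

                                                                The same conditional fragments are reused across the other Code-review-* and Build-* stage maps in the template.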
                                                                "},{"location":"operator-guide/manage-jenkins-ci-job-provision/#github-github","title":"GitHub (github)","text":"

                                                                To create a new job provisioner to work with GitHub, take the following steps:

                                                                1. Navigate to the Jenkins main page and open the job-provisions/ci folder.

                                                                2. Click New Item and type the name of the job provisioner - github.

                                                                3. Select the Freestyle project option and click OK.

                                                                4. Select the Discard old builds check box and configure a few parameters:

                                                                  Strategy: Log Rotation

                                                                  Days to keep builds: 10

                                                                  Max # of builds to keep: 10

                                                                5. Select the This project is parameterized check box and add the following input parameters (all of the string type):

                                                                  • NAME;
                                                                  • TYPE;
                                                                  • BUILD_TOOL;
                                                                  • BRANCH;
                                                                  • GIT_SERVER_CR_NAME;
                                                                  • GIT_SERVER_CR_VERSION;
                                                                  • GIT_CREDENTIALS_ID;
                                                                  • REPOSITORY_PATH;
                                                                  • JIRA_INTEGRATION_ENABLED;
                                                                  • PLATFORM_TYPE;
                                                                  • DEFAULT_BRANCH.
                                                                6. Check the Execute concurrent builds if necessary option.

                                                                7. Check the Restrict where this project can be run option.

                                                                8. Fill in the Label Expression field by typing master to ensure the job runs on the Jenkins master.

                                                                9. In the Build section, perform the following:

                                                                  • Select DSL Script;
                                                                  • Select the Use the provided DSL script check box:

                                                                  DSL script check box

                                                                10. As soon as all the steps above are performed, insert the code:

                                                                  View: Template
                                                                  import groovy.json.*\nimport jenkins.model.Jenkins\nimport javaposse.jobdsl.plugin.*\nimport com.cloudbees.hudson.plugins.folder.*\n\nJenkins jenkins = Jenkins.instance\ndef stages = [:]\ndef jiraIntegrationEnabled = Boolean.parseBoolean(\"${JIRA_INTEGRATION_ENABLED}\" as String)\ndef commitValidateStage = jiraIntegrationEnabled ? ',{\"name\": \"commit-validate\"}' : ''\ndef createJIMStage = jiraIntegrationEnabled ? ',{\"name\": \"create-jira-issue-metadata\"}' : ''\ndef platformType = \"${PLATFORM_TYPE}\"\ndef buildStage = platformType == \"kubernetes\" ? ',{\"name\": \"build-image-kaniko\"}' : ',{\"name\": \"build-image-from-dockerfile\"}'\ndef buildTool = \"${BUILD_TOOL}\"\ndef goBuildStage = buildTool.toString() == \"go\" ? ',{\"name\": \"build\"}' : ',{\"name\": \"compile\"}'\n\nstages['Code-review-application'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" + goBuildStage +\n',{\"name\": \"tests\"},[{\"name\": \"sonar\"},{\"name\": \"dockerfile-lint\"},{\"name\": \"helm-lint\"}]]'\nstages['Code-review-library'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"compile\"},{\"name\": \"tests\"},' +\n'{\"name\": \"sonar\"}]'\nstages['Code-review-autotests'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"},{\"name\": \"sonar\"}' + ']'\nstages['Code-review-default'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" + ']'\nstages['Code-review-library-terraform'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"terraform-lint\"}]'\nstages['Code-review-library-opa'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"}]'\nstages['Code-review-library-codenarc'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"sonar\"},{\"name\": \"build\"}]'\nstages['Code-review-library-kaniko'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"dockerbuild-verify\"}]'\n\nstages['Build-library-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-npm'] = stages['Build-library-maven']\nstages['Build-library-gradle'] = stages['Build-library-maven']\nstages['Build-library-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-terraform'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"terraform-lint\"}' +\n',{\"name\": \"terraform-plan\"},{\"name\": \"terraform-apply\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-opa'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"tests\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-codenarc'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' +\n\"${createJIMStage}\" + ',{\"name\": 
\"git-tag\"}]'\nstages['Build-library-kaniko'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"build-image-kaniko\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Build-autotests-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-autotests-gradle'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Build-application-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sast\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildStage}\" +\n',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-npm'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildStage}\" +\n',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-gradle'] = stages['Build-application-maven']\nstages['Build-application-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"}' + \"${buildStage}\" +\n',{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-go'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sast\"},{\"name\": \"tests\"},{\"name\": \"sonar\"},' +\n'{\"name\": \"build\"}' + \"${buildStage}\" + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},{\"name\": \"tests\"},{\"name\": \"sonar\"}' +\n\"${buildStage}\" + ',{\"name\":\"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Create-release'] = '[{\"name\": \"checkout\"},{\"name\": \"create-branch\"},{\"name\": \"trigger-job\"}]'\n\ndef buildToolsOutOfTheBox = [\"maven\",\"npm\",\"gradle\",\"dotnet\",\"none\",\"go\",\"python\"]\ndef defaultStages = '[{\"name\": \"checkout\"}' + \"${createJIMStage}\" + ']'\n\ndef codebaseName = \"${NAME}\"\ndef gitServerCrName = \"${GIT_SERVER_CR_NAME}\"\ndef gitServerCrVersion = \"${GIT_SERVER_CR_VERSION}\"\ndef gitCredentialsId = \"${GIT_CREDENTIALS_ID ? 
GIT_CREDENTIALS_ID : 'gerrit-ciuser-sshkey'}\"\ndef repositoryPath = \"${REPOSITORY_PATH.replaceAll(~/:\\d+\\\\//,\"/\")}\"\ndef githubRepository = \"https://${repositoryPath.split(\"@\")[1]}\"\ndef defaultBranch = \"${DEFAULT_BRANCH}\"\n\ndef codebaseFolder = jenkins.getItem(codebaseName)\nif (codebaseFolder == null) {\n    folder(codebaseName)\n}\n\ncreateListView(codebaseName, \"Releases\")\ncreateReleasePipeline(\"Create-release-${codebaseName}\", codebaseName, stages[\"Create-release\"], \"CreateRelease\",\n        repositoryPath, gitCredentialsId, gitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, platformType, defaultBranch)\n\nif (buildTool.toString().equalsIgnoreCase('none')) {\n    return true\n}\n\nif (BRANCH) {\n    def branch = \"${BRANCH}\"\n    def formattedBranch = \"${branch.toUpperCase().replaceAll(/\\\\//, \"-\")}\"\ncreateListView(codebaseName, formattedBranch)\n\ndef type = \"${TYPE}\"\ndef supBuildTool = buildToolsOutOfTheBox.contains(buildTool.toString())\ndef crKey = getStageKeyName(buildTool).toString()\ncreateCodeReviewPipeline(\"Code-review-${codebaseName}\", codebaseName, stages.get(crKey, defaultStages), \"CodeReview\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion, githubRepository)\nregisterWebHook(repositoryPath)\n\n\ndef buildKey = \"Build-${type}-${buildTool.toLowerCase()}\".toString()\n\nif (type.equalsIgnoreCase('application') || type.equalsIgnoreCase('library') || type.equalsIgnoreCase('autotests')) {\ndef jobExists = false\nif(\"${formattedBranch}-Build-${codebaseName}\".toString() in Jenkins.instance.getAllItems().collect{it.name})\njobExists = true\ncreateBuildPipeline(\"Build-${codebaseName}\", codebaseName, stages.get(buildKey, defaultStages), \"Build\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion, githubRepository)\nregisterWebHook(repositoryPath, 'build')\n\nif(!jobExists)\nqueue(\"${codebaseName}/${formattedBranch}-Build-${codebaseName}\")\n}\n}\n\ndef getStageKeyName(buildTool) {\nif (buildTool.toString().equalsIgnoreCase('terraform')) {\nreturn \"Code-review-library-terraform\"\n}\nif (buildTool.toString().equalsIgnoreCase('opa')) {\nreturn \"Code-review-library-opa\"\n}\nif (buildTool.toString().equalsIgnoreCase('codenarc')) {\nreturn \"Code-review-library-codenarc\"\n}\nif (buildTool.toString().equalsIgnoreCase('kaniko')) {\nreturn \"Code-review-library-kaniko\"\n}\ndef buildToolsOutOfTheBox = [\"maven\",\"npm\",\"gradle\",\"dotnet\",\"none\",\"go\",\"python\"]\ndef supBuildTool = buildToolsOutOfTheBox.contains(buildTool.toString())\nreturn supBuildTool ? 
\"Code-review-${TYPE}\" : \"Code-review-default\"\n}\n\ndef createCodeReviewPipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId, defaultBranch, gitServerCrName, gitServerCrVersion, githubRepository) {\npipelineJob(\"${codebaseName}/${defaultBranch.toUpperCase().replaceAll(/\\\\//, \"-\")}-${pipelineName}\") {\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${codebaseStages}\", \"Consequence of stages in JSON format to be run during execution\")\nstringParam(\"GERRIT_PROJECT_NAME\", \"${codebaseName}\", \"Gerrit project name(Codebase name) to be build\")\nif (pipelineName.contains(\"Build\"))\nstringParam(\"BRANCH\", \"${defaultBranch}\", \"Branch to build artifact from\")\nelse\nstringParam(\"BRANCH\", \"\\${ghprbActualCommit}\", \"Branch to build artifact from\")\n}\n}\ntriggers {\ngithubPullRequest {\ncron('')\nonlyTriggerPhrase(false)\nuseGitHubHooks(true)\npermitAll(true)\nautoCloseFailedPullRequests(false)\ndisplayBuildErrorsOnDownstreamBuilds(false)\nwhiteListTargetBranches([defaultBranch.toString()])\nextensions {\ncommitStatus {\ncontext('Jenkins Code-Review')\ntriggeredStatus('Build is Triggered')\nstartedStatus('Build is Started')\naddTestResults(true)\ncompletedStatus('SUCCESS', 'Verified')\ncompletedStatus('FAILURE', 'Failed')\ncompletedStatus('PENDING', 'Penging')\ncompletedStatus('ERROR', 'Error')\n}\n}\n}\n}\nproperties {\ngithubProjectProperty {\nprojectUrlStr(\"${githubRepository}\")\n}\n}\n}\n}\n\ndef createBuildPipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId, defaultBranch, gitServerCrName, gitServerCrVersion, githubRepository) {\npipelineJob(\"${codebaseName}/${defaultBranch.toUpperCase().replaceAll(/\\\\//, \"-\")}-${pipelineName}\") {\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\nnode {\\n    git credentialsId: \\'${credId}\\', url: \\'${repository}\\', branch: \\'${BRANCH}\\'\\n}\\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${codebaseStages}\", \"Consequence of stages in JSON format to be run during execution\")\nstringParam(\"GERRIT_PROJECT_NAME\", \"${codebaseName}\", \"Gerrit project name(Codebase name) to be build\")\nstringParam(\"BRANCH\", \"${defaultBranch}\", \"Branch to run from\")\n}\n}\ntriggers {\ngitHubPushTrigger()\n}\nproperties {\ngithubProjectProperty {\nprojectUrlStr(\"${githubRepository}\")\n}\n}\n}\n}\n\n\ndef createListView(codebaseName, branchName) {\nlistView(\"${codebaseName}/${branchName}\") {\nif (branchName.toLowerCase() == \"releases\") {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^Create-release.*\")\n}\n}\n} else {\njobFilters {\nregex 
{\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^${branchName}-(Code-review|Build).*\")\n}\n}\n}\ncolumns {\nstatus()\nweather()\nname()\nlastSuccess()\nlastFailure()\nlastDuration()\nbuildButton()\n}\n}\n}\n\ndef createReleasePipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId,\ngitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, platformType, defaultBranch) {\npipelineJob(\"${codebaseName}/${pipelineName}\") {\nlogRotator {\nnumToKeep(14)\ndaysToKeep(30)\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"STAGES\", \"${codebaseStages}\", \"\")\nif (pipelineName.contains(\"Create-release\")) {\nstringParam(\"JIRA_INTEGRATION_ENABLED\", \"${jiraIntegrationEnabled}\", \"Is Jira integration enabled\")\nstringParam(\"PLATFORM_TYPE\", \"${platformType}\", \"Platform type\")\nstringParam(\"GERRIT_PROJECT\", \"${codebaseName}\", \"\")\nstringParam(\"RELEASE_NAME\", \"\", \"Name of the release(branch to be created)\")\nstringParam(\"COMMIT_ID\", \"\", \"Commit ID that will be used to create branch from for new release. If empty, DEFAULT_BRANCH will be used\")\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"REPOSITORY_PATH\", \"${repository}\", \"Full repository path\")\nstringParam(\"DEFAULT_BRANCH\", \"${defaultBranch}\", \"Default repository branch\")\n}\n}\n}\n}\n}\n\ndef registerWebHook(repositoryPath, type = 'code-review') {\ndef url = repositoryPath.split('@')[1].split('/')[0]\ndef owner = repositoryPath.split('@')[1].split('/')[1]\ndef repo = repositoryPath.split('@')[1].split('/')[2]\ndef apiUrl = 'https://api.' 
+ url + '/repos/' + owner + '/' + repo + '/hooks'\ndef webhookUrl = ''\ndef webhookConfig = [:]\ndef config = [:]\ndef events = []\n\nif (type.equalsIgnoreCase('build')) {\nwebhookUrl = System.getenv('JENKINS_UI_URL') + \"/github-webhook/\"\nevents = [\"push\"]\nconfig[\"url\"] = webhookUrl\nconfig[\"content_type\"] = \"json\"\nconfig[\"insecure_ssl\"] = 0\nwebhookConfig[\"name\"] = \"web\"\nwebhookConfig[\"config\"] = config\nwebhookConfig[\"events\"] = events\nwebhookConfig[\"active\"] = true\n\n} else {\nwebhookUrl = System.getenv('JENKINS_UI_URL') + \"/ghprbhook/\"\nevents = [\"issue_comment\",\"pull_request\"]\nconfig[\"url\"] = webhookUrl\nconfig[\"content_type\"] = \"form\"\nconfig[\"insecure_ssl\"] = 0\nwebhookConfig[\"name\"] = \"web\"\nwebhookConfig[\"config\"] = config\nwebhookConfig[\"events\"] = events\nwebhookConfig[\"active\"] = true\n}\n\ndef requestBody = JsonOutput.toJson(webhookConfig)\ndef http = new URL(apiUrl).openConnection() as HttpURLConnection\nhttp.setRequestMethod('POST')\nhttp.setDoOutput(true)\nprintln(apiUrl)\nhttp.setRequestProperty(\"Accept\", 'application/json')\nhttp.setRequestProperty(\"Content-Type\", 'application/json')\nhttp.setRequestProperty(\"Authorization\", \"token ${getSecretValue('github-access-token')}\")\nhttp.outputStream.write(requestBody.getBytes(\"UTF-8\"))\nhttp.connect()\nprintln(http.responseCode)\n\nif (http.responseCode == 201) {\nresponse = new JsonSlurper().parseText(http.inputStream.getText('UTF-8'))\n} else {\nresponse = new JsonSlurper().parseText(http.errorStream.getText('UTF-8'))\n}\n\nprintln \"response: ${response}\"\n}\n\ndef getSecretValue(name) {\ndef creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(\ncom.cloudbees.plugins.credentials.common.StandardCredentials.class,\nJenkins.instance,\nnull,\nnull\n)\n\ndef secret = creds.find { it.properties['id'] == name }\nreturn secret != null ? secret['secret'] : null\n}\n

                                                                  After the steps above are performed, the new custom job provisioner will be available in Advanced Settings during application creation in the EDP Portal UI:

                                                                  Github job provision

                                                                "},{"location":"operator-guide/manage-jenkins-ci-job-provision/#gitlab-gitlab","title":"GitLab (gitlab)","text":"

                                                                To create a new job provisioner to work with GitLab, take the following steps:

                                                                1. Navigate to the Jenkins main page and open the job-provisions/ci folder.

                                                                2. Click New Item and type the name of the job provisioner - gitlab.

                                                                3. Select the Freestyle project option and click OK.

                                                                4. Select the Discard old builds check box and configure a few parameters:

                                                                  Strategy: Log Rotation

                                                                  Days to keep builds: 10

                                                                  Max # of builds to keep: 10

                                                                5. Select the This project is parameterized check box and add the following input parameters (all of the string type):

                                                                  • NAME;
                                                                  • TYPE;
                                                                  • BUILD_TOOL;
                                                                  • BRANCH;
                                                                  • GIT_SERVER_CR_NAME;
                                                                  • GIT_SERVER_CR_VERSION;
                                                                  • GIT_SERVER;
                                                                  • GIT_SSH_PORT;
                                                                  • GIT_USERNAME;
                                                                  • GIT_CREDENTIALS_ID;
                                                                  • REPOSITORY_PATH;
                                                                  • JIRA_INTEGRATION_ENABLED;
                                                                  • PLATFORM_TYPE;
                                                                  • DEFAULT_BRANCH.
                                                                6. Check the Execute concurrent builds if necessary option.

                                                                7. Check the Restrict where this project can be run option.

                                                                8. Fill in the Label Expression field by typing master to ensure the job runs on the Jenkins master.

                                                                9. In the Build Steps section, perform the following:

                                                                  • Select Add build step;
                                                                  • Choose Process Job DSLs;
                                                                  • Select the Use the provided DSL script check box:

                                                                  DSL script check box

                                                                10. As soon as all the steps above are performed, insert the code:

                                                                  View: Template
                                                                  import groovy.json.*\nimport jenkins.model.Jenkins\n\nJenkins jenkins = Jenkins.instance\ndef stages = [:]\ndef jiraIntegrationEnabled = Boolean.parseBoolean(\"${JIRA_INTEGRATION_ENABLED}\" as String)\ndef commitValidateStage = jiraIntegrationEnabled ? ',{\"name\": \"commit-validate\"}' : ''\ndef createJIMStage = jiraIntegrationEnabled ? ',{\"name\": \"create-jira-issue-metadata\"}' : ''\ndef platformType = \"${PLATFORM_TYPE}\"\ndef buildTool = \"${BUILD_TOOL}\"\ndef buildImageStage = platformType == \"kubernetes\" ? ',{\"name\": \"build-image-kaniko\"},' : ',{\"name\": \"build-image-from-dockerfile\"},'\ndef goBuildImageStage = platformType == \"kubernetes\" ? ',{\"name\": \"build-image-kaniko\"}' : ',{\"name\": \"build-image-from-dockerfile\"}'\ndef goBuildStage = buildTool.toString() == \"go\" ? ',{\"name\": \"build\"}' : ',{\"name\": \"compile\"}'\n\nstages['Code-review-application'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" + goBuildStage +\n',{\"name\": \"tests\"},[{\"name\": \"sonar\"},{\"name\": \"dockerfile-lint\"},{\"name\": \"helm-lint\"}]]'\nstages['Code-review-library'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"compile\"},{\"name\": \"tests\"},' +\n'{\"name\": \"sonar\"}]'\nstages['Code-review-autotests'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"},{\"name\": \"sonar\"}' + ']'\nstages['Code-review-default'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" + ']'\nstages['Code-review-library-terraform'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"terraform-lint\"}]'\nstages['Code-review-library-opa'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"}]'\nstages['Code-review-library-codenarc'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"sonar\"},{\"name\": \"build\"}]'\nstages['Code-review-library-kaniko'] = '[{\"name\": \"checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"dockerbuild-verify\"}]'\n\nstages['Build-library-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-npm'] = stages['Build-library-maven']\nstages['Build-library-gradle'] = stages['Build-library-maven']\nstages['Build-library-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-terraform'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"terraform-lint\"}' +\n',{\"name\": \"terraform-plan\"},{\"name\": \"terraform-apply\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-opa'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"tests\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-codenarc'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sonar\"},{\"name\": 
\"build\"}' +\n\"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-kaniko'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"build-image-kaniko\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Build-autotests-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-autotests-gradle'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\nstages['Build-application-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sast\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildImageStage}\" +\n'{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},{\"name\": \"tests\"},{\"name\": \"sonar\"}' +\n\"${buildImageStage}\" + '{\"name\":\"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-npm'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' + \"${buildImageStage}\" +\n'{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-gradle'] = stages['Build-application-maven']\nstages['Build-application-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"}' + \"${buildImageStage}\" +\n'{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-terraform'] = '[{\"name\": \"checkout\"},{\"name\": \"tool-init\"},' +\n'{\"name\": \"lint\"},{\"name\": \"git-tag\"}]'\nstages['Build-application-helm'] = '[{\"name\": \"checkout\"},{\"name\": \"lint\"}]'\nstages['Build-application-docker'] = '[{\"name\": \"checkout\"},{\"name\": \"lint\"}]'\nstages['Build-application-go'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sast\"},{\"name\": \"tests\"},{\"name\": \"sonar\"},' +\n'{\"name\": \"build\"}' + \"${goBuildImageStage}\" + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Create-release'] = '[{\"name\": \"checkout\"},{\"name\": \"create-branch\"},{\"name\": \"trigger-job\"}]'\n\ndef defaultStages = '[{\"name\": \"checkout\"}' + \"${createJIMStage}\" + ']'\n\n\ndef codebaseName = \"${NAME}\"\ndef gitServerCrName = \"${GIT_SERVER_CR_NAME}\"\ndef gitServerCrVersion = \"${GIT_SERVER_CR_VERSION}\"\ndef gitServer = \"${GIT_SERVER ? GIT_SERVER : 'gerrit'}\"\ndef gitSshPort = \"${GIT_SSH_PORT ? GIT_SSH_PORT : '29418'}\"\ndef gitUsername = \"${GIT_USERNAME ? GIT_USERNAME : 'jenkins'}\"\ndef gitCredentialsId = \"${GIT_CREDENTIALS_ID ? GIT_CREDENTIALS_ID : 'gerrit-ciuser-sshkey'}\"\ndef defaultRepoPath = \"ssh://${gitUsername}@${gitServer}:${gitSshPort}/${codebaseName}\"\ndef repositoryPath = \"${REPOSITORY_PATH ? 
REPOSITORY_PATH : defaultRepoPath}\"\ndef defaultBranch = \"${DEFAULT_BRANCH}\"\n\ndef codebaseFolder = jenkins.getItem(codebaseName)\nif (codebaseFolder == null) {\nfolder(codebaseName)\n}\n\ncreateListView(codebaseName, \"Releases\")\ncreateReleasePipeline(\"Create-release-${codebaseName}\", codebaseName, stages[\"Create-release\"], \"CreateRelease\",\nrepositoryPath, gitCredentialsId, gitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, platformType, defaultBranch)\n\nif (BRANCH) {\ndef branch = \"${BRANCH}\"\ndef formattedBranch = \"${branch.toUpperCase().replaceAll(/\\\\//, \"-\")}\"\ncreateListView(codebaseName, formattedBranch)\n\ndef type = \"${TYPE}\"\ndef crKey = getStageKeyName(buildTool).toString()\ncreateCiPipeline(\"Code-review-${codebaseName}\", codebaseName, stages.get(crKey, defaultStages), \"CodeReview\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion)\n\ndef buildKey = \"Build-${type}-${buildTool.toLowerCase()}\".toString()\n\nif (type.equalsIgnoreCase('application') || type.equalsIgnoreCase('library') || type.equalsIgnoreCase('autotests')) {\ndef jobExists = false\nif(\"${formattedBranch}-Build-${codebaseName}\".toString() in Jenkins.instance.getAllItems().collect{it.name}) {\njobExists = true\n}\ncreateCiPipeline(\"Build-${codebaseName}\", codebaseName, stages.get(buildKey, defaultStages), \"Build\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion)\nif(!jobExists) {\nqueue(\"${codebaseName}/${formattedBranch}-Build-${codebaseName}\")\n}\n}\n}\n\n\ndef createCiPipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId, defaultBranch, gitServerCrName, gitServerCrVersion) {\ndef jobName = \"${defaultBranch.toUpperCase().replaceAll(/\\\\//, \"-\")}-${pipelineName}\"\ndef existingJob = Jenkins.getInstance().getItemByFullName(\"${codebaseName}/${jobName}\")\ndef webhookToken = null\nif (existingJob) {\ndef triggersMap = existingJob.getTriggers()\ntriggersMap.each { key, value ->\nwebhookToken = value.getSecretToken()\n}\n} else {\ndef random = new byte[16]\nnew java.security.SecureRandom().nextBytes(random)\nwebhookToken = random.encodeHex().toString()\n}\npipelineJob(\"${codebaseName}/${jobName}\") {\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\nproperties {\ngitLabConnection {\ngitLabConnection('gitlab')\n}\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${codebaseStages}\", \"Consequence of stages in JSON format to be run during execution\")\nstringParam(\"GERRIT_PROJECT_NAME\", \"${codebaseName}\", \"Gerrit project name(Codebase name) to be build\")\nif (pipelineName.contains(\"Build\"))\nstringParam(\"BRANCH\", \"${defaultBranch}\", \"Branch to build artifact from\")\nelse\nstringParam(\"BRANCH\", \"\\${gitlabMergeRequestLastCommit}\", \"Branch to build artifact from\")\n}\n}\ntriggers {\ngitlabPush {\nbuildOnMergeRequestEvents(pipelineName.contains(\"Build\") ? false : true)\nbuildOnPushEvents(pipelineName.contains(\"Build\") ? true : false)\nenableCiSkip(false)\nsetBuildDescription(true)\nrebuildOpenMergeRequest(pipelineName.contains(\"Build\") ? 
'never' : 'source')\ncommentTrigger(\"Build it please\")\nskipWorkInProgressMergeRequest(true)\ntargetBranchRegex(\"${defaultBranch}\")\n}\n}\nconfigure {\nit / triggers / 'com.dabsquared.gitlabjenkins.GitLabPushTrigger' << secretToken(webhookToken)\nit / triggers / 'com.dabsquared.gitlabjenkins.GitLabPushTrigger' << triggerOnApprovedMergeRequest(pipelineName.contains(\"Build\") ? false : true)\nit / triggers / 'com.dabsquared.gitlabjenkins.GitLabPushTrigger' << pendingBuildName(pipelineName.contains(\"Build\") ? \"\" : \"Jenkins\")\n}\n}\nregisterWebHook(repository, codebaseName, jobName, webhookToken)\n}\n\ndef getStageKeyName(buildTool) {\nif (buildTool.toString().equalsIgnoreCase('terraform')) {\nreturn \"Code-review-library-terraform\"\n}\nif (buildTool.toString().equalsIgnoreCase('opa')) {\nreturn \"Code-review-library-opa\"\n}\nif (buildTool.toString().equalsIgnoreCase('codenarc')) {\nreturn \"Code-review-library-codenarc\"\n}\nif (buildTool.toString().equalsIgnoreCase('kaniko')) {\nreturn \"Code-review-library-kaniko\"\n}\ndef buildToolsOutOfTheBox = [\"maven\",\"npm\",\"gradle\",\"dotnet\",\"none\",\"go\",\"python\"]\ndef supBuildTool = buildToolsOutOfTheBox.contains(buildTool.toString())\nreturn supBuildTool ? \"Code-review-${TYPE}\" : \"Code-review-default\"\n}\n\ndef createReleasePipeline(pipelineName, codebaseName, codebaseStages, pipelineType, repository, credId,\ngitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, platformType, defaultBranch) {\npipelineJob(\"${codebaseName}/${pipelineName}\") {\nlogRotator {\nnumToKeep(14)\ndaysToKeep(30)\n}\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\n${pipelineType}()\")\nsandbox(true)\n}\nparameters {\nstringParam(\"STAGES\", \"${codebaseStages}\", \"\")\nif (pipelineName.contains(\"Create-release\")) {\nstringParam(\"JIRA_INTEGRATION_ENABLED\", \"${jiraIntegrationEnabled}\", \"Is Jira integration enabled\")\nstringParam(\"PLATFORM_TYPE\", \"${platformType}\", \"Platform type\")\nstringParam(\"GERRIT_PROJECT\", \"${codebaseName}\", \"\")\nstringParam(\"RELEASE_NAME\", \"\", \"Name of the release(branch to be created)\")\nstringParam(\"COMMIT_ID\", \"\", \"Commit ID that will be used to create branch from for new release. 
If empty, DEFAULT_BRANCH will be used\")\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"REPOSITORY_PATH\", \"${repository}\", \"Full repository path\")\nstringParam(\"DEFAULT_BRANCH\", \"${defaultBranch}\", \"Default repository branch\")\n}\n}\n}\n}\n}\n\ndef createListView(codebaseName, branchName) {\nlistView(\"${codebaseName}/${branchName}\") {\nif (branchName.toLowerCase() == \"releases\") {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^Create-release.*\")\n}\n}\n} else {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^${branchName}-(Code-review|Build).*\")\n}\n}\n}\ncolumns {\nstatus()\nweather()\nname()\nlastSuccess()\nlastFailure()\nlastDuration()\nbuildButton()\n}\n}\n}\n\ndef registerWebHook(repositoryPath, codebaseName, jobName, webhookToken) {\ndef apiUrl = 'https://' + repositoryPath.replaceAll(\"ssh://\", \"\").split('@')[1].replace('/', \"%2F\").replaceAll(~/:\\d+%2F/, '/api/v4/projects/') + '/hooks'\ndef jobWebhookUrl = \"${System.getenv('JENKINS_UI_URL')}/project/${codebaseName}/${jobName}\"\ndef gitlabToken = getSecretValue('gitlab-access-token')\n\nif (checkWebHookExist(apiUrl, jobWebhookUrl, gitlabToken)) {\nprintln(\"[JENKINS][DEBUG] Webhook for job ${jobName} is already exist\\r\\n\")\nreturn\n}\n\nprintln(\"[JENKINS][DEBUG] Creating webhook for job ${jobName}\")\ndef webhookConfig = [:]\nwebhookConfig[\"url\"] = jobWebhookUrl\nwebhookConfig[\"push_events\"] = jobName.contains(\"Build\") ? \"true\" : \"false\"\nwebhookConfig[\"merge_requests_events\"] = jobName.contains(\"Build\") ? \"false\" : \"true\"\nwebhookConfig[\"issues_events\"] = \"false\"\nwebhookConfig[\"confidential_issues_events\"] = \"false\"\nwebhookConfig[\"tag_push_events\"] = \"false\"\nwebhookConfig[\"note_events\"] = \"true\"\nwebhookConfig[\"job_events\"] = \"false\"\nwebhookConfig[\"pipeline_events\"] = \"false\"\nwebhookConfig[\"wiki_page_events\"] = \"false\"\nwebhookConfig[\"enable_ssl_verification\"] = \"true\"\nwebhookConfig[\"token\"] = webhookToken\ndef requestBody = JsonOutput.toJson(webhookConfig)\ndef httpConnector = new URL(apiUrl).openConnection() as HttpURLConnection\nhttpConnector.setRequestMethod('POST')\nhttpConnector.setDoOutput(true)\n\nhttpConnector.setRequestProperty(\"Accept\", 'application/json')\nhttpConnector.setRequestProperty(\"Content-Type\", 'application/json')\nhttpConnector.setRequestProperty(\"PRIVATE-TOKEN\", \"${gitlabToken}\")\nhttpConnector.outputStream.write(requestBody.getBytes(\"UTF-8\"))\nhttpConnector.connect()\n\nif (httpConnector.responseCode == 201)\nprintln(\"[JENKINS][DEBUG] Webhook for job ${jobName} has been created\\r\\n\")\nelse {\nprintln(\"[JENKINS][ERROR] Responce code - ${httpConnector.responseCode}\")\ndef response = new JsonSlurper().parseText(httpConnector.errorStream.getText('UTF-8'))\nprintln(\"[JENKINS][ERROR] Failed to create webhook for job ${jobName}. 
Response - ${response}\")\n}\n}\n\ndef checkWebHookExist(apiUrl, jobWebhookUrl, gitlabToken) {\nprintln(\"[JENKINS][DEBUG] Checking if webhook ${jobWebhookUrl} exists\")\ndef httpConnector = new URL(apiUrl).openConnection() as HttpURLConnection\nhttpConnector.setRequestMethod('GET')\nhttpConnector.setDoOutput(true)\n\nhttpConnector.setRequestProperty(\"Accept\", 'application/json')\nhttpConnector.setRequestProperty(\"Content-Type\", 'application/json')\nhttpConnector.setRequestProperty(\"PRIVATE-TOKEN\", \"${gitlabToken}\")\nhttpConnector.connect()\n\nif (httpConnector.responseCode == 200) {\ndef response = new JsonSlurper().parseText(httpConnector.inputStream.getText('UTF-8'))\nreturn response.find { it.url == jobWebhookUrl } ? true : false\n}\n}\n\ndef getSecretValue(name) {\ndef creds = com.cloudbees.plugins.credentials.CredentialsProvider.lookupCredentials(\ncom.cloudbees.plugins.credentials.common.StandardCredentials.class,\nJenkins.instance,\nnull,\nnull\n)\n\ndef secret = creds.find { it.properties['id'] == name }\nreturn secret != null ? secret['secret'] : null\n}\n

                                                                  After the steps above are performed, the new custom job-provision will be available in Advanced Settings during the application creation in the EDP Portal UI:

                                                                  Gitlab job provision

                                                                "},{"location":"operator-guide/manage-jenkins-ci-job-provision/#related-articles","title":"Related Articles","text":"
                                                                • CI Pipeline for Container
                                                                • GitLab Webhook Configuration
                                                                • GitHub Webhook Configuration
                                                                • Integrate GitHub/GitLab in Jenkins
                                                                • Integrate GitHub/GitLab in Tekton
                                                                "},{"location":"operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/","title":"Migrate CI Pipelines From Jenkins to Tekton","text":"

                                                                To migrate the CI pipelines for a codebase from Jenkins to Tekton, follow the steps below:

                                                                • Migrate CI Pipelines From Jenkins to Tekton
                                                                • Deploy a Custom EDP Scenario With Tekton and Jenkins CI Tools
                                                                • Disable Jenkins Triggers
                                                                • Manage Tekton Triggers the Codebase(s)
                                                                • Switch CI Tool for Codebase(s)
                                                                "},{"location":"operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/#deploy-a-custom-edp-scenario-with-tekton-and-jenkins-ci-tools","title":"Deploy a Custom EDP Scenario With Tekton and Jenkins CI Tools","text":"

                                                                 Make sure that the Tekton stack is deployed according to the documentation. Enable Tekton as an EDP subcomponent:

                                                                values.yaml
                                                                edp-tekton:\nenabled: true\n
                                                                "},{"location":"operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/#disable-jenkins-triggers","title":"Disable Jenkins Triggers","text":"

                                                                To disable Jenkins Triggers for the codebase, add the following code to the provisioner:

                                                                job-provisioner
                                                                def tektonCodebaseList = [\"<codebase_name>\"]\nif (!tektonCodebaseList.contains(codebaseName.toString())){\ntriggers {\ngerrit {\nevents {\nif (pipelineName.contains(\"Build\"))\nchangeMerged()\nelse\npatchsetCreated()\n}\nproject(\"plain:${codebaseName}\", [\"plain:${watchBranch}\"])\n}\n}\n}\n

                                                                Note

                                                                The sample above shows the usage of Gerrit VCS where the <codebase_name> value is your codebase name.

                                                                 • If using GitHub or GitLab, additionally remove the webhook from the relevant repository (see the API sketch after this list).
                                                                 • If webhook generation for new codebase(s) is not required, adjust the code above so that the job-provisioner also skips webhook creation for those codebases.
                                                                • To recreate the pipeline in Jenkins, trigger the job-provisioner.
                                                                • Check that the new pipeline is created without triggering Gerrit events.
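
                                                                 For the GitHub or GitLab case mentioned above, the webhook can also be removed through the GitLab API. The commands below are only a sketch; the host, project ID, hook ID, and access token are placeholders, not values defined in this guide:
                                                                 # List the project hooks to find the Jenkins webhook ID\ncurl -H \"PRIVATE-TOKEN: <gitlab-access-token>\" \"https://<gitlab-host>/api/v4/projects/<project-id>/hooks\"\n# Delete the webhook by its ID\ncurl -X DELETE -H \"PRIVATE-TOKEN: <gitlab-access-token>\" \"https://<gitlab-host>/api/v4/projects/<project-id>/hooks/<hook-id>\"\n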
                                                                "},{"location":"operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/#manage-tekton-triggers-the-codebases","title":"Manage Tekton Triggers the Codebase(s)","text":"

                                                                By default, each Gerrit project inherits configuration from the All-Projects repository.

                                                                 To avoid triggering the Jenkins and Tekton CI tools simultaneously, edit the configuration in the All-Projects repository or in the parent project from which your project inherits rights.

                                                                 Edit the webhooks.config file in the refs/meta/config ref and remove all content from this configuration.

                                                                Warning

                                                                 Clearing the webhooks.config file will disable the pipeline trigger in Tekton.

                                                                To use Tekton pipelines, add the configuration to the corresponding Gerrit project (webhooks.config file in the refs/meta/config):

                                                                webhooks.config
                                                                [remote \"changemerged\"]\nurl = http://el-gerrit-listener:8080\nevent = change-merged\n[remote \"patchsetcreated\"]\nurl = http://el-gerrit-listener:8080\nevent = patchset-created\n
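
                                                                 If it is more convenient to edit webhooks.config locally, the refs/meta/config ref can be fetched and pushed back with git. This is only a sketch; the Gerrit SSH URL, port, and user mirror the defaults used earlier in this guide and should be adjusted to your environment:
                                                                 git clone ssh://jenkins@gerrit:29418/<codebase_name> && cd <codebase_name>\n# Fetch and check out the project configuration ref\ngit fetch origin refs/meta/config:refs/remotes/origin/meta-config\ngit checkout meta-config\n# Edit webhooks.config, then stage, commit, and push the change back to Gerrit\ngit add webhooks.config\ngit commit -m \"Update webhooks.config\"\ngit push origin HEAD:refs/meta/config\n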
                                                                "},{"location":"operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/#switch-ci-tool-for-codebases","title":"Switch CI Tool for Codebase(s)","text":"

                                                                Go to the codebase Custom Resource and change the spec.ciTool field from jenkins to tekton.
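
                                                                 This can also be done with kubectl; a minimal sketch, assuming the codebase resource lives in the edp namespace:
                                                                 kubectl -n edp patch codebase <codebase_name> --type merge -p '{\"spec\":{\"ciTool\":\"tekton\"}}'\n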

                                                                "},{"location":"operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/#related-articles","title":"Related Articles","text":"
                                                                • Install EDP
                                                                • Install Tekton
                                                                "},{"location":"operator-guide/multitenant-logging/","title":"Multitenant Logging","text":"

                                                                Get acquainted with the multitenant logging components and the project logs location in the Shared cluster.

                                                                "},{"location":"operator-guide/multitenant-logging/#logging-components","title":"Logging Components","text":"

                                                                To configure the multitenant logging, it is necessary to deploy the following components:

                                                                • Grafana
                                                                • Loki
                                                                • Logging-operator
                                                                • Logging-operator stack-fluentbit

                                                                 In Grafana, every tenant represents an organization, i.e., it is necessary to create an organization for every namespace in the cluster. To get more details regarding the architecture of the Logging Operator, please review Diagram 1.

                                                                Logging operator scheme
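
                                                                 Organizations can be created in advance through the Grafana HTTP API. The command below is only an illustration; the Grafana host and admin credentials are placeholders:
                                                                 # Create one Grafana organization per project namespace\ncurl -X POST -u admin:<admin-password> -H \"Content-Type: application/json\" -d '{\"name\": \"<project-namespace>\"}' https://<grafana-host>/api/orgs\n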

                                                                Note

                                                                 Deploy Loki with the auth_enabled: true flag to ensure that the logs are separated for each tenant. For authentication, Loki requires the X-Scope-OrgID HTTP header.
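
                                                                 For reference, a direct query against a multi-tenant Loki instance must carry this header. A sketch, assuming the default Loki HTTP port 3100; the service address and tenant name are placeholders:
                                                                 curl -G -H \"X-Scope-OrgID: <tenant-name>\" \"http://<loki-service>:3100/loki/api/v1/query\" --data-urlencode 'query={namespace=\"<project-namespace>\"}'\n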

                                                                "},{"location":"operator-guide/multitenant-logging/#review-project-logs-in-grafana","title":"Review Project Logs in Grafana","text":"

                                                                To find the project logs, navigate to Grafana and follow the steps below:

                                                                Note

                                                                 Grafana is a shared service for different customers: each customer works in its own separate Grafana organization and does not have access to other projects.

                                                                1. Choose the organization by clicking the Current Organization drop-down list. If a user is assigned to several organizations, switch easily by using the Switch button.

                                                                  Current organization

                                                                2. Navigate to the left-side menu and click the Explore button to see the Log Browser:

                                                                  Grafana explore

                                                                3. Click the Log Browser button to see the labels that can be used to filter logs (e.g., hostname, namespace, application name, pod, etc.):

                                                                  Note

                                                                   Enable the correct data source, select the relevant logging data in the top left-side corner, and note that the data source name always follows the \u2039project_name\u203a-logging pattern.

                                                                  Log browser

                                                                4. Filter out logs by clicking the Show logs button or write the query and click the Run query button.

                                                                5. Review the results with the quantity of logs per time, see the example below:

                                                                  Logs example

                                                                  • Expand the logs to get detailed information about the object entry:

                                                                  Expand logs

                                                                  • Use the following buttons to include or remove the labels from the query:

                                                                  Addition button

                                                                  • See the ad-hoc statistics for a particular label:

                                                                  Ad-hoc stat example

                                                                "},{"location":"operator-guide/multitenant-logging/#related-articles","title":"Related Articles","text":"
                                                                • Grafana Documentation
                                                                "},{"location":"operator-guide/namespace-management/","title":"Manage Namespace","text":"

                                                                EDP provides the ability to deploy services to namespaces. By default, EDP creates these namespaces automatically. This chapter describes the alternative way of namespace creation and management.

                                                                "},{"location":"operator-guide/namespace-management/#overview","title":"Overview","text":"

                                                                Namespaces are typically created by the platform when running CD Pipelines. The operator creates them according to the specific format: edp-<application-name>-<stage-name>. The cd-pipeline-operator should have the permissions to automatically create namespaces when deploying applications and delete them when uninstalling applications.
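
                                                                 For example, a hypothetical application my-app deployed to the dev stage would get the namespace shown below:
                                                                 # Hypothetical example: application \"my-app\", stage \"dev\"\nkubectl get namespace edp-my-app-dev\n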

                                                                "},{"location":"operator-guide/namespace-management/#disable-automatic-namespace-creation","title":"Disable Automatic Namespace Creation","text":"

                                                                 Occasionally, automatic creation of namespaces is not allowed, for example, due to project security requirements, so an EDP user may need to disable this setting. This behavior is controlled by the manageNamespace parameter in the values.yaml file; it is set to true by default but can be changed to false. After setting manageNamespace to false, users will no longer be able to deploy their applications in the EDP Portal UI because of permission restrictions:

                                                                Namespace creation error

                                                                 The error message shown above means that the user needs to create the namespace in the edp-<application-name>-<stage-name> format before creating stages. In addition, the cd-pipeline-operator must be granted administrator permissions to be able to manage this namespace. The manual namespace creation procedure is the same regardless of whether the Jenkins or Tekton deployment scenario is used. To create a namespace manually, follow the steps below:

                                                                1. Create the namespace by running the command below:

                                                                   kubectl create namespace edp-<pipelineName>-<stageName>\n
                                                                2. Create the administrator RoleBinding resource by applying the file below with the kubectl apply -f grant_admin_permissions.yaml command:

                                                                  View: grant_admin_permissions.yaml
                                                                   kind: RoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\nname: edp-cd-pipeline-operator-admin\nnamespace: edp-<pipelineName>-<stageName>\nsubjects:\n- kind: ServiceAccount\nname: edp-cd-pipeline-operator\nnamespace: edp\nroleRef:\napiGroup: rbac.authorization.k8s.io\nkind: ClusterRole\nname: admin\n
                                                                 3. Restart the cd-pipeline-operator pod so that you do not have to wait for the operator reconciliation.
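
                                                                 A restart sketch, assuming the operator runs as a Deployment named cd-pipeline-operator in the edp namespace (adjust the names to your installation):
                                                                 kubectl -n edp rollout restart deployment cd-pipeline-operator\n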

                                                                "},{"location":"operator-guide/namespace-management/#cd-pipeline-operator-rbac-model","title":"CD Pipeline Operator RBAC Model","text":"

                                                                 The manageNamespace parameter also defines the resources that will be created depending on the cluster type, whether it is OpenShift or Kubernetes. The following scheme displays the nesting of the operator input parameters:

                                                                CD Pipeline Operator Input Parameter Scheme

                                                                Note

                                                                 When deploying an application on the OpenShift cluster, the registry-view RoleBinding is created in the main namespace.

                                                                "},{"location":"operator-guide/namespace-management/#related-articles","title":"Related Articles","text":"
                                                                • EDP Access Model
                                                                • EKS OIDC With Keycloak
                                                                "},{"location":"operator-guide/nexus-sonatype/","title":"Nexus Sonatype Integration","text":"

                                                                This documentation guide provides comprehensive instructions for integrating Nexus with the EPAM Delivery Platform.

                                                                Info

                                                                 In EDP release 3.5, we have changed the deployment strategy for the nexus-operator component; it is no longer installed by default. The nexusURL parameter management has been transferred from the values.yaml file to Kubernetes secrets.

                                                                "},{"location":"operator-guide/nexus-sonatype/#prerequisites","title":"Prerequisites","text":"

                                                                Before proceeding, ensure that you have the following prerequisites:

                                                                • Kubectl version 1.26.0 is installed.
                                                                • Helm version 3.12.0+ is installed.
                                                                "},{"location":"operator-guide/nexus-sonatype/#installation","title":"Installation","text":"

                                                                To install Nexus with pre-defined templates, use the nexus-operator installed via Cluster Add-Ons approach.

                                                                "},{"location":"operator-guide/nexus-sonatype/#configuration","title":"Configuration","text":"

                                                                To ensure strong authentication and accurate access control, creating a Nexus Sonatype service account with the name ci.user is crucial. This user serves as a unique identifier, facilitating connection with the EDP ecosystem.

                                                                 To create the Nexus ci.user and define repository parameters, follow the steps below:

                                                                1. Open the Nexus UI and navigate to Server administration and configuration -> Security -> User. Click the Create local user button to create a new user:

                                                                  Nexus user settings

                                                                 2. Type the ci-user username, complete the other required fields, and create the user:

                                                                  Nexus create user

                                                                 3. EDP relies on a predetermined repository naming convention, so all repository names are predefined. Navigate to Server administration and configuration -> Repository -> Repositories in Nexus. Create only the repositories required for your project language.

                                                                  Nexus repository list

                                                                   Java | JavaScript | Dotnet | Python

                                                                  a) Click Create a repository by selecting \"maven2(proxy)\" and set the name as \"edp-maven-proxy\". Enter the remote storage URL as \"https://repo1.maven.org/maven2/\". Save the configuration.

                                                                  b) Click Create a repository by selecting \"maven2(hosted)\" and set the name as \"edp-maven-snapshot\". Change the Version policy to \"snapshot\". Save the configuration.

                                                                  c) Click Create a repository by selecting \"maven2(hosted)\" and set the name as \"edp-maven-releases\". Change the Version policy to \"release\". Save the configuration.

                                                                  d) Click Create a repository by selecting \"maven2(group)\" and set the name as \"edp-maven-group\". Change the Version policy to \"release\". Add repository to group. Save the configuration.

                                                                  a) Click Create a repository by selecting \"npm(proxy)\" and set the name as \"edp-npm-proxy\". Enter the remote storage URL as \"https://registry.npmjs.org\". Save the configuration.

                                                                  b) Click Create a repository by selecting \"npm(hosted)\" and set the name as \"edp-npm-snapshot\". Save the configuration.

                                                                  c) Click Create a repository by selecting \"npm(hosted)\" and set the name as \"edp-npm-releases\". Save the configuration.

                                                                  d) Click Create a repository by selecting \"npm(hosted)\" and set the name as \"edp-npm-hosted\". Save the configuration.

                                                                  e) Click Create a repository by selecting \"npm(group)\" and set the name as \"edp-npm-group\". Add repository to group. Save the configuration.

                                                                  a) Click Create a repository by selecting \"nuget(proxy)\" and set the name as \"edp-dotnet-proxy\". Select Protocol version NuGet V3. Enter the remote storage URL as \"https://api.nuget.org/v3/index.json\". Save the configuration.

                                                                  b) Click Create a repository by selecting \"nuget(hosted)\" and set the name as \"edp-dotnet-snapshot\". Save the configuration.

                                                                  c) Click Create a repository by selecting \"nuget(hosted)\" and set the name as \"edp-dotnet-releases\". Save the configuration.

                                                                  d) Click Create a repository by selecting \"nuget(hosted)\" and set the name as \"edp-dotnet-hosted\". Save the configuration.

                                                                  e) Click Create a repository by selecting \"nuget(group)\" and set the name as \"edp-dotnet-group\". Add repository to group. Save the configuration.

                                                                  a) Click Create a repository by selecting \"pypi(proxy)\" and set the name as \"edp-python-proxy\". Enter the remote storage URL as \"https://pypi.org\". Save the configuration.

                                                                  b) Click Create a repository by selecting \"pypi(hosted)\" and set the name as \"edp-python-snapshot\". Save the configuration.

                                                                  c) Click Create a repository by selecting \"pypi(hosted)\" and set the name as \"edp-python-releases\". Save the configuration.

                                                                  d) Click Create a repository by selecting \"pypi(group)\" and set the name as \"edp-python-group\". Add repository to group. Save the configuration.

                                                                 4. Provision secrets using a manifest, the EDP Portal, or the External Secrets Operator:

                                                                 EDP Portal | Manifest | External Secrets Operator

                                                                Go to EDP Portal -> EDP -> Configuration -> Nexus. Update or fill in the URL, nexus-user-id, nexus-user-password and click the Save button:

                                                                Nexus update manual secret

                                                                apiVersion: v1\nkind: Secret\nmetadata:\nname: ci-nexus\nnamespace: edp\nlabels:\napp.edp.epam.com/secret-type: nexus\ntype: Opaque\nstringData:\nurl: https://nexus.example.com\nusername: <nexus-user-id>\npassword: <nexus-user-password>\n
                                                                \"ci-nexus\":\n{\n\"url\": \"https://nexus.example.com\",\n\"username\": \"XXXXXXX\",\n\"password\": \"XXXXXXX\"\n},\n

                                                                Go to EDP Portal -> EDP -> Configuration -> Nexus and see Managed by External Secret message.

                                                                Nexus managed by external secret operator

                                                                 More details about the External Secrets Operator integration can be found on the following page.

                                                                "},{"location":"operator-guide/nexus-sonatype/#related-articles","title":"Related Articles","text":"
                                                                • Install EDP With Values File
                                                                • Install External Secrets Operator
                                                                • External Secrets Operator Integration
                                                                • Cluster Add-Ons Overview
                                                                "},{"location":"operator-guide/notification-msteams/","title":"Microsoft Teams Notification","text":"

                                                                This section describes how to set up and add notification status to Tekton pipelines by sending pipeline status to the Microsoft Teams channel.

                                                                "},{"location":"operator-guide/notification-msteams/#create-incoming-webhook","title":"Create Incoming WebHook","text":"

                                                                To create a link to Incoming Webhook for the Microsoft Teams channel, follow the steps below:

                                                                1. Open the channel which will be receiving notifications and click the \u2022\u2022\u2022 button from the upper-right corner. Select Connectors in the dropdown menu: Microsoft Teams menu

                                                                2. In the search field, type Incoming Webhook and click Configure: Connectors

                                                                3. Provide a name and upload an image for the webhook if necessary. Click Create: Connectors setup

                                                                4. Copy and save the unique WebHookURL presented in the dialog. Click Done: WebHookURL

                                                                 5. Create a secret with the webhook URL within the edp namespace:

                                                                  kubectl -n edp create secret generic microsoft-teams-webhook-url \\\n--from-literal=url=<webhookURL>\n

                                                                 6. Add the notification task to the pipeline by adding the code below to the finally block of the pipeline, and save:

                                                                {{ include \"send-to-microsoft-teams-build\" . | nindent 4 }}\n
                                                                "},{"location":"operator-guide/notification-msteams/#customize-notification-message","title":"Customize Notification Message","text":"

                                                                 To make the notification message informative, add relevant text to the message. Here are the steps to implement it:

                                                                1. Create a new pipeline with a unique name or modify your custom pipeline created before.

                                                                2. Add the task below in the finally block with a unique name. Edit the params.message value if necessary:

                                                                View: Task send-to-microsoft-teams
                                                                 - name: 'microsoft-teams-pipeline-status-notification-failed'\nparams:\n- name: webhook-url-secret\nvalue: microsoft-teams-webhook-url\n- name: webhook-url-secret-key\nvalue: url\n- name: message\nvalue: >-\nBuild Failed project: $(params.CODEBASE_NAME)<br> branch: $(params.git-source-revision)<br> pipeline: <a href=$(params.pipelineUrl)>$(context.pipelineRun.name)</a><br> commit message: $(params.COMMIT_MESSAGE)\ntaskRef:\nkind: Task\nname: send-to-microsoft-teams\nwhen:\n- input: $(tasks.status)\noperator: in\nvalues:\n- Failed\n- PipelineRunTimeout\n

                                                                 After customization, the following message is supposed to appear in the channel when a pipeline fails:

                                                                Notification example

                                                                "},{"location":"operator-guide/notification-msteams/#related-articles","title":"Related Articles","text":"
                                                                • Install EDP
                                                                • Install Tekton
                                                                "},{"location":"operator-guide/oauth2-proxy/","title":"Protect Endpoints","text":"

                                                                OAuth2-Proxy is a versatile tool that serves as a reverse proxy, utilizing the OAuth 2.0 protocol with various providers like Google, GitHub, and Keycloak to provide both authentication and authorization. This guide instructs readers on how to protect their applications' endpoints using OAuth2-Proxy. By following these steps, users can strengthen their endpoints' security without modifying their current application code. In the context of EDP, it has integration with the Keycloak OIDC provider, enabling it to link with any component that lacks built-in authentication.

                                                                Note

                                                                OAuth2-Proxy is disabled by default when installing EDP.

                                                                "},{"location":"operator-guide/oauth2-proxy/#prerequisites","title":"Prerequisites","text":"
                                                                • Keycloak with OIDC authentication is installed.
                                                                "},{"location":"operator-guide/oauth2-proxy/#enable-oauth2-proxy","title":"Enable OAuth2-Proxy","text":"

                                                                Enabling OAuth2-Proxy implies the following general steps:

                                                                 1. Update your EDP deployment using the --set 'oauth2_proxy.enabled=true' option, or enable the oauth2_proxy parameter in the --values file.
                                                                2. Check that OAuth2-Proxy is deployed successfully.
                                                                 3. Enable authentication for your Ingress by adding the auth-signin and auth-url values of OAuth2-Proxy to its annotations.

                                                                This will deploy and connect OAuth2-Proxy to your application endpoint.
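
                                                                 For reference, the resulting Ingress annotations might look like the sketch below; the host names and the OAuth2-Proxy service address are placeholders (the Tekton dashboard example in the next section shows the equivalent kubectl command):
                                                                 metadata:\nannotations:\nnginx.ingress.kubernetes.io/auth-signin: 'https://<oauth-ingress-host>/oauth2/start?rd=https://$host$request_uri'\nnginx.ingress.kubernetes.io/auth-url: 'http://oauth2-proxy.edp.svc.cluster.local:8080/oauth2/auth'\n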

                                                                "},{"location":"operator-guide/oauth2-proxy/#enable-oauth2-proxy-on-tekton-dashboard","title":"Enable OAuth2-Proxy on Tekton Dashboard","text":"

                                                                The example below illustrates how to use OAuth2-Proxy in practice when using the Tekton dashboard:

                                                                 Kubernetes | Openshift
                                                                1. Run helm upgrade to update edp-install release:
                                                                  helm upgrade --version <version> --set 'oauth2_proxy.enabled=true' edp-install --namespace edp\n
                                                                2. Check that OAuth2-Proxy is deployed successfully.
                                                                 3. Edit the Tekton dashboard Ingress annotations by adding the auth-signin and auth-url values of oauth2-proxy with the kubectl command:
                                                                  kubectl annotate ingress <application-ingress-name> nginx.ingress.kubernetes.io/auth-signin='https://<oauth-ingress-host>/oauth2/start?rd=https://$host$request_uri' nginx.ingress.kubernetes.io/auth-url='http://oauth2-proxy.edp.svc.cluster.local:8080/oauth2/auth'\n
                                                                1. Generate a cookie-secret for proxy with the following command:
                                                                  tekton_dashboard_cookie_secret=$(openssl rand -base64 32 | head -c 32)\n
                                                                2. Create tekton-dashboard-proxy-cookie-secret in the edp namespace:
                                                                  kubectl -n edp create secret generic tekton-dashboard-proxy-cookie-secret \\\n--from-literal=cookie-secret=${tekton_dashboard_cookie_secret}\n
                                                                3. Run helm upgrade to update edp-install release:
                                                                  helm upgrade --version <version> --set 'edp-tekton.dashboard.openshift_proxy.enabled=true' edp-install --namespace edp\n
                                                                "},{"location":"operator-guide/oauth2-proxy/#related-articles","title":"Related Articles","text":"

                                                                 • Keycloak Installation
                                                                 • Keycloak OIDC Installation
                                                                 • Tekton Installation

                                                                "},{"location":"operator-guide/openshift-cluster-settings/","title":"Set Up OpenShift","text":"

                                                                Make sure the cluster meets the following conditions:

                                                                 1. OpenShift cluster is installed with a minimum of 2 worker nodes with a total capacity of 8 cores and 32 GB RAM.

                                                                 2. Load balancer (if any exists in front of the OpenShift router or ingress controller) is configured with session stickiness, the HTTP/2 protocol disabled, and support for a 64k header size.

                                                                  Find below an example of the Config Map for the NGINX Ingress Controller:

                                                                  kind: ConfigMap\napiVersion: v1\nmetadata:\nname: nginx-configuration\nnamespace: ingress-nginx\nlabels:\napp.kubernetes.io/name: ingress-nginx\napp.kubernetes.io/part-of: ingress-nginx\ndata:\nclient-header-buffer-size: 64k\nlarge-client-header-buffers: 4 64k\nuse-http2: \"false\"\n
                                                                 3. Cluster nodes and pods have access to the cluster via external URLs. For instance, in AWS, add the VPC NAT gateway elastic IP to the cluster external load balancers security group.

                                                                4. Keycloak instance is installed. To get accurate information on how to install Keycloak, please refer to the Install Keycloak instruction.

                                                                 5. The installation machine has oc installed and cluster-admin access to the OpenShift cluster.

                                                                6. Helm 3.10 is installed on the installation machine with the help of the Installing Helm instruction.

                                                                7. Storage classes are used with the Retain Reclaim Policy and Delete Reclaim Policy.

                                                                 8. We recommend using our storage class as the default storage class.

                                                                  Info

                                                                  By default, EDP uses the default Storage Class in a cluster. The EDP development team recommends using the following Storage Classes. See an example below.

                                                                  Storage class templates with the Retain and Delete Reclaim Policies:

                                                                   ebs-sc | gp3 | gp3-retain
                                                                  apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\nname: ebs-sc\nannotations:\nstorageclass.kubernetes.io/is-default-class: 'true'\nallowedTopologies: []\nmountOptions: []\nprovisioner: ebs.csi.aws.com\nreclaimPolicy: Retain\nvolumeBindingMode: Immediate\n
                                                                  kind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\nname: gp3\nannotations:\nstorageclass.kubernetes.io/is-default-class: 'true'\nallowedTopologies: []\nmountOptions: []\nprovisioner: ebs.csi.aws.com\nreclaimPolicy: Delete\nvolumeBindingMode: Immediate\nallowVolumeExpansion: true\n
                                                                  kind: StorageClass\napiVersion: storage.k8s.io/v1\nmetadata:\nname: gp3-retain\nallowedTopologies: []\nmountOptions: []\nprovisioner: ebs.csi.aws.com\nreclaimPolicy: Retain\nvolumeBindingMode: WaitForFirstConsumer\nallowVolumeExpansion: true\n
                                                                "},{"location":"operator-guide/openshift-cluster-settings/#related-articles","title":"Related Articles","text":"
                                                                • Install Amazon EBS CSI Driver
                                                                • Install Keycloak
                                                                "},{"location":"operator-guide/overview-devsecops/","title":"Secure Delivery on the Platform","text":"

                                                                 The EPAM Delivery Platform emphasizes the importance of incorporating security practices into the software development lifecycle through the DevSecOps approach. By integrating a diverse range of open-source and enterprise security tools tailored to specific functionalities, organizations can ensure efficient and secure software development. These tools, combined with fundamental DevSecOps principles such as collaboration, continuous security, and automation, contribute to the identification and remediation of vulnerabilities early in the process, minimize risks, and foster a security-first culture across the organization.

                                                                 The EPAM Delivery Platform enables seamless integration with various security tools and vulnerability management systems, enhancing the security of source code and ensuring compliance.

                                                                "},{"location":"operator-guide/overview-devsecops/#supported-solutions","title":"Supported Solutions","text":"

                                                                 The list below categorizes various open-source and enterprise security tools based on their specific functionalities. It provides a comprehensive view of the available options for each security aspect. This classification facilitates informed decision-making when selecting and integrating security tools into a development pipeline, ensuring an efficient and robust security stance. EDP supports the integration of both open-source and enterprise security tools, providing a flexible and versatile solution for security automation. See the list below for more details.

                                                                 Functionality: Open-Source Tools (integrated in Pipelines) / Enterprise Tools (available for Integration)
                                                                 • Hardcoded Credentials Scanner: TruffleHog, GitLeaks, GitSecrets / GitGuardian, SpectralOps, Bridgecrew
                                                                 • Static Application Security Testing: SonarQube, Semgrep CLI / Veracode, Checkmarx, Coverity
                                                                 • Software Composition Analysis: OWASP Dependency-Check, cdxgen, Nancy / Black Duck Hub, Mend, Snyk
                                                                 • Container Security: Trivy, Grype, Clair / Aqua Security, Sysdig Secure, Snyk
                                                                 • Infrastructure as Code Security: Checkov, Tfsec / Bridgecrew, Prisma Cloud, Snyk
                                                                 • Dynamic Application Security Testing: OWASP Zed Attack Proxy / Fortify WebInspect, Rapid7 InsightAppSec, Checkmarx
                                                                 • Continuous Monitoring and Logging: ELK Stack, OpenSearch, Loki / Splunk, Datadog
                                                                 • Security Audits and Assessments: OpenVAS / Tenable Nessus, QualysGuard, BurpSuite Professional
                                                                 • Vulnerability Management and Reporting: DefectDojo, OWASP Dependency-Track / -

                                                                 To obtain and manage reports after scanning, it is required to deploy various vulnerability management systems and security tools. These include:

                                                                "},{"location":"operator-guide/overview-devsecops/#defectdojo","title":"DefectDojo","text":"

                                                                DefectDojo is a comprehensive vulnerability management and security orchestration platform facilitating the handling of uploaded security reports. Examine the prerequisites and fundamental instructions for installing DefectDojo on Kubernetes or OpenShift platforms.

                                                                "},{"location":"operator-guide/overview-devsecops/#owasp-dependency-track","title":"OWASP Dependency Track","text":"

                                                                Dependency Track is an intelligent Software Composition Analysis (SCA) platform that provides a comprehensive solution for managing vulnerabilities in third-party and open-source components.

                                                                "},{"location":"operator-guide/overview-devsecops/#gitleaks","title":"Gitleaks","text":"

                                                                Gitleaks is a versatile SAST tool used to scan Git repositories for hardcoded secrets, such as passwords and API keys, to prevent potential data leaks and unauthorized access.

                                                                "},{"location":"operator-guide/overview-devsecops/#trivy","title":"Trivy","text":"

                                                                Trivy is a simple and comprehensive vulnerability scanner for containers and other artifacts, providing insight into potential security issues across multiple ecosystems.

                                                                "},{"location":"operator-guide/overview-devsecops/#grype","title":"Grype","text":"

                                                                Grype is a fast and reliable vulnerability scanner for container images and filesystems, maintaining an up-to-date vulnerability database for efficient and accurate scanning.

                                                                "},{"location":"operator-guide/overview-devsecops/#tfsec","title":"Tfsec","text":"

                                                                Tfsec is an effective Infrastructure as Code (IaC) security scanner, tailored specifically for reviewing Terraform templates. It helps identify potential security issues related to misconfigurations and non-compliant practices, enabling developers to address vulnerabilities and ensure secure infrastructure deployment.

                                                                "},{"location":"operator-guide/overview-devsecops/#checkov","title":"Checkov","text":"

                                                                Checkov is a robust static code analysis tool designed for IaC security, supporting various IaC frameworks such as Terraform, CloudFormation, and Kubernetes. It assists in detecting and mitigating security and compliance misconfigurations, promoting best practices and adherence to industry standards across the infrastructure.

                                                                "},{"location":"operator-guide/overview-devsecops/#cdxgen","title":"Cdxgen","text":"

                                                                Cdxgen is a lightweight and efficient tool for generating Software Bill of Materials (SBOM) using CycloneDX, a standard format for managing component inventory. It helps organizations maintain an up-to-date record of all software components, their versions, and related vulnerabilities, streamlining monitoring and compliance within the software supply chain.

                                                                "},{"location":"operator-guide/overview-devsecops/#semgrep-cli","title":"Semgrep CLI","text":"

                                                                Semgrep CLI is a versatile and user-friendly command-line interface for the Semgrep security scanner, enabling developers to perform Static Application Security Testing (SAST) for various programming languages. It focuses on detecting and preventing potential security vulnerabilities, code quality issues, and custom anti-patterns, ensuring secure and efficient code development.

                                                                "},{"location":"operator-guide/overview-manage-jenkins-pipelines/","title":"Overview","text":"

                                                                Jenkins job provisioners are responsible for creating and managing pipelines in Jenkins. In other words, provisioners configure all Jenkins pipelines and bring them to the state described in the provisioners code. Two types of provisioners are available in EDP:

                                                                • CI-provisioner - manages the application folder, and its Code Review, Build and Create Release pipelines.
                                                                • CD-provisioner - manages the Deploy pipelines.

                                                                The subsections describe the creation/update process of provisioners and their content depending on EDP customization.

                                                                "},{"location":"operator-guide/overview-sast/","title":"Static Application Security Testing Overview","text":"

                                                                EPAM Delivery Platform provides the implemented Static Application Security Testing support allowing to work with the Semgrep security scanner and the DefectDojo vulnerability management system to check the source code for known vulnerabilities.

                                                                "},{"location":"operator-guide/overview-sast/#supported-languages","title":"Supported Languages","text":"

                                                                EDP SAST supports a number of languages and package managers.

                                                                 Language (Package Managers) / Scan Tool / Build Tool:
                                                                 • Java: Semgrep / Maven, Gradle
                                                                 • Go: Semgrep / Go
                                                                 • React: Semgrep / Npm
                                                                 "},{"location":"operator-guide/overview-sast/#supported-vulnerability-management-system","title":"Supported Vulnerability Management System","text":"

                                                                To get and then manage a SAST report after scanning, it is necessary to deploy the vulnerability management system, for instance, DefectDojo.

                                                                "},{"location":"operator-guide/overview-sast/#defectdojo","title":"DefectDojo","text":"

                                                                DefectDojo is a vulnerability management and security orchestration platform that allows managing the uploaded security reports.

                                                                Inspect the prerequisites and the main steps for installing DefectDojo on Kubernetes or OpenShift platforms.

                                                                "},{"location":"operator-guide/overview-sast/#related-articles","title":"Related Articles","text":"
                                                                • Add Security Scanner
                                                                • Semgrep
                                                                "},{"location":"operator-guide/perf-integration/","title":"Perf Server Integration","text":"

                                                                Integration with Perf Server allows connecting to the PERF Board (Project Performance Board) and monitoring the overall team performance as well as setting up necessary metrics.

                                                                Note

                                                                To adjust the PERF Server integration, make sure that PERF Operator is deployed. To get more information about the PERF Operator installation and architecture, please refer to the PERF Operator page.

                                                                For integration, take the following steps:

                                                                1. Create Secret in the OpenShift/Kubernetes namespace for Perf Server account with the username and password fields:

                                                                  apiVersion: v1\ndata:\npassword: passwordInBase64\nusername: usernameInBase64\nkind: Secret\nmetadata:\nname: epam-perf-user\ntype: kubernetes.io/basic-auth\n
                                                                2. In the edp-config config map, enable the perf_integration flag and click Save:

                                                                   perf_integration_enabled: 'true'\n
                                                                 3. In the Admin Console, navigate to the Advanced Settings menu to check that the Integrate with Perf Server check box has appeared:

                                                                  Advanced settings

                                                                "},{"location":"operator-guide/perf-integration/#related-articles","title":"Related Articles","text":"
                                                                • Add Application
                                                                • Add Autotest
                                                                • Add Library
                                                                "},{"location":"operator-guide/prerequisites/","title":"EDP Installation Prerequisites Overview","text":"

                                                                Before installing EDP:

                                                                • Install and configure Kubernetes or OpenShift cluster.
                                                                • Install EDP components for the selected EDP installation scenario.
                                                                "},{"location":"operator-guide/prerequisites/#edp-installation-scenarios","title":"EDP Installation Scenarios","text":"

                                                                There are two EDP installation scenarios based on the selected CI tool: Tekton (default) or Jenkins.

                                                                Scenario 1: Tekton CI tool. By default, EDP uses Tekton as a CI tool and EDP Portal as a UI tool.

                                                                Scenario 2: Jenkins CI tool. To use Jenkins as a CI tool, it is required to install the deprecated Admin Console UI tool. Admin Console is used only as a dependency for Jenkins, and Portal will still be used as a UI tool.

                                                                Note

                                                                Starting from version 3.0.0, all new enhancements and functionalities will be introduced only for the Tekton deploy scenario. The Jenkins deploy scenario will be supported at the bug fix and security fix level only. We understand that some users may need additional functionality in Jenkins, so if any is missing, please create your request here. To stay up-to-date with all the updates, please check the Release Notes page.

                                                                Find below the list of the components to be installed for each scenario:

                                                                Component Tekton CI tool Jenkins CI tool Cluster Tekton Mandatory - NGINX Ingress Controller1 Mandatory Mandatory Keycloak Mandatory Mandatory DefectDojo Mandatory Mandatory Argo CD Mandatory Optional ReportPortal Optional Optional Kiosk Optional Optional External Secrets Optional Optional Harbor Optional Optional

                                                                Note

                                                                Alternatively, use Helmfiles to install the EDP components.

                                                                After setting up the cluster and installing EDP components according to the selected scenario, proceed to the EDP installation.

                                                                "},{"location":"operator-guide/prerequisites/#related-articles","title":"Related Articles","text":"
                                                                • Set Up Kubernetes
                                                                • Set Up OpenShift
                                                                • Install EDP
                                                                1. OpenShift cluster uses Routes to provide access to pods from external resources.\u00a0\u21a9

                                                                "},{"location":"operator-guide/report-portal-integration-tekton/","title":"Integration With Tekton","text":"

                                                                ReportPortal integration with Tekton allows managing all automation results and reports in one place, visualizing metrics and analytics, and collaborating as a team on the statistics of results.

                                                                For integration, take the following steps:

                                                                1. Log in to the ReportPortal console and navigate to the User Profile menu:

                                                                  ReportPortal profile

                                                                2. Copy the Access token and use it as a value while creating a Kubernetes secret for the ReportPortal credentials:

                                                                  apiVersion: v1\nkind: Secret\ntype: Opaque\nmetadata:\nname: rp-credentials\nnamespace: edp\nstringData:\nrp_uuid: <access-token>\n
                                                                3. In the Configuration examples section of the ReportPortal User Profile menu, copy the following REQUIRED fields: rp.endpoint, rp.launch and rp.project. Insert these fields into the pytest.ini file in the root directory of your project:

                                                                  [pytest]\naddopts = -rsxX -l --tb=short --junitxml test-report.xml\nrp_endpoint = <endpoint>\nrp_launch = <launch>\nrp_project = <project>\n
                                                                4. In the root directory of the project, create or update the requirements.txt file with the following content. Installing the pytest-reportportal Python library is mandatory (the version may vary):

                                                                  pytest-reportportal == 5.1.2\n
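
                                                                  To check the setup locally before running it in Tekton, the same commands the pipeline task executes can be run from the project root; this sketch assumes the tests live in the ./tests directory, as in the Tekton task defined in the next step:

                                                                    pip3 install -r requirements.txt\npytest ./tests --reportportal\n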

                                                                5. Create a custom Tekton task:

                                                                  View: Custom Tekton task
                                                                  apiVersion: tekton.dev/v1beta1\nkind: Task\nmetadata:\nlabels:\napp.kubernetes.io/version: '0.1'\nname: pytest-reportportal\nnamespace: edp\nspec:\ndescription: |-\nThis task can be used to run pytest integrated with report portal.\nparams:\n- default: .\ndescription: The path where package.json of the project is defined.\nname: PATH_CONTEXT\ntype: string\n- name: EXTRA_COMMANDS\ntype: string\n- default: python:3.8-alpine3.16\ndescription: The python image you want to use.\nname: BASE_IMAGE\ntype: string\n- default: rp-credentials\ndescription: name of the secret holding the rp token\nname: rp-secret\ntype: string\nsteps:\n- env:\n- name: HOME\nvalue: $(workspaces.source.path)\n- name: RP_UUID\nvalueFrom:\nsecretKeyRef:\nkey: rp_uuid\nname: $(params.rp-secret)\nimage: $(params.BASE_IMAGE)\nname: pytest\nresources: {}\nscript: >\n#!/usr/bin/env sh\nset -e\nexport PATH=$PATH:$HOME/.local/bin\n$(params.EXTRA_COMMANDS)\n# tests are being run from ./test directory in the project\npytest ./tests --reportportal\nworkingDir: $(workspaces.source.path)/$(params.PATH_CONTEXT)\nworkspaces:\n- name: source\n
                                                                6. Add this task ref to your Tekton pipeline after tasks:

                                                                  View: Tekton pipeline
                                                                  - name: pytest\nparams:\n- name: BASE_IMAGE\nvalue: $(params.image)\n- name: EXTRA_COMMANDS\nvalue: |\nset -ex\npip3 install -r requirements.txt\n[ -f run_service.py ] && python run_service.py &\nrunAfter:\n- compile\ntaskRef:\nkind: Task\nname: pytest-reportportal\nworkspaces:\n- name: source\nworkspace: shared-workspace\n
                                                                7. Launch your Tekton pipeline and check that the custom task has been successfully executed:

                                                                  Tekton task successfully executed

                                                                8. Test reports will be displayed in the Launches section of the ReportPortal:

                                                                  Test report results

                                                                "},{"location":"operator-guide/report-portal-integration-tekton/#related-articles","title":"Related Articles","text":"
                                                                • ReportPortal Installation
                                                                • Keycloak Integration
                                                                • Pytest Integration With ReportPortal
                                                                "},{"location":"operator-guide/reportportal-keycloak/","title":"Keycloak Integration","text":"

                                                                Follow the steps below to integrate the ReportPortal with Keycloak.

                                                                "},{"location":"operator-guide/reportportal-keycloak/#prerequisites","title":"Prerequisites","text":"
                                                                • Installed Keycloak. Please follow the instruction for details.
                                                                • Installed ReportPortal. Please follow the instruction to install it from Helmfile or using the Helm Chart.
                                                                "},{"location":"operator-guide/reportportal-keycloak/#keycloak-configuration","title":"Keycloak Configuration","text":"
                                                                1. Navigate to Client Scopes > Create client scope and create a new scope with the SAML protocol type.

                                                                2. Navigate to Client Scopes > your_scope_name > Mappers > Configure a new mapper > select the User Attribute mapper type. Add three mappers for the email, first name, and last name by typing lastName, firstName, and email in the User Attribute field:

                                                                  • Name is a display name in Keycloak.
                                                                  • User Attribute is a user property for mapping.
                                                                  • SAML Attribute Name is an attribute used for requesting information in the ReportPortal configuration.
                                                                  • SAML Attribute NameFormat: Basic.
                                                                  • Aggregate attribute values: Off.

                                                                  User mapper sample Scope mappers

                                                                3. Navigate to Clients > Create client and fill in the following fields:

                                                                  • Client type: SAML.
                                                                  • Client ID: report.portal.sp.id.

                                                                  Warning

                                                                  The report.portal.sp.id Client ID is a constant value.

                                                                4. Navigate to Client > your_client > Settings and add https://<report-portal-url\\>/* to the Valid redirect URIs.

                                                                5. Navigate to Client > your_client > Keys and disable Client signature required.

                                                                  Client keys

                                                                6. Navigate to Client > your_client > Client scopes and add the scope created earlier with the default Assigned type.

                                                                  Client scopes

                                                                "},{"location":"operator-guide/reportportal-keycloak/#reportportal-configuration","title":"ReportPortal Configuration","text":"
                                                                1. Log in to the ReportPortal with the admin permissions.

                                                                2. Navigate to Client > Administrate > Plugins and select the SAML plugin.

                                                                  Plugins menu

                                                                3. To add a new integration, fill in the following fields:

                                                                  Add SAML configuration

                                                                  • Provider name is the display name in the ReportPortal login page.
                                                                  • Metadata URL: https://<keycloak_url\\>/auth/realms/<realm\\>/protocol/saml/descriptor.
                                                                  • Email is the value from the SAML Attribute Name field in the Keycloak mapper.
                                                                  • RP callback URL: https://<report_portal_url\\>/uat.
                                                                  • Name attributes mode is the first & last name (type based on your mapper).
                                                                  • First name is the value from the SAML Attribute Name field in the Keycloak mapper.
                                                                  • Last name is the value from the SAML Attribute Name field in the Keycloak mapper.
                                                                4. Log in to the ReportPortal.

                                                                  Note

                                                                  By default, after the first login, ReportPortal creates the <your_email>_personal project and adds an account with the Project manager role.

                                                                  Report portal login page

                                                                "},{"location":"operator-guide/reportportal-keycloak/#related-articles","title":"Related Articles","text":"
                                                                • ReportPortal Installation
                                                                • Integration With Tekton
                                                                "},{"location":"operator-guide/restore-edp-with-velero/","title":"Restore EDP Tenant With Velero","text":"

                                                                You can use the Velero tool to restore an EDP tenant. Explore the main steps for backup and restoring below.

                                                                1. Delete all related entities in Keycloak: the realm and clients from the master/openshift realms. Navigate to the entities list in Keycloak, select the necessary ones, and click the deletion icon on the entity overview page. If there are customized configs in Keycloak, save them before making a backup.

                                                                  Remove keycloak realm

                                                                2. To restore EDP, install and configure the Velero tool. Please refer to the Install Velero documentation for details.
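
                                                                  A minimal sketch of the Velero commands for taking a backup and restoring from it is shown below; the backup name edp-backup and the edp namespace are assumptions to adjust to your setup:

                                                                    velero backup create edp-backup --include-namespaces edp\nvelero restore create --from-backup edp-backup\n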

                                                                3. Remove all locks for operators. Delete all config maps that have \u2039OPERATOR_NAME\u203a-operator-lock names. Then restart all pods with operators, or simply run the following command:

                                                                       kubectl -n edp delete cm $(kubectl -n edp get cm | grep 'operator-lock' | awk '{print $1}')\n
                                                                4. Recreate the admin password and delete the Jenkins pod, or change the script to update the admin password in Jenkins every time the pod is updated.

                                                                "},{"location":"operator-guide/sast-scaner-semgrep/","title":"Semgrep","text":"

                                                                Semgrep is an open-source static source code analyzer for finding bugs and enforcing code standards.

                                                                The Semgrep scanner is installed on the EDP Jenkins SAST agent and runs at the sast pipeline stage. For details, please refer to the edp-library-stages repository.

                                                                "},{"location":"operator-guide/sast-scaner-semgrep/#supported-languages","title":"Supported Languages","text":"

                                                                Semgrep supports more than 20 languages, see the full list in the official documentation. EDP uses Semgrep to scan Java, JavaScript and Go languages.
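
                                                                To reproduce a scan outside the pipeline, Semgrep can be run locally against the project sources. A minimal sketch, assuming the publicly available auto ruleset and an arbitrary report file name:

                                                                  pip3 install semgrep\nsemgrep --config auto --json --output semgrep-report.json .\n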

                                                                "},{"location":"operator-guide/sast-scaner-semgrep/#related-articles","title":"Related Articles","text":"
                                                                • Add Security Scanner
                                                                "},{"location":"operator-guide/schedule-pods-restart/","title":"Schedule Pods Restart","text":"

                                                                In case it is necessary to restart pods, use a CronJob according to the following template:

                                                                View: template
                                                                ---\nkind: Role\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\nnamespace: <NAMESPACE>\nname: apps-restart\nrules:\n- apiGroups: [\"apps\"]\nresources:\n- deployments\n- statefulsets\nverbs:\n- 'get'\n- 'list'\n- 'patch'\n---\nkind: RoleBinding\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\nname: apps-restart\nnamespace: <NAMESPACE>\nsubjects:\n- kind: ServiceAccount\nname: apps-restart-sa\nnamespace: <NAMESPACE>\nroleRef:\nkind: Role\nname: apps-restart\napiGroup: \"\"\n---\napiVersion: v1\nkind: ServiceAccount\nmetadata:\nname: apps-restart-sa\nnamespace: <NAMESPACE>\n---\napiVersion: batch/v1beta1\nkind: CronJob\nmetadata:\nname: apps-rollout-restart\nnamespace: <NAMESPACE>\nspec:\nschedule: \"0 9 * * MON-FRI\"\njobTemplate:\nspec:\ntemplate:\nspec:\nserviceAccountName: apps-restart-sa\ncontainers:\n- name: kubectl-runner\nimage: bitnami/kubectl\ncommand:\n- /bin/sh\n- -c\n- kubectl get -n <NAMESPACE> -o name deployment,statefulset | grep <NAME_PATTERN>| xargs kubectl -n <NAMESPACE> rollout restart\nrestartPolicy: Never\n

                                                                Modify the Cron expression in the CronJob manifest if needed.
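
                                                                For example, the following schedule restarts the matching workloads every day at 06:30 UTC instead of on weekday mornings; the expression uses the standard Cron format (minute, hour, day of month, month, day of week):

                                                                  schedule: \"30 6 * * *\"\n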

                                                                "},{"location":"operator-guide/sonarqube/","title":"SonarQube Integration","text":"

                                                                This documentation guide provides comprehensive instructions for integrating SonarQube with the EPAM Delivery Platform.

                                                                Info

                                                                In EDP release 3.5, we have changed the deployment strategy for the sonarqube-operator component, now it is not installed by default. The sonarURL parameter management has been transferred from the values.yaml file to Kubernetes secrets.

                                                                "},{"location":"operator-guide/sonarqube/#prerequisites","title":"Prerequisites","text":"

                                                                Before proceeding, ensure that you have the following prerequisites:

                                                                • Kubectl version 1.26.0 is installed.
                                                                • Helm version 3.12.0+ is installed.
                                                                "},{"location":"operator-guide/sonarqube/#installation","title":"Installation","text":"

                                                                To install SonarQube with pre-defined templates, use the sonar-operator installed via Cluster Add-Ons approach.

                                                                "},{"location":"operator-guide/sonarqube/#configuration","title":"Configuration","text":"

                                                                To establish robust authentication and precise access control, generating a SonarQube token is essential. This token is a distinct identifier, enabling effortless integration between SonarQube and EDP. To generate the SonarQube token, proceed with the following steps:

                                                                1. Open the SonarQube UI and navigate to Administration -> Security -> User. Create a new user or select an existing one. Click the Options List icon to create a token:

                                                                  SonarQube user settings

                                                                2. Type the ci-user username, define an expiration period, and click the Generate button to create the token:

                                                                  SonarQube create token

                                                                3. Click the Copy button to copy the generated <Sonarqube-token>:

                                                                  SonarQube token

                                                                4. Provision secrets using Manifest, EDP Portal or with the externalSecrets operator:

                                                                EDP Portal | Manifest | External Secrets Operator

                                                                Go to EDP Portal -> EDP -> Configuration -> SonarQube. Update or fill in the URL and Token fields and click the Save button:

                                                                SonarQube update manual secret

                                                                apiVersion: v1\nkind: Secret\nmetadata:\nname: ci-sonarqube\nnamespace: edp\nlabels:\napp.edp.epam.com/secret-type: sonar\ntype: Opaque\nstringData:\nurl: https://sonarqube.example.com\ntoken: <sonarqube-token>\n
                                                                \"ci-sonarqube\":\n{\n\"url\": \"https://sonarqube.example.com\",\n\"token\": \"XXXXXXXXXXXX\"\n},\n

                                                                Go to EDP Portal -> EDP -> Configuration -> SonarQube and see the Managed by External Secret message:

                                                                SonarQube managed by external secret operator

                                                                More details about External Secrets Operator integration can be found in the External Secrets Operator Integration page.

                                                                "},{"location":"operator-guide/sonarqube/#related-articles","title":"Related Articles","text":"
                                                                • Install EDP With Values File
                                                                • Install External Secrets Operator
                                                                • External Secrets Operator Integration
                                                                • Cluster Add-Ons Overview
                                                                "},{"location":"operator-guide/ssl-automation-okd/","title":"Use Cert-Manager in OpenShift","text":"

                                                                The following material covers Let's Encrypt certificate automation with cert-manager using AWS Route53.

                                                                The cert-manager is a Kubernetes/OpenShift operator that allows issuing and automatically renewing SSL certificates. This tutorial demonstrates the steps to secure a DNS name.

                                                                Below are instructions on how to automatically issue and install wildcard certificates on the OpenShift Ingress Controller and API Server covering all cluster Routes. To secure separate OpenShift Routes, please refer to the OpenShift Route Support project for cert-manager.

                                                                "},{"location":"operator-guide/ssl-automation-okd/#prerequisites","title":"Prerequisites","text":"
                                                                • The cert-manager;
                                                                • OpenShift v4.7 - v4.11;
                                                                • Connection to the OpenShift Cluster;
                                                                • Enabled AWS IRSA;
                                                                • The latest oc utility. The kubectl tool can also be used for most of the commands.
                                                                "},{"location":"operator-guide/ssl-automation-okd/#install-cert-manager-operator","title":"Install Cert-Manager Operator","text":"

                                                                Install the cert-manager operator via OpenShift OperatorHub that uses Operator Lifecycle Manager (OLM):

                                                                1. Go to the OpenShift Admin Console \u2192 OperatorHub, search for the cert-manager, and click Install:

                                                                  Cert-Manager Installation

                                                                2. Modify the ClusterServiceVersion OLM resource by selecting Update approval \u2192 Manual. If Update approval \u2192 Automatic is selected, the parameters in the ClusterServiceVersion will be reset to default after an automatic operator update.

                                                                  Note

                                                                  Installing an operator with Manual approval causes all operators installed in the openshift-operators namespace to follow the Manual approval strategy. If Manual approval is chosen, review the manual installation plan and approve it.

                                                                  Cert-Manager Installation

                                                                3. Navigate to Operators \u2192 Installed Operators and check the operator status to be Succeeded:

                                                                  Cert-Manager Installation

                                                                4. In case of errors, troubleshoot the Operator issues:

                                                                  oc describe operator cert-manager -n openshift-operators\noc describe sub cert-manager -n openshift-operators\n
                                                                "},{"location":"operator-guide/ssl-automation-okd/#create-aws-role-for-route53","title":"Create AWS Role for Route53","text":"

                                                                The cert-manager should be configured to validate Wildcard certificates using the DNS-based method.

                                                                1. Check the DNS Hosted zone ID in AWS Route53 for your domain.

                                                                  Hosted Zone ID

                                                                2. Create Route53 Permissions policy in AWS for cert-manager to be able to create DNS TXT records for the certificate validation. In this example, cert-manager permissions are given for a particular DNS zone only. Replace Hosted zone ID XXXXXXXX in the \"Resource\": \"arn:aws:route53:::hostedzone/XXXXXXXXXXXX\".

                                                                  {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Effect\": \"Allow\",\n\"Action\": \"route53:GetChange\",\n\"Resource\": \"arn:aws:route53:::change/*\"\n},\n{\n\"Effect\": \"Allow\",\n\"Action\": [\n\"route53:ChangeResourceRecordSets\",\n\"route53:ListResourceRecordSets\"\n],\n\"Resource\": \"arn:aws:route53:::hostedzone/XXXXXXXXXXXX\"\n}\n]\n}\n
                                                                3. Create an AWS Role with Custom trust policy for the cert-manager service account to use the AWS IRSA feature and then attach the created policy. Replace the following:

                                                                  • ${aws-account-id} with the AWS account ID of the EKS cluster.
                                                                  • ${aws-region} with the region where the EKS cluster is located.
                                                                  • ${eks-hash} with the hash in the EKS API URL; this will be a random 32 character hex string, for example, 45DABD88EEE3A227AF0FA468BE4EF0B5.
                                                                  • ${namespace} with the namespace where cert-manager is running.
                                                                  • ${service-account-name} with the name of the ServiceAccount object created by cert-manager.
                                                                  • By default, it is \"system:serviceaccount:openshift-operators:cert-manager\" if cert-manager is installed via OperatorHub.
                                                                  • Attach the created Permission policy for Route53 to the Role.
                                                                  • Optionally, add Permissions boundary to the Role.

                                                                    {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Effect\": \"Allow\",\n\"Action\": \"sts:AssumeRoleWithWebIdentity\",\n\"Principal\": {\n\"Federated\": \"arn:aws:iam::* ${aws-account-id}:oidc-provider/oidc.eks.${aws-region}.amazonaws.com/id/${eks-hash}\"\n},\n\"Condition\": {\n\"StringEquals\": {\n\"oidc.eks.${aws-region}.amazonaws.com/id/${eks-hash}:sub\": \"system:serviceaccount:${namespace}:${service-account-name}\"\n}\n}\n}\n]\n}\n
                                                                4. Copy the created Role ARN.
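
                                                                  If you prefer the AWS CLI to the console, the role can be created and the policy attached as sketched below; the role name, the policy ARN, and the trust policy file name are assumptions:

                                                                    aws iam create-role --role-name cert-manager --assume-role-policy-document file://cert-manager-trust-policy.json\naws iam attach-role-policy --role-name cert-manager --policy-arn arn:aws:iam::${aws-account-id}:policy/cert-manager-route53\naws iam get-role --role-name cert-manager --query Role.Arn --output text\n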

                                                                "},{"location":"operator-guide/ssl-automation-okd/#configure-cert-manager-integration-with-aws-route53","title":"Configure Cert-Manager Integration With AWS Route53","text":"
                                                                1. Annotate the ServiceAccount created by cert-manager (required for AWS IRSA), and restart the cert-manager pod.

                                                                2. Replace the eks.amazonaws.com/role-arn annotation value with your own Role ARN.

                                                                  oc edit sa cert-manager -n openshift-operators\n
                                                                  apiVersion: v1\nkind: ServiceAccount\nmetadata:\nannotations:\neks.amazonaws.com/role-arn: arn:aws:iam::XXXXXXXXXXXX:role/cert-manager\n
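                                                                  Alternatively, the annotation can be set and the controller pod restarted from the CLI; a minimal sketch, assuming cert-manager runs in the openshift-operators namespace and its pods carry the app=cert-manager label:

                                                                    oc -n openshift-operators annotate serviceaccount cert-manager eks.amazonaws.com/role-arn=arn:aws:iam::XXXXXXXXXXXX:role/cert-manager --overwrite\noc -n openshift-operators delete pod -l app=cert-manager\n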
                                                                3. Modify the cert-manager Deployment with the correct file system permissions fsGroup: 1001, so that the ServiceAccount token can be read.

                                                                  Note

                                                                  In case the ServiceAccount token cannot be read and the operator is installed using the OperatorHub, add fsGroup: 1001 via the OpenShift ClusterServiceVersion OLM resource. It should be added to the cert-manager controller spec. These actions are not required for OpenShift v4.10.

                                                                  oc get csv\noc edit csv cert-manager.${VERSION}\n
                                                                  spec:\ntemplate:\nspec:\nsecurityContext:\nfsGroup: 1001\nserviceAccountName: cert-manager\n

                                                                  Cert-Manager System Permissions

                                                                  Info

                                                                  A mutating admission controller will automatically modify all pods running with the service account:

                                                                  cert-manager controller pod

                                                                  apiVersion: apps/v1\nkind: Pod\n# ...\nspec:\n# ...\nserviceAccountName: cert-manager\nserviceAccount: cert-manager\ncontainers:\n- name: ...\n# ...\nenv:\n- name: AWS_ROLE_ARN\nvalue: >-\narn:aws:iam::XXXXXXXXXXX:role/cert-manager\n- name: AWS_WEB_IDENTITY_TOKEN_FILE\nvalue: /var/run/secrets/eks.amazonaws.com/serviceaccount/token\nvolumeMounts:\n- name: aws-iam-token\nreadOnly: true\nmountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount\nvolumes:\n- name: aws-iam-token\nprojected:\nsources:\n- serviceAccountToken:\naudience: sts.amazonaws.com\nexpirationSeconds: 86400\npath: token\ndefaultMode: 420\n

                                                                4. If you have separate public and private DNS zones for the same domain (split-horizon DNS), modify the cert-manager Deployment in order to validate DNS TXT records via public recursive nameservers.

                                                                  Note

                                                                  Otherwise, you will be getting an error during a record validation:

                                                                  Waiting for DNS-01 challenge propagation: NS ns-123.awsdns-00.net.:53 returned REFUSED for _acme-challenge.\n
                                                                  To avoid the error, add --dns01-recursive-nameservers-only --dns01-recursive-nameservers=8.8.8.8:53,1.1.1.1:53 as ARGs to the cert-manager controller Deployment.

                                                                  oc get csv\noc edit csv cert-manager.${VERSION}\n
                                                                    labels:\napp: cert-manager\napp.kubernetes.io/component: controller\napp.kubernetes.io/instance: cert-manager\napp.kubernetes.io/name: cert-manager\napp.kubernetes.io/version: v1.9.1\nspec:\ncontainers:\n- args:\n- '--v=2'\n- '--cluster-resource-namespace=$(POD_NAMESPACE)'\n- '--leader-election-namespace=kube-system'\n- '--dns01-recursive-nameservers-only'\n- '--dns01-recursive-nameservers=8.8.8.8:53,1.1.1.1:53'\n

                                                                  Note

                                                                  The Deployment must be modified via OpenShift ClusterServiceVersion OLM resource if the operator was installed using the OperatorHub. The OpenShift ClusterServiceVersion OLM resource includes several Deployments, and the ARGs must be modified only for the cert-manager controller.

                                                                  • Save the resource. After that, OLM will try to reload the resource automatically and save it to the YAML file. If OLM resets the config file, double-check the entered values.

                                                                  Cert-Manager Nameservers

                                                                "},{"location":"operator-guide/ssl-automation-okd/#configure-clusterissuers","title":"Configure ClusterIssuers","text":"

                                                                ClusterIssuer is available on the whole cluster.

                                                                1. Create the ClusterIssuer resource for Let's Encrypt Staging and Prod environments that signs a Certificate using cert-manager.

                                                                  Note

                                                                  Let's Encrypt has a limit on duplicate certificates in the Prod environment. Therefore, a ClusterIssuer has been created for the Let's Encrypt Staging environment. By default, Let's Encrypt Staging certificates will not be trusted in your browser. The certificate validation cannot be tested in the Let's Encrypt Staging environment.

                                                                  • Change user@example.com with your contact email.
                                                                  • Replace hostedZoneID XXXXXXXXXXX with the DNS Hosted zone ID in AWS for your domain.
                                                                  • Replace the region value ${region}.
                                                                  • The secret under privateKeySecretRef will be created automatically by the cert-manager operator.
                                                                  apiVersion: cert-manager.io/v1\nkind: ClusterIssuer\nmetadata:\nname: letsencrypt-staging\nspec:\nacme:\nemail: user@example.com\nserver: https://acme-staging-v02.api.letsencrypt.org/directory\nprivateKeySecretRef:\nname: letsencrypt-staging-issuer-account-key\nsolvers:\n- dns01:\nroute53:\nregion: ${region}\nhostedZoneID: XXXXXXXXXXX\n
                                                                  apiVersion: cert-manager.io/v1\nkind: ClusterIssuer\nmetadata:\nname: letsencrypt-prod\nspec:\nacme:\nemail: user@example.com\nserver: https://acme-v02.api.letsencrypt.org/directory\nprivateKeySecretRef:\nname: letsencrypt-prod-issuer-account-key\nsolvers:\n- dns01:\nroute53:\nregion: ${region}\nhostedZoneID: XXXXXXXXXXX\n

                                                                  Cert-Manager ClusterIssuer

                                                                2. Check the ClusterIssuer status:

                                                                  Cert-Manager ClusterIssuer

                                                                  oc describe clusterissuer letsencrypt-prod\noc describe clusterissuer letsencrypt-staging\n
                                                                3. If the ClusterIssuer state is not ready, investigate cert-manager controller pod logs:

                                                                  oc get pod -n openshift-operators | grep 'cert-manager'\noc logs -f cert-manager-${replica_set}-${random_string} -n openshift-operators\n
                                                                "},{"location":"operator-guide/ssl-automation-okd/#configure-certificates","title":"Configure Certificates","text":"
                                                                1. In two different namespaces, create a Certificate resource for the OpenShift Router (Ingress controller for OpenShift) and for the OpenShift APIServer.

                                                                  • OpenShift Router supports a single wildcard certificate for Ingress/Route resources in different namespaces (so called, default SSL certificate). The Ingress controller expects the certificates in a Secret to be created in the openshift-ingress namespace; the API Server, in the openshift-config namespace. The cert-manager operator will automatically create these secrets from the Certificate resource.
                                                                  • Replace ${DOMAIN} with your domain name. It can be checked with oc whoami --show-server. Put domain names in quotes.
                                                                  The certificate for OpenShift Router in the `openshift-ingress` namespace
                                                                  apiVersion: cert-manager.io/v1\nkind: Certificate\nmetadata:\nname: router-certs\nnamespace: openshift-ingress\nlabels:\napp: cert-manager\nspec:\nsecretName: router-certs\nsecretTemplate:\nlabels:\napp: cert-manager\nduration: 2160h # 90d\nrenewBefore: 360h # 15d\nsubject:\norganizations:\n- Org Name\ncommonName: '*.${DOMAIN}'\nprivateKey:\nalgorithm: RSA\nencoding: PKCS1\nsize: 2048\nrotationPolicy: Always\nusages:\n- server auth\n- client auth\ndnsNames:\n- '*.${DOMAIN}'\n- '*.apps.${DOMAIN}'\nissuerRef:\nname: letsencrypt-staging\nkind: ClusterIssuer\n
                                                                  The certificate for OpenShift APIServer in the `openshift-config` namespace
                                                                  apiVersion: cert-manager.io/v1\nkind: Certificate\nmetadata:\nname: api-certs\nnamespace: openshift-config\nlabels:\napp: cert-manager\nspec:\nsecretName: api-certs\nsecretTemplate:\nlabels:\napp: cert-manager\nduration: 2160h # 90d\nrenewBefore: 360h # 15d\nsubject:\norganizations:\n- Org Name\ncommonName: '*.${DOMAIN}'\nprivateKey:\nalgorithm: RSA\nencoding: PKCS1\nsize: 2048\nrotationPolicy: Always\nusages:\n- server auth\n- client auth\ndnsNames:\n- '*.${DOMAIN}'\n- '*.apps.${DOMAIN}'\nissuerRef:\nname: letsencrypt-staging\nkind: ClusterIssuer\n

                                                                  Info

                                                                  • cert-manager supports ECDSA key pairs in the Certificate resource. To use it, change RSA privateKey to ECDSA:

                                                                    privateKey:\nalgorithm: ECDSA\nencoding: PKCS1\nsize: 256\nrotationPolicy: Always\n
                                                                  • rotationPolicy: Always is highly recommended since cert-manager does not rotate private keys by default.
                                                                  • Full Certificate spec is described in the cert-manager API documentation.
                                                                2. Check that the certificates in the namespaces are ready:

                                                                  Cert-Manager Certificate Status

                                                                  Cert-Manager Certificate Status

                                                                3. Check the details of the certificates via CLI:

                                                                  oc describe certificate api-certs -n openshift-config\noc describe certificate router-certs -n openshift-ingress\n
                                                                4. Check the cert-manager controller pod logs if the Staging Certificate condition is not ready for more than 7 minutes:

                                                                  oc get pod -n openshift-operators | grep 'cert-manager'\noc logs -f cert-manager-${replica_set}-${random_string} -n openshift-operators\n
                                                                5. When the certificate is ready, its private key will be put into the OpenShift Secret in the namespace indicated in the Certificate resource:

                                                                  oc describe secret api-certs -n openshift-config\noc describe secret router-certs -n openshift-ingress\n
                                                                "},{"location":"operator-guide/ssl-automation-okd/#modify-openshift-router-and-api-server-custom-resources","title":"Modify OpenShift Router and API Server Custom Resources","text":"
                                                                1. Update the Custom Resource of your Router (Ingress controller). Patch the defaultCertificate object value with { \"name\": \"router-certs\" }:

                                                                  oc patch ingresscontroller default -n openshift-ingress-operator --type=merge --patch='{\"spec\": { \"defaultCertificate\": { \"name\": \"router-certs\" }}}' --insecure-skip-tls-verify\n

                                                                  Info

                                                                  After updating the IngressController object, the OpenShift Ingress operator redeploys the router.

                                                                2. Update the Custom Resource for the OpenShift API Server:

                                                                  • Export the name of APIServer:

                                                                    export OKD_API=$(oc whoami --show-server --insecure-skip-tls-verify | cut -f 2 -d ':' | cut -f 3 -d '/' | sed 's/-api././')\n
                                                                  • Patch the servingCertificate object value with { \"name\": \"api-certs\" }:

                                                                    oc patch apiserver cluster --type merge --patch=\"{\\\"spec\\\": {\\\"servingCerts\\\": {\\\"namedCertificates\\\": [ { \\\"names\\\": [  \\\"$OKD_API\\\"  ], \\\"servingCertificate\\\": {\\\"name\\\": \\\"api-certs\\\" }}]}}}\" --insecure-skip-tls-verify\n
                                                                "},{"location":"operator-guide/ssl-automation-okd/#move-from-lets-encrypt-staging-environment-to-prod","title":"Move From Let's Encrypt Staging Environment to Prod","text":"
                                                                1. Test the Staging certificate on the OpenShift Admin Console. The --insecure flag is used because Let's Encrypt Staging certificates are not trusted in browsers by default:

                                                                  curl -v --insecure https://console-openshift-console.apps.${DOMAIN}\n
                                                                2. Change issuerRef to letsencrypt-prod in both Certificate resources:

                                                                  oc edit certificate api-certs -n openshift-config\noc edit certificate router-certs -n openshift-ingress\n
                                                                  issuerRef:\nname: letsencrypt-prod\nkind: ClusterIssuer\n

                                                                  Note

                                                                  In case the certificate reissue is not triggered after that, try to force the certificate renewal with cmctl:

                                                                  cmctl renew router-certs -n openshift-ingress\ncmctl renew api-certs -n openshift-config\n

                                                                  If this won't work, delete the api-certs and router-certs secrets. It should trigger the Prod certificates issuance:

                                                                  oc delete secret router-certs -n openshift-ingress\noc delete secret api-certs -n openshift-config\n

                                                                  Please note that these actions will log your account out of the OpenShift Admin Console, since the certificates will be deleted. Accept the certificate warning in the browser and log in again after that.

                                                                3. Check the status of the Prod certificates:

                                                                  oc describe certificate api-certs -n openshift-config\noc describe certificate router-certs -n openshift-ingress\n
                                                                  cmctl status certificate api-certs -n openshift-config\ncmctl status certificate router-certs -n openshift-ingress\n
                                                                4. Check the web console and make sure it has secure connection:

                                                                  curl -v https://console-openshift-console.apps.${DOMAIN}\n
                                                                "},{"location":"operator-guide/ssl-automation-okd/#troubleshoot-certificates","title":"Troubleshoot Certificates","text":"

                                                                Below is an example of the DNS TXT challenge record created by the cert-manager operator:

                                                                DNS Validation

                                                                Use nslookup or dig tools to check if the DNS propagation for the TXT record is complete:

                                                                nslookup -type=txt _acme-challenge.${DOMAIN}\ndig txt _acme-challenge.${DOMAIN}\n

                                                                Otherwise, use web tools like Google Admin Toolbox:

                                                                DNS Validation

                                                                If the correct TXT value is shown (the value corresponds to the current TXT value in the DNS zone), it means that the DNS propagation is complete and Let's Encrypt is able to access the record in order to validate it and issue a trusted certificate.

                                                                Note

                                                                If the DNS validation challenge self check fails, cert-manager will retry the self check with a fixed 10-second retry interval. Challenges that do not ever complete the self check will continue retrying until the user intervenes by either retrying the Order (by deleting the Order resource) or amending the associated Certificate resource to resolve any configuration errors.

                                                                As soon as the domain ownership has been verified, any cert-manager affected validation TXT records in the AWS Route53 DNS zone will be cleaned up.

                                                                Please find below the issues that may occur and their troubleshooting:

                                                                • When certificates are not issued for a long time, or a cert-manager resource is not in a Ready state, describing a resource may show the reason for the error.
                                                                • Basically, the cert-manager creates the following resources during a Certificate issuance: CertificateRequest, Order, and Challenge. Investigate each of them in case of errors.
                                                                • Use the cmctl tool to show the state of a Certificate and its associated resources.
                                                                • Check the cert-manager controller pod logs:

                                                                  oc get pod -n openshift-operators | grep 'cert-manager'\noc logs -f cert-manager-${replica_set}-${random_string} -n openshift-operators\n
                                                                • Certificate error debugging: a. Decode certificate chain located in the secrets:

                                                                  oc get secret router-certs -n openshift-ingress -o 'go-template={{index .data \"tls.crt\"}}' | base64 -d | while openssl x509 -noout -text; do :; done 2>/dev/null\noc get secret api-certs -n openshift-config -o 'go-template={{index .data \"tls.crt\"}}' | base64 -d | while openssl x509 -noout -text; do :; done 2>/dev/null\n
                                                                  cmctl inspect secret router-certs -n openshift-ingress\ncmctl inspect secret api-certs -n openshift-config\n

                                                                  b. Check the SSL RSA private key consistency:

                                                                  oc get secret router-certs -n openshift-ingress -o 'go-template={{index .data \"tls.key\"}}' | base64 -d | openssl rsa -check -noout\noc get secret api-certs -n openshift-config -o 'go-template={{index .data \"tls.key\"}}' | base64 -d | openssl rsa -check -noout\n

                                                                  c. Match the SSL certificate public key against its RSA private key. Their modulus must be identical:

                                                                  diff <(oc get secret api-certs -n openshift-config -o 'go-template={{index .data \"tls.crt\"}}' | base64 -d | openssl x509 -noout -modulus | openssl md5) <(oc get secret api-certs -n openshift-config -o 'go-template={{index .data \"tls.key\"}}' | base64 -d | openssl rsa -noout -modulus | openssl md5)\ndiff <(oc get secret router-certs -n openshift-ingress -o 'go-template={{index .data \"tls.crt\"}}' | base64 -d | openssl x509 -noout -modulus | openssl md5) <(oc get secret router-certs -n openshift-ingress -o 'go-template={{index .data \"tls.key\"}}' | base64 -d | openssl rsa -noout -modulus | openssl md5)\n
                                                                "},{"location":"operator-guide/ssl-automation-okd/#remove-obsolete-certificate-authority-data-from-kubeconfig","title":"Remove Obsolete Certificate Authority Data From Kubeconfig","text":"

                                                                After updating the certificates, access to the cluster via Lens or the CLI will be denied because of untrusted certificate errors:

                                                                $ oc whoami\nUnable to connect to the server: x509: certificate signed by unknown authority\n

                                                                Such behavior appears because the oc tool references old CA data in the kubeconfig file.

                                                                Note

                                                                Examine the Certificate Authority data using the following command:

                                                                oc config view --minify --raw -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 -d | openssl x509 -text\n

                                                                This certificate has the CA:TRUE parameter, which means that this is a self-signed root CA certificate.

                                                                To fix the error, remove the old CA data from your OpenShift kubeconfig file:

                                                                sed -i \"/certificate-authority-data/d\" $KUBECONFIG\n

                                                                Since this field will be absent from the kubeconfig file, the system root SSL certificates will be used to validate the cluster certificate trust chain. On Ubuntu, Let's Encrypt OpenShift cluster certificates will be validated against the Internet Security Research Group root in /etc/ssl/certs/ca-certificates.crt.
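
                                                                To confirm that the cluster now serves the trusted chain, inspect the certificate presented by the API server; a sketch using openssl, assuming the API endpoint is api.${DOMAIN}:6443:

                                                                echo | openssl s_client -connect api.${DOMAIN}:6443 -servername api.${DOMAIN} 2>/dev/null | openssl x509 -noout -issuer -subject -dates\n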

                                                                "},{"location":"operator-guide/ssl-automation-okd/#certificate-renewals","title":"Certificate Renewals","text":"

                                                                The cert-manager automatically renews the certificates based on the X.509 certificate's duration and the renewBefore value. The minimum value for the spec.duration is 1 hour; for spec.renewBefore, 5 minutes. It is also required that spec.duration > spec.renewBefore.

                                                                Use the cmctl tool to manually trigger a single instant certificate renewal:

                                                                cmctl renew router-certs -n openshift-ingress\ncmctl renew api-certs -n openshift-config\n

                                                                Otherwise, manually renew all certificates in all namespaces with the app=cert-manager label:

                                                                cmctl renew --all-namespaces -l app=cert-manager\n

                                                                Run the cmctl renew --help command to get more details.

                                                                "},{"location":"operator-guide/ssl-automation-okd/#related-articles","title":"Related Articles","text":"
                                                                • Cert-Manager Official Documentation
                                                                • Installing the Cert-Manager Operator for Red Hat OpenShift
                                                                • Checking Issued Certificate Details
                                                                "},{"location":"operator-guide/tekton-monitoring/","title":"Monitoring","text":"

This documentation describes how to integrate tekton-pipelines metrics with the Prometheus and Grafana monitoring stack.

                                                                "},{"location":"operator-guide/tekton-monitoring/#prerequisites","title":"Prerequisites","text":"

Ensure the following requirements are met before moving ahead:

• Kube Prometheus stack is installed;
                                                                • Tekton pipeline is installed.
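A quick way to confirm both prerequisites before proceeding (a sketch that assumes the kube-prometheus-stack Helm release and the default tekton-pipelines namespace; adjust the names to your setup):

# The kube-prometheus-stack release should be listed as deployed\nhelm list -n <monitoring-namespace>\n# The Tekton Pipelines controller pods should be running\nkubectl get pods -n tekton-pipelines\n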
                                                                "},{"location":"operator-guide/tekton-monitoring/#create-and-apply-the-additional-scrape-config","title":"Create and Apply the Additional Scrape Config","text":"

                                                                To create and apply the additional scrape config, follow the steps below:

1. Create the Kubernetes secret file with the additional scrape config:

                                                                  additional-scrape-configs.yaml file
                                                                  apiVersion: v1\nkind: Secret\nmetadata:\n  name: additional-scrape-configs\nstringData:\n  prometheus-additional-job.yaml: |\n    - job_name: \"tekton-pipelines\"\n      scrape_interval: 30s\n      static_configs:\n      - targets: [\"tekton-pipelines-controller.<tekton-pipelines-namespace>.svc.cluster.local:9090\"]\n
                                                                2. Apply the created secret:

                                                                  kubectl apply -f additional-scrape-configs.yaml -n <monitoring-namespace>\n
                                                                3. Update the prometheus stack:

helm upgrade --install prometheus prometheus-community/kube-prometheus-stack --values values.yaml -n <monitoring-namespace>\n

                                                                  The values.yaml file should have the following contents:

                                                                  values.yaml file
prometheus:\n  prometheusSpec:\n    additionalScrapeConfigsSecret:\n      enabled: true\n      name: additional-scrape-configs\n      key: prometheus-additional-job.yaml\n
                                                                4. Download the EDP Tekton Pipeline dashboard:

                                                                  Import Grafana dashboard

                                                                  a. Click on the dashboard menu;

                                                                  b. In the dropdown menu, click the + Import button;

c. Select the downloaded edp-tekton-overview_rev1.json file;

                                                                  Import Grafana dashboard: Options

                                                                  d. Type the name of the dashboard;

                                                                  e. Select the folder for the dashboard;

f. Type the UID (a set of eight numbers, letters, and symbols);

                                                                  g. Click the Import button.

As soon as the dashboard import is completed, you can track the incoming metrics in the dashboard menu:

                                                                Tekton dashboard
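If the dashboard stays empty, it may help to check that the metrics endpoint used in the scrape config above is reachable (a sketch assuming the default service name and port):

# Port-forward the Tekton Pipelines controller metrics port\nkubectl port-forward svc/tekton-pipelines-controller 9090:9090 -n <tekton-pipelines-namespace>\n# In another terminal, the endpoint should return Tekton controller metrics\ncurl -s http://localhost:9090/metrics | grep tekton | head\n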

                                                                "},{"location":"operator-guide/tekton-monitoring/#related-articles","title":"Related Articles","text":"
                                                                • Install Tekton
                                                                • Install EDP
                                                                • Install via Helmfile
                                                                "},{"location":"operator-guide/tekton-overview/","title":"Tekton Overview","text":"

                                                                EPAM Delivery Platform provides Continuous Integration based on Tekton.

Tekton is an open-source, Kubernetes-native framework for creating CI pipelines that allows a user to compile, build, and test applications.

                                                                The edp-tekton GitHub repository provides all Tekton implementation logic on the platform. The Helm charts are used to deploy the resources inside the Kubernetes cluster. Tekton logic is decoupled into separate components:

                                                                Edp-tekton components diagram

                                                                The diagram above describes the following:

                                                                • Common-library is the Helm chart of Library type which stores the common logic shareable across all Tekton pipelines. This library contains Helm templates that generate common Tekton resources.
                                                                • Pipelines-library is the Helm chart of the Application type which stores the core logic for the EDP pipelines. Tekton CRs like Pipelines, Tasks, EventListeners, Triggers, TriggerTemplates, and other resources are delivered with this chart.
                                                                • Custom-pipelines is the Helm chart of the Application type which implements custom logic running specifically for internal EDP development, for example, CI and Release. It also demonstrates the customization flow on the platform.
• Tekton-dashboard is a multi-tenancy-adapted implementation of the upstream Tekton Dashboard. It is configured to expose Tekton resources within a single namespace.
• EDP Interceptor is a custom Tekton Interceptor that enriches the payload from VCS events with EDP data from the Codebase CR specification. This data is used to define the Pipeline logic.

                                                                Inspect the schema below that describes the logic behind the Tekton functionality on the platform:

                                                                Component view for the Tekton on EDP

                                                                The platform logic consists of the following:

                                                                1. The EventListener exposes a dedicated Pod that runs the sink logic and receives incoming events from the VCSs (Gerrit, GitHub, GitLab) through the Ingress. It contains triggers with filtering and routing rules for incoming requests.

                                                                2. Upon the Event Payload arrival, the EventListener runs triggers to process information or validate it via different interceptors.

3. The EDP Interceptor extracts information from the codebases.v2.edp.epam.com CR and injects the received data into the top-level 'extensions' field of the Event Payload. The Interceptor consists of a running Pod and a Service.

4. The Tekton CEL Interceptor performs simple transformations of the resulting data and prepares it for substitution into the Pipeline parameters.

                                                                5. The TriggerTemplate creates a PipelineRun instance with the required parameters extracted from the Event Payload by Interceptors. These parameters are mandatory for Pipelines.

                                                                6. The PipelineRun has a mapping to the EDP Tekton Pipelines using a template approach which reduces code duplication. Each Pipeline is designed for a specific VCS (Gerrit, GitLab, GitHub), technology stack (such as Java or Python), and type (code-review, build).

                                                                7. A Pipeline consists of separate EDP Tekton or open-source Tasks. They are arranged in a specific order of execution in the Pipeline.

8. Each Task is executed as a Pod on the Kubernetes cluster. Tasks can also have a varying number of steps, each of which is executed as a container in the Pod.

9. The Kubernetes-native approach allows creating a PipelineRun either with the kubectl tool or via the EDP Portal UI.
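As an illustration of the last point, a PipelineRun can be created directly with kubectl. The manifest below is only a sketch: the pipeline name and parameter are hypothetical and depend on the pipelines delivered by the pipelines-library chart.

# pipelinerun.yaml - a hypothetical run of an EDP-delivered pipeline\napiVersion: tekton.dev/v1beta1\nkind: PipelineRun\nmetadata:\n  generateName: demo-app-build-\nspec:\n  pipelineRef:\n    name: <build-pipeline-name>\n  params:\n    - name: git-source-url\n      value: <repository-url>\n

It can then be created with kubectl create -f pipelinerun.yaml and watched with kubectl get pipelinerun -w.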

                                                                "},{"location":"operator-guide/upgrade-edp-2.10/","title":"Upgrade EDP v2.9 to 2.10","text":"

                                                                This section provides the details on the EDP upgrade to 2.10.2. Explore the actions and requirements below.

                                                                Note

                                                                Kiosk is optional for EDP v.2.9.0 and higher, and is enabled by default. To disable it, add the following parameter to the values.yaml file: global.kioskEnabled: false. Please refer to the Set Up Kiosk documentation for the details.
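In values.yaml terms, that parameter sits under the global section, roughly as follows:

global:\n  kioskEnabled: false # disable Kiosk, which is enabled by default since EDP v.2.9.0\n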

                                                                Note

In the process of updating EDP, it is necessary to migrate the SonarQube database. Before performing the update procedure, please carefully read step 4 of this guide.

1. Before updating EDP to 2.10.2, delete the SonarQube plugins by executing the following command in the SonarQube pod:

                                                                  rm -r /opt/sonarqube/extensions/plugins/*\n
                                                                2. Update Custom Resource Definitions. Run the following command to apply all the necessary CRDs to the cluster:

                                                                  kubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.10/deploy-templates/crds/v2_v1alpha1_jenkins_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.10/deploy-templates/crds/v1_v1alpha1_keycloakclient_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.10/deploy-templates/crds/v1_v1alpha1_keycloakrealmcomponent_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.10/deploy-templates/crds/v1_v1alpha1_keycloakrealmidentityprovider_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.10/deploy-templates/crds/v1_v1alpha1_keycloakrealmrole_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.10/deploy-templates/crds/v1_v1alpha1_keycloakrealmuser_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.10/deploy-templates/crds/v1_v1alpha1_keycloak_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/release/2.10/deploy-templates/crds/edp_v1alpha1_nexus_crd.yaml\n
                                                                3. To upgrade EDP to the v.2.10.2, run the following command:

                                                                  helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.10.2\n

                                                                  Note

                                                                  To verify the installation, it is possible to test the deployment before applying it to the cluster with: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.10.2 --dry-run

                                                                4. Migrate the database for SonarQube according to the official documentation.

                                                                  Note

Please be aware that tables may be duplicated to speed up the migration process during the upgrade. Due to the duplication, database disk usage can temporarily increase to twice the normal usage. Therefore, it is recommended that database disk usage be below 50% before the migration starts.

• Navigate to the http://SonarQubeServerURL/setup link and follow the setup instructions:

                                                                    Migrate SonarQube database

                                                                  • Click the Upgrade button and wait for the end of the migration process.
5. Remove the resources related to the deprecated Sonar Gerrit Plugin that was removed in EDP 2.10.2:

                                                                  • Remove Sonar Gerrit Plugin from Jenkins (go to Manage Jenkins -> Manage Plugins -> Installed -> Uninstall Sonar Gerrit Plugin).
• In Gerrit, clone the All-Projects repository (see the sketch after this list).
                                                                  • Edit the project.config file in the All-Projects repository and remove the Sonar-Verified label declaration:
                                                                    [label \"Sonar-Verified\"]\nfunction = MaxWithBlock\nvalue = -1 Issues found\nvalue = 0 No score\nvalue = +1 Verified\ndefaultValue = 0\n
                                                                  • Also, remove the following permissions for the Sonar-Verified label in the project.config file:
                                                                    label-Sonar-Verified = -1..+1 group Administrators\nlabel-Sonar-Verified = -1..+1 group Project Owners\nlabel-Sonar-Verified = -1..+1 group Service Users\n
                                                                  • Save the changes, and commit and push the repository to HEAD:refs/meta/config bypassing the Gerrit code review:
                                                                    git push origin HEAD:refs/meta/config\n
                                                                6. Update image versions for the Jenkins agents in the ConfigMap:

                                                                  kubectl edit configmap jenkins-slaves -n <edp-namespace>\n
                                                                  • The versions of the images should be:
                                                                    epamedp/edp-jenkins-codenarc-agent:1.0.1\nepamedp/edp-jenkins-dotnet-21-agent:1.0.5\nepamedp/edp-jenkins-dotnet-31-agent:1.0.4\nepamedp/edp-jenkins-go-agent:1.0.6\nepamedp/edp-jenkins-gradle-java8-agent:1.0.3\nepamedp/edp-jenkins-gradle-java11-agent:2.0.3\nepamedp/edp-jenkins-helm-agent:1.0.10\nepamedp/edp-jenkins-maven-java8-agent:1.0.3\nepamedp/edp-jenkins-maven-java11-agent:2.0.4\nepamedp/edp-jenkins-npm-agent:2.0.3\nepamedp/edp-jenkins-opa-agent:1.0.2\nepamedp/edp-jenkins-python-38-agent:2.0.4\nepamedp/edp-jenkins-terraform-agent:2.0.5\n
                                                                  • Restart the Jenkins pod.
7. Since EDP version v.2.10.x, the create-release.groovy, code-review.groovy, and build.groovy files are deprecated (the Pipeline script from SCM approach is replaced with the Pipeline script approach, see below).

                                                                  • Pipeline script from SCM: Pipeline script from scm example
                                                                  • Pipeline script: Pipeline script example
                                                                  • Update the job-provisioner code and restart the codebase-operator pod. Consult the default job-provisioners code section.
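For reference, a sketch of the Gerrit flow used in step 5; the host and user are placeholders, and 29418 is Gerrit's default SSH port:

# Clone All-Projects and check out its configuration branch\ngit clone ssh://<admin-user>@<gerrit-host>:29418/All-Projects\ncd All-Projects\ngit fetch origin refs/meta/config\ngit checkout FETCH_HEAD\n# Edit project.config, remove the Sonar-Verified label and its permissions, then:\ngit add project.config\ngit commit -m 'Remove Sonar-Verified label'\ngit push origin HEAD:refs/meta/config\n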
                                                                "},{"location":"operator-guide/upgrade-edp-2.10/#related-articles","title":"Related Articles","text":"
                                                                • Manage Jenkins CI Pipeline Job Provisioner
                                                                • Set Up Kiosk
                                                                • SonarQube Upgrade Guide
                                                                "},{"location":"operator-guide/upgrade-edp-2.11/","title":"Upgrade EDP v2.10 to 2.11","text":"

                                                                This section provides the details on the EDP upgrade to 2.11. Explore the actions and requirements below.

                                                                1. Update Custom Resource Definitions. Run the following command to apply all the necessary CRDs to the cluster:

                                                                  kubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.12/deploy-templates/crds/edp_v1alpha1_cd_stage_deploy_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.11/deploy-templates/crds/v2_v1alpha1_merge_request_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/release/2.11/deploy-templates/crds/edp_v1alpha1_user_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/release/2.11/deploy-templates/crds/edp_v1alpha1_cdpipeline_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.11/deploy-templates/crds/v2_v1alpha1_jenkinssharedlibrary_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.11/deploy-templates/crds/v2_v1alpha1_cdstagejenkinsdeployment_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.11/deploy-templates/crds/v1_v1alpha1_keycloakauthflow_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.11/deploy-templates/crds/v1_v1alpha1_keycloakrealmuser_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.12/deploy-templates/crds/edp_v1alpha1_codebaseimagestream_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.12/deploy-templates/crds/edp_v1alpha1_codebase_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/release/2.11/deploy-templates/crds/edp_v1alpha1_sonar_group_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/release/2.11/deploy-templates/crds/edp_v1alpha1_permission_template_crd.yaml\n
2. Back up the kaniko-template ConfigMap and then remove it. This component will be delivered during the upgrade.

3. Set the required awsRegion parameter. Note that the kanikoRoleArn parameter has been moved under the kaniko.roleArn key (see the values sketch after this list). Check the parameters in the EDP installation chart. For details, please refer to the values.yaml file. To upgrade EDP to the v.2.11.x, run the following command:

                                                                  helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.11.x\n

                                                                  Note

                                                                  To verify the installation, it is possible to test the deployment before applying it to the cluster with: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.11.x --dry-run

                                                                4. Update Sonar Project Key:

                                                                  Note

Avoid using special characters when creating projects in SonarQube. Allowed characters are: letters, numbers, -, _, . and :, with at least one non-digit. For details, please refer to the SonarQube documentation. As a result, the project name will be: project-name-release-0.0 or project-name-branchName.

The following actions are required in order to preserve the SonarQube statistics from the previous EDP version:

                                                                  Warning

                                                                  Do not run any pipeline with the updated sonar stage on any existing application before the completion of the first step.

4.1. Update the project key in SonarQube from the old format to the new one by adding the default branch name.

- Navigate to Project Settings -> Update Key: Update SonarQube project key
- Enter the default branch name and click Update: Update SonarQube project key

4.2. As a result, after the first run, the project name will be changed to the new format and will retain all previous statistics:

                                                                  SonarQube project history activity

                                                                5. Update image versions for the Jenkins agents in the ConfigMap:

                                                                    kubectl edit configmap jenkins-slaves -n <edp-namespace>\n
                                                                  • The versions of the images should be:
                                                                    epamedp/edp-jenkins-codenarc-agent:3.0.4\nepamedp/edp-jenkins-dotnet-21-agent:3.0.4\nepamedp/edp-jenkins-dotnet-31-agent:3.0.3\nepamedp/edp-jenkins-go-agent:3.0.5\nepamedp/edp-jenkins-gradle-java11-agent:3.0.2\nepamedp/edp-jenkins-gradle-java8-agent:3.0.2\nepamedp/edp-jenkins-helm-agent:3.0.3\nepamedp/edp-jenkins-maven-java11-agent:3.0.3\nepamedp/edp-jenkins-maven-java8-agent:3.0.3\nepamedp/edp-jenkins-npm-agent:3.0.4\nepamedp/edp-jenkins-opa-agent:3.0.2\nepamedp/edp-jenkins-python-38-agent:3.0.2\nepamedp/edp-jenkins-terraform-agent:3.0.3\n
                                                                  • Add Jenkins agent by following the template:

                                                                    View: values.yaml

                                                                    kaniko-docker-template: |-\n<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n<inheritFrom></inheritFrom>\n<name>kaniko-docker</name>\n<namespace></namespace>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<instanceCap>2147483647</instanceCap>\n<slaveConnectTimeout>100</slaveConnectTimeout>\n<idleMinutes>5</idleMinutes>\n<activeDeadlineSeconds>0</activeDeadlineSeconds>\n<label>kaniko-docker</label>\n<serviceAccount>jenkins</serviceAccount>\n<nodeSelector>beta.kubernetes.io/os=linux</nodeSelector>\n<nodeUsageMode>NORMAL</nodeUsageMode>\n<workspaceVolume class=\"org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume\">\n<memory>false</memory>\n</workspaceVolume>\n<volumes/>\n<containers>\n<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n<name>jnlp</name>\n<image>epamedp/edp-jenkins-kaniko-docker-agent:1.0.4</image>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<workingDir>/tmp</workingDir>\n<command></command>\n<args>${computer.jnlpmac} ${computer.name}</args>\n<ttyEnabled>false</ttyEnabled>\n<resourceRequestCpu></resourceRequestCpu>\n<resourceRequestMemory></resourceRequestMemory>\n<resourceLimitCpu></resourceLimitCpu>\n<resourceLimitMemory></resourceLimitMemory>\n<envVars>\n<org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n<key>JAVA_TOOL_OPTIONS</key>\n<value>-XX:+UnlockExperimentalVMOptions -Dsun.zip.disableMemoryMapping=true</value>\n</org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n</envVars>\n<ports/>\n</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n</containers>\n<envVars/>\n<annotations/>\n<imagePullSecrets/>\n<podRetention class=\"org.csanchez.jenkins.plugins.kubernetes.pod.retention.Default\"/>\n</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n
                                                                    • Restart the Jenkins pod.
• Update the Jenkins plugins that contain 'pipeline' in their names, as well as the 'HTTP Request Plugin'.

                                                                  • Update Jenkins provisioners according to the Manage Jenkins CI Pipeline Job Provisioner and Manage Jenkins CD Pipeline Job Provisioner documentation.

                                                                  • Restart the codebase-operator to recreate the Code-review and Build pipelines for codebases.

                                                                  • Run the CD job-provisioners for every CD pipeline to align the CD stages.
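For step 3, the changed nesting looks roughly like this in values.yaml (the role ARN is a placeholder; only the key names follow from the upgrade notes):

awsRegion: <aws-region>\nkaniko:\n  roleArn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/<kaniko-role> # formerly the top-level kanikoRoleArn parameter\n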
                                                                  • "},{"location":"operator-guide/upgrade-edp-2.12/","title":"Upgrade EDP v2.11 to 2.12","text":"

                                                                    Important

                                                                    We suggest making a backup of the EDP environment before starting the upgrade procedure.

                                                                    This section provides the details on the EDP upgrade to 2.12. Explore the actions and requirements below.

                                                                    Notes

                                                                    • EDP now supports Kubernetes 1.22: Ingress Resources use networking.k8s.io/v1, and Ingress Operators use CustomResourceDefinition apiextensions.k8s.io/v1.
                                                                    • EDP Team now delivers its own Gerrit Docker image: epamedp/edp-gerrit. It is based on the openfrontier Gerrit Docker image.
                                                                    1. EDP now uses DefectDojo as a SAST tool. It is mandatory to deploy DefectDojo before updating EDP to v.2.12.x.

                                                                    2. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                                                      kubectl apply -f https://raw.githubusercontent.com/epam/edp-admin-console-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_adminconsoles.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_cdpipelines.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_cdstagedeployments.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_cdstagejenkinsdeployments.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_codebasebranches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_codebaseimagestreams.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_codebases.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-component-operator/release/0.12/deploy-templates/crds/v1.edp.epam.com_edpcomponents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerritgroupmembers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerritgroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerritmergerequests.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerritprojectaccesses.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerritprojects.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerritreplicationconfigs.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_gerrits.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_gitservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_gittags.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_imagestreamtags.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkins.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsagents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsauthorizationrolemappings.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsauthorizationroles.yaml\nkubectl apply -f 
https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsfolders.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsjobbuildruns.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsjobs.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsscripts.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinsserviceaccounts.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_jenkinssharedlibraries.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_jiraissuemetadatas.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.13/deploy-templates/crds/v2.edp.epam.com_jiraservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakauthflows.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakclients.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakclientscopes.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealmcomponents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealmgroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealmidentityproviders.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealmrolebatches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealmroles.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealms.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloakrealmusers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.12/deploy-templates/crds/v1.edp.epam.com_keycloaks.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_nexuses.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_nexususers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_perfdatasourcegitlabs.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_perfdatasourcejenkinses.yaml\nkubectl apply -f 
https://raw.githubusercontent.com/epam/edp-perf-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_perfdatasourcesonars.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_perfservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_sonargroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_sonarpermissiontemplates.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_sonars.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/release/2.12/deploy-templates/crds/v2.edp.epam.com_stages.yaml\n
                                                                    3. Set the required parameters. For details, please refer to the values.yaml file.

                                                                      • In version v.2.12.x, EDP contains Gerrit v3.6.1. According to the Official Gerrit Upgrade flow, a user must initially upgrade to Gerrit v3.5.2, and then upgrade to v3.6.1. Therefore, define the gerrit-operator.gerrit.version=3.5.2 value in the edp-install values.yaml file.
                                                                      • Two more components are available with the new functionality:

                                                                        • edp-argocd-operator
                                                                        • external-secrets
• If there is no need to use these new operators, define false values for them in the existing values.yaml file:

                                                                        View: values.yaml

gerrit-operator:\n  gerrit:\n    version: \"3.5.2\"\nexternalSecrets:\n  enabled: false\nargocd:\n  enabled: false\n
                                                                      • The edp-jenkins-role is renamed to the jenkins-resources-role. Delete the edp-jenkins-role with the following command:

                                                                          kubectl delete role edp-jenkins-role -n <edp-namespace>\n

The jenkins-resources-role role will be created automatically during the EDP upgrade.

                                                                      • Recreate the edp-jenkins-resources-permissions RoleBinding according to the following template:

                                                                        View: jenkins-resources-role

apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: edp-jenkins-resources-permissions\n  namespace: <edp-namespace>\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: jenkins-resources-role\n
                                                                      • To upgrade EDP to the v.2.12.x, run the following command:

                                                                        helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.12.x\n

                                                                        Note

                                                                        To verify the installation, it is possible to test the deployment before applying it to the cluster with the following command: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.12.x --dry-run

                                                                      • After the update, please remove the gerrit-operator.gerrit.version value. In this case, the default value will be used, and Gerrit will be updated to the v3.6.1 version. Run the following command:

                                                                          helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.12.x\n

                                                                        Note

                                                                        To verify the installation, it is possible to test the deployment before applying it to the cluster with the following command: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.12.x --dry-run

                                                                      • Update image versions for the Jenkins agents in the ConfigMap:

                                                                          kubectl edit configmap jenkins-slaves -n <edp-namespace>\n
                                                                        • The versions of the images must be the following:
                                                                          epamedp/edp-jenkins-codenarc-agent:3.0.8\nepamedp/edp-jenkins-dotnet-21-agent:3.0.7\nepamedp/edp-jenkins-dotnet-31-agent:3.0.7\nepamedp/edp-jenkins-go-agent:3.0.11\nepamedp/edp-jenkins-gradle-java11-agent:3.0.5\nepamedp/edp-jenkins-gradle-java8-agent:3.0.7\nepamedp/edp-jenkins-helm-agent:3.0.8\nepamedp/edp-jenkins-maven-java11-agent:3.0.6\nepamedp/edp-jenkins-maven-java8-agent:3.0.8\nepamedp/edp-jenkins-npm-agent:3.0.7\nepamedp/edp-jenkins-opa-agent:3.0.5\nepamedp/edp-jenkins-python-38-agent:3.0.5\nepamedp/edp-jenkins-terraform-agent:3.0.6\n
                                                                        • Add Jenkins agents by following the template:

                                                                          View: jenkins-slaves

                                                                            sast-template: |\n<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n<inheritFrom></inheritFrom>\n<name>sast</name>\n<namespace></namespace>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<instanceCap>2147483647</instanceCap>\n<slaveConnectTimeout>100</slaveConnectTimeout>\n<idleMinutes>5</idleMinutes>\n<activeDeadlineSeconds>0</activeDeadlineSeconds>\n<label>sast</label>\n<serviceAccount>jenkins</serviceAccount>\n<nodeSelector>beta.kubernetes.io/os=linux</nodeSelector>\n<nodeUsageMode>NORMAL</nodeUsageMode>\n<workspaceVolume class=\"org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume\">\n<memory>false</memory>\n</workspaceVolume>\n<volumes/>\n<containers>\n<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n<name>jnlp</name>\n<image>epamedp/edp-jenkins-sast-agent:0.1.3</image>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<workingDir>/tmp</workingDir>\n<command></command>\n<args>${computer.jnlpmac} ${computer.name}</args>\n<ttyEnabled>false</ttyEnabled>\n<resourceRequestCpu></resourceRequestCpu>\n<resourceRequestMemory></resourceRequestMemory>\n<resourceLimitCpu></resourceLimitCpu>\n<resourceLimitMemory></resourceLimitMemory>\n<envVars>\n<org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n<key>JAVA_TOOL_OPTIONS</key>\n<value>-XX:+UnlockExperimentalVMOptions -Dsun.zip.disableMemoryMapping=true</value>\n</org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n</envVars>\n<ports/>\n</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n</containers>\n<envVars/>\n<annotations/>\n<imagePullSecrets/>\n<podRetention class=\"org.csanchez.jenkins.plugins.kubernetes.pod.retention.Default\"/>\n</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n
                                                                          • If required, update the requests and limits for the following Jenkins agents:

                                                                            • edp-jenkins-codenarc-agent
                                                                            • edp-jenkins-go-agent
                                                                            • edp-jenkins-gradle-java11-agent
                                                                            • edp-jenkins-gradle-java8-agent
                                                                            • edp-jenkins-maven-java11-agent
                                                                            • edp-jenkins-maven-java8-agent
                                                                            • edp-jenkins-npm-agent
                                                                            • edp-jenkins-dotnet-21-agent
                                                                            • edp-jenkins-dotnet-31-agent

EDP requires starting with the following values:

                                                                            View: jenkins-slaves

                                                                              <resourceRequestCpu>500m</resourceRequestCpu>\n<resourceRequestMemory>1Gi</resourceRequestMemory>\n<resourceLimitCpu>2</resourceLimitCpu>\n<resourceLimitMemory>5Gi</resourceLimitMemory>\n
                                                                            • Restart the Jenkins pod.
                                                                          • Update Jenkins provisioners according to the Manage Jenkins CI Pipeline Job Provisioner instruction.

• Restart the codebase-operator to recreate the Code Review and Build pipelines for the codebases.

                                                                          • Warning

If there are different EDP versions on one cluster, the following error may occur at the init stage of the Jenkins Groovy pipeline: java.lang.NumberFormatException: For input string: \"\". To fix this issue, please run the following command using kubectl v1.24.4+:

                                                                            kubectl patch codebasebranches.v2.edp.epam.com <codebase-branch-name>  -n <edp-namespace>  '--subresource=status' '--type=merge' -p '{\"status\": {\"build\": \"0\"}}'\n
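To confirm the patch took effect, the status field can be read back (a sketch using the same placeholders):

kubectl get codebasebranches.v2.edp.epam.com <codebase-branch-name> -n <edp-namespace> -o jsonpath='{.status.build}'\n# Expected output: 0\n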
                                                                            "},{"location":"operator-guide/upgrade-edp-2.12/#upgrade-edp-to-2122","title":"Upgrade EDP to 2.12.2","text":"

                                                                            Important

                                                                            We suggest making a backup of the EDP environment before starting the upgrade procedure.

                                                                            This section provides the details on the EDP upgrade to 2.12.2. Explore the actions and requirements below.

                                                                            1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                                                              kubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.12.2/deploy-templates/crds/v2.edp.epam.com_jenkins.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.12.1/deploy-templates/crds/v2.edp.epam.com_gerrits.yaml\n
                                                                            2. To upgrade EDP to 2.12.2, run the following command:

                                                                              helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.12.2\n

                                                                              Note

                                                                              To verify the installation, it is possible to test the deployment before applying it to the cluster with the following command: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.12.2 --dry-run

                                                                            "},{"location":"operator-guide/upgrade-edp-2.8/","title":"Upgrade EDP v2.7 to 2.8","text":"

                                                                            This section provides the details on the EDP upgrade to 2.8.4. Explore the actions and requirements below.

                                                                            Note

                                                                            Kiosk is implemented and mandatory for EDP v.2.8.4 and is optional for EDP v.2.9.0 and higher.

                                                                            To upgrade EDP to 2.8.4, take the following steps:

1. Deploy and configure Kiosk (create a Service Account, Account, and ClusterRoleBinding) according to the Set Up Kiosk documentation.

                                                                              • Update the spec field in the Kiosk space:
apiVersion: tenancy.kiosk.sh/v1alpha1\nkind: Space\nmetadata:\n  name: <edp-project>\nspec:\n  account: <edp-project>-admin\n
                                                                              • Create RoleBinding (required for namespaces created before using Kiosk):

                                                                                Note

In the uid field under ownerReferences in the Kubernetes manifest, indicate the Account Custom Resource UID from accounts.config.kiosk.sh, which can be retrieved with: kubectl get account <edp-project>-admin -o=custom-columns=NAME:.metadata.uid --no-headers=true

                                                                                View: rolebinding-kiosk.yaml

apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  generateName: <edp-project>-admin-\n  namespace: <edp-project>\n  ownerReferences:\n  - apiVersion: config.kiosk.sh/v1alpha1\n    blockOwnerDeletion: true\n    controller: true\n    kind: Account\n    name: <edp-project>-admin\n    uid: ''\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: kiosk-space-admin\nsubjects:\n- kind: ServiceAccount\n  name: <edp-project>\n  namespace: security\n
                                                                                kubectl create -f rolebinding-kiosk.yaml\n
• When Amazon Elastic Container Registry is used to store the images, there are two options:

• Enable IRSA and create an AWS IAM Role for the Kaniko image builder. Please refer to the IAM Roles for Kaniko Service Accounts section for the details.
• Alternatively, the Amazon Elastic Container Registry roles can be stored in an instance profile.
• Update the Custom Resource Definitions by applying all the necessary CRDs to the cluster with the command below:

                                                                                kubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/release/2.8/deploy-templates/crds/edp_v1alpha1_cdpipeline_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.8/deploy-templates/crds/edp_v1alpha1_codebase_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.8/deploy-templates/crds/edp_v1alpha1_cd_stage_deploy_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.8/deploy-templates/crds/v2_v1alpha1_jenkinsjobbuildrun_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.8/deploy-templates/crds/v2_v1alpha1_cdstagejenkinsdeployment_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.8/deploy-templates/crds/v2_v1alpha1_jenkinsjob_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/release/2.8/deploy-templates/crds/edp_v1alpha1_nexus_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.8/deploy-templates/crds/v1_v1alpha1_keycloakauthflow_crd.yaml\n
• When Amazon Elastic Container Registry is used to store the images and Kaniko to build them, add the kanikoRoleArn parameter to the values before starting the update process. This parameter is shown in AWS Roles once IRSA is enabled and the AWS IAM Role is created for Kaniko. The value should look as follows:

                                                                                kanikoRoleArn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/AWSIRSA\u2039CLUSTER_NAME\u203a\u2039EDP_NAMESPACE\u203aKaniko\n
                                                                              • To upgrade EDP to the v.2.8.4, run the following command:

                                                                                helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.8.4\n

                                                                                Note

                                                                                To verify the installation, it is possible to test the deployment before applying it to the cluster with: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.8.4 --dry-run

• Optionally, remove the following Kubernetes resources left over from the previous EDP installation:

                                                                                kubectl delete cm luminatesec-conf -n <edp-namespace>\nkubectl delete sa edp edp-perf-operator -n <edp-namespace>\nkubectl delete deployment perf-operator -n <edp-namespace>\nkubectl delete clusterrole edp-<edp-namespace> edp-perf-operator-<edp-namespace>\nkubectl delete clusterrolebinding edp-<edp-namespace> edp-perf-operator-<edp-namespace>\nkubectl delete rolebinding edp-<edp-namespace> edp-perf-operator-<edp-namespace>-admin -n <edp-namespace>\nkubectl delete perfserver epam-perf -n <edp-namespace>\nkubectl delete services.v2.edp.epam.com postgres rabbit-mq -n <edp-namespace>\n
                                                                              • Update the CI and CD Jenkins job provisioners:

                                                                                Note

                                                                                Please refer to the Manage Jenkins CI Pipeline Job Provisioner section for the details.

                                                                                View: Default CI provisioner template for EDP 2.8.4
                                                                                /* Copyright 2021 EPAM Systems.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n\nSee the License for the specific language governing permissions and\nlimitations under the License. */\n\nimport groovy.json.*\nimport jenkins.model.Jenkins\nimport hudson.model.*\n\nJenkins jenkins = Jenkins.instance\ndef stages = [:]\ndef jiraIntegrationEnabled = Boolean.parseBoolean(\"${JIRA_INTEGRATION_ENABLED}\" as String)\ndef commitValidateStage = jiraIntegrationEnabled ? ',{\"name\": \"commit-validate\"}' : ''\ndef createJIMStage = jiraIntegrationEnabled ? ',{\"name\": \"create-jira-issue-metadata\"}' : ''\ndef buildTool = \"${BUILD_TOOL}\"\ndef goBuildStage = buildTool.toString() == \"go\" ? ',{\"name\": \"build\"}' : ',{\"name\": \"compile\"}'\n\nstages['Code-review-application'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" + goBuildStage +\n',{\"name\": \"tests\"},[{\"name\": \"sonar\"},{\"name\": \"dockerfile-lint\"},{\"name\": \"helm-lint\"}]]'\nstages['Code-review-library'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"compile\"},{\"name\": \"tests\"},' +\n'{\"name\": \"sonar\"}]'\nstages['Code-review-autotests'] = '[{\"name\": \"gerrit-checkout\"},{\"name\": \"get-version\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"},{\"name\": \"sonar\"}' + \"${createJIMStage}\" + ']'\nstages['Code-review-default'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" + ']'\nstages['Code-review-library-terraform'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"terraform-lint\"}]'\nstages['Code-review-library-opa'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"tests\"}]'\nstages['Code-review-library-codenarc'] = '[{\"name\": \"gerrit-checkout\"}' + \"${commitValidateStage}\" +\n',{\"name\": \"sonar\"},{\"name\": \"build\"}]'\n\nstages['Build-library-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"build\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-npm'] = stages['Build-library-maven']\nstages['Build-library-gradle'] = stages['Build-library-maven']\nstages['Build-library-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-terraform'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"terraform-lint\"}' +\n',{\"name\": \"terraform-plan\"},{\"name\": \"terraform-apply\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-opa'] = '[{\"name\": \"checkout\"},{\"name\": 
\"get-version\"}' +\n',{\"name\": \"tests\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-library-codenarc'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"sonar\"},{\"name\": \"build\"}' +\n\"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n\n\nstages['Build-application-maven'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},[{\"name\": \"sonar\"}],{\"name\": \"build\"},{\"name\": \"build-image-kaniko\"},' +\n'{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-npm'] = stages['Build-application-maven']\nstages['Build-application-gradle'] = stages['Build-application-maven']\nstages['Build-application-dotnet'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},[{\"name\": \"sonar\"}],{\"name\": \"build-image-kaniko\"},' +\n'{\"name\": \"push\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-go'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"tests\"},{\"name\": \"sonar\"},' +\n'{\"name\": \"build\"},{\"name\": \"build-image-kaniko\"}' +\n\"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\nstages['Build-application-python'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"},{\"name\": \"compile\"},' +\n'{\"name\": \"tests\"},{\"name\": \"sonar\"},' +\n'{\"name\": \"build-image-kaniko\"},{\"name\": \"push\"}' + \"${createJIMStage}\" +\n',{\"name\": \"git-tag\"}]'\n\nstages['Create-release'] = '[{\"name\": \"checkout\"},{\"name\": \"create-branch\"},{\"name\": \"trigger-job\"}]'\n\ndef defaultBuild = '[{\"name\": \"checkout\"}' + \"${createJIMStage}\" + ']'\n\ndef codebaseName = \"${NAME}\"\ndef gitServerCrName = \"${GIT_SERVER_CR_NAME}\"\ndef gitServerCrVersion = \"${GIT_SERVER_CR_VERSION}\"\ndef gitCredentialsId = \"${GIT_CREDENTIALS_ID ? 
GIT_CREDENTIALS_ID : 'gerrit-ciuser-sshkey'}\"\ndef repositoryPath = \"${REPOSITORY_PATH}\"\ndef defaultBranch = \"${DEFAULT_BRANCH}\"\n\ndef codebaseFolder = jenkins.getItem(codebaseName)\nif (codebaseFolder == null) {\nfolder(codebaseName)\n}\n\ncreateListView(codebaseName, \"Releases\")\ncreateReleasePipeline(\"Create-release-${codebaseName}\", codebaseName, stages[\"Create-release\"], \"create-release.groovy\",\nrepositoryPath, gitCredentialsId, gitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, defaultBranch)\n\nif (buildTool.toString().equalsIgnoreCase('none')) {\nreturn true\n}\n\nif (BRANCH) {\ndef branch = \"${BRANCH}\"\ndef formattedBranch = \"${branch.toUpperCase().replaceAll(/\\\\//, \"-\")}\"\ncreateListView(codebaseName, formattedBranch)\n\ndef type = \"${TYPE}\"\ndef crKey = getStageKeyName(buildTool)\ncreateCiPipeline(\"Code-review-${codebaseName}\", codebaseName, stages[crKey], \"code-review.groovy\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion)\n\ndef buildKey = \"Build-${type}-${buildTool.toLowerCase()}\".toString()\nif (type.equalsIgnoreCase('application') || type.equalsIgnoreCase('library')) {\ndef jobExists = false\nif(\"${formattedBranch}-Build-${codebaseName}\".toString() in Jenkins.instance.getAllItems().collect{it.name})\njobExists = true\n\ncreateCiPipeline(\"Build-${codebaseName}\", codebaseName, stages.get(buildKey, defaultBuild), \"build.groovy\",\nrepositoryPath, gitCredentialsId, branch, gitServerCrName, gitServerCrVersion)\n\nif(!jobExists)\nqueue(\"${codebaseName}/${formattedBranch}-Build-${codebaseName}\")\n}\n}\n\ndef createCiPipeline(pipelineName, codebaseName, codebaseStages, pipelineScript, repository, credId, watchBranch, gitServerCrName, gitServerCrVersion) {\npipelineJob(\"${codebaseName}/${watchBranch.toUpperCase().replaceAll(/\\\\//, \"-\")}-${pipelineName}\") {\nlogRotator {\nnumToKeep(10)\ndaysToKeep(7)\n}\ntriggers {\ngerrit {\nevents {\nif (pipelineName.contains(\"Build\"))\nchangeMerged()\nelse\npatchsetCreated()\n}\nproject(\"plain:${codebaseName}\", [\"plain:${watchBranch}\"])\n}\n}\ndefinition {\ncpsScm {\nscm {\ngit {\nremote {\nurl(repository)\ncredentials(credId)\n}\nbranches(\"${watchBranch}\")\nscriptPath(\"${pipelineScript}\")\n}\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${codebaseStages}\", \"Consequence of stages in JSON format to be run during execution\")\nstringParam(\"GERRIT_PROJECT_NAME\", \"${codebaseName}\", \"Gerrit project name(Codebase name) to be build\")\nstringParam(\"BRANCH\", \"${watchBranch}\", \"Branch to build artifact from\")\n}\n}\n}\n}\n}\n\ndef getStageKeyName(buildTool) {\nif (buildTool.toString().equalsIgnoreCase('terraform')) {\nreturn \"Code-review-library-terraform\"\n}\nif (buildTool.toString().equalsIgnoreCase('opa')) {\nreturn \"Code-review-library-opa\"\n}\nif (buildTool.toString().equalsIgnoreCase('codenarc')) {\nreturn \"Code-review-library-codenarc\"\n}\ndef buildToolsOutOfTheBox = [\"maven\",\"npm\",\"gradle\",\"dotnet\",\"none\",\"go\",\"python\"]\ndef supBuildTool = buildToolsOutOfTheBox.contains(buildTool.toString())\nreturn supBuildTool ? 
\"Code-review-${TYPE}\" : \"Code-review-default\"\n}\n\ndef createReleasePipeline(pipelineName, codebaseName, codebaseStages, pipelineScript, repository, credId,\ngitServerCrName, gitServerCrVersion, jiraIntegrationEnabled, defaultBranch) {\npipelineJob(\"${codebaseName}/${pipelineName}\") {\nlogRotator {\nnumToKeep(14)\ndaysToKeep(30)\n}\ndefinition {\ncpsScm {\nscm {\ngit {\nremote {\nurl(repository)\ncredentials(credId)\n}\nbranches(\"${defaultBranch}\")\nscriptPath(\"${pipelineScript}\")\n}\n}\nparameters {\nstringParam(\"STAGES\", \"${codebaseStages}\", \"\")\nif (pipelineName.contains(\"Create-release\")) {\nstringParam(\"JIRA_INTEGRATION_ENABLED\", \"${jiraIntegrationEnabled}\", \"Is Jira integration enabled\")\nstringParam(\"GERRIT_PROJECT\", \"${codebaseName}\", \"\")\nstringParam(\"RELEASE_NAME\", \"\", \"Name of the release(branch to be created)\")\nstringParam(\"COMMIT_ID\", \"\", \"Commit ID that will be used to create branch from for new release. If empty, HEAD of master will be used\")\nstringParam(\"GIT_SERVER_CR_NAME\", \"${gitServerCrName}\", \"Name of Git Server CR to generate link to Git server\")\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"REPOSITORY_PATH\", \"${repository}\", \"Full repository path\")\nstringParam(\"DEFAULT_BRANCH\", \"${defaultBranch}\", \"Default repository branch\")\n}\n}\n}\n}\n}\n}\n\ndef createListView(codebaseName, branchName) {\nlistView(\"${codebaseName}/${branchName}\") {\nif (branchName.toLowerCase() == \"releases\") {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^Create-release.*\")\n}\n}\n} else {\njobFilters {\nregex {\nmatchType(MatchType.INCLUDE_MATCHED)\nmatchValue(RegexMatchValue.NAME)\nregex(\"^${branchName}-(Code-review|Build).*\")\n}\n}\n}\ncolumns {\nstatus()\nweather()\nname()\nlastSuccess()\nlastFailure()\nlastDuration()\nbuildButton()\n}\n}\n}\n

                                                                                Note

                                                                                Please refer to the Manage Jenkins CD Pipeline Job Provisioner page for the details.

                                                                                View: Default CD provisioner template for EDP 2.8.4
                                                                                /* Copyright 2021 EPAM Systems.\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n\nSee the License for the specific language governing permissions and\nlimitations under the License. */\n\nimport groovy.json.*\nimport jenkins.model.Jenkins\n\nJenkins jenkins = Jenkins.instance\n\ndef pipelineName = \"${PIPELINE_NAME}-cd-pipeline\"\ndef stageName = \"${STAGE_NAME}\"\ndef qgStages = \"${QG_STAGES}\"\ndef gitServerCrVersion = \"${GIT_SERVER_CR_VERSION}\"\ndef gitCredentialsId = \"${GIT_CREDENTIALS_ID}\"\ndef sourceType = \"${SOURCE_TYPE}\"\ndef libraryURL = \"${LIBRARY_URL}\"\ndef libraryBranch = \"${LIBRARY_BRANCH}\"\ndef autodeploy = \"${AUTODEPLOY}\"\ndef scriptPath = \"Jenkinsfile\"\ndef containerDeploymentType = \"container\"\ndef deploymentType = \"${DEPLOYMENT_TYPE}\"\n\ndef stages = buildStages(deploymentType, containerDeploymentType, qgStages)\n\ndef codebaseFolder = jenkins.getItem(pipelineName)\nif (codebaseFolder == null) {\nfolder(pipelineName)\n}\n\nif (deploymentType == containerDeploymentType) {\ncreateContainerizedCdPipeline(pipelineName, stageName, stages, scriptPath, sourceType,\nlibraryURL, libraryBranch, gitCredentialsId, gitServerCrVersion,\nautodeploy)\n} else {\ncreateCustomCdPipeline(pipelineName, stageName)\n}\n\ndef buildStages(deploymentType, containerDeploymentType, qgStages) {\nreturn deploymentType == containerDeploymentType\n? '[{\"name\":\"init\",\"step_name\":\"init\"},{\"name\":\"deploy\",\"step_name\":\"deploy\"},' + qgStages + ',{\"name\":\"promote-images-ecr\",\"step_name\":\"promote-images\"}]'\n: ''\n}\n\ndef createContainerizedCdPipeline(pipelineName, stageName, stages, pipelineScript, sourceType, libraryURL, libraryBranch, libraryCredId, gitServerCrVersion, autodeploy) {\npipelineJob(\"${pipelineName}/${stageName}\") {\nif (sourceType == \"library\") {\ndefinition {\ncpsScm {\nscm {\ngit {\nremote {\nurl(libraryURL)\ncredentials(libraryCredId)\n}\nbranches(\"${libraryBranch}\")\nscriptPath(\"${pipelineScript}\")\n}\n}\n}\n}\n} else {\ndefinition {\ncps {\nscript(\"@Library(['edp-library-stages', 'edp-library-pipelines']) _ \\n\\nDeploy()\")\nsandbox(true)\n}\n}\n}\nproperties {\ndisableConcurrentBuilds()\n}\nparameters {\nstringParam(\"GIT_SERVER_CR_VERSION\", \"${gitServerCrVersion}\", \"Version of GitServer CR Resource\")\nstringParam(\"STAGES\", \"${stages}\", \"Consequence of stages in JSON format to be run during execution\")\n\nif (autodeploy?.trim() && autodeploy.toBoolean()) {\nstringParam(\"AUTODEPLOY\", \"${autodeploy}\", \"Is autodeploy enabled?\")\nstringParam(\"CODEBASE_VERSION\", null, \"Codebase versions to deploy.\")\n}\n}\n}\n}\n\ndef createCustomCdPipeline(pipelineName, stageName) {\npipelineJob(\"${pipelineName}/${stageName}\") {\nproperties {\ndisableConcurrentBuilds()\n}\n}\n}\n
                                                                                • It is also necessary to add the string parameter DEPLOYMENT_TYPE to the CD provisioner:
• Go to job-provisions -> cd -> default -> configure;
• Add Parameter -> String parameter;
                                                                                  • Name -> DEPLOYMENT_TYPE
                                                                              • Update Jenkins pipelines and stages to the new release tag:

                                                                                • In Jenkins, go to Manage Jenkins -> Configure system -> Find the Global Pipeline Libraries menu.
                                                                                • Change the Default version for edp-library-stages from build/2.8.0-RC.6 to build/2.9.0-RC.5
                                                                                • Change the Default version for edp-library-pipelines from build/2.8.0-RC.4 to build/2.9.0-RC.3
                                                                              • Update the edp-admin-console Custom Resource in the KeycloakClient Custom Resource Definition:

                                                                                View: keycloakclient.yaml
                                                                                kind: KeycloakClient\napiVersion: v1.edp.epam.com/v1alpha1\nmetadata:\nname: edp-admin-console\nnamespace: <edp-namespace>\nspec:\nadvancedProtocolMappers: false\nattributes: null\naudRequired: true\nclientId: admin-console-client\ndirectAccess: true\npublic: false\nsecret: admin-console-client\nserviceAccount:\nenabled: true\nrealmRoles:\n- developer\ntargetRealm: <keycloak-edp-realm>\nwebUrl: >-\nhttps://edp-admin-console-example.com\n
                                                                                kubectl apply -f keycloakclient.yaml\n
                                                                              • Remove the admin-console-client client ID in the edp-namespace-main realm in Keycloak, restart the keycloak-operator pod and check that the new KeycloakClient is created with the confidential access type.

                                                                                Note

                                                                                If \"Internal error\" occurs, regenerate the admin-console-client secret in the Credentials tab in Keycloak and update the admin-console-client secret key \"clientSecret\" and \"password\".

                                                                              • Update image versions for the Jenkins agents in the ConfigMap:

                                                                                kubectl edit configmap jenkins-slaves -n <edp-namespace>\n
                                                                                • The versions of the images should be:
                                                                                  epamedp/edp-jenkins-dotnet-21-agent:1.0.2\nepamedp/edp-jenkins-dotnet-31-agent:1.0.2\nepamedp/edp-jenkins-go-agent:1.0.3\nepamedp/edp-jenkins-gradle-java11-agent:2.0.2\nepamedp/edp-jenkins-gradle-java8-agent:1.0.2\nepamedp/edp-jenkins-helm-agent:1.0.6\nepamedp/edp-jenkins-maven-java11-agent:2.0.3\nepamedp/edp-jenkins-maven-java8-agent:1.0.2\nepamedp/edp-jenkins-npm-agent:2.0.2\nepamedp/edp-jenkins-python-38-agent:2.0.3\nepamedp/edp-jenkins-terraform-agent:2.0.4\n
                                                                                • Add new Jenkins agents under the data field:
                                                                                View
                                                                                data:\ncodenarc-template: |-\n<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n<inheritFrom></inheritFrom>\n<name>codenarc</name>\n<namespace></namespace>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<instanceCap>2147483647</instanceCap>\n<slaveConnectTimeout>100</slaveConnectTimeout>\n<idleMinutes>5</idleMinutes>\n<activeDeadlineSeconds>0</activeDeadlineSeconds>\n<label>codenarc</label>\n<serviceAccount>jenkins</serviceAccount>\n<nodeSelector>beta.kubernetes.io/os=linux</nodeSelector>\n<nodeUsageMode>NORMAL</nodeUsageMode>\n<workspaceVolume class=\"org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume\">\n<memory>false</memory>\n</workspaceVolume>\n<volumes/>\n<containers>\n<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n<name>jnlp</name>\n<image>epamedp/edp-jenkins-codenarc-agent:1.0.0</image>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<workingDir>/tmp</workingDir>\n<command></command>\n<args>${computer.jnlpmac} ${computer.name}</args>\n<ttyEnabled>false</ttyEnabled>\n<resourceRequestCpu></resourceRequestCpu>\n<resourceRequestMemory></resourceRequestMemory>\n<resourceLimitCpu></resourceLimitCpu>\n<resourceLimitMemory></resourceLimitMemory>\n<envVars>\n<org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n<key>JAVA_TOOL_OPTIONS</key>\n<value>-XX:+UnlockExperimentalVMOptions -Dsun.zip.disableMemoryMapping=true</value>\n</org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n</envVars>\n<ports/>\n</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n</containers>\n<envVars/>\n<annotations/>\n<imagePullSecrets/>\n<podRetention class=\"org.csanchez.jenkins.plugins.kubernetes.pod.retention.Default\"/>\n</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\nopa-template: |-\n<org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n<inheritFrom></inheritFrom>\n<name>opa</name>\n<namespace></namespace>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<instanceCap>2147483647</instanceCap>\n<slaveConnectTimeout>100</slaveConnectTimeout>\n<idleMinutes>5</idleMinutes>\n<activeDeadlineSeconds>0</activeDeadlineSeconds>\n<label>opa</label>\n<serviceAccount>jenkins</serviceAccount>\n<nodeSelector>beta.kubernetes.io/os=linux</nodeSelector>\n<nodeUsageMode>NORMAL</nodeUsageMode>\n<workspaceVolume class=\"org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume\">\n<memory>false</memory>\n</workspaceVolume>\n<volumes/>\n<containers>\n<org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n<name>jnlp</name>\n<image>epamedp/edp-jenkins-opa-agent:1.0.1</image>\n<privileged>false</privileged>\n<alwaysPullImage>false</alwaysPullImage>\n<workingDir>/tmp</workingDir>\n<command></command>\n<args>${computer.jnlpmac} ${computer.name}</args>\n<ttyEnabled>false</ttyEnabled>\n<resourceRequestCpu></resourceRequestCpu>\n<resourceRequestMemory></resourceRequestMemory>\n<resourceLimitCpu></resourceLimitCpu>\n<resourceLimitMemory></resourceLimitMemory>\n<envVars>\n<org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n<key>JAVA_TOOL_OPTIONS</key>\n<value>-XX:+UnlockExperimentalVMOptions 
-Dsun.zip.disableMemoryMapping=true</value>\n</org.csanchez.jenkins.plugins.kubernetes.model.KeyValueEnvVar>\n</envVars>\n<ports/>\n</org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>\n</containers>\n<envVars/>\n<annotations/>\n<imagePullSecrets/>\n<podRetention class=\"org.csanchez.jenkins.plugins.kubernetes.pod.retention.Default\"/>\n</org.csanchez.jenkins.plugins.kubernetes.PodTemplate>\n
                                                                                • Restart the Jenkins pod.
                                                                              • Update compatible plugins in Jenkins and install additional plugins:

                                                                                • Go to Manage Jenkins -> Manage Plugins -> Select Compatible -> Click Download now and install after restart
                                                                                • Install the following additional plugins (click the Available plugins tab in Jenkins):
                                                                                  • Groovy Postbuild
                                                                                  • CloudBees AWS Credentials
                                                                                  • Badge
                                                                                  • Timestamper
• Add the annotation deploy.edp.epam.com/previous-stage-name: '' (leave it empty if the CD pipeline contains only one stage) to each Custom Resource of the Stage Custom Resource Definition, for example:

                                                                                • List all Custom Resources in Stage: kubectl get stages.v2.edp.epam.com -n <edp-namespace>
                                                                                • Edit resources: kubectl edit stages.v2.edp.epam.com <cd-stage-name> -n <edp-namespace>
                                                                                  apiVersion: v2.edp.epam.com/v1alpha1\nkind: Stage\nmetadata:\nannotations:\ndeploy.edp.epam.com/previous-stage-name: ''\n

                                                                                Note

If a pipeline contains several stages, set the annotation to the name of the previous stage as indicated in the EDP Admin Console, for example: deploy.edp.epam.com/previous-stage-name: 'dev'.
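If there are many Stage resources, the annotation can also be applied in bulk instead of editing each resource by hand. Below is a minimal sketch, assuming every stage is the first (and only) stage of its pipeline so the value stays empty; for multi-stage pipelines, set the value per the note above:

# Apply an empty previous-stage-name annotation to every Stage resource (first-stage case).
for stage in $(kubectl get stages.v2.edp.epam.com -n <edp-namespace> -o name); do
  kubectl annotate --overwrite -n <edp-namespace> "$stage" deploy.edp.epam.com/previous-stage-name=''
done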

• Execute the script below to align the CDPipeline resources to the new API (the jq command-line JSON processor is required):

                                                                                pipelines=$( kubectl get cdpipelines -n <edp-namespace> -ojson | jq -c '.items[]' )\nfor p in $pipelines; do\necho \"$p\" | \\\n    jq '. | .spec.inputDockerStreams = .spec.input_docker_streams | del(.spec.input_docker_streams) | .spec += { \"deploymentType\": \"container\" } ' | \\\n    kubectl apply -f -\ndone\n
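To spot-check the result, the updated fields can be printed for every CDPipeline; this is a read-only verification sketch:

# Print deploymentType and inputDockerStreams for each CDPipeline resource.
kubectl get cdpipelines -n <edp-namespace> \
  -o jsonpath='{range .items[*]}{.metadata.name}{": deploymentType="}{.spec.deploymentType}{", inputDockerStreams="}{.spec.inputDockerStreams}{"\n"}{end}'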
                                                                              • Update the database in the edp-db pod in the edp-namespace:

                                                                                • Log in to the pod:
                                                                                  kubectl exec -i -t -n <edp-namespace> edp-db-<pod> -c edp-db \"--\" sh -c \"(bash || ash || sh)\"\n
• Log in to the PostgreSQL DB (where \"admin\" is the user the secret was created for):
                                                                                  psql edp-db <admin>;\nSET search_path to '<edp-namespace>';\nUPDATE cd_pipeline SET deployment_type = 'container';\n
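The same update can also be run without an interactive shell; a minimal sketch, assuming the database is named edp-db and the user is admin as in the step above:

# Run the update in one shot from outside the pod.
kubectl exec -n <edp-namespace> edp-db-<pod> -c edp-db -- \
  psql -U admin -d edp-db \
  -c "SET search_path TO '<edp-namespace>'; UPDATE cd_pipeline SET deployment_type = 'container';"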
                                                                              • Add \"AUTODEPLOY\":\"true/false\",\"DEPLOYMENT_TYPE\":\"container\" to every Custom Resource in jenkinsjobs.v2.edp.epam.com:

                                                                                • Edit Kubernetes resources:
                                                                                  kubectl get jenkinsjobs.v2.edp.epam.com -n <edp-namespace>\n\nkubectl edit jenkinsjobs.v2.edp.epam.com <cd-pipeline-name> -n <edp-namespace>\n
                                                                                • Alternatively, use this script to update all the necessary jenkinsjobs Custom Resources:
edp_namespace=<edp_namespace>\nfor stages in $(kubectl get jenkinsjobs -o=name -n $edp_namespace); do kubectl get $stages -n $edp_namespace -o yaml | grep -q \"container\" && echo -e \"\\n$stages is already updated\" || kubectl get $stages -n $edp_namespace -o yaml | sed 's/\"GIT_SERVER_CR_VERSION\"/\"AUTODEPLOY\":\"false\",\"DEPLOYMENT_TYPE\":\"container\",\"GIT_SERVER_CR_VERSION\"/g' | kubectl apply -f -; done\n
                                                                                • Make sure the edited resource looks as follows:
                                                                                  job:\nconfig: '{\"AUTODEPLOY\":\"false\",\"DEPLOYMENT_TYPE\":\"container\",\"GIT_SERVER_CR_VERSION\":\"v2\",\"PIPELINE_NAME\":\"your-pipeline-name\",\"QG_STAGES\":\"{\\\"name\\\":\\\"manual\\\",\\\"step_name\\\":\\\"your-step-name\\\"}\",\"SOURCE_TYPE\":\"default\",\"STAGE_NAME\":\"your-stage-name\"}'\nname: job-provisions/job/cd/job/default\n
                                                                                • Restart the Jenkins operator pod and wait until the CD job provisioner in Jenkins creates the updated pipelines.
                                                                              • "},{"location":"operator-guide/upgrade-edp-2.8/#possible-issues","title":"Possible Issues","text":"
1. SonarQube fails during the CI pipeline run. Previous SonarQube builds used the latest version of the OpenID Connect Authentication for SonarQube plugin. Version 2.1.0 of this plugin may have connection issues, so it is necessary to downgrade it to resolve the pipeline errors. Take the following steps:

                                                                                  • Log in to the Sonar pod:
                                                                                    kubectl exec -i -t -n <edp-namespace> sonar-<pod> -c sonar \"--\" sh -c \"(bash || ash || sh)\"\n
                                                                                  • Run the command in the Sonar container:
                                                                                    rm extensions/plugins/sonar-auth-oidc-plugin*\n
                                                                                  • Install the OpenID Connect Authentication for SonarQube plugin v2.0.0:
                                                                                    curl -L  https://github.com/vaulttec/sonar-auth-oidc/releases/download/v2.0.0/sonar-auth-oidc-plugin-2.0.0.jar --output extensions/plugins/sonar-auth-oidc-plugin-2.0.0.jar\n
                                                                                  • Restart the SonarQube pod;
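After the pod is restarted, the downgrade can be double-checked by listing the plugin file inside the Sonar container; a read-only sketch:

# Confirm only the 2.0.0 plugin jar remains in the plugins directory.
kubectl exec -n <edp-namespace> sonar-<pod> -c sonar -- \
  ls extensions/plugins | grep sonar-auth-oidc
# Expected output: sonar-auth-oidc-plugin-2.0.0.jar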
2. The Helm lint checker in EDP 2.8.4 has some additional rules. Issues may occur with it during the Code Review pipeline in Jenkins for applications that were transferred from previous EDP versions to EDP 2.8.4. To fix this, add the following fields to the Chart.yaml file:

                                                                                  • Go to the Git repository -> Choose the application -> Edit the deploy-templates/Chart.yaml file.
                                                                                  • It is necessary to add the following lines to the bottom of the Chart.yaml file:
                                                                                    home: https://github.com/your-repo.git\nsources:\n- https://github.com/your-repo.git\nmaintainers:\n- name: DEV Team\n
• Add a newline character at the end of the last line; this is important.
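Before pushing the change, the chart can be linted locally to confirm the new rules pass; a minimal sketch, assuming it is run from the application repository root:

# Run the same style of lint check locally against the deploy templates.
helm lint deploy-templates/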
                                                                                "},{"location":"operator-guide/upgrade-edp-2.8/#related-articles","title":"Related Articles","text":"
                                                                                • Set Up Kiosk
                                                                                • IAM Roles for Kaniko Service Accounts
                                                                                • Manage Jenkins CI Pipeline Job Provisioner
                                                                                • Manage Jenkins CD Pipeline Job Provisioner
                                                                                "},{"location":"operator-guide/upgrade-edp-2.9/","title":"Upgrade EDP v2.8 to 2.9","text":"

                                                                                This section provides the details on the EDP upgrade to 2.9.0. Explore the actions and requirements below.

                                                                                Note

                                                                                Kiosk is optional for EDP v.2.9.0 and higher, and enabled by default. To disable it, add the following parameter to the values.yaml file: kioskEnabled: false. Please refer to the Set Up Kiosk documentation for the details.

1. When Amazon Elastic Container Registry is used to store the images, there are two options:

                                                                                  • Enable IRSA and create AWS IAM Role for Kaniko image builder. Please refer to the IAM Roles for Kaniko Service Accounts section for the details.
                                                                                  • The Amazon Elastic Container Registry Roles can be stored in an instance profile.
                                                                                2. Before updating EDP to 2.9.0, update the gerrit-is-credentials secret by adding the new clientSecret key with the value from gerrit-is-credentials.client_secret:

                                                                                  kubectl edit secret gerrit-is-credentials -n <edp-namespace>\n
                                                                                  • Make sure it looks as follows (replace with the necessary key value):
                                                                                    data:\nclient_secret: example\nclientSecret: example\n
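Instead of editing the secret by hand, the existing base64 value can be copied into the new key; a sketch, assuming the current value is stored under client_secret:

# Copy the base64-encoded client_secret value into a new clientSecret key.
CLIENT_SECRET=$(kubectl -n <edp-namespace> get secret gerrit-is-credentials \
  -o jsonpath='{.data.client_secret}')
kubectl -n <edp-namespace> patch secret gerrit-is-credentials \
  --type merge -p "{\"data\":{\"clientSecret\":\"${CLIENT_SECRET}\"}}"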
                                                                                3. Update Custom Resource Definitions. This command will apply all the necessary CRDs to the cluster:

                                                                                  kubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_gerritgroupmember_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_gerritgroup_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_gerritprojectaccess_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_gerritproject_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_jenkins_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_jenkinsagent_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_jenkinsauthorizationrolemapping_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/release/2.9/deploy-templates/crds/v2_v1alpha1_jenkinsauthorizationrole_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.9/deploy-templates/crds/v1_v1alpha1_keycloakclientscope_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.9/deploy-templates/crds/v1_v1alpha1_keycloakrealmuser_crd.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/release/2.9/deploy-templates/crds/edp_v1alpha1_nexus_crd.yaml\n
4. When Amazon Elastic Container Registry is used to store the images and Kaniko is used to build them, add the kanikoRoleArn parameter to the values before starting the update process. This parameter is indicated in AWS Roles once IRSA is enabled and the AWS IAM Role is created for Kaniko. The value should look as follows:

kanikoRoleArn: arn:aws:iam::<AWS_ACCOUNT_ID>:role/AWSIRSA<CLUSTER_NAME><EDP_NAMESPACE>Kaniko\n
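If the exact ARN is not at hand, it can be looked up with the AWS CLI once the role exists; a sketch, assuming the role follows the naming convention shown above and the CLI is already configured:

# Resolve the Kaniko role ARN by its (assumed) name.
aws iam get-role \
  --role-name "AWSIRSA<CLUSTER_NAME><EDP_NAMESPACE>Kaniko" \
  --query 'Role.Arn' --output text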
                                                                                5. To upgrade EDP to the v.2.9.0, run the following command:

                                                                                  helm upgrade --install edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.9.0\n

                                                                                  Note

                                                                                  To verify the installation, it is possible to test the deployment before applying it to the cluster with: helm upgrade --install edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=2.9.0 --dry-run

6. Remove the following Kubernetes resources left over from the previous EDP installation (this step is optional):

                                                                                  kubectl delete rolebinding edp-cd-pipeline-operator-<edp-namespace>-admin -n <edp-namespace>\n
7. After the EDP update, restart the 'sonar-operator' pod to ensure the proper SonarQube plugin versions are applied. After 'sonar-operator' is restarted, check the list of installed plugins in the corresponding SonarQube menu.
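A rolling restart can be triggered without deleting the pod manually; a sketch, assuming sonar-operator runs as a Deployment with that name:

# Restart the sonar-operator Deployment and wait for it to become ready.
kubectl -n <edp-namespace> rollout restart deployment sonar-operator
kubectl -n <edp-namespace> rollout status deployment sonar-operator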

                                                                                8. Update Jenkins pipelines and stages to the new release tag:

                                                                                  • Restart the Jenkins pod
                                                                                  • In Jenkins, go to Manage Jenkins -> Configure system -> Find the Global Pipeline Libraries menu
                                                                                  • Make sure that the Default version for edp-library-stages is build/2.10.0-RC.1
                                                                                  • Make sure that the Default version for edp-library-pipelines is build/2.10.0-RC.1
                                                                                9. Update image versions for the Jenkins agents in the ConfigMap:

                                                                                  kubectl edit configmap jenkins-slaves -n <edp-namespace>\n
                                                                                  • The versions of the images should be:
                                                                                    epamedp/edp-jenkins-codenarc-agent:1.0.1\nepamedp/edp-jenkins-dotnet-21-agent:1.0.3\nepamedp/edp-jenkins-dotnet-31-agent:1.0.3\nepamedp/edp-jenkins-go-agent:1.0.4\nepamedp/edp-jenkins-gradle-java8-agent:1.0.3\nepamedp/edp-jenkins-gradle-java11-agent:2.0.3\nepamedp/edp-jenkins-helm-agent:1.0.7\nepamedp/edp-jenkins-maven-java8-agent:1.0.3\nepamedp/edp-jenkins-maven-java11-agent:2.0.4\nepamedp/edp-jenkins-npm-agent:2.0.3\nepamedp/edp-jenkins-opa-agent:1.0.2\nepamedp/edp-jenkins-python-38-agent:2.0.4\nepamedp/edp-jenkins-terraform-agent:2.0.5\n
                                                                                  • Restart the Jenkins pod.
                                                                                10. Update the compatible plugins in Jenkins:

                                                                                  • Go to Manage Jenkins -> Manage Plugins -> Select Compatible -> Click Download now and install after restart
                                                                                "},{"location":"operator-guide/upgrade-edp-2.9/#related-articles","title":"Related Articles","text":"
                                                                                • Set Up Kiosk
                                                                                • IAM Roles for Kaniko Service Accounts
                                                                                "},{"location":"operator-guide/upgrade-edp-3.0/","title":"Upgrade EDP v2.12 to 3.0","text":"

                                                                                Important

                                                                                • Before starting the upgrade procedure, please make the necessary backups.
                                                                                • Kiosk integration is disabled by default. With EDP below v.3.0.x, define the global.kioskEnabled parameter in the values.yaml file. For details, please refer to the Set Up Kiosk page.
• The gerrit-ssh-port parameter is moved from gerrit-operator.gerrit.sshport to global.gerritSSHPort in the values.yaml file.
• In edp-gerrit-operator, the gitServer.user value is changed from jenkins to edp-ci in the values.yaml file.

                                                                                This section provides the details on upgrading EDP to 3.0. Explore the actions and requirements below.

                                                                                1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                                                                  kubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/d9a4d15244c527ef6d1d029af27574282a281b98/deploy-templates/crds/v2.edp.epam.com_gerrits.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_cdstagedeployments.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_codebasebranches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_codebaseimagestreams.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_codebases.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_gitservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_gittags.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_imagestreamtags.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_jiraissuemetadatas.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/release/2.14/deploy-templates/crds/v2.edp.epam.com_jiraservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakauthflows.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakclients.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakclientscopes.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealmcomponents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealmgroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealmidentityproviders.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealmrolebatches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealmroles.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealms.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloakrealmusers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/release/1.14/deploy-templates/crds/v1.edp.epam.com_keycloaks.yaml\n
                                                                                2. Set the required parameters. For more details, please refer to the values.yaml file.

                                                                                  View: values.yaml
                                                                                  edp-tekton:\nenabled: false\nadmin-console-operator:\nenabled: true\njenkins-operator:\nenabled: true\n
                                                                                3. Add proper Helm annotations and labels as indicated below. This step is necessary starting from the release v.3.0.x as custom resources are managed by Helm and removed from the Keycloak Controller logic.

                                                                                    kubectl label EDPComponent main-keycloak app.kubernetes.io/managed-by=Helm -n <edp-namespace>\n  kubectl annotate EDPComponent main-keycloak meta.helm.sh/release-name=<edp-release-name> -n <edp-namespace>\n  kubectl annotate EDPComponent main-keycloak meta.helm.sh/release-namespace=<edp-namespace> -n <edp-namespace>\n  kubectl label KeycloakRealm main app.kubernetes.io/managed-by=Helm -n <edp-namespace>\n  kubectl annotate KeycloakRealm main meta.helm.sh/release-name=<edp-release-name> -n <edp-namespace>\n  kubectl annotate KeycloakRealm main meta.helm.sh/release-namespace=<edp-namespace> -n <edp-namespace>\n

                                                                                4. To upgrade EDP to 3.0, run the following command:

                                                                                  helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=3.0.x\n

                                                                                  Note

                                                                                  To verify the installation, it is possible to test the deployment before applying it to the cluster with the following command: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=3.0.x --dry-run

                                                                                5. Update image versions for the Jenkins agents in the ConfigMap:

                                                                                    kubectl edit configmap jenkins-slaves -n <edp-namespace>\n
                                                                                  • The versions of the images must be the following:
                                                                                    epamedp/edp-jenkins-codenarc-agent:3.0.10\nepamedp/edp-jenkins-dotnet-31-agent:3.0.9\nepamedp/edp-jenkins-go-agent:3.0.17\nepamedp/edp-jenkins-gradle-java11-agent:3.0.7\nepamedp/edp-jenkins-gradle-java8-agent:3.0.10\nepamedp/edp-jenkins-helm-agent:3.0.11\nepamedp/edp-jenkins-kaniko-docker-agent:1.0.9\nepamedp/edp-jenkins-maven-java11-agent:3.0.7\nepamedp/edp-jenkins-maven-java8-agent:3.0.10\nepamedp/edp-jenkins-npm-agent:3.0.9\nepamedp/edp-jenkins-opa-agent:3.0.7\nepamedp/edp-jenkins-python-38-agent:3.0.8\nepamedp/edp-jenkins-sast-agent:0.1.5\nepamedp/edp-jenkins-terraform-agent:3.0.9\n
                                                                                  • Remove the edp-jenkins-dotnet-21-agent agent manifest.
                                                                                  • Restart the Jenkins pod.
                                                                                6. Attach the id_rsa.pub SSH public key from the gerrit-ciuser-sshkey secret to the edp-ci Gerrit user in the gerrit pod:

ssh -p <gerrit_ssh_port> <host> gerrit set-account edp-ci --add-ssh-key ~/id_rsa.pub\n

                                                                                  Notes

                                                                                  • For this operation, use the gerrit-admin SSH key from secrets.
                                                                                  • <host> is admin@localhost or any other user with permissions.
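If the public key is not available locally, it can be extracted from the secret before running the command above; a sketch, assuming the key is stored under id_rsa.pub in gerrit-ciuser-sshkey:

# Export the CI user's public key from the secret to a local file.
kubectl -n <edp-namespace> get secret gerrit-ciuser-sshkey \
  -o jsonpath='{.data.id_rsa\.pub}' | base64 -d > id_rsa.pub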
                                                                                7. Change the username from jenkins to edp-ci in the gerrit-ciuser-sshkey secret:

                                                                                  kubectl -n <edp-namespace> patch secret gerrit-ciuser-sshkey\\\n --patch=\"{\\\"data\\\": { \\\"username\\\": \\\"$(echo -n edp-ci |base64 -w0)\\\" }}\" -oyaml\n
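To confirm the patch was applied, the username can be decoded back from the secret:

# Verify the new CI username stored in the secret.
kubectl -n <edp-namespace> get secret gerrit-ciuser-sshkey \
  -o jsonpath='{.data.username}' | base64 -d
# Expected output: edp-ci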

                                                                                Warning

In EDP v.3.0.x, Admin Console is deprecated, and the EDP interface is available only via EDP Portal.

                                                                                "},{"location":"operator-guide/upgrade-edp-3.0/#related-articles","title":"Related Articles","text":"
                                                                                • Migrate CI Pipelines From Jenkins to Tekton
                                                                                "},{"location":"operator-guide/upgrade-edp-3.1/","title":"Upgrade EDP v3.0 to 3.1","text":"

                                                                                Important

                                                                                We suggest making a backup of the EDP environment before starting the upgrade procedure.

                                                                                This section provides the details on the EDP upgrade to v3.1. Explore the actions and requirements below.

                                                                                1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                                                                  kubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.13.2/deploy-templates/crds/v2.edp.epam.com_jenkins.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.13.4/deploy-templates/crds/v2.edp.epam.com_gerrits.yaml\n
                                                                                2. To upgrade EDP to the v3.1, run the following command:

                                                                                  helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=3.1.0\n

                                                                                  Note

                                                                                  To verify the installation, it is possible to test the deployment before applying it to the cluster with the following command: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=3.1.0 --dry-run

                                                                                "},{"location":"operator-guide/upgrade-edp-3.2/","title":"Upgrade EDP v3.1 to 3.2","text":"

                                                                                Important

                                                                                We suggest making a backup of the EDP environment before starting the upgrade procedure.

                                                                                This section provides the details on the EDP upgrade to v3.2.2. Explore the actions and requirements below.

                                                                                1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                                                                  kubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_cdstagedeployments.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_codebasebranches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_codebaseimagestreams.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_codebases.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_gitservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_gittags.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_imagestreamtags.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_jiraissuemetadatas.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_jiraservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_cdstagejenkinsdeployments.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkins.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsagents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsauthorizationrolemappings.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsauthorizationroles.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsfolders.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsjobbuildruns.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsjobs.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsscripts.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinsserviceaccounts.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_jenkinssharedlibraries.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-component-operator/v0.13.0/deploy-templates/crds/v1.edp.epam.com_edpcomponents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/v2.14.1/deploy-templates/crds/v2.edp.epam.com_cdpipelines.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/v2.14.1/deploy-templates/crds/v2.edp.epam.com_stages.yaml\nkubectl apply -f 
https://raw.githubusercontent.com/epam/edp-nexus-operator/v2.14.1/deploy-templates/crds/v2.edp.epam.com_nexuses.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-nexus-operator/v2.14.1/deploy-templates/crds/v2.edp.epam.com_nexususers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_sonargroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_sonarpermissiontemplates.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-sonar-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_sonars.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerritgroupmembers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerritgroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerritmergerequests.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerritprojectaccesses.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerritprojects.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerritreplicationconfigs.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.14.0/deploy-templates/crds/v2.edp.epam.com_gerrits.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/v2.13.0/deploy-templates/crds/v2.edp.epam.com_perfdatasourcegitlabs.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/v2.13.0/deploy-templates/crds/v2.edp.epam.com_perfdatasourcejenkinses.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/v2.13.0/deploy-templates/crds/v2.edp.epam.com_perfdatasourcesonars.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-perf-operator/v2.13.0/deploy-templates/crds/v2.edp.epam.com_perfservers.yaml\n
                                                                                2. Generate a cookie-secret for proxy with the following command:

                                                                                  nexus_proxy_cookie_secret=$(openssl rand -base64 32 | head -c 32)\n
                                                                                  Create nexus-proxy-cookie-secret in the namespace:
                                                                                  kubectl -n <edp-project> create secret generic nexus-proxy-cookie-secret \\\n--from-literal=cookie-secret=${nexus_proxy_cookie_secret}\n
3. EDP 3.2.2 features OIDC configuration for EDP Portal. If this feature is required, create the keycloak-client-headlamp-secret as described in this article:

                                                                                  kubectl -n <edp-project> create secret generic keycloak-client-edp-portal-secret \\\n--from-literal=clientSecret=<keycloak_client_secret_key>\n
                                                                                4. Delete the following resources:

                                                                                  kubectl -n <edp-project> delete KeycloakClient nexus\nkubectl -n <edp-project> delete EDPComponent nexus\nkubectl -n <edp-project> delete Ingress nexus\nkubectl -n <edp-project> delete deployment edp-tekton-dashboard\n
5. EDP release 3.2.2 uses the cluster default storageClass, so check the storageClass parameters used previously. If required, align the storageClassName values in the EDP values.yaml file with the ones already used by the EDP PVCs. For example:

                                                                                  edp-tekton:\nbuildTool:\ngo:\ncache:\npersistentVolume:\n# -- Specifies storageClass type. If not specified, a default storageClass for go-cache volume is used\nstorageClass: ebs-sc\n\njenkins-operator:\nenabled: true\njenkins:\nstorage:\n# -- Storageclass for Jenkins data volume\nclass: gp2\n\nsonar-operator:\nsonar:\nstorage:\ndata:\n# --  Storageclass for Sonar data volume\nclass: gp2\ndatabase:\n# --  Storageclass for database data volume\nclass: gp2\n\ngerrit-operator:\ngerrit:\nstorage:\n# --  Storageclass for Gerrit data volume\nclass: gp2\n\nnexus-operator:\nnexus:\nstorage:\n# --  Storageclass for Nexus data volume\nclass: gp2\n
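The storage classes currently used by the existing EDP PVCs can be listed before editing values.yaml; a read-only sketch:

# Show each PVC in the EDP namespace together with its storage class.
kubectl get pvc -n <edp-namespace> \
  -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName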
                                                                                6. To upgrade EDP to v3.2.2, run the following command:

                                                                                  helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=3.2.2\n

                                                                                  Note

                                                                                  To verify the installation, it is possible to test the deployment before applying it to the cluster with the following command: helm upgrade edp epamedp/edp-install -n <edp-namespace> --values values.yaml --version=3.2.2 --dry-run

                                                                                7. "},{"location":"operator-guide/upgrade-edp-3.3/","title":"Upgrade EDP v3.2 to 3.3","text":"

                                                                                  Important

                                                                                  We suggest making a backup of the EDP environment before starting the upgrade procedure.

                                                                                  Note

                                                                                  Cache volumes for go and npm are currently disabled in the EDP 3.3 release.

                                                                                  This section provides the details on the EDP upgrade to v3.3.0. Explore the actions and requirements below.

                                                                                  1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                                                                    kubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.16.0/deploy-templates/crds/v2.edp.epam.com_codebases.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-jenkins-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_jenkins.yaml\n
                                                                                  2. If you use Gerrit VCS, delete the corresponding resource due to changes in annotations:

                                                                                    kubectl -n edp delete EDPComponent gerrit\n
                                                                                    The deployment will create a new EDPComponent called gerrit instead.
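                                                                                    After the upgrade, you can optionally confirm that the resource was recreated; the check below is an illustrative sketch (it assumes the edp namespace and that the EDPComponent resource name resolves for kubectl):
                                                                                    kubectl -n edp get edpcomponent gerrit\n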

                                                                                  3. To upgrade EDP to v3.3.0, run the following command:

                                                                                    helm upgrade edp epamedp/edp-install -n edp --values values.yaml --version=3.3.0\n

                                                                                    Note

                                                                                    To verify the installation, it is possible to test the deployment before applying it to the cluster with the --dry-run flag: helm upgrade edp epamedp/edp-install -n edp --values values.yaml --version=3.3.0 --dry-run

                                                                                  4. In EDP v3.3.0, a new feature was introduced allowing manual pipeline re-triggering by sending a comment with /recheck. To enable the re-trigger feature for applications that were added before the upgrade, please proceed with the following:

                                                                                    4.1 For Gerrit VCS, add the following event to the webhooks.config configuration file in the All-Projects repository:

                                                                                    [remote \"commentadded\"]\n  url = http://el-gerrit-listener:8080\n  event = comment-added\n

                                                                                    4.2 For GitHub VCS, check the Issue comments permission for each webhook in every application added before the EDP upgrade to 3.3.0.

                                                                                    4.3 For GitLab VCS, check the Comments permission for each webhook in every application added before the EDP upgrade to 3.3.0.
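                                                                                    For reference, webhook event settings can also be inspected through the Git providers' REST APIs. The calls below are illustrative sketches only (they assume jq is installed and that the GitLab host, <org>, <repo>, <project-id>, and the tokens are replaced with real values):
                                                                                    # GitHub: list the events configured for each repository webhook\ncurl -s -H \"Authorization: token <github-token>\" https://api.github.com/repos/<org>/<repo>/hooks | jq '.[].events'\n# GitLab: check whether comment (note) events are enabled for each project webhook\ncurl -s -H \"PRIVATE-TOKEN: <gitlab-token>\" https://gitlab.example.com/api/v4/projects/<project-id>/hooks | jq '.[].note_events'\n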

                                                                                  "},{"location":"operator-guide/upgrade-edp-3.4/","title":"Upgrade EDP v3.3 to 3.4","text":"

                                                                                  Important

                                                                                  We suggest making a backup of the EDP environment before starting the upgrade procedure.

                                                                                  Note

                                                                                  Note that the following components are deprecated: perf-operator, edp-admin-console, edp-admin-console-operator, and edp-jenkins-operator. They should be additionally migrated in order to avoid their deletion. For migration details, please refer to the Migrate CI Pipelines From Jenkins to Tekton instruction.

                                                                                  This section provides the details on the EDP upgrade to v3.4.1. Explore the actions and requirements below.

                                                                                  1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                                                                    kubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_cdpipelines.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-cd-pipeline-operator/v2.15.0/deploy-templates/crds/v2.edp.epam.com_stages.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_clusterkeycloakrealms.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_clusterkeycloaks.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakauthflows.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakclients.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakclientscopes.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealmcomponents.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealmgroups.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealmidentityproviders.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealmrolebatches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealmroles.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealms.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloakrealmusers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-keycloak-operator/v1.17.0/deploy-templates/crds/v1.edp.epam.com_keycloaks.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.17.0/deploy-templates/crds/v2.edp.epam.com_templates.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.17.0/deploy-templates/crds/v2.edp.epam.com_codebasebranches.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.17.0/deploy-templates/crds/v2.edp.epam.com_codebaseimagestreams.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.17.0/deploy-templates/crds/v2.edp.epam.com_codebases.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.17.0/deploy-templates/crds/v2.edp.epam.com_gitservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.17.0/deploy-templates/crds/v2.edp.epam.com_jiraservers.yaml\nkubectl apply -f https://raw.githubusercontent.com/epam/edp-gerrit-operator/v2.16.0/deploy-templates/crds/v2.edp.epam.com_gerrits.yaml\n
                                                                                  2. Remove deprecated components:

                                                                                    View: values.yaml

                                                                                    perf-operator:\nenabled: false\nadmin-console-operator:\nenabled: false\njenkins-operator:\nenabled: false\n

                                                                                  3. Since the values.yaml file structure has been modified, move the dockerRegistry subsection to the global section:

                                                                                    The dockerRegistry value has been moved to the global section:

                                                                                     global:\ndockerRegistry:\n# -- Define Image Registry that will be used in Pipelines. Can be ecr (default), harbor\ntype: \"ecr\"\n# -- Docker Registry endpoint\nurl: \"<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com\"\n
                                                                                4. (Optional) To integrate EDP with Jira, rename the default secret name from epam-jira-user to jira-user. If Jira is already integrated, it will continue working.

                                                                                    codebase-operator:\njira:\ncredentialName: \"jira-user\"\n
                                                                                5. (Optional) To switch to the Harbor registry, change the format of the kaniko-docker-config external secret from the v3.3.0 format to the v3.4.1 format:

                                                                                    View: old format
                                                                                     \"kaniko-docker-config\": {\"secret-string\"} //base64 format\n
                                                                                    View: new format
                                                                                    \"kaniko-docker-config\": {\n\"auths\" : {\n\"registry.com\" :\n{\"username\":\"<registry-username>\",\"password\":\"<registry-password>\",\"auth\":\"secret-string\"}\n}\n}\n
                                                                                6. To upgrade EDP to v3.4.1, run the following command:

                                                                                    helm upgrade edp epamedp/edp-install -n edp --values values.yaml --version=3.4.1\n

                                                                                    Note

                                                                                    To verify the installation, it is possible to test the deployment before applying it to the cluster with the --dry-run flag: helm upgrade edp epamedp/edp-install -n edp --values values.yaml --version=3.4.1 --dry-run

                                                                                  7. "},{"location":"operator-guide/upgrade-edp-3.5/","title":"Upgrade EDP v3.4 to 3.5","text":"

                                                                                    Important

                                                                                    We suggest making a backup of the EDP environment before starting the upgrade procedure.

                                                                                    This section provides detailed instructions for upgrading EPAM Delivery Platform to version 3.5.3. Follow the steps and requirements outlined below:

                                                                                    1. Update Custom Resource Definitions (CRDs). Run the following command to apply all necessary CRDs to the cluster:

                                                                                      kubectl apply -f https://raw.githubusercontent.com/epam/edp-codebase-operator/v2.19.0/deploy-templates/crds/v2.edp.epam.com_gitservers.yaml\n

                                                                                      Danger

                                                                                      Codebase-operator v2.19.0 is not compatible with the previous versions. Please become familiar with the breaking change in Git Server Custom Resource Definition.

                                                                                    2. Familiarize yourself with the updated file structure of the values.yaml file and adjust your values.yaml file accordingly:

                                                                                      1. By default, the deployment of subcomponents such as edp-sonar-operator, edp-nexus-operator, edp-gerrit-operator, and keycloak-operator has been disabled. Set them back to true if they are needed, or manually deploy external tools such as SonarQube, Nexus, and Gerrit and integrate them with the EPAM Delivery Platform.

                                                                                      2. The default Git provider has been changed from Gerrit to GitHub:

                                                                                        Old format:

                                                                                        global:\ngitProvider: gerrit\ngerritSSHPort: \"22\"\n

                                                                                        New format:

                                                                                        global:\ngitProvider: github\n#gerritSSHPort: \"22\"\n
                                                                                      3. The sonarUrl and nexusUrl parameters have been deprecated. All the URLs from external components are stored in integration secrets:

                                                                                        global:\n# -- Optional parameter. Link to use custom sonarqube. Format: http://<service-name>.<sonarqube-namespace>:9000 or http://<ip-address>:9000\nsonarUrl: \"\"\n# -- Optional parameter. Link to use custom nexus. Format: http://<service-name>.<nexus-namespace>:8081 or http://<ip-address>:<port>\nnexusUrl: \"\"\n
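                                                                                        Because the URLs now live in the integration secrets, they can be checked directly in the cluster. The commands below are an illustrative sketch (they assume the edp namespace and the ci-sonarqube and ci-nexus secrets described later in this section):
                                                                                        kubectl -n edp get secret ci-sonarqube -o jsonpath='{.data.url}' | base64 -d\nkubectl -n edp get secret ci-nexus -o jsonpath='{.data.url}' | base64 -d\n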
                                                                                      4. Keycloak integration has been moved from the global section to the sso section. Update the parameters accordingly:

                                                                                        Old format:

                                                                                        global:\n# -- Keycloak URL\nkeycloakUrl: https://keycloak.example.com\n# -- Administrators of your tenant\nadmins:\n- \"stub_user_one@example.com\"\n# -- Developers of your tenant\ndevelopers:\n- \"stub_user_one@example.com\"\n- \"stub_user_two@example.com\"\n

                                                                                        New format:

                                                                                        sso:\nenabled: true\n# -- Keycloak URL\nkeycloakUrl: https://keycloak.example.com\n# -- Administrators of your tenant\nadmins:\n- \"stub_user_one@example.com\"\n# -- Developers of your tenant\ndevelopers:\n- \"stub_user_one@example.com\"\n- \"stub_user_two@example.com\"\n
                                                                                      5. (Optional) The default secret name for Jira integration has been changed from jira-user to ci-jira. Please adjust the secret name in the parameters accordingly:

                                                                                        codebase-operator:\njira:\ncredentialName: \"ci-jira\"\n
                                                                                    3. The secret naming and format have been refactored. Below are patterns of the changes for various components:

                                                                                      The old and new secret formats below are shown for SonarQube, Nexus, Dependency-Track, DefectDojo, Jira, GitLab, and GitHub, in that order:

                                                                                      Old format:

                                                                                      \"sonar-ciuser-token\": {\n\"username\": \"xxxxx\",\n\"secret\": \"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\"\n}\n
                                                                                      New format:
                                                                                      \"ci-sonarqube\": {\n\"token\": \"xxxxxxxxxxxxxxxxxxxxxxx\",\n\"url\":\"https://sonar.example.com\"\n}\n

                                                                                      Old format:

                                                                                      \"nexus-ci-user\": {\n\"username\": \"xxxxx\",\n\"password\": \"xxxxxxxxxxxxxxxxxx\"\n}\n

                                                                                      New format:

                                                                                      \"ci-nexus\": {\n\"username\": \"xxxxx\",\n\"password\": \"xxxxx\",\n\"url\": \"http://nexus.example.com\"\n}\n

                                                                                      Old format:

                                                                                      \"ci-dependency-track\": {\n\"token\": \"xxxxxxxxxxxxxxxxxx\"\n}\n

                                                                                      New format:

                                                                                      \"ci-dependency-track\": {\n\"token\": \"xxxxxxxxxxxxxxxxxx\",\n\"url\": \"http://dependency-track.example.com\"}\n

                                                                                      Old format:

                                                                                      \"defectdojo-ciuser-token\": {\n\"token\": \"xxxxxxxxxxxxxxxxxx\"\n\"url\": \"http://defectdojo.example.com\"\n}\n

                                                                                      New format:

                                                                                      \"ci-defectdojo\": {\n\"token\": \"xxxxxxxxxxxxxxxxxx\",\n\"url\": \"http://defectdojo.example.com\"\n}\n

                                                                                      Old format:

                                                                                      \"jira-user\": {\n\"username\": \"xxxxx\",\n\"password\": \"xxxxx\"\n}\n

                                                                                      New format:

                                                                                      \"ci-jira\": {\n\"username\": \"xxxxx\",\n\"password\": \"xxxxx\"\n}\n

                                                                                      Old format:

                                                                                      \"gitlab\": {\n\"id_rsa\": \"xxxxxxxxxxxxxx\",\n\"token\": \"xxxxxxxxxxxxxx\",\n\"secretString\": \"xxxxxxxxxxxxxx\"\n}\n

                                                                                      New format:

                                                                                      \"ci-gitlab\": {\n\"id_rsa\": \"xxxxxxxxxxxxxx\",\n\"token\": \"xxxxxxxxxxxxxx\",\n\"secretString\": \"xxxxxxxxxxxxxx\"\n}\n

                                                                                      Old format:

                                                                                      \"github\": {\n\"id_rsa\": \"xxxxxxxxxxxxxx\",\n\"token\": \"xxxxxxxxxxxxxx\",\n\"secretString\": \"xxxxxxxxxxxxxx\"\n}\n

                                                                                      New format:

                                                                                      \"ci-github\": {\n\"id_rsa\": \"xxxxxxxxxxxxxx\",\n\"token\": \"xxxxxxxxxxxxxx\",\n\"secretString\": \"xxxxxxxxxxxxxx\"\n}\n

                                                                                      The tables below illustrate the difference between the old and new format:

                                                                                      Old format

                                                                                      Secret Name             | Username | Password | Token | Secret | URL
                                                                                      jira-user               | *        | *        |       |        |
                                                                                      nexus-ci.user           | *        | *        |       |        |
                                                                                      sonar-ciuser-token      | *        |          |       | *      |
                                                                                      defectdojo-ciuser-token |          |          | *     |        | *
                                                                                      ci-dependency-track     |          |          | *     |        |

                                                                                      New format

                                                                                      Secret Name         | Username | Password | Token | URL
                                                                                      ci-jira             | *        | *        |       |
                                                                                      ci-nexus            | *        | *        |       | *
                                                                                      ci-sonarqube        |          |          | *     | *
                                                                                      ci-defectdojo       |          |          | *     | *
                                                                                      ci-dependency-track |          |          | *     | *
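                                                                                      For illustration only, a secret in the new format could be created manually with a command like the sketch below (it assumes the edp namespace; replace the placeholder values with real ones):
                                                                                      kubectl -n edp create secret generic ci-sonarqube \\\n--from-literal=token=<sonarqube-token> \\\n--from-literal=url=https://sonar.example.com\n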
                                                                                    4. To upgrade EDP to v3.5.3, run the following command:

                                                                                      helm upgrade edp epamedp/edp-install -n edp --values values.yaml --version=3.5.3\n

                                                                                      Note

                                                                                      To verify the installation, it is possible to test the deployment before applying it to the cluster with the --dry-run flag: helm upgrade edp epamedp/edp-install -n edp --values values.yaml --version=3.5.3 --dry-run

                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/","title":"Upgrade Keycloak v17.0 to 19.0","text":"

                                                                                    Starting from Keycloak v.18.x.x, the Keycloak server has been moved from the Wildfly (JBoss) Application Server to the Quarkus framework and is called Keycloak.X.

                                                                                    There are two ways to upgrade Keycloak v.17.0.x-legacy to v.19.0.x on Kubernetes. Please perform the steps described in the Prerequisites section of this tutorial, and then select a suitable upgrade strategy for your environment:

                                                                                    • Upgrade Postgres database to a minor release v.11.17
                                                                                    • Migrate Postgres database from Postgres v.11.x to v.14.5
                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#prerequisites","title":"Prerequisites","text":"

                                                                                    Before upgrading Keycloak, please perform the steps below:

                                                                                    1. Create a backup/snapshot of the Keycloak database volume. Locate the AWS volumeID and then create its snapshot on AWS:

                                                                                      • Find the PVC name attached to the Postgres pod. It can be similar to data-keycloak-postgresql-0 if the Postgres StatefulSet name is keycloak-postgresql:

                                                                                        kubectl get pods keycloak-postgresql-0 -n security -o jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}{\"\\n\"}'\n
                                                                                      • Locate the PV volumeName in the data-keycloak-postgresql-0 Persistent Volume Claim:

                                                                                        kubectl get pvc data-keycloak-postgresql-0 -n security -o jsonpath='{.spec.volumeName}{\"\\n\"}'\n
                                                                                      • Get volumeID in the Persistent Volume:

                                                                                        kubectl get pv ${pv_name} -n security -o jsonpath='{.spec.awsElasticBlockStore.volumeID}{\"\\n\"}'\n
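                                                                                      The lookups above can also be chained together, and the snapshot can then be created with the AWS CLI. The block below is an illustrative sketch (it assumes the security namespace, the keycloak-postgresql StatefulSet name, and a configured AWS CLI):
                                                                                      pvc_name=$(kubectl get pods keycloak-postgresql-0 -n security -o jsonpath='{.spec.volumes[*].persistentVolumeClaim.claimName}')\npv_name=$(kubectl get pvc ${pvc_name} -n security -o jsonpath='{.spec.volumeName}')\n# The volumeID looks like aws://<zone>/vol-xxxxxxxx; keep only the vol-xxxxxxxx part\nvolume_id=$(kubectl get pv ${pv_name} -n security -o jsonpath='{.spec.awsElasticBlockStore.volumeID}' | awk -F/ '{print $NF}')\naws ec2 create-snapshot --volume-id ${volume_id} --description \"keycloak-postgresql backup before the Keycloak.X upgrade\"\n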
                                                                                    2. Add two additional keys, password and postgres-password, to the keycloak-postgresql secret in the Keycloak namespace.

                                                                                      Note

                                                                                      • The password key must have the same value as the postgresql-password key.
                                                                                      • The postgres-password key must have the same value as the postgresql-postgres-password key.

                                                                                      The latest chart for Keycloak.X does not have an option to override the Postgres password and admin password keys in the secret and uses the Postgres defaults; therefore, a new secret scheme must be implemented:

                                                                                      kubectl -n security edit secret keycloak-postgresql\n
                                                                                      data:\npostgresql-password: XXXXXX\npostgresql-postgres-password: YYYYYY\npassword: XXXXXX\npostgres-password: YYYYYY\n
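                                                                                      Alternatively, the two new keys can be added non-interactively by copying the existing base64 values. The one-liner below is an illustrative sketch (it assumes the security namespace and that jq is installed):
                                                                                      kubectl -n security get secret keycloak-postgresql -o json | jq '.data.password = .data[\"postgresql-password\"] | .data[\"postgres-password\"] = .data[\"postgresql-postgres-password\"]' | kubectl apply -f -\n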
                                                                                    3. Save Keycloak StatefulSet names, for example, keycloak and keycloak-postgresql. These names will be used in the new Helm deployments:

                                                                                      $ kubectl get statefulset -n security\nNAME                  READY   AGE\nkeycloak              1/1     18h\nkeycloak-postgresql   1/1     18h\n
                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#upgrade-postgres-database-to-a-minor-release-v1117","title":"Upgrade Postgres Database to a Minor Release v.11.17","text":"

                                                                                    To upgrade Keycloak by upgrading Postgres Database to a minor release v.11.17, perform the steps described in the Prerequisites section of this tutorial, and then perform the following steps:

                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#delete-keycloak-resources","title":"Delete Keycloak Resources","text":"
                                                                                    1. Delete the Keycloak and Postgres StatefulSets:

                                                                                      kubectl delete statefulset keycloak keycloak-postgresql -n security\n
                                                                                    2. Delete the Keycloak Ingress object to prevent hostname duplication issues:

                                                                                      kubectl delete ingress keycloak -n security\n
                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#upgrade-keycloak","title":"Upgrade Keycloak","text":"
                                                                                    1. Make sure the Keycloak chart repository is added:

                                                                                      helm repo add codecentric https://codecentric.github.io/helm-charts\nhelm repo update\n
                                                                                    2. Create values for Keycloak:

                                                                                      Note

                                                                                      Since the Keycloak.X release, Keycloak and Postgres database charts are separated. Upgrade Keycloak, and then install the Postgres database.

                                                                                      Note

                                                                                      • nameOverride: \"keycloak\" sets the name of the Keycloak pod. It must be the same Keycloak name as in the previous StatefulSet.
                                                                                      • Change Ingress host name to the Keycloak host name.
                                                                                      • hostname: keycloak-postgresql is the hostname of the pod with the Postgres database that is the same as Postgres StatefulSet name, for example, keycloak-postgresql.
                                                                                      • \"/opt/keycloak/bin/kc.sh start --auto-build\" was used in the legacy Keycloak version. However, it is no longer required in the new Keycloak version since it is deprecated and used by default.
                                                                                      • Optionally, use the following command for applying the old Keycloak theme:

                                                                                        bin/kc.sh start --features-disabled=admin2\n

                                                                                      View: keycloak-values.yaml
                                                                                      nameOverride: \"keycloak\"\n\nreplicas: 1\n\n# Deploy the latest verion\nimage:\ntag: \"19.0.1\"\n\n# start: create OpenShift realm which is required by EDP\nextraInitContainers: |\n- name: realm-provider\nimage: busybox\nimagePullPolicy: IfNotPresent\ncommand:\n- sh\nargs:\n- -c\n- |\necho '{\"realm\": \"openshift\",\"enabled\": true}' > /opt/keycloak/data/import/openshift.json\nvolumeMounts:\n- name: realm\nmountPath: /opt/keycloak/data/import\n\nextraVolumeMounts: |\n- name: realm\nmountPath: /opt/keycloak/data/import\n\nextraVolumes: |\n- name: realm\nemptyDir: {}\n\ncommand:\n- \"/opt/keycloak/bin/kc.sh\"\n- \"--verbose\"\n- \"start\"\n- \"--http-enabled=true\"\n- \"--http-port=8080\"\n- \"--hostname-strict=false\"\n- \"--hostname-strict-https=false\"\n- \"--spi-events-listener-jboss-logging-success-level=info\"\n- \"--spi-events-listener-jboss-logging-error-level=warn\"\n- \"--import-realm\"\n\nextraEnv: |\n- name: KC_PROXY\nvalue: \"passthrough\"\n- name: KEYCLOAK_ADMIN\nvalueFrom:\nsecretKeyRef:\nname: keycloak-admin-creds\nkey: username\n- name: KEYCLOAK_ADMIN_PASSWORD\nvalueFrom:\nsecretKeyRef:\nname: keycloak-admin-creds\nkey: password\n- name: JAVA_OPTS_APPEND\nvalue: >-\n-XX:+UseContainerSupport\n-XX:MaxRAMPercentage=50.0\n-Djava.awt.headless=true\n-Djgroups.dns.query={{ include \"keycloak.fullname\" . }}-headless\n\n# This block should be uncommented if you install Keycloak on Kubernetes\ningress:\nenabled: true\nannotations:\nkubernetes.io/ingress.class: nginx\ningress.kubernetes.io/affinity: cookie\nrules:\n- host: keycloak.<ROOT_DOMAIN>\npaths:\n- path: '{{ tpl .Values.http.relativePath $ | trimSuffix \"/\" }}/'\npathType: Prefix\n\n# This block should be uncommented if you set Keycloak to OpenShift and change the host field\n# route:\n#   enabled: false\n#   # Path for the Route\n#   path: '/'\n#   # Host name for the Route\n#   host: \"keycloak.<ROOT_DOMAIN>\"\n#   # TLS configuration\n#   tls:\n#     enabled: true\n\nresources:\nlimits:\nmemory: \"2048Mi\"\nrequests:\ncpu: \"50m\"\nmemory: \"512Mi\"\n\n# Check database readiness at startup\ndbchecker:\nenabled: true\n\ndatabase:\nvendor: postgres\nexistingSecret: keycloak-postgresql\nhostname: keycloak-postgresql\nport: 5432\nusername: admin\ndatabase: keycloak\n
                                                                                    3. Upgrade the Keycloak Helm chart:

                                                                                      Note

                                                                                      • The Helm chart is replaced with the new Keycloak.X instance.
                                                                                      • Change the namespace and the values file name if required.
                                                                                      helm upgrade keycloak codecentric/keycloakx --version 1.6.0 --values keycloak-values.yaml -n security\n

                                                                                      Note

                                                                                      If there are error messages when upgrading via Helm, make sure that StatefulSets are removed. If they are removed and the error still persists, try to add the --force flag to the Helm command:

                                                                                      helm upgrade keycloak codecentric/keycloakx --version 1.6.0 --values keycloak-values.yaml -n security --force\n
                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#install-postgres","title":"Install Postgres","text":"
                                                                                    1. Add Bitnami chart repository and update Helm repos:

                                                                                      helm repo add bitnami https://charts.bitnami.com/bitnami\nhelm repo update\n
                                                                                    2. Create values for Postgres:

                                                                                      Note

                                                                                      • Postgres v.11 and Postgres v.14.5 are not compatible.
                                                                                      • Postgres image will be upgraded to a minor release v.11.17.
                                                                                      • fullnameOverride: \"keycloak-postgresql\" sets the name of the Postgres StatefulSet. It must be the same as in the previous StatefulSet.
                                                                                      View: postgres-values.yaml
                                                                                      fullnameOverride: \"keycloak-postgresql\"\n\n# PostgreSQL read only replica parameters\nreadReplicas:\n# Number of PostgreSQL read only replicas\nreplicaCount: 1\n\nglobal:\npostgresql:\nauth:\nusername: admin\nexistingSecret: keycloak-postgresql\nsecretKeys:\nadminPasswordKey: postgres-password\nuserPasswordKey: password\ndatabase: keycloak\n\nimage:\nregistry: docker.io\nrepository: bitnami/postgresql\ntag: 11.17.0-debian-11-r3\n\nauth:\nexistingSecret: keycloak-postgresql\nsecretKeys:\nadminPasswordKey: postgres-password\nuserPasswordKey: password\n\nprimary:\npersistence:\nenabled: true\nsize: 3Gi\n# If the StorageClass with reclaimPolicy: Retain is used, install an additional StorageClass before installing PostgreSQL\n# (the code is given below).\n# If the default StorageClass will be used - change \"gp2-retain\" to \"gp2\"\nstorageClass: \"gp2-retain\"\n
                                                                                    3. Install the Postgres database chart:

                                                                                      Note

                                                                                      Change the namespace and the values file name if required.

                                                                                      helm install postgresql bitnami/postgresql \\\n--version 11.7.6 \\\n--values postgres-values.yaml \\\n--namespace security\n
                                                                                    4. Log in to Keycloak and check that everything works as expected.

                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#clean-and-analyze-database","title":"Clean and Analyze Database","text":"

                                                                                    Optionally, run the vacuumdb application on the database to recover space occupied by \"dead tuples\" in the tables, analyze the contents of database tables, and collect statistics for the PostgreSQL query engine to improve performance:

                                                                                    PGPASSWORD=\"${postgresql_postgres-password}\" vacuumdb --analyze --verbose -d keycloak -U postgres\n
                                                                                    For all databases, run the following command:

                                                                                    PGPASSWORD=\"${postgresql_postgres-password}\" vacuumdb --analyze --verbose --all -U postgres\n
                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#migrate-postgres-database-from-postgres-v11x-to-v145","title":"Migrate Postgres Database From Postgres v.11.x to v.14.5","text":"

                                                                                    Info

                                                                                    There is a Postgres database migration script at the end of this tutorial. Please read the section below before using the script.

                                                                                    To upgrade Keycloak by migrating Postgres database from Postgres v.11.x to v.14.5, perform the steps described in the Prerequisites section of this tutorial, and then perform the following steps:

                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#export-postgres-databases","title":"Export Postgres Databases","text":"
                                                                                    1. Log in to the current Keycloak Postgres pod and create a logical backup of all roles and databases using the pg_dumpall application. If there is no access to the Postgres Superuser, backup the Keycloak database with the pg_dump application:

                                                                                      Note

                                                                                      • The secret key postgresql-postgres-password is for the postgres Superuser, and postgresql-password is for the admin user. The admin user is indicated by default in the Postgres Helm chart. The admin user may not have enough permissions to dump all Postgres databases and roles, so the preferred option for exporting all objects is using the pg_dumpall tool with the postgres Superuser.
                                                                                      • If the PGPASSWORD variable is not specified before using the pg_dumpall tool, you will be prompted to enter a password for each database during the export.
                                                                                      • If the -l keycloak parameter is specified, pg_dumpall will connect to the keycloak database for dumping global objects and discovering what other databases should be dumped. By default, pg_dumpall will try to connect to postgres or template1 databases. This parameter is optional.
                                                                                      • The pg_dumpall --clean option adds SQL commands to the dumped file for dropping databases before recreating them during import, as well as DROP commands for roles and tablespaces (pg_dump also has this option). If the --clean parameter is specified, connect to the postgres database initially during import via psql. The psql script will attempt to drop other databases immediately, and that will fail for the database you are connected to. This flag is optional, and it is not included into this tutorial.
                                                                                      PGPASSWORD=\"${postgresql_postgres-password}\" pg_dumpall -h localhost -p 5432 -U postgres -l keycloak > /tmp/keycloak_wildfly_db_dump.sql\n

                                                                                      Note

                                                                                      If there is no working password for the postgres Superuser, try the admin user using the pg_dump tool to export the keycloak database without global roles:

                                                                                      PGPASSWORD=\"${postgresql_password}\" pg_dump -h localhost -p 5432 -U admin -d keycloak > /tmp/keycloak_wildfly_db_dump.sql\n

                                                                                      Info

                                                                                      Double-check that the dumped file is not empty. It usually contains more than 4000 lines.

                                                                                    2. Copy the file with the database dump to a local machine. Since tar may not be present in the pod and kubectl cp will not work without tar, use the following command:

                                                                                      kubectl exec -n security ${postgresql_pod} -- cat /tmp/keycloak_wildfly_db_dump.sql  > keycloak_wildfly_db_dump.sql\n

                                                                                      Note

                                                                                      Please find below the alternative commands for exporting the database to the local machine without copying the file to a pod for Postgres and admin users:

                                                                                      kubectl exec -n security ${postgresql_pod} \"--\" sh -c \"PGPASSWORD='\"${postgresql_postgres-password}\"' pg_dumpall -h localhost -p 5432 -U postgres\" > keycloak_wildfly_db_dump.sql\nkubectl exec -n security ${postgresql_pod} \"--\" sh -c \"PGPASSWORD='\"${postgresql_password}\"' pg_dump -h localhost -p 5432 -U admin -d keycloak\" > keycloak_wildfly_db_dump.sql\n
                                                                                    3. Delete the dumped file from the pod for security reasons:

                                                                                      kubectl exec -n security ${postgresql_pod} \"--\" sh -c \"rm /tmp/keycloak_wildfly_db_dump.sql\"\n
                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#delete-keycloak-resources_1","title":"Delete Keycloak Resources","text":"
                                                                                    1. Delete all previous Keycloak resources, including the Keycloak and Postgres StatefulSets, the Ingress, and the custom resources, via Helm or via the tool used for their deployment.

                                                                                      helm list -n security\nhelm delete keycloak -n security\n

                                                                                      Warning

                                                                                      Don't delete the whole namespace. Keep the keycloak-postgresql and keycloak-admin-creds secrets.

                                                                                    2. Delete the volume in AWS, from which a snapshot has been created. Then delete the PVC:

                                                                                      kubectl delete pvc data-keycloak-postgresql-0 -n security\n
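                                                                                      The AWS part of this step can be done with the CLI. The sketch below first confirms that the snapshot has completed and then deletes the volume (illustrative; replace <volumeID> with the value saved earlier, and do not delete the volume before a completed snapshot exists):
                                                                                      aws ec2 describe-snapshots --filters Name=volume-id,Values=<volumeID> --query 'Snapshots[*].{Id:SnapshotId,State:State,Progress:Progress}'\naws ec2 delete-volume --volume-id <volumeID>\n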
                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#install-postgres_1","title":"Install Postgres","text":"
                                                                                    1. Add Bitnami chart repository and update Helm repos:

                                                                                      helm repo add bitnami https://charts.bitnami.com/bitnami\nhelm repo update\n
                                                                                    2. Create Postgres values:

                                                                                      Note

                                                                                      fullnameOverride: \"keycloak-postgresql\" sets the name of the Postgres StatefulSet. It must be same as in the previous StatefulSet.

                                                                                      View: postgres-values.yaml
                                                                                      nameOverride: \"keycloak-postgresql\"\n\n# PostgreSQL read only replica parameters\nreadReplicas:\n# Number of PostgreSQL read only replicas\nreplicaCount: 1\n\nglobal:\npostgresql:\nauth:\nusername: admin\nexistingSecret: keycloak-postgresql\nsecretKeys:\nadminPasswordKey: postgres-password\nuserPasswordKey: password\ndatabase: keycloak\n\nauth:\nexistingSecret: keycloak-postgresql\nsecretKeys:\nadminPasswordKey: postgres-password\nuserPasswordKey: password\n\nprimary:\npersistence:\nenabled: true\nsize: 3Gi\n# If the StorageClass with reclaimPolicy: Retain is used, install an additional StorageClass before installing PostgreSQL\n# (the code is given below).\n# If the default StorageClass will be used - change \"gp2-retain\" to \"gp2\"\nstorageClass: \"gp2-retain\"\n
                                                                                    3. Install the Postgres database:

                                                                                      Note

                                                                                      Change the namespace and the values file name if required.

                                                                                      helm install postgresql bitnami/postgresql \\\n--version 11.7.6 \\\n--values postgres-values.yaml \\\n--namespace security\n
                                                                                    4. Wait for the database to be ready.
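                                                                                      A possible way to wait for readiness (a sketch that assumes the security namespace and the keycloak-postgresql StatefulSet name):
                                                                                      kubectl rollout status statefulset/keycloak-postgresql -n security --timeout=300s\n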

                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#import-postgres-databases","title":"Import Postgres Databases","text":"
                                                                                    1. Upload the database dump to the new Keycloak Postgres pod:

                                                                                      cat keycloak_wildfly_db_dump.sql | kubectl exec -i -n security ${postgresql_pod} \"--\" sh -c \"cat > /tmp/keycloak_wildfly_db_dump.sql\"\n

                                                                                      Warning

                                                                                      Database import must be done before deploying Keycloak, because Keycloak will write its own data to the database during startup, and the import will partially fail. If that happens, scale down the keycloak StatefulSet, and try to drop the Keycloak database in the Postgres pod:

                                                                                      dropdb -i -e keycloak -p 5432 -h localhost -U postgres\n

                                                                                      If there still are some conflicting objects like roles, drop them via the DROP ROLE command.

                                                                                      If the previous steps do not help, downscale the Keycloak and Postgres StatefulSets and delete the attached PVC (save the volumeID before removing), and delete the volume on AWS if using gp2-retain. In case of using gp2, the volume will be deleted automatically after removing PVC. After that, redeploy the Postgres database, so that the new PVC is automatically created.
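                                                                                      For example, leftover roles can be listed and dropped from inside the Postgres pod; the commands below are a sketch, and the role name is purely hypothetical:
                                                                                      psql -U postgres -c '\\du'\n# Drop a conflicting role (hypothetical name shown here)\npsql -U postgres -c 'DROP ROLE IF EXISTS some_conflicting_role;'\n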

                                                                                    2. Import the SQL dump file to the Postgres database cluster:

                                                                                      Info

                                                                                      Since the databases were exported in the sql format, the psql tool will be used to restore (reload) them. pg_restore does not support this plain-text format.

                                                                                      • If the entire Postgres database cluster was migrated with the postgres Superuser using pg_dumpall, use the import command without indicating the database:

                                                                                        psql -U postgres -f /tmp/keycloak_wildfly_db_dump.sql\n
                                                                                      • If the database was migrated with the admin user using pg_dump, the postgres Superuser still can be used to restore it, but, in this case, a database must be indicated:

                                                                                        Warning

                                                                                        If the database name was not indicated during the import for the file dumped with pg_dump, the psql tool will import this database to a default Postgres database called postgres.

                                                                                        psql -U postgres -d keycloak -f /tmp/keycloak_wildfly_db_dump.sql\n
                                                                                      • If the postgres Superuser is not accessible in the Postgres pod, run the command under the admin or any other user that has the database permissions. In this case, indicate the database as well:

                                                                                        psql -U admin -d keycloak -f /tmp/keycloak_wildfly_db_dump.sql\n
                                                                                    3. After a successful import, delete the dump file from the pod for security reasons:

                                                                                      kubectl exec -n security ${postgresql_pod} \"--\" sh -c \"rm /tmp/keycloak_wildfly_db_dump.sql\"\n

                                                                                      Note

                                                                                      Please find below the alternative commands for importing the database from the local machine to the pod without storing the backup on a pod for postgres or admin users:

                                                                                      cat \"keycloak_wildfly_db_dump.sql\" | kubectl exec -i -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" sh -c \"cat | PGPASSWORD='\"${postgresql_superuser_password}\"' psql -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\"\"\ncat \"keycloak_wildfly_db_dump.sql\" | kubectl exec -i -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" sh -c \"cat | PGPASSWORD='\"${postgresql_superuser_password}\"' psql -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\" -d \"${database_name}\"\"\ncat \"keycloak_wildfly_db_dump.sql\" | kubectl exec -i -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" sh -c \"cat | PGPASSWORD='\"${postgresql_admin_password}\"' psql -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\" -d \"${database_name}\"\"\n
                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#install-keycloak","title":"Install Keycloak","text":"
                                                                                    1. Make sure the Keycloak chart repository is added:

                                                                                      helm repo add codecentric https://codecentric.github.io/helm-charts\nhelm repo update\n
                                                                                    2. Create Keycloak values:

                                                                                      Note

                                                                                      • nameOverride: \"keycloak\" sets the name of the Keycloak pod. It must be the same Keycloak name as in the previous StatefulSet.
                                                                                      • Change Ingress host name to the Keycloak host name.
                                                                                      • hostname: keycloak-postgresql is the hostname of the pod with the Postgres database that is the same as Postgres StatefulSet name, for example, keycloak-postgresql.
                                                                                      • \"/opt/keycloak/bin/kc.sh start --auto-build\" was used in the legacy Keycloak version. However, it is no longer required in the new Keycloak version since it is deprecated and used by default.
                                                                                      • Optionally, use the following command for applying the old Keycloak theme:

                                                                                        bin/kc.sh start --features-disabled=admin2\n

                                                                                      Info

                                                                                      Automatic database migration will start after the Keycloak installation.

                                                                                      View: keycloak-values.yaml
                                                                                      nameOverride: \"keycloak\"\n\nreplicas: 1\n\n# Deploy the latest verion\nimage:\ntag: \"19.0.1\"\n\n# start: create OpenShift realm which is required by EDP\nextraInitContainers: |\n- name: realm-provider\nimage: busybox\nimagePullPolicy: IfNotPresent\ncommand:\n- sh\nargs:\n- -c\n- |\necho '{\"realm\": \"openshift\",\"enabled\": true}' > /opt/keycloak/data/import/openshift.json\nvolumeMounts:\n- name: realm\nmountPath: /opt/keycloak/data/import\n\nextraVolumeMounts: |\n- name: realm\nmountPath: /opt/keycloak/data/import\n\nextraVolumes: |\n- name: realm\nemptyDir: {}\n\ncommand:\n- \"/opt/keycloak/bin/kc.sh\"\n- \"--verbose\"\n- \"start\"\n- \"--http-enabled=true\"\n- \"--http-port=8080\"\n- \"--hostname-strict=false\"\n- \"--hostname-strict-https=false\"\n- \"--spi-events-listener-jboss-logging-success-level=info\"\n- \"--spi-events-listener-jboss-logging-error-level=warn\"\n- \"--import-realm\"\n\nextraEnv: |\n- name: KC_PROXY\nvalue: \"passthrough\"\n- name: KEYCLOAK_ADMIN\nvalueFrom:\nsecretKeyRef:\nname: keycloak-admin-creds\nkey: username\n- name: KEYCLOAK_ADMIN_PASSWORD\nvalueFrom:\nsecretKeyRef:\nname: keycloak-admin-creds\nkey: password\n- name: JAVA_OPTS_APPEND\nvalue: >-\n-XX:+UseContainerSupport\n-XX:MaxRAMPercentage=50.0\n-Djava.awt.headless=true\n-Djgroups.dns.query={{ include \"keycloak.fullname\" . }}-headless\n\n# This block should be uncommented if you install Keycloak on Kubernetes\ningress:\nenabled: true\nannotations:\nkubernetes.io/ingress.class: nginx\ningress.kubernetes.io/affinity: cookie\nrules:\n- host: keycloak.<ROOT_DOMAIN>\npaths:\n- path: '{{ tpl .Values.http.relativePath $ | trimSuffix \"/\" }}/'\npathType: Prefix\n\n# This block should be uncommented if you set Keycloak to OpenShift and change the host field\n# route:\n#   enabled: false\n#   # Path for the Route\n#   path: '/'\n#   # Host name for the Route\n#   host: \"keycloak.<ROOT_DOMAIN>\"\n#   # TLS configuration\n#   tls:\n#     enabled: true\n\nresources:\nlimits:\nmemory: \"2048Mi\"\nrequests:\ncpu: \"50m\"\nmemory: \"512Mi\"\n\n# Check database readiness at startup\ndbchecker:\nenabled: true\n\ndatabase:\nvendor: postgres\nexistingSecret: keycloak-postgresql\nhostname: keycloak-postgresql\nport: 5432\nusername: admin\ndatabase: keycloak\n
                                                                                    3. Deploy Keycloak:

                                                                                      Note

                                                                                      Change the namespace and the values file name if required.
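
                                                                                      If the codecentric Helm repository has not been added yet (an assumption; it may already be configured in the earlier steps of this tutorial), add it before installing the chart:

                                                                                      helm repo add codecentric https://codecentric.github.io/helm-charts && helm repo update\n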

                                                                                      helm install keycloak codecentric/keycloakx --version 1.6.0 --values keycloak-values.yaml -n security\n
                                                                                    4. Log in to Keycloak and check if everything has been imported correctly.
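
                                                                                      Optionally, the rollout can be verified from the command line before logging in. This is a minimal sketch, assuming the security namespace used in the helm command above; the pod name is a placeholder:

                                                                                      kubectl get pods -n security\nkubectl logs -n security <keycloak-pod-name> --tail=50\n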

                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#clean-and-analyze-database_1","title":"Clean and Analyze Database","text":"

                                                                                    Optionally, run the vacuumdb application on the database to analyze the contents of the database tables and collect statistics for the Postgres query optimizer:

                                                                                    PGPASSWORD=\"${postgresql_postgres-password}\" vacuumdb --analyze --verbose -d keycloak -U postgres\n
                                                                                    To process all databases, run the following command:

                                                                                    PGPASSWORD=\"${postgresql_postgres-password}\" vacuumdb --analyze --verbose --all -U postgres\n
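
                                                                                    To confirm that the table statistics were refreshed, the pg_stat_user_tables view can be queried. This is a minimal sketch, run in the same context and with the same superuser credentials as the commands above:

                                                                                    PGPASSWORD=\"${postgresql_postgres-password}\" psql -U postgres -d keycloak -c \"SELECT relname, last_vacuum, last_analyze FROM pg_stat_user_tables ORDER BY relname LIMIT 10;\"\n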
                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#postgres-database-migration-script","title":"Postgres Database Migration Script","text":"

                                                                                    Info

                                                                                    Please read the Migrate Postgres Database From Postgres v.11.x to v.14.5 section of this tutorial before using the script.

                                                                                    Note

                                                                                    • The kubectl tool is required for using this script.
                                                                                    • This script will likely work for any other Postgres database besides Keycloak after some adjustments. It invokes the pg_dump, pg_dumpall, psql, and vacuumdb commands under the hood.

                                                                                    The following script can be used for exporting and importing Postgres databases, as well as optimizing them with the vacuumdb application. Please examine the code and make adjustments if required.

                                                                                    • By default, the following command exports Keycloak Postgres databases from a Kubernetes pod to a local machine:

                                                                                      ./script.sh\n

                                                                                      After running the command, please follow the prompt.

                                                                                    • To import a database backup to a newly created Postgres Kubernetes pod, pass the database dump SQL file to the script:
                                                                                      ./script.sh path-to/db_dump.sql\n
                                                                                    • The -h flag prints help, and -c|-v runs the vacuumdb garbage collector and analyzer.
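
                                                                                    Before running the script, it can be useful to confirm that kubectl and the Postgres client tools are available. This is a minimal check, assuming the namespace and pod name that you will enter at the script prompts:

                                                                                    kubectl version --client\nkubectl exec -n <keycloak_namespace> <postgres_pod_name> -- sh -c \"pg_dump --version; pg_dumpall --version; psql --version; vacuumdb --version\"\n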
                                                                                    View: keycloak_db_migration.sh
                                                                                    #!/bin/bash\n\n# set -x\n\ndb_migration_help(){\necho \"Keycloak Postgres database migration\"\necho\necho \"Usage:\"\necho \"------------------------------------------\"\necho \"Export Keycloak Postgres database from pod\"\necho \"Run without parameters:\"\necho \"      $0\"\necho \"------------------------------------------\"\necho \"Import Keycloak Postgres database to pod\"\necho \"Pass filename to script:\"\necho \"      $0 path/to/db_dump.sql\"\necho \"------------------------------------------\"\necho \"Additional options: \"\necho \"      $0 [OPTIONS...]\"\necho \"Options:\"\necho \"h     Print Help.\"\necho \"c|v   Run garbage collector and analyzer.\"\n}\n\nkeycloak_ns(){\nprintf '%s\\n' 'Enter keycloak namespace: '\nread -r keycloak_namespace\n\n    if [ -z \"${keycloak_namespace}\" ]; then\necho \"Don't skip namespace\"\nexit 1\nfi\n}\n\npostgres_pod(){\nprintf '%s\\n' 'Enter postgres pod name: '\nread -r postgres_pod_name\n\n    if [ -z \"${postgres_pod_name}\" ]; then\necho \"Don't skip pod name\"\nexit 1\nfi\n}\n\npostgres_user(){\nprintf '%s\\n' 'Enter postgres username: '\nprintf '%s' \"Skip to use [postgres] superuser: \"\nread -r postgres_username\n\n    if [ -z \"${postgres_username}\" ]; then\npostgres_username='postgres'\nfi\n}\n\npgdb_host_info(){\ndatabase_name='keycloak'\ndb_host='localhost'\ndb_port='5432'\n}\n\npostgresql_admin_pass(){\npostgresql_password='POSTGRES_PASSWORD'\npostgresql_admin_password=\"$(kubectl exec -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"printenv ${postgresql_password}\")\"\n}\n\npostgresql_su_pass(){\npostgresql_postgres_password='POSTGRES_POSTGRES_PASSWORD'\npostgresql_superuser_password=\"$(kubectl exec -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"printenv ${postgresql_postgres_password}\")\"\n\nif [ -z \"${postgresql_superuser_password}\" ]; then\necho \"SuperUser password variable does not exist. Using user password instead...\"\npostgresql_admin_pass\n        postgresql_superuser_password=\"${postgresql_admin_password}\"\nfi\n}\n\nkeycloak_pgdb_export(){\ncurrent_cluster=\"$(kubectl config current-context | tr -dc '[:alnum:]-')\"\nexported_db_name=\"keycloak_db_dump_${current_cluster}_${keycloak_namespace}_${postgres_username}_$(date +\"%Y%m%d%H%M\").sql\"\n\nif [ \"${postgres_username}\" == 'postgres' ]; then\n# call a function to get a pass for postgres user\npostgresql_su_pass\n        kubectl exec -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"PGPASSWORD='\"${postgresql_superuser_password}\"' pg_dumpall -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\"\" > \"${exported_db_name}\"\nelse\n# call a function to get a pass for admin user\npostgresql_admin_pass\n        kubectl exec -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"PGPASSWORD='\"${postgresql_admin_password}\"' pg_dump -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\" -d \"${database_name}\"\" > \"${exported_db_name}\"\nfi\n\nseparate_lines=\"---------------\"\n\nif [ ! -s \"${exported_db_name}\" ]; then\nrm -f \"${exported_db_name}\"\necho \"${separate_lines}\"\necho \"Something went wrong. 
The database dump file is empty and was not saved.\"\nelse\necho \"${separate_lines}\"\ngrep 'Dumped' \"${exported_db_name}\" | sort -u\n        echo \"Database has been exported to $(pwd)/${exported_db_name}\"\nfi\n}\n\nkeycloak_pgdb_import(){\necho \"Preparing Import\"\necho \"----------------\"\n\nif [ ! -f \"$1\" ]; then\necho \"The file $1 does not exist.\"\nexit 1\nfi\n\nkeycloak_ns\n    postgres_pod\n    postgres_user\n    pgdb_host_info\n\n    if [ \"${postgres_username}\" == 'postgres' ]; then\n# restore full backup with all databases and roles as superuser or a single database\npostgresql_su_pass\n        if [ -n \"$(cat \"$1\" | grep 'CREATE ROLE')\" ]; then\ncat \"$1\" | kubectl exec -i -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"cat | PGPASSWORD='\"${postgresql_superuser_password}\"' psql -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\"\"\nelse\ncat \"$1\" | kubectl exec -i -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"cat | PGPASSWORD='\"${postgresql_superuser_password}\"' psql -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\" -d \"${database_name}\"\"\nfi\nelse\n# restore a single database\npostgresql_admin_pass\n        cat \"$1\" | kubectl exec -i -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"cat | PGPASSWORD='\"${postgresql_admin_password}\"' psql -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\" -d \"${database_name}\"\"\nfi\n}\n\nvacuum_pgdb(){\necho \"Preparing garbage collector and analyzer\"\necho \"----------------------------------------\"\n\nkeycloak_ns\n    postgres_pod\n    postgres_user\n    pgdb_host_info\n\n    if [ \"${postgres_username}\" == 'postgres' ]; then\npostgresql_su_pass\n        kubectl exec -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"PGPASSWORD='\"${postgresql_superuser_password}\"' vacuumdb --analyze --all -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\"\"\nelse\npostgresql_admin_pass\n        kubectl exec -n \"${keycloak_namespace}\" \"${postgres_pod_name}\" \"--\" \\\nsh -c \"PGPASSWORD='\"${postgresql_admin_password}\"' vacuumdb --analyze -h \"${db_host}\" -p \"${db_port}\" -U \"${postgres_username}\" -d \"${database_name}\"\"\nfi\n}\n\nwhile [ \"$#\" -eq 1 ]; do\ncase \"$1\" in\n-h | --help)\ndb_migration_help\n            exit 0\n;;\n-c | --clean | -v | --vacuum)\nvacuum_pgdb\n            exit 0\n;;\n--)\nbreak\n;;\n-*)\necho \"Invalid option '$1'. Use -h|--help to see the valid options\" >&2\nexit 1\n;;\n*)\nkeycloak_pgdb_import \"$1\"\nexit 0\n;;\nesac\nshift\ndone\n\nif [ \"$#\" -gt 1 ]; then\necho \"Please pass a single file to the script\"\nexit 1\nfi\n\necho \"Preparing Export\"\necho \"----------------\"\nkeycloak_ns\npostgres_pod\npostgres_user\npgdb_host_info\nkeycloak_pgdb_export\n
                                                                                    "},{"location":"operator-guide/upgrade-keycloak-19.0/#related-articles","title":"Related Articles","text":"
                                                                                    • Deploy OKD 4.10 Cluster
                                                                                    "},{"location":"operator-guide/vcs/","title":"Overview","text":"

                                                                                    The Version Control Systems (VCS) section is dedicated to delivering comprehensive information on VCS within the EPAM Delivery Platform. This section comprises detailed descriptions of all the deployment strategies, along with valuable recommendations for their optimal usage, and the list of supported VCS, facilitating seamless integration with EDP.

                                                                                    "},{"location":"operator-guide/vcs/#supported-vcs","title":"Supported VCS","text":"

                                                                                    EDP can be integrated with the following Version Control Systems:

                                                                                    • Gerrit (used by default);
                                                                                    • GitHub;
                                                                                    • GitLab.

                                                                                    Note

                                                                                    So far, EDP doesn't support authorization mechanisms in the upstream GitLab.

                                                                                    "},{"location":"operator-guide/vcs/#vcs-deployment-strategies","title":"VCS Deployment Strategies","text":"

                                                                                    EDP offers the following strategies to work with repositories:

                                                                                    • Create from template \u2013 creates a project from a template according to the application language, build tool, and framework selected while creating the application. This strategy is recommended for projects that start developing their applications from scratch.

                                                                                    Note

                                                                                    Under the hood, all the built-in templates for the supported application languages, build tools, and frameworks are stored in our public GitHub repository.

                                                                                    • Import project - enables working with a repository located in an added Git server. This scenario is preferred when users already have an application stored in their own pre-configured repository and intend to continue working with that repository while also utilizing EDP.

                                                                                    Note

                                                                                    In order to use the Import project strategy, make sure to set it up according to the Integrate GitHub/GitLab in Jenkins or Integrate GitHub/GitLab in Tekton page. The Import project strategy is not applicable for Gerrit. Also, it is impossible to select the Empty project field when using the Import project strategy while creating an application, since it is implied that you already have a ready-to-work application in your own repository, whereas the \"Empty project\" option creates a repository but doesn't put anything in it.

                                                                                    • Clone project \u2013 clones the indicated repository into the EPAM Delivery Platform. In this scenario, the application repository is forked from the original repository to EDP. Since EDP doesn't support multiple VCS integration for now, this strategy is recommended when the user has several applications located in several repositories.
                                                                                    "},{"location":"operator-guide/vcs/#related-articles","title":"Related Articles","text":"
                                                                                    • Add Git Server
                                                                                    • Add Application
                                                                                    • Integrate GitHub/GitLab in Jenkins
                                                                                    • Integrate GitHub/GitLab in Tekton
                                                                                    "},{"location":"operator-guide/velero-irsa/","title":"IAM Roles for Velero Service Accounts","text":"

                                                                                    Note

                                                                                    Make sure that IRSA is enabled and amazon-eks-pod-identity-webhook is deployed according to the Associate IAM Roles With Service Accounts documentation.

                                                                                    The Velero AWS plugin requires access to AWS resources. Follow the steps below to create the required role:

                                                                                    1. Create AWS IAM Policy \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039VELERO_NAMESPACE\u203aVelero_policy\":

                                                                                      {\n\"Version\": \"2012-10-17\",\n\"Statement\": [\n{\n\"Effect\": \"Allow\",\n\"Action\": [\n\"ec2:DescribeVolumes\",\n\"ec2:DescribeSnapshots\",\n\"ec2:CreateTags\",\n\"ec2:CreateVolume\",\n\"ec2:CreateSnapshot\",\n\"ec2:DeleteSnapshot\"\n],\n\"Resource\": \"*\"\n},\n{\n\"Effect\": \"Allow\",\n\"Action\": [\n\"s3:GetObject\",\n\"s3:DeleteObject\",\n\"s3:PutObject\",\n\"s3:AbortMultipartUpload\",\n\"s3:ListMultipartUploadParts\"\n],\n\"Resource\": [\n\"arn:aws:s3:::velero-*/*\"\n]\n},\n{\n\"Effect\": \"Allow\",\n\"Action\": [\n\"s3:ListBucket\"\n],\n\"Resource\": [\n\"arn:aws:s3:::velero-*\"\n]\n}\n]\n}\n
                                                                                    2. Create AWS IAM Role \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039VELERO_NAMESPACE\u203aVelero\" with trust relationships:

                                                                                      {\n  \"Version\": \"2012-10-17\",\n  \"Statement\": [\n    {\n      \"Effect\": \"Allow\",\n      \"Principal\": {\n        \"Federated\": \"arn:aws:iam::<AWS_ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER>\"\n      },\n      \"Action\": \"sts:AssumeRoleWithWebIdentity\",\n      \"Condition\": {\n        \"StringEquals\": {\n          \"<OIDC_PROVIDER>:sub\": \"system:serviceaccount:<VELERO_NAMESPACE>:edp-velero\"\n       }\n     }\n   }\n ]\n}\n
                                                                                    3. Attach the \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039VELERO_NAMESPACE\u203aVelero_policy\" policy to the \"AWSIRSA\u2039CLUSTER_NAME\u203a\u2039VELERO_NAMESPACE\u203aVelero\" role.

                                                                                    4. Make sure that an Amazon S3 bucket named velero-\u2039CLUSTER_NAME\u203a exists (see the CLI sketch below).
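
                                                                                      Steps 1-4 can also be performed with the AWS CLI. The following is a hedged sketch rather than the official procedure: it assumes the policy and trust documents shown above are saved locally as velero_policy.json and velero_trust.json (hypothetical file names) and that the <AWS_ACCOUNT_ID>, <CLUSTER_NAME>, <VELERO_NAMESPACE>, and <AWS_REGION> placeholders are filled in:

                                                                                      aws iam create-policy --policy-name \"AWSIRSA<CLUSTER_NAME><VELERO_NAMESPACE>Velero_policy\" --policy-document file://velero_policy.json\naws iam create-role --role-name \"AWSIRSA<CLUSTER_NAME><VELERO_NAMESPACE>Velero\" --assume-role-policy-document file://velero_trust.json\naws iam attach-role-policy --role-name \"AWSIRSA<CLUSTER_NAME><VELERO_NAMESPACE>Velero\" --policy-arn \"arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSIRSA<CLUSTER_NAME><VELERO_NAMESPACE>Velero_policy\"\n# Create the S3 bucket for backups if it does not exist yet\naws s3api head-bucket --bucket \"velero-<CLUSTER_NAME>\" 2>/dev/null || aws s3 mb \"s3://velero-<CLUSTER_NAME>\" --region <AWS_REGION>\n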

                                                                                    5. Provide the key value eks.amazonaws.com/role-arn: \"arn:aws:iam:::role/AWSIRSA\u2039CLUSTER_NAME\u203a\u2039VELERO_NAMESPACE\u203aVelero\" in the serviceAccount.server.annotations parameter in values.yaml during the Velero Installation."},{"location":"operator-guide/velero-irsa/#related-articles","title":"Related Articles","text":"

                                                                                      • Associate IAM Roles With Service Accounts
                                                                                      • Install Velero
                                                                                      "},{"location":"operator-guide/waf-tf-configuration/","title":"Configure AWS WAF With Terraform","text":"

                                                                                      This page describes how to configure AWS WAF using Terraform in order to secure exposed traffic and prevent Host Header vulnerabilities.

                                                                                      "},{"location":"operator-guide/waf-tf-configuration/#prerequisites","title":"Prerequisites","text":"

                                                                                      To follow the instructions, make sure the following prerequisites are met:

                                                                                      1. Deployed infrastructure includes Nginx Ingress Controller
                                                                                      2. Deployed services for testing
                                                                                      3. Separate and exposed AWS ALB
                                                                                      4. terraform 0.14.10
                                                                                      5. hashicorp/aws = 4.8.0
                                                                                      "},{"location":"operator-guide/waf-tf-configuration/#solution-overview","title":"Solution Overview","text":"

                                                                                      The solution includes two parts:

                                                                                      1. Prerequisites (mostly the left part of the scheme) - AWS ALB, Compute Resources (EC2, EKS, etc.).
                                                                                      2. WAF configuration (the right part of the scheme).

                                                                                      The WAF ACL resource is the main resource used for the configuration; the default web ACL action is Block.

                                                                                      Overview WAF Solution

                                                                                      The ACL includes three managed AWS rules that secure the exposed traffic:

                                                                                      • AWS-AWSManagedRulesCommonRuleSet
                                                                                      • AWS-AWSManagedRulesLinuxRuleSet
                                                                                      • AWS-AWSManagedRulesKnownBadInputsRuleSet

                                                                                      AWS provides a lot of rules, such as baseline and use-case specific rules. For details, please refer to the Baseline rule groups.

                                                                                      There is also the PreventHostInjections custom rule that prevents Host Header vulnerabilities. This rule contains a single statement declaring that the Host header must match the Regex Pattern Set; only in this case is the request passed.

                                                                                      The Regex Pattern Set is another resource that helps to organize regexes; in fact, it is a set of regexes. All regexes added to a single set are combined with the OR operator, i.e. when exposing several URLs, it is necessary to add the corresponding regex to the set and refer to the set in the rule.

                                                                                      "},{"location":"operator-guide/waf-tf-configuration/#waf-acl-configuration","title":"WAF ACL Configuration","text":"

                                                                                      To create the Regex Pattern Set, inspect the following code:

                                                                                      resource \"aws_wafv2_regex_pattern_set\" \"common\" {\nname  = \"Common\"\nscope = \"REGIONAL\"\n\nregular_expression {\nregex_string = \"^.*(some-url).*((.edp-epam)+)\\\\.com$\"\n}\n\n  #  Add here additional regular expressions for other endpoints, they are merging with OR operator, e.g.\n\n  /*\n   regular_expression {\n      regex_string = \"^.*(jenkins).*((.edp-epam)+)\\\\.com$\"\n   }\n   */\n\ntags = var.tags\n}\n

                                                                                      It includes a regex_string, for example, for the URL some-url.edp-epam.com. In addition, it is possible to add other links to the same resource using the regular_expression element.

                                                                                      There is the Terraform code for the aws_wafv2_web_acl resource:

                                                                                      resource \"aws_wafv2_web_acl\" \"external\" {\nname  = \"ExternalACL\"\nscope = \"REGIONAL\"\n\ndefault_action {\nblock {}\n}\n\nrule {\nname     = \"AWS-AWSManagedRulesCommonRuleSet\"\npriority = 1\n\noverride_action {\nnone {}\n}\n\nstatement {\nmanaged_rule_group_statement {\nname        = \"AWSManagedRulesCommonRuleSet\"\nvendor_name = \"AWS\"\n}\n}\n\nvisibility_config {\ncloudwatch_metrics_enabled = true\nmetric_name                = \"AWS-AWSManagedRulesCommonRuleSet\"\nsampled_requests_enabled   = true\n}\n}\n\nrule {\nname     = \"AWS-AWSManagedRulesLinuxRuleSet\"\npriority = 2\n\nstatement {\nmanaged_rule_group_statement {\nname        = \"AWSManagedRulesLinuxRuleSet\"\nvendor_name = \"AWS\"\n}\n}\n\noverride_action {\nnone {}\n}\n\nvisibility_config {\ncloudwatch_metrics_enabled = true\nmetric_name                = \"AWS-AWSManagedRulesLinuxRuleSet\"\nsampled_requests_enabled   = true\n}\n}\n\nrule {\nname     = \"AWS-AWSManagedRulesKnownBadInputsRuleSet\"\npriority = 3\n\noverride_action {\nnone {}\n}\n\nstatement {\nmanaged_rule_group_statement {\nname        = \"AWSManagedRulesKnownBadInputsRuleSet\"\nvendor_name = \"AWS\"\n}\n}\n\nvisibility_config {\ncloudwatch_metrics_enabled = true\nmetric_name                = \"AWS-AWSManagedRulesKnownBadInputsRuleSet\"\nsampled_requests_enabled   = true\n}\n}\n\nrule {\nname     = \"PreventHostInjections\"\npriority = 0\n\nstatement {\nregex_pattern_set_reference_statement {\narn = aws_wafv2_regex_pattern_set.common.arn\n\nfield_to_match {\nsingle_header {\nname = \"host\"\n}\n}\n\ntext_transformation {\npriority = 0\ntype     = \"NONE\"\n}\n}\n}\n\naction {\nallow {}\n}\n\nvisibility_config {\ncloudwatch_metrics_enabled = true\nmetric_name                = \"PreventHostInjections\"\nsampled_requests_enabled   = true\n}\n}\n\nvisibility_config {\ncloudwatch_metrics_enabled = true\nmetric_name                = \"ExternalACL\"\nsampled_requests_enabled   = true\n}\n\ntags = var.tags\n}\n

                                                                                      As mentioned previously, the ACL includes three managed AWS rule groups; the visibility_config blocks enable sampling and CloudWatch metrics for each of them. The PreventHostInjections custom rule refers to the created pattern set, matches it against the Host header, and sets the Action to Allow when the header matches.

                                                                                      "},{"location":"operator-guide/waf-tf-configuration/#associate-aws-resource","title":"Associate AWS Resource","text":"

                                                                                      For the created ACL to take effect, it is necessary to associate an AWS resource with it; in this case, it is the AWS ALB:

                                                                                      resource \"aws_wafv2_web_acl_association\" \"waf_alb\" {\nresource_arn = aws_lb.<aws_alb_for_waf>.arn\nweb_acl_arn  = aws_wafv2_web_acl.external.arn\n}\n

                                                                                      Note

                                                                                      The AWS ALB can be created within the scope of this Terraform code or beforehand. When creating an ALB to expose links, the ALB should have a security group that allows some external traffic.

                                                                                      Once the ALB is associated with the WAF ACL, direct the traffic to the ALB using a Route53 CNAME record:

                                                                                      module \"some_url_exposure\" {\nsource  = \"terraform-aws-modules/route53/aws//modules/records\"\nversion = \"2.0.0\"\n\nzone_name = \"edp-epam.com\"\n\nrecords = [\n{\nname    = \"some-url\"\ntype    = \"CNAME\"\nttl     = 300\nrecords = [aws_lb.<aws_alb_for_waf>.dns_name]\n}\n]\n}\n

                                                                                      In the sample above, a Terraform module is used, but it is also possible to use a plain Terraform resource.

                                                                                      "},{"location":"use-cases/","title":"Overview","text":"

                                                                                      The Use Cases section provides useful recommendations on how to operate the EPAM Delivery Platform tools and manage the custom resources. Get acquainted with the descriptions of technical scenarios and solutions.

                                                                                      • Scaffold and Deploy FastAPI Application
                                                                                      • Deploy Application With Custom Build Tool/Framework
                                                                                      • Secured Secrets Management for Application Deployment
                                                                                      • Autotest as a Quality Gate
                                                                                      "},{"location":"use-cases/application-scaffolding/","title":"Scaffold and Deploy FastAPI Application","text":""},{"location":"use-cases/application-scaffolding/#overview","title":"Overview","text":"

                                                                                      This use case describes the creation and deployment of a FastAPI application to enable a developer to quickly generate a functional code structure for a FastAPI web application (with basic read functionality), customize it to meet specific requirements, and deploy it to a development environment. By using a scaffolding tool and a standardized process for code review, testing and deployment, developers can reduce the time and effort required to build and deploy a new application while improving the quality and reliability of the resulting code. Ultimately, the goal is to enable the development team to release new features and applications more quickly and efficiently while maintaining high code quality and reliability.

                                                                                      "},{"location":"use-cases/application-scaffolding/#roles","title":"Roles","text":"

                                                                                      This documentation is tailored for the Developers and Team Leads.

                                                                                      "},{"location":"use-cases/application-scaffolding/#goals","title":"Goals","text":"
                                                                                      • Create a new FastAPI application quickly.
                                                                                      • Deploy the initial code to the DEV environment.
                                                                                      • Check CI pipelines.
                                                                                      • Perform code review.
                                                                                      • Deliver an update by deploying the new version.
                                                                                      "},{"location":"use-cases/application-scaffolding/#preconditions","title":"Preconditions","text":"
                                                                                      • EDP instance is configured with Gerrit, Tekton and Argo CD.
                                                                                      • Developer has access to the EDP instances using the Single-Sign-On approach.
                                                                                      • Developer has the Administrator role (to perform merge in Gerrit).
                                                                                      "},{"location":"use-cases/application-scaffolding/#scenario","title":"Scenario","text":"

                                                                                      To scaffold and deploy FastAPI Application, follow the steps below.

                                                                                      "},{"location":"use-cases/application-scaffolding/#scaffold-the-new-fastapi-application","title":"Scaffold the New FastAPI Application","text":"
                                                                                      1. Open EDP Portal URL. Use the Sign-In option.

                                                                                        Logging screen

                                                                                      2. Ensure the Namespace value in the User Settings tab points to the namespace with the EDP installation.

                                                                                        Settings button

                                                                                      3. Create the new Codebase with the Application type using the Create strategy. To do this, open EDP tab.

                                                                                        Cluster overview

                                                                                      4. Select the Components Section under the EDP tab and push the create + button.

                                                                                        Components tab

                                                                                      5. Select the Application Codebase type because we are going to deliver our application as a container and deploy it inside the Kubernetes cluster. Choose the Create strategy to scaffold our application from the template provided by the EDP and press the Proceed button.

                                                                                        Step codebase info

                                                                                      6. On the Application Info tab, define the following values and press the Proceed button:

                                                                                        • Application name: fastapi-demo
                                                                                        • Default branch: main
                                                                                        • Application code language: Python
                                                                                        • Language version/framework: FastAPI
                                                                                        • Build tool: Python

                                                                                        Application info

                                                                                      7. On the Advanced Settings tab, define the below values and push the Apply button:

                                                                                        • CI tool: Tekton
                                                                                        • Codebase versioning type: edp
                                                                                        • Start version from: 0.0.1 and SNAPSHOT

                                                                                        Advanced settings

                                                                                      8. Check the application status. It should be green:

                                                                                        Application status
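
                                                                                        The same check can be done from the command line, assuming the EDP Codebase custom resources are available in the EDP namespace (a hedged sketch; replace the namespace placeholder):

                                                                                        kubectl get codebases -n <edp-namespace>\n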

                                                                                      "},{"location":"use-cases/application-scaffolding/#deploy-the-application-to-the-development-environment","title":"Deploy the Application to the Development Environment","text":"

                                                                                      This section describes the application deployment approach from the latest branch commit. The general steps are:

                                                                                      • Build the initial version (generated from the template) of the application from the last commit of the main branch.
                                                                                      • Create a CD Pipeline to establish continuous delivery to the development environment.
                                                                                      • Deploy the initial version to the development env.

                                                                                      To succeed with the steps above, follow the instructions below:

                                                                                      1. Build Container from the latest branch commit. To build the initial version of the application's main branch, go to the fastapi-demo application -> branches -> main and select the Build menu.

                                                                                        Application building

                                                                                      2. The Build pipeline for the fastapi-demo application starts.

                                                                                        Pipeline building

                                                                                      3. Track the Pipeline's status by accessing the Tekton Dashboard: click the fastapi-demo-main-build-lb57m application link.

                                                                                        Console logs

                                                                                      4. Ensure that Build Pipeline was successfully completed.

                                                                                      5. Create CD Pipeline. To enable application deployment, create a CD Pipeline with a single environment - Development (with the name dev).

                                                                                      6. Go to EDP Portal -> EDP -> CD Pipelines tab and push the + button to create a pipeline. In the Create CD Pipeline dialog, define the below values:

                                                                                        • Pipeline tab:

                                                                                          • Pipeline name: mypipe
                                                                                          • Deployment type: Container, since we are going to deploy containers

                                                                                          Pipeline tab with parameters

                                                                                        • Applications tab. Add fastapi-demo application, select main branch, and leave Promote in pipeline unchecked:

                                                                                          Applications tab with parameters

                                                                                        • Stages tab. Add the dev stage with the values below:

                                                                                          • Stage name: dev
                                                                                          • Description: Development Environment
                                                                                          • Trigger type: Manual. We plan to deploy applications to this environment manually
                                                                                          • Quality gate type: Manual
                                                                                          • Step name: approve
                                                                                          • Push the Apply button

                                                                                          Stages tab with parameters

                                                                                      7. Deploy the initial version of the application to the development environment:

                                                                                        • Open CD Pipeline with the name mypipe.
                                                                                        • Select the dev stage from the Stages tab.
                                                                                        • In the Image stream version select version 0.0.1-SNAPSHOT.1 and push the Deploy button.

                                                                                        CD Pipeline deploy

                                                                                      "},{"location":"use-cases/application-scaffolding/#check-the-application-status","title":"Check the Application Status","text":"

                                                                                      To ensure the application is deployed successfully, follow the steps below:

                                                                                      1. Ensure the application status is Healthy and Synced, and the Deployed version points to 0.0.1-SNAPSHOT.1:

                                                                                        Pipeline health status

                                                                                      2. Check that the selected version of the container is deployed on the dev environment. ${EDP_ENV} is the EDP namespace name:

                                                                                        # Check the deployment status of fastapi-demo application\n$ kubectl get deployments -n ${EDP_ENV}-mypipe-dev\nNAME                 READY   UP-TO-DATE   AVAILABLE   AGE\nfastapi-demo-dl1ft   1/1     1            1           30m\n\n# Check the image version of fastapi-demo application\n$ kubectl get pods -o jsonpath=\"{.items[*].spec.containers[*].image}\" -n ${EDP_ENV}-mypipe-dev\n012345678901.dkr.ecr.eu-central-1.amazonaws.com/${EDP_ENV}/fastapi-demo:0.0.1-SNAPSHOT.1\n
                                                                                      "},{"location":"use-cases/application-scaffolding/#deliver-new-code","title":"Deliver New Code","text":"

                                                                                      This section describes the Code Review process for new code. We need to deploy a new version of our fastapi-demo application that deploys an Ingress object to expose the API outside the Kubernetes cluster.

                                                                                      Perform the below steps to merge new code (Pull Request) that passes the Code Review flow. For the steps below, we use the Gerrit UI, but the same actions can be performed using the command line and the git tool, as shown in the sketch below:
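
                                                                                      A minimal command-line sketch of the same change follows. The Gerrit SSH user and host are placeholders, port 29418 is the usual Gerrit SSH port (an assumption for your setup), and the commit-msg hook that adds a Change-Id is expected to be installed:

                                                                                      git clone \"ssh://<user>@<gerrit-host>:29418/fastapi-demo\" && cd fastapi-demo\n# Edit deployment-templates/values.yaml and set ingress.enabled to true\ngit commit -am \"Enable ingress for application\"\ngit push origin HEAD:refs/for/main\n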

                                                                                      1. Log in to the Gerrit UI, select the fastapi-demo project, and create a change request.

                                                                                      2. Browse Gerrit Repositories and select fastapi-demo project.

                                                                                        Browse Gerrit repositories

                                                                                      3. In the Commands section of the project, push the Create Change button.

                                                                                        Create Change request

                                                                                      4. In the Create Change dialog, provide the branch main and the Description (commit message):

                                                                                        Enable ingress for application\n\nCloses: #xyz\n
                                                                                      5. Push the Create button.

                                                                                        Create Change

                                                                                      6. Push the Edit button of the merge request and add deployment-templates/values.yaml for modification.

                                                                                        Update values.yaml file

                                                                                      7. Review the deployment-templates/values.yaml file and change the ingress.enabled flag from false to true. Then push the SAVE & PUBLISH button. As soon as you get Verified +1 from CI, you are ready for review: push the Mark as Active button.

                                                                                        Review Change

                                                                                      8. You can always check your pipelines status from:

                                                                                        • Gerrit UI.

                                                                                        Pipeline Status Gerrit

                                                                                        • EDP Portal.

                                                                                        Pipeline Status EDP Portal

                                                                                      9. With no Code Review Pipeline issues, set Code-Review +2 for the patchset and push the Submit button. Your code is then merged to the main branch, triggering the Build Pipeline. The Build Pipeline produces a new artifact version, 0.0.1-SNAPSHOT.2, which is available for deployment.

                                                                                        Gerrit Code Review screen

                                                                                      10. Deliver the New Version to the Environment. Before deploying the new version, check the ingress object in the dev namespace:

                                                                                        $ kubectl get ingress -n ${EDP_ENV}-mypipe-dev\nNo resources found in ${EDP_ENV}-mypipe-dev namespace.\n

                                                                                        No ingress object exists as expected.

                                                                                      11. Deploy the new version 0.0.1-SNAPSHOT.2, which has the ingress object in place. Since we use the Manual deployment approach, we perform the version upgrade by hand.

                                                                                        • Go to the CD Pipelines section of the EDP Portal, select mypipe pipeline and choose dev stage.
                                                                                        • In the Image stream version select the new version 0.0.1-SNAPSHOT.2 and push the Update button.
                                                                                        • Check that the new version is deployed: application status is Healthy and Synced, and the Deployed version points to 0.0.1-SNAPSHOT.2.

                                                                                        CD Pipeline Deploy New Version

                                                                                      12. Check that the new version with Ingress is deployed:

                                                                                        # Check the version of the deployed image\nkubectl get pods -o jsonpath=\"{.items[*].spec.containers[*].image}\" -n ${EDP_ENV}-mypipe-dev\n012345678901.dkr.ecr.eu-central-1.amazonaws.com/edp-delivery-tekton-dev/fastapi-demo:0.0.1-SNAPSHOT.2\n\n# Check Ingress object\nkubectl get ingress -n ${EDP_ENV}-mypipe-dev\nNAME                 CLASS    HOSTS                            ADDRESS          PORTS   AGE\nfastapi-demo-ko1zs   <none>   fastapi-demo-ko1zs-example.com   12.123.123.123   80      115s\n\n# Check application external URL\ncurl https://your-hostname-appeared-in-hosts-column-above.example.com/\n{\"Hello\":\"World\"}\n
                                                                                      "},{"location":"use-cases/application-scaffolding/#related-articles","title":"Related Articles","text":"
                                                                                      • Use Cases
                                                                                      "},{"location":"use-cases/autotest-as-quality-gate/","title":"Autotest as a Quality Gate","text":"

                                                                                      This use case describes the flow of adding an autotest as a quality gate to a newly created CD pipeline with a selected build version of an application to be promoted. The purpose of autotests is to check whether the application meets predefined criteria for stability and functionality, ensuring that only reliable versions are promoted. The promotion feature allows users to implement complicated testing, thus improving application stability.

                                                                                      "},{"location":"use-cases/autotest-as-quality-gate/#roles","title":"Roles","text":"

                                                                                      This documentation is tailored for the Developers and Quality Assurance specialists.

                                                                                      "},{"location":"use-cases/autotest-as-quality-gate/#goals","title":"Goals","text":"
                                                                                      • Create several applications and autotests quickly.
                                                                                      • Create a pipeline for Continuous Deployment.
                                                                                      • Perform testing.
                                                                                      • Update the delivery by deploying the new version.
                                                                                      "},{"location":"use-cases/autotest-as-quality-gate/#preconditions","title":"Preconditions","text":"
                                                                                      • EDP instance is configured with Gerrit, Tekton and Argo CD.
                                                                                      • Developer has access to the EDP instances using the Single-Sign-On approach.
                                                                                      • Developer has the Administrator role (to perform merge in Gerrit).
                                                                                      "},{"location":"use-cases/autotest-as-quality-gate/#create-applications","title":"Create Applications","text":"

                                                                                      To implement autotests as Quality Gates, follow the steps below:

                                                                                      1. Ensure the namespace is specified in the cluster settings. Click the Settings icon in the top right corner and select Cluster settings:

                                                                                        Cluster settings

                                                                                      2. Enter the name of the default namespace, then add the same namespace to the Allowed namespaces field and click the + button. You can also add other namespaces to the Allowed namespaces:

                                                                                        Specify namespace

                                                                                      3. Create several applications using the Create strategy. Navigate to the EDP tab, choose Components, click the + button:

                                                                                        Add component

                                                                                      4. Select Application and Create from template:

                                                                                        Create new component menu

                                                                                        Note

                                                                                        Please refer to the Add Application section for details.

                                                                                      5. On the Codebase info tab, define the following values and press the Proceed button:

                                                                                        • Git server: gerrit
                                                                                        • Git repo relative path: js-application
                                                                                        • Component name: js-application
                                                                                        • Description: js application
                                                                                        • Application code language: JavaScript
                                                                                        • Language version/Provider: Vue
                                                                                        • Build tool: NPM

                                                                                        Codebase info tab

                                                                                      6. On the Advanced settings tab, define the below values and push the Apply button:

                                                                                        • Default branch: main
                                                                                        • Codebase versioning type: default

                                                                                        Advanced settings tab

                                                                                      7. Repeat the procedure twice to create the go-application and python-application applications. These applications will have the following parameters:

                                                                                        go-application:

                                                                                        • Git server: gerrit
                                                                                        • Git repo relative path: go-application
                                                                                        • Component name: go-application
                                                                                        • Description: go application
                                                                                        • Application code language: Go
                                                                                        • Language version/Provider: Gin
                                                                                        • Build tool: Go
                                                                                        • Default branch: main
                                                                                        • Codebase versioning type: default

                                                                                        python-application:

                                                                                        • Git server: gerrit
                                                                                        • Git repo relative path: python-application
                                                                                        • Component name: python-application
                                                                                        • Description: python application
                                                                                        • Application code language: Python
                                                                                        • Language version/Provider: FastAPI
                                                                                        • Build tool: Python
                                                                                        • Default branch: main
                                                                                        • Codebase versioning type: default
                                                                                      8. In the Components tab, click one of the application names to enter the application menu:

                                                                                        Components list

                                                                                      9. Click the three dots (\u22ee) button and select Build:

                                                                                        Application menu

                                                                                      10. Click the down arrow (v) to observe and wait for the application to be built:

                                                                                        Application building

                                                                                      11. Click the application run name to watch the build logs in Tekton:

                                                                                        Tekton pipeline run

                                                                                      12. Wait till the build is successful:

                                                                                        Successful build

                                                                                      13. Repeat steps 8-12 for the rest of the applications.
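                                                                                        If you prefer the command line, the same build progress can be watched through the Tekton PipelineRun resources (this sketch assumes EDP and its Tekton pipelines live in the edp namespace):

                                                                                         kubectl get pipelineruns -n edp -w\n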

                                                                                      "},{"location":"use-cases/autotest-as-quality-gate/#create-autotests","title":"Create Autotests","text":"

                                                                                      The steps below describe how to create autotests in EDP:

                                                                                      1. Create a couple of autotests using the Clone strategy. Navigate to the EDP tab, choose Components, and click the + button. Select Autotest and Clone project:

                                                                                        Add autotest

                                                                                        Note

                                                                                        Please refer to the Add Autotest section for details.

                                                                                      2. On the Codebase info tab, define the following values and press the Proceed button:

                                                                                        • Repository URL: https://github.com/SergK/autotests.git
                                                                                        • Git server: gerrit
                                                                                        • Git repo relative path: demo-autotest-gradle
                                                                                        • Component name: demo-autotest-gradle
                                                                                        • Description: demo-autotest-gradle
                                                                                        • Autotest code language: Java
                                                                                        • Language version/framework: Java11
                                                                                        • Build tool: Gradle
                                                                                        • Autotest report framework: Allure

                                                                                        Codebase info tab for autotests

                                                                                      3. On the Advanced settings tab, leave the settings as is and click the Apply button:

                                                                                        Advanced settings tab for autotests

                                                                                      4. Repeat the steps 1-3 to create one more autotest with the parameters below:

                                                                                        • Repository URL: https://github.com/Rolika4/autotests.git
                                                                                        • Git server: gerrit
                                                                                        • Git repo relative path: demo-autotest-maven
                                                                                        • Component name: demo-autotest-maven
                                                                                        • Description: demo-autotest-maven
                                                                                        • Autotest code language: Java
                                                                                        • Language version/framework: Java11
                                                                                        • Build tool: Maven
                                                                                        • Autotest report framework: Allure
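                                                                                        To cross-check from the command line that both autotest codebases were registered, you can list the Codebase custom resources (assuming EDP is installed in the edp namespace and exposes codebases as custom resources):

                                                                                         kubectl get codebases -n edp\n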
                                                                                      "},{"location":"use-cases/autotest-as-quality-gate/#create-cd-pipeline","title":"Create CD Pipeline","text":"

                                                                                      Now that the applications and autotests are created, create a pipeline for them by following the steps below:

                                                                                      1. Navigate to the CD Pipelines tab and click the + button:

                                                                                        CD pipelines tab

                                                                                      2. On the Pipeline tab, in the Pipeline name field, enter demo-pipeline:

                                                                                        Pipeline tab

                                                                                      3. On the Applications tab, add all three applications, specify the main branch for all of them, and check Promote in pipeline for the Go and JavaScript applications:

                                                                                        Applications tab

                                                                                      4. On the Stages tab, click the Add stage button to open the Create stage menu:

                                                                                        Stages tab

                                                                                      5. In the Create stage menu, specify the following parameters and click Apply:

                                                                                        • Cluster: In cluster
                                                                                        • Stage name: dev
                                                                                        • Description: dev
                                                                                        • Trigger type: manual
                                                                                        • Quality gate type: Autotests
                                                                                        • Step name: dev
                                                                                        • Autotest: demo-autotest-gradle
                                                                                        • Autotest branch: main

                                                                                        Create stage menu

                                                                                      6. After the dev stage is added, click Apply:

                                                                                        Create stage menu

                                                                                      7. After the pipeline is created, click its name to open the pipeline details page:

                                                                                        Enter pipeline

                                                                                      8. In the pipeline details page, click the Create button to create a new stage:

                                                                                        Create a new stage

                                                                                      9. In the Create stage menu, specify the following parameters:

                                                                                        • Cluster: In cluster
                                                                                        • Stage name: sit
                                                                                        • Description: sit
                                                                                        • Trigger type: manual
                                                                                        • Quality gate type: Autotests
                                                                                        • Step name: dev
                                                                                        • Autotest: demo-autotest-maven
                                                                                        • Autotest branch: main
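                                                                                        Optionally, verify from the command line that the pipeline and both stages exist as custom resources (this assumes the EDP namespace is edp and the cdpipelines/stages resource names of your EDP version):

                                                                                         kubectl get cdpipelines -n edp\nkubectl get stages -n edp\n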
                                                                                      "},{"location":"use-cases/autotest-as-quality-gate/#run-autotests","title":"Run Autotests","text":"

                                                                                      After the CD pipeline is created, deploy applications and run autotests by following the steps below:

                                                                                      1. Click the dev stage name to expand its details, specify image versions for each of the applications in the Image stream version field and click Deploy:

                                                                                        Deploy applications

                                                                                      2. Once applications are built, scroll down to Quality Gates and click Promote:

                                                                                        Promote in pipeline

                                                                                      3. Once the promotion procedure is finished, the promoted applications become available in the Sit stage, where you can select image stream versions for them. The non-promoted application stays grey in the stage and cannot be deployed:

                                                                                        Sit stage

                                                                                      "},{"location":"use-cases/autotest-as-quality-gate/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Application
                                                                                      • Add Autotest
                                                                                      • Add CD Pipeline
                                                                                      • Add Quality Gate
                                                                                      "},{"location":"use-cases/external-secrets/","title":"Secured Secrets Management for Application Deployment","text":"

                                                                                      This Use Case demonstrates how to securely manage sensitive data, such as passwords, API keys, and other credentials, that are consumed by an application during development or at runtime in production. The approach involves storing sensitive data in an external secret store located in a \"vault\" namespace (this can be HashiCorp Vault, AWS Secrets Manager, or any other provider). The process implies transmitting confidential information from the vault namespace to the deployment namespace in order to establish a connection to a database.

                                                                                      "},{"location":"use-cases/external-secrets/#roles","title":"Roles","text":"

                                                                                      This documentation is tailored for Developers and Team Leads.

                                                                                      "},{"location":"use-cases/external-secrets/#goals","title":"Goals","text":"
                                                                                      • Make confidential information usage secure in the deployment environment.
                                                                                      "},{"location":"use-cases/external-secrets/#preconditions","title":"Preconditions","text":"
                                                                                      • EDP instance is configured with Gerrit, Tekton and Argo CD;
                                                                                      • External Secrets is installed;
                                                                                      • Developer has access to the EDP instances using the Single-Sign-On approach;
                                                                                      • Developer has the Administrator role (to perform merge in Gerrit);
                                                                                      • Developer has access to manage secrets in demo-vault namespace.
                                                                                      "},{"location":"use-cases/external-secrets/#scenario","title":"Scenario","text":"

                                                                                      To use External Secrets with the EDP approach, follow the steps below:

                                                                                      "},{"location":"use-cases/external-secrets/#add-application","title":"Add Application","text":"

                                                                                      To begin, you will need an application first. Here are the steps to create it:

                                                                                      1. Open EDP Portal URL. Use the Sign-In option:

                                                                                        Logging screen

                                                                                      2. In the top right corner, enter the Cluster settings and ensure that both Default namespace and Allowed namespace are set:

                                                                                        Cluster settings

                                                                                      3. Create the new Codebase with the Application type using the Create strategy. To do this, click the EDP tab:

                                                                                        Cluster overview

                                                                                      4. Select the Components section under the EDP tab and push the + button:

                                                                                        Components tab

                                                                                      5. Select the Application Codebase type because we are going to deliver our application as a container and deploy it inside the Kubernetes cluster. Select the Create strategy to use predefined template:

                                                                                        Step codebase info

                                                                                      6. On the Application Info tab, define the following values and press the Proceed button:

                                                                                        • Application name: es-usage
                                                                                        • Default branch: master
                                                                                        • Application code language: Java
                                                                                        • Language version/framework: Java 17
                                                                                        • Build tool: Maven

                                                                                        Step application info

                                                                                      7. On the Advanced Settings tab, define the below values and push the Apply button:

                                                                                        • CI tool: Tekton
                                                                                        • Codebase versioning type: default

                                                                                        Step application info

                                                                                      8. Check the application status. It should be green:

                                                                                        Application status

                                                                                      "},{"location":"use-cases/external-secrets/#create-cd-pipeline","title":"Create CD Pipeline","text":"

                                                                                      This section outlines the process of establishing a CD pipeline within EDP Portal. There are two fundamental steps in this procedure:

                                                                                      • Build the application from the last commit of the master branch;
                                                                                      • Create a CD Pipeline to establish continuous delivery to the SIT environment.

                                                                                      To succeed with the steps above, follow the instructions below:

                                                                                      1. Create CD Pipeline. To enable application deployment, create a CD Pipeline with a single environment - System Integration Testing (SIT for short). Select the CD Pipelines section under the EDP tab and push the + button:

                                                                                        CD-Pipeline tab

                                                                                      2. On the Pipeline tab, define the following values and press the Proceed button:

                                                                                        • Pipeline name: deploy
                                                                                        • Deployment type: Container

                                                                                        Pipeline tab

                                                                                      3. On the Applications tab, add es-usage application, select master branch, leave Promote in pipeline unchecked and press the Proceed button:

                                                                                        Pipeline tab

                                                                                      4. On the Stage tab, add the sit stage with the values below and push the Apply button:

                                                                                        • Stage name: sit
                                                                                        • Description: System integration testing
                                                                                        • Trigger type: Manual. We plan to deploy applications to this environment manually
                                                                                        • Quality gate type: Manual
                                                                                        • Step name: approve

                                                                                          Stage tab

                                                                                      "},{"location":"use-cases/external-secrets/#configure-rbac-for-external-secret-store","title":"Configure RBAC for External Secret Store","text":"

                                                                                      Note

                                                                                      In this scenario, three namespaces are used: demo, which is the namespace where EDP is deployed, demo-vault, which is the vault where developers store secrets, and demo-deploy-sit, which is the namespace used for deploying the application. The target namespace name for deploying the application is formed with the pattern: <edp-namespace>-<cd_pipeline_name>-<stage_name>.

                                                                                      To make the system function properly, create the following resources:

                                                                                      1. Create namespace demo-vault to store secrets:

                                                                                         kubectl create namespace demo-vault\n
                                                                                      2. Create Secret:

                                                                                        apiVersion: v1\nkind: Secret\nmetadata:\nname: mongo\nnamespace: demo-vault\nstringData:\npassword: pass\nusername: user\ntype: Opaque\n
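                                                                                        Alternatively, an equivalent secret can be created imperatively:

                                                                                         kubectl create secret generic mongo -n demo-vault --from-literal=username=user --from-literal=password=pass\n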
                                                                                      3. Create Role to access the secret:

                                                                                        apiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\nnamespace: demo-vault\nname: external-secret-store\nrules:\n- apiGroups: [\"\"]\nresources:\n- secrets\nverbs:\n- get\n- list\n- watch\n- apiGroups:\n- authorization.k8s.io\nresources:\n- selfsubjectrulesreviews\nverbs:\n- create\n
                                                                                      4. Create RoleBinding:

                                                                                        apiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\nname: eso-from-edp\nnamespace: demo-vault\nsubjects:\n- kind: ServiceAccount\nname: secret-manager\nnamespace: demo-deploy-sit\nroleRef:\napiGroup: rbac.authorization.k8s.io\nkind: Role\nname: external-secret-store\n
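                                                                                        To confirm that the RBAC rules grant the expected access, you can impersonate the secret-manager service account (it is created later by the application Helm chart) and check its permissions in the demo-vault namespace:

                                                                                         kubectl auth can-i get secrets -n demo-vault --as=system:serviceaccount:demo-deploy-sit:secret-manager\n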
                                                                                      "},{"location":"use-cases/external-secrets/#add-external-secret-to-helm-chart","title":"Add External Secret to Helm Chart","text":"

                                                                                      Now that RBAC is configured properly, it is time to add the external secrets templates to the application Helm chart. Follow the instructions provided below:

                                                                                      1. Navigate to EDP Portal -> EDP -> Overview, and push the Gerrit link:

                                                                                        Overview page

                                                                                      2. Log in to Gerrit UI, select Repositories and select es-usage project:

                                                                                        Browse Gerrit repositories

                                                                                      3. In the Commands section of the project, push the Create Change button:

                                                                                        Create Change request

                                                                                      4. In the Create Change dialog, provide the branch master and fill in the Description (commit message) field and push the Create button:

                                                                                        Add external secrets templates\n

                                                                                        Create Change

                                                                                      5. Push the Edit button of the merge request and then the ADD/OPEN/UPLOAD button and add files:

                                                                                        Add files to repository

                                                                                        Once the file menu is opened, add the following files, clicking SAVE after editing each of them:

                                                                                        1. deploy-templates/templates/sa.yaml:

                                                                                          apiVersion: v1\nkind: ServiceAccount\nmetadata:\nname: secret-manager\nnamespace: demo-deploy-sit\n
                                                                                        2. deploy-templates/templates/secret-store.yaml:

                                                                                          apiVersion: external-secrets.io/v1beta1\nkind: SecretStore\nmetadata:\nname: demo\nnamespace: demo-deploy-sit\nspec:\nprovider:\nkubernetes:\nremoteNamespace: demo-vault\nauth:\nserviceAccount:\nname: secret-manager\nserver:\ncaProvider:\ntype: ConfigMap\nname: kube-root-ca.crt\nkey: ca.crt\n
                                                                                        3. deploy-templates/templates/external-secret.yaml:

                                                                                          apiVersion: external-secrets.io/v1beta1\nkind: ExternalSecret\nmetadata:\nname: mongo                            # target secret name\nnamespace: demo-deploy-sit    # target namespace\nspec:\nrefreshInterval: 1h\nsecretStoreRef:\nkind: SecretStore\nname: demo\ndata:\n- secretKey: username                   # target value property\nremoteRef:\nkey: mongo                          # remote secret key\nproperty: username                  # value will be fetched from this field\n- secretKey: password                   # target value property\nremoteRef:\nkey: mongo                          # remote secret key\nproperty: password                  # value will be fetched from this field\n
                                                                                        4. deploy-templates/templates/deployment.yaml. Add the environment variables for MongoDB, sourced from the secret, to the existing deployment configuration:

                                                                                                    env:\n- name: MONGO_USERNAME\nvalueFrom:\nsecretKeyRef:\nname: mongo\nkey: username\n- name: MONGO_PASSWORD\nvalueFrom:\nsecretKeyRef:\nname: mongo\nkey: password\n
                                                                                      6. Push the Publish Edit button.

                                                                                      7. As soon as the review pipeline is finished and you get Verified +1 from CI, the change is ready for review. Click Mark as Active -> Code-Review +2 -> Submit:

                                                                                        Apply change

                                                                                      "},{"location":"use-cases/external-secrets/#deploy-application","title":"Deploy Application","text":"

                                                                                      Deploy the application by following the steps provided below:

                                                                                      1. When the build pipeline is finished, navigate to EDP Portal -> EDP -> CD-Pipeline and select the deploy pipeline.

                                                                                      2. Deploy the initial version of the application to the SIT environment:

                                                                                        • Select the sit stage from the Stages tab;
                                                                                        • In the Image stream version field, select the latest version and push the Deploy button.
                                                                                      3. Ensure application status is Healthy and Synced:

                                                                                        CD-Pipeline status

                                                                                      "},{"location":"use-cases/external-secrets/#check-application-status","title":"Check Application Status","text":"

                                                                                      To ensure the application is deployed successfully, do the following:

                                                                                      1. Check that the resources are deployed:

                                                                                        kubectl get secretstore -n demo-deploy-sit\nNAME                           AGE     STATUS   READY\ndemo                           5m57s   Valid    True\n
                                                                                        kubectl get externalsecret -n demo-deploy-sit\nNAME    STORE                          REFRESH INTERVAL   STATUS         READY\nmongo   demo                           1h                 SecretSynced   True\n
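                                                                                        Optionally, decode the synced secret to confirm the values were propagated from the demo-vault namespace:

                                                                                         kubectl get secret mongo -n demo-deploy-sit -o jsonpath='{.data.username}' | base64 -d\n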
                                                                                      2. In the top right corner, enter the Cluster settings and add demo-deploy-sit to the Allowed namespace.

                                                                                      3. Navigate to EDP Portal -> Configuration -> Secrets and ensure that the secret was created:

                                                                                        Secrets

                                                                                      4. Navigate to EDP Portal -> Workloads -> Pods and select the deployed application:

                                                                                        Pod information
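                                                                                        As an additional check, you can verify that the environment variables were injected into the running container (the deployment name is assumed here to match the application name es-usage):

                                                                                         kubectl exec -n demo-deploy-sit deploy/es-usage -- printenv | grep MONGO_\n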

                                                                                      "},{"location":"use-cases/external-secrets/#related-articles","title":"Related Articles","text":"
                                                                                      • Use Cases
                                                                                      • Add Application
                                                                                      • CD Pipeline
                                                                                      "},{"location":"use-cases/tekton-custom-pipelines/","title":"Deploy Application With Custom Build Tool/Framework","text":"

                                                                                      This Use Case describes the procedure of adding custom Tekton libraries that include pipelines with tasks. In addition, the process of modifying custom pipelines and tasks is covered as well.

                                                                                      "},{"location":"use-cases/tekton-custom-pipelines/#goals","title":"Goals","text":"
                                                                                      • Add custom Tekton pipeline library;
                                                                                      • Modify existing pipelines and tasks in a custom Tekton library.
                                                                                      "},{"location":"use-cases/tekton-custom-pipelines/#preconditions","title":"Preconditions","text":"
                                                                                      • EDP instance with Gerrit and Tekton inside is configured;
                                                                                      • Developer has access to the EDP instances using the Single-Sign-On approach;
                                                                                      • Developer has the Administrator role to perform merge in Gerrit.
                                                                                      "},{"location":"use-cases/tekton-custom-pipelines/#scenario","title":"Scenario","text":"

                                                                                      Note

                                                                                      This case is based on our predefined repository and application. Your case may be different.

                                                                                      To create and then modify a custom Tekton library, please follow the steps below:

                                                                                      "},{"location":"use-cases/tekton-custom-pipelines/#add-custom-application-to-edp","title":"Add Custom Application to EDP","text":"
                                                                                      1. Open EDP Portal URL. Use the Sign-In option:

                                                                                        Logging screen

                                                                                      2. In the top right corner, enter the Cluster settings and ensure that both Default namespace and Allowed namespace are set:

                                                                                        Cluster settings

                                                                                      3. Create the new Codebase with the Application type using the Clone strategy. To do this, click the EDP tab:

                                                                                        Cluster overview

                                                                                      4. Select the Components section under the EDP tab and push the create + button:

                                                                                        Components tab

                                                                                      5. Select the Application codebase type because the application is meant to be delivered as a container and deployed inside the Kubernetes cluster. Choose the Clone strategy and this example repository:

                                                                                        Step codebase info

                                                                                      6. In the Application Info tab, define the following values and click the Proceed button:

                                                                                        • Application name: tekton-hello-world
                                                                                        • Default branch: master
                                                                                        • Application code language: Other
                                                                                        • Language version/framework: go
                                                                                        • Build tool: shell

                                                                                        Application info

                                                                                        Note

                                                                                        These application details are required to match the Pipeline name gerrit-shell-go-app-build-default.

                                                                                        The PipelineRun name is formed with the help of TriggerTemplates in pipelines-library so the Pipeline name should correspond to the following structure:

                                                                                          pipelineRef:\n    name: gerrit-$(tt.params.buildtool)-$(tt.params.framework)-$(tt.params.cbtype)-build-$(tt.params.versioning-type)\n
                                                                                        The PipelineRun is created as soon as Gerrit (or, if configured, GitHub, GitLab) sends a payload during Merge Request events.
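                                                                                        You can confirm that a Pipeline with the expected name and the corresponding TriggerTemplates are present in the cluster (assuming EDP and its pipelines library are installed in the edp namespace):

                                                                                          kubectl get pipelines -n edp | grep gerrit-shell-go-app-build\nkubectl get triggertemplates -n edp\n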

                                                                                      7. In the Advanced Settings tab, define the below values and click the Apply button:

                                                                                        • CI tool: Tekton
                                                                                        • Codebase versioning type: default
                                                                                        • Leave Specify the pattern to validate a commit message empty.

                                                                                        Advanced settings

                                                                                      8. Check the application status. It should be green:

                                                                                        Application status

                                                                                        Now that the application is created successfully, proceed to adding the Tekton library.

                                                                                      "},{"location":"use-cases/tekton-custom-pipelines/#add-tekton-library","title":"Add Tekton Library","text":"
                                                                                      1. Select the Components section under the EDP tab and push the create + button:

                                                                                        Components tab

                                                                                      2. Create a new Codebase with the Library type using the Create strategy:

                                                                                        Step codebase info

                                                                                        Note

                                                                                        The EDP Create strategy will automatically pull the code for the Tekton Helm application from here.

                                                                                      3. In the Application Info tab, define the following values and click the Proceed button:

                                                                                        • Application name: custom-tekton-chart
                                                                                        • Default branch: master
                                                                                        • Application code language: Helm
                                                                                        • Language version/framework: Pipeline
                                                                                        • Build tool: Helm

                                                                                        Step codebase info

                                                                                      4. In the Advanced Settings tab, define the below values and click the Apply button:

                                                                                        • CI tool: Tekton
                                                                                        • Codebase versioning type: default
                                                                                        • Leave Specify the pattern to validate a commit message empty.

                                                                                        Advanced settings

                                                                                      5. Check the codebase status:

                                                                                        Codebase status

                                                                                      "},{"location":"use-cases/tekton-custom-pipelines/#modify-tekton-pipeline","title":"Modify Tekton Pipeline","text":"

                                                                                      Note

                                                                                      Our recommendation is to avoid modifying the default Tekton resources. Instead, we suggest creating and modifying your own custom Tekton library.

                                                                                      Now that the Tekton Helm library is created, it is time to clone, modify and then apply it to the Kubernetes cluster.

                                                                                      1. Generate SSH key to work with Gerrit repositories:

                                                                                        ssh-keygen -t ed25519 -C \"your_email@example.com\"\n
                                                                                      2. Log into Gerrit UI.

                                                                                      3. Go to Gerrit Settings -> SSH keys, paste your generated public SSH key to the New SSH key field and click ADD NEW SSH KEY:

                                                                                        Gerrit settings Gerrit settings

                                                                                      4. Browse Gerrit Repositories and select custom-tekton-chart project:

                                                                                        Browse Gerrit repositories

                                                                                      5. Clone the repository with SSH using Clone with commit-msg hook command:

                                                                                        Gerrit clone

                                                                                        Note

                                                                                        In case of the strict firewall configurations, please use the HTTP protocol to pull and configure the HTTP Credentials in Gerrit.

                                                                                      6. Examine the repository structure. It should look this way by default:

                                                                                        custom-tekton-chart\n  \u251c\u2500\u2500 Chart.yaml\n  \u251c\u2500\u2500 chart_schema.yaml\n  \u251c\u2500\u2500 ct.yaml\n  \u251c\u2500\u2500 lintconf.yaml\n  \u251c\u2500\u2500 templates\n  \u2502\u00a0\u00a0 \u251c\u2500\u2500 pipelines\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0 \u2514\u2500\u2500 hello-world\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gerrit-build-default.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gerrit-build-edp.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gerrit-build-lib-default.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gerrit-build-lib-edp.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gerrit-review-lib.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gerrit-review.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 github-build-default.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 github-build-edp.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 github-build-lib-default.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 github-build-lib-edp.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 github-review-lib.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 github-review.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gitlab-build-default.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gitlab-build-edp.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gitlab-build-lib-default.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gitlab-build-lib-edp.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u251c\u2500\u2500 gitlab-review-lib.yaml\n  \u2502\u00a0\u00a0 \u2502\u00a0\u00a0     \u2514\u2500\u2500 gitlab-review.yaml\n  \u2502\u00a0\u00a0 \u2514\u2500\u2500 tasks\n  \u2502\u00a0\u00a0     \u2514\u2500\u2500 task-hello-world.yaml\n  \u2514\u2500\u2500 values.yaml\n

                                                                                        Note

                                                                                        Change the values in the values.yaml file.

                                                                                        The gitProvider parameter is the Git hosting provider, Gerrit in this example. A similar approach can be applied to GitHub or GitLab.

                                                                                        The dnsWildCard parameter is the cluster DNS address.

                                                                                        The gerritSSHPort parameter is the SSH port of the Gerrit service on Kubernetes. Check the Gerrit port in the global section of your EDP installation.

                                                                                        Note

                                                                                        Our custom Helm chart includes the edp-tekton-common-library dependencies in the Chart.yaml file. This library allows using our predefined code snippets.

                                                                                        Here is an example of the filled in values.yaml file:

                                                                                        nameOverride: \"\"\nfullnameOverride: \"\"\n\nglobal:\n  gitProvider: gerrit\n  dnsWildCard: \"example.domain.com\"\n  gerritSSHPort: \"30009\"\n
                                                                                      7. Modify and add tasks or pipelines.

                                                                                        As an example, let's assume that we need to add the helm-lint pipeline task to the review pipeline. To implement this, insert the code below to the gerrit-review.yaml file underneath the hello task:

                                                                                            - name: hello\n      taskRef:\n        name: hello\n      runAfter:\n      - init-values\n      params:\n      - name: BASE_IMAGE\n        value: \"$(params.shell-image-version)\"\n      - name: username\n        value: \"$(params.username)\"\n      workspaces:\n        - name: source\n          workspace: shared-workspace\n\n    - name: helm-lint\n      taskRef:\n        kind: Task\n        name: helm-lint\n      runAfter:\n        - hello\n      params:\n        - name: EXTRA_COMMANDS\n          value: |\n            ct lint --validate-maintainers=false --charts deploy-templates/\n      workspaces:\n        - name: source\n          workspace: shared-workspace\n

                                                                                        Note

                                                                                        The helm-lint task references the default pipeline-library Helm chart, which is applied to the cluster during EDP installation.

                                                                                        The runAfter parameter shows that this Pipeline task will be run after the hello pipeline task.

                                                                                      8. Build Helm dependencies in the custom chart:

                                                                                        helm dependency update .\n
                                                                                      9. Ensure that the chart is valid and all the indentations are fine:

                                                                                        helm lint .\n

                                                                                        To validate if the values are substituted in the templates correctly, render the templated YAML files with the values using the following command. It generates and displays all the manifest files with the substituted values:

                                                                                        helm template .\n
                                                                                      10. Install the custom chart with the command below. You can also use the --dry-run flag to simulate the chart installation and catch possible errors:

                                                                                        helm upgrade --install edp-tekton-custom . -n edp --dry-run\n
                                                                                        helm upgrade --install edp-tekton-custom . -n edp\n
                                                                                      11. Check the created pipelines and tasks in the cluster:

                                                                                        kubectl get tasks -n edp\nkubectl get pipelines -n edp\n
                                                                                      12. Commit and push the modified Tekton Helm chart to Gerrit:

                                                                                        git add .\ngit commit -m \"Add Helm chart testing for go-shell application\"\ngit push origin HEAD:refs/for/master\n
                                                                                      13. Check the Gerrit code review for the custom Helm chart pipelines repository in Tekton:

                                                                                        Gerrit code review status

                                                                                      14. Go to Changes -> Open, click CODE-REVIEW and submit the merge request:

                                                                                        Gerrit merge Gerrit merge

                                                                                      15. Check the build Pipeline status for the custom Pipelines Helm chart repository in Tekton:

                                                                                        Tekton status

                                                                                      "},{"location":"use-cases/tekton-custom-pipelines/#create-application-merge-request","title":"Create Application Merge Request","text":"

                                                                                      Since we applied the Tekton library to the Kubernetes cluster in the previous step, let's test the review and build pipelines for our tekton-hello-world application.

                                                                                      Perform the below steps to merge new code (Merge Request) that passes the Code Review flow. For the steps below, we use Gerrit UI but the same actions can be performed using the command line and Git tool:

                                                                                      1. Log into Gerrit UI, select tekton-hello-world project, and create a change request.

                                                                                      2. Browse Gerrit Repositories and select tekton-hello-world project:

                                                                                        Browse Gerrit repositories

                                                                                      3. Clone the tekton-hello-world repository to make the necessary changes or click the Create Change button in the Commands section of the project to make changes via Gerrit GUI:

                                                                                        Create Change request

                                                                                      4. In the Create Change dialog, provide the branch master, write some text in the Description (commit message) and click the Create button:

                                                                                        Create Change

                                                                                      5. Click the Edit button of the merge request, open deployment-templates/values.yaml, and change the ingress.enabled flag from false to true:

                                                                                        Update values.yaml file Update values.yaml file

                                                                                      6. Check the Review Pipeline status. The helm-lint pipeline task should be displayed there:

                                                                                        Review Change

                                                                                      7. Review the deployment-templates/values.yaml file and push the SAVE & PUBLISH button. As soon as you get Verified +1 from the CI bot, the change is ready for review. Click the Mark as Active and Code-review buttons:

                                                                                        Review Change

                                                                                      8. Click the Submit button. Then, your code is merged to the main branch, triggering the Build Pipeline.

                                                                                        Review Change

                                                                                        Note

                                                                                        If push steps are added and configured in the build pipeline, it will produce a new version of the artifact, which will be available for deployment in EDP Portal.

                                                                                      9. Check the pipelines in the Tekton dashboard:

                                                                                        Tekton custom piplines Tekton custom piplines

                                                                                      What happens under the hood: 1) Gerrit sends a payload to the Tekton EventListener during a Merge Request event; 2) the EventListener catches it with the help of the Interceptor; 3) the TriggerTemplate creates a PipelineRun.
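                                                                                      These trigger components can be inspected directly in the cluster (assuming the edp namespace):

                                                                                       kubectl get eventlisteners,triggertemplates,triggerbindings -n edp\n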

                                                                                      The detailed scheme is shown below:

graph LR;
    A[Gerrit events] --> |Payload| B(Tekton EventListener) --> C(Tekton Interceptor CEL filter) --> D(TriggerTemplate) --> E(PipelineRun)

This chart uses the core of the common-library and pipelines-library charts, with custom resources built on top of them.
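To make this flow more concrete, below is a minimal sketch of the trigger wiring. The resource names, the CEL filter expression, and the assumed Gerrit event type are illustrative only and will differ from the actual pipelines-library manifests:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: gerrit-listener                 # illustrative name
spec:
  triggers:
    - name: gerrit-merge
      interceptors:
        - ref:
            name: cel                   # CEL interceptor filters the Gerrit payload
          params:
            - name: filter
              value: "body.type == 'change-merged'"   # assumed event type
      bindings:
        - ref: gerrit-binding           # TriggerBinding: maps payload fields to parameters
      template:
        ref: gerrit-build-template      # TriggerTemplate that creates the PipelineRun
```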

                                                                                      "},{"location":"use-cases/tekton-custom-pipelines/#related-articles","title":"Related Articles","text":"
                                                                                      • Tekton Overview
                                                                                      • Add Application using EDP Portal
                                                                                      "},{"location":"user-guide/","title":"Overview","text":"

                                                                                      The EDP Portal user guide is intended for developers and provides details on working with EDP Portal, different codebase types, and EDP CI/CD flow.

                                                                                      "},{"location":"user-guide/#edp-portal","title":"EDP Portal","text":"

EDP Portal is a central management tool in the EDP ecosystem that provides a simple way to define pipelines, project resources, and new technologies. Using EDP Portal, you can manage the following business entities:

• Create such codebase types as Applications, Libraries, Autotests and Infrastructures;
                                                                                      • Create/Update CD Pipelines;
                                                                                      • Add external Git servers and Clusters.

                                                                                      Overview page

• Navigation bar - consists of the following sections: Overview, Marketplace, Components, CD Pipelines, and Configuration.
• Top panel bar - contains the documentation link, notifications, EDP Portal settings, and cluster settings, such as default and allowed namespaces.
• Main links - displays links to the major configured tools, the management tool, and the OpenShift cluster.
• Filters - used for searching and filtering the namespaces.

EDP Portal is a complete tool that allows you to manage and control the codebases (applications, autotests, libraries and infrastructures) added to the environment, as well as to create a CD pipeline.

                                                                                      Inspect the main features available in EDP Portal by following the corresponding link:

                                                                                      • Add Application
                                                                                      • Add Autotest
                                                                                      • Add Library
                                                                                      • Add Git Server
                                                                                      • Add CD Pipeline
                                                                                      • Add Quality Gate
                                                                                      "},{"location":"user-guide/add-application/","title":"Add Application","text":"

Portal allows you to create, clone, and import an application and add it to the environment. An application can also be deployed in Gerrit (if the Clone or Create strategy is used) with the Code Review and Build pipelines built in Jenkins/Tekton.

To add an application, navigate to the Components section on the navigation bar and click Create (the plus sign icon in the lower-right corner of the screen). Once clicked, the Create new component dialog appears; select Application and choose one of the strategies described later on this page. You can create an application in YAML or via the two-step menu in the dialog.

                                                                                      "},{"location":"user-guide/add-application/#create-application-in-yaml","title":"Create Application in YAML","text":"

                                                                                      Click Edit YAML in the upper-right corner of the Create Application dialog to open the YAML editor and create the Application.

                                                                                      Edit YAML

                                                                                      To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create Application dialog.

                                                                                      To save the changes, select the Save & Apply button.
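For orientation, a Codebase resource for an application created from a template might look roughly like the sketch below. The field values are illustrative, and the exact schema should be verified against the Codebase CRD installed in your cluster:

```yaml
apiVersion: v2.edp.epam.com/v1
kind: Codebase
metadata:
  name: my-java-app              # illustrative component name
spec:
  type: application
  strategy: create               # "Create from template" strategy
  lang: Java
  framework: java17
  buildTool: maven
  defaultBranch: main
  gitServer: gerrit
  gitUrlPath: /my-java-app
  emptyProject: false
  versioning:
    type: edp
    startFrom: 0.1.0-SNAPSHOT    # used only with the edp versioning type
```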

                                                                                      "},{"location":"user-guide/add-application/#create-application-via-ui","title":"Create Application via UI","text":"

The Create Application dialog contains two steps:

                                                                                      • The Codebase Info Menu
                                                                                      • The Advanced Settings Menu
                                                                                      "},{"location":"user-guide/add-application/#codebase-info-menu","title":"Codebase Info Menu","text":"

                                                                                      Follow the instructions below to fill in the fields of the Codebase Info menu:

                                                                                      1. In the Create new component menu, select Application:

                                                                                        Application info

                                                                                      2. Select the necessary configuration strategy. There are three configuration strategies:

• Create from template - creates a project from a template according to the application language, build tool, and framework. This strategy is recommended for projects that start developing their applications from scratch.
• Import project - allows configuring a replication from the Git server. While importing the existing repository, select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                                                                        Note

In order to use the Import project strategy, make sure to configure it according to the Integrate GitLab/GitHub With Jenkins or Integrate GitLab/GitHub With Tekton page.

• Clone project - clones the indicated repository into EPAM Delivery Platform. While cloning the existing repository, it is required to fill in the Repository URL field as well:

                                                                                        Clone application

                                                                                        In our example, we will use the Create from template strategy:

                                                                                        Create application

                                                                                        1. Select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                                                                        2. Type the name of the application in the Component name field by entering at least two characters and by using the lower-case letters, numbers and inner dashes.

                                                                                        3. Type the application description.

                                                                                        4. To create an application with an empty repository in Gerrit, select the Empty project check box.

                                                                                        5. Select any of the supported application languages with their providers in the Application Code Language field:

• Java - selecting Java allows choosing a specific Java version (8, 11, and 17 are available).
                                                                                          • JavaScript - selecting JavaScript allows using React, Vue, Angular, Express, Next.js and Antora frameworks.
                                                                                          • Python - selecting Python allows using the Python v.3.8, FastAPI, Flask frameworks.
                                                                                          • Go - selecting Go allows using the Beego, Gin and Operator SDK frameworks.
                                                                                          • C# - selecting C# allows using the .Net v.3.1 and .Net v.6.0 frameworks.
                                                                                          • Helm - selecting Helm allows using the Helm framework.
                                                                                          • Other - selecting Other allows extending the default code languages when creating a codebase with the clone/import strategy. To add another code language, inspect the Add Other Code Language section.

                                                                                          Note

The Create from template strategy does not allow customizing the default code language set.

                                                                                        6. Select necessary Language version/framework depending on the Application code language field.

                                                                                        7. Choose the necessary build tool in the Build Tool field:

                                                                                          • Java - selecting Java allows using the Gradle or Maven tool.
                                                                                          • JavaScript - selecting JavaScript allows using the NPM tool.
                                                                                          • C# - selecting C# allows using the .Net tool.
                                                                                          • Python - selecting Python allows using Python tool.
                                                                                          • Go - selecting Go allows using Go tool.
                                                                                          • Helm - selecting Helm allows using Helm tool.

                                                                                          Note

The Build Tool field offers the default tools and can be changed in accordance with the selected code language.

                                                                                          Note

                                                                                          Tekton pipelines offer built-in support for Java Maven Multi-Module projects. These pipelines are capable of recognizing Java deployable modules based on the information in the pom.xml file and performing relevant deployment actions. It's important to note that although the Dockerfile is typically located in the root directory, Kaniko, the tool used for building container images, uses the targets folder within the deployable module's context. For a clear illustration of a Multi-Module project structure, please refer to this example on GitHub, which showcases a commonly used structure for Java Maven Multi-Module projects.

                                                                                      "},{"location":"user-guide/add-application/#advanced-settings-menu","title":"Advanced Settings Menu","text":"

                                                                                      The Advanced Settings menu should look similar to the picture below:

                                                                                      Advanced settings

                                                                                      Follow the instructions below to fill in the fields of the Advanced Setting menu:

                                                                                      a. Specify the name of the Default branch where you want the development to be performed.

                                                                                      Note

                                                                                      The default branch cannot be deleted. For the Clone project and Import project strategies: if you want to use the existing branch, enter its name into this field.

                                                                                      b. Select the necessary codebase versioning type:

• default - using the default versioning type, in order to specify the version of the current artifacts, images, and tags in the Version Control System, a developer should navigate to the corresponding file and change the version manually.
• edp - using the edp versioning type, a developer indicates the version number that will be used for all the artifacts stored in the artifactory: binaries, pom.xml, metadata, etc. The version stored in the repository (e.g. pom.xml) is not affected or used; this versioning type overrides any version stored in the repository files without changing the actual file.

                                                                                        When selecting the edp versioning type, the extra field will appear:

                                                                                        Edp versioning

                                                                                      Type the version number from which you want the artifacts to be versioned.

                                                                                      Note

                                                                                      The Start Version From field should be filled out in compliance with the semantic versioning rules, e.g. 1.2.3 or 10.10.10. Please refer to the Semantic Versioning page for details.

c. Specify the pattern to validate a commit message. Use a regular expression to indicate the pattern that is followed on the project to validate a commit message in the code review pipeline. An example of the pattern: ^\[PROJECT_NAME-\d{4}\]:.*$ (a commit message such as [PROJECT_NAME-0001]: Enable ingress would pass this check).

                                                                                      JIRA integration

                                                                                      d. Select the Integrate with Jira Server check box in case it is required to connect Jira tickets with the commits and have a respective label in the Fix Version field.

                                                                                      Note

To adjust the Jira integration functionality, first apply the necessary changes described on the Adjust Jira Integration page, and set up the VCS integration as described on the Adjust VCS Integration With Jira page. Pay attention that the Jira integration feature is not available when using the GitLab CI tool.

                                                                                      e. In the Jira Server field, select the Jira server.

                                                                                      f. Specify the pattern to find a Jira ticket number in a commit message. Based on this pattern, the value from EDP will be displayed in Jira. Combine several variables to obtain the desired value.

                                                                                      Note

                                                                                      The GitLab CI tool is available only with the Import strategy and makes the Jira integration feature unavailable.

                                                                                      Mapping fields

                                                                                      g. In the Mapping field name section, specify the names of the Jira fields that should be filled in with attributes from EDP:

                                                                                      1. Select the name of the field in a Jira ticket from the Mapping field name drop-down menu. The available fields are the following: Fix Version/s, Component/s and Labels.

                                                                                      2. Click the Add button to add the mapping field name.

                                                                                      3. Enter Jira pattern for the field name:

                                                                                        • For the Fix Version/s field, select the EDP_VERSION variable that represents an EDP upgrade version, as in 2.7.0-SNAPSHOT. Combine variables to make the value more informative. For example, the pattern EDP_VERSION-EDP_COMPONENT will be displayed as 2.7.0-SNAPSHOT-nexus-operator in Jira.
                                                                                        • For the Component/s field, select the EDP_COMPONENT variable that defines the name of the existing repository. For example, nexus-operator.
                                                                                        • For the Labels field, select the EDP_GITTAG variable that defines a tag assigned to the commit in GitHub. For example, build/2.7.0-SNAPSHOT.59.
                                                                                      4. Click the bin icon to remove the Jira field name.

                                                                                      h. Click the Apply button to add the application to the Applications list.

                                                                                      Note

After the application is added, inspect the Application Overview part.

                                                                                      Note

Since EDP v3.3.0, the CI tool field has been hidden. Now EDP Portal automatically defines the CI tool depending on which one is deployed with EDP. If both Jenkins and Tekton are deployed, EDP Portal chooses Tekton by default. To define the CI tool manually, operate with the spec.ciTool parameter.
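For example, the CI tool can be pinned directly in the Codebase resource. The fragment below is a hedged sketch (the value strings are assumed; only the relevant field is shown):

```yaml
# Fragment of a Codebase resource spec
spec:
  ciTool: tekton   # or "jenkins"; overrides the automatically detected CI tool
```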

                                                                                      "},{"location":"user-guide/add-application/#related-articles","title":"Related Articles","text":"
                                                                                      • Manage Applications
                                                                                      • Add CD Pipeline
                                                                                      • Add Other Code Language
                                                                                      • Adjust GitLab CI Tool
                                                                                      • Adjust Jira Integration
                                                                                      • Adjust VCS Integration With Jira
                                                                                      • Integrate GitHub/GitLab in Jenkins
                                                                                      • Integrate GitHub/GitLab in Tekton
                                                                                      • Manage Jenkins CI Pipeline Job Provisioner
                                                                                      • Manage Jenkins Agent
                                                                                      • Perf Server Integration
                                                                                      "},{"location":"user-guide/add-autotest/","title":"Add Autotest","text":"

Portal enables you to clone or import an autotest and add it to the environment, with its subsequent deployment in Gerrit (when the Clone strategy is used) and the Code Review pipeline built in Jenkins/Tekton, as well as to use it while working on an application under development. It is also possible to use autotests as quality gates in a newly created CD pipeline.

                                                                                      Info

                                                                                      Please refer to the Add Application section for the details on how to add an application codebase type. For the details on how to use autotests as quality gates, please refer to the Stages Menu section of the Add CD Pipeline documentation.

To add an autotest, navigate to the Components section on the navigation bar and click Create (the plus sign icon in the lower-right corner of the screen). Once clicked, the Create new component dialog appears; select Autotest and choose one of the strategies described later on this page. You can create an autotest in YAML or via the two-step menu in the dialog.

                                                                                      "},{"location":"user-guide/add-autotest/#create-autotest-in-yaml","title":"Create Autotest in YAML","text":"

                                                                                      Click Edit YAML in the upper-right corner of the Create Autotest dialog to open the YAML editor and create an autotest:

                                                                                      Edit YAML

                                                                                      To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create Autotest dialog.

                                                                                      To save the changes, select the Save & Apply button.
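As with applications, an autotest can be described declaratively. The sketch below mirrors the application example, adjusted for a cloned autotest; the repository URL and field names are illustrative and should be checked against your Codebase CRD:

```yaml
apiVersion: v2.edp.epam.com/v1
kind: Codebase
metadata:
  name: my-autotests                    # illustrative name
spec:
  type: autotests
  strategy: clone                       # "Clone project" strategy
  repository:
    url: https://git.example.com/team/my-autotests.git   # illustrative URL
  lang: Java
  framework: java11
  buildTool: maven
  testReportFramework: allure
  defaultBranch: main
```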

                                                                                      "},{"location":"user-guide/add-autotest/#create-autotest-via-ui","title":"Create Autotest via UI","text":"

The Create Autotest dialog contains two steps:

                                                                                      • The Codebase Info Menu
                                                                                      • The Advanced Settings Menu
                                                                                      "},{"location":"user-guide/add-autotest/#the-codebase-info-menu","title":"The Codebase Info Menu","text":"

                                                                                      There are two available strategies: clone and import.

                                                                                      1. The Create new component menu should look like the picture below:

                                                                                        Create new component menu

                                                                                      2. In the Repository onboarding strategy field, select the necessary configuration strategy:

• Clone project - clones the indicated repository into EPAM Delivery Platform. While cloning the existing repository, it is required to fill in the Repository URL field as well.
                                                                                        • Import project - allows configuring a replication from the Git server. While importing the existing repository, select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                                                                          Note

In order to use the Import project strategy, make sure to configure it according to the Integrate GitLab/GitHub With Jenkins or Integrate GitLab/GitHub With Tekton page.

                                                                                          In our example, we will use the Clone project strategy:

                                                                                          Clone autotest

                                                                                          1. While cloning the existing repository, it is required to fill in the Repository URL field.

                                                                                          2. Select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                                                                          3. Select the Repository credentials check box in case you clone the private repository, and fill in the repository login and password/access token.

                                                                                          4. Fill in the Component name field by entering at least two characters and by using the lower-case letters, numbers and inner dashes.

                                                                                          5. Type the necessary description in the Description field.

6. In the Autotest code language field, select the Java code language with its framework (specify Java 8 or Java 11 to be used) and get the default Maven build tool, OR add another code language. Selecting Other allows extending the default code languages and getting the necessary build tool; for details, inspect the Add Other Code Language section.

                                                                                            Note

Using the Create strategy does not allow customizing the default code language set.

                                                                                          7. Select the Java framework if Java is selected above.

8. The Build Tool field offers the default Maven tool; Gradle or another build tool can be selected in accordance with the chosen code language.

                                                                                          9. All the autotest reports will be created in the Allure framework that is available in the Autotest Report Framework field by default.

                                                                                      3. Click the Proceed button to switch to the next menu.

                                                                                      The Advanced Settings menu should look like the picture below:

                                                                                      Advanced settings

                                                                                      a. Specify the name of the default branch where you want the development to be performed.

                                                                                      Note

                                                                                      The default branch cannot be deleted.

                                                                                      b. Select the necessary codebase versioning type:

                                                                                      • default: Using the default versioning type, in order to specify the version of the current artifacts, images, and tags in the Version Control System, a developer should navigate to the corresponding file and change the version manually.
                                                                                      • edp: Using the edp versioning type, a developer indicates the version number from which all the artifacts will be versioned and, as a result, automatically registered in the corresponding file (e.g. pom.xml).

                                                                                        When selecting the edp versioning type, the extra field will appear:

                                                                                        Edp versioning

                                                                                        Type the version number from which you want the artifacts to be versioned.

                                                                                      Note

                                                                                      The Start Version From field must be filled out in compliance with the semantic versioning rules, e.g. 1.2.3 or 10.10.10. Please refer to the Semantic Versioning page for details.

c. Specify the pattern to validate a commit message. Use a regular expression to indicate the pattern that is followed on the project to validate a commit message in the code review pipeline. An example of the pattern: ^\[PROJECT_NAME-\d{4}\]:.*$

                                                                                      Jira integration

                                                                                      d. Select the Integrate with Jira Server check box in case it is required to connect Jira tickets with the commits and have a respective label in the Fix Version field.

                                                                                      Note

To adjust the Jira integration functionality, first apply the necessary changes described on the Adjust Jira Integration page and on the Adjust VCS Integration With Jira page. Pay attention that the Jira integration feature is not available when using the GitLab CI tool.

                                                                                      e. As soon as the Jira server is set, select it in the Jira Server field.

                                                                                      f. Specify the pattern to find a Jira ticket number in a commit message. Based on this pattern, the value from EDP will be displayed in Jira.

                                                                                      Mapping field name

                                                                                      g. In the Advanced Mapping section, specify the names of the Jira fields that should be filled in with attributes from EDP:

                                                                                      1. Select the name of the field in a Jira ticket. The available fields are the following: Fix Version/s, Component/s and Labels.

                                                                                      2. Click the Add button to add the mapping field name.

                                                                                      3. Enter Jira pattern for the field name:

                                                                                        • For the Fix Version/s field, select the EDP_VERSION variable that represents an EDP upgrade version, as in 2.7.0-SNAPSHOT. Combine variables to make the value more informative. For example, the pattern EDP_VERSION-EDP_COMPONENT will be displayed as 2.7.0-SNAPSHOT-nexus-operator in Jira.
• For the Component/s field, select the EDP_COMPONENT variable that defines the name of the existing repository. For example, nexus-operator.
• For the Labels field, select the EDP_GITTAG variable that defines a tag assigned to the commit in GitHub. For example, build/2.7.0-SNAPSHOT.59.
                                                                                      4. Click the bin icon to remove the Jira field name.

h. Click the Apply button to add the autotest to the Autotests list.

                                                                                      Note

After the autotest is added, inspect the Autotest Overview part.

                                                                                      Note

Since EDP v3.3.0, the CI tool field has been hidden. Now EDP Portal automatically defines the CI tool depending on which one is deployed with EDP. If both Jenkins and Tekton are deployed, EDP Portal chooses Tekton by default. To define the CI tool manually, operate with the spec.ciTool parameter.

                                                                                      "},{"location":"user-guide/add-autotest/#the-advanced-settings-menu","title":"The Advanced Settings Menu","text":""},{"location":"user-guide/add-autotest/#related-articles","title":"Related Articles","text":"
                                                                                      • Manage Autotests
                                                                                      • Add Application
                                                                                      • Add CD Pipelines
                                                                                      • Add Other Code Language
                                                                                      • Adjust GitLab CI Tool
                                                                                      • Adjust Jira Integration
                                                                                      • Adjust VCS Integration With Jira
                                                                                      • Integrate GitHub/GitLab in Jenkins
                                                                                      • Integrate GitHub/GitLab in Tekton
                                                                                      • Manage Jenkins CI Pipeline Job Provisioner
                                                                                      • Manage Jenkins Agent
                                                                                      • Perf Server Integration
                                                                                      "},{"location":"user-guide/add-cd-pipeline/","title":"Add CD Pipeline","text":"

                                                                                      Portal provides the ability to deploy an environment on your own and specify the essential components.

                                                                                      Navigate to the CD Pipelines section on the navigation bar and click Create (the plus sign icon in the lower-right corner of the screen). Once clicked, the Create CD Pipeline dialog will appear.

Creating a CD pipeline becomes available as soon as an application is created, including its provisioning in a branch and the necessary entities for the environment. You can create the CD pipeline in YAML or via the three-step menu in the dialog.

                                                                                      "},{"location":"user-guide/add-cd-pipeline/#create-cd-pipeline-in-yaml","title":"Create CD Pipeline in YAML","text":"

                                                                                      Click Edit YAML in the upper-right corner of the Create CD Pipeline dialog to open the YAML editor and create the CD Pipeline.

                                                                                      Edit YAML

                                                                                      To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create CD Pipeline dialog.

                                                                                      To save the changes, select the Save & Apply button.
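For orientation, a CDPipeline resource might look roughly like the sketch below. The names and the exact field set are illustrative and should be validated against the CDPipeline CRD in your installation:

```yaml
apiVersion: v2.edp.epam.com/v1
kind: CDPipeline
metadata:
  name: my-pipeline                # illustrative pipeline name
spec:
  deploymentType: container
  applications:
    - my-java-app
  applicationsToPromote:           # applications with "Promote in pipeline" enabled
    - my-java-app
  inputDockerStreams:              # codebase image stream used as the pipeline input
    - my-java-app-main
```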

                                                                                      "},{"location":"user-guide/add-cd-pipeline/#create-cd-pipeline-in-the-dialog","title":"Create CD Pipeline in the Dialog","text":"

The Create CD Pipeline dialog contains three steps:

                                                                                      • The Pipeline Menu
                                                                                      • The Applications Menu
                                                                                      • The Stages Menu
                                                                                      "},{"location":"user-guide/add-cd-pipeline/#the-pipeline-menu","title":"The Pipeline Menu","text":"

                                                                                      The Pipeline tab of the Create CD Pipeline menu should look like the picture below:

                                                                                      Create CD pipeline

                                                                                      1. Type the name of the pipeline in the Pipeline Name field by entering at least two characters and by using the lower-case letters, numbers and inner dashes.

                                                                                        Note

                                                                                        The namespace created by the CD pipeline has the following pattern combination: [edp namespace]-[cd pipeline name]-[stage name]. Please be aware that the namespace length should not exceed 63 symbols.

                                                                                      2. Select the deployment type from the drop-down list:

                                                                                        • Container - the pipeline will be deployed in a Docker container;
• Custom - this mode allows deploying non-container applications and customizing the Init stage of the CD pipeline.
                                                                                      3. Click the Proceed button to switch to the next menu.

                                                                                      "},{"location":"user-guide/add-cd-pipeline/#the-applications-menu","title":"The Applications Menu","text":"

The Applications tab of the Create CD Pipeline menu should look like the picture below:

                                                                                      CD pipeline applications

1. Select the necessary application from the drop-down menu.
                                                                                      2. Select the plus sign icon near the selected application to specify the necessary codebase Docker branch for the application (the output for the branch and other stages from other CD pipelines).
                                                                                      3. Select the application branch from the drop-down menu.
4. Select the Promote in pipeline check box in order to transfer the application from one stage to another by the specified codebase Docker branch. If the Promote in pipeline check box is not selected, the same codebase Docker stream will be deployed regardless of the stage, i.e. the codebase Docker stream input, which was selected for the pipeline, will always be used.

                                                                                        Note

                                                                                        The newly created CD pipeline has the following pattern combination: [pipeline name]-[branch name]. If there is another deployed CD pipeline stage with the respective codebase Docker stream (= image stream as an OpenShift term), the pattern combination will be as follows: [pipeline name]-[stage name]-[application name]-[verified].

                                                                                      5. Click the Proceed button to switch to the next menu.

                                                                                      "},{"location":"user-guide/add-cd-pipeline/#the-stages-menu","title":"The Stages Menu","text":"
1. Click the plus sign icon in the Stages menu and fill in the necessary fields in the Adding Stage window:

                                                                                        CD stages

                                                                                        Adding stage

                                                                                        a. Type the stage name;

                                                                                        Note

                                                                                        The namespace created by the CD pipeline has the following pattern combination: [cluster name]-[cd pipeline name]-[stage name]. Please be aware that the namespace length should not exceed 63 symbols.

                                                                                        b. Enter the description for this stage;

c. Select the trigger type. The key benefit of the automatic deploy feature is to keep environments up-to-date. The available trigger types are Manual and Auto. When the Auto trigger type is chosen, the CD pipeline initiates automatically once the image is built. Manual implies that the user has to deploy manually by clicking the Deploy button in the CD Pipeline menu. Please refer to the Architecture Scheme of CD Pipeline Operator page for additional details.

                                                                                        Note

In the Tekton deploy scenario, automatic deploy starts working only after the first manual deploy.

                                                                                        d. Select the job provisioner. In case of working with non-container-based applications, there is an option to use a custom job provisioner. Please refer to the Manage Jenkins CD Job Provision page for details.

                                                                                        e. Select the groovy-pipeline library;

                                                                                        f. Select the branch;

                                                                                        g. Add an unlimited number of quality gates by clicking a corresponding plus sign icon and remove them as well by clicking the recycle bin icon;

                                                                                        h. Type the step name, which will be displayed in Jenkins/Tekton, for every quality gate;

                                                                                        i. Select the quality gate type:

                                                                                        • Manual - means that the promoting process should be confirmed in Jenkins/Tekton manually;
                                                                                        • Autotests - means that the promoting process should be confirmed by the successful passing of the autotests.

                                                                                        In the additional fields, select the previously created autotest name (j) and specify its branch for the autotest that will be launched on the current stage (k).

                                                                                        Note

                                                                                        Execution sequence. The image promotion and execution of the pipelines depend on the sequence in which the environments are added.

                                                                                        l. Click the Apply button to display the stage in the Stages menu.

                                                                                        Continuous delivery menu

                                                                                      2. Edit the stage by clicking its name and applying changes, and remove the added stage by clicking the recycle bin icon next to its name.

                                                                                      3. Click the Apply button to start the provisioning of the pipeline. After the CD pipeline is added, the new project with the stage name will be created in OpenShift.
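For completeness, the stage configured above can also be expressed as a Stage resource. The sketch below is illustrative (particularly the quality gate and trigger values) and should be checked against the Stage CRD in your installation:

```yaml
apiVersion: v2.edp.epam.com/v1
kind: Stage
metadata:
  name: my-pipeline-qa             # illustrative name
spec:
  cdPipeline: my-pipeline
  name: qa
  order: 0
  triggerType: Manual              # or Auto
  qualityGates:
    - qualityGateType: autotests
      stepName: api-tests
      autotestName: my-autotests
      branchName: main
```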

                                                                                      "},{"location":"user-guide/add-cd-pipeline/#manage-cd-pipeline","title":"Manage CD Pipeline","text":"

As soon as the CD pipeline is provisioned and added to the CD Pipelines list, you can:

                                                                                      CD pipeline page

1. Create another CD pipeline by clicking the plus sign icon in the lower-right corner of the screen and performing the same steps as described in the Add CD Pipeline section.

                                                                                      2. Open CD pipeline data by clicking its link name. Once clicked, the following blocks will be displayed:

                                                                                        • General Info - displays common information about the CD pipeline, such as name and deployment type.
                                                                                        • Applications - displays the CD pipeline applications to promote.
                                                                                        • Stages - displays the CD pipeline stages and stage metadata (by selecting the information icon near the stage name); allows to add, edit and delete stages, as well as deploy or uninstall image stream versions of the related applications for a stage.
                                                                                        • Metadata - displays the CD pipeline name, namespace, creation date, finalizers, generation, resource version, and UID. Open this block by selecting the information icon near the options icon next to the CD pipeline name.
                                                                                      3. Edit the CD pipeline by selecting the options icon next to its name in the CD Pipelines list, and then selecting Edit. For details see the Edit Existing CD Pipeline section.

                                                                                      4. Delete the added CD pipeline by selecting the options icon next to its name in the CD Pipelines list, and then selecting Delete.

                                                                                        Info

                                                                                        In OpenShift, if the deployment fails with the ImagePullBackOff error, delete the POD.

                                                                                      5. Sort the existing CD pipelines in a table by clicking the sorting icons in the table header. When sorting by name, the CD pipelines will be displayed in alphabetical order. You can also sort the CD pipelines by their status.

                                                                                      6. Search the necessary CD pipeline by the namespace or by entering the corresponding name, language or the build tool into the Filter tool.

                                                                                      7. Select a number of CD pipelines displayed per page (15, 25 or 50 rows) and navigate between pages if the number of CD pipelines exceeds the capacity of a single page.

                                                                                      "},{"location":"user-guide/add-cd-pipeline/#edit-existing-cd-pipeline","title":"Edit Existing CD Pipeline","text":"

                                                                                      Edit the CD pipeline directly from the CD Pipelines overview page or when viewing the CD Pipeline data:

                                                                                      1. Select Edit in the options icon menu next to the CD pipeline name:

                                                                                        Edit CD pipeline on the CD Pipelines overview page

                                                                                        Edit CD pipeline when viewing the CD pipeline data

                                                                                      2. Apply the necessary changes (edit the list of applications for deploy, application branches, and promotion in the pipeline). Add new extra stages by clicking the plus sign icon and filling in the application branch and promotion in the pipeline.

                                                                                        Edit CD pipeline dialog

                                                                                      3. Select the Apply button to confirm the changes.

                                                                                      "},{"location":"user-guide/add-cd-pipeline/#add-a-new-stage","title":"Add a New Stage","text":"

                                                                                      In order to create a new stage for the existing CD pipeline, follow the steps below:

                                                                                      1. Navigate to the Stages block by clicking the CD pipeline name link in the CD Pipelines list.

                                                                                        Add CD pipeline stage

                                                                                      2. Select Create to open the Create stage dialog.

                                                                                      3. Click Edit YAML in the upper-right corner of the Create stage dialog to open the YAML editor and add a stage. Otherwise, fill in the required fields in the dialog. Please see the Stages Menu section for details.

                                                                                      4. Click the Apply button.

                                                                                      "},{"location":"user-guide/add-cd-pipeline/#edit-stage","title":"Edit Stage","text":"

                                                                                      In order to edit a stage for the existing CD pipeline, follow the steps below:

                                                                                      1. Navigate to the Stages block by clicking the CD pipeline name link in the CD Pipelines list.

                                                                                        Edit CD pipeline stage

                                                                                      2. Select the options icon related to the necessary stage and then select Edit.

                                                                                        Edit CD pipeline stage dialog

                                                                                      3. In the Edit Stage dialog, change the stage trigger type. See more about this field in the Stages Menu section.

                                                                                      4. Click the Apply button.

                                                                                      "},{"location":"user-guide/add-cd-pipeline/#delete-stage","title":"Delete Stage","text":"

                                                                                      Note

                                                                                      You cannot remove the last stage, as the CD pipeline does not exist without stages.

                                                                                      In order to delete a stage for the existing CD pipeline, follow the steps below:

                                                                                      1. Navigate to the Stages block by clicking the CD pipeline name link in the CD Pipelines list.

                                                                                        Delete CD pipeline stage

                                                                                      2. Select the options icon related to the necessary stage and then select Delete. After the confirmation, the CD stage is deleted with all its components: database record, Jenkins/Tekton pipeline, and cluster namespace.

                                                                                      "},{"location":"user-guide/add-cd-pipeline/#view-stage-data","title":"View Stage Data","text":"

                                                                                      To view the CD pipeline stage data for the existing CD pipeline, follow the steps below:

                                                                                      1. Navigate to the Stages block by clicking the CD pipeline name link in the CD Pipelines list.

                                                                                        Expand CD pipeline stage

                                                                                      2. Select the expand icon near the stage name. The following blocks will be displayed:

                                                                                        CD pipeline stage overview

                                                                                      • Applications - displays the status of the applications related to the stage and allows deploying the applications. Applications health and sync statuses are returned from the Argo CD tool.
                                                                                      • General Info - displays the stage status, CD pipeline, description, job provisioning, order, trigger type, and source.
                                                                                      • Quality Gates - displays the stage quality gate type, step name, autotest name, and branch name.
                                                                                      "},{"location":"user-guide/add-cd-pipeline/#deploy-application","title":"Deploy Application","text":"

                                                                                      Navigate to the Applications block of the stage and select an application. Select the image stream version from the drop-down list and click Deploy. The application will be deployed in the Argo CD tool as well.

                                                                                      Deploy the promoted application

                                                                                      To update or uninstall the application, select Update or Uninstall.

                                                                                      Update or uninstall the application

                                                                                      After this, the application will be updated or uninstalled in the Argo CD tool as well.

                                                                                      Note

                                                                                      In a nutshell, the Update button updates your image version in the Helm chart, whereas the Uninstall button deletes the Helm chart from the namespace where the pipeline is deployed.

                                                                                      "},{"location":"user-guide/add-cd-pipeline/#related-articles","title":"Related Articles","text":"
                                                                                      • Manage Jenkins CD Pipeline Job Provision
                                                                                      "},{"location":"user-guide/add-cluster/","title":"Add Cluster","text":"

Adding other clusters allows deploying applications to several clusters when creating a CD pipeline stage in EDP Portal.

                                                                                      To add a cluster, follow the steps below:

                                                                                      1. Navigate to the Configuration section on the navigation bar and select Clusters. The appearance differs depending on the chosen display option:

                                                                                        List optionTiled option

                                                                                        Configuration menu (List option)

                                                                                        Configuration menu (Tiled option)

                                                                                      2. Click the + button to enter the Create new cluster menu:

                                                                                        Add Cluster

                                                                                      3. Once clicked, the Create new cluster dialog will appear. You can create a Cluster in YAML or via UI:

                                                                                      Add cluster in YAMLAdd cluster via UI

                                                                                      To add cluster in YAML, follow the steps below:

                                                                                      • Click the Edit YAML button in the upper-right corner of the Create New Cluster dialog to open the YAML editor and create a Kubernetes secret.

                                                                                      Edit YAML

                                                                                      • To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create new cluster dialog.
                                                                                      • To save the changes, select the Save & Apply button.

To add a cluster via the UI, follow the steps below:

                                                                                      • To add a new cluster via the dialog menu, fill in the following fields in the Create New Cluster dialog:

                                                                                        • Cluster Name - enter a cluster name;
                                                                                        • Cluster Host - enter a cluster host;
                                                                                        • Cluster Token - enter a cluster token;
                                                                                        • Cluster Certificate - enter a cluster certificate.

                                                                                      Add Cluster

                                                                                      • Click the Apply button to add the cluster to the clusters list.

                                                                                      As a result, the Kubernetes secret will be created for further integration.

                                                                                      Currently, the EDP uses the shared Argo CD and the secret needs to be copied to the namespace where the Argo CD is installed.
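A minimal sketch of such a secret is shown below, assuming the standard Argo CD declarative cluster secret format (the argocd.argoproj.io/secret-type: cluster label plus name, server, and config fields). The exact secret generated by EDP Portal may differ, and all values here are placeholders.

apiVersion: v1\nkind: Secret\nmetadata:\n  name: external-cluster                  # placeholder cluster name\n  labels:\n    argocd.argoproj.io/secret-type: cluster\ntype: Opaque\nstringData:\n  name: external-cluster\n  server: https://api.external-cluster.example.com   # Cluster Host\n  config: |\n    {\n      \"bearerToken\": \"<cluster-token>\",\n      \"tlsClientConfig\": {\n        \"insecure\": false,\n        \"caData\": \"<base64-encoded-cluster-certificate>\"\n      }\n    }\n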

                                                                                      "},{"location":"user-guide/add-cluster/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Application
                                                                                      • Add Library
                                                                                      • Add Autotest
                                                                                      • Add CD Pipeline
                                                                                      "},{"location":"user-guide/add-custom-global-pipeline-lib/","title":"Add a Custom Global Pipeline Library","text":"

                                                                                      In order to add a new custom global pipeline library, perform the steps below:

                                                                                      1. Navigate to Jenkins and go to Manage Jenkins -> Configure System -> Global Pipeline Libraries.

                                                                                        Note

                                                                                        It is possible to configure as many libraries as necessary. Since these libraries will be globally usable, any pipeline in the system can utilize the functionality implemented in these libraries.

                                                                                      2. Specify the following values:

                                                                                        Add custom library

                                                                                        a. Library name: The name of a custom library.

b. Default version: The version to use by default; it can be a branch name, a tag, or a commit hash.

                                                                                        c. Load implicitly: If checked, scripts will automatically have access to this library without needing to request it via @Library. It means that there is no need to upload the library manually because it will be downloaded automatically during the build for each job.

                                                                                        d. Allow default version to be overridden: If checked, scripts may select a custom version of the library by appending @someversion in the @Library annotation. Otherwise, they are restricted to using the version selected here.

                                                                                        e. Include @Library changes in job recent changes: If checked, any changes in the library will be included in the changesets of a build, and changing the library would cause new builds to run for Pipelines that include this library. This can be overridden in the jenkinsfile: @Library(value=\"name@version\", changelog=true|false).

                                                                                        f. Cache fetched versions on controller for quick retrieval: If checked, versions fetched using this library will be cached on the controller. If a new library version is not downloaded during the build for some reason, remove the previous library version from cache in the Jenkins workspace.

                                                                                        Note

If the Default version is not defined, the pipeline must specify a version, for example, @Library('my-shared-library@master'). If the Allow default version to be overridden check box is enabled in the Shared Library\u2019s configuration, a @Library annotation may also override the default version defined for the library.

                                                                                        Source code management

                                                                                        g. Project repository: The URL of the repository

                                                                                        h. Credentials: The credentials for the repository.

                                                                                      3. Use the Custom Global Pipeline Libraries on the pipeline, for example:

                                                                                      Pipeline

                                                                                      @Library(['edp-library-stages', 'edp-library-pipelines', 'edp-custom-shared-library-name'])_\n\nBuild()\n

                                                                                      Note

                                                                                      edp-custom-shared-library-name is the name of the Custom Global Pipeline Library that should be added to the Jenkins Global Settings.

                                                                                      "},{"location":"user-guide/add-custom-global-pipeline-lib/#related-articles","title":"Related Articles","text":"
                                                                                      • Jenkins Official Documentation: Extending with Shared Libraries
                                                                                      "},{"location":"user-guide/add-git-server/","title":"Add Git Server","text":"

                                                                                      Important

                                                                                      This article describes how to add a Git Server when deploying EDP with Jenkins. When deploying EDP with Tekton, Git Server is created automatically.

                                                                                      Add Git servers to use the Import strategy for Jenkins and Tekton when creating an application, autotest or library in EDP Portal (Codebase Info step of the Create Application/Autotest/Library dialog). Enabling the Import strategy is a prerequisite to integrate EDP with Gitlab or GitHub.

                                                                                      Note

                                                                                      GitServer Custom Resource can be also created manually. See step 3 for Jenkins import strategy in the Integrate GitHub/GitLab in Jenkins article.

                                                                                      To add a Git server, navigate to the Git servers section on the navigation bar and click Create (the plus sign icon in the lower-right corner of the screen). Once clicked, the Create Git server dialog will appear. You can create a Git server in YAML or via the three-step menu in the dialog.

                                                                                      "},{"location":"user-guide/add-git-server/#create-git-server-in-yaml","title":"Create Git Server in YAML","text":"

                                                                                      Click Edit YAML in the upper-right corner of the Create Git server dialog to open the YAML editor and create a Git server.

                                                                                      Edit YAML

                                                                                      To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create Git server dialog.

                                                                                      To save the changes, select the Save & Apply button.
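For reference, a hedged sketch of a GitServer resource is shown below. The apiVersion, field names, and the referenced secret name are assumptions based on typical EDP GitServer definitions and may vary by EDP version; rely on the template offered by the YAML editor.

apiVersion: v2.edp.epam.com/v1\nkind: GitServer\nmetadata:\n  name: github\n  namespace: edp\nspec:\n  gitHost: github.com               # Host\n  gitUser: git                      # User\n  sshPort: 22                       # SSH port\n  httpsPort: 443                    # HTTPS port\n  nameSshKeySecret: ci-github       # assumption: secret holding the SSH key and token\n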

                                                                                      "},{"location":"user-guide/add-git-server/#create-git-server-in-the-dialog","title":"Create Git Server in the Dialog","text":"

                                                                                      Fill in the following fields:

                                                                                      Create Git server

                                                                                      • Git provider - select Gerrit, GitLab or GitHub.
                                                                                      • Host - enter a Git server endpoint.
                                                                                      • User - enter a user for Git integration.
                                                                                      • SSH port - enter a Git SSH port.
                                                                                      • HTTPS port - enter a Git HTTPS port.
                                                                                      • Private SSH key - enter a private SSH key for Git integration. To generate this key, follow the instructions of the step 1 for Jenkins in the Integrate GitHub/GitLab in Jenkins article.
• Access token - enter an access token for Git integration. To generate this token, go to your GitLab/GitHub account settings and create a personal access token (for example, GitHub: Settings -> Developer settings -> Personal access tokens; GitLab: Preferences -> Access Tokens).

                                                                                      Click the Apply button to add the Git server to the Git servers list. As a result, the Git Server object and the corresponding secret for further integration will be created.
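The sketch below illustrates what such a secret might look like; the secret name and key names (id_rsa, token) are assumptions and may differ depending on the Git provider and EDP version.

apiVersion: v1\nkind: Secret\nmetadata:\n  name: ci-github                   # assumption: matches nameSshKeySecret in the GitServer resource\n  namespace: edp\ntype: Opaque\nstringData:\n  id_rsa: |\n    -----BEGIN OPENSSH PRIVATE KEY-----\n    <private-ssh-key>\n    -----END OPENSSH PRIVATE KEY-----\n  token: <access-token>\n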

                                                                                      "},{"location":"user-guide/add-git-server/#related-articles","title":"Related Articles","text":"
                                                                                      • Integrate GitHub/GitLab in Jenkins
                                                                                      • Integrate GitHub/GitLab in Tekton
                                                                                      • GitHub Webhook Configuration
                                                                                      • GitLab Webhook Configuration
                                                                                      "},{"location":"user-guide/add-infrastructure/","title":"Add Infrastructure","text":"

EDP Portal allows you to create, clone, and import an infrastructure. Infrastructure codebases are used to create resources in a cloud provider.

To add an infrastructure, navigate to the Components section on the navigation bar and click Create (the plus sign icon in the lower-right corner of the screen). Once clicked, the Create new component dialog will appear; select Infrastructure and choose one of the strategies described later on this page. You can create an infrastructure in YAML or via the two-step menu in the dialog.

                                                                                      "},{"location":"user-guide/add-infrastructure/#create-infrastructure-in-yaml","title":"Create Infrastructure in YAML","text":"

                                                                                      Click Edit YAML in the upper-right corner of the Create Infrastructure dialog to open the YAML editor and create the Infrastructure.

                                                                                      Edit YAML

                                                                                      To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create Infrastructure dialog.

                                                                                      To save the changes, select the Save & Apply button.
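A hedged sketch of an infrastructure Codebase resource is shown below. The apiVersion, field names, and sample values (language, framework, build tool, versioning) are illustrative assumptions derived from the options described on this page and may differ between EDP releases.

apiVersion: v2.edp.epam.com/v1\nkind: Codebase\nmetadata:\n  name: terraform-aws-infra         # Component name (placeholder)\n  namespace: edp\nspec:\n  type: infrastructure\n  strategy: create                  # Create from template\n  lang: hcl                         # Infrastructure Code Language\n  framework: aws                    # Language version/framework\n  buildTool: terraform              # Build Tool\n  defaultBranch: main\n  emptyProject: false\n  gitServer: gerrit\n  versioning:\n    type: edp\n    startFrom: 0.1.0-SNAPSHOT       # Start Version From\n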

                                                                                      "},{"location":"user-guide/add-infrastructure/#create-infrastructure-via-ui","title":"Create Infrastructure via UI","text":"

                                                                                      The Create Infrastructure dialog contains the two steps:

                                                                                      • The Codebase Info Menu
                                                                                      • The Advanced Settings Menu
                                                                                      "},{"location":"user-guide/add-infrastructure/#codebase-info-menu","title":"Codebase Info Menu","text":"

                                                                                      Follow the instructions below to fill in the fields of the Codebase Info menu:

                                                                                      1. In the Create new component menu, select Infrastructure:

                                                                                        Infrastructure info

                                                                                      2. Select the necessary configuration strategy:

• Create from template \u2013 creates a project from a template in accordance with an infrastructure language, a build tool, and a framework.
                                                                                      • Import project - allows configuring a replication from the Git server. While importing the existing repository, select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                                                                        Note

In order to use the Import project strategy, make sure to configure it as described on the Integrate GitLab/GitHub With Jenkins or Integrate GitLab/GitHub With Tekton page.

                                                                                      • Clone project \u2013 clones the indicated repository into EPAM Delivery Platform. While cloning the existing repository, it is required to fill in the Repository URL field as well:

                                                                                        In our example, we will use the Create from template strategy:

                                                                                        Create infrastructure

                                                                                        1. Select the Git server from the drop-down list and define the Git repo relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                                                                        2. Type the name of the infrastructure in the Component name field by entering at least two characters and by using the lower-case letters, numbers and inner dashes.

                                                                                        3. Write the description in the Description field.

4. To create an infrastructure with an empty repository in Gerrit, select the Empty project check box.

  5. Select any of the supported infrastructure code languages with their providers in the Infrastructure Code Language field. So far, only HCL is supported.

                                                                                          Note

The Create from template strategy does not allow customizing the default code language set.

                                                                                        6. Select necessary Language version/framework depending on the Infrastructure code language field. So far, only AWS is supported.

7. Choose the necessary build tool in the Build Tool field. So far, only Terraform is supported.

                                                                                          Note

The Select Build Tool field offers the default tools and can be changed in accordance with the selected code language.

                                                                                      The Advanced Settings menu should look similar to the picture below:

                                                                                      Advanced settings

                                                                                      Follow the instructions below to fill in the fields of the Advanced Setting menu:

                                                                                      a. Specify the name of the Default branch where you want the development to be performed.

                                                                                      Note

                                                                                      The default branch cannot be deleted. For the Clone project and Import project strategies: if you want to use the existing branch, enter its name into this field.

                                                                                      b. Select the necessary codebase versioning type:

                                                                                      • default - using the default versioning type, in order to specify the version of the current artifacts, images, and tags in the Version Control System, a developer should navigate to the corresponding file and change the version manually.
• edp - using the edp versioning type, a developer indicates the version number that will be used for all the artifacts stored in the artifactory: binaries, pom.xml, metadata, etc. The version stored in the repository (e.g. pom.xml) will not be affected or used. This versioning overrides any version stored in the repository files without changing the actual files.

                                                                                        When selecting the edp versioning type, the extra field will appear:

                                                                                        Edp versioning

                                                                                      Type the version number from which you want the artifacts to be versioned.

                                                                                      Note

                                                                                      The Start Version From field should be filled out in compliance with the semantic versioning rules, e.g. 1.2.3 or 10.10.10. Please refer to the Semantic Versioning page for details.

c. Specify the pattern to validate a commit message. Use a regular expression to indicate the pattern that is followed on the project to validate a commit message in the code review pipeline. An example of the pattern: ^\\[PROJECT_NAME-\\d{4}\\]:.*$.

                                                                                      JIRA integration

                                                                                      d. Select the Integrate with Jira Server check box in case it is required to connect Jira tickets with the commits and have a respective label in the Fix Version field.

                                                                                      Note

To adjust the Jira integration functionality, first apply the necessary changes described on the Adjust Jira Integration page, and set up the Adjust VCS Integration With Jira. Pay attention that the Jira integration feature is not available when using the GitLab CI tool.

                                                                                      e. In the Jira Server field, select the Jira server.

                                                                                      f. Specify the pattern to find a Jira ticket number in a commit message. Based on this pattern, the value from EDP will be displayed in Jira. Combine several variables to obtain the desired value.

                                                                                      Note

                                                                                      The GitLab CI tool is available only with the Import strategy and makes the Jira integration feature unavailable.

                                                                                      Mapping fields

                                                                                      g. In the Mapping field name section, specify the names of the Jira fields that should be filled in with attributes from EDP:

                                                                                      1. Select the name of the field in a Jira ticket from the Mapping field name drop-down menu. The available fields are the following: Fix Version/s, Component/s and Labels.

                                                                                      2. Click the Add button to add the mapping field name.

                                                                                      3. Enter Jira pattern for the field name:

                                                                                        • For the Fix Version/s field, select the EDP_VERSION variable that represents an EDP upgrade version, as in 2.7.0-SNAPSHOT. Combine variables to make the value more informative. For example, the pattern EDP_VERSION-EDP_COMPONENT will be displayed as 2.7.0-SNAPSHOT-nexus-operator in Jira.
                                                                                        • For the Component/s field, select the EDP_COMPONENT variable that defines the name of the existing repository. For example, nexus-operator.
                                                                                        • For the Labels field, select the EDP_GITTAG variable that defines a tag assigned to the commit in GitHub. For example, build/2.7.0-SNAPSHOT.59.
                                                                                      4. Click the bin icon to remove the Jira field name.

h. Click the Apply button to add the infrastructure to the Components list.

                                                                                      Note

                                                                                      After the complete adding of the application, inspect the Application Overview part.

                                                                                      Note

Since EDP v3.3.0, the CI tool field has been hidden. Now EDP Portal automatically defines the CI tool depending on which one is deployed with EDP. If both Jenkins and Tekton are deployed, EDP Portal chooses Tekton by default. To define the CI tool manually, use the spec.ciTool parameter.

                                                                                      "},{"location":"user-guide/add-infrastructure/#advanced-settings-menu","title":"Advanced Settings Menu","text":""},{"location":"user-guide/add-infrastructure/#related-articles","title":"Related Articles","text":"
                                                                                      • Application Overview
                                                                                      • Add CD Pipelines
                                                                                      • Add Other Code Language
                                                                                      • Adjust GitLab CI Tool
                                                                                      • Adjust Jira Integration
                                                                                      • Adjust VCS Integration With Jira
                                                                                      • Enable VCS Import Strategy
                                                                                      • Manage Jenkins CI Pipeline Job Provisioner
                                                                                      • Manage Jenkins Agent
                                                                                      • Perf Server Integration
                                                                                      "},{"location":"user-guide/add-library/","title":"Add Library","text":"

EDP Portal helps to create, clone, and import a library and add it to the environment. A library can also be stored in Gerrit (if the Clone or Create strategy is used) with the Code Review and Build pipelines built in Jenkins/Tekton.

                                                                                      To add a library, navigate to the Components section on the navigation bar and click Create (the plus sign icon in the lower-right corner of the screen). Once clicked, the Create new component dialog will appear, then select Library and choose one of the strategies which will be described later in this page. You can create a library in YAML or via the two-step menu in the dialog.

                                                                                      Create new component menu

                                                                                      "},{"location":"user-guide/add-library/#create-library-in-yaml","title":"Create Library in YAML","text":"

                                                                                      Click Edit YAML in the upper-right corner of the Create Library dialog to open the YAML editor and create the Library.

                                                                                      Edit YAML

To edit YAML in the minimal editor, turn on the Use minimal editor toggle in the upper-right corner of the Create Library dialog.

                                                                                      To save the changes, select the Save & Apply button.
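A hedged sketch of a library Codebase resource is shown below; the apiVersion, field names, and sample values are assumptions for illustration only and may differ from what the YAML editor pre-fills in your EDP version.

apiVersion: v2.edp.epam.com/v1\nkind: Codebase\nmetadata:\n  name: my-shared-lib               # Component name (placeholder)\n  namespace: edp\nspec:\n  type: library\n  strategy: create                  # Create from template\n  lang: java                        # Library code language\n  framework: java11                 # Language version/framework (assumption)\n  buildTool: maven                  # Build Tool\n  defaultBranch: main\n  emptyProject: false\n  gitServer: gerrit\n  versioning:\n    type: default\n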

                                                                                      "},{"location":"user-guide/add-library/#create-library-via-ui","title":"Create Library via UI","text":"

                                                                                      The Create Library dialog contains the two steps:

                                                                                      • The Codebase Info Menu
                                                                                      • The Advanced Settings Menu
                                                                                      "},{"location":"user-guide/add-library/#the-codebase-info-menu","title":"The Codebase Info Menu","text":"
                                                                                      1. The Create new component menu should look like the following:

                                                                                        Create new component menu

                                                                                      2. In the Create new component menu, select the necessary configuration strategy. The choice will define the parameters you will need to specify:

• Create from template \u2013 creates a project from a template in accordance with a library language, a build tool, and a framework.
                                                                                        • Import project - allows configuring a replication from the Git server. While importing the existing repository, select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example.

                                                                                          Note

In order to use the Import project strategy, make sure to configure it as described on the Integrate GitLab/GitHub With Jenkins or Integrate GitLab/GitHub With Tekton page.

                                                                                        • Clone project \u2013 clones the indicated repository into EPAM Delivery Platform. While cloning the existing repository, it is required to fill in the Repository URL field as well:

                                                                                          Clone library

                                                                                          In our example, we will use the Create from template strategy:

                                                                                          Create library

                                                                                          1. While importing the existing repository, select the Git server from the drop-down list and define the relative path to the repository, such as /epmd-edp/examples/basic/edp-auto-tests-simple-example
                                                                                          2. Type the name of the library in the Component name field by entering at least two characters and by using the lower-case letters, numbers and inner dashes.
                                                                                          3. Type the library description.
                                                                                          4. To create a library with an empty repository in Gerrit, select the Empty project check box. The empty repository option is available only for the Create from template strategy.
                                                                                          5. Select any of the supported code languages with its framework in the Library code language field:

• Java \u2013 selecting Java allows choosing from the available Java versions.
• JavaScript - selecting JavaScript allows using the NPM tool.
• Python - selecting Python allows using Python v.3.8, FastAPI, and Flask.
• Groovy-pipeline - selecting Groovy-pipeline allows customizing the stages logic. For details, please refer to the Customize CD Pipeline page.
• Terraform - selecting Terraform allows using different Terraform versions via the Terraform version manager (tfenv). EDP supports all actions available in Terraform, thus providing the ability to modify the virtual infrastructure and launch some checks with the help of linters. For details, please refer to the Use Terraform Library in EDP page.
                                                                                            • Rego - this option allows using Rego code language with an Open Policy Agent (OPA) Library. For details, please refer to the Use Open Policy Agent page.
                                                                                            • Container - this option allows using the Kaniko tool for building the container images from a Dockerfile. For details, please refer to the CI Pipeline for Container page.
                                                                                            • Helm - this option allows using the chart testing lint (Pipeline) for Helm charts or using Helm chart as a set of other Helm charts organized according to the example.
                                                                                            • C# - selecting C# allows using .Net v.3.1 and .Net v.6.0.
                                                                                            • Other - selecting Other allows extending the default code languages when creating a codebase with the Clone/Import strategy. To add another code language, inspect the Add Other Code Language page.

                                                                                            Note

The Create from template strategy does not allow customizing the default code language set.

                                                                                          6. Select necessary Language version/framework depending on the Library code language field.

7. The Select Build Tool field offers the default tools and can be changed in accordance with the selected code language.

                                                                                      3. Click the Proceed button to switch to the next menu.

                                                                                      "},{"location":"user-guide/add-library/#the-advanced-settings-menu","title":"The Advanced Settings Menu","text":"

                                                                                      The Advanced Settings menu should look like the picture below:

                                                                                      Advanced settings

                                                                                      a. Specify the name of the default branch where you want the development to be performed.

                                                                                      Note

                                                                                      The default branch cannot be deleted.

                                                                                      b. Select the necessary codebase versioning type:

                                                                                      • default: Using the default versioning type, in order to specify the version of the current artifacts, images, and tags in the Version Control System, a developer should navigate to the corresponding file and change the version manually.
                                                                                      • edp: Using the edp versioning type, a developer indicates the version number from which all the artifacts will be versioned and, as a result, automatically registered in the corresponding file (e.g. pom.xml).

                                                                                      When selecting the edp versioning type, the extra field will appear:

                                                                                      EDP versioning

                                                                                      Type the version number from which you want the artifacts to be versioned.

                                                                                      Note

                                                                                      The Start Version From field should be filled out in compliance with the semantic versioning rules, e.g. 1.2.3 or 10.10.10. Please refer to the Semantic Versioning page for details.

c. Specify the pattern to validate a commit message. Use a regular expression to indicate the pattern that is followed on the project to validate a commit message in the code review pipeline. An example of the pattern: ^\\[PROJECT_NAME-\\d{4}\\]:.*$

                                                                                      Integrate with Jira server

                                                                                      d. Select the Integrate with Jira server check box in case it is required to connect Jira tickets with the commits and have a respective label in the Fix Version field.

                                                                                      Note

To adjust the Jira integration functionality, first apply the necessary changes described on the Adjust Jira Integration page, and set up the Adjust VCS Integration With Jira. Pay attention that the Jira integration feature is not available when using the GitLab CI tool.

                                                                                      e. As soon as the Jira server is set, select it in the Jira Server field.

                                                                                      f. Specify the pattern to find a Jira ticket number in a commit message. Based on this pattern, the value from EDP will be displayed in Jira.

                                                                                      Mapping fields

                                                                                      g. In the Advanced Mapping section, specify the names of the Jira fields that should be filled in with attributes from EDP:

                                                                                      1. Select the name of the field in a Jira ticket. The available fields are the following: Fix Version/s, Component/s and Labels.

                                                                                      2. Click the Add button to add the mapping field name.

                                                                                      3. Enter Jira pattern for the field name:

                                                                                        • For the Fix Version/s field, select the EDP_VERSION variable that represents an EDP upgrade version, as in 2.7.0-SNAPSHOT. Combine variables to make the value more informative. For example, the pattern EDP_VERSION-EDP_COMPONENT will be displayed as 2.7.0-SNAPSHOT-nexus-operator in Jira.
• For the Component/s field, select the EDP_COMPONENT variable that defines the name of the existing repository. For example, nexus-operator.
• For the Labels field, select the EDP_GITTAG variable that defines a tag assigned to the commit in GitHub. For example, build/2.7.0-SNAPSHOT.59.
                                                                                      4. Click the bin icon to remove the Jira field name.

                                                                                      h. Click the Apply button to add the library to the Libraries list.

                                                                                      Note

                                                                                      After the complete adding of the library, inspect the Library Overview part.

                                                                                      Note

Since EDP v3.3.0, the CI tool field has been hidden. Now EDP Portal automatically defines the CI tool depending on which one is deployed with EDP. If both Jenkins and Tekton are deployed, EDP Portal chooses Tekton by default. To define the CI tool manually, use the spec.ciTool parameter.

                                                                                      "},{"location":"user-guide/add-library/#related-articles","title":"Related Articles","text":"
                                                                                      • Manage Libraries
                                                                                      • Add CD Pipeline
                                                                                      • Add Other Code Language
                                                                                      • Adjust GitLab CI Tool
                                                                                      • Adjust Jira Integration
                                                                                      • Adjust VCS Integration With Jira
                                                                                      • Integrate GitHub/GitLab in Jenkins
                                                                                      • Integrate GitHub/GitLab in Tekton
                                                                                      • Manage Jenkins CI Pipeline Job Provisioner
                                                                                      • Manage Jenkins Agent
                                                                                      • Perf Server Integration
                                                                                      "},{"location":"user-guide/add-marketplace/","title":"Add Component via Marketplace","text":"

                                                                                      With the built-in Marketplace, users can easily create a new application by clicking several buttons. This page contains detailed guidelines on how to create a new component with the help of the Marketplace feature.

                                                                                      "},{"location":"user-guide/add-marketplace/#add-component","title":"Add Component","text":"

                                                                                      To create a component from template, follow the instructions below:

                                                                                      1. Navigate to the Marketplace section on the navigation bar to see the Marketplace overview page.

                                                                                      2. Click the component name to open its details window and click Create from template:

                                                                                        Create from template

                                                                                      3. Fill in the required fields and click Apply:

                                                                                        Creating from template window

4. As a result, the new component will appear in the Components section:

                                                                                        Creating from template window

                                                                                      "},{"location":"user-guide/add-marketplace/#related-articles","title":"Related Articles","text":"
                                                                                      • Marketplace Overview
                                                                                      • Add Application
                                                                                      • Add Library
                                                                                      • Add Infrastructure
                                                                                      "},{"location":"user-guide/add-quality-gate/","title":"Add Quality Gate","text":"

                                                                                      This section describes how to use quality gate in EDP and how to customize the quality gate for the CD pipeline with the selected build version of the promoted application between stages.

                                                                                      "},{"location":"user-guide/add-quality-gate/#apply-new-quality-gate-to-pipelines","title":"Apply New Quality Gate to Pipelines","text":"

A quality gate pipeline is a regular Tekton pipeline with a specific label: app.edp.epam.com/pipelinetype: deploy. To add and apply the quality gate to your pipelines, follow the steps below:

                                                                                      1. To use the Tekton pipeline as a quality gate pipeline, add this label to the pipelines:

                                                                                      metadata:\n  labels:\n    app.edp.epam.com/pipelinetype: deploy\n
                                                                                      2. Insert the value that is the quality gate name displayed in the quality gate drop-down list of the CD pipeline menu:
                                                                                      metadata:\n  name: <name-of-quality-gate>\n
3. Ensure the pipeline contains the steps and logic that should be applied to the project, and that the last task is promote-images; its parameters are mandatory.
                                                                                      spec:\n  params:\n    - default: ''\n      description: Codebases with a tag separated with a space.\n      name: CODEBASE_TAG\n      type: string\n    - default: ''\n      name: CDPIPELINE_CR\n      type: string\n    - default: ''\n      name: CDPIPELINE_STAGE\n      type: string\n  tasks:\n    - name: promote-images\n      params:\n        - name: CODEBASE_TAG\n          value: $(params.CODEBASE_TAG)\n        - name: CDPIPELINE_STAGE\n          value: $(params.CDPIPELINE_STAGE)\n        - name: CDPIPELINE_CR\n          value: $(params.CDPIPELINE_CR)\n      runAfter:\n        - <last-task-name>\n      taskRef:\n        kind: Task\n        name: promote-images\n
                                                                                      4. Create a new pipeline with a unique name or modify your created pipeline with the command below. Please be aware that the \u2039edp-project\u203a value is the name of the EDP tenant:
kubectl apply -f <file>.yaml --namespace \u2039edp-project\u203a\n
                                                                                      Example: file.yaml
                                                                                       apiVersion: tekton.dev/v1beta1\n kind: Pipeline\n metadata:\n   labels:\n     app.edp.epam.com/pipelinetype: deploy\n   name: <name-of-quality-gate>\n   namespace: edp\n spec:\n   params:\n     - default: >-\n         https://<CI-pipeline-provisioner>-edp.<cluster-name>.aws.main.edp.projects.epam.com/#/namespaces/$(context.pipelineRun.namespace)/pipelineruns/$(context.pipelineRun.name)\n       name: pipelineUrl\n       type: string\n     - default: ''\n       description: Codebases with a tag separated with a space.\n       name: CODEBASE_TAG\n       type: string\n     - default: ''\n       name: CDPIPELINE_CR\n       type: string\n     - default: ''\n       name: CDPIPELINE_STAGE\n       type: string\n   tasks:\n     - name: autotests\n       params:\n         - name: BASE_IMAGE\n           value: bitnami/kubectl:1.25.4\n         - name: EXTRA_COMMANDS\n           value: echo \"Hello World\"\n       taskRef:\n         kind: Task\n         name: run-quality-gate\n     - name: promote-images\n       params:\n         - name: CODEBASE_TAG\n           value: $(params.CODEBASE_TAG)\n         - name: CDPIPELINE_STAGE\n           value: $(params.CDPIPELINE_STAGE)\n         - name: CDPIPELINE_CR\n           value: $(params.CDPIPELINE_CR)\n       runAfter:\n         - autotests\n       taskRef:\n         kind: Task\n         name: promote-images\n
                                                                                      "},{"location":"user-guide/add-quality-gate/#run-quality-gate","title":"Run Quality Gate","text":"

Before running the quality gate, ensure that the created CD pipeline is deployed to the environment and that the application is successfully deployed and ready for the quality gate. To run the quality gate, follow the steps below:

1. Check the CD pipeline status. To do this, open the created CD pipeline, select the Image stream version, click the DEPLOY button, and wait until the Applications, Health, and Sync statuses become green. This means that the application is successfully deployed and ready to run the quality gate.

                                                                                        CD pipeline stage overview

2. Select the <name-of-quality-gate> from the Quality gates drop-down list and click the RUN button. The execution process starts in the Pipelines menu:

                                                                                        Quality gate pipeline status

                                                                                      "},{"location":"user-guide/add-quality-gate/#add-stage-for-quality-gate","title":"Add Stage for Quality Gate","text":"

For a better understanding of this section, please read the documentation about how to add a new stage for a quality gate. The scheme below illustrates two approaches to adding quality gates:

                                                                                      Types of adding quality gate

• The first approach adds a specific quality gate to a specific pipeline stage.
• The second approach is optional and implies activating the Promote in pipelines option while creating a CD pipeline so that the quality gates are passed in a certain sequence.

As a result, after the quality gate is successfully passed, the image is promoted to the next stage.

                                                                                      "},{"location":"user-guide/add-quality-gate/#related-articles","title":"Related Articles","text":"
                                                                                      • Add CD Pipeline
                                                                                      "},{"location":"user-guide/application/","title":"Manage Applications","text":"

This section describes the actions that can be performed with newly added or existing applications.

                                                                                      "},{"location":"user-guide/application/#check-and-remove-application","title":"Check and Remove Application","text":"

                                                                                      As soon as the application is successfully provisioned, the following will be created:

                                                                                      • Code Review and Build pipelines in Jenkins/Tekton for this application. The Build pipeline will be triggered automatically if at least one environment is already added.
                                                                                      • A new project in Gerrit or another VCS.
                                                                                      • SonarQube integration will be available after the Build pipeline in Jenkins/Tekton is passed.
                                                                                      • Nexus Repository Manager will be available after the Build pipeline in Jenkins/Tekton is passed as well.

                                                                                      The added application will be listed in the Applications list allowing you to do the following:

                                                                                      Applications menu

• Application status - displays the application status. It can be green or red depending on whether the application was successfully provisioned by EDP Portal.
• Application name (clickable) - displays the application name set during the application creation.
• Open documentation - opens this documentation page.
• Enable filtering - enables filtering by application name and the namespace where the custom resource is located.
• Create new application - displays the Create new component menu.
• Edit application - edit the application by selecting the options icon next to its name in the applications list, and then selecting Edit. For details, see the Edit Existing Application section.
• Delete application - remove the application by selecting the options icon next to its name in the applications list, and then selecting Delete.

                                                                                        Note

                                                                                        The application that is used in a CD pipeline cannot be removed.

                                                                                      There are also options to sort the applications:

                                                                                      • Sort the existing applications in a table by clicking the sorting icons in the table header. Sort the applications alphabetically by their name, language, build tool, framework, and CI tool. You can also sort the applications by their status: Created, Failed, or In progress.
                                                                                      • Select a number of applications displayed per page (15, 25 or 50 rows) and navigate between pages if the number of applications exceeds the capacity of a single page:

                                                                                        Applications pages

                                                                                      "},{"location":"user-guide/application/#edit-existing-application","title":"Edit Existing Application","text":"

                                                                                      EDP Portal provides the ability to enable, disable or edit the Jira Integration functionality for applications.

                                                                                      1. To edit an application directly from the Applications overview page or when viewing the application data:

                                                                                        • Select Edit in the options icon menu:

                                                                                        Edit application on the Applications overview page

                                                                                        Edit application when viewing the application data

                                                                                        • The Edit Application dialog opens.
                                                                                      2. To enable Jira integration, in the Edit Application dialog do the following:

                                                                                        Edit application

                                                                                        a. Mark the Integrate with Jira server check box and fill in the necessary fields. Please see steps d-h of the Add Application page.

                                                                                        b. Select the Apply button to apply the changes.

c. (Optional) Enable the commit validation mechanism by navigating to Jenkins/Tekton and adding the commit-validate stage to the Code Review pipeline to have your commit messages validated.

                                                                                      3. To disable Jira integration, in the Edit Application dialog do the following:

                                                                                        a. Unmark the Integrate with Jira server check box.

                                                                                        b. Select the Apply button to apply the changes.

c. (Optional) Disable the commit validation mechanism by navigating to Jenkins/Tekton and removing the commit-validate stage from the Code Review pipeline.

                                                                                      4. To create, edit and delete application branches, please refer to the Manage Branches page.

                                                                                      "},{"location":"user-guide/application/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Application
                                                                                      • Manage Branches
                                                                                      "},{"location":"user-guide/autotest/","title":"Manage Autotests","text":"

This section describes the actions that can be performed with newly added or existing autotests.

                                                                                      "},{"location":"user-guide/autotest/#check-and-remove-autotest","title":"Check and Remove Autotest","text":"

                                                                                      As soon as the autotest is successfully provisioned, the following will be created:

                                                                                      • Code Review and Build pipelines in Jenkins/Tekton for this autotest. The Build pipeline will be triggered automatically if at least one environment is already added.
                                                                                      • A new project in Gerrit or another VCS.
                                                                                      • SonarQube integration will be available after the Build pipeline in Jenkins/Tekton is passed.
                                                                                      • Nexus Repository Manager will be available after the Build pipeline in Jenkins/Tekton is passed as well.

                                                                                      Info

                                                                                      To navigate quickly to OpenShift, Jenkins/Tekton, Gerrit, SonarQube, Nexus, and other resources, click the Overview section on the navigation bar and hit the necessary link.

                                                                                      The added autotest will be listed in the Autotests list allowing you to do the following:

                                                                                      Autotests page

• Autotest status - displays the autotest status. It can be green or red depending on whether the autotest was successfully provisioned by EDP Portal.
• Autotest name (clickable) - displays the autotest name set during the autotest creation.
• Open documentation - opens this documentation page.
• Enable filtering - enables filtering by autotest name and the namespace where the custom resource is located.
• Create new autotest - displays the Create new component menu.
• Edit autotest - edit the autotest by selecting the options icon next to its name in the autotests list, and then selecting Edit. For details, see the Edit Existing Autotest section.
• Delete autotest - remove the autotest with the corresponding database and Jenkins/Tekton pipelines by selecting the options icon next to its name in the Autotests list, and then selecting Delete:

                                                                                        Note

                                                                                        The autotest that is used in a CD pipeline cannot be removed.

There are also options to sort the autotests:

                                                                                      • Sort the existing autotests in a table by clicking the sorting icons in the table header. Sort the autotests alphabetically by their name, language, build tool, framework, and CI tool. You can also sort the autotests by their status: Created, Failed, or In progress.
                                                                                      • Select a number of autotests displayed per page (15, 25 or 50 rows) and navigate between pages if the number of autotests exceeds the capacity of a single page.
                                                                                      "},{"location":"user-guide/autotest/#edit-existing-autotest","title":"Edit Existing Autotest","text":"

                                                                                      EDP Portal provides the ability to enable, disable or edit the Jira Integration functionality for autotests.

                                                                                      1. To edit an autotest directly from the Autotests overview page or when viewing the autotest data:

                                                                                        • Select Edit in the options icon menu:

                                                                                          Edit autotest on the autotests overview page

                                                                                          Edit autotest when viewing the autotest data

                                                                                        • The Edit Autotest dialog opens.
                                                                                      2. To enable Jira integration, on the Edit Autotest page do the following:

                                                                                        Edit library

                                                                                        a. Mark the Integrate with Jira server check box and fill in the necessary fields. Please see steps d-h on the Add Autotests page.

                                                                                        b. Select the Apply button to apply the changes.

                                                                                        c. Navigate to Jenkins/Tekton and add the create-jira-issue-metadata stage in the Build pipeline. Also add the commit-validate stage in the Code Review pipeline.

                                                                                        Note

Please note that the Jira integration feature is not available when using the GitLab CI tool.

                                                                                        Note

                                                                                        To adjust the Jira integration functionality, first apply the necessary changes described on the Adjust Jira Integration and Adjust VCS Integration With Jira pages.

                                                                                      3. To disable Jira integration, in the Edit Autotest dialog do the following:

                                                                                        a. Unmark the Integrate with Jira server check box.

                                                                                        b. Select the Apply button to apply the changes.

                                                                                        c. Navigate to Jenkins/Tekton and remove the create-jira-issue-metadata stage in the Build pipeline. Also remove the commit-validate stage in the Code Review pipeline.

                                                                                        As a result, the necessary changes will be applied.

4. To create, edit and delete autotest branches, please refer to the Manage Branches page.

                                                                                      "},{"location":"user-guide/autotest/#add-autotest-as-a-quality-gate","title":"Add Autotest as a Quality Gate","text":"

                                                                                      In order to add an autotest as a quality gate to a newly added CD pipeline, do the following:

                                                                                      1. Create a CD pipeline with the necessary parameters. Please refer to the Add CD Pipeline section for the details.

2. In the Stages menu, select the Autotest quality gate type. This means that the promotion process must be confirmed by the successful passing of the autotests.

                                                                                      3. In the additional fields, select the previously created autotest name and specify its branch.

                                                                                      4. After filling in all the necessary fields, click the Create button to start the provisioning of the pipeline. After the CD pipeline is added, the new namespace containing the stage name will be created in Kubernetes (in OpenShift, a new project will be created) with the following name pattern: [cluster name]-[cd pipeline name]-[stage name].
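For example, you can verify that the stage namespace has been created; this is a hedged sketch that simply substitutes the naming pattern above:
kubectl get namespace <cluster-name>-<cd-pipeline-name>-<stage-name>\n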

                                                                                      "},{"location":"user-guide/autotest/#configure-autotest-launch-at-specific-stage","title":"Configure Autotest Launch at Specific Stage","text":"

                                                                                      In order to configure the added autotest launch at the specific stage with necessary parameters, do the following:

                                                                                      1. Add the necessary stage to the CD pipeline. Please refer to the Add CD Pipeline documentation for the details.

                                                                                      2. Navigate to the run.json file and add the stage name and the specific parameters.
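A hypothetical run.json sketch is shown below; the stage names (sit, qa) and the Maven commands are illustrative assumptions and must match your actual CD pipeline stages and test framework:
{\n  \"sit\": \"mvn test -Dmaven.test.failure.ignore=true\",\n  \"qa\": \"mvn test -Dtest=QaSuite\"\n}\n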

                                                                                      "},{"location":"user-guide/autotest/#launch-autotest-locally","title":"Launch Autotest Locally","text":"

There is an ability to run the autotests locally using an IDE (Integrated Development Environment), such as IntelliJ IDEA, NetBeans, etc. To launch the autotest project for local verification, perform the following steps:

                                                                                      1. Clone the project to the local machine.

2. Open the project in the IDE and find the run.json file to copy out the necessary command value.

                                                                                      3. Paste the copied command value into the Command line field and run it with the necessary values and namespace.

                                                                                      4. As a result, all the launched tests will be executed.
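As a hedged illustration only, assuming a Maven-based autotest whose run.json maps a stage to a Maven command, the local run could look like this (the namespace property name is an assumption of this sketch):
mvn test -Dnamespace=<your-namespace>\n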

                                                                                      "},{"location":"user-guide/autotest/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Application
                                                                                      • Add Autotests
                                                                                      • Add CD Pipeline
                                                                                      • Adjust Jira Integration
                                                                                      • Adjust VCS Integration With Jira
                                                                                      • Manage Branches
                                                                                      "},{"location":"user-guide/build-pipeline/","title":"Build Pipeline","text":"

                                                                                      This section provides details on the Build pipeline of the EDP CI/CD pipeline framework. Explore below the pipeline purpose, stages and possible actions to perform.

                                                                                      "},{"location":"user-guide/build-pipeline/#build-pipeline-purpose","title":"Build Pipeline Purpose","text":"

The Build pipeline serves the following purposes:

• Check out, test, tag and build an image from the mainstream branch after a patch set is submitted, in order to verify that the code integrated into the mainstream passes all quality gates and can be built and tested;
                                                                                      • Be triggered if any new patch set is submitted;
                                                                                      • Tag a specific commit in Gerrit in case the build is successful;
• Build a Docker image with the application that can afterward be deployed using the Jenkins Deploy pipeline.

                                                                                      Find below the functional diagram of the Build pipeline with the default stages:

                                                                                      build-pipeline

                                                                                      "},{"location":"user-guide/build-pipeline/#build-pipeline-for-application-and-library","title":"Build Pipeline for Application and Library","text":"

                                                                                      The Build pipeline is triggered automatically after the Code Review pipeline is completed and the changes are submitted.

                                                                                      To review the Build pipeline, take the following steps:

                                                                                      1. Open Jenkins via the created link in Gerrit or via the Admin Console Overview page.

                                                                                      2. Click the Build pipeline link to open its stages for the application and library codebases:

                                                                                        • Init - initialization of the Code Review pipeline inputs;
                                                                                        • Checkout - checkout of the application code;
• Get-version - get the version from the pom.xml file and add the build number;
                                                                                        • Compile - code compilation;
                                                                                        • Tests - tests execution;
                                                                                        • Sonar - Sonar launch that checks the whole code;
                                                                                        • Build - artifact building and adding to Nexus;
• Build-image - Docker image building and adding to the Docker Registry. The Build pipeline for the library has the same stages as for the application except the Build-image stage, i.e. the Docker image is not built.
• Push - pushing of the artifact and the Docker image to Nexus and the Docker Registry;
• Ecr-to-docker - the Docker image, after being built, is copied from the ECR project registry to DockerHub via the Crane tool. The stage is not enabled by default and can be set for the application codebase type. To set this stage, please refer to the EcrToDocker.groovy file and to the Promote Docker Images From ECR to Docker Hub page.
                                                                                        • Git-tag - adding of the corresponding Git tag of the current commit to relate with the image, artifact, and build version.

                                                                                      Note

                                                                                      For more details on stages, please refer to the Pipeline Stages documentation.

                                                                                      After the Build pipeline runs all the stages successfully, the corresponding tag numbers will be created in Kubernetes/OpenShift and Nexus.

                                                                                      "},{"location":"user-guide/build-pipeline/#check-the-tag-in-kubernetesopenshift-and-nexus","title":"Check the Tag in Kubernetes/OpenShift and Nexus","text":"
1. After the Build pipeline is completed, check the tag name and make sure it corresponds to the commit revision. Simply navigate to Gerrit \u2192 Projects \u2192 List \u2192 select the project \u2192 Tags.

                                                                                        Note

For the Import strategy, navigate to the repository from which the codebase is imported \u2192 Tags. This applies to both GitHub and GitLab.

2. Open the Kubernetes/OpenShift Overview page, click the link to Nexus, and check that the new version has been built.

                                                                                      3. Switch to Kubernetes \u2192 CodebaseImageStream (or OpenShift \u2192 Builds \u2192 Images) \u2192 click the image stream that will be used for deployment.

                                                                                      4. Check the corresponding tag.
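Alternatively, the tag can be checked from the command line. The sketch below assumes the tags are stored in the spec.tags field of the CodebaseImageStream custom resource and that the resource is named <codebase>-<branch>:
kubectl get codebaseimagestream <codebase>-<branch> -n <edp-namespace> -o jsonpath='{.spec.tags[*].name}'\n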

                                                                                      "},{"location":"user-guide/build-pipeline/#configure-and-start-pipeline-manually","title":"Configure and Start Pipeline Manually","text":"

                                                                                      The Build pipeline can be started manually. To set the necessary stages and trigger the pipeline manually, take the following steps:

                                                                                      1. Open the Build pipeline for the created library.

2. Click the Build with parameters option in the left-side menu. Modify the stages by removing whole objects from the stages array, e.g. {\"name\": \"tests\"}, where name is a key and tests is the name of a stage to be executed; see the example after these steps.

                                                                                      3. Open Jenkins and check the successful execution of all stages.
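A hypothetical value of the stages parameter after removing the tests object might look as follows (the parameter name and the exact stage set are assumptions and depend on your job provisioner):
[{\"name\": \"checkout\"}, {\"name\": \"get-version\"}, {\"name\": \"compile\"}, {\"name\": \"sonar\"}, {\"name\": \"build\"}, {\"name\": \"build-image\"}, {\"name\": \"git-tag\"}]\n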

                                                                                      "},{"location":"user-guide/build-pipeline/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Application
                                                                                      • Add Autotest
                                                                                      • Add Library
                                                                                      • Adjust Jira Integration
                                                                                      • Adjust VCS Integration With Jira
                                                                                      • Autotest as Quality Gate
                                                                                      • Pipeline Stages
                                                                                      "},{"location":"user-guide/cd-pipeline-details/","title":"CD Pipeline Details","text":"

CD Pipeline (Continuous Delivery Pipeline) - an EDP business entity that describes the whole delivery process of the selected application set via the respective stages. The main idea of the CD pipeline is to promote the application build version between the stages by applying sequential verification (i.e. the second stage becomes available only after the verification on the first stage is successfully completed). The CD pipeline can also include an essential set of applications with their specific stages.

In other words, the CD pipeline allows the selected image stream (a Docker container in Kubernetes terms) to pass a set of stages for the verification process (SIT - system integration testing with an automatic quality gate type, QA - quality assurance, UAT - user acceptance testing with manual testing).

                                                                                      Note

                                                                                      It is possible to change the image stream for the application in the CD pipeline. Please refer to the Edit CD Pipeline section for the details.

A CI/CD pipeline helps to automate steps in a software delivery process, such as code build initialization, automated test runs, and deployment to a staging or production environment. Automated pipelines remove manual errors, provide a standardized development feedback cycle, and enable fast product iterations. To get more information on the CI pipeline, please refer to the CI Pipeline Details chapter.

The codebase stream is used as a holder for the output of the stage, i.e. after the Docker container (or an image stream in OpenShift terms) passes the stage verification, it will be placed into the new codebase stream. Every codebase has a branch that has its own codebase stream - a Docker container that is an output of the build for the corresponding branch.

                                                                                      Note

For more information on the main terms used in EPAM Delivery Platform, please refer to the EDP Glossary.

                                                                                      EDP CD pipeline

                                                                                      Explore the details of the CD pipeline below.

                                                                                      "},{"location":"user-guide/cd-pipeline-details/#deploy-pipeline","title":"Deploy Pipeline","text":"

                                                                                      The Deploy pipeline is used by default on any stage of the Continuous Delivery pipeline. It addresses the following concerns:

                                                                                      • Deploying the application(s) to the main STAGE (SIT, QA, UAT) environment in order to run autotests and to promote image build versions to the next environments afterwards.
                                                                                      • Deploying the application(s) to a custom STAGE environment in order to run autotests and check manually that everything is ok with the application.
• Deploying the latest, a stable, or a particular numeric version of an image build that exists in the Docker registry.
• Promoting the image build versions from the main STAGE (SIT, QA, UAT) environment.
• Auto deploying the application(s) version from the passed payload (using the CODEBASE_VERSION job parameter); a hedged trigger example follows this list.
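The example below is only a sketch, not the documented EDP flow: it uses the generic Jenkins buildWithParameters endpoint to pass the CODEBASE_VERSION parameter; the job path, credentials, and payload format are assumptions that depend on your Jenkins layout.
curl -X POST -u <user>:<api-token> 'https://<jenkins-host>/job/<cd-pipeline-name>/job/<stage-name>/buildWithParameters' --data-urlencode 'CODEBASE_VERSION=<payload>'\n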

                                                                                      Find below the functional diagram of the Deploy pipeline with the default stages:

                                                                                      Note

                                                                                      The input for a CD pipeline depends on the Trigger Type for a deploy stage and can be either Manual or Auto.

                                                                                      Deploy pipeline stages

                                                                                      "},{"location":"user-guide/cd-pipeline-details/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Application
                                                                                      • Add Autotest
                                                                                      • Add CD Pipeline
                                                                                      • Add Library
                                                                                      • CI Pipeline Details
                                                                                      • CI/CD Overview
                                                                                      • EDP Glossary
                                                                                      • EDP Pipeline Framework
                                                                                      • EDP Pipeline Stages
                                                                                      • Prepare for Release
                                                                                      "},{"location":"user-guide/ci-pipeline-details/","title":"CI Pipeline Details","text":"

                                                                                      CI Pipeline (Continuous Integration Pipeline) - an EDP business entity that describes the integration of changes made to a codebase into a single project. The main idea of the CI pipeline is to review the changes in the code submitted through a Version Control System (VCS) and build a new codebase version so that it can be transmitted to the Continuous Delivery Pipeline for the rest of the delivery process.

                                                                                      There are three codebase types in EPAM Delivery Platform:

                                                                                      1. Applications - a codebase that is developed in the Version Control System, has the full lifecycle starting from the Code Review stage to its deployment to the environment;
2. Libraries - this codebase is similar to the Application type, but it is not deployed to environments; instead, it is stored in the Artifactory. The library can be connected to other applications/libraries;
                                                                                      3. Autotests - a codebase that inspects the code and can be used as a quality gate for the CD pipeline stage. The autotest only has the Code Review pipeline and is launched for the stage verification.

                                                                                      Note

                                                                                      For more information on the above mentioned codebase types, please refer to the Add Application, Add Library, Add Autotests and Autotest as Quality Gate pages.

                                                                                      EDP CI pipeline

                                                                                      "},{"location":"user-guide/ci-pipeline-details/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Application
                                                                                      • Add Autotest
                                                                                      • Add Library
                                                                                      • Adjust Jira Integration
                                                                                      • Adjust VCS Integration With Jira
                                                                                      • Autotest as Quality Gate
                                                                                      • Build Pipeline
                                                                                      • Code Review Pipeline
                                                                                      • Pipeline Stages
                                                                                      "},{"location":"user-guide/cicd-overview/","title":"EDP CI/CD Overview","text":"

                                                                                      This chapter provides information on CI/CD basic definitions and flow, as well as its components and process.

                                                                                      "},{"location":"user-guide/cicd-overview/#cicd-basic-definitions","title":"CI/CD Basic Definitions","text":"

                                                                                      The Continuous Integration part means the following:

• all components of the application development are kept in the same place and follow the same processes for running;
• the results are published in one place and replicated into EPAM GitLab or the VCS (version control system);
• the repository also includes a storage tool (e.g. Nexus) for all binary artifacts that are produced by the Jenkins CI server after changes are submitted from the Code Review tool into the VCS;

                                                                                      The Code Review and Build pipelines are used before the code is delivered. An important part of both of them is the integration tests that are launched during the testing stage.

Many applications (SonarQube, Gerrit, etc.) used by the project require databases to operate.

Continuous Delivery is an approach that allows producing an application in short cycles so that it can be reliably released at any point in time. This part is tightly bound to the usage of the Code Review, Build, and Deploy pipelines.

The Deploy pipelines deploy the application configurations and their specific versions, launch automated tests, and control quality gates for the specified environment. As a result of the successfully completed process, the specific versions of images are promoted to the next environment. All environments are sequential and promote the build versions of applications one by one. The logic of each stage is described as code in Jenkins pipelines and stored in the VCS.

During CI/CD, several continuous processes run in the repository; find below the list of possible actions:

                                                                                      • Review the code with the help of Gerrit tool;
• Run the static analysis using SonarQube to control the quality of the source code and keep the historical data, which helps to understand the trends and effectiveness of particular teams and members;
                                                                                      • Analyze application source code using SAST, byte code, and binaries for coding/design conditions that are indicative of security vulnerabilities;
                                                                                      • Build the code with Jenkins and run automated tests that are written to make sure the applied changes will not break any functionality.

                                                                                      Note

                                                                                      For the details on autotests, please refer to the Autotest, Add Autotest, and Autotest as Quality Gate pages.

                                                                                      The release process is divided into cycles and provides regular delivery of completed pieces of functionality while continuing the development and integration of new functionality into the product mainline.

                                                                                      Explore the main flow that is displayed on the diagram below:

                                                                                      EDP CI/CD pipeline

                                                                                      "},{"location":"user-guide/cicd-overview/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Application
                                                                                      • Add Library
                                                                                      • Add CD Pipeline
                                                                                      • CI Pipeline Details
                                                                                      • CD Pipeline Details
                                                                                      • Customize CI Pipeline
                                                                                      • EDP Pipeline Framework
                                                                                      • Customize CD Pipeline
                                                                                      • EDP Stages
                                                                                      • Glossary
                                                                                      • Use Terraform Library in EDP
                                                                                      "},{"location":"user-guide/cluster/","title":"Manage Clusters","text":"

This section describes the actions that can be performed with newly added or existing clusters.

In a nutshell, a cluster in EDP Portal is a Kubernetes secret that stores the credentials and endpoint needed to connect to another cluster. Adding new clusters allows users to deploy applications in several clusters, thus improving the flexibility of your infrastructure.

                                                                                      The added cluster will be listed in the clusters list allowing you to do the following:

                                                                                      Clusters list

                                                                                      "},{"location":"user-guide/cluster/#view-authentication-data","title":"View Authentication Data","text":"

                                                                                      To view authentication data that is used to log in to the cluster, run the kubectl describe command:

                                                                                      kubectl describe secret cluster_name -n edp\n
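To inspect the stored values themselves, you can print the secret data; this is a generic kubectl sketch that makes no assumption about the exact keys EDP stores (the values are base64-encoded and can be decoded with base64 -d):
kubectl get secret cluster_name -n edp -o jsonpath='{.data}'\n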
                                                                                      "},{"location":"user-guide/cluster/#delete-cluster","title":"Delete Cluster","text":"

To delete a cluster, use the kubectl delete command as follows:

                                                                                      kubectl delete secret cluster_name -n edp\n
                                                                                      "},{"location":"user-guide/cluster/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Cluster
                                                                                      • Add Application
                                                                                      "},{"location":"user-guide/code-review-pipeline/","title":"Code Review Pipeline","text":"

                                                                                      This section provides details on the Code Review pipeline of the EDP CI/CD framework. Explore below the pipeline purpose, stages and possible actions to perform.

                                                                                      "},{"location":"user-guide/code-review-pipeline/#code-review-pipeline-purpose","title":"Code Review Pipeline Purpose","text":"

The Code Review pipeline serves the following purposes:

                                                                                      • Check out and test a particular developer's change (Patch Set) in order to inspect whether the code fits all the quality gates and can be built and tested;
                                                                                      • Be triggered if any new Patch Set appears in Gerrit;
• Send feedback about the build process in Jenkins to the review card in Gerrit;
                                                                                      • Send feedback about Sonar violations that have been found during the Sonar stage.

                                                                                      Find below the functional diagram of the Code Review pipeline with the default stages:

                                                                                      Code review pipeline stages

                                                                                      "},{"location":"user-guide/code-review-pipeline/#code-review-pipeline-for-applications-and-libraries","title":"Code Review Pipeline for Applications and Libraries","text":"

                                                                                      Note

                                                                                      Make sure the necessary applications or libraries are added to the Admin Console. For the details on how to add a codebase, please refer to the Add Application or Add Library pages accordingly.

                                                                                      To discover the Code Review pipeline, apply changes that will trigger the Code Review pipeline automatically and take the following steps:

                                                                                      1. Navigate to Jenkins. In Admin Console, go to the Overview section on the left-side navigation bar and click the link to Jenkins.

                                                                                        Link to Jenkins

                                                                                        or

                                                                                        In Gerrit, go to the Patch Set page and click the CI Jenkins link in the Change Log section

                                                                                        Link from Gerrit

                                                                                        Note

                                                                                        The Code Review pipeline starts automatically for every codebase type (Application, Autotests, Library).

2. Check the Code Review pipeline for the application or for the library. Click the application name in Jenkins and switch to the additional release-01 branch that is created with the respective Code Review and Build pipelines.

                                                                                      3. Click the Code Review pipeline link to open the Code Review pipeline stages for the application:

                                                                                        • Init - initialization of the codebase information and loading of the common libraries
                                                                                        • gerrit-checkout / checkout - the checkout of patch sets from Gerrit. The stage is called gerrit-checkout for the Create and Clone strategies of adding a codebase and checkout for the Import strategy.
                                                                                        • compile - the source code compilation
                                                                                        • tests - the launch of the tests
                                                                                        • sonar - the launch of the static code analyzer that checks the whole code
                                                                                        • helm-lint - the launch of the linting tests for deployment charts
                                                                                        • dockerfile-lint - the launch of the linting tests for Dockerfile
                                                                                        • commit-validate - the stage is optional and appears under enabled integration with Jira. Please refer to the Adjust Jira Integration and Adjust VCS Integration With Jira sections for the details.

                                                                                      Note

                                                                                      For more details on EDP pipeline stages, please refer to the Pipeline Stages section.

                                                                                      "},{"location":"user-guide/code-review-pipeline/#code-review-pipeline-for-autotests","title":"Code Review Pipeline for Autotests","text":"

                                                                                      To discover the Code Review pipeline for autotests, first apply changes to a codebase that will trigger the Code Review pipeline automatically. The flow for an autotest is similar to that for applications and libraries; however, there are some differences. Explore them below.

                                                                                      1. Open the run.json file for the created autotest.

                                                                                        Note

                                                                                        Please refer to the Add Autotest page for the details on how to create an autotest.

                                                                                        The run.json file keeps a command that is executed on this stage.

                                                                                      2. Open the Code Review pipeline in Jenkins (via the link in Gerrit or via the Admin Console Overview page) and click the Configure option from the left side. There are only four stages available: Initialization - Gerrit-checkout - tests - sonar (the launch of the static code analyzer that checks the whole code).

                                                                                      3. Open the Code Review pipeline in Jenkins with the successfully passed stages.

                                                                                      "},{"location":"user-guide/code-review-pipeline/#retrigger-code-review-pipeline","title":"Retrigger Code Review Pipeline","text":"

                                                                                      The Code Review pipeline can be retriggered manually, especially if the pipeline failed before. To retrigger it, take the following steps:

                                                                                      1. In Jenkins, click the Retrigger option from the drop-down menu for the specific Code Review pipeline version number. Alternatively, open the Jenkins main page and select the Query and Trigger Gerrit Patches option.

                                                                                      2. Click Search and select the check box of the necessary change and patch set and then click Trigger Selected.

                                                                                      As a result, the Code Review pipeline will be retriggered.

                                                                                      "},{"location":"user-guide/code-review-pipeline/#configure-code-review-pipeline","title":"Configure Code Review Pipeline","text":"

                                                                                      The Configure option allows adding or removing stages from the Code Review pipeline if needed. To configure the Code Review pipeline, take the following steps:

                                                                                      1. Being in Jenkins, click the Configure option from the left-side menu.

                                                                                      2. Define the stages set that will be executed for the current pipeline.

                                                                                        • To remove a stage, delete the whole object {\"name\": \"tests\"} from the stages definition, where name is the key and tests is the name of the stage to be removed.
                                                                                        • To add a stage, add the object {\"name\": \"tests\"} to the stages definition, where name is the key and tests is the name of the stage to be added. A hypothetical before/after example is shown after the note below.

                                                                                        Note

                                                                                        All stages are launched from the shared library on GitHub. The list of libraries is located in the edp-library-stages repository.
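                                                                                        For illustration, a hypothetical stages definition before (first line) and after (second line) removing the tests stage might look as follows; the actual stage names depend on the codebase type and build tool:
                                                                                        [{\"name\": \"gerrit-checkout\"},{\"name\": \"compile\"},{\"name\": \"tests\"},{\"name\": \"sonar\"}]\n[{\"name\": \"gerrit-checkout\"},{\"name\": \"compile\"},{\"name\": \"sonar\"}]\n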

                                                                                      3. To apply the new stage process, retrigger the Code Review pipeline. For details, please refer to the Retrigger Code Review Pipeline section.

                                                                                      4. Open Jenkins and check that there is no removed stage in the Code Review pipeline.

                                                                                      "},{"location":"user-guide/code-review-pipeline/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Application
                                                                                      • Add Autotest
                                                                                      • Add Library
                                                                                      • Adjust Jira Integration
                                                                                      • Adjust VCS Integration With Jira
                                                                                      • Autotest as Quality Gate
                                                                                      • Pipeline Stages
                                                                                      "},{"location":"user-guide/container-stages/","title":"CI Pipeline for Container","text":"

                                                                                      EPAM Delivery Platform provides Container support, allowing you to work with a Dockerfile that is processed by means of stages in the Code Review and Build pipelines. These pipelines are expected to be created after the Container Library is added.

                                                                                      "},{"location":"user-guide/container-stages/#code-review-pipeline-stages","title":"Code Review Pipeline Stages","text":"

                                                                                      In the Code Review pipeline, the following stages are available:

                                                                                      1. checkout stage is a standard step during which all files are checked out from a selected branch of the Git repository.

                                                                                      2. dockerfile-lint stage uses the hadolint tool to perform linting tests for the Dockerfile.

                                                                                      3. dockerbuild-verify stage collects artifacts and builds an image from the Dockerfile without pushing it to the registry. This stage is intended to check that the image can be built.

                                                                                      "},{"location":"user-guide/container-stages/#build-pipeline-stages","title":"Build Pipeline Stages","text":"

                                                                                      In the Build pipeline, the following stages are available:

                                                                                      1. checkout stage is a standard step during which all files are checked out from a master branch of the Git repository.

                                                                                      2. get-version stage where the library version is determined either via:

                                                                                        2.1. EDP versioning functionality.

                                                                                        2.2. Default versioning functionality.

                                                                                      3. dockerfile-lint stage uses the hadolint tool to perform linting tests for Dockerfile.

                                                                                      4. build-image-kaniko stage builds Dockerfile using the Kaniko tool.

                                                                                      5. git-tag stage that is intended for tagging a repository in Git.

                                                                                      "},{"location":"user-guide/container-stages/#tools-for-container-images-building","title":"Tools for Container Images Building","text":"

                                                                                      EPAM Delivery Platform supports both the Kaniko tool and the BuildConfig object. The Kaniko tool allows building container images from a Dockerfile on both the Kubernetes and OpenShift platforms. The BuildConfig object enables building container images only on the OpenShift platform.

                                                                                      EDP uses the BuildConfig object and the Kaniko tool for creating containers from a Dockerfile and pushing them to the internal container image registry. For Kaniko, it is also possible to change the Docker config file and push the containers to different container image registries.

                                                                                      "},{"location":"user-guide/container-stages/#supported-container-image-build-tools","title":"Supported Container Image Build Tools","text":"Platform Build Tools Kubernetes Kaniko OpenShift Kaniko, BuildConfig"},{"location":"user-guide/container-stages/#change-build-tool-in-the-build-pipeline","title":"Change Build Tool in the Build Pipeline","text":"

                                                                                      By default, EPAM Delivery Platform uses the build-image-kaniko stage for building container images on the Kubernetes platform and the build-image-from-dockerfile stage for building container images on the OpenShift platform.

                                                                                      To change the build tool for the OpenShift platform from the default BuildConfig object to the Kaniko tool, perform the following steps:

                                                                                      1. Modify or update a job provisioner logic, follow the instructions on the Manage Jenkins CI Pipeline Job Provisioner page.
                                                                                      2. Update the required parameters for the new provisioner. For example, to change the build tool for the Container Build pipeline, update the list of stages. The first snippet below shows the default BuildConfig-based definition, the second one the Kaniko-based definition:
                                                                                        stages['Build-library-kaniko'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"build-image-from-dockerfile\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n
                                                                                        stages['Build-library-kaniko'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"build-image-kaniko\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n
                                                                                      "},{"location":"user-guide/container-stages/#related-articles","title":"Related Articles","text":"
                                                                                      • Use Dockerfile Linters for Code Review Pipeline
                                                                                      • Manage Jenkins CI Pipeline Job Provisioner
                                                                                      "},{"location":"user-guide/copy-shared-secrets/","title":"Copy Shared Secrets","text":"

                                                                                      The Copy Shared Secrets stage provides the ability to copy secrets from the current Kubernetes namespace into the namespace created by the CD pipeline.

                                                                                      Shared secrets

                                                                                      Please follow the steps described below to copy the secrets:

                                                                                      1. Create a secret in the current Kubernetes namespace that should be used in the deployment. The secret label must be app.edp.epam.com/use: cicd, since the pipeline script will attempt to copy the secret by its label. For example:

                                                                                        apiVersion: v1\nkind: Secret\nmetadata:\n  name: my-secret # example\n  labels:\n    app.edp.epam.com/use: cicd\n
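                                                                                        The same labeled secret can also be created imperatively; a minimal sketch, assuming the EDP namespace is edp and using the hypothetical name my-secret with an example key:
                                                                                        kubectl -n edp create secret generic my-secret \\\n  --from-literal=example-key=example-value\nkubectl -n edp label secret my-secret app.edp.epam.com/use=cicd\n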
                                                                                      2. Add the following step to the CD pipeline {\"name\":\"copy-secrets\",\"step_name\":\"copy-secrets\"}. Alternatively, it is possible to create a custom job provisioner with this step.

                                                                                      3. Run the job. The pipeline script will create a secret with the same data in the namespace generated by the CD pipeline.

                                                                                        Note

                                                                                        Service account tokens are not supported.

                                                                                      "},{"location":"user-guide/copy-shared-secrets/#related-articles","title":"Related Articles","text":"
                                                                                      • Customize CD Pipeline
                                                                                      • Manage Jenkins CD Pipeline Job Provisioner
                                                                                      "},{"location":"user-guide/customize-cd-pipeline/","title":"Customize CD Pipeline","text":"

                                                                                      Apart from running CD pipeline stages with the default logic, there is the ability to perform the following:

                                                                                      • Create your own logic for stages;
                                                                                      • Redefine the default EDP stages of a CD pipeline.

                                                                                      In order to have the ability to customize a stage logic, create a CD pipeline stage source as a Library:

                                                                                      1. Navigate to the Libraries section of the Admin Console and create a library with the Groovy-pipeline code language:

                                                                                        Note

                                                                                        If you clone the library, make sure that the correct source branch is selected.

                                                                                        Create library

                                                                                        Select the required fields to build your library:

                                                                                        Advanced settings

                                                                                      2. Go to the Continuous Delivery section of the Admin Console and create a CD pipeline with the library stage source and its branch:

                                                                                        Library source

                                                                                      "},{"location":"user-guide/customize-cd-pipeline/#add-new-stage","title":"Add New Stage","text":"

                                                                                      Follow the steps below to add a new stage:

                                                                                      • Clone the repository with the added library;
                                                                                      • Create a \"stages\" directory in the root;
                                                                                      • Create a Jenkinsfile with default content:
                                                                                        @Library(['edp-library-stages', 'edp-library-pipelines']) _\n\nDeploy()\n
                                                                                      • Create a groovy file with a meaningful name, e.g. NotificationStage.groovy;
                                                                                      • Put the required construction and your own logic into the file:
                                                                                        import com.epam.edp.stages.impl.cd.Stage\n\n@Stage(name = \"notify\")\nclass Notify {\n    Script script\n    void run(context) {\n        // --------------- Put your own logic here ------------------\n        script.println(\"Send notification logic\")\n        // --------------- Put your own logic here ------------------\n    }\n}\nreturn Notify\n
                                                                                      • Add the new stage to the STAGES parameter of the Jenkins job of your CD pipeline (see the hypothetical example after the warning below):

                                                                                        Stages parameter

                                                                                        Warning

                                                                                        To make this stage permanently present, please modify the job provisioner.
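                                                                                        For illustration, a hypothetical STAGES value with the custom notify stage appended to the default CD stages might look as follows; use the existing value of the parameter in your job as the actual template, since the exact keys may differ:
                                                                                        [{\"name\":\"deploy\"},{\"name\":\"manual\"},{\"name\":\"promote-images\"},{\"name\":\"notify\"}]\n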

                                                                                      • Run the job to check that your new stage has been run during the execution.
                                                                                      "},{"location":"user-guide/customize-cd-pipeline/#redefine-existing-stage","title":"Redefine Existing Stage","text":"

                                                                                      By default, the following stages are implemented in EDP pipeline framework:

                                                                                      • deploy,
                                                                                      • deploy-helm,
                                                                                      • autotests,
                                                                                      • manual (Manual approve),
                                                                                      • promote-images.

                                                                                      Using one of these names for annotation in your own class will lead to redefining the default logic with your own.

                                                                                      Find below a sample of the possible flow of the redefining deploy stage:

                                                                                      • Clone the repository with the added library;
                                                                                      • Create a \"stages\" directory in the root;
                                                                                      • Create a Jenkinsfile with default content:
                                                                                        @Library(['edp-library-stages', 'edp-library-pipelines']) _\n\nDeploy()\n
                                                                                      • Create a groovy file with a meaningful name, e.g. CustomDeployStage.groovy;
                                                                                      • Put the required construction and your own logic into the file:
                                                                                        import com.epam.edp.stages.impl.cd.Stage\n\n@Stage(name = \"deploy\")\nclass CustomDeployStage {\n    Script script\n\n    void run(context) {\n        // --------------- Put your own logic here ------------------\n        script.println(\"Custom deploy stage logic\")\n        // --------------- Put your own logic here ------------------\n    }\n}\nreturn CustomDeployStage\n
                                                                                      "},{"location":"user-guide/customize-cd-pipeline/#add-a-new-stage-using-shared-library-via-custom-global-pipeline-libraries","title":"Add a New Stage Using Shared Library via Custom Global Pipeline Libraries","text":"

                                                                                      Note

                                                                                      To add a new Custom Global Pipeline Library, please refer to the Add a New Custom Global Pipeline Library page.

                                                                                      To redefine any stage and add custom logic using global pipeline libraries, perform the steps below:

                                                                                      • Navigate to the Libraries section of the Admin Console and create a library with the Groovy-pipeline code language:

                                                                                        Create library

                                                                                        Select the required fields to build your library:

                                                                                        Advanced settings

                                                                                      • Clone the repository with the added library;
                                                                                      • Create a directory with the name src/com/epam/edp/customStages/impl/cd/impl/ in the library repository;
                                                                                      • Add a Groovy file with a meaningful name to this directory, for instance, EmailNotify.groovy:
                                                                                        package com.epam.edp.customStages.impl.cd.impl\n\nimport com.epam.edp.stages.impl.cd.Stage\n\n@Stage(name = \"notify\")\nclass Notify {\n    Script script\n    void run(context) {\n        // --------------- Put your own logic here ------------------\n        script.println(\"Send notification logic\")\n        // --------------- Put your own logic here ------------------\n    }\n}\n
                                                                                      • Create a Jenkinsfile with default content and the added custom library to Jenkins:

                                                                                        @Library(['edp-library-stages', 'edp-library-pipelines', 'edp-custom-shared-library-name']) _\n\nDeploy()\n

                                                                                        Note

                                                                                        edp-custom-shared-library-name is the name of your Custom Global Pipeline Library that should be added to the Jenkins Global Settings.

                                                                                      • Add a new stage to the STAGES parameter of the Jenkins job of your CD pipeline:

                                                                                        Stages parameter

                                                                                        Warning

                                                                                        To make this stage permanently present, please modify the job provisioner.

                                                                                        Note

                                                                                        Pay attention to the appropriate annotation (EDP versions of all stages can be found on GitHub).

                                                                                      • Run the job to check that the new stage has been running during the execution.
                                                                                      "},{"location":"user-guide/customize-cd-pipeline/#redefine-a-default-stage-logic-via-custom-global-pipeline-libraries","title":"Redefine a Default Stage Logic via Custom Global Pipeline Libraries","text":"

                                                                                      Note

                                                                                      To add a new Custom Global Pipeline Library, please refer to the Add a New Custom Global Pipeline Library page.

                                                                                      By default, the following stages are implemented in EDP pipeline framework:

                                                                                      • deploy,
                                                                                      • deploy-helm,
                                                                                      • autotests,
                                                                                      • manual (Manual approve),
                                                                                      • promote-images.

                                                                                      Using one of these names for annotation in your own class will lead to redefining the default logic with your own.

                                                                                      To redefine any stage and add custom logic using global pipeline libraries, perform the steps below:

                                                                                      • Navigate to the Libraries section of the Admin Console and create a library with the Groovy-pipeline code language:

                                                                                        Create library

                                                                                        Select the required fields to build your library:

                                                                                        Advanced settings

                                                                                      • Clone the repository with the added library;
                                                                                      • Create a directory with the name src/com/epam/edp/customStages/impl/cd/impl/ in the library repository;
                                                                                      • Add a Groovy file with a meaningful name to this directory, for instance, CustomDeployStage.groovy:
                                                                                        package com.epam.edp.customStages.impl.cd.impl\n\nimport com.epam.edp.stages.impl.cd.Stage\n\n@Stage(name = \"deploy\")\nclass CustomDeployStage {\n    Script script\n\n    void run(context) {\n        // --------------- Put your own logic here ------------------\n        script.println(\"Custom deploy stage logic\")\n        // --------------- Put your own logic here ------------------\n    }\n}\n
                                                                                      • Create a Jenkinsfile with default content and the added custom library to Jenkins:

                                                                                        @Library(['edp-library-stages', 'edp-library-pipelines', 'edp-custom-shared-library-name']) _\n\nDeploy()\n

                                                                                        Note

                                                                                        edp-custom-shared-library-name is the name of your Custom Global Pipeline Library that should be added to the Jenkins Global Settings.

                                                                                        Note

                                                                                        Pay attention to the appropriate annotation (EDP versions of all stages can be found on GitHub).

                                                                                      "},{"location":"user-guide/customize-cd-pipeline/#related-articles","title":"Related Articles","text":"
                                                                                      • Add a New Custom Global Pipeline Library
                                                                                      • Manage Jenkins CD Pipeline Job Provisioner
                                                                                      "},{"location":"user-guide/customize-ci-pipeline/","title":"Customize CI Pipeline","text":"

                                                                                      This chapter describes the main steps that should be followed when customizing a CI pipeline.

                                                                                      "},{"location":"user-guide/customize-ci-pipeline/#redefine-a-default-stage-logic-for-a-particular-application","title":"Redefine a Default Stage Logic for a Particular Application","text":"

                                                                                      To redefine any stage and add custom logic, perform the steps below:

                                                                                      1. Open the GitHub repository:

                                                                                        • Create a directory with the name \u201cstages\u201d in the application repository;
                                                                                        • Create a Groovy file with a meaningful name for a custom stage description, for instance: CustomSonar.groovy.
                                                                                      2. Paste the copied skeleton from the reference stage and insert the necessary logic.

                                                                                        Note

                                                                                        Pay attention to the appropriate annotation (EDP versions of all stages can be found on GitHub).

                                                                                        The stage logic structure is the following:

                                                                                        CustomSonar.groovy

                                                                                        import com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"sonar\", buildTool = [\"maven\"], type = [ProjectType.APPLICATION, ProjectType.AUTOTESTS, ProjectType.LIBRARY])\nclass CustomSonar {\n    Script script\n    void run(context) {\n        script.sh \"echo 'Your custom logic of the stage'\"\n    }\n}\nreturn CustomSonar\n

                                                                                        Info

                                                                                        There is the ability to redefine the predefined EDP stage as well as to create it from scratch, it depends on the name that is used in the @Stage annotation. For example, using name = \"sonar\" will redefine an existing sonar stage with the same name, but using name=\"new-sonar\" will create a new stage.

                                                                                        By default, the following stages are implemented in EDP:

                                                                                        • build
                                                                                        • build-image-from-dockerfile
                                                                                        • build-image
                                                                                        • build-image-kaniko
                                                                                        • checkout
                                                                                        • compile
                                                                                        • create-branch
                                                                                        • gerrit-checkout
                                                                                        • get-version
                                                                                        • git-tag
                                                                                        • push
                                                                                        • sonar
                                                                                        • sonar-cleanup
                                                                                        • tests
                                                                                        • trigger-job

                                                                                        Mandatory points:

                                                                                        • Importing classes com.epam.edp.stages.impl.ci.ProjectType and com.epam.edp.stages.impl.ci.Stage;
                                                                                        • Annotating \"Stage\" for class - @Stage(name = \"sonar\", buildTool = [\"maven\"], type = [ProjectType.APPLICATION, ProjectType.AUTOTESTS, ProjectType.LIBRARY]);
                                                                                        • A property with the type \"Script\";
                                                                                        • A void \"run\" method with the \"context\" input parameter;
                                                                                        • Returning the custom class at the end of the file: return CustomSonar.
                                                                                      3. Open Jenkins and make sure that all the changes are correct after the completion of the customized pipeline.

                                                                                      "},{"location":"user-guide/customize-ci-pipeline/#add-a-new-stage-for-a-particular-application","title":"Add a New Stage for a Particular Application","text":"

                                                                                      To add a new stage for a particular application, perform the steps below:

                                                                                      1. In the GitHub repository, add a Groovy file with another name to the same stages catalog.
                                                                                      2. Copy the part of the pipeline framework logic that cannot be predefined.

                                                                                        The stage logic structure is the following:

                                                                                        EmailNotify.groovy

                                                                                        import com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"email-notify\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass EmailNotify {\n    Script script\n    void run(context) {\n        // ------------------- Your custom logic here -------------------\n    }\n}\nreturn EmailNotify\n
                                                                                      3. Open the default set of stages and add a new one into the Default Value field by saving the respective type {\"name\": \"email-notify\"}, save the changes: Add stage

                                                                                      4. Open Jenkins to check the pipeline; as soon as the checkout stage is passed, the new stage will appear in the pipeline: Check stage

                                                                                        Warning

                                                                                        To make this stage permanently present, please modify the job provisioner.

                                                                                      "},{"location":"user-guide/customize-ci-pipeline/#redefine-a-default-stage-logic-via-custom-global-pipeline-libraries","title":"Redefine a Default Stage Logic via Custom Global Pipeline Libraries","text":"

                                                                                      Note

                                                                                      To add a new Custom Global Pipeline Library, please refer to the Add a New Custom Global Pipeline Library page.

                                                                                      To redefine any stage and add custom logic using global pipeline libraries, perform the steps below:

                                                                                      1. Open the GitHub repository:

                                                                                        • Create a directory with the name /src/com/epam/edp/customStages/impl/ci/impl/stageName/ in the library repository, for instance: /src/com/epam/edp/customStages/impl/ci/impl/sonar/;
                                                                                        • Create a Groovy file with a meaningful name for a custom stage description, for instance \u2013 CustomSonar.groovy.
                                                                                      2. Paste the copied skeleton from the reference stage and insert the necessary logic.

                                                                                        Note

                                                                                        Pay attention to the appropriate annotation (EDP versions of all stages can be found on GitHub).

                                                                                        The stage logic structure is the following:

                                                                                        CustomSonar.groovy

                                                                                        package com.epam.edp.customStages.impl.ci.impl.sonar\n\nimport com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"sonar\", buildTool = [\"maven\"], type = [ProjectType.APPLICATION, ProjectType.AUTOTESTS, ProjectType.LIBRARY])\nclass CustomSonar {\n    Script script\n    void run(context) {\n        script.sh \"echo 'Your custom logic of the stage'\"\n    }\n}\n

                                                                                        Info

                                                                                        There is the ability to redefine the predefined EDP stage as well as to create it from scratch, it depends on the name that is used in the @Stage annotation. For example, using name = \"sonar\" will redefine an existing sonar stage with the same name, but using name=\"new-sonar\" will create a new stage.

                                                                                        By default, the following stages are implemented in EDP:

                                                                                        • build
                                                                                        • build-image-from-dockerfile
                                                                                        • build-image
                                                                                        • build-image-kaniko
                                                                                        • checkout
                                                                                        • compile
                                                                                        • create-branch
                                                                                        • gerrit-checkout
                                                                                        • get-version
                                                                                        • git-tag
                                                                                        • push
                                                                                        • sonar
                                                                                        • sonar-cleanup
                                                                                        • tests
                                                                                        • trigger-job

                                                                                        Mandatory points:

                                                                                        • Defining a package com.epam.edp.customStages.impl.ci.impl.stageName;
                                                                                        • Importing classes com.epam.edp.stages.impl.ci.ProjectType and com.epam.edp.stages.impl.ci.Stage;
                                                                                        • Annotating \"Stage\" for class - @Stage(name = \"sonar\", buildTool = [\"maven\"], type = [ProjectType.APPLICATION, ProjectType.AUTOTESTS, ProjectType.LIBRARY]);
                                                                                        • A property with the type \"Script\";
                                                                                        • A void \"run\" method with the \"context\" input parameter.

                                                                                      3. Open Jenkins and make sure that all the changes are correct after the completion of the customized pipeline.

                                                                                      "},{"location":"user-guide/customize-ci-pipeline/#add-a-new-stage-using-shared-library-via-custom-global-pipeline-libraries","title":"Add a New Stage Using Shared Library via Custom Global Pipeline Libraries","text":"

                                                                                      Note

                                                                                      To add a new Custom Global Pipeline Library, please refer to the Add a New Custom Global Pipeline Library page.

                                                                                      To redefine any stage and add custom logic using global pipeline libraries, perform the steps below:

                                                                                      1. Open the GitHub repository:

                                                                                        • Create a directory with the name /src/com/epam/edp/customStages/impl/ci/impl/stageName/ in the library repository, for instance: /src/com/epam/edp/customStages/impl/ci/impl/emailNotify/;
                                                                                        • Add a Groovy file with another name to the same stages catalog, for instance \u2013 EmailNotify.groovy.
                                                                                      2. Copy the part of the pipeline framework logic that cannot be predefined.

                                                                                        Note

                                                                                        Pay attention to the appropriate annotation (EDP versions of all stages can be found on GitHub).

                                                                                        The stage logic structure is the following:

                                                                                        EmailNotify.groovy

                                                                                        package com.epam.edp.customStages.impl.ci.impl.emailNotify\n\nimport com.epam.edp.stages.impl.ci.ProjectType\nimport com.epam.edp.stages.impl.ci.Stage\n\n@Stage(name = \"email-notify\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass EmailNotify {\n    Script script\n    void run(context) {\n        // ------------------- Your custom logic here -------------------\n    }\n}\n
                                                                                      3. Open the default set of stages and add a new one into the Default Value field by saving the respective type {\"name\": \"email-notify\"}, save the changes: Add stage

                                                                                      4. Open Jenkins to check the pipeline; as soon as the checkout stage is passed, the new stage will appear in the pipeline: Check stage

                                                                                        Warning

                                                                                        To make this stage permanently present, please modify the job provisioner.

                                                                                      "},{"location":"user-guide/customize-ci-pipeline/#related-articles","title":"Related Articles","text":"
                                                                                      • Add a New Custom Global Pipeline Library
                                                                                      • Manage Jenkins CI Pipeline Job Provisioner
                                                                                      • Add Security Scanner
                                                                                      "},{"location":"user-guide/d-d-diagram/","title":"Delivery Dashboard Diagram","text":"

                                                                                      The Admin Console provides a general visualization of all the relations between the CD pipeline, stages, codebases, branches, and image streams; each of these elements has a specific icon. To open the current project diagram, navigate to the Delivery Dashboard Diagram section on the navigation bar:

                                                                                      Delivery dashboard

                                                                                      Info

                                                                                      All the requested changes (deletion, creation, adding) are displayed immediately on the Delivery Dashboard Diagram.

                                                                                      Possible actions when using dashboard:

                                                                                      • To zoom in or zoom out the diagram scale, scroll up / down.
                                                                                      • To move the diagram, click and drag.
                                                                                      • To move an element, click it and drag to the necessary place.
                                                                                      • To see the relations for one element, click this element.
                                                                                      • To see the whole diagram, click the empty space.
                                                                                      "},{"location":"user-guide/d-d-diagram/#related-articles","title":"Related Articles","text":"
                                                                                      • EDP Admin Console
                                                                                      "},{"location":"user-guide/dockerfile-stages/","title":"Use Dockerfile Linters for Code Review Pipeline","text":"

                                                                                      This section describes the dockerbuild-verify and dockerfile-lint stages that can be used in the Code Review pipeline.

                                                                                      These stages help to obtain quick feedback on the validity of the code in the Code Review pipeline in Kubernetes for all application types supported by EDP out of the box.

                                                                                      Add stages

                                                                                      Inspect the functions performed by the following stages:

                                                                                      1. dockerbuild-verify stage collects artifacts and builds an image from the Dockerfile without pushing it to the registry. This stage is intended to check that the image can be built.

                                                                                      2. dockerfile-lint stage launches the hadolint command in order to check the Dockerfile.
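                                                                                      To get the same feedback before pushing changes for review, the linting can also be reproduced locally; a minimal sketch, assuming hadolint is installed on the workstation and the Dockerfile is in the current directory:
                                                                                        hadolint Dockerfile\n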

                                                                                      "},{"location":"user-guide/dockerfile-stages/#related-articles","title":"Related Articles","text":"
                                                                                      • Use Terraform Library in EDP
                                                                                      • EDP Pipeline Framework
                                                                                      • Promote Docker Images From ECR to Docker Hub
                                                                                      • CI Pipeline for Container
                                                                                      "},{"location":"user-guide/ecr-to-docker-stages/","title":"Promote Docker Images From ECR to Docker Hub","text":"

                                                                                      This section contains the description of the ecr-to-docker stage, available in the Build pipeline.

                                                                                      The ecr-to-docker stage is intended to push Docker images collected from the Amazon ECR storage to Docker Hub repositories, where the images become accessible to everyone who wants to use them. This stage is optional and is designed for working with various EDP components.

                                                                                      Note

                                                                                      When pushing the image from ECR to Docker Hub using crane, the SHA-256 value remains unchanged.

                                                                                      To run the ecr-to-docker stage just for once, navigate to the Build with Parameters option, add this stage to the stages list, and click Build. To add the ecr-to-docker stage to the pipeline, modify the job provisioner.

                                                                                      Note

                                                                                      To properly push the Docker image from the ECR storage, the ecr-to-docker stage should follow the build-image-kaniko stage.

                                                                                        Add custom lib2
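                                                                                      For illustration, a hypothetical job provisioner stages definition with the ecr-to-docker stage placed right after build-image-kaniko (following the pattern shown for the Container Build pipeline) might look like this:
                                                                                        stages['Build-library-kaniko'] = '[{\"name\": \"checkout\"},{\"name\": \"get-version\"}' +\n',{\"name\": \"dockerfile-lint\"},{\"name\": \"build-image-kaniko\"},{\"name\": \"ecr-to-docker\"}' + \"${createJIMStage}\" + ',{\"name\": \"git-tag\"}]'\n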

                                                                                      The ecr-to-docker stage contains a specific script that launches the following actions:

                                                                                      1. Performs authorization in AWS ECR in the EDP private storage via awsv2.
                                                                                      2. Performs authorization in the Docker Hub.
                                                                                      3. Checks whether a similar image exists in the Docker Hub in order to avoid its overwriting.

                                                                                        • If a similar image exists in the Docker Hub, the script will return the message about it and stop the execution. The ecr-to-docker stage in the Build pipeline will be marked in red.
                                                                                        • If there is no similar image, the script will proceed to promote the image using crane.
                                                                                      "},{"location":"user-guide/ecr-to-docker-stages/#create-secret-for-ecr-to-docker-stage","title":"Create Secret for ECR-to-Docker Stage","text":"

                                                                                      The ecr-to-docker stage expects the authorization credentials to be added as a Kubernetes secret into the namespace where EDP is installed. To create the dockerhub-credentials secret, run the following command:

                                                                                        kubectl -n edp create secret generic dockerhub-credentials \\\n  --from-literal=accesstoken=<dockerhub_access_token> \\\n  --from-literal=account=<dockerhub_account_name> \\\n  --from-literal=username=<dockerhub_user_name>\n

                                                                                      Note

                                                                                      • The \u2039dockerhub_access_token\u203a should be created beforehand and in accordance with the official Docker Hub instruction.
                                                                                      • The \u2039dockerhub_account_name\u203a and \u2039dockerhub_user_name\u203a for the organization account repository will differ and be identical for the personal account repository.
                                                                                      • Pay attention that the Docker Hub repository for images uploading should be created beforehand and named by the following pattern: \u2039dockerhub_account_name\u203a/\u2039Application Name\u203a, where the \u2039Application Name\u203a should match the application name in the EDP Admin Console.
                                                                                      "},{"location":"user-guide/ecr-to-docker-stages/#related-articles","title":"Related Articles","text":"
                                                                                      • EDP Pipeline Framework
                                                                                      • Manage Access Token
                                                                                      • Manage Jenkins CI Pipeline Job Provisioner
                                                                                      "},{"location":"user-guide/git-server-overview/","title":"Manage Git Servers","text":"

The Git Server is responsible for integration with the Version Control System, whether it is GitHub, GitLab, or Gerrit.

                                                                                      The Git Server is set via the global.gitProvider parameter of the values.yaml file.

                                                                                      To view the current Git Server, you can open EDP -> Configuration -> Git Servers and inspect the following properties:

                                                                                      Git Server menu

• Git Server status and name - displays the Git Server name and its status, which depends on the result of the Git Server integration (Success/Failed).
• Git Server properties - displays the Git Server type, its host address, username, SSH/HTTPS port, and the name of the secret that contains the SSH key.
                                                                                      • Open documentation - opens the \"Manage Git Servers\" documentation page.
                                                                                      "},{"location":"user-guide/git-server-overview/#view-authentication-data","title":"View Authentication Data","text":"

To view the authentication data that is used to connect to the Git server, use the kubectl describe command as follows:

                                                                                      kubectl describe GitServer git_server_name -n edp\n
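The properties shown by kubectl describe come from the GitServer spec. As a rough sketch, a minimal manifest of such a resource might look like the one below; the apiVersion and field names are assumptions for illustration, not the exact schema:

apiVersion: v2.edp.epam.com/v1
kind: GitServer
metadata:
  name: github            # git_server_name used in the commands above
  namespace: edp
spec:
  gitProvider: github     # Git Server type (assumed field name)
  gitHost: github.com     # host address (assumed field name)
  gitUser: git            # username (assumed field name)
  sshPort: 22             # SSH port (assumed field name)
  httpsPort: 443          # HTTPS port (assumed field name)
  nameSshKeySecret: ci-github   # name of the secret that contains the SSH key (assumed field name)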
                                                                                      "},{"location":"user-guide/git-server-overview/#delete-git-server","title":"Delete Git Server","text":"

                                                                                      To remove a Git Server from the Git Servers list, utilize the kubectl delete command as follows:

                                                                                      kubectl delete GitServer git_server_name -n edp\n
                                                                                      "},{"location":"user-guide/git-server-overview/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Git Server
                                                                                      • Integrate GitHub/GitLab in Tekton
                                                                                      "},{"location":"user-guide/helm-release-deletion/","title":"Helm Release Deletion","text":"

                                                                                      The Helm release deletion stage provides the ability to remove Helm releases from the namespace.

                                                                                      Note

Note that this stage will remove all Helm releases from the namespace. To avoid losing important data, make the necessary backups before using this stage.

                                                                                      To remove Helm releases, follow the steps below:

1. Add the following step to the CD pipeline: {\"name\":\"helm-uninstall\",\"step_name\":\"helm-uninstall\"}. Alternatively, a custom job provisioner can be created with this step (see the sketch after this list).

                                                                                      2. Run the job. The pipeline script will remove Helm releases from the namespace.
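As a sketch, the stages list passed to the job provisioner could include the helm-uninstall step as the last entry; the surrounding stage names below are assumptions for illustration only:

[{\"name\": \"init\", \"step_name\": \"init\"}, {\"name\": \"deploy\", \"step_name\": \"deploy\"}, {\"name\": \"helm-uninstall\", \"step_name\": \"helm-uninstall\"}]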

                                                                                      "},{"location":"user-guide/helm-release-deletion/#related-articles","title":"Related Articles","text":"
                                                                                      • Customize CD Pipeline
                                                                                      • Manage Jenkins CD Pipeline Job Provisioner
                                                                                      "},{"location":"user-guide/helm-stages/","title":"Helm Chart Testing and Documentation Tools","text":"

                                                                                      This section contains the description of the helm-lint and helm-docs stages that can be used in the Code Review pipeline.

The stages help to quickly verify the validity of the Helm chart code and documentation in the Code Review pipeline.

                                                                                      Inspect the functions performed by the following stages:

1. The helm-lint stage launches the ct lint --charts deploy-templates/ command in order to validate the chart.

                                                                                        Helm lint

• chart_schema.yaml - this file contains the rules against which the chart validity is checked. These rules are used for the YAML schema validation.

                                                                                        See the current scheme:

                                                                                        View: chart_schema.yaml
                                                                                        name: str()\nhome: str()\nversion: str()\ntype: str()\napiVersion: str()\nappVersion: any(str(), num())\ndescription: str()\nkeywords: list(str(), required=False)\nsources: list(str(), required=True)\nmaintainers: list(include('maintainer'), required=True)\ndependencies: list(include('dependency'), required=False)\nicon: str(required=False)\nengine: str(required=False)\ncondition: str(required=False)\ntags: str(required=False)\ndeprecated: bool(required=False)\nkubeVersion: str(required=False)\nannotations: map(str(), str(), required=False)\n---\nmaintainer:\nname: str(required=True)\nemail: str(required=False)\nurl: str(required=False)\n---\ndependency:\nname: str()\nversion: str()\nrepository: str()\ncondition: str(required=False)\ntags: list(str(), required=False)\nenabled: bool(required=False)\nimport-values: any(list(str()), list(include('import-value')), required=False)\nalias: str(required=False)\n
• ct.yaml - this file contains settings that skip the validation of certain rules (a sketch is provided at the end of this section).

                                                                                        To get more information about the chart testing lint, please refer to the ct_lint documentation.

2. The helm-docs stage helps to validate the generated documentation for the Helm deployment templates in the Code Review pipeline for all types of applications supported by EDP. This stage launches the helm-docs command in order to check that the chart documentation file exists and verify that it is up to date.

                                                                                        Requirements: helm-docs v1.10.0

                                                                                        Note

                                                                                        The helm-docs stage is optional. To extend the pipeline with an additional stage, please refer to the Configure Code Review Pipeline page.

                                                                                        Helm docs

                                                                                        Note

                                                                                        The example of the generated documentation.
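For illustration, a minimal ct.yaml of the kind referenced by the helm-lint stage and a local helm-docs invocation might look as follows; the values and paths are assumptions, not the exact EDP configuration:

# ct.yaml sketch (chart-testing configuration); values are illustrative
chart-dirs:
  - deploy-templates
validate-maintainers: false
check-version-increment: false

# Regenerate and verify the chart documentation locally (path is an assumption)
helm-docs --chart-search-root deploy-templates/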

                                                                                      "},{"location":"user-guide/helm-stages/#related-articles","title":"Related Articles","text":"
                                                                                      • EDP Pipeline Framework
                                                                                      "},{"location":"user-guide/infrastructure/","title":"Manage Infrastructures","text":"

This section describes the actions that can be performed with newly added or existing infrastructures.

                                                                                      "},{"location":"user-guide/infrastructure/#check-and-remove-application","title":"Check and Remove Application","text":"

                                                                                      As soon as the infrastructure is successfully provisioned, the following will be created:

                                                                                      • Code Review and Build pipelines in Jenkins/Tekton for this application. The Build pipeline will be triggered automatically if at least one environment is already added.
                                                                                      • A new project in Gerrit or another VCS.
                                                                                      • SonarQube integration will be available after the Build pipeline in Jenkins/Tekton is passed.
                                                                                      • Nexus Repository Manager will be available after the Build pipeline in Jenkins/Tekton is passed as well.

The added infrastructure will be listed in the Infrastructures list allowing you to do the following:

                                                                                      Applications menu

• Infrastructure status - displays the infrastructure status. Can be red or green depending on whether the EDP Portal managed to connect to the Git Server with the specified credentials.
• Infrastructure name (clickable) - displays the infrastructure name set during its creation.
• Open documentation - opens the documentation that leads to this page.
• Enable filtering - enables filtering by the infrastructure name and the namespace where this custom resource is located.
                                                                                      • Create new infrastructure - displays the Create new component menu.
• Edit infrastructure - edit the infrastructure by selecting the options icon next to its name in the infrastructures list, and then selecting Edit. For details see the Edit Existing Infrastructure section.
                                                                                      • Delete infrastructure - remove infrastructure by selecting the options icon next to its name in the infrastructures list, and then selecting Delete.

                                                                                      There are also options to sort the infrastructures:

                                                                                      • Sort the existing infrastructures in a table by clicking the sorting icons in the table header. Sort the infrastructures alphabetically by their name, language, build tool, framework, and CI tool. You can also sort the infrastructures by their status: Created, Failed, or In progress.
• Select a number of infrastructures displayed per page (15, 25 or 50 rows) and navigate between pages if the number of infrastructures exceeds the capacity of a single page.
                                                                                      "},{"location":"user-guide/infrastructure/#edit-existing-infrastructure","title":"Edit Existing Infrastructure","text":"

                                                                                      EDP Portal provides the ability to enable, disable or edit the Jira Integration functionality for infrastructures.

                                                                                      1. To edit an infrastructure directly from the infrastructures overview page or when viewing the infrastructure data:

                                                                                        • Select Edit in the options icon menu:

                                                                                        Edit infrastructure on the Infrastructures overview page

                                                                                        Edit infrastructure when viewing the infrastructure data

                                                                                        • The Edit Infrastructure dialog opens.
                                                                                      2. To enable Jira integration, in the Edit Infrastructure dialog do the following:

                                                                                        Edit infrastructure

                                                                                        a. Mark the Integrate with Jira server check box and fill in the necessary fields. Please see steps d-h on the Add Infrastructure page.

                                                                                        b. Select the Apply button to apply the changes.

                                                                                        c. Navigate to Jenkins/Tekton and add the create-jira-issue-metadata stage in the Build pipeline. Also add the commit-validate stage in the Code Review pipeline.

                                                                                      3. To disable Jira integration, in the Edit Infrastructure dialog do the following:

                                                                                        a. Unmark the Integrate with Jira server check box.

                                                                                        b. Select the Apply button to apply the changes.

                                                                                        c. Navigate to Jenkins/Tekton and remove the create-jira-issue-metadata stage in the Build pipeline. Also remove the commit-validate stage in the Code Review pipeline.

                                                                                      4. To create, edit and delete infrastructure branches, please refer to the Manage Branches page.

                                                                                      "},{"location":"user-guide/infrastructure/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Infrastructure
                                                                                      • Manage Branches
                                                                                      "},{"location":"user-guide/library/","title":"Manage Libraries","text":"

This section describes the actions that can be performed with newly added or existing libraries.

                                                                                      "},{"location":"user-guide/library/#check-and-remove-library","title":"Check and Remove Library","text":"

                                                                                      As soon as the library is successfully provisioned, the following will be created:

                                                                                      • Code Review and Build pipelines in Jenkins/Tekton for this library. The Build pipeline will be triggered automatically if at least one environment is already added.
                                                                                      • A new project in Gerrit or another VCS.
                                                                                      • SonarQube integration will be available after the Build pipeline in Jenkins/Tekton is passed.
                                                                                      • Nexus Repository Manager will be available after the Build pipeline in Jenkins/Tekton is passed as well.

                                                                                      Info

                                                                                      To navigate quickly to OpenShift, Jenkins/Tekton, Gerrit, SonarQube, Nexus, and other resources, click the Overview section on the navigation bar and hit the necessary link.

The added library will be listed in the Libraries list allowing you to do the following:

                                                                                      Library menu

                                                                                      1. Create another library by clicking the plus sign icon in the lower-right corner of the screen and performing the same steps as described on the Add Library page.

                                                                                      2. Open library data by clicking its link name. Once clicked, the following blocks will be displayed:

• Library status - displays the library status. Can be red or green depending on whether the EDP Portal managed to connect to the Git Server with the specified credentials.
• Library name (clickable) - displays the library name set during its creation.
• Open documentation - opens the documentation that leads to this page.
• Enable filtering - enables filtering by the library name and the namespace where this custom resource is located.
                                                                                      • Create new library - displays the Create new component menu.
                                                                                      • Edit library - edit the library by selecting the options icon next to its name in the libraries list, and then selecting Edit. For details see the Edit Existing Library section.
• Delete Library - remove the library together with the corresponding database and Jenkins/Tekton pipelines by selecting the options icon next to its name in the libraries list, and then selecting Delete.

                                                                                        Note

                                                                                        The library that is used in a CD pipeline cannot be removed.

                                                                                      There are also options to sort the libraries:

                                                                                      • Sort the existing libraries in a table by clicking the sorting icons in the table header. Sort the libraries alphabetically by their name, language, build tool, framework, and CI tool. You can also sort the libraries by their status: Created, Failed, or In progress.
                                                                                      • Select a number of libraries displayed per page (15, 25 or 50 rows) and navigate between pages if the number of libraries exceeds the capacity of a single page.
                                                                                      "},{"location":"user-guide/library/#edit-existing-library","title":"Edit Existing Library","text":"

                                                                                      EDP Portal provides the ability to enable, disable or edit the Jira Integration functionality for libraries.

                                                                                      1. To edit a library directly from the Libraries overview page or when viewing the library data:

                                                                                        • Select Edit in the options icon menu:

                                                                                          Edit library on the libraries overview page

                                                                                          Edit library when viewing the library data

                                                                                        • The Edit Library dialog opens.
                                                                                      2. To enable Jira integration, in the Edit Library dialog do the following:

                                                                                        Edit library

                                                                                        a. Mark the Integrate with Jira server check box and fill in the necessary fields. Please see steps d-h on the Add Library page.

                                                                                        b. Select the Apply button to apply the changes.

                                                                                        c. Navigate to Jenkins/Tekton and add the create-jira-issue-metadata stage in the Build pipeline. Also add the commit-validate stage in the Code Review pipeline.

                                                                                      3. To disable Jira integration, in the Edit Library dialog do the following:

                                                                                        a. Unmark the Integrate with Jira server check box.

                                                                                        b. Select the Apply button to apply the changes.

                                                                                        c. Navigate to Jenkins/Tekton and remove the create-jira-issue-metadata stage in the Build pipeline. Also remove the commit-validate stage in the Code Review pipeline.

                                                                                        As a result, the necessary changes will be applied.

                                                                                      4. To create, edit and delete library branches, please refer to the Manage Branches page.

                                                                                      "},{"location":"user-guide/library/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Library
                                                                                      • Manage Branches
                                                                                      "},{"location":"user-guide/manage-branches/","title":"Manage Branches","text":"

                                                                                      This page describes how to manage branches in the created component, whether it is an application, library, autotest or infrastructure.

                                                                                      "},{"location":"user-guide/manage-branches/#add-new-branch","title":"Add New Branch","text":"

                                                                                      Note

When working with libraries, pay attention when specifying the branch name: the branch name is involved in forming the library version, so it must comply with the semantic versioning rules for the library.

When adding a component, the default branch is the master branch. In order to add a new branch, follow the steps below:

                                                                                      1. Navigate to the Branches block by clicking the component name link in the Components list.

                                                                                      2. Select the options icon related to the necessary branch and then select Create:

                                                                                        Add branch

3. Click Edit YAML in the upper-right corner of the dialog to open the YAML editor and add a branch (a sketch of such a manifest is provided after these steps). Otherwise, fill in the required fields in the dialog:

                                                                                        New branch

                                                                                        a. Release Branch - select the Release Branch check box if you need to create a release branch.

                                                                                        b. Branch name - type the branch name. Pay attention that this field remains static if you create a release branch. For the Clone and Import strategies: if you want to use the existing branch, enter its name into this field.

                                                                                        c. From Commit Hash - paste the commit hash from which the branch will be created. For the Clone and Import strategies: Note that if the From Commit Hash field is empty, the latest commit from the branch name will be used.

                                                                                        d. Branch version - enter the necessary branch version for the artifact. The Release Candidate (RC) postfix is concatenated to the branch version number.

                                                                                        e. Default branch version - type the branch version that will be used in a master branch after the release creation. The Snapshot postfix is concatenated to the master branch version number.

f. Click the Apply button and wait until the new branch is added to the list.

                                                                                        Info

Adding a new branch is described above in the context of the edp versioning type.

The default component repository is cloned and switched to the newly indicated version before the build; the new version is not committed to the repository, so the existing repository keeps the default version.
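As a hedged sketch, the manifest that the Edit YAML editor might expose for a new branch could look roughly like the one below; the apiVersion and field names are assumptions for illustration and correspond to the dialog fields described in the steps above:

apiVersion: v2.edp.epam.com/v1
kind: CodebaseBranch
metadata:
  name: my-app-release-1.0
  namespace: edp
spec:
  codebaseName: my-app        # the component the branch belongs to (assumed field name)
  branchName: release/1.0     # Branch name
  fromCommit: ''              # From Commit Hash (empty means the latest commit)
  release: true               # Release Branch check box
  version: 1.0.0-RC           # Branch version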

                                                                                      "},{"location":"user-guide/manage-branches/#build-branch","title":"Build Branch","text":"

In order to build a branch from the latest commit, do the following:

                                                                                      1. Navigate to the Branches block by clicking the library name link in the Libraries list.
                                                                                      2. Select the options icon related to the necessary branch and then select Build:

                                                                                        Build branch

                                                                                      The pipeline run status is displayed near the branch name in the Branches block:

                                                                                      Pipeline run status in EDP Portal

                                                                                      The corresponding item appears on the Tekton Dashboard in the PipelineRuns section:

                                                                                      Pipeline run status in Tekton
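If needed, the same pipeline runs can also be listed from the command line; the edp namespace below is an assumption, use the namespace where EDP is installed:

kubectl get pipelineruns -n edp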

                                                                                      "},{"location":"user-guide/manage-branches/#delete-branch","title":"Delete Branch","text":"

                                                                                      Note

                                                                                      The default master branch cannot be removed.

                                                                                      In order to delete the added branch with the corresponding record in the EDP Portal database, do the following:

1. Navigate to the Branches block by clicking the component name link in the Components list.
                                                                                      2. Select the options icon related to the necessary branch and then select Delete:

                                                                                        Delete branch

                                                                                      "},{"location":"user-guide/manage-branches/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Application
                                                                                      • Add Library
                                                                                      • Add Autotest
                                                                                      "},{"location":"user-guide/marketplace/","title":"Marketplace Overview","text":"

                                                                                      The EDP Marketplace offers a range of Templates, predefined tools and settings for creating software. These Templates speed up development, minimize errors, and ensure consistency. A key EDP Marketplace feature is customization. Organizations can create and share their own Templates, finely tuned to their needs. Each Template serves as a tailored blueprint of tools and settings.

                                                                                      These tailored Templates include preset CI/CD pipelines, automating your development workflows. From initial integration to final deployment, these processes are efficiently managed. Whether for new applications or existing ones, these templates enhance processes, save time, and ensure consistency.

To see the Marketplace section, navigate to the Main menu -> EDP -> Marketplace. The general view of the Marketplace section is described below:

                                                                                      Marketplace section (listed view)

• Marketplace templates - all the components the marketplace can offer;
                                                                                      • Template properties - the item summary that shows the type, category, language, framework, build tool and maturity;
• Enable/disable filters - allows users to enable or disable searching by the item name or the namespace it is available in;
                                                                                      • Change view - allows switching from the listed view to the tiled one and vice versa. See the screenshot below for details.

                                                                                      There is also a possibility to switch into the tiled view instead of the listed one:

                                                                                      Marketplace section (tiled view)

                                                                                      To view the details of a marketplace item, simply click on its name:

                                                                                      Item details

The details window shows supplemental information, such as the item's author, keywords, release version, and the link to the repository it is located in. The window also contains the Create from template button that allows users to create a component from the chosen template. The procedure of creating new components is described on the Add Component via Marketplace page.

                                                                                      "},{"location":"user-guide/marketplace/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Component via Marketplace
                                                                                      • Add Application
                                                                                      • Add Library
                                                                                      • Add Infrastructure
                                                                                      "},{"location":"user-guide/opa-stages/","title":"Use Open Policy Agent","text":"

                                                                                      Open Policy Agent (OPA) is a policy engine that provides:

                                                                                      • High-level declarative policy language Rego;
                                                                                      • API and tooling for policy execution.

EPAM Delivery Platform provides Open Policy Agent support, allowing to work with Open Policy Agent bundles that are processed by means of stages in the Code Review and Build pipelines. These pipelines are expected to be created after the Rego OPA Library is added.

                                                                                      "},{"location":"user-guide/opa-stages/#code-review-pipeline-stages","title":"Code Review Pipeline Stages","text":"

                                                                                      In the Code Review pipeline, the following stages are available:

                                                                                      1. checkout stage, a standard step during which all files are checked out from a selected branch of the Git repository.

                                                                                      2. tests stage containing a script that performs the following actions:

                                                                                        2.1. Runs policy tests.

                                                                                        2.2. Converts OPA test results into JUnit format.

                                                                                        2.3. Publishes JUnit-formatted results to Jenkins.

                                                                                      "},{"location":"user-guide/opa-stages/#build-pipeline-stages","title":"Build Pipeline Stages","text":"

                                                                                      In the Build pipeline, the following stages are available:

                                                                                      1. checkout stage, a standard step during which all files are checked out from a selected branch of the Git repository.

2. get-version optional stage, a step where the library version is determined either via:

                                                                                        2.1. Standard EDP versioning functionality.

2.2. A manually specified version. In this case, a .manifest file in the root directory MUST be provided. The file must contain a JSON document with the revision field. Minimal example: { \"revision\": \"1.0.0\" }.

3. tests stage containing a script that performs the following actions (see the sketch after this list):

  3.1. Runs policy tests.

  3.2. Converts OPA test results into JUnit format.

  3.3. Publishes JUnit-formatted results to Jenkins.

4. git-tag stage, a standard step where the git branch is tagged with a version.
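As a rough sketch of what the tests stage does, policy tests can also be run locally with the OPA CLI; the path below is an assumption, and converting the results into JUnit format is handled separately by the stage script:

# Run all Rego policy tests found in the repository root
opa test -v .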

                                                                                      "},{"location":"user-guide/opa-stages/#related-articles","title":"Related Articles","text":"
                                                                                      • EDP Pipeline Framework
                                                                                      "},{"location":"user-guide/pipeline-framework/","title":"EDP Pipeline Framework","text":"

                                                                                      This chapter provides detailed information about the EDP pipeline framework concepts and parts, as well as the accurate data about the Code Review, Build and Deploy pipelines with the respective stages.

                                                                                      "},{"location":"user-guide/pipeline-framework/#edp-pipeline-framework-overview","title":"EDP Pipeline Framework Overview","text":"

                                                                                      Note

The whole logic is applied to Jenkins as it is the main tool for organizing CI/CD processes.

                                                                                      EDP pipeline framework basic

                                                                                      The general EDP Pipeline Framework consists of several parts:

• Jenkinsfile - a text file that keeps the definition of a Jenkins Pipeline and is checked into source control. Every job has its Jenkinsfile stored in the specific application repository and in Jenkins as plain text. The behavior logic of the pipelines can be easily customized by modifying the source code, which is always copied to the EDP repository after the EDP installation.

                                                                                      Jenkinsfile example

• Loading Shared Libraries - a part where every job loads libraries with the help of the shared libraries mechanism for Jenkins, which allows creating reproducible pipelines, writing them uniformly, and managing the update process. There are two main libraries: EDP Pipelines, with the common logic for the main Code Review, Build, and Deploy pipelines, and the EDP Stages library, which keeps the description of the stages for every pipeline.
                                                                                      • Run Stages - a part where the predefined default stages are launched.

                                                                                      Pipeline script

                                                                                      "},{"location":"user-guide/pipeline-framework/#cicd-jobs-comparison","title":"CI/CD Jobs Comparison","text":"

Explore the CI and CD job comparison. Please note that the order of the dynamic stages can be changed, whereas the order of the predefined stages in the reference pipeline cannot, i.e. only the predefined set of stages can be run.

                                                                                      CI/CD jobs comparison

                                                                                      "},{"location":"user-guide/pipeline-framework/#context","title":"Context","text":"

Context - a variable that stores and transfers between stages all the parameters that the pipeline uses during execution.

                                                                                      1. The context type is \"Map\".
                                                                                      2. Each stage has input and output context.
                                                                                      3. Each stage has a mandatory input context.

                                                                                      Note

If the input context isn't transferred, the stage will fail.

                                                                                      "},{"location":"user-guide/pipeline-framework/#annotations-for-cicd-stages","title":"Annotations for CI/CD Stages","text":"

                                                                                      Annotation for CI Stages:

                                                                                      • The annotation type is \"Map\";
                                                                                      • The annotation consists of the name, buildTool, and codebaseType.

                                                                                      Annotation for CD Stages:

                                                                                      • The annotation type is \"Map\";
                                                                                      • The annotation consists of a name.
                                                                                      "},{"location":"user-guide/pipeline-framework/#code-review-pipeline","title":"Code Review Pipeline","text":"

                                                                                      CodeReview() \u2013 a function that allows using the EDP implementation for the Code Review pipeline.

                                                                                      Note

                                                                                      All values of different parameters that are used during the pipeline execution are stored in the \"Map\" context.

                                                                                      The Code Review pipeline consists of several steps:

                                                                                      On the master:

                                                                                      • Initialization of all objects (Platform, Job, Gerrit, Nexus, Sonar, Application, StageFactory) and loading of the default implementations of EDP stages.

                                                                                      On a particular Jenkins agent that depends on the build tool:

                                                                                      • Creating workdir for application sources;
                                                                                      • Loading build tool implementation for a particular application;
• Running all the stages in a loop, either in parallel or one by one.
                                                                                      "},{"location":"user-guide/pipeline-framework/#code-review-pipeline-overview","title":"Code Review Pipeline Overview","text":"

                                                                                      Using in pipelines - @Library(['edp-library-pipelines@version'])

                                                                                      The corresponding enums, interfaces, classes, and their methods can be used separately from the EDP Pipelines library function (please refer to Table 1 and Table 2).

                                                                                      Table 1. Enums and Interfaces with the respective properties, methods, and examples.

                                                                                      Enums Interfaces PlatformType: - OPENSHIFT - KUBERNETES JobType: - CODEREVIEW - BUILD - DEPLOY BuildToolType: - MAVEN - GRADLE - NPM - DOTNET Platform() - contains methods for working with platform CLI. At the moment only OpenShift is supported. Properties: Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Methods: getJsonPathValue(String k8s_kind, String k8s_kind_name, String jsonPath): return String value of specific parameter of particular object using jsonPath utility. Example: context.platform.getJsonPathValue(''cm'',''project-settings'', ''.data.username''). BuildTool() - contains methods for working with different buildTool from ENUM BuildToolType. Should be invoked on Jenkins build agents. Properties: Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Nexus object - Object of class Nexus. Methods: init: return parameters of buildTool that are needed for running stages. Example: context.buildTool = new BuildToolFactory().getBuildToolImpl(context.application.config.build_tool, this,context.nexus) context.buildTool.init().

                                                                                      Table 2. Classes with the respective properties, methods, and examples.

                                                                                      Classes Description (properties, methods, and examples) PlatformFactory() - Class that contains methods getting an implementation of CLI of the platform. At the moment OpenShift and Kubernetes are supported. Methods: getPlatformImpl(PlatformType platform, Script script): return Class Platform. Example: context.platform = new PlatformFactory().getPlatformImpl(PlatformType.OPENSHIFT, this). Application(String name, Platform platform, Script script) - Class that describes the application object. Properties: Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Platform platform - Object of a class Platform(). String name - Name for the application for creating an object. Map config - Map of configuration settings for the particular application that is loaded from config map project-settings. String version - Application version, initially empty. Is set on the get-version step. String deployableModule - The name of the deployable module for multi-module applications, initially empty. String buildVersion - Version of the built artifact, contains build number of Job initially empty. String deployableModuleDir - The name of deployable module directory for multi-module applications, initially empty. Array imageBuildArgs - List of arguments for building an application Docker image. Methods: setConfig(String gerrit_autouser, String gerrit_host, String gerrit_sshPort, String gerrit_project): set the config property with values from config map. Example: context.application = new Application(context.job, context.gerrit.project, context.platform, this) context.application.setConfig(context.gerrit.autouser, context.gerrit.host, context.gerrit.sshPort, context.gerrit.project) Job(type: JobType.value, platform: Platform, script: Script) - Class that describes the Gerrit tool. Properties: Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Platform platform - Object of a class Platform(). JobType.value type. String deployTemplatesDirectory - The name of the directory in application repository where deploy templates are located. It can be set for a particular Job through DEPLOY_TEMPLATES_DIRECTORY parameter. String edpName - The name of the EDP Project. Map stages - Contains all stages in JSON format that is retrieved from Jenkins job env variable. String envToPromote - The name of the environment for promoting images. Boolean promoteImages - Defines whether images should be promoted or not. Methods: getParameterValue(String parameter, String defaultValue = null): return parameter of ENV variable of Jenkins job. init(): set all the properties of the Job object. setDisplayName(String displayName): set display name of the Jenkins job. setDescription(String description, Boolean addDescription = false): set new or add to the existing description of the Jenkins job. printDebugInfo(Map context): print context info to the log of Jenkins' job. runStage(String stage_name, Map context): run the particular stage according to its name. Example: context.job = new Job(JobType.CODEREVIEW.value, context.platform, this) context.job.init() context.job.printDebugInfo(context) context.job.setDisplayName(\"test\") context.job.setDescription(\"Name: ${context.application.config.name}\") Gerrit(Job job, Platform platform, Script script) - Class that describes the Gerrit tool. 
Properties: Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().Job job - Object of a class Job().String credentialsId - Credential Id in Jenkins for Gerrit.String autouser - Username of an auto user in Gerrit for integration with Jenkins.String host - Gerrit host.String project - the project name of the built application.String branch - branch to build the application from.String changeNumber - change number of Gerrit commit.String changeName - change name of Gerrit commit.String refspecName - refspecName of Gerrit commit.String sshPort - Gerrit ssh port number.String patchsetNumber - patchsetNumber of Gerrit commit.Methods: init(): set all the properties of Gerrit object. Example: context.gerrit = new Gerrit(context.job, context.platform, this) context.gerrit.init() Nexus(Job job, Platform platform, Script script) - Class that describes the Nexus tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().Job job - Object of a class Job().String autouser - Username of an auto user in Nexus for integration with Jenkins.String credentialsId - Credential Id in Jenkins for Nexus.String host - Nexus host.String port - Nexus http(s) port.String repositoriesUrl - Base URL of repositories in Nexus.String restUrl - URL of Rest API.Methods:init(): set all the properties of Nexus objectExample: context.nexus = new Nexus(context.job, context.platform, this) context.nexus.init() Sonar(Job job, Platform platform, Script script) - Class that describes the Sonar tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().Job job - Object of a class Job().String route - External route of the sonar application.Methods:init(): set all the properties of Sonar objectExample: context.sonar = new Sonar(context.job, context.platform, this) context.sonar.init()"},{"location":"user-guide/pipeline-framework/#code-review-pipeline-stages","title":"Code Review Pipeline Stages","text":"

Each EDP stage implementation has a run method that takes the \"Map\" context with different keys as a required input parameter. Some stages implement the logic for several build tools and application types, while others are specific.

                                                                                      The Code Review pipeline includes the following default stages: Checkout \u2192 Gerrit Checkout \u2192 Compile \u2192 Tests \u2192 Sonar.

                                                                                      Info

                                                                                      To get the full description of every stage, please refer to the EDP Stages Framework section.

                                                                                      "},{"location":"user-guide/pipeline-framework/#how-to-redefine-or-extend-the-edp-pipeline-stages-library","title":"How to Redefine or Extend the EDP Pipeline Stages Library","text":"

                                                                                      Inspect the points below to redefine or extend the EDP Pipeline Stages Library:

• Create a \u201cstage\u201d folder in your App repository.
                                                                                      • Create a Groovy file with a meaningful name for the custom stage description. For instance \u2013 CustomBuildMavenApplication.groovy.
                                                                                      • Describe the stage logic.

                                                                                      Redefinition:

                                                                                      import com.epam.edp.stages.ProjectType\nimport com.epam.edp.stages.Stage\n@Stage(name = \"compile\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass CustomBuildMavenApplication {\nScript script\nvoid run(context) {\nscript.sh \"echo 'Your custom logic of the stage'\"\n}\n}\nreturn CustomBuildMavenApplication\n

                                                                                      Extension:

                                                                                      import com.epam.edp.stages.ProjectType\nimport com.epam.edp.stages.Stage\n@Stage(name = \"new-stage\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass NewStageMavenApplication {\nScript script\nvoid run(context) {\nscript.sh \"echo 'Your custom logic of the stage'\"\n}\n}\nreturn NewStageMavenApplication\n
                                                                                      "},{"location":"user-guide/pipeline-framework/#using-edp-stages-library-in-the-pipeline","title":"Using EDP Stages Library in the Pipeline","text":"

In order to use the EDP stages, the created pipeline must meet certain requirements, so a developer has to do the following:

                                                                                      • import library - @Library(['edp-library-stages'])
                                                                                      • import StageFactory class - import com.epam.edp.stages.StageFactory
                                                                                      • define context Map \u2013 context = [:]
                                                                                      • define stagesFactory instance and load EDP stages:
                                                                                        context.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n

After that, any EDP stage can be run by defining the necessary context first: context.factory.getStage(\"checkout\",\"maven\",\"application\").run(context)

                                                                                      For instance, the pipeline can look like:

                                                                                      @Library(['edp-library-stages']) _\n\nimport com.epam.edp.stages.StageFactory\nimport org.apache.commons.lang.RandomStringUtils\n\ncontext = [:]\n\nnode('maven') {\ncontext.workDir = new File(\"/tmp/${RandomStringUtils.random(10, true, true)}\")\ncontext.workDir.deleteDir()\n\ncontext.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n\ncontext.gerrit = [:]\ncontext.application = [:]\ncontext.application.config = [:]\ncontext.buildTool = [:]\ncontext.nexus = [:]\n\n\n\nstage(\"checkout\") {\ncontext.gerrit.branch = \"master\"\ncontext.gerrit.credentialsId = \"jenkins\"\ncontext.application.config.cloneUrl = \"ssh://jenkins@gerrit:32092/sit-718-cloned-java-maven-project\"\ncontext.factory.getStage(\"checkout\",\"maven\",\"application\").run(context)\n}\n\n\nstage(\"compile\") {\ncontext.buildTool.command = \"mvn\"\ncontext.nexus.credentialsId = \"nexus\"\ncontext.factory.getStage(\"compile\",\"maven\",\"application\").run(context)\n}\n}\n

                                                                                      Or in a declarative way:

                                                                                      @Library(['edp-library-stages']) _\n\nimport com.epam.edp.stages.StageFactory\nimport org.apache.commons.lang.RandomStringUtils\n\ncontext = [:]\n\npipeline {\nagent { label 'maven' }\nstages {\nstage('Init'){\nsteps {\nscript {\ncontext.workDir = new File(\"/tmp/${RandomStringUtils.random(10, true, true)}\")\ncontext.workDir.deleteDir()\ncontext.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n\ncontext.gerrit = [:]\ncontext.application = [:]\ncontext.application.config = [:]\ncontext.buildTool = [:]\ncontext.nexus = [:]\n}\n}\n}\n\nstage(\"Checkout\") {\nsteps {\nscript {\ncontext.gerrit.branch = \"master\"\ncontext.gerrit.credentialsId = \"jenkins\"\ncontext.application.config.cloneUrl = \"ssh://jenkins@gerrit:32092/sit-718-cloned-java-maven-project\"\ncontext.factory.getStage(\"checkout\",\"maven\",\"application\").run(context)\n}\n}\n}\n\nstage('Compile') {\nsteps {\nscript {\ncontext.buildTool.command = \"mvn\"\ncontext.nexus.credentialsId = \"nexus\"\ncontext.factory.getStage(\"compile\",\"maven\",\"application\").run(context)\n}\n}\n}\n}\n}\n
                                                                                      "},{"location":"user-guide/pipeline-framework/#build-pipeline","title":"Build Pipeline","text":"

                                                                                      Build() \u2013 a function that allows using the EDP implementation for the Build pipeline. All values of different parameters that are used during the pipeline execution are stored in the \"Map\" context.

                                                                                      The Build pipeline consists of several steps:

                                                                                      On the master:

                                                                                      • Initialization of all objects (Platform, Job, Gerrit, Nexus, Sonar, Application, StageFactory) and loading default implementations of EDP stages.

                                                                                      On a particular Jenkins agent that depends on the build tool:

• Creating a workdir for the application sources;
• Loading the build tool implementation for the particular application;
• Running all the stages in a loop, either in parallel or one by one (see the minimal Jenkinsfile sketch after this list).
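
A minimal Jenkinsfile sketch of this flow, assuming the default Build() implementation is used as-is and that both shared libraries referenced throughout this guide are loaded, could look like:

@Library(['edp-library-stages', 'edp-library-pipelines']) _\n\n// Delegate the whole flow to the default EDP Build pipeline implementation.\nBuild()\n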
                                                                                      "},{"location":"user-guide/pipeline-framework/#build-pipeline-overview","title":"Build Pipeline Overview","text":"

                                                                                      Using in pipelines - @Library(['edp-library-pipelines@version'])

                                                                                      The corresponding enums, interfaces, classes, and their methods can be used separately from the EDP Pipelines library function (please refer to Table 3 and Table 4).

                                                                                      Table 3. Enums and Interfaces with the respective properties, methods, and examples. Enums Interfaces PlatformType:- OPENSHIFT- KUBERNETESJobType:- CODEREVIEW- BUILD- DEPLOYBuildToolType:- MAVEN- GRADLE- NPM- DOTNET Platform() - contains methods for working with platform CLI. At the moment only OpenShift is supported.Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Methods:getJsonPathValue(String k8s_kind, String k8s_kind_name, String jsonPath): return String value of specific parameter of particular object using jsonPath utility.Example:context.platform.getJsonPathValue(\"cm\",\"project-settings\",\".data.username\")BuildTool() - contains methods for working with different buildTool from ENUM BuildToolType. Should be invoked on Jenkins build agents.Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Nexus object - Object of class Nexus. See description below:Methods:init: return parameters of buildTool that are needed for running stages.Example:context.buildTool = new BuildToolFactory().getBuildToolImpl(context.application.config.build_tool, this,context.nexus)context.buildTool.init()
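
As a quick illustration, the two interface examples from Table 3 can be combined into the following sketch; it assumes the context.platform, context.application, and context.nexus objects have already been initialized as described in Table 4, and it must run on a Jenkins build agent:

// Resolve a value from the project-settings config map via the platform CLI wrapper.\ndef username = context.platform.getJsonPathValue('cm', 'project-settings', '.data.username')\n\n// Pick the build tool implementation for the application and initialize it.\ncontext.buildTool = new BuildToolFactory().getBuildToolImpl(context.application.config.build_tool, this, context.nexus)\ncontext.buildTool.init()\n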

                                                                                      Table 4. Classes with the respective properties, methods, and examples.

                                                                                      Classes Description (properties, methods, and examples) PlatformFactory() - Class that contains methods getting an implementation of CLI of the platform. At the moment OpenShift and Kubernetes are supported. Methods:getPlatformImpl(PlatformType platform, Script script): return Class PlatformExample:context.platform = new PlatformFactory().getPlatformImpl(PlatformType.OPENSHIFT, this) Application(String name, Platform platform, Script script) - Class that describes the application object. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().String name - Name for the application for creating an object.Map config - Map of configuration settings for the particular application that is loaded from config map project-settings.String version - Application version, initially empty. Is set on the get-version step.String deployableModule - The name of the deployable module for multi-module applications, initially empty.String buildVersion - Version of the built artifact, contains build number of Job initially empty.String deployableModuleDir - The name of deployable module directory for multi-module applications, initially empty.Array imageBuildArgs - List of arguments for building the application Docker image.Methods:setConfig(String gerrit_autouser, String gerrit_host, String gerrit_sshPort, String gerrit_project): set the config property with values from config map.Example:context.application = new Application(context.job, context.gerrit.project, context.platform, this) context.application.setConfig(context.gerrit.autouser, context.gerrit.host, context.gerrit.sshPort, context.gerrit.project) Job(type: JobType.value, platform: Platform, script: Script) - Class that describes the Gerrit tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().JobType.value type.String deployTemplatesDirectory - The name of the directory in application repository, where deploy templates are located. It can be set for a particular Job through DEPLOY_TEMPLATES_DIRECTORY parameter.String edpName - The name of the EDP Project.Map stages - Contains all stages in JSON format that is retrieved from Jenkins job env variable.String envToPromote - The name of the environment for promoting images.Boolean promoteImages - Defines whether images should be promoted or not.Methods:getParameterValue(String parameter, String defaultValue = null): return parameter of ENV variable of Jenkins job.init(): set all the properties of the Job object.setDisplayName(String displayName): set display name of the Jenkins job.setDescription(String description, Boolean addDescription = false): set new or add to the existing description of the Jenkins job.printDebugInfo(Map context): print context info to the log of Jenkins' job.runStage(String stage_name, Map context): run the particular stage according to its name.Example:context.job = new Job(JobType.CODEREVIEW.value, context.platform, this) context.job.init() context.job.printDebugInfo(context) context.job.setDisplayName(\"test\") context.job.setDescription(\"Name: ${context.application.config.name}\") Gerrit(Job job, Platform platform, Script script) - Class that describes the Gerrit tool. 
Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().Job job - Object of a class Job().String credentialsId - Credentials Id in Jenkins for Gerrit.String autouser - Username of an auto user in Gerrit for integration with Jenkins.String host - Gerrit host.String project - the project name of the built application.String branch - branch to build an application from.String changeNumber - change number of Gerrit commit.String changeName - change name of Gerrit commit.String refspecName - refspecName of Gerrit commit.String sshPort - Gerrit ssh port number.String patchsetNumber - patchsetNumber of Gerrit commit.Methods:init(): set all the properties of Gerrit objectExample: context.gerrit = new Gerrit(context.job, context.platform, this) context.gerrit.init() Nexus(Job job, Platform platform, Script script) - Class that describes the Nexus tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().Job job - Object of a class Job().String autouser - Username of an auto user in Nexus for integration with Jenkins.String credentialsId - Credentials Id in Jenkins for Nexus.String host - Nexus host.String port - Nexus http(s) port.String repositoriesUrl - Base URL of repositories in Nexus.String restUrl - URL of Rest API.Methods:init(): set all the properties of the Nexus object.Example:context.nexus = new Nexus(context.job, context.platform, this) context.nexus.init() Sonar(Job job, Platform platform, Script script) - Class that describes the Sonar tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform().Job job - Object of a class Job().String route - External route of the sonar application.Methods:init(): set all the properties of Sonar object.Example:context.sonar = new Sonar(context.job, context.platform, this) context.sonar.init()"},{"location":"user-guide/pipeline-framework/#build-pipeline-stages","title":"Build Pipeline Stages","text":"

Each EDP stage implementation has a run method that takes a context map with different keys as an input parameter. Some stages implement the logic for several build tools and application types, while others are specific.

                                                                                      The Build pipeline includes the following default stages: Checkout \u2192 Gerrit Checkout \u2192 Compile \u2192 Get version \u2192 Tests \u2192 Sonar \u2192 Build \u2192 Build Docker Image \u2192 Push \u2192 Git tag.
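
As a rough sketch only (assuming a Maven application, a populated StageFactory, and a context that already contains the keys each stage needs, as listed in Table 6 and Table 7 of the EDP Stages Framework section; Gerrit Checkout is omitted for brevity), the default stages could be run in order through the stage factory:

// Run the default Build stages in order for a Maven application.\n['checkout', 'compile', 'get-version', 'tests', 'sonar', 'build', 'build-image', 'push', 'git-tag'].each { name ->\ncontext.factory.getStage(name, 'maven', 'application').run(context)\n}\n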

                                                                                      Info

                                                                                      To get the full description of every stage, please refer to the EDP Stages Framework section.

                                                                                      "},{"location":"user-guide/pipeline-framework/#how-to-redefine-or-extend-edp-pipeline-stages-library","title":"How to Redefine or Extend EDP Pipeline Stages Library","text":"

Follow the points below to redefine or extend the EDP Pipeline Stages Library:

                                                                                      • Create a \u201cstage\u201d folder in the App repository.
• Create a Groovy file with a meaningful name for the custom stage description. For instance \u2013 CustomBuildMavenApplication.groovy.
• Describe the stage logic.

                                                                                      Redefinition:

                                                                                      import com.epam.edp.stages.ProjectType\nimport com.epam.edp.stages.Stage\n@Stage(name = \"compile\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass CustomBuildMavenApplication {\nScript script\nvoid run(context) {\nscript.sh \"echo 'Your custom logic of the stage'\"\n}\n}\nreturn CustomBuildMavenApplication\n

                                                                                      Extension:

                                                                                      import com.epam.edp.stages.ProjectType\nimport com.epam.edp.stages.Stage\n@Stage(name = \"new-stage\", buildTool = \"maven\", type = ProjectType.APPLICATION)\nclass NewStageMavenApplication {\nScript script\nvoid run(context) {\nscript.sh \"echo 'Your custom logic of the stage'\"\n}\n}\nreturn NewStageMavenApplication\n
                                                                                      "},{"location":"user-guide/pipeline-framework/#using-edp-stages-library-in-the-pipeline_1","title":"Using EDP Stages Library in the Pipeline","text":"

In order to use the EDP stages, the created pipeline must meet certain requirements, so a developer has to do the following:

                                                                                      • import library - @Library(['edp-library-stages'])
                                                                                      • import StageFactory class - import com.epam.edp.stages.StageFactory
                                                                                      • define context Map \u2013 context = [:]
                                                                                      • define stagesFactory instance and load EDP stages:
                                                                                      context.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n

After that, any EDP stage can be run by defining the required context first: context.factory.getStage(\"checkout\",\"maven\",\"application\").run(context)

                                                                                      For instance, the pipeline can look like:

                                                                                      @Library(['edp-library-stages']) _\n\nimport com.epam.edp.stages.StageFactory\nimport org.apache.commons.lang.RandomStringUtils\n\ncontext = [:]\n\nnode('maven') {\ncontext.workDir = new File(\"/tmp/${RandomStringUtils.random(10, true, true)}\")\ncontext.workDir.deleteDir()\n\ncontext.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n\ncontext.gerrit = [:]\ncontext.application = [:]\ncontext.application.config = [:]\ncontext.buildTool = [:]\ncontext.nexus = [:]\n\n\n\nstage(\"checkout\") {\ncontext.gerrit.branch = \"master\"\ncontext.gerrit.credentialsId = \"jenkins\"\ncontext.application.config.cloneUrl = \"ssh://jenkins@gerrit:32092/sit-718-cloned-java-maven-project\"\ncontext.factory.getStage(\"checkout\",\"maven\",\"application\").run(context)\n}\n\n\nstage(\"compile\") {\ncontext.buildTool.command = \"mvn\"\ncontext.nexus.credentialsId = \"nexus\"\ncontext.factory.getStage(\"compile\",\"maven\",\"application\").run(context)\n}\n}\n

                                                                                      Or in a declarative way:

                                                                                      @Library(['edp-library-stages']) _\n\nimport com.epam.edp.stages.StageFactory\nimport org.apache.commons.lang.RandomStringUtils\n\ncontext = [:]\n\npipeline {\nagent { label 'maven' }\nstages {\nstage('Init'){\nsteps {\nscript {\ncontext.workDir = new File(\"/tmp/${RandomStringUtils.random(10, true, true)}\")\ncontext.workDir.deleteDir()\ncontext.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n\ncontext.gerrit = [:]\ncontext.application = [:]\ncontext.application.config = [:]\ncontext.buildTool = [:]\ncontext.nexus = [:]\n}\n}\n}\n\nstage(\"Checkout\") {\nsteps {\nscript {\ncontext.gerrit.branch = \"master\"\ncontext.gerrit.credentialsId = \"jenkins\"\ncontext.application.config.cloneUrl = \"ssh://jenkins@gerrit:32092/sit-718-cloned-java-maven-project\"\ncontext.factory.getStage(\"checkout\",\"maven\",\"application\").run(context)\n}\n}\n}\n\nstage('Compile') {\nsteps {\nscript {\ncontext.buildTool.command = \"mvn\"\ncontext.nexus.credentialsId = \"nexus\"\ncontext.factory.getStage(\"compile\",\"maven\",\"application\").run(context)\n}\n}\n}\n}\n}\n
                                                                                      "},{"location":"user-guide/pipeline-framework/#edp-library-stages-description","title":"EDP Library Stages Description","text":"

                                                                                      Using in pipelines - @Library(['edp-library-stages@version'])

                                                                                      The corresponding enums, classes, interfaces and their methods can be used separately from the EDP Stages library function (please refer to Table 5).
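
For example, the Table 5 snippets below can be assembled into the following sketch; it must run on a Jenkins agent where the application sources are checked out, and the stages directory path is only illustrative:

context.factory = new StageFactory(script: this)\n// Register the default EDP stage implementations shipped with the shared library.\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n// Register custom stage implementations kept in the application repository (absolute path required).\ncontext.factory.loadCustomStages(\"${context.workDir}/stages\").each() { context.factory.add(it) }\n// Resolve a stage by name, build tool, and project type, then run it.\ncontext.factory.getStage('compile', 'maven', 'application').run(context)\n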

                                                                                      Table 5. Enums and Classes with the respective properties, methods, and examples.

                                                                                      Enums Classes ProjectType: - APPLICATION - AUTOTESTS - LIBRARY StageFactory() - Class that contains methods getting an implementation of the particular stage either EDP from shared library or custom from application repository.Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Map stages - Map of stages implementations.Methods:loadEdpStages(): return a list of Classes that describes EDP stages implementations.loadCustomStages(String directory): return a list of Classes that describes EDP custom stages from application repository from \"directory\". The \"directory\" should have an absolute path to files with classes of custom stages implementations. Should be run from a Jenkins agent.add(Class clazz): register class for some particular stage in stages map of StageFactory class.getStage(String name, String buildTool, String type): return an object of the class for a particular stage from stages property based on stage name and buildTool, type of application.Example:context.factory = new StageFactory(script: this)context.factory.loadEdpStages().each() { context.factory.add(it) }context.factory.loadCustomStages(\"${context.workDir}/stages\").each() { context.factory.add(it) }context.factory.getStage(stageName.toLowerCase(),context.application.config.build_tool.toLowerCase(),context.application.config.type).run(context)"},{"location":"user-guide/pipeline-framework/#edp-stages-framework","title":"EDP Stages Framework","text":"

Each EDP stage implementation has a run method that takes a context map with different keys as an input parameter. Some stages implement the logic for several build tools and application types, while others are specific.

                                                                                      Inspect the Table 6 and Table 7 that contain the full description of every stage that can be included in Code Review and Build pipelines: Checkout \u2192 Gerrit Checkout \u2192 Compile \u2192 Get version \u2192 Tests \u2192 Sonar \u2192 Build \u2192 Build Docker Image \u2192 Push \u2192 Git tag.

                                                                                      Table 6. The Checkout, Gerrit Checkout, Compile, Get version, and Tests stages description.

                                                                                      Checkout Gerrit Checkout Compile Get version Tests name = \"checkout\",buildTool = [\"maven\", \"npm\", \"dotnet\",\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- StageFactory context.factory- String context.gerrit.branch- String context.gerrit.credentialsId- String context.application.config.cloneUrl name = \"gerrit-checkout\",buildTool = [\"maven\", \"npm\", \"dotnet\",\"gradle\"]type = [ProjectType.APPLICATION, ProjectType.AUTOTESTS, ProjectType.LIBRARY]context required:- String context.workDir- StageFactory context.factory- String context.gerrit.changeName- String context.gerrit.refspecName- String context.gerrit.credentialsId- String context.application.config.cloneUrl name = \"compile\"buildTool = [\"dotnet\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.buildTool.sln_filenameoutput:- String context.buildTool.sln_filenamebuildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.buildTool.command- String context.nexus.credentialsIdbuildTool = [\"maven\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.buildTool.command- String context.nexus.credentialsIdbuildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.buildTool.groupRepository name = \"get-version\"buildTool = [\"dotnet\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- Map(empty) context.application- String context.gerrit.branch- Job context.joboutput:-String context.application.deplyableModule- String context.application.deplyableModuleDir- String context.application.version- String context.application.buildVersionbuildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.buildTool.command- Job context.job- String context.gerrit.branchoutput:- String context.application.deplyableModuleDir- String context.application.version- String context.application.buildVersionbuildTool = [\"maven\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.buildTool.command- Job context.job- String context.gerrit.branchoutput:- String context.application.deplyableModule- String context.application.deplyableModuleDir- String context.application.version- String context.application.buildVersionbuildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- Job context.job- String context.gerrit.branchoutput:- String context.application.deplyableModuleDir- String context.application.version- String context.application.buildVersion name = \"tests\"buildTool = [\"dotnet\"]type = [ProjectType.APPLICATION]context required:- String context.workDirbuildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.buildTool.commandbuildTool = [\"maven\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.buildTool.commandtype = [ProjectType.AUTOTESTS]context required:- String context.workDir- String context.buildTool.command- String context.application.config.report_frameworkbuildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir

                                                                                      Table 7. The Sonar, Build, Build Docker Image, Push, and Git tag stages description.

                                                                                      Sonar Build Build Docker Image Push Git tag name = \"sonar\"buildTool = [\"dotnet\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.job.type- String context.application.name- String context.buildTool.sln_filename- String context.sonar.route- String context.gerrit.changeName(Only for codereview pipeline)- String context.gerrit.branch(Only for build pipeline)buildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.job.type- String context.nexus.credentialsId- String context.buildTool.command- String context.application.name- String context.sonarRoute- String context.gerrit.changeName(Only for codereview pipeline)- String context.gerrit.branch(Only for build pipeline)buildTool = [\"maven\"]type = [ProjectType.APPLICATION, ProjectType.AUTOTESTS, ProjectType.LIBRARY]context required:- String context.workDir- String context.job.type- String context.nexus.credentialsId- String context.application.name- String context.buildTool.command- String context.sonar.route- String context.gerrit.changeName(Only for codereview pipeline)- String context.gerrit.branch(Only for build pipeline)buildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.job.type- String context.sonar.route- String context.application.name- String context.gerrit.changeName(Only for codereview pipeline)- String context.gerrit.branch(Only for build pipeline) name = \"build\"buildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.buildTool.command- String context.nexus.credentialsIdbuildTool = [\"maven\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.buildTool.command- String context.nexus.credentialsIdbuildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.buildTool.groupRepository name = \"build-image\"buildTool = [\"dotnet\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.application.deployableModule- String context.application.deployableModuleDir- String context.application.name- String context.application.config.language- String context.application.buildVersion- Boolean context.job.promoteImages- String context.job.envToPromotebuildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.application.deployableModule- String context.application.deployableModuleDir- String context.application.name- String context.application.config.language- String context.application.buildVersion- Boolean context.job.promoteImages- String context.job.envToPromotebuildTool = [\"maven\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.application.deployableModule- String context.application.deployableModuleDir- String context.application.name- String context.application.config.language- String context.application.buildVersion- Boolean context.job.promoteImages- String context.job.envToPromotebuildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.application.deployableModule- String context.application.deployableModuleDir- String context.application.name- String context.application.config.language- String context.application.buildVersion- 
Boolean context.job.promoteImages- String context.job.envToPromote name = \"push\"buildTool = [\"dotnet\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.gerrit.project- String context.buildTool.sln_filename- String context.buildTool.snugetApiKey- String context.buildTool.hostedRepositorybuildTool = [\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.application.version- String context.buildTool.hostedRepository- String context. buildTool.settingsbuildTool = [\"maven\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.application.version- String context.buildTool.hostedRepository- String context.buildTool.commandbuildTool = [\"npm\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.nexus.credentialsId- String context.buildTool.hostedRepository- String context.gerrit.autouser name = \"git-tag\"buildTool = [\"maven\", \"npm\", \"dotnet\",\"gradle\"]type = [ProjectType.APPLICATION]context required:- String context.workDir- String context.gerrit.credentialsId- String context.gerrit.sshPort- String context.gerrit.host- String context.gerrit.autouser- String context.application.buildVersion"},{"location":"user-guide/pipeline-framework/#deploy-pipeline","title":"Deploy Pipeline","text":"

                                                                                      Deploy() \u2013 a function that allows using the EDP implementation for the deploy pipeline. All values of different parameters that are used during the pipeline execution are stored in the \"Map\" context.

                                                                                      The deploy pipeline consists of several steps:

                                                                                      On the master:

                                                                                      • Initialization of all objects (Platform, Job, Gerrit, Nexus, StageFactory) and loading the default implementations of EDP stages;
• Creating an environment if it doesn't exist;
• Deploying the latest versions of the applications;
• Running predefined manual gates.

                                                                                      On a particular autotest Jenkins agent that depends on the build tool:

                                                                                      • Creating workdir for autotest sources;
• Running predefined autotests (a minimal invocation sketch follows this list).
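
In the same way as for the Build pipeline, a minimal Jenkinsfile sketch that delegates the whole flow to the default Deploy() implementation (assuming both shared libraries referenced throughout this guide are loaded) could look like:

@Library(['edp-library-stages', 'edp-library-pipelines']) _\n\n// Delegate the whole flow to the default EDP Deploy pipeline implementation.\nDeploy()\n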
                                                                                      "},{"location":"user-guide/pipeline-framework/#edp-library-pipelines-description","title":"EDP Library Pipelines Description","text":"

Using in pipelines - @Library(['edp-library-pipelines@version']) _

                                                                                      The corresponding enums and interfaces with their methods can be used separately from the EDP Pipelines library function (please refer to Table 8 and Table 9).

                                                                                      Table 8. Enums and Interfaces with the respective properties, methods, and examples.

                                                                                      Enums Interfaces PlatformType:- OPENSHIFT- KUBERNETESJobType:- CODEREVIEW- BUILD- DEPLOYBuildToolType:- MAVEN- GRADLE- NPM- DOTNET Platform() - contains methods for working with platform CLI. At the moment only OpenShift is supported.Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Methods:getJsonPathValue(String k8s_kind, String k8s_kind_name, String jsonPath): return String value of specific parameter of particular object using jsonPath utility. Example: context.platform.getJsonPathValue(\"cm\",\"project-settings\",\".data.username\") BuildTool() - contains methods for working with different buildTool from ENUM BuildToolType. (Should be invoked on Jenkins build agents)Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Nexus object - Object of class Nexus.Methods:init: return parameters of buildTool that are needed for running stages. Example:context.buildTool = new BuildToolFactory().getBuildToolImpl(context.application.config.build_tool, this, context.nexus)context.buildTool.init()

                                                                                      Table 9. Classes with the respective properties, methods, and examples.

                                                                                      Classes Description (properties, methods, and examples) PlatformFactory() - Class that contains methods getting implementation of CLI of platform. At the moment OpenShift and Kubernetes are supported. Methods:getPlatformImpl(PlatformType platform, Script script): return Class PlatformExample: context.platform = new PlatformFactory().getPlatformImpl(PlatformType.OPENSHIFT, this) Application(String name, Platform platform, Script script) - Class that describe the application object. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Platform platform - Object of a class Platform()String name - Name for the application for creating objectMap config - Map of configuration settings for particular application that is loaded from config map project-settingsString version - Application version, initially empty. Is set on get-version step.String deployableModule - The name of deployable module for multi module applications, initially empty.String buildVersion - Version of built artifact, contains build number of Job initially emptyString deployableModuleDir - The name of deployable module directory for multi module applications, initially empty.Array imageBuildArgs - List of arguments for building application Docker imageMethods: setConfig(String gerrit_autouser, String gerrit_host, String gerrit_sshPort, String gerrit_project): set the config property with values from config mapExample: context.application = new Application(context.job, context.gerrit.project, context.platform, this) context.application.setConfig(context.gerrit.autouser, context.gerrit.host, context.gerrit.sshPort, context.gerrit.project) Job(type: JobType.value, platform: Platform, script: Script) - Class that describe the Gerrit tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\"Platform platform - Object of a class Platform().JobType.value type.String deployTemplatesDirectory - The name of the directory in application repository, where deploy templates are located. Can be set for particular Job through DEPLOY_TEMPLATES_DIRECTORY parameter.String edpName - The name of the EDP Project.Map stages - Contains all stages in JSON format that is retrieved from Jenkins job env variable.String envToPromote - The name of the environment for promoting images.Boolean promoteImages - Defines whether images should be promoted or not. Methods:getParameterValue(String parameter, String defaultValue = null): return parameter of ENV variable of Jenkins job. init(): set all the properties of Job object. setDisplayName(String displayName): set display name of the Jenkins job. setDescription(String description, Boolean addDescription = false): set new or add to existing description of the Jenkins job. printDebugInfo(Map context): print context info to log of Jenkins job. runStage(String stage_name, Map context): run the particular stage according to its name. Example: context.job = new Job(JobType.DEPLOY.value, context.platform, this) context.job.init() context.job.printDebugInfo(context) context.job.setDisplayName(\"test\") context.job.setDescription(\"Name: ${context.application.config.name}\") Gerrit(Job job, Platform platform, Script script) - Class that describe the Gerrit tool. 
Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\".Platform platform - Object of a class Platform(). Job job - Object of a class Job().String credentialsId - Credential Id in Jenkins for Gerrit. String autouser - Username of autouser in Gerrit for integration with Jenkins. String host - Gerrit host. String project - project name of built application. String branch - branch to build application from. String changeNumber - change number of Gerrit commit. String changeName - change name of Gerrit commit. String refspecName - refspecName of Gerrit commit. String sshPort - gerrit ssh port number. String patchsetNumber - patchsetNumber of Gerrit commit.Methods:init(): set all the properties of Gerrit object. Example:context.gerrit = new Gerrit(context.job, context.platform, this)context.gerrit.init(). Nexus(Job job, Platform platform, Script script) - Class that describe the Nexus tool. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\". Platform platform - Object of a class Platform(). Job job - Object of a class Job(). String autouser - Username of autouser in Nexus for integration with Jenkins. String credentialsId - Credential Id in Jenkins for Nexus. String host - Nexus host. String port - Nexus http(s) port. String repositoriesUrl - Base URL of repositories in Nexus. String restUrl - URL of Rest API. Methods:init(): set all the properties of Nexus object. Example: context.nexus = new Nexus(context.job, context.platform, this) context.nexus.init()."},{"location":"user-guide/pipeline-framework/#edp-library-stages-description_1","title":"EDP Library Stages Description","text":"

                                                                                      Using in pipelines - @Library(['edp-library-stages@version']) _

                                                                                      The corresponding classes with methods can be used separately from the EDP Pipelines library function (please refer to Table 10).

                                                                                      Table 10. Classes with the respective properties, methods, and examples.

                                                                                      Classes Description (properties, methods, and examples) StageFactory() - Class that contains methods getting implementation of particular stage either EDP from shared library or custom from application repository. Properties:Script script - Object with type script, in most cases if class created from Jenkins pipelines it is \"this\"Map stages - Map of stages implementationsMethods:loadEdpStages(): return list of Classes that describes EDP stages implementationsloadCustomStages(String directory): return list of Classes that describes EDP custom stages from application repository from \"directory\". The \"directory\" should be absolute path to files with classes of custom stages implementations. Should be run from Jenkins agent.add(Class clazz): register class for some particular stage in stages map of StageFactory classgetStage(String name, String buildTool, String type): return object of the class for particular stage from stages property based on stage name and buildTool, type of applicationExample:context.factory = new StageFactory(script: this)context.factory.loadEdpStages().each() { context.factory.add(it) }context.factory.loadCustomStages(\"${context.workDir}/stages\").each() { context.factory.add(it) }context.factory.getStage(stageName.toLowerCase(),context.application.config.build_tool.toLowerCase(),context.application.config.type).run(context)."},{"location":"user-guide/pipeline-framework/#deploy-pipeline-stages","title":"Deploy Pipeline Stages","text":"

Each EDP stage implementation has a run method that takes a context map with different keys as an input parameter. Some stages implement the logic for several build tools and application types, while others are specific.

The stages of the Deploy pipeline are independent of the build tool and application type. Table 11 below provides the full description of every stage: Deploy \u2192 Automated tests \u2192 Promote Images.

                                                                                      Table 11. The Deploy, Automated tests, and Promote Images stages description.

                                                                                      Deploy Automated tests Promote Images name = \"deploy\"buildTool = nulltype = nullcontext required:\u2022 String context.workDir\u2022 StageFactory context.factory\u2022 String context.gerrit.autouser\u2022 String context.gerrit.host\u2022 String context.application.config.cloneUrl\u2022 String context.jenkins.token\u2022 String context.job.edpName\u2022 String context.job.buildUrl\u2022 String context.job.jenkinsUrl\u2022 String context.job.metaProject\u2022 List context.job.applicationsList [['name':'application1_name','version':'application1_version],...]\u2022 String context.job.deployTemplatesDirectoryoutput:\u2022 List context.job.updatedApplicaions [['name':'application1_name','version':'application1_version],...] name = \"automation-tests\", buildTool = null, type = nullcontext required:- String context.workDir- StageFactory context.factory- String context.gerrit.credentialsId- String context.autotest.config.cloneUrl- String context.autotest.name- String context.job.stageWithoutPrefixName- String context.buildTool.settings- String context.autotest.config.report_framework name = \"promote-images\"buildTool = nulltype = nullcontext required:- String context.workDir- String context.buildTool.sln_filename- List context.job.updatedApplicaions [['name':'application1_name','version':'application1_version],...]"},{"location":"user-guide/pipeline-framework/#how-to-redefine-or-extend-edp-pipeline-stages-library_1","title":"How to Redefine or Extend EDP Pipeline Stages Library","text":"

                                                                                      Info

                                                                                      Currently, the redefinition of Deploy pipeline stages is prohibited.

                                                                                      "},{"location":"user-guide/pipeline-framework/#using-edp-library-stages-in-the-pipeline","title":"Using EDP Library Stages in the Pipeline","text":"

In order to use the EDP stages, the created pipeline must meet certain requirements, so a developer has to do the following:

                                                                                      • import libraries - @Library(['edp-library-stages', 'edp-library-pipelines']) _
• import the reference EDP classes (see the example below)
                                                                                      • define context Map \u2013 context = [:]
• define the reference \"init\" stage

After that, any EDP stage can be run by defining the required context first: context.job.runStage(\"Deploy\", context).

                                                                                      For instance, the pipeline can look like:

                                                                                      @Library(['edp-library-stages', 'edp-library-pipelines']) _\n\nimport com.epam.edp.stages.StageFactory\nimport com.epam.edp.platform.PlatformFactory\nimport com.epam.edp.platform.PlatformType\nimport com.epam.edp.JobType\n\ncontext = [:]\n\nnode('master') {\nstage(\"Init\") {\ncontext.platform = new PlatformFactory().getPlatformImpl(PlatformType.OPENSHIFT, this)\ncontext.job = new com.epam.edp.Job(JobType.DEPLOY.value, context.platform, this)\ncontext.job.init()\ncontext.job.initDeployJob()\nprintln(\"[JENKINS][DEBUG] Created object job with type - ${context.job.type}\")\n\ncontext.nexus = new com.epam.edp.Nexus(context.job, context.platform, this)\ncontext.nexus.init()\n\ncontext.jenkins = new com.epam.edp.Jenkins(context.job, context.platform, this)\ncontext.jenkins.init()\n\ncontext.gerrit = new com.epam.edp.Gerrit(context.job, context.platform, this)\ncontext.gerrit.init()\n\ncontext.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n\ncontext.environment = new com.epam.edp.Environment(context.job.deployProject, context.platform, this)\ncontext.job.printDebugInfo(context)\ncontext.job.setDisplayName(\"${currentBuild.displayName}-${context.job.deployProject}\")\n\ncontext.job.generateInputDataForDeployJob()\n}\n\nstage(\"Pre Deploy Custom stage\") {\nprintln(\"Some custom pre deploy logic\")\n}\n\ncontext.job.runStage(\"Deploy\", context)\n\nstage(\"Post Deploy Custom stage\") {\nprintln(\"Some custom post deploy logic\")\n}\n}\n

                                                                                      Or in a declarative way:

                                                                                      @Library(['edp-library-stages', 'edp-library-pipelines']) _\n\nimport com.epam.edp.stages.StageFactory\nimport com.epam.edp.platform.PlatformFactory\nimport com.epam.edp.platform.PlatformType\nimport com.epam.edp.JobType\n\ncontext = [:]\n\npipeline {\nagent { label 'master'}\nstages {\nstage('Init') {\nsteps {\nscript {\ncontext.platform = new PlatformFactory().getPlatformImpl(PlatformType.OPENSHIFT, this)\ncontext.job = new com.epam.edp.Job(JobType.DEPLOY.value, context.platform, this)\ncontext.job.init()\ncontext.job.initDeployJob()\nprintln(\"[JENKINS][DEBUG] Created object job with type - ${context.job.type}\")\n\ncontext.nexus = new com.epam.edp.Nexus(context.job, context.platform, this)\ncontext.nexus.init()\n\ncontext.jenkins = new com.epam.edp.Jenkins(context.job, context.platform, this)\ncontext.jenkins.init()\n\ncontext.gerrit = new com.epam.edp.Gerrit(context.job, context.platform, this)\ncontext.gerrit.init()\n\ncontext.factory = new StageFactory(script: this)\ncontext.factory.loadEdpStages().each() { context.factory.add(it) }\n\ncontext.environment = new com.epam.edp.Environment(context.job.deployProject, context.platform, this)\ncontext.job.printDebugInfo(context)\ncontext.job.setDisplayName(\"${currentBuild.displayName}-${context.job.deployProject}\")\n\ncontext.job.generateInputDataForDeployJob()\n}\n}\n}\nstage('Deploy') {\nsteps {\nscript {\ncontext.factory.getStage(\"deploy\").run(context)\n}\n}\n}\n\nstage('Custom stage') {\nsteps {\nprintln(\"Some custom logic\")\n}\n}\n}\n}\n
                                                                                      "},{"location":"user-guide/pipeline-framework/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Application
                                                                                      • Add Library
                                                                                      • Add CD Pipeline
                                                                                      • CI Pipeline Details
                                                                                      • CD Pipeline Details
                                                                                      • Customize CI Pipeline
                                                                                      • Customize CD Pipeline
                                                                                      • EDP Stages
                                                                                      • Glossary
                                                                                      • Use Terraform Library in EDP
                                                                                      "},{"location":"user-guide/pipeline-stages/","title":"Pipeline Stages","text":"

Get acquainted with the EDP CI/CD workflow and the description of its stages.

                                                                                      "},{"location":"user-guide/pipeline-stages/#edp-cicd-workflow","title":"EDP CI/CD Workflow","text":"

                                                                                      Within EDP, the pipeline framework comprises the following pipelines:

                                                                                      • Code Review;
                                                                                      • Build;
                                                                                      • Deploy.

                                                                                      Note

                                                                                      Please refer to the EDP Pipeline Framework page for details.

                                                                                      The diagram below shows the delivery path through these pipelines and the respective stages. Please be aware that stages may differ for different codebase types.

                                                                                      stages

                                                                                      "},{"location":"user-guide/pipeline-stages/#stages-description","title":"Stages Description","text":"

                                                                                      The table below provides the details on all the stages in the EDP pipeline framework:

Name Dependency Description Pipeline Application Library Autotest Source code Documentation init Initiates information gathering Create Release, Code Review, Build + + Build.groovy checkout Checks out all files from a selected branch of the Git repository: for the main branch - from HEAD, for code review - from the commit Create Release, Build + + Checkout.groovy sast Launches vulnerability testing via Semgrep scanner. Pushes a vulnerability report to the DefectDojo. Build + Security compile Compiles the code, includes individual groovy files for each type of app or lib (NPM, DotNet, Python, Maven, Gradle) Code Review, Build + + Compile tests Launches testing procedure, includes individual groovy files for each type of app or lib Code Review, Build + + + Tests sonar Launches testing via SonarQube scanner and includes individual groovy files for each type of app or lib Code Review, Build + + Sonar build Builds the application, includes individual groovy files for each type of app or lib (Go, Maven, Gradle, NPM) Code Review, Build + Build create-branch EDP create-release process Creates default branch in Gerrit during create and clone strategies Create Release + + + CreateBranch.groovy trigger-job EDP create-release process Triggers the \"build\" job Create Release + + + TriggerJob.groovy gerrit-checkout Performs checkout to the current project branch in Gerrit Code Review + + + GerritCheckout.groovy commit-validate Optional in EDP Admin Console Takes Jira parameters when \"Jira Integration\" is enabled for the project in the Admin Console. Code Review + + CommitValidate.groovy dockerfile-lint Launches linting tests for Dockerfile Code Review + LintDockerApplicationLibrary.groovy Use Dockerfile Linters for Code Review dockerbuild-verify \"Build\" stage (if there are no \"COPY\" layers in Dockerfile) Launches build procedure for Dockerfile without pushing an image to the repository Code Review + BuildDockerfileApplicationLibrary.groovy Use Dockerfile Linters for Code Review helm-lint Launches linting tests for deployment charts Code Review + LintHelmApplicationLibrary.groovy Use helm-lint for Code Review helm-docs Checks generated documentation for deployment charts Code Review + HelmDocsApplication.groovy Use helm-docs for Code Review helm-uninstall Helm release deletion step to clear Helm releases Deploy + HelmUninstall.groovy Helm release deletion semi-auto-deploy-input Provides auto deploy with timeout and manual deploy flow Deploy + SemiAutoDeployInput.groovy Semi Auto Deploy get-version Defines the versioning of the project depending on the versioning schema selected in Admin Console Build + + GetVersion terraform-plan AWS credentials added to Jenkins Checks the Terraform version and installs the default version if necessary, launches terraform init, returns the AWS username used for the action, and runs terraform plan with the results written to a .tfplan file Build + TerraformPlan.groovy Use Terraform library in EDP terraform-apply AWS credentials added to Jenkins, the \"Terraform-plan\" stage Checks the Terraform version and installs the default version if necessary, launches terraform init, runs terraform plan from the previously saved .tfplan file, asks for approval, and runs terraform apply from the .tfplan file Build + TerraformApply.groovy Use Terraform library in EDP build-image-from-dockerfile Platform: OpenShift Builds an image from the Dockerfile Build + + .groovy files for building Dockerfile image
build-image-kaniko Platform: k8s Builds an image from the Dockerfile using the Kaniko tool Build + BuildImageKaniko.groovy push Pushes an artifact to the Nexus repository Build + + Push create-Jira-issue-metadata \"get-version\" stage Creates a temporary CR in the namespace, pushes Jira Integration data to the Jira ticket, and then deletes the CR Build + + JiraIssueMetadata.groovy ecr-to-docker DockerHub credentials added to Jenkins Copies the docker image from the ECR project registry to DockerHub via the Crane tool after it is built Build + EcrToDocker.groovy Promote Docker Images From ECR to Docker Hub git-tag \"Get-version\" stage Creates a tag in SCM for the current build Build + + GitTagApplicationLibrary.groovy deploy Deploys the application Deploy + Deploy.groovy manual Handles the manual approval required to proceed Deploy + ManualApprove.groovy promote-images Promotes docker images to the registry Deploy + PromoteImage.groovy

                                                                                      Note

                                                                                      The Create Release pipeline is an internal EDP mechanism for adding, importing or cloning a codebase. It is not a part of the pipeline framework.
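
To make the stage ordering easier to picture, below is a minimal, generic Jenkins scripted-pipeline sketch that strings a few Build pipeline stages from the table together. It is an illustration only: the stage names follow the framework, but the bodies are placeholders (a Maven application is assumed) rather than the actual EDP implementations from the referenced .groovy files.

```groovy
// Illustrative sketch only: real EDP Build pipelines assemble these stages
// dynamically via the job provisioner, based on the codebase type.
node {
    stage('init')        { echo 'Gather codebase and branch information' }
    stage('checkout')    { checkout scm }                    // sources of the selected branch
    stage('get-version') { env.VERSION = '0.1.0-SNAPSHOT' }  // placeholder versioning logic
    stage('compile')     { sh 'mvn -B compile' }
    stage('tests')       { sh 'mvn -B test' }
    stage('sonar')       { sh 'mvn -B sonar:sonar' }         // assumes a configured SonarQube server
    stage('build')       { sh 'mvn -B package' }
    stage('push')        { echo 'Publish the artifact to Nexus' }
    stage('git-tag')     { sh "git tag build/${env.VERSION}" }
}
```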

                                                                                      "},{"location":"user-guide/pipeline-stages/#related-articles","title":"Related Articles","text":"
                                                                                      • Manage Jenkins CI Job Provisioner
                                                                                      • GitLab Webhook Configuration
                                                                                      • GitHub Webhook Configuration
                                                                                      "},{"location":"user-guide/prepare-for-release/","title":"Prepare for Release","text":"

                                                                                      After the necessary applications are added to EDP, they can be managed via the Admin Console. To prepare for the release, create a new branch from a selected commit with a set of CI pipelines (Code Review and Build pipelines), launch the Build pipeline, and add a new CD pipeline as well.

                                                                                      Note

                                                                                      Please refer to the Add Application and Add CD Pipeline for the details on how to add an application or a CD pipeline.

                                                                                      Become familiar with the following preparation steps for release and a CD pipeline structure:

                                                                                      • Create a new branch
                                                                                      • Launch the Build pipeline
                                                                                      • Add a new CD pipeline
                                                                                      • Check CD pipeline structure
                                                                                      "},{"location":"user-guide/prepare-for-release/#create-a-new-branch","title":"Create a New Branch","text":"
                                                                                      1. Open Gerrit via the Admin Console Overview page to have this tab available in a web browser.

2. In the Admin Console, open the Applications section and click an application from the list to create a new branch.

3. Once the application name is clicked, scroll down to the Branches menu and click the Create button to open the Create New Branch dialog box, then fill in the Branch Name field by typing a branch name. To find the commit hash from which the new branch should start:

                                                                                        • Open the Gerrit tab in the web browser, navigate to Projects \u2192 List \u2192 select the application \u2192 Branches \u2192 gitweb for a necessary branch.
  • Select the commit that will be the last one included in the new branch.
  • Copy the commit hash to the clipboard.
                                                                                      4. Paste the copied hash to the From Commit Hash field and click Proceed.

                                                                                      Note

                                                                                      If the commit hash is not added to the From Commit Hash field, the new branch will be created from the head of the master branch.
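
In Git terms, the Create New Branch dialog is conceptually equivalent to creating a branch from a specific commit. The snippet below is a hypothetical illustration only (the branch name, commit hash, and remote are placeholders), expressed as a Jenkins step for consistency with the rest of this guide; Gerrit performs the equivalent operation for you.

```groovy
// Conceptual equivalent of the Create New Branch dialog (placeholders only).
node {
    stage('create-release-branch') {
        sh '''
            git fetch origin
            # With a commit hash: the new branch starts from that commit.
            git branch release/1.0 1a2b3c4d
            # Without a hash: the new branch starts from the head of master.
            # git branch release/1.0 origin/master
            git push origin release/1.0
        '''
    }
}
```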

                                                                                      "},{"location":"user-guide/prepare-for-release/#launch-the-build-pipeline","title":"Launch the Build Pipeline","text":"
                                                                                      1. After the new branches are added, open the details page of every application and click the CI link that refers to Jenkins.

                                                                                        Note

Adding a new branch may take some time. As soon as the new branch is created, it will be displayed in the list of the Branches menu.

2. To build a new version of a corresponding Docker container (an image stream in OpenShift terms) for the new branch, start the Build pipeline. In Jenkins, select the new branch tab and click the link to the Build pipeline.

                                                                                      3. Navigate to the Build with Parameters option and click the Build button to launch the Build pipeline.

                                                                                        Warning

The predefined default parameters should not be changed when triggering the Build pipeline; otherwise, the pipeline will fail.

                                                                                      "},{"location":"user-guide/prepare-for-release/#add-a-new-cd-pipeline","title":"Add a New CD Pipeline","text":"
1. Add a new CD pipeline and indicate the new release branch using the Admin Console tool. Pay attention to the Applications menu: the necessary application(s) should be selected there, as well as the necessary branch(es) from the drop-down list.

                                                                                        Note

                                                                                        For the details on how to add a CD pipeline, please refer to the Add CD Pipeline page.

2. As soon as the Build pipelines are successfully passed in Jenkins, the Docker Registry, which is used in EDP by default, will contain the new image stream (Docker container in Kubernetes terms) versions that correspond to the current branch.

                                                                                      3. Open the Kubernetes/OpenShift page of the project via the Admin Console Overview page \u2192 go to CodebaseImageStream (in OpenShift, go to Builds \u2192 Images) \u2192 check whether the image streams are created under the specific name (the combination of the application and branch names) and the specific tags are added. Click every image stream link.

                                                                                      "},{"location":"user-guide/prepare-for-release/#check-cd-pipeline-structure","title":"Check CD Pipeline Structure","text":"

When the CD pipeline is added through the Admin Console, it becomes available in the CD pipelines list. Every pipeline has a details page with additional information. To explore the CD pipeline structure, follow the steps below:

1. Open the Admin Console, navigate to the Continuous Delivery section, and click the newly created CD pipeline name.

                                                                                      2. Discover the CD pipeline components:

                                                                                        • Applications - the list of applications with the image streams and links to Jenkins for the respective branch;
                                                                                        • Stages - a set of stages with the defined characteristics and links to Kubernetes/OpenShift project;

                                                                                        Note

                                                                                        Initially, an environment is empty and does not have any deployment unit. When deploying the subsequent stages, the artifacts of the selected versions will be deployed to the current project and the environment will display the current stage status. The project has a standard pattern: \u2039edp-name\u203a-\u2039pipeline-name\u203a-\u2039stage-name\u203a.

                                                                                        • Deployed Versions - the deployment status of the specific application and the predefined stage.
                                                                                      "},{"location":"user-guide/prepare-for-release/#launch-cd-pipeline-manually","title":"Launch CD Pipeline Manually","text":"

                                                                                      Follow the steps below to deploy the QA and UAT application stages:

1. As soon as the Build pipelines for both applications are successfully passed, the new version of the Docker container will appear, making it possible to launch the CD pipeline. Simply navigate to Continuous Delivery and click the pipeline name to open it in Jenkins.

                                                                                      2. Click the QA stage link.

                                                                                      3. Deploy the QA stage by clicking the Build Now option.

4. After the initialization step starts, the Pause for Input option will appear (it may be shown in a separate menu). Select the application version in the drop-down list and click Proceed. The pipeline passes the following stages:

  • Init - initialization of the Jenkins pipeline outputs with the stages, which are the Groovy scripts that execute the current code;
  • Deploy - the deployment of the selected versions of the Docker container and third-party services. As soon as the Deploy pipeline stage is completed, the respective environment will be deployed;
  • Approve - the verification stage that allows you to Proceed with or Abort this stage;
  • Promote-images - the creation of the new image streams for the current versions with the pattern combination: [pipeline name]-[stage name]-[application name]-[verified].

After all the stages are passed, the new image streams will be created in Kubernetes/OpenShift under the new names (a simplified sketch of this stage sequence is provided after this list).

                                                                                      5. Deploy the UAT stage, which takes the versions that were verified during the QA stage, by clicking the Build Now option, and select the necessary application versions. The launch process is the same as for all the deploy pipelines.

                                                                                      6. To get the status of the pipeline deployment, open the CD pipeline details page and check the Deployed versions state.
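
For orientation, here is a simplified, generic sketch of the Deploy pipeline stage sequence described above. The variable names, messages, and stage bodies are placeholders; the actual EDP logic lives in Deploy.groovy, ManualApprove.groovy, and PromoteImage.groovy.

```groovy
// Illustrative sketch of an EDP-style Deploy run: init -> deploy -> approve -> promote-images.
node {
    def pipelineName = 'release-1-0'   // placeholder values
    def stageName    = 'qa'
    def appName      = 'my-app'

    stage('init')    { echo "Collecting metadata for ${pipelineName}/${stageName}" }
    stage('deploy')  { echo "Deploying the selected version of ${appName}" }
    stage('approve') {
        input message: "Is ${appName} verified on the ${stageName} environment?", ok: 'Proceed'
    }
    stage('promote-images') {
        // Promoted image streams follow the documented pattern:
        // [pipeline name]-[stage name]-[application name]-[verified]
        echo "Creating image stream ${pipelineName}-${stageName}-${appName}-verified"
    }
}
```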

                                                                                      "},{"location":"user-guide/prepare-for-release/#cd-pipeline-as-a-team-environment","title":"CD Pipeline as a Team Environment","text":"

Admin Console allows creating a CD pipeline with a part of the application set as a team environment. To do this, perform the following steps:

                                                                                      1. Open the Continuous Delivery section \u2192 click the Create button \u2192 enter the pipeline name (e.g. team-a) \u2192 select ONE application and choose the master branch for it \u2192 add one DEV stage.
                                                                                      2. As soon as the CD pipeline is added to the CD pipelines list, its details page will display the links to Jenkins and Kubernetes/OpenShift.
                                                                                      3. Open Jenkins and deploy the DEV stage by clicking the Build Now option.
4. Kubernetes/OpenShift keeps an independent environment that allows checking the new versions, thus speeding up the development process when working with several microservices.

As a result, the team will have the same ability to verify code changes both during development and during the release.

                                                                                      "},{"location":"user-guide/prepare-for-release/#related-articles","title":"Related Articles","text":"
                                                                                      • Add Application
                                                                                      • Add CD Pipeline
• Autotest as Quality Gate
                                                                                      • Build Pipeline
                                                                                      • CD Pipeline Details
                                                                                      • Customize CD Pipeline
                                                                                      "},{"location":"user-guide/semi-auto-deploy/","title":"Semi Auto Deploy","text":"

                                                                                      The Semi Auto Deploy stage provides the ability to deploy applications with the custom logic that comprises the following behavior:

                                                                                      • When the build of an application selected for deploy in the CD pipeline is completed, the Deploy pipeline is automatically triggered;
• By default, the deploy stage waits for 5 minutes; if the user does not interfere with the process (by cancelling it or by selecting specific application versions to deploy), the deploy stage will deploy the latest versions of all applications;
                                                                                      • The stage can be used in the manual mode.

                                                                                      To enable the Semi Auto Deploy stage during the deploy process, follow the steps below:

                                                                                      1. Create or update the CD pipeline: make sure the trigger type for the stage is set to auto.
2. Replace the {\"name\":\"auto-deploy-input\",\"step_name\":\"auto-deploy-input\"} step with the {\"name\":\"semi-auto-deploy-input\",\"step_name\":\"semi-auto-deploy-input\"} step in the CD pipeline. Alternatively, it is possible to create a custom job provisioner with this step (a simplified sketch of the resulting behavior is shown after this list).
                                                                                      3. Run the Build pipeline for any application selected in the CD pipeline.
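
The timeout-then-default behavior described above can be illustrated with a generic Jenkins snippet. This is only a sketch of the idea behind SemiAutoDeployInput.groovy, not the actual EDP code; the application versions offered for selection are placeholders.

```groovy
// Sketch: wait up to 5 minutes for user input, otherwise fall back to the latest versions.
node {
    stage('semi-auto-deploy-input') {
        def selectedVersion = 'latest'
        try {
            timeout(time: 5, unit: 'MINUTES') {
                selectedVersion = input(
                    message: 'Select the application version to deploy',
                    parameters: [choice(name: 'VERSION', choices: ['latest', '1.0.1', '1.0.0'])]
                )
            }
        } catch (org.jenkinsci.plugins.workflow.steps.FlowInterruptedException ignored) {
            echo 'No input received within 5 minutes - deploying the latest versions'
        }
        echo "Deploying version: ${selectedVersion}"
    }
}
```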
                                                                                      "},{"location":"user-guide/semi-auto-deploy/#exceptional-cases","title":"Exceptional Cases","text":"

If the timeout has already started and the pipeline is interrupted from anywhere other than the Input requested menu, the automatic deployment will still proceed. To stop the pipeline, click the Input requested menu -> Abort, or click the Abort button on the pipeline UI.

                                                                                      "},{"location":"user-guide/semi-auto-deploy/#related-articles","title":"Related Articles","text":"
                                                                                      • Add CD Pipeline
                                                                                      • Customize CD Pipeline
                                                                                      • Manage Jenkins CD Pipeline Job Provisioner
                                                                                      "},{"location":"user-guide/terraform-stages/","title":"CI Pipelines for Terraform","text":"

EPAM Delivery Platform provides Terraform support through a separate component type called Infrastructure. The Infrastructure codebase type allows working with Terraform code that is processed by stages in the Code Review and Build pipelines.

                                                                                      "},{"location":"user-guide/terraform-stages/#pipeline-stages-for-terraform","title":"Pipeline Stages for Terraform","text":"

Under the hood, the Infrastructure codebase type, namely Terraform, looks quite similar to other codebase types. The distinguishing characteristic of the Infrastructure codebase type is a stage called terraform-check that is present in both the Code Review and Build pipelines. This stage runs the pre-commit activities, which in turn run the following commands and tools (a simplified sketch of this stage is provided at the end of this section):

1. Terraform fmt - the first step of the stage runs the terraform fmt command, which automatically updates the formatting of Terraform configuration files to follow the standard conventions and makes the code more readable and consistent.

                                                                                      2. Lock provider versions - locks the versions of the Terraform providers used in the project. This ensures that the project uses specific versions of the providers and prevents unexpected changes from impacting the infrastructure due to newer provider versions.

3. Terraform validate - checks the syntax and validity of the Terraform configuration files. It scans the configuration files for syntax errors and internal inconsistencies.

                                                                                      4. Terraform docs - generates human-readable documentation for the Terraform project.

                                                                                      5. Tflint - additional validation step using the tflint linter to provide more in-depth checks in addition to what the terraform validate command does.

                                                                                      6. Checkov - runs the checkov command against the Terraform codebase to identify any security misconfigurations or compliance issues.

                                                                                      7. Tfsec - another security-focused validation step using the tfsec command. Tfsec is a security scanner for Terraform templates that detects potential security issues and insecure configurations in the Terraform code.

                                                                                      Note

The commands and their attributes are defined in the .pre-commit-config.yaml file.
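
As an illustration of what terraform-check covers, the sketch below runs the same categories of checks as plain CLI commands inside a generic Jenkins stage. The flags shown are common defaults and may differ from the hooks actually configured in the project's .pre-commit-config.yaml; in EDP the checks are executed through pre-commit, so a single pre-commit run --all-files is closer to the real behavior.

```groovy
// Sketch only: the authoritative list of checks lives in .pre-commit-config.yaml.
node {
    stage('terraform-check') {
        sh '''
            terraform fmt -check -recursive   # 1. formatting
            terraform providers lock          # 2. lock provider versions
            terraform init -backend=false
            terraform validate                # 3. syntax and internal consistency
            terraform-docs markdown table .   # 4. generate documentation
            tflint                            # 5. additional linting
            checkov -d .                      # 6. security and compliance checks
            tfsec .                           # 7. security scanner
        '''
    }
}
```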

                                                                                      "},{"location":"user-guide/terraform-stages/#related-articles","title":"Related Articles","text":"
                                                                                      • User Guide Overview
                                                                                      • Add Infrastructure
                                                                                      • Manage Infrastructures
                                                                                      "}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index a02f113db..f4f8fe562 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,737 +2,737 @@ https://epam.github.io/edp-install/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/faq/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/features/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/getting-started/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/glossary/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/overview/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/roadmap/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/supported-versions/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/developer-guide/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/developer-guide/edp-workflow/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/developer-guide/local-development/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/developer-guide/mk-docs-development/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/add-jenkins-agent/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/add-ons-overview/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/add-other-code-language/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/add-security-scanner/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/argocd-integration/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/aws-marketplace-install/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/capsule/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/configure-keycloak-oidc-eks/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/container-registry-harbor-integration-tekton-ci/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/delete-edp/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/delete-jenkins-job-provision/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/dependency-track/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/deploy-aws-eks/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/deploy-okd-4.10/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/deploy-okd/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/ebs-csi-driver/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/edp-access-model/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/edp-kiosk-usage/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/eks-oidc-integration/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/enable-irsa/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/external-secrets-operator-integration/ - 2023-10-11 + 2023-10-12 daily 
https://epam.github.io/edp-install/operator-guide/github-debug-webhooks/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/github-integration/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/gitlab-debug-webhooks/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/gitlab-integration/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/gitlabci-integration/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/harbor-oidc/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/headlamp-oidc/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/import-strategy-jenkins/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/import-strategy-tekton/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/import-strategy/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-argocd/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-defectdojo/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-edp/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-external-secrets-operator/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-harbor/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-ingress-nginx/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-keycloak/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-kiosk/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-loki/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-reportportal/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-tekton/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-velero/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/install-via-helmfile/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/jira-gerrit-integration/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/jira-integration/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/kaniko-irsa/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/kibana-ilm-rollover/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/kubernetes-cluster-settings/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/logsight-integration/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/loki-irsa/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/manage-custom-certificate/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/manage-jenkins-cd-job-provision/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/manage-jenkins-ci-job-provision/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/migrate-ci-pipelines-from-jenkins-to-tekton/ - 2023-10-11 + 2023-10-12 daily 
https://epam.github.io/edp-install/operator-guide/multitenant-logging/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/namespace-management/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/nexus-sonatype/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/notification-msteams/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/oauth2-proxy/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/openshift-cluster-settings/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/overview-devsecops/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/overview-manage-jenkins-pipelines/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/overview-sast/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/perf-integration/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/prerequisites/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/report-portal-integration-tekton/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/reportportal-keycloak/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/restore-edp-with-velero/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/sast-scaner-semgrep/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/schedule-pods-restart/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/sonarqube/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/ssl-automation-okd/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/tekton-monitoring/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/tekton-overview/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/upgrade-edp-2.10/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/upgrade-edp-2.11/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/upgrade-edp-2.12/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/upgrade-edp-2.8/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/upgrade-edp-2.9/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/upgrade-edp-3.0/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/upgrade-edp-3.1/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/upgrade-edp-3.2/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/upgrade-edp-3.3/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/upgrade-edp-3.4/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/upgrade-edp-3.5/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/upgrade-keycloak-19.0/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/vcs/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/velero-irsa/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/operator-guide/waf-tf-configuration/ - 2023-10-11 + 2023-10-12 daily 
https://epam.github.io/edp-install/use-cases/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/use-cases/application-scaffolding/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/use-cases/autotest-as-quality-gate/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/use-cases/external-secrets/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/use-cases/tekton-custom-pipelines/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/add-application/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/add-autotest/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/add-cd-pipeline/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/add-cluster/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/add-custom-global-pipeline-lib/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/add-git-server/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/add-infrastructure/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/add-library/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/add-marketplace/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/add-quality-gate/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/application/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/autotest/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/build-pipeline/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/cd-pipeline-details/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/ci-pipeline-details/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/cicd-overview/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/cluster/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/code-review-pipeline/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/container-stages/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/copy-shared-secrets/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/customize-cd-pipeline/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/customize-ci-pipeline/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/d-d-diagram/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/dockerfile-stages/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/ecr-to-docker-stages/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/git-server-overview/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/helm-release-deletion/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/helm-stages/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/infrastructure/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/library/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/manage-branches/ - 2023-10-11 + 2023-10-12 daily 
https://epam.github.io/edp-install/user-guide/marketplace/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/opa-stages/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/pipeline-framework/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/pipeline-stages/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/prepare-for-release/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/semi-auto-deploy/ - 2023-10-11 + 2023-10-12 daily https://epam.github.io/edp-install/user-guide/terraform-stages/ - 2023-10-11 + 2023-10-12 daily \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 554c5ff56cbccbced39b663b972a3e68f6295dbc..c784c90b606b20d8a503b0d61218b4d85cb3c455 100644 GIT binary patch literal 1462 zcmV;n1xflJiwFoU5-4Q?|8r?{Wo=<_E_iKh0M(t%ZsRr($M5?TLGEM6&KB4$lI*Qd z&_2MJ97&8QQXwfj?$dW@$zH?QUV13}cMv#|X!$qPa6b5zmmi;lf7naPE{4~~&Fbm# zL4t|ah4%IFuRs5wZ;wCTzPwDyXDJU_IONyIu9Wfj_4;r)tg|j z%~p#n=dOyGjzcR=l8-mb&L3aaS7&CR^@Zp^vNim1-+aPlN|NsyvXxR) z)ku$05qsaPKIY8nc-N3o+w$JY;jST5WLxdSJu~KM9AniUnjfQ1pjP1E*ff=-CKF zVsayb(d=yUP&#j7Xk4qw3-xj|K5|ddSwqriFfj@{ZLgH1b-AP?b)0s3{i-!kk-q4g zaY&+B@lu4e1R(X<6~D(FjVbP32D3_5MhRAeIZ{b%dN6h$k5mmSSaV*;V{hr}YV(AN zjzlfHOeQ`y>~J9G-a?@Sq;h@D1b(r5_sn|$V{1`us@5W zz`o`wNs2y$^L5bYFc%^#yQ!sv?7A4ST-71~ZuNTxul_0kboCpD%D?1T)e%5p3tP{_ z2nX(lI#5EI-?Hk<@tj^Ei!kTpz!E^A&!=hbDMTc<(ugTGzDGoQuuhx{_P<4)h{DW1 z3z6J~Hq|^gv@U!i3UKaXhGegKIwH7_y+hfG{ zVU5DQyTsH=O0J-$f+rC`*?SJIk8quVz1wlnO7{b$=zRq1x$mc)MwL<|lkrp55L4sJ z(;QW(gx1G4yS9UExHUe2Q~f~|zZEj7HgO;$skp`&&BpaDbUf;c`9?A`kuwa5tIJLzg-*y zq%c`=kihk*L5;4*Bzp77)peRs+A?>)(cJ&`i&?aIvMiz?c0wo7JSUwd0njBnaM8E8=+C(5UvbfY zLqxxcS1^D8ZcMZgL*aM_##gvQqRLVdB)QiH>;aTJI1;RM*g0q10(R z$0VnjLvXF~vMromYH?CW6UQUvj+0oFoc)4+in`t3f|%)C!8zGFH&8Wi*_gF6k0(7XpN3b*FMiM77}Kj(OQ Q%TnI{0YA#4!<1_P0OytH!vFvP literal 1462 zcmV;n1xflJiwFolq9$bm|8r?{Wo=<_E_iKh0M(t%a@#f#$M1OxkMAp4PCAox9Ou?2 zXdhq+EJ>IkKm(v;_3671DMh16FFkbhm&YTMgs9(wi~YctUw(Xy{$VdEyBNMd?lw=4 z4-!nYE_B}?|N8R}`uh0e_0!9oe3tT{g+u=S*q1W>zTF-UhfQ`xMxOE}rf!=}uLpU% zp4>idzCFHvdeB$B!SdJX_2Q#>t#`?K8<}|AbguMsvvF}Nb|M#YVehvu+gkAYbDYZ< z?d$W?^Jm&U(QcK;{gg>KgOQcv!jFpML!N?qscJ*6`e~&(C`~nwwB%`40@Tk1FPo>uhx$s zJj<`xORp-gA+3GZrhQYiXErjgSiW%6tU zA~CrU!Dx0idnlbZF|@AJ(z#sHk$O&zUcYJ$RAean zW*m}eR=gA;EdfYc&kDS;cWL1xMfh5b<^ z1@<*hNmBF~oUemEhoul%Y37a&()2N6xvEtF-0JrXUj0=7=;}8Pm4D5#sw05H7Pg+p z2@c$ib)bYazh%{z<2k)T7GcTBfhB-KUry7~Q;0}zr4>_be2a+mV4XM@?0<_o5rtWN z79zO|U8;F*=v??f6yV&&49Q;ebVP6;2ZzX=q`KMq%N~&*FtAz4_u?f%`q#Z<>cx+( z!y1KqbBU>wlw3hg1y3S?vJV{G5aBul2iI`WO7|nB=zRq1xgX|6qe>}~$@n>Ih^g`A zX^tvXLhECfUDv}l+!`Oiss5;n-w7F2n>dn@R9x$f=Hq}uoX5zG#T_QU6~zII;%Xz7 zD#PR^@h+%=YLlwjS-KJdgS&C|#T5x0ilzjerjQ*He&A8^ROf{)s)ja&Hps`EDaXL& zI6+yd5Leq;XHbpmrg?<&&ynZSKUZ7y0o7NlhavZ_Ezr@?c`9?A`kuwa36uWDzg-*y zq%c`=kihk*MUAefBzp7N)peRs+Ol-N(bE6+i&?aIvaF&Yc0wo7JSUwd0nek&iEIYF|P~8SgVhT5&c;J>wOXt;B{1pB;Pe;MS*=yO+kgk5v4Fzc=-SFIBY zdu^6T%*I|c847zDgM>m}>_+Efqo(g%>EvC5B!w1Ivz_)>b?-u(6eP{2Zj6aUuIiEQ zedoy=