diff --git a/404.html b/404.html
index 12ebbc20c..70268dc15 100644
--- a/404.html
+++ b/404.html
@@ -1 +1 @@
- EPAM Delivery Platform

404 - Not found

\ No newline at end of file
+ EPAM Delivery Platform

404 - Not found

\ No newline at end of file
diff --git a/assets/operator-guide/select_trigger_template.png b/assets/operator-guide/select_trigger_template.png
new file mode 100644
index 000000000..82002fc94
Binary files /dev/null and b/assets/operator-guide/select_trigger_template.png differ
diff --git a/compliance/index.html b/compliance/index.html
index dd05dc230..a55ac32c5 100644
--- a/compliance/index.html
+++ b/compliance/index.html
@@ -1 +1 @@
- Compliance - EPAM Delivery Platform

Compliance⚓︎

The integrity of your deployments is our paramount commitment. We are devoted to strengthening our Kubernetes platform to comply with the most stringent security standards. Trust is the bedrock of our relationships, and we manifest this commitment by undergoing rigorous third-party audits to ensure compliance. We pledge unwavering support as you manage and deploy solutions within your environment, emphasizing security and reliability. Examine our compliance with various frameworks, laws, and regulations to understand our dedication to upholding robust security standards for the solutions you manage and deploy.

the EDP Badge

\ No newline at end of file
+ Compliance - EPAM Delivery Platform

Compliance⚓︎

The integrity of your deployments is our paramount commitment. We are devoted to strengthening our Kubernetes platform to comply with the most stringent security standards. Trust is the bedrock of our relationships, and we manifest this commitment by undergoing rigorous third-party audits to ensure compliance. We pledge unwavering support as you manage and deploy solutions within your environment, emphasizing security and reliability. Examine our compliance with various frameworks, laws, and regulations to understand our dedication to upholding robust security standards for the solutions you manage and deploy.

the EDP Badge

\ No newline at end of file
diff --git a/developer-guide/annotations-and-labels/index.html b/developer-guide/annotations-and-labels/index.html
index 007403fc8..0ebf8e975 100644
--- a/developer-guide/annotations-and-labels/index.html
+++ b/developer-guide/annotations-and-labels/index.html
@@ -1,4 +1,4 @@
- Annotations and Labels - EPAM Delivery Platform

Annotations and Labels⚓︎

EPAM Delivery Platform uses labels to interact with various resources in a Kubernetes cluster. This guide details the resources, annotations, and labels used by the platform to streamline operations, enhance monitoring, and enforce governance.

Labels⚓︎

The table below contains all the labels used in EDP:

Label Key Target Resources Possible Values Description
app.edp.epam.com/secret-type Secrets jira, nexus, sonar, defectdojo, dependency-track, repository Identifies the type of the secret.
app.edp.epam.com/integration-secret Secrets true Indicates whether the secret is used for integration.
app.edp.epam.com/codebase PipelineRun <codebase_name> Identifies the codebase associated with the PipelineRun.
app.edp.epam.com/codebasebranch PipelineRun <codebase_name>-<branch_name> Identifies the codebase branch associated with the PipelineRun.
app.edp.epam.com/pipeline PipelineRun, TaskRun <environment_name> Used by the EDP Portal to display the autotest status (on the Deploy environment).
app.edp.epam.com/pipelinetype PipelineRun, TaskRun autotestRunner, build, review, deploy Identifies the type of the Pipeline.
app.edp.epam.com/parentPipelineRun PipelineRun <cd-pipeline-autotest-runner-name> Used by the EDP Portal to display the autotest status (on the Deploy environment).
app.edp.epam.com/stage PipelineRun, TaskRun <stage_name> Used by the EDP Portal to display the autotest status (on the Deploy environment).
app.edp.epam.com/branch PipelineRun <branch_name> Identifies the branch associated with the PipelineRun.
app.edp.epam.com/codebaseType Codebase system, application Identifies the type of the codebase.
app.edp.epam.com/systemType Codebase gitops Identifies system repositories.
app.edp.epam.com/gitServer Ingress <gitServer_name> Identifies the ingress associated with the GitServer.
app.edp.epam.com/cdpipeline PipelineRun, TaskRun <cdpipeline> Identifies the CD pipeline associated with the PipelineRun.
app.edp.epam.com/cdstage PipelineRun, TaskRun <cd_stage_name> Identifies the CD stage associated with the PipelineRun.

Labels Usage in Secrets⚓︎

The table below shows what labels are used by specific secrets:

Secret Name Labels
ci-argocd app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=argocd
ci-defectdojo app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=defectdojo
ci-dependency-track app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=dependency-track
ci-jira app.edp.epam.com/secret-type=jira
ci-nexus app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=nexus
ci-sonarqube app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=sonar
gerrit-ciuser-sshkey app.edp.epam.com/secret-type=repository
kaniko-docker-config app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=registry
regcred app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=registry

Labels Usage in Tekton Pipeline Runs⚓︎

The table below displays what labels are used in specific Tekton pipelines:

PipelineRun Labels
review-pipeline app.edp.epam.com/codebase: <codebase_name>
app.edp.epam.com/codebasebranch: <codebase_name>-<branch_name>
app.edp.epam.com/pipelinetype: review
build-pipeline app.edp.epam.com/codebase: <codebase_name>
app.edp.epam.com/codebasebranch: <codebase_name>-<branch_name>
app.edp.epam.com/pipelinetype: build
autotest-runner-pipeline app.edp.epam.com/pipeline: <pipeline_name>
app.edp.epam.com/pipelinetype: autotestRunner
app.edp.epam.com/stage: <stage>
autotest-pipeline app.edp.epam.com/branch: <branch>
app.edp.epam.com/codebase: <codebase_name>
app.edp.epam.com/parentPipelineRun: <cd_pipeline>-<stage>
app.edp.epam.com/pipeline: <cd_pipeline>
app.edp.epam.com/stage: <stage>
deploy app.edp.epam.com/cdpipeline: <cd_pipeline>
app.edp.epam.com/cdstage: <cd_stage_name>
app.edp.epam.com/pipelinetype: deploy

Pipeline Usage Example⚓︎

To demonstrate label usage in the EDP Tekton pipelines, find below some EDP resource examples:

Codebase specification
metadata:
+ Annotations and Labels - EPAM Delivery Platform      

Annotations and Labels⚓︎

EPAM Delivery Platform uses labels to interact with various resources in a Kubernetes cluster. This guide details the resources, annotations, and labels used by the platform to streamline operations, enhance monitoring, and enforce governance.

Labels⚓︎

The table below contains all the labels used in EDP:

Label Key Target Resources Possible Values Description
app.edp.epam.com/secret-type Secrets jira, nexus, sonar, defectdojo, dependency-track, repository Identifies the type of the secret.
app.edp.epam.com/integration-secret Secrets true Indicates whether the secret is used for integration.
app.edp.epam.com/codebase PipelineRun <codebase_name> Identifies the codebase associated with the PipelineRun.
app.edp.epam.com/codebasebranch PipelineRun <codebase_name>-<branch_name> Identifies the codebase branch associated with the PipelineRun.
app.edp.epam.com/pipeline PipelineRun, TaskRun <environment_name> Used by the EDP Portal to display the autotest status (on the Deploy environment).
app.edp.epam.com/pipelinetype PipelineRun, TaskRun autotestRunner, build, review, deploy Identifies the type of the Pipeline.
app.edp.epam.com/parentPipelineRun PipelineRun <cd-pipeline-autotest-runner-name> Used by the EDP Portal to display the autotest status (on the Deploy environment).
app.edp.epam.com/stage PipelineRun, TaskRun <stage_name> Used by the EDP Portal to display the autotest status (on the Deploy environment).
app.edp.epam.com/branch PipelineRun <branch_name> Identifies the branch associated with the PipelineRun.
app.edp.epam.com/codebaseType Codebase system, application Identifies the type of the codebase.
app.edp.epam.com/systemType Codebase gitops Identifies system repositories.
app.edp.epam.com/gitServer Ingress <gitServer_name> Identifies the ingress associated with the GitServer.
app.edp.epam.com/cdpipeline PipelineRun, TaskRun <cdpipeline> Identifies the CD pipeline associated with the PipelineRun.
app.edp.epam.com/cdstage PipelineRun, TaskRun <cd_stage_name> Identifies the CD stage associated with the PipelineRun.

Labels Usage in Secrets⚓︎

The table below shows what labels are used by specific secrets:

Secret Name Labels
ci-argocd app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=argocd
ci-defectdojo app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=defectdojo
ci-dependency-track app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=dependency-track
ci-jira app.edp.epam.com/secret-type=jira
ci-nexus app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=nexus
ci-sonarqube app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=sonar
gerrit-ciuser-sshkey app.edp.epam.com/secret-type=repository
kaniko-docker-config app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=registry
regcred app.edp.epam.com/integration-secret=true
app.edp.epam.com/secret-type=registry
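For illustration, below is a minimal sketch of what one of these labeled secrets could look like. Only the labels are taken from the table above; the namespace, data keys, and values are assumptions:

apiVersion: v1
kind: Secret
metadata:
  name: ci-sonarqube
  namespace: edp                               # assumed namespace
  labels:
    app.edp.epam.com/integration-secret: "true"
    app.edp.epam.com/secret-type: sonar
type: Opaque
stringData:
  url: https://sonar.example.com               # hypothetical data keys and values
  token: <sonarqube-token>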

Labels Usage in Tekton Pipeline Runs⚓︎

The table below displays what labels are used in specific Tekton pipelines:

PipelineRun Labels
review-pipeline app.edp.epam.com/codebase: <codebase_name>
app.edp.epam.com/codebasebranch: <codebase_name>-<branch_name>
app.edp.epam.com/pipelinetype: review
build-pipeline app.edp.epam.com/codebase: <codebase_name>
app.edp.epam.com/codebasebranch: <codebase_name>-<branch_name>
app.edp.epam.com/pipelinetype: build
autotest-runner-pipeline app.edp.epam.com/pipeline: <pipeline_name>
app.edp.epam.com/pipelinetype: autotestRunner
app.edp.epam.com/stage: <stage>
autotest-pipeline app.edp.epam.com/branch: <branch>
app.edp.epam.com/codebase: <codebase_name>
app.edp.epam.com/parentPipelineRun: <cd_pipeline>-<stage>
app.edp.epam.com/pipeline: <cd_pipeline>
app.edp.epam.com/stage: <stage>
deploy app.edp.epam.com/cdpipeline: <cd_pipeline>
app.edp.epam.com/cdstage: <cd_stage_name>
app.edp.epam.com/pipelinetype: deploy
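As a reference, the metadata of a build PipelineRun for a codebase named demo might carry the labels listed above. The PipelineRun naming, branch name, and Pipeline reference are assumptions; the label keys and the pipelinetype value follow the table:

apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: demo-main-build-               # assumed naming convention
  labels:
    app.edp.epam.com/codebase: demo
    app.edp.epam.com/codebasebranch: demo-main
    app.edp.epam.com/pipelinetype: build
spec:
  pipelineRef:
    name: build-pipeline                       # hypothetical Pipeline name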

Pipeline Usage Example⚓︎

To demonstrate label usage in the EDP Tekton pipelines, find below some EDP resource examples:

Codebase specification
metadata:
   ...
   name: demo
   ...
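Expanding on the truncated snippet above, a labeled Codebase metadata block could look roughly as follows; apart from the name demo and the label key and values from the labels table, everything here is an assumption:

kind: Codebase
metadata:
  name: demo
  labels:
    app.edp.epam.com/codebaseType: application # or: system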
diff --git a/developer-guide/autotest-coverage/index.html b/developer-guide/autotest-coverage/index.html
index 7b35cbd09..f99fbf1b4 100644
--- a/developer-guide/autotest-coverage/index.html
+++ b/developer-guide/autotest-coverage/index.html
@@ -1 +1 @@
- Quality Control - EPAM Delivery Platform      

Quality Control⚓︎

In EPAM Delivery Platform, we guarantee the quality of the product not only by using the most advanced tools and best practices but also by covering the whole product functionality with our dedicated automated tests.

Autotest Coverage Scheme⚓︎

Autotests are a significant part of our verification flow. We continuously improve the quality of our verification mechanisms to provide users with the most stable version of the platform.

The autotest coverage status is presented in the scheme below:

Autotest coverage status

Release Testing⚓︎

In our testing flow, each release is verified by the following tests:

Test Group Description What's Covered
API Tests Tekton Gerrit, GitHub, and GitLab API long regression Codebase provisioning, reviewing and building pipelines, adding new branches, deploying applications (in a custom namespace), Jira integration, and rechecking for review pipeline.
UI Tests Tekton Gerrit, GitHub, and GitLab UI long regression Codebase provisioning, reviewing and building pipelines, adding new branches, deploying applications (in a custom namespace), Jira integration, and rechecking for review pipeline.
Short Tests Tekton Gerrit, GitHub, and GitLab API short regression Codebase provisioning, reviewing and building pipelines, deploying applications (in a custom namespace), rechecking for review pipeline.
Smoke Tekton Gerrit Smoke Codebase provisioning, reviewing and building pipelines, deploying applications.
\ No newline at end of file
+ Quality Control - EPAM Delivery Platform

Quality Control⚓︎

In EPAM Delivery Platform, we guarantee the quality of the product not only by using the most advanced tools and best practices but also by covering the whole product functionality with our dedicated automated tests.

Autotest Coverage Scheme⚓︎

Autotests are a significant part of our verification flow. We continuously improve the quality of our verification mechanisms to provide users with the most stable version of the platform.

The autotest coverage status is presented in the scheme below:

Autotest coverage status

Release Testing⚓︎

In our testing flow, each release is verified by the following tests:

Test Group Description What's Covered
API Tests Tekton Gerrit, GitHub, and GitLab API long regression Codebase provisioning, reviewing and building pipelines, adding new branches, deploying applications (in a custom namespace), Jira integration, and rechecking for review pipeline.
UI Tests Tekton Gerrit, GitHub, and GitLab UI long regression Codebase provisioning, reviewing and building pipelines, adding new branches, deploying applications (in a custom namespace), Jira integration, and rechecking for review pipeline.
Short Tests Tekton Gerrit, GitHub, and GitLab API short regression Codebase provisioning, reviewing and building pipelines, deploying applications (in a custom namespace), rechecking for review pipeline.
Smoke Tekton Gerrit Smoke Codebase provisioning, reviewing and building pipelines, deploying applications.
\ No newline at end of file
diff --git a/developer-guide/aws-deployment-diagram/index.html b/developer-guide/aws-deployment-diagram/index.html
index fb1672696..bd2f6cd79 100644
--- a/developer-guide/aws-deployment-diagram/index.html
+++ b/developer-guide/aws-deployment-diagram/index.html
@@ -1 +1 @@
- EDP Deployment on AWS - EPAM Delivery Platform

EDP Deployment on AWS⚓︎

This document describes the EPAM Delivery Platform (EDP) deployment architecture on AWS. It utilizes various AWS services such as Amazon Elastic Kubernetes Service (EKS), Amazon EC2, Amazon Route 53, and others to build and deploy software in a repeatable, automated way.

Overview⚓︎

The EDP deployment architecture consists of two AWS accounts: Shared and Explorer. The Shared account hosts shared services, while the Explorer account runs the development team workload and EDP services. Both accounts have an AWS EKS cluster deployed in multiple Availability Zones (AZs). The EKS cluster runs the EDP Services, development team workload, and shared services in the case of the Shared account.

EPAM Delivery Platform Deployment Diagram on AWS

Key Components⚓︎

  1. AWS Elastic Kubernetes Service (EKS): A managed Kubernetes service used to run the EDP Services, development team workload, and shared services. EKS provides easy deployment and management of Kubernetes clusters.
  2. Amazon EC2: Instances running within private subnets that serve as nodes for the EKS cluster. Autoscaling Groups are used to deploy these instances, allowing for scalability based on demand.
  3. Amazon Route 53: A DNS web service that manages external and internal DNS records for the EDP deployment. It enables easy access to resources using user-friendly domain names.
  4. AWS Application Load Balancer (ALB): Used for managing ingress traffic into the EDP deployment. Depending on requirements, ALBs can be configured as internal or external load balancers.
  5. AWS WAF: Web Application Firewall service used to protect external ALBs from common web exploits by filtering malicious requests.
  6. AWS Certificate Manager (ACM): A service that provisions, manages, and deploys SSL/TLS certificates for use with AWS services. ACM is used to manage SSL certificates for secure communication within the EDP deployment.
  7. AWS Elastic Container Registry (ECR): A fully-managed Docker container registry that stores and manages Docker images. ECR provides a secure and scalable solution for storing container images used in the EDP deployment.
  8. AWS Systems Manager Parameter Store: Used to securely store and manage secrets required by various components of the EDP deployment. Parameter Store protects sensitive information such as API keys, database credentials, and other secrets.

High Availability and Fault Tolerance⚓︎

The EKS cluster is deployed across multiple AZs to ensure high availability and fault tolerance. This allows for automatic failover in case of an AZ outage or instance failure. Autoscaling Groups automatically adjust the number of EC2 instances based on demand, ensuring scalability while maintaining availability.

Design Considerations⚓︎

Reliability⚓︎

  • Using multiple AZs ensures high availability and fault tolerance for the EKS cluster.
  • Autoscaling Groups enable automatic scaling of EC2 instances based on demand, providing reliability during peak loads.
  • NAT gateways are deployed in each AZ to ensure reliable outbound internet connectivity.

Performance Efficiency⚓︎

  • Utilizing AWS EKS allows for efficient management of Kubernetes clusters without the need for manual configuration or maintenance.
  • Spot instances can be utilized alongside on-demand instances within the EKS cluster to optimize costs while maintaining performance requirements.
  • Amazon Route 53 enables efficient DNS resolution by managing external and internal DNS records.

Security⚓︎

  • External ALBs are protected using AWS WAF, which filters out malicious traffic and protects against common web exploits.
  • ACM is used to provision SSL/TLS certificates, ensuring secure communication within the EDP deployment.
  • Secrets required by various components are securely stored and managed using the AWS Systems Manager Parameter Store.

Cost Optimization⚓︎

  • Utilizing spot and on-demand instances within the EKS cluster can significantly reduce costs while maintaining performance requirements.
  • Autoscaling Groups allow for automatic scaling of EC2 instances based on demand, ensuring optimal resource utilization and cost efficiency.

Conclusion⚓︎

The EPAM Delivery Platform (EDP) deployment architecture on AWS follows best practices and patterns from the Well-Architected Framework. By leveraging AWS services such as EKS, EC2, Route 53, ALB, WAF, ACM, and Parameter Store, the EDP provides a robust and scalable CI/CD system that enables developers to deploy and manage infrastructure and applications quickly. The architecture ensures high availability, fault tolerance, reliability, performance efficiency, security, and cost optimization for the EDP deployment.

\ No newline at end of file
+ EDP Deployment on AWS - EPAM Delivery Platform

EDP Deployment on AWS⚓︎

This document describes the EPAM Delivery Platform (EDP) deployment architecture on AWS. It utilizes various AWS services such as Amazon Elastic Kubernetes Service (EKS), Amazon EC2, Amazon Route 53, and others to build and deploy software in a repeatable, automated way.

Overview⚓︎

The EDP deployment architecture consists of two AWS accounts: Shared and Explorer. The Shared account hosts shared services, while the Explorer account runs the development team workload and EDP services. Both accounts have an AWS EKS cluster deployed in multiple Availability Zones (AZs). The EKS cluster runs the EDP Services, development team workload, and shared services in the case of the Shared account.

EPAM Delivery Platform Deployment Diagram on AWS

Key Components⚓︎

  1. AWS Elastic Kubernetes Service (EKS): A managed Kubernetes service used to run the EDP Services, development team workload, and shared services. EKS provides easy deployment and management of Kubernetes clusters.
  2. Amazon EC2: Instances running within private subnets that serve as nodes for the EKS cluster. Autoscaling Groups are used to deploy these instances, allowing for scalability based on demand.
  3. Amazon Route 53: A DNS web service that manages external and internal DNS records for the EDP deployment. It enables easy access to resources using user-friendly domain names.
  4. AWS Application Load Balancer (ALB): Used for managing ingress traffic into the EDP deployment. Depending on requirements, ALBs can be configured as internal or external load balancers (see the example manifest after this list).
  5. AWS WAF: Web Application Firewall service used to protect external ALBs from common web exploits by filtering malicious requests.
  6. AWS Certificate Manager (ACM): A service that provisions, manages, and deploys SSL/TLS certificates for use with AWS services. ACM is used to manage SSL certificates for secure communication within the EDP deployment.
  7. AWS Elastic Container Registry (ECR): A fully-managed Docker container registry that stores and manages Docker images. ECR provides a secure and scalable solution for storing container images used in the EDP deployment.
  8. AWS Systems Manager Parameter Store: Used to securely store and manage secrets required by various components of the EDP deployment. Parameter Store protects sensitive information such as API keys, database credentials, and other secrets.
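To make the ALB item above more concrete, the sketch below shows how an Ingress could request an external ALB. It assumes the AWS Load Balancer Controller is installed in the cluster; the resource names, host, and backend service are hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: edp-portal                             # hypothetical Ingress name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # use "internal" for internal ALBs
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: portal.example.com                 # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: edp-portal               # hypothetical backend service
                port:
                  number: 80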

High Availability and Fault Tolerance⚓︎

The EKS cluster is deployed across multiple AZs to ensure high availability and fault tolerance. This allows for automatic failover in case of an AZ outage or instance failure. Autoscaling Groups automatically adjust the number of EC2 instances based on demand, ensuring scalability while maintaining availability.

Design Considerations⚓︎

Reliability⚓︎

  • Using multiple AZs ensures high availability and fault tolerance for the EKS cluster.
  • Autoscaling Groups enable automatic scaling of EC2 instances based on demand, providing reliability during peak loads.
  • NAT gateways are deployed in each AZ to ensure reliable outbound internet connectivity.

Performance Efficiency⚓︎

  • Utilizing AWS EKS allows for efficient management of Kubernetes clusters without the need for manual configuration or maintenance.
  • Spot instances can be utilized alongside on-demand instances within the EKS cluster to optimize costs while maintaining performance requirements.
  • Amazon Route 53 enables efficient DNS resolution by managing external and internal DNS records.

Security⚓︎

  • External ALBs are protected using AWS WAF, which filters out malicious traffic and protects against common web exploits.
  • ACM is used to provision SSL/TLS certificates, ensuring secure communication within the EDP deployment.
  • Secrets required by various components are securely stored and managed using the AWS Systems Manager Parameter Store.

Cost Optimization⚓︎

  • Utilizing spot and on-demand instances within the EKS cluster can significantly reduce costs while maintaining performance requirements.
  • Autoscaling Groups allow for automatic scaling of EC2 instances based on demand, ensuring optimal resource utilization and cost efficiency.

Conclusion⚓︎

The EPAM Delivery Platform (EDP) deployment architecture on AWS follows best practices and patterns from the Well-Architected Framework. By leveraging AWS services such as EKS, EC2, Route 53, ALB, WAF, ACM, and Parameter Store, the EDP provides a robust and scalable CI/CD system that enables developers to deploy and manage infrastructure and applications quickly. The architecture ensures high availability, fault tolerance, reliability, performance efficiency, security, and cost optimization for the EDP deployment.

\ No newline at end of file
diff --git a/developer-guide/aws-infrastructure-cost-estimation/index.html b/developer-guide/aws-infrastructure-cost-estimation/index.html
index cbb30d24d..58ac095dd 100644
--- a/developer-guide/aws-infrastructure-cost-estimation/index.html
+++ b/developer-guide/aws-infrastructure-cost-estimation/index.html
@@ -1 +1 @@
- AWS Infrastructure Cost Estimation - EPAM Delivery Platform

AWS Infrastructure Cost Estimation⚓︎

Effective planning and budgeting are essential for developing applications in cloud computing, with a key part being accurate infrastructure cost estimation. This not only helps in keeping within budget but also enables informed decision-making and resource optimization for project viability.

This guide aims to offer an in-depth look at the factors affecting AWS infrastructure costs for KubeRocketCI and includes analytics and tools for cost estimation.

Platform Components and Approximate Costs⚓︎

This section contains tables outlining the key components of our AWS infrastructure, including a brief description of each component's role, its purpose within our infrastructure, and an estimate of its monthly cost.

Note

The costs mentioned below are estimates. For the most accurate and up-to-date pricing, please refer to the AWS official documentation.

The table below outlines key AWS infrastructure components for KubeRocketCI, detailing each component's role, purpose, and estimated monthly cost:

Component Description Purpose Within Infrastructure
Application Load Balancer (ALB) Distributes incoming application traffic across multiple targets. Ensures high availability and fault tolerance for our applications.
Virtual Private Cloud (VPC) Provides an isolated section of the AWS cloud where resources can be launched. Segregates our infrastructure for enhanced security and management.
3x Network Address Translation (NAT) Gateways Enables instances in a private subnet to connect to the internet or other AWS services. Provides internet access to EC2 instances without exposing them to the public internet.
Elastic Container Registry (ECR) A fully managed container registry. Stores, manages, and deploys container images.
Elastic Kubernetes Service (EKS) A managed Kubernetes service. Simplifies running Kubernetes applications on AWS.
Elastic Block Store (EBS) Provides persistent block storage volumes for use with EC2 instances. Offers highly available and durable storage for our applications.
Elastic Compute Cloud (EC2) Provides scalable computing capacity. Hosts our applications, supporting varied compute workloads.

The table below presents an itemized estimate of monthly costs for KubeRocketCI's AWS infrastructure components, including ALB, VPC, EC2, and more:

Component Approximate Monthly Cost
Application Load Balancer (ALB) $30.00
Virtual Private Cloud (VPC)
- 3x Network Address Translation Gateways
- 3x Public IPv4 Address

$113.88
$10.95
Elastic Container Registry (ECR) $5.00
Elastic Kubernetes Service (EKS)
- 1x EKS Clusters

$73.00
Elastic Block Store (EBS) $14.28
Elastic Compute Cloud (EC2)
- 2x c5.2xlarge (Spot)
- 2x c5.2xlarge (On-Demand)

$219.11
$576.00

AWS Pricing Calculator⚓︎

To further assist in your planning and budgeting efforts, we have pre-configured the AWS Pricing Calculator with inputs matching our infrastructure setup. This tool allows you to explore and adjust the cost estimation based on your specific needs, giving you a personalized overview of potential expenses.

Access the AWS Pricing Calculator with our pre-configured setup here: AWS Pricing Calculator

\ No newline at end of file
+ AWS Infrastructure Cost Estimation - EPAM Delivery Platform

AWS Infrastructure Cost Estimation⚓︎

Effective planning and budgeting are essential for developing applications in cloud computing, with a key part being accurate infrastructure cost estimation. This not only helps in keeping within budget but also enables informed decision-making and resource optimization for project viability.

This guide aims to offer an in-depth look at the factors affecting AWS infrastructure costs for KubeRocketCI and includes analytics and tools for cost estimation.

Platform Components and Approximate Costs⚓︎

This section contains tables outlining the key components of our AWS infrastructure, including a brief description of each component's role, its purpose within our infrastructure, and an estimate of its monthly cost.

Note

The costs mentioned below are estimates. For the most accurate and up-to-date pricing, please refer to the AWS official documentation.

The table below outlines key AWS infrastructure components for KubeRocketCI, detailing each component's role, purpose, and estimated monthly cost:

Component Description Purpose Within Infrastructure
Application Load Balancer (ALB) Distributes incoming application traffic across multiple targets. Ensures high availability and fault tolerance for our applications.
Virtual Private Cloud (VPC) Provides an isolated section of the AWS cloud where resources can be launched. Segregates our infrastructure for enhanced security and management.
3x Network Address Translation (NAT) Gateways Enables instances in a private subnet to connect to the internet or other AWS services. Provides internet access to EC2 instances without exposing them to the public internet.
Elastic Container Registry (ECR) A fully managed container registry. Stores, manages, and deploys container images.
Elastic Kubernetes Service (EKS) A managed Kubernetes service. Simplifies running Kubernetes applications on AWS.
Elastic Block Store (EBS) Provides persistent block storage volumes for use with EC2 instances. Offers highly available and durable storage for our applications.
Elastic Compute Cloud (EC2) Provides scalable computing capacity. Hosts our applications, supporting varied compute workloads.

The table below presents an itemized estimate of monthly costs for KubeRocketCI's AWS infrastructure components, including ALB, VPC, EC2, and more:

Component Approximate Monthly Cost
Application Load Balancer (ALB) $30.00
Virtual Private Cloud (VPC)
- 3x Network Address Translation Gateways
- 3x Public IPv4 Address

$113.88
$10.95
Elastic Container Registry (ECR) $5.00
Elastic Kubernetes Service (EKS)
- 1x EKS Clusters

$73.00
Elastic Block Store (EBS) $14.28
Elastic Compute Cloud (EC2)
- 2x c5.2xlarge (Spot)
- 2x c5.2xlarge (On-Demand)

$219.11
$576.00
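Summing the line items above gives a rough total of about $1,042 per month (30.00 + 113.88 + 10.95 + 5.00 + 73.00 + 14.28 + 219.11 + 576.00); actual spend will vary with region, usage, and spot-market pricing.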

AWS Pricing Calculator⚓︎

To further assist in your planning and budgeting efforts, we have pre-configured the AWS Pricing Calculator with inputs matching our infrastructure setup. This tool allows you to explore and adjust the cost estimation based on your specific needs, giving you a personalized overview of potential expenses.

Access the AWS Pricing Calculator with our pre-configured setup here: AWS Pricing Calculator

\ No newline at end of file
diff --git a/developer-guide/aws-reference-architecture/index.html b/developer-guide/aws-reference-architecture/index.html
index 084d8520b..27f9ec125 100644
--- a/developer-guide/aws-reference-architecture/index.html
+++ b/developer-guide/aws-reference-architecture/index.html
@@ -1 +1 @@
- EDP Reference Architecture on AWS - EPAM Delivery Platform

EDP Reference Architecture on AWS⚓︎

The reference architecture of the EPAM Delivery Platform (EDP) on AWS is designed to provide a robust and scalable CI/CD system for developing and deploying software in a repeatable and automated manner. The architecture leverages AWS Managed Services to enable developers to quickly deploy and manage infrastructure and applications. EDP recommends following the best practices and patterns from the Well-Architected Framework, the AWS Architecture Center, and the EKS Best Practices Guide.

Architecture Details⚓︎

The AWS Cloud comprises three accounts: Production, Shared, and Development.

Note

AWS Account management is out of scope for this document.

Each account serves specific purposes:

  • The Production account is used to host production workloads. The Production account serves as the final destination for deploying business applications. It maintains a separate ECR registry to store Docker images for production-level applications. The environment is designed to be highly resilient and scalable, leveraging the EPAM Delivery Platform's CI/CD pipeline to ensure consistent and automated deployments. With proper access control and separation from development environments, the Production account provides a stable and secure environment for running mission-critical applications.
  • The Development account is dedicated to development workloads and lower environments. This account hosts the EDP itself, running on AWS EKS. It provides developers with an isolated environment to build, test, and deploy their applications in lower environments, ensuring separation from production workloads. Developers can connect to the AWS Cloud using a VPN, enforcing secure access.
  • The Shared account holds shared services that are accessible to all accounts within the organization. These services include SonarQube, Nexus, and Keycloak, which are deployed in Kubernetes Clusters managed by AWS Elastic Kubernetes Service (EKS). The shared services leverage AWS RDS, AWS EFS, and AWS ALB/NLB. The deployment of the shared services is automated using the Kubernetes cluster-addons approach with GitOps and Argo CD.

EPAM Delivery Platform Reference Architecture on AWS

Infrastructure as Code⚓︎

Infrastructure as Code (IaC) is a key principle in the EPAM Delivery Platform architecture. Terraform is the IaC tool to provision and manage all services in each account. AWS S3 and AWS DynamoDB serve as the backend for Terraform state, ensuring consistency and reliability in the deployment process. This approach enables the architecture to be version-controlled and allows for easy replication and reproducibility of environments.

Container Registry⚓︎

The architecture utilizes AWS Elastic Container Registry (ECR) as a Docker Registry for container image management. ECR offers a secure, scalable, and reliable solution for storing and managing container images. It integrates seamlessly with other AWS services and provides a highly available and durable storage solution for containers in the CI/CD pipeline.

IAM Roles for Service Accounts (IRSA)⚓︎

The EPAM Delivery Platform implements IAM Roles for Service Accounts (IRSA) to provide secure access to AWS services from Kubernetes Clusters. This feature enables fine-grained access control with individual Kubernetes pods assuming specific IAM roles for authenticated access to AWS resources. IRSA eliminates the need for managing and distributing access keys within the cluster, significantly enhancing security and reducing operational complexity.

SSL Certificates⚓︎

The architecture uses AWS Certificate Manager (ACM) to provide the SSL certificates that secure communication between services. ACM eliminates the need to manually manage SSL/TLS certificates by automating the renewal and deployment process. The EDP ensures secure and encrypted traffic within its environment by leveraging ACM.

AWS WAF⚓︎

The architecture's external Application Load Balancer (ALB) endpoint is protected by the AWS Web Application Firewall (WAF). WAF protects against common web exploits and ensures the security and availability of the applications hosted within the EDP. It offers regular rule updates and easy integration with other AWS services.

Parameter Store and Secrets Manager⚓︎

The architecture leverages the AWS Systems Manager Parameter Store and Secrets Manager to securely store and manage all secrets and parameters utilized within the EKS clusters. Parameter Store holds general configuration information, such as database connection strings and API keys, while Secrets Manager securely stores sensitive information, such as passwords and access tokens. By centralizing secrets management, the architecture ensures proper access control and reduces the risk of unauthorized access.

Observability and Monitoring⚓︎

For observability and monitoring, the EDP leverages a suite of AWS Managed Services designed to provide comprehensive insights into the performance and health of applications and infrastructure:

AWS CloudWatch is utilized for monitoring and observability, offering detailed insights into application and infrastructure performance. It enables real-time monitoring of logs, metrics, and events, facilitating proactive issue resolution and performance optimization.

AWS OpenSearch Service (successor to Amazon Elasticsearch Service) provides powerful search and analytics capabilities. It allows for the analysis of log data and metrics, supporting enhanced application monitoring and user experience optimization.

AWS Managed Grafana offers a scalable, secure, and fully managed Grafana service, enabling developers to create and share dashboards for visualizing real-time data.

AWS Prometheus Service, a managed Prometheus-compatible monitoring service, is used for monitoring Kubernetes and container environments. It supports powerful queries and provides detailed insights into container and microservices architectures.

Summary⚓︎

The reference architecture of the EPAM Delivery Platform on AWS provides a comprehensive and scalable environment for building and deploying software applications. With a strong focus on automation, security, and best practices, this architecture enables developers to leverage the full potential of AWS services while following industry-standard DevOps practices.

\ No newline at end of file
+ EDP Reference Architecture on AWS - EPAM Delivery Platform

EDP Reference Architecture on AWS⚓︎

The reference architecture of the EPAM Delivery Platform (EDP) on AWS is designed to provide a robust and scalable CI/CD system for developing and deploying software in a repeatable and automated manner. The architecture leverages AWS Managed Services to enable developers to quickly deploy and manage infrastructure and applications. EDP recommends following the best practices and patterns from the Well-Architected Framework, the AWS Architecture Center, and the EKS Best Practices Guide.

Architecture Details⚓︎

The AWS Cloud comprises three accounts: Production, Shared, and Development.

Note

AWS Account management is out of scope for this document.

Each account serves specific purposes:

  • The Production account is used to host production workloads. The Production account serves as the final destination for deploying business applications. It maintains a separate ECR registry to store Docker images for production-level applications. The environment is designed to be highly resilient and scalable, leveraging the EPAM Delivery Platform's CI/CD pipeline to ensure consistent and automated deployments. With proper access control and separation from development environments, the Production account provides a stable and secure environment for running mission-critical applications.
  • The Development account is dedicated to development workloads and lower environments. This account hosts the EDP itself, running on AWS EKS. It provides developers with an isolated environment to build, test, and deploy their applications in lower environments, ensuring separation from production workloads. Developers can connect to the AWS Cloud using a VPN, enforcing secure access.
  • The Shared account holds shared services that are accessible to all accounts within the organization. These services include SonarQube, Nexus, and Keycloak, which are deployed in Kubernetes Clusters managed by AWS Elastic Kubernetes Service (EKS). The shared services leverage AWS RDS, AWS EFS, and AWS ALB/NLB. The deployment of the shared services is automated using the Kubernetes cluster-addons approach with GitOps and Argo CD.

EPAM Delivery Platform Reference Architecture on AWS

Infrastructure as Code⚓︎

Infrastructure as Code (IaC) is a key principle in the EPAM Delivery Platform architecture. Terraform is the IaC tool to provision and manage all services in each account. AWS S3 and AWS DynamoDB serve as the backend for Terraform state, ensuring consistency and reliability in the deployment process. This approach enables the architecture to be version-controlled and allows for easy replication and reproducibility of environments.

Container Registry⚓︎

The architecture utilizes AWS Elastic Container Registry (ECR) as a Docker Registry for container image management. ECR offers a secure, scalable, and reliable solution for storing and managing container images. It integrates seamlessly with other AWS services and provides a highly available and durable storage solution for containers in the CI/CD pipeline.

IAM Roles for Service Accounts (IRSA)⚓︎

The EPAM Delivery Platform implements IAM Roles for Service Accounts (IRSA) to provide secure access to AWS services from Kubernetes Clusters. This feature enables fine-grained access control with individual Kubernetes pods assuming specific IAM roles for authenticated access to AWS resources. IRSA eliminates the need for managing and distributing access keys within the cluster, significantly enhancing security and reducing operational complexity.
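As a brief illustration of the IRSA mechanism, a Kubernetes service account can be bound to an IAM role through an annotation, roughly as below; the service account name, namespace, AWS account ID, and role name are hypothetical:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: edp-ecr-access                         # hypothetical service account
  namespace: edp                               # assumed namespace
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/edp-ecr-access  # hypothetical role ARN

Pods that run under this service account receive temporary credentials for the referenced role, so no long-lived access keys have to be stored in the cluster.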

SSL Certificates⚓︎

The architecture uses AWS Certificate Manager (ACM) to provide the SSL certificates that secure communication between services. ACM eliminates the need to manually manage SSL/TLS certificates by automating the renewal and deployment process. The EDP ensures secure and encrypted traffic within its environment by leveraging ACM.

AWS WAF⚓︎

The architecture's external Application Load Balancer (ALB) endpoint is protected by the AWS Web Application Firewall (WAF). WAF protects against common web exploits and ensures the security and availability of the applications hosted within the EDP. It offers regular rule updates and easy integration with other AWS services.

Parameter Store and Secrets Manager⚓︎

The architecture leverages the AWS Systems Manager Parameter Store and Secrets Manager to securely store and manage all secrets and parameters utilized within the EKS clusters. Parameter Store holds general configuration information, such as database connection strings and API keys, while Secrets Manager securely stores sensitive information, such as passwords and access tokens. By centralizing secrets management, the architecture ensures proper access control and reduces the risk of unauthorized access.

Observability and Monitoring⚓︎

For observability and monitoring, the EDP leverages a suite of AWS Managed Services designed to provide comprehensive insights into the performance and health of applications and infrastructure:

AWS CloudWatch is utilized for monitoring and observability, offering detailed insights into application and infrastructure performance. It enables real-time monitoring of logs, metrics, and events, facilitating proactive issue resolution and performance optimization.

AWS OpenSearch Service (successor to Amazon Elasticsearch Service) provides powerful search and analytics capabilities. It allows for the analysis of log data and metrics, supporting enhanced application monitoring and user experience optimization.

AWS Managed Grafana offers a scalable, secure, and fully managed Grafana service, enabling developers to create and share dashboards for visualizing real-time data.

AWS Prometheus Service, a managed Prometheus-compatible monitoring service, is used for monitoring Kubernetes and container environments. It supports powerful queries and provides detailed insights into container and microservices architectures.

Summary⚓︎

The reference architecture of the EPAM Delivery Platform on AWS provides a comprehensive and scalable environment for building and deploying software applications. With a strong focus on automation, security, and best practices, this architecture enables developers to leverage the full potential of AWS services while following industry-standard DevOps practices.

\ No newline at end of file
diff --git a/developer-guide/edp-workflow/index.html b/developer-guide/edp-workflow/index.html
index b7414d18b..c51db31cf 100644
--- a/developer-guide/edp-workflow/index.html
+++ b/developer-guide/edp-workflow/index.html
@@ -1,4 +1,4 @@
- KubeRocketCI Project Rules. Working Process - EPAM Delivery Platform

KubeRocketCI Project Rules. Working Process⚓︎

This page contains the details on the project rules and working process for KubeRocketCI team and contributors. Explore the main points about working with GitHub, following the main commit flow, as well as the details about commit types and message below.

Project Rules⚓︎

Before starting the development, please check the project rules:

  1. It is highly recommended to become familiar with the GitHub flow. For details, please refer to the GitHub official documentation and pay attention to the main points:

    a. Creating pull requests in GitHub.

    b. Resolution of Merge Conflict.

    c. Comments resolution.

    d. One GitHub task should have one Pull Request (PR) if it doesn't change multiple operators. If there are many changes within one PR, amend the commit.

  2. Only the Assignee is responsible for the PR merger and Jira task status.

  3. Every PR should be merged in a timely manner.

  4. Log time to Jira ticket.

Working Process⚓︎

With KubeRocketCI, the main workflow is based on getting a Jira task and creating a Pull Request according to the rules described below.

Workflow

Get a Jira task → implement and verify the results yourself → create a Pull Request (PR) → send it for review → resolve comments/add changes, ask colleagues for the final review → track the PR merge → verify the results yourself → change the status in the Jira ticket to CODE COMPLETE or RESOLVED → share the necessary links with a QA specialist in the QA Verification channel → the QA specialist closes the Jira task after verification → the Jira task should be CLOSED.

Commit Flow

  1. Get a task in the Jira/GitHub dashboard. Please be aware of the following points:

    a. Every task has a reporter who can provide more details in case something is not clear.

    b. The responsible person for the task and code implementation is the assignee who tracks the following:

    • Actual Jira task status.
    • Time logging.
    • Add comments, attach necessary files.
    • In comments, add a link that refers to the merged PR (optional, if not related to many repositories).
    • Code review and the final merge.
    • MS Teams chats - ping other colleagues, answer questions, etc.
    • Verification by a QA specialist.
    • Bug fixing.

    c. Pay attention to the task Status that differs in different entities, the workflow will help to see the whole task processing:

    View Jira workflow

    d. There are several entities that are used on the KubeRocketCI project: Story, Improvement, Task, Bug.

    a. Every task has a reporter who can provide more details in case something is not clear.

    b. The responsible person for the task and code implementation is the assignee who tracks the following:

    • Actual GitHub task status.
    • Add comments, attach necessary files.
    • In comments, add a link that refers to the merged PR (optional, if not related to many repositories).
    • Code review and the final merge.
    • MS Teams chats - ping other colleagues, answer questions, etc.
    • Verification by a QA specialist.
    • Bug fixing.

    c. If the task is created on your own, make sure it is populated completely. See an example below:

    GitHub issue

  2. Implement feature, improvement, fix and check the results on your own. If it is impossible to check the results of your work before the merge, verify all later.

  3. When committing, use the pattern: commit type: Commit message (#GitHub ticket number).

    a. commit type:

    feat: (new feature for the user, not a new feature for build script)

    fix: (bug fix for the user, not a fix to a build script)

    docs: (changes to the documentation)

    style: (formatting, missing semicolons, etc; no production code change)

    refactor: (refactoring production code, e.g., renaming a variable)

    test: (adding missing tests, refactoring tests; no production code change)

    chore: (updating grunt tasks etc; no production code change)

    !: (added to other commit types to mark breaking changes) For example:

    feat!: Add ingress links column into Applications table on stage page (#77)
    + KubeRocketCI Project Rules. Working Process - EPAM Delivery Platform      

    KubeRocketCI Project Rules. Working Process⚓︎

    This page contains the details on the project rules and working process for KubeRocketCI team and contributors. Explore the main points about working with GitHub, following the main commit flow, as well as the details about commit types and message below.

    Project Rules⚓︎

    Before starting the development, please check the project rules:

    1. It is highly recommended to become familiar with the GitHub flow. For details, please refer to the GitHub official documentation and pay attention to the main points:

      a. Creating pull requests in GitHub.

      b. Resolution of Merge Conflict.

      c. Comments resolution.

      d. One GitHub task should have one Pull Request (PR) if it doesn't change multiple operators. If there are many changes within one PR, amend the commit.

    2. Only the Assignee is responsible for the PR merger and Jira task status.

    3. Every PR should be merged in a timely manner.

    4. Log time to Jira ticket.

    Working Process⚓︎

    With KubeRocketCI, the main workflow is based on getting a Jira task and creating a Pull Request according to the rules described below.

    Workflow

    Get a Jira task → implement and verify the results yourself → create a Pull Request (PR) → send it for review → resolve comments/add changes, ask colleagues for the final review → track the PR merge → verify the results yourself → change the status in the Jira ticket to CODE COMPLETE or RESOLVED → share the necessary links with a QA specialist in the QA Verification channel → the QA specialist closes the Jira task after verification → the Jira task should be CLOSED.

    Commit Flow

    1. Get a task in the Jira/GitHub dashboard. Please be aware of the following points:

      a. Every task has a reporter who can provide more details in case something is not clear.

      b. The responsible person for the task and code implementation is the assignee who tracks the following:

      • Actual Jira task status.
      • Time logging.
      • Add comments, attach necessary files.
      • In comments, add a link that refers to the merged PR (optional, if not related to many repositories).
      • Code review and the final merge.
      • MS Teams chats - ping other colleagues, answer questions, etc.
      • Verification by a QA specialist.
      • Bug fixing.

      c. Pay attention to the task Status that differs in different entities, the workflow will help to see the whole task processing:

      View Jira workflow

      d. There are several entities that are used on the KubeRocketCI project: Story, Improvement, Task, Bug.

      a. Every task has a reporter who can provide more details in case something is not clear.

      b. The responsible person for the task and code implementation is the assignee who tracks the following:

      • Actual GitHub task status.
      • Add comments, attach necessary files.
      • In comments, add a link that refers to the merged PR (optional, if not related to many repositories).
      • Code review and the final merge.
      • MS Teams chats - ping other colleagues, answer questions, etc.
      • Verification by a QA specialist.
      • Bug fixing.

      c. If the task is created on your own, make sure it is populated completely. See an example below:

      GitHub issue

    2. Implement feature, improvement, fix and check the results on your own. If it is impossible to check the results of your work before the merge, verify all later.

    3. When committing, use the pattern: commit type: Commit message (#GitHub ticket number).

      a. commit type:

      feat: (new feature for the user, not a new feature for build script)

      fix: (bug fix for the user, not a fix to a build script)

      docs: (changes to the documentation)

      style: (formatting, missing semicolons, etc; no production code change)

      refactor: (refactoring production code, e.g., renaming a variable)

      test: (adding missing tests, refactoring tests; no production code change)

      chore: (updating grunt tasks etc; no production code change)

      !: (added to other commit types to mark breaking changes) For example:

      feat!: Add ingress links column into Applications table on stage page (#77)
       
       BREAKING CHANGE: Ingress links column has been added into the Applications table on the stage details page
       

      b. Commit message:

      • brief, for example:

        fix: Remove secretKey duplication from registry secrets (#63)

        or

      • descriptive, for example:

        feat: Provide the ability to configure hadolint check (#88)

        * Add configuration files .hadolint.yaml and .hadolint.yml to stash

        Note

        It is mandatory to start a commit message with a capital letter.

      c. GitHub tickets are typically identified using a number preceded by the # sign and enclosed in parentheses.

      Note

      Make sure there is a descriptive commit message for a breaking change Pull Request. For example:

      feat!: Add ingress links column into Applications table on stage page (#77)

      BREAKING CHANGE: Ingress links column has been added into the Applications table on the stage details page

    4. Create a Pull Request, for details, please refer to the Code Review Process:

      GitHub issue

      Note

      If a Pull Request contains both new functionality and breaking changes, make sure the functionality description is placed before the breaking changes. For example:

      feat!: Update Gerrit to improve access

      • Implement Developers group creation process
      • Align group permissions

      BREAKING CHANGES: Update Gerrit config according to groups

\ No newline at end of file
diff --git a/developer-guide/index.html b/developer-guide/index.html
index 1d3a187e2..5aea3006a 100644
--- a/developer-guide/index.html
+++ b/developer-guide/index.html
@@ -1 +1 @@
- Overview - EPAM Delivery Platform

    Overview⚓︎

    The EPAM Delivery Platform (EDP) Developer Guide serves as a comprehensive technical resource specifically designed for developers. It offers detailed insights into expanding the functionalities of EDP. This section focuses on explaining the development approach and fundamental architectural blueprints that form the basis of the platform's ecosystem.

    Within these pages, you'll find architectural diagrams, component schemas, and deployment strategies essential for grasping the structural elements of EDP. These technical illustrations serve as references, providing a detailed understanding of component interactions and deployment methodologies. Understanding the architecture of EDP and integrating third-party solutions into its established framework enables the creation of efficient, scalable, and customizable solutions within the EPAM Delivery Platform.

    The diagram below illustrates how GitHub repositories and Docker registries are interconnected within the EDP ecosystem.

    Diagram of core components: codebase-operator, cd-pipeline-operator, EDP Portal (edp-headlamp), nexus-operator, sonar-operator, keycloak-operator, edp-tekton, edp-install.

    Release Channels⚓︎

    As a publicly available product, the EPAM Delivery Platform relies on various channels to share information, gather feedback, and distribute new releases effectively. This section outlines the diverse channels through which users can engage with our platform and stay informed about the latest developments and enhancements.

    Marketplaces⚓︎

    Our product is presented on AWS and Civo marketplaces. It's essential to ensure that the product information on these platforms is up-to-date and accurately reflects the latest version of our software:

    OperatorHub⚓︎

    Our product operators are showcased on OperatorHub, enabling seamless integration and management capabilities:

    GitHub Repositories⚓︎

    Our platform components, optional enhancements, add-ons, and deployment resources are hosted on GitHub repositories. Explore the following repositories to access the source code of components.

    Platform Components⚓︎

    Each platform component is available in its corresponding GitHub project:

    Optional Components⚓︎

    These optional components enhance the platform's installation and configuration experience:

    Add-ons Repository⚓︎

    The Add-ons repository provides a streamlined pathway for deploying the all-in-one solution:

    Tekton Custom Library⚓︎

    Explore additional tools and customizations in our Tekton Custom Library:

    Platform Test Data⚓︎

    Access test data from the 'Create' onboarding strategy:

    Helm Charts⚓︎

    Helm chart artifacts are available in repository:

    DockerHub⚓︎

    Our DockerHub repository hosts Docker images for various platform components:

    Social Media⚓︎

    To maintain an active presence on social media channels and share valuable content about our software releases, we continuously publish materials across the following media:

    \ No newline at end of file + Overview - EPAM Delivery Platform

    Overview⚓︎

    The EPAM Delivery Platform (EDP) Developer Guide serves as a comprehensive technical resource specifically designed for developers. It offers detailed insights into expanding the functionalities of EDP. This section focuses on explaining the development approach and fundamental architectural blueprints that form the basis of the platform's ecosystem.

    Within these pages, you'll find architectural diagrams, component schemas, and deployment strategies essential for grasping the structural elements of EDP. These technical illustrations serve as references, providing a detailed understanding of component interactions and deployment methodologies. Understanding the architecture of EDP and integrating third-party solutions into its established framework enables the creation of efficient, scalable, and customizable solutions within the EPAM Delivery Platform.

    The diagram below illustrates how GitHub repositories and Docker registries are interconnected within the EDP ecosystem.

    EDP core components diagram: edp-install, codebase-operator, cd-pipeline-operator, edp-tekton, EDP Portal (edp-headlamp), nexus-operator, sonar-operator, keycloak-operator. Click the icons in the interactive diagram to open the corresponding repositories.

    Release Channels⚓︎

    As a publicly available product, the EPAM Delivery Platform relies on various channels to share information, gather feedback, and distribute new releases effectively. This section outlines the diverse channels through which users can engage with our platform and stay informed about the latest developments and enhancements.

    Marketplaces⚓︎

    Our product is presented on AWS and Civo marketplaces. It's essential to ensure that the product information on these platforms is up-to-date and accurately reflects the latest version of our software:

    OperatorHub⚓︎

    Our product operators are showcased on OperatorHub, enabling seamless integration and management capabilities:

    GitHub Repositories⚓︎

    Our platform components, optional enhancements, add-ons, and deployment resources are hosted on GitHub repositories. Explore the following repositories to access the source code of components.

    Platform Components⚓︎

    Each platform component is available in its corresponding GitHub project:

    Optional Components⚓︎

    These optional components enhance the platform's installation and configuration experience:

    Add-ons Repository⚓︎

    The Add-ons repository provides a streamlined pathway for deploying the all-in-one solution:

    Tekton Custom Library⚓︎

    Explore additional tools and customizations in our Tekton Custom Library:

    Platform Test Data⚓︎

    Access test data from the 'Create' onboarding strategy:

    Helm Charts⚓︎

    Helm chart artifacts are available in the repository:
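    As a usage sketch only (the repository URL placeholder below is an assumption and should be replaced with the published EDP chart repository), a Helm chart repository is typically consumed like this:

      helm repo add edp <helm-charts-repo-url>   # replace the placeholder with the actual repository URL
      helm repo update
      helm search repo edp                       # list the charts available in the added repository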

    DockerHub⚓︎

    Our DockerHub repository hosts Docker images for various platform components:
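    As an illustrative sketch (the organization name and image reference below are assumptions; check the repository for the exact names and tags), pulling a component image typically looks like this:

      docker pull epamedp/codebase-operator:<version>   # hypothetical image reference for illustration only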

    Social Media⚓︎

    To maintain an active presence on social media channels and share valuable content about our software releases, we continuously publish materials across the following media:

    \ No newline at end of file diff --git a/developer-guide/kubernetes-deployment/index.html b/developer-guide/kubernetes-deployment/index.html index b3d688428..6896cfff2 100644 --- a/developer-guide/kubernetes-deployment/index.html +++ b/developer-guide/kubernetes-deployment/index.html @@ -1 +1 @@ - Kubernetes Deployment - EPAM Delivery Platform

    \ No newline at end of file + Kubernetes Deployment - EPAM Delivery Platform

    Kubernetes Deployment⚓︎

    This section provides a comprehensive overview of the EDP deployment approach on a Kubernetes cluster. EDP is designed and functions based on a set of key guiding principles:

    • Operator Pattern Approach: This approach is used for deployment and configuration, ensuring that the platform aligns with Kubernetes-native methodologies (see the schema below).
    • Loose Coupling: EDP comprises several loosely coupled operators responsible for different parts of the platform. These operators can be deployed independently, enabling the most straightforward approach to platform customization and delivery.

      Kubernetes Operator

    The following deployment diagram illustrates the platform's core components, which provide the minimum functional capabilities required for the platform operation: build, push, deploy, and run applications. The platform relies on several mandatory dependencies:

    • Ingress: An ingress controller responsible for routing traffic to the platform.
    • Tekton Stack: Includes Tekton pipelines, triggers, dashboard, chains, etc.
    • ArgoCD: Responsible for GitOps deployment.

    EPAM Delivery Platform Deployment Diagram

    • Codebase Operator: Responsible for managing Git repositories, versioning, and branching. It also implements the Jira integration controller.
    • CD Pipeline Operator: Manages Continuous Delivery (CD) pipelines and CD stages (an abstraction over a Kubernetes namespace). The operator acts as the bridge between artifacts and deployment tools, such as Argo CD. It defines the CD pipeline structure and artifact promotion logic, and triggers pipeline execution.
    • Tekton Pipelines: Manages Tekton pipelines and processes events (EventListener, Interceptor) from Version Control Systems. The pipelines are integrated with external tools like SonarQube, Nexus, etc.
    • EDP Portal: This is the User Interface (UI) component, built on top of Headlamp.

    Business applications are deployed on the platform using the CD Pipeline Operator and Argo CD. By default, the CD Pipeline Operator uses Argo CD as a deployment tool. However, it can be replaced with any other tool, like FluxCD, Spinnaker, etc. The target environment for application deployment is the Kubernetes cluster where EDP is deployed, but it can be any other Kubernetes cluster.
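    As a quick, hedged sanity check (assuming EDP was installed into a namespace named edp; adjust to your installation), the core operators and their custom resource definitions can be inspected with kubectl:

      kubectl get deployments -n edp          # the core operators and the EDP Portal should be running here
      kubectl get crd | grep edp.epam.com     # custom resource definitions registered by the operators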

    \ No newline at end of file diff --git a/developer-guide/local-development/index.html b/developer-guide/local-development/index.html index d2fd36064..8bd780d51 100644 --- a/developer-guide/local-development/index.html +++ b/developer-guide/local-development/index.html @@ -1,4 +1,4 @@ - Operator Development - EPAM Delivery Platform

    + Operator Development - EPAM Delivery Platform      

    Operator Development⚓︎

    This page is intended for developers and shares the details of how to set up a local environment and start coding in Go for the EPAM Delivery Platform.

    Prerequisites⚓︎

    • Git is installed;
    • One of our repositories where you would like to contribute is cloned locally;
    • Local Kubernetes cluster (Kind is recommended) is installed;
    • Helm is installed;
    • Any IDE (GoLand is used here as an example) is installed;
    • A stable version of Go is installed.

    Note

    Make sure the GOPATH and GOROOT environment variables are added to PATH.
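    A quick way to verify the Go environment (typically it is the Go binary directories that end up on PATH):

      go env GOPATH GOROOT                      # print the current values
      export PATH="$PATH:$(go env GOPATH)/bin"  # make locally installed Go tools available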

    Environment Setup⚓︎

    Set up your environment by following the steps below.

    Set Up Your IDE⚓︎

    We recommend using GoLand and enabling the Kubernetes plugin. Before installing plugins, make sure to save your work because the IDE may require a restart.

    Set Up Your Operator⚓︎

    To set up the cloned operator, follow the three steps below:

    1. Configure the Go Build option. Open the folder in GoLand, click the Add Configuration button, and select the Go Build option:

      Add configuration

    2. Fill in the variables in the Configuration tab:

      • In the Files field, indicate the path to the main.go file;
      • In the Working directory field, indicate the path to the operator;
      • In the Environment field, specify the namespace to watch by setting the WATCH_NAMESPACE variable. It should equal default, but it can be any other namespace if required by the cluster specifications;
      • In the Environment field, also specify the platform type by setting the PLATFORM_TYPE variable. It should equal either kubernetes or openshift. A terminal equivalent of these variables is sketched after the screenshot below.

      Build config
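      The same WATCH_NAMESPACE and PLATFORM_TYPE variables can also be exported in a terminal if you prefer running the operator outside the IDE. A minimal sketch, assuming the entry point is the main.go file referenced in the Files field (adjust the path to the actual repository layout):

        export WATCH_NAMESPACE=default
        export PLATFORM_TYPE=kubernetes
        go run ./main.go   # adjust to the repository's actual entry point, e.g. a cmd/ subdirectory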

    3. Check cluster connectivity and variables. Local development implies working within a local Kubernetes cluster. Kind (Kubernetes in Docker) is recommended, so set up this or another local environment before running the code, for example:
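      A minimal sketch for creating a local Kind cluster and verifying connectivity (the cluster name edp-dev is arbitrary):

        kind create cluster --name edp-dev
        kubectl cluster-info --context kind-edp-dev   # confirm the API server is reachable
        kubectl config current-context                # make sure the local cluster is the active context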

    Pre-commit Activities⚓︎

    Before making a commit and sending a pull request, take precautionary measures to avoid breaking other parts of the code.

    Testing and Linting⚓︎

    Testing and linting must be performed before every single commit, with no exceptions. The instructions for the commands below are described here.

    It is mandatory to run tests and linting to make sure the code passes the tests and meets the acceptance criteria. Most operators are covered by tests, so just run them by issuing the "make test" and "make lint" commands:

      make test
     

    The command "make test" should give the output similar to the following:

    Tests directory for one of the operators
    "make test" command

      make lint
     

    The command "make lint" should give the output similar to the following:

    Tests directory for one of the operators
    "make lint" command

    Observe Auto-Generated Docs, API and Manifests⚓︎

    The commands below are especially important when making changes to the API. The code is not acceptable if these commands fail.

    • Generate documentation in Markdown (.md) format so that developers can read it:

      make api-docs
       

      The command "make api-docs" should give the output similar to the following:

    "make api-docs" command with the file contents
    "make api-docs" command with the file contents