diff --git a/ci/vale/dictionary.txt b/ci/vale/dictionary.txt index f442e4e93d1..b1791c275be 100644 --- a/ci/vale/dictionary.txt +++ b/ci/vale/dictionary.txt @@ -2897,6 +2897,7 @@ wordcount wordfence wordlist wordlists +Wordpress wordpress worker1 worker2 diff --git a/docs/guides/databases/postgresql/managed-postgresql-databases-on-akamai-cloud-with-terraform/index.md b/docs/guides/databases/postgresql/managed-postgresql-databases-on-akamai-cloud-with-terraform/index.md new file mode 100644 index 00000000000..2bfee4630ff --- /dev/null +++ b/docs/guides/databases/postgresql/managed-postgresql-databases-on-akamai-cloud-with-terraform/index.md @@ -0,0 +1,695 @@ +--- +slug: managed-postgresql-databases-on-akamai-cloud-with-terraform +title: "Managed PostgreSQL Databases on Akamai Cloud with Terraform" +description: "Learn how to use Terraform to provision a managed PostgreSQL database cluster on Akamai Cloud." +authors: ["Peter Sari"] +contributors: ["Peter Sari", "Nathan Melehan"] +published: 2025-05-02 +keywords: ['managed database','database managed services','managed postgresql','managed postgres','managed postgres database','terraform postgresql provider​','terraform postgresql​','postgresql terraform provider​','terraform postgres provider','postgres terraform provider','terraform postgres','terraform database','postgresql_database terraform','terraform create postgres database','infrastructure as code','iac'] +license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' +external_resources: +- '[Terraform documentation](https://developer.hashicorp.com/terraform)' +- '[Terraform GitHub](https://github.com/hashicorp/terraform)' +- '[PostgreSQL documentation](https://www.postgresql.org/docs/)' +--- + +This guide demonstrates how to use Terraform to set up a PostgreSQL cluster with the [Managed Database](https://www.linode.com/products/databases/?utm_medium=website&utm_source=akamai) service on Akamai Cloud. [Terraform](https://developer.hashicorp.com/terraform) is an infrastructure as code (IaC) tool that allows you to automate the deployment of cloud infrastructure. [PostgreSQL](https://www.postgresql.org/) is a widely-adopted, open source database solution used by many DevOps engineers and supported by a large range of operating systems. + +Akamai's Managed Database is a Relational Database Management System as a Service. Akamai manages both the underlying compute instances and the relational database management system software. Akamai also updates the software and maintains the health of these systems. Using Managed Databases, you can instantiate managed clusters of MySQL and PgSQL with a range of supported versions. + +Managed Database clusters on Akamai Cloud also support multiple databases. This guide shows how to use Terraform to deploy individual databases on a cluster using two [Terraform providers](https://developer.hashicorp.com/terraform/language/providers), where a modular configuration handles the database deployments. + +## Before You Begin + +To follow this tutorial, perform these steps first: + +- [Install Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) on your workstation + +- [Create a personal access token](https://techdocs.akamai.com/linode-api/reference/post-personal-access-token) for the Linode API that has permission to create databases + +- Install a PostgreSQL client on your workstation. This is used to validate the installation of the database cluster. 
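+
+    For example, on a Debian or Ubuntu workstation you can typically install just the client tools (which include `psql`) with the command below; the package name may differ on other distributions:
+
+    ```command
+    sudo apt install postgresql-client
+    ```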
+ + The steps in this guide use the [`psql` command](https://www.postgresql.org/docs/current/app-psql.html) in the example commands shown. Visit the PostgreSQL [Downloads](https://www.postgresql.org/download/) page for installation instructions. + + {{< note >}} + A [list of other clients on wiki.postgresql.org](https://wiki.postgresql.org/wiki/PostgreSQL_Clients) is available, but the instructions in this guide are intended for `psql`. + {{< /note >}} + +## Terraform Project File Structure + +The project in this guide follows the directory structure shown below: + +```output +. +├── main.tf +├── modules +│ └── databases +│ ├── main.tf +│ ├── outputs.tf +│ └── variables.tf +├── outputs.tf +├── providers.tf +├── terraform.tfvars +└── variables.tf +``` + +In this structure, the root Terraform files (`main.tf`, `outputs.tf`, `terraform.tfvars`, `variables.tf`) build the database cluster infrastructure. The `modules/databases/` module handles database creation within the cluster. + +{{< note >}} +While it is not within the scope of this guide, you can also add a [null resource](https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource) and a [local-exec provisioner](https://developer.hashicorp.com/terraform/language/resources/provisioners/local-exec) that uses `psql` to create tables within the module. +{{< /note >}} + +## Database Cluster Options + +There are several options that can be configured when provisioning a new Managed Database cluster, including: + +- The size of the cluster you would like to deploy + +- The instance types that underpin the databases + +- The database software and supported version + +- The maintenance window schedule. Managed Databases require periodic maintenance by Akamai, and the window for this can be configured. + +- The region where the cluster should be located + +In a later section, each of these choices is encoded in your Terraform variables file. This file references unique label/strings correspond to the region, instance type, etc that you decide on. + +### Regions + +Document the region in which you would like to deploy. A list of regions that support Managed Databases is returned by this API call: + +```command +curl -s https://api.linode.com/v4/regions | jq '.data[] | select(.capabilities[] | contains("Managed Databases")) | .id' +``` + +The output resembles: + +```output +"ap-west" +"ca-central" +"ap-southeast" +"us-iad" +"us-ord" +"fr-par" +... +``` + +The ID is used as the value for the variable region in the Terraform configuration. See our [Region Availability](https://www.linode.com/global-infrastructure/availability/) page for a full list of compute region IDs. Availability for Managed Database deployment may vary. + +### Instance Types + +Akamai offers a range of different instance types, but a subset of these can be used with Managed Database Clusters. When deciding on an instance type, you must know what the CPU, memory and storage requirements are for your initial database deployments. + +As a reference point, a single instance's capacity is the storage capacity of your databases in aggregate. 
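+
+If the databases you plan to host already exist on another server, one way to estimate the disk space they need is to check each database's current size from a `psql` session on that server. The query below is a generic PostgreSQL example and is not specific to Managed Databases:
+
+```command {title="Database prompt"}
+SELECT pg_size_pretty(pg_database_size(current_database()));
+```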
A list of instances compatible with Managed Databases is returned by this API call: + +```command +curl -s https://api.linode.com/v4/databases/types | jq '.data[] | { "id" , "label" , "disk" , "vcpus" }' +``` + +The output resembles: + +```output +{ + "id": "g6-nanode-1", + "label": "DBaaS - Nanode 1GB", + "disk": 9216, + "vcpus": 1 +} +{ + "id": "g6-standard-1", + "label": "DBaaS - Linode 2GB", + "disk": 30720, + "vcpus": 1 +} +{ + "id": "g6-standard-2", + "label": "DBaaS - Linode 4GB", + "disk": 59392, + "vcpus": 2 +} +... +``` + +This command outputs the `id`, which is later referenced in `db_instance_type` variable in your Terraform configuration, along with some information related to disk space (in MB) and vCPUs. The amount of RAM per instance is indicated in the description. + +{{< note title="Changing the Instance Type After Provisioning" >}} +The database cluster instance type can be changed in the future by altering the instance type variable in your Terraform configuration. Note that resizing is implemented by creating entirely new cluster and migrating the database to the new cluster. As a result, resizes are not instantaneous changes. +{{< /note >}} + +### Relational Database Management System (RDBMS) Software and Version + +When creating the cluster, the server software (PostgreSQL or MySQL at the time of this writing) and version need to be specified. For this guide, PostgreSQL v17 is used. The list of currently supported RDBMS with associated versions is returned by this API call: + +```command +curl -s https://api.linode.com/v4/databases/engines | jq '.data[].id' +``` + +```output +"mysql/8" +"postgresql/13" +"postgresql/14" +"postgresql/15" +"postgresql/16" +"postgresql/17" +``` + +The value `postgresql/17` is later referenced by the variable `rdbms_ver` in the Terraform configuration. + +### Cluster Size + +The size of the cluster (the number of database instances, or nodes, in the cluster) determines its read capacity and whether it remains available when a node fails. Clusters can be built as a single node, for smaller, less critical applications, or they can be provisioned with 2 or 3 nodes, for those that require high availability or higher read capacity. + +Your cluster size can be changed at any time after the cluster is first provisioned. In the Terraform configuration demonstrated later, this is done by configuring the `cluster_nodes` variable. + +### Maintenance Windows + +Because the cluster nodes and RDBMS software are managed and kept up to date by Akamai, it is important to provide a viable maintenance window. This window is specified in the time zone of the cluster's region. + +In the Terraform configuration, the `update_hour` and `update_day` variables control this window. Both variables are integer values: + +- `update_day` ranges from 1 (Sunday) to 7 (Saturday) + +- `update_hour` ranges from 0 (midnight) to 23 (11PM). + +These variables can also be modified after the cluster is first created. + +## Configure Terraform Providers + +A “provider” in Terraform maps the Hashicorp Configuration Language to an API, like the [Linode API](https://techdocs.akamai.com/linode-api/reference/api), so it can communicate with various software and cloud providers. In order to configure the database cluster with Terraform, you first need to declare which Terraform [providers](https://developer.hashicorp.com/terraform/language/providers) are used in the configuration and enter some required information. + +1. 
On your workstation, create a directory named `postgres-terraform`. All Terraform configuration files in this guide are stored under this directory: + + ```command + mkdir postgres-terraform + ``` + +1. Create a file in your `postgres-terraform/` directory named `providers.tf`, and paste in the following snippet: + + ```file {title="postgres-terraform/providers.tf"} + terraform { + required_providers { + linode = { + source = "linode/linode" + version = "2.35.1" + } + postgresql = { + source = "a0s/postgresql" + version = "1.14.0-jumphost-1" + } + } + } + + provider "linode" { + token = var.linode_token + } + + provider "postgresql" { + database = "defaultdb" + host = linode_database_postgresql_v2.pgsql-cluster-1.host_primary + port = linode_database_postgresql_v2.pgsql-cluster-1.port + username = linode_database_postgresql_v2.pgsql-cluster-1.root_username + password = linode_database_postgresql_v2.pgsql-cluster-1.root_password + sslmode = "require" + } + ``` + +The Terraform file above uses two providers: + +- The Linode provider (`linode/linode`) talks to the Linode API to build the infrastructure. At the time of writing, version 2.35.1 of the Linode provider is the latest. + + {{< note title="Latest Version" >}} + Check the [Terraform Registry](https://registry.terraform.io/providers/linode/linode/latest/docs) for the latest versions prior to deploying. If using a different version, please be aware of proper version syntax. + {{< /note >}} + +- A PostgreSQL provider talks to the PostgreSQL management endpoint. There are multiple PostgreSQL providers, and this guide uses the [`a0s/postgresql` provider](https://registry.terraform.io/providers/a0s/postgresql/latest/docs). + + {{< note title="MySQL Providers" >}} + MySQL providers are also available if you want to provision a MySQL cluster. + {{< /note >}} + +The providers need to be configured to work with your specific environment: + +- The Linode provider requires a personal access token (PAT) with **read** and **write** permissions for the Managed Databases service. This helps ensure proper user authentication to your Akamai Cloud account. + +- The PostgreSQL provider requires: + + - A collection of information that is derived from the cluster deployment: the username, password, hostname, and TCP port to use when connecting. These values are attributes of the `linode_database_postgresql_v2.pgsql-cluster-1` resource, defined later in a file called `main.tf`. + + - The name of the database you wish to connect to. This is assigned the value `defaultdb`. This entry is important as the `psql` interface requires an existing database in the connection string, even when creating a new database. `defaultdb` is created for you during the creation of the database cluster. + +## Configure the Managed PostgreSQL Cluster with Terraform + +1. 
Create a file named `main.tf` in your `postgres-terraform/` directory and paste in the following Terraform code snippet: + + ```file {title="postgres-terraform/main.tf"} + resource "linode_database_postgresql_v2" "pgsql-cluster-1" { + label = var.db_clustername + engine_id = var.rdbms_ver + region = var.region + type = var.db_instance_type + allow_list = ["0.0.0.0/0"] + cluster_size = var.cluster_nodes + updates = { + duration = 4 + frequency = "weekly" + hour_of_day = var.update_hour + day_of_week = var.update_day } + lifecycle { + ignore_changes = [host_primary] + } + timeouts { + create = "30m" + update = "30m" + delete = "30m" + } + } + + module "database1" { + source = "./modules/databases" + database = "defaultdb" + db_host = linode_database_postgresql_v2.pgsql-cluster-1.host_primary + db_port = linode_database_postgresql_v2.pgsql-cluster-1.port + db_user = linode_database_postgresql_v2.pgsql-cluster-1.root_username + db_password = linode_database_postgresql_v2.pgsql-cluster-1.root_password + db_list = var.db_list1 + cluster_id = linode_database_postgresql_v2.pgsql-cluster-1.id + depends_on = [linode_database_postgresql_v2.pgsql-cluster-1] + providers = { + postgresql = postgresql + } + } + + module "database2" { + source = "./modules/databases" + database = "defaultdb" + db_host = linode_database_postgresql_v2.pgsql-cluster-1.host_primary + db_port = linode_database_postgresql_v2.pgsql-cluster-1.port + db_user = linode_database_postgresql_v2.pgsql-cluster-1.root_username + db_password = linode_database_postgresql_v2.pgsql-cluster-1.root_password + db_list = var.db_list2 + cluster_id = linode_database_postgresql_v2.pgsql-cluster-1.id + depends_on = [linode_database_postgresql_v2.pgsql-cluster-1] + providers = { + postgresql = postgresql + } + } + + resource "local_file" "db_certificate" { + filename = "${linode_database_postgresql_v2.pgsql-cluster-1.id}.crt" + content = linode_database_postgresql_v2.pgsql-cluster-1.ca_cert + } + ``` + + This represents the primary logic of the [root module](https://developer.hashicorp.com/terraform/language/modules#the-root-module) in your Terraform configuration. The following Terraform [resources](https://developer.hashicorp.com/terraform/language/resources) and [modules](https://developer.hashicorp.com/terraform/language/modules) are declared: + + - A `linode_database_postgresql_v2` resource named `pgsql-cluster-1`. This is the Managed Database cluster that is provisioned. + + Several parameters, like `engine_id`, refer to variables that are later defined in the `variables.tf` and `terraform.tfvars` files. + + The `allow_list` parameter defines which IP addresses can access the cluster. In this demonstration, all addresses are permitted, but you should restrict this for your cluster. Assign this to a list of address ranges in [CIDR notation](https://www.ipaddressguide.com/cidr) that include the applications that need to access the databases, as well as your Terraform management infrastructure (e.g. the workstation that you have Terraform installed on). + + The `lifecycle` and `timeouts` parameters are assigned values that are compatible with the Linode and database providers. Keep these same values for your deployments. + + - The `database1` and `database2` module declarations both reference files in the `modules/database/` directory, which are created in a later step. These represent two separate database resources within your cluster. 
+ + Several parameters of these modules, like `db_host`, are assigned values provided by the `pgsql-cluster-1` cluster resource, so they rely on the infrastructure being created prior to managing the database layer. The `depends_on` parameter ensures that these database modules are invoked after the cluster resource is created. As before, the `defaultdb` database name is provided for the initial connection to the cluster. + + - A `local_file` resource named `db_certificate` is assigned the `ca_cert` value from the database cluster. This file is created in the root of the Terraform project, and it is used by your applications to securely connect to the database platform. + +1. Create a file named `variables.tf` in your `postgres-terraform/` directory and paste in the following snippet: + + ```file {title="postgres-terraform/variables.tf"} + variable "linode_token" { + description = "Linode API Personal Access Token" + sensitive = true + } + + variable "db_list1" { + description = "Databases to exist on cluster. More than 1 DB can be specified here." + type = list(string) + default = ["database-1"] + } + + variable "db_list2" { + description = "Databases to exist on cluster. More than 1 DB can be specified here." + type = list(string) + default = ["database-2"] + } + + variable "db_clustername" { + description = "Label for Akamai Cloud system to ID Cluster. This must be unique in your environment. Must be between 3-32 chars, no spec chars except single hyphens" + type = string + default = "My-PgSQl-Cluster" + } + + variable "rdbms_ver"{ + description = "Type and Version of RDBMS. Pull the current supported list via API" + type = string + default = "postgresql/17" + } + + variable "region" { + description = "Region for DB Cluster" + type = string + default = "us-ord" + } + + variable "db_instance_type" { + description = "Linode type of DB Cluster Nodes - Storage, RAM and Compute of a single node equals rough DB capacity. Pull the current list from Linode API" + type = string + default = "g6-dedicated-4" + } + + variable "cluster_nodes" { + description = "Number of Database Cluster Nodes must equal 1, 2 or 3" + type = number + default = 2 + } + + variable "update_hour" { + description = "Hour to apply RDBMS updates midnight through 11pm = 0-23" + type = number + default = 22 + } + + variable "update_day" { + description = "Day to apply RDBMS updates Sun-Sat= 1-7" + type = number + default = 7 + } + ``` + + The variable definitions in this file are referenced in your resource and module declarations to build the database cluster and deploy two databases. The definitions include descriptions to show formatting and describe acceptable values, and a default value. The `rdbms_ver`, `region`, `db_instance_type`, `cluster_nodes`, `update_hour`, and `update_day` variables correspond to the values chosen in the [Preparing to Deploy](#preparing-to-deploy) section. + +1. 
Create a file named `outputs.tf` in your `postgres-terraform/` directory and paste in the following snippet: + + ```file {title="postgres-terraform/outputs.tf"} + output "database_password" { + value = linode_database_postgresql_v2.pgsql-cluster-1.root_password + sensitive = true + description = "The password associated to the admin username" + } + + output "database_username" { + value = linode_database_postgresql_v2.pgsql-cluster-1.root_username + sensitive = true + description = "The admin username" + } + + output "database_fqdn" { + value = linode_database_postgresql_v2.pgsql-cluster-1.host_primary + description = "The fqdn you can use to access the DB" + } + + output "db_certificate" { + value = linode_database_postgresql_v2.pgsql-cluster-1.ca_cert + sensitive = true + description = "The certificate used for DB Connections" + } + + output "database_port" { + value = linode_database_postgresql_v2.pgsql-cluster-1.port + description = "The TCP Port used by the database" + } + + output "database_id" { + description = "The cluster ID used by Akamai" + value = linode_database_postgresql_v2.pgsql-cluster-1.id + } + + output "database1_created_databases" { + value = module.database1.databases + description = "List of databases created by database1 module" + } + + output "database2_created_databases" { + value = module.database2.databases + description = "List of databases created by database2 module" + } + ``` + + This is a set of [Terraform output](https://developer.hashicorp.com/terraform/language/values/outputs) definitions. Terraform outputs print their values to the command line when the Terraform configuration is applied. The values from these outputs are later used with the `psql` client to connect to the database cluster. + +1. Inside your `postgres-terraform/` directory, create a `modules/` directory and a `databases/` subdirectory underneath it. + + ```command + mkdir -p postgres-terraform/modules/databases/ + ``` + +1. Create a file named `main.tf` in your `postgres-terraform/modules/databases/` directory and paste in the following snippet: + + ```file {title="postgres-terraform/modules/databases/main.tf"} + terraform { + required_providers { + postgresql = { + source = "a0s/postgresql" + version = "1.14.0-jumphost-1" + } + } + } + + resource "postgresql_database" "databases" { + for_each = toset(var.db_list) + name = each.value + owner = var.db_user + depends_on = [var.cluster_id] + lifecycle { + ignore_changes = [owner] + } + } + ``` + + This is a [child module](https://developer.hashicorp.com/terraform/language/modules#child-modules) of the root module. It represents a reusable set of instructions referenced by the root module to create databases in the cluster in a repeatable fashion. + + This module uses the `postgresql_database` resource from the `a0s/postgres` provider to create our Managed Databases. + +1. 
Create a file named `variables.tf` in your `postgres-terraform/modules/databases/` directory and paste in the following snippet: + + ```file {title="postgres-terraform/modules/databases/variables.tf"} + variable "db_host" { + description = "host connection string" + type = string + } + + variable "db_port" { + description = "cluster port" + type = number + } + + variable "db_user" { + description = "admin cluster user" + type = string + sensitive = true + } + + variable "db_password" { + description = "admin user pass" + type = string + sensitive = true + } + + variable "db_list" { + description = "list of dbs" + type = list(string) + } + + variable "cluster_id" { + description = "id of cluster" + type = string + } + + variable "database" { + description = "db to connect to" + type = string + } + ``` + + These variables are assigned values in the `module` declarations of the `main.tf` file in the root module (step 1 of this section). + +1. Create a file named `outputs.tf` in your `postgres-terraform/modules/databases/` directory and paste in the following snippet: + + ```file {title="postgres-terraform/modules/databases/outputs.tf"} + output "databases" { + value = keys(postgresql_database.databases) + description = "List of created databases" + } + ``` + + The `databases` output is a list of the names of the databases created, and it is referenced in the `database1_created_databases` and `database2_created_databases` outputs of the root module. + +## Provision the Managed PostgreSQL Cluster with Terraform + +The Terraform configuration is now complete and ready to be used to create infrastructure: + +1. While inside the root `postgres-terraform/` project directory, run Terraform's `init` command. + + ```command + cd postgres-terraform/ + terraform init + ``` + + This command downloads the Linode and PostgreSQL providers to the local execution environment and ensure you have everything in place to build your project: + + ```output + Initializing the backend... + Initializing modules... + Initializing provider plugins... + - Finding a0s/postgresql versions matching "1.14.0-jumphost-1"... + - Finding linode/linode versions matching "2.35.1"... + - Installing linode/linode v2.35.1... + - Installed linode/linode v2.35.1 (signed by a HashiCorp partner, key ID F4E6BBD0EA4FE463) + - Installing a0s/postgresql v1.14.0-jumphost-1... + - Installed a0s/postgresql v1.14.0-jumphost-1 (self-signed, key ID 5A0BE9D2989FD2A2) + Terraform has been successfully initialized! + ``` + +1. Run Terraform's `plan` command. This performs a test run of your configuration and ensures it is syntactically correct. This command also asks for your Linode API token, but it does not actually use it to create infrastructure: + + ```command + terraform plan + ``` + + A summary of proposed changes is displayed, along with any warnings. The configuration from the previous section results in a warning, but this is expected behavior: + + ```output + │ Warning: Redundant ignore_changes element + │on main.tf line 1, in resource "linode_database_postgresql_v2" "pgsql-cluster-1": + │1: resource "linode_database_postgresql_v2" "pgsql-cluster-1" { + │ Adding an attribute name to ignore_changes tells Terraform to ignore future changes to the argument in configuration after the object has been created, retaining the value originally configured. + │ The attribute host_primary is decided by the provider alone and therefore there can be no configured value to compare with. Including this attribute in ignore_changes has no effect. 
Remove the attribute from + │ ignore_changes to quiet this warning. + ``` + + In order to combine database management and infrastructure management in the configuration, lifecycle policies must be used to ignore changes to the `host_primary` attribute. The PostgreSQL provider generates a warning for this. Since this value doesn't actually change, it is safe to ignore future changes to the value. + +1. If there are no errors (aside from the warning that was described), you can run Terraform's `apply` command. + + ```command + terraform apply + ``` + + You should be prompted by the command for your personal access token. + +1. Terraform shows a summary of the proposed changes and asks if you would like to proceed. Enter `yes` for this prompt. + +1. The build process begins, and can take time to complete. Once complete, Terraform provides a summary of successful actions and a list of non-sensitive outputs, like below: + + ```output + Plan: 4 to add, 0 to change, 0 to destroy. + Changes to Outputs: + + database1_created_databases = ["database-1", ] + database2_created_databases = ["database-2", ] + database_fqdn = (known after apply) + database_id = (known after apply) + database_password = (sensitive value) + database_port = (known after apply) + database_username = (sensitive value) + db_certificate = (sensitive value) ╷ + + │ Warning: Redundant ignore_changes element + + linode_database_postgresql_v2.pgsql-cluster-1: Creating... + linode_database_postgresql_v2.pgsql-cluster-1: Still creating... [7m30s elapsed] linode_database_postgresql_v2.pgsql-cluster-1: Creation complete after 7m35s [id=258475] local_file.db_certificate: Creating... module.database1.postgresql_database.databases["database-1"]: Creating... module.database2.postgresql_database.databases["database-2"]: Creating... local_file.db_certificate: Creation complete after 0s [id=9ff519506c470ac8707472757a0880365d7bde03] module.database2.postgresql_database.databases["database-2"]: Creation complete after 1s [id=database-2] + module.database1.postgresql_database.databases["database-1"]: Creation complete after 1s [id=database-1] + Apply complete! Resources: 4 added, 0 changed, 0 destroyed. + Outputs: + database1_created_databases = [ "database-1", ] + database2_created_databases = [ "database-2", ] + database_fqdn = "a258475-akamai-prod-3474339-default.g2a.akamaidb.net" + database_id = "258475" + database_password = + database_port = 26010 + database_username = + db_certificate = + ``` + +## Connect to the Managed PostgreSQL Cluster with psql + +You should now have a fully functional PostgresSQL cluster running on Akamai Cloud's Managed Database service with two databases. You can now test access to the cluster using the `psql` command: + +1. Gather the information needed for the connection from Terraform's outputs. This information was displayed after the `terraform apply` command completed, but you can also run Terraform's `output` command to retrieve it: + + ```command + terraform output + ``` + +1. The output from this command hides sensitive information by default, like the `database_username` and `database_password` outputs. Use the `-raw` flag to reveal the password: + + ```command + terraform output -raw database_password + ``` + + Save this password as it is used later by the `psql` command. + +1. 
Run the following commands to create environment variables in your terminal for your database FQDN, port, and username: + + ```command + psqlhost=$(terraform output -raw database_fqdn) + psqlport=$(terraform output -raw database_port) + psqluser=$(terraform output -raw database_username) + ``` + +1. Use the `psql` command to connect to your new cluster and verify the databases exist: + + ```command + psql -h$psqlhost -ddefaultdb -U$psqluser -p$psqlport + ``` + + You are prompted for your password. Copy and paste the output from the previous `terraform output -raw database_password` command at this prompt. A successful connection displays output like the following: + + ```output + psql (14.17 (Ubuntu 14.17-0ubuntu0.22.04.1), server 17.4) + WARNING: psql major version 14, server major version 17. + Some psql features might not work. + SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off) + Type "help" for help. + + defaultdb=> + ``` + +1. At the database prompt, enter the `\l` command: + + ```command {title="Database prompt"} + defaultdb=> \l + ``` + + The two databases from the Terraform configuration, `database-1` and `database-2`, are listed in the output: + + ```output + List of databases + + Name | Owner | Encoding | Collate | Ctype | Access privileges + + ------------+----------+----------+-------------+-------------+----------------------- + + _aiven | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =T/postgres + + + database-1 | akmadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | + + database-2 | akmadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | + + defaultdb | akmadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | + + template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres + + template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres + | | | | | postgres=CTc/postgres (6 rows) + ``` + +1. Enter the `\q` command to exit the database prompt: + + ```command {title="Database prompt"} + defaultdb=> \q + ``` + +## Destroy the Managed PostgreSQL Cluster + +If you would like to destroy the database cluster and the underlying compute instances, run Terraform's `destroy` command on your workstation while inside your `postgres-terraform/` directory: + +```command +terraform destroy +``` + +Once destroyed, new costs stop accruing for the cloud infrastructure that was previously provisioned. + diff --git a/docs/guides/kubernetes/deploy-llm-for-ai-inferencing-on-apl/index.md b/docs/guides/kubernetes/deploy-llm-for-ai-inferencing-on-apl/index.md index f01069c6f09..eaec0bc54f0 100644 --- a/docs/guides/kubernetes/deploy-llm-for-ai-inferencing-on-apl/index.md +++ b/docs/guides/kubernetes/deploy-llm-for-ai-inferencing-on-apl/index.md @@ -5,7 +5,7 @@ description: "This guide includes steps and guidance for deploying a large langu authors: ["Akamai"] contributors: ["Akamai"] published: 2025-03-25 -modified: 2025-04-17 +modified: 2025-04-25 keywords: ['ai','ai inference','ai inferencing','llm','large language model','app platform','lke','linode kubernetes engine','llama 3','kserve','istio','knative'] license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' external_resources: @@ -102,7 +102,7 @@ Sign into the App Platform web UI using the `platform-admin` account, or another 1. Click **Create Team**. -1. Provide a **Name** for the Team. Keep all other default values, and click **Submit**. This guide uses the Team name `demo`. +1. Provide a **Name** for the Team. 
Keep all other default values, and click **Create Team**. This guide uses the Team name `demo`. ### Install the NVIDIA GPU Operator @@ -170,11 +170,7 @@ A [Workload](https://apl-docs.net/docs/for-devs/console/workloads) is a self-ser 1. Continue with the rest of the default values, and click **Submit**. -After the Workload is submitted, App Platform creates an Argo CD application to install the `kserve-crd` Helm chart. Wait for the **Status** of the Workload to become healthy as represented by a green check mark. This may take a few minutes. - -![Workload Status](APL-LLM-Workloads.jpg) - -Click on the ArgoCD **Application** link once the Workload is ready. You should be brought to the Argo CD screen in a separate window: +After the Workload is submitted, App Platform creates an Argo CD application to install the `kserve-crd` Helm chart. Wait for the **Status** of the Workload to become ready, and click on the ArgoCD **Application** link. You should be brought to the Argo CD screen in a separate window: ![Argo CD](APL-LLM-ArgoCDScreen.jpg) @@ -386,11 +382,9 @@ Wait for the Workload to be ready again, and proceed to the following steps for 1. Click **Create Service**. -1. In the **Name** dropdown list, select the `llama3-model-predictor` service. +1. In the **Service Name** dropdown list, select the `llama3-model-predictor` service. -1. Under **Exposure (ingress)**, select **External**. - -1. Click **Submit**. +1. Click **Create Service**. Once the Service is ready, copy the URL for the `llama3-model-predictor` service, and add it to your clipboard. @@ -493,11 +487,9 @@ Follow the steps below to follow the second option and add the Kyverno security 1. Click **Create Service**. -1. In the **Name** dropdown menu, select the `llama3-ui` service. - -1. Under **Exposure (ingress)**, select **External**. +1. In the **Service Name** dropdown menu, select the `llama3-ui` service. -1. Click **Submit**. +1. Click **Create Service**. ## Access the Open Web User Interface diff --git a/docs/guides/kubernetes/deploy-rag-pipeline-and-chatbot-on-apl/index.md b/docs/guides/kubernetes/deploy-rag-pipeline-and-chatbot-on-apl/index.md index d8349c08a48..d39d8f7927e 100644 --- a/docs/guides/kubernetes/deploy-rag-pipeline-and-chatbot-on-apl/index.md +++ b/docs/guides/kubernetes/deploy-rag-pipeline-and-chatbot-on-apl/index.md @@ -5,7 +5,7 @@ description: "This guide expands on a previously built LLM and AI inferencing ar authors: ["Akamai"] contributors: ["Akamai"] published: 2025-03-25 -modified: 2025-04-17 +modified: 2025-04-25 keywords: ['ai','ai inference','ai inferencing','llm','large language model','app platform','lke','linode kubernetes engine','rag pipeline','retrieval augmented generation','open webui','kubeflow'] license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' external_resources: @@ -290,11 +290,9 @@ Create a [**Network Policy**](https://apl-docs.net/docs/for-ops/console/netpols) 1. Click **Create Service**. -1. In the **Name** dropdown menu, select the `ml-pipeline-ui` service. +1. In the **Service Name** dropdown menu, select the `ml-pipeline-ui` service. -1. Under **Exposure**, select **External**. - -1. Click **Submit**. +1. Click **Create Service**. Kubeflow Pipelines is now ready to be used by members of the Team **demo**. @@ -633,13 +631,9 @@ Update the Kyverno **Policy** `open-webui-policy.yaml` created in the previous t 1. Click **Create Service**. -1. In the **Name** dropdown menu, select the `linode-docs-pipeline` service. - -1. 
In the **Port** dropdown, select port `9099`. - -1. Under **Exposure**, select **External**. +1. In the **Service Name** dropdown menu, select the `linode-docs-pipeline` service. -1. Click **Submit**. +1. Click **Create Service**. 1. Once submitted, copy the URL of the `linode-docs-pipeline` service to your clipboard. @@ -687,11 +681,9 @@ Update the Kyverno **Policy** `open-webui-policy.yaml` created in the previous t 1. Click **Create Service**. -1. In the **Name** dropdown menu, select the `linode-docs-chatbot` service. - -1. Under **Exposure**, select **External**. +1. In the **Service Name** dropdown list, select the `linode-docs-chatbot` service. -1. Click **Submit**. +1. Click **Create Service**. ## Access the Open Web User Interface diff --git a/docs/guides/kubernetes/inter-service-communication-with-rabbitmq-and-apl/index.md b/docs/guides/kubernetes/inter-service-communication-with-rabbitmq-and-apl/index.md index 48674949692..86deb42f97e 100644 --- a/docs/guides/kubernetes/inter-service-communication-with-rabbitmq-and-apl/index.md +++ b/docs/guides/kubernetes/inter-service-communication-with-rabbitmq-and-apl/index.md @@ -5,6 +5,7 @@ description: "This guide shows how to deploy a RabbitMQ message broker architect authors: ["Akamai"] contributors: ["Akamai"] published: 2025-03-20 +modified: 2025-04-25 keywords: ['app platform','lke','linode kubernetes engine','rabbitmq','microservice','message broker'] license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' external_resources: @@ -108,7 +109,7 @@ When working in the context of an admin-level Team, users can create and access 1. Click **Create Team**. -1. Provide a **Name** for the Team. Keep all other default values, and click **Submit**. This guide uses the Team name `demo`. +1. Provide a **Name** for the Team. Keep all other default values, and click **Create Team**. This guide uses the Team name `demo`. ### Create a RabbitMQ Cluster with Workloads @@ -136,27 +137,37 @@ This guide uses an example Python chat app to send messages to all connected cli The example app in this guide is not meant for production workloads, and steps may vary depending on the app you are using. +### Add the Code Repository for the Example App + 1. Select **view** > **team** and **team** > **demo** in the top bar. -1. Select **Builds**, and click **Create Build**. +1. Select **Code Repositories**, and click **Add Code Repository**. -1. Provide a name for the Build. This is the same name used for the image stored in the private Harbor registry of your Team. This guide uses the Build name `rmq-example-app` with the tag `latest`. +1. Provide the name `apl-examples` for the Code Repository. -1. Select the **Mode** `Buildpacks`. +1. Select *GitHub* as the **Git Service**. -1. To use the example Python messaging app, provide the following GitHub repository URL: +1. Under **Repository URL**, add the following GitHub URL: ```command https://github.com/linode/apl-examples.git ``` -1. Set the **Buildpacks** path to `rabbitmq-python`. +1. Click **Add Code Repository**. -1. Click **Submit**. The build may take a few minutes to be ready. +### Create a Container Image - {{< note title="Make sure auto-scaling is enabled on your cluster" >}} - When a build is created, each task in the pipeline runs in a pod, which requires a certain amount of CPU and memory resources. To ensure the sufficient number of resources are available, it is recommended that auto-scaling for your LKE cluster is enabled prior to creating the build. - {{< /note >}} +1. 
Select **Container Images** from the menu. + +1. Select the *BuildPacks* build task. + +1. In the **Repository** dropdown list, select `apl-examples`. + +1. In the **Reference** dropdown list, select `main`. + +1. Set the **Path** field to `rabbitmq-python`. + +1. Click **Create Container Image**. ### Check the Build Status @@ -176,12 +187,10 @@ The backend status of the build can be checked from the **PipelineRuns** section Once successfully built, copy the image repository link so that you can create a Workload for deploying the app in the next step. -1. Select **Builds** to view the status of your build. +1. Select **Container Images** to view the status of your build. 1. When ready, use the "copy" button in the **Repository** column to copy the repository URL link to your clipboard. - ![App Build Ready](APL-RabbitMQ-build-ready.jpg) - ## Deploy the App 1. Select **view** > **team** and **team** > **demo** in the top bar. @@ -206,7 +215,7 @@ Once successfully built, copy the image repository link so that you can create a image: repository: {{< placeholder "" >}} pullPolicy: IfNotPresent - tag: {{< placeholder "latest" >}} + tag: {{< placeholder "main" >}} env: - name: {{< placeholder "NOTIFIER_RABBITMQ_HOST" >}} valueFrom: @@ -259,11 +268,9 @@ Create a service to expose the `rmq-example-app` application to external traffic 1. Select **Services** in the left menu, and click **Create Service**. -1. In the **Name** dropdown menu, select the `rmq-example-app` service. - -1. Under **Exposure**, select **External**. +1. In the **Service Name** dropdown list, select the `rmq-example-app` service. -1. Click **Submit**. The service may take a few minutes to be ready. +1. Click **Create Service**. The service may take around 30 seconds to be ready. ### Access the Demo App diff --git a/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/APL-WordPress-CrashLoopBackOff.jpg b/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/APL-WordPress-CrashLoopBackOff.jpg new file mode 100644 index 00000000000..90039b10791 Binary files /dev/null and b/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/APL-WordPress-CrashLoopBackOff.jpg differ diff --git a/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/APL-WordPress-LiveSite.jpg b/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/APL-WordPress-LiveSite.jpg new file mode 100644 index 00000000000..493ab853b13 Binary files /dev/null and b/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/APL-WordPress-LiveSite.jpg differ diff --git a/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/APL-WordPress-PodRunning.jpg b/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/APL-WordPress-PodRunning.jpg new file mode 100644 index 00000000000..19816b0ddf6 Binary files /dev/null and b/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/APL-WordPress-PodRunning.jpg differ diff --git a/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/APL-WordPress-WPlogin.jpg b/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/APL-WordPress-WPlogin.jpg new file mode 100644 index 00000000000..fc748bf7996 Binary files /dev/null and b/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/APL-WordPress-WPlogin.jpg differ diff --git a/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/index.md b/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/index.md new file mode 100644 index 00000000000..9a471e68e71 --- /dev/null +++ 
b/docs/guides/kubernetes/use-app-platform-to-deploy-wordpress/index.md @@ -0,0 +1,326 @@ +--- +slug: use-app-platform-to-deploy-wordpress +title: "Use App Platform to Deploy WordPress with Persistent Volumes on LKE" +description: "Learn how to deploy a WordPress site backed by a persistent MySQL database on Linode Kubernetes Engine using Akamai App Platform, custom Helm charts, and persistent volumes." +authors: ["Akamai"] +contributors: ["Akamai"] +published: 2025-04-25 +keywords: ['app platform','app platform for lke','lke','linode kubernetes engine','kubernetes','persistent volumes','mysql'] +license: '[CC BY-ND 4.0](https://creativecommons.org/licenses/by-nd/4.0)' +external_resources: +- '[Akamai App Platform for LKE](https://techdocs.akamai.com/cloud-computing/docs/application-platform)' +- '[Akamai App Platform Documentation](https://apl-docs.net/docs/akamai-app-platform/introduction)' +--- + +{{< note title="Beta Notice" type="warning" >}} +The Akamai App Platform is now available as a limited beta. It is not recommended for production workloads. To register for the beta, visit the [Betas](https://cloud.linode.com/betas) page in the Cloud Manager and click the Sign Up button next to the Akamai App Platform Beta. +{{< /note >}} + +This guide includes steps for deploying a WordPress site and persistent MySQL database using App Platform for Linode Kubernetes Engine (LKE). In this architecture, both WordPress and MySQL use PersistentVolumes (PV) and PersistentVolumeClaims (PVC) to store data. + +To add the WordPress and MySQL Helm charts to the App Platform Catalog, the **Add Helm Chart** feature of Akamai App Platform for LKE is used. + +## Prerequisites + +- A [Cloud Manager](https://cloud.linode.com/) account is required to use Akamai's cloud computing services, including LKE. + +- Enrollment in the Akamai App Platform's [beta program](https://cloud.linode.com/betas). + +- A provisioned and configured LKE cluster with App Platform enabled and [auto-scaling](https://techdocs.akamai.com/cloud-computing/docs/manage-nodes-and-node-pools#autoscale-automatically-resize-node-pools) turned on. A Kubernetes cluster consisting of 3 [Dedicated CPU Compute Instances](https://techdocs.akamai.com/cloud-computing/docs/dedicated-cpu-compute-instances) is sufficient for the deployment in this guide to run, but additional resources may be required during the configuration of your App Platform architecture. + + To ensure sufficient resources are available, it is recommended that node pool auto-scaling for your LKE cluster is enabled after deployment. Make sure to set the max number of nodes higher than your minimum. This may result in higher billing costs. + + To learn more about provisioning an LKE cluster with App Platform, see our [Getting Started with App Platform for LKE](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-akamai-application-platform) guide. + +## Components + +### Infrastructure + +- **Linode Kubernetes Engine (LKE)**: LKE is Akamai’s managed Kubernetes service, enabling you to deploy containerized applications without needing to build out and maintain your own Kubernetes cluster. + +- **App Platform for LKE**: A Kubernetes-based platform that combines developer and operations-centric tools, automation, self-service, and management of containerized application workloads. App Platform for LKE streamlines the application lifecycle from development to delivery and connects numerous CNCF (Cloud Native Computing Foundation) technologies in a single environment, allowing you to construct a bespoke Kubernetes architecture. 
+ +### Software + +- [**MySQL**](https://www.mysql.com/): An open source database management system that uses a relational database and SQL (Structured Query Language) to manage its data. + +- [**WordPress**](https://wordpress.com/): The WordPress application is an industry standard, open source CMS (content management system) often used for creating and publishing websites. + +- [**Ingress NGINX Controller**](https://github.com/kubernetes/ingress-nginx): When creating a Service in App Platform, an `ingress` is created using NGINX's Ingress Controller to allow public access to internal services. + +## Set Up Infrastructure + +Once your LKE cluster is provisioned and the App Platform web UI is available, complete the following steps to continue setting up your infrastructure. + +Sign into the App Platform web UI using the `platform-admin` account, or another account that uses the `platform-admin` role. Instructions for signing into App Platform for the first time can be found in our [Getting Started with Akamai App Platform](https://techdocs.akamai.com/cloud-computing/docs/getting-started-with-akamai-application-platform) guide. + +### Create a New Team + +[Teams](https://apl-docs.net/docs/for-ops/console/teams) are isolated tenants on the platform to support Development/DevOps teams, projects, and methodologies, like [DTAP](https://en.wikipedia.org/wiki/Development,_testing,_acceptance_and_production). A Team gets access to the Console, which provides access to self-service features and the shared apps available on the platform. + +When working in the context of an admin-level Team, users can create and access resources in any namespace. When working in the context of a non-admin Team, users can only create and access resources used in that Team’s namespace. + +1. Select **view** > **platform** in the top bar. + +1. Click **Teams** in the left menu. + +1. Click **Create Team**. + +1. Provide a **Name** for the Team. Keep all other default values, and click **Create Team**. This guide uses the Team name `demo`. + +### Add the WordPress Helm Chart to the Catalog + +[Helm charts](https://helm.sh/) provide information for defining, installing, and managing resources on a Kubernetes cluster. Custom Helm charts can be added to App Platform Catalog using the **Add Helm Chart** feature. + +To install WordPress on your cluster, add the WordPress Helm chart using the Git Repository URL. + +1. Select **view** > **team** and **team** > **admin** in the top bar. + +1. Once using the `admin` team view, click on **Catalog** in the left menu. + +1. Select **Add Helm Chart**. + +1. Under **Git Repository URL**, add the URL to the `wordpress` Helm chart .yaml file: + + ```command + https://github.com/bitnami/charts/blob/wordpress/24.1.18/bitnami/wordpress/Chart.yaml + ``` + +1. Click **Get Details** to populate the `wordpress` Helm chart details. + +1. Leave **Allow teams to use this chart** selected (default). This allows teams other than `admin` to use the Helm chart. + +1. Click **Add Chart**. It may take a few minutes for the Helm chart to be added to the Catalog. + +### Add the MySQL Helm Chart to the Catalog + +Repeat the same steps for installing the MySQL service on your cluster. + +1. While still using the `admin` team view, click **Catalog** in the left menu. + +1. Select **Add Helm Chart**. + +1. Under **Git Repository URL**, add the URL to the `mysql` Helm chart .yaml file: + + ```command + https://github.com/bitnami/charts/blob/mysql/12.3.1/bitnami/mysql/Chart.yaml + ``` + +1. 
Click **Get Details** to populate the `mysql` Helm chart details. If necessary, change the **Target Directory Name** field to read "MySQL". This is used to differentiate Helm charts within the Catalog. + +1. Leave **Allow teams to use this chart** selected. + +1. Click **Add Chart**. + +## Deploy a MySQL Database and WordPress Site + +Separate Workloads are created for MySQL and WordPress in order to deploy a persistent database and site, respectively. Both Workloads require passwords, so to prevent the passwords from being stored unencrypted, Sealed Secrets are created for each first. + +[Sealed Secrets](https://apl-docs.net/docs/for-devs/console/secrets) are encrypted Kubernetes Secrets stored in the Values Git repository. When a Sealed Secret is created in the Console, the Kubernetes Secret will appear in the Team's namespace. + +### Create a Sealed Secret to Store MySQL Passwords + +1. Select **view** > **team** and **team** > **demo** in the top bar. + +1. Click **Sealed Secrets** in the left menu. + +1. Click **Create SealedSecret**. + +1. Add a name for your Sealed Secret. This name is also used when creating the MySQL Workload. This guide uses the name `mysql-credentials`. + +1. Select type _[kubernetes.io/opaque](kubernetes.io/opaque)_ from the **type** dropdown menu. + +1. Add the following **Key** and **Value** pairs, replacing `{{< placeholder "YOUR_PASSWORD" >}}` and `{{< placeholder "YOUR_ROOT_PASSWORD" >}}` with your own secure passwords. To add a second Key and Value combination, select **Add Item** after entering the first pair below: + + - Key=`mysql-password`, Value=`{{< placeholder "YOUR_PASSWORD" >}}` + - Key=`mysql-root-password`, Value=`{{< placeholder "YOUR_ROOT_PASSWORD" >}}` + +1. Click **Submit**. The Sealed Secret may take a few minutes to become ready. + +### Create a Sealed Secret to Store WordPress Credentials + +1. Select **view** > **team** and **team** > **demo** in the top bar. + +1. Click **Sealed Secrets** in the left menu. + +1. Click **Create SealedSecret**. + +1. Add a name for your Sealed Secret. This name is also used when creating the WordPress Workload. This guide uses the name `wordpress-credentials`. + +1. Select type _[kubernetes.io/opaque](kubernetes.io/opaque)_ from the **type** dropdown menu. + +1. Add the following **Key** and **Value** pairs. + + Replace `{{< placeholder "YOUR_MYSQL_PASSWORD" >}}` with the same password you used for your `mysql-password` when creating the `mysql-credentials` Sealed Secret above. Replace `{{< placeholder "YOUR_WORDPRESS_PASSWORD" >}}` with your own secure password: + + - Key=`mariadb-password`, Value=`{{< placeholder "YOUR_MYSQL_PASSWORD" >}}` + - Key=`wordpress-password`, Value=`{{< placeholder "YOUR_WORDPRESS_PASSWORD" >}}` + +1. Click **Submit**. The Sealed Secret may take a few minutes to become ready. + +### Create the MySQL Workload + +1. Select **view** > **team** and **team** > **demo** in the top bar. + +1. Select **Workloads** in the left menu. + +1. Click on **Create Workload**. + +1. Select the _MySQL_ Helm chart from the Catalog. + +1. Click on **Values**. + +1. Provide a name for the Workload. This guide uses the Workload name `wordpress-mysql`. + +1. 
Set the following chart values: + + ``` + auth: + database: "{{< placeholder "wordpress" >}}" + username: "{{< placeholder "wordpress" >}}" + existingSecret: "{{< placeholder "mysql-credentials" >}}" # Change when using a different name + networkPolicy: + enabled: {{< placeholder "false" >}} + ``` + + {{< note title="Managing Network Policies" >}} + The `networkPolicy` is disabled since all traffic is allowed by default. Rather than configuring `networkPolicy` values directly in the Workload config, this guide centrally manages all network policies using App Platform's [**Network Policies**](https://apl-docs.net/docs/for-ops/console/netpols) function. + {{< /note >}} + +1. Click **Submit**. The Workload may take a few minutes to become ready. + +### Create the WordPress Workload + +1. Select **view** > **team** and **team** > **demo** in the top bar. + +1. Click **Workloads** in the left menu. + +1. Click on **Create Workload**. + +1. Select the _WordPress_ Helm chart from the Catalog. + +1. Click on **Values**. + +1. Provide a name for the Workload. This guide uses the Workload name `wordpress`. + +1. Set the following chart values. Replace {{< placeholder "YOUR_USERNAME" >}} with the username you wish to use for logging into WordPress: + + ``` + mariadb: + enabled: {{< placeholder "false" >}} + externalDatabase: + host: {{< placeholder "wordpress-mysql.team-demo.svc.cluster.local" >}} + user: {{< placeholder "wordpress" >}} + database: {{< placeholder "wordpress" >}} + existingSecret: "{{< placeholder "wordpress-credentials" >}}" + service: + type: {{< placeholder "ClusterIP" >}} + networkPolicy: + enabled: {{< placeholder "false" >}} + existingSecret: "{{< placeholder "wordpress-credentials" >}}" + wordpressUsername: "{{< placeholder "YOUR_USERNAME" >}}" + ``` + +1. Click **Submit**. The Workload may take a few minutes to become ready. + +### Create Network Policies + +Create a Network Policy allowing only the WordPress Pod to connect to the MySQL database. + +1. Select **view** > **team** and **team** > **demo** in the top bar. + +1. Click **Network Policies** in the left menu. + +1. Click **Create Netpol**. + +1. Add a name for the Network Policy. This guide uses the name `wordpress-mysql`. + +1. Select **Rule type** `ingress` using the following values: + + - **Selector label name**: [`app.kubernetes.io/instance`](http://app.kubernetes.io/instance) + + - **Selector label value**: `wordpress-mysql` + +1. Select **AllowOnly**, and enter the following values. This allows only the WordPress Pod to connect to the database: + + - **Namespace name**: `team-demo` + + - **Selector label name**: [`app.kubernetes.io/instance`](http://app.kubernetes.io/instance) + + - **Selector label value**: `wordpress` + +1. Click **Submit**. + +#### Check the Pod Status + +Using the App Platform **Shell** feature, you can check to see if the WordPress Pod has started and connected to the MySQL database. + +1. Select **view** > **team** and **team** > **demo** in the top bar. + +1. Click **Shell** in the left menu. + +1. Enter the following command to launch the k9s interface once the Shell session has loaded. [k9s](https://k9scli.io/) is an open source, terminal-based Kubernetes user interface pre-installed with Akamai App Platform: + + ```command + k9s + ``` + +1. A `CrashLoopBackOff` status signifies that WordPress has not successfully connected to the database. If this is the case, check to see if label values are correct in your Network Policy. 
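+
+    The Pod's logs usually show the underlying connection error. If `kubectl` is available in your Shell session, you can view them with a command like the one below; this is a sketch that assumes the Workload name `wordpress` used in this guide and the `team-demo` namespace:
+
+    ```command
+    kubectl -n team-demo logs deployment/wordpress
+    ```
+
+    You can also view a Pod's logs from within k9s by selecting the Pod and pressing the `l` key.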
+ + ![CrashLoopBackOff](APL-WordPress-CrashLoopBackOff.jpg) + + To force a restart, click on the WordPress Pod, and type Ctrl + D. This kills the current Pod and starts a new one. + + ![Pod Running](APL-WordPress-PodRunning.jpg) + +## Create a Service to Expose the WordPress Site + +Creating a [Service](https://apl-docs.net/docs/for-devs/console/services) in App Platform configures NGINX’s Ingress Controller. This allows you to enable public access to services running internally on your cluster. + +1. Select **view** > **team** and **team** > **demo** in the top bar. + +1. Click **Services** in the left menu. + +1. Click **Create Service**. + +1. In the **Service Name** dropdown menu, select the `wordpress` service. + +1. Under **Exposure (ingress)**, select **External**. + +1. Click **Create Service**. + +1. Once the Service is ready, click the URL of the `wordpress` service to navigate to the live WordPress site: + + ![WordPress Live Site](APL-WordPress-LiveSite.jpg) + +### Setting Up DNS + +When creating a Service, DNS for your site can be configured using a CNAME rather than an external IP address. To do this, configure a CNAME entry with your domain name provider, and follow the steps in our [Using a CNAME](https://apl-docs.net/docs/for-devs/console/services#using-a-cname) App Platform documentation. + +See our guide on [CNAME records](https://techdocs.akamai.com/cloud-computing/docs/cname-records) for more information on how CNAME records work. + +### Access the WordPress UI + +1. While viewing the WordPress site in your browser, add `/wp-admin` to the end of the URL, where {{< placeholder "MY_WORDPRESS_URL" >}} is your site URL: + + ``` + http://{{< placeholder "MY_WORDPRESS_URL" >}}/wp-admin + ``` + + This should bring you to the WordPress admin panel login screen: + + ![WordPress Login Screen](APL-WordPress-WPlogin.jpg) + +1. To access the WordPress UI, sign in with your WordPress username and password. + + Your username is the value used for `wordpressUsername` when creating the [WordPress Workload](#create-the-wordpress-workload). Your password is the value used for `wordpress-password` when making your `wordpress-credentials` [Sealed Secret](#create-a-sealed-secret-to-store-wordpress-credentials). + +## Going Further + +Once you've accessed the WordPress UI, you can begin modifying your site using WordPress templates, themes, and plugins. For more information, see WordPress's resources below: + +- [WordPress Support](https://wordpress.org/support/): Learn the basic workflows for using WordPress. + +- [Securing WordPress](https://www.linode.com/docs/guides/how-to-secure-wordpress/): Advice on securing WordPress through HTTPS, using a secure password, changing the admin username, and more. + +- [WordPress Themes](https://wordpress.org/themes/#): A collection of thousands of available WordPress themes. \ No newline at end of file diff --git a/docs/marketplace-docs/guides/apache-spark-cluster/index.md b/docs/marketplace-docs/guides/apache-spark-cluster/index.md index 78cc5093a15..b1101acd197 100644 --- a/docs/marketplace-docs/guides/apache-spark-cluster/index.md +++ b/docs/marketplace-docs/guides/apache-spark-cluster/index.md @@ -2,7 +2,7 @@ title: "Deploy Apache Spark through the Linode Marketplace" description: "Apache Spark is a powerful open-source unified analytics engine for large-scale data processing. It provides high-level APIs in Java, Scala, Python, and R, and an optimized engine that supports general execution graphs. 
Spark is designed for both batch and streaming data processing, and it's significantly faster than traditional big data processing frameworks." published: 2024-07-09 -modified: 2024-07-09 +modified: 2025-05-01 keywords: ['spark','apache spark', 'marketplace', 'bigdata','analytics'] tags: ["ubuntu","marketplace", "big data", "linode platform", "cloud manager", "analytics", "cloud storage", "high availability", "compute storage"] external_resources: @@ -38,7 +38,7 @@ The minimum RAM requirement for the worker nodes is 4GB RAM to ensure that jobs ## Configuration Options -- **Supported distributions:** Ubuntu 22.04 LTS +- **Supported distributions:** Ubuntu 24.04 LTS - **Suggested minimum plan:** 4GB RAM ### Spark Options @@ -77,10 +77,10 @@ The Apache Spark Marketplace App installs the following software on your Linode: | **Software** | **Version** | **Description** | | :--- | :---- | :--- | -| **Apache Spark** | 3.4 | Unified analytics engine for large-scale data processing | +| **Apache Spark** | 3.5 | Unified analytics engine for large-scale data processing | | **Java OpenJDK** | 11.0 | Runtime environment for Spark | -| **Scala** | 2.11 | Programming language that Spark is built with, providing a powerful interface to Spark's APIs | -| **NGINX** | 1.18 | High-performance HTTP server and reverse proxy | +| **Scala** | Latest | Programming language that Spark is built with, providing a powerful interface to Spark's APIs | +| **NGINX** | Latest | High-performance HTTP server and reverse proxy | | **UFW** | | Uncomplicated Firewall for managing firewall rules | | **Fail2ban** | | Intrusion prevention software framework for protection against brute-force attacks | diff --git a/docs/marketplace-docs/guides/grav/grav-login.png b/docs/marketplace-docs/guides/grav/grav-login.png new file mode 100644 index 00000000000..2bd02ceba0f Binary files /dev/null and b/docs/marketplace-docs/guides/grav/grav-login.png differ diff --git a/docs/marketplace-docs/guides/grav/index.md b/docs/marketplace-docs/guides/grav/index.md index 1e7fdbfa645..a32dd72bd07 100644 --- a/docs/marketplace-docs/guides/grav/index.md +++ b/docs/marketplace-docs/guides/grav/index.md @@ -2,7 +2,7 @@ title: "Deploy Grav through the Linode Marketplace" description: "Learn how to deploy Grav, a modern open source flat-file CMS, on a Linode Compute Instance." published: 2022-02-22 -modified: 2022-03-08 +modified: 2025-04-08 keywords: ['cms','blog','website'] tags: ["marketplace", "linode platform", "cloud manager"] external_resources: @@ -29,14 +29,14 @@ marketplace_app_name: "Grav" ## Configuration Options -- **Supported distributions:** Ubuntu 20.04 LTS +- **Supported distributions:** Ubuntu 24.04 LTS - **Recommended plan:** All plan types and sizes can be used. ### Grav Options - **Email address** *(required)*: Enter the email address to use for generating the SSL certificates. -{{% content "marketplace-limited-user-fields-shortguide" %}} +{{% content "marketplace-required-limited-user-fields-shortguide" %}} {{% content "marketplace-custom-domain-fields-shortguide" %}} @@ -50,14 +50,19 @@ marketplace_app_name: "Grav" ![Screenshot of the URL bar with the Grav URL](grav-url.png) -1. You are now prompted to create a new admin user for Grav. Complete the form and click the **Create User** button. +1. Use the following credentials to log in: + - **Username:** *admin* + - **Password:** Enter the password stored in the credentials file on your server.
To obtain it, log in to your Compute Instance via SSH or Lish and run: + ```command + cat /home/$USER/.credentials + ``` - ![Screenshot of the Create Admin Account form in Grav](grav-create-user.png) + ![Screenshot of the Grav login page](grav-login.png) -1. Once the admin user had been created, you are automatically logged in and taken to the Admin dashboard. From here, you can fully administer your new Grav site, including creating content, modifying your configuration, changing your theme, and much more. +You're now logged in and on the Admin dashboard. From here, you can fully administer your new Grav site, including creating content, modifying your configuration, changing your theme, and much more. - ![Screenshot of the Admin dashboard](grav-admin.png) +![Screenshot of the Admin dashboard](grav-admin.png) -Now that you’ve accessed your dashboard, check out [the official Grav documentation](https://learn.getgrav.org/) to learn how to further use your Grav instance. +Check out [the official Grav documentation](https://learn.getgrav.org/) to learn how to further use your Grav instance. {{% content "marketplace-update-note-shortguide" %}} \ No newline at end of file diff --git a/docs/marketplace-docs/guides/lamp-stack/index.md b/docs/marketplace-docs/guides/lamp-stack/index.md index 4d0bea98cc7..385d04aefd2 100644 --- a/docs/marketplace-docs/guides/lamp-stack/index.md +++ b/docs/marketplace-docs/guides/lamp-stack/index.md @@ -2,7 +2,7 @@ title: "Deploy a LAMP Stack through the Linode Marketplace" description: "This guide shows you how to use the Linode Marketplace One-Click Application to deploy a LAMP (Linux, Apache, MySQL, PHP) stack on a Linode running Linux." published: 2019-03-26 -modified: 2023-06-06 +modified: 2025-04-29 keywords: ['LAMP', 'apache', 'web server', 'mysql', 'php'] tags: ["apache","lamp","cloud-manager","linode platform","php","mysql","marketplace"] external_resources: @@ -36,6 +36,8 @@ A LAMP (Linux, [Apache](https://www.apache.org), [MySQL](https://www.mysql.com), - **Email address** *(required)*: Enter the email address to use for generating the SSL certificates. +- **Install PHPMyAdmin**: Choose whether to install PHPMyAdmin during deployment. This provides a web-based interface for managing your MySQL databases. + {{< note >}} The password for the MySQL root user is automatically generated and provided in the file `/home/$USERNAME/.credentials` when the LAMP deployment completes. {{< /note >}} diff --git a/docs/marketplace-docs/guides/moodle/index.md b/docs/marketplace-docs/guides/moodle/index.md index c0107e8fe39..d4a6e8ff724 100644 --- a/docs/marketplace-docs/guides/moodle/index.md +++ b/docs/marketplace-docs/guides/moodle/index.md @@ -29,26 +29,41 @@ Moodle is the most widely used open source learning management system. It is aim ## Configuration Options -- **Supported distributions:** Ubuntu 20.04 LTS +- **Supported distributions:** Ubuntu 24.04 LTS - **Recommended minimum plan:** All plan types and sizes can be used. ### Moodle Options -- **Moodle Admin Password** *(required)*: Enter a *strong* password for the Moodle admin account. -- **Moodle Admin Email** *(required)*: The email address you wish to use with the Moodle admin account. -| **MySQL Root Password** *(required)*: Enter a *strong* password for the MySQL root user. -- **Moodle database User Password** *(required)*: Enter a *strong* password for the database user. -- **Limited sudo user** *(required)*: Enter your preferred username for the limited user.
-- **Password for the limited user** *(required)*: Enter a *strong* password for the new user. +- **Email address** *(required)*: Enter the email address you want to use for generating the SSL certificates and configuring the server and DNS records. + +- **Moodle Admin Username** *(required)*: Enter an admin username for the Moodle admin account. + +- **Moodle Database Username** *(required)*: Enter a database username for the Moodle database admin. + +{{% content "marketplace-required-limited-user-fields-shortguide" %}} {{% content "marketplace-custom-domain-fields-shortguide" %}} {{% content "marketplace-special-character-limitations-shortguide" %}} -#### Limited User SSH Options (Optional) +### Obtain the Credentials + +Once the app is deployed, you need to obtain the credentials from the server. + +To obtain the credentials: + +1. Log in to your new Compute Instance using one of the methods below: + + - **Lish Console**: Log in to Cloud Manager, click the **Linodes** link in the left menu, and select the Compute Instance you just deployed. Click **Launch LISH Console**. Log in as the `root` user. To learn more, see [Using the Lish Console](/docs/products/compute/compute-instances/guides/lish/). + - **SSH**: Log in to your Compute Instance over SSH using the `root` user. To learn how, see [Connecting to a Remote Server Over SSH](/docs/guides/connect-to-server-over-ssh/). + +1. Run the following command to access the credentials file: + + ```command + cat /home/$USERNAME/.credentials + ``` -- **SSH public key for the limited user:** If you wish to login as the limited user through public key authentication (without entering a password), enter your public key here. See [Creating an SSH Key Pair and Configuring Public Key Authentication on a Server](/docs/guides/use-public-key-authentication-with-ssh/) for instructions on generating a key pair. -- **Disable root access over SSH:** To block the root user from logging in over SSH, select *Yes* (recommended). You can still switch to the root user once logged in and you can also log in as root through [Lish](/docs/products/compute/compute-instances/guides/lish/). +This returns passwords that were automatically generated when the instance was deployed. Save them. Once saved, you can safely delete the file. ## Getting Started After Deployment @@ -56,7 +71,7 @@ Moodle is the most widely used open source learning management system. It is aim To access your Moodle instance, Open a browser and navigate to your Linode rDNS domain `https://203-0-113-0.ip.linodeusercontent.com`. Replace `https://203-0-113-0.ip.linodeusercontent.com` with your [Linode's RDNS domain](/docs/products/compute/compute-instances/guides/manage-ip-addresses/#viewing-ip-addresses). -From there, you can login by clicking the box on the top right of the page. Once you see the login page, you can enter `moodle` as the *username* and the *password* that was entered during the creation of the Linode. +From there, you can log in by clicking the box on the top right of the page. On the login page, enter the admin username you provided during the app deployment and the password stored in the credentials file. Now that you’ve accessed your dashboard, checkout [the official Moodle documentation](https://docs.moodle.org/311/en/Main_page) to learn how to further configure your instance. 
@@ -67,7 +82,7 @@ The Moodle Marketplace App installs the following required software on your Lino | Software | Description | | -- | -- | | [**PHP**](https://www.php.net) | A popular general-purpose scripting language that is especially suited to web development. | -| [**MariaDB Server**](https://mariadb.org) | A relational database server. The root password is set, locking down access outside the system. To gain access to the root user, obtain the password from `/root/.root_mysql_password` file. | +| [**MariaDB Server**](https://mariadb.org) | A relational database server. The root password is set, locking down access outside the system.| | [**UFW**](https://wiki.ubuntu.com/UncomplicatedFirewall) | Firewall utility that allows access only for SSH (port 22, rate limited), HTTP (port 80), and HTTPS (port 443). | | [**Certbot**](https://certbot.eff.org) | Certbot is a fully-featured, easy-to-use, extensible client for the Let's Encrypt CA. | | [**Apache2**](https://httpd.apache.org) | HTTP Server. | diff --git a/docs/marketplace-docs/guides/pihole/index.md b/docs/marketplace-docs/guides/pihole/index.md index 0f69c8f5be1..51bf0b2e971 100644 --- a/docs/marketplace-docs/guides/pihole/index.md +++ b/docs/marketplace-docs/guides/pihole/index.md @@ -29,32 +29,50 @@ marketplace_app_name: "Pi-hole" ## Configuration Options -- **Supported distributions:** Ubuntu 20.04 LTS +- **Supported distributions:** Ubuntu 24.04 LTS - **Recommended plan:** All plan types and sizes can be used. ### Pi-hole Options -- **Pi-hole user password** *(required)*: This will be the password to get into the Pi-hole dashboard. +- **Email address** *(required)*: Enter the email address you want to use for generating the SSL certificates and configuring the server and DNS records. -{{% content "marketplace-limited-user-fields-shortguide" %}} +{{% content "marketplace-required-limited-user-fields-shortguide" %}} {{% content "marketplace-custom-domain-fields-shortguide" %}} -- **Email address for the SOA record:** The start of authority (SOA) email address for this server. This is a required field if you want the installer to create DNS records. {{% content "marketplace-special-character-limitations-shortguide" %}} +### Obtain the Credentials + +Once the app is deployed, you need to obtain the credentials from the server. + +To obtain the credentials: + +1. Log in to your new Compute Instance using one of the methods below: + + - **Lish Console**: Log in to Cloud Manager, click the **Linodes** link in the left menu, and select the Compute Instance you just deployed. Click **Launch LISH Console**. Log in as the `root` user. To learn more, see [Using the Lish Console](/docs/products/compute/compute-instances/guides/lish/). + - **SSH**: Log in to your Compute Instance over SSH using the `root` user. To learn how, see [Connecting to a Remote Server Over SSH](/docs/guides/connect-to-server-over-ssh/). + +1. Run the following command to access the credentials file: + + ```command + cat /home/$USERNAME/.credentials + ``` + +This returns passwords that were automatically generated when the instance was deployed. Save them. Once saved, you can safely delete the file. + ## Getting Started after Deployment ### Accessing the Pi-hole App 1. Open your web browser and navigate to `http://[domain]/admin`, where *[domain]* can be replaced with the custom domain you entered during deployment, your Compute Instance's rDNS domain (such as `192-0-2-1.ip.linodeusercontent.com`), or your IPv4 address. 
See the [Managing IP Addresses](/docs/products/compute/compute-instances/guides/manage-ip-addresses/) guide for information on viewing IP addresses and rDNS. - The Pi-Hole dashboard should now be displayed. + The Pi-hole dashboard opens. ![Screenshot of the Pi-hole dashboard](pihole-dashboard.png) -1. To log yourself in and access most of Pi-hole's features, click the **Login** link on the left menu. Enter the Pi-hole user password that you created when deploying the Compute Instance. +1. To log in and access most of Pi-hole's features, click **Login** in the main menu. Enter the Pi-hole user password found in your credentials file. -Now that you’ve accessed your dashboard, check out [the official Pi-hole documentation](https://docs.pi-hole.net/) to learn how to further use your Pi-hole instance. +Check out [the official Pi-hole documentation](https://docs.pi-hole.net/) to learn how to further use your Pi-hole instance. {{% content "marketplace-update-note-shortguide" %}} \ No newline at end of file