Kubernetes cluster unreachable #1540

marcosmartinezfco opened this issue Nov 23, 2024 · 0 comments
marcosmartinezfco commented Nov 23, 2024

Terraform version, Kubernetes provider version and Kubernetes version

Terraform version: v1.9.5
Helm Provider version: v2.16.1
Kubernetes version: 1.31

Terraform configuration

# main.tf
module "eks_base" {
  source = "../../modules/infra/eks_base"

  environment     = var.environment
  cluster_name    = "example-karpenter"
  cluster_version = "1.31"
}

module "eks_tooling" {
  source = "../../modules/infra/eks_tooling"

  region = var.region

  eks_cluster_name        = module.eks_base.cluster_name
  eks_cluster_arn         = module.eks_base.cluster_arn
  eks_cluster_oidc_issuer = module.eks_base.cluster_oidc_issuer
  eks_cluster_endpoint    = module.eks_base.cluster_endpoint

  vpc_eks_cluster_id = module.eks_base.vpc_eks_cluster_id

  controller_image_tag = "v2.7.2"

  providers = {
    aws.virginia        = aws.virginia # required to pull images from ECR
    helm.eks_base       = helm.eks_base
    kubernetes.eks_base = kubernetes.eks_base
  }

  depends_on = [module.eks_base]
}

---
# providers.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  backend "s3" {
    bucket         = "lct-terraform-state-sandbox"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "lct-terraform-lock-sandbox"
    encrypt        = true
  }

  required_version = ">= 1.5.0"
}

provider "aws" {
  region = var.region

  default_tags {
    tags = {
      "Environment" = var.environment
      "Deployment"  = "Terraform"
    }
  }
}

provider "aws" {
  alias  = "virginia"
  region = "us-east-1"
}

data "aws_eks_cluster" "cluster" {
  name = "example-karpenter-sandbox"
}

data "aws_eks_cluster_auth" "cluster" {
  name = data.aws_eks_cluster.cluster.name
}

provider "helm" {
  alias = "eks_base"
}

provider "kubernetes" {
  alias                  = "eks_base"
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

---

# helm.tf (inside eks_tooling module)

data "aws_ecrpublic_authorization_token" "token" {
  provider = aws.virginia
}

resource "helm_release" "karpenter" {
  namespace           = "kube-system"
  name                = "karpenter"
  repository          = "oci://public.ecr.aws/karpenter"
  repository_username = data.aws_ecrpublic_authorization_token.token.user_name
  repository_password = data.aws_ecrpublic_authorization_token.token.password
  chart               = "karpenter"
  version             = "1.0.6"
  wait                = false

  values = [
    <<-EOT
    serviceAccount:
      name: ${module.karpenter.service_account}
    settings:
      clusterName: ${var.eks_cluster_name}
      clusterEndpoint: ${var.eks_cluster_endpoint}
      interruptionQueue: ${module.karpenter.queue_name}
    EOT
  ]
}

Question

I'm trying to create some new modules to deploy EKS with Karpenter using the terraform-aws-eks external module. The issue is that I get an error saying the Kubernetes cluster is unreachable. I've tried many things, but I don't know what is causing this exactly.

The config map is properly configured to allow the role I'm assuming to connect to the cluster (I run my terraform apply in GitHub Actions).

I do two separate applies: first one for eks_base, and then I add the providers and all the eks_tooling pieces to deploy Karpenter and other charts such as the metrics server and the AWS Load Balancer Controller (although for now I'm just trying to deploy the Helm release to keep it simple).

I have no idea why the cluster is not reachable, and I've tried many things: using the data blocks, using the exec plugin, and so on.
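For reference, the helm provider block in providers.tf above is declared with only an alias and no connection settings. A commonly used shape for wiring the helm provider (v2.x) to the cluster — a sketch only, reusing the data sources already defined in providers.tf, and using the exec plugin rather than a static token — looks like this:

```hcl
# Sketch only: helm provider (v2.x syntax) connected to the EKS cluster,
# reusing data.aws_eks_cluster.cluster from providers.tf. The exec block
# shells out to the AWS CLI so a fresh token is obtained on every run,
# instead of reading a token once via data.aws_eks_cluster_auth.
provider "helm" {
  alias = "eks_base"

  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.cluster.name]
    }
  }
}
```

This assumes the AWS CLI is available on the runner (true for the standard GitHub Actions images) and that the assumed role has access to the cluster.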

Thank you for your time if you see this :) (also let me know if I missed any other part of the configuration that might be relevant to solving this).
