
Pod does not start on upgraded k8s eks cluster #91

Open

zeffirara opened this issue Jun 3, 2024 · 1 comment

Comments

@zeffirara
I've created a new AWS EKS cluster because I was upgrading from Kubernetes version 1.21 to 1.29. However, on the new EKS cluster the n8n pod doesn't start; it hangs after printing these logs:

```
Loading config overwrites [ '/n8n-config/config.json', '/n8n-secret/secret.json' ]
2024-06-03T09:14:40.278Z | info     | Initializing n8n process "{ file: 'start.js', function: 'init' }"
2024-06-03T09:14:40.370Z | debug    | Lazy Loading credentials and nodes from n8n-nodes-base "{\n  credentials: 352,\n  nodes: 444,\n  file: 'LoggerProxy.js',\n  function: 'exports.debug'\n}"
2024-06-03T09:14:40.383Z | debug    | Lazy Loading credentials and nodes from @n8n/n8n-nodes-langchain "{\n  credentials: 14,\n  nodes: 70,\n  file: 'LoggerProxy.js',\n  function: 'exports.debug'\n}"
```

My Helm chart is the same and works fine on the older cluster, so I can't figure out why it doesn't work on the newer one. I saw issue #48, which seems related, but the solution mentioned there doesn't change anything for me.

@zeffirara
Author

My values.yaml:

```yaml
n8n:
  encryption_key: <CUT>
defaults:

config:
  executions:
    pruneData: "true"
    pruneDataMaxAge: 3760
  database:
    type: postgresdb
    postgresdb:
      host: <CUT>
      database: <CUT>
      ssl:
        rejectUnauthorized: false
  host: <CUT>
  port: 443
  protocol: https

secret:
  database:
    postgresdb:
      user: <CUT>
      password: <CUT>

extraEnv:
  N8N_LOG_LEVEL: "debug"
  EXECUTIONS_MODE: "regular"
  QUEUE_HEALTH_CHECK_ACTIVE: "true"
  N8N_METRICS: "true"

extraEnvSecrets: {}

persistence:
  enabled: false
  type: emptyDir
  accessModes:
    - ReadWriteOnce
  size: 1Gi

replicaCount: 1

deploymentStrategy:
  type: "Recreate"

image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: true
  annotations: {}
  name: ""

podSecurityContext:
  runAsNonRoot: true
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000

service:
  type: ClusterIP
  port: 80
  annotations: {}

dnsNames:
  - <CUT>

ingress:
  enabled: true
  className: "alb"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: "ip"
    alb.ingress.kubernetes.io/scheme: "internet-facing"
    alb.ingress.kubernetes.io/success-codes: "200-403"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: <CUT>
  hosts:
    - host: <CUT>
      paths: [ "/*" ]
  tls:
    - secretName: <CUT>
      hosts:
        - <CUT>

  className: ""

helmResourcePolicy: "keep"

issuer: "letsencrypt-cluster-issuer"

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 2
  targetCPUUtilizationPercentage: 80

scaling:
  enabled: false

  worker:
    count: 1
    concurrency: 1

  webhook:
    enabled: false
    count: 1

redis:
  enabled: false
```
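One thing worth noting about the values above: `className` appears twice under `ingress:` ("alb" near the top, "" after the `tls:` block). Depending on the YAML parser, the last occurrence may silently win and clear the ingress class. A quick way to catch this kind of duplicate before running `helm upgrade` is a strict loader that rejects repeated keys (a minimal sketch, assuming PyYAML is installed; the class and error names here are illustrative, not part of any Helm or n8n API):

```python
# Sketch: detect duplicate mapping keys in a Helm values.yaml.
# PyYAML's default SafeLoader keeps the last duplicate without warning.
import yaml


class DuplicateKeyError(ValueError):
    pass


class StrictLoader(yaml.SafeLoader):
    """SafeLoader variant that rejects duplicate keys instead of overriding."""


def _construct_mapping(loader, node):
    seen = set()
    for key_node, _value_node in node.value:
        key = loader.construct_object(key_node, deep=True)
        if key in seen:
            raise DuplicateKeyError(
                f"duplicate key {key!r} at line {key_node.start_mark.line + 1}"
            )
        seen.add(key)
    return yaml.SafeLoader.construct_mapping(loader, node)


StrictLoader.add_constructor(
    yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, _construct_mapping
)

# Minimal reproduction of the ingress section above:
snippet = """\
ingress:
  enabled: true
  className: "alb"
  className: ""
"""

try:
    yaml.load(snippet, Loader=StrictLoader)
except DuplicateKeyError as exc:
    print("values.yaml problem:", exc)
```

Running this against the full values file would flag the second `className` entry; whether that duplicate is actually related to the stuck pod is a separate question, but it is worth cleaning up either way.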
