This repository has been archived by the owner on Jul 26, 2022. It is now read-only.

GCE ingress with routes always falls back to default-http-backend #39

Open
alan-ma-umg opened this issue Jul 15, 2018 · 8 comments

@alan-ma-umg commented Jul 15, 2018

I installed the helm chart 1.5.1 into a GKE cluster:
helm install -f values.yaml --name cicd stable/sonatype-nexus

When the ingress is ready, I always get "default backend - 404" when visiting my nexus service IP/host.

$ kubectl describe ingress/cicd-nexus-sonatype-nexus
Name:             cicd-nexus-sonatype-nexus
Namespace:        default
Address:          35.190.xxx.xxx
Default backend:  default-http-backend:80 (10.0.1.3:8080)
TLS:
  nexus-tls terminates container.graphconnected.com,nexus.graphconnected.com
Rules:
  Host                          Path  Backends
  ----                          ----  --------
  container.foo.com
                                /*   cicd-nexus-sonatype-nexus:8080 (<none>)
  nexus.foo.com
                                /*   cicd-nexus-sonatype-nexus:8080 (<none>)
Annotations:
  backends:         {"k8s-be-32262--fa005fc45b78c698":"HEALTHY","k8s-be-32273--fa005fc45b78c698":"HEALTHY"}
  forwarding-rule:  k8s-fw-default-cicd-nexus-sonatype-nexus--fa005fc45b78c698
  target-proxy:     k8s-tp-default-cicd-nexus-sonatype-nexus--fa005fc45b78c698
  url-map:          k8s-um-default-cicd-nexus-sonatype-nexus--fa005fc45b78c698
Events:
  Type    Reason   Age               From                     Message
  ----    ------   ----              ----                     -------
  Normal  Service  2m (x10 over 1h)  loadbalancer-controller  no user specified default backend, using system default

Output from a dry run, $ helm install --dry-run --debug -f values.yaml stable/sonatype-nexus:

# Source: sonatype-nexus/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: angry-whippet-sonatype-nexus
  labels:
    app: sonatype-nexus
    fullname: angry-whippet-sonatype-nexus
    chart: sonatype-nexus-1.5.1
    release: angry-whippet
    heritage: Tiller
  annotations:
    kubernetes.io/ingress.allow-http: "true"
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "kubernetes-ingress-static-ip"
    kubernetes.io/tls-acme: "true"
spec:
  rules:
    - host: container.foo.com
      http:
        paths:
          - backend:
              serviceName: angry-whippet-sonatype-nexus
              servicePort: 8080
            path: /*
    - host: nexus.foo.com
      http:
        paths:
          - backend:
              serviceName: angry-whippet-sonatype-nexus
              servicePort: 8080
            path: /*
  tls:
    - hosts:
        - container.foo.com
        - nexus.foo.com
      secretName: "nexus-tls"

My full values.yaml content:

replicaCount: 1

nexus:
  imageName: quay.io/travelaudience/docker-nexus
  imageTag: 3.12.1
  imagePullPolicy: IfNotPresent
  env:
    - name: install4jAddVmParams
      value: "-Xms1200M -Xmx1200M -XX:MaxDirectMemorySize=2G -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
  # nodeSelector:
  #   cloud.google.com/gke-nodepool: default-pool
  resources: {}
    # requests:
      ## Based on https://support.sonatype.com/hc/en-us/articles/115006448847#mem
      ## and https://twitter.com/analytically/status/894592422382063616:
      ##   Xms == Xmx
      ##   Xmx <= 4G
      ##   MaxDirectMemory >= 2G
      ##   Xmx + MaxDirectMemory <= RAM * 2/3 (hence the request for 4800Mi)
      ##   MaxRAMFraction=1 is not being set as it would allow the heap
      ##     to use all the available memory.
      # cpu: 250m
      # memory: 4800Mi
  # The ports should only be changed if the nexus image uses a different port
  dockerPort: 5003
  nexusPort: 8081
  serviceType: NodePort
  # securityContext:
  #   fsGroup: 2000
  livenessProbe:
    initialDelaySeconds: 30
    periodSeconds: 30
    failureThreshold: 6
    path: /
  readinessProbe:
    initialDelaySeconds: 30
    periodSeconds: 30
    failureThreshold: 6
    path: /

nexusProxy:
  imageName: quay.io/travelaudience/docker-nexus-proxy
  imageTag: 2.2.0
  imagePullPolicy: IfNotPresent
  port: 8080
  env:
    nexusDockerHost: container.foo.com
    nexusHttpHost: nexus.foo.com
    enforceHttps: false
    cloudIamAuthEnabled: false
## If cloudIamAuthEnabled is set to true uncomment the variables below and remove this line
  #   clientId: ""
  #   clientSecret: ""
  #   organizationId: ""
  #   redirectUrl: ""
  # secrets:
  #   keystore: ""
  #   password: ""
  resources: {}
    # requests:
      # cpu: 100m
      # memory: 256Mi
    # limits:
      # cpu: 200m
      # memory: 512Mi
persistence:
  enabled: true
  accessMode: ReadWriteOnce
  ## If defined, storageClass: <storageClass>
  ## If set to "-", storageClass: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClass spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # existingClaim:
  # annotations:
  #  "helm.sh/resource-policy": keep
  # storageClass: "-"
  storageSize: 8Gi

nexusBackup:
  enabled: false
  imageName: quay.io/travelaudience/docker-nexus-backup
  imageTag: 1.2.0
  imagePullPolicy: IfNotPresent
  env:
    targetBucket:
  nexusAdminPassword: "admin123"
  persistence:
    enabled: true
    # existingClaim:
    # annotations:
    #  "helm.sh/resource-policy": keep
    accessMode: ReadWriteOnce
    # See comment above for information on setting the backup storageClass
    # storageClass: "-"
    storageSize: 8Gi

ingress:
  enabled: true
  path: /*
  annotations: 
    # NOTE: Can't use 'false' due to https://github.com/jetstack/kube-lego/issues/173.
    kubernetes.io/ingress.allow-http: true
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "kubernetes-ingress-static-ip"
    kubernetes.io/tls-acme: true
  tls:
    enabled: true
    secretName: nexus-tls

Please advise.

Thanks!

@diasjorge

Did you solve this issue? I'm facing the same problem

@turulb commented Feb 7, 2019

Hi,
I'm using a minikube node with an insecure registry and I have the same issue. The only difference from the config above is that I'm missing "path: /*" in my ingress config.

@pires (Contributor) commented Feb 27, 2019

@jeff-knurek @TAvardit, can you help here? It seems to me this is a misconfiguration of the host-related attributes, but I'm not experienced with the Helm charts.

@varditn (Contributor) commented Feb 27, 2019

Hi @alan-ma-umg, something in your configuration seems off: your Nexus domains don't match the SSL domains (the TLS secret terminates container.graphconnected.com and nexus.graphconnected.com, while the ingress rules and the Nexus proxy are configured for container.foo.com and nexus.foo.com). See the sketch below.
@diasjorge Are you also using GKE, with Google auth disabled on the Nexus proxy? Can you please share your configuration as @alan-ma-umg did?
@turulb, the path is needed, but its value may need to change for providers other than GKE.
I understand that you're using minikube. Are you using the NGINX ingress controller on minikube?
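For illustration, a minimal values.yaml sketch (using the placeholder foo.com hosts from the redacted config above) where the proxy hosts, the ingress rules, and the TLS certificate all cover the same names; the nexus-tls secret is assumed to hold a certificate for exactly these hosts:

nexusProxy:
  env:
    nexusDockerHost: container.foo.com
    nexusHttpHost: nexus.foo.com

ingress:
  enabled: true
  path: /*
  tls:
    enabled: true
    # The certificate in this secret must list container.foo.com and
    # nexus.foo.com, i.e. the same hosts as above, not a different domain.
    secretName: nexus-tls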

@diasjorge

In my case, I fixed it by adding path: /* to the ingress section, which was not in the recommended settings of the Helm chart. A sketch of what that looks like is below.
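For reference, a minimal sketch of that ingress section (the enabled flag is illustrative, mirroring the configuration posted above):

ingress:
  enabled: true
  # The GCE ingress controller expects the trailing wildcard on the path;
  # other controllers (e.g. NGINX) typically use plain "/" instead.
  path: /*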

@diasjorge

I forgot to mention that I'm also using GKE. I was using the Nexus proxy before, but I've disabled it since I had problems with timeouts on very large uploads.

@varditn (Contributor) commented Feb 27, 2019

The Helm chart was changed to support providers other than GKE, but the README comment about the ingress path mentions the need to use /* for GKE (see the sketch below).
Interesting that you had timeout issues; we are using the proxy without any problems. What size of uploads caused the issue, and which type of objects?
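As an illustration of that provider difference (a sketch based on the ingress.path value shown above, not text taken from the chart's README):

# GKE / GCE ingress controller
ingress:
  path: /*

# NGINX ingress controller (e.g. on minikube)
ingress:
  path: /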

@diasjorge

I was uploading packages of about 500 MB, and Jetty would time out since they took longer than 30 seconds. I'll try updating to the newest version and retry; if there's still a problem, I'll open a new issue so as not to hijack this one.
