Charles Salmon

Reputation: 467

Configuration of Nexus Helm Chart: HTTPS Serving HTTP Resources

I ran the following command:

kubectl create secret tls nexus-tls --cert cert.crt --key privateKey.pem

where cert.crt contains my certificate and privateKey.pem contains my private key (provisioned using CloudFlare).
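(Aside: a quick way to confirm that a certificate and private key actually belong together before creating the secret is to compare their moduli. Shown here on a throwaway self-signed pair; in practice, point the two `openssl` calls at `cert.crt` and `privateKey.pem`.)

```shell
# Generate a throwaway self-signed pair just to demonstrate the comparison.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout demo.key -out demo.crt 2>/dev/null

# The modulus of the certificate and of the private key must be identical.
cert_mod=$(openssl x509 -noout -modulus -in demo.crt)
key_mod=$(openssl rsa -noout -modulus -in demo.key 2>/dev/null)
[ "$cert_mod" = "$key_mod" ] && echo "cert and key match"

rm -f demo.key demo.crt
```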

I then installed the stable/sonatype-nexus Helm chart with the following configuration:

nexusProxy:
  env:
    nexusDockerHost: containers.<<NEXUS_HOST>>
    nexusHttpHost: nexus.<<NEXUS_HOST>>

nexusBackup:
  enabled: true
  nexusAdminPassword: <<PASSWORD>>
  env:
    targetBucket: gs://<<BACKUP_BUCKET_NAME>>
  persistence:
    storageClass: standard

ingress:
  enabled: true
  path: /*
  annotations:
    kubernetes.io/ingress.allow-http: "true"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: gce
    kubernetes.io/ingress.global-static-ip-name: <<STATIC_IP_ADDRESS_NAME>>
  tls:
    enabled: true
    secretName: nexus-tls

persistence:
  storageClass: standard
  storageSize: 1024Gi

resources:
  requests:
    cpu: 250m
    memory: 4800Mi

by running the command:

helm install -f values.yaml stable/sonatype-nexus

The possible configuration values for this chart are documented here.

When I visit http://nexus.<<NEXUS_HOST>>, I am able to access the Nexus Repository. However, when I access https://nexus.<<NEXUS_HOST>>, I receive mixed content warnings, because HTTP resources are being served.

If I set the nexusProxy.env.enforceHttps environment variable to true, when I visit https://nexus.<<NEXUS_HOST>>, I get a response back which looks like:

HTTP access is disabled. Click here to browse Nexus securely: https://nexus.<<NEXUS_HOST>>.
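For reference, I set that flag alongside the other proxy environment variables in values.yaml:

```yaml
nexusProxy:
  env:
    nexusDockerHost: containers.<<NEXUS_HOST>>
    nexusHttpHost: nexus.<<NEXUS_HOST>>
    enforceHttps: true
```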

How can I ensure that Nexus is served securely? Have I made a configuration error, or does the issue lie elsewhere?

Upvotes: 2

Views: 1702

Answers (2)

jws

Reputation: 2764

For legacy reasons I must stand up Nexus on GKE. While this question doesn't directly state that it is on Google Cloud, the gs:// bucket and the ingress.class: gce annotation suggest it is, even though the older answer from Xuan Huy concerns AWS.

I had a heck of a time getting Nexus TLS to work on GKE, but I finally managed. Google Ingress resources are not the most stable: if you're iterating, they can wedge, and you may find finalizers unable to complete because they get stuck on L4 ILB cleanup. Things got so tangled in GCP with just innocent deploy-and-delete cycles that I had to trash projects and create new ones before I finally reached a working combination.

My Helm values.yaml has the following. Note that I am also using Terraform, so the ${variables} below are replaced by Terraform with my environment-specific settings before Helm runs.

service:
  type: ClusterIP
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"ports": {"8081":"sonatype-backendcfg"}}' 

ingress:
  ingressClassName: null      # on GCP, null this, and use annotations instead
  enabled: true
  hostPath: /                 # don't use /*, which is suggested in several places
  hostRepo: ${sonatype_dns_name}  # public facing FQDN
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: "${gce_ssl_cert_name}"
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.allow-http: "false"

    # unrelated hint - I use external-dns for DNS registration
    external-dns.alpha.kubernetes.io/hostname: "${sonatype_dns_name}."

  tls:
    - secretName: "${tls_secret_name}"
      hosts:
        - "${sonatype_cluster_dns_name}"  # the svc.cluster.local FQDN

Before running Helm, my installer places the TLS certs in the GCE cert store for the ILB to use.

Also before Helm runs, the ${tls_secret_name} Kubernetes secret is prepared with the certificate under the key names tls.crt and tls.key (many other apps use this pattern).
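For anyone reproducing this, that secret has the standard kubernetes.io/tls shape (a sketch; the name and namespace are placeholders from my setup, and the data values are base64-encoded PEM):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ${tls_secret_name}
  namespace: sonatype
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate PEM>
  tls.key: <base64-encoded private key PEM>
```

Running `kubectl create secret tls` with `--cert` and `--key` produces exactly these tls.crt / tls.key key names.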

I also have a backendconfig resource:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: sonatype-backendcfg
  namespace: sonatype
spec:
  healthCheck:
    checkIntervalSec: 30
    healthyThreshold: 1
    port: 8081
    requestPath: /service/rest/v1/status
    timeoutSec: 15
    type: HTTP
    unhealthyThreshold: 10

The folks at Nexus won't be supporting this scenario much longer, so we're working on moving to Harbor so we can cancel our Nexus license.

Upvotes: 1

Xuan Huy

Reputation: 21

If your ELB is serving plain HTTP traffic to the backend, add this annotation to your LoadBalancer Service:

service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
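A minimal sketch of where that annotation goes, assuming TLS is terminated at the ELB with an ACM certificate (the certificate ARN, Service name, and ports are placeholders, not from the question):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nexus
  annotations:
    # The ELB terminates TLS and forwards plain HTTP to the pod.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <<ACM_CERT_ARN>>
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 8081
```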

Upvotes: 0
