smk

Reputation: 5842

Jenkins slave JNLP4-connect connection timeout

I see this error in some of my Jenkins jobs:

Cannot contact jenkins-slave-l65p0-0f7m0: hudson.remoting.ChannelClosedException: Channel "unknown": Remote call on JNLP4-connect connection from 100.99.111.187/100.99.111.187:46776 failed. The channel is closing down or has closed down

I have a Jenkins master-slave setup.

On the slave, the following logs are found:

java.nio.channels.ClosedChannelException
    at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
    at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:142)
    at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:795)
    at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
    at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Jenkins runs on a Kubernetes cluster. The relevant configuration is:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  namespace: default
  name: jenkins-deployment
spec:
  serviceName: "jenkins-pod"
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-pod
    spec:
      initContainers:
      - name: volume-mount-hack
        image: busybox
        command: ["sh", "-c", "chmod -R 777 /usr/mnt"]
        volumeMounts:
        - name: jenkinsdir
          mountPath: /usr/mnt
      containers:
      - name: jenkins-container
        imagePullPolicy: Always
        readinessProbe:
          exec:
            command:
              - curl
              - http://localhost:8080/login
              - -o
              - /dev/null
        livenessProbe:
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 120
          periodSeconds: 10
        env:
        - name: JAVA_OPTS
          value: "-Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85"
        resources:
          requests:
            memory: "7100Mi"
            cpu: "2000m"
        ports:
        - name: http-port
          containerPort: 8080
        - name: jnlp-port
          containerPort: 50000
        volumeMounts:
          - mountPath: /var/run
            name: docker-sock
          - mountPath: /var/jenkins_home
            name: jenkinsdir
      volumes:
        - name: jenkinsdir
          persistentVolumeClaim:
            claimName: "jenkins-persistence"
        - name: docker-sock
          hostPath:
            path: /var/run
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: jenkins
  labels:
    app: jenkins
spec:
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    nodePort: 30099
    protocol: TCP
  selector:
    app: jenkins-pod
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  name: jenkins-external
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
  labels:
    app: jenkins
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP
  selector:
    app: jenkins-pod
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: jenkins-master-pdb
  namespace: default
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: jenkins-pod
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: jenkins-slave-pdb
  namespace: default
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      jenkins: slave
---
kind: Service
apiVersion: v1
metadata:
  name: jenkins-discovery
  namespace: default
  labels:
    app: jenkins
spec:
  selector:
    app: jenkins-pod
  ports:
    - protocol: TCP
      port: 50000
      targetPort: 50000
      name: slaves

I doubt this has anything to do with Kubernetes, but I am including it here anyway.

Upvotes: 4

Views: 15536

Answers (3)

Utsav Chokshi

Reputation: 1395

I am assuming you are using the Jenkins Kubernetes Plugin.

You can increase "Timeout in seconds for Jenkins connection" under the Kubernetes pod template; that may solve your issue.

Description of "Timeout in seconds for Jenkins connection":

Specify the time in seconds up to which Jenkins should wait for the JNLP agent to establish a connection. The value should be a positive integer; the default is 100.
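
For reference, if the cloud is managed with the Configuration as Code plugin rather than through the UI, the same setting can be pinned in YAML. This is only a sketch: the cloud name, template name/label, and the jenkinsTunnel value (pointing at the jenkins-discovery service from the question) are assumptions, and the exact key names should be verified against your plugin version's JCasC export.

jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        # route agent JNLP traffic through the jenkins-discovery service on port 50000
        jenkinsTunnel: "jenkins-discovery.default.svc.cluster.local:50000"
        templates:
          - name: "jenkins-slave"
            label: "jenkins-slave"
            # "Timeout in seconds for Jenkins connection"; the plugin default is 100
            slaveConnectTimeout: 300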

Upvotes: 2

Celine V.

Reputation: 11

Did you configure the JNLP port in Jenkins itself? It is located in Manage Jenkins > Configure Global Security > Agents. Click the "Fixed" radio button (since you already assigned a TCP port). Set the "TCP port for JNLP agents" to 50000.
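
If you drive Jenkins with the Configuration as Code plugin, the same fixed port can also be declared in YAML instead of the UI. A minimal sketch (assuming the JCasC plugin is installed; check the key name against your own JCasC export):

jenkins:
  # fixed TCP port for inbound (JNLP) agents, matching the jenkins-discovery service
  slaveAgentPort: 50000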

Upvotes: 1
