Raviteja

Reputation: 11

Docker push failing with 'Failed to read stream: Unexpected EOF read on the socket' status '400'

We have an Artifactory test setup on AWS consisting of an AWS ELB and one EC2 instance acting as the Artifactory server. The setup runs Docker-based JFrog Artifactory.

We are facing the "Unexpected EOF read on the socket" error while pushing a particular image ("platform-image"), but we are able to push the same image to our production environment. The difference between the two environments is that the test environment uses an AWS ELB and a subdomain-type Docker registry, while production has no ELB and uses a port-binding Docker registry.
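For reference, the two push paths look roughly like this (the repository key "docker-local" and port 5001 below are placeholders for illustration, not our actual values):

    # test environment: subdomain method, the repo key is taken from the subdomain
    docker push docker-local.domain.com/platform-image:latest

    # production environment: port-binding method, the repo is mapped to a dedicated port
    docker push domain.com:5001/platform-image:latest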

2017-10-16 06:48:37,967 [http-nio-8081-exec-9] [ERROR] (o.a.a.c.r.ArtifactoryService:282) - Failed to read stream: Unexpected EOF read on the socket
org.jfrog.storage.binstore.common.ClientInputStreamException: Failed to read stream: Unexpected EOF read on the socket
    at org.jfrog.storage.binstore.common.ClientStream.read(ClientStream.java:36) ~[binary-store-core-1.0.2.jar:na]
    at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1792) ~[commons-io-2.4.jar:2.4]
    at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1769) ~[commons-io-2.4.jar:2.4]
    at org.apache.commons.io.IOUtils.copy(IOUtils.java:1744) ~[commons-io-2.4.jar:2.4]

2017-10-16 06:48:37,968 [http-nio-8081-exec-9] [WARN ] (o.j.r.d.v.r.h.DockerV2LocalRepoHandler:170) - Error uploading blob 'platform-image/_uploads/e205ae95-2e43-48c2-942e-a5aea659575c' got status '400' and message: 'Failed to read stream: Unexpected EOF read on the socket'

Please check and let us know what could be the issue here. We are not facing this issue with all images, and the same image that fails here can be pushed to the production instance without problems.

Note: the same image was pushed successfully once initially, but when we tried again after a few days it stopped working.

So we suspect this could be something in our configuration. Please let us know if you need any more details from us.

Image name: platform-image, size: 15.2 GB.

Our cluster details: AWS Classic ELB: 1, EC2 instance (i3.2xlarge): 1.

Nginx config:

## server configuration
server {
    listen 443 ssl;
    listen 80 ;
    server_name ~(?<repo>.+)\.domain.com domain.com;

    if ($http_x_forwarded_proto = '') {
        set $http_x_forwarded_proto  $scheme;
    }
    ## Application specific logs
    access_log /var/log/nginx/docker-access.log;
    error_log /var/log/nginx/docker-error.log;
    rewrite ^/$ /artifactory/webapp/ redirect;
    rewrite ^/artifactory/?(/webapp)?$ /artifactory/webapp/ redirect;
    rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2;
    chunked_transfer_encoding on;
    client_max_body_size 0;
    location /artifactory/ {
        proxy_read_timeout  3200;
        proxy_pass_header   Server;
        proxy_cookie_path   ~*^/.* /;
        proxy_pass          http://localhost:8081/artifactory/;
        proxy_set_header    X-Artifactory-Override-Base-Url $http_x_forwarded_proto://$host:$server_port/artifactory;
        proxy_set_header    X-Forwarded-Port  $server_port;
        proxy_set_header    X-Forwarded-Proto $http_x_forwarded_proto;
        proxy_set_header    Host              $http_host;
        proxy_set_header    X-Forwarded-For   $proxy_add_x_forwarded_for;
    }
}

Upvotes: 0

Views: 2457

Answers (1)

Raviteja

Reputation: 11

After investigation we found that the issue was caused by the ELB configuration. The ELB idle timeout was set to the default of 60 seconds, and Artifactory was not able to send a response back within that time, so the connection was being closed by the ELB. Increasing the idle timeout value resolved the issue.
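For anyone hitting the same thing, the idle timeout of a Classic ELB can be raised via the AWS CLI roughly like this (the load balancer name "my-test-elb" and the 3600-second value are just examples, adjust to your setup):

    aws elb modify-load-balancer-attributes \
        --load-balancer-name my-test-elb \
        --load-balancer-attributes "{\"ConnectionSettings\":{\"IdleTimeout\":3600}}"

It also helps to keep the Nginx proxy_read_timeout (3200 in our config above) at least as large as the ELB idle timeout, so long blob uploads are not cut off at either hop.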

The Artifactory log error was not clear about the timeout; it is fairly generic, which made us consider many possible causes.

Thank you.

Upvotes: 1
