JTIM

Reputation: 2771

Pip upgrade cannot install packages

I have jumped around for some time now trying to solve this, and I cannot seem to get it working. I have a Docker container where I set up an NVIDIA image for machine learning and install all Python dependencies. I then start with the pip package installations and get the first error:

requests.exceptions.SSLError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max retries exceeded with url: /packages/5e/c4/6c4fe722df5343c33226f0b4e0bb042e4dc13483228b4718baf286f86d87/certifi-2020.6.20-py2.py3-none-any.whl (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))

Simple enough: I have a certificate to deal with Cisco Umbrella, and with it I can install all packages nice and easy. However, to install the newest packages I need to upgrade pip, and the upgrade itself works fine. After pip is upgraded to 20.2.3, I suddenly get an error again:

Could not fetch URL https://pypi.org/simple/pip/: There was a problem confirming the ssl certificate: HTTPSConnectionPool(host='pypi.org', port=443): Max retries exceeded with url: /simple/pip/ (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),)) - skipping

I have then googled around and tried the suggestions I stumbled upon:

Timing

I found that the system time was wrong - yet it had worked for the initial pip version, which was weird. However, correcting the time did not help the issue.

conf

I added a pip.conf file with global entries for trusted hosts and for the certificate. Still the same error persists.
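For reference, a pip.conf along those lines might look like this (the cert path is an assumption based on the Dockerfile below; adjust to wherever the certificate actually lives):

```ini
[global]
cert = /usr/local/share/ca-certificates/Cisco_Umbrella_Root_CA.crt
trusted-host = pypi.org
               files.pythonhosted.org
```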

pip install

I have tried different --trusted-host flags and also the --cert flag, which should already be picked up from the conf file - if I understand it correctly. Nevertheless, neither method worked.
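Spelled out, the flag-based attempt would look roughly like this (package name and cert path are placeholders, not the exact command used):

```shell
pip install \
    --cert /usr/local/share/ca-certificates/Cisco_Umbrella_Root_CA.crt \
    --trusted-host pypi.org \
    --trusted-host files.pythonhosted.org \
    <some-package>
```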

What to do

I am kind of at a loss right now. Installing the certificate in the container allows me to install packages with pip 9.0.1 (the system default), but after upgrading to pip 20.2.3 I cannot get it to work with any package. I have tried multiple pip versions - but as soon as I upgrade, I lose the certificate. I install it with

ADD Cisco_Umbrella_Root_CA.cer /usr/local/share/ca-certificates/Cisco_Umbrella_Root_CA.crt
RUN chmod 644 /usr/local/share/ca-certificates/Cisco_Umbrella_Root_CA.crt
RUN update-ca-certificates --fresh
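One quick sanity check after update-ca-certificates is counting the PEM blocks in the merged bundle (on the real system that would be /etc/ssl/certs/ca-certificates.crt); a self-contained sketch using a dummy two-cert bundle:

```shell
# Build a tiny dummy bundle standing in for /etc/ssl/certs/ca-certificates.crt
bundle=$(mktemp)
printf -- '-----BEGIN CERTIFICATE-----\nMIIB\n-----END CERTIFICATE-----\n' > "$bundle"
printf -- '-----BEGIN CERTIFICATE-----\nMIIC\n-----END CERTIFICATE-----\n' >> "$bundle"

# Each certificate in a PEM bundle starts with a BEGIN CERTIFICATE marker
count=$(grep -c 'BEGIN CERTIFICATE' "$bundle")
echo "$count certificates in bundle"

rm -f "$bundle"
```

If the count does not grow after adding the Umbrella root, the cert never made it into the bundle in the first place.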

Does anybody have an idea how this can happen?

UPDATE

Curl

 RUN curl -v -k -H "Host: files.pythonhosted.org" https://files.pythonhosted.org/packages/8a/fd/bbbc569f98f47813c50a116b539d97b3b17a86ac7a309f83b2022d26caf2/Pillow-6.2.2-cp36-cp36m-manylinux1_x86_64.whl
  ---> Running in ac095828b9ec
   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                  Dload  Upload   Total   Spent    Left  Speed
   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying ::ffff:146.112.56.166...
 * TCP_NODELAY set
 * Connected to files.pythonhosted.org (::ffff:146.112.56.166) port 443 (#0)
 * ALPN, offering h2
 * ALPN, offering http/1.1
 * successfully set certificate verify locations:
 *   CAfile: /etc/ssl/certs/ca-certificates.crt
   CApath: /etc/ssl/certs
 } [5 bytes data]
 * TLSv1.3 (OUT), TLS handshake, Client hello (1):
 } [512 bytes data]
 * TLSv1.3 (IN), TLS handshake, Server hello (2):
 { [85 bytes data]
 * TLSv1.2 (IN), TLS handshake, Certificate (11):
 { [3177 bytes data]
 * TLSv1.2 (IN), TLS handshake, Server finished (14):
 { [4 bytes data]
 * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
 } [262 bytes data]
 * TLSv1.2 (OUT), TLS change cipher, Client hello (1):
 } [1 bytes data]
 * TLSv1.2 (OUT), TLS handshake, Finished (20):
 } [16 bytes data]
 * TLSv1.2 (IN), TLS handshake, Finished (20):
 { [16 bytes data]
 * SSL connection using TLSv1.2 / AES256-GCM-SHA384
 * ALPN, server did not agree to a protocol

The last line shows that client and server do not agree on an ALPN protocol, and the communication fails.

Upvotes: 9

Views: 1286

Answers (4)

user12965285


This seems to be a problem with either your certificate (old or invalid) or your (probably outdated) pip version. The link below points to a conversation about the same (or a similar) problem. I hope this helps:

https://community.onion.io/topic/4014/problem-installing-packages-through-pip3-omega2/3

Upvotes: 0

Timothy c

Reputation: 811

For Docker build questions you REALLY need to show most of the Dockerfile.

The details above seem to indicate the Dockerfile would contain something like

FROM nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04
RUN set -ex \
  && apt update \
  && apt upgrade -y \
  && apt install -y curl python-pip \
  && pip install --upgrade pip setuptools

Without the Dockerfile there isn't a starting point, and the only answer that can be given is "you seem to have a network problem". When I tried the above, everything worked fine.

Using curl within the container, the SSL certificate I received was

* Server certificate:
*  subject: C=US; ST=California; L=San Francisco; O=Fastly, Inc; CN=r.ssl.fastly.net
*  start date: Jul 20 18:19:08 2020 GMT
*  expire date: Apr 28 19:20:25 2021 GMT
*  issuer: C=BE; O=GlobalSign nv-sa; CN=GlobalSign CloudSSL CA - SHA256 - G3

That cert is a stock one that most systems should have. You can use openssl to interpret the results.

Since you're adding Cisco_Umbrella_Root_CA.cer, you ARE going through a corporate proxy - see Cisco Umbrella Root Certificate - otherwise there would be no need to add that cert. The "tested it on my private PC without any issues" tells you that it's environmental.

You can always run docker run -it nvidia/cuda:10.0-cudnn7-runtime-ubuntu18.04 to get a shell in the container and then run the commands from the Dockerfile by hand. When things break, fall back to Linux troubleshooting - you're in an Ubuntu-like environment, after all.

Upvotes: 0

JTIM

Reputation: 2771

The steps suggested in the answer and in my question are definitely what one should try. If someone cannot get it working, like me: in this specific instance it was the IT organisation that had configured the traffic to be proxied through Umbrella, and it didn't support the SSL scanning/decryption.

Upvotes: 1

Tom Wojcik

Reputation: 6189

Some time ago I ran into a similar problem. The solution for me was to add the cert and install dependencies in one docker layer.

I don't know how your Dockerfile looks exactly, but I'd try something like this:

ADD Cisco_Umbrella_Root_CA.cer /usr/local/share/ca-certificates/Cisco_Umbrella_Root_CA.crt
RUN chmod 644 /usr/local/share/ca-certificates/Cisco_Umbrella_Root_CA.crt && \
    update-ca-certificates --fresh && \
    pip install --upgrade pip setuptools && \
    pip install -r production.txt && \
    rm /usr/local/share/ca-certificates/Cisco_Umbrella_Root_CA.crt  # for extra safety

For reference what I do:

RUN mkdir -p -m 0600 ~/.ssh/ && \
    ssh-keyscan <my host> >> ~/.ssh/known_hosts && \
    eval `ssh-agent -s` && \
    ssh-add <ssh key> && \
    echo "Installing packages from pip. It might take a few minutes..." && \
    pip install --upgrade pip setuptools && \
    pip install -r production.txt && \
    rm <ssh key>

Where <ssh key> has already been chmod 400'd in another layer.

Also, make sure to

  • apt update AND
  • apt install -y ca-certificates OR
  • apt upgrade
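As a Dockerfile fragment, those apt steps might look like this (base image is an assumption for illustration):

```dockerfile
FROM ubuntu:18.04
RUN apt update && \
    apt install -y ca-certificates && \
    apt upgrade -y
```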

Upvotes: 1
