Reputation: 5721
I'm working on a simple script that involves CAS, jspring security check, redirection, etc. I would like to use Kenneth Reitz's python requests because it's a great piece of work! However, CAS requires getting validated via SSL, so I have to get past that step first. I don't know what Python requests wants. Where is this SSL certificate supposed to reside?
Traceback (most recent call last):
File "./test.py", line 24, in <module>
response = requests.get(url1, headers=headers)
File "build/bdist.linux-x86_64/egg/requests/api.py", line 52, in get
File "build/bdist.linux-x86_64/egg/requests/api.py", line 40, in request
File "build/bdist.linux-x86_64/egg/requests/sessions.py", line 209, in request
File "build/bdist.linux-x86_64/egg/requests/models.py", line 624, in send
File "build/bdist.linux-x86_64/egg/requests/models.py", line 300, in _build_response
File "build/bdist.linux-x86_64/egg/requests/models.py", line 611, in send
requests.exceptions.SSLError: [Errno 1] _ssl.c:503: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
Upvotes: 544
Views: 1287099
Reputation: 669
Everything here failed for me. My company uses Z-Scaler and recently activated some extra protection that resulted in python packages such as plantweb
failing with this error:
SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate
The problem was that Z-Scaler is using its own certificate, so I needed to get that file from IT and tell python to use it.
export REQUESTS_CA_BUNDLE="/path/to/zscaler.crt"
plantweb --format png some_file.puml
However, this introduced another problem: Some other code I had started failing with the same SSL error because it needed to use the standard certificates from python's certifi
package.
The final solution that worked for me was to combine the normal certificates and the Z-Scaler one into a single certificate and tell python to use that:
pip3 install certifi
normal_python_cert="$(python3 -m certifi)"
cat "$normal_python_cert" /path/to/zscaler.crt > combined.crt
export REQUESTS_CA_BUNDLE="$PWD/combined.crt"
plantweb --format png some_file.puml
Super useful reference: https://help.zscaler.com/zia/adding-custom-certificate-application-specific-trust-store#python
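The same combination can also be done from Python at runtime; a minimal sketch, assuming the Z-Scaler certificate lives at the hypothetical path below:
import os
import certifi
# Hypothetical path to the corporate certificate obtained from IT.
corporate_cert = "/path/to/zscaler.crt"
# Concatenate certifi's bundle and the corporate cert into one file.
with open(certifi.where()) as bundle, open(corporate_cert) as extra, open("combined.crt", "w") as out:
    out.write(bundle.read() + "\n" + extra.read())
# Anything in this process that honours REQUESTS_CA_BUNDLE will now use the combined bundle.
os.environ["REQUESTS_CA_BUNDLE"] = os.path.abspath("combined.crt")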
Upvotes: 4
Reputation: 1295
If you want to remove the warnings, disable them via urllib3 and pass verify=False to requests.get() or requests.post():
import urllib3
import requests
urllib3.disable_warnings()
requests.get(url, verify=False)
Upvotes: 28
Reputation: 38
When it says verify takes a 'path to certificate', I pointed it to the issuer certificate so that it could use that to verify the URL's certificate. curl and wget were fine with that certificate, but not python requests.
I had to create a certificate chain with all the certificates from the end (leaf) certificate to the root for python requests to process it, and the chain naturally works fine with cURL and Wget too.
Hope it helps someone and saves a few hours.
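A minimal sketch of building such a chain file from Python, assuming the individual certificates were already saved locally under hypothetical names:
import requests
# Hypothetical filenames; concatenate leaf -> intermediate -> root into one PEM file.
chain_parts = ["leaf.pem", "intermediate.pem", "root.pem"]
with open("full_chain.pem", "w") as chain:
    for part in chain_parts:
        with open(part) as f:
            chain.write(f.read().strip() + "\n")
# requests can now validate the server against the full chain.
response = requests.get("https://example.com", verify="full_chain.pem")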
Upvotes: 0
Reputation: 32125
From requests documentation on SSL verification:
Requests can verify SSL certificates for HTTPS requests, just like a web browser. To check a host’s SSL certificate, you can use the verify argument:
>>> requests.get('https://kennethreitz.com', verify=True)
If you don't want to verify your SSL certificate, pass verify=False.
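For example (this disables certificate validation entirely, so only use it for testing):
import requests
requests.get('https://kennethreitz.com', verify=False)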
Upvotes: 140
Reputation: 1106
I think you have several ways to fix this issue. I have listed 5 of them below:
1. Pass an SSL context built from certifi's CA bundle:
import certifi
import ssl
import urllib.request
context = ssl.create_default_context(cafile=certifi.where())
result = urllib.request.urlopen('https://www.example.com', context=context)
2. Point the certificate environment variables at certifi's bundle:
import os
import certifi
import urllib.request
os.environ["REQUESTS_CA_BUNDLE"] = certifi.where()
os.environ["SSL_CERT_FILE"] = certifi.where()
result = urllib.request.urlopen('https://www.example.com')
3. Override the default HTTPS context:
import certifi
import ssl
import urllib.request
ssl._create_default_https_context = lambda *args, **kwargs: ssl.create_default_context(cafile=certifi.where())
result = urllib.request.urlopen('https://www.example.com')
4. Refresh the system CA certificates (Debian/Ubuntu):
$ sudo update-ca-certificates --fresh
$ export SSL_CERT_DIR=/etc/ssl/certs
5. Run the certificate installer bundled with the python.org macOS build:
$ cd "/Applications/$(python3 --version | awk '{print $2}'| awk -F. '{print "Python " $1"."$2}')"
$ sudo "./Install Certificates.command"
Upvotes: 3
Reputation: 726
Some servers do not have the trusted root certificate for Let's Encrypt.
For example, assume the server pointed to by the URL below is protected by a Let's Encrypt SSL certificate.
requests.post(url, json=data)
This request can fail with [SSL: CERTIFICATE_VERIFY_FAILED] because the requesting machine does not have the root certificate for Let's Encrypt.
When this happens, download the active root 'pem' certificate from the link below.
https://letsencrypt.org/certificates/ (Active ISRG Root X1 as of this writing)
Now, use that in the verify parameter as follows.
requests.post(url, json=data, verify='path-to/isrgrootx1.pem')
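As a rough sketch of the full flow (the download URL and the endpoint/payload below are placeholders; confirm the actual link on the Let's Encrypt certificates page above):
import requests
# Assumed direct link to the ISRG Root X1 PEM; check the certificates page for the current one.
root_url = "https://letsencrypt.org/certs/isrgrootx1.pem"
with open("isrgrootx1.pem", "wb") as f:
    f.write(requests.get(root_url).content)
# Placeholder endpoint and payload; validation now uses the downloaded root certificate.
url = "https://example.com/api"
data = {"key": "value"}
requests.post(url, json=data, verify="isrgrootx1.pem")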
Upvotes: 3
Reputation: 129
I found this answer which fixed it:
import ssl
import certifi
import urllib.request
url = "https://www.google.com/"
html = urllib.request.urlopen(url, context=ssl.create_default_context(cafile=certifi.where()))
I have no idea what it does, though.
Upvotes: 0
Reputation: 3356
In case you have a library that relies on requests and you cannot modify the verify path (like with pyvmomi), then you'll have to find the cacert.pem bundled with requests and append your CA there. Here's a generic approach to find the cacert.pem location:
Windows:
C:\>python -c "import requests; print requests.certs.where()"
c:\Python27\lib\site-packages\requests-2.8.1-py2.7.egg\requests\cacert.pem
Linux:
# (py2.7.5,requests 2.7.0, verify not enforced)
root@host:~/# python -c "import requests; print requests.certs.where()"
/usr/lib/python2.7/dist-packages/certifi/cacert.pem
# (py2.7.10, verify enforced)
root@host:~/# python -c "import requests; print requests.certs.where()"
/usr/local/lib/python2.7/dist-packages/requests/cacert.pem
By the way, @requests-devs, bundling your own cacerts with requests is really, really annoying... especially the fact that you do not seem to use the system CA store first, and this is not documented anywhere.
Update
In situations where you're using a library and have no control over the CA bundle location, you could also explicitly set the CA bundle location to be your host-wide CA bundle:
REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-bundle.crt python -c "import requests; requests.get('https://somesite.com')"
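If you do go the append route, here is a minimal sketch of doing it programmatically (my_ca.pem is a placeholder filename; note the bundle gets overwritten whenever requests/certifi is upgraded, so the REQUESTS_CA_BUNDLE approach is usually more durable):
import requests.certs
# Locate the cacert.pem that requests actually uses and append a custom CA to it.
bundle = requests.certs.where()
with open("my_ca.pem") as ca, open(bundle, "a") as out:
    out.write("\n" + ca.read())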
Upvotes: 40
Reputation: 804
This is just another way you can try to solve the issue.
If you put "www.example.com", requests shouts at you. If you put "https://www.example.com", you get this error. So if you DO NOT NEED HTTPS, you can avoid the error by changing "https" to "http", e.g. "http://www.example.com".
WARNING: Not using HTTPS is generally not a good idea. See Why HTTPS for Everything? and Why HTTPS matters.
Upvotes: -2
Reputation: 524
As pointed out by others, this problem "is caused by an untrusted SSL certificate". My answer is based on the top-rated answer and this answer.
You can test the certificate using curl:
curl -vvI https://example.com
If an error is returned, you have 3 options:
1. Disable verification (insecure): requests.get('https://example.com', verify=False)
2. Point verify at a certificate file: requests.get('https://example.com', verify='/path/to/certfile')
3. Fix the certificate itself (which was my case, as explained below).
My problem was that I was using only my site's certificate, not the intermediate (a.k.a. chain) certificate.
If you are using Let's Encrypt, you should use the fullchain.pem file, not cert.pem.
Upvotes: 29
Reputation: 10732
The problem you are having is caused by an untrusted SSL certificate.
Like @dirk mentioned in a previous comment, the quickest fix is setting verify=False:
requests.get('https://example.com', verify=False)
Please note that this will cause the certificate not to be verified. This will expose your application to security risks, such as man-in-the-middle attacks.
Of course, apply judgment. As mentioned in the comments, this may be acceptable for quick/throwaway applications/scripts, but really should not go to production software.
If just skipping the certificate check is not acceptable in your particular context, your best option is to set the verify parameter to a string that is the path of the .pem file of the certificate (which you should obtain by some sort of secure means).
As of version 2.0, the verify parameter accepts the following values, with their respective semantics:
- True: causes the certificate to be validated against the library's own trusted certificate authorities. (Note: you can see which root certificates Requests uses via the Certifi library, a trust database of root certificates extracted from Requests: Certifi - Trust Database for Humans.)
- False: bypasses certificate validation completely.
- The path to a CA bundle (.pem) file: causes the certificate to be validated against the certificates in that bundle.
Source: Requests - SSL Cert Verification
Also take a look at the cert
parameter on the same link.
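A small illustration of both parameters, with placeholder paths (a sketch, not taken from the linked docs):
import requests
# Validate the server against a custom CA bundle (placeholder path).
requests.get('https://example.com', verify='/path/to/ca-bundle.pem')
# Additionally present a client-side certificate (placeholder paths).
requests.get('https://example.com', cert=('/path/to/client.crt', '/path/to/client.key'))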
Upvotes: 714
Reputation: 8823
This is similar to @rafael-almeida's answer, but I want to point out that as of requests 2.11+, there are not 3 values that verify can take, there are actually 4:
- True: validates against requests's internal trusted CAs.
- False: bypasses certificate validation completely. (Not recommended)
- Path to a CA bundle file: validates against the certificates in that file.
- Path to a directory containing certificate files: validates against those certificates.
The rest of my answer is about #4, how to use a directory containing certificates to validate:
Obtain the public certificates needed and place them in a directory.
Strictly speaking, you probably "should" use an out-of-band method of obtaining the certificates, but you could also just download them using any browser.
If the server uses a certificate chain, be sure to obtain every single certificate in the chain.
According to the requests documentation, the directory containing the certificates must first be processed with the "rehash" utility (openssl rehash
).
(This requires openssl 1.1.1+, and not all Windows openssl implementations support rehash. If openssl rehash
won't work for you, you could try running the rehash ruby script at https://github.com/ruby/openssl/blob/master/sample/c_rehash.rb , though I haven't tried this. )
I had some trouble with getting requests to recognize my certificates, but after I used the openssl x509 -outform PEM
command to convert the certs to Base64 .pem
format, everything worked perfectly.
You can also just do lazy rehashing:
import subprocess
import requests

def fetch(url, auth):
    try:
        # Works as long as the server's certificates are already trusted by the default bundle.
        return requests.get(url, auth=auth, verify=True)
    except requests.exceptions.SSLError:
        # Rehash the local certificate directory, then retry using it as the trust store.
        subprocess.run("openssl rehash -compat -v my_certs_dir", shell=True, check=True)
        return requests.get(url, auth=auth, verify="my_certs_dir")
Upvotes: 4
Reputation: 1249
In my case the reason was fairly trivial.
I knew that the SSL verification had worked until a few days earlier, and it was in fact working on a different machine.
My next step was to compare the certificate contents and size between the machine on which verification was working and the one on which it was not.
This quickly led to me determining that the certificate on the 'incorrectly' working machine was not good, and once I replaced it with the 'good' cert, everything was fine.
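A quick way to do that comparison is to fingerprint the bundle on each machine (a sketch; the bundle path below is an assumption and varies by distribution):
import hashlib
# Print a SHA-256 fingerprint of the CA bundle so it can be diffed across machines.
with open("/etc/ssl/certs/ca-certificates.crt", "rb") as f:
    print(hashlib.sha256(f.read()).hexdigest())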
Upvotes: 1
Reputation: 13993
Too late to the party, I guess, but I wanted to paste the fix for fellow wanderers like myself! The following worked for me on Python 3.7.x.
Type the following in your terminal
pip install --upgrade certifi # hold your breath..
Try running your script/requests again and see if it works (I'm sure it won't be fixed yet!). If it didn't work, then try running the following command in the terminal directly:
open /Applications/Python\ 3.6/Install\ Certificates.command # please replace 3.6 here with your suitable python version
Upvotes: 6
Reputation: 7820
If the request calls are buried somewhere deep in the code and you do not want to install the server certificate, then, for debugging purposes only, it's possible to monkeypatch requests:
import requests.api
import warnings

def requestspatch(method, url, **kwargs):
    # Force verify=False on every call routed through requests.api.request.
    kwargs['verify'] = False
    return _origcall(method, url, **kwargs)

_origcall = requests.api.request
requests.api.request = requestspatch
warnings.warn('Patched requests: SSL verification disabled!')
Never use in production!
Upvotes: 6
Reputation: 2469
I had to upgrade from Python 3.4.0 to 3.4.6
pyenv virtualenv 3.4.6 myvenv
pyenv activate myvenv
pip install -r requirements.txt
Upvotes: 0
Reputation: 4130
$ pip install -U requests[security]
When this question was opened (2012-05), the Requests version was 0.13.1. In version 2.4.1 (2014-09) the "security" extras were introduced, using the certifi package if available.
Right now (2016-09) the main version is 2.11.1, which works fine without verify=False. There is no need to use requests.get(url, verify=False) if requests was installed with the requests[security] extras.
Upvotes: 51
Reputation: 682
After hours of debugging I could only get this to work using the following packages:
requests[security]==2.7.0 # not 2.18.1
cryptography==1.9 # not 2.0
using OpenSSL 1.0.2g 1 Mar 2016
Without these packages, verify=False was not working.
I hope this helps someone.
Upvotes: 9
Reputation: 2863
I was having a similar or the same certificate validation problem. I read that OpenSSL versions less than 1.0.2, which requests depends upon, sometimes have trouble validating strong certificates (see here). CentOS 7 seems to use 1.0.1e, which seems to have the problem.
I wasn't sure how to get around this problem on CentOS, so I decided to allow weaker 1024-bit CA certificates.
import requests
import certifi  # This should already be installed as a dependency of 'requests'
requests.get("https://example.com", verify=certifi.old_where())
Upvotes: 0
Reputation: 199
As mentioned by @Rafael Almeida, the problem you are having is caused by an untrusted SSL certificate. In my case, the SSL certificate was untrusted by my server. To get around this without compromising security, I downloaded the certificate, and installed it on the server (by simply double clicking on the .crt file and then Install Certificate...).
Upvotes: 1
Reputation: 1312
There is currently an issue in the requests module causing this error, present in v2.6.2 to v2.12.4 (ATOW): https://github.com/kennethreitz/requests/issues/2573
The workaround for this issue is to add the following line:
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS = 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS'
Upvotes: 2
Reputation: 727
I have found a specific approach for solving a similar issue. The idea is to point to the cacert file stored on the system and used by other SSL-based applications.
In Debian (I'm not sure if it's the same in other distributions) the certificate files (.pem) are stored at /etc/ssl/certs/.
So, this is the code that works for me:
import requests
verify = '/etc/ssl/certs/cacert.org.pem'
response = requests.get('https://lists.cacert.org', verify=verify)
To figure out which pem file to choose, I browsed to the URL and checked which Certificate Authority (CA) had generated the certificate.
EDIT: if you cannot edit the code (because you are running a third-party app), you can try to add the pem certificate directly into /usr/local/lib/python2.7/dist-packages/requests/cacert.pem (e.g. by copying it to the end of the file).
Upvotes: 15
Reputation: 771
I encountered the same SSL certificate verification failure when using AWS boto3. By reviewing the boto3 code, I found that REQUESTS_CA_BUNDLE is not set, so I fixed the issue by setting it manually:
from boto3.session import Session
import os

# Debian
os.environ['REQUESTS_CA_BUNDLE'] = os.path.join(
    '/etc/ssl/certs/',
    'ca-certificates.crt')
# CentOS: use 'ca-bundle.crt' instead
For aws-cli, I guess setting REQUESTS_CA_BUNDLE in ~/.bashrc
will fix this issue (not tested because my aws-cli works without it).
REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt # ca-bundle.crt
export REQUESTS_CA_BUNDLE
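Alternatively, boto3 clients accept a verify argument directly; a minimal sketch, assuming the Debian bundle path:
import boto3
# Point this client at the system CA bundle instead of relying on the environment variable.
s3 = boto3.client('s3', verify='/etc/ssl/certs/ca-certificates.crt')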
Upvotes: 77
Reputation: 8230
I fought this problem for HOURS.
I tried to update requests. Then I updated certifi. I pointed verify to certifi.where() (the code does this by default anyway). Nothing worked.
Finally, I updated my version of Python to 2.7.11. I was on Python 2.7.5, which had some incompatibilities with the way that certificates are verified. Once I updated Python (and a handful of other dependencies), it started working.
Upvotes: 3
Reputation: 399
I faced the same problem using gspread, and these commands worked for me:
sudo pip uninstall -y certifi
sudo pip install certifi==2015.04.28
Upvotes: 17
Reputation: 151
If you don't care about the certificate, just use verify=False.
import requests
url = "Write your url here"
returnResponse = requests.get(url, verify=False)
Upvotes: 10
Reputation: 593
It is not feasible to add options if requests is being called from another package. In that case, adding certificates to the cacert bundle is the straightforward path, e.g. I had to add "StartCom Class 1 Primary Intermediate Server CA", for which I downloaded the root cert into StartComClass1.pem. Given that my virtualenv is named caldav, I added the certificate with:
cat StartComClass1.pem >> .virtualenvs/caldav/lib/python2.7/site-packages/pip/_vendor/requests/cacert.pem
cat temp/StartComClass1.pem >> .virtualenvs/caldav/lib/python2.7/site-packages/requests/cacert.pem
One of those might be enough; I did not check.
Upvotes: 0
Reputation: 186
I ran into the same issue. It turns out I hadn't installed the intermediate certificate on my server (just append it to the bottom of your certificate, as described at the link below).
https://www.digicert.com/ssl-support/pem-ssl-creation.htm
Make sure you have the ca-certificates package installed:
sudo apt-get install ca-certificates
Updating the time may also resolve this:
sudo apt-get install ntpdate
sudo ntpdate -u ntp.ubuntu.com
If you're using a self-signed certificate, you'll probably have to add it to your system manually.
Upvotes: 6
Reputation: 414745
You can pass the name of the CA file to use via verify:
cafile = 'cacert.pem' # http://curl.haxx.se/ca/cacert.pem
r = requests.get(url, verify=cafile)
If you use verify=True, then requests uses its own CA set, which might not include the CA that signed your server certificate.
Upvotes: 67