ValueError: check_hostname requires server_hostname using Fiddler 4 - python-3.x

A recently posted question has some useful answers, but it isn't the same as mine. I'm running urllib3 1.26.4 and Python 3.7 from an ArcGIS Pro Notebook. I also have Fiddler 4 open because I want to track web traffic while troubleshooting a script. I only get the following error when Fiddler is open; if I close Fiddler I get <Response [200]>. Is it not possible to use the requests module with Fiddler open? I'm new to Fiddler.
Truncated script:
import requests
#url
idph_data = 'https://idph.illinois.gov/DPHPublicInformation/api/covidVaccine/getVaccineAdministrationCurrent'
#headers
headers = {'user-agent': 'Mozilla/5.0'}
response = requests.get(idph_data, headers=headers, verify=True)
Error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
In [35]:
Line 4: response = requests.get(idph_data,verify=True)
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\requests\api.py, in get:
Line 76: return request('get', url, params=params, **kwargs)
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\requests\api.py, in request:
Line 61: return session.request(method=method, url=url, **kwargs)
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\requests\sessions.py, in request:
Line 542: resp = self.send(prep, **send_kwargs)
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\requests\sessions.py, in send:
Line 655: r = adapter.send(request, **kwargs)
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\requests\adapters.py, in send:
Line 449: timeout=timeout
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\urllib3\connectionpool.py, in urlopen:
Line 696: self._prepare_proxy(conn)
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\urllib3\connectionpool.py, in _prepare_proxy:
Line 964: conn.connect()
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\urllib3\connection.py, in connect:
Line 359: conn = self._connect_tls_proxy(hostname, conn)
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\urllib3\connection.py, in _connect_tls_proxy:
Line 506: ssl_context=ssl_context,
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\urllib3\util\ssl_.py, in ssl_wrap_socket:
Line 432: ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\urllib3\util\ssl_.py, in _ssl_wrap_socket_impl:
Line 474: return ssl_context.wrap_socket(sock)
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\ssl.py, in wrap_socket:
Line 423: session=session
File C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\ssl.py, in _create:
Line 827: raise ValueError("check_hostname requires server_hostname")
ValueError: check_hostname requires server_hostname
---------------------------------------------------------------------------

I am running into this issue as well with the environment provided by the current version of ArcGIS Pro. Per a lower-rated answer in the question you linked, I ran pip install urllib3==1.25.11 in the desired environment (in my case, a clone of the default), and the issue appears to be resolved.
This is apparently due to a new feature in the urllib3 version shipped with ArcGIS Pro; the command above downgrades to a relatively recent, but working, version. The behavior will not change in newer versions of urllib3; instead, a pull request is currently pending to fix the underlying issue in Python itself.
By the way, while it's possible to configure pip to run through the Fiddler proxy, it's not easy, so it is best to turn off Fiddler while running any pip commands.
The pertinent bug report is found here. The issue appears to be a very old bug in how Windows system proxy settings are parsed by CPython's built-in urllib, which causes the proxy entry used for https URLs to always receive an https:// prefix (instead of http://). Newer versions of urllib3 actually support proxies over HTTPS, which was not previously the case. So before, urllib3 would ignore the prefix, but now it attempts to speak HTTPS to a proxy that only understands HTTP.
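To see the misparsing directly, you can ask the standard library what proxies it detected and, as a workaround sketch, hand requests explicit proxy URLs with an http:// scheme. The 127.0.0.1:8888 address below is Fiddler's default listening port and is an assumption here, not something from the original post:

```python
import urllib.request

# Inspect the proxy URLs CPython derives from the system (registry) settings.
# With Fiddler running, the 'https' entry may come back with an https://
# prefix even though Fiddler itself only speaks plain HTTP.
detected = urllib.request.getproxies()
print(detected)

# Workaround sketch: pass requests explicit proxies with an http:// scheme so
# urllib3 talks plain HTTP to the proxy. The address is Fiddler's default
# listening port and is an assumption.
proxies = {
    "http": "http://127.0.0.1:8888",
    "https": "http://127.0.0.1:8888",
}
# response = requests.get(idph_data, headers=headers, proxies=proxies)
```

This sidesteps the registry parsing entirely, since requests prefers an explicit `proxies` argument over the environment-derived settings.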

I've switched to requests v2.7.0 and I'm no longer receiving the error. Whether it was a version-specific issue with v2.25.1, which is what I had been using, I'm not sure; I haven't come across any evidence of that.
In a Windows command prompt in the same directory as my Python executable:
python -m pip install requests==2.7.0
Now if I run my original script with Fiddler capturing, I get an HTTP status of 200 and the script no longer gives me the error.

Install ssl certificates for discord.py in an app

I am making a Python-based Mac app that uses discord.py to do stuff with Discord. As I knew from previous experience making Discord bots, running them requires that you run Install Certificates.command in your version of Python. However, if another user uses this app, I don't want to require them to install Python. I took a snippet of code from Install Certificates.command, thinking it would put the certificate in the right place on a user's computer. However, a tester got this error running the app on their computer:
Traceback (most recent call last):
File "Interface.py", line 136, in <module>
File "installCerts.py", line 25, in installCerts
FileNotFoundError: [Errno 2] No such file or directory: '/Library/Frameworks/Python.framework/Versions/3.8/etc/openssl'
[2514] Failed to execute script 'Interface' due to unhandled exception: [Errno 2] No such file or directory: '/Library/Frameworks/Python.framework/Versions/3.8/etc/openssl'
[2514] Traceback:
Traceback (most recent call last):
File "Interface.py", line 136, in <module>
File "installCerts.py", line 25, in installCerts
FileNotFoundError: [Errno 2] No such file or directory: '/Library/Frameworks/Python.framework/Versions/3.8/etc/openssl'
It's pretty clear what this error is saying: they don't have Python 3.8 installed, so the script can't put the SSL certificates anywhere (the app runs in a Python 3.8 environment).
By the way, the path mentioned in the error is the directory name of the path given by ssl.get_default_verify_paths().openssl_cafile.
I'm not super well-versed in the finer points of web connections and stuff like that, so I don't know the exact role of these certificates. Here's my question:
Is it possible to get this to work without the user installing python on their computer?
I.e. can I add the SSL certificates to the app's local Python version (as far as I can tell, in my app, Python is simply a large bundled executable)? Is there somewhere deep in the file system where I can put the certificates to let the connection to Discord happen? Pretty much any solution would be appreciated.
Additional Info:
My Code to Install Certificates:
import os
import ssl
import stat

import certifi

STAT_0o775 = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR
              | stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP
              | stat.S_IROTH | stat.S_IXOTH)

openssl_dir, openssl_cafile = os.path.split(
    ssl.get_default_verify_paths().openssl_cafile)
os.chdir(openssl_dir)  # Error happens here
relpath_to_certifi_cafile = os.path.relpath(certifi.where())
print(" -- removing any existing file or link")
try:
    os.remove(openssl_cafile)
except FileNotFoundError:
    pass
print(" -- creating symlink to certifi certificate bundle")
os.symlink(relpath_to_certifi_cafile, openssl_cafile)
print(" -- setting permissions")
os.chmod(openssl_cafile, STAT_0o775)
print(" -- update complete")
The error that discord.py throws when the user doesn't have the correct certificates installed:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/aiohttp/connector.py", line 969, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs) # type: ignore # noqa
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/base_events.py", line 1050, in create_connection
transport, protocol = await self._create_connection_transport(
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/base_events.py", line 1080, in _create_connection_transport
await waiter
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/sslproto.py", line 529, in data_received
ssldata, appdata = self._sslpipe.feed_ssldata(data)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/asyncio/sslproto.py", line 189, in feed_ssldata
self._sslobj.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/ssl.py", line 944, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1125)
If you need more info, let me know.
OK. This was very tough, but I got to an answer after much research. ssl in Python is basically a set of bindings for OpenSSL. When you do import ssl, it builds an OpenSSL environment (I don't think I'm using exactly the right words here). As you can see, it was defaulting to the OpenSSL folder inside Python because, from Python's perspective, that is where OpenSSL keeps its certs. It turns out ssl.DefaultVerifyPaths objects have other attributes, namely cafile, and this is how I made the path to the cert whatever I wanted: when OpenSSL builds, it looks for an environment variable, SSL_CERT_FILE. As long as I set that variable with os.environ before I imported ssl, it worked, because ssl would then find the certificate. I simplified installCerts down to the following:
import os
import stat

import certifi

def installCerts():
    # The ssl build needs to happen after the environment variable is set
    os.environ['SSL_CERT_FILE'] = certifi.where()
    import ssl

    STAT_0o775 = (stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR
                  | stat.S_IRGRP | stat.S_IWGRP | stat.S_IXGRP
                  | stat.S_IROTH | stat.S_IXOTH)
    cafile = ssl.get_default_verify_paths().cafile
    os.chmod(cafile, STAT_0o775)
And it seems to work fine on other people's computers now without them needing to install python.
This question helped me:
How to change the 'cafile' argument in the ssl module in Python3?
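The environment-variable trick above can be sanity-checked in isolation. A minimal sketch, assuming certifi is installed:

```python
import os

import certifi

# ssl.get_default_verify_paths() consults SSL_CERT_FILE each time it is
# called, so pointing the variable at certifi's bundle redirects the
# default cafile that verification will use.
os.environ["SSL_CERT_FILE"] = certifi.where()

import ssl  # imported after the variable is set, as in the answer above

paths = ssl.get_default_verify_paths()
print(paths.cafile)
```

If the variable is set and the file exists, `paths.cafile` should point at certifi's bundle rather than the missing framework path from the traceback.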

Does POSTMAN support http2 request

I am trying to send an HTTP/2 request using Postman. However, when my server receives the request, it gives this error:
handle: <Handle _SelectorSocketTransport._read_ready()>
Traceback (most recent call last):
File "/usr/lib64/python3.6/asyncio/events.py", line 145, in _run
self._callback(*self._args)
File "/usr/lib64/python3.6/asyncio/selector_events.py", line 721, in _read_ready
self._protocol.data_received(data)
File "/home/deesharm/jetconf/jetconf/jetconf/rest_server.py", line 76, in data_received
events = self.conn.receive_data(data)
File "/home/deesharm/jetconf/venv/lib/python3.6/site-packages/h2/connection.py", line 1448, in receive_data
.. versionchanged:: 2.0.0
File "/home/deesharm/jetconf/venv/lib/python3.6/site-packages/h2/frame_buffer.py", line 52, in add_data
raise ProtocolError("Invalid HTTP/2 preamble.")
h2.exceptions.ProtocolError: Invalid HTTP/2 preamble.
Currently, Postman doesn't support HTTP/2.
https://github.com/postmanlabs/postman-app-support/issues/2701
As of late 2022, Postman still does not support HTTP/2.
A workaround is to click the "Code" icon (it looks like </>) to generate the cURL command, add the --http2 command-line flag, then copy/paste it into a terminal.
This works well in any macOS/Linux terminal as well as WSL2 on Windows. You can also pass the --verbose flag to cURL to confirm that HTTP/2 is actually being used.
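For example, a Postman-generated cURL command amended for HTTP/2 might look like this (the URL, header, and body are placeholders, not from the original question):

```shell
# cURL command as generated by Postman's "Code" panel, with the --http2
# flag added by hand; --verbose prints the negotiated protocol so you can
# confirm HTTP/2 was used.
curl --http2 --verbose \
  --header "Content-Type: application/json" \
  --data '{"hello": "world"}' \
  https://example.com/api/resource
```

With --verbose, look for a line such as "using HTTP/2" in the output to confirm the upgrade succeeded.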
I created a GUI tool that you can use to send HTTP requests, with support for HTTP/2 and HTTP/3. It has full compatibility with existing Postman collections and environments.
https://github.com/alexandrehtrb/Pororoca

SSL verification for registry.gitlab.com via httplib2 fails

I use Bazel to publish Docker images to the GitLab registry. Last week, the Bazel commands started failing. I was able to narrow the issue down to httplib2.
The code sample below can be used to reproduce the issue.
import httplib
import httplib2
conn = httplib.HTTPSConnection("registry.gitlab.com")
conn.request("GET", "/")
r1 = conn.getresponse()
print r1.status, r1.reason
httplib2.Http().request('https://registry.gitlab.com')
The output for the above is:
200 OK
Traceback (most recent call last):
File "deleteMe.py", line 9, in <module>
httplib2.Http().request('https://registry.gitlab.com')
File "/Users/joint/Library/Python/2.7/lib/python/site-packages/httplib2/__init__.py", line 2135, in request
cachekey,
File "/Users/joint/Library/Python/2.7/lib/python/site-packages/httplib2/__init__.py", line 1796, in _request
conn, request_uri, method, body, headers
File "/Users/joint/Library/Python/2.7/lib/python/site-packages/httplib2/__init__.py", line 1701, in _conn_request
conn.connect()
File "/Users/joint/Library/Python/2.7/lib/python/site-packages/httplib2/__init__.py", line 1411, in connect
raise SSLHandshakeError(e)
httplib2.SSLHandshakeError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726)
Error shown in Wireshark is 'Description: Unknown CA (48)'
I have tried verifying the gitlab certs via openssl and I don't see any issue with them.
I have tried specifying the gitlab cert in httplib2 definition but I get the same error.
h = httplib2.Http(ca_certs='./registrygitlabcom.crt')
h.request('https://registry.gitlab.com')
Any pointers on what I should be doing or trying out... thanks!
I think I have figured out the answer. Posting it here for anyone else who might run into this.
The root certificates used by httplib2 are coming from the cacerts.txt file.
(https://github.com/httplib2/httplib2/blob/master/python2/httplib2/cacerts.txt)
registry.gitlab.com probably switched its root CA last week, and that triggered the problem.
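Rather than pinning a single server certificate as in the ca_certs attempt above, a more durable fix is to verify against a maintained CA bundle such as certifi's. A sketch, assuming certifi is installed (the httplib2 call at the end is hypothetical usage and not run here):

```python
import ssl

import certifi

# Build a verification context from certifi's regularly updated CA bundle
# instead of httplib2's bundled cacerts.txt.
ctx = ssl.create_default_context(cafile=certifi.where())
print(ctx.cert_store_stats())  # counts of loaded CA certificates

# The equivalent idea with httplib2 itself (hypothetical usage):
# h = httplib2.Http(ca_certs=certifi.where())
# h.request('https://registry.gitlab.com')
```

Because certifi tracks the Mozilla root store, a root-CA rotation on the server side is picked up by upgrading one package instead of patching a vendored cacerts.txt.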

How to avoid this ssl.SSLError, or simply ignore?

The program should allow running several HTTPS GET requests with one aiohttp.ClientSession, as the documentation suggests. It is intended to run a Telegram bot.
I was not able to catch the exception with try ... except, so the program hangs when exiting. During extended sessions the error is printed in the command window (but not in the error log).
SSL error in data received
protocol: <asyncio.sslproto.SSLProtocol object at 0x0000016A581E4400>
transport: <_SelectorSocketTransport fd=644 read=polling write=<idle, bufsize=0>>
Traceback (most recent call last):
File "C:\Users\annet\Anaconda3\lib\asyncio\sslproto.py", line 526, in data_received
ssldata, appdata = self._sslpipe.feed_ssldata(data)
File "C:\Users\annet\Anaconda3\lib\asyncio\sslproto.py", line 207, in feed_ssldata
self._sslobj.unwrap()
File "C:\Users\annet\Anaconda3\lib\ssl.py", line 767, in unwrap
return self._sslobj.shutdown()
ssl.SSLError: [SSL: KRB5_S_INIT] application data after close notify (_ssl.c:2592)
^C
As the error information is very unspecific, I could not isolate the source or produce a short snippet that reproduces the error.
A sample code is on github under https://github.com/fhag/telegram2.git
In order to run the code you will need an API token from telegram of your own bot.
This error showed up the first time when I upgraded to python 3.7.1.
Python is running on Windows 10.

Mercurial largefiles not working on Windows Server 2008

I'm trying to get the largefiles extension working on a mercurial server under Windows Server 2008 / IIS 7.5 with the hgweb.wsgi script.
When I clone a repo with largefiles locally (but using https://domain/, not a file system path) everything gets cloned fine, but when I try it on a different machine I get abort: remotestore: largefile XXXXX is missing
Here's the verbose output:
requesting all changes
adding changesets
adding manifests
adding file changes
added 1 changesets with 177 changes to 177 files
calling hook changegroup.lfiles: <function checkrequireslfiles at 0x0000000002E00358>
updating to branch default
resolving manifests
getting .hglf/path/to.file
...
177 files updated, 0 files merged, 0 files removed, 0 files unresolved
getting changed largefiles
getting path/to.file:c0c81df934cd72ca980dd156984fa15987e3881d
abort: remotestore: largefile c0c81df934cd72ca980dd156984fa15987e3881d is missing
Both machines have the extension working. I've tried disabling the firewall but that didn't help. Do I have to do anything to set up the extension besides adding it to mercurial.ini?
Edit: If I delete the files from the server's AppData\Local\largefiles\ directory, I get the same error when cloning on the server, unless I use a filesystem path to clone, in which case the files are added back to AppData\Local\largefiles\.
Edit 2: Here's the debug output and traceback:
177 files updated, 0 files merged, 0 files removed, 0 files unresolved
getting changed largefiles
using http://domain
sending capabilities command
getting largefiles: 0/75 lfile (0.00%)
getting path/to.file:64f2c341fb3b1adc7caec0dc9c51a97e51ca6034
sending statlfile command
Traceback (most recent call last):
File "mercurial\dispatch.pyo", line 87, in _runcatch
File "mercurial\dispatch.pyo", line 685, in _dispatch
File "mercurial\dispatch.pyo", line 467, in runcommand
File "mercurial\dispatch.pyo", line 775, in _runcommand
File "mercurial\dispatch.pyo", line 746, in checkargs
File "mercurial\dispatch.pyo", line 682, in <lambda>
File "mercurial\util.pyo", line 463, in check
File "mercurial\commands.pyo", line 1167, in clone
File "mercurial\hg.pyo", line 400, in clone
File "mercurial\extensions.pyo", line 184, in wrap
File "hgext\largefiles\overrides.pyo", line 629, in hgupdate
File "hgext\largefiles\lfcommands.pyo", line 416, in updatelfiles
File "hgext\largefiles\lfcommands.pyo", line 398, in cachelfiles
File "hgext\largefiles\basestore.pyo", line 80, in get
File "hgext\largefiles\remotestore.pyo", line 56, in _getfile
Abort: remotestore: largefile 64f2c341fb3b1adc7caec0dc9c51a97e51ca6034 is missing
The _getfile function throws an exception because the statlfile command returns that the file wasn't found.
I've never used python myself, so I don't know what I'm doing while trying to debug this :D
AFAIK the statlfile command gets executed on the server so I can't debug it from my local machine. I've tried running python -m win32traceutil on the server, but it doesn't show anything. I also tried setting accesslog and errorlog in the server's mercurial config file, but it doesn't generate them.
I run hg through the hgweb.wsgi script, and I have no idea if/how I can get into the python debugger using that, but if I could get the debugger running on the server I could narrow down the problem...
Finally figured it out: the extension tries to write temporary files to %windir%\System32\config\systemprofile\AppData\Local, which was causing permission errors. The call was wrapped in a try/except block that ended up reporting the "file not found" error instead.
I'm just posting this for anyone else coming into the thread from a search.
There's currently an issue using the largefiles extension in the mercurial python module when hosted via IIS. See this post if you're encountering issues pushing large changesets (or large files) to IIS via TortoiseHg.
The problem ultimately turns out to be a bug in SSL processing introduced in Python 2.7.3 (probably explaining why there are so many unresolved posts of people looking into problems with Mercurial). Rolling back to Python 2.7.2 let me get a little further (blocked at 30 MB pushes instead of 15 MB), but to properly solve the problem I had to install the IISCrypto utility and completely disable transfers over SSLv2.
