socket.gaierror: [Errno -3] Temporary failure in name resolution - python-3.x

I am calling an API with this kind of code, using the Python requests library:
api_request = requests.get(f"http://data.api.org/search?q=example&ontologies=BFO&roots_only=true",
headers={'Authorization': 'apikey token=' + 'be03c61f-2ab8'})
api_result = api_request.json()
collection = api_result["collection"]
...
This code works fine when I don't request much content, but otherwise I get an error. What is strange is that I don't get it every time I request a lot of content. The error message is the following:
Traceback (most recent call last):
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/util/connection.py", line 61, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 392, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/lib/python3.6/http/client.py", line 964, in send
self.connect()
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/connection.py", line 187, in connect
conn = self._new_conn()
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/connection.py", line 172, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f4bdeca7080>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/nobu/.local/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 725, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/util/retry.py", line 439, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='data.api.org', port=80): Max retries exceeded with url: /ontologies/NCIT/classes/http%3A%2F%2Fncicb.nci.nih.gov%2Fxml%2Fowl%2FEVS%2FThesaurus.owl%23C48481/descendants (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4bdeca7080>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "new_format.py", line 181, in <module>
ontology_api(extraction(90))
File "new_format.py", line 142, in ontology_api
concept_extraction(collection)
File "new_format.py", line 100, in concept_extraction
api_request_tree = requests.get(f"{leaf}", headers={'Authorization': 'apikey token=' + f'{api_key}'})
File "/home/nobu/.local/lib/python3.6/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/home/nobu/.local/lib/python3.6/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/home/nobu/.local/lib/python3.6/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/home/nobu/.local/lib/python3.6/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/home/nobu/.local/lib/python3.6/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='data.api.org', port=80): Max retries exceeded with url: /ontologies/NCIT/classes/http%3A%2F%2Fncicb.nci.nih.gov%2Fxml%2Fowl%2FEVS%2FThesaurus.owl%23C48481/descendants (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4bdeca7080>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))
I don't know whether I get the error because I over-request the API or because of something else. I was not able to find an answer to my problem on SO or anywhere else.
Thank you in advance for your time and attention.

One approach is to reuse connections with a session:
with requests.Session() as s:
    s.get('http://google.com')
Another approach is to stream the response so the connection is released when the with block exits:
with requests.get('http://httpbin.org/get', stream=True) as r:
    # Do something with the response
    pass
See also: Python-Requests close http connection.
A retrying adapter also helps:
session.mount('http://', requests.adapters.HTTPAdapter(max_retries=100))
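For completeness, here is a minimal sketch of a session configured with retries and backoff. The retry counts, the timeout, and the reuse of the question's URL and token are illustrative assumptions, not part of the original answers:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# One session, reused for all requests, that retries transient
# connection failures (including intermittent DNS errors) with backoff.
session = requests.Session()
retry = Retry(
    total=5,                  # assumed: up to 5 attempts per request
    backoff_factor=1,         # sleep 1s, 2s, 4s, ... between attempts
    status_forcelist=[429, 500, 502, 503, 504],
)
adapter = HTTPAdapter(max_retries=retry)
session.mount('http://', adapter)
session.mount('https://', adapter)

api_request = session.get(
    "http://data.api.org/search",
    params={"q": "example", "ontologies": "BFO", "roots_only": "true"},
    headers={"Authorization": "apikey token=be03c61f-2ab8"},
    timeout=10,               # assumed timeout
)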

None/error output from a query can lead to the error
The error in question vanished when I ran the code in a container with the right parameters. Before, one of the parameters contained an id that was not in the dataset the code ran on. Meaning: a query tried to find an id in the dataset, found nothing, so there was no output, and this legacy container then reported:
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/properties.py:1029: SAWarning: On Product.product_attribute, 'passive_deletes' is normally configured on one-to-many, one-to-one, many-to-many relationships only.
self._check_cascade_settings(self._cascade)
Traceback (most recent call last):
File "/usr/lib/python2.7/logging/handlers.py", line 556, in emit
self.send(s)
File "/usr/local/lib/python2.7/dist-packages/graypy/handler.py", line 37, in send
DatagramHandler.send(self, s)
File "/usr/lib/python2.7/logging/handlers.py", line 607, in send
self.sock.sendto(s, (self.host, self.port))
gaierror: [Errno -3] Temporary failure in name resolution
and further down at the end of the run:
raise e
AutoReconnect: connection closed
It looks as if sqlalchemy cannot handle a None output from a query, which closes the connection, and it then tries again and again to reach it until the connection gets closed after x tries.
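As a hedged sketch of that idea: check the query result before using it, instead of passing None downstream. Product here is an illustrative stand-in for the real mapped class, and the session is assumed to exist; none of this is from the original answer:
def fetch_product(session, product_id):
    # Return the row or fail loudly; never hand None to code
    # (or logging) that expects a real result.
    product = session.query(Product).get(product_id)  # Product: assumed mapped class
    if product is None:
        raise LookupError("No Product with id %s in this dataset" % product_id)
    return product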
Other debugging steps you might try
I am a beginner, so do not take this answer on trust; I still dare to offer it.
"Temporary failure in name resolution" means that you cannot reach a server in your network, be it your host, your DNS, the machine where you log, or the cloud.
The first step is to ping each of the servers your code tries to reach, as well as your network's DNS, to see whether the names resolve at all.
You might have a forgotten container still running that changes your network (docker ps), or one that disturbs a module with its network traffic.
But if the error only happens at times, switching on and off for much the same data load, you can try debugging it the plain way:
Add an exception handler and log, at the moment of the error, whether you can reach all servers (a sketch follows below).
Switch off logging by commenting it out in your code, then run the code for a longer time to see whether the error still comes up. Logging generates network traffic. The same goes for the API call, but my guess is that the problem is a Python module that cannot handle race conditions when spikes of API calls get logged.
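As a rough sketch of that debugging step (the host names are placeholders), log whether each server still resolves at the exact moment of the failure:
import logging
import socket
import requests

logging.basicConfig(level=logging.INFO)
SERVERS = ["data.api.org", "dns-or-log-host.example"]  # placeholder hosts

def log_reachability():
    # Record whether each server name resolves right now.
    for host in SERVERS:
        try:
            logging.info("%s resolves to %s", host, socket.gethostbyname(host))
        except socket.gaierror as exc:
            logging.error("%s does not resolve: %s", host, exc)

try:
    requests.get("http://data.api.org/search", timeout=10)
except requests.exceptions.ConnectionError:
    log_reachability()  # was DNS down at this exact moment?
    raise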

Related

How can I setup Visdom on a remote server?

I'd like to use visdom to visualize results from a deep learning algorithm trained on a remote cluster server. I found a link that tries to describe the correct way to set everything up in a SLURM script.
python -u Script.py --visdom_server http://176.97.99.618 --visdom_port 8097
I use my IP and port 8097 to connect to the remote cluster server:
ssh -L 8097:176.97.99.618:8097 my_userid@my_server_address
I have the following lines of code:
import visdom
import numpy as np

cfg = {"server": "176.97.99.618",
       "port": 8097}
vis = visdom.Visdom('http://' + cfg["server"], port=cfg["port"])
win = None

# Plotting on remote server
def update_viz(epoch, loss, title):
    global win
    if win is None:
        title = title
        win = viz.line(
            X=np.array([epoch]),
            Y=np.array([loss]),
            win=title,
            opts=dict(
                title=title,
                fillarea=True
            )
        )
    else:
        viz.line(
            X=np.array([epoch]),
            Y=np.array([loss]),
            win=win,
            update='append'
        )

update_viz(epoch, elbo2.item(), 'ELBO2 Loss of beta distributions')
I got this error:
Setting up a new session...
Traceback (most recent call last):
File "/anaconda3/lib/python3.8/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/anaconda3/lib/python3.8/site-packages/urllib3/util/connection.py", line 96, in create_connection
raise err
File "/anaconda3/lib/python3.8/site-packages/urllib3/util/connection.py", line 86, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/anaconda3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "/anaconda3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 394, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/anaconda3/lib/python3.8/site-packages/urllib3/connection.py", line 239, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/anaconda3/lib/python3.8/http/client.py", line 1255, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/anaconda3/lib/python3.8/http/client.py", line 1301, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/anaconda3/lib/python3.8/http/client.py", line 1250, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/anaconda3/lib/python3.8/http/client.py", line 1010, in _send_output
self.send(msg)
File "/anaconda3/lib/python3.8/http/client.py", line 950, in send
self.connect()
File "/anaconda3/lib/python3.8/site-packages/urllib3/connection.py", line 205, in connect
conn = self._new_conn()
File "/anaconda3/lib/python3.8/site-packages/urllib3/connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7ff292f14d00>: Failed to establish a new connection: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/anaconda3/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
resp = conn.urlopen(
File "/anaconda3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 755, in urlopen
retries = retries.increment(
File "/anaconda3/lib/python3.8/site-packages/urllib3/util/retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='176.97.99.618', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff292f14d00>: Failed to establish a new connection: [Errno 110] Connection timed out'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/anaconda3/lib/python3.8/site-packages/visdom/__init__.py", line 708, in _send
return self._handle_post(
File "/anaconda3/lib/python3.8/site-packages/visdom/__init__.py", line 677, in _handle_post
r = self.session.post(url, data=data)
File "/anaconda3/lib/python3.8/site-packages/requests/sessions.py", line 590, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "/anaconda3/lib/python3.8/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/anaconda3/lib/python3.8/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/anaconda3/lib/python3.8/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='192.168.2.10', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff292f14d00>: Failed to establish a new connection: [Errno 110] Connection timed out'))
Visdom python client failed to establish socket to get messages from the server. This feature is optional and can be disabled by initializing Visdom with `use_incoming_socket=False`, which will prevent waiting for this request to timeout.
Script.py:41: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
params['w'].append(nn.Parameter(torch.tensor(Normal(torch.zeros(n_in, n_out), std * torch.ones(n_in, n_out)).rsample(), requires_grad=True, device=device)))
Script.py:42: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
params['b'].append(nn.Parameter(torch.tensor(torch.mul(bias_init, torch.ones([n_out,])), requires_grad=True, device=device)))
Script.py:292: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
return torch.exp(torch.lgamma(torch.tensor(a, dtype=torch.float, requires_grad=True).to(device=local_device)) + torch.lgamma(torch.tensor(b, dtype=torch.float, requires_grad=True).to(device=local_device)) - torch.lgamma(torch.tensor(a+b, dtype=torch.float, requires_grad=True).to(device=local_device)))
Script.py:679: UserWarning: This overload of add_ is deprecated:
add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
add_(Tensor other, *, Number alpha) (Triggered internally at /opt/conda/conda-bld/pytorch_1631630815121/work/torch/csrc/utils/python_arg_parser.cpp:1025.)
exp_avg.mul_(beta1).add_(1 - beta1, grad)
[Errno 110] Connection timed out
on_close() takes 1 positional argument but 3 were given
Traceback (most recent call last):
File "Script.py", line 873, in <module>
update_viz(epoch, elbo2.item(), 'ELBO2 Loss of beta distributions')
File "Script.py", line 736, in update_viz
win = viz.line(
NameError: name 'viz' is not defined
How can I run my plotting script on the remote server? What should the Python command line look like in my SLURM script? And how can I store the plot and later move it to my laptop with the scp command?
Try using global viz after the global win line.
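For illustration, here is a minimal corrected sketch of the function. It uses one consistent name, vis, for the client created above, which also removes the NameError; the consistent naming is my addition, since the answer itself only suggests the global statement:
import visdom
import numpy as np

vis = visdom.Visdom('http://176.97.99.618', port=8097)  # address from the question
win = None

def update_viz(epoch, loss, title):
    # Create the line plot on the first call, then append to it.
    global win
    if win is None:
        win = vis.line(
            X=np.array([epoch]),
            Y=np.array([loss]),
            opts=dict(title=title, fillarea=True),
        )
    else:
        vis.line(
            X=np.array([epoch]),
            Y=np.array([loss]),
            win=win,
            update='append',
        )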

Requests library raises urllib connection error

The requests library is giving me an error I've never encountered before while trying to download an image from my website.
Here is the download function:
def dl_jpg(url, filePath, fileName):
    fullPath = filePath + fileName + '.jpg'
    r = requests.get(url, allow_redirects=True)
    open(fullPath, 'wb').write(r.content)
The URL I'm entering is:
http://www.deepfrybot.ga/uploads/c4c7936ef4218dbe7014cb543049168b.jpg
Here's the error message:
Last login: Fri Nov 1 03:36:32 2019 from 116.193.136.13
       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|
https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip ~]$ cd dfb-master
[ec2-user@ip dfb-master]$ sudo python3 mainr.py
enter url (n) if auto: http://www.deepfrybot.ga/uploads/c4c7936ef4218dbe7014cb543049168b.jpg
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", line 57, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/lib64/python3.7/socket.py", line 748, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 603, in urlopen
chunked=chunked)
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 355, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib64/python3.7/http/client.py", line 1244, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib64/python3.7/http/client.py", line 1290, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib64/python3.7/http/client.py", line 1239, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib64/python3.7/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/lib64/python3.7/http/client.py", line 966, in send
self.connect()
File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 183, in connect
conn = self._new_conn()
File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 169, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fdce843ed50>: Failed to establish a new connection: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 641, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 399, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='www.deepfrybot.ga', port=80): Max retries exceeded with url: /uploads/c4c7936ef4218dbe7014cb543049168b.jpg (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdce843ed50>: Failed to establish a new connection: [Errno -2] Name or service not known'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "mainr.py", line 32, in wrapper
return job_func(*args, **kwargs)
File "mainr.py", line 138, in main_p
dl_jpg(url, get_abs_file('images/'), file_name)
File "mainr.py", line 67, in dl_jpg
r = requests.get(url,allow_redirects=True)
File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.7/site-
packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='www.deepfrybot.ga', port=80): Max retries exceeded with url: /uploads/c4c7936ef4218dbe7014cb543049168b.jpg (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdce843ed50>: Failed to establish a new connection: [Errno -2] Name or service not known'))
I'm clueless as to why this is happening; a little help would be much appreciated.
lxop said: "The script is failing to resolve the name 'www.deepfrybot.ga'. Does your EC2 instance have DNS access?"
After checking and researching a bit, I came to learn that some EC2 instances/urllib3 setups are sometimes unable to resolve free top-level domains (like .ga, .ml, .tk), since these have a general tendency to be malicious and usually lack an SSL certificate.
Domains with SSL certificates work fine, though!
And if you are hosting on, say, heliohost.org (which I am), simply change the domain in the script from yourwebsite.ga to yourwebsite.yourhost.domain.
That should solve it!
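As a quick, hedged way to confirm that DNS resolution, not the download code, is the problem (the host names are examples):
import socket

for host in ("www.deepfrybot.ga", "www.google.com"):  # example hosts
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror as exc:
        # [Errno -2] Name or service not known: the name cannot be resolved
        print(host, "failed:", exc)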

Catching exception 'TimeoutError'

I can't seem to catch this TimeoutError exception. The other topics I found on this suggest using "except TimeoutError" for Python 3, but it still throws an error. The error log is below.
I have also tried importing the requests module, which didn't make any difference. Assuming this is an easy fix, I just can't figure it out. How can I fix it?
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/packages/urllib3/connection.py", line 141, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py", line 83, in create_connection
raise err
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
TimeoutError: [Errno 60] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py", line 345, in _make_request
self._validate_conn(conn)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py", line 844, in _validate_conn
conn.connect()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/packages/urllib3/connection.py", line 284, in connect
conn = self._new_conn()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/packages/urllib3/connection.py", line 150, in _new_conn
self, "Failed to establish a new connection: %s" % e)
requests.packages.urllib3.exceptions.NewConnectionError: <requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x106f3dd68>: Failed to establish a new connection: [Errno 60] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/adapters.py", line 423, in send
timeout=timeout
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/packages/urllib3/connectionpool.py", line 649, in urlopen
_stacktrace=sys.exc_info()[2])
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/packages/urllib3/util/retry.py", line 376, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
requests.packages.urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.spotify.com', port=443): Max retries exceeded with url: /v1/artists/090VebphoycdEyH165iMqc (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x106f3dd68>: Failed to establish a new connection: [Errno 60] Operation timed out',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "populate2.py", line 130, in <module>
r = s.get_artist(a['uri'])
File "/Users/tapoffice/Google Drive/OSCAR SIDEBO/Programming/Python/Spotify/spotify_handler.py", line 44, in get_artist
search = self.spotifyHandler.artist(uri)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/spotipy/client.py", line 244, in artist
return self._get('artists/' + trid)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/spotipy/client.py", line 146, in _get
return self._internal_call('GET', url, payload, kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/spotipy/client.py", line 108, in _internal_call
r = self._session.request(method, url, headers=headers, proxies=self.proxies, **args)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/adapters.py", line 487, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.spotify.com', port=443): Max retries exceeded with url: /v1/artists/090VebphoycdEyH165iMqc (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x106f3dd68>: Failed to establish a new connection: [Errno 60] Operation timed out',))
Here's the actual script. Basically, it iterates over a list requested from a database and performs an API request for each entry to collect information that then gets added back into the database. I realize the root of this issue is exceeding the API's rate limits, so the solution would be to let the script sleep for one minute before continuing. I don't want to use "except Exception", since that could mask other errors. What I don't understand is why the script below doesn't catch the TimeoutError.
for a in [x for x in songs.get('*')]:
    try:
        r = s.get_track(a['uri'])
        pop = r['popularity']
        songupdates.add_row(['song_id', 'popularity'], [str(a['id']), str(pop)])
    except TimeoutError:
        print("TimeoutError, sleeping 1 minute.")
        time.sleep(60)
As I wrote in the comment below your post, debugging could be useful.
In my opinion, the exception occurs in the list comprehension [x for x in songs.get('*')] (I assume songs.get() makes some request), which sits outside the try block, and that's the reason you can't handle it.
The output of this code may be helpful:
import logging

logger = logging.getLogger('my_app')
logger.setLevel(logging.DEBUG)
file_handler = logging.FileHandler('my_app.log')
file_handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)

try:
    logger.info('Starting for-loop')
    for a in songs.get('*'):
        logger.info('Getting song track')
        r = s.get_track(a['uri'])
        pop = r['popularity']
        logger.info('Adding a row')
        songupdates.add_row(['song_id', 'popularity'], [str(a['id']), str(pop)])
except Exception:
    logger.error('An error occurred', exc_info=True)
If the logger's last message is 'Starting for-loop', there are two possibilities: either the request is invalid (not probable, I guess), or the code really does need slowing down with time.sleep() or, if possible, a longer timeout, passed as a keyword argument somewhere in your code.
I had the same problem, i.e. looping over a list and requesting data for each entry of the list, which led to the error described in the question. I was able to resolve the issue with this except clause:
except requests.exceptions.ConnectionError as err:
    print(err)
Some more information: the other three exceptions mentioned in the question are located in urllib3 and requests, respectively:
urllib3.exceptions.MaxRetryError
urllib3.exceptions.NewConnectionError
requests.exceptions.Timeout
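Putting the two answers together, a minimal sketch of the loop that keeps every API call inside the try block and catches the wrapper exception requests actually raises. Here songs, s and songupdates are the question's own helpers, and the one-minute sleep follows the question; treat the details as assumptions:
import time
import requests

for a in songs.get('*'):  # iterate directly, no list comprehension needed
    try:
        r = s.get_track(a['uri'])
        pop = r['popularity']
        songupdates.add_row(['song_id', 'popularity'], [str(a['id']), str(pop)])
    except requests.exceptions.ConnectionError as err:
        # requests wraps the low-level TimeoutError/NewConnectionError
        # in a ConnectionError, so this is the exception to catch.
        print(err, "- sleeping 1 minute.")
        time.sleep(60)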

Cloud Natural Language API returning socket.gaierror: nodename nor servname provided after performing Sentiment Analysis every now and then

I'm running the code in a Jupyter notebook. I modified the code from this link so that it takes input from the notebook instead of the console and iterates over a list of files.
"""Demonstrates how to make a simple call to the Natural Language API."""
import argparse
import requests
from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types
def print_result(annotations, movie_review_filename):
score = annotations.document_sentiment.score
magnitude = annotations.document_sentiment.magnitude
file_path_split = movie_review_filename.split("/")
fileName = file_path_split[len(file_path_split) - 1][:-4]
sentencelist = []
statuslist = []
for index, sentence in enumerate(annotations.sentences):
sentence_sentiment = sentence.sentiment.score
singlesentence = [fileName, sentence.text.content, sentence.sentiment.magnitude, sentence_sentiment]
sentencelist.append(singlesentence)
outputdf = pd.DataFrame(sentencelist, columns = ['status_id', 'sentence', 'sentence_magnitude', 'sentence_sentiment'])
outputdf.to_csv("/Users/abhi/Desktop/RetrySentenceCSVs/" + fileName + ".csv", index = False)
return 0
def analyze(movie_review_filename):
"""Run a sentiment analysis request on text within a passed filename."""
client = language.LanguageServiceClient()
with open(movie_review_filename, 'r') as review_file:
# Instantiates a plain text document.
content = review_file.read()
document = types.Document(
content=content,
type=enums.Document.Type.PLAIN_TEXT)
annotations = client.analyze_sentiment(document=document)
# Print the results
print_result(annotations, movie_review_filename)
if __name__ == '__main__':
import glob
csv_file_list = glob.glob("/Users/abhi/Desktop/mytxtfilepath/*.txt")
for file in csv_file_list: #Iterate through a list of file paths
analyze(file)
The code runs fine for the first 10% or so of the set of text files (I have 687), but after a while it starts to throw errors:
ERROR:root:AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x113b76588>" raised exception!
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/urllib3/connection.py", line 171, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/anaconda3/lib/python3.6/site-packages/urllib3/util/connection.py", line 56, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/anaconda3/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/anaconda3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 343, in _make_request
self._validate_conn(conn)
File "/anaconda3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 849, in _validate_conn
conn.connect()
File "/anaconda3/lib/python3.6/site-packages/urllib3/connection.py", line 314, in connect
conn = self._new_conn()
File "/anaconda3/lib/python3.6/site-packages/urllib3/connection.py", line 180, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x113b840b8>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/requests/adapters.py", line 445, in send
timeout=timeout
File "/anaconda3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/anaconda3/lib/python3.6/site-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='accounts.google.com', port=443): Max retries exceeded with url: /o/oauth2/token (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x113b840b8>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/google/auth/transport/requests.py", line 120, in __call__
**kwargs)
File "/anaconda3/lib/python3.6/site-packages/requests/sessions.py", line 512, in request
resp = self.send(prep, **send_kwargs)
File "/anaconda3/lib/python3.6/site-packages/requests/sessions.py", line 622, in send
r = adapter.send(request, **kwargs)
File "/anaconda3/lib/python3.6/site-packages/requests/adapters.py", line 513, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='accounts.google.com', port=443): Max retries exceeded with url: /o/oauth2/token (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x113b840b8>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/grpc/_plugin_wrapping.py", line 77, in __call__
callback_state, callback))
File "/anaconda3/lib/python3.6/site-packages/google/auth/transport/grpc.py", line 77, in __call__
callback(self._get_authorization_headers(context), None)
File "/anaconda3/lib/python3.6/site-packages/google/auth/transport/grpc.py", line 65, in _get_authorization_headers
headers)
File "/anaconda3/lib/python3.6/site-packages/google/auth/credentials.py", line 122, in before_request
self.refresh(request)
File "/anaconda3/lib/python3.6/site-packages/google/oauth2/service_account.py", line 322, in refresh
request, self._token_uri, assertion)
File "/anaconda3/lib/python3.6/site-packages/google/oauth2/_client.py", line 145, in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "/anaconda3/lib/python3.6/site-packages/google/oauth2/_client.py", line 106, in _token_endpoint_request
method='POST', url=token_uri, headers=headers, body=body)
File "/anaconda3/lib/python3.6/site-packages/google/auth/transport/requests.py", line 124, in __call__
six.raise_from(new_exc, caught_exc)
File "<string>", line 3, in raise_from
google.auth.exceptions.TransportError: HTTPSConnectionPool(host='accounts.google.com', port=443): Max retries exceeded with url: /o/oauth2/token (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x113b840b8>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
ERROR:root:AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x113b76588>" raised exception!
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/urllib3/connection.py", line 171, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/anaconda3/lib/python3.6/site-packages/urllib3/util/connection.py", line 56, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/anaconda3/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/anaconda3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 343, in _make_request
self._validate_conn(conn)
File "/anaconda3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 849, in _validate_conn
conn.connect()
File "/anaconda3/lib/python3.6/site-packages/urllib3/connection.py", line 314, in connect
conn = self._new_conn()
File "/anaconda3/lib/python3.6/site-packages/urllib3/connection.py", line 180, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x113b84470>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known
During handling of the above exception, another exception occurred:
...
The error repeats itself, then the sentiment analysis runs on the file, then the errors show up again several times, then the analysis runs on the next file, and it finally stops with a RendezVous error (I forgot to capture this message). What I'm wondering is: how can the code work on a certain set of files for some time, throw error messages, work a little more, throw error messages again, and then completely stop working after a point?
I reran the code, only to find that it returns socket.gaierror after some random number of files in a folder. So one can say with a reasonable level of confidence that it is not the file contents that are the issue.
EDIT 1: The files are simply .txt files that have words in them.
Can someone help me resolve this? I can also assure you that all the text in the 680 files accounts for a total of 1400 requests; I have been meticulous in calculating this based on the definition of a request in the Cloud Natural Language API, so I am WELL within my limits.
EDIT 2: I've tried sleep(10), which seems to work fine for a while, but it then begins throwing errors again.
I figured it out. Don't read in all 600+ files at once; instead, process them in batches of 50 files (create 12 folders with 50 files each) and manually rerun the code each time it finishes scanning a folder. I'm not sure WHY this works, but it just works.
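A minimal sketch of that batching idea, automated with a pause between batches instead of manual reruns. The batch size of 50 comes from the answer; the pause length is an assumption:
import glob
import time

csv_file_list = glob.glob("/Users/abhi/Desktop/mytxtfilepath/*.txt")
BATCH_SIZE = 50    # from the answer: batches of 50 files
PAUSE_SECS = 60    # assumed pause between batches

for start in range(0, len(csv_file_list), BATCH_SIZE):
    for file in csv_file_list[start:start + BATCH_SIZE]:
        analyze(file)  # analyze() as defined in the question
    if start + BATCH_SIZE < len(csv_file_list):
        time.sleep(PAUSE_SECS)  # give DNS and the auth endpoint a breather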

TypeError: getresponse() got an unexpected keyword argument 'buffering'

I cannot upload from a Windows 7 32-bit OS. It works fine on a Windows 7 64-bit OS with 32- or 64-bit Python. I am using Python 3.4.3 with the latest requests API.
The error I get is:
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 376, in _make_request
httplib_response = conn.getresponse(buffering=True)
TypeError: getresponse() got an unexpected keyword argument 'buffering'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 559, in urlopen
body=body, headers=headers)
File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 378, in _make_request
httplib_response = conn.getresponse()
File "C:\Python34\lib\http\client.py", line 1171, in getresponse
response.begin()
File "C:\Python34\lib\http\client.py", line 351, in begin
version, status, reason = self._read_status()
File "C:\Python34\lib\http\client.py", line 313, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "C:\Python34\lib\socket.py", line 374, in readinto
return self._sock.recv_into(b)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\requests\adapters.py", line 370, in send
timeout=timeout
File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 609, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\Python34\lib\site-packages\requests\packages\urllib3\util\retry.py", line 245, in increment
raise six.reraise(type(error), error, _stacktrace)
File "C:\Python34\lib\site-packages\requests\packages\urllib3\packages\six.py", line 309, in reraise
raise value.with_traceback(tb)
File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 559, in urlopen
body=body, headers=headers)
File "C:\Python34\lib\site-packages\requests\packages\urllib3\connectionpool.py", line 378, in _make_request
httplib_response = conn.getresponse()
File "C:\Python34\lib\http\client.py", line 1171, in getresponse
response.begin()
File "C:\Python34\lib\http\client.py", line 351, in begin
version, status, reason = self._read_status()
File "C:\Python34\lib\http\client.py", line 313, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "C:\Python34\lib\socket.py", line 374, in readinto
return self._sock.recv_into(b)
requests.packages.urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "Upgrader.py", line 12, in <module>
rdst = requests.post(urldst, files={'1_19_0_developer.exe': resp.content})
File "C:\Python34\lib\site-packages\requests\api.py", line 109, in post
return request('post', url, data=data, json=json, **kwargs)
File "C:\Python34\lib\site-packages\requests\api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "C:\Python34\lib\site-packages\requests\sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "C:\Python34\lib\site-packages\requests\sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "C:\Python34\lib\site-packages\requests\adapters.py", line 412, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))
The code is
import requests
from requests_file import FileAdapter
s = requests.Session()
s.mount('file://', FileAdapter())
resp = s.get('file:///local_package_name')
urldst = 'Upload URL'
rdst = requests.post(urldst, files={'filename': resp.content})
print(rdst)
This code works fine on a Windows 7 64-bit OS, but returns the error described above on a Windows 7 32-bit OS.
Also, I can upload small packages using the provided code on 32-bit Windows 7; the only problem is uploading large packages.
Ignore the first ("buffering=True") exception. That's an internal backward compatibility artefact. The real errors are the ones that follow.
Here's a little more context for @fche's correct answer.
This comment from a Requests maintainer sums up what is going on here.
This is an unforeseen problem to do with how exception tracebacks are being reported in Python 3. PEP 3134 introduced this 'chaining exceptions' reporting [...]. The purpose of this error reporting is to highlight that some exceptions occur in except blocks, and to work out what chain of exceptions was hit. This is potentially very useful: for instance, you can hit an exception after destroying a resource and then attempt to use that resource in the except block, which hits another exception. It's helpful to be able to see both exceptions at once.
The key is that the TypeError raised as the first exception is unrelated to the subsequent ones. In fact, that's the standard control flow in urllib3. This means that the real exception that's being raised here is the request.exceptions.ConnectionError exception that wraps the urllib3.exceptions.MaxRetryError exception being raised in urllib3.
This is not a Requests bug, it's just an ugly traceback introduced by Python 3. We can try to reduce the nastiness of it somewhat by refactoring the method in urllib3 [...], but that'll only remove the TypeError from the chain: the rest will stay in place.
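To see why the traceback reads this way, here is a tiny self-contained example (unrelated to requests) of the implicit exception chaining PEP 3134 introduced: an exception raised inside an except block drags the original one along under "During handling of the above exception, another exception occurred:":
def lookup():
    cache = {}
    try:
        return cache["item"]  # first exception: KeyError
    except KeyError:
        # Raising here chains the KeyError onto the new exception;
        # Python 3 prints both tracebacks, exactly as in the report above.
        raise RuntimeError("cache miss, and the fallback failed too")

lookup()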
