How can I set up Visdom on a remote server? - python-3.x

I'd like to use Visdom to visualize the results of a deep learning algorithm that is trained on a remote cluster. I found a link that describes a way to set everything up in a SLURM script:
python -u Script.py --visdom_server http://176.97.99.618 --visdom_port 8097
I use my IP and port 8097 to connect to the remote cluster:
ssh -L 8097:176.97.99.618:8097 my_userid@my_server_address
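For reference, a common arrangement is to start the Visdom server on the cluster, point the training script at localhost, and tunnel the port to the laptop. This is a sketch only: it assumes the Visdom server and the training script run on the same cluster node, and the host names are placeholders.

```shell
# On the cluster node (e.g. near the top of the SLURM script):
# start the Visdom server in the background
python -m visdom.server -port 8097 &

# The training script then talks to it locally on the same node
python -u Script.py --visdom_server http://localhost --visdom_port 8097

# On the laptop: forward local port 8097 to port 8097 on the cluster node
ssh -L 8097:localhost:8097 my_userid@my_server_address
# ...then open http://localhost:8097 in the laptop's browser
```

With this layout the script never needs the cluster's public IP, and the plots are viewed through the tunnel.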
I have the following lines of code:
import visdom
import numpy as np

cfg = {"server": "176.97.99.618",
       "port": 8097}

vis = visdom.Visdom('http://' + cfg["server"], port=cfg["port"])
win = None

# Plotting on remote server
def update_viz(epoch, loss, title):
    global win
    if win is None:
        title = title
        win = viz.line(
            X=np.array([epoch]),
            Y=np.array([loss]),
            win=title,
            opts=dict(
                title=title,
                fillarea=True
            )
        )
    else:
        viz.line(
            X=np.array([epoch]),
            Y=np.array([loss]),
            win=win,
            update='append'
        )

update_viz(epoch, elbo2.item(), 'ELBO2 Loss of beta distributions')
I got this error:
Setting up a new session...
Traceback (most recent call last):
  File "/anaconda3/lib/python3.8/site-packages/urllib3/connection.py", line 174, in _new_conn
    conn = connection.create_connection(
  File "/anaconda3/lib/python3.8/site-packages/urllib3/util/connection.py", line 96, in create_connection
    raise err
  File "/anaconda3/lib/python3.8/site-packages/urllib3/util/connection.py", line 86, in create_connection
    sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/anaconda3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 699, in urlopen
    httplib_response = self._make_request(
  File "/anaconda3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 394, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/anaconda3/lib/python3.8/site-packages/urllib3/connection.py", line 239, in request
    super(HTTPConnection, self).request(method, url, body=body, headers=headers)
  File "/anaconda3/lib/python3.8/http/client.py", line 1255, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/anaconda3/lib/python3.8/http/client.py", line 1301, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/anaconda3/lib/python3.8/http/client.py", line 1250, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/anaconda3/lib/python3.8/http/client.py", line 1010, in _send_output
    self.send(msg)
  File "/anaconda3/lib/python3.8/http/client.py", line 950, in send
    self.connect()
  File "/anaconda3/lib/python3.8/site-packages/urllib3/connection.py", line 205, in connect
    conn = self._new_conn()
  File "/anaconda3/lib/python3.8/site-packages/urllib3/connection.py", line 186, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7ff292f14d00>: Failed to establish a new connection: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/anaconda3/lib/python3.8/site-packages/requests/adapters.py", line 439, in send
    resp = conn.urlopen(
  File "/anaconda3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 755, in urlopen
    retries = retries.increment(
  File "/anaconda3/lib/python3.8/site-packages/urllib3/util/retry.py", line 574, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='176.97.99.618', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff292f14d00>: Failed to establish a new connection: [Errno 110] Connection timed out'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/anaconda3/lib/python3.8/site-packages/visdom/__init__.py", line 708, in _send
    return self._handle_post(
  File "/anaconda3/lib/python3.8/site-packages/visdom/__init__.py", line 677, in _handle_post
    r = self.session.post(url, data=data)
  File "/anaconda3/lib/python3.8/site-packages/requests/sessions.py", line 590, in post
    return self.request('POST', url, data=data, json=json, **kwargs)
  File "/anaconda3/lib/python3.8/site-packages/requests/sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "/anaconda3/lib/python3.8/site-packages/requests/sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
  File "/anaconda3/lib/python3.8/site-packages/requests/adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='192.168.2.10', port=8097): Max retries exceeded with url: /env/main (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ff292f14d00>: Failed to establish a new connection: [Errno 110] Connection timed out'))
Visdom python client failed to establish socket to get messages from the server. This feature is optional and can be disabled by initializing Visdom with `use_incoming_socket=False`, which will prevent waiting for this request to timeout.
Script.py:41: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  params['w'].append(nn.Parameter(torch.tensor(Normal(torch.zeros(n_in, n_out), std * torch.ones(n_in, n_out)).rsample(), requires_grad=True, device=device)))
Script.py:42: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  params['b'].append(nn.Parameter(torch.tensor(torch.mul(bias_init, torch.ones([n_out,])), requires_grad=True, device=device)))
Script.py:292: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  return torch.exp(torch.lgamma(torch.tensor(a, dtype=torch.float, requires_grad=True).to(device=local_device)) + torch.lgamma(torch.tensor(b, dtype=torch.float, requires_grad=True).to(device=local_device)) - torch.lgamma(torch.tensor(a+b, dtype=torch.float, requires_grad=True).to(device=local_device)))
Script.py:679: UserWarning: This overload of add_ is deprecated:
  add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
  add_(Tensor other, *, Number alpha) (Triggered internally at /opt/conda/conda-bld/pytorch_1631630815121/work/torch/csrc/utils/python_arg_parser.cpp:1025.)
  exp_avg.mul_(beta1).add_(1 - beta1, grad)
[Errno 110] Connection timed out
on_close() takes 1 positional argument but 3 were given
Traceback (most recent call last):
  File "Script.py", line 873, in <module>
    update_viz(epoch, elbo2.item(), 'ELBO2 Loss of beta distributions')
  File "Script.py", line 736, in update_viz
    win = viz.line(
NameError: name 'viz' is not defined
How can I run my plotting script on a remote server? What should the Python command line look like in my SLURM script? And how can I store the plot so that I can later move it to my laptop with the scp command?

Try using `global viz` after the `global win` line — but note that the client is created as `vis` while `update_viz` calls `viz.line`, so the two names must also be made to match (e.g. create the client as `viz = visdom.Visdom(...)`).

Related

.onion sites not listening on port 80 or 443

For some reason I just can't get a .onion website to respond to my proxy request, no matter how many times I've tried. Here is my current code:
import requests
session = requests.session()
session.proxies = {}
session.proxies['http'] = 'socks5h://localhost:9050'
session.proxies['https'] = 'socks5h://localhost:9050'
r = session.get('https://facebookcorewwwi.onion/')
print(r.headers)
and that returns:
Traceback (most recent call last):
File "/Users/myname/Library/Python/3.9/lib/python/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/Users/myname/Library/Python/3.9/lib/python/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/Users/myname/Library/Python/3.9/lib/python/site-packages/urllib3/connectionpool.py", line 1040, in _validate_conn
conn.connect()
File "/Users/myname/Library/Python/3.9/lib/python/site-packages/urllib3/connection.py", line 358, in connect
conn = self._new_conn()
File "/Users/myname/Library/Python/3.9/lib/python/site-packages/urllib3/connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x10db4a6d0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/myname/Library/Python/3.9/lib/python/site-packages/requests/adapters.py", line 440, in send
resp = conn.urlopen(
File "/Users/myname/Library/Python/3.9/lib/python/site-packages/urllib3/connectionpool.py", line 785, in urlopen
retries = retries.increment(
File "/Users/myname/Library/Python/3.9/lib/python/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.facebookcorewwwi.onion', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x10db4a6d0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/myname/Desktop/Python Projects/myfile.py", line 8, in <module>
r = session.get('https://www.facebookcorewwwi.onion/')
File "/Users/myname/Library/Python/3.9/lib/python/site-packages/requests/sessions.py", line 542, in get
return self.request('GET', url, **kwargs)
File "/Users/myname/Library/Python/3.9/lib/python/site-packages/requests/sessions.py", line 529, in request
When I switch the SOCKS5 port to 9150 I am able to scrape regular websites with a new IP.
Any help is greatly appreciated, thanks in advance!
-Pythondude3080

Using Python 3 "requests" package to access web server with IPv6 address: works with W10, not with Raspberry Pi OS

We have a device under development that we want to test for different features and protocols. One of its features is an embedded HTTP(S) server, and we want to perform regression testing when we change the device source code. For that purpose we use Jenkins with a Raspberry Pi 4 (4 GB) as a slave node and PyTest as the test framework, although our development environment is Windows 10.
We noticed a problem: when trying to connect to the device HTTP server with the IP V6 address on the RPi, we get a "[Errno 22] Invalid argument" error.
We reduced the test to a minimal Python 3 script to no avail: the same script works on Windows 10 but not on RPi.
Here is the script:
import os
import requests
ipv6_address = 'fe80::211:ff:fe4c:e565'
os.environ['no_proxy'] = 'localhost,foobar.com,192.168.10.*,{},[{}]'.format(ipv6_address, ipv6_address)
r = requests.get('https://[{}]/'.format(ipv6_address), verify=False, timeout=5)
print(r.status_code)
print(r.headers)
print(r.url)
Here are the results for Windows 10 (version 2004 19041.867):
$ python --version
Python 3.7.7
$ python ~/test_req.py
C:\Python37-32\lib\site-packages\urllib3\connectionpool.py:1020: InsecureRequestWarning: Unverified HTTPS request is being made to host 'fe80::211:ff:fe4c:e565'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning,
200
{'Server': '', 'Connection': 'Keep-Alive', 'Keep-Alive': '', 'Persist': '', 'Content-Type': 'text/html', 'Content-Length': '1429', 'Last-Modified': 'FRI JUL 07 16:43:06 2017', 'Etag': '"21D47F50-00000595-595FBA1A"'}
https://[fe80::211:ff:fe4c:e565]/
Result for Raspberry Pi (Linux raspberrypi 5.4.83-v7l-ALADDIN+ #1 SMP Tue Feb 9 14:23:45 CET 2021 armv7l GNU/Linux) :
[NOTE: we have compiled a specific version of Linux for the Ixxat SocketCAN driver from the default kernel configuration, except the name]
$ python3 --version
Python 3.7.3
$ python3 ~/test_req.py
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 159, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 80, in create_connection
raise err
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 70, in create_connection
sock.connect(sa)
OSError: [Errno 22] Invalid argument
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 343, in _make_request
self._validate_conn(conn)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 841, in _validate_conn
conn.connect()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 301, in connect
conn = self._new_conn()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 168, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0xb6369c30>: Failed to establish a new connection: [Errno 22] Invalid argument
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pi/.local/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='fe80::211:ff:fe4c:e565', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0xb6369c30>: Failed to establish a new connection: [Errno 22] Invalid argument'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pi/test_req.py", line 9, in <module>
r = requests.get('https://[{}]/'.format(ipv6_address), verify=False, timeout=5)
File "/home/pi/.local/lib/python3.7/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/home/pi/.local/lib/python3.7/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/home/pi/.local/lib/python3.7/site-packages/requests/sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "/home/pi/.local/lib/python3.7/site-packages/requests/sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "/home/pi/.local/lib/python3.7/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='fe80::211:ff:fe4c:e565', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0xb6369c30>: Failed to establish a new connection: [Errno 22] Invalid argument'))
We would like to know whether someone has an idea of the source of this "invalid argument" error.

socket.gaierror: [Errno -3] Temporary failure in name resolution

I am requesting an API with this kind of code using the python requests library:
api_request = requests.get(f"http://data.api.org/search?q=example&ontologies=BFO&roots_only=true",
headers={'Authorization': 'apikey token=' + 'be03c61f-2ab8'})
api_result = api_request.json()
collection = api_result["collection"]
...
This code works fine when I don't request a lot of content, but otherwise I'm getting an error. What is weird is that I don't get it every time I request a lot of content. The error message is the following:
Traceback (most recent call last):
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/util/connection.py", line 61, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen
chunked=chunked,
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 392, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib/python3.6/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/lib/python3.6/http/client.py", line 964, in send
self.connect()
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/connection.py", line 187, in connect
conn = self._new_conn()
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/connection.py", line 172, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f4bdeca7080>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/nobu/.local/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 725, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/nobu/.local/lib/python3.6/site-packages/urllib3/util/retry.py", line 439, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='data.api.org', port=80): Max retries exceeded with url: /ontologies/NCIT/classes/http%3A%2F%2Fncicb.nci.nih.gov%2Fxml%2Fowl%2FEVS%2FThesaurus.owl%23C48481/descendants (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4bdeca7080>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "new_format.py", line 181, in <module>
ontology_api(extraction(90))
File "new_format.py", line 142, in ontology_api
concept_extraction(collection)
File "new_format.py", line 100, in concept_extraction
api_request_tree = requests.get(f"{leaf}", headers={'Authorization': 'apikey token=' + f'{api_key}'})
File "/home/nobu/.local/lib/python3.6/site-packages/requests/api.py", line 76, in get
return request('get', url, params=params, **kwargs)
File "/home/nobu/.local/lib/python3.6/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/home/nobu/.local/lib/python3.6/site-packages/requests/sessions.py", line 530, in request
resp = self.send(prep, **send_kwargs)
File "/home/nobu/.local/lib/python3.6/site-packages/requests/sessions.py", line 643, in send
r = adapter.send(request, **kwargs)
File "/home/nobu/.local/lib/python3.6/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='data.api.org', port=80): Max retries exceeded with url: /ontologies/NCIT/classes/http%3A%2F%2Fncicb.nci.nih.gov%2Fxml%2Fowl%2FEVS%2FThesaurus.owl%23C48481/descendants (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4bdeca7080>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))
I don't know if I get the error because I over-request the API or if it's due to something else. I was not able to find an answer to my problem on SO or elsewhere.
Thank you in advance for your time and attention.
with requests.Session() as s:
    s.get('http://google.com')
or
with requests.get('http://httpbin.org/get', stream=True) as r:
    # Do something
    pass
Another way is described in "Python-Requests close http connection".
But thanks for session.mount('http://', requests.adapters.HTTPAdapter(max_retries=100)).
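The `max_retries` hint above can be fleshed out with urllib3's `Retry`, so that retries back off between attempts instead of hammering the server. A sketch under stated assumptions — the retry budget and backoff values here are illustrative, not from the answer:

```python
import requests
from urllib3.util.retry import Retry

def make_retrying_session(total=5, backoff=0.5):
    """Build a requests.Session whose adapters retry failed connections
    with exponential backoff instead of failing on the first error."""
    retry = Retry(
        total=total,              # overall retry budget per request
        backoff_factor=backoff,   # sleep 0.5s, 1s, 2s, ... between attempts
        status_forcelist=(500, 502, 503, 504),  # also retry these responses
    )
    adapter = requests.adapters.HTTPAdapter(max_retries=retry)
    session = requests.Session()
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    return session
```

A session built this way can be used exactly like `requests.get` (`session.get(url)`), and transient name-resolution or connection failures get several chances before `ConnectionError` is raised.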
None/error output from a query can lead to the error
The error in question vanished when I ran the code in a container with the right parameter. Before, one of the parameters contained an id that was not in the dataset I ran the code on. Meaning: a query tried to find an id in a dataset, found nothing, so there was no output, and then this legacy container reported:
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/properties.py:1029: SAWarning: On Product.product_attribute, 'passive_deletes' is normally configured on one-to-many, one-to-one, many-to-many relationships only.
  self._check_cascade_settings(self._cascade)
Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/handlers.py", line 556, in emit
    self.send(s)
  File "/usr/local/lib/python2.7/dist-packages/graypy/handler.py", line 37, in send
    DatagramHandler.send(self, s)
  File "/usr/lib/python2.7/logging/handlers.py", line 607, in send
    self.sock.sendto(s, (self.host, self.port))
gaierror: [Errno -3] Temporary failure in name resolution
and further down at the end of the run:
raise e
AutoReconnect: connection closed
It looks as if sqlalchemy cannot handle a None output from a query; that closes the connection, and it tries again and again to reach it until the connection finally gets closed after x tries.
Other debugging steps you might give a chance
I am a beginner, so do not trust this answer blindly, but I dare to offer it anyway.
"Temporary failure in name resolution" means that you cannot reach a server in your network, whether it is your host, your DNS, the machine you log to, or the cloud.
First step is to ping each of the servers that your code tries to reach, and the DNS of your network to see whether the names work at all.
You might have a forgotten container still running that changes your network (docker ps) or that disturbs a module with its network traffic.
But if it happens only intermittently, switching on and off for much the same data load, you can try debugging it the plain way:
Catch the exception and log, at the moment of the error, whether you can still reach all servers.
Switch off logging by commenting it out in your code, then run for a longer time to see whether the error still comes up; logging itself causes network traffic. The same goes for the API call, but my guess is that the problem is a Python module that cannot handle race conditions when spikes of API calls get logged.
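The "can you still reach the servers at the moment of the error" check can be automated with nothing but the standard library. A minimal sketch — the host names in the loop are placeholders for whatever your code actually talks to:

```python
import socket

def can_resolve(host, port=80):
    """Return True if the system resolver can turn `host` into an address."""
    try:
        socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
        return True
    except socket.gaierror:
        # Covers "Temporary failure in name resolution" and friends
        return False

# In the except-branch around the API call, log which hosts still resolve:
for host in ("localhost", "data.api.org"):
    print(host, "resolves:", can_resolve(host))
```

If `can_resolve` flips to False exactly when the requests error appears, the problem is name resolution (DNS, /etc/resolv.conf, container networking) rather than the API itself.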

Requests library raises urllib connection error

Requests library is giving me an error I've never encountered before, while trying to download an image from my website.
Here is the download function
import requests

def dl_jpg(url, filePath, fileName):
    fullPath = filePath + fileName + '.jpg'
    r = requests.get(url, allow_redirects=True)
    open(fullPath, 'wb').write(r.content)
The URL I'm entering is:
http://www.deepfrybot.ga/uploads/c4c7936ef4218dbe7014cb543049168b.jpg
Here's the error message
[ec2-user@ip ~]$ cd dfb-master
[ec2-user@ip dfb-master]$ sudo python3 mainr.py
enter url (n) if auto: http://www.deepfrybot.ga/uploads/c4c7936ef4218dbe7014cb543049168b.jpg
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", line 57, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/lib64/python3.7/socket.py", line 748, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 603, in urlopen
chunked=chunked)
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 355, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib64/python3.7/http/client.py", line 1244, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/lib64/python3.7/http/client.py", line 1290, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/lib64/python3.7/http/client.py", line 1239, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/lib64/python3.7/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/lib64/python3.7/http/client.py", line 966, in send
self.connect()
File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 183, in connect
conn = self._new_conn()
File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 169, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fdce843ed50>: Failed to establish a new connection: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 641, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 399, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='www.deepfrybot.ga', port=80): Max retries exceeded with url: /uploads/c4c7936ef4218dbe7014cb543049168b.jpg (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdce843ed50>: Failed to establish a new connection: [Errno -2] Name or service not known'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "mainr.py", line 32, in wrapper
return job_func(*args, **kwargs)
File "mainr.py", line 138, in main_p
dl_jpg(url, get_abs_file('images/'), file_name)
File "mainr.py", line 67, in dl_jpg
r = requests.get(url,allow_redirects=True)
File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='www.deepfrybot.ga', port=80): Max retries exceeded with url: /uploads/c4c7936ef4218dbe7014cb543049168b.jpg (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fdce843ed50>: Failed to establish a new connection: [Errno -2] Name or service not known'))
I'm clueless as to why this is happening; a little help would be much appreciated.
lxop said: "The script is failing to resolve the name 'www.deepfrybot.ga'. Does your EC2 instance have DNS access?"
After checking and researching a bit, I came to know that some EC2 instances/urllib3 versions are sometimes unable to reach free top-level domains (like .ga, .ml, .tk), as these have a general tendency to be malicious and also often lack an SSL certificate.
These domains with SSL certificates work fine though!
And if you are hosting on, say, heliohost.org (which I am), simply change the domain in the script from yourwebsite.ga to yourwebsite.yourhost.domain.
That should solve it!

Cloud Natural Language API returning socket.gaierror: nodename nor servname provided after performing Sentiment Analysis every now and then

I'm running the code in a Jupyter notebook; I modified the code from this link so it runs from the notebook instead of the console and iterates over a list of files.
"""Demonstrates how to make a simple call to the Natural Language API."""
import argparse
import glob

import pandas as pd
import requests

from google.cloud import language
from google.cloud.language import enums
from google.cloud.language import types

def print_result(annotations, movie_review_filename):
    score = annotations.document_sentiment.score
    magnitude = annotations.document_sentiment.magnitude
    file_path_split = movie_review_filename.split("/")
    fileName = file_path_split[len(file_path_split) - 1][:-4]
    sentencelist = []
    statuslist = []
    for index, sentence in enumerate(annotations.sentences):
        sentence_sentiment = sentence.sentiment.score
        singlesentence = [fileName, sentence.text.content, sentence.sentiment.magnitude, sentence_sentiment]
        sentencelist.append(singlesentence)
    outputdf = pd.DataFrame(sentencelist, columns=['status_id', 'sentence', 'sentence_magnitude', 'sentence_sentiment'])
    outputdf.to_csv("/Users/abhi/Desktop/RetrySentenceCSVs/" + fileName + ".csv", index=False)
    return 0

def analyze(movie_review_filename):
    """Run a sentiment analysis request on text within a passed filename."""
    client = language.LanguageServiceClient()
    with open(movie_review_filename, 'r') as review_file:
        # Instantiates a plain text document.
        content = review_file.read()
    document = types.Document(
        content=content,
        type=enums.Document.Type.PLAIN_TEXT)
    annotations = client.analyze_sentiment(document=document)
    # Print the results
    print_result(annotations, movie_review_filename)

if __name__ == '__main__':
    csv_file_list = glob.glob("/Users/abhi/Desktop/mytxtfilepath/*.txt")
    for file in csv_file_list:  # Iterate through a list of file paths
        analyze(file)
The code runs fine for about 10% of the set of text files (I have 687), but after a while it starts to throw errors:
ERROR:root:AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x113b76588>" raised exception!
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/urllib3/connection.py", line 171, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/anaconda3/lib/python3.6/site-packages/urllib3/util/connection.py", line 56, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/anaconda3/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/anaconda3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 343, in _make_request
self._validate_conn(conn)
File "/anaconda3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 849, in _validate_conn
conn.connect()
File "/anaconda3/lib/python3.6/site-packages/urllib3/connection.py", line 314, in connect
conn = self._new_conn()
File "/anaconda3/lib/python3.6/site-packages/urllib3/connection.py", line 180, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x113b840b8>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/requests/adapters.py", line 445, in send
timeout=timeout
File "/anaconda3/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/anaconda3/lib/python3.6/site-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='accounts.google.com', port=443): Max retries exceeded with url: /o/oauth2/token (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x113b840b8>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/google/auth/transport/requests.py", line 120, in __call__
**kwargs)
File "/anaconda3/lib/python3.6/site-packages/requests/sessions.py", line 512, in request
resp = self.send(prep, **send_kwargs)
File "/anaconda3/lib/python3.6/site-packages/requests/sessions.py", line 622, in send
r = adapter.send(request, **kwargs)
File "/anaconda3/lib/python3.6/site-packages/requests/adapters.py", line 513, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='accounts.google.com', port=443): Max retries exceeded with url: /o/oauth2/token (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x113b840b8>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/anaconda3/lib/python3.6/site-packages/grpc/_plugin_wrapping.py", line 77, in __call__
callback_state, callback))
File "/anaconda3/lib/python3.6/site-packages/google/auth/transport/grpc.py", line 77, in __call__
callback(self._get_authorization_headers(context), None)
File "/anaconda3/lib/python3.6/site-packages/google/auth/transport/grpc.py", line 65, in _get_authorization_headers
headers)
File "/anaconda3/lib/python3.6/site-packages/google/auth/credentials.py", line 122, in before_request
self.refresh(request)
File "/anaconda3/lib/python3.6/site-packages/google/oauth2/service_account.py", line 322, in refresh
request, self._token_uri, assertion)
File "/anaconda3/lib/python3.6/site-packages/google/oauth2/_client.py", line 145, in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
File "/anaconda3/lib/python3.6/site-packages/google/oauth2/_client.py", line 106, in _token_endpoint_request
method='POST', url=token_uri, headers=headers, body=body)
File "/anaconda3/lib/python3.6/site-packages/google/auth/transport/requests.py", line 124, in __call__
six.raise_from(new_exc, caught_exc)
File "<string>", line 3, in raise_from
google.auth.exceptions.TransportError: HTTPSConnectionPool(host='accounts.google.com', port=443): Max retries exceeded with url: /o/oauth2/token (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x113b840b8>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known',))
ERROR:root:AuthMetadataPluginCallback "<google.auth.transport.grpc.AuthMetadataPlugin object at 0x113b76588>" raised exception!
[... the same socket.gaierror traceback as above repeats ...]
The error repeats itself, then the sentiment analysis runs on a file, then the error shows up again multiple times, then another file is processed, and finally everything stops with a RendezVous error (I forgot to capture that message). What I'm wondering is: how can the code work on some set of files for a while, throw error messages, work a little more, throw errors again, and then stop working completely after a point?
I reran the code, only to find that it returns socket.gaierror after some random number of files in a folder. So one can say with a reasonable level of confidence that the file contents are not the issue.
EDIT1: The files are simply .txt files that have words in them.
Can someone help me resolve this? I can also assure you that all the text across the 680 files amounts to a total of 1400 requests; I've been meticulous in calculating this based on the definition of a request in the Cloud Natural Language API, so I am WELL within my limits.
EDIT2: I've tried sleep(10), which seems to work fine for a while, but then the errors start again.
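A more systematic version of that sleep is to retry each request with exponential backoff whenever a transient network error occurs. This is only a sketch; the exception types, attempt count, and delays are assumptions, not values verified against the API:

```python
import socket
import time

def with_retries(func, *args, max_attempts=5, base_delay=2, **kwargs):
    """Call func(*args, **kwargs), retrying on transient network errors.

    Waits base_delay seconds after the first failure, then 2x, 4x, ...
    before each subsequent attempt; re-raises after the last attempt.
    """
    for attempt in range(max_attempts):
        try:
            return func(*args, **kwargs)
        except (socket.gaierror, ConnectionError, OSError):
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))
```

Calling `with_retries(analyze, file)` in the main loop instead of `analyze(file)` would then absorb an occasional DNS hiccup instead of crashing the whole run.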
I figured it out. Instead of reading in all 600 files at once, read them in batches of 50 files (create 12 folders with 50 files each) and manually rerun the code every time it finishes a folder. I'm not sure WHY this seems to work, but it just works.
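For what it's worth, the same batching can be automated instead of splitting files into folders by hand: process the path list in slices and pause between slices. The batch size and pause length below are guesses, not tuned values:

```python
import time

def analyze_in_batches(paths, analyze, batch_size=50, pause=30):
    """Run analyze(path) over paths in slices of batch_size,
    sleeping `pause` seconds between slices."""
    for start in range(0, len(paths), batch_size):
        for path in paths[start:start + batch_size]:
            analyze(path)
        time.sleep(pause)  # give the connection/quota time to recover

# usage (with the analyze() function from the question):
# import glob
# analyze_in_batches(glob.glob("/Users/abhi/Desktop/mytxtfilepath/*.txt"), analyze)
```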
