I'm trying to get events from Google Calendar with Python, using the Google quickstart example. The browser opens to confirm my credentials, but then on my PC I get the following error:
Traceback (most recent call last):
File "C:\Programs\Python\Python35-32\lib\site-packages\httplib2\__init__.py", line 987, in _conn_request
conn.connect()
File "C:\Programs\Python\Python35-32\lib\http\client.py", line 1252, in connect
super().connect()
File "C:\Programs\Python\Python35-32\lib\http\client.py", line 849, in connect
(self.host,self.port), self.timeout, self.source_address)
File "C:\Programs\Python\Python35-32\lib\socket.py", line 693, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "C:\Programs\Python\Python35-32\lib\socket.py", line 732, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11002] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\myUser\Documents\Python_projects\quickstart.py", line 79, in <module>
main()
File "C:\Users\myUser\Documents\Python_projects\quickstart.py", line 60, in main
credentials = get_credentials()
File "C:\Users\myUser\Documents\Python_projects\quickstart.py", line 48, in get_credentials
credentials = tools.run_flow(flow, store, flags)
File "C:\Programs\Python\Python35-32\lib\site-packages\oauth2client\util.py", line 137, in positional_wrapper
return wrapped(*args, **kwargs)
File "C:\Programs\Python\Python35-32\lib\site-packages\oauth2client\tools.py", line 243, in run_flow
credential = flow.step2_exchange(code, http=http)
File "C:\Programs\Python\Python35-32\lib\site-packages\oauth2client\util.py", line 137, in positional_wrapper
return wrapped(*args, **kwargs)
File "C:\Programs\Python\Python35-32\lib\site-packages\oauth2client\client.py", line 2027, in step2_exchange
headers=headers)
File "C:\Programs\Python\Python35-32\lib\site-packages\httplib2\__init__.py", line 1314, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "C:\Programs\Python\Python35-32\lib\site-packages\httplib2\__init__.py", line 1064, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\Programs\Python\Python35-32\lib\site-packages\httplib2\__init__.py", line 994, in _conn_request
raise ServerNotFoundError("Unable to find the server at %s" % conn.host)
httplib2.ServerNotFoundError: Unable to find the server at accounts.google.com
I suppose the problem is due to my proxy, but where can I set the proxy settings?
Thanks in advance
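In case it helps anyone landing here: oauth2client uses httplib2 under the hood, and httplib2 accepts a ProxyInfo object. A minimal, untested sketch, assuming an HTTP proxy at proxy.example.com:8080 (hypothetical host and port) and the flow, store, and flags names from the quickstart:
import httplib2
from oauth2client import tools

# Hypothetical proxy host/port -- substitute your own.
proxy_info = httplib2.ProxyInfo(
    proxy_type=httplib2.socks.PROXY_TYPE_HTTP,
    proxy_host='proxy.example.com',
    proxy_port=8080,
)

# run_flow accepts an http argument, so the token exchange with
# accounts.google.com should go through the proxy.
http = httplib2.Http(proxy_info=proxy_info)
credentials = tools.run_flow(flow, store, flags, http=http)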
Related
I have a series of AWS Lambdas that are fed from SQS queue event triggers. However, sometimes when I try to delete the message from the queue, the attempt times out over and over again until my Lambda timeout occurs.
I enabled DEBUG logging, which confirmed it was a socket timeout, but I don't get any further details beyond that. The failures also appear to be irregular. At first I thought it was a Lambda warm-up issue, but I've seen the problem after running the Lambda multiple times successfully, and on a first deploy.
What I've tried so far:
I thought maybe using a Boto client vs a Boto resource was the problem, but I saw the same result with both methods.
I've raised the connection and read timeouts above the defaults; however, the connection just retries under the hood with Boto's retry logic.
I've tried lowering the connection timeout, but that just means more retries before the Lambda timeout.
I've tried both standard and FIFO queue types; both have the same problem.
A couple of other details:
Python v3.8.5
Boto3 v1.16.1
My SQS settings are set for a 5-second delay and a 120-second visibility timeout
My lambda timeout is 120 seconds.
Snippet of the code that I'm using:
import boto3
from botocore.config import Config

config = Config(connect_timeout=30, read_timeout=30, retries={'total_max_attempts': 1}, region_name='us-east-1')
sqs_client = boto3.client(service_name='sqs', config=config)

receiptHandle = event['Records'][0]['receiptHandle']
eventSourceARN = event['Records'][0]['eventSourceARN']
fromQueueName = eventSourceARN.split(':')[-1]  # the queue name is the last segment of the ARN
fromQueue = sqs_client.get_queue_url(QueueName=fromQueueName)  # this call is where the timeout surfaces
fromQueueUrl = fromQueue['QueueUrl']
messageDelete = sqs_client.delete_message(QueueUrl=fromQueueUrl, ReceiptHandle=receiptHandle)
And an example of the DEBUG exception I'm seeing:
[DEBUG] 2020-10-29T21:27:28.32Z 3c60cac9-6d99-58c6-84c9-92dc581919fd retry needed, retryable exception caught: Connect timeout on endpoint URL: "https://queue.amazonaws.com/"
Traceback (most recent call last):
File "/var/task/urllib3/connection.py", line 159, in _new_conn
conn = connection.create_connection(
File "/var/task/urllib3/util/connection.py", line 84, in create_connection
raise err
File "/var/task/urllib3/util/connection.py", line 74, in create_connection
sock.connect(sa)
socket.timeout: timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/python/botocore/httpsession.py", line 254, in send
urllib_response = conn.urlopen(
File "/var/task/urllib3/connectionpool.py", line 726, in urlopen
retries = retries.increment(
File "/var/task/urllib3/util/retry.py", line 386, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/var/task/urllib3/packages/six.py", line 735, in reraise
raise value
File "/var/task/urllib3/connectionpool.py", line 670, in urlopen
httplib_response = self._make_request(
File "/var/task/urllib3/connectionpool.py", line 381, in _make_request
self._validate_conn(conn)
File "/var/task/urllib3/connectionpool.py", line 978, in _validate_conn
conn.connect()
File "/var/task/urllib3/connection.py", line 309, in connect
conn = self._new_conn()
File "/var/task/urllib3/connection.py", line 164, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<botocore.awsrequest.AWSHTTPSConnection object at 0x7f27b56b7460>, 'Connection to queue.amazonaws.com timed out. (connect timeout=15)')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/python/utils.py", line 79, in preflight_check
fromQueue = sqs_client.get_queue_url(QueueName=fromQueueName)
File "/opt/python/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/opt/python/botocore/client.py", line 662, in _make_api_call
http, parsed_response = self._make_request(
File "/opt/python/botocore/client.py", line 682, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/opt/python/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/opt/python/botocore/endpoint.py", line 136, in _send_request
while self._needs_retry(attempts, operation_model, request_dict,
File "/opt/python/botocore/endpoint.py", line 253, in _needs_retry
responses = self._event_emitter.emit(
File "/opt/python/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/opt/python/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/opt/python/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/opt/python/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/opt/python/botocore/retryhandler.py", line 250, in __call__
should_retry = self._should_retry(attempt_number, response,
File "/opt/python/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/opt/python/botocore/retryhandler.py", line 316, in __call__
checker_response = checker(attempt_number, response,
File "/opt/python/botocore/retryhandler.py", line 222, in __call__
return self._check_caught_exception(
File "/opt/python/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/opt/python/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/opt/python/botocore/endpoint.py", line 269, in _send
return self.http_session.send(request)
File "/opt/python/botocore/httpsession.py", line 287, in send
raise ConnectTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://queue.amazonaws.com/"
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/python/botocore/retryhandler.py", line 269, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/opt/python/botocore/retryhandler.py", line 316, in __call__
checker_response = checker(attempt_number, response,
File "/opt/python/botocore/retryhandler.py", line 222, in __call__
return self._check_caught_exception(
File "/opt/python/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/opt/python/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/opt/python/botocore/endpoint.py", line 269, in _send
return self.http_session.send(request)
File "/opt/python/botocore/httpsession.py", line 287, in send
raise ConnectTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://queue.amazonaws.com/"
Based on the comments.
The SQS timeout was caused by the fact that the Lambda function was associated with a VPC, and the VPC had no SQS VPC interface endpoint. Without the endpoint or a NAT gateway, the function is not able to connect to SQS.
The solution was to add the VPC interface endpoint for the SQS service.
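For anyone looking for the concrete call, a minimal sketch of creating that endpoint with boto3 (the VPC, subnet, and security group IDs below are placeholders; substitute the ones your Lambda uses):
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
ec2.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='vpc-0123456789abcdef0',             # placeholder: the Lambda's VPC
    ServiceName='com.amazonaws.us-east-1.sqs',
    SubnetIds=['subnet-0123456789abcdef0'],    # placeholder: the Lambda's subnets
    SecurityGroupIds=['sg-0123456789abcdef0'], # placeholder: must allow HTTPS from the Lambda
    PrivateDnsEnabled=True,  # lets the default queue.amazonaws.com hostname resolve to the endpoint
)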
I used nginx with proxy_pass to put an MLflow server behind a reverse proxy, with simple HTTP auth handled in nginx. However, after the experiment had been running for a while, the MLflow client hit this exception, and I have no idea how to fix it.
Here is the error messages:
Traceback (most recent call last):
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connection.py", line 159, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/util/connection.py", line 80, in create_connection
raise err
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/util/connection.py", line 70, in create_connection
sock.connect(sa)
TimeoutError: [Errno 60] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 354, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1026, in _send_output
self.send(msg)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 964, in send
self.connect()
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connection.py", line 181, in connect
conn = self._new_conn()
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connection.py", line 168, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x1280a8438>: Failed to establish a new connection: [Errno 60] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/util/retry.py", line 398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host=<host_ip>, port=<port>): Max retries exceeded with url: /api/2.0/mlflow/experiments/get-by-name?experiment_name=<exp_name> (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x1280a8438>: Failed to establish a new connection: [Errno 60] Operation timed out',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "tmp_experiment_entry.py", line 4, in <module>
mlflow.set_experiment(<exp_name>)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mlflow/tracking/fluent.py", line 47, in set_experiment
experiment = client.get_experiment_by_name(experiment_name)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mlflow/tracking/client.py", line 151, in get_experiment_by_name
return self._tracking_client.get_experiment_by_name(name)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mlflow/tracking/_tracking_service/client.py", line 114, in get_experiment_by_name
return self.store.get_experiment_by_name(name)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mlflow/store/tracking/rest_store.py", line 219, in get_experiment_by_name
response_proto = self._call_endpoint(GetExperimentByName, req_body)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mlflow/store/tracking/rest_store.py", line 32, in _call_endpoint
return call_endpoint(self.get_host_creds(), endpoint, method, json_body, response_proto)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mlflow/utils/rest_utils.py", line 133, in call_endpoint
host_creds=host_creds, endpoint=endpoint, method=method, params=json_body)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mlflow/utils/rest_utils.py", line 70, in http_request
url=url, headers=headers, verify=verify, **kwargs)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/mlflow/utils/rest_utils.py", line 51, in request_with_ratelimit_retries
response = requests.request(**kwargs)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/local/opt/python/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host=<host_ip>, port=<port>): Max retries exceeded with url: /api/2.0/mlflow/experiments/get-by-name?experiment_name=<exp_name> (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x1280a8438>: Failed to establish a new connection: [Errno 60] Operation timed out',))
On the client, I log with MLflow in the following form, calling log_params and log_metrics inside the main function:
with mlflow.start_run():
    main(params)
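For reference, the full client-side configuration looks roughly like this; a minimal sketch assuming MLflow's standard basic-auth environment variables, with the same <host_ip>, <port>, and <exp_name> placeholders as in the traceback and the main/params names from the snippet above:
import os
import mlflow

# Placeholders for the nginx basic-auth credentials and the server address.
os.environ['MLFLOW_TRACKING_USERNAME'] = '<user>'
os.environ['MLFLOW_TRACKING_PASSWORD'] = '<password>'
mlflow.set_tracking_uri('http://<host_ip>:<port>')
mlflow.set_experiment('<exp_name>')

with mlflow.start_run():
    main(params)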
I have been trying to use requests-html in a venv (Python 3.7.0, macOS 10.15.1), but I am running into a certificate issue (I'm not behind any proxy/firewall).
The main call is:
from requests_html import HTMLSession
sessao = HTMLSession()
r1 = sessao.get(url=url_inicio)
The exception is raised while running the GET method, like this:
/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/bin/python "/Users/ricardobarroslourenco/Library/Application Support/JetBrains/Toolbox/apps/PyCharm-P/ch-0/192.6817.19/PyCharm.app/Contents/helpers/pydev/pydevd.py" --multiproc --qt-support=auto --client 127.0.0.1 --port 50377 --file /Users/ricardobarroslourenco/PycharmProjects/zarc/zarc_scraper/main.py
pydev debugger: process 9369 is connecting
Connected to pydev debugger (build 192.6817.19)
[W:pyppeteer.chromium_downloader] start chromium download.
Download may take a few minutes.
Traceback (most recent call last):
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 672, in urlopen
chunked=chunked,
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 376, in _make_request
self._validate_conn(conn)
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 994, in _validate_conn
conn.connect()
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/connection.py", line 394, in connect
ssl_context=context,
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/util/ssl_.py", line 370, in ssl_wrap_socket
return context.wrap_socket(sock, server_hostname=server_hostname)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 412, in wrap_socket
session=session
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 850, in _create
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ssl.py", line 1108, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1045)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/requests_html.py", line 714, in browser
self._browser = await pyppeteer.launch(ignoreHTTPSErrors=not(self.verify), headless=True, args=self.__browser_args)
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/pyppeteer/launcher.py", line 311, in launch
return await Launcher(options, **kwargs).launch()
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/pyppeteer/launcher.py", line 125, in __init__
download_chromium()
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/pyppeteer/chromium_downloader.py", line 136, in download_chromium
extract_zip(download_zip(get_url()), DOWNLOADS_FOLDER / REVISION)
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/pyppeteer/chromium_downloader.py", line 78, in download_zip
data = http.request('GET', url, preload_content=False)
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/request.py", line 76, in request
method, url, fields=fields, headers=headers, **urlopen_kw
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/request.py", line 97, in request_encode_url
return self.urlopen(method, url, **extra_kw)
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/poolmanager.py", line 330, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 760, in urlopen
**response_kw
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 760, in urlopen
**response_kw
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 760, in urlopen
**response_kw
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/connectionpool.py", line 720, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/Users/ricardobarroslourenco/PycharmProjects/zarc/venv/lib/python3.7/site-packages/urllib3/util/retry.py", line 436, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='storage.googleapis.com', port=443): Max retries exceeded with url: /chromium-browser-snapshots/Mac/575458/chrome-mac.zip (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1045)')))
Any hints on how to solve this issue? The idea is to scrape some websites whose cookies are generated with JavaScript; requests-html supposedly solves the rendering problem that the regular requests package has.
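Not a definitive fix, but the macOS framework builds of Python from python.org don't use the system keychain for root certificates, so a common workaround is to run the bundled "Install Certificates.command" script, or to point Python at certifi's CA bundle before anything opens a connection. A minimal sketch, assuming certifi is installed in the venv and using the url_inicio name from the question:
import os
import certifi

# Point both the ssl module and requests/urllib3 at certifi's CA bundle.
os.environ['SSL_CERT_FILE'] = certifi.where()
os.environ['REQUESTS_CA_BUNDLE'] = certifi.where()

from requests_html import HTMLSession

sessao = HTMLSession()
r1 = sessao.get(url=url_inicio)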
Recently I have come back to some code of mine that used to work perfectly fine with the weather-api module (https://pypi.org/project/weather-api/). However, now it just spits out a long error that I'm not sure what to do with.
I have traced the error back to weather.py and tried artificially slowing down the request rate with time.sleep(), but to no avail.
from weather import Weather, Unit
weather = Weather(unit=Unit.CELSIUS)
location = weather.lookup_by_location('London')
This gives the error:
Traceback (most recent call last):
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\urllib3\connection.py", line 141, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\urllib3\util\connection.py", line 60, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "D:\Program Files\Python\lib\socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\urllib3\connectionpool.py", line 601, in urlopen
chunked=chunked)
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\urllib3\connectionpool.py", line 357, in _make_request
conn.request(method, url, **httplib_request_kw)
File "D:\Program Files\Python\lib\http\client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "D:\Program Files\Python\lib\http\client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "D:\Program Files\Python\lib\http\client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "D:\Program Files\Python\lib\http\client.py", line 1026, in _send_output
self.send(msg)
File "D:\Program Files\Python\lib\http\client.py", line 964, in send
self.connect()
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\urllib3\connection.py", line 166, in connect
conn = self._new_conn()
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\urllib3\connection.py", line 150, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x0398B1B0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\requests\adapters.py", line 440, in send
timeout=timeout
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\urllib3\connectionpool.py", line 639, in urlopen
_stacktrace=sys.exc_info()[2])
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\urllib3\util\retry.py", line 388, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='query.yahooapis.com', port=80): Max retries exceeded with url: /v1/public/yql?q=select*%20from%20weather.forecast%20where%20woeid%20in%20(select%20woeid%20from%20geo.places(1)%20where%20text='London')%20and%20u='c'%20&format=json (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0398B1B0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<pyshell#14>", line 1, in <module>
location = weather.lookup_by_location('London')
File "D:\Program Files\Python\lib\site-packages\weather\weather.py", line 27, in lookup_by_location
self.URL, location, self.unit)
File "D:\Program Files\Python\lib\site-packages\weather\weather.py", line 38, in _call
def _call(self, url):
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\requests\api.py", line 72, in get
return request('get', url, params=params, **kwargs)
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\requests\api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\requests\sessions.py", line 508, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\requests\sessions.py", line 618, in send
r = adapter.send(request, **kwargs)
File "C:\Users\Me\AppData\Roaming\Python\Python36\site-packages\requests\adapters.py", line 508, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='query.yahooapis.com', port=80): Max retries exceeded with url: /v1/public/yql?q=select*%20from%20weather.forecast%20where%20woeid%20in%20(select%20woeid%20from%20geo.places(1)%20where%20text='London')%20and%20u='c'%20&format=json (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0398B1B0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed',))
Trying to deconstruct the error, I get to D:\Program Files\Python\Lib\site-packages\weather\weather.py at line 38 (in the _call() function), which looks like this:
def _call(self, url):
    req = requests.get(url)
    print('here')  # my addition to the code. This is never reached.
    if self.log:
        self.logger.info("Request URL: %s" % req.url)
        self.logger.info("Status Code: %s" % req.status_code)
        self.logger.info("JSON Response: %s" % req.content)
    if not req.ok:
        req.raise_for_status()
    try:
        results = req.json()
        if self.log:
            self.logger.info(results)
        if int(results['query']['count']) > 0:
            wo = WeatherObject(results['query']['results']['channel'])
            return wo
        else:
            if self.log:
                self.logger.warn("No results found: %s " % results)
            return
    except Exception as e:
        self.logger.warn(e)
        self.logger.warn(req.content)
        sys.exit(0)
I don't know why the requests module would cause this error. Does anyone know a solution?
Expected outcome: a class object containing data from Yahoo Weather that can be read as a string with location.text.
Actual outcome: an error :/
Yahoo's weather API is deprecated! Who would've thought.
(Meaning that requests.get(url) was being run against a Yahoo URL that no longer works, so it could never yield the correct results.)
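You can confirm the root cause without requests at all: the old endpoint's hostname no longer resolves, which is exactly the getaddrinfo failure at the bottom of the traceback. A quick check, using the same query.yahooapis.com host from the error:
import socket

try:
    socket.getaddrinfo('query.yahooapis.com', 80)
except socket.gaierror as e:
    print(e)  # getaddrinfo fails because the hostname no longer resolves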
When I use the requests.get() function in Python 3 with the following commands:
import requests
res = requests.get('http://www.gutenberg.org/cache/epub/1112/pg1112.txt')
Python 3 throws the following error:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 557, in urlopen
body=body, headers=headers)
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 351, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python3.4/http/client.py", line 1137, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python3.4/http/client.py", line 1182, in _send_request
self.endheaders(body)
File "/usr/lib/python3.4/http/client.py", line 1133, in endheaders
self._send_output(message_body)
File "/usr/lib/python3.4/http/client.py", line 963, in _send_output
self.send(msg)
File "/usr/lib/python3.4/http/client.py", line 898, in send
self.connect()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 155, in connect
conn = self._new_conn()
File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 134, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 90, in create_connection
raise err
File "/usr/lib/python3/dist-packages/urllib3/util/connection.py", line 80, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 370, in send
timeout=timeout
File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 607, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/lib/python3/dist-packages/urllib3/util/retry.py", line 271, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
requests.packages.urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='192.168.15.2', port=8000): Max retries exceeded with url: http://www.gutenberg.org/cache/epub/1112/pg1112.txt (Caused by ProxyError('Cannot connect to proxy.', TimeoutError(110, 'Connection timed out')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/requests/api.py", line 69, in get
return request('get', url, params=params, **kwargs)
File "/usr/lib/python3/dist-packages/requests/api.py", line 50, in request
response = session.request(method=method, url=url, **kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 465, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python3/dist-packages/requests/sessions.py", line 573, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python3/dist-packages/requests/adapters.py", line 424, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='192.168.15.2', port=8000): Max retries exceeded with url: http://www.gutenberg.org/cache/epub/1112/pg1112.txt (Caused by ProxyError('Cannot connect to proxy.', TimeoutError(110, 'Connection timed out')))
As far as I can tell, it says there is no internet connection, but my internet is working fine. So why is Python throwing this error?
You can increase the timeout with (in seconds):
requests.get('http://www.gutenberg.org/cache/epub/1112/pg1112.txt', timeout=30)
Found the answer with alpbert's help and this thread: Proxies with Python 'Requests' module.
I don't have a proxy, but Python was still trying to use one. So I created a proxies dict:
proxies = {'http': ''}
Then this command worked:
res = requests.get('http://www.gutenberg.org/cache/epub/1112/pg1112.txt',proxies=proxies)
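For what it's worth, requests picks proxies up from environment variables such as http_proxy/HTTP_PROXY, which is usually where a phantom proxy like 192.168.15.2:8000 comes from. An alternative sketch that ignores environment proxies for a whole session:
import requests

session = requests.Session()
session.trust_env = False  # don't read proxy settings from the environment
res = session.get('http://www.gutenberg.org/cache/epub/1112/pg1112.txt')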