urllib [Errno 11001] getaddrinfo failed with Windows proxy

I'm running Django 3.2 with django-tenants on a Windows local dev environment.
In my windows hosts file I have:
127.0.0.1 *.localhost
...so that I am able to use subdomains with django-tenants. E.g. http://mysub.localhost:8000.
When running ./manage.py runserver, the dev server works perfectly. However, when I try to execute urlopen in my code I get an error:
>>> html = urlopen('http://mysub.localhost:8000')
Traceback (most recent call last):
[...]
urllib.error.URLError: <urlopen error [Errno 11001] getaddrinfo failed>
As far as I can tell the error is due to the proxy settings on my Windows machine (this does not fail in production), but I am unsure how to resolve it.
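If the Windows proxy settings really are the culprit, as suspected above, one hedged workaround is to build an opener that bypasses any configured proxy for this one call. A minimal sketch, assuming the hosts entry makes mysub.localhost resolve locally:
from urllib.request import ProxyHandler, build_opener
# An empty ProxyHandler ignores any system/environment proxy settings,
# so the request goes straight to the local dev server.
opener = build_opener(ProxyHandler({}))
html = opener.open('http://mysub.localhost:8000').read()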

Related

MLflow: Registering a model remotely doesn't work while running locally inside the Azure VM does

I have been having issues trying to connect to an MLflow server I created on an Azure VM using the following tutorial:
https://medium.com/swlh/how-to-setup-mlflow-on-azure-5ba67c178e7d
Running the following script on the server works fine, but running the same script remotely gives an error.
Is there anyone around here who has experience deploying MLflow to Azure?
The script (IP address intentionally censored):
from sklearn.ensemble import RandomForestRegressor
import mlflow
import mlflow.sklearn
mlflow.set_tracking_uri("http://xx.xxx.xx.xxx:5000/")
mlflow.set_registry_uri("http://xx.xxx.xx.xxx:5000/")
mlflow.set_experiment("test experiment4")
with mlflow.start_run(run_name="YOUR_RUN_NAME") as run:
    sk_learn_rfr = RandomForestRegressor()
    mlflow.sklearn.log_model(sk_model=sk_learn_rfr, artifact_path="sklearn-model_local",
                             registered_model_name="sk-learn-random-forest-reg-model")
Error:
File "C:\Users\JasperBusschers\PycharmProjects\mlflow\venv\lib\site-packages\azure\core\pipeline\_base.py", line 103, in send
self._sender.send(request.http_request, **request.context.options),
File "C:\Users\JasperBusschers\PycharmProjects\mlflow\venv\lib\site-packages\azure\storage\blob\_shared\base_client.py", line 333, in send
return self._transport.send(request, **kwargs)
File "C:\Users\JasperBusschers\PycharmProjects\mlflow\venv\lib\site-packages\azure\storage\blob\_shared\base_client.py", line 333, in send
return self._transport.send(request, **kwargs)
File "C:\Users\JasperBusschers\PycharmProjects\mlflow\venv\lib\site-packages\azure\core\pipeline\transport\_requests_basic.py", line 361, in send
raise error
azure.core.exceptions.ServiceRequestError: <urllib3.connection.HTTPSConnection object at 0x000002B063025AC0>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed
I also tried connecting with a plain socket and got the same error:
import socket
s = socket.socket()
s.connect(('http://20.XXX.XX.XXX', 5000))
Traceback (most recent call last):
File "<input>", line 4, in <module>
socket.gaierror: [Errno 11001] getaddrinfo failed
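As a side note on the socket test above (an observation, not from the original post): socket.connect() expects a bare host name or IP plus a port, not a URL, so the 'http://' prefix alone is enough to make getaddrinfo fail. A corrected sketch, with the IP still censored as in the question:
import socket
# Pass the host without a URL scheme; getaddrinfo cannot resolve a string
# like 'http://20.XXX.XX.XXX'.
s = socket.socket()
s.connect(('20.XXX.XX.XXX', 5000))  # substitute the real, uncensored address
s.close()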

Connect TCP socket failed with: 111 (no connection between Python on Linux and TwinCAT on Windows)

I have been trying to get a connection between TwinCAT 3 on Windows and Python on Ubuntu. I already have the connection between TwinCAT 3 on Windows and Python on Windows working, but not to Ubuntu. I have a virtual machine set up through Oracle VM VirtualBox. I have tried many things but so far had no success in creating the connection.
I am using a bridged adapter network and tried to open the port on the virtual machine's IP address in Linux through sudo ufw allow.
I have the following code:
pyads.open_port()
pyads.add_route('10.11.104.206.1.1', '127.0.0.1')
pyads.close_port()

plc = pyads.Connection('10.11.104.206.1.1', 851)
plc.open()
try:
    # try to connect to PLC
    plc.read_state()
    print('Connection succeeded')
except Exception:
    print('Connection failed')
And this is the error I get:
2020-11-22T22:45:46+0100 Error: Connect TCP socket failed with: 111
Traceback (most recent call last):
File "/home/laurence/ws_moveit/devel/lib/moveit_tutorials/move_panda_LKO.py", line 15, in <module>
exec(compile(fh.read(), python_script, 'exec'), context)
File "/home/laurence/ws_moveit/src/moveit_tutorials/doc/move_panda_LKO/scripts/move_panda_LKO.py", line 64, in <module>
pyads.add_route('10.11.104.206.1.1','127.0.0.1')
File "/usr/local/lib/python3.8/dist-packages/pyads/ads.py", line 188, in add_route
return adsAddRoute(adr.netIdStruct(), ip_address)
File "/usr/local/lib/python3.8/dist-packages/pyads/pyads_ex.py", line 155, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/pyads/pyads_ex.py", line 177, in adsAddRoute
raise ADSError(error_code)
pyads.pyads_ex.ADSError: ADSError: target port not found ADS Server not started (6).
These are the NetIds/IP addresses:

Name           IP address        AMS NetId            Transport
CX-52EE70      169.254.64.202    5.82.238.112.1.1     TCP_IP
LEENLAPTOP19   127.0.0.1         10.11.104.206.1.1    TCP_IP
I have tried combinations with other NetIds/IP addresses, so sometimes I get other errors (110, 113), but usually 111, which means connection refused. I do not know what I am doing wrong. Any ideas?
Please make sure the PLC runtime is running (a PLC program) when you connect. If the PLC is in config or exception mode, the PLC runtime ADS port (851, or 801 for TC2) is not present. That is what ADS error 6, target port not found, is trying to tell us.
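Building on that answer, a short sketch of how the RUN-state check can be made explicit with pyads (assumptions: the route already exists and the NetId and port from the question are correct; the ADSSTATE_RUN constant lives in pyads.constants):
import pyads
from pyads import constants

plc = pyads.Connection('10.11.104.206.1.1', 851)  # 851 = TwinCAT 3 PLC runtime port
plc.open()
try:
    ads_state, device_state = plc.read_state()
    if ads_state == constants.ADSSTATE_RUN:
        print('PLC runtime is in RUN')
    else:
        print('PLC is not in RUN mode, ADS state:', ads_state)
finally:
    plc.close()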

"ConnectionRefusedError: [WinError 10061] No connection could be made" When Trying To Connect To A Server

I am trying to run this code using sockets:
address = ip = input('>>>')
if ip == '0':
    address = (socket.gethostname(), 65535)
s.connect(address)
And it gives me this error:
Traceback (most recent call last):
File "C:/Users/*****/Desktop/Lightspace/Lightspace.py", line 61, in startupgame
s.connect(address)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
Why is it doing that? I have checked the server console and everything looks fine, so what is causing this client-side issue?
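WinError 10061 normally means nothing is accepting connections on the target host and port, so the first thing to verify is that the server is actually listening on the same (host, port) pair the client builds. A self-contained sketch using the question's values (socket.gethostname() and port 65535):
import socket

host, port = socket.gethostname(), 65535  # same pair the code builds for ip == '0'

# connect() only succeeds because this listener exists; without it the
# client raises ConnectionRefusedError / WinError 10061.
server = socket.socket()
server.bind((host, port))
server.listen(1)

client = socket.socket()
client.connect((host, port))
print('connected')

client.close()
server.close()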

Cannot load any datasets from tf.keras, gives error [WinError 10054] An existing connection was forcibly closed by the remote host

OS: Windows 10
tensorflow and keras successfully imported, Python 3.7.9
>>> tf.__version__
'2.1.0'
>>> keras.__version__
'2.2.4-tf'
Problem
Trying to load any dataset available in tf.keras, such as:
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()
gives this error:
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
During handling of the above exception, another exception occurred:
.
.
.
URLError: <urlopen error [WinError 10054] An existing connection was forcibly closed by the remote host>
During handling of the above exception, another exception occurred:
.
.
.
Exception: URL fetch failure on https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-
idx1-ubyte.gz: None -- [WinError 10054] An existing connection was forcibly closed by the remote host
The three dots stand in for a long run of traceback lines omitted here.
Does anyone know how to solve this? I've been looking for possible solutions, but the closest I can find address certificate/verification issues; I thought mine was about the URL.
I know the workaround is to download the dataset from Kaggle etc., but I want to know what causes this. Thanks.
EDIT: it's not a URL problem. I am unable to access https://storage.googleapis.com using IDM, but the files can be downloaded directly in a browser, so I guess it's a security issue.
Finally, after 5 hours of reading here and there...
Please check the solution by CRLannister here: https://github.com/tensorflow/tensorflow/issues/33285
What it doesn't mention is where data_utils.py is located in the case of Windows and an Anaconda environment. It's located here:
~\Anaconda3\envs\*your_env*\Lib\site-packages\tensorflow_core\python\keras\utils\data_utils.py
Just add the following after all the import statements:
import requests
requests.packages.urllib3.disable_warnings()
import ssl

try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    # Legacy Python that doesn't verify HTTPS certificates by default
    pass
else:
    # Handle target environment that doesn't support HTTPS verification
    ssl._create_default_https_context = _create_unverified_https_context
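An alternative sketch (not from the linked issue): apply the same unverified-HTTPS-context workaround in your own script before calling load_data(), instead of patching data_utils.py. This assumes the download failure really is caused by certificate verification, as the edit above suggests:
import ssl
import tensorflow as tf

# Keras downloads datasets through urllib, which honours this default context;
# disabling verification is a workaround, not a general recommendation.
ssl._create_default_https_context = ssl._create_unverified_context

(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()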

Docker Cassandra with Python gives connection error

I installed Docker, ran the first Cassandra node, and used cqlsh to run some commands, which works fine. I then installed the Python driver, but when I run the commands below I get the following error. I have seen many Stack Overflow questions about this and not many people were able to answer. Please share your ideas. I have been wanting to use Cassandra for a while but could never come up with a good solution to this problem. Thanks.
>>> from cassandra.cluster import Cluster
>>> cluster=Cluster()
>>> keyspace='north'
>>> session=Cluster(['192.168.1.xx']).connect()
Error:
cassandra.cluster.NoHostAvailable: ('Unable to connect to any servers', {'192.168.1.xx': ConnectionRefusedError(111, "Tried connecting to [('192.168.1.xx', 9042)]. Last error: Connection refused")})
When I tried to replace the IP address with the name of the Cassandra node I created ('node1' in my case), it gives me this error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "cassandra/cluster.py", line 826, in cassandra.cluster.Cluster.__init__
File "/usr/lib/python3.5/socket.py", line 732, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -5] No address associated with hostname
I actually solved this by using the container's IP address inside Docker. I was quite confused about which address I should give, but then I ran this command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
I was not aware that I needed to specify the ID of the container running the cluster node, so I was always giving the IP address of the machine.
The IP address you have provided is not valid: 192.168.1.xx.
You need to provide the IP address (or a valid hostname) of at least one node in your cluster.
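A minimal sketch of the approach from the self-answer above, assuming docker inspect reported a hypothetical container address of 172.17.0.2:
from cassandra.cluster import Cluster

# Hypothetical container IP taken from `docker inspect`; replace with the
# address reported for your own Cassandra container.
cluster = Cluster(['172.17.0.2'], port=9042)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())
cluster.shutdown()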
