I have a very strange error.
If I try to download one model, I get:
python3.6/site-packages/urllib3/contrib/pyopenssl.py in recv_into(self, *args, **kwargs)
303 try:
--> 304 return self.connection.recv_into(*args, **kwargs)
305 except OpenSSL.SSL.SysCallError as e:
SSLError: ("read error: Error([('SSL routines', 'ssl3_get_record', 'decryption failed or bad record mac')],)",)
But if I download another model in the same workspace, it downloads normally.
import os
from azureml.core.model import Model

# ws is an existing Workspace object
model = Model(ws, 'model1')
model.download(target_dir=os.getcwd() + '/outputs/1/', exist_ok=True)
# this downloads normally

model = Model(ws, 'model2')
model.download(target_dir=os.getcwd() + '/outputs/2/', exist_ok=True)
# this gives me an SSL error
Some points:
This model downloaded fine before, but suddenly it won't download anymore.
My network is probably not the problem, because otherwise the first model wouldn't download either.
This is indeed odd. I assume this reproduces consistently between model1 and model2. Which version of OpenSSL are you using?
python -c "import sys; print(sys.OPENSSL_VERSION)"
Related
I am trying to execute a REST GET call using the 'requests' library, with Python 3.10 on Ubuntu.
But I get this exception:
requests.exceptions.ProxyError: HTTPSConnectionPool(host='XXX', port=443): Max retries exceeded with url: XXX (Caused by ProxyError('Cannot connect to proxy.', OSError('Tunnel connection failed: 503 Service Unavailable')))
I set the proxy environment variables in my .bashrc, and I added the HTTP_PROXY variable too.
I also tried passing a proxies dict in the request, but I get the same exception.
I couldn't find any information about this exception in the requests library documentation.
This worked for me:
import requests

requests.packages.urllib3.disable_warnings()
requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS += ':HIGH:!DH:!aNULL'
try:
    requests.packages.urllib3.contrib.pyopenssl.util.ssl_.DEFAULT_CIPHERS += ':HIGH:!DH:!aNULL'
except AttributeError:
    # no pyopenssl support used / needed / available
    pass
Then call: r = requests.get(url, headers=headers, timeout=3, verify=False)
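Since the original problem involved a proxy, a hedged sketch of the same call with an explicit proxies dict (the proxy host and port below are placeholders for your real values):
proxies = {
    "http": "http://your-proxy-host:8080",   # placeholder proxy address
    "https": "http://your-proxy-host:8080",  # placeholder proxy address
}
r = requests.get(url, headers=headers, proxies=proxies, timeout=3, verify=False)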
When I convert my trained PyTorch model to a Core ML model, I get this error:
File "/Users/lion/Documents/MyLab/web_workspace/sky_replacement/venv/lib/python3.9/site-packages/torch/jit/_serialization.py", line 161, in load
cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
This is my code:
import torch
import coremltools as ct
from networks import *   # provides define_G

run_device = torch.device("cpu")
net_G = define_G(input_nc=3, output_nc=1, ngf=64,
                 netG='coord_resnet50').to(run_device)

checkpoint = torch.load('./model/best_ckpt.pt', map_location=run_device)
net_G.load_state_dict(checkpoint['model_G_state_dict'])
net_G.to(run_device)
net_G.eval()

model = ct.convert('./model/best_ckpt.pt', source='pytorch',
                   inputs=[ct.ImageType()], skip_model_load=True)
model.save("result.mlmodel")
It could be a problem with the PyTorch version and saving mechanism. I had the same problem and solved it by passing the kwarg _use_new_zipfile_serialization=False when saving the model. More details here.
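A minimal sketch of what that re-save looks like, assuming net_G is the trained model object from the training script:
import torch

# Re-save the checkpoint with the legacy (non-zip) serialization format.
torch.save({'model_G_state_dict': net_G.state_dict()},
           './model/best_ckpt.pt',
           _use_new_zipfile_serialization=False)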
I had this issue because a failed git LFS smudge corrupted the checkpoint. Check the file size/checksum of your ckpt/pth file.
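A quick sanity check along those lines, assuming the same ./model/best_ckpt.pt path as above: a Git LFS pointer file is only a few hundred bytes, and a healthy new-style PyTorch checkpoint is a valid zip archive.
import os
import zipfile

ckpt = './model/best_ckpt.pt'
print(os.path.getsize(ckpt))     # an LFS pointer file is only ~100-200 bytes
print(zipfile.is_zipfile(ckpt))  # new-style torch checkpoints are zip archives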
OS: Windows 10
tensorflow and keras successfully imported, Python 3.7.9
tf.__version__
>>> '2.1.0'
keras.__version__
>>> '2.2.4-tf'
Problem
I tried load_data on any of the datasets available in tf.keras, such as:
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()
which gives this error:
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
During handling of the above exception, another exception occurred:
.
.
.
URLError: <urlopen error [WinError 10054] An existing connection was forcibly closed by the remote host>
During handling of the above exception, another exception occurred:
.
.
.
Exception: URL fetch failure on https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz: None -- [WinError 10054] An existing connection was forcibly closed by the remote host
The three dots stand for the rest of the traceback, which I've omitted here.
Does anyone know how to solve this? I've been looking for possible solutions, but the closest I can find address certificate/verification issues; I think mine is about the URL.
I know the workaround is to download the dataset from Kaggle etc., but I want to know what causes this. Thanks, guys.
EDIT: it's not a URL problem. I'm unable to access https://storage.googleapis.com using IDM, but the files can be downloaded directly in the browser, so I guess it's a security issue.
Finally, after 5 hours of reading here and there:
Please check the solution by CRLannister here https://github.com/tensorflow/tensorflow/issues/33285
What it doesn't mention is where data_utils.py is located in the case of Windows and an Anaconda environment. It's located here:
~\Anaconda3\envs\*your_env*\Lib\site-packages\tensorflow_core\python\keras\utils\data_utils.py
Just add the following after all the import statements:
import requests
requests.packages.urllib3.disable_warnings()
import ssl

try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    # Legacy Python that doesn't verify HTTPS certificates by default
    pass
else:
    # Handle target environment that doesn't support HTTPS verification
    ssl._create_default_https_context = _create_unverified_https_context
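If you would rather not patch a file inside site-packages, here is a minimal sketch of the same idea applied in your own script (note that this disables certificate verification for the download, so treat it as a workaround):
import ssl
import tensorflow as tf

# Workaround: skip HTTPS certificate verification for the dataset download.
ssl._create_default_https_context = ssl._create_unverified_context

(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.fashion_mnist.load_data()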
Trying to connect a Colab notebook to a MongoDB database on Atlas.
from pymongo import MongoClient
uri = "mongodb+srv://MYUSERNAME:mypassword#mydatabase.mongodb.net/test"
client = MongoClient(uri)
I am getting a ConfigurationError:
"dnspython" module must be installed to use mongodb+srv:// URIs.
I installed the module:
pip install dnspython
and got this message back:
Requirement already satisfied: dnspython in /usr/local/lib/python3.6/dist-packages (1.16.0)
I don't know what is wrong.
This worked a few days ago with another Colab notebook (and another database).
Here is the entire error message:
ConfigurationError Traceback (most recent call last)
<ipython-input-30-a6c89e14e64f> in <module>()
----> 1 client = MongoClient(uri)
1 frames
/usr/local/lib/python3.6/dist-packages/pymongo/mongo_client.py in __init__(self, host, port, document_class, tz_aware, connect, type_registry, **kwargs)
522 for entity in host:
523 if "://" in entity:
--> 524 res = uri_parser.parse_uri(entity, port, warn=True)
525 seeds.update(res["nodelist"])
526 username = res["username"] or username
/usr/local/lib/python3.6/dist-packages/pymongo/uri_parser.py in parse_uri(uri, default_port, validate, warn)
316 elif uri.startswith(SRV_SCHEME):
317 if not _HAVE_DNSPYTHON:
--> 318 raise ConfigurationError('The "dnspython" module must be '
319 'installed to use mongodb+srv:// URIs')
320 is_srv = True
ConfigurationError: The "dnspython" module must be installed to use mongodb+srv:// URIs
You have to restart the runtime for the changes to take effect:
!pip install dnspython
Restart the runtime: Runtime -> Restart runtime...
Run your code.
Try installing pymongo[srv] and pymongo[tls]:
!pip3 install pymongo[srv]
!pip3 install pymongo[tls]
Change mongodb+srv:// to mongodb:// and it will work.
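With a plain mongodb:// URI you have to spell out the cluster hosts and options yourself, since the +srv scheme normally resolves them via DNS. A hedged sketch with placeholder host names and options (take the exact mongodb:// string for your cluster from the Atlas "Connect" dialog):
from pymongo import MongoClient

# Placeholder hosts, replica set name and options; copy the real values from Atlas.
uri = ("mongodb://MYUSERNAME:mypassword@"
       "cluster0-shard-00-00.mydatabase.mongodb.net:27017,"
       "cluster0-shard-00-01.mydatabase.mongodb.net:27017,"
       "cluster0-shard-00-02.mydatabase.mongodb.net:27017/test"
       "?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin")
client = MongoClient(uri)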
I have project code in a Python notebook, and it all ran fine when Spark was hosted in Bluemix.
We run the following code to connect to Netezza (on premises), which worked fine in Bluemix:
VT = sqlContext.read.format('jdbc').options(url='jdbc:netezza://169.54.xxx.x:xxxx/BACC_PRD_ISCNZ_GAPNZ', user='XXXXXX', password='XXXXXXX', dbtable='GRACE.CDVT_LIVE_SPARK', driver='org.netezza.Driver').load()
However, after migrating to Data Science Experience, we get the following error. I have established the Secure Gateway and it is all working fine, but this code does not run. I think the issue is with the Netezza driver. If that is the case, is there a way we can explicitly import the class/driver so the above code can be executed? Please help us address the issue.
Error Message:
/usr/local/src/spark20master/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
/usr/local/src/spark20master/spark/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
317 raise Py4JJavaError(
318 "An error occurred while calling {0}{1} {2}.\n".
--> 319 format(target_id, ".", name), value)
320 else:
321 raise Py4JError(
Py4JJavaError: An error occurred while calling o212.load.
: java.lang.ClassNotFoundException: org.netezza.driver
at java.net.URLClassLoader.findClass(URLClassLoader.java:607)
at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:844)
at java.lang.ClassLoader.loadClass(ClassLoader.java:823)
at java.lang.ClassLoader.loadClass(ClassLoader.java:803)
at org.apache.spark.sql.execution.datasources.jdbc.DriverRegistry$.register(DriverRegistry.scala:38)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:49)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:49)
at scala.Option.foreach(Option.scala:257)
You can install a jar file by adding a cell with an exclamation mark that runs a unix tool to download the file, in this example wget:
!wget https://some.public.host/yourfile.jar -P ${HOME}/data/libs
After downloading the file you will need to restart your kernel.
Note this approach assumes your jar file is publicly available on the Internet.
Notebooks in Bluemix and notebooks in DSX (Data Science Experience) currently use the same backend, so they have access to the same pre-installed drivers. Netezza isn't among them. As Chris Snow pointed out, users can install additional JARs and Python packages into their service instances.
You probably created a new service instance for DSX, and did not yet install the user JARs and packages that the old one had. It's a one-time setup, therefore easy to forget when you've been using the same instance for a while. Execute these commands in a Python notebook of the old instance on Bluemix to check for user-installed things:
!ls -lF ~/data/libs
!pip freeze
Then install the missing things into your new instance on DSX.
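For example, a hedged sketch of what re-installing might look like in a cell of the new instance (the JAR URL and package name below are placeholders; take the real ones from the output of the two commands above):
!wget https://some.public.host/nzjdbc.jar -P ${HOME}/data/libs   # placeholder JAR URL
!pip install --user some-missing-package                         # placeholder package name
Restart the kernel afterwards so Spark picks up the newly added JAR.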
There is another way to connect to Netezza: the ingest connector, which is enabled by default in DSX.
http://datascience.ibm.com/docs/content/analyze-data/python_load.html
from ingest import Connectors
from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)

NetezzaloadOptions = {
    Connectors.Netezza.HOST: 'hostorip',
    Connectors.Netezza.PORT: 'port',
    Connectors.Netezza.DATABASE: 'databasename',
    Connectors.Netezza.USERNAME: 'xxxxx',
    Connectors.Netezza.PASSWORD: 'xxxx',
    Connectors.Netezza.SOURCE_TABLE_NAME: 'tablename'}

NetezzaDF = sqlContext.read.format("com.ibm.spark.discover").options(**NetezzaloadOptions).load()

NetezzaDF.printSchema()
NetezzaDF.show()
Thanks,
Charles.