Python 3 Requests Module Error: urllib3 version parsing issue

I'm trying to import and use requests in Python 3. When I import requests I get this error in IDLE (as well as IntelliJ):
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
import requests
File "/usr/local/lib/python3.6/site-packages/requests/__init__.py", line 49, in <module>
major, minor, patch = urllib3.__version__.split('.')[:3]
ValueError: not enough values to unpack (expected 3, got 2)
I've read and understand that there is an issue in requests when it reads the urllib3 version: the line shown above needs three values to unpack (major, minor, patch). However, my version of urllib3 is 1.22, with no patch component appended at the end (a defensive parse of that line is sketched after the list below). Here is my pip freeze:
appnope==0.1.0
beautifulsoup4==4.6.0
bleach==2.0.0
certifi==2017.7.27.1
chardet==3.0.4
chromedriver==2.24.1
cycler==0.10.0
Cython==0.26.1
decorator==4.1.2
entrypoints==0.2.3
facebook-sdk==2.0.0
geopy==1.11.0
glob2==0.6
html5lib==0.999999999
idna==2.6
ipykernel==4.6.1
ipython==6.1.0
ipython-genutils==0.2.0
ipywidgets==7.0.0
jedi==0.10.2
Jinja2==2.9.6
jsonschema==2.6.0
jupyter==1.0.0
jupyter-client==5.1.0
jupyter-console==5.2.0
jupyter-core==4.3.0
MarkupSafe==1.0
matplotlib==2.0.2
mistune==0.7.4
nbconvert==5.3.1
nbformat==4.4.0
nltk==3.2.4
nose==1.3.7
notebook==5.0.0
numpy==1.13.1
olefile==0.44
opencv-python==3.3.0.10
pandas==0.20.3
pandocfilters==1.4.2
pexpect==4.2.1
pickleshare==0.7.4
Pillow==4.2.1
prompt-toolkit==1.0.15
ptyprocess==0.5.2
Pygments==2.2.0
PyMySQL==0.7.11
pyparsing==2.2.0
python-dateutil==2.6.1
pytz==2017.2
pyzmq==16.0.2
qtconsole==4.3.1
requests==2.18.4
requests2==2.16.0
scikit-learn==0.19.0
scipy==0.19.1
selenium==3.6.0
simplegeneric==0.8.1
six==1.10.0
sklearn==0.0
terminado==0.6
testpath==0.3.1
tornado==4.5.2
traitlets==4.3.2
tzlocal==1.4
urllib3==1.22
virtualenv==15.1.0
wcwidth==0.1.7
webencodings==0.5.1
widgetsnbextension==3.0.2
xlrd==1.1.0
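For illustration only (a sketch of the idea, not the actual requests code), the failing line could tolerate a two-component version by padding before unpacking:

import urllib3

# Pad with '0' so a two-component version like '1.22' still yields three values
major, minor, patch = (urllib3.__version__.split('.') + ['0', '0'])[:3]
print(major, minor, patch)  # 1 22 0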
EDIT: I've found a temporary workaround and posted it as an answer to my question. Any other answers with better solutions are welcome / encouraged. Thanks.

In order to get requests to work, I placed the following workaround at the top of my project / IDLE session:

import urllib3

# Must append a third version component to avoid the unpack error
if len(urllib3.__version__.split('.')) < 3:
    urllib3.__version__ = urllib3.__version__ + '.0'
After visiting this link I was able to confirm that there is no patch release for urllib3 1.22 at the time of this writing. I assume that when a patch is released this workaround will no longer be necessary, but it may help somebody with a similar issue in the meantime.
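For completeness, the patch only works if it runs before requests is imported, since the version check happens at import time in requests/__init__.py; a minimal usage sketch:

import urllib3
if len(urllib3.__version__.split('.')) < 3:
    urllib3.__version__ = urllib3.__version__ + '.0'

import requests  # now imports without the ValueError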

Related

Unable to train a self-supervised (SSL) model using the Lightly CLI

I am unable to train a self-supervised (SSL) model to create image embeddings using the Lightly CLI: Lightly Platform Link. I intend to select diverse examples from my dataset to create an object detection model further downstream, and the image embeddings created with the SSL model will help me perform active learning. I have reproduced the error in a notebook with public access: lightly_app_troubleshooting_stackoverflow.ipynb Link.
In the notebook shared above, this command raises an exception:
!source /content/venv_1/bin/activate;lightly-magic \
input_dir="/content/Sunflowers" trainer.max_epochs=20 \
token='< your lightly token(free account) >' \
new_dataset_name="sunflowers_dataset" loader.batch_size=64
The exception stack trace produced is as below:
/content/venv_1/lib/python3.7/site-packages/hydra/_internal/hydra.py:127: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/next/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
configure_logging=with_log_configuration,
########## Starting to train an embedding model.
/content/venv_1/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py:23: LightningDeprecationWarning: pytorch_lightning.core.lightning.LightningModule has been deprecated in v1.7 and will be removed in v1.9. Use the equivalent class from the pytorch_lightning.core.module.LightningModule class instead.
"pytorch_lightning.core.lightning.LightningModule has been deprecated in v1.7"
Error executing job with overrides: ['input_dir=/content/Sunflowers', 'trainer.max_epochs=20', 'token=5bbcf60e3a5c7c266dcd4e0e9056c8301684e0f2f8922bc5', 'new_dataset_name=sunflowers_dataset', 'loader.batch_size=64']
Traceback (most recent call last):
File "/content/venv_1/lib/python3.7/site-packages/lightly/cli/lightly_cli.py", line 115, in lightly_cli
return _lightly_cli(cfg)
File "/content/venv_1/lib/python3.7/site-packages/lightly/cli/lightly_cli.py", line 52, in _lightly_cli
checkpoint = _train_cli(cfg, is_cli_call)
File "/content/venv_1/lib/python3.7/site-packages/lightly/cli/train_cli.py", line 137, in _train_cli
encoder.train_embedding(**cfg['trainer'], strategy=distributed_strategy)
File "/content/venv_1/lib/python3.7/site-packages/lightly/embedding/_base.py", line 88, in train_embedding
trainer = pl.Trainer(**kwargs, callbacks=[self.checkpoint_callback])
File "/content/venv_1/lib/python3.7/site-packages/pytorch_lightning/utilities/argparse.py", line 345, in insert_env_defaults
return fn(self, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'weights_summary'
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
I could not create a new tag, "lightly", as I lack the Stack Overflow reputation points to do so.
The error is from an incompatibility with the latest PyTorch Lightning version (1.7 at the time of this writing). A quick fix is to use a lower version (e.g. 1.6), as sketched below. We are working on a fix :)
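Assuming the same virtualenv as in the notebook above, the pin could look like this (the exact version bound is an inference from the 1.7 incompatibility described above, not an official requirement):
!source /content/venv_1/bin/activate; pip install "pytorch-lightning<1.7"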
Let me know in case that does not work for you!

Getting an import _rd_kafka error in all Docker images while trying to connect to Kafka via pykafka

Hi, I tried multiple Docker images, such as Ubuntu and python:3.8-alpine, and in every one I get the error below while trying to connect to my Kafka cluster (2.7) via the pykafka library.
Environment info:
Kafka server: 2.7 (installed via Strimzi Kafka in EKS)
Kafka client: pykafka (2.8.0)
Python version: 3.8 and 3.7; I get the same error in both versions
Note: this error happens only when the code runs inside the container; when I run it outside, i.e. directly from my machine, it works fine.
INFO:pykafka.topic:Could not load pykafka.rdkafka extension.
DEBUG:pykafka.topic:Traceback:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/pykafka/topic.py", line 43, in <module>
from . import rdkafka
File "/usr/local/lib/python3.8/site-packages/pykafka/rdkafka/__init__.py", line 1, in <module>
from .producer import RdKafkaProducer
File "/usr/local/lib/python3.8/site-packages/pykafka/rdkafka/producer.py", line 7, in <module>
from . import _rd_kafka
ImportError: cannot import name '_rd_kafka' from partially initialized module 'pykafka.rdkafka' (most likely due to a circular import) (/usr/local/lib/python3.8/site-pac
INFO:pykafka.cluster:Broker version is too old to use automatic API version discovery. Falling back to hardcoded versions list.
DEBUG:pykafka.cluster:Updating cluster, attempt 1/3
DEBUG:pykafka.connection:Connecting to kafka-kafka-bootstrap.kafka:9093
INFO:pykafka.connection:Attempt 0: failed to connect to kafka-kafka-bootstrap.kafka:9093
INFO:pykafka.connection:[Errno 2] No such file or directory
INFO:pykafka.connection:Retrying in 300ms.
INFO:pykafka.connection:Attempt 1: failed to connect to kafka-kafka-bootstrap.kafka:9093
INFO:pykafka.connection:[Errno 2] No such file or directory
INFO:pykafka.connection:Retrying in 300ms.
INFO:pykafka.connection:Attempt 2: failed to connect to kafka-kafka-bootstrap.kafka:9093
INFO:pykafka.connection:[Errno 2] No such file or directory
INFO:pykafka.connection:Retrying in 300ms.
WARNING:pykafka.broker:Failed to connect to broker at kafka-kafka-bootstrap.kafka:9093. Check the `listeners` property in server.config.
This works with the CentOS Docker image, but I am not sure why it did not work with other images like Ubuntu and python:3.8-alpine.
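Also worth noting: the INFO line at the top shows pykafka could not load its compiled rdkafka extension, so it falls back to the pure-Python client. A minimal check of whether the extension built correctly inside the image (a diagnostic sketch only, not a fix):

# Run inside the container to see whether the C extension is importable
try:
    from pykafka.rdkafka import _rd_kafka  # compiled against librdkafka
    print("rdkafka C extension available")
except ImportError as exc:
    print("falling back to pure-Python client:", exc)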

RuntimeError: Unable to start JVM because of Deprecated: convertStrings

I run an automated Python job on an EMR cluster that updates Amazon Athena tables.
It was running well until a few days ago (on Python 2.7 and 3.7). Here is the script:
from pyathenajdbc import connect
import yaml

config = yaml.load(open('athena-config.yaml', 'r'))
statements = config['statements']
staging_dir = config['staging_dir']

conn = connect(s3_staging_dir=staging_dir, region_name='eu-west-1')
try:
    with conn.cursor() as cursor:
        for statement in statements:
            cursor.execute(statement)
finally:
    conn.close()
The athena-config.yaml file has a staging directory and a few Athena statements.
Here is the Error:
You are using pip version 9.0.3, however version 19.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Unrecognized option: -server
create_tables.py:5: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
config = yaml.load(open('athena-config.yaml', 'r'))
/mnt/conda/lib/python3.7/site-packages/jpype/_core.py:210: UserWarning:
-------------------------------------------------------------------------------
Deprecated: convertStrings was not specified when starting the JVM. The default
behavior in JPype will be False starting in JPype 0.8. The recommended setting
for new code is convertStrings=False. The legacy value of True was assumed for
this session. If you are a user of an application that reported this warning,
please file a ticket with the developer.
-------------------------------------------------------------------------------
""")
Traceback (most recent call last):
File "create_tables.py", line 10, in <module>
region_name='eu-west-1')
File "/mnt/conda/lib/python3.7/site-packages/pyathenajdbc/__init__.py", line 69, in connect
driver_path, log4j_conf, **kwargs)
File "/mnt/conda/lib/python3.7/site-packages/pyathenajdbc/connection.py", line 68, in __init__
self._start_jvm(jvm_path, jvm_options, driver_path, log4j_conf)
File "/mnt/conda/lib/python3.7/site-packages/pyathenajdbc/util.py", line 25, in _wrapper
return wrapped(*args, **kwargs)
File "/mnt/conda/lib/python3.7/site-packages/pyathenajdbc/connection.py", line 97, in _start_jvm
jpype.startJVM(jvm_path, *args)
File "/mnt/conda/lib/python3.7/site-packages/jpype/_core.py", line 219, in startJVM
_jpype.startup(jvmpath, tuple(args), ignoreUnrecognized, convertStrings)
RuntimeError: Unable to start JVM
at loadJVM(native/common/jp_env.cpp:169)
at loadJVM(native/common/jp_env.cpp:179)
at startup(native/python/pyjp_module.cpp:159)
As far as I understand, the issue is that convertStrings is deprecated. Can anyone help me resolve that? I also cannot understand why this """) appears before the traceback, or what changed in the past few days to break the code. Thanks!
Got the same issue today. Try downgrading JPype1 to 0.6.3. JPype1 released 0.7.0 today, which is not compatible with the old interfaces.
The issue appears to be that the package is calling the JVM with an unrecognized argument, -server. The previous version ignored that sort of error, allowing things to proceed. To get the same behavior with 0.7.0, the flag ignoreUnrecognized needs to be set to True. Likely this needs to be sent to pyathenajdbc to correct the defect that placed the bogus argument into startJVM in the first place.
Looking at the source, -server is hardcoded into the module:
if not jpype.isJVMStarted():
    _logger.debug('JVM path: %s', jvm_path)
    args = [
        '-server',
        '-Djava.class.path={0}'.format(driver_path),
        '-Dlog4j.configuration=file:{0}'.format(log4j_conf)
    ]
    if jvm_options:
        args.extend(jvm_options)
    _logger.debug('JVM args: %s', args)
    jpype.startJVM(jvm_path, *args)
    cls.class_loader = jpype.java.lang.Thread.currentThread().getContextClassLoader()
It is assuming a particular JVM which accepts -server as an argument.
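To make that concrete, here is a minimal sketch (not pyathenajdbc's actual code; the classpath is a placeholder) of starting the JVM under JPype 0.7.0 with the flag mentioned above:

import jpype

# ignoreUnrecognized=True restores the pre-0.7.0 tolerance for bogus
# flags such as '-server'; convertStrings silences the deprecation warning.
jpype.startJVM(
    jpype.getDefaultJVMPath(),
    '-server',
    '-Djava.class.path=/path/to/AthenaJDBC.jar',
    ignoreUnrecognized=True,
    convertStrings=False,
)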

Strange invalid pickle protocol errors when using Dill

Recently, Dill has completely stopped working for me. It does this:
>>> import dill
>>> dill.dumps([1,2,3])
b'\x80\x03]q\x00(K\x01K\x02K\x03e.'
>>> dill.loads(_)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python34\lib\site-packages\dill\dill.py", line 260, in loads
return load(file)
File "C:\Python34\lib\site-packages\dill\dill.py", line 250, in load
obj = pik.load()
File "C:\Python34\lib\pickle.py", line 1039, in load
dispatch[key[0]](self)
File "C:\Python34\lib\pickle.py", line 1066, in load_proto
raise ValueError("unsupported pickle protocol: %d" % proto)
ValueError: unsupported pickle protocol: 93
The number is different every time. This started happening maybe a month ago; reinstalling Dill with pip didn't help.
From stepping through it with a debugger, it looks like dill correctly reads the protocol version from the beginning of the data, but then reads one of the first instructions in the pickle stream and interprets that as the protocol version for some reason. I don't really know, though, since I don't know much about how pickle works.
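One diagnostic worth trying (a sketch based on the bytes shown above, not a known fix): the standard library unpickles dill's output here, which would point at the local dill installation rather than the data itself:

import pickle

# The exact bytes from dill.dumps([1, 2, 3]) above are a valid protocol-3 pickle
print(pickle.loads(b'\x80\x03]q\x00(K\x01K\x02K\x03e.'))  # [1, 2, 3]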

Error when running deepmind

It took me two days to install the requirements of deepQ (Python version). When I tried to run it today I hit this problem; the output is as follows:
root@unicorn:/media/trump/Data1/wei/college/laboratory/deep_q_rl-master/deep_q_rl# python run_nips.py
A.L.E: Arcade Learning Environment (version 0.5.0)
[Powered by Stella]
Use -help for help screen.
Warning: couldn't load settings file: ./ale.cfg
Game console created:
ROM file: ../roms/breakout.bin
Cart Name: Breakout - Breakaway IV (1978) (Atari)
Cart MD5: f34f08e5eb96e500e851a80be3277a56
Display Format: AUTO-DETECT ==> NTSC
ROM Size: 2048
Bankswitch Type: AUTO-DETECT ==> 2K
Running ROM file...
Random seed is 65
Traceback (most recent call last):
File "run_nips.py", line 60, in <module>
launcher.launch(sys.argv[1:], Defaults, __doc__)
File "/media/trump/Data1/wei/college/laboratory/deep_q_rl-master/deep_q_rl/launcher.py", line 223, in launch
rng)
File "/media/trump/Data1/wei/college/laboratory/deep_q_rl-master/deep_q_rl/q_network.py", line 53, in __init__
num_actions, num_frames, batch_size)
File "/media/trump/Data1/wei/college/laboratory/deep_q_rl-master/deep_q_rl/q_network.py", line 168, in build_network
batch_size)
File "/media/trump/Data1/wei/college/laboratory/deep_q_rl-master/deep_q_rl/q_network.py", line 407, in build_nips_network_dnn
from lasagne.layers import dnn
File "/usr/local/lib/python2.7/dist-packages/Lasagne-0.2.dev1-py2.7.egg/lasagne/layers/dnn.py", line 13, in <module>
raise ImportError("dnn not available") # pragma: no cover
ImportError: dnn not available
I have already tested theano, numpy, and scipy, and no errors came out. But when I ran it, it said dnn was not available. So I went looking for dnn, and its code is like this:
import theano
from theano.sandbox.cuda import dnn
from .. import init
from .. import nonlinearities
from .base import Layer
from .conv import conv_output_length
from .pool import pool_output_length
from ..utils import as_tuple
if not theano.config.device.startswith("gpu") or not dnn.dnn_available():
    raise ImportError("dnn not available")  # pragma: no cover
Just hope someone can help me.
Did you install CUDA and cuDNN?
Lasagne is built on top of Theano and, in some cases, relies on CUDA code (e.g. here) rather than abstracting it away.
This can be seen from the import:
from theano.sandbox.cuda import dnn
Also see: https://github.com/Lasagne/Lasagne/issues/242
To get cuDNN you need to register as a developer with NVIDIA; see:
https://developer.nvidia.com/accelerated-computing
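As a quick check, you can test the two conditions the dnn.py guard above evaluates; this sketch assumes the old theano.sandbox.cuda backend that Lasagne 0.2.dev1 imports:

import theano
from theano.sandbox.cuda import dnn

print(theano.config.device)  # must start with 'gpu', e.g. via THEANO_FLAGS=device=gpu
print(dnn.dnn_available())   # True only when cuDNN is installed and visible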
Hope this helps.
Cheers,
Michael
