Recently, Dill has completely stopped working for me. It does this:
>>> import dill
>>> dill.dumps([1,2,3])
b'\x80\x03]q\x00(K\x01K\x02K\x03e.'
>>> dill.loads(_)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python34\lib\site-packages\dill\dill.py", line 260, in loads
return load(file)
File "C:\Python34\lib\site-packages\dill\dill.py", line 250, in load
obj = pik.load()
File "C:\Python34\lib\pickle.py", line 1039, in load
dispatch[key[0]](self)
File "C:\Python34\lib\pickle.py", line 1066, in load_proto
raise ValueError("unsupported pickle protocol: %d" % proto)
ValueError: unsupported pickle protocol: 93
The number is different every time. This started happening maybe a month ago; reinstalling Dill with pip didn't help.
From stepping through it with a debugger, it looks like it correctly reads the protocol version from the beginning of the data, but then also reads one of the first opcodes in the pickle stream and interprets that as the protocol version, for some reason. I can't say for sure, though, since I don't know much about how pickle works.
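For anyone debugging something similar: the standard library's pickletools module disassembles a pickle byte stream opcode by opcode, which makes it easy to confirm the dumped data is well-formed and to see exactly which byte the unpickler is misreading as a protocol marker. A minimal sketch with plain pickle (dill emits a stdlib-compatible stream):
import pickle
import pickletools

data = pickle.dumps([1, 2, 3])
pickletools.dis(data)  # one line per opcode: PROTO 3, EMPTY_LIST, BINPUT, the three ints, APPENDS, STOP
If the disassembly is clean, the stream itself is fine and the bug is on the loading side.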
I am running a Python 3.7 conda environment and have imported pickle5 into my code; however, I am still getting the following error regardless:
C:\Users\Joel\Desktop\Bowls App Object Detection>python obj.py
Using TensorFlow backend.
Generating anchor boxes for training images and annotation...
Traceback (most recent call last):
File "obj.py", line 7, in
trainer.setTrainConfig(object_names_array=["white bowls, black bowls, yellow bowls"], batch_size=4, num_experiments=100, train_from_pretrained_model="pretrained-yolov3.h5")
File "C:\Users\Joel\anaconda3\envs\py37\lib\site-packages\imageai\Detection\Custom_init_.py", line 171, in setTrainConfig
self.__train_cache_file, self.__model_labels)
File "C:\Users\Joel\anaconda3\envs\py37\lib\site-packages\imageai\Detection\Custom\gen_anchors.py", line 82, in generateAnchors
model_labels
File "C:\Users\Joel\anaconda3\envs\py37\lib\site-packages\imageai\Detection\Custom\voc.py", line 9, in parse_voc_annotation
cache = pickle.load(handle)
ValueError: unsupported pickle protocol: 5
It would be great if someone could help out!
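A note for anyone hitting the same thing: importing pickle5 in your own script only affects modules that import it explicitly; a library's internal import pickle (like the one in imageai's voc.py) still resolves to the Python 3.7 stdlib pickle, which only supports protocols up to 4. A minimal sketch of loading a protocol-5 file directly with the backport, where cache_path is a hypothetical placeholder:
import pickle5 as pickle  # backport of protocol-5 pickling for Python <= 3.7

cache_path = "train_cache.pkl"  # hypothetical path to the protocol-5 cache file
with open(cache_path, "rb") as handle:
    cache = pickle.load(handle)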
I have an existing LMDB zarr archive (~6GB) saved at path. Now I want to consolidate the metadata to improve read performance.
Here is my script:
store = zarr.LMDBStore(path)
root = zarr.open(store)
zarr.consolidate_metadata(store)
store.close()
I get the following error:
Traceback (most recent call last):
File "zarr_consolidate.py", line 12, in <module>
zarr.consolidate_metadata(store)
File "/local/home/marcel/.virtualenvs/noisegan/local/lib/python3.5/site-packages/zarr/convenience.py", line 1128, in consolidate_metadata
return open_consolidated(store, metadata_key=metadata_key)
File "/local/home/marcel/.virtualenvs/noisegan/local/lib/python3.5/site-packages/zarr/convenience.py", line 1182, in open_consolidated
meta_store = ConsolidatedMetadataStore(store, metadata_key=metadata_key)
File "/local/home/marcel/.virtualenvs/noisegan/local/lib/python3.5/site-packages/zarr/storage.py", line 2455, in __init__
d = store[metadata_key].decode() # pragma: no cover
AttributeError: 'memoryview' object has no attribute 'decode'
I am using zarr 2.3.2 and python 3.5.2. I have another machine running python 3.6.2 where this works. Could it have to do with the python version?
Thanks for the report. Should be fixed with gh-452. Please test it out (if you are able).
If you are able to share a bit more information on why read performance suffers in your case, that would be interesting to learn about. :)
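For context, a minimal sketch of the consolidated workflow this enables once the fix is in, reusing path from the report; open_consolidated serves all metadata from the single '.zmetadata' key, which is where the read-speed gain over per-array metadata lookups comes from:
import zarr

store = zarr.LMDBStore(path)              # path: the existing LMDB archive
zarr.consolidate_metadata(store)          # gathers all metadata under '.zmetadata'
root = zarr.open_consolidated(store)      # subsequent reads go through that single key
store.close()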
I wrote a module consisting of 40 conversion functions in Python 3.7. The module is all def statements with triple-quoted help info at the beginning. When I try to import it, I get 'ImportError: DLL load failed: %1 is not a valid Win32 application'. I am at a loss as to what to do.
I tried saving it as both .py and .pyd, and moved the file into every environment on my PYTHONPATH. A multitude of Google searches turned up nothing I could wrap my head around.
Line 1 is:
def main():
pass
in order to make this work:
if __name__ == '__main__':
main()
But I was getting the same import error before I added these elements.
import cunvurtz
Traceback (most recent call last):
File "P:\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3296, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
import cunvurtz
File "P:\PyCharm Community Edition with Anaconda plugin 2019.1.1\helpers\pydev_pydev_bundle\pydev_import_hook.py", line 21, in do_import
module = self._system_import(name, *args, **kwargs)
ImportError: DLL load failed: %1 is not a valid Win32 application.
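Worth noting: the .pyd extension is reserved for compiled extension modules, so a plain-text source file saved as cunvurtz.pyd (or a stale .pyd copy left next to the .py) sends the import machinery to the Windows DLL loader, which fails with exactly this "%1 is not a valid Win32 application" message. A minimal sketch of how a pure-Python conversions module would be laid out, saved as cunvurtz.py; the conversion function is a hypothetical example:
def inches_to_cm(inches):
    """Convert inches to centimetres."""
    return inches * 2.54

def main():
    pass

if __name__ == '__main__':
    main()
With only the .py file on the path, import cunvurtz loads it as ordinary source and the DLL loader never gets involved.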
I am trying to use the TensorFlow Object Detection API, following the steps in this link:
https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html#tf-models-install
When I try to open the Object Detection Jupyter notebook by running jupyter notebook, I am facing the below exception:
Traceback (most recent call last):
File "/usr/local/bin/jupyter-notebook", line 7, in <module>
from notebook.notebookapp import main
File "/home/dinesh/.local/lib/python3.6/site-
packages/notebook/notebookapp.py", line 79, in <module>
from .base.handlers import Template404, RedirectWithParams
File "/home/dinesh/.local/lib/python3.6/site-
packages/notebook/base/handlers.py", line 32, in <module>
import prometheus_client
File "/home/dinesh/.local/lib/python3.6/site-
packages/prometheus_client/__init__.py", line 7, in <module>
from . import process_collector
File "/home/dinesh/.local/lib/python3.6/site-
packages/prometheus_client/process_collector.py", line 12, in <module>
_PAGESIZE = resource.getpagesize()
AttributeError: module 'resource' has no attribute 'getpagesize'
I am using
Python - 3.6.3
Jupyter - 1.0.0
How can I overcome this exception?
I got a similar error. My project contained these modules (folders):
model
resource (replaced with resources)
service
So I renamed the resource module to resources (any appropriate module name will do).
I had the same error. After renaming the resource module on my PYTHONPATH, it works correctly. Check your PYTHONPATH: is there a resource module on it?
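Before renaming anything, a quick way to confirm the shadowing is to ask Python which resource module it actually resolves. A minimal sketch using the standard importlib machinery:
import importlib.util

spec = importlib.util.find_spec("resource")
print(spec.origin if spec else "no module named 'resource'")
# A path inside your project or PYTHONPATH, rather than inside the Python
# installation, means a local "resource" package is shadowing the module
# that prometheus_client expects.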
I had a similar issue when starting Jupyter Notebooks on Windows 10.
When I initially ran the regular startup script, I got a Windows terminal that opened and immediately closed, too fast to see any error messages. So I opened a Windows 10 PowerShell terminal and ran
conda update conda
and
conda update --all
then I ran
jupyter-notebook at the Windows prompt. The results were:
Traceback (most recent call last):
File "E:\Users\Bob\anaconda3\Scripts\jupyter-notebook-script.py", line 6, in
from notebook.notebookapp import main
File "E:\Users\Bob\anaconda3\lib\site-packages\notebook\notebookapp.py", line 76, in
from .base.handlers import Template404, RedirectWithParams
File "E:\Users\Bob\anaconda3\lib\site-packages\notebook\base\handlers.py", line 24, in
import prometheus_client
File "E:\Users\Bob\anaconda3\lib\site-packages\prometheus_client_init_.py", line 3, in
from . import (
File "E:\Users\Bob\anaconda3\lib\site-packages\prometheus_client\process_collector.py", line 11, in
_PAGESIZE = resource.getpagesize()
AttributeError: module 'resource' has no attribute 'getpagesize'
I opened process_collector.py in site-packages\prometheus_client in Notepad++ and changed
line 9: import resource to import resources
and
line 11: _PAGESIZE = resource.getpagesize() to _PAGESIZE = resources.getpagesize()
I searched for other instances of resource and found none. I then saved the file and reran jupyter-notebook at the Windows terminal prompt.
This time I got:
Traceback (most recent call last):
File "E:\Users\Bob\anaconda3\Scripts\jupyter-notebook-script.py", line 10, in
sys.exit(main())
File "E:\Users\Bob\anaconda3\lib\site-packages\jupyter_core\application.py", line 254, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "E:\Users\Bob\anaconda3\lib\site-packages\traitlets\config\application.py", line 844, in launch_instance
app.initialize(argv)
File "E:\Users\Bob\anaconda3\lib\site-packages\traitlets\config\application.py", line 87, in inner
return method(app, *args, **kwargs)
File "E:\Users\Bob\anaconda3\lib\site-packages\notebook\notebookapp.py", line 2126, in initialize
self.init_resources()
File "E:\Users\Bob\anaconda3\lib\site-packages\notebook\notebookapp.py", line 1697, in init_resources
old_soft, old_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
AttributeError: module 'resource' has no attribute 'getrlimit'
With Notepad++ still open, I opened notebookapp.py in site-packages\notebook and searched for resource. I found and changed the following lines:
line 37: import resource to import resources
line 40: resource = None to resources = None
line 1036: resource is None to resources is None
line 1040: soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE) to soft, hard = resources.getrlimit(resources.RLIMIT_NOFILE)
line 1693: if resource is None: to if resources is None:
line 1697: old_soft, old_hard = resource.getrlimit(resource.RLIMIT_NOFILE) to old_soft, old_hard = resources.getrlimit(resources.RLIMIT_NOFILE)
line 1706: resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard)) to resources.setrlimit(resources.RLIMIT_NOFILE, (soft, hard))
I searched for other instances of resource and found none. I then saved the notebookapp.py file and reran jupyter-notebook at the Windows terminal prompt. This time Jupyter Notebooks opened with the expected Files tab displayed. I quit out of Jupyter Notebooks, restarted it using the normal link to the startup script, and it worked as expected.
I am not sure what caused this issue. I did not intentionally update anything before I saw it. Yesterday, Jupyter Notebooks worked as expected; today, when I tried to run it, I got the flickering window described above.
[Quick look for root cause]
Check whether you have set PYTHONPATH on your system:
rename it temporarily
run "jupyter notebook"
[Fix the issue]
If the issue is resolved by doing this, then:
search for a "resource" folder within all paths mentioned in PYTHONPATH
if such a folder is found, rename/refactor it to another name, e.g. "resources" (a sketch automating this search follows below)
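A small sketch that automates that search, assuming PYTHONPATH is set in the environment:
import os

# Check every PYTHONPATH entry for a directory or file named "resource";
# any hit is a candidate for the rename described above.
for entry in os.environ.get("PYTHONPATH", "").split(os.pathsep):
    if not entry:
        continue
    for name in ("resource", "resource.py"):
        candidate = os.path.join(entry, name)
        if os.path.exists(candidate):
            print("possible shadowing module:", candidate)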
I'm trying to upload a couple of .zip files, ranging in size from 20 to 30 MB, using Python's SFTP module Paramiko. I can successfully connect to the SFTP server and list the contents of the destination directory, but when I attempt to upload files from my EC2 Linux machine, the request takes a very long time and then terminates with EOFError(). Below is the code:
>>> import paramiko
>>> import os
>>> ftp_host='*****'
>>> ftp_username='***'
>>> ftp_password='**'
>>> ssh_client=paramiko.SSHClient()
>>> ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
>>> ssh_client.connect(hostname=ftp_host,username=ftp_username,password=ftp_password, timeout=500)
>>> ftp_client=ssh_client.open_sftp()
>>> ftp_client.chdir('/inbound/')
>>> ftp_client.listdir()
[u't5-file-1.zip', u'edp_r.zip', u'T5_Transaction_Sample.gz']
>>> ftp_client.put("/path-to-zip-files-on-local/edp_revenue.zip","/inbound/edp_revenue.zip")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/paramiko/sftp_client.py", line 669, in put
return self.putfo(fl, remotepath, file_size, callback, confirm)
File "/usr/lib/python2.7/dist-packages/paramiko/sftp_client.py", line 626, in putfo
fr.write(data)
File "/usr/lib/python2.7/dist-packages/paramiko/file.py", line 331, in write
self._write_all(data)
File "/usr/lib/python2.7/dist-packages/paramiko/file.py", line 448, in _write_all
count = self._write(data)
File "/usr/lib/python2.7/dist-packages/paramiko/sftp_file.py", line 176, in _write
self._reqs.append(self.sftp._async_request(type(None), CMD_WRITE, self.handle, long(self._realpos), data[:chunk]))
File "/usr/lib/python2.7/dist-packages/paramiko/sftp_client.py", line 750, in _async_request
self._send_packet(t, msg)
File "/usr/lib/python2.7/dist-packages/paramiko/sftp.py", line 170, in _send_packet
self._write_all(out)
File "/usr/lib/python2.7/dist-packages/paramiko/sftp.py", line 135, in _write_all
raise EOFError()
EOFError
When I try the same thing with the curl command, it works easily and finishes in about 20 seconds. What am I missing here? Is it the file size that is making the request time out, or something else?
It seems I was using an older version of Paramiko on my Linux server. I upgraded it using
pip install --upgrade paramiko
and then logged everything in a log file,
paramiko.util.log_to_file('/path_to_log_file/paramiko.log')
just to capture every detail. Now it seems to be working fast and correctly.
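If you want to watch a large transfer instead of waiting for it to die, SFTPClient.put accepts a progress callback. A minimal sketch reusing the ftp_client session from the question; the reporting function is a hypothetical helper:
import sys

def report_progress(sent, total):
    # called by Paramiko with (bytes transferred so far, total bytes)
    sys.stdout.write("\r%d / %d bytes" % (sent, total))
    sys.stdout.flush()

ftp_client.put(
    "/path-to-zip-files-on-local/edp_revenue.zip",
    "/inbound/edp_revenue.zip",
    callback=report_progress,
)
A transfer that stalls shows up immediately in the output, which helps tell a dropped connection apart from a merely slow one.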