tifffile cannot decompress JPEG because JPEG not in TIFF.DECOMPRESSORS - python-3.x

import tifffile
f = 'some.tif'
img = tifffile.imread(f)
This gives the error:
~/.conda/envs/cmap_py3/lib/python3.6/site-packages/tifffile/tifffile.py in imread(files, **kwargs)
443 if isinstance(files, basestring) or hasattr(files, 'seek'):
444 with TiffFile(files, **kwargs_file) as tif:
--> 445 return tif.asarray(**kwargs)
446 else:
447 with TiffSequence(files, **kwargs_seq) as imseq:
~/.conda/envs/cmap_py3/lib/python3.6/site-packages/tifffile/tifffile.py in asarray(self, key, series, out, validate, maxworkers)
1900 typecode, product(series.shape), out=out, native=True)
1901 elif len(pages) == 1:
-> 1902 result = pages[0].asarray(out=out, validate=validate)
1903 else:
1904 result = stack_pages(pages, out=out, maxworkers=maxworkers)
~/.conda/envs/cmap_py3/lib/python3.6/site-packages/tifffile/tifffile.py in asarray(self, out, squeeze, lock, reopen, maxsize, validate)
3376 if self.compression not in TIFF.DECOMPESSORS:
3377 raise ValueError(
-> 3378 'cannot decompress %s' % self.compression.name)
3379 if 'SampleFormat' in tags:
3380 tag = tags['SampleFormat']
ValueError: cannot decompress JPEG
Note: It seems as though I only get the error for larger tif images. Also, the tifffile version is 0.15.1.
UPDATE:
After using pip to install imagecodecs>=2018.10.22, I'm now getting the following error:
img=tifffile.imread(f)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/somename/.conda/envs/cmap_py3/lib/python3.6/site-packages/tifffile/tifffile.py", line 581, in imread
return tif.asarray(**kwargs)
File "/home/somename/.conda/envs/cmap_py3/lib/python3.6/site-packages/tifffile/tifffile.py", line 2042, in asarray
maxworkers=maxworkers)
File "/home/somename/.conda/envs/cmap_py3/lib/python3.6/site-packages/tifffile/tifffile.py", line 3813, in asarray
strip = decompress(strip)
File "/home/somename/.conda/envs/cmap_py3/lib/python3.6/site-packages/tifffile/tifffile.py", line 3700, in decompress
out)
File "/home/somename/.conda/envs/cmap_py3/lib/python3.6/site-packages/imagecodecs/imagecodecs.py", line 678, in jpeg_decode
'JPEG tables, colorspace, and outcolorspace otions not supported')
NotImplementedError: JPEG tables, colorspace, and outcolorspace otions not supported
On the Linux machine, where tifffile cannot open the large tif, ~/.conda/envs/cmap_py3/lib/python3.6/site-packages/imagecodecs contains:
__init__.py
licenses
__pycache__
imagecodecs.py
On the Windows machine, where tifffile can open the large tif, C:\Anaconda2\envs\tensorflow35\lib\site-packages\imagecodecs\ contains:
__init__.py
licenses
__pycache__
imagecodecs.py
_imagecodecs.cp35-win_amd64.pyd
_jpeg12.cp35-win_amd64.pyd
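The directory listings above point at the difference: the Linux install has only the pure-Python imagecodecs.py, while the Windows install also contains compiled extension modules (the .pyd files) that provide the JPEG codecs. As a quick sanity check (a sketch; the _imagecodecs module name is taken from the listing above), you can test whether the compiled extension is importable:

```python
import importlib.util

def has_compiled_imagecodecs():
    """Return True if the compiled _imagecodecs extension can be found."""
    try:
        return importlib.util.find_spec("imagecodecs._imagecodecs") is not None
    except ModuleNotFoundError:
        # imagecodecs itself is not installed at all
        return False

print(has_compiled_imagecodecs())
```

If this prints False, reinstalling imagecodecs from a binary wheel (pip install --upgrade imagecodecs) should pull in the compiled codecs.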

I think I had the same struggles as you, @user3731622.
Try this code out, as recommended by @cgohlke:
!pip install tifffile
!pip install imagecodecs
import numpy
import tifffile as tiff
img = tiff.imread('/content/gdrive/My Drive/My Driver/imageToDriveExample.tif')
matrix = img[:,:,:]
print(matrix)

Related

[Geopandas error]fiona.errors.DriverError: '/vsimem/3563f91543824520abdaa032ab1a68da' not recognized as a supported file format

I wanted to read the .shp files by the file_uploader of streamlit.
Get the list of shp files from the file_uploader of streamlit.
Read the shp files using the geopandas.
Here's my code.
st.session_state.data_01 = st.file_uploader('Please choose a file.', accept_multiple_files=True, key='0')
df = []
for d in st.session_state.data_01:
    df.append(gpd.read_file(d), encoding='utf-8')
And I got the error such like:
File "/Users/icuh/Desktop/Eun/Web/life.py", line 17, in run_life
df.append(gpd.read_file(d),encoding='utf-8')
File "/Users/icuh/opt/anaconda3/envs/impacts_02/lib/python3.8/site-packages/geopandas/io/file.py", line 253, in _read_file
return _read_file_fiona(
File "/Users/icuh/opt/anaconda3/envs/impacts_02/lib/python3.8/site-packages/geopandas/io/file.py", line 294, in _read_file_fiona
with reader(path_or_bytes, **kwargs) as features:
File "/Users/icuh/opt/anaconda3/envs/impacts_02/lib/python3.8/site-packages/fiona/collection.py", line 555, in __init__
super(BytesCollection, self).__init__(self.virtual_file, vsi=filetype, **kwds)
File "/Users/icuh/opt/anaconda3/envs/impacts_02/lib/python3.8/site-packages/fiona/collection.py", line 162, in __init__
self.session.start(self, **kwargs)
File "fiona/ogrext.pyx", line 540, in fiona.ogrext.Session.start
File "fiona/_shim.pyx", line 90, in fiona._shim.gdal_open_vector
fiona.errors.DriverError: '/vsimem/3563f91543824520abdaa032ab1a68da' not recognized as a supported file format.
Versions I use
python 3.8.6
geopandas 0.11.1
fiona 1.8.21
shapely 1.8.4
This is not a streamlit issue as such.
I have simulated the error you reported with the geopandas sample shapefile:
it fails when the shapefile has no .shp extension, with the same error you reported;
trying again with a .shp extension gives a different error: the partner files (.shx, .prj, ...) are missing;
trying again with all the files in the same directory as the .shp succeeds.
Your upload capability needs to take into account that a shapefile is a set of files, not a single file. Either ensure they are all uploaded into the same directory, or zip them up and upload the zip file; read_file() supports zip files.
import geopandas as gpd
import fiona
from pathlib import Path
import tempfile
import shutil

with tempfile.TemporaryDirectory() as tmpdirname:
    fn = Path(gpd.datasets.get_path("naturalearth_lowres"))
    tmp_file = Path(tmpdirname).joinpath(fn.stem)
    shutil.copy(fn, tmp_file)
    print(tmp_file)  # temp file with no extension...
    try:
        gpd.read_file(tmp_file)
    except fiona.errors.DriverError as e:
        print(e)
    tmp_file.unlink()
    # now just the shape file
    tmp_file = Path(tmpdirname).joinpath(fn.name)
    shutil.copy(fn, tmp_file)
    try:
        gpd.read_file(tmp_file)
    except fiona.errors.DriverError as e:
        print(e)
    # now all the files that make up an ESRI shapefile
    for fn_ in fn.parent.glob("*"):
        print(fn_.name)
        shutil.copy(fn_, tmpdirname)
    gpd.read_file(tmp_file)
    # no exception :-)
output
/var/folders/3q/trbn3hyn0y91jwvh6gfn7ln40000gn/T/tmpa8x28emo/naturalearth_lowres
'/var/folders/3q/trbn3hyn0y91jwvh6gfn7ln40000gn/T/tmpa8x28emo/naturalearth_lowres' not recognized as a supported file format.
Unable to open /var/folders/3q/trbn3hyn0y91jwvh6gfn7ln40000gn/T/tmpa8x28emo/naturalearth_lowres.shx or /var/folders/3q/trbn3hyn0y91jwvh6gfn7ln40000gn/T/tmpa8x28emo/naturalearth_lowres.SHX. Set SHAPE_RESTORE_SHX config option to YES to restore or create it.
naturalearth_lowres.shx
naturalearth_lowres.cpg
naturalearth_lowres.shp
naturalearth_lowres.dbf
naturalearth_lowres.prj
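If you prefer the zip route mentioned above, the components can be bundled with the standard library; a sketch (bundle_shapefile is a hypothetical helper; geopandas read_file() accepts a path to such a zip):

```python
import zipfile
from pathlib import Path

def bundle_shapefile(shp_path, zip_path):
    """Zip a .shp together with its sidecar files (.shx, .dbf, .prj, ...)."""
    shp = Path(shp_path)
    zip_path = Path(zip_path)
    with zipfile.ZipFile(zip_path, "w") as zf:
        for part in sorted(shp.parent.glob(shp.stem + ".*")):
            if part.resolve() == zip_path.resolve():
                continue  # don't add the archive to itself
            zf.write(part, arcname=part.name)
    return zip_path
```

gpd.read_file(bundle_shapefile("naturalearth_lowres.shp", "bundle.zip")) would then see all the partner files at once.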

TypeError: can't pickle cv2.CLAHE objects

I am trying to run the code on github(https://github.com/AayushKrChaudhary/RITnet)
I did not get the semantic segmentation dataset of OpenEDS, so I tried to download the png image from the Internet and put it in Semantic_Segmentation_Dataset\test\ to run the test program.
That code gives the following error:
Traceback (most recent call last):
File "test.py", line 59, in <module>
for i, batchdata in tqdm(enumerate(testloader),total=len(testloader)):
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\site-packages\torch\utils\data\dataloader.py", line 291, in __iter__
return _MultiProcessingDataLoaderIter(self)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\site-packages\torch\utils\data\dataloader.py", line 737, in __init__
w.start()
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle cv2.CLAHE objects
(Machine_Learning) C:\Users\b0743\Downloads\RITnet-master\RITnet-master>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\b0743\AppData\Local\Continuum\anaconda3\envs\Machine_Learning\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
and my environment is:
# Name Version
cudatoolkit 10.1.243
cudnn 7.6.5
keras-applications 1.0.8
keras-base 2.3.1
keras-gpu 2.3.1
keras-preprocessing 1.1.0
matplotlib 3.3.1
matplotlib-base 3.3.1
numpy 1.19.1
numpy-base 1.19.1
opencv 3.3.1
pillow 7.2.0
python 3.6.10
pytorch 1.6.0
scikit-learn 0.23.2
scipy 1.5.2
torchsummary 1.5.1
torchvision 0.7.0
tqdm 4.48.2
I don’t know if this is a stupid question, but I hope someone can try to answer it for me.
I literally just went into the dataset python file and commented out all the parts that require opencv.
It turns out it works; you won't get that sweet CLAHE and the other preprocessing, but it works.
If you don't need the dataset pipeline, just make a tensor out of each 640 by 400 image, put it in an empty array, and stack those arrays until you have a 4D tensor; pass that into the DNN, then put the output through the get predictions function and voilà, you have an array of eye features.
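Rather than removing OpenCV entirely, two common workarounds are: pass num_workers=0 to the DataLoader so no worker process needs to pickle the dataset, or keep the cv2.CLAHE object out of the pickled state and recreate it lazily in each worker. A pure-Python sketch of the second approach (the object() stand-in replaces a real cv2.createCLAHE(...) call, so this runs without OpenCV):

```python
import pickle

class ClaheDataset:
    def __init__(self):
        self._clahe = None  # created lazily; would be cv2.createCLAHE(...) for real

    @property
    def clahe(self):
        # Each process builds its own instance on first use.
        if self._clahe is None:
            self._clahe = object()  # stand-in for cv2.createCLAHE(clipLimit=1.5)
        return self._clahe

    def __getstate__(self):
        # Drop the unpicklable member before the dataset is sent to a worker.
        state = self.__dict__.copy()
        state["_clahe"] = None
        return state
```

With this pattern the DataLoader can spawn workers normally, because the pickled dataset never contains the cv2 object.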

PIL.UnidentifiedImageError: cannot identify image file

I'm working on GCP Cloud Functions and intend to write a function which combines two images. But I'm getting the following error when I invoke the function:
Traceback (most recent call last):
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 346, in run_http_function
result = _function_handler.invoke_user_function(flask.request)
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 217, in invoke_user_function
return call_user_function(request_or_event)
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 210, in call_user_function
return self._user_function(request_or_event)
File "/user_code/main.py", line 74, in execute
newIntro = generateIntroImage(nameMappings['stdName'], nameMappings['stdPicture'], nameMappings['logo'], nameMappings['stdYear'], nameMappings['font'])
File "/user_code/main.py", line 12, in generateIntroImage
images.append(Image.open(logo))
File "/env/local/lib/python3.7/site-packages/PIL/Image.py", line 2862, in open
"cannot identify image file %r" % (filename if filename else fp)
PIL.UnidentifiedImageError: cannot identify image file '/tmp/logo.jpg'
I have ran this function on my local machine and it works as expected but when I deploy it on GCP, it gives this error and crashes. Here's my function:
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw

def generateIntroImage(stdName, stdPicture, logo, year, typeFace):
    images = [Image.open(x) for x in [stdPicture, logo]]
    widths, heights = zip(*(i.size for i in images))
    total_width = sum(widths)
    max_height = max(heights)
    new_im = Image.new('RGB', (total_width, max_height))
    x_offset = 0
    for im in images:
        new_im.paste(im, (x_offset, 0))
        x_offset += im.size[0]
    font = ImageFont.truetype(typeFace, 70)
    draw = ImageDraw.Draw(new_im)
    draw.text((0, 0), stdName + "'s " + year + " Year Book", (0, 0, 0), font=font)
    fileName = "/tmp/test.jpg"
    new_im.save(fileName)
    return fileName
These images are .jpg and .png files. Any idea what could be wrong?
This happened to me on Google Colab as well; apparently updating the PIL version fixed the problem for me.
PIL throws this error because it cannot identify the image format. Most probably the image is corrupted and hence cannot be read (or "identified") by Pillow's Image.open(). For example, trying to open the image in an IPython prompt fails as well.
In [2]: from PIL import Image
In [3]: Image.open("176678612.jpg")
---------------------------------------------------------------------------
UnidentifiedImageError Traceback (most recent call last)
<ipython-input-3-3f91b2f4e49a> in <module>
----> 1 Image.open("176678612.jpg")
/opt/conda/envs/swin/lib/python3.7/site-packages/PIL/Image.py in open(fp, mode, formats)
3022 warnings.warn(message)
3023 raise UnidentifiedImageError(
-> 3024 "cannot identify image file %r" % (filename if filename else fp)
3025 )
3026
UnidentifiedImageError: cannot identify image file '176678612.jpg'
And the relevant piece of code handling this check is from PIL.Image.open():
"""
exception PIL.UnidentifiedImageError: If the image cannot be opened and
identified.
"""
raise UnidentifiedImageError(
    "cannot identify image file %r" % (filename if filename else fp)
)
So, the fix is to delete the image, or replace it with an uncorrupted version.
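A cheap way to triage such files before handing them to PIL is to inspect the leading magic bytes (a sketch; looks_like_image is a hypothetical helper, and only the JPEG and PNG signatures relevant to the question are checked):

```python
def looks_like_image(data):
    """Return 'jpeg', 'png', or None based on the file's leading bytes."""
    signatures = {
        b"\xff\xd8\xff": "jpeg",          # JPEG SOI marker
        b"\x89PNG\r\n\x1a\n": "png",      # PNG signature
    }
    for sig, fmt in signatures.items():
        if data.startswith(sig):
            return fmt
    return None
```

For example, looks_like_image(open("/tmp/logo.jpg", "rb").read(8)) returning None means the downloaded file is not actually a JPEG/PNG, and PIL will refuse it too.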
You need to provide a downloadable link to the image. What worked for me was clicking download on the image and then copying that URL.

Running clustalw on Google Cloud Platform: error generating .aln file on Ubuntu

I was trying to run clustalw from the Biopython library under Python 3 on Google Cloud Platform, then generate a phylogenetic tree from the .dnd file using the Phylo library.
The code ran perfectly with no errors on my local system. However, when it runs on Google Cloud Platform it fails with the following error:
python3 clustal.py
Traceback (most recent call last):
File "clustal.py", line 9, in <module>
align = AlignIO.read("opuntia.aln", "clustal")
File "/home/lhcy3w/.local/lib/python3.5/site-packages/Bio/AlignIO/__init__.py", line 435, in read
first = next(iterator)
File "/home/lhcy3w/.local/lib/python3.5/site-packages/Bio/AlignIO/__init__.py", line 357, in parse
with as_handle(handle, 'rU') as fp:
File "/usr/lib/python3.5/contextlib.py", line 59, in __enter__
return next(self.gen)
File "/home/lhcy3w/.local/lib/python3.5/site-packages/Bio/File.py", line 113, in as_handle
with open(handleish, mode, **kwargs) as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'opuntia.aln'
If I run sudo python3 clustal.py, the error would be
File "clustal.py", line 1, in <module>
from Bio import AlignIO
ImportError: No module named 'Bio'
If I run it interactively in Python, the following happens:
Python 3.5.3 (default, Sep 27 2018, 17:25:39)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from Bio.Align.Applications import ClustalwCommandline
>>> in_file = "opuntia.fasta"
>>> clustalw_cline = ClustalwCommandline("/usr/bin/clustalw", infile=in_file)
>>> clustalw_cline()
('\n\n\n CLUSTAL 2.1 Multiple Sequence Alignments\n\n\nSequence format is Pearson\nSequence 1: gi|6273291|gb|AF191665.1|AF191665 902 bp\nSequence 2: gi|6273290|gb
|AF191664.1|AF191664 899 bp\nSequence 3: gi|6273289|gb|AF191663.1|AF191663 899 bp\nSequence 4: gi|6273287|gb|AF191661.1|AF191661 895 bp\nSequence 5: gi|627328
6|gb|AF191660.1|AF191660 893 bp\nSequence 6: gi|6273285|gb|AF191659.1|AF191659 894 bp\nSequence 7: gi|6273284|gb|AF191658.1|AF191658 896 bp\n\n', '\n\nERROR:
Cannot open output file [opuntia.aln]\n\n\n')
Here is my clustal.py file:
from Bio import AlignIO
from Bio import Phylo
import matplotlib
from Bio.Align.Applications import ClustalwCommandline
in_file = "opuntia.fasta"
clustalw_cline = ClustalwCommandline("/usr/bin/clustalw", infile=in_file)
clustalw_cline()
tree = Phylo.read("opuntia.dnd", "newick")
tree = tree.as_phyloxml()
Phylo.draw(tree)
I just want to know how to create an .aln and a .dnd file on Google Cloud Platform, as I can in my local environment. I guess it is probably because I don't have permission to create a new file on the server with Python. I tried f = open('test.txt', 'w') on Google Cloud, and it didn't work until I added sudo to the terminal command, as in sudo python3 test.py. However, as you can see, for clustalw, adding sudo only makes the whole Biopython library go missing.
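Since clustalw itself reports "Cannot open output file [opuntia.aln]", this looks like a write-permission problem on the working directory (and sudo does not help, because root's Python has a different site-packages, which is why Bio disappears). A small stdlib sketch (can_write_here is a hypothetical helper) to check whether the current user can actually create files where the .aln and .dnd outputs will go:

```python
import os
import tempfile

def can_write_here(directory="."):
    """Try to create and remove a file in `directory`, the way clustalw will."""
    try:
        fd, path = tempfile.mkstemp(dir=directory)
        os.close(fd)
        os.remove(path)
        return True
    except OSError:
        return False
```

If can_write_here() returns False in the directory where clustal.py runs, fixing ownership of that directory (e.g. with chown) is the cleaner route than sudo.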

Error following first Theano program example

I'm totally new to Theano and am following this simple intro exercise found here: http://deeplearning.net/software/theano/introduction.html#introduction
The idea is simply to declare some tensor variables and wrap them in a function; it is the simplest thing you could possibly do with Theano.
The exact code is:
import theano
from theano import tensor
# declare two symbolic floating-point scalars
a = tensor.dscalar()
b = tensor.dscalar()
# create a simple expression
c = a + b
# convert the expression into a callable object that takes (a,b)
# values as input and computes a value for c
f = theano.function([a,b], c)
# bind 1.5 to 'a', 2.5 to 'b', and evaluate 'c'
assert 4.0 == f(1.5, 2.5)
However, I get this traceback:
Traceback (most recent call last):
File "test.py", line 13, in <module>
f = theano.function([a,b], c)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/compile/function.py", line 223, in function
profile=profile)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/compile/pfunc.py", line 512, in pfunc
on_unused_input=on_unused_input)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/compile/function_module.py", line 1312, in orig_function
defaults)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/compile/function_module.py", line 1181, in create
_fn, _i, _o = self.linker.make_thunk(input_storage=input_storage_lists)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/link.py", line 434, in make_thunk
output_storage=output_storage)[:3]
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/vm.py", line 847, in make_all
no_recycling))
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/op.py", line 606, in make_thunk
output_storage=node_output_storage)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cc.py", line 948, in make_thunk
keep_lock=keep_lock)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cc.py", line 891, in __compile__
keep_lock=keep_lock)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cc.py", line 1314, in cthunk_factory
key = self.cmodule_key()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cc.py", line 1032, in cmodule_key
c_compiler=self.c_compiler(),
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cc.py", line 1090, in cmodule_key_
sig.append('md5:' + theano.configparser.get_config_md5())
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/configparser.py", line 146, in get_config_md5
['%s = %s' % (cv.fullname, cv.__get__()) for cv in all_opts]))
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/configparser.py", line 146, in <listcomp>
['%s = %s' % (cv.fullname, cv.__get__()) for cv in all_opts]))
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/configparser.py", line 273, in __get__
val_str = self.default()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/tensor/blas.py", line 282, in default_blas_ldflags
if GCC_compiler.try_flags(["-lblas"]):
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cmodule.py", line 1852, in try_flags
flags=flag_list, try_run=False)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/theano/gof/cmodule.py", line 1799, in try_compile_tmp
os.write(fd, src_code)
TypeError: ('The following error happened while compiling the node', Elemwise{add,no_inplace}(<TensorType(float64, scalar)>, <TensorType(float64, scalar)>), '\n', "'str' does not support the buffer interface")
My only thought is that it may be python3 related, but that should not be the case. Please help.
The Theano code base does not work out of the box for both python2 and python3; it needs to be converted. This is done during the installation of Theano. When installed via pip, the conversion is done automatically. If you cloned/downloaded the source code, you need to install it with:
python setup.py install
Here is a Theano ticket with more information:
https://github.com/Theano/Theano/issues/2317
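The bottom of the traceback shows the actual failure: os.write(fd, src_code) is handed a str, which Python 3 rejects (os.write requires a bytes-like object; un-converted Python 2 code passes str). A minimal reproduction of that str-vs-bytes issue:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
src_code = "int main(void) { return 0; }"
try:
    os.write(fd, src_code)             # Python 3: TypeError, str is not bytes-like
except TypeError:
    os.write(fd, src_code.encode())    # converted code writes bytes and succeeds
os.close(fd)
with open(path) as fh:
    written = fh.read()
os.remove(path)
```

This is exactly the kind of call the 2to3 conversion step fixes, which is why installing Theano properly (or upgrading) resolves it.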
Also, for python 3 support, you should use the development version, as in the other answer:
pip3 install --upgrade --no-deps git+git://github.com/Theano/Theano.git
But this isn't related to BLAS, as written.
The problem is that BLAS is not included in the most recent release of Theano. It was solved by pulling the bleeding-edge version:
pip3 install --upgrade --no-deps git+git://github.com/Theano/Theano.git
