I am very new to using ParaView, and I'm trying to import a few VTK files and view them. However, I'm receiving the following errors:
Generic Warning: In /Users/kitware/dashboards/buildbot-slave/8275bd07/build/superbuild/paraview/src/VTK/IO/Legacy/vtkDataReader.cxx, line 1436
Error reading ascii data. Possible mismatch of datasize with declaration.
ERROR: In /Users/kitware/dashboards/buildbot-slave/8275bd07/build/superbuild/paraview/src/VTK/IO/Legacy/vtkUnstructuredGridReader.cxx, line 346
vtkUnstructuredGridReader (0x7fb15582bd10): Unrecognized keyword: ,
I can't seem to figure out what's wrong; I've tried converting them to other formats, to no avail.
I don't think there's a problem with the files. I can open them with ParaView 5.6. Maybe they were generated with a version of VTK that is more recent than the one used by your version of ParaView. You should install the latest version of ParaView (or at least 5.6).
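One way to check, assuming the files are ASCII legacy .vtk files: the first line declares the format version, and files written in the newer 5.x legacy format will not open in older readers. A minimal sketch, with mesh.vtk as a placeholder path for one of your files:

# Print the declared legacy VTK format version, e.g. "# vtk DataFile Version 5.1".
# A 5.x header generally needs a newer reader than a 3.0 header.
with open("mesh.vtk", "r", errors="replace") as f:
    print(f.readline().strip())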
The big file results in some visible geometry; the smaller one does not. But I get no error message; everything seems OK.
I have followed the script provided by DaveD here:
How to read Ansys data files in ParaView?
But I am unable to get a result that ParaView can import. I am attaching a few screenshots, because several warnings came up while the script was running. I got a .vtk file as output (360 MB, so I guess it contains something...), but ParaView displays the following error:
ERROR: In C:\glr\builds\paraview\paraview-ci\source-paraview\VTK\IO\Legacy\vtkUnstructuredGridReader.cxx, line 320
vtkUnstructuredGridReader (000001CECD70BC00): Unrecognized keyword: 0.00000e+00
I have never used APDL, so I would be happy if the author of the script, or someone experienced with it, could tell me what I did wrong (I kept clicking "yes" through all the windows and got the output.vtk, as I mentioned).
Thanks a lot in advance
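For anyone debugging the same thing: "Unrecognized keyword: 0.00000e+00" usually means the reader found a number where it expected a keyword, which typically points to a declared count (for example in the POINTS or CELLS line) that does not match the data that follows. A minimal sketch for checking the POINTS count, assuming output.vtk is the ASCII legacy file produced by the script:

# Sketch: compare the declared POINTS count with the coordinate values that follow.
# "output.vtk" is assumed to be the ASCII legacy file produced by the script.
with open("output.vtk", "r", errors="replace") as f:
    tokens = f.read().split()
i = tokens.index("POINTS")
declared = int(tokens[i + 1])        # number of points the header declares
count = 0
for t in tokens[i + 3:]:             # skip the data-type token (e.g. "float")
    try:
        float(t)
        count += 1
    except ValueError:
        break                        # stop at the next keyword
print("expected", declared * 3, "coordinates, found", count)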
I am using machine learning in my Python (version 3.8.5) code. In the preprocessing part, I need to hash encode a few features, so earlier, during the training phase, I dumped a hash encoder to a pickle file named 'hash_encoder.pkl'. Now, in the testing phase, I need to transform the features using this pickle file. I'm using the code shown in the attached screenshot to hash encode three string features, as given in its first line.
On the encoder.transform line, I'm getting an error at "data_lock=multiprocessing.Manager().Lock()".
At the end I'm also getting 'raise EOFError'.
I have tried using the same version of pandas (1.1.3) to dump the hash_encoder file and to load it. I'm not sure why this is coming up.
Can someone help me understand or debug this part?
I have added the screenshot of the error.
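In case it helps anyone reading this: the combination of multiprocessing.Manager() and EOFError typically shows up when worker processes get spawned outside an if __name__ == "__main__": guard, and the HashingEncoder from category_encoders (which this looks like, though that is a guess) uses multiprocessing internally. A minimal sketch of a guarded testing-phase script, assuming the pickle holds such an encoder and using placeholder feature names:

import pickle
import pandas as pd

def transform_features(df):
    # Load the encoder dumped during training and apply it to the test features.
    with open("hash_encoder.pkl", "rb") as f:
        encoder = pickle.load(f)
    return encoder.transform(df)

if __name__ == "__main__":
    # Guarding the call lets the encoder spawn its worker processes safely;
    # without the guard, Manager() can fail with an EOFError on spawn-based platforms.
    test_df = pd.DataFrame({"feat1": ["a"], "feat2": ["b"], "feat3": ["c"]})  # placeholder features
    print(transform_features(test_df))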
When reading in a raster dataset I get the error below. Previously I have been able to read this same raster dataset in R this way, maintaining access to the attribute table and correct field names. I've tried restoring the files from backups to rule out corrupt files, and I still get the error below. Besides corrupt files, what may be causing this error?
dat2 <- raster("data/data_LEMMA/lemma_clip/w001001.adf")
Error : GDAL Error 3: Failed reading table field info for table
lemma_clip.VAT File may be corrupt?
Warning message: In .rasterFromGDAL(x, band = band, objecttype, ...) :
Could not read RAT or Category names
This appears to be an ESRI GRID that works fine with Arc, but not with GDAL when it reads the Raster Attribute Table (RAT, or VAT in ESRI speak). It would be useful if you could make the data available so that others can take a look and search for a solution.
A work-around is to not read the RAT; perhaps that is acceptable in this case.
dat2 <- raster("data/data_LEMMA/lemma_clip/w001001.adf", RAT=FALSE)
When I run the code from the following link:
https://gist.github.com/fchollet/f35fbc80e066a49d65f1688a7e99f069#file-classifier_from_little_data_script_2-py
I get the following error:
Using TensorFlow backend. Found 2000 images belonging to 2 classes.
/home/nd/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:692:
UserWarning: Possibly corrupt EXIF data. Expecting to read 80000 bytes
but only got 0. Skipping tag 64640 "Skipping tag %s" % (size,
len(data), tag))
I am using Ubuntu.
Tried solution: changing 'w' to 'wb' in lines 70 and 81.
Thanks in advance.
This is because some of the images have corrupted EXIF info. You can just remove the EXIF info from all your images to get rid of this warning.
The Python package piexif can help you. You can use the following code to remove the EXIF info from an image:
import piexif
# suppose im_path is a valid image path
piexif.remove(im_path)
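If you want to strip the EXIF info from the whole dataset in one go, a small loop over the image directory works; the path "data/train" below is a placeholder:

import glob
import piexif

# Remove EXIF metadata in place from every file piexif can handle under the data directory.
for im_path in glob.glob("data/train/**/*.*", recursive=True):
    try:
        piexif.remove(im_path)                 # rewrites the file without its EXIF block
    except piexif.InvalidImageDataError:
        pass                                   # skip files piexif cannot parse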
You can find more discussion here.
The error seems to imply that you are trying to use TIFF images (rather than JPEGs) and that the PIL library can't import these without a warning (Possibly corrupt EXIF data).
I suggest you try some test JPEGs to make sure your images can be imported correctly.
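To find which files actually trigger the problem, you could turn PIL warnings into errors and try loading everything; the directory path "data/train" is a placeholder:

import glob
import warnings
from PIL import Image

# Turn warnings (including the corrupt-EXIF one) into exceptions so they are reported per file.
warnings.simplefilter("error")
for path in glob.glob("data/train/**/*.*", recursive=True):
    try:
        with Image.open(path) as im:
            im.load()                          # force a full decode, not just the header
    except Exception as exc:
        print(path, "->", exc)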
I am working on import routines for Node. So far I can import text nodes from a PDF using pdf2json; this works well, but it doesn't work on PDFs that are image-based and contain no text.
So I downloaded pdf2img; however, there are plenty of issues with this module. The one I have now is that after running it, I get a lot of 0-byte PNG files with no content, plus an error message:
/docfire/node_modules/gm/lib/command.js:228
proc.stdin.once('error', cb);
^
TypeError: Cannot read property 'once' of undefined
at gm._spawn (/docfire/node_modules/gm/lib/command.js:228:15)
at /docfire/node_modules/gm/lib/command.js:140:19
at series (/docfire/node_modules/array-series/index.js:11:36)
at gm._preprocess (/docfire/node_modules/gm/lib/command.js:177:5)
at gm.stream (/docfire/node_modules/gm/lib/command.js:138:10)
at convertPdf2Img (/docfire/node_modules/pdf2img/lib/pdf2img.js:93:6)
at /docfire/node_modules/pdf2img/lib/pdf2img.js:67:9
at /docfire/node_modules/async/lib/async.js:246:17
at /docfire/node_modules/async/lib/async.js:122:13
at _each (/docfire/node_modules/async/lib/async.js:46:13)
I've tried posting an issue on the module's GitHub page, but it looks like quite a few people are having exactly the same problem, and there doesn't seem to be any activity regarding fixes.
What I would ideally like is a way to extract text and images from a PDF for node.
I'm running on an iMac with macOS Sierra v10.12.4, Node version 7.8.0, pdf2img 0.2.0, and gm 1.23.0.
You can try the pdf-image npm package.
https://www.npmjs.com/package/pdf-image
Hope this helps.