cdo remapbil error : Segmentation fault (core dumped) - linux

I have spatially merged 4 tif tiles using gdal_merge, then converted the merged file to netCDF using gdal_translate. Now I want to regrid the netCDF file to a specific lat/lon extent and resolution. But when I use remapbil in CDO I get the error "Segmentation fault (core dumped)". As the file is more than 1.5 GB, I am attaching a Google Drive link.
The grid data (gridfile.txt) for the command
"cdo remapbil,gridfile.txt out.nc out_1.nc" is attached here.
Please help me resolve this problem.

You are almost certainly running out of RAM; this is what happens to me on my 32 GB machine. A critical thing to know is that, at a minimum, CDO has to hold an entire horizontal layer in memory, so regridding this file is going to be very RAM-heavy.
The solution is to first resample the grid, and then regrid it.
Your horizontal resolution in the raw file is roughly 0.001 by 0.001 degrees, whereas the target grid resolution is 0.25 by 0.25. My recommendation is to resample the original grid to 0.01 by 0.01 (samplegrid,10 keeps every 10th point in each direction), and then regrid to 0.25. The following will work:
cdo samplegrid,10 out.nc out1.nc
cdo remapbil,gridfile.txt out1.nc out2.nc
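For reference, a CDO grid description file for a 0.25-degree lonlat target typically looks like the following. This is only a sketch; the sizes and start coordinates below are placeholders, and the real values are whatever is in the attached gridfile.txt:
gridtype = lonlat
xsize    = 120
ysize    = 80
xfirst   = 70.125
xinc     = 0.25
yfirst   = 10.125
yinc     = 0.25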

Related

Python script skyrockets size of pagefile.sys

I wrote a Python script that sometimes crashes with a Memory Allocation Error. I noticed that the pagefile.sys of my Win10 64-bit system skyrockets while this script runs and exceeds the free memory.
My current solution is to run the script in steps, so that the pagefile empties each time the script finishes.
I would like the script to run through all at once, though.
Moving the pagefile to another drive is not an option, unfortunately, because I only have this one drive and moving the pagefile to an external drive does not seem to work.
During my research, I found out about the module gc, but it is not working:
import gc
and after every iteration I use
gc.collect()
Am I using it wrong or is there another (python-based!) option?
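For reference, the usual placement is a collect call at the end of each loop iteration, roughly like this minimal sketch (image_paths and process are hypothetical stand-ins for the actual file list and per-image work):
import gc

for imgOri in image_paths:   # image_paths: hypothetical list of image files
    process(imgOri)          # process: stand-in for the per-image work
    gc.collect()             # collect once per iteration, not once at the end
Note that gc.collect() can only free objects that are no longer referenced; if something still holds the image data, collecting will not shrink memory.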
[Edit:]
The script is very basic and only iterates over image files (using Pillow). It only checks the width, height and resolution of each image and calculates the dimensions in cm.
If height > width, the image is rotated 90° counterclockwise.
The images are meant to be enlarged or shrunk to A3 size (42 x 29.7 cm), so I use the width/height ratio to work out whether I can enlarge the width to 42 cm while the height stays below 29.7 cm; if the height would exceed 29.7 cm, I enlarge the height to 29.7 cm instead.
For the moment, I still do the actual enlargement/shrinking in Photoshop. Based on whether it is a width or height enlargement, the file is moved to the corresponding folder.
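In code, the fit check described above might look like this minimal sketch (all names here are hypothetical):
A3_LONG, A3_SHORT = 42.0, 29.7   # A3 size in cm, landscape

def a3_fit(widthCM, heightCM):
    # Scale so the width becomes 42 cm; if the scaled height still
    # fits under 29.7 cm, scale by width, otherwise scale by height.
    scale = A3_LONG / widthCM
    if heightCM * scale <= A3_SHORT:
        return "width"    # enlarge/shrink the width to 42 cm
    return "height"       # enlarge/shrink the height to 29.7 cm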
Anyways, the memory explosion happens in the iteration that only reads the file dimensions.
For that I use
from PIL import Image

with Image.open(imgOri) as pic:
    widthPX = pic.size[0]                  # pixel width from the header
    heightPX = pic.size[1]                 # pixel height from the header
    resolution = pic.info["dpi"][0]        # horizontal DPI
    widthCM = float(widthPX) / resolution * 2.54    # 1 inch = 2.54 cm
    heightCM = float(heightPX) / resolution * 2.54
I also calculate whether the shrinking would be too strong; if so, the image is divided in half and re-evaluated.
Even though it is unnecessary, I also added pic.close() inside the with statement, because I thought Python might be keeping the image files open, but that didn't help.
Once the iteration finishes, pagefile.sys goes back to its original size, so when that error occurs, I take some files out and process them in batches.

How can I avoid a "Segmentation Fault (core dumped)" error when loading large .JP2 images with PIL/OpenCV/Matplotlib?

I am running the following simple line in a short script without any issues:
Python 3.5.2;
PIL 1.1.7;
OpenCV 2.4.9.1;
Matplotlib 3.0.1;
...
# for example:
img = plt.imread(i1)
...
However, if the size of a loaded .JP2 > ~500 MB, Python3 throws the following error when attempting to load an image:
"Segmentation Fault (core dumped)"
It should not be a RAM issue, as only ~40% of the available RAM is in use when the error occurs, and the error stays the same when RAM is removed from or added to the computer. The error also stays the same when the image is loaded in other ways, e.g. with PIL.
Is there a way to avoid this error or to work around it?
Thanks a lot!
Not really a solution, more of an idea that may work or help other folks think up similar or further developments...
If you want to do several operations or crops on each monster JP2 image, it may be worth paying the price up-front, just once, to convert to a format that ImageMagick can subsequently handle more easily. Your image is 20048x80000 2-byte shorts, so you can expand it out to a 16-bit PGM file like this:
convert monster.jp2 -depth 16 image.pgm
and that takes around 3 minutes. However, if you then want to extract part of the image some way down its height, you can extract it from the PGM:
convert image.pgm -crop 400x400+0+6000 tile.tif
in 18 seconds, instead of from the monster JP2:
convert monster.jp2 -crop 400x400+0+6000 tile.tif
which takes 153 seconds.
Note that the PGM will take lots of disk space (20048 x 80000 x 2 bytes is roughly 3.2 GB). I guess you could try the same thing with a TIFF, which can hold 16-bit data too and could maybe be LZW-compressed. I guess you could also use libvips to extract tiles even faster from the PGM file.
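As a rough sketch of the libvips idea (assuming the pyvips Python bindings are installed; file names as above):
import pyvips

# Sequential access streams the file top-to-bottom instead of
# decoding the whole image into memory at once.
img = pyvips.Image.new_from_file("image.pgm", access="sequential")
tile = img.crop(0, 6000, 400, 400)   # left, top, width, height
tile.write_to_file("tile.tif")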

Issue Capturing image from FLIR Boson with openCV on a Jetson TX2

When I try to open a webcam (FLIR Boson) with OpenCV on a Jetson TX2 it gives the following error:
libv4l2: error set_fmt gave us a different result then try_fmt!
VIDEOIO ERROR: libv4l unable convert to requested pixfmt
I am using this python script:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here

    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Although it does display the video, it shows those errors. The reason that is relevant is that I am trying to get the FLIR Boson to work with a Jetson TX2 running this program: https://github.com/naisy/realtime_object_detection
I have it working with a regular webcam, but with the FLIR Boson it gives
libv4l2: error set_fmt gave us a different result then try_fmt!
VIDEOIO ERROR: libv4l unable convert to requested pixfmt
VIDEOIO ERROR: V4L: Initial Capture Error: Unable to load initial memory buffers.
Segmentation fault (core dumped)
the above error, and then closes. In my research on the error, it seems to come up for people who use monochrome webcams. Looking at this https://www.flir.com/support-center/oem/is-there-a-way-to-maximize-the-video-display-on-the-boson-app-for-windows-pc-to-full-screen/ I am wondering if I need to configure OpenCV or the V4L2 driver to choose the right pixel format for the webcam to prevent the errors.
I also have a Jetson Xavier, and the same object detection program works on it (it just has a different build of OpenCV and TensorFlow), so I am guessing there is a slightly different webcam-format configuration in the OpenCV install on the Xavier versus the TX2. I am new to all of this, so forgive me if I ask for more clarification.
One last bit of info: this is from the FLIR Boson manual, in the section on USB:
8.2.2 USB
Boson is capable of providing digital data as a USB Video Class (UVC) compliant device. Two output options are provided. Note the options are not selected via the CCI but rather by the video capture or viewing software selected by the user. The options are:
■ Pre-AGC (16-bit): The output is linearly proportional to the flux incident on each pixel in the array; output resolution is 320x256 for the 320 configuration, 640x512 for the 640 configuration. Note that AGC settings, zoom settings, and color-encoding settings have no effect on the output signal at this tap point. This option is identified with a UVC video format 4CC code of “Y16 ” (16-bit uncompressed greyscale image)
■ Post-Colorize, YCbCr: The output is transformed to YCbCr color space using the specified color palette (see Section 6.7). Resolution is 640x512 for both the 320 and 640 configurations. Three options are provided, identified via the UVC video format 4CC code:
• I420: 8 bit Y plane followed by 8 bit 2x2 subsampled U and V planes
• NV12: 8-bit Y plane followed by an interleaved U/V plane with 2x2 subsampling
• NV21: same as NV12 except reverse order of U and V planes
I have tried reinstalling everything several times, although it takes a few hours to reflash the TX2 and reinstall OpenCV and TensorFlow. I have tried two different builds of OpenCV. I have also tried viewing the webcam with Cheese and have never had a problem there.
I don't work with Python, but you need to disable the conversion to RGB:
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)
See the v4l example from OpenCV.
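Given the Y16 tap point described in the manual excerpt above, a fuller sketch might look like this (whether the FOURCC request is honoured depends on the OpenCV build and its V4L2 backend, so treat this as an assumption to test):
import cv2

cap = cv2.VideoCapture(0)
# Ask for the raw 16-bit greyscale (Y16) stream from the camera...
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc('Y', '1', '6', ' '))
# ...and stop OpenCV from trying to convert it to RGB.
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)

ret, frame = cap.read()   # frame should be single-channel 16-bit data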
I was able to find a way to get it to work using the code below. It seemed to be a problem with OpenCV interacting with v4l2.
pipeline = "v4l2src device=/dev/video1 ! video/x-raw,width=640,height=512,format=(string)I420,pixel-aspect-ratio=1/1, interlace-mode=(string)progressive, framerate=30/1 ! videoconvert ! appsink"
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
https://github.com/FLIR/BosonUSB/issues/13

ImageMagick identify reports incorrect GIF frame count

I'm using ImageMagick to do some stuff with GIF images.
One of my steps is identifying the number of frames in an image.
I'm calling identify via node-imagemagick (and later gm) like this:
identify -format '%T,%w,%h ' test.gif
Most of the time I correctly get 53 space-separated values for 53 frames.
But sometimes I get 47 or 50 frames for the same GIF image (that has 53 frames).
How can this possibly happen?
I'm running convert -coalesce -append test.gif other.gif at the same time, but it shouldn't touch the original image, right? Moreover, I checked and the original image is just fine, even when the wrong number of frames is reported.
I can't even reproduce this consistently. Where do I look for the problem?
This seems to happen when I'm running several ImageMagick processes concurrently (on different files).
I'm using ImageMagick 6.8.7-9 Q16 x86_64 2013-12-11.
The image in question:
(But I've had this happen to other images.)
This was not an ImageMagick problem at all.
My code for downloading the image to the server was faulty, always skipping the last fifty bytes or so.
This was easy to miss because the truncation didn't severely impact the GIF's quality.
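The original download code was Node, but the integrity check that would have caught this is simple in any language; a Python sketch (the function name and the use of the requests library are illustrative):
import os
import requests

def download(url, path):
    resp = requests.get(url, stream=True)
    with open(path, "wb") as f:
        for chunk in resp.iter_content(chunk_size=65536):
            f.write(chunk)
    # Compare what the server promised with what actually landed on disk.
    expected = int(resp.headers.get("Content-Length", -1))
    actual = os.path.getsize(path)
    if expected >= 0 and actual != expected:
        raise IOError("truncated download: got %d of %d bytes" % (actual, expected))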

HaarTraining with OpenCV error

I have about 15000 cropped images containing the object of interest (positive samples) and 7000 negative images (without the object of interest). The cropped images have a resolution of 48x96 and are placed in a folder. The .txt file describing the positive samples looks something like this: picture1.pgm 1 0 0 48 96, meaning that there is 1 positive sample in picture1 from (0,0) to (48,96). Likewise, I have a .txt file for the negative images.
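For concreteness, the two description files in the standard opencv_haartraining setup look like this (file names below are placeholders):
positives.txt — one line per image: path, object count, then x y width height for each object:
picture1.pgm 1 0 0 48 96
picture2.pgm 1 0 0 48 96
negatives.txt — simply one image path per line:
negative1.pgm
negative2.pgm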
The command for training is the following:
c:\libraries\OpenCV2.4.1\opencv\built\bin\Debug>opencv_haartrainingd.exe -data data/cascade -vec data/positives.vec -bg c:/users/gheorghi/desktop/daimler/pedestrian_stereo_extracted/nonpedestrian/nonpedestrian/c0/negatives.txt -npos 15660 -nneg 7129 -nstage 14 -mem 1000 -mode ALL -w 18 -h 36 -nonsym
But at some point I always get this error :
Parent node: 0
*** 1 cluster ***
OpenCV Error: Assertion failed (elements_read == 1) in unknown function, file C:\libraries\OpenCV2.4.1\opencv\apps\haartraining\cvhaartraining.cpp, line 1858
How can I overcome this? Any help is appreciated. Many thanks!
I found that the problem can be solved in two ways: you can either decrease the number of positives or increase the number of negatives. Either way, it turns out that having a small positive-to-negative ratio helps.
I answered the question here.
It may be of some help.
The same issue was posted by many others; I used the advice given here.
