HaarTraining with OpenCV error - visual-c++

I have about 15000 cropped images containing the object of interest (positive samples) and 7000 negative images (without the object of interest). The cropped images have a resolution of 48x96 and are placed in a folder. The .txt file listing the positive samples looks something like this: picture1.pgm 1 0 0 48 96, meaning that picture1.pgm contains one positive sample, from (0, 0) to (48, 96). I have a similar .txt file for the negative images.
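For reference, a description file in this format can be generated with a short script. This is only a sketch: the file names are placeholders, and it assumes every cropped sample fills its whole 48x96 image, as above.

```python
# Sketch: write an OpenCV haartraining description file, one line per image.
# Each line reads: <image> <count> <x> <y> <width> <height>.
def write_description(image_names, out_path, w=48, h=96):
    with open(out_path, "w") as f:
        for name in image_names:
            # One object per image, covering the full crop.
            f.write(f"{name} 1 0 0 {w} {h}\n")

write_description(["picture1.pgm", "picture2.pgm"], "positives.txt")
```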
The command for training is the following:
c:\libraries\OpenCV2.4.1\opencv\built\bin\Debug>opencv_haartrainingd.exe -data data/cascade -vec data/positives.vec -bg c:/users/gheorghi/desktop/daimler/pedestrian_stereo_extracted/nonpedestrian/nonpedestrian/c0/negatives.txt -npos 15660 -nneg 7129 -nstage 14 -mem 1000 -mode ALL -w 18 -h 36 -nonsym
But at some point I always get this error :
Parent node: 0
*** 1 cluster ***
OpenCV Error: Assertion failed (elements_read == 1) in unknown function, file C:\libraries\OpenCV2.4.1\opencv\apps\haartraining\cvhaartraining.cpp, line 1858
How can I overcome this? Any help is appreciated. Many thanks.

I found that the problem can be solved in two ways: either decrease the number of positives or increase the number of negatives. Either way, it turns out that keeping the positive-to-negative ratio small helps.

I answered the question here; it may be of some help.
The same issue was posted by many others; I used the advice given here.

Related

cdo remapbil error : Segmentation fault (core dumped)

I have spatially merged 4 tif tiles using gdal_merge, then converted the merged file to netcdf using gdal_translate. Now I want to regrid the netcdf file to a specific lat/lon extent and resolution, but when I use remapbil in cdo I get the error "Segmentation fault (core dumped)". As the file is more than 1.5 GB, I am attaching a google drive link.
The grid data (gridfile.txt) for the command cdo remapbil,gridfile.txt out.nc out_1.nc is attached here.
Please help me resolve this problem.
You are almost certainly running out of RAM; this is what happened to me on my 32 GB machine. A critical thing to know is that, at a minimum, CDO has to hold an entire horizontal layer in memory, so regridding this file is going to be very RAM-heavy.
The solution is to first resample the grid, and then regrid it.
The horizontal resolution of your raw file is roughly 0.001 by 0.001, while the target grid resolution is 0.25 by 0.25. My recommendation is to resample the original grid to 0.01 by 0.01 and then regrid to 0.25. The following will work:
cdo samplegrid,10 out.nc out1.nc
cdo remapbil,grid out1.nc out2.nc
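The two-step pipeline above can also be scripted. The sketch below only builds the cdo commands as argument lists (file names mirror the answer; the factor of 10 is what turns 0.001 spacing into roughly 0.01); uncomment the subprocess call to actually run them.

```python
# Sketch: build the resample-then-regrid cdo pipeline as argument lists,
# so the commands can be inspected before executing them via subprocess.
def build_cdo_pipeline(infile, gridfile, factor=10):
    sampled = infile.replace(".nc", "_sampled.nc")
    regridded = infile.replace(".nc", "_regridded.nc")
    return [
        ["cdo", f"samplegrid,{factor}", infile, sampled],
        ["cdo", f"remapbil,{gridfile}", sampled, regridded],
    ]

for cmd in build_cdo_pipeline("out.nc", "gridfile.txt"):
    print(" ".join(cmd))
    # import subprocess; subprocess.run(cmd, check=True)  # to actually run cdo
```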

How can I avoid a "Segmentation Fault (core dumped)" error when loading large .JP2 images with PIL/OpenCV/Matplotlib?

I am running the following simple line in a short script without any issues, using Python 3.5.2, PIL 1.1.7, OpenCV 2.4.9.1, and Matplotlib 3.0.1:
...
# for example:
img = plt.imread(i1)
...
However, if the size of a loaded .JP2 > ~500 MB, Python3 throws the following error when attempting to load an image:
"Segmentation Fault (core dumped)"
It should not be a RAM issue: only ~40% of the available RAM is in use when the error occurs, and the error stays the same when RAM is removed from or added to the computer. The error also stays the same when the image is loaded in other ways, e.g. with PIL.
Is there a way to avoid this error or to work around it?
Thanks a lot!
Not really a solution, more of an idea that may work or help other folks think up similar or further developments...
If you want to do several operations or crops on each monster JP2 image, it may be worth paying the price up front, just once, to convert it to a format that ImageMagick can subsequently handle more easily. Your image is 20048x80000 pixels of 2-byte shorts, so you can expand it out to a 16-bit PGM file like this:
convert monster.jp2 -depth 16 image.pgm
and that takes around 3 minutes. However, if you then want to extract part of the image some way down its height, you can extract it from the PGM:
convert image.pgm -crop 400x400+0+6000 tile.tif
in 18 seconds, instead of from the monster JP2:
convert monster.jp2 -crop 400x400+0+6000 tile.tif
which takes 153 seconds.
Note that the PGM will take lots of disk space. You could try the same thing with a TIFF, which can also hold 16-bit data and could maybe be LZW-compressed. You could also use libvips to extract tiles even faster from the PGM file.
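The workflow above (one expensive expansion, then cheap repeated crops) is easy to wrap in a script. This sketch only builds the ImageMagick argument lists; paths and tile geometry are illustrative, and each list can be passed to subprocess.run(cmd, check=True).

```python
# Sketch: build the convert commands from the answer as argument lists.
def expand_command(jp2_path, pgm_path):
    # One-off, slow: expand the JP2 into a 16-bit PGM.
    return ["convert", jp2_path, "-depth", "16", pgm_path]

def crop_command(src, dst, w, h, x, y):
    # Fast, repeatable: crop a tile from the expanded file.
    return ["convert", src, "-crop", f"{w}x{h}+{x}+{y}", dst]

print(" ".join(expand_command("monster.jp2", "image.pgm")))
print(" ".join(crop_command("image.pgm", "tile.tif", 400, 400, 0, 6000)))
```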

Tensorflow Object Detection Limit

Following the code here: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
No matter the input image, there seems to be a hard limit of 20 detected objects. Example:
The problem is also seen in this post: TensorFlow Object Detection API Weird Behavior
Is there some configuration or parameter that can be changed to raise the number of objects detected?
EDIT: I can confirm that greater than 20 objects are detected, but there is a maximum of 20 that will be shown in the final output. Is there a way to increase this limit?
The max number of detections can be set in your config file. By default it's usually 300, so you should be fine.
Your problem here is the number of displayed detections. Towards the end of your code you have a call to vis_util.visualize_boxes_and_labels_on_image_array. Just add max_boxes_to_draw=None to its arguments to display all the detections (or choose some bigger number if you want).
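Sketched below: the keyword arguments are collected in a dict for clarity, with only max_boxes_to_draw changed from its default of 20; the commented call assumes the tutorial's variables (image_np, boxes, classes, scores, category_index) are in scope.

```python
# Sketch: pass max_boxes_to_draw=None to lift the default cap of 20 drawn boxes.
vis_kwargs = dict(
    use_normalized_coordinates=True,
    max_boxes_to_draw=None,  # None draws every detection; or pick e.g. 100
    line_thickness=8,
)
# vis_util.visualize_boxes_and_labels_on_image_array(
#     image_np, boxes, classes, scores, category_index, **vis_kwargs)
print(vis_kwargs["max_boxes_to_draw"])
```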

FIFO almost full and empty conditions Verilog

Suppose I have a FIFO with depth 32 and width 8 bits. Each of the 32 locations has a valid bit A: if this bit is 1 at every location we have the full condition, and if it is 0 at every location we have the empty condition. My requirement is to generate an Almost_full condition when bit A is 1 at every location except one, i.e. when 31 of the 32 locations are occupied.
Help me out please.
Thanks in advance.
So you have a 32-bit vector and you want to check that exactly one of the bits is 0. If speed is not much of a concern, I would use a for loop to do this.
If speed is a concern, you can do it in 5 iterations with a divide-and-check method: check the two 16-bit halves in parallel, then divide each into two 8-bit halves and check those in parallel, then, depending on where the zero is, divide that particular 8-bit chunk into 4 bits, and so on.
If at any point you find zeros in both halves, you can stop checking and conclude that almost_full = 0;
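The condition itself can be modelled in software before writing the RTL. A minimal Python sketch (the bit ordering, with bit i of v standing for location i, is an assumption; the actual design would of course be Verilog):

```python
# Model: almost_full is true when exactly one of the 32 valid bits is 0.
def almost_full(v, width=32):
    full_mask = (1 << width) - 1
    # XOR against all-ones flips the vector, so set bits mark empty locations.
    zeros = bin((v & full_mask) ^ full_mask).count("1")
    return zeros == 1
```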

ImageMagick identify reports incorrect GIF frame count

I'm using ImageMagick to do some stuff with GIF images.
One of my steps is identifying the number of frames in an image.
I'm calling identify via node-imagemagick (and later gm) like this:
identify -format '%T,%w,%h ' test.gif
Most of the time I correctly get 53 space-separated values for 53 frames.
But sometimes I get 47 or 50 frames for the same GIF image (which actually has 53 frames).
How can this possibly happen?
I'm running convert -coalesce -append test.gif other.gif at the same time, but that shouldn't touch the original image, right? Moreover, I checked, and the original image is fine even when the wrong number of frames is reported.
I can't even reproduce this consistently. Where do I look for the problem?
This seems to happen when I'm running several ImageMagick processes concurrently (on different files).
I'm using ImageMagick 6.8.7-9 Q16 x86_64 2013-12-11.
The image in question:
(But I've had this happen to other images.)
This was not an ImageMagick problem at all.
My code for downloading the image to the server was faulty, always skipping the last fifty bytes or so.
This was easy to miss because it didn't severely impact the GIF's quality.
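A simple guard against this class of bug is to compare the received byte count with the server's Content-Length header. A sketch (the URL in the usage comment is a placeholder; the header value arrives as a string, or None if the server omits it):

```python
# Sketch: detect truncated downloads like the one behind this bug.
def is_complete(data, content_length):
    # Treat a missing Content-Length as unverifiable rather than an error.
    return content_length is None or len(data) == int(content_length)

# Usage with urllib:
# import urllib.request
# with urllib.request.urlopen("http://example.com/test.gif") as resp:
#     data = resp.read()
#     if not is_complete(data, resp.headers.get("Content-Length")):
#         raise IOError("truncated download")
```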
