Displaying DICOM dataset using VTK

I want to process a DICOM dataset and display it using VTK.
How can I know in advance whether the graphics card will be able to display the volume?
I've tried using glGetIntegerv(GL_MAX_TEXTURE_BUFFER_SIZE_EXT, size), which gives the maximum number of texels the graphics card can render, and comparing it with the output of m_vtkImageReader->GetOutput()->GetDimensions(dimensions). I thought that if dimensions.x*dimensions.y*dimensions.z > size, VTK would throw an error, but that didn't happen.
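For reference, a minimal sketch of a check closer to what drivers actually enforce (an assumption on my part: for 3D textures the per-axis limit GL_MAX_3D_TEXTURE_SIZE is the relevant query, not the total texel count; a current OpenGL context is required, and on Windows the enum may need glext.h):

#include <GL/gl.h>

// Rough pre-check only: each axis of a 3D texture is limited separately,
// so compare per dimension rather than the texel product.
bool VolumeMayFit(const int dims[3])
{
    GLint maxDim = 0;
    glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE, &maxDim); // needs a current GL context
    return dims[0] <= maxDim && dims[1] <= maxDim && dims[2] <= maxDim;
}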
I'll be glad to hear about other approaches, or maybe someone can point out where I'm wrong.

VTK provides both GPU-based and CPU-based volume rendering. You may try vtkSmartVolumeMapper: it selects the best of VTK's mappers for your card. It displayed a volume fine on a notebook with a UniChrome video card with 32 MB of memory.
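For illustration, a minimal sketch of that approach (the DICOM directory path and the transfer-function points are placeholder assumptions, not values from the question):

#include <vtkSmartPointer.h>
#include <vtkDICOMImageReader.h>
#include <vtkSmartVolumeMapper.h>
#include <vtkVolume.h>
#include <vtkVolumeProperty.h>
#include <vtkColorTransferFunction.h>
#include <vtkPiecewiseFunction.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>

int main()
{
    // Read the DICOM series (hypothetical path).
    auto reader = vtkSmartPointer<vtkDICOMImageReader>::New();
    reader->SetDirectoryName("/path/to/dicom/series");
    reader->Update();

    // vtkSmartVolumeMapper picks GPU ray casting when the card can handle
    // the volume and falls back to CPU ray casting when it cannot, which
    // sidesteps the manual texel-count check from the question.
    auto mapper = vtkSmartPointer<vtkSmartVolumeMapper>::New();
    mapper->SetInputConnection(reader->GetOutputPort());
    mapper->SetRequestedRenderModeToDefault();

    // Placeholder grayscale transfer functions; tune to the data's range.
    auto color = vtkSmartPointer<vtkColorTransferFunction>::New();
    color->AddRGBPoint(0.0, 0.0, 0.0, 0.0);
    color->AddRGBPoint(1000.0, 1.0, 1.0, 1.0);
    auto opacity = vtkSmartPointer<vtkPiecewiseFunction>::New();
    opacity->AddPoint(0.0, 0.0);
    opacity->AddPoint(1000.0, 0.5);

    auto property = vtkSmartPointer<vtkVolumeProperty>::New();
    property->SetColor(color);
    property->SetScalarOpacity(opacity);

    auto volume = vtkSmartPointer<vtkVolume>::New();
    volume->SetMapper(mapper);
    volume->SetProperty(property);

    auto renderer = vtkSmartPointer<vtkRenderer>::New();
    renderer->AddVolume(volume);
    auto window = vtkSmartPointer<vtkRenderWindow>::New();
    window->AddRenderer(renderer);
    auto interactor = vtkSmartPointer<vtkRenderWindowInteractor>::New();
    interactor->SetRenderWindow(window);
    window->Render();
    interactor->Start();
    return 0;
}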

Convert PDF to image with high resolution to fit in page

I regularly get tree-drilling data out of a machine, and it needs to go into reports.
The PDFs contain too much empty space and useless information.
With convert I already managed to turn the PDF into a PNG, cut out parts, and rebuild the image I want. It has fine sharpness, it's just too large:
Output 1: Nice, just too large
For my reports I need it at 45% of that size, or 660 pixels wide.
The best output I've managed so far is this:
Output 2: Perfect size but unsharp
Now, this is far from the quality of the picture before shrinking.
For sure, I've read this article here, which already helped.
But I think it must be possible to get an image as fine as the too-large one in Output 1.
I've tried around for hours with convert -scale, -resize, -resample, playing with values for density, sharpen, unsharpen, quality... nothing better than what I've got, using
convert -density 140 -trim input.pdf -quality 100 -sharpen 0x1.0 step1.png
then processing it into the new picture (Output 1, see above), which I bring to the correct size with
convert output1.png -resize 668x289! -unsharp 0x0.75+0.75+0.01 output2.png
I also tried "-resize 668x" so as not to disturb anything; no difference.
I find I am helpless in the end.
I am not an IT expert, I am a computer-savvy tree consultant.
My understanding of image processing is limited.
Maybe it would make sense to stay with a vector-based format (I tried .gif and .svg ... brrrr).
I would prefer to stay with convert/ImageMagick and not install additional software.
It has to run from the command line, as it is part of a bash script processing multiple files. I am using SUSE Linux.
Grateful for your help!
I realize you said no other software, but it can be easier to get good results from other PDF rendering engines.
ImageMagick renders PDFs by shelling out to Ghostscript. That is terrific software, but it's designed for print rather than screen output. As a result, it generates very hard edges, because that's what you need if you intend to control ink on paper. The tricks you see for rendering PDFs at higher resolution and then resizing them fix this, but it can be tricky to get the parameters just right (as you know).
There are PDF rendering libraries which target screen output and will produce nice edges immediately. You don't need to render at high res and sample down, they just render correctly for screen in the first place. This makes them easier to use (obviously!) and a lot faster.
For example, vipsthumbnail comes with SUSE and includes a direct PDF rendering system. Install with:
zypper install vips-tools
Regarding the size, your 660 pixels across is too low. Some characters in your PDF will come out at only 3 or 4 pixels across, and you simply can't make them sharp; there are just too few dots.
Instead, think about the size you want them printed on the paper, and the level of detail you need. The number of pixels across sets the detail, and the resolution controls the physical size of those dots when you print.
I would at least double that 668. Try:
vipsthumbnail P3_M002.pdf --size 1336 -o x.png
With your sample image I get:
Now when you print, you want those 1336 pixels to fill 17cm of paper. libvips lets you set resolution in pixels per millimetre, so you need 1336 pixels in 170 mm, or 1336 / 170, or 7.86. Try:
vips copy x.png y.png[palette] --xres 7.86 --yres 7.86
Now y.png should load into LibreOffice Calc at 17 cm across and be nice and sharp when printed. The [palette] option after y.png enables palettised PNG, which shrinks the image to around 50 KB.
The resolution setting is also called DPI (dots per inch). I find the name confusing myself -- you'll also see it called "pixels per printed inch", which I think is much clearer.
In ImageMagick, set a higher density, then trim, then resize, then unsharpen. The higher the density, the sharper your result, but the slower it gets. Note that a PNG quality of 100 is not the scale you might expect: PNG does not have quality values corresponding to 0 to 100 as JPG does. See https://imagemagick.org/script/command-line-options.php#quality. I cannot tell you the "best" numbers to use, as they are image dependent. You can use some other tool, such as those at https://imagemagick.org/Usage/formats/#png_non-im, to optimize your PNG output.
So try,
convert -density 300 input.pdf -trim +repage -resize 668x289 -unsharp 0x0.75+0.75+0.01 output.png
Or remove the -unsharp if you find that it is not needed.
ADDITION
Here is what I get with
convert -density 1200 P3_M002.pdf -alpha off -resize 660x -brightness-contrast -35,35 P3_M002.png
I am not sure why the graph itself lost brightness and contrast. (I suspect it is due to an embedded image for the graph.) So I added -brightness-contrast to bring out the detail, but it made the background slightly gray. You can try reducing those values; you may not need them quite so strong.
Great, @fmw42,
pngcrush -res 213 graphc.png done.png
from your link did the job, as can be seen here:
perfect size and sharp graph
Thank you a lot.
Now I'll try to get the file size down, as the original PDF has 95 KiB and I am now at 350 KiB. With 10 or more graphs in a document it could get unnecessarily large, and working on the document might get slow.
-- Addition -- 2023-02-04
@fmw42: Thanks for all your effort!
Your solution with the .pdf you show does not really work: too gray for a good report, and not the required sharpness either.
@jcupitt: Also thanks, vips is quick and looks interesting. vipsthumbnail's output is unsharp; I tried around a bit, but the documentation is too abstract for me to get the syntax right. I could not find beginner-readable documentation, maybe you know of some?
General: with all my beginner's trials up to now, I find:
the PDF contains all the information to produce a large, absolutely sharp output (vector-typical, I guess)
it is no problem to convert it to a PNG of the same size without losing quality
any solution for shrinking the PNG afterwards results in significant (a) quality loss or (b) file-size increase.
So I (a beginner) think the PDF should be converted directly to the correct PNG size, without downsampling the PNG afterwards.
This could be done by
(a) telling the conversion process the output size (if there is a possibility for this?), or
(b) first creating a smaller PDF, making it look A5 instead of A4, so that a fitting PNG is created directly (I need approx. 6.5 inches wide).
For both solutions I lack the means to investigate sensibly, as it takes me hours and hours to try things out and learn about the mysteries of image processing.
The solution with pngcrush works for the moment, although I'm not really happy about the file size (CPU and fan power are not really important factors here).
--- Addition II --- final one 2023-02-05
convert -density 140 -trim "$datei" -sharpen 0x1.0 rgp-kopie0.png
magick rgp-kopie0.png +dither PNG8:rgp-kopie.png ## less colours
## some convert -crop and -composite here to arrange new image
pngcrush -s -res 213 graphc.png "$namenr.png"
The new image is like this, at around 50 KiB, definitely satisfying for me in quality and file size.
I thank you all a lot for contributing; this makes my work easier from now on!
... and even if I do not completely understand everything, I learnt a bit.

How would I be able to acquire data using a picture?

I need to find a way to acquire data from a picture for a new project I am trying to do. This involves tracking eye movements.
Have you checked out OpenCV? If that's not an option, consider any number of the image libraries available in Python -- just about all of them support a number of encoding representations, such as RGBA, HSV, Bayer, and YUV. Note that the four encodings I've mentioned have different channels and are uncompressed, which would give you the full data from each frame you're analyzing.
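For instance, a minimal OpenCV C++ sketch of detecting eye regions in a single frame (assumptions: the stock Haar eye cascade that ships in OpenCV's data/haarcascades directory, and hypothetical input/output file names):

#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat frame = cv::imread("frame.png"); // hypothetical input frame
    if (frame.empty()) { std::cerr << "could not read image\n"; return 1; }

    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY); // Haar cascades expect grayscale

    cv::CascadeClassifier eyes("haarcascade_eye.xml"); // stock OpenCV cascade file
    std::vector<cv::Rect> found;
    eyes.detectMultiScale(gray, found);

    for (const cv::Rect& r : found)
        cv::rectangle(frame, r, cv::Scalar(0, 255, 0), 2); // mark detected eyes

    cv::imwrite("eyes.png", frame);
    return 0;
}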

Python Image Manipulation

I am loading big, raw data files with Python. Each is a collection of images (a video stream) that I want to display in an interface. As of now I am embedding a matplotlib graph and using the imshow() command, but it is very slow.
The fast part is reading the data itself; splitting it into a numpy array matrix already takes 8 seconds for a 14 MB file, though. We have 50 GB files. That would take 8 hours. It's probably not the biggest problem, though.
The real problem is displaying the images. Let's say all images of the 14 MB file are in RAM (I'm assuming Python keeps them there, which is also my problem with Python: you don't know what the hell is happening). Right now I am replotting the image every time and then redrawing the canvas, and it seems to be a bottleneck. Is there any way to reduce this bottleneck?
Images are usually 680*480 (but also variable) with a variable datatype, usually uint8. The interface is a GUI, and there is a slider bar you can drag to get to a certain frame. An additional feature will be a play button that will go through the frames in near real time. Windows application.

OpenCV blob tracking on low resolution image in Visual Studio 2012

I'm using the Kinect SDK in C++ to generate an image of points near a plane in space, with the goal of using them as touches. I've attached a 3x-scale image of the result of that process, so that's all gravy.
My question is how best to use OpenCV to generate blobs, frame to frame, from this image (and images like it) to use as touches. Here's what I've tried in my ProcessDepth callback, where img is a monochrome cv::Mat of the touch image and out is an empty cv::Mat.
// Declarations added for context -- these need to outlive the calls below:
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Moments> mu;
std::vector<cv::Point2f> mc;

cv::Canny(img, out, 100, 200, 3);
cv::findContours(out, contours, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
mu.resize(contours.size());
mc.resize(contours.size());
for (size_t i = 0; i < contours.size(); i++) {
    mu[i] = cv::moments(contours[i], true); // binary moments per contour
}
for (size_t i = 0; i < contours.size(); i++) {
    // Centroid from raw moments; undefined when m00 == 0.
    mc[i] = cv::Point2f(mu[i].m10 / mu[i].m00, mu[i].m01 / mu[i].m00);
}
(I'd post more code, but VMWare is being bad about letting me copy-paste out of it; if you want more, just ask.)
At this point I think I should get centers of mass for the blobs in a frame; in practice, though, it's not there. I either get errors when contours.size() returns greater than 0, or, with a bit of tinkering, moments that seem really weird, containing large negative numbers, say. So my questions are as follows:
Does anyone have recommendations on how to turn the image below into blob data with a good result, as far as flags in findContours are concerned?
Do I even need to bother with Canny or threshold since I already have a monochrome image, and if Canny, is the kernel of 3 too large for the number of pixels I'm dealing with?
Will findContours work on images of this size? (160-ish by 90-ish, though that's fairly arbitrary; smallish, more generally.)
Are the OpenCV functions async? I get lots of invalid-address errors if my images and the contour vector don't exist as members of the application class. (I'm the first to admit I'm not a particularly talented C++ programmer.)
Is there a simpler way to go from an image to a series of points corresponding to touches?
For reference, I'm cribbing from some examples in my OpenCV download, and this example.
Let me know if you need some other information to better answer, and I'll try to provide it, thanks!
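A hedged sketch touching questions 2 and 5 above: on an already-monochrome touch image, a plain threshold followed by findContours is usually enough, so Canny can be skipped, and guarding against m00 == 0 avoids centroids with garbage values like the large negative numbers described. The function name and threshold constant here are illustrative assumptions:

#include <opencv2/opencv.hpp>
#include <vector>

// Threshold the monochrome touch image, find external contours, and return
// one centroid per blob, skipping degenerate contours with zero area.
std::vector<cv::Point2f> TouchCenters(const cv::Mat& img)
{
    cv::Mat bin;
    cv::threshold(img, bin, 1, 255, cv::THRESH_BINARY); // any nonzero pixel counts as touch

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point2f> centers;
    for (size_t i = 0; i < contours.size(); i++) {
        cv::Moments m = cv::moments(contours[i], true);
        if (m.m00 > 0) // dividing by a zero m00 is what produces garbage centroids
            centers.push_back(cv::Point2f(float(m.m10 / m.m00), float(m.m01 / m.m00)));
    }
    return centers;
}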

DICOM Image is too dark with ITK

I am trying to read an image with ITK and display it with VTK.
But there is a problem that has been haunting me for quite some time.
I read the images using the classes itkGDCMImageIO and itkImageSeriesReader.
After reading, I can do two different things:
1.
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslice to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer, and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with colors. Some of them are shown very bright, others are so dark you can barely see them.
2.
The second scenario is the registration pipeline. Here I read the image as before, then use the classes shown in the ITK Software Guide chapter about registration. Then I resample the image and use itkImageSeriesWriter.
And that's when the problem appears. After writing the image to a file, I compare this new image with the image I used as input in the XMedcon software. If the image I wrote was shown too bright in my software, there are no changes when I compare the two in XMedcon. Otherwise, if the image was too dark in my software, it appears all messed up in XMedcon.
I noticed, when comparing both images (the original and the new one), that in both cases there are changes in modality, pixel dimensions, and glmax.
I suppose the problem is with the glmax, as the major changes occur with the darker images.
I really don't know what to do. Does this have something to do with color level/window? The strangest thing is that all the images are very similar, with identical tags, and only some of them show errors when displayed/written.
I'm not familiar with the particulars of VTK/ITK specifically, but it sounds to me like the problem is more general than that. Medical images have a high dynamic range, and they will often appear very dark or very bright if the window isn't set to an appropriate range. The DICOM tags Window Center (0028,1050) and Window Width (0028,1051) contain default window settings that were selected by the modality. Usually these values are reasonable, but not always. See Part 3 of the DICOM standard (the filename is 11_03pu.pdf), section C.11.2.1.2, for details on how raw image pixels are scaled for display. The general idea is that you'll need to apply a linear scaling to the pixel values to get appropriate values for display.
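For concreteness, a minimal sketch of that linear mapping for a single stored pixel value, following the piecewise-linear VOI function from section C.11.2.1.2 (the 8-bit output range is an assumption for display purposes):

// x: stored pixel value, c: Window Center (0028,1050), w: Window Width (0028,1051).
unsigned char ApplyWindowLevel(double x, double c, double w)
{
    if (x <= c - 0.5 - (w - 1.0) / 2.0) return 0;   // below the window: black
    if (x >  c - 0.5 + (w - 1.0) / 2.0) return 255; // above the window: white
    // Inside the window: linear ramp across the display range.
    return (unsigned char)(((x - (c - 0.5)) / (w - 1.0) + 0.5) * 255.0);
}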
What pixel type do you use? In most cases it's simpler to use a floating-point type while working with ITK, but raw medical images often come as short, so that could be your problem.
You should also write the image to disk after each step (in MHD format, for example) and inspect it with a viewer that's known to work properly, such as vv (http://www.creatis.insa-lyon.fr/rio/vv). You could also post the images here, as well as your code, for further review.
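A minimal sketch of that debugging step (the short-to-float pixel types and the function name are assumptions for illustration; the writer picks the MHD format from the file extension):

#include "itkImage.h"
#include "itkCastImageFilter.h"
#include "itkImageFileWriter.h"
#include <string>

typedef itk::Image<short, 3> ShortImage;
typedef itk::Image<float, 3> FloatImage;

// Cast an intermediate short image to float and dump it as MHD for
// inspection in an external viewer such as vv.
void DumpAsMhd(ShortImage::Pointer image, const std::string& path)
{
    itk::CastImageFilter<ShortImage, FloatImage>::Pointer cast =
        itk::CastImageFilter<ShortImage, FloatImage>::New();
    cast->SetInput(image);

    itk::ImageFileWriter<FloatImage>::Pointer writer =
        itk::ImageFileWriter<FloatImage>::New();
    writer->SetInput(cast->GetOutput());
    writer->SetFileName(path); // e.g. "step1.mhd"
    writer->Update();
}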
Good luck!
For what you describe as your first issue:
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslice to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer, and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with colors. Some of them are shown very bright, others are so dark you can barely see them.
I suggest the following: check your window/level in VTK; they probably aren't adequate for your images. If these are abdominal tomographies, window = 350 and level = 50 should be a nice color level.
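A minimal sketch of setting that window/level on the 2D pipeline from the question (the helper function and the reslice-output parameter are illustrative; 350/50 is the abdominal preset suggested above):

#include <vtkSmartPointer.h>
#include <vtkAlgorithmOutput.h>
#include <vtkImageMapper.h>
#include <vtkActor2D.h>
#include <vtkRenderer.h>

// Attach a slice from the reslice pipeline to a renderer with an
// explicit window/level instead of the defaults.
void AddSlice(vtkAlgorithmOutput* resliceOutput, vtkRenderer* renderer)
{
    auto mapper = vtkSmartPointer<vtkImageMapper>::New();
    mapper->SetInputConnection(resliceOutput);
    mapper->SetColorWindow(350); // width of the displayed intensity range
    mapper->SetColorLevel(50);   // center of the displayed intensity range

    auto actor = vtkSmartPointer<vtkActor2D>::New();
    actor->SetMapper(mapper);
    renderer->AddActor2D(actor);
}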
