LabVIEW: How to store a JPEG from an array of data

Hi, I have a 16-bit image and I know its size, image depth, mask, image type, and rectangle (left, top, right, and bottom). I don't have IMAQ, so I have to write my own subVI to save a JPEG image.
I can use "Write JPEG file.vi". Is it even possible to construct my own image cluster and write a JPEG image with "Write JPEG file.vi"?
Thanks,

Yes you can. The image cluster is just a regular cluster that you need to supply with the correct data. The only complicated thing in there is the image data variable, but that is well documented in the help for "Write JPEG file.vi". Only the 16-bit part might be troublesome, since 16-bit data is not supported.
Attached is an example of reading an image and writing it back the normal way, and also writing it after constructing your own image data cluster from the read image. Comparing the two output images shows that they are exactly the same. Since 16-bit data isn't supported, you would first have to scale your pixels down to 8 bits, as sketched below.
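The down-conversion you would wire on the block diagram amounts to the following logic, shown here as a hedged C++ sketch since LabVIEW diagrams can't be pasted as text (the window bounds lo/hi are hypothetical choices you make for your data, not part of any LabVIEW API):

#include <cstdint>
#include <vector>

// Hedged sketch: map 16-bit pixels into the 8-bit range expected by
// "Write JPEG file.vi". 'lo' and 'hi' are a display window you choose
// (hypothetical values); pixels outside the window are clamped.
std::vector<uint8_t> to8bit(const std::vector<uint16_t>& px,
                            uint16_t lo, uint16_t hi) {
    std::vector<uint8_t> out(px.size());
    const double scale = 255.0 / (hi - lo);
    for (size_t i = 0; i < px.size(); ++i) {
        uint16_t v = px[i] < lo ? lo : (px[i] > hi ? hi : px[i]);
        out[i] = static_cast<uint8_t>((v - lo) * scale);
    }
    return out;
}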

Related

How do graphic file format types work?

I'm very interested in understanding how graphic file formats (PNG, JPG, GIF) work. Are there any code examples that demonstrate how these files are made, and also how they are interpreted (e.g., viewed in a browser)?
Regardless of which graphic file format you are working with, you need to understand the basic structure that all graphic files have in common.

File Header
- File type, version (time & date stamp, if included)
- Possible data structure/chunk info
- Flags: which color type to expect, whether compression is available and which type, byte order (endianness), transparency, and various other flags

Image Data Info
- Width, normally in pixels (sometimes in pels, bits, or bytes)
- Height, normally in pixels (sometimes in pels, bits, or bytes)
- Bits per pixel (pixel depth)
- Image size in bytes: numPixelsWidth * numPixelsHeight * ((bits or bytes) per pixel)
- Color type; each pixel's color data can vary:
  - Gray scale
  - Palette
  - Color RGB
  - Color RGBA
  - Possibly others
- If compression is present, which coding and encoding is used
- The actual image data
Once you understand this basic structure, parsing image files becomes much easier once you know the specification of the file format you are working with. When you know how many bytes of headers and chunks to skip, you can advance your file pointer to the section that reads in or writes out all the pixel (color) data. In many cases the pixel data is 32 bits per pixel, with each of the RGBA channels (red, green, blue, and alpha) taking 8 bits, one byte each, the same as an unsigned char. This is represented either as a structure or as a two-dimensional array. Either way, once you know the file's structure and how to read in the actual color data, you can easily store it in a single flat array; what you do with it from there depends on your application's needs.
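As a hedged illustration of that layout in C++ (the ImageInfo struct and its fields are hypothetical stand-ins for whatever a real specification defines):

#include <cstdint>
#include <vector>

// Hypothetical header info parsed from some image format.
struct ImageInfo {
    uint32_t width;         // in pixels
    uint32_t height;        // in pixels
    uint32_t bytesPerPixel; // e.g. 4 for 8-bit RGBA
};

struct RGBA { uint8_t r, g, b, a; };

// Image size in bytes = width * height * bytes per pixel,
// exactly the formula given in the list above.
size_t imageSizeBytes(const ImageInfo& info) {
    return size_t(info.width) * info.height * info.bytesPerPixel;
}

// Pixel (x, y) in a single flat array of RGBA values.
RGBA getPixel(const std::vector<RGBA>& data, const ImageInfo& info,
              uint32_t x, uint32_t y) {
    return data[size_t(y) * info.width + x];
}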
The most detailed information can be obtained by reading the file format specification and implementing a parser in the language you know best.
A good way would be to read the format and transform it into an array of four-byte tuples (RGBA: the red, green, blue, and alpha parts of a color). This gives you an intermediate format that makes conversion between formats easy, and at the same time most APIs support displaying this raw format directly.
A good format to get started with is BMP. As old as it is, if this is your first encounter with writing a parser it is a safe and 'easy' format. A good second format is PNG: start with the uncompressed variations and add the compression later.
The next step is TGA, to learn reading chunks, or JPG, to learn more about compression. A minimal BMP reader is sketched below.
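As a hedged starting point for BMP, here is a minimal C++ sketch that reads the two fixed little-endian headers of an uncompressed BMP; 'test.bmp' is a hypothetical input, and the common 40-byte BITMAPINFOHEADER variant is assumed:

#include <cstdint>
#include <cstdio>

// Read little-endian values from a byte buffer.
static uint32_t le32(const uint8_t* p) {
    return p[0] | (p[1] << 8) | (p[2] << 16) | (uint32_t(p[3]) << 24);
}
static uint16_t le16(const uint8_t* p) { return p[0] | (p[1] << 8); }

int main() {
    FILE* f = fopen("test.bmp", "rb");  // hypothetical input file
    if (!f) return 1;

    // 14-byte file header + 40-byte BITMAPINFOHEADER (the common case).
    uint8_t h[54];
    if (fread(h, 1, 54, f) != 54 || h[0] != 'B' || h[1] != 'M') {
        fclose(f);
        return 1;  // not a BMP
    }
    uint32_t pixelOffset = le32(h + 10);          // where pixel data starts
    int32_t  width       = (int32_t)le32(h + 18);
    int32_t  height      = (int32_t)le32(h + 22); // negative = top-down rows
    uint16_t bpp         = le16(h + 28);          // bits per pixel
    uint32_t compression = le32(h + 30);          // 0 = uncompressed (BI_RGB)

    printf("%dx%d, %u bpp, compression %u, pixels at offset %u\n",
           width, height, bpp, compression, pixelOffset);
    // Rows are stored bottom-up (unless height < 0), each padded to 4 bytes.
    fclose(f);
    return 0;
}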
Extra tip: some writer implementations contain(ed) errors that produce images in violation of the format, and others added extra features that never made it into the official specs. When writing a parser this can be a real pain, so when you run into problems, always second-guess the image you are trying to read. A good binary/hex file viewer/editor is a very helpful tool here. I used AXE; if I remember correctly, it lets you overlay the hex codes with a format definition so you can quickly recognize the header and chunks.

Base64 - the difference in size

I'm using Patternify and PixieEngine when I need to make small graphic elements for my websites. It didn't bother me until now, but the pixel editor has been dead for a few days. Why these websites? Because of the compact base64 code they produce.
Example:
Patternify: I fill a 5x5 px pattern with black, and this is the base64 code I get:
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAYAAACNbyblAAAAEUlEQVQImWNgYGD4jwVTXRAA9qoY6Kb21uEAAAAASUVORK5CYII=
It's short and everything works as I expected.
Now I'll try to make a short base64 code without these sites. I made a 5x5 black square in Photoshop, the same as above, and saved it in every possible format. Next I found a few online encoders, but this is what they gave me:
iVBORw0KGgoAAAANSUhEUgAAAAUAAAAFCAMAAAC6sdbXAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyJpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMy1jMDExIDY2LjE0NTY2MSwgMjAxMi8wMi8wNi0xNDo1NjoyNyAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENTNiAoV2luZG93cykiIHhtcE1NOkluc3RhbmNlSUQ9InhtcC5paWQ6RTM1QjVGOEU0MDkxMTFFM0E5MDlGOUFDNDM5REVCMUQiIHhtcE1NOkRvY3VtZW50SUQ9InhtcC5kaWQ6RTM1QjVGOEY0MDkxMTFFM0E5MDlGOUFDNDM5REVCMUQiPiA8eG1wTU06RGVyaXZlZEZyb20gc3RSZWY6aW5zdGFuY2VJRD0ieG1wLmlpZDpFMzVCNUY4QzQwOTExMUUzQTkwOUY5QUM0MzlERUIxRCIgc3RSZWY6ZG9jdW1lbnRJRD0ieG1wLmRpZDpFMzVCNUY4RDQwOTExMUUzQTkwOUY5QUM0MzlERUIxRCIvPiA8L3JkZjpEZXNjcmlwdGlvbj4gPC9yZGY6UkRGPiA8L3g6eG1wbWV0YT4gPD94cGFja2V0IGVuZD0iciI/Pg8gB7gAAAAGUExURQAAAP///6XZn90AAAAOSURBVHjaYmDABwACDAAAHgABzCCyiwAAAABJRU5ErkJggg==
Much longer code, and the file size was similar to the PNG from Patternify, ~950 B.
Patternify is limited to 10x10 px, so for larger elements I have to use PixieEngine, which has exactly the same compression level as Patternify and no size limitation. Unfortunately it's dead, which is why I now need to understand how this really works. Is there any "offline" way to achieve the Patternify/PixieEngine compression level?
This isn't really a question about base64 encoding; it's about image compression. Base64 encoding will never make your image take up fewer bytes; in fact it makes it take up more (binary vs. a string representation of that binary). The long output you got is mostly Photoshop's embedded XMP metadata, not pixel data. Run your original PNG through a good compression tool such as pngcrush, which strips ancillary chunks and optimizes the compression, and then encode the result as base64.
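To see why base64 inflates rather than shrinks data, here is a minimal C++ sketch of the encoding itself: every 3 input bytes become 4 output characters, a fixed ~33% overhead on top of whatever the image compression achieves:

#include <cstdint>
#include <string>
#include <vector>

// Minimal base64 encoder: 3 bytes in -> 4 characters out (~33% larger).
std::string base64(const std::vector<uint8_t>& in) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    size_t i = 0;
    for (; i + 3 <= in.size(); i += 3) {
        uint32_t n = (in[i] << 16) | (in[i + 1] << 8) | in[i + 2];
        out += tbl[(n >> 18) & 63]; out += tbl[(n >> 12) & 63];
        out += tbl[(n >> 6) & 63];  out += tbl[n & 63];
    }
    if (i < in.size()) {  // 1 or 2 leftover bytes, padded with '='
        uint32_t n = in[i] << 16;
        if (i + 1 < in.size()) n |= in[i + 1] << 8;
        out += tbl[(n >> 18) & 63];
        out += tbl[(n >> 12) & 63];
        out += (i + 1 < in.size()) ? tbl[(n >> 6) & 63] : '=';
        out += '=';
    }
    return out;
}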

Library for raster cartographic transformations. 'Unprojected' to ANY

I have many raster (bitmap) images (e.g. GIF, PNG) that I'd like to transform from unprojected lat-lon to a projected rendering.
I don't understand how to use PROJ.4 to render the resulting image, and I'd like a library or program that can do this all automatically; GRASS GIS is too large. The transforms are relatively simple and apply to raster images only.
Or is there basic code or an example of how I would do this using PROJ.4 and GraphicsMagick?
It is a little confusing what you are asking for here: whether you are trying to convert a lat-long georeferenced image to another projection, or whether you just want to keep the current georeferencing of a bitmap and convert it to another format such as GIF or PNG.
If you wish to change formats: I don't believe PNG or GIF supports georeferencing in its header, so that will not be possible. If you are looking to compress the image so it doesn't take up as much space, you could look at JPEG or JPEG2000, as these support both compression and georeferencing. For a full list of image formats and whether they support georeferencing, this page is a good place to start:
Link
If you wish to change the coordinate projection from lat-long to something else (like Mercator or MGA zone XX), you can use something like GDAL to batch the process.
Download from here: http://trac.osgeo.org/gdal/wiki/DownloadingGdalBinaries
See here for a list of included utilities: Link
See here for the utility help for changing projections: Link
See here for utility that changes image formats: Link
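If you'd prefer to do this from code rather than batching the command-line tools, GDAL also exposes the utilities as library calls (gdal_utils.h, GDAL >= 2.1). A hedged C++ sketch, with hypothetical file names, reprojecting an unprojected (EPSG:4326) raster to Web Mercator:

#include <gdal.h>
#include <gdal_utils.h>

int main() {
    GDALAllRegister();
    GDALDatasetH src = GDALOpen("input_latlon.tif", GA_ReadOnly);  // hypothetical file
    if (!src) return 1;

    // Library equivalent of: gdalwarp -s_srs EPSG:4326 -t_srs EPSG:3857
    char* argv[] = {(char*)"-s_srs", (char*)"EPSG:4326",
                    (char*)"-t_srs", (char*)"EPSG:3857", nullptr};
    GDALWarpAppOptions* opts = GDALWarpAppOptionsNew(argv, nullptr);
    GDALDatasetH dst = GDALWarp("output_mercator.tif", nullptr,
                                1, &src, opts, nullptr);
    GDALWarpAppOptionsFree(opts);
    if (dst) GDALClose(dst);
    GDALClose(src);
    return 0;
}

The warped GeoTIFF can then be converted to PNG with gdal_translate; the warper can't write PNG directly, since that driver doesn't support creation.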
Hopefully that will be of help to you.

DICOM Image is too dark with ITK

I am trying to read an image with ITK and display it with VTK, but there is a problem that has been haunting me for quite some time.
I read the images using the classes itkGDCMImageIO and itkImageSeriesReader.
After reading, I can do two different things:
1.
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslicer to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer, and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with colors. Some of them are shown very bright, others are so dark you can barely see them.
2.
The second scenario is the registration pipeline. Here, I read the image as before, then use the classes shown in the ITK Software Guide chapter about registration. Then I resample the image and use itkImageSeriesWriter.
And that's when the problem appears. After writing the image to a file, I compare this new image with the original input in the XMedcon software. If the image I wrote was shown too bright in my software, there are no changes when I compare the two in XMedcon. Otherwise, if the image was too dark in my software, it appears all messed up in XMedcon.
I noticed, when comparing both images (the original and the new one), that in both cases there are changes in modality, pixel dimensions, and glmax.
I suppose the problem is with the glmax, as the major changes occur with the darker images.
I really don't know what to do. Does this have something to do with color level/window? The strangest thing is that all the images are very similar, with identical tags, and only some of them display errors when shown/written.
I'm not familiar with the particulars of VTK/ITK specifically, but it sounds to me like the problem is more general than that. Medical images have a high dynamic range and often the images will appear very dark or very bright if the window isn't set to some appropriate range. The DICOM tags Window Center (0028, 1050) and Window Width (0028, 1051) will include some default window settings that were selected by the modality. Usually these values are reasonable, but not always. See part 3 of the DICOM standard (11_03pu.pdf is the filename) section C.11.2.1.2 for details on how raw image pixels are scaled for display. The general idea is that you'll need to apply a linear scaling to the images to get appropriate pixel values for display.
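As a hedged sketch of that linear scaling (the function name is mine; x is a stored pixel value with Rescale Slope/Intercept (0028,1053 / 0028,1052) already applied):

#include <cstdint>

// DICOM linear VOI LUT (PS3.3 C.11.2.1.2): map a stored value x into
// the display range [0, 255], given Window Center (0028,1050) = c and
// Window Width (0028,1051) = w.
uint8_t applyWindow(double x, double c, double w) {
    if (x <= c - 0.5 - (w - 1.0) / 2.0) return 0;    // below the window
    if (x >  c - 0.5 + (w - 1.0) / 2.0) return 255;  // above the window
    return static_cast<uint8_t>(
        ((x - (c - 0.5)) / (w - 1.0) + 0.5) * 255.0);
}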
What pixel types do you use? In most cases it's simpler to use a floating-point type while working in ITK, but raw medical images are often stored as short integers, so that could be your problem.
You should also write the image to disk after each step (in MHD format, for example) and inspect it with a viewer that's known to work properly, such as vv (http://www.creatis.insa-lyon.fr/rio/vv). You could also post the images here, along with your code, for further review.
Good luck!
For what you describe as your first issue:
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslicer to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer, and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with colors. Some of them are shown very bright, others are so dark you can barely see them.
I suggest the following: check your window/level settings in VTK; they probably aren't adequate for your images. If they are abdominal tomographies, window = 350 and level = 50 should give a reasonable display, for example:
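A hedged sketch against the mapper from the pipeline described in the question (the function name is mine):

#include <vtkImageMapper.h>

// Hedged sketch: apply an explicit window/level to the 2D image mapper
// used above. 350/50 is the suggested setting for abdominal studies;
// adjust per modality.
void configureMapper(vtkImageMapper* mapper) {
    mapper->SetColorWindow(350);  // width of the displayed intensity range
    mapper->SetColorLevel(50);    // center of the displayed intensity range
}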

detect color space with OpenCV

How can I see the color space of my image with OpenCV?
I would like to be sure it is RGB before converting it to another color space using the cvCvtColor() function.
Thanks
Unfortunately, OpenCV doesn't provide any indication of the color space in the IplImage structure, so if you blindly pick up an IplImage from somewhere there is just no way to know how it was encoded. Furthermore, no algorithm can definitively tell you whether an image should be interpreted as HSV or RGB; to the machine it's all just a bunch of bytes (should this be HSV or RGB?). I recommend you wrap your IplImages in another struct (or even a C++ class with templates!) to help you keep track of this information. If you're really desperate, and you're dealing only with a certain type of images (outdoor scenes, offices, faces, etc.), you could try computing statistics on your images (e.g., histogram statistics for natural RGB images and for natural HSV images) and then classify your unknown image by which color space's statistics it is closer to.
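A hedged sketch of that wrapper idea (the enum and struct are hypothetical, not OpenCV types):

#include <opencv/cv.h>  // legacy C API: IplImage

// Hypothetical tag carried alongside the image, since IplImage itself
// won't tell you how its bytes should be interpreted.
enum ColorSpace { CS_BGR, CS_RGB, CS_HSV, CS_GRAY };

struct TaggedImage {
    IplImage*  img;
    ColorSpace space;  // set this at every load/convert site
};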
txandi makes an interesting point. OpenCV has a BGR colorspace which is used by default. This is similar to the RGB colorspace except that the B and R channels are physically switched in the image. If the physical channel ordering is important to you, you will need to convert your image with this function: cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB).
As rcv said, there is no method to programmatically detect the color space by inspecting the three color channels, unless you have a priori knowledge of the image content (e.g., there is a marker in the image whose color is known). If you will be accepting images from unknown sources, you must allow the user to specify the color space of their image. A good default would be to assume RGB.
If you modify any of the pixel colors before display, and you are using a non-OpenCV viewer, you should probably use cvCvtColor(src,dst,CV_BGR2RGB) after you have finished running all of your color filters. If you are using OpenCV for the viewer or will be saving the images out to file, you should make sure they are in BGR color space.
The IplImage struct has a field named colorModel consisting of 4 chars. Unfortunately, OpenCV ignores this field. But you can use this field to keep track of different color models.
I basically split the channels and display each one to figure out the color space of the image I'm using, as sketched below. It may not be the best way, but it works for me.
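A hedged sketch of that trick with the legacy C API (the function name is mine):

#include <opencv/cv.h>
#include <opencv/highgui.h>

// Split a 3-channel, 8-bit image and display each plane so you can
// eyeball which color space the data looks like.
void showChannels(IplImage* img) {
    IplImage* c1 = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    IplImage* c2 = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    IplImage* c3 = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
    cvSplit(img, c1, c2, c3, NULL);
    cvShowImage("channel 1", c1);  // B (or H, if the image is HSV)
    cvShowImage("channel 2", c2);  // G (or S)
    cvShowImage("channel 3", c3);  // R (or V)
    cvWaitKey(0);
    cvReleaseImage(&c1); cvReleaseImage(&c2); cvReleaseImage(&c3);
}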
For a detailed explanation, you can refer to the link below.
https://dryrungarage.wordpress.com/2018/03/11/image-processing-basics/
