Gamma-correct zoomable image (dzi, etc) generator - browser

I'm using MS Deep Zoom Composer to generate tiled image sets for megapixel-sized images.
Right now I'm preparing a densely detailed black-and-white line drawing.
The lack of gamma correction during resizing is very apparent:
while zooming, the tiles appear brighter at higher zoom levels.
This makes the boundaries between tiles quite visible during the loading stage.
While it doesn't hurt usability in any way, it is a bit unsightly.
I am wondering if there are any alternatives to Deep Zoom Composer that do gamma-correct resizing?

The vips deepzoom creator can do this.
You make a deepzoom pyramid like this:
vips dzsave somefile.tif pyr_name
and it'll read somefile.tif and write pyr_name.dzi and pyr_name_files, a folder containing the tiles. You can use a .zip extension to the pyramid name and it'll directly write an uncompressed zip file containing the whole pyramid --- this is a lot faster on Windows. There's a blog post with some more examples and explanation.
To make the shrink gamma-correct, you need to move your image to a linear colourspace before saving. The simplest is probably scRGB, that is, sRGB with linear light. You can do this with:
vips colourspace somefile.tif x.tif scrgb
and it'll write x.tif, an scRGB float tiff.
You can run the two operations in a single command by using .dz as the output file suffix. This sends the output of the colourspace transform to the deepzoom writer for saving. The deepzoom writer uses .jpg to save each tile; the jpeg writer knows that JPEG files can only be RGB, so it'll automatically turn the scRGB tiles back into plain sRGB for saving.
Put that all together and you need:
vips colourspace somefile.tif mypyr.dz scrgb
And that should build a pyramid with a linear-light shrink.
You can pass options to the deepzoom saver in square brackets after the filename, for example:
vips colourspace somefile.tif mypyr.dz[container=zip] scrgb
The blog post has the details.
update: the Windows binary is here, to save you hunting. Unzip somewhere, and vips.exe is in the /bin folder.
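The linear-light shrink that vips performs can be sketched in a few lines of Python. This is an illustrative sketch of the idea, not what vips does internally; it assumes a single-channel float image with values in [0, 1] and uses a simple 2x box filter, and the function names are made up:

```python
import numpy as np

def srgb_to_linear(c):
    # Inverse sRGB transfer function, c in [0, 1]
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    # Forward sRGB transfer function, c in [0, 1]
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

def shrink2x_linear(img):
    """2x box shrink performed in linear light (img: float array in [0, 1])."""
    lin = srgb_to_linear(img)
    h, w = lin.shape[:2]
    lin = lin[:h - h % 2, :w - w % 2]          # drop any odd edge row/column
    small = (lin[0::2, 0::2] + lin[1::2, 0::2] +
             lin[0::2, 1::2] + lin[1::2, 1::2]) / 4.0
    return linear_to_srgb(small)
```

A black-and-white checkerboard averaged this way comes out around sRGB 0.73, not 0.5; a non-linear shrink produces the too-dark 0.5, which is exactly the brightness jump between pyramid levels described in the question.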

pamscale1 from the netpbm suite is well known for not screwing up scaled images the way you describe. It uses gamma correction instead of ill-conceived "high-quality filters" and other magic used to paper over incorrect scaling algorithms.
Of course you will need some scripting - it's not a drop-in replacement.

We maintain a list of DZI creation tools here:
http://openseadragon.github.io/examples/creating-zooming-images/
I don't know if any of them do gamma correction, but some of them might not have that issue to begin with. Also, many of them come with source, so you can add the gamma correction in yourself if need be.

Related

Color management - what exactly does the monitor ICC profile do, and where does it sit in the color conversion chain?

I'm reading/watching anything I can about color management/color science and something that's not making sense to me is the scene-referred and display-referred workflows. Isn't everything display-referred, because your monitor is converting everything you see into something it can display?
While reading this article, I came across this image:
So, if I understand this right, to follow a linear workflow I should apply an inverse power function to any imported JPG/PNG/etc. files that contain color data, to make their gamma linear. I then work on the image, and when I'm ready to export, say to sRGB saved as a PNG, it'll bake the original transfer function back in.
But even while it's linear and I'm working on it, isn't my monitor converting everything I see into what it can display? Isn't it basically applying its own LUT? Isn't there already a gamma curve that the monitor itself is applying?
Also, from input to output, how many color space conversions take place, say if I'm working in the ACEScg color space? If I import a JPG texture, I linearize it and bring it into the ACEScg color space. I work on it, and when I render it out, the renderer applies a view transform to convert it from ACEScg to sRGB; then what I'm seeing is my monitor converting from sRGB via my monitor's own ICC profile, right (which is always happening, since everything I'm seeing passes through my monitor's ICC profile)?
Finally, if I add a tone-mapping S-curve, where does that conversion sit in the chain?
I'm not sure your question is about programming, and it doesn't have much relevance to the title.
In any case:
Light (photons) behaves linearly. The intensity of two lights together is the sum of the intensity of each light. For this reason a lot of image manipulation is done in linear space. Note: camera sensors often have a near-linear response.
Eyes see roughly as with a gamma exponent of 2, so for compression (less visible noise with fewer bits of information) gamma encoding is useful. By coincidence, CRT phosphors had a similar response (otherwise the engineers would have found other methods; in the past such things were settled by a lot of experiments: feedback from users, across many settings).
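The compression benefit can be shown numerically: quantizing to 8 bits through a gamma encode leaves much finer steps in the dark tones than quantizing linear values directly. A small sketch, using a pure gamma of 2.2 as a stand-in for the real sRGB curve:

```python
import numpy as np

lin = np.linspace(0.0, 1.0, 1001)   # "true" linear-light intensities

# 8-bit quantization directly in linear light...
q_linear = np.round(lin * 255) / 255
# ...versus through a gamma-2.2 encode/decode round trip.
q_gamma = (np.round(lin ** (1 / 2.2) * 255) / 255) ** 2.2

# Compare worst-case quantization error in the darkest tenth of the range,
# where the eye is most sensitive to banding.
dark = lin < 0.1
err_linear = float(np.abs(q_linear - lin)[dark].max())
err_gamma = float(np.abs(q_gamma - lin)[dark].max())
```

`err_gamma` comes out smaller than `err_linear`: the gamma encode spends more of the 256 codes on the shadows, which is exactly why 8-bit formats store gamma-encoded values.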
Screens expect images with a standardized gamma correction (nowadays it depends on the port, the settings, and the image format). Some can cope with many different colour spaces. Note: we no longer have CRTs, so the screen converts data from the expected gamma to the monitor's gamma (possibly with a different value for each channel): a sort of LUT (it may be done purely electronically, so without the T for table). Screens are set up so that a standard signal gives the expected light. There are standards (test images and methods) to measure the expected behaviour, so there is some implicit gamma correction of the gamma-corrected values. It was always so: on old electronic monitors/TVs, technicians had internal knobs to adjust individual colours, general settings, etc.
Note: professionals outside computer graphics often use the terms opto-electronic transfer function (OETF) for the camera side (light to signal) and the inverse, electro-optical transfer function (EOTF), for converting an (electrical) signal to light, e.g. in the screen. I find these names show quickly what "gamma" really is: just a conversion between an analogue electrical signal and light intensity.
The input image has its own colour space. You assume a JPEG here, but often you have much more information (RAW or log, S-Log, ...). So you convert to your working colour space (which may be linear, as in our example). If you display the working image directly, you will see distorted colours. But you may not be able to display it at all, because you will probably use more than 8 bits per channel: 16 or 32 bits are common, often as half or single floats.
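The point about bit depth can be made concrete: if you stored linear-light values back at 8 bits, the darker half of the sRGB code values would collapse onto a small fraction of the available levels, which is why linear working spaces use 16- or 32-bit (often float) channels. A sketch, assuming the standard sRGB transfer function:

```python
import numpy as np

def srgb_to_linear(c):
    # Inverse sRGB transfer function, c in [0, 1]
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

codes = np.arange(256) / 255.0          # every 8-bit sRGB code value
linear = srgb_to_linear(codes)

# Re-quantize the linear values of the darker half of the sRGB codes
# to 8 bits, and count how many distinct levels survive.
dark_codes = np.round(linear[:128] * 255)
distinct = len(np.unique(dark_codes))
```

128 distinct sRGB shadow codes squeeze into only a few dozen 8-bit linear levels, so editing in 8-bit linear posterizes the shadows; float channels avoid this.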
In short: you can calibrate the monitor in two ways. The best way (if you have a monitor that can be "hardware calibrated") is to modify the tables inside the monitor, so it is nearly all transparent (the internal gamma function is simply adapted to give better colours); you still get an ICC profile, but for other reasons. Or you use the easy calibration, where the bytes of an image are transformed on your computer to get better colours (in a program, or nowadays often by the operating system, either directly or by telling the video card to do it). You should carefully check that only one component does the colour correction.
Note: in your program you normally save images as sRGB (or AdobeRGB), i.e. with standard ICC profiles, and practically never with your screen's ICC profile, for consistency with other images. It is then the OS, or soft-proofing, etc. that converts for your screen; if an image did have your screen's ICC profile, the OS colour management would just see that the ICC-image to ICC-output conversion is trivial (copying the values).
So take into account that at every step there is an expected colour space and gamma. All programs expect it, and it may be changed later. There may be unnecessary computation, but it makes things simpler: you do not have to track expectations.
And there are many more details. ICC profiles are also used to characterize your monitor (its achievable gamut), which some colour management tasks can use. Rendering intents are just the methods used for colour correction when an image has out-of-gamut colours: either keep the nearest colour (you lose shading but gain accuracy), or scale all colours (and expect your eyes to adapt; they do if you view one image at a time). The devil is in such details.

Is it possible to cut parts out of a picture and analyze them separately with python?

I am doing some studies on eye vascularization: my project involves a machine which can detect the different blood vessels in the retinal membrane at the back of the eye. What I am looking for is a way to segment the picture and analyze each segment on its own. The segmentation consists of six squares which I want to analyze separately for the density of white pixels.
I would be very thankful for every kind of input; I am pretty new to the programming world and I actually just have a bare concept of how it should work.
Thanks and Cheerio
Sam
[Images: concept drawing, OCTA picture]
You could probably accomplish this by using numpy to load the image and split it into sections. You could then analyze the sections using scikit-image or OpenCV (though this could be difficult to get working). To view the image, you can either save it to a file, or use matplotlib to open it in a new window.
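As a sketch of the numpy approach: assuming the image is already loaded as a 2-D grayscale array (e.g. via Pillow or scikit-image), the six squares form a 2x3 grid, and "white" means above some brightness threshold (the function name and the threshold of 200 are illustrative choices):

```python
import numpy as np

def white_density_per_tile(img, rows=2, cols=3, thresh=200):
    """Split a 2-D grayscale image into rows*cols tiles and return
    the fraction of white pixels (value > thresh) in each tile,
    row by row."""
    h, w = img.shape
    th, tw = h // rows, w // cols          # tile height and width
    densities = []
    for r in range(rows):
        for c in range(cols):
            tile = img[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            densities.append(float((tile > thresh).mean()))
    return densities
```

Slicing like this copies no pixel data (numpy returns views), so even large retinal scans can be tiled cheaply; any leftover pixels on the right/bottom edge when the size doesn't divide evenly are simply ignored here.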
First of all, please note that in image processing "segmentation" describes the process of grouping neighbouring pixels by context.
https://en.wikipedia.org/wiki/Image_segmentation
What you want to do can be done in various ways.
The most common way is by using ROIs or AOIs (region/area of interest). That's basically some geometric shape like a rectangle, circle, polygon or similar defined in image coordinates.
The image processing is then restricted to only process pixels within that region. So you don't slice your image into pieces but you restrict your evaluation to specific areas.
Another way, like you suggested is to cut the image into pieces and process them one by one. Those sub-images are usually created using ROIs.
A third option which is rather limited but sufficient for simple tasks like yours is accessing pixels directly using coordinate offsets and several nested loops.
Just google "python image processing" in combination with "library" "roi" "cropping" "sliding window" "subimage" "tiles" "slicing" and you'll get tons of information...

Library for raster cartographic transformations. 'Unprojected' to ANY

I have many raster (bitmap) images that I'd like to transform from unprojected lat-lon to a projected rendering. (e.g. GIF, PNG).
I don't understand how to use PROJ.4 to render the resulting image. I'd like a library or software that can do this all automatically. GRASS GIS is large, and the transforms are relatively simple and of raster images only.
Or is there basic code or an example of how I would do this using PROJ.4 and GraphicsMagick?
It is a little unclear what you are asking for here: whether you are trying to convert a lat/long georeferenced image to another projection, or whether you just want to keep the current georeferencing of a bitmap and convert it to another format such as GIF or PNG.
If you wish to change formats, I don't believe PNG or GIF supports georeferencing in its header, so this will not be possible. If you are looking to compress the image so it doesn't take up as much space, you could look at JPEG or JPEG2000, as these support both. For a full list of image formats and which do and do not support georeferencing, this page is a good place to start:
Link
If you wish to change the coordinate projection from lat/long to something else (like Mercator MGA zone XX), you can use something like GDAL to batch the process.
Download from here: http://trac.osgeo.org/gdal/wiki/DownloadingGdalBinaries
See here for a list of included utilities: Link
See here for the utility help for changing projections: Link
See here for utility that changes image formats: Link
Hopefully that will be of help to you.

DICOM Image is too dark with ITK

I am trying to read an image with ITK and display it with VTK.
But there is a problem that has been haunting me for quite some time.
I read the images using the classes itkGDCMImageIO and itkImageSeriesReader.
After reading, I can do two different things:
1.
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslice to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with colors. Some of them are shown very bright, others are so dark you can barely see them.
2.
The second scenario is the registration pipeline. Here I read the image as before, then use the classes shown in the ITK Software Guide chapter on registration. Then I resample the image and use itkImageSeriesWriter.
And that's when the problem appears. After writing the image to a file, I compare this new image with the image I used as input in the XMedCon software. If the image I wrote was shown too bright in my software, there are no changes when I compare the two in XMedCon. Otherwise, if the image was too dark in my software, it appears all messed up in XMedCon.
I noticed, when comparing both images (the original and the new one), that in both cases there are changes in modality, pixel dimensions and glmax.
I suppose the problem is with the glmax, as the major changes occur with the darker images.
I really don't know what to do. Does this have something to do with color level/window? The strangest thing is that all the images are very similar, with identical tags, and only some of them display errors when shown/written.
I'm not familiar with the particulars of VTK/ITK specifically, but it sounds to me like the problem is more general than that. Medical images have a high dynamic range and often the images will appear very dark or very bright if the window isn't set to some appropriate range. The DICOM tags Window Center (0028, 1050) and Window Width (0028, 1051) will include some default window settings that were selected by the modality. Usually these values are reasonable, but not always. See part 3 of the DICOM standard (11_03pu.pdf is the filename) section C.11.2.1.2 for details on how raw image pixels are scaled for display. The general idea is that you'll need to apply a linear scaling to the images to get appropriate pixel values for display.
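The linear scaling described above can be sketched like this. This is a simplified version of the VOI LUT formula in DICOM PS3.3 C.11.2.1.2 (it ignores the standard's off-by-half details), and the function name is illustrative:

```python
import numpy as np

def apply_window(pixels, center, width):
    """Map raw pixel values into the 8-bit display range with a linear
    DICOM-style window: values below center - width/2 clip to black,
    values above center + width/2 clip to white."""
    lo = center - width / 2.0
    scaled = (pixels.astype(np.float64) - lo) / float(width)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)
```

With a CT image, for example, `apply_window(hu, 50, 350)` would be a typical soft-tissue window; applying no window at all is what makes a 12- or 16-bit image look uniformly dark on an 8-bit display.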
What pixel types do you use? In most cases it's simpler to use a floating point type while working with ITK, but raw medical images are often stored as shorts, so that could be your problem.
You should also write the image to the disk after each step (in MHD format, for example), and inspect it with a viewer that's known to work properly, such as vv (http://www.creatis.insa-lyon.fr/rio/vv). You could also post them here as well as your code for further review.
Good luck!
For what you describe as your first issue:
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslice to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with colors. Some of them are shown very bright, others are so dark you can barely see them.
I suggest the following: check your window/level settings in VTK; they probably aren't adequate for your images. If they are abdominal tomographies, window = 350 and level = 50 should be a good choice.

detect color space with openCV

How can I see the color space of my image with OpenCV?
I would like to be sure it is RGB before converting it to another one using the cvCvtColor() function.
thanks
Unfortunately, OpenCV doesn't provide any sort of indication as to the color space in the IplImage structure, so if you blindly pick up an IplImage from somewhere there is just no way to know how it was encoded. Furthermore, no algorithm can definitively tell you if an image should be interpreted as HSV vs. RGB - it's all just a bunch of bytes to the machine (should this be HSV or RGB?). I recommend you wrap your IplImages in another struct (or even a C++ class with templates!) to help you keep track of this information. If you're really desperate and you're dealing only with a certain type of images (outdoor scenes, offices, faces, etc.) you could try computing some statistics on your images (e.g. build histogram statistics for natural RGB images and some for natural HSV images), and then try to classify your totally unknown image by comparing which color space your image is closer to.
txandi makes an interesting point. OpenCV uses a BGR colorspace by default. This is similar to the RGB colorspace except that the B and R channels are physically swapped in the image. If the physical channel ordering is important to you, you will need to convert your image with this function: cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB).
As rcv said, there is no method to programmatically detect the color space by inspecting the three color channels, unless you have a priori knowledge of the image content (e.g., there is a marker in the image whose color is known). If you will be accepting images from unknown sources, you must allow the user to specify the color space of their image. A good default would be to assume RGB.
If you modify any of the pixel colors before display, and you are using a non-OpenCV viewer, you should probably use cvCvtColor(src,dst,CV_BGR2RGB) after you have finished running all of your color filters. If you are using OpenCV for the viewer or will be saving the images out to file, you should make sure they are in BGR color space.
The IplImage struct has a field named colorModel consisting of 4 chars. Unfortunately, OpenCV ignores this field. But you can use this field to keep track of different color models.
I basically split the channels and display each one to figure out the color space of the image I'm using. It may not be the best way, but it works for me.
For a detailed explanation, you can refer to the link below.
https://dryrungarage.wordpress.com/2018/03/11/image-processing-basics/
