I have many raster (bitmap) images (e.g. GIF, PNG) that I'd like to transform from unprojected lat-lon to a projected rendering.
I don't understand how to use PROJ.4 to render the resulting image. I'd like a library or piece of software that can do this all automatically; GRASS GIS is larger than I need. The transforms are relatively simple and apply to raster images only.
Or is there basic code or an example of how I would do this using PROJ.4 and GraphicsMagick?
It is a little unclear what you are asking for here.
Are you trying to convert a lat/long geo-referenced image to another projection, or do you just want to keep the current geo-referencing of a bitmap and convert it to another format such as GIF or PNG?
If you wish to change formats, I don't believe PNG or GIF supports geo-referencing in its header, so that will not be possible. If you are trying to compress the image so it takes up less space, you could look at JPEG or JPEG 2000, as both support geo-referencing. For a full list of image formats and which of them support geo-referencing, this page is a good place to start:
Link
If you wish to change the coordinate projection from lat/long to something else (like Mercator or MGA zone XX), you can use something like GDAL to batch the process (a rough example using the GDAL C++ API is sketched below, after the links).
Download from here: http://trac.osgeo.org/gdal/wiki/DownloadingGdalBinaries
See here for a list of included utilities: Link
See here for the utility help for changing projections: Link
See here for utility that changes image formats: Link
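To make the GDAL route more concrete, here is a rough sketch using the GDAL C++/C API rather than the command-line tools. Treat it as a hedged example only: the file names and the EPSG:4326 (lat/long) to EPSG:3857 (Web Mercator) choice are assumptions for illustration, not anything taken from your data.

// Hedged sketch: reproject a geo-referenced raster from lat/long to Web
// Mercator with the GDAL warp API. File names and EPSG codes are placeholders.
#include "gdal_priv.h"
#include "gdalwarper.h"
#include "ogr_spatialref.h"

int main()
{
    GDALAllRegister();

    GDALDataset *poSrc = static_cast<GDALDataset *>(
        GDALOpen("input_latlong.tif", GA_ReadOnly));
    if (poSrc == nullptr)
        return 1;

    // Build the target spatial reference as WKT.
    OGRSpatialReference oDstSRS;
    oDstSRS.importFromEPSG(3857);
    char *pszDstWKT = nullptr;
    oDstSRS.exportToWkt(&pszDstWKT);

    // Wrap the source in a warped virtual dataset, then copy it to disk.
    GDALDatasetH hWarped = GDALAutoCreateWarpedVRT(
        poSrc, nullptr /* take the source SRS from the file */, pszDstWKT,
        GRA_Bilinear, 0.125 /* max error in pixels */, nullptr);

    GDALDriver *poDriver = GetGDALDriverManager()->GetDriverByName("GTiff");
    GDALDataset *poDst = poDriver->CreateCopy(
        "output_mercator.tif", static_cast<GDALDataset *>(hWarped),
        FALSE, nullptr, nullptr, nullptr);

    GDALClose(poDst);
    GDALClose(hWarped);
    GDALClose(poSrc);
    CPLFree(pszDstWKT);
    return 0;
}

The same reprojection can also be done with the projection-changing utility linked above; the API route is mainly useful if you want to fold the step into your own batch program.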
Hopefully that will be of help to you.
How can I convert an image into an object file such as .obj or .ply? I need some code written with the Visualization Toolkit (VTK) and C++.
Thanks
Image data is pixel data, while .obj/.ply (or, for that matter, .stl) is 3D geometry data with Point and Cell information (for .obj, a Cell is a triangle).
Your question is not clear, but here are some steps:
First, you need to decide how to convert the pixels into points. vtkImageDataGeometryFilter might be of help here, although it might not be sufficient on its own, as you will also need triangle data.
Once you get vtkPolyData from the image data, you can write it to STL, OBJ or PLY format using the following VTK classes:
vtkSTLWriter, vtkOBJWriter and vtkPLYWriter.
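To illustrate those steps, here is a minimal, hedged sketch in C++/VTK. The PNG reader and the file names are assumptions; a vtkTriangleFilter is added because the STL/PLY writers expect triangles rather than the quads the geometry filter produces.

// Hedged sketch: image pixels -> vtkPolyData -> PLY file.
#include <vtkSmartPointer.h>
#include <vtkPNGReader.h>
#include <vtkImageDataGeometryFilter.h>
#include <vtkTriangleFilter.h>
#include <vtkPLYWriter.h>

int main()
{
    // Read the source image (PNG assumed here).
    auto reader = vtkSmartPointer<vtkPNGReader>::New();
    reader->SetFileName("input.png");

    // One point per pixel, organised as polydata.
    auto geometry = vtkSmartPointer<vtkImageDataGeometryFilter>::New();
    geometry->SetInputConnection(reader->GetOutputPort());

    // Split the quads produced by the geometry filter into triangles.
    auto triangles = vtkSmartPointer<vtkTriangleFilter>::New();
    triangles->SetInputConnection(geometry->GetOutputPort());

    // Write the triangulated surface; vtkSTLWriter and vtkOBJWriter are used the same way.
    auto writer = vtkSmartPointer<vtkPLYWriter>::New();
    writer->SetFileName("output.ply");
    writer->SetInputConnection(triangles->GetOutputPort());
    writer->Write();

    return 0;
}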
I'm trying to build a live GIF, just for kicks, and I want to turn a 2D array of pixel data into a GIF (or, more specifically, one frame of an animated GIF). I found gifencoder and it works, but it's slow as molasses (~800 ms to encode a 500x500 px GIF). Every other solution I can find (e.g. things built on graphicsmagick or imagemagick) doesn't seem to accept input streams, only already encoded images. I suppose I could just dump the data to a .bmp, but that's a very roundabout way to accomplish this. The other thing I'm considering is LZW-encoding the data myself, but before I go digging into the technical aspects of that, I'm fishing here for other ideas.
I'm using MS Deep Zoom Composer to generate tiled image sets for megapixel sized images.
Right now I'm preparing a densely detailed black and white linedrawing.
The lack of gamma correction during resizing is very apparent:
while zooming, the tiles appear to become brighter at higher zoom levels.
This makes the boundaries between tiles quite noticeable during the loading stage.
While it does not in any way hurt usability, it is a bit unsightly.
I am wondering if there are any alternatives to Deep Zoom Composer that do gamma correct resizing?
The vips deepzoom creator can do this.
You make a deepzoom pyramid like this:
vips dzsave somefile.tif pyr_name
and it'll read somefile.tif and write pyr_name.dzi and pyr_name_files, a folder containing the tiles. You can append a .zip extension to the pyramid name and it'll directly write an uncompressed zip file containing the whole pyramid; this is a lot faster on Windows. There's a blog post with some more examples and explanation.
To make it shrink gamma corrected, you need to move your image to a linear colourspace for saving. The simplest is probably scRGB, that is, sRGB with linear light. You can do this with:
vips colourspace somefile.tif x.tif scrgb
and it'll write x.tif, an scRGB float tiff.
You can run the two operations in a single command by using .dz as the output file suffix. This will send the output of the colourspace transform to the deepzoom writer for saving. The deepzoom writer will use .jpg to save each tile; the jpeg writer knows that jpeg files can only be RGB, so it'll automatically turn the scRGB tiles back into plain sRGB for saving.
Put that all together and you need:
vips colourspace somefile.tif mypyr.dz scrgb
And that should build a pyramid with a linear-light shrink.
You can pass options to the deepzoom saver in square brackets after the filename, for example:
vips colourspace somefile.tif mypyr.dz[container=zip] scrgb
The blog post has the details.
update: the Windows binary is here, to save you hunting. Unzip somewhere, and vips.exe is in the /bin folder.
pamscale1 from the netpbm suite is quite well known for not screwing up scaled images in the way you describe. It uses gamma correction instead of ill-conceived "high-quality filters" and other magic used to paper over incorrect scaling algorithms.
Of course you will need some scripting - it's not a direct replacement.
We maintain a list of DZI creation tools here:
http://openseadragon.github.io/examples/creating-zooming-images/
I don't know if any of them do gamma correction, but some of them might not have that issue to begin with. Also, many of them come with source, so you can add the gamma correction in yourself if need be.
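If you do end up patching one of those tools, the core of a gamma-correct shrink is small. The sketch below (plain C++, my own illustration rather than code from any of the listed tools) shows the idea: decode sRGB to linear light, average, then re-encode; averaging the encoded values directly is what causes the brightness shift between zoom levels.

// Hedged sketch: gamma-correct 2x2 box shrink of 8-bit sRGB values.
#include <algorithm>
#include <cmath>

// Decode an 8-bit sRGB value to linear light in 0..1.
double SrgbToLinear(double s)
{
    s /= 255.0;
    return (s <= 0.04045) ? s / 12.92 : std::pow((s + 0.055) / 1.055, 2.4);
}

// Encode linear light in 0..1 back to an 8-bit sRGB value.
double LinearToSrgb(double l)
{
    double s = (l <= 0.0031308) ? l * 12.92
                                : 1.055 * std::pow(l, 1.0 / 2.4) - 0.055;
    return std::min(std::max(s * 255.0, 0.0), 255.0);
}

// Average four neighbouring pixels in linear light, not in sRGB.
double ShrinkFour(double a, double b, double c, double d)
{
    double mean = (SrgbToLinear(a) + SrgbToLinear(b) +
                   SrgbToLinear(c) + SrgbToLinear(d)) / 4.0;
    return LinearToSrgb(mean);
}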
I have a 3D model as a mesh structure (or in .stl/.obj format) which I converted to voxels using the binvox voxelization tool. Using a Java program, I have done some processing on the voxel grid thus obtained. Now I wish to convert this voxelized model back into a "smooth" mesh structure (or any other format), which can later be exported to .stl or .obj.
Can someone suggest how I can achieve the last part, i.e. converting the voxel grid into some format from which the "smooth" surfaces can be recovered? Any help, including pointers to existing tools or relevant theory in this direction, will be appreciated.
Give the Marching Cubes algorithm a try. See http://paulbourke.net/geometry/polygonise/ for more details.
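If your voxel grid can be loaded (or copied) into a vtkImageData, VTK already ships a marching-cubes implementation, so you don't have to code the algorithm from the Bourke page yourself. Here is a hedged C++ sketch; the reader, file names and smoothing settings are purely assumptions.

// Hedged sketch: voxel volume -> marching cubes -> smoothed STL mesh.
#include <vtkSmartPointer.h>
#include <vtkMetaImageReader.h>
#include <vtkMarchingCubes.h>
#include <vtkWindowedSincPolyDataFilter.h>
#include <vtkSTLWriter.h>

int main()
{
    // Load the voxel grid (stored here as an MHD/RAW volume for illustration;
    // a grid built in memory from the binvox data works just as well).
    auto reader = vtkSmartPointer<vtkMetaImageReader>::New();
    reader->SetFileName("voxels.mhd");

    // Extract the iso-surface; 0.5 separates empty (0) from filled (1) voxels.
    auto mc = vtkSmartPointer<vtkMarchingCubes>::New();
    mc->SetInputConnection(reader->GetOutputPort());
    mc->SetValue(0, 0.5);

    // Optional smoothing pass to soften the blocky voxel look.
    auto smooth = vtkSmartPointer<vtkWindowedSincPolyDataFilter>::New();
    smooth->SetInputConnection(mc->GetOutputPort());
    smooth->SetNumberOfIterations(20);

    // Write the result; vtkOBJWriter or vtkPLYWriter could be used instead.
    auto writer = vtkSmartPointer<vtkSTLWriter>::New();
    writer->SetFileName("mesh.stl");
    writer->SetInputConnection(smooth->GetOutputPort());
    writer->Write();

    return 0;
}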
I am trying to read an image with ITK and display it with VTK.
But there is a problem that has been haunting me for quite some time.
I read the images using the classes itkGDCMImageIO and itkImageSeriesReader.
After reading, I can do two different things:
1.
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslicer to get all three axes. Then, I use the classes vtkImageMapper, vtkActor2D, vtkRenderer and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with the colors. Some images are shown very bright, others are so dark you can barely see them.
2.
The second scenario is the registration pipeline. Here, I read the image as before, then use the classes shown in the ITK Software Guide chapter about registration. Then I resample the image and use itkImageSeriesWriter.
And that's when the problem appears. After writing the image to a file, I compare this new image with the image I used as input, using the XMedcon software. If the image I wrote was shown too bright in my software, there are no changes when I compare the two in XMedcon. Otherwise, if the image was too dark in my software, it appears all messed up in XMedcon.
I noticed, when comparing both images (the original and the new one), that in both cases there are changes in modality, pixel dimensions and glmax.
I suppose the problem is with the glmax, as the major changes occur with the darker images.
I really don't know what to do. Does this have something to do with color level/window? The strangest thing is that all the images are very similar, with identical tags, and only some of them display errors when shown/written.
I'm not familiar with the particulars of VTK/ITK specifically, but it sounds to me like the problem is more general than that. Medical images have a high dynamic range and often the images will appear very dark or very bright if the window isn't set to some appropriate range. The DICOM tags Window Center (0028, 1050) and Window Width (0028, 1051) will include some default window settings that were selected by the modality. Usually these values are reasonable, but not always. See part 3 of the DICOM standard (11_03pu.pdf is the filename) section C.11.2.1.2 for details on how raw image pixels are scaled for display. The general idea is that you'll need to apply a linear scaling to the images to get appropriate pixel values for display.
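For reference, that linear scaling from C.11.2.1.2 is only a few lines. The sketch below (plain C++, with an 8-bit display range assumed) maps a stored pixel value x to a display value from Window Center c and Window Width w:

// Hedged sketch of the DICOM linear VOI window (PS3.3 C.11.2.1.2).
#include <algorithm>

unsigned char WindowPixel(double x, double c, double w)
{
    const double yMin = 0.0, yMax = 255.0;   // 8-bit display range assumed
    if (x <= c - 0.5 - (w - 1.0) / 2.0)
        return static_cast<unsigned char>(yMin);
    if (x > c - 0.5 + (w - 1.0) / 2.0)
        return static_cast<unsigned char>(yMax);
    double y = ((x - (c - 0.5)) / (w - 1.0) + 0.5) * (yMax - yMin) + yMin;
    return static_cast<unsigned char>(std::min(std::max(y, yMin), yMax));
}

Note that x here is the pixel value after any Rescale Slope/Intercept has been applied, since the modality LUT comes before the VOI window in the DICOM display pipeline.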
What pixel types do you use? In most cases it's simpler to use a floating-point type while working with ITK, but raw medical images are often stored as short integers, so that could be your problem.
You should also write the image to the disk after each step (in MHD format, for example), and inspect it with a viewer that's known to work properly, such as vv (http://www.creatis.insa-lyon.fr/rio/vv). You could also post them here as well as your code for further review.
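Writing the intermediate images out is only a couple of lines in ITK; a hedged sketch, with the float pixel type and file name as assumptions:

// Hedged sketch: dump an intermediate ITK image to MHD so it can be
// inspected in an external viewer. Pixel type and file name are assumptions.
#include "itkImage.h"
#include "itkImageFileWriter.h"

typedef itk::Image<float, 3> ImageType;
typedef itk::ImageFileWriter<ImageType> WriterType;

void DumpForInspection(ImageType::Pointer image)
{
    WriterType::Pointer writer = WriterType::New();
    writer->SetFileName("debug_step.mhd");
    writer->SetInput(image);
    writer->Update();
}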
Good luck!
For what you describe as your first issue:
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslicer to get all three axes. Then, I use the classes vtkImageMapper, vtkActor2D, vtkRenderer and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with the colors. Some images are shown very bright, others are so dark you can barely see them.
I suggest the following: check your window/level settings in VTK; they probably aren't adequate for your images. If they are abdominal tomographies, window = 350 and level = 50 should be a good color level.
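In code that would just be the window/level on the mapper you already use; a hedged sketch, with the 350/50 preset being the one mentioned above:

// Hedged sketch: set an explicit window/level on vtkImageMapper.
#include <vtkImageMapper.h>

void ApplyAbdominalWindow(vtkImageMapper *mapper)
{
    mapper->SetColorWindow(350.0);  // width of the displayed intensity range
    mapper->SetColorLevel(50.0);    // centre of the displayed intensity range
}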