Game background file of just 2 KB... how?

I am making a game for mobile phones and I have little knowledge of creating graphics for games. I am making the graphics using CorelDraw and Photoshop.
I made flash.png using the above two programs and could only squeeze the size down to 47 KB.
But I came across one game whose background images (bg0.png and bg1.png) are just 2 KB each.
I want to know how to make such beautiful graphics without increasing the size of my file.
I assume the game's artist must have hand-sketched, scanned, and used one of the above programs to fill in the colors, but I am not sure about it.
Please help.

There are several ways to reduce the size of a PNG:
Reduce the colour depth. Don't use RGB true-colour (24-bit); use an indexed-colour image. You need to add a palette to the image, but each pixel is then one byte instead of three (or four with alpha).
Once you have an indexed-colour image, reduce the number of colours in the palette. There is a limit to how far you can take this - the fewer colours, the lower the image quality.
Remove unnecessary PNG chunks. Art packages may add data to the PNG that isn't image data (creation date, author info, resolution, comments, etc.). An example command for the first two steps is sketched below.
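For instance, a rough sketch with ImageMagick (the file names and the colour count are only placeholders; tune the count until the quality/size trade-off looks right):
convert background.png -colors 16 PNG8:background_indexed.png
The PNG8: prefix forces an 8-bit palettised output, and -colors quantises the image down to the given number of palette entries.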

Check http://pmt.sourceforge.net/pngcrush/ to get rid of unneeded PNG chunks and compress the IDAT chunk even further. This might help a lot or not at all, depending on the PNG that came out of the art package. If it doesn't help, consider indexed PNGs. And if you go for palettised PNGs, be sure to check out http://en.wikipedia.org/wiki/Color_cycling for cool effects you might be able to use.
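If it helps, a typical pngcrush invocation (file names are just examples) that strips ancillary chunks and tries every filter/compression combination looks roughly like:
pngcrush -rem alla -brute bg0.png bg0_small.png
-rem alla removes the non-essential ancillary chunks mentioned above, and -brute is slow but occasionally saves a few hundred extra bytes.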

Use a paletted PNG with few colors and then pass the PNG through a PNG optimizer like the free PngOptimizer executable.
If your PNG is still too big, reduce the number of colors used and re-optimize. Rinse and repeat.
I have used this technique on quite a lot of mobile games where size was of the essence.


Convert PDF to image with high resolution to fit in page

I regularly get tree-drilling data out of a machine, and it needs to go into reports.
The PDFs contain too much empty space and useless information.
With convert I have already managed to convert the PDF to PNG, cut out parts, and rebuild an image the way I want it. It has fine sharpness; it's just too large:
Output 1: Nice, just too large
For my reports I need it at 45% of that size, or 660 pixels wide.
The best output I have managed so far is this:
Output 2: Perfect size but unsharp
This is far from the quality of the picture before shrinking.
I have of course read this article here, which already helped.
But I think it must be possible to get an image as fine as the too-large one in Output 1.
I've experimented for hours with convert -scale, -resize and -resample, playing around with values for density, sharpen, unsharp and quality... nothing better than what I've got, using
convert -density 140 -trim input.pdf -quality 100 -sharpen 0x1.0 step1.png
then processing it into the new picture (Output 1, see above), which I scale to the correct size with
convert output1.png -resize 668x289! -unsharp 0x0.75+0.75+0.01 output2.png
I also tried -resize 668x so as not to force the aspect ratio; it made no difference.
In the end I find I am stuck.
I am not an IT expert; I am a computer-savvy tree consultant.
My understanding of image processing is limited.
Maybe it would make sense to stay with a vector-based format (I tried .gif and .svg... brrr).
I would prefer to stay with convert/ImageMagick and not install additional software.
It has to run from the command line, as it is part of a bash script processing multiple files. I am using SUSE Linux.
Grateful for your help!
I realize you said no other software, but it can be easier to get good results from other PDF rendering engines.
ImageMagick renders PDFs by shelling out to ghostscript. This is terrific software, but it's designed for print rather than screen output. As a result, it generates very hard edges, because that's what you need if you are intending to control ink on paper. The tricks you see for rendering PDF at higher res and then resizing them fix this, but it can be tricky to get the parameters just right (as you know).
There are PDF rendering libraries which target screen output and will produce nice edges immediately. You don't need to render at high res and sample down, they just render correctly for screen in the first place. This makes them easier to use (obviously!) and a lot faster.
For example, vipsthumbnail comes with SUSE and includes a direct PDF rendering system. Install it with:
zypper install vips-tools
Regarding the size, your 660 pixels across is too low. Some characters in your PDF will come out at only 3 or 4 pixels across and you simply can't make them sharp, there are just too few dots.
Instead, think about the size you want them printed on the paper, and the level of detail you need. The number of pixels across sets the detail, and the resolution controls the physical size of those dots when you print.
I would at least double that 668. Try:
vipsthumbnail P3_M002.pdf --size 1336 -o x.png
With your sample image I get:
Now when you print, you want those 1336 pixels to fill 17cm of paper. libvips lets you set resolution in pixels per millimetre, so you need 1336 pixels in 170 mm, or 1336 / 170, or 7.86. Try:
vips copy x.png y.png[palette] --xres 7.86 --yres 7.86
Now y.png should load into LibreOffice Calc at 17 cm across and be nice and sharp when printed. The [palette] option after y.png enables palettised PNG, which shrinks the image to around 50 KB.
The resolution setting is also called DPI (dots per inch). I find the name confusing myself -- you'll also see it called "pixels per printed inch", which I think is much clearer.
In ImageMagick, set a higher density, then trim, then resize, then unsharpen. The higher the density, the sharper your result, but the slower it will get. Note that -quality 100 does not mean the same thing for PNG as it does for JPEG; PNG does not have quality values corresponding to 0 to 100. See https://imagemagick.org/script/command-line-options.php#quality. I cannot tell you the "best" numbers to use as it is image dependent. You can use some other tool, such as those at https://imagemagick.org/Usage/formats/#png_non-im, to optimize your PNG output.
So try,
convert -density 300 input.pdf -trim +repage -resize 668x289 -unsharp 0x0.75+0.75+0.01 output.png
Or remove the -unsharp if you find that it is not needed.
ADDITION
Here is what I get with
convert -density 1200 P3_M002.pdf -alpha off -resize 660x -brightness-contrast -35,35 P3_M002.png
I am not sure why the graph itself lost brightness and contrast. (I suspect it is due to an embedded image used for the graph.) So I added -brightness-contrast to bring out the detail, but it made the background slightly gray. You can try reducing those values; you may not need them quite so strong.
Great, #fmw42,
pngcrush -res 213 graphc.png done.png
from your link did the job, as can be seen here:
perfect size and sharp graph
Thank you a lot.
Now I'll try to get the file size down, as the original PDF is 95 KiB and I am now at 350 KiB. With 10 or more graphs in a document it might get unnecessarily large, and working on the document might get slow.
-- Addition -- 2023-02-04
#fmw42 : Thanks for all your effort!
Your solution with the .pdf you show does not really work for me - it is too gray for a good report and also lacks the required sharpness.
#jcupitt : Also thanks, vips is quick and looks interesting. vipsthumbnail's output is unsharp; I experimented a bit, but the documentation is too abstract for me to get the syntax right. I could not find beginner-readable documentation - maybe you know of some?
General: with all my beginner's trials up to now I find:
the PDF contains all the information to produce a large, absolutely sharp output (typical for vector data, I guess)
it is no problem to convert to a PNG of the same size without losing quality
any solution that then shrinks the PNG results in either (a) significant quality loss or (b) a file-size increase.
So I (a beginner) think the PDF should be rendered directly at the correct PNG size, without downsampling the PNG afterwards.
This could be done by
(a) telling the conversion process the output size (if that is possible?), or
(b) first creating a smaller PDF, e.g. making it A5 instead of A4, so that a fitting PNG is created directly (I need roughly 6.5 inches of width).
For both approaches I lack the ability to investigate sensibly; it takes me hours and hours to try things out and learn about the mysteries of image processing.
The solution with pngcrush works for the moment, although I'm not really happy with the file size (CPU and fan power are not important factors here).
--- Addition II --- final one 2023-02-05
convert -density 140 -trim "$datei" -sharpen 0x1.0 rgp-kopie0.png
magick rgp-kopie0.png +dither PNG8:rgp-kopie.png ## less colours
## some convert -crop and -composite here to arrange new image
pngcrush -s -res 213 graphc.png "$namenr.png"
The new image is like this, at around 50 KiB - definitely satisfying for me in quality and file size.
I thank you all a lot for contributing; this makes my work easier from now on!
... and even if I do not completely understand everything, I learnt a bit.

Is there a library for C++ (or a tool) to reduce colors in a PNG image with alpha values?

I have a PNG image with alpha values and need to reduce the number of colors. I need no more than 256 colors in the image, and so far everything I have tried (from Paint Shop to Leptonica, etc.) strips the image of its alpha channel and makes it unusable. Is there anything out there that does what I want?
Edit: I do not want to use an 8-bit palette. I just need to reduce the number of colors so that my own program can process the image.
Have you tried ImageMagick?
http://www.imagemagick.org/script/index.php
8-bit PNGs with alpha transparency will only render the alpha correctly in newer web browsers.
Here are some tools and a website that do the conversion:
free pngquant (example command below)
Adobe Fireworks
and website: http://www.8bitalpha.com/
Also, see similar question
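For reference, pngquant is command-line friendly; something like the following (the file name is just an example) quantizes to at most 256 colors while keeping per-pixel alpha:
pngquant 256 image.png
In the versions I have used it writes a new file next to the input (with an -fs8.png suffix) rather than overwriting it.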
The problem you describe is inherent in the PNG format. See the entry at Wikipedia and notice there's no entry in the color options table for Indexed & alpha. There's an ability to add an alpha value to each of the 256 colors, but typically only one palette entry will be made fully transparent and the rest will be fully opaque.
Paint Shop Pro has a couple of options for blending or simulating partial transparency in a paletted PNG - I know because I wrote it.

DICOM Image is too dark with ITK

I am trying to read an image with ITK and display it with VTK.
But there is a problem that has been haunting me for quite some time.
I read the images using the classes itkGDCMImageIO and itkImageSeriesReader.
After reading, I can do two different things:
1.
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslice to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with the colors. Some of the images are shown very bright, others are so dark you can barely see them.
2.
The second scenario is the registration pipeline. Here I read the image as before, then use the classes shown in the ITK Software Guide chapter about registration. Then I resample the image and use itkImageSeriesWriter.
And that's when the problem appears. After writing the image to a file, I compare this new image with the image I used as input, using the XMedCon software. If the image I wrote was shown too bright in my software, there are no changes when I compare the two in XMedCon. Conversely, if the image was too dark in my software, it appears all messed up in XMedCon.
I noticed, when comparing both images (the original and the new one), that in both cases there are changes in modality, pixel dimensions and glmax.
I suppose the problem is with the glmax, as the major changes occur with the darker images.
I really don't know what to do. Does this have something to do with color window/level? The strangest thing is that all the images are very similar, with identical tags, and only some of them show errors when displayed/written.
I'm not familiar with the particulars of VTK/ITK specifically, but it sounds to me like the problem is more general than that. Medical images have a high dynamic range and often the images will appear very dark or very bright if the window isn't set to some appropriate range. The DICOM tags Window Center (0028, 1050) and Window Width (0028, 1051) will include some default window settings that were selected by the modality. Usually these values are reasonable, but not always. See part 3 of the DICOM standard (11_03pu.pdf is the filename) section C.11.2.1.2 for details on how raw image pixels are scaled for display. The general idea is that you'll need to apply a linear scaling to the images to get appropriate pixel values for display.
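To make that concrete, here is a minimal sketch of that linear scaling (the function name and the 8-bit output range are my own choices; real code should also apply the Rescale Slope/Intercept first, as described alongside C.11.2.1.2):

#include <cstdint>

// Map a raw pixel value to [0, 255] for display using the DICOM
// window center / window width linear transform (sketch only).
uint8_t windowLevel(double pixel, double center, double width)
{
    const double lower = center - 0.5 - (width - 1.0) / 2.0;
    const double upper = center - 0.5 + (width - 1.0) / 2.0;
    if (pixel <= lower) return 0;    // below the window: black
    if (pixel >  upper) return 255;  // above the window: white
    // Linear ramp between the window edges.
    const double y = ((pixel - (center - 0.5)) / (width - 1.0) + 0.5) * 255.0;
    return static_cast<uint8_t>(y + 0.5);
}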
What pixel types do you use? In most cases it's simpler to use a floating-point type while working with ITK, but raw medical images are often stored as short, so that could be your problem.
You should also write the image to the disk after each step (in MHD format, for example), and inspect it with a viewer that's known to work properly, such as vv (http://www.creatis.insa-lyon.fr/rio/vv). You could also post them here as well as your code for further review.
Good luck!
For what you describe as your first issue:
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslice to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with the colors. Some of the images are shown very bright, others are so dark you can barely see them.
I suggest the following: check your window/level settings in VTK; they are probably not appropriate for your images. If they are abdominal tomographies, window = 350 and level = 50 should be a nice starting point. A minimal sketch of setting this is below.
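For instance (assuming 'mapper' is the vtkImageMapper already set up in your pipeline; the values are just the suggestion above):

#include <vtkImageMapper.h>

// Sketch: apply an abdominal window/level to the 2D image mapper.
mapper->SetColorWindow(350.0);  // window width
mapper->SetColorLevel(50.0);    // window center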

Detect color space with OpenCV

How can I see the color space of my image with OpenCV?
I would like to be sure it is RGB before converting it to another color space using the cvCvtColor() function.
Thanks
Unfortunately, OpenCV doesn't provide any sort of indication as to the color space in the IplImage structure, so if you blindly pick up an IplImage from somewhere there is just no way to know how it was encoded. Furthermore, no algorithm can definitively tell you whether an image should be interpreted as HSV or RGB - it's all just a bunch of bytes to the machine (should this be HSV or RGB?). I recommend you wrap your IplImages in another struct (or even a C++ class with templates!) to help you keep track of this information; a minimal sketch is below. If you're really desperate and you're dealing only with a certain type of images (outdoor scenes, offices, faces, etc.) you could try computing some statistics on your images (e.g. build histogram statistics for natural RGB images and some for natural HSV images), and then try to classify your totally unknown image by comparing which color space it is closer to.
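For example, a very small wrapper along those lines might look like this (the type names are mine, not an OpenCV API; it uses the old C-style IplImage interface the question mentions):

#include <opencv/cv.h>

// Hypothetical tag for the color space we believe the pixels are in;
// OpenCV itself does not store or check this.
enum ColorSpace { CS_BGR, CS_RGB, CS_HSV, CS_GRAY };

struct TaggedImage {
    IplImage*  image;  // the raw OpenCV image
    ColorSpace space;  // how the channels should be interpreted
};

// Write an RGB copy of 'src' into 'dst' (same size and depth), converting
// only when the tag says the source is BGR. Sketch only.
void toRGB(const TaggedImage& src, IplImage* dst)
{
    if (src.space == CS_BGR)
        cvCvtColor(src.image, dst, CV_BGR2RGB);
    else
        cvCopy(src.image, dst);
}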
txandi makes an interesting point. OpenCV has a BGR colorspace which is used by default. This is similar to the RGB colorspace except that the B and R channels are physically switched in the image. If the physical channel ordering is important to you, you will need to convert your image with this function: cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB).
As rcv said, there is no method to programmatically detect the color space by inspecting the three color channels, unless you have a priori knowledge of the image content (e.g., there is a marker in the image whose color is known). If you will be accepting images from unknown sources, you must allow the user to specify the color space of their image. A good default would be to assume RGB.
If you modify any of the pixel colors before display, and you are using a non-OpenCV viewer, you should probably use cvCvtColor(src,dst,CV_BGR2RGB) after you have finished running all of your color filters. If you are using OpenCV for the viewer or will be saving the images out to file, you should make sure they are in BGR color space.
The IplImage struct has a field named colorModel consisting of 4 chars. Unfortunately, OpenCV ignores this field. But you can use this field to keep track of different color models.
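For instance (a sketch; OpenCV won't read this back, it is purely your own bookkeeping, and 'img' is assumed to be a valid IplImage*):

#include <cstring>

// Record that this IplImage now holds HSV data.
std::strncpy(img->colorModel, "HSV", 4);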
I basically split the channels and display each one to figure out the color space of the image I'm using. It may not be the best way, but it works for me.
For a detailed explanation, you can refer to the link below.
https://dryrungarage.wordpress.com/2018/03/11/image-processing-basics/

Imaging Question: How to determine image quality?

I'm looking for ways to determine the quality of a photograph (JPG). The first thing that came to mind was to compare the file size to the number of pixels stored within. Are there any other ways, for example to check the amount of noise in a JPG? Does anyone have a good reading link on this topic or any experience? By the way, the project I'm working on is written in C# (.NET 3.5) and I use the Aurigma Graphics Mill for image processing.
Thanks in advance!
I'm not entirely clear on what you mean by "quality". If you mean the quality setting in the JPEG compression algorithm, then you may be able to extract it from the EXIF tags of the image (this relies on the capture device putting them in and no one else overwriting them); for your library, see here:
http://www.aurigma.com/Support/DocViewer/30/JPEGFileFormat.htm.aspx
If you mean any other sort of "quality" then you need to come up with a better definition of quality. For example, over-exposure may be a problem, in which case hunting for saturated pixels would help detect that specific sort of problem (a sketch of this is below). More generally, you could look at statistics (mean, standard deviation) of the image histogram in the three colour channels. The image may be out of focus, in which case you could look for a cut-off in the spatial frequencies of the image's Fourier transform. If you're worried about speckle noise then you could try applying a median filter to the image and comparing back to the original image (more speckle noise would give a larger change) - I'm guessing a bit here.
If by "quality" you mean aesthetic properties of composition etc then - good luck!
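As a rough illustration of the saturated-pixel idea (plain C++ over a packed 8-bit RGB buffer; nothing here is specific to Aurigma Graphics Mill, and the threshold is an arbitrary choice):

#include <cstddef>
#include <cstdint>

// Fraction of pixels whose R, G and B channels are all (nearly) saturated.
// 'rgb' is assumed to be packed 8-bit RGB, 3 bytes per pixel.
double saturatedFraction(const uint8_t* rgb, size_t pixelCount, uint8_t threshold = 250)
{
    if (pixelCount == 0) return 0.0;
    size_t saturated = 0;
    for (size_t i = 0; i < pixelCount; ++i) {
        const uint8_t* p = rgb + 3 * i;
        if (p[0] >= threshold && p[1] >= threshold && p[2] >= threshold)
            ++saturated;
    }
    return static_cast<double>(saturated) / pixelCount;
}

A value close to 1 suggests large blown-out areas; what counts as "too high" depends on the kind of photographs you expect.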
The 'quality' of an image is not measurable, because it doesn't correspond to any particular value.
If you take it as the number of pixels in an image of a specific size, that is not accurate. You might talk about a photograph taken in bad light conditions as being of 'bad quality', even though it has exactly the same number of pixels as another image taken in good light. The term is often used to talk about the overall effect of an image, rather than its technical specifications.
I wanted to do something similar, but wanted the "Soylent Green" option and used people to rank images by performing comparisons. See the question responses here.
I think you're asking about how to determine the quality of the compression process itself. This can be done by converting the JPEG to a BMP and comparing that BMP to the original bitmap from which the JPEG was created. You can iterate through the bitmaps pixel by pixel and calculate a pixel-to-pixel "distance" by summing the differences between the R, G and B values of each pair of pixels (i.e. the pixel in the original and the pixel in the JPEG) and dividing by the total number of pixels. This will give you a measure of the average difference between the original and the JPEG.
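A minimal sketch of that comparison (plain C++ over two same-sized packed 8-bit RGB buffers; the question is about C#, but the idea translates directly):

#include <cstddef>
#include <cstdint>
#include <cstdlib>

// Average per-pixel distance between the original bitmap and the decoded
// JPEG, both packed 8-bit RGB with the same dimensions.
double averagePixelDifference(const uint8_t* original, const uint8_t* decoded, size_t pixelCount)
{
    if (pixelCount == 0) return 0.0;
    double total = 0.0;
    for (size_t i = 0; i < 3 * pixelCount; i += 3) {
        total += std::abs(original[i]     - decoded[i])       // R
               + std::abs(original[i + 1] - decoded[i + 1])   // G
               + std::abs(original[i + 2] - decoded[i + 2]);  // B
    }
    return total / pixelCount;  // lower means the JPEG is closer to the original
}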
Reading the number of pixels in the image can tell you the "megapixel" size (#pixels / 1,000,000), which can be a crude form of programmatic quality check, but that won't tell you whether the photo is properly focused, assuming it is supposed to be focused (think fast-moving objects, like trains), nor whether there is something in the picture worth looking at - that will require a human, or a pigeon if you prefer.
