I am using the following command to generate progressive JPEGs using mozjpeg (cjpeg utility)
cjpeg -quality 85 -outfile outputfile.jpg inputfile.jpg
On rendering, this output image first appears grayscale, then greenish, and then as the full image. After quite a bit of study, I found that the DC coefficients are split into separate scans to reduce the image size. But I need the rendering to progress from blurred to sharp, in full color. How can I achieve this?
There's an option for this:
-dc-scan-opt DC scan optimization mode
- 0 One scan for all components
- 1 One scan per component (default)
- 2 Optimize between one scan for all components and one scan for 1st component
plus one scan for remaining components
The default (1) causes the grayscale look, because it sends the luma DC scan separately from chroma. With -dc-scan-opt 0, the first scan carries the DC coefficients of all components, so the preview appears in color from the start.
I regularly get tree-drilling data out of a machine, and it needs to go into reports.
The PDFs contain too much empty space and useless information.
With convert I already managed to convert the PDF to PNG, cut out parts, and rebuild an image I want. It has fine sharpness; it's just too large:
Output 1: Nice, just too large
For my reports I need it at 45% of that size, or 660 pixels wide.
The best output i managed up to now is this:
Output 2: Perfect size but unsharp
Now, this is far from the quality of the picture before shrinking.
I've already read this article here, which helped.
But I think it must be possible to get an image as fine as the too-large one in Output 1.
I've tried for hours with convert -scale, -resize, and -resample, playing with values for density, sharpen, unsharpen, and quality... nothing better than what I've got, using
convert -density 140 -trim input.pdf -quality 100 -sharpen 0x1.0 step1.png
then processing it into the new picture (Output 1, see above), which I resize to the correct size with
convert output1.png -resize 668x289! -unsharp 0x0.75+0.75+0.01 output2.png
I also tried "-resize 668x" in order not to distort anything; no difference.
I find I am helpless in the end.
I am not an IT expert; I am a computer-savvy tree consultant.
My understanding of image processing is limited.
Maybe it would make sense to stay with a vector-based format (I tried .gif and .svg ... brrrr).
I would prefer to stay with convert/ImageMagick and not install additional software.
It has to run from the command line, as it is part of a bash script processing multiple files. I am using SUSE Linux.
Grateful for your help!
I realize you said no other software, but it can be easier to get good results from other PDF rendering engines.
ImageMagick renders PDFs by shelling out to ghostscript. This is terrific software, but it's designed for print rather than screen output. As a result, it generates very hard edges, because that's what you need if you are intending to control ink on paper. The tricks you see for rendering PDF at higher res and then resizing them fix this, but it can be tricky to get the parameters just right (as you know).
There are PDF rendering libraries which target screen output and will produce nice edges immediately. You don't need to render at high res and sample down; they just render correctly for screen in the first place. This makes them easier to use (obviously!) and a lot faster.
For example, vipsthumbnail comes with SUSE and includes a direct PDF rendering system. Install it with:
zypper install vips-tools
Regarding the size, your 660 pixels across is too low. Some characters in your PDF will come out at only 3 or 4 pixels across, and you simply can't make them sharp; there are just too few dots.
Instead, think about the size you want them printed on the paper, and the level of detail you need. The number of pixels across sets the detail, and the resolution controls the physical size of those dots when you print.
I would at least double that 668. Try:
vipsthumbnail P3_M002.pdf --size 1336 -o x.png
With your sample image I get:
Now when you print, you want those 1336 pixels to fill 17cm of paper. libvips lets you set resolution in pixels per millimetre, so you need 1336 pixels in 170 mm, or 1336 / 170, or 7.86. Try:
vips copy x.png y.png[palette] --xres 7.86 --yres 7.86
Now y.png should load into LibreOffice Calc at 17cm across and be nice and sharp when printed. The [palette] option after y.png enables palettised PNG, which shrinks the image to around 50kb.
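The arithmetic behind that resolution value is just pixels divided by physical size. A quick sketch (the helper name is my own):

```python
def pixels_per_mm(width_px: int, width_mm: float) -> float:
    """Resolution needed so width_px pixels span width_mm of paper."""
    return width_px / width_mm

# 1336 pixels printed across 17 cm (170 mm) of paper:
res = pixels_per_mm(1336, 170)
print(round(res, 2))  # 7.86
```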
The resolution setting is also called DPI (dots per inch). I find the name confusing myself; you'll also see it called "pixels per printed inch", which I think is a much clearer name.
In ImageMagick, set a higher density, then trim, then resize, then apply unsharp masking. The higher the density, the sharper your result, but the slower it will be. Note that PNG -quality 100 is not on the same scale as JPEG: PNG does not have quality values from 0 to 100 as JPEG does. See https://imagemagick.org/script/command-line-options.php#quality. I cannot tell you the "best" numbers to use, as it is image-dependent. You can use another tool, such as those listed at https://imagemagick.org/Usage/formats/#png_non-im, to optimize your PNG output.
So try,
convert -density 300 input.pdf -trim +repage -resize 668x289 -unsharp 0x0.75+0.75+0.01 output.png
Or remove the -unsharp if you find that it is not needed.
ADDITION
Here is what I get with
convert -density 1200 P3_M002.pdf -alpha off -resize 660x -brightness-contrast -35,35 P3_M002.png
I am not sure why the graph itself lost brightness and contrast. (I suspect it is due to an embedded image for the graph.) So I added -brightness-contrast to bring out the detail. But it made the background slightly gray. You can try reducing those values; you may not need it quite so strong.
Great, #fmw42,
pngcrush -res 213 graphc.png done.png
from your link did the job, as to be seen here:
perfect size and sharp graph
Thank you a lot.
Now I'll try to get the file size down, as the original PDF is 95 KiB and now I am at 350 KiB. With 10 or more graphs in a document, it would get unnecessarily large, and working on the document might get slow.
-- Addition -- 2023-02-04
#fmw42 : Thanks for all your effort!
Your solution with the PDF you show does not really work: too gray for a good report, and not the required sharpness either.
#jcupitt: Also thanks, vips is quick and looks interesting. vipsthumbnail's output is unsharp; I tried around a bit, but the documentation is too abstract for me to get the syntax right. I could not find beginner-readable documentation; maybe you know of some?
General: With all my beginner's trials up to now, I find:
the PDF contains all the information to produce a large, absolutely sharp output (vector-typical, I guess)
it is no problem to convert to a PNG of the same size without losing quality
any solution for shrinking the PNG then results in significant (a) quality loss or (b) file-size increase.
So I (a beginner) think the PDF should be processed directly to the correct PNG size, without downsampling the PNG afterwards.
This could be done
(a) telling the conversion process the output size (if there is a possibility for this?), or
(b) first creating a smaller PDF, making it A5 instead of A4, so a fitting PNG is created directly (I need approx. 6.5 inches wide).
For both solutions I lack the ability to investigate sensibly, for it takes me hours and hours to try things out and learn about the mysteries of image processing.
The solution with pngcrush works for the moment, although I'm not really happy about the file size (CPU and fan power are not really important factors here).
--- Addition II --- final one 2023-02-05
convert -density 140 -trim "$datei" -sharpen 0x1.0 rgp-kopie0.png
magick rgp-kopie0.png +dither PNG8:rgp-kopie.png ## less colours
## some convert -crop and -composite here to arrange new image
pngcrush -s -res 213 graphc.png "$namenr.png"
The new image is like this, at around 50 KiB, definitely satisfying for me in quality and file size.
I thank you all a lot for contributing; this makes my work easier from now on!
... and even if I do not completely understand everything, I learned a bit.
Godot 2D project: I created 640×640 PNGs using GIMP.
I imported those PNGs into Godot as Texture2D.
After setting the scale to 0.1, I resized those images to 64×64 in the Godot scene.
When I instantiate this image in my main scene, I get this pixelated, ugly result.
Edit: Don't be confused by the rotated red wings; I did that at runtime. It's not part of the question.
My window size is 1270×780.
Stretch mode is viewport.
I tried changing import settings, etc.
I wonder, is it not possible to have a crisp image at these sizes?
Disclaimer: I haven’t bothered to fire up Godot to reproduce your problem.
I suspect you are shooting yourself in the foot by that scale 0.1 bit. Remember, every time you resample (scale) an image there is loss.
There are three things to do:
Prepare your image(s) to have the same size (resolution) as your intended target display. In other words, if your image is going to display at 64×64 pixels, then your source image should be 64×64 pixels.
When importing images, make sure that Filter is set to ☑ On. If your image contains alpha, you may also wish to check the Fix Alpha Border flag.
Perform as few resampling operations as possible to display the image. In your particular case, you are resampling it to a tenth of its size before again resampling it up to the displayed size. Don’t do that. (This is probably the main cause of your problem.) Just make your sprite have the natural size of the image, and scale the sprite only if necessary.
It is also possible that you are applying some other filter or that your renderer has a bad resampler attached to it, but your problem description does not lead me to think either of these are likely.
A warning ahead: I'm not into godot at all, but I have some experience with image resizing.
Your problem looks entirely like a pure image-resizing issue. If you scale down an image in one go by any factor smaller than 0.5 (that is, make it smaller than half its original size), you will get this kind of ugly small image.
To get a nice and smooth result, you have to resize it in multiple steps:
Always halve the image size until the next necessary step is larger than scaling by 0.5.
For your case (640x640 to 64x64) it would need these steps:
640 * 0.5 => 320
320 * 0.5 => 160
160 * 0.5 => 80
80 * 0.8 => 64
You can either start with a much smaller image (if you never need it at the large size in your code), add multiple pre-scaled resolutions to your resources, or precompute the halved steps before you use them and then just pick the right image to scale down by the final factor.
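The halving schedule above can be computed mechanically. A minimal sketch (the function name is my own, not a Godot API):

```python
def halving_steps(src: int, dst: int) -> list:
    """Repeatedly halve src until the next halving would overshoot dst,
    then finish with one final step directly to dst."""
    sizes = [src]
    while sizes[-1] // 2 > dst:
        sizes.append(sizes[-1] // 2)
    sizes.append(dst)
    return sizes

print(halving_steps(640, 64))  # [640, 320, 160, 80, 64]
```

The last step (80 to 64) is a scale of 0.8, safely above the 0.5 threshold.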
I'm not very experienced with Godot and don't have it open right now, but I imagine you shouldn't scale the image by 0.1, because that will cause a loss in image quality.
I'm trying to set the 9-patch image attached below as my layout's background. It looks good on a few devices, but a few other devices show repeated circles in the center, which makes the UI inconsistent. I want it consistent across all device densities. How can I make the UI appear the same on all devices?
It is difficult to understand what you mean; the grammar is a little unclear. Do you mean the same background for all layouts?
Make sure you create versions of the 9Patch png for each density and put them in the relevant drawable folders (drawable-mdpi, drawable-hdpi, drawable-xhdpi etc).
When you run your app, android determines the screen density of the device you're running it on and then looks in that particular drawable folder for your 9Patch png. If it can't find a 9Patch png in the folder for that density, it will look in the folders for the other densities until it does find one. It will then either stretch or compress the png as needed to try and create a suitable png for the missing density. This stretching and compressing can lead to the sort of artifacts you are seeing.
For best results, don't leave it up to the operating system to try and generate images for missing screen densities. Provide your own images for each density.
Below is an excerpt from Supporting Different Screens on the Android Developers site. Take the time to read and understand this if you want to progress with developing apps for android, because this is fundamental to designing the look of any UI.
"..you should start with your raw resource in vector format and generate the images for each density using the following size scale:
xhdpi: 2.0
hdpi: 1.5
mdpi: 1.0 (baseline)
ldpi: 0.75
This means that if you generate a 200x200 image for xhdpi devices, you should generate the same resource in 150x150 for hdpi, 100x100 for mdpi, and 75x75 for ldpi devices."
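The scale factors in that excerpt can be applied mechanically. A small sketch (the helper name is my own):

```python
# Density scale factors from the Android "Supporting Different Screens" guide.
DENSITY_SCALE = {"xhdpi": 2.0, "hdpi": 1.5, "mdpi": 1.0, "ldpi": 0.75}

def sizes_for_densities(mdpi_px: int) -> dict:
    """Pixel size of an asset at each density, given its mdpi (baseline) size."""
    return {density: int(mdpi_px * scale) for density, scale in DENSITY_SCALE.items()}

print(sizes_for_densities(100))
# {'xhdpi': 200, 'hdpi': 150, 'mdpi': 100, 'ldpi': 75}
```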
I am trying to read an image with ITK and display it with VTK.
But there is a problem that has been haunting me for quite some time.
I read the images using the classes itkGDCMImageIO and itkImageSeriesReader.
After reading, I can do two different things:
1.
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslice to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer, and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with the colors. Some are shown very bright; others are so dark you can barely see them.
2.
The second scenario is the registration pipeline. Here I read the image as before, then use the classes shown in the ITK Software Guide chapter about registration. Then I resample the image and use itkImageSeriesWriter.
And that is when the problem appears. After writing the image to a file, I compare this new image with the input image in the XMedCon software. If the image I wrote was shown too bright in my software, there are no changes when I compare the two in XMedCon. Otherwise, if the image was too dark in my software, it appears all messed up in XMedCon.
I noticed when comparing both images (the original and the new one) that, in both cases, there are changes in the modality, pixel dimensions, and glmax.
I suppose the problem is with the glmax, as the major changes occur with the darker images.
I really don't know what to do. Does this have something to do with color level/window? The strangest thing is that all the images are very similar, with identical tags, and only some of them display errors when shown/written.
I'm not familiar with the particulars of VTK/ITK specifically, but it sounds to me like the problem is more general than that. Medical images have a high dynamic range and often the images will appear very dark or very bright if the window isn't set to some appropriate range. The DICOM tags Window Center (0028, 1050) and Window Width (0028, 1051) will include some default window settings that were selected by the modality. Usually these values are reasonable, but not always. See part 3 of the DICOM standard (11_03pu.pdf is the filename) section C.11.2.1.2 for details on how raw image pixels are scaled for display. The general idea is that you'll need to apply a linear scaling to the images to get appropriate pixel values for display.
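The linear scaling the standard describes can be sketched as follows. This is a plain-Python rendering of the LINEAR VOI LUT function from DICOM PS3.3 C.11.2.1.2, not ITK/VTK API:

```python
def apply_window(pixel: float, center: float, width: float,
                 out_min: float = 0, out_max: float = 255) -> float:
    """Map a stored pixel value to a display value using the DICOM
    Window Center (0028,1050) / Window Width (0028,1051) linear function."""
    if pixel <= center - 0.5 - (width - 1) / 2:
        return out_min
    if pixel > center - 0.5 + (width - 1) / 2:
        return out_max
    return ((pixel - (center - 0.5)) / (width - 1) + 0.5) * (out_max - out_min) + out_min

# Example: an abdominal CT window (width 350, center 50).
print(apply_window(-1000, center=50, width=350))  # air clamps to 0
print(apply_window(3000, center=50, width=350))   # bone/metal clamps to 255
```

Without a scaling like this, a high-dynamic-range image dumped straight to an 8-bit display will look washed out or nearly black, which matches the symptoms described above.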
What pixel types do you use? In most cases it's simpler to use a floating-point type while working with ITK, but raw medical images are often stored as short integers, so that could be your problem.
You should also write the image to the disk after each step (in MHD format, for example), and inspect it with a viewer that's known to work properly, such as vv (http://www.creatis.insa-lyon.fr/rio/vv). You could also post them here as well as your code for further review.
Good luck!
For what you describe as your first issue:
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslice to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer, and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with the colors. Some are shown very bright; others are so dark you can barely see them.
I suggest the following: check your window/level in VTK; they probably aren't adequate for your images. If they are abdominal tomographies, window = 350, level = 50 should be a good setting.
I am using ImageMagick to programmatically reduce the size of a PNG image by reducing the colors in the image. I get the image's number of unique colors and divide it by 2. Then I assign this value to the -colors option as follows:
colors=$((unique_colors / 2))
convert image.png -colors $colors -depth 8 output.png
I thought this would substantially reduce the size of the image, but instead it increases the image's size on disk. Can anyone shed any light on this?
Thanks.
EDIT: Turns out the problem was dithering. Dithering helps your reduced color images look more like the originals but adds to the image size. To remove dithering in ImageMagick add +dither to your command.
Example
convert CandyBar.png +dither -colors 300 -depth 8 smallerCandyBar.png
ImageMagick probably uses a dithering algorithm to make the image appear as though it has the original number of colors. This increases the "randomness" of the image data (single pixels are recolored in places to blend into neighboring colors), and that data no longer compresses as well. Research further into how the convert command does the dithering. You can also see this effect by adding the second image as a layer in GIMP (or an equivalent program) and tuning the transparency.
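That "randomness no longer packs well" effect is easy to demonstrate without images: data with tiny dither-like noise deflate-compresses far worse than a flat run of the same length. A toy sketch using zlib (the same DEFLATE compression PNG uses internally):

```python
import random
import zlib

random.seed(0)

# A "smooth" area: one value repeated 10,000 times compresses extremely well.
flat = bytes([128] * 10000)

# The same area with +/-1 dither-like noise: visually almost identical,
# but the byte stream is far less predictable.
noisy = bytes(128 + random.choice((-1, 0, 1)) for _ in range(10000))

print(len(zlib.compress(flat)))   # a few dozen bytes
print(len(zlib.compress(noisy)))  # a couple of thousand bytes
```

This is why reducing colors while adding dither can leave the file larger than before.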
You should use pngquant for this.
You don't need to guess the number of colors; it has an actual --quality setting:
pngquant --verbose --quality=70 image.png
The above will automatically choose the number of colors needed to match the given quality, on the same scale as JPEG quality (100 = perfect, 70 = OK, 20 = awful).
pngquant has a substantially better quantization algorithm, and the better the quantization, the better the quality/filesize ratio.
pngquant also doesn't dither areas that look good without dithering, which avoids adding unnecessary noise/randomness to the file.
The "new" PNG's compression is not as good as the original's.