How to change the colors of a PNG image easily? [closed]

I have PNG images that represent playing-cards.
They use the standard colours, with Clubs and Spades in black and Diamonds and Hearts in red.
I want to create a 4-colour deck by converting the Clubs to green and the Diamonds to blue.
I don't want to re-draw them, but changing them by hand seems like a lot of work since the colours are not all "pure" but graded near the edges.
How do I do it?

Photoshop - right click layer -> blending options -> color overlay
change color and save

This should be fairly straightforward in the gimp http://gimp.org/
First make sure your image is RGB (not indexed color), then use the "Color to Alpha" feature to make the clubs/diamonds transparent, and finally fill or set the background to whatever color you want.

If you are going to be programming an application to do all of this, the process will be something like this:
Convert the image from RGB to HSV
Adjust the H (hue) value
Convert the image back to RGB
Save the image
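For example, here is a minimal sketch of that pipeline with Pillow (the filename and the exact hue shift are placeholders; since black pixels have zero saturation, a hue rotation like this only moves the red pips and leaves the black ones alone):

from PIL import Image

# Hypothetical filename: a card with red (diamond) pips.
img = Image.open("diamond_card.png").convert("RGB")

# Split into hue/saturation/value, rotate the hue channel, and merge back.
# Pillow stores hue as 0-255, so +170 is roughly +240 degrees: red becomes blue.
h, s, v = img.convert("HSV").split()
h = h.point(lambda x: (x + 170) % 256)

Image.merge("HSV", (h, s, v)).convert("RGB").save("diamond_card_blue.png")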

Use Photoshop, Paint.NET or similar software and adjust Hue.

OK, it can be done easily in Photoshop.
Open the PNG and check the Image -> Mode value (mine was set to Indexed Color). Go to Image -> Mode and select RGB Color. Now you can change your colors easily.

If you are like me and Photoshop is out of your price range or just overkill for what you need, Acorn 5 is a much cheaper alternative with a lot of the same features, one of them being a color-change option. You can import all of the basic image formats, including SVG and PNG. The color editing works well and allows basic color selection, RGB selection, hex codes, or even a color grabber if you do not know the color. These color features, plus a whole lot of image-editing features, are definitely worth the $30. The only downside is that it is currently only available on Mac.

Related

Combine two single color images into one with CLI (imagemagick?) [closed]

I'm generating "single" color images (B/W and red/white) of our todo schedule to write to a 3-color e-ink screen with my Raspberry Pi. First I write the black one, then overwrite it with red. This works really well, so no help required there.
But I would also like to have the same image available on a website. So I need to have my red image "overwrite" the black one, so I also have the black/red/white image the e-ink shows, as a png/bmp/jpg/whatever. For now we're just using the black one, but I want my overdue stuff in red as well.
I already have ImageMagick for the conversions from SVG to PNG/BMP etc., and I'm guessing it should be able to do this. However, I could not find any examples. It's also hard to define the right search terms, since a lot of people want to combine two images next to or below each other, not actually merge them into one.
Does anyone here know how to do this? (I'm not stuck on ImageMagick, but that's the tool I already use)
My guess would be I'd need to make the white in the red/white transparent, then somehow layer it on top... But how?
Change the RED image to 50% opacity
convert red.png -alpha set -background none -channel A -evaluate multiply 0.5 +channel red2.png
Then overlay the 50% RED on the black
convert black.png red2.png -gravity center -composite all.png
Thanks to answers I found elsewhere on transparency, and above here from Bruce on 'composite', I ended up doing:
convert screen-output-red.png -fuzz 30% -transparent white -fill red -opaque black trans.png
composite trans.png screen-output.png combo.png
Haven't figured out if I could do it in one step, but this is good enough for now.
There are several ways to do this with ImageMagick. A very simple method is to make the white transparent then composite the red over the black.
magick img_black.png img_red.png -colorspace rgb \
-fuzz "10%" -transparent white -background white -flatten result.png
First it reads in both images and sets the working colorspace to "rgb" so the result will be color instead of grayscale.
Then it sets a "-fuzz" value of "10%", which is the amount of variance from pure white that will be affected by the "-transparent" operation. You can adjust this amount to suit your needs.
Next the "-transparent white" operation removes all the white and near-white specified by the fuzz value. Since that removes the white from both images, we add a background color of white so the result will be flattened onto that.
Finish by flattening the red image onto the black image onto a white background and write the result to a file.
This command uses ImageMagick v7 on a *nix system. To use it with IMv6 change "magick" to "convert". To use this command in Windows change the continued-line backslash "\" to a caret "^".
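If you ever want the same result from Python instead of shelling out to ImageMagick, a rough Pillow/NumPy equivalent of those steps might look like this (the filenames and the ~10% fuzz are placeholders, and both images are assumed to be the same size):

import numpy as np
from PIL import Image

black = np.array(Image.open("screen-output.png").convert("RGB"))
red = np.array(Image.open("screen-output-red.png").convert("RGB"))

# Pixels of the red layer within ~10% of pure white count as "transparent",
# so the black layer shows through; everywhere else the red layer wins.
near_white = (red > 255 - 26).all(axis=-1)
combined = np.where(near_white[..., None], black, red)

Image.fromarray(combined.astype(np.uint8)).save("combo.png")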

How to antialias when drawing to semitransparent frame buffers in OpenGL? [duplicate]

This question already has answers here:
OpenGL default pipeline alpha blending does not make any sense for the alpha component
I'm currently learning OpenGL and trying to make a simple GUI. So far I know very little about shaders and haven't used any.
One of the tricks I use to accelerate text rendering is to render the text quads to a transparent frame buffer object before rendering them to the screen. The speedup is significant, but I noticed the text is poorly drawn on the edges. I then noticed that if I made the texture a different (still transparent) color, the text would blend with that color. In the example I rendered to a transparent green texture:
I use the following parameters for blending:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE)
with glBlendEquation being default (GL_ADD).
My understanding from the documentation is that each pixel is sent through an equation that is source_rgb * blend_factor + dest_rgb * blend_factor.
What I would typically want is that, when a texel is transparent, its RGB is ignored on both sides of the blend, so ideally I would compute the RGB with an equation like:
source_rgb * source_alpha / total_alpha + dest_rgb * dest_alpha / total_alpha
where total_alpha is the sum of the two alphas. That doesn't seem to be supported.
Is there something that can help me with minimum headache? I'm open to suggestions, from fixes, to rewriting everything to using a library that already does it.
The full source code is available here if you are interested. Please let me know if you need relevant extracts.
EDIT: I did try removing the GL_ONE, GL_ONE from my alpha blending and simply using (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) for RGBA, but I get similar results.
Solved the problem using premultiplication as suggested.
First of all, total_alpha isn't the sum of the alphas but rather the following:
total_alpha = 1 - (1 - source_alpha)*(1 - dest_alpha)
As you noted correctly, OpenGL doesn't support that final division by total_alpha. But it doesn't need to. All you need is to switch into thinking and working in terms of pre-multiplied alpha. With that the simple
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)
does the right thing.
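For reference, here is a minimal sketch of the premultiplication step itself, using NumPy for the pixel math (this is not the poster's code, and the GL call is shown as a comment because it needs a current context):

import numpy as np

def premultiply(rgba):
    # Convert straight-alpha RGBA (uint8, H x W x 4) to premultiplied alpha.
    out = rgba.astype(np.float32) / 255.0
    out[..., :3] *= out[..., 3:4]        # multiply RGB by alpha
    return (out * 255.0 + 0.5).astype(np.uint8)

# Upload premultiplied glyph textures, then blend both the FBO pass and the
# final composite onto the screen with:
#   glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)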

Is ZBar unable to recognize colored barcodes?

I am using ZBar (http://zbar.sourceforge.net/) in one of my projects and I noticed that the library is unable to recognize barcodes if they are colored: let's say a yellow background and blue foreground (the bars). The application requires colored barcodes when printed on a paper label.
Is there a way to work around the issue, or is there another library that makes this possible?
NOTES: I am using Python 3.7.1 for this application.
ZBar processes pictures in black and white, so you can pre-process the image so that the barcode is recognized better. You could edit the image so that every pixel that deviates from white by more than a certain percentage is turned black, and dye all the other pixels white. That should give a good contrast.
Here is a possible formula, as an example:
(R/255*100)>6||(G/255*100)>6||(B/255*100)>6
How big the deviation must be is something you will have to test.
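Since you are on Python, a minimal sketch of that idea with Pillow and pyzbar (the filename and the 128 threshold are only illustrative; tune the threshold to your colors):

from PIL import Image
from pyzbar.pyzbar import decode

# Hypothetical file: a blue-on-yellow barcode.
img = Image.open("colored_barcode.png").convert("L")   # grayscale

# Force the image to pure black and white so the bars have maximum contrast.
bw = img.point(lambda p: 255 if p > 128 else 0)

print(decode(bw))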

Is there a library for c++ (or a tool) to reduce colors in a PNG image with alpha values?

I have a PNG image with alpha values and need to reduce the number of colors. I need no more than 256 colors in the image, and so far everything I have tried (from Paint Shop to Leptonica, etc.) strips the alpha channel and makes the image unusable. Is there anything out there that does what I want?
Edit: I do not want to use an 8-bit palette. I just need to reduce the number of colors so that my own program can process the image.
Have you tried ImageMagick?
http://www.imagemagick.org/script/index.php
8-bit PNGs with alpha transparency will only render alpha correctly in newer web browsers.
Here are some tools and websites that do the conversion:
free pngquant
Adobe Fireworks
and website: http://www.8bitalpha.com/
Also, see similar question
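If a small script counts as a tool for you, Pillow can also quantize an RGBA image while keeping per-pixel alpha, at least with its fast-octree quantizer. A sketch, assuming Pillow 9.1 or newer and placeholder filenames:

from PIL import Image

img = Image.open("input.png").convert("RGBA")

# The fast octree quantizer accepts RGBA input; the result is a palette
# ("P" mode) image with at most 256 entries.
reduced = img.quantize(colors=256, method=Image.Quantize.FASTOCTREE)

reduced.save("reduced.png")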
The problem you describe is inherent in the PNG format. See the entry at Wikipedia and notice there's no entry in the color options table for Indexed & alpha. There's an ability to add an alpha value to each of the 256 colors, but typically only one palette entry will be made fully transparent and the rest will be fully opaque.
Paint Shop Pro has a couple of options for blending or simulating partial transparency in a paletted PNG - I know because I wrote it.

DICOM Image is too dark with ITK

I am trying to read an image with ITK and display it with VTK.
But there is a problem that has been haunting me for quite some time.
I read the images using the classes itkGDCMImageIO and itkImageSeriesReader.
After reading, I can do two different things:
1.
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslicer to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with colors. Some of them are shown very bright, others are so dark you can barely see them.
2.
The second scenario is the registration pipeline. Here, I read the image as before, then use the classes shown in the ITK Software Guide chapter about registration. Then I resample the image and use the itkImageSeriesWriter.
And that's when the problem appears. After writing the image to a file, I compare this new image with the image I used as input in the XMedcon software. If the image I wrote was shown too bright in my software, there are no changes when I compare the two in XMedcon. Otherwise, if the image was too dark in my software, it appears all messed up in XMedcon.
I noticed, when comparing both images (the original and the new one) that, in both cases, there are changes in modality, pixel dimensions and glmax.
I suppose the problem is with the glmax, as the major changes occur with the darker images.
I really don't know what to do. Does this have something to do with color level/window? The strangest thing is that all the images are very similar, with identical tags, and only some of them display errors when shown/written.
I'm not familiar with the particulars of VTK/ITK specifically, but it sounds to me like the problem is more general than that. Medical images have a high dynamic range and often the images will appear very dark or very bright if the window isn't set to some appropriate range. The DICOM tags Window Center (0028, 1050) and Window Width (0028, 1051) will include some default window settings that were selected by the modality. Usually these values are reasonable, but not always. See part 3 of the DICOM standard (11_03pu.pdf is the filename) section C.11.2.1.2 for details on how raw image pixels are scaled for display. The general idea is that you'll need to apply a linear scaling to the images to get appropriate pixel values for display.
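As a rough illustration of that linear scaling (not VTK/ITK-specific), here is a small sketch assuming the raw slice is already a NumPy array of rescaled values; the example window/level values are the ones suggested in the answer further down:

import numpy as np

def apply_window(pixels, center, width, out_min=0, out_max=255):
    # Linear DICOM windowing: map raw values into the displayable range.
    scaled = (pixels.astype(np.float32) - (center - 0.5)) / (width - 1) + 0.5
    display = np.clip(scaled, 0.0, 1.0) * (out_max - out_min) + out_min
    return display.astype(np.uint8)

# Example: windowed = apply_window(raw_slice, center=50, width=350)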
What pixel types do you use? In most cases, it's simpler to use a floating point type while using ITK, but raw medical images are often in short, so that could be your problem.
You should also write the image to the disk after each step (in MHD format, for example), and inspect it with a viewer that's known to work properly, such as vv (http://www.creatis.insa-lyon.fr/rio/vv). You could also post them here as well as your code for further review.
Good luck!
For what you describe as your first issue:
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslicer to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with colors. Some of them are shown very bright, others are so dark you can barely see them.
I suggest the following: check your window/level settings in VTK; they probably aren't adequate for your images. If they are abdominal tomographies, window = 350 and level = 50 should be a nice setting.
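In VTK that is just two calls on the image mapper. A minimal sketch with the Python bindings, using the values suggested above (the mapper would of course be the one already in your pipeline):

import vtk

mapper = vtk.vtkImageMapper()
mapper.SetColorWindow(350)   # window width
mapper.SetColorLevel(50)     # window center / level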

Resources