How to add border to an image, choosing color dynamically based on image edge color(s)? - python-3.x

I need to add a border to each image in a set of tightly cropped logos. The border color should either match the most commonly used color along the edge, OR, if there are too many different edge colors, it should choose some sort of average of the edge colors.
Here's an example. In this case, I would want the added border to be the same color as the image "background" (using that term in the lay sense). A significant majority of the pixels along the edges are that color, and there are only two other colors, so the decision algorithm would be able to select that rather drecky greenish tan for the added border (not saying anything bad about the organization behind the logo, mind you).
Does Pillow have any functions to simplify this task?
I found answers that show how to use Pillow to add borders and how to determine the average color of an entire image. But I couldn't find any code that looks only at the edges of an image and finds the predominant color, which color could then be used in the border-adding routine. Just in case someone has already done that work, please point me to it. ('Edges' meaning bands of pixels along the top/bottom/left/right margins of the image, whose height or width would be specified as a percentage of the image's total size.)
Short of pointing me to a gist that solves my whole problem, are there Pillow routines that look at edges and/or that count the colors in a pixel range and put them into an array or what not?
I see here that OpenCV can add a border that duplicates the color of each outermost pixel along all four edges, but that looks funky; I want a solid-color border. And I'd prefer to stick with Pillow, unless another library can do the whole edge-color-analysis-and-add-border procedure in one step, more or less, in which case, please point it out.

Overwrite the center part of the image with some fixed color that, most likely, won't be present within the edge. For that, maybe use a color with a certain alpha value. Then, there's a function getcolors, which does exactly what you're looking for. Sort the resulting list and take the color with the highest count. (Often, that will be the color used to overwrite the center part, so check for that and take the second entry if needed.) Finally, use ImageOps.expand to add the actual border.
That'd be the whole code:
from PIL import Image, ImageDraw, ImageOps
# Open image, enforce RGB with alpha channel
img = Image.open('path/to/your/image.png').convert('RGBA')
w, h = img.size
# Set up edge margin to look for dominant color
me = 3
# Set up border margin to be added in dominant color
mb = 30
# On an image copy, set non edge pixels to (0, 0, 0, 0)
img_copy = img.copy()
draw = ImageDraw.Draw(img_copy)
draw.rectangle((me, me, w - (me + 1), h - (me + 1)), (0, 0, 0, 0))
# Count colors, first entry most likely is color used to overwrite pixels
n_colors = sorted(img_copy.getcolors(2 * me * (w + h) + 1), reverse=True)
dom_color = n_colors[0][1] if n_colors[0][1] != (0, 0, 0, 0) else n_colors[1][1]
# Add border
img = ImageOps.expand(img, mb, dom_color).convert('RGB')
# Save image
img.save('with_border.png')
That'd be the result for your example:
And, that's some output for another image:
It's up to you to decide whether there are several dominant colors that you want to mix or average. For that, you'd need to inspect the counts in n_colors more closely; that's quite a lot of work, which is left out here.
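If you do want to go down that road, here's a rough sketch (not part of the original answer) of one way to fall back to a count-weighted average when no single edge color clearly dominates; the 0.5 threshold is an arbitrary assumption:
# Sketch only: builds on n_colors from above; the 0.5 dominance threshold is arbitrary
edge_colors = [(cnt, col) for cnt, col in n_colors if col != (0, 0, 0, 0)]
total = sum(cnt for cnt, _ in edge_colors)
if edge_colors[0][0] / total > 0.5:
    # One color covers more than half of the edge band, use it directly
    dom_color = edge_colors[0][1]
else:
    # Otherwise take the count-weighted average over all four RGBA channels
    dom_color = tuple(
        round(sum(cnt * col[c] for cnt, col in edge_colors) / total)
        for c in range(4))
Either way, dom_color can then be passed to ImageOps.expand exactly as above.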
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.9.1
PyCharm: 2021.1
Pillow: 8.2.0
----------------------------------------

Related

Why is there a difference between "width" and "actualBoundingBoxLeft + actualBoundingBoxRight" in a "TextMetrics" object of an HTML5 canvas?

When I call measureText as shown in the snippet, I get the following result:
{
"width": 45.43333435058594,
"actualBoundingBoxLeft": 0,
"actualBoundingBoxRight": 45.35,
"actualBoundingBoxAscent": 18,
"actualBoundingBoxDescent": 0
}
Why is there a difference between width and actualBoundingBoxRight, if actualBoundingBoxLeft is zero?
c2d = document.getElementById('canvas').getContext('2d');
c2d.direction = 'ltr';
c2d.font = '24px serif';
console.log (c2d.measureText('TeX'));
<canvas id="canvas"></canvas>
Hope I get what you are asking, I'll give it a go.
The width property gives the text's advance width: the space taken, excluding the left- and right-side bearing.
– width attribute
The width of that inline box, in CSS pixels. (The text's advance width.)
– actualBoundingBoxLeft attribute
The distance parallel to the baseline from the alignment point given by the textAlign attribute to the left side of the bounding rectangle of the given text, in CSS pixels; positive numbers indicating a distance going left from the given alignment point.
Where it also gives the note:
The sum of this value and the next (actualBoundingBoxRight) can be wider than the width of the inline box (width), in particular with slanted fonts where characters overhang their advance width.
– actualBoundingBoxRight attribute
The distance parallel to the baseline from the alignment point given by the textAlign attribute to the right side of the bounding rectangle of the given text, in CSS pixels; positive numbers indicating a distance going right from the given alignment point.
Some examples
Take this Times 'f' with data from the canvas's measureText (normal left, italic right):
ERR: I switched places for the middle and bottom width lines (the blue ones) but did not update the labels: "middle" is "bottom" and "bottom" is "middle" in the picture. I'll try to find time to upload a new one later.
Especially the slanted version shows this well. The blue line following the textBaseline (gray horizontal line) and starting at textAlign (gray vertical line) shows the width value for the glyph. That is how much the font advances the "type head".
Bounding box left/right are the extremes of the horizontal extent, if one looks at it as a rectangle. The same goes for ascent and descent: they are the extremes up/down. But since fonts "overlap" (kerning etc.), this is not a factor for width, which represents the advance width.
The sum of the box widths is 111 + 39 = 150, but the width is only 72.28.
As for your sample, it is harder to catch with such a small font (relatively speaking). Increasing the font size to 1024px or whatever gives a clearer result; the fractions and path calculations are so small that one will miss subtle sub-pixel differences. With 1024px:
actualBoundingBoxAscent : 747
actualBoundingBoxDescent : 14
actualBoundingBoxLeft : -10
actualBoundingBoxRight : 1933.5
width : 1938.5
The difference between that bounding box sum (1933.5 + -10 = 1923.5) and width (1938.5) is still small, considering the total width, but at least it is present in the returned object.
Another sample with +:
As one can observe, the glyph advances the text a lot more than what it occupies in painted pixels. One can even have cases where a glyph does not advance the text at all. Such glyphs can still stand alone in a text, but their definition applies to the previous glyph in a way ... For example, dấu hỏi, or hook above, has zero width.
But some are still defined as advancing characters, for example:
Also interesting with that sample is how the descent is negative (not going down below the textBaseline) and the ascent is also present. It is logical when one looks at it, but it can be a gotcha.
I could scale up the test on the canvas, but I would have to look at it more closely; it has been way too long since I worked on canvases. This is a close view, but I have not validated or checked how precise (down to the pixel) the lines are.
If it is correct, it shows a subtle difference where the width advances at the end of "TeX" using a 24px font.

How can I force the imagemagick module of Node.js to output one single image only?

I am using the imagemagick module with Node.js:
im = require('imagemagick');
The imagemagick module uses the imagemagick command line tools.
I use the convert method to crop an image
im.convert([image_path, '-crop', '200x150', '-gravity', 'center', target_path],
function(err, stdout){}
);
This results in two images: one with the cropped image area, and a second with the image garbage I tried to get rid of.
How can I force imagemagick to output one image file only?
Per the imagemagick documentation for cropping, which is admittedly a little obtuse (emphasis added):
The width and height of the geometry argument give the size of the image that remains after cropping, and x and y in the offset (if present) gives the location of the top left corner of the cropped image with respect to the original image.
...
If the x and y offsets are present, a single image is generated, consisting of the pixels from the cropping region.
...
If the x and y offsets are omitted, a set of tiles of the specified geometry, covering the entire input image, is generated.
... so, you just need to specify your x and y offsets as part of your geometry argument, like so: 200x150-100-75
Notice that I've specified -100 and -75 for the upper left corner of your crop region. This is because you set your gravity to center, but it appears that imagemagick tries to intelligently determine the appropriate offset target based on your gravity, and I don't see exactly how it behaves when you choose center. So you may have to play around with this one a bit, or you could omit the gravity and use the actual offset from the top left corner of your original image.
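For illustration only (this isn't specific to the Node module), here's a minimal sketch of the equivalent convert call with explicit offsets, driven from Python's subprocess; the file names are placeholders and ImageMagick's convert is assumed to be on the PATH:
import subprocess

# Sketch: giving the x/y offsets in the geometry makes convert emit a single
# cropped image instead of a set of tiles ('input.png'/'output.png' are placeholders)
subprocess.run(
    ['convert', 'input.png',
     '-gravity', 'center',
     '-crop', '200x150-100-75',
     'output.png'],
    check=True)
With the Node module, the same geometry string would simply replace '200x150' in the im.convert argument list.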
I had to use the +delete parameter to remove the last image from the image sequence.
im.convert([image_file.path, '-crop', geometry, '+delete', thumb_path], ...

Remove the picture edges

I downloaded an icon, and now I want to change its color, but I'm not good at Photoshop. I've set its color to red, but there are too many ragged edges and corners. Please tell me how to remove those edges using Photoshop, step by step. Thanks a lot.
Here is the icon I downloaded:
And this is my ugly one:
The best way to alter a single color on a simple image such as this is to adjust the Hue and Saturation [Ctrl/Cmd + U]...
This allows you greater color control and keeps the anti-aliased edges of the image intact.
Most beginners alter colors like this by simply selecting the color with the wand, or using the paint bucket on the color. Unfortunately this usually does one of 2 things:
Makes the ragged edges that you saw.
Leaves a halo of the old color as an orphan.
I did this in a few seconds with that tool:

Color Space Inversion for contrasting grid

I have a randomly colored background that is split into solid colored rectangles. I want to draw a grid over the rectangles (this is not the problem). The issue is that, because of the random colors, I cannot hard-code the grid color, since it may not show up.
Another way to think about this is plotting a grid on a plot of a surface f(x,y). If the grid color happens to be the same color as the function (however it is defined), then it won't be visible.
I would like to take the background color and compute a new color (either grayscale or similar to the background color) that is contrasted with the color so it can easily be seen (but not distracting such as pure white on pure black).
I've tried using the luminance and weighted luminance but it doesn't work well for all colors. I've also tried gamma correcting the colors but it also does not work well.
I would also like the grid color to be as uniform as possible (I could possibly compute the adjacent grid colors to blend in). It is not that important but would be nice to have some uniformity.
The code I'm working with is based around
//byte I = (byte)(0.2*R + 0.7*G + 0.1*B);
//byte I = (byte)((R + G + B)/3.0);
byte I = (byte)(Math.Max(Bar.Background.R, Math.Max(Bar.Background.G, Bar.Background.B)));
if (I < 120)
    I = (byte)(I + 30);
else
    I = (byte)(I - 30);
//I = (byte)(Math.Pow(I/255.0, 1/2.0)*255);
I've also tried gamma correcting the rgb's first.
Anyone have any ideas?
The colors that offer the most contrast are colors that are fully saturated. This offers you a way to find a color that may work (but not necessarily, for many reasons). Essentially, you pick the color that is furthest away along the line connecting the color and the fully saturated color.
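To make that concrete, here is a small Python sketch of one way to read that advice; the candidate set (the six fully saturated corners of the RGB cube) and the Euclidean distance metric are my assumptions, not something spelled out in the answer:
def contrast_color(r, g, b):
    # Candidate set (assumed): the six fully saturated hues at full value
    candidates = [(255, 0, 0), (255, 255, 0), (0, 255, 0),
                  (0, 255, 255), (0, 0, 255), (255, 0, 255)]
    # Pick the candidate farthest (Euclidean distance) from the background color
    return max(candidates,
               key=lambda c: (c[0] - r) ** 2 + (c[1] - g) ** 2 + (c[2] - b) ** 2)

print(contrast_color(30, 30, 30))  # dark gray background -> (255, 255, 0); ties broken by list order
Such a color will always be visible, though, as noted above, a fully saturated grid may be more distracting than you want.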

SetWorldTransform() and font rotation

I'm trying to display text on a Windows control, rotated by 90 degrees, so that it reads from 'bottom to top' so to speak; basically it's the label on the Y axis of a graph.
I got my text to display vertically by changing my coordinate system for the DC by using SetGraphicsMode(GM_ADVANCED) and then using
XFORM transform;
const double angle = 90 * (boost::math::constants::pi<double>() / 180);
transform.eM11 = (FLOAT)cos(angle);
transform.eM12 = (FLOAT)(-sin(angle));
transform.eM21 = (FLOAT)sin(angle);
transform.eM22 = (FLOAT)cos(angle);
transform.eDx = 0.0;
transform.eDy = 0.0;
dc.SetWorldTransform(&transform);
Now when I run my program, the rotated text looks different from the same text when it's shown 'normally' (horizontally). I've tried with a fixed-width (system) font and the default WinXP font. The system font comes out looking anti-aliased, and the other one looks almost as if it's being drawn in a 1-pixel-smaller font than the horizontal version, although they are drawn using the same DC and with no font changes in between. It looks as if Windows detects that I'm drawing a font not along the normal (0 degrees) axis and tries to 'optimize' by anti-aliasing.
Now I don't want any of that. I just want the same text that I draw horizontally to be drawn exactly the same, except rotated 90 degrees, which should be possible since it's a rotation of exactly 90 degrees. Does anyone know what's going on and whether I can change this easily to work as I want? I'd hate to have gone through all this trouble only to find out that I have to resort to rendering to an off-screen bitmap, rotating it with a simple pixel-by-pixel rotation, and having to bitblt that into my control :(
Have you tried setting the nEscapement and nOrientation parameters when you create the font instead of using SetWorldTransform? See CreateFont for details.
