I'm trying to create text surfaces that are of the same size no matter the text. In other words: I want longer text to have smaller font size and shorter text to have bigger font size in order to fit the text to an already existing Surface.
To create text in pygame I am:
Creating a font object. For example: font = pygame.font.SysFont('Arial', 32)
Creating a text surface from the font object. For example: text = font.render('My text', True, (255, 255, 255))
Blitting the text surface.
The problem is that I first need to create a font object of a certain size before creating the text surface. I've created a function that does what I want:
import pygame

def get_text(surface, text, color=(255, 255, 255), max_size=128, font_name='Arial'):
    """
    Returns a text surface that fits inside given surface. The text
    will have a font size of 'max_size' or less.
    """
    surface_width, surface_height = surface.get_size()
    lower, upper = 0, max_size
    while True:
        font = pygame.font.SysFont(font_name, max_size)
        font_width, font_height = font.size(text)
        if upper - lower <= 1:
            return font.render(text, True, color)
        elif max_size < 1:
            raise ValueError("Text can't fit in the given surface.")
        elif font_width > surface_width or font_height > surface_height:
            upper = max_size
            max_size = (lower + upper) // 2
        elif font_width < surface_width or font_height < surface_height:
            lower = max_size
            max_size = (lower + upper) // 2
        else:
            return font.render(text, True, color)
Is there any other way to solve this problem that's cleaner and/or more efficient?
This, unfortunately, seems to be the most appropriate solution. Font sizes are more of an approximation and differ between fonts, so there isn't a uniform way to calculate the area a specific font will take up. Another problem is that certain characters differ in size in certain fonts.
Having a monospace font would theoretically make the calculation more efficient: just divide the surface's width by the string's length and check which monospace font size covers that per-character width, as in the sketch below.
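A minimal sketch of that monospace idea (assuming a 'Courier New' system font is available and that glyph width scales roughly linearly with point size; the helper name is just for illustration):
import pygame

pygame.init()

def monospace_font_size(text, surface_width, font_name='Courier New', max_size=128):
    # Measure one glyph at a reference size; with a monospace font every glyph
    # has the same width, and that width grows roughly linearly with point size.
    ref_size = 100
    char_width, _ = pygame.font.SysFont(font_name, ref_size).size('M')
    per_char = surface_width / max(len(text), 1)
    return min(max_size, max(1, int(ref_size * per_char / char_width)))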
You can re-scale the text image to fit:
def fit_text_to_width(text, color, pixels, font_face=None):
    # Start from a rough font size based on the target width and text length,
    # then scale the rendered surface to exactly that width, keeping the aspect ratio.
    font = pygame.font.SysFont(font_face, pixels * 3 // len(text))
    text_surface = font.render(text, True, color)
    size = text_surface.get_size()
    size = (pixels, int(size[1] * pixels / size[0]))
    return pygame.transform.scale(text_surface, size)
The Font object has two methods, size() -> (width, height) and metrics() -> list[(minx, maxx, miny, maxy, advance)], that sound useful for checking the size of the resulting text and of each character.
The simplest way would be to use size() and simply scale the rendered text down to the required width and height based on some ratio, for example screen_width / font.size(text)[0].
For breaking the text into lines, the metrics() method is needed: loop through the metrics list and split up the text based on the summed width for each line.
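A minimal sketch of the scaling idea (the font, text and target width here are placeholder values):
import pygame

pygame.init()
font = pygame.font.SysFont('Arial', 32)
text = 'Some example text'
target_width = 400

text_surface = font.render(text, True, (255, 255, 255))
width, height = font.size(text)
ratio = target_width / width
scaled = pygame.transform.smoothscale(text_surface, (target_width, int(height * ratio)))

# For line breaking: metrics()[i][4] is the horizontal advance of character i,
# so a running sum tells you where a line would exceed the target width.
advances = [m[4] for m in font.metrics(text)]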
I have this image, and I want to rotate it while keeping a smooth-looking edge.
I have tried the approach below, which adds some transparent borders to the image so that the interpolation of the rotation can sample both the transparent padding and the opaque image intensities when it renders the edge.
import numpy as np
import PIL
from PIL import Image, ImageOps

img = Image.open("sunset200x100.jpg")
im_array = np.asarray(img)
w, h = img.size
padding = 4
new_padded_size = (w + padding, h + padding)
img = img.convert('RGBA')  # converting to RGBA adds transparency to the areas that aren't opaque
img = ImageOps.pad(img, size=new_padded_size)
im_array_rgba_padded = np.asarray(img)
rotated_im = img.rotate(56, expand=True, resample=PIL.Image.BICUBIC)
as_array = np.asarray(rotated_im)
#rotated_im.show()
rotated_im.save("rotated_sunset200x100_padded_with_2px.png")
However, it doesn't seem to do interpolation on the left and right sides of the image. Inspecting im_array_rgba_padded, I see that the first and last rows of pixels have been made all black, but the left and right haven't got the same zero padding.
So the result ends up looking like this:
I'm wondering how I can get the padding onto the left and right as well using the pad function, so that the left and right edges also look smooth, or why the padding is not applied to the left and right.
You can change your code like this:
import numpy as np
import PIL
from PIL import Image, ImageOps

img = Image.open("sunset200x100.jpg")
im_array = np.asarray(img)
w, h = img.size
print((w, h))
padding = 4
new_padded_size = (w + padding, h + padding)
img = img.convert('RGBA')  # converting to RGBA adds transparency to the areas that aren't opaque
# img = ImageOps.pad(img, size=new_padded_size)
img = ImageOps.expand(img, new_padded_size, fill='black')
im_array_rgba_padded = np.asarray(img)
rotated_im = img.rotate(56, expand=True, resample=PIL.Image.BICUBIC)
as_array = np.asarray(rotated_im)
rotated_im.show()
This works on Windows 10, Python 3.9, Pillow 8.3.
For more information, see the Pillow ImageOps documentation.
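As a variation on the above (not part of the original answer): since the goal is a transparent margin on every side, ImageOps.expand also accepts a single border width and an RGBA fill, so a sketch along these lines should pad all four edges uniformly:
import PIL
from PIL import Image, ImageOps

img = Image.open("sunset200x100.jpg").convert('RGBA')

# Add a 2-pixel fully transparent border on every side, then rotate so the
# bicubic resampling can blend the edge pixels against the transparency.
padded = ImageOps.expand(img, border=2, fill=(0, 0, 0, 0))
rotated = padded.rotate(56, expand=True, resample=PIL.Image.BICUBIC)
rotated.save("rotated_sunset200x100_transparent_border.png")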
I've got a binary image with an object and a rotated rectangle over it, found with cv2.findContours and cv2.minAreaRect. The image is normalized to [0, 1].
What is the most efficient way to count non-zero area within the bounding rectangle?
Create a new zero-valued Mat that has the same size as your original image.
Draw your rotated rectangle on it (using fillConvexPoly with the RotatedRect vertices).
Bitwise_and this image with your original mask.
Apply the findNonZero function to the result image.
You may also apply the previous steps to a ROI of the image, since you have the bounding box of your rotated rectangle.
Following Humam Helfawi's answer, I tuned the suggested steps a bit, and the following code seems to do what I need:
import cv2
import numpy as np

rectangles = [cv2.minAreaRect(cnt) for cnt in contours]
for rect in rectangles:
    rect = cv2.boxPoints(rect)
    rect = np.int0(rect)
    coords = cv2.boundingRect(rect)
    # Shift the rectangle's corners into the coordinate frame of its bounding box.
    rect[:, 0] = rect[:, 0] - coords[0]
    rect[:, 1] = rect[:, 1] - coords[1]
    area = cv2.contourArea(rect)
    # Rasterise the rotated rectangle as a filled mask of the bounding-box size.
    zeros = np.zeros((coords[3], coords[2]), np.uint8)
    cv2.fillConvexPoly(zeros, rect, 255)
    im = greyscale[coords[1]:coords[1] + coords[3],
                   coords[0]:coords[0] + coords[2]]
    # Count the non-zero pixels of the image that fall inside the rectangle.
    print(np.sum(cv2.bitwise_and(zeros, im)) / 255)
A contour is a list of points. You can fill this shape on an empty binary image of the same size using cv2.fillConvexPoly, and then use cv2.countNonZero or numpy.count_nonzero to get the number of occupied pixels.
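A minimal sketch of that approach (assuming 'mask' is the binary image and 'cnt' is one contour returned by cv2.findContours; both names are placeholders):
import cv2
import numpy as np

rect = cv2.minAreaRect(cnt)
box = cv2.boxPoints(rect).astype(np.int32)

# Fill the rotated rectangle on an empty image of the same size as the mask.
rect_mask = np.zeros(mask.shape, dtype=np.uint8)
cv2.fillConvexPoly(rect_mask, box, 1)

# Count the mask's non-zero pixels that fall inside the rectangle.
occupied = np.count_nonzero(np.logical_and(mask > 0, rect_mask > 0))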
I am trying to play with the HTML5 canvas, and I want to get the color for fillStyle right from my CSS, but also with some transparency. When I use jQuery to read the CSS style, an rgb value is returned instead of hex.
fillColor = $(".myClass").css("background-color"); // return rgb(x, x, x)
At first it looked convenient that I wouldn't need to convert it again, but I found that I cannot add an alpha to the rgb value, so I have to convert it to hex and then convert that to rgba with an alpha value.
function convertHexToRGB(hex)
{
    var red = hex.substr(1, 2), green = hex.substr(3, 2), blue = hex.substr(5, 2), alpha = arguments[1];
    var color = "rgba(" + parseInt(red, 16) + "," + parseInt(green, 16) + "," + parseInt(blue, 16) + "," + alpha + ")";
    return color;
}
Now that makes my code look ugly and inefficient. Is there any way to add an alpha value to an rgb value, or some function that converts rgb to rgba?
Couldn't you just retrieve the opacity value from your css like this:
fillOpacity = $(".myClass").css("opacity"); // return 0.x
And then translate the opacity value into the 'A' channel you need:
var alpha = fillOpacity * 255;
And then append that to your rgb value (in int form)?
EDIT:
I should mention that the HTML5 canvas element effectively works with bitmaps, so while you can do some compositing, there is no direct concept of layers: you can't (as far as I'm aware) tell a canvas element to be r,g,b,a, because there is nothing for it to combine with underneath. Unless, of course, you are trying to place a semi-opaque canvas over a background image of some form, or to combine an underlying image, via say multiply blending, with your original CSS colour to achieve the effect of a semi-transparent layer over an image.
I need to read TGAs with PyQt, and so far this seems to be working fine except where a TGA has 2 bytes per pixel, as opposed to 3 or 4. My code is taken from here: http://pastebin.com/b5Vz61dZ.
Specifically this section:
def getPixel(file, bytesPerPixel):
    'Given the file object f, and number of bytes per pixel, read in the next pixel and return a qRgba uint'
    pixel = []
    for i in range(bytesPerPixel):
        pixel.append(ord(file.read(1)))
    if bytesPerPixel == 4:
        pixel = [pixel[2], pixel[1], pixel[0], pixel[3]]
        color = qRgba(*pixel)
    elif bytesPerPixel == 3:
        pixel = [pixel[2], pixel[1], pixel[0]]
        color = qRgb(*pixel)
    elif bytesPerPixel == 2:
        # if greyscale
        color = QColor.fromHsv(0, pixel[0], pixel[1])
        color = color.value()
    return color
and this part:
elif bytesPerPixel == 2:
    # if greyscale
    color = QColor.fromHsv(0, pixel[0], pixel[1])
    color = color.value()
How would I use the pixel[0] and pixel[1] values to get the color in the correct format and color space?
Any thoughts, ideas or help please!!!
pixel = [pixel[1]*2, pixel[1]*2, pixel[1]*2]
color = qRgb(*pixel)
works for me. Correct luminance and all. Though I'm not sure doubling the pixel[1] value would work for all instances.
Thank you for all the help istepura :)
http://lists.xcf.berkeley.edu/lists/gimp-developer/2000-August/013021.html
"Pixels are stored in little-endian order, in BGR555 format."
So you have to take the "leftmost" 5 bits of pixel[1] as Blue, the remaining 3 bits plus the 2 "leftmost" bits of pixel[0] as Green, and the next 5 bits of pixel[0] as Red.
In your case, I suppose, the code should be something like:
pixel = [(pixel[1]&0xF8)>>3, ((pixel[1]&0x7)<<2)|((pixel[0]&0xC0)>>6), (pixel[0]&0x3E)>>1]
color = qRgb(*pixel)
Given an RGB value, like 168, 0, 255, how do I create tints (make it lighter) and shades (make it darker) of the color?
Among several options for shading and tinting:
For shades, multiply each component by 1/4, 1/2, 3/4, etc., of its previous value. The smaller the factor, the darker the shade.
For tints, calculate (255 - previous value), multiply that by 1/4, 1/2, 3/4, etc. (the greater the factor, the lighter the tint), and add that to the previous value (assuming each component is an 8-bit integer).
Note that color manipulations (such as tints and other shading) should be done in linear RGB. However, RGB colors specified in documents or encoded in images and video are not likely to be in linear RGB, in which case a so-called inverse transfer function needs to be applied to each of the RGB color's components. This function varies with the RGB color space. For example, in the sRGB color space (which can be assumed if the RGB color space is unknown), this function is roughly equivalent to raising each sRGB color component (ranging from 0 through 1) to a power of 2.2. (Note that "linear RGB" is not an RGB color space.)
See also Violet Giraffe's comment about "gamma correction".
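A minimal sketch of that idea, using the rough 2.2-power approximation of sRGB described above (the function name and factor are just for illustration):
def shade_linear(rgb, factor):
    # Convert 8-bit sRGB to (approximately) linear light, scale each component,
    # then convert back to 8-bit sRGB.
    linear = [(c / 255.0) ** 2.2 for c in rgb]
    return tuple(round(255 * (c * factor) ** (1 / 2.2)) for c in linear)

print(shade_linear((168, 0, 255), 0.5))  # a darker shade of the example colour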
Some definitions
A shade is produced by "darkening" a hue or "adding black"
A tint is produced by "lightening" a hue or "adding white"
Creating a tint or a shade
Depending on your Color Model, there are different methods to create a darker (shaded) or lighter (tinted) color:
RGB:
To shade:
newR = currentR * (1 - shade_factor)
newG = currentG * (1 - shade_factor)
newB = currentB * (1 - shade_factor)
To tint:
newR = currentR + (255 - currentR) * tint_factor
newG = currentG + (255 - currentG) * tint_factor
newB = currentB + (255 - currentB) * tint_factor
More generally, the color resulting from layering a color RGB(currentR,currentG,currentB) with a color RGBA(aR,aG,aB,alpha) is:
newR = currentR + (aR - currentR) * alpha
newG = currentG + (aG - currentG) * alpha
newB = currentB + (aB - currentB) * alpha
where (aR,aG,aB) = black = (0,0,0) for shading, and (aR,aG,aB) = white = (255,255,255) for tinting (a Python sketch of these RGB formulas follows the list below)
HSV or HSB:
To shade: lower the Value / Brightness or increase the Saturation
To tint: lower the Saturation or increase the Value / Brightness
HSL:
To shade: lower the Lightness
To tint: increase the Lightness
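As referenced above, a minimal Python sketch of the RGB shade, tint and layering formulas (the function names are just for illustration):
def shade(rgb, shade_factor):
    # Move each component towards black (0).
    return tuple(round(c * (1 - shade_factor)) for c in rgb)

def tint(rgb, tint_factor):
    # Move each component towards white (255).
    return tuple(round(c + (255 - c) * tint_factor) for c in rgb)

def layer(rgb, over_rgb, alpha):
    # General form: blend 'over_rgb' on top of 'rgb' with the given alpha.
    return tuple(round(c + (a - c) * alpha) for c, a in zip(rgb, over_rgb))

print(tint((168, 0, 255), 0.25))                     # lighter
print(shade((168, 0, 255), 0.25))                    # darker
print(layer((168, 0, 255), (255, 255, 255), 0.25))   # same result as the tint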
There exist formulas to convert from one color model to another. As per your initial question, if you are in RGB and want to use the HSV model to shade, for example, you can just convert to HSV, do the shading, and convert back to RGB. The conversion formulas are not trivial, but they can be found on the internet. Depending on your language, a conversion might also be available as a core function:
RGB to HSV color in javascript?
Convert RGB value to HSV
Comparing the models
RGB has the advantage of being really simple to implement, but:
you can only shade or tint your color relatively
you have no idea if your color is already tinted or shaded
HSV or HSB is kind of complex because you need to play with two parameters to get what you want (Saturation & Value / Brightness)
HSL is the best from my point of view:
supported by CSS3 (for webapp)
simple and accurate:
50% means an unaltered Hue
>50% means the Hue is lighter (tint)
<50% means the Hue is darker (shade)
given a color you can determine if it is already tinted or shaded
you can tint or shade a color relatively or absolutely (by just replacing the Lightness part)
If you want to learn more about this subject: Wiki: Colors Model
For more information on what those models are: Wikipedia: HSL and HSV
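To illustrate the HSL route (an absolute tint or shade by replacing the Lightness), a minimal sketch using Python's colorsys module, which works with HLS values in the 0-1 range (the function name is just for illustration):
import colorsys

def set_lightness(rgb, lightness):
    # Absolute tint/shade: keep Hue and Saturation, replace the Lightness.
    h, l, s = colorsys.rgb_to_hls(*(c / 255.0 for c in rgb))
    r, g, b = colorsys.hls_to_rgb(h, lightness, s)
    return tuple(round(255 * c) for c in (r, g, b))

print(set_lightness((168, 0, 255), 0.75))  # tinted (lighter)
print(set_lightness((168, 0, 255), 0.25))  # shaded (darker)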
I'm currently experimenting with canvas and pixels, and I'm finding this logic works out better for me:
Calculate the grey-ness (luma?) of both the existing value and the new 'tint' value.
Calculate the difference (I found I did not need to multiply).
Add the difference to offset the 'tint' value.
var grey = (r + g + b) / 3;
var grey2 = (new_r + new_g + new_b) / 3;

var dr = grey - grey2 * 1;
var dg = grey - grey2 * 1;
var db = grey - grey2 * 1;

tint_r = new_r + dr;
tint_g = new_g + dg;
tint_b = new_b + db;
or something like that...