PyQt Keeping QLabel Size Aspect Ratio - python-3.x

I have a QLabel which displays an image. Currently, I have the image set to keep its aspect ratio and grow as big as it can within the QLabel.
Is there any way I can also set the QLabel to maintain the image's aspect ratio? I do not want "blank" QLabel space on either side of the image when the label is wider than the image.
I have been looking for any sort of QLabel property that would allow me to set the aspect ratio of the label, but have not managed to get anything to do what I wanted to do.
All the answers I have seen relate to keeping the aspect ratio of a resized QPixmap image, but not of the QLabel containing it.
Any help would be great!
Cheers
FP

I seem to have cracked it, so in case anyone else is wondering how to do this:
I took tmoreau's solution and modified it slightly. For it to work, you need to set the QLabel's maximum size to the image's new size prior to the paint event. Immediately afterwards, you need to reset the QLabel's maximum size to something very large; otherwise you will not be able to enlarge the image at all, since the maximum size would remain that of the current image.
def paintEvent(self, event):
    size = self.size()
    painter = QtGui.QPainter(self)
    scaledPix = self.pixmap.scaled(size, Qt.KeepAspectRatio,
                                   transformMode=Qt.SmoothTransformation)
    # Clamp the label to the scaled pixmap so there is no blank space.
    self.setMaximumSize(scaledPix.size())
    painter.drawPixmap(QtCore.QPoint(0, 0), scaledPix)
    # Restore a large maximum size so the image can be enlarged again.
    self.setMaximumSize(QtCore.QSize(4000, 5000))
If anyone has a better solution, by all means please let me know!
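For context, here is a minimal sketch of a complete label subclass built around that paintEvent (PyQt4-style imports assumed; the class name AspectRatioLabel and the 4000x5000 reset size are just illustrative, not part of the original answer):
from PyQt4 import QtCore, QtGui
from PyQt4.QtCore import Qt

class AspectRatioLabel(QtGui.QLabel):
    """A QLabel that shows a pixmap scaled to fit, without blank space."""

    def __init__(self, pixmap, parent=None):
        super(AspectRatioLabel, self).__init__(parent)
        self.pixmap = pixmap
        self.setMinimumSize(1, 1)  # let the layout shrink the label freely

    def paintEvent(self, event):
        painter = QtGui.QPainter(self)
        scaledPix = self.pixmap.scaled(self.size(), Qt.KeepAspectRatio,
                                       transformMode=Qt.SmoothTransformation)
        # Clamp the label to the scaled image so no blank space is shown.
        self.setMaximumSize(scaledPix.size())
        painter.drawPixmap(QtCore.QPoint(0, 0), scaledPix)
        # Restore a large maximum so the label can grow again on resize.
        self.setMaximumSize(QtCore.QSize(4000, 5000))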

Related

Kivy rotation during movement

I'm struggling with how to properly implement simultaneous movement and rotation using Kivy (in Python, not kv lang). My goal is to rotate an object so it faces its destination, then move it towards the destination using Animation. Using the code below, I typically get movement relative to the rotated angle instead of relative to my general playing area. For example, without rotation the animation might move an image to point [1, 1], whereas with a rotation of 180 degrees the animation moves the image to [-1, -1]. The image is properly rotated in this scenario, meaning it's facing the right way but going the wrong way.
My understanding is that the push/pop matrix functions should provide the appropriate context for the animation rather than the rotated element context. Because the PopMatrix function is happening in Canvas.after it seems like this has no effect - my animation is completed before the original Matrix is restored.
I'm lacking some key piece of information here that's causing a lot of headache. Can someone explain why the code below causes an image to move to (-1,-1) rather than the (1,1) indicated, and how to properly implement this?
I threw this code together as an example, my game code is far more complex. That's why I'm hoping for an explanation rather than a direct solution for my code. Thanks.
with self.image.canvas.before:
    PushMatrix()
    self.rot = Rotate()
    self.rot.axis = (0, 0, 1)
    self.rot.origin = self.center
    self.rot.angle = 180
with self.image.canvas.after:
    PopMatrix()
self.anim = Animation(pos=(1, 1), duration=1)
self.anim.start(self)
self.image.pos = self.pos
self.image.size = self.size
In case others are interested in how to make this work consistently: I've found that setting the origin and angle on each frame, along with binding the image widget to any pos change on its parent, will ensure the widget moves with its parent and in the proper direction. I implemented that like this:
Instantiate the image like this:
with self.image.canvas.before:
    PushMatrix()
    self.rot = Rotate()
    self.rot.axis = (0, 0, 1)
    self.rot.origin = self.center
    self.rot.angle = 0
with self.image.canvas.after:
    PopMatrix()
Bind it like this:
self.bind(pos=self.binding)

def binding(self, *args):
    self.image.center = self.center
    self.image.size = self.size
On each frame, call a function that does something like the following:
self.rot.origin = self.center
self.rot.angle = getangle()  # use a fixed angle or generate a new angle every frame
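Putting those pieces together, here is a minimal sketch of how it might all fit in one widget (the names RotatingSprite and get_angle, the sprite.png source, and the 60 fps clock are my own illustrative choices, not from the original code):
from kivy.clock import Clock
from kivy.graphics import PopMatrix, PushMatrix, Rotate
from kivy.uix.image import Image
from kivy.uix.widget import Widget

class RotatingSprite(Widget):
    """Sketch of a widget whose image rotates to face a heading while moving."""

    def __init__(self, **kwargs):
        super(RotatingSprite, self).__init__(**kwargs)
        self.image = Image(source='sprite.png')  # placeholder image
        self.add_widget(self.image)
        with self.image.canvas.before:
            PushMatrix()
            self.rot = Rotate(axis=(0, 0, 1), origin=self.center, angle=0)
        with self.image.canvas.after:
            PopMatrix()
        self.bind(pos=self.binding, size=self.binding)
        Clock.schedule_interval(self.update, 1 / 60.0)

    def binding(self, *args):
        # Keep the image glued to this widget as it moves or resizes.
        self.image.center = self.center
        self.image.size = self.size

    def update(self, dt):
        # Re-anchor the rotation every frame so it tracks the moving widget.
        self.rot.origin = self.center
        self.rot.angle = self.get_angle()

    def get_angle(self):
        # Placeholder: compute the facing angle towards the destination here.
        return 45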
Rotate effectively changes the coordinate system used by the entire canvas, so after you've rotated by 180 degrees the position [1, 1] really is in the opposite direction to what it was before, as far as any canvas instruction is concerned.
I don't know what self.image is (maybe an Image widget?), but presumably whatever you see is something like a Rectangle drawn on its canvas, whose pos and size match those of the widget. When you update that pos and size, the Rectangle is positioned according to the current coordinate system, which is in the rotated frame.
Thinking about it, I'm not sure there's a neat way to combine Rotate instructions with Kivy's high-level widget coordinates in quite this way. Of course you can work around it in various ways, such as by accounting for the rotation when setting the position of the Rectangle, but that's kind of fiddly, and inconvenient when working with prebuilt widgets. You can also look at what the Scatter widget does to enable arbitrary transformations.
If you just want to rotate by 180 degrees, you can instead adjust the image being displayed, either before displaying it or by adjusting the tex_coords of the Rectangle to change the displayed orientation. However, this won't work for arbitrary rotations, which it looks like you may want.
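As a rough illustration of the Scatter route mentioned above (an untested sketch; sprite.png and the target position are placeholders): a Scatter applies its rotation to its own children rather than to the parent's coordinate system, so animating its pos still moves it in parent coordinates.
from kivy.animation import Animation
from kivy.uix.image import Image
from kivy.uix.scatter import Scatter

# The Scatter's rotation applies to its children, not to the canvas the
# animation runs in, so pos=(100, 100) means (100, 100) in the parent.
scatter = Scatter(do_rotation=False, rotation=180, size_hint=(None, None))
scatter.add_widget(Image(source='sprite.png'))
Animation(pos=(100, 100), duration=1).start(scatter)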

python pygame what does font mean?

I am now studying an online pygame tutorial. However, I am not sure how it works when trying to place text on the screen. According to the official docs for pygame.font.SysFont():
Return a new Font object that is loaded from the system fonts. The font will match the requested bold and italic flags. If a suitable system font is not found this will fall back on loading the default pygame font. The font name can be a comma separated list of font names to look for.
What is a font?
font = pygame.font.SysFont(None, 25)

# message to the user
def message_to_screen(msg, color):
    screen_text = font.render(msg, True, color)
    screen.blit(screen_text, [screen_width / 2, screen_height / 2])
Ok, here is the simplest explanation I can give you:
Modules such as pygame are simply (or sometimes not so simply...) pieces of code that add new features and functions to the normal built-in Python functions. This means that when you import a module you also inherit all of its functions and classes. So for example, normal Python does not contain the function "draw":
pygame.draw.rect(arguments)
However, when you import pygame, you inherit that function from the pygame code, allowing you to draw and develop a GUI for better programs.
The same goes for objects. Python is an 'object-oriented programming language'. Objects are a type of data store that defines and structures your code. So for example, sprites in Pygame can be anything you want, from a monkey, or a freaking mummy-eating zombie, to a simple rectangle. To create the exact sprite you want, with the right shape, color, rect, and image, you need to structure it with a class. A class is what will create the object for your sprite. Look at this here:
# Here is the class named 'Button' of type 'pygame.sprite.Sprite'
class Button(pygame.sprite.Sprite):
    def __init__(self):
        super().__init__()
        # Here we define the shape of the sprite. In this case it is a simple
        # 150 by 75 rectangle surface. The shape can also be an image or any
        # geometric shape you want
        self.image = pygame.Surface((150, 75))
        # Here we define the color of the sprite
        self.image.fill(green)
        # And here we make sure the sprite has a rect
        self.rect = self.image.get_rect()
So as you can see, this class defines everything we need to create this simple sprite. Of course it can have many more variables depending on what the sprite is, but let's stick with something simple for now.
Now the class stores this information in an object, to be used later. Like this:
MySpriteObject = Button()
Simple enough, I would say. So now you have a sprite object and can use pygame's many functions to draw it on the screen, add interaction to it, group it, and a lot of other things.
So finally you understand the idea of an object in Python. Now to your actual question.
What is a font?
Well, a font is an object that you get when you import pygame. You don't have to do the class stuff at the top, as the pygame module does that for you. Just create the object and use the function 'render'. So essentially it is an object in which you can change two things as you like: the font and the size.
MyFontObject = pygame.font.Font(font_path, size)
If you pass None as the font_path argument, it will give you the default pygame font. That's what I usually do. However, if you want to change the font, you can either download a font (usually a .ttf) and pass its file path as the font_path argument, or you can use a font that is installed on your computer. To do that, instead of
MyFontObject = pygame.font.Font(font_path, size)
you use
MyFontObject = pygame.font.SysFont(font_name, size)
where font_name is the name of any font installed on your computer. To get a list of the names of fonts on your computer that pygame can identify:
pygame.font.get_fonts()
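For example, a quick way to peek at what is available (the exact output varies by machine):
import pygame

pygame.init()
print(pygame.font.get_fonts()[:10])     # first ten installed font names
print(pygame.font.match_font('arial'))  # path to a matching font file, or None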
Ok, so that is how you create the font object. Now to rendering it.
Rendering the font uses the font object to set the shape and color of the text you want to display. Here is how it's done:
text = MyFontObject.render(your_text, True, color)
screen.blit(text, (x, y))
Pretty self-explanatory, except for the True/False argument, I guess. That pretty much asks whether you want antialiasing, a technique that makes the text look less pixelated and square-like. If you pass True it will be used. If you don't, the text will look awful, so always keep it True.
So that's pretty much all I have to say. Here is a short summary:
1.) An object is a type of data store which stores different variables to structure and define your code. A font object is therefore an object that defines the different properties of a font, such as size and font type.
2.) To create an object we use a class, as shown above.
3.) A font class already comes with the pygame module, so you can create the font object straight away:
MyFontObject = pygame.font.Font(font_path, size)  # font_path can be None for the default pygame font
or, for a font that is installed on your system, such as Arial (which can all be viewed with pygame.font.get_fonts()):
MyFontObject = pygame.font.SysFont(font_name, size)
4.) To put this font object to use, you render it:
text = MyFontObject.render(your_text, True, color)
then normally blit it on the screen and call pygame.display.update():
screen.blit(text, (x, y))
pygame.display.update()
I hope this helps. I know I'm not the best explainer and I write too much, but you should read the summary at least.
P.S.: Sorry for using sprites to explain classes and objects. I know I went off topic, but it was just an example.
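To tie the summary together, here is a minimal runnable sketch (the window size, colors, coordinates, and message text are arbitrary choices of mine):
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
font = pygame.font.SysFont(None, 25)  # None falls back to the default font

# Render once; True enables antialiasing, the last argument is the color.
text = font.render("Hello, pygame!", True, (255, 255, 255))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))
    screen.blit(text, (100, 100))
    pygame.display.update()

pygame.quit()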

Android image Layout

We have a layout container of width = 200px and height = 800px, and images of unequal heights and widths. We want every image sized so that it fits in the layout container. How can we achieve this?
Thanks.
First of all, you should use dp instead of px in Android. Suppose you have an ImageView of 200dp x 200dp in your XML file: you can set its scaleType property to fitXY or any other scale type that suits you, and I suggest you also set adjustViewBounds="true" so that any image will fit it properly. You can learn more about scaleType here.
I also suggest you read this blog to understand the concept behind it.
And also see these answers:
https://stackoverflow.com/a/6670069/5476209
https://stackoverflow.com/a/10124018/5476209
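For illustration, an ImageView along those lines might look like this in layout XML (the attribute values below are placeholders of mine, not your real sizes or drawable):
<ImageView
    android:layout_width="200dp"
    android:layout_height="wrap_content"
    android:adjustViewBounds="true"
    android:scaleType="fitCenter"
    android:src="@drawable/sample_image" />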

Generate an image of all widgets within a scrollarea

BACKGROUND:
I have a python program that is being used by a number of engineers. It indicates the status of some piece of equipment under test.
I am using a QScrollArea() to contain a QGridLayout which is packed with a lot of information.
bit_grid = QtGui.QGridLayout()
...
scroll = QtGui.QScrollArea()
info = QtGui.QWidget()
info.setLayout(bit_grid)
scroll.setWidget(info)
There are quite a few status indicators on the GUI, and as such the scrollbar is used to ensure the GUI fits on one screen.
When an engineer wants to describe a failure, what they do right now is take multiple screenshots, one for each newly displayed area of the scroll area, which are then stitched together into one large image.
Is there a way to generate a png (or any image format) of the full area that could be displayed within a QScrollArea?
Try this:
pixmap = QtGui.QPixmap.grabWidget(scroll)
pixmap.save('path/to/file.png', None, 100)
This snippet will take a snapshot of the scroll area as it appears on screen and save it as a png image at path/to/file.png.
ok solved.
widget = self.scroll.widget()
pixmap = QtGui.QPixmap(widget.size())
widget.render(pixmap)
pixmap.save(filename, 'PNG', 100)
The key was to grab the widget inside the scroll area, as this could then be (virtually) rendered in full. The resulting pixmap could then be saved.
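For reuse, that solution could be wrapped in a small helper along these lines (PyQt4 assumed; the function name save_scroll_contents is mine):
from PyQt4 import QtGui

def save_scroll_contents(scroll, filename):
    """Render the full content widget of a QScrollArea to an image file."""
    widget = scroll.widget()           # the content widget, not just the viewport
    pixmap = QtGui.QPixmap(widget.size())
    widget.render(pixmap)              # paint the whole widget off-screen
    return pixmap.save(filename, 'PNG', 100)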

How to blend color of two sprites with constant alpha in DirectX?

Essentially, what I want to do (in DirectX) is to take two partially-transparent images and blend them together. This works fine with default blending, insofar as they both show up as overlapping, etc. However, the problem is that the opacity goes up markedly where the two intersect. This causes increasing problems as more sprites overlap. What I'd like to do is keep the blending the same, except keep a global opacity for all these sprites being blended, regardless of how they overlap.
Seems like there would be a render setting for this (all of these sprites are alone in their sprite batch, which keeps that part easy), but if so I don't know it. Right now I'm kind of shooting in the dark, and I've tried a lot of different things and none of them have looked right at all. I know I probably need some sort of variant of D3DBLENDOP, but I just don't know what sort of settings there I really need (I have tried many things, but it is all guessing at this stage).
Here is a screenshot of what is actually happening with standard blending (the best I can get it): http://arcengames.com/share/FFActual.png Here is a screenshot with a mockup of how I would want the blending to turn out (the forcefields were added to the same layer in Photoshop, then given a shared alpha value): http://arcengames.com/share/FFMockup.png
This is how I did it in Photoshop:
1. Take the two images, and remove all transparency (completely transparent pixels excepted).
2. Combine them into one layer, which blends the color but which has no partial alpha at all.
3. Now set the global transparency for that layer to (say) 40%.
The result is something that looks kind of blended together color-wise, but which has no increase in opaqueness on the overlapped sections.
UPDATE: Okay, thanks very much to Goz below, who suggested using the Z-Buffer. That works! The blending, by and large, is perfect and just what I would want. The only remaining problem? Using that new method, there is a huge artifact around the edge of the force field image that is rendered last. See this: http://www.arcengames.com/share/FFZBuffer.png
UPDATE: Below is the final solution in C# (SlimDX)
Clearing the ZBuffer to black, transparent, or white once per frame all has the same effect (this is right before BeginScene is called)
Direct3DWrapper.ClearDevice( SlimDX.Direct3D9.ClearFlags.ZBuffer, Color.Transparent, 0 );
All other sprites are drawn at Z=1, with the ZBuffer disabled for them:
device.SetRenderState( RenderState.ZEnable, ZBufferType.DontUseZBuffer );
The force field sprites are drawn at Z=2, with the ZBuffer enabled and ZWrite enabled and ZFunc as Less:
device.SetRenderState( RenderState.ZEnable, ZBufferType.UseZBuffer );
device.SetRenderState( RenderState.ZWriteEnable, true );
device.SetRenderState( RenderState.ZFunc, Compare.Less );
The following flags are also set at this time, to prevent the black border artifact I encountered:
device.SetRenderState( RenderState.AlphaTestEnable, true );
device.SetRenderState( RenderState.AlphaFunc, Compare.GreaterEqual );
device.SetRenderState( RenderState.AlphaRef, 55 );
Note that AlphaRef is at 55 because of the alpha levels set in the specific source image I was using. If my source image had a higher alpha value, then the AlphaRef would also need to be higher.
Best I can tell, the forcefields are a whole object. Why not render them last, in front-to-back order, with Z-buffering enabled? That will give you the effect you are after.
I.e. it's not your blending settings that are the problem at all.
Edit: Can you use render-to-texture then? If so, you could easily do what you did in Photoshop: render them all together into the texture and then blend the texture back over the screen.
Edit2: How about
ALPHATESTENABLE = TRUE;
ALPHAFUNC = LESS
ALPHABLENDENABLE = TRUE;
SRCBLEND = SRCALPHA;
DESTBLEND = INVSRCALPHA;
SEPARATEALPHABLENDENABLE = TRUE;
SRCBLENDALPHA = ONE;
DESTBLENDALPHA = ZERO;
You need to make sure the alpha is cleared to 0xff in the frame buffer each frame. You then do the standard alpha blend while passing the alpha value straight through to the backbuffer. This is, though, where the alpha test comes in. You test the final alpha value against the one in the back buffer. If it is less than what's in the backbuffer, that pixel has not been blended yet and will be put into the frame buffer. If it is equal (or greater), it HAS been blended already and the alpha value will be discarded.
That said, using a Z-buffer would cost you a load of RAM but would be faster overall, as it would be able to throw away the pixels far earlier in the pipeline. Seeing as all the shields would just need to be written to a given Z-plane, you wouldn't even need to go through the hell I suggested earlier. If the Z value a pixel receives is less than what's there already, it will be rendered; if it is greater or equal, it will be discarded, fortunately before the blend calculation is ever performed.
That said, you could also do it using the stencil buffer, which would require a Z-buffer anyway.
Anyway, hope one of those methods is of some help.
Edit3: Do you render the forcefield with some form of feathering around the edge? Most likely that edge is caused by the fact that the alpha fades off slightly, and then the "slightly alpha" pixels get written to the Z-buffer, so any subsequent draw doesn't overwrite them.
Try the following settings
ALPHATESTENABLE = TRUE
ALPHAFUNC = GREATEREQUAL // if this doesn't work, try LESS
ALPHAREF = 255
To fine-tune the feathering around the edge, adjust the AlphaRef, but I suspect you need to keep it as above.
You can specify the D3DBLENDOP used when blending the two images together for the alpha channel. It sounds like you're using D3DBLENDOP_ADD currently; try switching this to D3DBLENDOP_MAX, as that will just use the opacity of the "most opaque" image.
It is hard to tell exactly what you are trying to accomplish from your mockup, since both forcefields are the same color. Do you want to blend the colors and cap the alpha? Or just take one of the colors?
Based on the above discussion it isn't clear whether you are setting all the relevant render states:
D3DRS_ALPHABLENDENABLE = TRUE (default: FALSE)
D3DRS_BLENDOP = D3DBLENDOP_MAX (default: D3DBLENDOP_ADD)
D3DRS_SRCBLEND = D3DBLEND_ONE (default: D3DBLEND_ONE)
D3DRS_DESTBLEND = D3DBLEND_ONE (default: D3DBLEND_ZERO)
It sounds like you are setting the first two, but what about the last two?
