I need to crop an image.
The image is displayed on a QLabel, and I have prepared a button to start cropping and a button to save the cropped image.
The goal is that after pressing the crop button, pressing and releasing the mouse draws a frame on the original image, selects the area I want, and displays it; finally, the cropped image is saved locally through the save-image button. I also have multiple images that need to be cropped; they sit on separate pages of a QGroupBox.
This is my Qt interface diagram:
Here is part of my code:
self.labels_cut5678 = [self.label_cutphoto5, self.label_cutphoto6, self.label_cutphoto7, self.label_cutphoto8]
self.pushButton_cut.clicked.connect(self.pushButton_cut_clicked)
self.pushButton_save.clicked.connect(self.pushButton_save_clicked)
...

def photoload_right(self):
    """Load the right-eye images (4 of them)."""
    # Open a file dialog: the first return value imgName holds the selected file
    # paths + names, the second return value imgType holds the selected type filter
    imgName, imgType = QFileDialog.getOpenFileNames(self.centralwidget, "Open images", "", "*.jpg;;*.png;;All Files(*)")
    for i in range(len(imgName)):
        jpg_cut = QtGui.QPixmap(imgName[i]).scaled(self.labels_cut5678[i].size(), Qt.KeepAspectRatio)
        self.labels_cut5678[i].setPixmap(jpg_cut)
        # self.labels_cut5678[i].repaint()
        self.labels_cut5678[i].adjustSize()
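Something like the rough QRubberBand sketch below is the kind of press-and-drag selection I have in mind (a sketch only, assuming PyQt5; the class and names are placeholders, not my working code):

from PyQt5 import QtCore, QtWidgets

class CropLabel(QtWidgets.QLabel):
    """QLabel that lets the user drag a rubber band and remembers the selection."""

    def __init__(self, parent=None):
        super().__init__(parent)
        self.rubber = QtWidgets.QRubberBand(QtWidgets.QRubberBand.Rectangle, self)
        self.origin = QtCore.QPoint()
        self.selection = QtCore.QRect()

    def mousePressEvent(self, event):
        # start the rubber band where the mouse is pressed
        self.origin = event.pos()
        self.rubber.setGeometry(QtCore.QRect(self.origin, QtCore.QSize()))
        self.rubber.show()

    def mouseMoveEvent(self, event):
        # stretch the rubber band while dragging
        self.rubber.setGeometry(QtCore.QRect(self.origin, event.pos()).normalized())

    def mouseReleaseEvent(self, event):
        # keep the selected rectangle and hide the band
        self.selection = self.rubber.geometry()
        self.rubber.hide()

    def cropped(self):
        # return the part of the displayed pixmap inside the last selection
        if self.pixmap() is None or self.selection.isNull():
            return None
        return self.pixmap().copy(self.selection)

The save button could then call cropped() and save() the returned pixmap. The selection is in label coordinates, so if the pixmap was scaled for display it would still need to be mapped back to the original resolution before cropping at full quality.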
I'm new to Qt and trying to figure out how to position graphics items.
For example, I want to overlay some text onto a video. However, instead of being overlaid, the text is stacked vertically above the video.
My code is below. I've tried setting the position of the video/text items (e.g. video.setPos(0, 0)) but it didn't work. I also tried using QGraphicsLayout but ran into problems adding the video/text items.
import sys
from PySide6 import QtWidgets, QtCore, QtMultimedia, QtMultimediaWidgets


class MyWidget(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        view = QtWidgets.QGraphicsView(self)

        # Create a Qt media player
        player = QtMultimedia.QMediaPlayer(self)
        player.setSource(QtCore.QUrl("https://archive.org/download/SampleVideo1280x7205mb/SampleVideo_1280x720_5mb.mp4"))
        video = QtMultimediaWidgets.QGraphicsVideoItem()
        player.setVideoOutput(video)

        text = QtWidgets.QGraphicsTextItem("Hello World")

        scene = QtWidgets.QGraphicsScene()
        scene.addItem(video)
        scene.addItem(text)
        view.setScene(scene)
        view.setFixedSize(800, 800)

        player.play()
        player.setLoops(-1)


if __name__ == "__main__":
    app = QtWidgets.QApplication([])
    widget = MyWidget()
    widget.resize(800, 800)
    widget.show()
    sys.exit(app.exec())
The problem is caused by two aspects:
- QGraphicsVideoItem has a default size of 320x240; for conceptual and optimization reasons, it does not change its size when video content is loaded or changed;
- the video you're using has a different aspect ratio: 320x240 is 4:3, while the video is 1280x720, which is the standard widescreen ratio, 16:9.
By default, QGraphicsVideoItem just adapts the video contents to its current size, respecting the aspectRatioMode. The result is that you get some blank space above and below the video, similar to what was common when showing widescreen movies on old TVs ("letterboxing").
Since graphics items are always positioned using the top left corner of their coordinate system, you see a shifted image. In fact, if you just print the boundingRect of the video item when playing starts, you'll see that it has a vertical offset:
player.mediaStatusChanged.connect(lambda: print(video.boundingRect()))
# result:
# QtCore.QRectF(0.0, 30.0, 320.0, 180.0)
#                    ^^^^ down by 30 pixels!
There are various possibilities to solve this, depending on your needs, but they all rely on connecting to the nativeSizeChanged signal. Then it's just a matter of properly setting the size, adjusting the content to the visible viewport area, or possibly always calling fitInView() while adjusting the position of other items (and ignoring transformations).
For instance, the following code will keep the existing size (320x240) but will change the offset based on the adapted size of the video:
# ...
self.video = QtMultimediaWidgets.QGraphicsVideoItem()
# ...
self.video.nativeSizeChanged.connect(self.updateOffset)

def updateOffset(self, nativeSize):
    if nativeSize.isNull():
        return
    realSize = self.video.size()
    scaledSize = nativeSize.scaled(realSize,
                                   QtCore.Qt.AspectRatioMode.KeepAspectRatio)
    yOffset = (scaledSize.height() - realSize.height()) / 2
    self.video.setOffset(QtCore.QPointF(0, yOffset))
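A possible alternative, just as a sketch along the same lines (the slot name here is made up): instead of compensating with an offset, resize the item itself whenever the native size changes, so that its bounding rect matches the visible video; other items can then be positioned relative to it, or you can call fitInView().

# ... (same QGraphicsVideoItem as above)
self.video.nativeSizeChanged.connect(self.updateVideoSize)

def updateVideoSize(self, nativeSize):
    # resize the item to the video's real aspect ratio instead of offsetting it
    if nativeSize.isNull():
        return
    scaledSize = nativeSize.scaled(self.video.size(),
                                   QtCore.Qt.AspectRatioMode.KeepAspectRatio)
    self.video.setSize(scaledSize)  # bounding rect now starts at (0, 0)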
I have an application that displays images. Since people need to read some information out of the image, I implemented a zoom functionality.
I set the picture widget to 600x600. To preserve the aspect ratio I then scale the picture and draw it onto the widget. This works really well.
For the zoom functionality, the user should be able to click anywhere on the picture and it should cut out the 150x150-pixel area around where the cursor clicks. To be precise, the click of the cursor should mark the middle of the rectangle I cut out. So if I click on x=300 y=300, the area should be x=225 y=225 width=150 height=150.
To achieve that I scale the coordinates where the user clicks back to the original image resolution, cut out the subimage, and scale it back to the display size. Cropping the already-scaled image loaded in my program would yield far worse quality.
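In other words, the mapping I have in mind is roughly the following (a simplified sketch with made-up names, not my real prototype code):

import wx

def crop_rect_for_click(click_x, click_y, disp_w, disp_h, orig_w, orig_h, box=150):
    # Map a click on the scaled (displayed) image back to the original image and
    # return a rectangle of box x box display pixels, centred on the click,
    # expressed in original-image coordinates.
    sx, sy = orig_w / disp_w, orig_h / disp_h   # display -> original scale factors
    cx, cy = click_x * sx, click_y * sy         # click position in original pixels
    w, h = box * sx, box * sy                   # box size in original pixels
    return wx.Rect(int(cx - w / 2), int(cy - h / 2), int(w), int(h))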
The error is simple: the area cut out is not exactly the area I would like to cut out. Sometimes it is too far left, sometimes too far right, sometimes too high, sometimes too low... and I fail to see where the problem lies.
I wrote a bare-bones prototype with just the functionality needed. After you put in a path to a JPEG picture you should be able to run it.
# -*- coding: utf-8 -*-
"""
Created on Sun Jan 12 12:22:25 2020

#author: Paddy
"""
import wx


class ImageTest(wx.App):
    def __init__(self, redirect=False, filename=None):
        wx.App.__init__(self, redirect, filename)
        self.frame = wx.Frame(None, title='Radsteuer Eintreiber')
        self.panelleft = wx.Panel(self.frame)
        self.picturepresent = False
        self.index = 0
        self.PhotoMaxSize = 600
        self.zoomed = False
        # Change path here
        self.imagepath = 'F:\Geolocation\Test\IMG_20191113_174257.jpg'
        self.createUI()
        self.frame.Centre()
        self.frame.Show()
        self.frame.Raise()
        self.onView()

    # Creates UI elements on initialization
    def createUI(self):
        # instructions = 'Bild'
        img = wx.Image(self.PhotoMaxSize, self.PhotoMaxSize, clear=True)
        self.imageCtrl = wx.StaticBitmap(self.panelleft, wx.ID_ANY,
                                         wx.Bitmap(img), size=(self.PhotoMaxSize, self.PhotoMaxSize))
        self.mainSizer = wx.BoxSizer(wx.VERTICAL)
        self.imageCtrl.Bind(wx.EVT_LEFT_UP, self.onImageClick)
        self.mainSizer.Add(self.imageCtrl, 0, wx.ALL | wx.ALIGN_CENTER, 5)
        self.panelleft.SetSizer(self.mainSizer)
        self.mainSizer.Fit(self.frame)
        self.panelleft.Layout()

    def onImageClick(self, event):
        if self.zoomed:
            self.onView()
            self.zoomed = False
        else:
            # Determine position of mouse
            ctrl_pos = event.GetPosition()
            print(ctrl_pos)
            picturecutof = self.PhotoMaxSize / 4
            if ctrl_pos[0] - ((self.PhotoMaxSize - self.NewW) / 2) > 0:
                xpos = ctrl_pos[0] - ((self.PhotoMaxSize - self.NewW) / 2)
            else:
                xpos = 0
            if ctrl_pos[0] + picturecutof > self.NewW:
                xpos = self.NewW - picturecutof
            if ctrl_pos[1] - ((self.PhotoMaxSize - self.NewW) / 2) > 0:
                ypos = ctrl_pos[1] - ((self.PhotoMaxSize - self.NewW) / 2)
            else:
                ypos = 0
            if ctrl_pos[1] + picturecutof > self.NewH:
                ypos = self.NewH - picturecutof
            xpos = xpos * self.W / self.NewW
            ypos = ypos * self.H / self.NewH
            picturecutofx = picturecutof * self.W / self.NewW
            picturecutofy = picturecutof * self.H / self.NewH
            rectangle = wx.Rect(xpos, ypos, picturecutofx, picturecutofy)
            self.img = wx.Image(self.imagepath, wx.BITMAP_TYPE_ANY)
            self.img = self.img.GetSubImage(rectangle)
            self.img = self.img.Scale(600, 600, wx.IMAGE_QUALITY_BICUBIC)
            self.imageCtrl.SetBitmap(wx.Bitmap(self.img))
            self.imageCtrl.Fit()
            self.panelleft.Refresh()
            self.zoomed = True

    def onView(self, event=None):
        self.img = wx.Image(self.imagepath, wx.BITMAP_TYPE_ANY)
        # scale the image, preserving the aspect ratio
        self.W = self.img.GetWidth()
        self.H = self.img.GetHeight()
        if self.W > self.H:
            self.NewW = self.PhotoMaxSize
            self.NewH = self.PhotoMaxSize * self.H / self.W
        else:
            self.NewH = self.PhotoMaxSize
            self.NewW = self.PhotoMaxSize * self.W / self.H
        self.img = self.img.Scale(self.NewW, self.NewH, wx.IMAGE_QUALITY_BICUBIC)
        self.imageCtrl.SetBitmap(wx.Bitmap(self.img))
        self.imageCtrl.Fit()
        self.panelleft.Refresh()


if __name__ == '__main__':
    app = ImageTest()
    app.MainLoop()
There may be some weird stuff happening in the code that doesn't entirely make sense. Most of that is because the real program is much bigger and I removed many features from the prototype that have nothing to do with the zooming. It might very well be that I am doing the scaling wrong, but I'm out of ideas.
The functionality of this prototype is simple: replace the path with a JPEG image of your choice, run the program, and click on the image; it should zoom. Click around your image and you will see the zooming is wrong.
That's it. Thanks for your help.
So I found the answer, but I also changed the logic a little: the picture is now centered at the position where the user clicked, which is much more intuitive to use. I'm only posting the onImageClick function; if you want to use the whole thing, feel free to replace it in the original code from the question.
def onImageClick(self, event):
    if self.zoomed:
        self.onView()
        self.zoomed = False
    else:
        # Determine position of mouse
        ctrl_pos = event.GetPosition()
        # Set magnification
        scalingfactor = 4
        # Set picture size for rectangle
        picturecutof = self.PhotoMaxSize / scalingfactor
        # Find coordinates by adjusting for picture position
        xpos = ctrl_pos[0] - ((self.PhotoMaxSize - self.NewW) / 2)
        ypos = ctrl_pos[1] - ((self.PhotoMaxSize - self.NewH) / 2)
        # If position is out of range, adjust
        if xpos > self.NewW:
            xpos = self.NewW
        if xpos < 0:
            xpos = 0
        if ypos > self.NewH:
            ypos = self.NewH
        if ypos < 0:
            ypos = 0
        # Scale rectangle area to the size of the unscaled image
        picturecutofx = picturecutof * self.W / self.NewW
        picturecutofy = picturecutof * self.H / self.NewH
        # Scale coordinates to the unscaled image
        xpos = xpos * self.W / self.NewW
        ypos = ypos * self.H / self.NewH
        # Center the image on the coordinates that were clicked
        xpos = xpos - ((ctrl_pos[0] * self.W / self.NewW) / scalingfactor)
        ypos = ypos - ((ctrl_pos[1] * self.H / self.NewH) / scalingfactor)
        # If position is out of range, adjust
        if xpos > self.W - picturecutofx:
            xpos = self.W - picturecutofx - 5
        if xpos < 0:
            xpos = 0
        if ypos > self.H - picturecutofy:
            ypos = self.H - picturecutofy - 5
        if ypos < 0:
            ypos = 0
        # Create rectangle to cut from the original image
        rectangle = wx.Rect(xpos, ypos, picturecutofx, picturecutofy)
        # Load the original image again
        self.img = wx.Image(self.imagepath, wx.BITMAP_TYPE_ANY)
        # Get the subimage
        self.img = self.img.GetSubImage(rectangle)
        # Scale the subimage to the picture area
        self.img = self.img.Scale(self.PhotoMaxSize, self.PhotoMaxSize, wx.IMAGE_QUALITY_BICUBIC)
        self.imageCtrl.SetBitmap(wx.Bitmap(self.img))
        self.imageCtrl.Fit()
        self.panelleft.Refresh()
        self.zoomed = True
    event.Skip()
I want to create buttons from a list and I want each of them to have its own image.
I have tried this, but only the last created button works.
liste_boutton = ['3DS', 'DS', 'GB']

for num, button_name in enumerate(liste_boutton):
    button = Button(type_frame)
    button['bg'] = "grey72"
    photo = PhotoImage(file=".\dbinb\img\\{}.png".format(button_name))
    button.config(image=photo, width="180", height="50")
    button.grid(row=num, column=0, pady=5, padx=8)
Only your last button has an image because it is the only one that has a reference in the global scope, or in any scope for that matter in your specific case. As it is, you have only one referable button object and one image object, namely button and photo.
The short answer would be:
photo = list()
for ...
    photo.append(PhotoImage(file=".\dbinb\img\\{}.png".format(button_name)))
But this would still have bad practice written all over it.
Your code appears okay, but you need to keep a reference to the image (see the effbot note on this). Also, I do believe PhotoImage can only read GIF and PGM/PPM images from files, so you need another library; PIL seems to be a good choice. I used a couple of images from a Google search; they were placed in the same directory as my .py file, so I changed the code a little bit. Also, the button width and height can cut off parts of the image if you're not careful.
from Tkinter import *
from PIL import Image, ImageTk

type_frame = Tk()

liste_boutton = ['3DS', 'DS', 'GB']
for num, button_name in enumerate(liste_boutton):
    button = Button(type_frame)
    button['bg'] = "grey72"
    # this example works, if .py and images in same directory
    image = Image.open("{}.png".format(button_name))
    image = image.resize((180, 100), Image.ANTIALIAS)  # resize the image to ratio needed, but there are better ways
    photo = ImageTk.PhotoImage(image)  # to support png, etc image files
    button.image = photo  # save reference
    button.config(image=photo, width="180", height="100")
    # be sure to check the width and height of the images, so there is no cut off
    button.grid(row=num, column=0, pady=5, padx=8)

mainloop()
Output:
[https://i.stack.imgur.com/lxthT.png]
With all your comments I was able to achieve what I expected!
Thanks! I'm new to programming, so this will not necessarily be the best solution, but it works.
import tkinter as tk

root = tk.Tk()

frame1 = tk.Frame(root)
frame1.pack(side=tk.TOP, fill=tk.X)

liste_boutton = ['3DS', 'DS', 'GB']
button = list()
photo = list()

for num, button_name in enumerate(liste_boutton):
    button.append(tk.Button(frame1))
    photo.append(tk.PhotoImage(file=".\dbinb\img\\{}.png".format(button_name)))
    button[-1].config(bg="grey72", image=photo[-1], width="180", height="50")
    button[-1].grid(row=num, column=0)

root.mainloop()
I want to draw a rectangle in a saved video. While drawing the rectangle, the video must freeze. I can successfully draw a rectangle on an image, but I don't know how to do the same on a saved video using OpenCV and Python.
I was in need of an ROI selection mechanism using OpenCV and I finally figured out how to implement it.
The implementation can be found here (opencvdragrect). It uses OpenCV 3.1.0 and Python 2.7.
For a saved video, as long as you don't read and display another frame, the video is effectively paused.
In terms of how to add the selection to a paused video (frame), the code below might help.
import cv2
import selectinwindow

wName = "select region"
video = cv2.VideoCapture(videoPath)
frameCounter = 0

while video.isOpened():
    # Read a frame
    ret, RGB = video.read()
    frameCounter += 1

    if frameCounter == 1:  # you can pause on any frame you like
        rectI = selectinwindow.dragRect
        selectinwindow.init(rectI, RGB, wName, RGB.shape[1], RGB.shape[0])
        cv2.namedWindow(wName)
        cv2.setMouseCallback(wName, selectinwindow.dragrect, rectI)
        while True:
            # display the image
            cv2.imshow(wName, rectI.image)
            key = cv2.waitKey(1) & 0xFF
            # if returnflag is set, break out of
            # this loop and let the video run
            if rectI.returnflag == True:
                break
        box = [rectI.outRect.x, rectI.outRect.y, rectI.outRect.w, rectI.outRect.h]

    # process the video
    # ...
    # ...
In the opencvdragrect library you double-click to stop the rectangle selection process and continue with the video.
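As a side note (not part of the opencvdragrect approach): if you are on a newer OpenCV build, the built-in cv2.selectROI can do the same job on a paused frame; a rough sketch, with the video path as a placeholder:

import cv2

video = cv2.VideoCapture("input.mp4")  # placeholder path, replace with your video
ret, frame = video.read()              # grab one frame and stay on it
if ret:
    # selectROI blocks until a box is drawn and ENTER/SPACE is pressed
    x, y, w, h = cv2.selectROI("select region", frame)
    print("selected box:", x, y, w, h)
    cv2.destroyWindow("select region")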
Hope this helps.
As the title says:
how can I display a .png image and then hide it? And what is the difference between a Canvas and a PhotoImage?
I wrote a bit of code which hides/shows an image on the click of a button. You can edit it to fit your needs.
NOTE: At the moment I'm using pack() and pack_forget(); if you want to use grid or place, you must use grid_forget() or place_forget() instead.
import tkinter

def hideBG():
    global state
    if state == "Hidden":
        background_label.pack()
        state = "Showing"
    elif state == "Showing":
        background_label.pack_forget()
        state = "Hidden"

window = tkinter.Tk()

background_image = tkinter.PhotoImage(file="BG.png")
background_label = tkinter.Label(window, image=background_image)

hideBttn = tkinter.Button(window, text="Hide Background", command=hideBG)

state = "Showing"

hideBttn.pack()
background_label.pack()

window.mainloop()
This creates an image inside a label, plus a button. When the button is pressed it calls the hideBG function, which takes the current "state" of the image (hidden or showing) and switches it to the opposite.
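Regarding the Canvas vs PhotoImage part of the question: a PhotoImage only holds the image data, while a Canvas (like a Label) is a widget that can display it. If you prefer the Canvas route, a rough sketch of the same show/hide idea (assuming the same BG.png file) could look like this:

import tkinter

window = tkinter.Tk()

canvas = tkinter.Canvas(window, width=400, height=300)
canvas.pack()

bg_image = tkinter.PhotoImage(file="BG.png")                      # the image data
bg_item = canvas.create_image(0, 0, image=bg_image, anchor="nw")  # item drawn on the canvas

def toggle():
    # hide or show the canvas item instead of repacking a widget
    current = canvas.itemcget(bg_item, "state")
    canvas.itemconfigure(bg_item, state="hidden" if current != "hidden" else "normal")

tkinter.Button(window, text="Hide/Show Background", command=toggle).pack()

window.mainloop()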
Hope this helps!