Find size of polygon area in Tkinter Canvas, Python - python-3.x

I am creating a program that analyzes the areas of user-drawn shapes.
Here is a sample of the code that creates a polygon from dots. The program collects the dots from mouse motion. First it draws lines, then it erases them and draws the figure.
def finish_custom_selection(self, event):
    # self.custom_lines_id - list of ids of the lines created by mouse motion [id1, id2 ...]
    # self.canvas_for_selection - the tkinter canvas I work with
    # self.custom_dots - list of dot coordinate pairs [(x1, y1), (x2, y2) ...]
    for line in self.custom_lines_id:
        self.canvas_for_selection.delete(line)
    item = self.canvas_for_selection.create_polygon(*self.custom_dots,
                                                    dash=(10, 10), width=2,
                                                    fill='',
                                                    tags="draggable",
                                                    outline="blue")
    self.custom_dots.clear()
    self.custom_lines_id.clear()
So here is my question: how can I calculate the area of this polygon? I only know algorithms for convex polygons, but this shape can be completely arbitrary. Maybe there is a built-in method I am missing?

If you could find out how the tkinter canvas fills its polygon with color, you might be able to tweak that method to produce the area.
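Note, though, that the shoelace formula is not limited to convex polygons: it works for any simple (non-self-intersecting) polygon, so the dot list collected above can be fed to it directly. A minimal sketch (the function name is mine):

```python
def polygon_area(dots):
    """Area of a simple (non-self-intersecting) polygon via the
    shoelace formula. `dots` is a list of (x, y) pairs, e.g.
    self.custom_dots; winding direction does not matter because
    we take the absolute value at the end."""
    n = len(dots)
    s = 0.0
    for i in range(n):
        x1, y1 = dots[i]
        x2, y2 = dots[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# A non-convex, L-shaped polygon: area is 4*4 minus the 2*2 notch = 12
print(polygon_area([(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]))
```

Self-intersecting (figure-eight) outlines are the one case this does not handle; for freehand mouse input you may want to check for that separately.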

Related

Ignore points in Folium PolyLines

I am plotting a satellite ground track using a Folium PolyLine.
Among the set of points (latitude/longitude pairs) I am passing to my map, there are two points that I do not want a line drawn between. I do not want the horizontal line drawn (see screenshot below):
Here is the code that generates the map:
my_map = folium.Map(location=[0,0], height=1000, width=1000, zoom_start=2,
min_zoom=2, max_zoom=12, max_bounds=True, no_wrap=True)
map_name = "folium_1000_1000_map.html"
#add line to my_map
folium.PolyLine(latlon_list).add_to(my_map)
Where latlon_list is a list of 2000 tuples, with each tuple holding a latitude/longitude combination ([lat0, lon0],[...],[lat1999,lon1999]).
I do not want to start the line on the far left side of the map (it is important that I have a precise representation of the orbit, even if it means having the orbit start on the right side of the map, as in the example screenshot). How can I get rid of the horizontal line?
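One common approach to this kind of artifact, sketched below, is to split the track into segments wherever consecutive points wrap around the antimeridian, and add each segment as its own PolyLine. The helper name and the 180-degree jump threshold are my assumptions, not part of the question:

```python
def split_track(latlon_list, max_lon_jump=180.0):
    """Split a [(lat, lon), ...] track into segments, breaking wherever
    the longitude jumps by more than `max_lon_jump` degrees between
    consecutive points (i.e. where the ground track wraps around the
    edge of the map)."""
    segments = [[latlon_list[0]]]
    for prev, cur in zip(latlon_list, latlon_list[1:]):
        if abs(cur[1] - prev[1]) > max_lon_jump:
            segments.append([])  # start a new segment: no connecting line
        segments[-1].append(cur)
    return segments

# Each segment then becomes its own line, e.g. with folium:
# for seg in split_track(latlon_list):
#     folium.PolyLine(seg).add_to(my_map)
```

With one PolyLine per segment, no line is drawn between the last point before the wrap and the first point after it.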

How to make an Axes transparent to events?

I am trying to find a way to make an Axes object passthrough for events.
For context, I have a figure with 6 small subplots. Each of them responds to mouse motion events by displaying a cursor dot and text info where the user aims. I also made it so that clicking a subplot makes it as large as the figure for better visibility. When moving the mouse over an invisible axes, event.inaxes will still point to that ax despite it being set to invisible, and that is what I would like to avoid.
Below is the MRE:
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.set_gid("ax1")
ax2.set_gid("ax2")
fig.show()

def on_movement(event):
    """Write on the figure in which `Axes` object the event happened."""
    width, height = fig.bbox.bounds[2:]
    x = event.x/width
    y = event.y/height
    text = ax.get_gid() if (ax := event.inaxes) is not None else "None"
    fig.texts[:] = []
    fig.text(x, y, s=text, transform=fig.transFigure, c="white", bbox=dict(fc="#0055AA", ec="black"))
    fig.canvas.draw()

fig.canvas.mpl_connect("motion_notify_event", on_movement)
As expected, as you hover the mouse over ax1, the empty gap and ax2, you will see one of those three texts appear:
ax1.set_position((1/3, 1/3, 2/3, 2/3))
Same thing after I arbitrarily resize and move ax1 so that it partly overlaps ax2.
ax2.set_visible(False)
Now this is my problem. Invisible axes still trigger events. Is there a way to make some axes "transparent" to events? Obviously the usual technique of sorting all the cases in the callback does not work here.
Currently envisaged solutions:
ideally, finding a setting akin to zorder so that the "highest" axes gets the event.
ugly workaround: set the position of the invisible axes to ((0, 0, 1e-10, 1e-10)).
less ugly: working with figure coordinates to convert event.x, event.y into event.xdata, event.ydata for the only ax that I know is visible. Basically xdata1, ydata1 = ax1.transAxes.inverted().transform((event.x, event.y)) if event.inaxes is not None + see if there are edge cases.
The latter is already implemented and does work, so save your time if you want to write a reply using that approach. I'm mostly interested in an amazing one-liner that I would have missed, something like ax2.set_silenced(True).
Python 3.8.5
Matplotlib 3.1.3
Well, setting the appropriate zorder does work actually.
ax1.set_zorder(2)
ax2.set_zorder(1)
...
def on_movement(event):
    ...
    fig.text(x, y, ..., zorder=1000)
    ...
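This can be checked headlessly, without real mouse input: the Agg backend and the synthetically constructed MouseEvent below are my additions, not part of the original MRE. With ax1 moved so that it overlaps ax2 and given the higher zorder, event.inaxes resolves to the top-most axes:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no window needed
import matplotlib.pyplot as plt
from matplotlib.backend_bases import MouseEvent

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.set_gid("ax1")
ax2.set_gid("ax2")
ax1.set_position((1/3, 1/3, 2/3, 2/3))  # ax1 now partly covers ax2
ax1.set_zorder(2)  # ax1 should win where the two overlap
ax2.set_zorder(1)
fig.canvas.draw()

# Synthesize a motion event inside the overlap region
# (70% across, 50% up the figure, in display pixels)
x = fig.bbox.width * 0.7
y = fig.bbox.height * 0.5
event = MouseEvent("motion_notify_event", fig.canvas, x, y)
print(event.inaxes.get_gid())
```

Matplotlib's canvas.inaxes() picks the top-most axes (by zorder) among all axes containing the point, which is why setting the zorder resolves the overlap.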

Python PIL: How To Draw Custom Filled Polygons

I'm wondering if there is a way in Pillow to draw custom filled polygons. I know I can draw rectangles and circles, but what about custom polygons?
Specifically, I want to draw something like the below image:
How can I achieve this, any ideas. Thanks
I'm not known for my graphic design abilities or patience tinkering with aesthetics, but this should give you an idea:
#!/usr/bin/env python3
from PIL import Image, ImageDraw
# Form basic purple image without alpha
w, h = 700, 250
im = Image.new('RGB', (w,h), color=(66,0,102))
# Form alpha channel independently
alpha = Image.new('L', (w,h), color=0)
# Get drawing context
draw = ImageDraw.Draw(alpha)
radius = 50
hline = h-radius
OPAQUE, TRANSPARENT = 255, 0
# Draw white where we want opaque
draw.rectangle((0,0,w,hline), fill=OPAQUE)
draw.ellipse((w//2-radius,hline-radius,w//2+radius,h), fill=OPAQUE)
# Draw black where we want transparent
draw.ellipse(((w//3)-radius,-radius,(w//3)+radius,radius), fill=TRANSPARENT)
draw.ellipse(((2*w//3)-radius,-radius,(2*w//3)+radius,radius), fill=TRANSPARENT)
# DEBUG only - not necessary
alpha.save('alpha.png')
# Put our shiny new alpha channel into our purple background and save
im.putalpha(alpha)
im.save('result.png')
The alpha channel looks like this - I artificially added a red border so you can see the extent of it:
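For the literal question of drawing a custom filled polygon, Pillow's ImageDraw also has a polygon method that takes an arbitrary list of vertices. A minimal sketch (the L-shaped vertex list is just an illustration):

```python
from PIL import Image, ImageDraw

w, h = 200, 200
im = Image.new('RGB', (w, h), color=(255, 255, 255))
draw = ImageDraw.Draw(im)

# Any list of (x, y) vertices works, convex or not;
# `fill` colors the interior, `outline` draws the border.
points = [(10, 10), (190, 10), (190, 100), (100, 100), (100, 190), (10, 190)]
draw.polygon(points, fill=(66, 0, 102), outline=(0, 0, 0))

# im.show() or im.save('polygon.png') to view the result
```

The same call also works on an 'L'-mode alpha image like the one above, so you could cut polygonal holes into the alpha channel the same way the ellipses are used.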

Transform Plates into Horizontal Using Hough transform

I am trying to transform images that are not horizontal, because they may be slanted.
When testing 2 images (this photo, which is horizontal, and this one, which is not), I get good results with the horizontal photo; however, the tilted photo does not transform as expected.
The first image works fine, as below, with a theta of 1.6406095. For now the result looks bad because I'm trying to make both photos come out horizontally correct.
For the second image, theta is 1.9198622.
I think the error is at this line:
lines= cv2.HoughLines(edges, 1, np.pi/90.0, 60, np.array([]))
I have put together a small simulation at this link with Colab.
Any help is welcome.
So far this is what I got.
import cv2
import numpy as np
img=cv2.imread('test.jpg',1)
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
imgBlur=cv2.GaussianBlur(imgGray,(5,5),0)
imgCanny=cv2.Canny(imgBlur,90,200)
contours,hierarchy =cv2.findContours(imgCanny,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_SIMPLE)
rectCon=[]
for cont in contours:
    area = cv2.contourArea(cont)
    if area > 100:
        #print(area) # prints the area of each contour
        peri = cv2.arcLength(cont, True)
        approx = cv2.approxPolyDP(cont, 0.01*peri, True)
        #print(len(approx)) # prints how many corner points the contour has
        if len(approx) == 4:
            rectCon.append(cont)
#print(len(rectCon))
rectCon = sorted(rectCon, key=cv2.contourArea, reverse=True) # Sort the contours from largest area to smallest
bigPeri = cv2.arcLength(rectCon[0], True)
cornerPoints = cv2.approxPolyDP(rectCon[0], 0.01*bigPeri, True) # use bigPeri here, not the loop-local peri
# Reorder bigCornerPoints so I can prepare it for warp transform (bird eyes view)
cornerPoints=cornerPoints.reshape((4,2))
mynewpoints=np.zeros((4,1,2),np.int32)
add=cornerPoints.sum(1)
mynewpoints[0]=cornerPoints[np.argmin(add)]
mynewpoints[3]=cornerPoints[np.argmax(add)]
diff=np.diff(cornerPoints,axis=1)
mynewpoints[1]=cornerPoints[np.argmin(diff)]
mynewpoints[2]=cornerPoints[np.argmax(diff)]
# Draw my corner points
#cv2.drawContours(img,mynewpoints,-1,(0,0,255),10)
##cv2.imshow('Corner Points in Red',img)
##print(mynewpoints)
# Bird Eye view of your region of interest
pt1=np.float32(mynewpoints) #What are your corner points
pt2=np.float32([[0,0],[300,0],[0,200],[300,200]])
matrix=cv2.getPerspectiveTransform(pt1,pt2)
imgWarpPers=cv2.warpPerspective(img,matrix,(300,200))
cv2.imshow('Result',imgWarpPers)
Now you just have to fix the tilt (OpenCV can handle deskewing), then use some thresholding to detect the letters and recognise each one.
For general use, I think the images need to be normalised first so that the edges can be detected easily.
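On the theta values quoted above: HoughLines returns theta in radians as the angle of the line's normal, so the tilt relative to horizontal is theta converted to degrees minus 90. A small sketch of that conversion (the helper name is mine):

```python
import math

def skew_degrees(theta):
    """Convert a cv2.HoughLines theta (radians, angle of the line's
    normal measured from the x axis) into the line's tilt relative
    to horizontal, in degrees."""
    return math.degrees(theta) - 90.0

print(round(skew_degrees(1.6406095), 1))  # the "horizontal" plate: ~4.0 deg
print(round(skew_degrees(1.9198622), 1))  # the tilted plate: ~20.0 deg
```

That angle can then be fed to cv2.getRotationMatrix2D and cv2.warpAffine to rotate the plate back to horizontal.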

Detect rectangles in OpenCV (4.2.0) using Python (3.7)

I am working on a personal project where I detect rectangles (all of the same dimensions), place those rectangles in a list in order (top to bottom), and then process the information inside each rectangle with some function. Below is my test image.
I have managed to detect the rectangles of interest; however, I keep getting other rectangles that I don't want. As you can see, I only want the three rectangles containing the information (6, 9, 3) in a list.
My code
import cv2
width=700
height=700
y1=0
y2=700
x1=500
x2=700
img=cv2.imread('test.jpg') #read image
img=cv2.resize(img,(width,height)) #resize image
roi = img[y1:y2, x1:x2] #region of interest i.e where the rectangles will be
gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY) #convert roi into gray
Blur=cv2.GaussianBlur(gray,(5,5),1) #apply blur to roi
Canny=cv2.Canny(Blur,10,50) #apply canny to roi
#Find my contours
contours =cv2.findContours(Canny,cv2.RETR_EXTERNAL,cv2.CHAIN_APPROX_NONE)[0]
#Loop through my contours to find rectangles and put them in a list, so i can view them individually later.
cntrRect = []
for i in contours:
    epsilon = 0.05*cv2.arcLength(i, True)
    approx = cv2.approxPolyDP(i, epsilon, True)
    if len(approx) == 4:
        cntrRect.append(approx)  # append before drawing so this rectangle is included
        cv2.drawContours(roi, cntrRect, -1, (0, 255, 0), 2)
        cv2.imshow('Roi Rect ONLY', roi)
cv2.waitKey(0)
cv2.destroyAllWindows()
OpenCV has a function, cv2.contourArea, which takes a contour and returns its area, used like this: cv2.contourArea(contour). You can use the condition
if cv2.contourArea(contour) > rectangle_area: # minimum area of the rectangles you want
By filtering on area this way, your problem will be solved.
I'd suggest that you get the bounding rectangles of the contours and then sort the rectangles by area, descending. Crop the first rectangle by default, then loop through the remaining rectangles and crop them if they're, say, >=90% of the first rectangle's area. This will ensure that you keep the larger rectangles and ignore the smaller ones.
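That filtering step can be sketched without any image I/O. The boxes below are hypothetical (x, y, w, h) tuples of the kind cv2.boundingRect(contour) would return, and the function name is mine:

```python
def keep_similar_rects(boxes, ratio=0.9):
    """Keep the largest box and any box whose area is at least `ratio`
    of the largest, then return them sorted top-to-bottom.

    `boxes` are (x, y, w, h) tuples, e.g. from cv2.boundingRect."""
    boxes = sorted(boxes, key=lambda b: b[2] * b[3], reverse=True)
    biggest = boxes[0][2] * boxes[0][3]
    kept = [b for b in boxes if b[2] * b[3] >= ratio * biggest]
    return sorted(kept, key=lambda b: b[1])  # order by y: top to bottom

# Three same-sized info rectangles plus two small spurious detections:
boxes = [(50, 300, 100, 40), (50, 100, 100, 40), (10, 10, 20, 10),
         (50, 200, 100, 40), (30, 30, 15, 15)]
print(keep_similar_rects(boxes))
```

Since the three info rectangles all have the same dimensions, they survive the 90% cut while the small false positives are dropped, and the top-to-bottom sort gives the (6, 9, 3) order directly.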
