Since you can create images, rectangles, lines and so on in a Tkinter Canvas, I wonder whether there is any way to place an image in a grid.
What I tried:
from tkinter import *

w = Tk()
background = Canvas(w, highlightthickness = 0)
for r in range(10):
    ntexture = PhotoImage(PATH)
    background.create_image(image = ntexture, row = r, column = 2)
background.pack(fill = BOTH, expand = True)
w.mainloop()
When I execute it, there is an IndexError.
The Problem:
There are no options named row or column.
Do you know how to get the coordinates of rows/columns, or how to do it differently?
To use grid coordinates you need to use the .grid() method, rather than .pack().
In your case something like
bg = background.create_image(image = ntexture)
bg.grid(row = r, column = 2)
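For the other half of the question (getting coordinates from rows/columns), here is a rough sketch of my own, not taken from the answer above: canvas items are positioned by pixel x/y, so a grid cell can be converted to coordinates by hand. The CELL size is an assumed value purely for illustration.

from tkinter import Tk, Canvas, PhotoImage, BOTH, NW

CELL = 32  # assumed cell size in pixels, purely for illustration

w = Tk()
background = Canvas(w, highlightthickness=0)
ntexture = PhotoImage(file=PATH)  # PATH as in the question
for r in range(10):
    # column 2, row r -> pixel coordinates on the canvas
    background.create_image(2 * CELL, r * CELL, image=ntexture, anchor=NW)
background.pack(fill=BOTH, expand=True)
w.mainloop()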
Question
I know this question (or similar questions) has been posted before, though I cannot seem to find out what is going on.
I am doing a strip chart and I have tried using both geom_jitter(position = position_jitter(0.065)) and geom_point(position = position_jitter(0.065)), but I cannot seem to put a border around each of my points.
I have only been able to fill them with a colour, regardless of whether I use fill = or colour =.
Code:
ggplot(Titanic.Data, aes(x = as.factor(Survive),
                         y = Age,
                         colour = Survive)) +
  geom_jitter(position = position_jitter(0.065)) +
  labs(x = "Survive",
       y = "Age",
       title = "Strip Chart of Age vs Survive") +
  scale_colour_manual(values = c("DarkBlue", "DarkRed")) +
  theme_test() +
  theme(plot.title = element_text(hjust = 0.5,
                                  face = "bold",
                                  size = 18)) +
  theme(legend.position = "none")
Graph:
Here is what the graph will look like
Code
The data set is huge, and I do not know how to add the file to this thread, so here is a small portion of it.
Titanic Data
I need to get the color of a pixel in a specific region of an image.
I'm using this script in Python:
import cv2
image = cv2.imread('abc.jpg')
color = image[100,50]
print(color) # gives me the RGB color (12,156,222)
and if I need to get the hex value of it:
hex = (color[0] << 16) + (color[1] << 8) + (color[2])
My question is: is there a way to tell me what color (12, 156, 222) is?
Thank you.
I found another way to solve this problem.
Using webcolors, I was able to detect the closest color:
import webcolors
import time

start = time.time()

def closest_colour(requested_colour):
    min_colours = {}
    for key, name in webcolors.CSS3_HEX_TO_NAMES.items():
        r_c, g_c, b_c = webcolors.hex_to_rgb(key)
        rd = (r_c - requested_colour[0]) ** 2
        gd = (g_c - requested_colour[1]) ** 2
        bd = (b_c - requested_colour[2]) ** 2
        min_colours[(rd + gd + bd)] = name
    return min_colours[min(min_colours.keys())]

def get_colour_name(requested_colour):
    try:
        closest_name = actual_name = webcolors.rgb_to_name(requested_colour)
    except ValueError:
        closest_name = closest_colour(requested_colour)
        actual_name = None
    return actual_name, closest_name

requested_colour = (255, 0, 0)
actual_name, closest_name = get_colour_name(requested_colour)
print("Actual colour name:", actual_name, ", closest colour name:", closest_name)
print("Tempo: ", time.time() - start)
If anyone has a better method, please post it here ^^
You can look it up at: https://shallowsky.com/colormatch/index.php?r=12&g=156&b=222
[spoiler alert: "DeepSkyBlue3"]
In general, you want to start with a list of name → R,G,B values and a distance function.
Then it's a matter of computing the distance between each of those entries and your target value,
and returning the name of whichever list entry has the minimum distance to the target.
A previous SO answer offers such an algorithm: Python - Find similar colors, best way
Using Euclidean distance (L2 norm) in some random color space is less than principled,
though often sufficient.
If you want to sweat the details, consider relying on https://python-colormath.readthedocs.io.
(You might even want to discuss maintainership with the author.)
If "perceptually similar" matters to you, definitely use Lab: https://en.wikipedia.org/wiki/CIELAB_color_space
I have an image like this:
I have both the mask and the original image. I would like to calculate the colour temperature of ONLY the ducks region.
Right now, I'm iterating through each row and column of the image below and collecting the pixels whose values are not zero. But I think this isn't the right way to do it. Any suggestions?
What I did was:
import cv2
import numpy as np

xyzImg = cv2.cvtColor(resImage, cv2.COLOR_BGR2XYZ)
x, y, z = cv2.split(xyzImg)

xList = []
yList = []
zList = []

rows = x.shape[0]
cols = x.shape[1]
for i in range(rows):
    for j in range(cols):
        if (x[i][j] != 0) and (y[i][j] != 0) and (z[i][j] != 0):
            xList.append(x[i][j])
            yList.append(y[i][j])
            zList.append(z[i][j])

xAvg = np.mean(xList)
yAvg = np.mean(yList)
zAvg = np.mean(zList)

xs = xAvg / (xAvg + yAvg + zAvg)
ys = yAvg / (xAvg + yAvg + zAvg)
xyChrome = np.array([xs, ys])
But this is very slow and I don't think it's right...
The simplest way would be to use the cv2.mean() function.
It takes two arguments, src (having 1 to 4 channels) and mask, and returns a vector with the mean value for each channel.
Refer to the cv::mean documentation.
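A minimal sketch of that idea, assuming resImage is the BGR image and mask is the single-channel duck mask (both variable names are carried over from the question as assumptions):

import cv2
import numpy as np

xyzImg = cv2.cvtColor(resImage, cv2.COLOR_BGR2XYZ)

# cv2.mean() averages only where mask is non-zero and returns a 4-tuple
xAvg, yAvg, zAvg, _ = cv2.mean(xyzImg, mask=mask)

total = xAvg + yAvg + zAvg
xyChrome = np.array([xAvg / total, yAvg / total])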
I am looking for an efficient way to delete points of a meshgrid that fall inside the bounding boxes of blocks (block 1 and 2 in the code). My code is:
import numpy as np

x_max, x_min, y_max, y_min = 156.0, 141.0, 96.0, 80.0
offset = 5
stepSize = 0.2

x = np.arange(x_min-offset, x_max+offset, stepSize)
y = np.arange(y_min-offset, y_max+offset, stepSize)
xv, yv = np.meshgrid(x, y)

# bounding boxes (and points inside) that I want to remove from the mesh
block1 = [(139.78, 86.4), (142.6, 86.4), (142.6, 88.0), (139.78, 88.0)]
block2 = [(154.8, 87.2), (157.6, 87.2), (157.6, 88.8), (154.8, 88.8)]
As per one of the answers, I could generate the required result if I have only one block to remove from the mesh, but it won't work with multiple blocks. What would be an optimized way to remove multiple blocks from the meshgrid? The final figure should look like this:
Mesh
Edit: Improved the question and edited the code.
Simply redefine your x and y around your block:
block = np.asarray(block1)  # the block corners as an (N, 2) array

block_xmin = np.min(block[:, 0])
block_xmax = np.max(block[:, 0])
block_ymin = np.min(block[:, 1])
block_ymax = np.max(block[:, 1])

X = np.hstack((np.arange(x_min-offset, block_xmin, stepSize), np.arange(block_xmax, x_max+offset, stepSize)))
Y = np.hstack((np.arange(y_min-offset, block_ymin, stepSize), np.arange(block_ymax, y_max+offset, stepSize)))
XV, YV = np.meshgrid(X, Y)
I think I figured it out based on the explanation of @hpaulj (I cannot up-vote his suggestion, probably due to low reputation). I can append the blocks to an allBlocks list and then run a loop over allBlocks, disabling the corresponding points in the mesh. Here is my solution:
x_new = np.copy(xv)
y_new = np.copy(yv)
ori_x = xv[0][0]
ori_y = yv[0][0]

for block in allBlocks:
    block_xmin = np.min((block[0][0], block[1][0]))
    block_xmax = np.max((block[0][0], block[1][0]))
    block_ymin = np.min((block[0][1], block[1][1]))
    block_ymax = np.max((block[0][1], block[3][1]))
    rx_min, rx_max = int((block_xmin-ori_x)/stepSize), int((block_xmax-ori_x)/stepSize)
    ry_min, ry_max = int((block_ymin-ori_y)/stepSize), int((block_ymax-ori_y)/stepSize)
    for i in range(rx_min, rx_max+1):
        for j in range(ry_min, ry_max+1):
            x_new[j][i] = np.nan
    for i in range(ry_min, ry_max+1):
        for j in range(rx_min, rx_max+1):
            y_new[i][j] = np.nan
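For what it's worth, here is a vectorized sketch of the same idea (my own addition, not part of the solution above): boolean masks built directly from xv and yv avoid converting the block corners to grid indices.

# Sketch only: mask out every grid point that falls inside any block's bounding box.
x_new = np.copy(xv)
y_new = np.copy(yv)

for block in allBlocks:
    corners = np.asarray(block)  # the four (x, y) corners of one block
    inside = ((xv >= corners[:, 0].min()) & (xv <= corners[:, 0].max()) &
              (yv >= corners[:, 1].min()) & (yv <= corners[:, 1].max()))
    x_new[inside] = np.nan
    y_new[inside] = np.nan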
I am trying to use vtkImageReslice to extract a 2D slice from a 3D vtkImageData object, but I can't seem to get the recipe right. Am I doing it right?
I am also a bit confused about the ResliceAxes matrix. Does it represent a cutting plane? If I move the ResliceAxes origin, will it also move the cutting plane? When I call Update on the vtkImageReslice, the program crashes, but when I don't call it, the output is empty.
Here's what I have so far.
import vtk

# my input is any vtkActor that contains a closed curve of type vtkPolyData
ShapePolyData = actor.GetMapper().GetInput()
boundingBox = ShapePolyData.GetBounds()

newBoundingBox = []
for i in range(0, 6, 2):
    delta = boundingBox[i+1] - boundingBox[i]
    newBoundingBox.append(boundingBox[i] - 0.5*delta)
    newBoundingBox.append(boundingBox[i+1] + 0.5*delta)

voxelizer = vtk.vtkVoxelModeller()
voxelizer.SetInputData(ShapePolyData)
voxelizer.SetModelBounds(newBoundingBox)
voxelizer.SetScalarTypeToBit()
voxelizer.SetForegroundValue(1)
voxelizer.SetBackgroundValue(0)
voxelizer.Update()

VoxelModel = voxelizer.GetOutput()
ImageOrigin = VoxelModel.GetOrigin()

slicer = vtk.vtkImageReslice()
# Am I setting the cutting axes here? x axis set at (1,0,0), y axis at (0,1,0) and z axis at (0,0,1)
slicer.SetResliceAxesDirectionCosines(1,0,0, 0,1,0, 0,0,1)
# if I increase the z value, will the cutting plane move up?
slicer.SetResliceAxesOrigin(ImageOrigin[0], ImageOrigin[1], ImageOrigin[2])
slicer.SetInputData(VoxelModel)
slicer.SetInterpolationModeToLinear()
slicer.SetOutputDimensionality(2)
slicer.Update()  # this makes the code crash

voxelSurface = vtk.vtkContourFilter()
voxelSurface.SetInputConnection(slicer.GetOutputPort())
voxelSurface.SetValue(0, .999)

voxelMapper = vtk.vtkPolyDataMapper()
voxelMapper.SetInputConnection(voxelSurface.GetOutputPort())

voxelActor = vtk.vtkActor()
voxelActor.SetMapper(voxelMapper)
Renderer.AddActor(voxelActor)
I have never used vtkImageReslice, but I have used vtkExtractVOI for vtkImageData, which I think allows you to achieve a similar result. Here is your example modified to use the latter instead:
ImageOrigin = VoxelModel.GetOrigin()
slicer = vtk.vtkExtractVOI()
slicer.SetInputData(VoxelModel)
# With the SetVOI method you can define which slice you want to extract
slicer.SetVOI(xmin, xmax, ymin, ymax, zslice, zslice)
slicer.SetSampleRate(1, 1, 1)
slicer.Update()
voxelSurface = vtk.vtkContourFilter()
voxelSurface.SetInputConnection(slicer.GetOutputPort())
voxelSurface.SetValue(0, .999)
voxelMapper = vtk.vtkPolyDataMapper()
voxelMapper.SetInputConnection(voxelSurface.GetOutputPort())
voxelActor = vtk.vtkActor()
voxelActor.SetMapper(voxelMapper)
Renderer.AddActor(voxelActor)
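As a small follow-up sketch (my assumption, not part of the answer above): the xmin, xmax, ymin, ymax and zslice placeholders can be derived from the voxel model's extent, for example to extract the middle z slice:

# Sketch only: derive the VOI bounds from the image extent and pick the middle z slice.
extent = VoxelModel.GetExtent()  # (xmin, xmax, ymin, ymax, zmin, zmax)
zslice = (extent[4] + extent[5]) // 2
slicer.SetVOI(extent[0], extent[1], extent[2], extent[3], zslice, zslice)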