I have approximately 1 million data points representing the x, y, z, t coordinates of a number of small balls, and I'm trying to create a system to view how they change over time.
I am trying to make a 3D plot.
If I plot the balls as "points" in VTK, I find that they can be rendered pretty quickly. However, in the ideal scenario, I would be using tiny spheres to represent the data points.
I am concerned that 1M spheres would take too long to render, so I am wondering if there is some way to use the vtkPoints class but have the points 1) look circular (instead of rectangular) and 2) appear bigger on the screen when I zoom in.
Yes, there is. 1) is possible through the vtkPlotPoints class, and 2) through its SetWidth method.
Here is a runnable script that illustrates this, along with some other functionality you may find useful:
import vtk
import math
# Set up a 2D context view (white background, 400x300 window) holding an XY chart.
view = vtk.vtkContextView()
view.GetRenderer().SetBackground(1.0, 1.0, 1.0)
view.GetRenderWindow().SetSize(400, 300)
chart = vtk.vtkChartXY()
view.GetScene().AddItem(chart)
chart.SetShowLegend(True)
# Build a vtkTable with one X column and three data series.
table = vtk.vtkTable()
arrX = vtk.vtkFloatArray()
arrX.SetName('X Axis')
arrC = vtk.vtkFloatArray()
arrC.SetName('Cosine')
arrS = vtk.vtkFloatArray()
arrS.SetName('Sine')
arrT = vtk.vtkFloatArray()
arrT.SetName('Sine-Cosine')
table.AddColumn(arrC)
table.AddColumn(arrS)
table.AddColumn(arrX)
table.AddColumn(arrT)
# Fill the table with 40 samples of cos, sin, and sin - cos.
numPoints = 40
inc = 7.5/(numPoints-1)
table.SetNumberOfRows(numPoints)
for i in range(numPoints):
    table.SetValue(i, 0, i*inc)
    table.SetValue(i, 1, math.cos(i*inc))
    table.SetValue(i, 2, math.sin(i*inc))
    table.SetValue(i, 3, math.sin(i*inc)-math.cos(i*inc))
# Add the first series as a point plot; SetWidth controls the marker size and
# SetMarkerStyle its shape. (On VTK 6 and newer this call is SetInputData instead of SetInput.)
points = chart.AddPlot(vtk.vtkChart.POINTS)
points.SetInput(table, 0, 1)
points.SetColor(0, 0, 0, 255)
points.SetWidth(4.0)
points.SetMarkerStyle(vtk.vtkPlotPoints.DIAMOND)  # CROSS, SQUARE, CIRCLE...
points = chart.AddPlot(vtk.vtkChart.POINTS)
points.SetInput(table, 0, 2)
points.SetColor(0, 0, 0, 255)
points.SetWidth(1.0)
points.SetMarkerStyle(vtk.vtkPlotPoints.PLUS)
points = chart.AddPlot(vtk.vtkChart.POINTS)
points.SetInput(table, 0, 3)
points.SetColor(0, 0, 255, 255)
points.SetWidth(1.0)
points.SetMarkerStyle(vtk.vtkPlotPoints.CIRCLE)
view.GetRenderWindow().SetMultiSamples(0)
view.GetInteractor().Initialize()
view.GetInteractor().Start()
Hope this helps!
I have a glb from Blender (Sapling Tree) that has a trunk and leaves, which import as separate meshes. I am trying to create a MultiMesh with this tree and am currently getting some strange results that I can't figure out. I have a script attached to one MultiMeshInstance3D that creates the multimesh for the trunks, and another script attached to another MultiMeshInstance3D that creates the multimesh for the leaves. Since I want the leaves to be transformed exactly like the trunks, I thought I'd position and rotate the trunks first, then grab that transform data and just assign it to each instance of the leaves multimesh. Unfortunately, once I apply the transform using multimesh.set_instance_transform() on the leaves, the outcome seems to invert part of the basis in the transform.
In my tree trunk script I create the positions using
for i in len(positions):
    var position = positions[i]
    var basis = Basis(Vector3.UP, 0.0).rotated(Vector3.UP, randi_range(0, 180))
    multimesh.set_instance_transform(i, Transform3D(basis, position))
And in my leaves script I take the trunk transforms and apply them to each leaf instance:
var transform1 = tree_trunk.multimesh.transform_array
var j = 0
for i in range(0, tree_trunk.count):
    var t = transform1.slice(j, j + 4)
    multimesh.set_instance_transform(i, Transform3D(t[0], t[1], t[2], t[3]))
    j += 4
However, the results for the trunks and leaves are different, in that the leaves don't end up with the same transform.
Here's an example of what happens:
The trunk's transform_array at index 0, as an example:
[(-0.994367, 0, 0.105988), (0, 1, 0), (-0.105988, 0, -0.994367), (-48.50394, 35.99831, 29.6063),...
The leaves' transform_array at index 0:
[(-0.994367, 0, -0.105988), (0, 1, 0), (0.105988, 0, -0.994367), (-48.50394, 35.99831, 29.6063),...
As you can see, it inverts the x-axis z value and the z-axis x value, but I don't know why. My current fix is to multiply those components to flip them back and get my leaves on the same rotation as the trunks:
multimesh.set_instance_transform(i, Transform3D(t[0] * Vector3(1, 1, -1), t[1], t[2] * Vector3(-1, 1, 1), t[3]))
The origin seems to be fine; it's just the rotation of the leaves that is off.
That fixes the problem, but why do they invert in the first place?
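For what it's worth, comparing the two arrays above, the leaves' printed basis is exactly the transpose of the trunk's basis (its rows are the trunk's columns). For a rotation about the Y axis, transposing the basis only flips the sign of those two x-z off-diagonal components, which is exactly what the multiplication workaround above undoes.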
I'd like to use Nim to check the results of my Puppeteer test run executions.
Part of the end result is a screenshot. That screenshot should contain a certain amount of active colours, an active colour being orange, blue, red, or green; these indicate that activity is present in the incoming data. Black, grey, and white need to be excluded; they only represent static data.
I haven't found a solution I can use yet.
import stb_image/read as stbi

var
  w, h, c: int
  data: seq[uint8]
  cBin: array[256, int]  # colour range was 0->255 afaict

data = stbi.load("screenshot.png", w, h, c, stbi.Default)

for d in data:
  cBin[int(d)] = cBin[int(d)] + 1

echo cBin
Now I have a uint array which I can use to construct a histogram of the values, but I don't know how to map these to something like RGB values. Pointers, anyone?
Is there a better package which does this automagically? I didn't spot one.
stbi.load() will return a sequence of interleaved uint8 color components. The number of interleaved components is determined either by c (i.e. channels_in_file) or desired_channels when it is non-zero.
For example, when channels_in_file == stbi.RGB and desired_channels == stbi.Default there are 3 interleaved components of red, green, and blue.
[
# r g b
255, 0, 0, # Pixel 1
0, 255, 0, # Pixel 2
0, 0, 255, # Pixel 3
]
You can process the above like:
import colors

for i in countUp(0, data.len - 3, step = stbi.RGB):
  let
    r = data[i + 0]
    g = data[i + 1]
    b = data[i + 2]
    pixelColor = colors.rgb(r, g, b)
  echo pixelColor
You can read more on this in the comments of stb_image.h.
I would like to rotate an image based on a second image. Both images are satellite images; however, they are not rotated in the same direction (in one image the top points north, and in the other the rotation is not known). But I have at least three pixel pairs in each of the images (x1, y1, x2, y2). So my idea is to figure out their relative position and get the rotation angle from that.
Currently, I estimate the angle like this:
import math
import numpy as np

def unit_vector(v):
    """Return the unit vector of 'v'."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

def angle_between(v1, v2):
    """Returns the angle in degrees between vectors 'v1' and 'v2'::

    >>> angle_between((1, 0, 0), (0, 1, 0))
    90.0
    >>> angle_between((1, 0, 0), (1, 0, 0))
    0.0
    >>> angle_between((1, 0, 0), (-1, 0, 0))
    180.0
    """
    v1_u = unit_vector(v1)
    v2_u = unit_vector(v2)
    angle_rad = np.arccos(np.clip(np.dot(v1_u, v2_u), -1.0, 1.0))
    return (angle_rad*180)/math.pi
with the inputs like this:
v1 = [points[0][0] - points[1][0], points[0][1] - points[1][1]] #hist
v2 = [points[0][2] - points[1][2], points[0][3] - points[1][3]] #ref
However, this only uses two pixel pairs instead of the three. Therefore, the rotation is sometimes incorrect. Could anybody show me how to use all three pixels?
My first attempt was to check on which side of the line the third pixel lies in the image and, based on that, negate the angle. But this does not work for all images.
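As a side note on where the sign gets lost: np.arccos can only return an unsigned angle in [0, 180], which is why a side-of-line check is needed at all. A signed angle between two 2D vectors is usually taken directly from atan2 of their cross and dot products. Here is a minimal sketch of that (my own addition, not part of the attempt above, and it still uses only the two point pairs):
import numpy as np

def signed_angle_deg(v1, v2):
    # Angle that rotates v1 onto v2, in degrees, in the range (-180, 180].
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    cross = v1[0] * v2[1] - v1[1] * v2[0]  # z component of the 2D cross product
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return np.degrees(np.arctan2(cross, dot))
Note that image coordinates usually have the y axis pointing down, so the sign convention is mirrored compared to the standard mathematical orientation.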
EDIT:
I cannot add the original images, as they are copyrighted; however, since the image content is not really important, I have added whitened images. The first is the input image with the three points drawn in, the second is the rotated image (where, additionally, the cutout area, wrong due to the rotation, is marked with a rectangle), and the third is the historical image.
The points are the following:
567.01,144,1544.4,4581.8
1182.6,1568.1,2934.1,3724.3
938.97,1398.1,2795.8,4002.5
with:
x_historical, y_historical, x_presentday, y_presentday
I have a study which provides the length and width values of the objects in an image. What I need is to have exact measurements of length and width, but my results deviate a little and I need to reach the exact values.
I have a working program, but it needs to be improved to reach the best result.
# Excerpt: assumes the usual cv2 / numpy / imutils imports and that the contours
# of the objects have already been extracted from the image.
(contours, _) = contours.sort_contours(contours)  # imutils: sort contours left-to-right
for cnt in contours:
    # fit a rotated bounding box around each contour and order its corner points
    box = cv2.minAreaRect(cnt)
    box = cv2.cv.BoxPoints(box) if imutils.is_cv2() else cv2.boxPoints(box)  # OpenCV 2 vs 3+
    box = np.array(box, dtype="float")
    box = perspective.order_points(box)
    cv2.drawContours(orig, [box.astype("int")], -1, (0, 255, 0), 1)
To see the dataset I have, I am sharing my test image:
It detects the contours inside the purple lines, but I would like it to follow the yellow lines instead.
What should I update in my code to reach this aim?
I am trying to make a program that automatically corrects the perspective of a rectangle. I have managed to get the silhouette of the rectangle, and have the code to correct the perspective, but I can't find the corners. The biggest problem is that, because it has been deformed, I can't use the following "code":
c1 = min(x), min(y)
c2 = max(x), min(y)
c3 = min(x), max(y)
c4 = max(x), max(y)
This wouldn't work with this situation (X represents a corner):
X0000000000X
.00000000000
..X000000000
.....0000000
........0000
...........X
Does anyone know how to do this?
Farthest point from the center will give you one corner.
Farthest point from the first corner will give you another corner, which may be either adjacent or opposite to the first.
Farthest point from the line between those two corners (a bit more math-intensive) will give you a third corner. I'd use distance from the center as a tie-breaker.
For finding the 4th corner, it'll be the point outside the triangle formed by the first 3 corners you found, farthest from the nearest line between those corners.
This is a very time-consuming way to do it, and I've never tried it, but it ought to work.
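Here is a rough sketch of those steps (my own reading of them, with the fourth-corner step simplified to "farthest from the nearest edge of the triangle" and the tie-breaking by distance from the center left out), assuming the silhouette pixels are given as an (N, 2) NumPy array of (x, y) coordinates:
import numpy as np

def line_distance(pts, a, b):
    # Perpendicular distance of every point in pts from the infinite line through a and b.
    d = b - a
    return np.abs(d[0] * (pts[:, 1] - a[1]) - d[1] * (pts[:, 0] - a[0])) / np.linalg.norm(d)

def find_corners(pts):
    center = pts.mean(axis=0)
    c1 = pts[np.argmax(np.linalg.norm(pts - center, axis=1))]  # farthest from the center
    c2 = pts[np.argmax(np.linalg.norm(pts - c1, axis=1))]      # farthest from the first corner
    c3 = pts[np.argmax(line_distance(pts, c1, c2))]            # farthest from the line c1-c2
    # Simplified fourth step: the point farthest from the nearest edge of triangle (c1, c2, c3).
    nearest_edge = np.minimum(np.minimum(line_distance(pts, c1, c2),
                                         line_distance(pts, c2, c3)),
                              line_distance(pts, c3, c1))
    c4 = pts[np.argmax(nearest_edge)]
    return np.array([c1, c2, c3, c4])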
You could try to use a scanline algorithm: for every line of the polygon (so y = min(y)..max(y)), get l = min(x) and r = max(x). Calculate the left/right slope (deltax) and compare it with the slope of the line before. If it changed (use some tolerance here), you are at a corner of the rectangle (or close to it). That won't work for all cases, as the slope can't be that exact because of the low resolution, but for large rectangles and slopes that aren't too similar, this should work.
At least, it works well for your example:
X0000000000X l = 0, r = 11
.00000000000 l = 1, r = 11, deltaxl = 1, deltaxr = 0
..X000000000 l = 2, r = 11, deltaxl = 1, deltaxr = 0
.....0000000 l = 5, r = 11, deltaxl = 3, deltaxr = 0
........0000 l = 8, r = 11, deltaxl = 3, deltaxr = 0
...........X l = 11, r = 11, deltaxl = 3, deltaxr = 0
You start with the top of the rectangle where you get two different values for l and r, so you already have two of the corners. On the left side, for the first three lines you'll get deltax = 1, but after it, you'll get deltax = 3, so there is a corner at (3, 3). On the right side, nothing changes, deltax = 0, so you only get the point at the end.
Note that you're "collecting" corners here, so if you don't have 4 corners at the end, the slopes were too similar (or you have a picture of a triangle) and you can switch to a different (more exact) algorithm or just give an error. The same if you have more than 4 corners or some other strange things like holes in the rectangle. It seems some kind of image detection is involved, so these cases can occur, right?
There are cases in which a simple deltax = (x - lastx) won't work well; see this example for the left side of a rectangle:
xxxxxx
.xxxxx    deltax = 1    dy/dx = 1/1 = 1
.xxxxx    deltax = 0    dy/dx = 2/1 = 2
..xxxx    deltax = 1    dy/dx = 3/2 = 1.5
..xxxx    deltax = 0    dy/dx = 4/2 = 2
...xxx    deltax = 1    dy/dx = 5/3 = 1.66
Sometimes deltax is 0, sometimes it is 1. It's better to use the slope of the line from the current point to the top-left/top-right point (deltay / deltax). Using it, you'll still have to stick with a tolerance, but your values will get more exact with each new line.
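Here is a minimal sketch of the basic per-scanline version of this (the dy/dx refinement and the tolerance are left out, and the mask representation is an assumption):
import numpy as np

def scanline_corners(mask):
    # mask: 2D boolean array, True where the shape is filled.
    ys = np.flatnonzero(mask.any(axis=1))  # scanlines that contain the shape
    left = [int(np.flatnonzero(mask[y]).min()) for y in ys]
    right = [int(np.flatnonzero(mask[y]).max()) for y in ys]

    corners = set()
    for xs in (left, right):
        # The ends of the first and last scanline are corners on this side.
        corners.add((xs[0], int(ys[0])))
        corners.add((xs[-1], int(ys[-1])))
        # A change in deltax between consecutive scanlines marks a corner
        # (exact comparison here; real images need the tolerance mentioned above).
        deltas = np.diff(xs)
        for i in range(1, len(deltas)):
            if deltas[i] != deltas[i - 1]:
                corners.add((xs[i], int(ys[i])))
    return corners
On the l/r example above this returns (0, 0), (11, 0), (2, 2) and (11, 5), i.e. the four marked corners.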
You could use a Hough transform to find the 4 most prominent lines in the masked image. These lines will be the sides of the quadrangle.
The lines will intersect in up to 6 points, which are the 4 corners and the 2 perspective vanishing points.
These are easy to distinguish: pick any point inside the quadrangle, and check if the line from this point to each of the 6 intersection points intersects any of the lines. If not, then that intersection point is a corner.
This has the advantage that it works well even for noisy or partially obstructed images, or if your segmentation is not exact.
en.wikipedia.org/wiki/Hough_transform
Example CImg Code
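For the corner/vanishing-point test described above, here is a minimal sketch (my own, not related to the CImg example linked), assuming the four side lines have already been found, e.g. with OpenCV's cv2.HoughLines, and are given in (rho, theta) form:
from itertools import combinations
import numpy as np

def signed_dist(p, line):
    # Signed distance of the point p = (x, y) from a line given as (rho, theta).
    rho, theta = line
    return p[0] * np.cos(theta) + p[1] * np.sin(theta) - rho

def intersection(l1, l2):
    (r1, t1), (r2, t2) = l1, l2
    A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-9:
        return None                                # (nearly) parallel lines never meet
    return np.linalg.solve(A, np.array([r1, r2]))  # intersection point (x, y)

def quad_corners(lines, inner_point, eps=0.5):
    # Keep only the intersection points that are corners, not vanishing points.
    corners = []
    for l1, l2 in combinations(lines, 2):
        p = intersection(l1, l2)
        if p is None:
            continue
        crosses = False
        for line in lines:
            dp, di = signed_dist(p, line), signed_dist(inner_point, line)
            if abs(dp) < eps:   # p lies on this line, so it is one of p's own sides
                continue
            if dp * di < 0:     # endpoints on opposite sides: the segment crosses the line
                crosses = True
        if not crosses:         # a corner is reachable from inside without crossing any side
            corners.append(p)
    return corners
inner_point can be, for example, the centroid of the masked region.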
I would be very interested in your results. I have been thinking about writing something like this myself, to correct photos of paper sheets taken at an angle. I am currently struggling to think of a way to correct the perspective if the 4 points are known.
P.S. Also check out Zhengyou Zhang, Li-Wei He, "Whiteboard scanning and image enhancement", http://research.microsoft.com/en-us/um/people/zhang/papers/tr03-39.pdf, for a more advanced solution for quadrangle detection.
I have asked a related question, which tries to solve the perspective transform:
proportions of a perspective-deformed rectangle
This looks like a convex hull problem.
http://en.wikipedia.org/wiki/Convex_hull
Your problem is simpler but the same solution should work.
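Here is a minimal sketch of that idea with OpenCV (my own; the silhouette is assumed to be a single-channel binary image, and reducing the hull to its four dominant vertices with approxPolyDP is an addition the answer above does not specify):
import cv2
import numpy as np

def hull_corners(mask):
    # mask: uint8 image, non-zero where the silhouette is filled (OpenCV 4 return signature).
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnt = max(cnts, key=cv2.contourArea)  # take the largest blob
    hull = cv2.convexHull(cnt)
    # Simplify the hull so that, ideally, only the four dominant vertices remain;
    # the 0.02 factor is a common heuristic, not a fixed rule.
    eps = 0.02 * cv2.arcLength(hull, True)
    return cv2.approxPolyDP(hull, eps, True).reshape(-1, 2)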