vpython camera movement: how to walk around a tree?

I'm trying to move the camera in a circular path around a couple of objects, with the camera always pointing at the center. Simplified, this is the code I have so far (displaying a "tree" with a "small stone" to keep track of the movement):
import vpython

stem = vpython.cylinder(pos=vpython.vector(0, -1, 0),
                        axis=vpython.vector(0, 4, 0), length=2, radius=0.2)
crown = vpython.sphere(pos=vpython.vector(0, 1.5, 0), radius=0.5)
stone = vpython.sphere(pos=vpython.vector(1, -1.5, 0), radius=0.1)

sposi = vpython.scene.camera.pos          # start position
abst = vpython.mag(sposi)                 # distance from the center
sollwinkel = 95                           # target angle (NB: cos/sin below expect radians)
ziel_x = abst * vpython.cos(sollwinkel)   # target x
ziel_y = abst * vpython.sin(sollwinkel)   # target y
d_x = ziel_x / 100.0
d_y = ziel_y / 100.0
calc_x = 0

while True:
    vpython.rate(20)
    while calc_x < d_x:
        calc_x = calc_x + d_x
        vpython.scene.camera.pos = vpython.vector(sposi.x + d_x, sposi.y + d_y, sposi.z)
        #vpython.scene.camera.axis = vpython.vector(sposi.x - d_x, sposi.y - d_y, -sposi.z)
        vpython.sleep(0.1)
I get some movement, but it is not circular around the center. The camera axis probably has to be set differently as well, but I can't figure out how; the camera actually jumps too close to the tree.
Thanks in advance for any help!

The initial camera position is <0, 0, 1.73205>, and sposi is <0.012647, 0.0118344, 1.73205>,
so naturally you don't see any change. Not sure what you're trying to do. I do recommend posting such questions to the VPython forum at
https://groups.google.com/forum/?fromgroups&hl=en#!forum/vpython-users
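For what it's worth, below is a minimal sketch of one common way to do the walk-around in VPython 7: keep the camera's distance fixed, advance an angle each frame, and point the camera axis back at the center. The step size, frame rate, and loop structure are assumptions, not code from this thread.
import math
import vpython

# Sketch: orbit the camera around the scene center.
scene = vpython.canvas()
stem = vpython.cylinder(pos=vpython.vector(0, -1, 0),
                        axis=vpython.vector(0, 4, 0), length=2, radius=0.2)
crown = vpython.sphere(pos=vpython.vector(0, 1.5, 0), radius=0.5)

radius = vpython.mag(scene.camera.pos)   # keep the starting distance
height = scene.camera.pos.y              # stay at the starting height
angle = 0.0
while True:
    vpython.rate(20)
    angle += math.radians(1)             # one degree per frame
    scene.camera.pos = vpython.vector(radius * math.sin(angle),
                                      height,
                                      radius * math.cos(angle))
    scene.camera.axis = -scene.camera.pos  # always look back at the origin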

Related

ipycanvas displaying final stroke_lines throughout animation

So I was playing with animating some Bezier curves, just part of learning how to use ipycanvas (0.10.2). The animation I produced is really hurting my head. What I expected to see was a set of straight lines between 4 Bezier control points "bouncing" around the canvas, with the Bezier curve moving along with them.
I did get the moving Bezier curve -- BUT the control points stayed static. Even stranger, they were static in their final position, and the curve came to meet them.
Now, sometimes Python's structures and references can get a little tricky, and you can get confusing results if you are not really thinking things through -- and that could totally be what's going on here -- but I am at a loss.
So to make sure I was not confused, I printed the control points (pts) at the beginning and then displayed them on the canvas. This confirmed my suspicion: through quantum tunneling or some other magical time travel, the line canvas.stroke_lines(pts) reaches into the future, grabs the pts array as it will eventually exist, and keeps the control points in their final state.
Every other use of pts uses the current temporal state.
So what I need to know is A) The laws of physics are safe and I am just too dumb to understand my own code. B) There is some odd bug in ipycanvas that I should report. C) How to monetize this time-traveling code -- like, could we use it to somehow factor large numbers?
from ipycanvas import Canvas, hold_canvas
import numpy as np

def rgb_to_hex(rgb):
    if len(rgb) == 3:
        return '#%02x%02x%02x' % rgb
    elif len(rgb) == 4:
        return '#%02x%02x%02x%02x' % rgb

def Bezier4(t, pts):
    p = t ** np.arange(0, 4, 1)
    M = np.matrix([[0, 0, 0, 1], [0, 0, 3, -3], [0, 3, -6, 3], [1, -3, 3, -1]])
    return np.asarray((p * M * pts))

canvas = Canvas(width=800, height=800)
display(canvas)  # display the canvas in the output cell
pts = np.random.randint(50, 750, size=[4, 2])  # choose random starting points
print(pts)  # print so we can compare with the ending state
d = np.random.uniform(-4, 4, size=[4, 2])  # some random velocity vectors
c = rgb_to_hex(tuple(np.random.randint(75, 255, size=3)))  # a random color
canvas.font = '16px serif'  # font for displaying the changing pts array

with hold_canvas(canvas):
    for ani in range(300):
        # logic to bounce the points about...
        for n in range(0, len(pts)):
            pts[n] = pts[n] + d[n]
            if pts[n][0] >= 800 or pts[n][0] <= 0:
                d[n][0] = -d[n][0]
            if pts[n][1] >= 800 or pts[n][1] <= 0:
                d[n][1] = -d[n][1]
        # calculate the points needed to display a Bezier curve
        B = [(Bezier4(i, pts)).ravel() for i in np.linspace(0, 1, 15)]
        # begin display output...
        canvas.clear()
        # first draw the Bezier curve...
        canvas.stroke_style = c
        canvas.stroke_lines(B)
        # now draw the control points
        canvas.stroke_style = rgb_to_hex((255, 255, 128, 50))
        canvas.stroke_lines(pts)
        # print the control points to the canvas so we can see them move
        canvas.stroke_style = rgb_to_hex((255, 255, 128, 150))
        canvas.stroke_text(str(pts), 10, 32)
        canvas.sleep(20)
In all seriousness, I have tried to think through what could be happening and I am coming up blank. Since ipycanvas is talking to the browser/JavaScript, maybe all of the frame data is rendered first, and the array used to hold the pts data for stroke_lines ends up with the final values, whereas the B array is recreated in each loop... It's a guess.
There are two ways to get the code to behave as expected and avoid the unsightly time travel. The first is to move the with hold_canvas(canvas): line inside the loop. This, however, renders the canvas.sleep(20) line rather useless.
from time import sleep  # needed for the real-time delay below

canvas = Canvas(width=800, height=800)
display(canvas)
pts = np.random.randint(50, 750, size=[4, 2])
print(pts)
d = np.random.uniform(-8, 8, size=[4, 2])
c = rgb_to_hex(tuple(np.random.randint(75, 255, size=3)))
canvas.font = '16px serif'
#with hold_canvas(canvas):
for ani in range(300):
    with hold_canvas(canvas):
        for n in range(0, len(pts)):
            if pts[n][0] > 800 or pts[n][0] < 0:
                d[n][0] = -d[n][0]
            if pts[n][1] > 800 or pts[n][1] < 50:
                d[n][1] = -d[n][1]
            pts[n] = pts[n] + d[n]
        B = [(Bezier4(i, pts)).ravel() for i in np.linspace(0, 1, 25)]
        canvas.clear()
        canvas.stroke_style = c
        canvas.stroke_lines(B)
        canvas.stroke_style = rgb_to_hex((255, 255, 128, 50))
        #pts2 = np.copy(pts)
        canvas.stroke_lines(pts)
        canvas.fill_style = rgb_to_hex((255, 255, 255, 150))
        canvas.fill_circles(pts.T[0], pts.T[1], np.array([4] * 4))
        canvas.stroke_style = rgb_to_hex((255, 255, 128, 150))
        canvas.fill_text(str(pts), 10, 32)
    sleep(20 / 1000)
    #canvas.sleep(20)
In this version, the control lines are updated as expected. This version is a little more "real time", and thus the sleep(20/1000) is needed to slow the loop down to a watchable pace.
The other way to do it would be just to ensure that a copy of pts is made and passed to canvas.stroke_lines:
canvas = Canvas(width=800, height=800)
display(canvas)
pts = np.random.randint(50, 750, size=[4, 2])
print(pts)
d = np.random.uniform(-8, 8, size=[4, 2])
c = rgb_to_hex(tuple(np.random.randint(75, 255, size=3)))
canvas.font = '16px serif'

with hold_canvas(canvas):
    for ani in range(300):
        #with hold_canvas(canvas):
        for n in range(0, len(pts)):
            if pts[n][0] > 800 or pts[n][0] < 0:
                d[n][0] = -d[n][0]
            if pts[n][1] > 800 or pts[n][1] < 50:
                d[n][1] = -d[n][1]
            pts[n] = pts[n] + d[n]
        B = [(Bezier4(i, pts)).ravel() for i in np.linspace(0, 1, 35)]
        canvas.clear()
        canvas.stroke_style = c
        canvas.stroke_lines(B)
        canvas.stroke_style = rgb_to_hex((255, 255, 128, 50))
        pts2 = np.copy(pts)  # copy, so the buffered command keeps this frame's values
        canvas.stroke_lines(pts2)
        canvas.fill_style = rgb_to_hex((255, 255, 255, 150))
        canvas.fill_circles(pts.T[0], pts.T[1], np.array([4] * 4))
        canvas.stroke_style = rgb_to_hex((255, 255, 128, 150))
        canvas.fill_text(str(pts), 10, 32)
        #sleep(20/1000)
        canvas.sleep(20)
I could not actually inspect the data passed between Python and the browser, but it seems pretty logical that Python finishes its work (and the ani loop) before sending the widget its instructions on what to draw, and the pts values sent are the final ones.
(yes I know there is a bug in the bouncing logic)
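For what it's worth, the suspected reference-versus-copy behavior can be reproduced with plain Python/NumPy. This is a sketch; the buffered list is my stand-in for whatever hold_canvas does internally, which is an assumption, not ipycanvas internals:
import numpy as np

buffered = []                      # stand-in for the hold_canvas command buffer
pts = np.array([0, 0])
for frame in range(3):
    pts += 1                       # in-place mutation: same array object each frame
    buffered.append(pts)           # stores a reference...
    # buffered.append(pts.copy())  # ...whereas a copy would freeze this frame's values

print([list(p) for p in buffered]) # [[3, 3], [3, 3], [3, 3]] -- every frame shows the final state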

How to find the orientation of an object in an image?

I have a bunch of images of gears, all in different orientations, and I need them all in the same orientation. That is, there is one reference image, and the rest of the images should be rotated so they look the same as the reference image. I followed these steps: first segment the gear, then try to find its angle using moments, but it's not working correctly. I've attached the 3 images, considering the first image as the reference image, and here's the code so far:
import cv2
import imutils
import numpy as np
from math import atan2, pi

def adjust_gamma(image, gamma=1.0):
    invGamma = 1.0 / gamma
    table = np.array([((i / 255.0) ** invGamma) * 255
                      for i in np.arange(0, 256)]).astype("uint8")
    return cv2.LUT(image, table)

def unsharp_mask(image, kernel_size=(13, 13), sigma=1.0, amount=2.5, threshold=10):
    """Return a sharpened version of the image, using an unsharp mask."""
    blurred = cv2.GaussianBlur(image, kernel_size, sigma)
    sharpened = float(amount + 1) * image - float(amount) * blurred
    sharpened = np.maximum(sharpened, np.zeros(sharpened.shape))
    sharpened = np.minimum(sharpened, 255 * np.ones(sharpened.shape))
    sharpened = sharpened.round().astype(np.uint8)
    if threshold > 0:
        low_contrast_mask = np.absolute(image - blurred) < threshold
        np.copyto(sharpened, image, where=low_contrast_mask)
    return sharpened

def find_orientation(cont):
    # orientation angle from second-order central moments
    m = cv2.moments(cont, True)
    cen_x = m['m10'] / m['m00']
    cen_y = m['m01'] / m['m00']
    m_11 = 2 * m['m11'] - m['m00'] * (cen_x * cen_x + cen_y * cen_y)
    m_02 = m['m02'] - m['m00'] * cen_y * cen_y
    m_20 = m['m20'] - m['m00'] * cen_x * cen_x
    theta = 0 if m_20 == m_02 else atan2(m_11, m_20 - m_02) / 2.0
    theta = theta * 180 / pi
    return (cen_x, cen_y, theta)

def rotate_image(img, angle):
    height, width = img.shape[:2]
    rotation_matrix = cv2.getRotationMatrix2D((width / 2, height / 2), angle, 1)
    rotated_image = cv2.warpAffine(img, rotation_matrix, (width, height))
    return rotated_image

img = cv2.imread('gear1.jpg')
resized_img = imutils.resize(img, width=540)
height, width = resized_img.shape[:2]

gamma_adjusted = adjust_gamma(resized_img, 2.5)
sharp = unsharp_mask(gamma_adjusted)
gray = cv2.cvtColor(sharp, cv2.COLOR_BGR2GRAY)
gauss_blur = cv2.GaussianBlur(gray, (13, 13), 2.5)
ret, thresh = cv2.threshold(gauss_blur, 250, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
thresh = cv2.morphologyEx(thresh, cv2.MORPH_DILATE, kernel, iterations=2)
kernel = np.ones((3, 3), np.uint8)
thresh = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)

contours = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[0]

cen_x, cen_y, theta = find_orientation(contours[0])
reference_angle = -24.14141919602858
if theta < reference_angle:
    rot_angle = -(theta - reference_angle)
else:
    rot_angle = reference_angle - theta
rot_img = rotate_image(resized_img, rot_angle)
Can anyone tell me where I went wrong? Any help would be appreciated.
Binarization of the gear and the holes seems easy. You should be able to discriminate the holes from noise and extra small features.
First find the geometric center, and sort the holes by angle around the center. Also compute the areas of the holes. Then you can try to match the holes to the model in a cyclic way: there are 20 holes, so you just need to test 20 positions. You can rate a match by some combination of the differences in the angles and the areas. The best match tells you the orientation.
This should be very reliable.
You can obtain a very accurate value of the angle by computing the average error per hole and correcting to cancel that value (this is equivalent to least-squares fitting).
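For illustration, a rough sketch of that cyclic matching, assuming you have already extracted each image's holes as (angle, area) pairs sorted by angle around the geometric center. The function name and cost weights here are mine, not from the answer:
import numpy as np

def best_cyclic_match(model, test, area_weight=1.0):
    """model, test: (N, 2) arrays of (angle_in_radians, area), sorted by angle.
    Returns (best_shift, estimated_rotation_in_radians)."""
    n = len(model)
    best_shift, best_cost = 0, np.inf
    for shift in range(n):                       # try all N cyclic alignments
        rolled = np.roll(test, shift, axis=0)
        # wrap angle differences into (-pi, pi]
        dang = np.angle(np.exp(1j * (rolled[:, 0] - model[:, 0])))
        darea = (rolled[:, 1] - model[:, 1]) / model[:, 1]
        cost = np.sum(dang**2) + area_weight * np.sum(darea**2)
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    rolled = np.roll(test, best_shift, axis=0)
    dang = np.angle(np.exp(1j * (rolled[:, 0] - model[:, 0])))
    return best_shift, float(np.mean(dang))      # mean residual ~ least-squares angle
The mean residual returned at the end is the "average error per hole" used as the least-squares correction.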

Crop satellite image based on a historical image with OpenCV in Python

I have the following problem: I have a pair of images, one historical and one present-day satellite image, and since the historical image covers a smaller area, I want to crop the satellite image accordingly. Here is the code I wrote for this:
import numpy as np
import cv2
import os
import imutils
import math

entries = os.listdir('../')
refImage = 0
histImages = []

def loadImage(index):
    referenceImage = cv2.imread("../" + 'ref_' + str(index) + '.png')
    top = int(0.5 * referenceImage.shape[0])   # shape[0] = rows
    bottom = top
    left = int(0.5 * referenceImage.shape[1])  # shape[1] = cols
    right = left
    referenceImage = cv2.copyMakeBorder(referenceImage, top, bottom, left, right,
                                        cv2.BORDER_CONSTANT, None, (0, 0, 0))
    counter = 0
    for entry in entries:
        if entry.startswith("image_" + str(index)):
            refImage = referenceImage.copy()
            histImage = cv2.imread("../" + entry)
            #histImages.append(img)
            points = np.loadtxt("H2OPM/" + "CP_" + entry[6:9] + ".txt", delimiter=",")
            vector_image1 = [points[0][0] - points[1][0], points[0][1] - points[1][1]]  # hist
            vector_image2 = [points[0][2] - points[1][2], points[0][3] - points[1][3]]  # ref
            angle = angle_between(vector_image1, vector_image2)  # helper defined elsewhere
            hhist, whist, chist = histImage.shape
            rotatedImage = imutils.rotate(refImage, angle)
            x = int(points[0][2] - points[0][0])
            y = int(points[1][2] - points[1][0])
            crop_img = rotatedImage[x + left:x + left + hhist, y + top:y + top + whist]
            print("NewImageWidth:", (y + top + whist) - (y + top), (x + left + hhist) - (x + left))
            print(entry)
            print(x, y)
            counter += 1
            #histImage = cv2.line(histImage, (points[0][0], ), end_point, color, thickness)
            cv2.imwrite("../matchedImages/" + 'image_' + str(index) + "_" + str(counter) + '.png', histImage)
            #rotatedImage = cv2.line(rotatedImage, (), (), (0, 255, 0), 9)
            cv2.imwrite("../matchedImages/" + 'ref_' + str(index) + "_" + str(counter) + '.png', crop_img)
First, I load the original satellite image and pad it so I don't lose information due to the rotation; second, I load one of the matched historical images as well as the matched keypoints of the two images (i.e. a list of x_hist, y_hist, x_present_day, y_present_day); third, I compute the rotation angle between the two images (which works); and fourth, I crop the image (and fifth, I save it).
Problem: As stated the rotation works fine, but my program ends up cropping the wrong part of the image.
I think that, due to the rotation, the boundaries (i.e. left, right, top, bottom) are no longer correct and I think this is where my problem lies, but I am not sure how to fix this problem.
Information that might help:
The images are both scaled the same way (so one pixel = approx. 1m)
I have at least 6 keypoints for each image
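(Aside, not part of the original attempt: since matched keypoints are available, OpenCV can also estimate the whole rigid transform in one call and do the crop via the output size. A sketch reusing the names from the code above; cv2.estimateAffinePartial2D assumes OpenCV >= 3.2:)
import cv2
import numpy as np

# points: one row per match, (x_hist, y_hist, x_present, y_present), as above
hist_pts = points[:, 0:2].astype(np.float32)
pres_pts = points[:, 2:4].astype(np.float32)

# estimate rotation + translation (+ uniform scale) from the matches
M, inliers = cv2.estimateAffinePartial2D(pres_pts, hist_pts)

# warp the present-day image into the historical frame; the output size of
# the historical image then effectively is the crop
hhist, whist = histImage.shape[:2]
aligned = cv2.warpAffine(referenceImage, M, (whist, hhist))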
I haven't looked at your code yet, but could it be due to you mixing up the x's and y's? Check the OpenCV documentation to make sure the variables you use are in the correct order.
In my limited time and experience with OpenCV, it can be quite odd: sometimes it asks for, for example, BGR instead of RGB values. (In my program, not yours.)
Also, you seem to have a bunch of lists; make sure a list[x][y] is not mixed up as list[y][x].
So I found the error in my computation. The bounding boxes of the cutout area were wrongly converted into the present-day image.
So this:
x = int(points[0][2] - points[0][0])
y = int(points[1][2] - points[1][0])
was swapped with this:
v = [pointBefore[0], pointBefore[1], 1]
# perform the actual rotation of the point
calculated = np.dot(m, v)
newPoint = (int(calculated[0] - points[0][0]), int(calculated[1] - points[0][1]))
where m(=M) is from the transformation:
def rotate_bound(image, angle):
    # grab the dimensions of the image and then determine the center
    (h, w) = image.shape[:2]
    (cX, cY) = (w // 2, h // 2)
    # grab the rotation matrix (applying the negative of the angle to
    # rotate clockwise), then grab the sine and cosine
    # (i.e., the rotation components of the matrix)
    M = cv2.getRotationMatrix2D((cX, cY), -angle, 1.0)
    cos = np.abs(M[0, 0])
    sin = np.abs(M[0, 1])
    # compute the new bounding dimensions of the image
    nW = int((h * sin) + (w * cos))
    nH = int((h * cos) + (w * sin))
    # adjust the rotation matrix to take into account translation
    M[0, 2] += (nW / 2) - cX
    M[1, 2] += (nH / 2) - cY
    # perform the actual rotation and return the image and the matrix
    return cv2.warpAffine(image, M, (nW, nH)), M
Thanks.
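For future readers, this is roughly how the two snippets fit together (a sketch; refImage, angle and points come from the code above, and map_point is a name I made up):
# rotate the padded reference image, keeping the matrix that was used
rotatedImage, m = rotate_bound(refImage, angle)

def map_point(m, point):
    """Apply the 2x3 affine rotation matrix m to an (x, y) point."""
    v = [point[0], point[1], 1]
    calculated = np.dot(m, v)
    return (int(calculated[0]), int(calculated[1]))

# e.g. map a keypoint into the rotated image before computing crop offsets
newPoint = map_point(m, (points[0][0], points[0][1]))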

Sphere-Sphere Intersection

I have two spheres that are intersecting, and I'm trying to find the intersection point nearest in the direction of the point (0,0,1)
My first sphere's (c1) center is at (c1x = 0, c1y = 0, c1z = 0) and has a radius of r1 = 2.0
My second sphere's (c2) center is at (c2x = 2, c2y = 0, c2z = 0) and has a radius of r2 = 2.0
I've been following the logic on this identical question for the 'Typical intersections' part, but was having some trouble understanding it and was hoping someone could help me.
First I'm finding the center of intersection c_i and radius of the intersecting circle r_i:
Here the first sphere has center c_1 and radius r_1, the second c_2 and r_2, and their intersection has center c_i and radius r_i. Let d = ||c_2 - c_1||, the distance between the spheres.
So sphere1 has center c_1 = (0,0,0) with r_1 = 2. Sphere2 has c_2 = (2,0,0) with r_2 = 2.0.
d = ||c_2 - c_1|| = 2
h = 1/2 + (r_1^2 - r_2^2)/(2* d^2)
So now I solve for h and get 0.5:
h = .5 + (2^2 - 2^2)/(2*2^2)
h = .5 + (0)/(8)
h = 0.5
We can sub this into our formula for c_i above to find the center of the circle of intersections.
c_i = c_1 + h * (c_2 - c_1)
(this equation was my original question, but a comment on this post helped me understand how to solve it for each of x, y, z)
c_i_x = c_1_x + h * (c_2_x - c_1_x)
c_i_x = 0 + 0.5 * (2 - 0) = 1
c_i_y = c_1_y + h * (c_2_y - c_1_y)
c_i_y = 0 + 0.5 * (0 - 0) = 0
c_i_z = c_1_z + h * (c_2_z - c_1_z)
c_i_z = 0 + 0.5 * (0 - 0) = 0
c_i = (c_i_x, c_i_y, c_i_z) = (1, 0, 0)
Then, reversing one of our earlier Pythagorean relations to find r_i:
r_i = sqrt(r_1*r_1 - h*h*d*d)
r_i = sqrt(4 - .5*.5*2*2)
r_i = sqrt(4 - 1)
r_i = sqrt(3)
r_i = 1.73205081
So if my calculations are correct, I know the circle where my two spheres intersect is centered at (1, 0, 0) and has a radius of 1.73205081
I feel fairly confident about all the calculations above; the steps make sense as long as I didn't make any math mistakes. I know I'm getting closer, but my understanding begins to weaken at this point. My end goal is to find the intersection point nearest to (0,0,1), and I have the circle of intersection, so I think what I need to do is find the point on that circle nearest to (0,0,1), right?
The next step from this solution says:
So, now we have the center and radius of our intersection. Now we can revolve this around the separating axis to get our full circle of solutions. The circle lies in a plane perpendicular to the separating axis, so we can take n_i = (c_2 - c_1)/d as the normal of this plane.
So finding the normal of the plane involves n_i = (c_2 - c_1)/d; do I need to do something similar, computing n_i component by component again?
n_i_x = (c_2_x - c_1_x)/d = (2-0)/2 = 2/2 = 1
n_i_y = (c_2_y - c_1_y)/d = (0-0)/2 = 0/2 = 0
n_i_z = (c_2_z - c_1_z)/d = (0-0)/2 = 0/2 = 0
After choosing a tangent and bitangent t_i and b_i perpendicular to this normal and to each other, you can write any point on this circle as: p_i(theta) = c_i + r_i * (t_i * cos(theta) + b_i * sin(theta));
Could I choose t_i and b_i from the point I want to be nearest to? (0,0,1)
Because of the Hairy Ball Theorem, there's no one universal way to choose the tangent/bitangent to use. My recommendation would be to pick one of the coordinate axes not parallel to n_i, and set t_i = normalize(cross(axis, n_i)), and b_i = cross(t_i, n_i) or somesuch.
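A sketch of that recommendation in NumPy (the function name is mine; any coordinate axis not parallel to n_i works):
import numpy as np

def tangent_bitangent(n_i):
    # pick a coordinate axis that is not parallel to n_i
    axis = np.array([0.0, 1.0, 0.0]) if abs(n_i[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
    t_i = np.cross(axis, n_i)
    t_i = t_i / np.linalg.norm(t_i)
    b_i = np.cross(t_i, n_i)
    return t_i, b_i

t_i, b_i = tangent_bitangent(np.array([1.0, 0.0, 0.0]))  # n_i from this question
# t_i = [0, 0, -1], b_i = [0, -1, 0]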
c_i = c_1 + h * (c_2 - c_1)
This is a vector expression; you have to write a similar one for every component, like this:
c_i.x = c_1.x + h * (c_2.x - c_1.x)
and similar for y and z
As a result, you'll get circle center coordinates:
c_i = (1, 0, 0)
As your quote says, choose an axis not parallel to the n vector - for example, the y-axis. Take its direction vector Y_dir = (0, 1, 0) and cross it with n:
t = Y_dir x n = (0, 0, -1)
b = n x t = (0, 1, 0)
(The sign of t only changes the direction of travel around the circle.) Now you have two vectors t, b in the circle's plane with which to build the circumference points.
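Putting the whole thread together, here is a small NumPy sketch (function and variable names are mine; it assumes the query point does not lie on the axis through the sphere centers, where the nearest point is not unique):
import numpy as np

def nearest_circle_point(c1, r1, c2, r2, query):
    d = np.linalg.norm(c2 - c1)
    h = 0.5 + (r1**2 - r2**2) / (2 * d**2)
    c_i = c1 + h * (c2 - c1)                  # center of the intersection circle
    r_i = np.sqrt(r1**2 - (h * d)**2)         # its radius
    n_i = (c2 - c1) / d                       # normal of the circle's plane
    v = query - c_i
    v_in_plane = v - np.dot(v, n_i) * n_i     # project the query into the plane
    return c_i + r_i * v_in_plane / np.linalg.norm(v_in_plane)

p = nearest_circle_point(np.array([0.0, 0.0, 0.0]), 2.0,
                         np.array([2.0, 0.0, 0.0]), 2.0,
                         np.array([0.0, 0.0, 1.0]))
print(p)  # [1. 0. 1.7320508] -- sqrt(3) straight up from (1, 0, 0)
Projecting the query point into the circle's plane and stepping r_i from the center toward that projection avoids having to choose t_i and b_i at all.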

ValueError: operands could not be broadcast together with shapes (3,) (0,)

My aim is to make image1 move along the ring from its current position up to 180 degrees. I have been trying different things but nothing seems to work. My final aim is to move both images along the ring in different directions and finally merge them and make them disappear. I keep getting the error above. Can you please help? Also, can you tell me how I should go about this problem?
from visual import *
import numpy as np

x = 3
y = 0
z = 0
i = pi / 3
c = 0.120239  # A.U./minute
r = 1

for theta in arange(0, 2 * pi, 0.1):  # range of theta values: 0 to 2*pi
    xunit = r * sin(theta) * cos(i) + x
    yunit = r * sin(theta) * sin(i) + y
    zunit = r * cos(theta) + z

ring = curve(color=color.white)  # creates a curve
for theta in arange(0, 2 * pi, 0.01):
    ring.append(pos=(sin(theta) * cos(i) + x, sin(theta) * sin(i) + y, cos(theta) + z))

image1 = sphere(pos=(2.5, -0.866, 0), radius=0.02, color=color.yellow)
image2 = sphere(pos=(2.5, -0.866, 0), radius=0.02, color=color.yellow)
earth = sphere(pos=(-3, 0, -0.4), color=color.yellow, radius=0.3,
               material=materials.earth)  # creates the observer

d_c_p = pow((x - xunit)**2 + (y - yunit)**2 + (z - zunit)**2, 0.5)  # distance between the center and points on the ring
d_n_p = abs(yunit + 0.4998112152755791)  # distance to the nearest point
t1 = (d_c_p + d_n_p) / c
t0 = d_c_p / c
t = t1 - t0  # time it takes from one point to another

theta = []
t = []
dtheta = np.diff(theta)  # differences in theta
dt = np.diff(t)          # differences in t
speed = r * dtheta / dt  # hence this calculates the speed

deltat = 0.005
t2 = 0
while True:
    rate(5)
    image2.pos = image2.pos + speed * deltat  # increments the position of image2
    t2 = t2 + deltat
Your problem is that image2.pos is a vector (that's the "(3,)" in the error message) but speed*deltat is an empty array (that's the "(0,)"): theta and t are reassigned to empty lists just before np.diff is called, so speed has no elements. More fundamentally, you can't add a vector and a scalar; instead of a scalar "speed" you need a vector velocity. There seem to be some errors in indentation in the program you posted, so there is some possibility I've misinterpreted what you're trying to do.
For VPython questions it's better to post to the VPython forum, where there are many more VPython users who will see your question than if you post to stackoverflow:
https://groups.google.com/forum/?fromgroups&hl=en#!forum/vpython-users
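As a sketch of that suggestion (classic VPython; the step size and stopping condition are my assumptions), you can drive the sphere parametrically along the ring and assign positions as vectors, avoiding the scalar/vector mismatch entirely:
from visual import *

x, y, z = 3, 0, 0
i = pi / 3

ring = curve(color=color.white)
for theta in arange(0, 2 * pi, 0.01):
    ring.append(pos=(sin(theta) * cos(i) + x, sin(theta) * sin(i) + y, cos(theta) + z))

image1 = sphere(pos=(x, y, z + 1), radius=0.05, color=color.yellow)  # theta = 0

theta = 0.0
dtheta = 0.01           # angular step per frame
while theta < pi:       # stop after moving 180 degrees
    rate(50)
    theta += dtheta
    image1.pos = vector(sin(theta) * cos(i) + x,
                        sin(theta) * sin(i) + y,
                        cos(theta) + z)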
