Can you help a newbie out - vpython

The position of an aerial imaging drone can be described by the following expression:
r(t) = (3 m) cos((0.6π rad/s)t) î + (2 m) sin((0.6π rad/s)t) ĵ + (0.05 m/s²)t² k̂.
Note the arguments of the trigonometric functions are in radians.
GlowScript Simulations:
i. Use GlowScript to simulate the motion of the drone. You may use a simple box
or sphere object as the drone. Be sure to include a ground/floor so that the
motion of the drone is clearly visible.
ii. Using your GlowScript simulation, calculate the position, velocity, and
acceleration of the drone at t = 10 s. What value of dt is appropriate for these
calculations?
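A minimal GlowScript/VPython sketch of part (i), with finite-difference estimates for part (ii), might look like the following (it assumes the vpython package or the glowscript.org environment; the ground size, colors, and the mapping of k̂ to VPython's y axis are my choices, not part of the problem):

from vpython import *

# Ground plane so the climb along k-hat (mapped to VPython's y axis) is visible.
ground = box(pos=vector(0, -0.5, 0), size=vector(10, 0.1, 10), color=color.green)
drone = sphere(pos=vector(3, 0, 0), radius=0.2, color=color.red, make_trail=True)

def r(t):
    # r(t) = (3 m) cos(0.6 pi t) i + (2 m) sin(0.6 pi t) j + (0.05 m/s^2) t^2 k,
    # with i -> x, j -> z, and k (up) -> y in VPython's coordinates.
    return vector(3 * cos(0.6 * pi * t), 0.05 * t ** 2, 2 * sin(0.6 * pi * t))

t = 0
dt = 0.001  # small next to the ~3.3 s orbital period, so the finite
            # differences below approximate v and a well
prev_pos = r(0)
prev_vel = (r(dt) - r(0)) / dt
while t < 10:
    rate(1000)
    t = t + dt
    drone.pos = r(t)
    vel = (drone.pos - prev_pos) / dt  # numerical velocity
    acc = (vel - prev_vel) / dt        # numerical acceleration
    prev_pos = drone.pos
    prev_vel = vel

print("t =", t, " r =", drone.pos, " v =", vel, " a =", acc)

For the dt question: it just needs to be much smaller than the ~3.3 s period of the circular motion; anything around 0.01 s or below should change the printed finite-difference values only marginally.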

Related

How to Transform X and Y Coordinates to Left and Right Motor Movement

I am trying to implement EKF SLAM on a robot using Python. The robot returns an estimate of its current x and y position and an angle theta (the direction the robot is facing). For my SLAM, however, I need this as left and right motor movements, which can be understood the way a tank drives: one side moves more than the other, which results in a turning motion.
Is there a formula to calculate l and r from x, y, and theta?
This is usually modeled as a differential-drive robot. Unfortunately, there isn't an equation directly relating x, y, and theta to the wheel angular excursions (e.g., encoder positions) L and R. However, there is a velocity-based relationship, which can be seen here: https://www.mathworks.com/help/robotics/ug/mobile-robot-kinematics-equations.html.
The system is nonholonomic, which makes designing an observer harder, but you can use local linear approximations to some degree.
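As a rough sketch of that velocity-level relationship (the standard unicycle/differential-drive model, not code from the linked page; wheel_radius and track_width are hypothetical parameters you would measure on your robot):

import math

def body_to_wheels(v, omega, wheel_radius, track_width):
    # Map body linear speed v [m/s] and yaw rate omega [rad/s] to
    # left/right wheel angular speeds [rad/s].
    w_left = (v - omega * track_width / 2.0) / wheel_radius
    w_right = (v + omega * track_width / 2.0) / wheel_radius
    return w_left, w_right

def world_rates_to_body(x_dot, y_dot, theta):
    # Project the world-frame velocity onto the heading. The nonholonomic
    # (no side-slip) constraint means only the component along theta survives.
    return x_dot * math.cos(theta) + y_dot * math.sin(theta)

Integrating w_left and w_right over each timestep gives the encoder excursions L and R; the reverse direction (pose from L and R) is the usual odometry update.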

Can the camera.lookAt function be computed from an angle and an axis of rotation, given a target point and the camera's forward vector?

I am trying to understand three.js's camera.lookAt function, and to implement my own.
I'm using eye = camera position, target = the look-at point, and up is always (0, 1, 0). A friend proposed an obvious way to rotate a camera to look at a point in space: compute the desired forward as target - eye, then compute the angle between the camera's forward vector (0, 0, -1) and the desired forward (using the atan2 method described in these answers); this is the angle of rotation. The axis of rotation would be the cross product of the forward vector and the desired forward. I would then use a function like setFromAxisAngle to get the resulting quaternion.
Would this work in theory? When testing it against the canonical lookAt method, which uses eye, up, and target and does z = (eye - target), x = Cross(up, z), y = Cross(x, z) (also, why is it eye - target instead of target - eye?), I see small (< 0.1) differences.
I personally think the implementation of three.js's Matrix4.lookAt() method is somewhat confusing; it's also in the wrong class and should be placed in Matrix3. Anyway, a clearer implementation can be found in MathGeoLib, a C++ library for linear algebra and geometry manipulation for computer graphics.
https://github.com/juj/MathGeoLib/blob/ae6dc5e9b1ec83429af3b3ba17a7d61a046d3400/src/Math/float3x3.h#L167-L198
https://github.com/juj/MathGeoLib/blob/ae6dc5e9b1ec83429af3b3ba17a7d61a046d3400/src/Math/float3x3.cpp#L617-L678
A lookAt() method should first build an orthonormal linear basis A (localRight, localUp, localForward) for the object's local space. Then it builds an orthonormal linear basis B (worldRight, worldUp, targetDirection) for the desired target orientation. The primary task of lookAt() is to map from basis A to basis B. This is done by multiplying m1 (basis B) by the inverse of m2 (basis A). Since these matrices are orthonormal, the inverse is computed by a simple transpose.
m1.makeBasis( worldRight, worldUp, targetDirection );
m2.makeBasis( localRight, localUp, localForward );
this.multiplyMatrices( m1, m2.transpose() );
this refers to an instance of a 3x3 matrix class.
I suggest you carefully study the well-documented C++ code in order to understand each single step of the method.
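As a sketch of the basis-mapping idea in plain numpy (the function name and conventions here are mine, not three.js API; it assumes the OpenGL/three.js convention that the camera looks down its local -z axis, which is also why the canonical method uses eye - target: that difference is the camera's +z column):

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def look_at_rotation(eye, target, world_up=np.array([0.0, 1.0, 0.0])):
    # Build basis B for the desired orientation; basis A for an
    # axis-aligned camera is the identity, so B * A^T reduces to B.
    forward = normalize(target - eye)               # desired viewing direction
    right = normalize(np.cross(forward, world_up))  # degenerate if forward || up
    up = np.cross(right, forward)
    # Columns are where the camera's local x, y, z axes land in world
    # space; local forward is -z, hence the -forward column.
    return np.column_stack((right, up, -forward))

R = look_at_rotation(np.array([0.0, 2.0, 5.0]), np.zeros(3))
print(R @ np.array([0.0, 0.0, -1.0]))  # ~ unit vector from eye toward target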

Dynamic Camera Calibration OpenCV Python 3

I am planning on making a robotic arm with a camera mounted on it, and I am using OpenCV with Python 3 for the image processing.
I want the arm to detect a point on the ground and the servos to move accordingly. I have completed the detection and the calculation of world coordinates, as well as the inverse kinematics that is required.
The problem is that I have calibrated the camera at a fixed height (20 cm), so the correct world coordinates are obtained only at a height of 20 cm. I want the camera to keep correcting the reading every 2 s as it moves toward the ground.
Is there a way to do the calibration dynamically and give dynamic coordinates to my arm? I don't know if this is the right approach; if there is another method, please help.
I am assuming you first undistort the image with the undistort function and then use the rotation vector (rvec) and translation vector (tvec) along with the distortion coefficients to get the world coordinates. The correct coordinates are obtained only at that specific height because rvec and tvec change according to the square size of the chessboard used for calibration.
A smart way to overcome this is to eliminate the rotation and translation vectors entirely.
Since the camera's intrinsic calibration constants remain the same at any height or rotation, they can be reused here. Also, rather than recalibrating every 2 seconds (which would consume too much CPU), use the method below to get the values directly!
Let's say (img_x, img_y) is the image coordinate you need to transform to the world coordinate (world_x, world_y), and cameraMatrix is your camera matrix. For this method you need to know distance_cam, the perpendicular distance of your object from the camera.
Using Python and OpenCV, use the following code:
import numpy as np
from numpy.linalg import inv
# cameraMatrix: your 3x3 intrinsic matrix; distance_cam: perpendicular
# distance of the object from the camera (defined above).
img_x, img_y = 20, 30  # your image coordinates go here
pixel_coord = np.array([[img_x], [img_y], [1]])  # 3x1 homogeneous pixel vector
world_coord = inv(cameraMatrix) @ pixel_coord  # cameraMatrix^(-1) * coordinates
world_coord = world_coord * distance_cam  # scale the unitless ray by the depth
world_x = world_coord[0][0]
world_y = world_coord[1][0]  # row 1, column 0 of the 3x1 result
print(world_x, world_y)
Note that the units of the world coordinates are up to you: after multiplying by the inverse of the camera matrix you have the unitless ratio x/z, so the result picks up the units of distance_cam. You can choose distance_cam in any unit and the end result will be in that unit; if distance_cam is in mm, then world_x and world_y are also in mm.
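To get the "correct the reading as the arm descends" behavior, one hedged approach (a sketch only; the intrinsics and the list of heights below are illustrative stand-ins for your calibrated cameraMatrix and whatever your forward kinematics reports every 2 s) is to keep the intrinsics fixed and just update distance_cam from the arm's current height each cycle:

import numpy as np
from numpy.linalg import inv

# Hypothetical intrinsics for illustration; use your calibrated cameraMatrix.
cameraMatrix = np.array([[800.0, 0.0, 320.0],
                         [0.0, 800.0, 240.0],
                         [0.0, 0.0, 1.0]])

def pixel_to_world(img_x, img_y, distance_cam):
    # Back-project the pixel through the intrinsics, then scale the
    # unitless ray by the camera-to-object distance.
    ray = inv(cameraMatrix) @ np.array([[img_x], [img_y], [1.0]])
    point = ray * distance_cam
    return point[0, 0], point[1, 0]

# Stand-ins for the heights your forward kinematics would report every 2 s.
for height_cm in (20.0, 15.0, 10.0, 5.0):
    print(height_cm, pixel_to_world(400, 300, height_cm))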

Estimate the slope of the straight part of a Boltzmann curve

I was working with a dataset and found the curve to be sigmoidal. I have fitted the curve and got the equation A2+((A1-A2)/1+exp((x-x0)/dx)), where:
x0: midpoint of the curve
dx: slope of the curve
I need to find the slope and midpoint in order to give a generalized equation. Any suggestions?
You should be able to simplify the modeling of the sigmoid with a standard logistic form. The linked source includes R code showing how to fit your data to the sigmoid curve, which you can adapt to whatever language you're writing in; it also notes a more general form you can adapt that R code to solve for. The nice thing about these general functions is that you can derive the slope analytically from them. Also note that the midpoint of the sigmoid is where the second derivative is 0 (where it changes from negative to positive, or vice versa).
Assuming your equation is a misprint of
A2+(A1-A2)/(1+exp((x-x0)/dx))
then your graph does not reflect zero residual, since in your graph the upper shoulder is sharper than the lower shoulder.
Likely the problem is your starting values. Try using the native R function SSfpl, as in
nls(y ~ SSfpl(x,A2,A1,x0,dx))
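If you are not tied to R, a Python sketch of the same fit (synthetic stand-in data; replace x_data and y_data with your measurements) also gives the slope of the straight part directly, since differentiating the Boltzmann form shows the slope at the midpoint x0 is (A2 - A1)/(4*dx):

import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, A1, A2, x0, dx):
    return A2 + (A1 - A2) / (1 + np.exp((x - x0) / dx))

# Synthetic data standing in for your measurements.
x_data = np.linspace(-5, 5, 100)
y_data = boltzmann(x_data, 1.0, 9.0, 0.5, 0.8) + np.random.normal(0, 0.05, 100)

# Seed starting values from the data; poor starts are the usual reason
# these fits fail, as noted above.
p0 = [y_data.min(), y_data.max(), np.median(x_data), 1.0]
(A1, A2, x0, dx), _ = curve_fit(boltzmann, x_data, y_data, p0=p0)

print("midpoint:", x0, "slope at midpoint:", (A2 - A1) / (4 * dx))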

Given a graph, is it possible to build a trapezoidal map in linear time?

[This is regarding computational geometry in CS.]
Let's say I have a graph G which contains v vertices and e edges, for instance a Voronoi diagram VD(G).
I'd like to build a trapezoidal map out of my given graph.
Is it possible to build the trapezoidal map in linear time for a given graph, instead of the regular O(n log n) construction time?
I have been thinking about a sweep-line trapezoidal map construction, where for each edge the sweep line would construct the upper and lower sides.
Thanks in advance.
No. The graph may consist of v/2 horizontal segments stacked on top of each other, and building the trapezoidal map recovers these segments sorted by height. Any construction algorithm could therefore be used to sort, so the comparison-sort lower bound applies and construction takes at least c·v·log v time.
