I am trying to calculate the X and Y speed for small objects that are attracted by the gravity of one big object. This video shows how my calculation works. As you can see, after a while every small object collapses into one point. I want to achieve motion like in this video. How may I calculate the X and Y speed? I have saved the coordinates of all the small objects in a list, and every game cycle I add the X speed to the X coordinate and the Y speed to the Y coordinate.
From what I can tell from the videos, in yours the speed is variable, and in the video you're aiming for the velocities are either -1 or 1? It seems that in both cases (more so with your version, admittedly) the small objects converge to a point.
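For reference, here is a minimal sketch of the per-cycle update the question describes, assuming inverse-square Newtonian gravity toward a single big object; the constant G, the big object's position, and the object fields are placeholders, not taken from the original code.

import math

G = 500.0                    # assumed gravitational strength
BIG_X, BIG_Y = 400.0, 300.0  # assumed position of the big object

def update(small_objects, dt):
    # One game cycle: accelerate each small object toward the big one,
    # then integrate velocity into position (simple Euler integration).
    for obj in small_objects:  # obj: dict with keys x, y, vx, vy
        dx = BIG_X - obj["x"]
        dy = BIG_Y - obj["y"]
        dist = math.hypot(dx, dy)
        if dist < 1e-6:
            continue  # avoid dividing by zero at the center
        accel = G / (dist * dist)            # inverse-square magnitude
        obj["vx"] += accel * dx / dist * dt  # (dx, dy)/dist is the unit direction
        obj["vy"] += accel * dy / dist * dt
        obj["x"] += obj["vx"] * dt
        obj["y"] += obj["vy"] * dt

Whether the objects orbit or fall into the center depends mainly on their initial tangential velocity: with no sideways velocity, every object falls straight in, which may be what happens in the first video.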
I am trying to implement EKF SLAM on a robot using Python. The robot returns an estimate of its current x and y position and an angle theta (the direction the robot is facing). For my SLAM, however, I need this as left and right motor movements, which can be understood the way a tank drives: one side moves more than the other, which results in a turning motion.
Is there a formula to calculate L and R from x, y, and theta?
This is usually modeled as a differential drive robot. Unfortunately, there isn't an equation directly relating x, y, and theta to the wheel angular excursions (e.g. encoder positions) L and R. However, there is a velocity-based relationship, which can be seen here: https://www.mathworks.com/help/robotics/ug/mobile-robot-kinematics-equations.html.
The system is nonholonomic, which makes designing an observer harder, but you can use local linear approximations to some degree.
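As a hedged illustration of that velocity-based relationship: given the body velocities (v, omega) you can recover left and right wheel rates, and integrating those over time gives encoder-like excursions. The wheel radius and track width below are made-up parameters, and the pose-difference helper is only a local approximation.

import math

WHEEL_RADIUS = 0.05  # meters, assumed
TRACK_WIDTH = 0.30   # meters between the wheels, assumed

def body_to_wheel_rates(v, omega):
    # Differential-drive inverse kinematics:
    # v is the forward speed (m/s), omega the yaw rate (rad/s).
    # Returns (left, right) wheel angular velocities in rad/s.
    v_left = v - omega * TRACK_WIDTH / 2.0
    v_right = v + omega * TRACK_WIDTH / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

def pose_deltas_to_wheel_rates(dx, dy, dtheta, dt):
    # Approximate (v, omega) from two successive pose estimates, then
    # convert. Only valid for small dt and forward motion (the hypot
    # drops the sign) -- this is the local linear approximation
    # mentioned above, not an exact inverse.
    v = math.hypot(dx, dy) / dt
    omega = dtheta / dt
    return body_to_wheel_rates(v, omega)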
My question is: I want to calculate the speed of my arm for slap detection. I am using OpenPose with the body_25 model to get the body points (25 points in total), and from these, together with the time, I want to deduce the speed of my arm. I googled through OpenPose, Stack Overflow, and GitHub, but could not succeed.
Velocity = Distance / Time = dx/dt
dx = frame_3_bodypoints - frame_1_bodypoints;
dt = ?
I don't know how to find dt from OpenPose. Is there a way I can find it? Any thoughts would be a great help!
I've never used OpenPose. But Newtonian physics would indicate that a slap corresponds to a sudden change in velocity of the hand.
I think it's a reasonable first approximation to assume that the Δt between frames is constant. Instantaneous variation in frame rate is called jitter. I would expect jitter to be small for modern recording devices. In any case, I don't know how to get instantaneous frame rate with the tools (OpenCV, PIL) that I am familiar with. I couldn't find any references to frame rate or time in the OpenPose docs.
For calculating velocity and delta-velocity, you have choices. Straight-up linear velocity of the hand is probably the easiest. For position changes, use the Euclidean distance between positions (Δs = sqrt((x2-x1)^2 + (y2-y1)^2)).
You could also calculate an angular velocity between the hand and the elbow, but that would be a little more involved and prone to noise.
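A minimal sketch of the linear-velocity approach, assuming a constant frame rate (so dt = 1/fps) and that you already have per-frame (x, y) hand keypoints extracted from the OpenPose output; the FPS value and the position lists are placeholders.

import math

FPS = 30.0       # assumed constant frame rate
DT = 1.0 / FPS   # time between consecutive frames

def hand_speed(p_prev, p_curr):
    # Speed of the hand between two consecutive frames.
    # p_prev, p_curr: (x, y) keypoint positions in pixels.
    ds = math.hypot(p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])
    return ds / DT  # pixels per second

def speed_changes(positions):
    # Frame-to-frame speeds, then the change in speed between frames.
    # A large value in the returned list is the sudden change in hand
    # velocity that, per the answer above, may indicate a slap.
    speeds = [hand_speed(a, b) for a, b in zip(positions, positions[1:])]
    return [abs(b - a) for a, b in zip(speeds, speeds[1:])]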
I made an object tracker that calculates the position of an object recorded in a live camera feed using stereoscopic cameras. The math is simple once you know the camera distance and orientation. However, I thought it would be nice to be able to quickly extract all these parameters, so that when I change my setup or cameras I can quickly calibrate again.
To calculate the object position I made some simplifications/assumptions, which made the math easier: the cameras are in the same YZ plane, so there is only a distance in x between them. Their tilt is also just in the XY plane.
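(For concreteness: under those assumptions, with two parallel cameras separated only along X, the simplified triangulation reduces to depth from disparity. A minimal sketch, with a made-up focal length and baseline; pixel coordinates are measured from each image center.)

FOCAL_PX = 800.0  # focal length in pixels, assumed
BASELINE = 0.12   # camera separation along X in meters, assumed

def triangulate(x_left, x_right, y):
    # x_left, x_right: horizontal pixel coordinates of the same object
    # in the left and right images; y: vertical pixel coordinate.
    # Returns (X, Y, Z) in meters in the left camera's frame.
    disparity = x_left - x_right
    if disparity == 0:
        raise ValueError("zero disparity: object at infinity")
    Z = FOCAL_PX * BASELINE / disparity
    X = x_left * Z / FOCAL_PX
    Y = y * Z / FOCAL_PX
    return X, Y, Z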
To reverse the triangulation, I thought a test pattern (a square) of 4 points with known distances to each other would suffice. Ideally I would like to get the cameras' positions (their distances to the test pattern and to each other), their rotation about X (and maybe Y and Z, if applicable/possible), as well as their view angle (to translate pixel positions to real-world distances; that should be a camera constant, but in case I change cameras it is quite a bit of work to determine accurately).
I started with the same trigonometric calculations, but I always end up missing parameters. I am wondering if there is an existing solution or a solid approach. If I need to add parameters (like distances, which are easy enough to measure), that is no problem (my calculations didn't give me any simple equations with that possibility, though).
I also read about homography in OpenCV, but it seems to apply to 2D space only, or does it?
Any help is appreciated!
I have got this problem: there is a set of data points in a spherical coordinate system - a local one, not a faithful match for the geographic or mathematical convention - and I'm trying to convert it to a Cartesian system so that I can preview, in any drawing program, the shape that should emerge from these points.
The points are collected by a meter with a rotating laser head (and are thus slightly noisy). The head rotates about two axes, called phi and theta, and measures the distance r.
Where:
phi - the left-right rotation (-90 to 90)
theta - the up-down rotation (-90 to 90)
r - the distance
This can be seen in the figure below:
I tried to convert the data to Cartesian (xyz) according to the following formulas:
Unfortunately, every time I run them something goes wrong, and the picture I get is incorrect.
For a sample collection:
sample
I get the following picture (seen from the top):
The expected result should be a rectangular tub (with an open top). The first arc (at the point where the scan has not yet passed) is the so-called lens effect, resulting from the fact that the meter was close to the wall; what puzzles me is the end of the graph, where the data are arranged in a completely unexpected way.
With this number of points it is hard for me to figure out what causes the failure: a bad conversion of the data, or simply the meter measuring this way. I would be grateful for a verification of my way of thinking and for any advice.
Thank you in advance.
I think I am late to answer this question.
I can't see the images. Anyway, the standard spherical-to-Cartesian formulas (a sketch follows below) will give you a clear idea of how to convert spherical coordinate data into the Cartesian coordinate system.
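A minimal sketch of one common convention for such a two-axis head, assuming phi rotates about the vertical axis and theta tilts the beam up and down, with z pointing straight ahead at phi = theta = 0; the exact formula depends on the meter's mechanics (in particular which rotation is applied first), so treat this as an assumption to verify against the hardware.

import math

def spherical_to_cartesian(phi_deg, theta_deg, r):
    # phi_deg:   left-right rotation in degrees (-90 to 90)
    # theta_deg: up-down rotation in degrees (-90 to 90)
    # r:         measured distance
    phi = math.radians(phi_deg)
    theta = math.radians(theta_deg)
    x = r * math.cos(theta) * math.sin(phi)  # left-right
    y = r * math.sin(theta)                  # up-down
    z = r * math.cos(theta) * math.cos(phi)  # straight ahead
    return x, y, z

If the result is still warped, a likely culprit is the order of the two rotations: on a gimbaled head, whether phi or theta is applied first changes the conversion, and getting it wrong can produce arc-shaped distortions like the one described in the question.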
So say I have an image that I want to "pixelate". I want this sharp image represented by a grid of, say, 100 x 100 squares. So if the original photo is 500 px x 500 px, each square is 5 px x 5 px. Each square would then have the one color corresponding to the 5 px x 5 px group of pixels it swaps in for...
How do I figure out what this one color, the one most representative of the stuff it covers, is? Do I just take the R, G, and B numbers for each of the 25 pixels and average them? Or is there some other, more obscure way I should know about? What is conventionally used in "pixelation" functions, say like in Photoshop?
If you want to know about the 'theory' of pixelation, read up on resampling (and downsampling in particular). Pixelation algorithms are simply downsampling an image (using some downsampling method) and then upsampling it using nearest-neighbour interpolation. Note that in code these two steps may be fused into one.
For downsampling in general, to downsample by a factor of n the image is first filtered by an appropriate low-pass filter, and then one sample out of every n is taken. An "ideal" filter to use is the sinc filter, but because of issues with implementing it, the Lanczos filter is often used as a close alternative.
However, for almost all purposes when doing pixelization, using a simple box blur should work fine, and is very simple to implement. This is just an average of nearby pixels.
If you don't need to change the output size of the image, then this means you divide the image into blocks (the big resulting pixels) which are k×k pixels, and then replace all the pixels in each block with the average value of the pixels in that block.
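A minimal sketch of that block-average pixelation using NumPy, assuming the image dimensions are exact multiples of the block size k (k = 5 in the question's example):

import numpy as np

def pixelate(image, k):
    # Replace each k-by-k block with its average color.
    # image: (H, W, 3) uint8 array with H and W divisible by k.
    h, w, c = image.shape
    # Group the pixels into (h//k) x (w//k) blocks of k*k pixels each.
    blocks = image.reshape(h // k, k, w // k, k, c)
    # Average over each block's two pixel axes, then broadcast the block
    # color back to full size (nearest-neighbour upsampling).
    means = blocks.mean(axis=(1, 3), keepdims=True).astype(np.uint8)
    return np.broadcast_to(means, blocks.shape).reshape(h, w, c)

So for the question's 500 x 500 image, pixelate(img, 5) averages the R, G, and B channels over each 5 x 5 block and paints the whole block with that color.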
When the source and target grids are so evenly divisible and aligned, most algorithms give similar results. If the grids are fixed, go for simple averages.
In other cases, especially when resizing by a small percentage, the quality difference is quite evident. The simplest enhancement over the simple average is weighting each pixel value by how much of it is contained in the target pixel's area (see the sketch below).
For more algorithms, check multivariate interpolation.
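A hedged sketch of that area-weighting idea in one dimension (for a box filter, each axis can be handled independently); the overlap of each source pixel's interval with the target pixel's interval is its weight:

def resample_row_area_weighted(row, new_len):
    # Downsample a 1-D sequence of pixel values: each output pixel is
    # the average of the source pixels, weighted by how much of each
    # source pixel's interval falls inside the output pixel's interval.
    scale = len(row) / new_len  # source pixels per target pixel
    out = []
    for i in range(new_len):
        start, end = i * scale, (i + 1) * scale  # target interval in source units
        total = 0.0
        j = int(start)
        while j < end and j < len(row):
            # Overlap of source pixel [j, j+1) with [start, end).
            overlap = min(j + 1, end) - max(j, start)
            total += row[j] * overlap
            j += 1
        out.append(total / scale)
    return out

With an integer scale the weights are all 1 and this reduces to the simple block average above; with a fractional scale the boundary pixels get partial weights, which is where the quality difference shows up.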