Hello everyone. I wrote this sample code for the 3R spatial geometry used in the inverse kinematics of a 6-axis robot.
Although the math looks fine to me, I am getting different values in my result.
import numpy as np
x=795
y=0
z=1264
d1=450
a1=155
a2=614
a3=np.sqrt(200**2+640**2)
print(a3)
# first joint angle
alpha = np.arctan(200/640)
#print(alpha)
theta1 = np.arctan(y/x)
v1 = np.cos(theta1) - a1
exd = x/v1
ezd = z - d1
v2 = exd*exd + ezd*ezd - a2*a2 - a3*a3
v3 = 2*a2*a3
v4 = v2/v3  # cos(theta3) from the law of cosines
#print('\n', v4)
theta3 = -np.arccos(v4)
theta2 = np.arctan2(ezd, exd) - np.arctan2(a3*np.sin(theta3), a2 + a3*np.cos(theta3))
theta1t1 = theta1*180/np.pi  # convert to degrees
theta2t2 = theta2*180/np.pi
theta3t3 = (theta3 + (np.pi/2 - alpha))*180/np.pi
print(theta1t1,'\n')
print(theta2t2,'\n')
print(theta3t3)
My calculations are based on the following snippets (an illustration of the arm geometry and my hand calculations, both attached as images).
The output is coming out to be
670.5221845696084
0.0
100.9534267368054
52.12008227279429
which I do not understand.
I have worked through the mathematics many times but can't get any other result.
Maybe something is wrong with how I interpret the output, or I am not handling some cases.
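One spot worth double-checking (this is my assumption based on the usual planar 3R reduction, not something confirmed by your snippets): exd divides by np.cos(theta1) - a1, whereas the typical reduction subtracts the shoulder offset a1 after projecting the wrist point into the first joint's plane:

# typical planar reduction: project the wrist point into the plane of the
# first joint, *then* subtract the shoulder link offset a1
exd = x/np.cos(theta1) - a1   # instead of exd = x/(np.cos(theta1) - a1)

Since y = 0 gives theta1 = 0, this would yield exd = 795 - 155 = 640 rather than 795/(1 - 155) ≈ -5.16, which changes theta2 and theta3 substantially.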
Related
I have a cumulative distribution function (CDF) made of 6 points. I have to interpolate it, so I used interp1d (from scipy.interpolate import interp1d); the result is the following:
the blue dots are the initial data and the red curve is after linear interpolation.
However, I am not really happy about it: especially between points 4 and 5, the assumption of a linear relation underestimates the real curve (if I think of this curve as a sigmoid or hyperbolic tangent). Therefore I tried interp1d again with quadratic and cubic interpolation, and the result is catastrophic:
the output makes no sense and is completely wrong, so my question is:
how can I make my original linear fit a bit smoother and more similar to a real cumulative function?
Thanks, Luigi
Try monotone interpolants: Akima or PCHIP (scipy has both).
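A minimal sketch with made-up six-point data (the x/y values here are placeholders, not your actual points):

import numpy as np
from scipy.interpolate import PchipInterpolator, Akima1DInterpolator

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # placeholder abscissae
y = np.array([0.0, 0.05, 0.2, 0.5, 0.9, 1.0])  # placeholder CDF values
xs = np.linspace(x.min(), x.max(), 200)
pchip = PchipInterpolator(x, y)(xs)    # monotone: never overshoots [0, 1]
akima = Akima1DInterpolator(x, y)(xs)  # smoother, but not guaranteed monotone

PCHIP preserves monotonicity, which is exactly the property a CDF must have, so it avoids the oscillation that plain quadratic/cubic interp1d produces.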
I'm having a weird problem using numpy's fft module. I have the following bit of test code:
import numpy as np
import scipy.io.wavfile
import matplotlib.pyplot as plt
fs, a = scipy.io.wavfile.read('test.wav') # import audio file
spectrum = np.fft.fft(a) # create spectrum
b = np.real(np.fft.ifft(spectrum)) # reconstruct signal
# Print power of original and output signal
print(np.average(a**2))
print(np.average(b**2))
It outputs:
1497.887578558565
4397203.934254291
As these values show, the output is much louder than the input. The documentation for numpy.fft.ifft states:
"This function computes the inverse of the one-dimensional n-point discrete Fourier transform computed by fft. In other words, ifft(fft(a)) == a to within numerical accuracy."
Thus the signal should be nearly identical. Yet they are obviously not.
What am I doing wrong here?
Okay, I managed to find the solution myself in the end.
The problem arises because the output of wavfile.read is an integer array. For some reason, the fft function appears to handle integers differently than floats. The problem is solved by casting a to np.float64.
Why this happens is still not quite clear to me, though.
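A plausible mechanism, assuming the file is 16-bit PCM (so wavfile.read returns int16): np.fft.fft promotes its input to floating point anyway, so b comes back as float64 and b**2 is fine, but a**2 is computed in int16 and silently wraps around before the average is taken. A tiny demonstration:

import numpy as np

a = np.array([20000, -25000], dtype=np.int16)  # typical 16-bit PCM sample values
print(a**2)                      # stays int16 and overflows silently
print(a.astype(np.float64)**2)   # [4.0e+08 6.25e+08], the true squares

So under this reading the "quiet" number is the input power, underestimated by overflow, rather than the output being too loud.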
I am working with numpy tensors of shape (N, 2, 128, 128).
When trying to visualize these as images (I reconstruct via ifft2), numpy and PyTorch seem to mix things up in a crazy manner...
I have checked with small dummy arrays: when I pass a numpy ndarray to a torch.FloatTensor, the values are exactly the same at the same positions (same shape!), but when I do an ifft2 on the torch tensor, the result is different than on the non-torch tensor! Can someone help me make sense of this?
A small reproducible example is:
import numpy as np
import torch
import matplotlib.pyplot as plt

x = np.random.rand(3, 2, 2, 2)
xTorch = torch.FloatTensor(x)
# visualize them in the interpreter: they are the same!

# now show the magnitude of an inverse Fourier transform
plt.imshow(np.abs(np.fft.ifft2(xTorch[0, 0, :, :] + 1j*xTorch[0, 1, :, :])))
plt.show()
plt.imshow(np.abs(np.fft.ifft2(x[0, 0, :, :] + 1j*x[0, 1, :, :])))
plt.show()
# they are not the same! What is the problem!?
I found out that if I use torch.Tensor.cpu(xTorch).detach().numpy() I can get the same result, but what does that mean?!
P.S.
Also, note that I know the correct visualization comes from x and not xTorch, so it seems that torch is changing something when I do the ifft2, or when I reconstruct the two channels... or maybe there is a problem/bug with complex numbers...
If you look inside np.abs(np.fft.ifft2(x[0,0,:,:]+1j*x[0,1,:,:])) and its xTorch counterpart, the values are so different that it is not just floating-point error; it is something serious, but I can't figure it out and it's driving me crazy.
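Two things worth ruling out (assumptions, since no accepted answer is shown here): torch.FloatTensor(x) casts the float64 numpy array down to float32, so the two pipelines do not start from bit-identical data, and mixing a torch tensor with the numpy complex scalar 1j may not behave as expected on older PyTorch versions. The idiomatic hand-off to numpy code is:

# explicit conversion: detach from autograd, move to host memory,
# and view as a numpy array (still float32 after the FloatTensor cast)
x_np = xTorch.detach().cpu().numpy()
plt.imshow(np.abs(np.fft.ifft2(x_np[0, 0, :, :] + 1j*x_np[0, 1, :, :])))

which is equivalent to the torch.Tensor.cpu(xTorch).detach().numpy() call you found, just written as methods on the tensor.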
I'm working on a 3D reconstruction system and want to generate a triangular mesh from the registered point cloud data using Python 3. My objects are not convex, so the marching cubes algorithm seems to be the solution.
I prefer to use an existing implementation of such a method, so I tried scikit-image and Open3D, but neither API accepts raw point clouds as input (note that I'm not an expert in those libraries). My attempts to convert my data failed, and I'm running out of ideas, since the documentation does not clarify the input format of the functions.
These are my desired snippets where pcd_to_volume is what I need.
scikit-image
import numpy as np
from skimage.measure import marching_cubes_lewiner
N = 10000
pcd = np.random.rand(N,3)
def pcd_to_volume(pcd, voxel_size):
    # TODO
volume = pcd_to_volume(pcd, voxel_size=0.05)
verts, faces, normals, values = marching_cubes_lewiner(volume, 0)
open3d
import numpy as np
import open3d
N = 10000
pcd = np.random.rand(N,3)
def pcd_to_volume(pcd, voxel_size):
    # TODO
volume = pcd_to_volume(pcd, voxel_size=0.05)
mesh = volume.extract_triangle_mesh()
I'm not able to find a way to properly write the pcd_to_volume function. I do not prefer one library over the other, so both solutions are fine for me.
Do you have any suggestions for properly converting my data? A point cloud is an Nx3 matrix with dtype=float.
Do you know another implementation [of the marching cubes algorithm] that works on raw point cloud data? I would prefer libraries like scikit-image and Open3D, but I will also take GitHub projects into account.
Do you know another implementation [of the marching cube algorithm] that works on raw point cloud data?
Hoppe's paper Surface reconstruction from unorganized points might contain the information you need, and it's open source.
Also, the latest Open3D seems to contain surface reconstruction algorithms such as alphaShape, ballPivoting and PoissonReconstruction.
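For example, with a recent Open3D (the module paths below assume Open3D >= 0.9; check your version), Poisson reconstruction can be driven directly from a point cloud:

import numpy as np
import open3d as o3d

points = np.random.rand(10000, 3)
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)
pcd.estimate_normals()  # Poisson needs (consistently oriented) normals

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)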
From what I know, marching cubes is usually used for extracting a polygonal mesh of an isosurface from a three-dimensional discrete scalar field (that's what you mean by volume). The algorithm does not work on raw point cloud data.
Hoppe's algorithm works by first generating a signed distance function (SDF) volume, and then passing it to marching cubes. This can be seen as one implementation of your pcd_to_volume, and it's not the only way!
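To make the pcd_to_volume idea concrete, here is a deliberately crude sketch that bins the cloud into a binary occupancy grid instead of a real signed distance field; a proper SDF à la Hoppe would give much cleaner isosurfaces:

import numpy as np
from skimage.measure import marching_cubes_lewiner  # marching_cubes in newer scikit-image

def pcd_to_volume(pcd, voxel_size):
    # occupancy grid: 1.0 wherever at least one point falls in the voxel
    mins = pcd.min(axis=0)
    idx = np.floor((pcd - mins) / voxel_size).astype(int)
    volume = np.zeros(idx.max(axis=0) + 1, dtype=float)
    volume[tuple(idx.T)] = 1.0
    return volume

pcd = np.random.rand(10000, 3)
volume = pcd_to_volume(pcd, voxel_size=0.05)
verts, faces, normals, values = marching_cubes_lewiner(volume, 0.5)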
If the raw point cloud is all you have, then the situation is a little constrained. As you might see, the Poisson reconstruction and screened Poisson reconstruction algorithms both implement pcd_to_volume in their own way (they are highly related). However, they need additional point normal information, and the normals have to be consistently oriented (for consistent orientation you can read this question).
While some Delaunay-based algorithms (they do not use marching cubes), like alphaShape and this one, may not need point normals as input, for surfaces with complex topology it's hard to get a satisfactory result due to the orientation problem. Graph-cut methods can use visibility information to solve that.
Having said that, if your data comes from depth images, you will usually have visibility information, and you can use a TSDF to build a good surface mesh. Open3D has already implemented that.
I'm currently using SVC to separate two classes of data (the features below are named data and the labels condition). After fitting the data using GridSearchCV, I get a classification score of about .7, and I'm fairly happy with that number. After that, though, I went to get the relative distances from the hyperplane for data from each class using grid.best_estimator_.decision_function() and plotted them in a boxplot and a histogram to get a better idea of how much overlap there is. My problem is that in the histogram and the boxplot these look perfectly separable, which I know is not the case. I'm sure I'm calling decision_function() incorrectly, but I'm not sure how to do it properly.
from sklearn.svm import SVC
from sklearn.model_selection import KFold, GridSearchCV
import matplotlib.pyplot as plt
import seaborn as sb

cv = KFold(n_splits=4, shuffle=True)
svc = SVC(kernel='linear', probability=True, decision_function_shape='ovr')
C_range = [1, .001, .005, .01, .05, .1, .5, 5, 50, 10, 100]
param_grid = dict(C=C_range)
grid = GridSearchCV(svc, param_grid=param_grid, cv=cv, n_jobs=4, iid=False, refit=True)
grid.fit(data, condition)
print(grid.best_params_)
print(grid.best_score_)
x = grid.best_estimator_.decision_function(data)
plt.hist(x)
sb.boxplot(condition, x)
sb.swarmplot(condition, x)
In the histogram and box plots, it looks like almost all of the points have a distance of exactly +1 or -1, with nothing in between.
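One hedged suggestion (an assumption about the cause, since decision values pinned at ±1 on training data are characteristic of margin support vectors): compute the decision values on held-out folds instead of on the data the estimator was fitted on, e.g. with cross_val_predict:

from sklearn.model_selection import cross_val_predict

# decision values for each sample, produced by a model that never saw it
svc = SVC(kernel='linear', probability=True, decision_function_shape='ovr',
          C=grid.best_params_['C'])
x_cv = cross_val_predict(svc, data, condition, cv=cv, method='decision_function')
plt.hist(x_cv)

If the held-out histogram shows the overlap you expect, the ±1 pattern was an artifact of scoring the training set rather than of the decision_function call itself.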