I want to use the RANSAC algorithm to segment the ground plane from LiDAR rings. I use python-pcl to do that, but I get wrong results, as the pictures below show.
As is well known, LiDAR data has many rings on the ground plane. The segmentation cannot find the correct ground plane; instead it finds a plane above the ground. My guess is that the ground returns of the LiDAR are very sparse, while the plane above the ground has more points than the ground itself, so the algorithm converges on the wrong plane. The code is listed below:
seg = point_cloud.make_segmenter()      # plane segmentation object
seg.set_optimize_coefficients(True)     # refine the model coefficients
seg.set_model_type(pcl.SACMODEL_PLANE)  # fit a plane model
seg.set_method_type(pcl.SAC_RANSAC)     # use RANSAC
seg.set_distance_threshold(0.1)         # inlier distance threshold
indices, model = seg.segment()          # inlier indices and plane coefficients
I am not certain whether the problem is what I guessed, so if anyone has met this problem before, please tell me. I also don't know how to solve it; there is very little information about segmenting LiDAR rings. Does anyone know how to solve this?
Also, are there other methods for LiDAR ground segmentation for which I can get code?
Try this -
from mpl_toolkits.mplot3d.axes3d import *
import matplotlib.pyplot as plt
import numpy as np
from sklearn import linear_model

fig = plt.figure("Pointcloud")
ax = Axes3D(fig)
ax.grid = True
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")

xyz = get_points()  # here xyz is an (N, 3) numpy array
if xyz.size > 10:
    XY = xyz[:, :2]
    Z = xyz[:, 2]

    # Fit z = a*x + b*y + d with RANSAC, treating the ground as inliers
    ransac = linear_model.RANSACRegressor(residual_threshold=0.01)
    ransac.fit(XY, Z)
    inlier_mask = ransac.inlier_mask_
    outlier_mask = np.logical_not(inlier_mask)

    inliers = np.zeros(shape=(len(inlier_mask), 3))
    outliers = np.zeros(shape=(len(outlier_mask), 3))
    a, b = ransac.estimator_.coef_
    d = ransac.estimator_.intercept_
    for i in range(len(inlier_mask)):
        if not outlier_mask[i]:
            inliers[i] = xyz[i]
        else:
            outliers[i] = xyz[i]

    min_x = np.amin(inliers[:, 0])
    max_x = np.amax(inliers[:, 0])
    min_y = np.amin(inliers[:, 1])
    max_y = np.amax(inliers[:, 1])

    x = np.linspace(min_x, max_x)
    y = np.linspace(min_y, max_y)
    X, Y = np.meshgrid(x, y)
    Z = a * X + b * Y + d

    AA = ax.plot_surface(X, Y, Z, cmap='binary', rstride=1, cstride=1, alpha=1.0)
    BB = ax.scatter(outliers[:, 0], outliers[:, 1], outliers[:, 2], c='k', s=1)
    CC = ax.scatter(inliers[:, 0], inliers[:, 1], inliers[:, 2], c='green', s=1)

    plt.show()
Or please provide your dataset. Also, play around with the RANSAC parameters (the residual_threshold above, for example).
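If the sparse ground really is losing the RANSAC vote to a denser plane above it, another thing worth trying is to crop the cloud to its lowest points before segmenting, so the ground dominates. This is only a rough sketch with python-pcl (the 0.5 m margin above the minimum z is an arbitrary value to tune, and point_cloud is assumed to be the original cloud from the question):

import numpy as np
import pcl

# Keep only points near the lowest z value so the ground plane
# holds the majority of the candidate points.
arr = point_cloud.to_array()
z_min = arr[:, 2].min()
low = arr[arr[:, 2] < z_min + 0.5]   # 0.5 m margin: tune for your data

low_cloud = pcl.PointCloud()
low_cloud.from_array(low.astype(np.float32))

seg = low_cloud.make_segmenter()
seg.set_optimize_coefficients(True)
seg.set_model_type(pcl.SACMODEL_PLANE)
seg.set_method_type(pcl.SAC_RANSAC)
seg.set_distance_threshold(0.1)
indices, model = seg.segment()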
How can I compute the Euclidean distance to the decision boundary of the EllipticEnvelope? Here is my code:
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.covariance import EllipticEnvelope
from sklearn.model_selection import train_test_split
feature, output = "temperature", "consumption"
data = pd.DataFrame(np.random.normal(0,15, size=(2355,2)), columns=[feature, output])
X = data[[feature, output]]
X_train, X_test = train_test_split(X, shuffle=True, train_size=0.8)
model = EllipticEnvelope(contamination=0.18)
model.fit(X_train)
# extract the model predictions
y_pred = pd.Series(model.predict(X), index=X.index, name="anomaly")
# define the meshgrid : X = (u,v).T
u_min, u_max = X_train.iloc[:, 0].min() - 1.5, X_train.iloc[:, 0].max() + 1.5
v_min, v_max = X_train.iloc[:, 1].min() - 1.5, X_train.iloc[:, 1].max() + 1.5
n_points = 500
u = np.linspace(u_min, u_max, n_points)
v = np.linspace(v_min, v_max, n_points)
U, V = np.meshgrid(u, v)
# evaluate the decision function on the meshgrid
W = model.decision_function(np.c_[U.ravel(), V.ravel()])
W = W.reshape(U.shape)
plt.figure(figsize=(20,6))
a = plt.contour(U, V, W, levels=[0], linewidths=2, colors="black")
b = plt.scatter(X.loc[y_pred == 1].iloc[:, 0], X.loc[y_pred == 1].iloc[:, 1], c="yellowgreen", edgecolors='k')
c = plt.scatter(X.loc[y_pred == -1].iloc[:, 0], X.loc[y_pred == -1].iloc[:, 1], c="tomato", edgecolors='k')
plt.legend([a.collections[0], b, c], ['learned frontier', 'regular observations', 'abnormal observations'], bbox_to_anchor=(1.05, 1))
plt.axis('tight')
plt.show()
Edit
I am able to get the decision boundary points using the following code. Now the problem can be solved by numerically computing the distance.
for item in a.collections:
    for i in item.get_paths():
        v = i.vertices
        x = v[:, 0]
        y = v[:, 1]
I have an obvious solution: take all data points d and compute the Euclidean distance between each d and each boundary point e = (x, y). But that is a brute-force technique. :D I will continue my research!
Another solution would be to fit an ellipse and compute the distance using the formula described by @epiliam here: https://math.stackexchange.com/questions/3670465/calculate-distance-from-point-to-ellipse-edge
I will provide one solution tomorrow based on the brute force. It seems to work well for small datasets (n_rows < 10000). I have not tested it on larger ones.
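For reference, here is a minimal sketch of that brute-force idea, assuming the frontier consists of a single path so that the x and y arrays extracted above hold its vertices:

from scipy.spatial.distance import cdist

# Stack the learned-frontier vertices into an (m, 2) array.
boundary = np.column_stack([x, y])

# Distance from every observation to its nearest boundary vertex.
dist_to_boundary = cdist(X.values, boundary).min(axis=1)

# Keep the distances alongside the anomaly predictions for inspection.
distances = pd.Series(dist_to_boundary, index=X.index, name="distance_to_frontier")
print(distances.describe())

For larger datasets, a KD-tree over the boundary vertices would avoid building the full distance matrix, but for a few thousand rows this direct version is fine.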
I have a RGB image of shape (587, 987, 3). #height, width, num_channels
I also have label data (pixels' locations) for each of 7 classes.
I wanted to apply KMeans clustering algorithm to segment the given image into 7 classes.
While applying KMeans clustering, I want to utilize the label data, i.e., the pixel locations.
How can I utilize label data?
What I have tried so far is as follows.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

img = np.random.randint(low=1, high=99, size=(587, 987, 3))
im = img.reshape(img.shape[0]*img.shape[1], img.shape[2])  # (n_pixels, 3)
im = StandardScaler().fit_transform(im)
clusters = KMeans(n_clusters=7, n_init=100, max_iter=100).fit(im)  # n_jobs was removed from KMeans in scikit-learn 1.0
kmeans_labels = clusters.labels_.reshape(img.shape[0], img.shape[1])
plt.imshow(kmeans_labels)
plt.show()
I'm looking for a way to propagate some annotations to the remaining segments (superpixels).
As clarified in the comments of the question, you can treat the clusters as superpixels and propagate labels from a few samples to the remaining data using some semi-supervised classifier [1].
Creating an image to run the example:
import numpy as np
from skimage.data import binary_blobs
import cv2
from pyift.shortestpath import seed_competition
from scipy import sparse, spatial
import matplotlib.pyplot as plt
# creating noisy image
size = 256
image = np.empty((size, size, 3))
image[:, :, 0] = binary_blobs(size, seed=0)
image[:, :, 1] = binary_blobs(size, seed=0)
image[:, :, 2] = binary_blobs(size, seed=1)
image += np.random.randn(*image.shape) / 10
image -= image.min()
image /= image.max()
plt.axis(False)
plt.imshow(image)
plt.show()
Computing superpixels:
def grid_seeds(image, rows=15, cols=15):
    seeds = np.zeros(image.shape[:2], dtype=int)  # np.int is removed in recent NumPy
    v_step, h_step = image.shape[0] // rows, image.shape[1] // cols
    count = 1
    for i in range(rows):
        y = v_step // 2 + i * v_step
        for j in range(cols):
            x = h_step // 2 + j * h_step
            seeds[y, x] = count
            count += 1
    return seeds
seeds = grid_seeds(image)
_, _, _, superpixels = seed_competition(seeds, image=image)
superpixels -= 1 # shifting labels to zero
contours, _ = cv2.findContours(superpixels, cv2.RETR_FLOODFILL, cv2.CHAIN_APPROX_SIMPLE)
im_w_contours = image.copy()
cv2.drawContours(im_w_contours, contours, -1, (255, 0, 0))
plt.axis(False)
plt.imshow(im_w_contours)
plt.show()
Propagating labels from 4 arbitrary nodes, one for each class (color) and coloring the resulting labels with the expected color.
def create_graph(image, labels):
    n_nodes = labels.max() + 1
    h, w, d = image.shape
    avg = np.zeros((n_nodes, d))
    for i in range(h):
        for j in range(w):
            avg[labels[i, j]] += image[i, j]
    avg[:] /= np.bincount(labels.flat)[:, np.newaxis]  # per-superpixel average colour
    graph = spatial.distance_matrix(avg, avg)
    return sparse.csr_matrix(graph)
graph = create_graph(image, superpixels)
graph_seeds = np.zeros(graph.shape[0], dtype=int)
graph_seeds[1] = 1 # blue training sample
graph_seeds[3] = 2 # yellow training sample
graph_seeds[13] = 3 # white training sample
graph_seeds[14] = 4 # black training sample
label_colors = {1: (0, 0, 255),
2: (255, 255, 0),
3: (255, 255, 255),
4: (0, 0, 0)}
_, _, _, labels = seed_competition(graph_seeds, graph=graph)
result = np.empty_like(image)
for i, lb in enumerate(labels):
    result[superpixels == i] = label_colors[lb]
plt.axis(False)
plt.imshow(result)
plt.show()
For this example, I used the distance between the average colors of the superpixels as the arc weights. However, in a real problem, a more elaborate feature vector will be necessary.
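For instance, a slightly richer (purely illustrative) descriptor could concatenate the mean and standard deviation of each colour channel per superpixel before building the distance matrix:

def superpixel_features(image, labels):
    # Per-superpixel mean and standard deviation of each channel.
    n_nodes = labels.max() + 1
    feats = np.zeros((n_nodes, image.shape[2] * 2))
    for lb in range(n_nodes):
        region = image[labels == lb]   # (n_pixels, 3) pixels of this superpixel
        feats[lb, :3] = region.mean(axis=0)
        feats[lb, 3:] = region.std(axis=0)
    return feats

feats = superpixel_features(image, superpixels)
graph = sparse.csr_matrix(spatial.distance_matrix(feats, feats))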
Also, here the labeled data is a subset of the image superpixels, but this is not strictly necessary; you can add any artificial node when modeling your graph, especially as seed nodes.
This approach is commonly used in remote sensing; this article might be relevant [2].
[1] Amorim, W. P., Falcão, A. X., Papa, J. P., & Carvalho, M. H. (2016). Improving semi-supervised learning through optimum connectivity. Pattern Recognition, 60, 72-85.
[2] Vargas, John E., et al. "Superpixel-based interactive classification of very high resolution images." 2014 27th SIBGRAPI Conference on Graphics, Patterns, and Images. IEEE, 2014.
I edited some examples to make a simulation of the voltage superposition of two point charges and produced a 3D surface plot; the code is the following:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
q1 = 2e-9
q2 = -2e-9
K = 9e9
#Charge1 position
x1 = 2.0
y1 = 4.0
#Charge2 position
x2 = 6.0
y2 = 4.0
x = np.linspace(0,8,50)
y = np.linspace(0,8,50)
x, y = np.meshgrid(x,y)
r1 = np.sqrt((x - x1)**2 + (y - y1)**2)
r2 = np.sqrt((x - x2)**2 + (y - y2)**2)
V = K*(q1/r1 + q2/r2)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') was removed in newer Matplotlib
surf = ax.plot_surface(x, y, V, rstride=1, cstride=1, cmap=cm.rainbow,
linewidth=0, antialiased=False)
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.show()
[figure: 3D surface plot of the potential]
Now what I want to do is a contour plot with a vector (quiver) plot on top of it. I tried the following code, but I get a bunch of buggy vectors coming out of both charges, even the negative one:
fig2, ax2 = plt.subplots(1,1)
cp = ax2.contourf(x, y, V, cmap=cm.coolwarm)
fig2.colorbar(cp)
v,u = np.gradient(-V, 0.2, 0.2) #E = -∇V
ax2.quiver(x, y, u, v)
ax2.set_title("Point Charges")
plt.show()
[figure: quiver plot with buggy vectors]
I suspect that the long vectors are related to a division by zero. The vectors should come out of the positive charge and go into the negative one. But how would I go about fixing them? Thanks in advance.
Welcome to SO, very nice MWE. One option would be to exclude all vectors beyond a certain length by setting them to NaN. Here I use the 95th percentile.
r = np.sqrt(u**2 + v**2)
is_valid = r < np.percentile(r, 95)
u[~is_valid] = np.nan
v[~is_valid] = np.nan
x[~is_valid] = np.nan
y[~is_valid] = np.nan
fig2, ax2 = plt.subplots(1,1)
cp = ax2.contourf(x, y, V, cmap=cm.coolwarm)
fig2.colorbar(cp)
ax2.quiver(x, y, u, v)
ax2.set_title("Point Charges")
ax2.set_xlim(0, 8)
ax2.set_ylim(0, 8)
plt.show()
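An alternative along the same lines (a sketch, starting again from the unmasked x, y, r1 and r2 grids of the question, with an arbitrary 0.3 cut-off radius) is to blank out the grid points closest to either charge before taking the gradient, since that is where the near-singular values come from:

# Mask the potential within 0.3 of either charge, then differentiate.
V_masked = np.where((r1 < 0.3) | (r2 < 0.3), np.nan, K*(q1/r1 + q2/r2))
v, u = np.gradient(-V_masked, 0.2, 0.2)  # E = -grad(V)

fig3, ax3 = plt.subplots(1, 1)
cp = ax3.contourf(x, y, V, cmap=cm.coolwarm)
fig3.colorbar(cp)
ax3.quiver(x, y, u, v)
ax3.set_title("Point Charges")
plt.show()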
The figure above is a great piece of artwork that shows wind speed, wind direction and temperature simultaneously. In detail:
The X axis represents the date
The Y axis shows the wind direction (southerly, westerly, etc.)
The varying width of the line stands for the wind speed through the time series
The varying color of the line stands for the atmospheric temperature
This simple figure visualizes three different attributes without redundancy.
So, I really want to reproduce a similar plot in matplotlib.
My attempt so far:
## Reference 1 http://stackoverflow.com/questions/19390895/matplotlib-plot-with-variable-line-width
## Reference 2 http://stackoverflow.com/questions/17240694/python-how-to-plot-one-line-in-different-colors
import math
import numpy as np
import matplotlib.pyplot as plt

def plot_colourline(x, y, c):
    # Normalize the colour property and draw the line segment by segment.
    c = plt.cm.jet((c - np.min(c)) / (np.max(c) - np.min(c)))
    lwidths = 1 + x[:-1]
    ax = plt.gca()
    for i in np.arange(len(x) - 1):
        ax.plot([x[i], x[i+1]], [y[i], y[i+1]], c=c[i], linewidth=lwidths[i])
    return

x = np.linspace(0, 4*math.pi, 100)
y = np.cos(x)
prop = x  # placeholder colour property; the original snippet left prop undefined
fig = plt.figure(1, figsize=(5, 5))
ax = fig.add_subplot(111)
plot_colourline(x, y, prop)
ax.set_xlim(0, 4*math.pi)
ax.set_ylim(-1.1, 1.1)
Does someone have a more interesting way to achieve this? Any advice would be appreciated!
Using another question as inspiration.
One option would be to use fill_between. But perhaps not in the way it was intended. Instead of using it to create your line, use it to mask everything that is not the line. Under it you can have a pcolormesh or contourf (for example) to map color any way you want.
Look, for instance, at this example:
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import interp1d
def windline(x, y, deviation, color):
    y1 = y - deviation/2
    y2 = y + deviation/2
    tol = (y2.max() - y1.min())*0.05
    X, Y = np.meshgrid(np.linspace(x.min(), x.max(), 100), np.linspace(y1.min()-tol, y2.max()+tol, 100))
    Z = X.copy()
    for i in range(Z.shape[0]):
        Z[i, :] = color   # fill each row with the colour property
    #plt.pcolormesh(X, Y, Z)
    plt.contourf(X, Y, Z, cmap='seismic')
    plt.fill_between(x, y2, y2=np.ones(x.shape)*(y2.max()+tol), color='w')
    plt.fill_between(x, np.ones(x.shape)*(y1.min()-tol), y2=y1, color='w')
    plt.xlim(x.min(), x.max())
    plt.ylim(y1.min()-tol, y2.max()+tol)
    plt.show()
x = np.arange(100)
yo = np.random.randint(20, 60, 21)
y = interp1d(np.arange(0, 101, 5), yo, kind='cubic')(x)
dv = np.random.randint(2, 10, 21)
d = interp1d(np.arange(0, 101, 5), dv, kind='cubic')(x)
co = np.random.randint(20, 60, 21)
c = interp1d(np.arange(0, 101, 5), co, kind='cubic')(x)
windline(x, y, d, c)
This results in the following figure:
The function windline accepts as arguments numpy arrays with x, y, a deviation (a thickness value per x value), and a color array for color mapping. I think it can be greatly improved by messing around with other details, but the principle, although not perfect, should be solid.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
x = np.linspace(0,4*np.pi,10000) # x data
y = np.cos(x) # y data
r = np.piecewise(x, [x < 2*np.pi, x >= 2*np.pi], [lambda x: 1-x/(2*np.pi), 0]) # red
g = np.piecewise(x, [x < 2*np.pi, x >= 2*np.pi], [lambda x: x/(2*np.pi), lambda x: -x/(2*np.pi)+2]) # green
b = np.piecewise(x, [x < 2*np.pi, x >= 2*np.pi], [0, lambda x: x/(2*np.pi)-1]) # blue
a = np.ones(10000) # alpha
w = x # width
fig, ax = plt.subplots(2)
ax[0].plot(x, r, color='r')
ax[0].plot(x, g, color='g')
ax[0].plot(x, b, color='b')
# Build line segments: pair consecutive (x, y) points so that each
# segment can get its own width and colour in the LineCollection below.
points = np.array([x, y]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
rgba = list(zip(r,g,b,a))
lc = LineCollection(segments, linewidths=w, colors=rgba)
ax[1].add_collection(lc)
ax[1].set_xlim(0,4*np.pi)
ax[1].set_ylim(-1.1,1.1)
fig.show()
I have run into the following problem.
The issue is that this script is not able to plot a sphere, for example, while it can plot several cones such as the one in the script.
I have changed the shape and tried to find the lines the error comes from, using the error message given when plotting a sphere.
import sympy as sy
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
# Plot Figure
nd = 50 # Number of points in graph
ax = plt.axes(projection='3d') # Adds 3d axis to figure
x1 = np.linspace(-15, 15, nd)
y1 = np.linspace(-15, 15, nd)
X, Y = np.meshgrid(x1, y1) # Create 2D grid with x1 and y1
i = 0
a = 0
b = 0
Z = np.array([])
x = sy.Symbol('x')
y = sy.Symbol('y')
z = (x**2+y**2)**0.5 # Function goes here
for i in range(nd):  # Iterate over rows
    b = 0
    xv1 = X[a, :]
    yv1 = Y[a, :]
    for j in range(nd):  # Iterate over elements in one row
        xv = xv1[b]
        yv = yv1[b]
        z1 = z.subs([(x, xv), (y, yv)])
        Z = np.append(Z, z1)  # Append values to the flat array, row by row
        b = b + 1
    a = a + 1
Z = np.reshape(Z, (nd, nd))
print(Z.dtype)
print(Y.dtype)
print(X.dtype)
Z = Z.astype(float)
Y = Y.astype(float)
X = X.astype(float)
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='viridis', edgecolor='none')
plt.show()
The result with a sphere function is that the script crashes. I would expect this script to be able to graph this kind of 3D shape.
Is this the error you are getting by any chance?
TypeError: can't convert complex to float
The way this is formulated, you are asking for an imaginary number back. If you define your sphere-like equation as:
z = (x**2+y**2-1)**0.5
you end up asking for sqrt(-1) when x = y = 0, which will not work (and the proper upper hemisphere, z = (1 - x**2 - y**2)**0.5, goes complex wherever x**2 + y**2 > 1, i.e. over almost all of your [-15, 15] grid), so the cast to float fails. Try parameterizing with spherical coordinates like in this example: Python/matplotlib : plotting a 3d cube, a sphere and a vector?
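A minimal sketch of that parameterization, plotting a unit sphere directly with numpy (no sympy needed):

import numpy as np
import matplotlib.pyplot as plt

# Spherical coordinates: theta is the polar angle, phi the azimuth.
theta = np.linspace(0, np.pi, 50)
phi = np.linspace(0, 2 * np.pi, 50)
theta, phi = np.meshgrid(theta, phi)

X = np.sin(theta) * np.cos(phi)
Y = np.sin(theta) * np.sin(phi)
Z = np.cos(theta)

ax = plt.axes(projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='viridis', edgecolor='none')
ax.set_box_aspect((1, 1, 1))  # keep the sphere round (Matplotlib >= 3.3)
plt.show()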