I have a point cloud from different parts of the human body, like an eye, and I want to create a mesh. I tried to use Mayavi and Delaunay but I don't get a good mesh. The points of the cloud are in total disorder.
I have my point cloud in an .npz file and I am using Mayavi.
Then I want to save my model to an OBJ or STL file, but first I want to generate the mesh.
What do you recommend I use? Do I need a special library?
You can use pyvista to do the 3D triangulation. However, you need to manually play with the alpha parameter, which controls the distance under which two points are linked.
import numpy as np
import pyvista as pv

# points is a (n_points, 3) numpy array of point coordinates (here, a sphere)
cloud = pv.PolyData(points)
cloud.plot()

# alpha controls the maximum distance under which points get connected
volume = cloud.delaunay_3d(alpha=2.)
shell = volume.extract_geometry()
shell.plot()
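Since you also want an OBJ or STL file, the extracted surface can be written straight to disk; a minimal sketch, with the filename being just an example (pyvista infers the format from the extension):

# save() picks the writer from the extension (.stl, .ply, .vtk, ...)
shell.save('mesh.stl')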
Data
Let's use the capitals of Europe. We read them in from Excel with Pandas:
import pandas as pd
dg0 = pd.read_excel('psc_StaedteEuropa_coord.xlsx') # ,header=None
dg0.head()
City Inhabit xK yK
0 Andorra 24574.0 42.506939 1.521247
1 Athen 664046.0 37.984149 23.727984
2 Belgrad 1373651.0 44.817813 20.456897
3 Berlin 3538652.0 52.517037 13.388860
4 Bern 122658.0 46.948271 7.451451
Grid by triangulation
We use SciPy for that. For a 3-dim example see HERE and HERE, or here (CGAL has a Python wrapper).
import numpy as np
from scipy.spatial import Delaunay

# note the swap: column 'xK' holds the latitudes and 'yK' the longitudes,
# so xk/yk end up as (longitude, latitude) for plotting
yk, xk, city = np.array(dg0['xK']), np.array(dg0['yK']), np.array(dg0['City'])
X1 = np.vstack((xk, yk)).T
tri = Delaunay(X1)
Graphics
import cartopy.crs as ccrs
import matplotlib.pyplot as plt

#--- graphics -------
figX = 25; figY = 18
fig1 = plt.figure(figsize=(figX, figY), facecolor='white')
myProjection = ccrs.PlateCarree()
ax = plt.axes(projection=myProjection)
ax.stock_img()
ax.set_extent([-25, 40, 35, 65], crs=myProjection)

plt.triplot(X1[:,0], X1[:,1], tri.simplices.copy(), color='r', linestyle='-', lw=2)
plt.plot(X1[:,0], X1[:,1], 's', color='w')
plt.scatter(xk, yk, s=1000, c='w')
for i, txt in enumerate(city):
    ax.annotate(txt, (X1[i,0], X1[i,1]), color='k', fontweight='bold')
plt.savefig('Europe_A.png')
plt.show()
If your points "are in total disorder" and you want to generate a mesh, then you need some interpolation from the cloud of points to the somehow-structured grid points of the mesh.
In the 2-dimensional case, matplotlib's triangulation (2-dim) can be a help.
In the 3-dimensional case there are 2 options. Depending on the data, you might want to interpolate them to a 3-dimensional surface. Then matplotlib's trisurf3d can be a help; see the sketch below.
If you need a 3-dimensional volume grid then you probably have to look for a FEM (finite element) grid, e.g. FEniCS.
An example of interpolating a 3-dimensional field with scipy for contouring can be found here.
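For the surface option, here is a minimal sketch of matplotlib's plot_trisurf, with synthetic points on a paraboloid standing in for real data:

import numpy as np
import matplotlib.pyplot as plt

# synthetic scattered points on a paraboloid, standing in for a real cloud
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = rng.uniform(-1, 1, 200)
z = x**2 + y**2

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# plot_trisurf triangulates the (x, y) positions internally (Delaunay)
ax.plot_trisurf(x, y, z, cmap='viridis')
plt.show()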
Have you tried this example? https://docs.enthought.com/mayavi/mayavi/auto/example_surface_from_irregular_data.html
The relevant part is here:
from mayavi import mlab

# x, y, z are 1D arrays holding the coordinates of the scattered points
# Visualize the points
pts = mlab.points3d(x, y, z, z, scale_mode='none', scale_factor=0.2)

# Create and visualize the mesh
mesh = mlab.pipeline.delaunay2d(pts)
surf = mlab.pipeline.surface(mesh)
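Since the question also asks about exporting, a hedged note: Mayavi's mlab.savefig can export the rendered scene to Wavefront OBJ when given an .obj extension (check that your Mayavi version supports this exporter):

mlab.savefig('surface.obj')  # the extension selects the exporter; the name is just an example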
I have a segmentation result stored in a binary image, from which I want to extract the contours. To do so, I compute the difference between the mask and the eroded mask. Hence, I am able to extract the pixels that are on the boundaries of my segmentation result. Here is a code snippet:
import numpy as np
from skimage.morphology import binary_erosion
from matplotlib import pyplot as plt
# mask is a 2D boolean np.array containing the segmentation result
contour_raw = np.logical_xor(mask, binary_erosion(mask))
contour_y, contour_x = np.where(contour_raw)

fig = plt.figure()
plt.imshow(mask)
plt.plot(contour_x, contour_y, '.r')
I end up with a collection of dots on the contours of the mask:
The trouble starts when I want to connect the dots. A naive plot of the contours results, of course, in a disappointing result, because contour_x and contour_y are not sorted as I would like:
plt.plot(contour_x,contour_y,'--r')
And here is the result, with a focus on an arbitrary part of the figure to highlight the connection between the dots:
How is it possible to sort the contour coordinates contour_x and contour_y so that they are correctly ordered when I connect the dots? Furthermore, if my mask contains several independent connected components, I would like to obtain as many contours as there are connected components.
Thanks for your help!
I think combining clustering and a convex hull works in your case. For this example, I am generating three synthetic segments using the make_blobs function and showing each with a color:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.cluster import DBSCAN
from scipy.spatial import ConvexHull, convex_hull_plot_2d
X, y = make_blobs(n_samples=1000, centers=3, n_features=2, random_state=0, cluster_std=0.3)
plt.scatter(X[:,0], X[:,1], c=y)
Then, since the segments are distributed over a two-dimensional map, we can run a density-based clustering method to cluster them; by then finding a convex hull around each cluster, we obtain the points surrounding each cluster in order:
# Fit the clustering
c_alg = DBSCAN()
c_alg.fit(X)
labels = c_alg.labels_

for i in range(0, max(labels) + 1):
    ind = np.where(labels == i)
    segment = X[ind, :][0]
    hull = ConvexHull(segment)
    plt.plot(segment[:, 0], segment[:, 1], 'o')
    for simplex in hull.simplices:
        plt.plot(segment[simplex, 0], segment[simplex, 1], 'k-')
However, in your case a concave hull should work, not a convex hull. There is a package alphashape in Python that claims to find concave hulls in two-dimensional maps (more information here). The tricky part is finding the best alpha, but in this example we can fit concave hulls using:
import alphashape
from descartes import PolygonPatch

fig, ax = plt.subplots()
for i in range(0, max(labels) + 1):
    ind = np.where(labels == i)
    points = X[ind, :][0, :, :]
    alpha_shape = alphashape.alphashape(points, 5.0)
    ax.scatter(*zip(*points))
    ax.add_patch(PolygonPatch(alpha_shape, alpha=0.5))
plt.show()
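As an aside, if all you need is the ordered boundary points of each connected component, scikit-image's find_contours already returns each contour as an array of points ordered along the boundary; a minimal sketch on the original mask:

from skimage import measure

# each contour is an (N, 2) array of (row, col) points, already ordered
# along the boundary; one array per closed boundary in the mask
contours = measure.find_contours(mask.astype(float), 0.5)
for contour in contours:
    plt.plot(contour[:, 1], contour[:, 0], '--r')  # (row, col) -> (x=col, y=row)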
I would like to create a version of this 2D binned "color map" with smoothed colors.
I am not even sure this is the correct nomenclature for the plot, but, essentially, I want my figure to be color-coded by the median values of a third variable for the points that reside in each defined bin of my (X, Y) space.
Even though I am able to accomplish that to a certain degree (see example), I would like to find a way to create a version of the same plot with a smoothed color gradient. That would allow me to visualize the overall behavior of my distribution.
I tried ideas described here: Smoothing 2D map in python
and here: Python: binned_statistic_2d mean calculation ignoring NaNs in data
as well as links therein, but could not find a clear solution to the problem.
This is what I have so far:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from scipy.stats import binned_statistic_2d

np.random.seed(999)  # seed numpy's generator (random.seed would not affect np.random)
x = np.random.normal(0, 10, 5000)
y = np.random.normal(0, 10, 5000)
z = np.random.uniform(0, 10, 5000)

fig = plt.figure(figsize=(20, 20))
plt.rcParams.update({'font.size': 10})
ax = fig.add_subplot(3, 3, 1)
ax.set_axisbelow(True)
plt.grid(True, lw=0.5, zorder=-1)

x_bins = np.arange(-50., 50.5, 1.)
y_bins = np.arange(-50., 50.5, 1.)
cmap = plt.cm.get_cmap('jet_r', 1000)  # just a colormap

# Bin (X, Y) and create a map of the medians of "Colors"
ret = binned_statistic_2d(x, y, z, statistic=np.median, bins=[x_bins, y_bins])

plt.imshow(ret.statistic.T, origin='lower', extent=(-50, 50, -50, 50), cmap=cmap)
plt.xlim(-40, 40)
plt.ylim(-40, 40)
plt.xlabel("X", fontsize=15)
plt.ylabel("Y", fontsize=15)
ax.set_yticks([-40, -30, -20, -10, 0, 10, 20, 30, 40])
bounds = np.arange(2.0, 20.0, 1.0)
plt.colorbar(ticks=bounds, label="Color", fraction=0.046, pad=0.04)

# save plot
plt.savefig("Whatever_name.png", bbox_inches='tight')
Which produces the following image (from random data):
Therefore, the simple question would be: how to smooth these colors?
Thanks in advance!
PS: Sorry for the excessive code, but I believe a clear visualization is crucial for this particular problem.
Thanks to everyone who viewed this issue and tried to help!
I ended up being able to solve my own problem. In the end, it was all about image smoothing with a Gaussian kernel.
This link: Gaussian filtering a image with Nan in Python gave me the insight for the solution.
I basically implemented exactly the same code but, at the end, mapped the previously known NaN pixels from the original 2D array back onto the resulting smoothed version. Unlike the solution from the link, my version does NOT fill NaN pixels with values derived from the surrounding pixels; or rather, it does, but then erases them again.
Here is the final figure produced for the example I provided:
Final code, for reference, for those who might need in the future:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from scipy.stats import binned_statistic_2d
import scipy.ndimage
import scipy as sp

np.random.seed(999)  # seed numpy's generator
x = np.random.normal(0, 10, 5000)
y = np.random.normal(0, 10, 5000)
z = np.random.uniform(0, 10, 5000)

fig = plt.figure(figsize=(20, 20))
plt.rcParams.update({'font.size': 10})
ax = fig.add_subplot(3, 3, 1)
ax.set_axisbelow(True)
plt.grid(True, lw=0.5, zorder=-1)

x_bins = np.arange(-50., 50.5, 1.)
y_bins = np.arange(-50., 50.5, 1.)
cmap = plt.cm.get_cmap('jet_r', 1000)  # just a colormap

# Bin (X, Y) and create a map of the medians of "Colors"
ret = binned_statistic_2d(x, y, z, statistic=np.median, bins=[x_bins, y_bins])

sigma = 1       # standard deviation for Gaussian kernel
truncate = 5.0  # truncate filter at this many sigmas

# Smooth the data and a validity mask separately, then normalize,
# so that NaN bins do not drag their neighbors toward zero
U = ret.statistic.T.copy()
V = U.copy()
V[np.isnan(U)] = 0
VV = sp.ndimage.gaussian_filter(V, sigma=sigma, truncate=truncate)

W = 0 * U.copy() + 1
W[np.isnan(U)] = 0
WW = sp.ndimage.gaussian_filter(W, sigma=sigma, truncate=truncate)

np.seterr(divide='ignore', invalid='ignore')
Z = VV / WW

# restore the originally empty (NaN) pixels
Z[np.isnan(U)] = np.nan

plt.imshow(Z, origin='lower', extent=(-50, 50, -50, 50), cmap=cmap)
plt.xlim(-40, 40)
plt.ylim(-40, 40)
plt.xlabel("X", fontsize=15)
plt.ylabel("Y", fontsize=15)
ax.set_yticks([-40, -30, -20, -10, 0, 10, 20, 30, 40])
bounds = np.arange(2.0, 20.0, 1.0)
plt.colorbar(ticks=bounds, label="Color", fraction=0.046, pad=0.04)

# save plot
plt.savefig("Whatever_name.png", bbox_inches='tight')
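As a side note, astropy's convolution module offers the same NaN-aware smoothing out of the box, since its convolve interpolates over NaNs by default; a minimal sketch under that assumption:

from astropy.convolution import convolve, Gaussian2DKernel

# convolve() treats NaNs as missing data and interpolates across them
Z_smooth = convolve(U, Gaussian2DKernel(x_stddev=1))
Z_smooth[np.isnan(U)] = np.nan  # re-mask the originally empty bins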
I am new to 3D image processing. I would like to know how to view a DICOM series in Python. I tried using matplotlib and VTK. In matplotlib I am not able to view the volume as I can in MATLAB using volumeViewer. Regarding VTK, I am not able to import vtkVolumeRayCast for 3D viewing. The version I am using is 8.2.0.
I am doing the processing using scipy.ndimage.
Kindly suggest some resources for viewing my DICOM volume files.
You can try ipyvolume https://github.com/maartenbreddels/ipyvolume for interactive plotting; I found it quite useful.
Also, you can plot the volume with matplotlib by using marching cubes to obtain the surface mesh, but it is quite slow:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from skimage import measure

def plot_3d(image, threshold=-300):
    p = image.transpose(2, 1, 0)
    # in scikit-image >= 0.19 this function is measure.marching_cubes
    verts, faces, normals, values = measure.marching_cubes_lewiner(p, threshold)

    fig = plt.figure(figsize=(10, 10))
    ax = fig.add_subplot(111, projection='3d')

    mesh = Poly3DCollection(verts[faces], alpha=0.1)
    face_color = [0.5, 0.5, 1]
    mesh.set_facecolor(face_color)
    ax.add_collection3d(mesh)

    ax.set_xlim(0, p.shape[0])
    ax.set_ylim(0, p.shape[1])
    ax.set_zlim(0, p.shape[2])
    plt.show()
The threshold of -300 HU is fine for visualizing chest CT scans, but change it if you are going to use MRI (check your intensity value distribution) or binary volumes (threshold=0).
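To get the image volume in the first place, here is a hedged sketch using pydicom; the directory path, file pattern, and slice-sorting key are assumptions about your data:

import numpy as np
import pydicom
from pathlib import Path

# read every slice of the series and sort along the scan axis
dicom_dir = Path('path/to/series')  # hypothetical location of the .dcm files
slices = [pydicom.dcmread(f) for f in dicom_dir.glob('*.dcm')]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# stack into a (z, y, x) volume; CT values may still need
# RescaleSlope/RescaleIntercept applied to be in HU
volume = np.stack([s.pixel_array for s in slices])
plot_3d(volume, threshold=-300)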
Here is an example of the visualization:
With vtkplotter you should be able to do this easily:
from vtkplotter import *
volume = load(mydicomdir) #returns a vtkVolume object
show(volume, bg='white')
To install:
pip install vtkplotter
Guys, I'm a chemist and I've finished an experiment that gave me the energies of a metal's d orbitals.
It is relatively easy to get the correct proportion of energies in Excel [1] and use a drawing program like Inkscape to draw the molecular-orbital diagram (like I did with the one below [2]), but I'd love to use Python to get a beautiful diagram that reflects the energies of my orbitals, like we see in the books.
My first attempt, using seaborn and swarmplot, is obviously too far from the correct approach and maybe (probably!) not the correct way to get there. I'd be more than happy to achieve something like the right side of [3].
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
Energies = [-0.40008, -0.39583, -0.38466, -0.23478, -0.21239]
orbitals = ["dz2", "dxy", "dyz", "dx2y2", "dxz"]
df = pd.DataFrame(Energies)
df["Orbitals"] = pd.DataFrame(orbitals)
sns.swarmplot(y=df[0], size=16)
Thanks for any help.
[1] The Excel version
[2] Drawn by hand, using the Excel version as the model
[3] Extracted from the literature
You can draw anything you like in matplotlib by deriving it from basic shapes and functions. Energy levels could be simple markers; the labels can be produced by annotate.
import numpy as np
import matplotlib.pyplot as plt

Energies = [-0.40008, -0.39583, -0.38466, -0.23478, -0.21239]
orbitals = ["$d_{z^2}$", "$d_{xy}$", "$d_{yz}$", "$d_{x^2 - y^2}$", "$d_{xz}$"]

x = np.arange(len(Energies))

fig, ax = plt.subplots()
# a horizontal-dash marker at each (index, energy) acts as the level line
ax.scatter(x, Energies, s=1444, marker="_", linewidth=3, zorder=3)
ax.grid(axis='y')

for xi, yi, tx in zip(x, Energies, orbitals):
    ax.annotate(tx, xy=(xi, yi), xytext=(0, -4), size=18,
                ha="center", va="top", textcoords="offset points")

ax.margins(0.2)
plt.show()
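A note on the design choice: the "_" marker renders as a short horizontal line, so each scatter point becomes a level bar, and the marker size s (in points squared) controls how long that bar appears; s=1444 gives a bar about sqrt(1444) = 38 points wide.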
I'm struggling to draw a power-law graph for Facebook data that I found online. I'm using NetworkX and I've found how to draw a degree histogram and a degree rank plot. The problem I'm having is that I want the y-axis to be a probability, so I'm assuming I need to sum up each y value and divide by the total number of nodes? Can anyone please help me do this? Once I've got this, I'd like to draw a log-log graph to see if I can obtain a straight line. I'd really appreciate it if anyone could help! Here's my code:
import collections
import networkx as nx
import matplotlib.pyplot as plt
from networkx.algorithms import community
import math

g = nx.read_edgelist("/Users/Michael/Desktop/anaconda3/facebook_combined.txt")
print(nx.info(g))

degree_sequence = sorted([d for n, d in g.degree()], reverse=True)
degreeCount = collections.Counter(degree_sequence)
deg, cnt = zip(*degreeCount.items())

fig, ax = plt.subplots()
plt.bar(deg, cnt, width=0.80, color='b')
plt.title("Degree Histogram for Facebook Data")
plt.ylabel("Count")
plt.xlabel("Degree")
ax.set_xticks([d + 0.4 for d in deg])
ax.set_xticklabels(deg)
plt.show()

plt.loglog(degree_sequence, 'b-', marker='o')
plt.title("Degree rank plot")
plt.ylabel("Degree")
plt.xlabel("Rank")
plt.show()
You seem to be on the right track, but some simplifications will likely help. The code below uses only two libraries.
Without access to your graph, we can use some graph generators instead. I've chosen two qualitatively different types here, and deliberately chosen different sizes, so that normalization of the histograms is actually needed.
import networkx as nx
import matplotlib.pyplot as plt

g1 = nx.scale_free_graph(1000)
g2 = nx.watts_strogatz_graph(2000, 6, p=0.8)

# we don't need to sort the values since the histogram will handle it for us
deg_g1 = [d for _, d in g1.degree()]
deg_g2 = [d for _, d in g2.degree()]

# there are smarter ways to choose bin locations, but since
# degrees must be discrete, we can be lazy... (+1 so the max degree is included)
max_degree = max(deg_g1 + deg_g2)

# plot different styles to see both
fig = plt.figure()
ax = fig.add_subplot(111)
ax.hist(deg_g1, bins=range(0, max_degree + 1), density=True,
        histtype='bar', rwidth=0.8, label='scale-free')
ax.hist(deg_g2, bins=range(0, max_degree + 1), density=True,
        histtype='step', lw=3, label='watts-strogatz')

# set up the axes to be log/log scaled
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_xlabel('degree')
ax.set_ylabel('relative density')
ax.legend()
plt.show()
This produces an output plot like the following (g1 and g2 are randomized, so your plots won't be identical):
Here we can see that g1 has an approximately straight-line decay in its degree distribution, as expected for scale-free distributions on log-log axes. Conversely, g2 does not have a scale-free degree distribution.
To say anything more formal, you could look at the toolboxes from Aaron Clauset: http://tuvalu.santafe.edu/~aaronc/powerlaws/ which implement model fitting and statistical testing of power-law distributions.
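As a hedged pointer, the third-party powerlaw package (pip install powerlaw) implements the Clauset et al. fitting and model-comparison machinery in Python; a minimal sketch on the degree list from above:

import powerlaw

# fit a discrete power law to the degree sequence and compare it
# against an exponential alternative
fit = powerlaw.Fit(deg_g1, discrete=True)
print(fit.power_law.alpha, fit.power_law.xmin)
R, p = fit.distribution_compare('power_law', 'exponential')
print(R, p)  # R > 0 with small p favors the power law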