Looking for a detailed method to plot contours of confidence level - statistics

I'm trying to find a method or a tutorial explaining how contours of different confidence levels (68%, 95%, 99.7%, etc.) are plotted.
Below is an example of these contours on a plot that I would like to generate:
It represents the constraints on cosmological parameters (\Omega_Lambda represents dark energy and \Omega_m the total matter density).
Once I have data sets for \Omega_Lambda and \Omega_m, how can I produce these contours? I know what a confidence level is, but I only know the standard deviation.
If I plot the standard deviation of both parameters around the expected values, I get a cross shape (horizontal for \Omega_m and vertical for \Omega_Lambda): but from this cross, how do I draw contours at different confidence levels?
In the figure above, these contours look like a 2D parametric curve with points (\Omega_Lambda(t), \Omega_m(t)) for some parameter t, but I don't think they are drawn that way.

You might want to check out Matplotlib's contour plot: the levels parameter seems to be what you need.
The plots in your example are not obtained from raw data, but from a statistical model of raw data. So you could first fit multivariate normal distributions to your data using numpy.mean and numpy.cov, then generate the multivariate normal pdf values with scipy.stats.multivariate_normal. You can also find a code snippet doing confidence ellipses here (which seems to be exactly the kind of thing you were looking for).
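A minimal sketch of that idea, assuming your data sets are samples of the two parameters (the sample data, covariance values, and grid size below are made up for illustration). For a 2D Gaussian, the 68.3%/95.4%/99.7% regions are bounded by contours of the squared Mahalanobis distance at the corresponding chi-square quantiles with 2 degrees of freedom, which you can pass straight to Matplotlib's levels parameter:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import chi2

# Hypothetical samples of (Omega_m, Omega_Lambda); replace with your own data sets
samples = np.random.multivariate_normal(
    mean=[0.3, 0.7], cov=[[0.010, -0.004], [-0.004, 0.020]], size=5000)

# Fit a bivariate normal: sample mean and covariance
mu = samples.mean(axis=0)
cov = np.cov(samples, rowvar=False)
cov_inv = np.linalg.inv(cov)

# Grid over the parameter plane
std = samples.std(axis=0)
xg, yg = np.meshgrid(np.linspace(mu[0] - 4 * std[0], mu[0] + 4 * std[0], 300),
                     np.linspace(mu[1] - 4 * std[1], mu[1] + 4 * std[1], 300))
diff = np.dstack([xg - mu[0], yg - mu[1]])

# Squared Mahalanobis distance at every grid point
d2 = np.einsum('...i,ij,...j->...', diff, cov_inv, diff)

# Chi-square quantiles (2 dof) bounding the 68.3/95.4/99.7% regions
levels = chi2.ppf([0.683, 0.954, 0.997], df=2)

plt.contour(xg, yg, d2, levels=levels)
plt.xlabel(r'$\Omega_m$')
plt.ylabel(r'$\Omega_\Lambda$')
plt.show()
Alternatively, you can evaluate scipy.stats.multivariate_normal(mu, cov).pdf on the same grid and contour the PDF values directly, as the answer suggests.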

Related

How to approximate low-res 3D density map to smooth models?

3D density maps can of course be plotted as heatmaps, but here the data is homogeneous (near 0) except for a small region (a 2D cross-section, for example):
This should give a letter 'E' shape as the 2D "model". The original data is not saved as a point cloud, however.
A naive approach would be to keep the pixels above a certain value and then smooth the border. However, this does not take into account that the border pixels have small values.
Another would be to use point-cloud-based algorithms that come with modeling software, but then the point cloud's probability function would still be discontinuous at pixel borders, and it would not take into account that only one side has signal.
Is there any tested solution for this (the example is 2D; the actual case is many 2D slices that compose a low-resolution 3D density map)? I was thinking of giving border pixels an area proportional to their signal, with the border defined from the gradient. Any suggestions?
I was hoping for model visualization results similar to this (which seems to be based on an established point-cloud algorithm):
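For what it's worth, here is a minimal sketch of the thresholding idea above, but using a marching-squares routine (skimage.measure.find_contours), which linearly interpolates the contour position between pixel centres, so a border pixel with a small value pulls the boundary inward instead of being counted as fully inside or outside. The file name and iso-level are placeholders:
import numpy as np
from skimage import measure

density = np.load('slice.npy')          # hypothetical 2D slice of the density map
level = 0.5 * density.max()             # hypothetical iso-level; tune for your data

# Each contour is an (N, 2) array of subpixel (row, col) coordinates
contours = measure.find_contours(density, level)

# For the full 3D stack, the analogous routine extracts an iso-surface mesh:
# verts, faces, normals, values = measure.marching_cubes(volume, level)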

Deviation analysis color to vertex color using Meshlab

I would like to know if it is possible to do deviation analysis with MeshLab and transfer the result to vertex colors in a mesh. To expand on those two ideas...
1st. Is it possible to do deviation analysis with MeshLab? I have a scanned mesh that I will compare with an "ideal model". The difference between these two will generate a (grey or color) scale that represents the distance from the points of the scanned model to the "ideal" one.
2nd. I want to take this information (the color/grey grading that shows how distant the points are) and transfer it to vertex colors.
I don't know if that was clear, but if you know what deviation analysis means I think you get it. The difference is that I would like to generate a 3D mesh with the vertex colors provided by this deviation analysis.
It seems that MeshLab can compare two models and handle vertex colorizing, but I don't know if it is possible to work with real measurements, transfer this information to vertex colors, and export a mesh that shows it.
If it's possible and you know how, just point me in the right direction. I'm not familiar with MeshLab, and clicking here and there attempting an impossible task can be very frustrating, so it would be good if someone could give me some tips.
Thanks.
Yes, MeshLab can perform deviation analysis between two similar surfaces (and the required alignment preprocessing too).
Estimating the deviation between two meshes means computing the Hausdorff distance.
There is a small tutorial on how to compute and visualize it in MeshLab here:
http://meshlabstuff.blogspot.com/2010/01/measuring-difference-between-two-meshes.html
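If you want to do the same computation outside MeshLab, here is a rough sketch assuming you already have the scanned vertices and a dense point sampling of the ideal surface as NumPy arrays (the file names are placeholders; a library such as trimesh or pymeshlab can load the meshes). It approximates the one-sided Hausdorff distance with nearest-neighbour queries and maps the per-vertex distance to a colour:
import numpy as np
from scipy.spatial import cKDTree
from matplotlib import cm

scanned_vertices = np.load('scanned_vertices.npy')   # hypothetical (N, 3) vertex array of the scan
reference_points = np.load('reference_points.npy')   # hypothetical (M, 3) samples of the ideal model

# Distance from every scanned vertex to the nearest reference sample
tree = cKDTree(reference_points)
dist, _ = tree.query(scanned_vertices)

print('approximate one-sided Hausdorff distance:', dist.max())

# Map distances to RGB vertex colours (0 = close, 1 = farthest) for export
colors = cm.viridis(dist / dist.max())[:, :3]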

Getting distance to the hyperplane from sklearn's svm.svc

I'm currently using SVC to separate two classes of data (the features below are named data and the labels condition). After fitting the data using GridSearchCV I get a classification score of about .7, and I'm fairly happy with that number. After that, though, I went to get the relative distances from the hyperplane for data from each class using grid.best_estimator_.decision_function() and plotted them in a boxplot and a histogram to get a better idea of how much overlap there is. My problem is that in the histogram and the boxplot these look perfectly separable, which I know is not the case. I'm sure I'm calling decision_function() incorrectly, but I'm not sure how to do it properly.
from sklearn.svm import SVC
from sklearn.model_selection import KFold, GridSearchCV
import matplotlib.pyplot as plt
import seaborn as sb

cv = KFold(n_splits=4, shuffle=True)
svc = SVC(kernel='linear', probability=True, decision_function_shape='ovr')
C_range = [1, .001, .005, .01, .05, .1, .5, 5, 50, 10, 100]
param_grid = dict(C=C_range)
grid = GridSearchCV(svc, param_grid=param_grid, cv=cv, n_jobs=4, refit=True)
grid.fit(data, condition)
print(grid.best_params_)
print(grid.best_score_)
# Signed distance of each sample from the separating hyperplane
x = grid.best_estimator_.decision_function(data)
plt.hist(x)
sb.boxplot(x=condition, y=x)
sb.swarmplot(x=condition, y=x)
In the histogram and box plots it looks like almost all of the points have a distance of exactly +1 or -1, with nothing in between.

SciPy gaussian_kde Normalisation

I've been using scipy.stats.gaussian_kde but have a few questions about its output. I've plotted the normalised histogram and the gaussian_kde curve on the same graph. Why are the y-values so vastly different? My understanding is that the gaussian_kde curve should roughly touch the tops of the histogram bars. Using the scipy.integrate.quad function I determined the area under the curve to be 0.7, rather than the 1.0 I expected.
Actually, what I really want is for the gaussian_kde to represent the non-normalised histogram; does anyone know how I can do that?
Your expectations are a little off. The area under each of the KDE's peaks should roughly equal the area in their corresponding bars. That appears to hold, to my eye. Nonadaptive KDEs with a global bandwidth estimate (like scipy.stats.gaussian_kde) tend to broaden multimodal distributions with sharp peaks.
As for the underestimate of the total area under the KDE, I cannot say without the data and the code that you used to do the integration.
In order to make a KDE approximate an unnormalized histogram, you need to multiply its values by bin_width * N, where N is the total number of data points.
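A small sketch of that scaling, with made-up sample data (the data array and bin count are placeholders):
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

data = np.random.normal(size=1000)                 # hypothetical sample

counts, bin_edges, _ = plt.hist(data, bins=30)     # unnormalised histogram
bin_width = bin_edges[1] - bin_edges[0]
N = len(data)

xs = np.linspace(data.min(), data.max(), 500)
kde = gaussian_kde(data)
# Scale the density so its total area matches the histogram's (bin_width * N)
plt.plot(xs, kde(xs) * bin_width * N)
plt.show()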

Colors: CIE XYZ model - Chromaticity graph

I want to draw a cross-section graph of the CIE XYZ color model, like this one:
Do you have any idea how to do it?
Very briefly...
1. Plot the spectral line (the horseshoe) by plotting the xy data for the standard observer (I have XY rather than xy).
2. Find the polygon you need to fill by applying a convex hull algorithm to the points.
3. Make a list of xy values you want to paint within the polygon.
4. Find the z value for a fixed luminance via z = 1 - x - y.
5. Convert to RGB - you will need a function called something like XYZtoRGB (there is a Python module, or use the transform on Wikipedia). You may want to increase the luminance first by multiplying all the numbers by a constant.
6. Set the pixels at the xy locations to the RGB values.
7. Plot along with the convex hull and/or the spectral line you calculated.
I have the data for the standard 2deg (I think) observer (I can't find a link) - you will need to divide by X+Y+Z to convert from XYZ to xyz. Send me a message if you want me to send them to you; there is too much data to post here.
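For illustration, here is a rough sketch of that recipe, assuming xy_observer is an (N, 2) array of chromaticity coordinates for the standard observer (the file name is a placeholder; divide your XYZ tabulation by X+Y+Z first) and using colour.XYZ_to_sRGB from the colour package in place of the "XYZtoRGB" function mentioned above:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from scipy.spatial import ConvexHull
import colour

xy_observer = np.load('observer_xy.npy')            # hypothetical (N, 2) standard-observer xy data

# Polygon bounding the visible colours: convex hull of the observer points
hull = ConvexHull(xy_observer)
gamut = Path(xy_observer[hull.vertices])

# Grid of xy values; keep only those inside the polygon
xx, yy = np.meshgrid(np.linspace(0.0, 0.8, 400), np.linspace(0.001, 0.9, 400))
pts = np.column_stack([xx.ravel(), yy.ravel()])
inside = gamut.contains_points(pts)

# Fixed luminance Y; X = x*Y/y, Z = (1 - x - y)*Y/y
Y = 1.0
x, y = pts[inside, 0], pts[inside, 1]
XYZ = np.column_stack([x * Y / y, np.full_like(x, Y), (1.0 - x - y) * Y / y])
rgb = np.clip(colour.XYZ_to_sRGB(XYZ), 0.0, 1.0)

img = np.ones(xx.shape + (3,))
img.reshape(-1, 3)[inside] = rgb

plt.imshow(img, origin='lower', extent=(0.0, 0.8, 0.0, 0.9))
plt.plot(xy_observer[:, 0], xy_observer[:, 1], 'k')  # spectral locus
plt.show()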
The colour Python module has a plotting submodule where this kind of plot is one of the provided plots. See the documentation for plot_chromaticity_diagram_CIE1931 and plot_sds_in_chromaticity_diagram_CIE1931.
It uses Matplotlib under the hood.
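For example (assuming the colour-science package is installed):
from colour.plotting import plot_chromaticity_diagram_CIE1931

# Draws the CIE 1931 chromaticity diagram with the filled spectral horseshoe
plot_chromaticity_diagram_CIE1931()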
