Interpolating using a cubic function gives a negative value for probability - python-3.x

I have a set of data which correspond to ages (in steps of 0.1) along the x axis, and probabilities along the y axis. I'm trying to interpolate the data so I can find the maximum and a range of ages which covers 95% of the probability.
I've tried a simple interpolation using the code below, adapted from the SciPy help pages (I changed the x and y variables to read my data), and it produces good results, except for one problem.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d

x = np.linspace(72, 100, num=29, endpoint=True)
y = df.iloc[:, 0].values  # my probability data
f = interp1d(x, y)
f2 = interp1d(x, y, kind='cubic')
# the fine grid must cover the same range as the data (72-100), not the
# 0-10 range used in the SciPy docs example
xnew = np.linspace(72, 100, num=41, endpoint=True)
plt.plot(x, y, 'o', xnew, f(xnew), '-', xnew, f2(xnew), '--')
plt.legend(['data', 'linear', 'cubic'], loc='best')
plt.show()
The problem is that the cubic interpolation works best, with the smoothest fit. However, it gives negative values for some parts of the probability curve, which is obviously not acceptable. Is there some way of setting a floor at y=0? I thought switching to a quadratic kind might fix it, but it doesn't seem to. The linear fit does stay non-negative, but it isn't smoothed, so it's not a very good match.
I'm also not sure how to perform the second part of what I'm trying to do. It's probably very simple, but I don't know how to find the mean when I don't have a frequency table, only a grid of interpolated points that form a function. If I knew the function, I could integrate it, but I'm not sure how to do that in Python.
EDIT to include some data:
This is what my y data looks like:
array([3.41528917e-08, 7.81041275e-05, 9.60711716e-04, 5.75868934e-05,
6.50260297e-05, 2.95556411e-05, 2.37331370e-05, 9.11990619e-05,
1.08003254e-04, 4.16800419e-05, 6.63673113e-05, 2.57934035e-04,
3.42235937e-03, 5.07534495e-03, 1.76603165e-02, 1.69535370e-01,
2.67624254e-01, 4.29420872e-01, 8.25165926e-02, 2.08367339e-02,
2.01227453e-03, 1.15405995e-04, 5.40163098e-07, 1.66905537e-10,
8.31862858e-18, 4.14093219e-23, 8.32103362e-29, 5.65637769e-34,
7.93547444e-40])
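A possible way around the negative values (a sketch using the data above, not the only option): a shape-preserving interpolant such as SciPy's PchipInterpolator does not overshoot the data, so with non-negative y values it cannot dip below zero. The same fine grid can then be used to locate the peak and to integrate numerically with the trapezoid rule:
import numpy as np
from scipy.interpolate import PchipInterpolator

x = np.linspace(72, 100, num=29, endpoint=True)
y = np.array([3.41528917e-08, 7.81041275e-05, 9.60711716e-04, 5.75868934e-05,
              6.50260297e-05, 2.95556411e-05, 2.37331370e-05, 9.11990619e-05,
              1.08003254e-04, 4.16800419e-05, 6.63673113e-05, 2.57934035e-04,
              3.42235937e-03, 5.07534495e-03, 1.76603165e-02, 1.69535370e-01,
              2.67624254e-01, 4.29420872e-01, 8.25165926e-02, 2.08367339e-02,
              2.01227453e-03, 1.15405995e-04, 5.40163098e-07, 1.66905537e-10,
              8.31862858e-18, 4.14093219e-23, 8.32103362e-29, 5.65637769e-34,
              7.93547444e-40])  # the y data from the question

f = PchipInterpolator(x, y)   # shape-preserving: no undershoot below the data
xnew = np.linspace(72, 100, num=281, endpoint=True)
ynew = f(xnew)

print('age at peak:', xnew[np.argmax(ynew)])
print('total area (trapezoid rule):', np.trapz(ynew, xnew))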

Related

Vega-Lite/Altair extend regression line to the edges of the graph

I'm trying to find a way to extend regression lines in vega-lite/altair charts to the edge of the chart. As of now, applying a regression transform to a dataset results in data points that only stretch to the bounding box of the original dataset. Is it possible to extend this range to the x/y extents of the chart? In the picture below, the black line is what vega-lite calculates by default. Extending the line to the edges, as shown in yellow, is what I'm trying to achieve.
EDIT
When specifying the extent property on the transform_regression call, it seems to adjust the y variable instead of the x variable. Maybe I'm grossly misunderstanding something, but perhaps it has something to do with the fact that my x variable holds dates, which might behave differently?
When I specify the extent like so
CDR_base.transform_regression(
    'per_capita',
    'year',
    groupby=['region'],
    extent=[2000, 2100]
).mark_line()
I would expect the regression lines to extend from 2000 to 2100, but for some reason the extent seems to get applied to the y axis.
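One thing that may be worth checking (an observation based on the documented signature, transform_regression(on, regression, ...), not something confirmed in the answer below): the first argument is the independent variable, and extent applies to that field, so with 'per_capita' passed first the extent lands on the per-capita field. A sketch with the arguments swapped so that extent applies to year:
# sketch: 'year' as the independent variable, so extent=[2000, 2100]
# applies to the year field rather than to per_capita
CDR_base.transform_regression(
    'year',
    'per_capita',
    groupby=['region'],
    extent=[2000, 2100]
).mark_line()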
You can use the extent argument of the regression transform to control the extent of the line. For example, here is a dataset with a default line:
import altair as alt
import pandas as pd
import numpy as np

np.random.seed(2)
df = pd.DataFrame({
    'x': np.random.randint(0, 100, 10),
    'y': np.random.randint(0, 100, 10)
})

points = alt.Chart(df).mark_point().encode(
    x='x:Q',
    y='y:Q'
)

points + points.transform_regression('x', 'y').mark_line()
And here it is with extent set:
points + points.transform_regression('x', 'y', extent=[0, 90]).mark_line()

How to fit a curve to this data using scipy curve_fit

I am hoping someone can help me figure out where I'm going wrong with fitting a curve to this data. I am using the method in this link, and so have the following code:
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, L, x0, k, b):
    y = L / (1 + np.exp(-k * (x - x0))) + b
    return y

# x2 and y1 hold my data
p0 = [max(y1), np.median(x2), 1, min(y1)]
popt, pcov = curve_fit(sigmoid, xdata=x2, ydata=y1, p0=p0, method='dogbox')
predictions = sigmoid(x2, *popt)
And my plotted "curve" looks like so:
But I am expecting a more s-shaped curve. I have experimented with different p0 values but am not getting the required output (and, if I'm honest, I'm not sure how I'm supposed to find the ideal starting parameters).
Using p0 = [max(y1), np.median(x2), 0.4, 1] and method='trf' I did get the following, which is closer but still misses the curve in the middle.
Any help greatly appreciated!
That is because your y-axis is a log scale. If you change the y-axis to a linear one, you'll see that the fit is actually quite good.
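To verify, one can re-plot on a linear y-axis (a sketch, assuming sigmoid, x2, y1, and popt from the question are in scope):
import numpy as np
import matplotlib.pyplot as plt

# compare data and fitted curve on a linear y-axis instead of a log scale
xs = np.linspace(min(x2), max(x2), 200)
plt.plot(x2, y1, 'o', label='data')
plt.plot(xs, sigmoid(xs, *popt), '-', label='sigmoid fit')
plt.yscale('linear')  # the original figure used a log-scaled y-axis
plt.legend()
plt.show()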

How to modify scatter-plot figure legend to show different formats for the same types of handles?

I am trying to modify the legend of a figure that contains two overlayed scatter plots. More specifically, I want two legend handles and labels: the first handle will contain multiple points (each colored differently), while the other handle consists of a single point.
As per this related question, I can modify the legend handle to show multiple points, each one being a different color.
As per this similar question, I am aware that I can change the number of points shown by a specified handle. However, this applies the change to all handles in the legend. Can it be applied to one handle only?
My goal is to combine both approaches. Is there a way to do this?
In case it isn't clear, I would like to modify the embedded figure (see below) such that the Z vs X handle shows only one point next to the corresponding legend label, while leaving the Y vs X handle unchanged.
My failed attempt at producing such a figure is below:
To replicate this figure, one can run the code below:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.legend_handler import HandlerTuple, HandlerRegularPolyCollection

class ScatterHandler(HandlerRegularPolyCollection):

    def update_prop(self, legend_handle, orig_handle, legend):
        """ """
        legend._set_artist_props(legend_handle)
        legend_handle.set_clip_box(None)
        legend_handle.set_clip_path(None)

    def create_collection(self, orig_handle, sizes, offsets, transOffset):
        """ """
        p = type(orig_handle)(
            [orig_handle.get_paths()[0]],
            sizes=sizes,
            offsets=offsets,
            transOffset=transOffset,
            cmap=orig_handle.get_cmap(),
            norm=orig_handle.norm
        )
        a = orig_handle.get_array()
        if a is not None:
            p.set_array(np.linspace(a.min(), a.max(), len(offsets)))
        else:
            self._update_prop(p, orig_handle)
        return p

x = np.arange(10)
y = np.sin(x)
z = np.cos(x)

fig, ax = plt.subplots()
hy = ax.scatter(x, y, cmap='plasma', c=y, label='Y vs X')
hz = ax.scatter(x, z, color='k', label='Z vs X')
ax.grid(color='k', linestyle=':', alpha=0.3)
fig.subplots_adjust(bottom=0.2)

handler_map = {type(hz): ScatterHandler()}
fig.legend(mode='expand', ncol=2, loc='lower center',
           handler_map=handler_map, scatterpoints=5)
plt.show()
plt.close(fig)
One solution that I do not like is to create two legends: one for Z vs X and one for Y vs X. But my actual use case involves a variable number of handles (which can exceed two), and I would prefer not having to calculate the optimal width/height of each legend box. How else can this problem be approached?
This is a dirty trick and not an elegant solution, but you can set the sizes of the other points in the Z vs X legend entry to 0. Just change your last two lines to the following:
leg = fig.legend(mode='expand', ncol=2, loc='lower center', handler_map=handler_map, scatterpoints=5)
# The third dot of the second legend stays the same size, others are set to 0
leg.legendHandles[1].set_sizes([0,0,leg.legendHandles[1].get_sizes()[2],0,0])
The result is as shown.
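If the number of legend entries varies, the same trick can be applied in a loop (a sketch, assuming, as in the snippet above, that each handle's get_sizes() returns one size per legend point):
# keep only the middle point of every handle except the first (Y vs X)
for h in leg.legendHandles[1:]:
    sizes = list(h.get_sizes())
    mid = len(sizes) // 2
    h.set_sizes([s if i == mid else 0 for i, s in enumerate(sizes)])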

When plotting the Wigner function of a coherent state using QuTiP strange patterns appear

I noticed something strange today when I plotted the Wigner function of a coherent state using the open-source quantum toolbox QuTiP in Python.
When I made the plot, I noticed strange patterns around the edge that are not supposed to be there. I believe it's just some sort of numerical error, but I don't know how to get rid of them or minimize them, or, most importantly, what's causing them.
Here is the code
# import packages
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib as mpl
from matplotlib import cm
from qutip import *

N = 60  # number of levels in Hilbert space

# density matrix of a coherent state
rho_coherent = coherent_dm(N, 1-1j)

X = np.linspace(-3, 3, 300)
Y = np.linspace(-3, 3, 300)

# Wigner function
W = wigner(rho_coherent, X, Y, 'iterative', 2)

X, Y = np.meshgrid(X, Y)

# Color Normalization
class MidpointNormalize(colors.Normalize):
    def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
        self.midpoint = midpoint
        colors.Normalize.__init__(self, vmin, vmax, clip)

    def __call__(self, value, clip=None):
        x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
        return np.ma.masked_array(np.interp(value, x, y))

# contour plot
plt.subplot(111, aspect='equal')
plt.contourf(X, Y, W, 100, cmap=cm.RdBu_r, norm=MidpointNormalize(midpoint=0.))
plt.show()
and here is the plot
As you can clearly see, the blue spots around the edges are not supposed to be there! A blue spot indicates that the Wigner function is negative at that point, but a coherent state should have a Wigner function that is positive everywhere!
I also noticed that when I reduce the linspace steps from 300 to 100, the blue parts disappear.
Would appreciate very much if someone can explain what's causing this problem to appear.
This is simply due to truncation. When using a finite number of modes (in your case N=60), the Wigner function will go negative at some point.
Reducing the number of linspace steps brings the negative regions you see on the plot into the zero-value contour increment, so those regions are displayed as zero. Reducing the linspace steps is probably the best solution to your problem: your plot will only ever be as accurate as the truncation allows, so simply reduce the resolution until those errors disappear.
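To check that truncation is indeed the cause, one can watch the most negative value of the Wigner function shrink as the Hilbert-space dimension grows. A minimal sketch, assuming nothing beyond the question's setup:
import numpy as np
from qutip import coherent_dm, wigner

xvec = np.linspace(-3, 3, 300)
for N in (20, 60, 120):
    rho = coherent_dm(N, 1 - 1j)
    W = wigner(rho, xvec, xvec)
    print(N, W.min())  # the spurious negative values should shrink as N grows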

1-D interpolation using python 3.x

I have data that looks like a sigmoidal plot, but flipped about a vertical line.
The plot is the result of plotting 1D data, though, rather than some sort of function.
My goal is to find the x value when the y value is at 50%. As you can see, there is no data point when y is exactly at 50%.
Interpolation comes to mind, but I'm not sure whether it enables me to find the x value when the y value is 50%. So my question is: 1) can you use interpolation to find the x when the y is 50%, or 2) do you need to fit the data to some sort of function?
Below is what I currently have in my code
import numpy as np
import matplotlib.pyplot as plt
my_x = [4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66]
my_y_raw=np.array([0.99470977497817203, 0.99434995886145172, 0.98974611323163653, 0.961630837657524, 0.99327633558441175, 0.99338952769251909, 0.99428263292577534, 0.98690514212711611, 0.99111667721533181, 0.99149418924880861, 0.99133773062680464, 0.99143506380003499, 0.99151080464011454, 0.99268261743308517, 0.99289757252812316, 0.99100207861144063, 0.99157171773324027, 0.99112571824824358, 0.99031608691035722, 0.98978104266076905, 0.989782674787969, 0.98897835092187614, 0.98517540405423909, 0.98308943666187076, 0.96081810781994603, 0.85563541881892147, 0.61570811548079107, 0.33076276040577052, 0.14655134838124245, 0.076853147122142126, 0.035831324928136087, 0.021344669212790181])
my_y=my_y_raw/np.max(my_y_raw)
plt.plot(my_x, my_y, color='k', markersize=40)
plt.scatter(my_x, my_y, marker='*', label="myplot", color='k', edgecolor='k', linewidth=1, facecolors='none', s=50)
plt.legend(loc="lower left")
plt.xlim([4,102])
plt.show()
Using SciPy
The most straightforward way to do the interpolation is to use the SciPy interpolate.interp1d function. SciPy is closely related to NumPy, and you may already have it installed. The advantage of interp1d is that it can sort the data for you. This comes at the cost of somewhat funky syntax. Many interpolation functions assume that you are trying to interpolate a y value from an x value, and they generally need the "x" values to be monotonically increasing. In your case, we swap the normal sense of x and y. The y values have an outlier, as @Abhishek Mishra has pointed out. In the case of your data, you are lucky and can get away with leaving the outlier in.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
my_x = [4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,
48,50,52,54,56,58,60,62,64,66]
my_y_raw=np.array([0.99470977497817203, 0.99434995886145172,
0.98974611323163653, 0.961630837657524, 0.99327633558441175,
0.99338952769251909, 0.99428263292577534, 0.98690514212711611,
0.99111667721533181, 0.99149418924880861, 0.99133773062680464,
0.99143506380003499, 0.99151080464011454, 0.99268261743308517,
0.99289757252812316, 0.99100207861144063, 0.99157171773324027,
0.99112571824824358, 0.99031608691035722, 0.98978104266076905,
0.989782674787969, 0.98897835092187614, 0.98517540405423909,
0.98308943666187076, 0.96081810781994603, 0.85563541881892147,
0.61570811548079107, 0.33076276040577052, 0.14655134838124245,
0.076853147122142126, 0.035831324928136087, 0.021344669212790181])
# set assume_sorted to have scipy automatically sort for you
f = interp1d(my_y_raw, my_x, assume_sorted = False)
xnew = f(0.5)
print('interpolated value is ', xnew)
plt.plot(my_x, my_y_raw,'x-', markersize=10)
plt.plot(xnew, 0.5, 'x', color = 'r', markersize=20)
plt.plot((0, xnew), (0.5,0.5), ':')
plt.grid(True)
plt.show()
which gives
interpolated value is 56.81214249272691
Using NumPy
Numpy also has an interp function, but it doesn't do the sort for you. And if you don't sort, you'll be sorry:
Does not check that the x-coordinate sequence xp is increasing. If xp
is not increasing, the results are nonsense.
The only way I could get np.interp to work was to shove the data into a structured array.
import numpy as np
import matplotlib.pyplot as plt
my_x = np.array([4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,
48,50,52,54,56,58,60,62,64,66], dtype=float)
my_y_raw=np.array([0.99470977497817203, 0.99434995886145172,
0.98974611323163653, 0.961630837657524, 0.99327633558441175,
0.99338952769251909, 0.99428263292577534, 0.98690514212711611,
0.99111667721533181, 0.99149418924880861, 0.99133773062680464,
0.99143506380003499, 0.99151080464011454, 0.99268261743308517,
0.99289757252812316, 0.99100207861144063, 0.99157171773324027,
0.99112571824824358, 0.99031608691035722, 0.98978104266076905,
0.989782674787969, 0.98897835092187614, 0.98517540405423909,
0.98308943666187076, 0.96081810781994603, 0.85563541881892147,
0.61570811548079107, 0.33076276040577052, 0.14655134838124245,
0.076853147122142126, 0.035831324928136087, 0.021344669212790181],
dtype=float)
dt = np.dtype([('x', float), ('y', float)])  # np.float was removed from NumPy; built-in float is equivalent
data = np.zeros( (len(my_x)), dtype = dt)
data['x'] = my_x
data['y'] = my_y_raw
data.sort(order = 'y') # sort data in place by y values
print('numpy interp gives ', np.interp(0.5, data['y'], data['x']))
which gives
numpy interp gives 56.81214249272691
As you said, your data looks like a flipped sigmoidal. Can we make the assumption that your function is strictly decreasing? If so, we can try the following method (a sketch of the search step follows this list):
1. Remove all the points where the data is not strictly decreasing. For example, for your data that point will be near 0.
2. Use binary search to find the location where y=0.5 should be inserted.
3. Now you know the two (x, y) pairs between which your desired y=0.5 should lie.
4. Use simple linear interpolation if the (x, y) pairs are very close; otherwise, see what the sigmoid approximation looks like near those pairs.
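A sketch of steps 2-4 with NumPy (assuming my_x and my_y are arrays already cleaned so that my_y is strictly decreasing):
import numpy as np

# reverse so the y values are ascending, as binary search requires
ys = my_y[::-1]
xs = my_x[::-1]
i = np.searchsorted(ys, 0.5)  # binary search for where y=0.5 would be inserted
# linear interpolation between the two bracketing (x, y) pairs
x_half = np.interp(0.5, ys[i-1:i+1], xs[i-1:i+1])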
You might not need to fit any function to your data. Simply find the following two elements:
- The smallest x for which y < 50%
- The largest x for which y > 50%
Then use linear interpolation between those two points to find x*. Below is the code:
import numpy as np

my_x = np.array([4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66])
my_y=np.array([0.99470977497817203, 0.99434995886145172, 0.98974611323163653, 0.961630837657524, 0.99327633558441175, 0.99338952769251909, 0.99428263292577534, 0.98690514212711611, 0.99111667721533181, 0.99149418924880861, 0.99133773062680464, 0.99143506380003499, 0.99151080464011454, 0.99268261743308517, 0.99289757252812316, 0.99100207861144063, 0.99157171773324027, 0.99112571824824358, 0.99031608691035722, 0.98978104266076905, 0.989782674787969, 0.98897835092187614, 0.98517540405423909, 0.98308943666187076, 0.96081810781994603, 0.85563541881892147, 0.61570811548079107, 0.33076276040577052, 0.14655134838124245, 0.076853147122142126, 0.035831324928136087, 0.021344669212790181])
tempInd1 = my_y < .5  # this will only work if the values are monotonic
x1 = my_x[tempInd1][0]    # smallest x with y < 0.5
y1 = my_y[tempInd1][0]
x2 = my_x[~tempInd1][-1]  # largest x with y >= 0.5
y2 = my_y[~tempInd1][-1]
np.interp(0.5, [y1, y2], [x1, x2])  # scipy.interp has been removed; np.interp is the equivalent
