I am trying to modify the legend of a figure that contains two overlaid scatter plots. More specifically, I want two legend handles and labels: the first handle will contain multiple points (each colored differently), while the other handle consists of a single point.
As per this related question, I can modify the legend handle to show multiple points, each one being a different color.
As per this similar question, I am aware that I can change the number of points shown by a specified handle. However, this applies the change to all handles in the legend. Can it be applied to one handle only?
My goal is to combine both approaches. Is there a way to do this?
In case it isn't clear, I would like to modify the embedded figure (see below) such that the Z vs X handle shows only one point next to the corresponding legend label, while leaving the Y vs X handle unchanged.
My failed attempt at producing such a figure is below:
To replicate this figure, one can run the code below:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.legend_handler import HandlerRegularPolyCollection
class ScatterHandler(HandlerRegularPolyCollection):
    def update_prop(self, legend_handle, orig_handle, legend):
        """Attach the proxy artist to the legend without clipping it."""
        legend._set_artist_props(legend_handle)
        legend_handle.set_clip_box(None)
        legend_handle.set_clip_path(None)

    def create_collection(self, orig_handle, sizes, offsets, transOffset):
        """Build a proxy collection that reuses the original colormap."""
        p = type(orig_handle)([orig_handle.get_paths()[0]],
                              sizes=sizes, offsets=offsets,
                              transOffset=transOffset,
                              cmap=orig_handle.get_cmap(),
                              norm=orig_handle.norm)
        a = orig_handle.get_array()
        if a is not None:
            # Spread the original color range across the legend points.
            p.set_array(np.linspace(a.min(), a.max(), len(offsets)))
        else:
            self._update_prop(p, orig_handle)
        return p
x = np.arange(10)
y = np.sin(x)
z = np.cos(x)
fig, ax = plt.subplots()
hy = ax.scatter(x, y, cmap='plasma', c=y, label='Y vs X')
hz = ax.scatter(x, z, color='k', label='Z vs X')
ax.grid(color='k', linestyle=':', alpha=0.3)
fig.subplots_adjust(bottom=0.2)
handler_map = {type(hz) : ScatterHandler()}
fig.legend(mode='expand', ncol=2, loc='lower center', handler_map=handler_map, scatterpoints=5)
plt.show()
plt.close(fig)
One solution that I do not like is to create two legends, one for Z vs X and one for Y vs X. But my actual use case involves a variable number of handles (which can exceed two), and I would prefer not having to calculate the optimal width/height of each legend box. How else can this problem be approached?
This is a dirty trick rather than an elegant solution, but you can set the sizes of all but one of the points in the Z vs X legend entry to 0. Just change your last two lines to the following.
leg = fig.legend(mode='expand', ncol=2, loc='lower center', handler_map=handler_map, scatterpoints=5)
# The third dot of the second legend entry keeps its size; the others are set to 0
leg.legendHandles[1].set_sizes([0,0,leg.legendHandles[1].get_sizes()[2],0,0])
The result is as shown.
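Since your real use case has a variable number of handles, the same trick can be looped over every handle that should collapse to a single point. Here is a sketch of that generalization (my addition, not a tested recipe; note that on Matplotlib 3.7+ the attribute is spelled leg.legend_handles):

# Zero every dot except the middle one (index 2 of the 5 created by
# scatterpoints=5) for each handle we want shown as a single point;
# the colormapped hy handle is left untouched.
single_point = [hz]                  # extend with more scatter artists
for handle, orig in zip(leg.legendHandles, [hy, hz]):
    if orig in single_point:
        sizes = [0.0] * 5
        sizes[2] = handle.get_sizes().max()
        handle.set_sizes(sizes)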
I noticed something strange the other day when I plotted the Wigner function of a coherent state using the open-source quantum toolbox QuTiP in Python.
When I do the plot, I notice strange patterns just around the edge of the plot that are not supposed to be there. I believe it's just some sort of numerical error, but I don't know how to get rid of it or minimize it, or, most important, what's causing it.
Here is the code:
# import packages
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from matplotlib import cm
from qutip import *
N = 60 # number of levels in Hilbert space
# density matrix of a coherent state
rho_coherent = coherent_dm(N, 1-1j)
X = np.linspace(-3, 3, 300)
Y = np.linspace(-3, 3, 300)
# Wigner function
W = wigner(rho_coherent, X, Y, 'iterative', 2)
X, Y = np.meshgrid(X, Y)
# Color Normalization
class MidpointNormalize(colors.Normalize):
    def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
        self.midpoint = midpoint
        colors.Normalize.__init__(self, vmin, vmax, clip)

    def __call__(self, value, clip=None):
        # Map [vmin, midpoint, vmax] onto [0, 0.5, 1] so the colormap
        # is centered on the midpoint (here 0).
        x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
        return np.ma.masked_array(np.interp(value, x, y))
# contour plot
plt.subplot(111, aspect='equal')
plt.contourf(X, Y, W, 100, cmap = cm.RdBu_r, norm = MidpointNormalize(midpoint=0.))
plt.show()
and here is the plot
The blue spots that you can clearly see around the edges are not supposed to be there! A blue spot indicates that the Wigner function is negative at that point, but a coherent state should have a Wigner function that's positive everywhere!
I also noticed that when I reduce the linspace steps from 300 to 100, the blue parts disappear.
I would very much appreciate it if someone could explain what's causing this problem to appear.
This is simply due to truncation. When using a finite number of basis states (in your case N = 60), the Wigner function will go negative at some point.
Reducing the number of linspace steps brings the negative regions you see on the plot into the contour increment that contains zero, so those regions are displayed as zero. Reducing the linspace steps is probably the best solution to your problem: your plot will only be as accurate as the errors introduced by truncation allow, so simply reduce the resolution until those errors disappear.
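If you want to probe the truncation claim directly, here is a small diagnostic sketch (my addition, assuming only NumPy and QuTiP): print the most negative Wigner value for a few truncation levels; if truncation is the culprit, it should move toward zero as N grows.

import numpy as np
from qutip import coherent_dm, wigner

# Diagnostic: track the most negative Wigner value as the
# Hilbert-space truncation N grows.
xvec = np.linspace(-3, 3, 300)
for N in (20, 60, 120):
    W = wigner(coherent_dm(N, 1 - 1j), xvec, xvec)
    print(f"N = {N:4d}: min(W) = {W.min():.3e}")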
I have data that looks like a sigmoidal plot, but flipped about the vertical axis.
The plot comes from plotting 1D data, though, rather than some sort of function.
My goal is to find the x value when the y value is at 50%. As you can see, there is no data point where y is exactly at 50%.
Interpolation comes to mind, but I'm not sure whether interpolation lets me find the x value when the y value is 50%. So my questions are: 1) can you use interpolation to find x when y is 50%, or 2) do you need to fit the data to some sort of function?
Below is what I currently have in my code
import numpy as np
import matplotlib.pyplot as plt
my_x = [4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66]
my_y_raw=np.array([0.99470977497817203, 0.99434995886145172, 0.98974611323163653, 0.961630837657524, 0.99327633558441175, 0.99338952769251909, 0.99428263292577534, 0.98690514212711611, 0.99111667721533181, 0.99149418924880861, 0.99133773062680464, 0.99143506380003499, 0.99151080464011454, 0.99268261743308517, 0.99289757252812316, 0.99100207861144063, 0.99157171773324027, 0.99112571824824358, 0.99031608691035722, 0.98978104266076905, 0.989782674787969, 0.98897835092187614, 0.98517540405423909, 0.98308943666187076, 0.96081810781994603, 0.85563541881892147, 0.61570811548079107, 0.33076276040577052, 0.14655134838124245, 0.076853147122142126, 0.035831324928136087, 0.021344669212790181])
my_y=my_y_raw/np.max(my_y_raw)
plt.plot(my_x, my_y,color='k', markersize=40)
plt.scatter(my_x,my_y,marker='*',label="myplot", color='k', edgecolor='k', linewidth=1,facecolors='none',s=50)
plt.legend(loc="lower left")
plt.xlim([4,102])
plt.show()
Using SciPy
The most straightforward way to do the interpolation is to use the SciPy interpolate.interp1d function. SciPy is closely related to NumPy, and you may already have it installed. The advantage of interp1d is that it can sort the data for you, at the cost of somewhat funky syntax. Many interpolation functions assume that you are interpolating a y value from an x value, and they generally need the "x" values to be monotonically increasing. In your case, we swap the normal sense of x and y. The y values have an outlier, as @Abhishek Mishra has pointed out; in the case of your data, you are lucky and can get away with leaving the outlier in.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
my_x = [4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,
48,50,52,54,56,58,60,62,64,66]
my_y_raw=np.array([0.99470977497817203, 0.99434995886145172,
0.98974611323163653, 0.961630837657524, 0.99327633558441175,
0.99338952769251909, 0.99428263292577534, 0.98690514212711611,
0.99111667721533181, 0.99149418924880861, 0.99133773062680464,
0.99143506380003499, 0.99151080464011454, 0.99268261743308517,
0.99289757252812316, 0.99100207861144063, 0.99157171773324027,
0.99112571824824358, 0.99031608691035722, 0.98978104266076905,
0.989782674787969, 0.98897835092187614, 0.98517540405423909,
0.98308943666187076, 0.96081810781994603, 0.85563541881892147,
0.61570811548079107, 0.33076276040577052, 0.14655134838124245,
0.076853147122142126, 0.035831324928136087, 0.021344669212790181])
# assume_sorted=False tells scipy to sort the data for you
f = interp1d(my_y_raw, my_x, assume_sorted = False)
xnew = f(0.5)
print('interpolated value is ', xnew)
plt.plot(my_x, my_y_raw,'x-', markersize=10)
plt.plot(xnew, 0.5, 'x', color = 'r', markersize=20)
plt.plot((0, xnew), (0.5,0.5), ':')
plt.grid(True)
plt.show()
which gives
interpolated value is 56.81214249272691
Using NumPy
Numpy also has an interp function, but it doesn't do the sort for you. And if you don't sort, you'll be sorry:
Does not check that the x-coordinate sequence xp is increasing. If xp
is not increasing, the results are nonsense.
The only way I could get np.interp to work was to shove the data into a structured array.
import numpy as np
import matplotlib.pyplot as plt
my_x = np.array([4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,
48,50,52,54,56,58,60,62,64,66], dtype=float)
my_y_raw=np.array([0.99470977497817203, 0.99434995886145172,
0.98974611323163653, 0.961630837657524, 0.99327633558441175,
0.99338952769251909, 0.99428263292577534, 0.98690514212711611,
0.99111667721533181, 0.99149418924880861, 0.99133773062680464,
0.99143506380003499, 0.99151080464011454, 0.99268261743308517,
0.99289757252812316, 0.99100207861144063, 0.99157171773324027,
0.99112571824824358, 0.99031608691035722, 0.98978104266076905,
0.989782674787969, 0.98897835092187614, 0.98517540405423909,
0.98308943666187076, 0.96081810781994603, 0.85563541881892147,
0.61570811548079107, 0.33076276040577052, 0.14655134838124245,
0.076853147122142126, 0.035831324928136087, 0.021344669212790181],
dtype=float)  # np.float was removed from recent NumPy; use plain float
dt = np.dtype([('x', np.float64), ('y', np.float64)])
data = np.zeros( (len(my_x)), dtype = dt)
data['x'] = my_x
data['y'] = my_y_raw
data.sort(order = 'y') # sort data in place by y values
print('numpy interp gives ', np.interp(0.5, data['y'], data['x']))
which gives
numpy interp gives 56.81214249272691
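For what it's worth (a sketch, not part of the original answer), np.argsort achieves the same sort without a structured array:

# Alternative: sort both arrays by the y values with np.argsort.
order = np.argsort(my_y_raw)
print('numpy interp gives ', np.interp(0.5, my_y_raw[order], my_x[order]))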
As you said, your data looks like a flipped sigmoid. Can we make the assumption that your function is strictly decreasing? If so, we can try the following method (a sketch follows this list):
Remove all the points where the data is not strictly decreasing. For your data, those are the small fluctuations near the start of the curve.
Use binary search to find the location where y = 0.5 should be inserted.
Now you know two (x, y) pairs between which your desired y = 0.5 should lie.
You can use simple linear interpolation if those (x, y) pairs are very close.
Otherwise, you can see how the sigmoid is approximated near those pairs.
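A minimal sketch of these steps (my addition; my_x and my_y_raw are the arrays from the question, and the target is assumed to lie strictly inside the cleaned y range):

import numpy as np

def x_at_target(my_x, my_y, target=0.5):
    """Find x where strictly decreasing data crosses `target`."""
    my_x = np.asarray(my_x, dtype=float)
    my_y = np.asarray(my_y, dtype=float)
    # Step 1: walk backward from the tail, keeping only points that
    # continue a strictly decreasing sequence.
    keep = [len(my_y) - 1]
    for i in range(len(my_y) - 2, -1, -1):
        if my_y[i] > my_y[keep[-1]]:
            keep.append(i)
    xs, ys = my_x[keep[::-1]], my_y[keep[::-1]]
    # Step 2: binary search; np.searchsorted needs ascending values,
    # so search the reversed y array.
    j = np.searchsorted(ys[::-1], target)
    i = len(ys) - 1 - j            # ys[i] >= target > ys[i + 1]
    # Step 3: linear interpolation between the two bracketing pairs.
    x1, y1, x2, y2 = xs[i], ys[i], xs[i + 1], ys[i + 1]
    return x1 + (target - y1) * (x2 - x1) / (y2 - y1)

print(x_at_target(my_x, my_y_raw))   # ~56.81 for the data above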
You might not need to fit any function to your data. Simply find the following two elements:
The smallest x for which y < 50%
The largest x for which y > 50%
Then interpolate linearly between them to find x*. Below is the code:
import numpy as np

my_x = np.array([4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66])
my_y=np.array([0.99470977497817203, 0.99434995886145172, 0.98974611323163653, 0.961630837657524, 0.99327633558441175, 0.99338952769251909, 0.99428263292577534, 0.98690514212711611, 0.99111667721533181, 0.99149418924880861, 0.99133773062680464, 0.99143506380003499, 0.99151080464011454, 0.99268261743308517, 0.99289757252812316, 0.99100207861144063, 0.99157171773324027, 0.99112571824824358, 0.99031608691035722, 0.98978104266076905, 0.989782674787969, 0.98897835092187614, 0.98517540405423909, 0.98308943666187076, 0.96081810781994603, 0.85563541881892147, 0.61570811548079107, 0.33076276040577052, 0.14655134838124245, 0.076853147122142126, 0.035831324928136087, 0.021344669212790181])
tempInd1 = my_y<.5 # This will only work if the values are monotonic
x1 = my_x[tempInd1][0]
y1 = my_y[tempInd1][0]
x2 = my_x[~tempInd1][-1]
y2 = my_y[~tempInd1][-1]
# scipy.interp was just an alias for np.interp and has since been removed
print(np.interp(0.5, [y1, y2], [x1, x2]))
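This reproduces the same crossing point, approximately 56.81, as the interp1d and np.interp answers above, since all three interpolate linearly between the same two bracketing data points.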