This question already has an answer here:
imshow(img, cmap=cm.gray) shows a white for 128 value
(1 answer)
Closed 4 years ago.
So this seems like a bug, but it could be intended behavior.
My code is as follows:
import matplotlib.pyplot as pyplot
import numpy as np
array = np.ones([10, 10])
# array[0, 0] = 0
fig, ax = pyplot.subplots(figsize=(10, 5))
ax.imshow(array, cmap=pyplot.cm.binary)
pyplot.show()
The result is a white image and not a black one as expected:
What's weird about this behavior is that uncommenting that one line and changing a single pixel seemingly "fixes" the problem:
The closest explanation I found online was:
[...] The issue is that when initialising the image with a uniform array, the minimum and maximum of the colormap are identical. As we are only changing the data, not the colormap, all images are shown as being of uniform colour.
With that explanation in mind, how do I fix this behavior?
If the vmin and vmax parameters of imshow are left unspecified, imshow sets them to be
vmin = array.min() # in this case, vmin=1
vmax = array.max() # in this case, vmax=1
It then normalizes the array values to fall between 0 and 1, using matplotlib.colors.Normalize by default.
In [99]: norm = mcolors.Normalize(vmin=1, vmax=1)
In [100]: norm(1)
Out[100]: 0.0
Thus each point in array is mapped to the color associated with 0.0:
In [101]: plt.cm.binary(0)
Out[101]: (1.0, 1.0, 1.0, 1.0) # white
Usually array will contain a variety of values and matplotlib's normalization will just "do the right thing" for you automatically. However, in these corner cases where array consists of only one value, you may need to set vmin and vmax explicitly:
import matplotlib.pyplot as pyplot
import numpy as np
array = np.ones([10, 10])
fig, ax = pyplot.subplots(figsize=(10, 5))
ax.imshow(array, cmap=pyplot.cm.binary, vmin=0, vmax=1)
pyplot.show()
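Equivalently, you could pass the Normalize object yourself instead of vmin/vmax. A small sketch (not from the original answer) that builds the same normalization imshow would otherwise create internally:
import matplotlib.colors as mcolors
import matplotlib.pyplot as pyplot
import numpy as np

array = np.ones([10, 10])
fig, ax = pyplot.subplots(figsize=(10, 5))
# Normalize(vmin=0, vmax=1) maps the constant value 1 to the top of the binary colormap (black)
ax.imshow(array, cmap=pyplot.cm.binary, norm=mcolors.Normalize(vmin=0, vmax=1))
pyplot.show()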
You can sidestep this problem by using explicit colors instead of color mapping:
array = np.zeros((10, 10, 3), 'u1')
#array[0, 0, :] = 255
fig, ax = pyplot.subplots(figsize=(10, 5))
ax.imshow(array)
pyplot.show()
This way, zero means black and (255,255,255) means white (in RGB).
Related
How to get the correct yticks of a colorbar in matplotlib without whitespace in the colorbar?
This is my code. Note that the colors of the colorbar become misaligned if I apply .set_ticks() using the (formatted) values I got through get_ticks(); these values (as printed in the output) seem incorrect, as the minimum shown is 15 but my minimum input value is 17.15116279.
import geopandas as gpd # version 0.11.0
import matplotlib.pyplot as plt # version 3.5.2
import matplotlib.colors as clr
from matplotlib import colorbar
from matplotlib.colors import Normalize # for the colorbar
from matplotlib import cm
import matplotlib.ticker as mtick
cmap = clr.LinearSegmentedColormap.from_list('custom blue', ["#fce19c", "#c4ddee"], N=400)
world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
world = world[(world.pop_est>0) & (world.name!="Antarctica")]
vals = [22.36958444, 29.21348315, 30.74534161, 37.42331288, 20.,
19.31407942, 26.08695652, 26.36165577, 25.0, 17.79279279,
17.15116279, 19.60784314]
world = world[:len(vals)]
world['gdp_per_cap'] = vals
fig, ax = plt.subplots(1, 1)
ax = world.plot(column='gdp_per_cap', ax=ax, legend=False, cmap=cmap)
from mpl_toolkits.axes_grid1.axes_divider import make_axes_locatable
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.1)
vmin = world['gdp_per_cap'].min()
vmax = world['gdp_per_cap'].max()
norm = Normalize(vmin=vmin, vmax=vmax)
n_cmap = cm.ScalarMappable(norm=norm, cmap=cmap)
n_cmap.set_array([])
cbar = fig.colorbar(n_cmap, cax=cax)
print(cax==cbar.ax) # True
vals = cbar.ax.get_yticks()
print(vals)
cbar.ax.yaxis.set_ticks(vals)
cbar.ax.set_yticklabels(['{:,.0%}'.format(x/100) for x in vals])
plt.show()
Note that the colorbar remains correct if
cbar.ax.yaxis.set_ticks(vals)
is not applied. But in that case I get the warning "UserWarning: FixedFormatter should only be used together with FixedLocator".
Also note: to avoid the issue I could apply a format this way:
cax_format = mtick.PercentFormatter(decimals=2)
cbar = fig.colorbar(n_cmap, cax=cax, format=cax_format)
And if I add the line
fig.draw_without_rendering()
# followed by vals = cbar.ax.get_yticks()
as suggested by Stef in the comments, then the values are different (but still incorrect from my point of view) and the colorbar gets a second white area because of it:
This is what it looks like if I do not set the ticks. This is what I am after, but the warning made me set the ticks and realise that something may be wrong.
Based on the 2nd comment by Stef: "note that not necessarily all ticks are within the view limits, i.e. this first and last one may not actually be displayed. Manually setting ticks, on the other hand, expands the view limits to the ticks range given. If these are outside vmin / vmax it will cause the white gap you see."
Indeed, if I manually adjust the values as follows:
fig.draw_without_rendering()
vals = cbar.ax.get_yticks()
print(vals)
vals = [vmin] + vals[1:-1].tolist() + [vmax]
print(vals)
cbar.ax.yaxis.set_ticks(vals)
vals = ['{:,.0%}'.format(x/100) for x in vals]
vals = [''] + vals[1:-1] + ['']
print(vals)
cbar.ax.set_yticklabels(vals)
plt.show()
Then you get:
By manually setting the ticks and tick labels, you create a fixed locator and a corresponding function formatter. Using a fixed locator is seldom the optimal solution due to the possible pitfalls outlined in the comments.
If you just want to add a % sign and/or change the number of decimals, you can use a string formatter, which is implicitly created when you pass a formatting string to set_major_formatter:
cax.yaxis.set_major_formatter('{x:g} %')
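A minimal sketch under the same setup as the question (reusing the cbar and mtick objects defined there): drop the manual set_ticks/set_yticklabels calls and only change how the automatically chosen ticks are printed.
# Leave the locator alone; only swap the formatter (matplotlib >= 3.3 accepts a format string)
cbar.ax.yaxis.set_major_formatter('{x:,.0f}%')
# or, equivalently, treat the values as percentages already scaled to 0-100:
# cbar.ax.yaxis.set_major_formatter(mtick.PercentFormatter(xmax=100, decimals=0))
plt.show()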
So, I have to make a bunch of contourf plots for different days that need to share a colorbar range. That part was easy, but sometimes the maximum value for a given date is above the colorbar range, and that changes the look of the plot in a way I don't want. When that happens, I want the extend triangle to be added above the "original" colorbar, as is clear in the attached picture.
I need the code to run automatically: right now I only feed in the data and the colorbar range and it outputs the images, so fitting the colorbar also needs to be automatic. I can't add fixed padding, because the figure size changes depending on the area being plotted.
The reason I need this behavior is that I eventually want to make a .gif, and the colorbar can't move around in that short video. I need the triangle to be added, when needed, to the top (and bottom) without messing with the "main" colorbar.
Thanks!
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize, BoundaryNorm
from matplotlib import cm

###############
## Finds the appropriate option for the "extend" argument of fig.colorbar
def find_extend(vmin, vmax, datamin, datamax):
    # extend is one of {'neither', 'both', 'min', 'max'}
    if datamin >= vmin:
        if datamax <= vmax:
            extend = "neither"
        else:
            extend = "max"
    else:
        if datamax <= vmax:
            extend = "min"
        else:
            extend = "both"
    return extend
###########

vmin = 0
vmax = 30
nlevels = 8
colormap = cm.get_cmap("rainbow")

### Creating data
z_1 = 30 * abs(np.random.rand(5, 5))
z_2 = 37 * abs(np.random.rand(5, 5))
data = {1: z_1, 2: z_2}
x = range(5)
y = range(5)

## Plot
for day in [1, 2]:
    fig = plt.figure(figsize=(4, 4))
    ## Normally figsize=get_figsize(bounds) and bounds is retrieved from gdf.total_bounds
    ## The function creates the figure size based on the x/y ratio of the bounds
    ax = fig.add_subplot(1, 1, 1)
    norm = BoundaryNorm(np.linspace(vmin, vmax, nlevels + 1), ncolors=colormap.N)
    z = data[day]
    cs = ax.contourf(x, y, z, cmap=colormap, norm=norm, vmin=vmin, vmax=vmax)
    extend = find_extend(vmin, vmax, np.nanmin(z), np.nanmax(z))
    fig.colorbar(cm.ScalarMappable(norm=norm, cmap=colormap), ax=ax, extend=extend)
    plt.close(fig)
You can do something like this, putting a triangle on top of the colorbar manually:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

fig, ax = plt.subplots()
pc = ax.pcolormesh(np.random.randn(20, 20))
cb = fig.colorbar(pc)

# Triangle in colorbar-axes coordinates, drawn just above the top edge of the colorbar
trixy = np.array([[0, 1], [1, 1], [0.5, 1.05]])
p = mpatches.Polygon(trixy, transform=cb.ax.transAxes,
                     clip_on=False, edgecolor='k', linewidth=0.7,
                     facecolor='m', zorder=4, snap=True)
cb.ax.add_patch(p)
plt.show()
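If the triangle should appear only when the data actually overshoot the shared range, the same patch trick can be wrapped in a small helper driven by the extend string computed with find_extend from the question. A sketch (the helper name and the magenta fill are just for illustration):
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

def add_cbar_triangles(cb, extend):
    # Hypothetical helper (not part of the answer above): draw overflow triangles
    # outside the colorbar axes, so the colorbar itself never changes size.
    # "extend" is one of 'neither', 'min', 'max', 'both', e.g. from find_extend().
    kw = dict(transform=cb.ax.transAxes, clip_on=False,
              edgecolor='k', linewidth=0.7, facecolor='m', zorder=4, snap=True)
    if extend in ("max", "both"):
        cb.ax.add_patch(mpatches.Polygon([[0, 1], [1, 1], [0.5, 1.05]], **kw))
    if extend in ("min", "both"):
        cb.ax.add_patch(mpatches.Polygon([[0, 0], [1, 0], [0.5, -0.05]], **kw))

fig, ax = plt.subplots()
pc = ax.pcolormesh(np.random.randn(20, 20))
cb = fig.colorbar(pc)
add_cbar_triangles(cb, "both")
plt.show()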
I would like to create a version of this 2D binned "color map" with smoothed colors.
I am not even sure this would be the correct nomenclature for the plot, but, essentially, I want my figure to be color coded by the median values of a third variable for points that reside in each defined bin of my (X, Y) space.
Even though I am able to accomplish that to a certain degree (see example), I would like to find a way to create a version of the same plot with a smoothed color gradient. That would allow me to visualize the overall behavior of my distribution.
I tried ideas described here: Smoothing 2D map in python
and here: Python: binned_statistic_2d mean calculation ignoring NaNs in data
as well as links therein, but could not find a clear solution to the problem.
This is what I have so far:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from scipy.stats import binned_statistic_2d
np.random.seed(999)  # note: random.seed() would not seed numpy's generator

x = np.random.normal(0, 10, 5000)
y = np.random.normal(0, 10, 5000)
z = np.random.uniform(0, 10, 5000)
fig = plt.figure(figsize=(20, 20))
plt.rcParams.update({'font.size': 10})
ax = fig.add_subplot(3,3,1)
ax.set_axisbelow(True)
plt.grid(b=True, lw=0.5, zorder=-1)
x_bins = np.arange(-50., 50.5, 1.)
y_bins = np.arange(-50., 50.5, 1.)
cmap = plt.cm.get_cmap('jet_r',1000) #just a colormap
ret = binned_statistic_2d(x, y, z, statistic=np.median, bins=[x_bins, y_bins]) # Bin (X, Y) and create a map of the medians of "Colors"
plt.imshow(ret.statistic.T, origin='lower', extent=(-50, 50, -50, 50), cmap=cmap)  # origin must be 'upper' or 'lower'
plt.xlim(-40,40)
plt.ylim(-40,40)
plt.xlabel("X", fontsize=15)
plt.ylabel("Y", fontsize=15)
ax.set_yticks([-40,-30,-20,-10,0,10,20,30,40])
bounds = np.arange(2.0, 20.0, 1.0)
plt.colorbar(ticks=bounds, label="Color", fraction=0.046, pad=0.04)
# save plots
plt.savefig("Whatever_name.png", bbox_inches='tight')
Which produces the following image (from random data):
Therefore, the simple question would be: how to smooth these colors?
Thanks in advance!
PS: sorry for the lengthy code, but I believe a clear visualization is crucial for this particular problem.
Thanks to everyone who viewed this issue and tried to help!
I ended up being able to solve my own problem. In the end, it was all about image smoothing with a Gaussian kernel.
This link: Gaussian filtering a image with Nan in Python gave me the insight for the solution.
I basically implemented exactly the same code but, at the end, mapped the previously known NaN pixels from the original 2D array back onto the smoothed version. Unlike the solution from the link, my version does NOT fill the NaN pixels with values derived from the surrounding pixels. Or rather, it does, but I then erase those again.
Here is the final figure produced for the example I provided:
Final code, for reference, for those who might need in the future:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from scipy.stats import binned_statistic_2d
import scipy.stats as st
import scipy.ndimage
import scipy as sp
np.random.seed(999)  # note: random.seed() would not seed numpy's generator

x = np.random.normal(0, 10, 5000)
y = np.random.normal(0, 10, 5000)
z = np.random.uniform(0, 10, 5000)
fig = plt.figure(figsize=(20, 20))
plt.rcParams.update({'font.size': 10})
ax = fig.add_subplot(3,3,1)
ax.set_axisbelow(True)
plt.grid(b=True, lw=0.5, zorder=-1)
x_bins = np.arange(-50., 50.5, 1.)
y_bins = np.arange(-50., 50.5, 1.)
cmap = plt.cm.get_cmap('jet_r',1000) #just a colormap
ret = binned_statistic_2d(x, y, z, statistic=np.median, bins=[x_bins, y_bins]) # Bin (X, Y) and create a map of the medians of "Colors"
sigma=1 # standard deviation for Gaussian kernel
truncate=5.0 # truncate filter at this many sigmas
U = ret.statistic.T.copy()
V=U.copy()
V[np.isnan(U)]=0
VV=sp.ndimage.gaussian_filter(V,sigma=sigma)
W=0*U.copy()+1
W[np.isnan(U)]=0
WW=sp.ndimage.gaussian_filter(W,sigma=sigma)
np.seterr(divide='ignore', invalid='ignore')
Z=VV/WW
# Put the NaNs from the original (unsmoothed) array back into the smoothed result
Z[np.isnan(U)] = np.nan
plt.imshow(Z, origin='lower', extent=(-50, 50, -50, 50), cmap=cmap)
plt.xlim(-40,40)
plt.ylim(-40,40)
plt.xlabel("X", fontsize=15)
plt.ylabel("Y", fontsize=15)
ax.set_yticks([-40,-30,-20,-10,0,10,20,30,40])
bounds = np.arange(2.0, 20.0, 1.0)
plt.colorbar(ticks=bounds, label="Color", fraction=0.046, pad=0.04)
# save plots
plt.savefig("Whatever_name.png", bbox_inches='tight')
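For reuse across plots, the same normalized-convolution trick can be wrapped in a small helper. A sketch (the function name is made up; the rest of the plotting code above stays the same):
import numpy as np
import scipy.ndimage

def nan_gaussian_filter(arr, sigma):
    # Smooth with the NaNs replaced by 0, smooth a weight mask the same way,
    # divide the two, then put the original NaNs back.
    nan_mask = np.isnan(arr)
    filled = np.where(nan_mask, 0.0, arr)
    smoothed = scipy.ndimage.gaussian_filter(filled, sigma=sigma)
    weights = scipy.ndimage.gaussian_filter((~nan_mask).astype(float), sigma=sigma)
    with np.errstate(divide='ignore', invalid='ignore'):
        out = smoothed / weights
    out[nan_mask] = np.nan
    return out

# e.g. Z = nan_gaussian_filter(ret.statistic.T, sigma=1)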
This question already has answers here:
Setting Transparency Based on Pixel Values in Matplotlib
(3 answers)
Closed 3 years ago.
I am interpolating point data to generate dynamic flood inundation maps in a for loop. This produces a flood map in each iteration where the pixel values show the probability of water presence. However, I'm unable to make the dry pixels (values < 0.2) transparent so the base map can be seen when added.
Initially I created a geodataframe of predicted probabilities. Then I generated raster files with the rasterio library, using the condition (< 0.2 = np.nan) to mark dry pixels. My code is:
from shapely.geometry import Point
import geopandas
import numpy as np
import rasterio
from rasterio import features

geom = [Point(xy) for xy in zip(df.X, df.Y)]
crs = {'init': 'epsg:27700'}
gdf = geopandas.GeoDataFrame(df, crs=crs, geometry=geom)

### Rasterization
# Set up filenames
rst_fn = 'dem_9m_study_area.asc'  # template raster
out_fn = 'Raster.tif'
rst = rasterio.open(rst_fn)

# Copy and update the metadata from the input raster for the output
meta = rst.meta.copy()
meta.update(compress='lzw')

with rasterio.open(out_fn, 'w', **meta) as out:
    out_arr = out.read(1)
    shapes = ((geom, value) for geom, value in zip(gdf.geometry, gdf.Prob))
    burned = features.rasterize(shapes=shapes, fill=0, out=out_arr, transform=out.transform)
    burned[burned < 0.2] = np.nan
    out.write_band(1, burned)
Now I would like to import the saved raster and plot it over a background raster which is in the same coordinate system (EPSG:27700), and also show the colorbar.
I tried this:
plt.figure(num=None, figsize=(10, 8), dpi=80, facecolor='w', edgecolor='k')
plt.imshow(burned, cmap='Blues')
plt.colorbar()
plt.xlabel('X')
plt.ylabel('Y')
plt.title('Flood Extent: T=%i h'%op)
Extent with values <0.2 set to nan
This works fine without the background, though the x and y coordinates are not shown correctly. But it does not work if I add this to the above code:
bmap = rasterio.open("background_upton.tif") #import basemap
src = bmap.read(1)
plt.imshow(src, cmap = 'pink')
I also tried the method described in "Adding a background map to plots": https://geopandas.readthedocs.io/en/latest/gallery/plotting_basemap_background.html
But this does not seem to solve the problem. It would be great to get some suggestions on how to solve the issue.
I want to overlay the extent map using this background image
Try setting the minimum value for your color map and then specifying values below the minimum to be completely transparent.
I'll create an example array with values of either 1 or 2.
import numpy as np
import matplotlib.pyplot as plt
arr = np.ones([64,64])
arr[0:32] = 2
plt.imshow(arr, cmap='viridis')
plt.colorbar()
Now say we want values of 1 (purple) to be transparent in this case. We can do that by setting color map lower limit and then specifying how to map values below the limit:
cmap = plt.get_cmap('viridis')
cmap.set_under('k', alpha=0)
plt.imshow(arr, cmap=cmap, vmin=1.5)
plt.colorbar()
'k' is actually black, but alpha=0 makes it transparent.
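Applied to the flood map from the question, the same idea could look roughly like this (file names, the 0.2 threshold and the colormaps are taken from the question; the two rasters are assumed to cover the same grid, and this sketch is untested):
import matplotlib.pyplot as plt
import rasterio

bmap = rasterio.open("background_upton.tif")
flood = rasterio.open("Raster.tif")

cmap = plt.get_cmap('Blues').copy()  # copy() (matplotlib >= 3.4) avoids mutating the registered colormap
cmap.set_under('k', alpha=0)         # anything below vmin becomes fully transparent

fig, ax = plt.subplots(figsize=(10, 8))
ax.imshow(bmap.read(1), cmap='pink')
im = ax.imshow(flood.read(1), cmap=cmap, vmin=0.2)
fig.colorbar(im, ax=ax)
plt.show()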
I have a plot of values sampled from the set [-1, 0, 1], and each value is mapped to a color. However, it is possible to have a sample where only two different values appear ([-1, 0], [-1, 1], or [0, 1]), and if that happens, the color scheme should adapt accordingly.
If the number of unique values is 3, then this code works:
ax2 = plt.subplot2grid((n_rows , 1), (2, 0))
colors = [(216/255, 24/255, 24/255), (1, 1, 1), (143/255, 188/255, 143/255)]
positions = df['long'].astype(int) - df['short'].astype(int)
cm = LinearSegmentedColormap.from_list('mycols', colors, N=3)
ax2.pcolorfast(ax2.get_xlim(), ax2.get_ylim(), positions.values[np.newaxis], cmap=cm, alpha=0.5)
The result is
How should I manage the scenarios where only two colors are needed?
I think this controls the number of segments, but I don't know how to account for the color scheme
cm = LinearSegmentedColormap.from_list('colores', colors, N=len(list(set(positions))))
If you make colors a numpy array, you could do something along these lines: colors[np.isin([-1, 0, 1], sorted(available_values))] to select just the wanted colours. The [-1, 0, 1] should of course be a complete list of all available values, with a one-to-one correspondence with colors.
Note that this may not work when the values are floating point values, since the comparison will not be accurate at times.
Example code (untested):
all_values = np.array([-1, 0, 1])
colors = np.array([(216/255, 24/255, 24/255), (1, 1, 1), (143/255, 188/255, 143/255)])
positions = df['long'].astype(int) - df['short'].astype(int)
available_values = set(positions)
cm = LinearSegmentedColormap.from_list('mycols', colors[np.isin(all_values, sorted(available_values))], N=len(available_values))
ax2.pcolorfast(ax2.get_xlim(), ax2.get_ylim(), positions.values[np.newaxis], cmap=cm, alpha=0.5)
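For floating-point values, the exact np.isin test can be replaced with a tolerance-based comparison. A small sketch reusing the names from the example above:
# np.isclose broadcasts all_values against the available values and keeps any row with a match
available = np.array(sorted(available_values))
mask = np.isclose(all_values[:, None], available[None, :]).any(axis=1)
cm = LinearSegmentedColormap.from_list('mycols', colors[mask], N=len(available))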
Usually one would not create a new colormap for each plot with different values, but rather change the normalization.
Here, as I understand it, only the values [-1, 0, 1], or a subset of those, are ever in use. Hence one may use a single normalization, plt.Normalize(-1, 1), throughout.
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import numpy as np
colors = [(216/255., 24/255., 24/255.), (1., 1., 1.), (143/255., 188/255., 143/255.)]
cmap = ListedColormap(colors)
norm=plt.Normalize(-1,1)
combinations = [[-1,0,1],[-1,0],[0,1],[-1,1]]
fig, axes = plt.subplots(nrows=len(combinations), sharex=True)
for combo, ax in zip(combinations, axes):
    data = np.random.choice(combo, size=(50))
    ax.pcolorfast(np.atleast_2d(data), cmap=cmap, norm=norm, alpha=0.5)
    ax.set_ylabel(combo)
plt.show()