Why does contourf (matplotlib) switch x and y coordinates? - python-3.x

I am trying to get contourf to plot my stuff right, but it seems to switch the x and y coordinates. In the example below, I show this by evaluating a 2d Gaussian function that has different widths in x and y directions. With the values given, the width in y direction should be larger. Here is the script:
from numpy import *
from matplotlib.pyplot import *
xMax = 50
xNum = 100
w0x = 10
w0y = 15
dx = xMax/xNum
xGrid = linspace(-xMax/2+dx/2, xMax/2-dx/2, xNum, endpoint=True)
yGrid = xGrid
Int = zeros((xNum, xNum))
for idX in range(xNum):
    for idY in range(xNum):
        Int[idX, idY] = exp(-((xGrid[idX]/w0x)**2 + (yGrid[idY]/(w0y))**2))
fig = figure(6)
clf()
ax = subplot(2,1,1)
X, Y = meshgrid(xGrid, yGrid)
contour(X, Y, Int, colors='k')
plot(array([-xMax, xMax])/2, array([0, 0]), '-b')
plot(array([0, 0]), array([-xMax, xMax])/2, '-r')
ax.set_aspect('equal')
xlabel("x")
ylabel("y")
subplot(2,1,2)
plot(xGrid, Int[:, int(xNum/2)], '-b', label='I(x, y=max/2)')
plot(xGrid, Int[int(xNum/2), :], '-r', label='I(x=max/2, y)')
ax.set_aspect('equal')
legend()
xlabel(r"x or y")
ylabel(r"I(x or y)")
The resulting figure is this:
On top is the contour plot, which has the larger width in the x direction (not y). Below, two slices are shown: one along the x direction (at constant y=0, blue), the other along the y direction (at constant x=0, red). Here everything looks fine: the y direction is broader than the x direction. So why would I have to transpose the array to have it plotted the way I want? This seems unintuitive to me and not in agreement with the documentation.

It helps if you think of a 2D array's shape not as (x, y) but as (rows, columns), because that is how most math routines interpret them - including matplotlib's 2D plotting functions. Therefore, the first dimension is vertical (which you call y) and the second dimension is horizontal (which you call x).
Note that this convention is very prominent, even in numpy. The function np.vstack, which concatenates arrays vertically, works along the first dimension, while np.hstack, which concatenates horizontally, works along the second dimension.
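A quick check of that convention in plain numpy:
import numpy as np

a = np.ones((2, 3))
print(np.vstack([a, a]).shape)  # (4, 3): grows along the first (row) axis
print(np.hstack([a, a]).shape)  # (2, 6): grows along the second (column) axis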
To illustrate the point:
import numpy as np
import matplotlib.pyplot as plt
a = np.array([[0, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [1, 1, 1, 1, 1]])
a[:, 2] = 2 # set column
print(a)
plt.imshow(a)
plt.contour(a, colors='k')
This prints
[[0 0 2 0 0]
 [0 1 2 1 0]
 [1 1 2 1 1]]
and consistently plots
According to your convention that an array is (x, y), the command a[:, 2] = 2 should have assigned to the third row, but numpy and matplotlib both agree that it was the column :)
You can of course use your own convention how to interpret the dimensions of your arrays, but in the long run it will be more consistent to treat them as (y, x).
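Applied to the original contour example, a minimal sketch of the fix is to let meshgrid produce the (y, x) layout directly instead of filling the array by (x, y) index (equivalently, one could pass Int.T to contour):
import numpy as np
import matplotlib.pyplot as plt

w0x, w0y = 10, 15
xGrid = np.linspace(-24.75, 24.75, 100)
X, Y = np.meshgrid(xGrid, xGrid)          # X varies along columns, Y along rows
Int = np.exp(-((X/w0x)**2 + (Y/w0y)**2))  # already in (row=y, column=x) order
plt.contour(X, Y, Int, colors='k')        # broader along y, as intended
plt.gca().set_aspect('equal')
plt.show()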

Related

Pyplot: subsequent plots with a gradient of colours [duplicate]

I am plotting multiple lines on a single plot and I want them to run through the spectrum of a colormap, not just the same 6 or 7 colors. The code is akin to this:
for i in range(20):
    for k in range(100):
        y[k] = i*x[i]
    plt.plot(x,y)
plt.show()
Both with colormap "jet" and another that I imported from seaborn, I get the same 7 colors repeated in the same order. I would like to be able to plot up to ~60 different lines, all with different colors.
The Matplotlib colormaps accept an argument (0..1, scalar or array) which you use to get colors from a colormap. For example:
col = pl.cm.jet([0.25,0.75])
Gives you an array with (two) RGBA colors:
array([[ 0.        ,  0.50392157,  1.        ,  1.        ],
       [ 1.        ,  0.58169935,  0.        ,  1.        ]])
You can use that to create N different colors:
import numpy as np
import matplotlib.pylab as pl
x = np.linspace(0, 2*np.pi, 64)
y = np.cos(x)
pl.figure()
pl.plot(x,y)
n = 20
colors = pl.cm.jet(np.linspace(0,1,n))
for i in range(n):
    pl.plot(x, i*y, color=colors[i])
Bart's solution is nice and simple but has two shortcomings.
1. plt.colorbar() won't work in a nice way because the line plots aren't mappable (compared to, e.g., an image).
2. It can be slow for large numbers of lines due to the for loop (though this is maybe not a problem for most applications?).
These issues can be addressed by using LineCollection. However, this isn't too user-friendly in my (humble) opinion. There is an open suggestion on GitHub for adding a multicolor line plot function, similar to the plt.scatter(...) function.
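As an aside, the colorbar shortcoming alone can also be patched in the loop-based approach by building a ScalarMappable by hand; a minimal sketch, assuming the line index is the quantity you want mapped:
import numpy as np
import matplotlib.pyplot as plt

n = 20
x = np.linspace(0, 2*np.pi, 64)
cmap = plt.cm.jet
norm = plt.Normalize(vmin=0, vmax=n - 1)

fig, ax = plt.subplots()
for i in range(n):
    ax.plot(x, i*np.cos(x), color=cmap(norm(i)))

# an empty ScalarMappable carries the cmap/norm so colorbar has something to map
sm = plt.cm.ScalarMappable(cmap=cmap, norm=norm)
sm.set_array([])
fig.colorbar(sm, ax=ax, label='line index')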
Here is a working LineCollection example I was able to hack together:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection
def multiline(xs, ys, c, ax=None, **kwargs):
    """Plot lines with different colorings

    Parameters
    ----------
    xs : iterable container of x coordinates
    ys : iterable container of y coordinates
    c : iterable container of numbers mapped to colormap
    ax (optional): Axes to plot on.
    kwargs (optional): passed to LineCollection

    Notes:
        len(xs) == len(ys) == len(c) is the number of line segments
        len(xs[i]) == len(ys[i]) is the number of points for each line
        (indexed by i)

    Returns
    -------
    lc : LineCollection instance.
    """
    # find axes
    ax = plt.gca() if ax is None else ax

    # create LineCollection
    segments = [np.column_stack([x, y]) for x, y in zip(xs, ys)]
    lc = LineCollection(segments, **kwargs)

    # set coloring of line segments
    # Note: I get an error if I pass c as a list here... not sure why.
    lc.set_array(np.asarray(c))

    # add lines to axes and rescale
    # Note: adding a collection doesn't autoscale xlim/ylim
    ax.add_collection(lc)
    ax.autoscale()
    return lc
Here is a very simple example:
xs = [[0, 1],
      [0, 1, 2]]
ys = [[0, 0],
      [1, 2, 1]]
c = [0, 1]
lc = multiline(xs, ys, c, cmap='bwr', lw=2)
Produces:
And something a little more sophisticated:
n_lines = 30
x = np.arange(100)
yint = np.arange(0, n_lines*10, 10)
ys = np.array([x + b for b in yint])
xs = np.array([x for i in range(n_lines)]) # could also use np.tile
colors = np.arange(n_lines)
fig, ax = plt.subplots()
lc = multiline(xs, ys, yint, cmap='bwr', lw=2)
axcb = fig.colorbar(lc)
axcb.set_label('Y-intercept')
ax.set_title('Line Collection with mapped colors')
Produces:
Hope this helps!
An alternative to Bart's answer, in which you do not specify the color in each call to plt.plot, is to define a new color cycle with set_prop_cycle. His example can be translated into the following code (I've also changed the import of matplotlib to the recommended style):
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 2*np.pi, 64)
y = np.cos(x)
n = 20
ax = plt.axes()
ax.set_prop_cycle('color',[plt.cm.jet(i) for i in np.linspace(0, 1, n)])
for i in range(n):
    plt.plot(x, i*y)
If you are using continuous color palettes like brg, hsv, jet or the default one, then you can do it like this:
color = plt.cm.hsv(r) # r is 0 to 1 inclusive
Now you can pass this color value to any API you want like this:
line = matplotlib.lines.Line2D(xdata, ydata, color=color)
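A minimal sketch of this approach (the cosine curves and the r values are just placeholder data):
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.lines

x = np.linspace(0, 2*np.pi, 64)
fig, ax = plt.subplots()
for r in np.linspace(0, 1, 10):    # r is 0 to 1 inclusive
    color = plt.cm.hsv(r)          # sample the continuous palette
    line = matplotlib.lines.Line2D(x, np.cos(x) + 2*r, color=color)
    ax.add_line(line)              # a bare Line2D must be added by hand
ax.autoscale()
plt.show()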
This approach seems to me the most concise and user-friendly, and it does not require a loop. It does not rely on user-made functions either.
import numpy as np
import matplotlib.pyplot as plt
# make 5 lines
n_lines = 5
x = np.arange(0, 2).reshape(-1, 1)
A = np.linspace(0, 2, n_lines).reshape(1, -1)
Y = x @ A
# create colormap
cm = plt.cm.bwr(np.linspace(0, 1, n_lines))
# plot
ax = plt.subplot(111)
ax.set_prop_cycle('color', list(cm))
ax.plot(x, Y)
plt.show()
Resulting figure here

Quiver plot with optical flow?

Recently I have been working on cloud motion tracking using images, and many video implementations show a quiver plot that moves according to the tracked object.
The quiver documentation principally takes four arguments ([X, Y], U, V), where X and Y are the starting points and U and V the directions. On the other hand, optical flow based on this example returns p1 (the displacements) with a shape (m, n, l) for an image with shape (200, 200). My confusion is in how to order the parameters, because goodFeaturesToTrack also returns the same as p1.
How can I join both components to plot a quiver of the cloud motion?
I found a pretty good solution. I explain my full example here using the Hamburg taxi sequence:
Download the taxi sequence.
$ curl -O ftp://ftp.ira.uka.de/pub/vid-text/image_sequences/taxi/taxi.zip
$ unzip -q taxi.zip
Get all images and pick two random frames
from pathlib import Path
import numpy as np
import cv2 as cv
from PIL import Image
import matplotlib.pyplot as plt
taxis_fnames = list(Path('taxi').iterdir())
rand_idx = np.random.randint(len(taxis_fnames) - 4)  # rand_idx was missing; pick a random frame index
taxi1 = Image.open(taxis_fnames[rand_idx])
taxi2 = Image.open(taxis_fnames[rand_idx + 4])
Compute the optical flow
flow = cv.calcOpticalFlowFarneback(np.array(taxi1),
                                   np.array(taxi2),
                                   None, 0.5, 3, 15, 3, 5, 1.2, 0)
Plot the quiver
step = 3
plt.quiver(np.arange(0, flow.shape[1], step), np.arange(flow.shape[0], -1, -step),
           flow[::step, ::step, 0], flow[::step, ::step, 1])
The step is there to downsample the number of optical flow vectors picked. The x positions go from 0 to the image width, while the y positions are reversed (otherwise the optical flow will be upside down), from the image height down to 0. On some occasions, you will have to change the step so that the height and width are divisible by it.
The resulting image:
Here is a general method for plotting a quiver field easily and accurately.
def plot_quiver(ax, flow, spacing, margin=0, **kwargs):
    """Plots less dense quiver field.

    Args:
        ax: Matplotlib axis
        flow: motion vectors
        spacing: space (px) between each arrow in grid
        margin: width (px) of enclosing region without arrows
        kwargs: quiver kwargs (default: angles="xy", scale_units="xy")
    """
    h, w, *_ = flow.shape

    nx = int((w - 2 * margin) / spacing)
    ny = int((h - 2 * margin) / spacing)

    x = np.linspace(margin, w - margin - 1, nx, dtype=np.int64)
    y = np.linspace(margin, h - margin - 1, ny, dtype=np.int64)

    flow = flow[np.ix_(y, x)]
    u = flow[:, :, 0]
    v = flow[:, :, 1]

    kwargs = {**dict(angles="xy", scale_units="xy"), **kwargs}
    ax.quiver(x, y, u, v, **kwargs)

    ax.set_ylim(sorted(ax.get_ylim(), reverse=True))
    ax.set_aspect("equal")
Example usage:
flow = cv2.calcOpticalFlowFarneback(
    frame_1, frame_2, None, 0.5, 3, 15, 3, 5, 1.2, 0
)
fig, ax = plt.subplots()
plot_quiver(ax, flow, spacing=10, scale=1, color="#ff44ff")

Matplotlib - Scatter Plot, How to fill in the space between each individual point?

I am plotting the following data:
fig, ax = plt.subplots()
im = ax.scatter(std_sorted[:, [1]], std_sorted[:, [2]], s=5, c=std_sorted[:, [0]])
With the following result:
My question is: can I fill the space between each point in the plot by extrapolating and then coloring that extrapolated space accordingly, so I get a uniform plot without any points?
So basically I'm looking for this result (This is simply me "squeezing" the above picture to show the desired result and not dealing with the space between the points):
The simplest thing to do in this case is to use a short vertical line as a marker, and set the markersize large enough so that there is no white space left.
Another option is to use tricontourf to create a filled image of x, y and z.
Note that neither a scatter plot nor tricontourf need the points to be sorted in any order.
If you do have your points sorted into an ordered grid, plt.imshow should give the best result.
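A minimal imshow sketch, assuming the points really do form a regular nx-by-ny grid (the dummy function here mirrors the example below):
import numpy as np
import matplotlib.pyplot as plt

nx, ny = 80, 40
x = np.linspace(0, 0.20, nx)
y = np.linspace(-0.01, 0.01, ny)
X, Y = np.meshgrid(x, y)
Z = np.cos(3*(X - 0.04 - 100*Y**2))**10

plt.imshow(Z, origin='lower', aspect='auto',
           extent=[x[0], x[-1], y[0], y[-1]])
plt.colorbar()
plt.show()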
Here is some code to show how it could look. First some dummy data slightly similar to the example is generated. As the x, y values are random, they don't fill the complete space, which might leave some blank spots in the scatter plot. The spots are nicely interpolated for tricontourf, except possibly in the corners.
import numpy as np
import matplotlib.pyplot as plt
N = 50000
xmin = 0
xmax = 0.20
ymin = -0.01
ymax = 0.01
std_sorted = np.zeros((N, 3))
std_sorted[:,1] = np.random.uniform(xmin, xmax, N)
std_sorted[:,2] = np.random.choice(np.linspace(ymin, ymax, 80), N)
std_sorted[:,0] = np.cos(3*(std_sorted[:,1] - 0.04 - 100*std_sorted[:,2]**2))**10
fig, ax = plt.subplots(ncols=2)
# im = ax[0].scatter(std_sorted[:, 1], std_sorted[:, 2], s=20, c=std_sorted[:, 0], marker='|')
im = ax[0].scatter(std_sorted[:, 1], std_sorted[:, 2], s=5, c=std_sorted[:, 0], marker='.')
ax[0].set_xlim(xmin, xmax)
ax[0].set_ylim(ymin, ymax)
ax[0].set_title("scatter plot")
ax[1].tricontourf(std_sorted[:, 1], std_sorted[:, 2], std_sorted[:, 0], 256)
ax[1].set_title("tricontourf")
plt.tight_layout()
plt.show()

Rotating 2D grayscale image with transformation matrix

I am new to image processing so I am really confused regarding the coordinate system with images. I have a sample image and I am trying to rotate it 45° clockwise. My transformation matrix is T = [[cos45, sin45], [-sin45, cos45]].
Here is the code:
import numpy as np
from matplotlib import pyplot as plt
from skimage import io
image = io.imread('sample_image')
img_transformed = np.zeros((image.shape), dtype=np.uint8)
trans_matrix = np.array([[np.cos(45), np.sin(45)], [-np.sin(45), np.cos(45)]])
for i, row in enumerate(image):
    for j, col in enumerate(row):
        pixel_data = image[i, j]  # get the value of pixel at corresponding location
        input_coord = np.array([i, j])  # this will be my [x, y] vector
        result = trans_matrix @ input_coord
        i_out, j_out = result  # store the resulting coordinate location

        # make sure the i and j values remain within the index range
        if (0 < int(i_out) < image.shape[0]) and (0 < int(j_out) < image.shape[1]):
            img_transformed[int(i_out)][int(j_out)] = pixel_data
plt.imshow(img_transformed, cmap='gray')
The image comes out distorted and doesn't seem right. I know that in pixel coordinates the origin is at the top left corner (row, column). Is the rotation happening with respect to the origin at the top left corner? Is there a way to shift the origin to the center, or to any other given point?
Thank you all!
Yes, as you suspect, the rotation is happening with respect to the top left corner, which has coordinates (0, 0). (Also: the NumPy trigonometric functions use radians rather than degrees, so you need to convert your angle.) To compute a rotation with respect to the center, you do a little hack: you compute the transformation for moving the image so that it is centered on (0, 0), then you rotate it, then you move the result back. You need to compose these transformations into a single matrix, because if you apply them to the image one after the other, you'll lose everything that lands in negative coordinates.
It's much, much easier to do this using Homogeneous coordinates, which add an extra "dummy" dimension to your image. Here's what your code would look like in homogeneous coordinates:
import numpy as np
from matplotlib import pyplot as plt
from skimage import io
image = io.imread('sample_image')
img_transformed = np.zeros((image.shape), dtype=np.uint8)
c, s = np.cos(np.radians(45)), np.sin(np.radians(45))
rot_matrix = np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])
x, y = np.array(image.shape) // 2
# move center to (0, 0)
translate1 = np.array([[1, 0, -x], [0, 1, -y], [0, 0, 1]])
# move center back to (x, y)
translate2 = np.array([[1, 0, x], [0, 1, y], [0, 0, 1]])
# compose all three transformations together
trans_matrix = translate2 @ rot_matrix @ translate1
for i, row in enumerate(image):
    for j, col in enumerate(row):
        pixel_data = image[i, j]  # get the value of pixel at corresponding location
        input_coord = np.array([i, j, 1])  # homogeneous [x, y, 1] coordinate
        result = trans_matrix @ input_coord
        i_out, j_out, _ = result  # store the resulting coordinate location

        # make sure the i and j values remain within the index range
        if (0 < int(i_out) < image.shape[0]) and (0 < int(j_out) < image.shape[1]):
            img_transformed[int(i_out)][int(j_out)] = pixel_data
plt.imshow(img_transformed, cmap='gray')
The above should work ok, but you will probably get some black spots due to aliasing. What can happen is that no coordinates i, j from the input land exactly on an output pixel, so that pixel never gets updated. Instead, what you need to do is iterate over the pixels of the output image, then use the inverse transform to find which pixel in the input image maps closest to that output pixel. Something like:
inverse_tform = np.linalg.inv(trans_matrix)
for i, j in np.ndindex(img_transformed.shape):
    i_orig, j_orig, _ = np.round(inverse_tform @ [i, j, 1]).astype(int)
    if i_orig in range(image.shape[0]) and j_orig in range(image.shape[1]):
        img_transformed[i, j] = image[i_orig, j_orig]
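For reference, this inverse-mapping strategy is exactly what library routines implement. A sketch using scipy.ndimage, assuming SciPy is available:
from scipy import ndimage as ndi

# affine_transform maps output coordinates back to input coordinates,
# i.e. it expects the inverse transform, just like the loop above, and it
# interpolates instead of rounding (order=1 means bilinear)
rotated = ndi.affine_transform(image,
                               inverse_tform[:2, :2],
                               offset=inverse_tform[:2, 2],
                               order=1)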
Hope this helps!

Calculate the volume of 3d plot

The data is from a measurement. Here is a picture of the plotted data:
I tried using trapz twice, but I get an error: "ValueError: operands could not be broadcast together with shapes (1,255) (256,531)"
The x has 256 points and y has 532 points, and Z is a 2d array of shape 256 by 532. The code is below:
import numpy as np
img=np.loadtxt('focus_x.txt')
m=0
m=np.max(img)
Z=img/m
X=np.loadtxt("pixelx.txt",float)
Y=np.loadtxt("pixely.txt",float)
[X, Y] = np.meshgrid(X, Y)
volume=np.trapz(X,np.trapz(Y,Z))
The docs state that trapz should be used like this
intermediate = np.trapz(Z, x)
result = np.trapz(intermediate, y)
trapz reduces the dimensionality of its operand (by default along the last axis), optionally using a 1D array of abscissae to determine the subintervals of integration; it does not use a mesh grid for its operation.
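A quick illustration of that reduction, using the default unit spacing:
import numpy as np

z = np.array([[1., 2., 3.],
              [4., 5., 6.]])
print(np.trapz(z))          # last axis reduced -> [ 4. 10.]
print(np.trapz(z, axis=0))  # first axis reduced -> [2.5 3.5 4.5]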
A complete example.
First we compute, using sympy, the integral of a simple bilinear function over a rectangular domain (0, 5) × (0, 7)
In [1]: import sympy as sp, numpy as np
In [2]: x, y = sp.symbols('x y')
In [3]: f = 1 + 2*x + y + x*y
In [4]: f.integrate((x, 0, 5)).integrate((y, 0, 7))
Out[4]: 2555/4
Now we compute the trapezoidal approximation to the integral (as it happens, the approximation is exact for a bilinear function). First we need coordinate arrays:
In [5]: x, y = np.linspace(0, 5, 11), np.linspace(0, 7, 22)
(note that the sampling is different in the two directions and different from the default spacing used by trapz). Next we need a mesh grid to evaluate the integrand, and the integrand values themselves:
In [6]: X, Y = np.meshgrid(x, y)
In [7]: z = 1 + 2*X + Y + X*Y
and eventually we compute the integral, scaled by 4 so that the result can be compared directly with the exact value 2555/4:
In [8]: 4*np.trapz(np.trapz(z, x), y)
Out[8]: 2555.0
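Back to the question's code, here is a sketch of the same pattern applied to its arrays (assuming Z has shape (256, 532), with rows indexed by x and columns by y as the question describes; note that no meshgrid is needed):
import numpy as np

img = np.loadtxt('focus_x.txt')
Z = img / np.max(img)                 # normalized measurement, shape (256, 532)
x = np.loadtxt("pixelx.txt", float)   # 256 points
y = np.loadtxt("pixely.txt", float)   # 532 points

# integrate along the last axis (y) first, then along x
volume = np.trapz(np.trapz(Z, y), x)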
