ColorFunction with InterpolatingFunction in RegionPlot3D - colors

I am writing to ask a question about implementing a field-dependent color in a 3D region plot in Mathematica.
Specifically, I have created the following plot, where f[x,y,z] is an interpolating function built from a three-dimensional array (I do this so I can easily produce lower-resolution plots, since the amount of data in the array is significant).
The problem I am encountering is that if I run the following instruction:
RegionPlot3D[f[x, y, z] >= 0.5 && f[x, y, z] <= 0.6, {x, 0, 1}, {y, 0, 0.416}, {z, 0, 0.666},
 ColorFunction -> Function[{x, y, z}, Hue[Rescale[f[x, y, z], {0, 1}]]]]
The color is not applied correctly (I get a region of uniform color). If I instead use some other function g inside the Hue (it can be any function, e.g. the norm of the point's position), so that
Hue[Rescale[g[x, y, z], {0, 1}]]
the color information is passed correctly. I assume that I am making a mistake with the handling of InterpolatingFunction objects. How should this problem be handled?
Any help is appreciated.

RegionPlot3D passes 4 arguments to ColorFunction:
Try this (just adding a dummy argument to Function):
RegionPlot3D[f[x, y, z] >= 0.5 && f[x, y, z] <= 0.6, {x, 0, 1}, {y, 0, 0.416}, {z, 0, 0.666},
 ColorFunction -> Function[{x, y, z, p}, Hue[Rescale[f[x, y, z], {0, 1}]]]]
or like this:
RegionPlot3D[f[x, y, z] >= 0.5 && f[x, y, z] <= 0.6, {x, 0, 1}, {y, 0, 0.416}, {z, 0, 0.666},
 ColorFunction -> (Hue[Rescale[f[#1, #2, #3], {0, 1}]] &)]


matplotlib shift pcolormesh plot to symmetrized coordinates

I have some 2D data with x and y coordinates both within [0,1], plotted using pcolormesh.
Now I want to symmetrize the plot to [-0.5, 0.5] for both x and y coordinates. In Matlab I was able to achieve this by changing x and y from e.g. [0, 0.2, 0.4, 0.6, 0.8] to [0, 0.2, 0.4, -0.4, -0.2], without rearranging the data. However, with pcolormesh I cannot get the desired result.
A minimum example is shown below, with data represented simply by x+y:
import matplotlib.pyplot as plt
import numpy as np
x,y = np.mgrid[0:1:5j,0:1:5j]
fig,(ax1,ax2,ax3) = plt.subplots(1,3,figsize=(9,3.3),constrained_layout=1)
# original plot spanning [0,1]
img1 = ax1.pcolormesh(x,y,x+y,shading='auto')
# shift x and y from [0,1] to [-0.5,0.5]
x = x*(x<0.5)+(x-1)*(x>0.5)
y = y*(y<0.5)+(y-1)*(y>0.5)
img2 = ax2.pcolormesh(x,y,x+y,shading='auto') # similar code works in Matlab
# for this specific case, the following is close to the desired result, I can just rename x and y tick labels
# to [-0.5,0.5], but in general data is not simply x+y
img3 = ax3.pcolormesh(x+y,shading='auto')
fig.colorbar(img1,ax=[ax1,ax2,ax3],orientation='horizontal')
The corresponding figure is below; any suggestion on what I am missing would be appreciated!
Let's look at what you want to achieve in a 1D example.
You have x values between 0 and 1 and a dummy function f(x) = 20*x to produce some values.
# x  = [0, .2, .4, .6, .8] -> [0, .2, .4, -.4, -.2] -> [-.4, -.2, 0, .2, .4]
# fx = [0, 4, 8, 12, 16] -> [0, 4, 8, 12, 16] -> [12, 16, 0, 4, 8]
#      ^ only flip and shift x, not fx ^
You could use np.roll() to achieve the last operation.
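In 1D this looks like the following (a minimal sketch that reproduces the numbers in the comment above):
import numpy as np

x = np.array([0, .2, .4, .6, .8])
fx = 20 * x                             # [ 0,  4,  8, 12, 16]
x_shift = np.where(x <= .5, x, x - 1)   # [ 0, .2, .4, -.4, -.2]
x_sym = np.roll(x_shift, len(x) // 2)   # [-.4, -.2, 0, .2, .4]
fx_sym = np.roll(fx, len(x) // 2)       # [12, 16, 0, 4, 8]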
I used n=14 to make the result more visible and to show that this approach works for arbitrary n (the inline comments below show the values for the original 5-point grid).
import numpy as np
import matplotlib.pyplot as plt
n = 14
x, y = np.meshgrid(np.linspace(0, 1, n, endpoint=False),
                   np.linspace(0, 1, n, endpoint=False))
z = x + y
x_sym = x*(x <= .5)+(x-1)*(x > .5)
# array([[ 0. , 0.2, 0.4, -0.4, -0.2], ...
x_sym = np.roll(x_sym, n//2, axis=(0, 1))
# array([[-0.4, -0.2, 0. , 0.2, 0.4], ...
y_sym = y*(y <= .5)+(y-1)*(y > .5)
y_sym = np.roll(y_sym, n//2, axis=(0, 1))
z_sym = np.roll(z, n//2, axis=(0, 1))
# array([[1.2, 1.4, 0.6, 0.8, 1. ],
# [1.4, 1.6, 0.8, 1. , 1.2],
# [0.6, 0.8, 0. , 0.2, 0.4],
# [0.8, 1. , 0.2, 0.4, 0.6],
# [1. , 1.2, 0.4, 0.6, 0.8]])
fig, (ax1, ax2) = plt.subplots(1, 2)
img1 = ax1.imshow(z, origin='lower', extent=(.0, 1., .0, 1.))
img2 = ax2.imshow(z_sym, origin='lower', extent=(-.5, .5, -.5, .5))

Python List Comprehension: assign to multiple variables

Is there a way to assign to multiple variables in a one-liner?
Let's say I have a list of 3D points and I want x, y and z lists.
polygon = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
# this works to get the expected result, but is not a single list comprehension
x = [x for x, y, z in polygon]
y = [y for x, y, z in polygon]
z = [z for x, y, z in polygon]
I am thinking of something like:
x, y, z = [... for x, y, z in polygon]
You can use the zip() function:
polygon = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
x, y, z = zip(*polygon)
print(x)
print(y)
print(z)
Prints:
(0, 1, 1, 0)
(0, 0, 1, 1)
(0, 0, 0, 0)
Or if you want lists instead of tuples:
x, y, z = map(list, zip(*polygon))
Unpack the list of tuples using zip():
polygon = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
x,y,z = zip(*polygon)
print(list(x))
print(list(y))
print(list(z))
OUTPUT:
[0, 1, 1, 0]
[0, 0, 1, 1]
[0, 0, 0, 0]
EDIT:
If you want the lists:
x,y,z = [list(a) for a in zip(*polygon)]
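If you specifically want a single list comprehension, as in the question, you can also iterate over the coordinate indices (a minimal sketch):
polygon = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
# one inner list per coordinate index, unpacked in a single assignment
x, y, z = [[point[i] for point in polygon] for i in range(3)]
# x == [0, 1, 1, 0], y == [0, 0, 1, 1], z == [0, 0, 0, 0]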

Is there any built-in function for making a 2D tensor from a 1D tensor using a specific calculation?

Hi, I'm a student who has just started with deep learning.
For example, I have a 1-D tensor x = [1, 2]. From this, I would like to make a 2D tensor y whose (i,j)-th element has the value (x[i] - x[j]), i.e. y[0,:] = [0, 1], y[1,:] = [-1, 0].
Is there a built-in function like this in the PyTorch library?
Thanks.
Here you need the right tensor dimensions to get the expected result, which you can get using torch.unsqueeze:
import torch

x = torch.tensor([1, 2])
# x.unsqueeze(1) has shape (2, 1); broadcasting against x gives y[i, j] = x[j] - x[i]
y = x - x.unsqueeze(1)
y
tensor([[ 0, 1],
[-1, 0]])
There are a few ways you could get this result; the cleanest I can think of is using broadcasting semantics.
x = torch.tensor([1, 2])
y = x.view(-1, 1) - x.view(1, -1)
which produces
y = tensor([[0, -1],
[1, 0]])
Note: I'll try to edit this answer and remove this note if the original question is clarified.
In your question you ask for y[i, j] = x[i] - x[j], which the above code produces.
You also say that you expect y to have values
y = tensor([[ 0, 1],
[-1, 0]])
which is actually y[i, j] = x[j] - x[i] as was posted in Dishin's answer. If you instead wanted the latter then you can use
y = x.view(1, -1) - x.view(-1, 1)
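As a quick sanity check (a minimal sketch), the unsqueeze and view formulations are equivalent:
import torch

x = torch.tensor([1, 2])
y1 = x - x.unsqueeze(1)      # y1[i, j] = x[j] - x[i], same as x.view(1, -1) - x.view(-1, 1)
y2 = x.unsqueeze(1) - x      # y2[i, j] = x[i] - x[j], same as x.view(-1, 1) - x.view(1, -1)
print(y1)  # tensor([[ 0,  1], [-1,  0]])
print(y2)  # tensor([[ 0, -1], [ 1,  0]])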

Count number of repeated elements in list considering the ones larger than them

I am trying to do some clustering analysis on a dataset. I am using a number of different approaches to estimate the number of clusters, and then I put the estimate each approach gives in a list, like so:
total_pred = [0, 0, 1, 1, 0, 1, 1]
Now I want to estimate the real number of clusters, so I let the methods above vote: in the example above, more models found 1 cluster than 0, so I take 1 as the real number of clusters.
I do this by:
import numpy as np

counts = np.bincount(np.array(total_pred))
real_nr_of_clusters = np.argmax(counts)
There is a problem with this method, however. If the above list contains something like:
[2, 0, 1, 0, 1, 0, 1, 0, 1]
I will get 0 clusters as the result, since 0 is repeated most often. However, if one model found 2 clusters, it's safe to assume it considers that at least 1 cluster is there, hence the real number would be 1.
How can I do this by modifying the above snippet?
To make the problem clear, here are a few more examples:
[1, 1, 1, 0, 0, 0, 3]
should return 1,
[0, 0, 0, 1, 1, 3, 4]
should also return 1 (since most of them agree there is AT LEAST 1 cluster).
There is a problem with your logic
Here is an implementation of the described algorithm.
l = [2, 0, 1, 0, 1, 0, 1, 0, 1]
l = sorted(l, reverse=True)
votes = {x: i for i, x in enumerate(l, start=1)}
Output
{2: 1, 1: 5, 0: 9}
Notice that since you define a vote as agreeing with anything smaller than itself, min(l) will always win, because everyone will agree that there are at least min(l) clusters. In this case min(l) == 0.
How to fix it
Mean and median
Beforehand, notice that the mean and the median are both valid, light-weight options that satisfy the desired output on your examples.
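For instance (a quick check, assuming NumPy is available):
import numpy as np

for votes in ([1, 1, 1, 0, 0, 0, 3], [0, 0, 0, 1, 1, 3, 4], [2, 0, 1, 0, 1, 0, 1, 0, 1]):
    print(int(np.round(np.mean(votes))), int(np.median(votes)))
# both the rounded mean and the median give 1 for each of these lists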
Bias
However, taking the mean might not be what you want if, say, you encounter votes with high variance such as [0, 0, 7, 8, 10], where it is unlikely that the answer is 5.
A more general way to fix that is to include a voter's bias toward votes close to their own. Surely a 2-voter will agree more with a 1 than with a 0.
You do that by implementing a metric (note: this is not a metric in the mathematical sense) that determines how much an instance that voted for x is willing to agree to a vote for y on a scale of 0 to 1.
Note that this approach will allow voters to agree on a number that is not on the list.
We need to update our code to account for applying that pseudometric.
def d(x, y):
    return x >= y
l = [2, 0, 1, 0, 1, 0, 1, 0, 1]
votes = {y: sum(d(x, y) for x in l) for y in range(min(l), max(l) + 1)}
Output
{0: 9, 1: 5, 2: 1}
The above metric is a sanity check. It is the one you provided in your question, and it indeed ends up determining that 0 wins.
Metric choices
You will have to toy a bit with your metrics, but here are a few which may make sense.
Inverse of the linear distance
def d(x, y):
    return 1 / (1 + abs(x - y))
l = [2, 0, 1, 0, 1, 0, 1, 0, 1]
votes = {y: sum(d(x, y) for x in l) for y in range(min(l), max(l) + 1)}
# {0: 6.33, 1: 6.5, 2: 4.33}
Inverse of the nth power of the distance
This one is a generalization of the previous. As n grows, voters tend to agree less and less with distant vote casts.
def d(x, y, n=1):
    return 1 / (1 + abs(x - y)) ** n
l = [2, 0, 1, 0, 1, 0, 1, 0, 1]
votes = {y: sum(d(x, y, n=2) for x in l) for y in range(min(l), max(l) + 1)}
# {0: 5.11, 1: 5.25, 2: 2.44}
Upper-bound distance
Similar to the previous metric, this one is close to what you described at first in the sense that a voter will never agree to a vote higher than theirs.
def d(x, y, n=1):
    return 1 / (1 + abs(x - y)) ** n if x >= y else 0
l = [2, 0, 1, 0, 1, 0, 1, 0, 1]
votes = {y: sum(d(x, y, n=2) for x in l) for y in range(min(l), max(l) + 1)}
# {0: 5.11, 1: 4.25, 2: 1.0}
Normal distribution
Another option that would make sense is a normal distribution or a skewed normal distribution.
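For example, a minimal sketch of a Gaussian variant (sigma is a tuning parameter chosen here for illustration):
import math

def d(x, y, sigma=1.0):
    # a voter for x agrees with y according to a Gaussian centred on their own vote
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

l = [2, 0, 1, 0, 1, 0, 1, 0, 1]
votes = {y: sum(d(x, y) for x in l) for y in range(min(l), max(l) + 1)}
# {0: 6.56, 1: 7.03, 2: 3.97} -> 1 wins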
While the other answer provides a comprehensive review of possible metrics and methods, it seems what you are seeking is simply the number of clusters closest to the mean!
So something as simple as:
cluster_num = int(np.round(np.mean(total_pred)))
Which returns 1 for all your cases as you expect.

Revolve around a horizontal line in Mathematica?

I am very new to Mathematica. I have version 11, if that makes a difference.
I am trying to take the area formed by the following lines and revolve it to form a 3D solid.
y = e^-x
Here is my code, in two sections
f[x_] := E^-x
g[x_] := 1
Plot[{f[x], g[x]}, {x, 0, 2}, Filling -> {1 -> {2}},
PlotLegends -> {"f[x]", "g[x]", "h[y]"}]
Next:
RevolutionPlot3D[(1 - f[x]) , {x, 0, 2}, RevolutionAxis -> "X"]
Here are the 2D and 3D representations:
The 2D one is correct, but not the 3D. I want to rotate the area about y=2 (a horizontal line) so as to form a shape with a hole in the center. I don't know how to set the axis of rotation to anything other than a coordinate axis. I just want y=2.
How do you accomplish this?
RevolutionPlot3D isn't the right tool for what you want, for two reasons. First, you want to rotate a 2D region, not a line. Second, you want to rotate around a line that isn't one of the axes. RegionPlot3D is the built-in tool for the job. You can easily set up your region as a Boolean region; just think about the conditions that the squared radius z^2 + y^2 has to satisfy:
RegionPlot3D[
1 < z^2 + y^2 < (2 - Exp[-x])^2, {x, 0, 2}, {y, -3, 3}, {z, -3, 3}]
I showed the result from 2 different angles to point out the shortcomings of RegionPlot3D. You could improve this result by using a high value for the PlotPoints option, but it isn't great. That's why you should use Simon Woods's function contourRegionPlot3D, defined in this post:
contourRegionPlot3D[
1 < z^2 + y^2 < (2 - Exp[-x])^2, {x, 0, 2}, {y, -3, 3}, {z, -3, 3}]
