Colormapping the Mandelbrot set by iterations in Python

I am using np.ogrid to create the x and y grid from which I am drawing my values. I have tried a number of different ways to color the plot according to the number of iterations required for |z| >= 2, but nothing seems to work. Even when iterating 10,000 times just to be sure that I have a clear picture when zooming, I cannot figure out how to color the set according to iteration ranges. Here is the code I am using; some of the structure was borrowed from a tutorial. Any suggestions?
import numpy as np
import matplotlib.pyplot as plt

#I found this function and searched in numpy for best usage for this type of density plot
x_val, y_val = np.ogrid[-2:2:2000j, -2:2:2000j]
#Creating the values to work with during the iterations
c = x_val + 1j*y_val
z = 0
iter_num = int(input("Please enter the number of iterations:"))
for n in range(iter_num):
    z = z**2 + c
    if n % 10 == 0:
        print("Iterations left: ", iter_num - n)
#Creates the mask to filter out values of |z| > 2
z_mask = abs(z) < 2
proper_z_mask = z_mask - 255  #switches the current black/white palette
#Creating the figure and sizing for optimal viewing on a small laptop screen
plt.figure(1, figsize=(8, 8))
plt.imshow(z_mask.T, extent=[-2, 2, -2, 2])
plt.gray()
plt.show()
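For what it's worth, here is a minimal sketch of the standard escape-time coloring (this is not the asker's code: the grid is coarser, the iteration count is fixed at 100, and the colormap choice is arbitrary). The idea is to record, for every point, the first iteration at which |z| exceeds 2 and hand that integer array to imshow with a colormap:

import numpy as np
import matplotlib.pyplot as plt

x_val, y_val = np.ogrid[-2:2:1000j, -2:2:1000j]
c = x_val + 1j * y_val
z = np.zeros_like(c)
max_iter = 100
escape_iter = np.full(c.shape, max_iter, dtype=int)  # points that never escape keep max_iter

for n in range(max_iter):
    still_bounded = escape_iter == max_iter              # only update points that have not escaped yet
    z[still_bounded] = z[still_bounded] ** 2 + c[still_bounded]
    just_escaped = still_bounded & (np.abs(z) >= 2)
    escape_iter[just_escaped] = n                         # record the escape iteration

plt.figure(figsize=(8, 8))
plt.imshow(escape_iter.T, extent=[-2, 2, -2, 2], cmap="viridis")  # any colormap works here
plt.colorbar(label="iterations to escape")
plt.show()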

Related

Distance between 2 user defined georeferenced grids in km

I have two variables, 'Root zone' (RZS) and 'Tree cover' (TC), both geolocated (NetCDF): essentially grids in which every cell holds a single value. TC values range from 0 to 100, and each cell is 0.25 degrees across (which may help in working out the distances).
My problem is: I want to calculate, for every TC value in the ranges 70-100 and 30-70 (i.e. every cell with TC greater than 30, at each lat and lon), the distance to the nearest point where TC is in the range 0-30 (below 30).
What I want to do is create a 2-dimensional scatter plot, with the X-axis showing the distance in km of the 70-100 TC cells (and the 30-70 TC cells) from the 0-30 cells, and the Y-axis showing the RZS of those 70-100 TC cells (and 30-70 TC cells).
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
from scipy.spatial.distance import cdist

#I read the files using xarray
deficit_annual = xr.open_dataset('Rootzone_CHIRPS_era5_2000-2015_annual_SA_masked.nc')
tc = xr.open_dataset('Treecover_MODIS_2000-2015_annual_SA_masked.nc')

fig, ax = plt.subplots(figsize=(8, 8))

# year I am interested in
year = 2000
i = year - 2000

# Select the indices of the low- and high-valued points.
# This will result in warnings here because of NaNs; the NaNs are effectively
# filtered out of the indices, since they compare as False in all the
# comparisons and are therefore not indexed by 'low' and 'high'.
low = (tc[i,:,:] <= 30)                          # Savanna
moderate = (tc[i,:,:] > 30) & (tc[i,:,:] < 70)   # Transitional forest
high = (tc[i,:,:] >= 70)                         # Forest

# Get the coordinates for the low-, moderate- and high-valued points,
# combine and transpose them to be in the correct format.
y, x = np.where(low)
low_coords = np.array([x, y]).T
y, x = np.where(high)
high_coords = np.array([x, y]).T
y, x = np.where(moderate)
moderate_coords = np.array([x, y]).T

# We now calculate the distances between *all* low-valued points and *all*
# moderate-/high-valued points. Both the computation and the memory cost of
# the output scale as O(N*M), so be wary when using it with large inputs.
distances = cdist(low_coords, moderate_coords, 'euclidean')

# Now find the minimum distance along the axis of the moderate-valued coords,
# which here is the second axis. Since we also want to find the values
# corresponding to those minimum distances, we use `argmin` instead of a
# plain `min`.
indices = distances.argmin(axis=1)
mindistances = distances[np.arange(distances.shape[0]), indices]
minrzs = np.array(deficit_annual[i,:,:]).flatten()[indices]
plt.scatter(mindistances*25, minrzs, s=60, alpha=0.5, color='goldenrod', label='Transitional Forest')

# Same procedure for the high-valued (forest) points.
distances = cdist(low_coords, high_coords, 'euclidean')
indices = distances.argmin(axis=1)
mindistances = distances[np.arange(distances.shape[0]), indices]
minrzs = np.array(deficit_annual[i,:,:]).flatten()[indices]
plt.scatter(mindistances*25, minrzs, s=60, alpha=1, color='green', label='Forest')

plt.xlabel('Distance from Savanna (km)', fontsize='14')
plt.xticks(fontsize='14')
plt.yticks(fontsize='14')
plt.ylabel('Rootzone storage capacity (mm/year)', fontsize='14')
plt.legend(fontsize='14')
#plt.ylim((-10, 1100))
#plt.xlim((0, 30))
What I want to know is whether the code has an error: it works as it is now, but it stops working when I increase the threshold in 'high = (tc[i,:,:] >= 70)' to 80 for the year 2000. This makes me wonder whether the code is correct.
Secondly, is it possible to define a 20 km buffer region for 'low = (tc[i,:,:] <= 30)'? What I mean is that 'low' should only be defined where a cluster of tree-cover values is below 30, not where an individual pixel is (see the sketch below).
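One possible direction, sketched below under loud assumptions (low_mask is a plain 2-D boolean array standing in for 'low'; the 3x3 structuring element and the minimum cluster size of 4 cells are illustrative, not derived from the question), is to post-process the mask with scipy.ndimage so that isolated pixels are dropped and only clusters count:

import numpy as np
from scipy import ndimage

# low_mask: 2-D boolean array, True where TC <= 30 (random placeholder here)
low_mask = np.random.rand(100, 100) < 0.3

# Erode the mask so a cell only stays True if its whole 3x3 neighbourhood is True,
# i.e. isolated low pixels are dropped and only clusters survive.
# With ~0.25 deg (~25 km) cells, eroding by one cell is only a rough stand-in
# for a 20 km buffer; an exact buffer would need a distance transform in
# projected coordinates.
structure = np.ones((3, 3), dtype=bool)
clustered_low = ndimage.binary_erosion(low_mask, structure=structure)

# Alternatively, keep only connected components above a minimum size:
labels, n = ndimage.label(low_mask)
sizes = ndimage.sum(low_mask, labels, range(1, n + 1))
big_enough = np.isin(labels, np.flatnonzero(sizes >= 4) + 1)   # clusters of at least 4 cells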
Some netCDF files are attached in the link below:
https://www.dropbox.com/sh/unm96q7sfto8y53/AAA7e12bs07XtpMiVFdML_PIa?dl=0
The graph I want is something like this (derived from the code above).
Thank you for your help.

Creating a symmetrical grid of random size squares in Python3/Tkinter

I have a question about what would be a viable approach to laying out random-sized squares on a symmetrical, invisible grid on a tkinter canvas. I'm going to explain it quite thoroughly, as it's a somewhat unusual problem.
So far I've tried to solve it mostly mathematically, but it has turned out to be quite a complex problem, and it seems likely that there is a better approach than the one I've tried.
In its most basic form the code looks like this:
while x_len > canvas_width:
    xpos = x_len + margin
    squares[i].place(x=xpos, y=ypos)
    x_len += square_size + space
    i += 1
x_len is the total width of all the squares on a given row; it resets on exiting the while loop (i.e. when x_len exceeds the window width), along with xpos (the X position), and the Y position is advanced to create a new row.
When placing same-size squares it looks like this:
So far so good.
However when the squares are of random-size it looks like this (at best):
The core problem, beyond the layout being quite unpredictable, is that the squares aren't centered on the "invisible grid", because there is none.
So to solve this I've tried an approach where I use a fixed distance plus a relative distance based on each square. This yields satisfactory results for the Y-axis on the first row, but not for the X-axis, nor for the following rows on Y.
See example (where first row is centered on Y, but following rows and X is not):
So with this method I'm using a per-square adjustment on both the Y- and X-axis, based on values that I fetch from a list containing the widths of all the generated squares.
In its entirety it looks like this (though it's a work in progress, so it's not very well optimized):
square_widths = [60, 75, 75, 45...]
space = square_size*0.5
margin = (square_size+space)/2
xmax = frame_width - margin - square_size
xmin = -1 + margin

def iterate(ypos, xpos, x_len):
    y = ypos
    x = xpos
    z = x_len
    i = 0
    m_ypos = 0
    extra_x = 0
    while len(squares) <= 100:
        n = -1
        # row_ypos alters y for every new row
        row_ypos += 200 - square_widths[n]/2
        # this if-statement is not relevant to the question
        if x < 0:
            n = 0
            xpos = x
            extra_x = x
            x_len = z
        while x_len < xmax:
            ypos = row_ypos
            extra_x += 100
            ypos = row_ypos + (200 - square_widths[n])/2
            xpos = extra_x + (200 - square_widths[n])/2
            squares[i].place(x=xpos, y=ypos)
            x_len = extra_x + 200
            i += 1
            n += 1
What's most relevant here is row_ypos, which alters Y for each row, and ypos, which alters Y for each square (I don't have a working calculation for X yet). What I want to achieve is the result I get for the Y-axis on the first row, but on all rows and columns (i.e. in both X and Y): a symmetrical grid of squares of different sizes.
So my questions are:
Is this really best practice to solve this?
If so - Do you have any tips on decent calculations that would do the trick?
If not - How would you approach this?
A sidenote is that it has to be done "manually" and I can not use built-in functions of tkinter to solve it.
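For reference, a minimal sketch of the cell-centering arithmetic being described, using hypothetical values (the cell size, margin, column count and width list are not from the question): each square goes at its cell's origin plus half of the leftover space in the cell.

# Hypothetical fixed grid: every cell is `cell` pixels wide and high.
cell = 200
margin = 10
columns = 3
square_widths = [60, 75, 75, 45, 120, 90]

for i, w in enumerate(square_widths):
    row, col = divmod(i, columns)
    # Cell origin plus half the unused space centers the square in its cell,
    # regardless of the square's own size.
    x = margin + col * cell + (cell - w) / 2
    y = margin + row * cell + (cell - w) / 2
    print(f"square {i} ({w}px) -> place(x={x:.0f}, y={y:.0f})")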
Why don't you just use the grid geometry manager?
COLUMNS = 5
ROWS = 5
for i in range(COLUMNS*ROWS):
    row, col = divmod(i, COLUMNS)
    l = tk.Label(self, text=i, font=('', randint(10,50)))
    l.grid(row=row, column=col)
This will line everything up, but the randomness may make the rows and columns different sizes. You can adjust that with the row- and columnconfigure functions:
import tkinter as tk
from random import randint

COLUMNS = 10
ROWS = 5

class GUI(tk.Frame):
    def __init__(self, master=None, **kwargs):
        tk.Frame.__init__(self, master, **kwargs)

        labels = []
        for i in range(COLUMNS*ROWS):
            row, col = divmod(i, COLUMNS)
            l = tk.Label(self, text=i, font=('', randint(10,50)))
            l.grid(row=row, column=col)
            labels.append(l)

        self.update()  # draw everything
        max_width = max(w.winfo_width() for w in labels)
        max_height = max(w.winfo_height() for w in labels)
        for column in range(self.grid_size()[0]):
            self.columnconfigure(column, minsize=max_width)  # set all columns to the max width
        for row in range(self.grid_size()[1]):
            self.rowconfigure(row, minsize=max_height)  # set all rows to the max height

def main():
    root = tk.Tk()
    win = GUI(root)
    win.pack()
    root.mainloop()

if __name__ == "__main__":
    main()
I found the culprit that made the results not turn out as expected, and it wasn't the calculations. It turned out that the list I created didn't put the squares in the correct order (which I should have known already).
So instead I fetch the width from the raw data itself, which makes a lot more sense than maintaining a separate list.
The function now looks something like this (again, it's still being refined, but I wanted to post it so that people don't spend time coming up with solutions to an already solved problem :)):
def iterate(ypos, xpos, x_len):
    y = ypos
    x = xpos
    z = x_len
    i = 0
    while len(squares) <= 100:
        n = 0
        if y > 1:
            ypos -= max1 + 10
        if y < 0:
            if ypos < 0:
                ypos = 10
            else:
                ypos += max1 + 10  #+ (max1-min1)/2
        if x < 0:
            n = 0
            xc = 0
            xpos = x
            x_len = z
        while x_len < xmax:
            yc = ypos + (max1 - squares[i].winfo_width())/2
            if xpos <= 0:
                xpos = 10
            else:
                xpos += max1 + 10
            xc = xpos + (max1 - squares[i].winfo_width())/2
            squares[i].place(x=xc, y=yc)
            x_len += max1 + 10
            print(x_len)
            i += 1
            n += 1

Weighted moving average in python with different width in different regions

I am trying to take an average over the oscillations of some highly oscillating data. The oscillations are not uniform: there are fewer of them in the initial region.
x = np.linspace(0, 1000, 1000001)
y = some oscillating data say, sin(x^2)
(The original data file is huge, so I can't upload it)
I want to take a weighted moving average of the function and plot it. Initially the period of the function is larger, so I want to average over a large time interval, while later a smaller interval will do.
I have found a possibly elegant solution in the following post:
Weighted moving average in python
However, I want a different width in different regions of x: say, for x between (0, 100) I want width=0.6, while for x between (101, 300) width=0.2, and so on.
This is what I have tried to implement (with my limited knowledge of programming!):
def weighted_moving_average(x, y, step_size=0.05):  #change the width to control average
    bin_centers = np.arange(np.min(x), np.max(x)-0.5*step_size, step_size) + 0.5*step_size
    bin_avg = np.zeros(len(bin_centers))

    #We're going to weight with a Gaussian function
    def gaussian(x, amp=1, mean=0, sigma=1):
        return amp*np.exp(-(x-mean)**2/(2*sigma**2))

    if x.any() < 100:
        for index in range(0, len(bin_centers)):
            bin_center = bin_centers[index]
            weights = gaussian(x, mean=bin_center, sigma=0.6)
            bin_avg[index] = np.average(y, weights=weights)
    else:
        for index in range(0, len(bin_centers)):
            bin_center = bin_centers[index]
            weights = gaussian(x, mean=bin_center, sigma=0.1)
            bin_avg[index] = np.average(y, weights=weights)
    return (bin_centers, bin_avg)
Needless to say, this is not working! I get a plot that uses only the first value of sigma. Please help...
The following snippet should do more or less what you tried to do. Your main problem is a logical one: x.any() < 100 will always be True, so you will never execute the second branch.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 1000)
y = np.sin(x**2)

def gaussian(x, amp=1, mean=0, sigma=1):
    return amp*np.exp(-(x-mean)**2/(2*sigma**2))

def weighted_average(x, y, step_size=0.3):
    weights = np.zeros_like(x)
    bin_centers = np.arange(np.min(x), np.max(x)-.5*step_size, step_size) + .5*step_size
    bin_avg = np.zeros_like(bin_centers)
    for i, center in enumerate(bin_centers):
        # Select the indices that should count to that bin
        idx = ((x >= center-.5*step_size) & (x <= center+.5*step_size))
        weights = gaussian(x[idx], mean=center, sigma=step_size)
        bin_avg[i] = np.average(y[idx], weights=weights)
    return (bin_centers, bin_avg)

idx = x <= 4
plt.plot(*weighted_average(x[idx], y[idx], step_size=0.6))
idx = x >= 3
plt.plot(*weighted_average(x[idx], y[idx], step_size=0.1))
plt.plot(x, y)
plt.legend(['0.6', '0.1', 'y'])
plt.show()
However, depending on the usage, you could also implement a moving average directly:
x = np.linspace(0, 60, 1000)
y = np.sin(x**2)
z = np.zeros_like(x)
z[0] = x[0]
for i, t in enumerate(x[1:]):
    a = .2
    z[i+1] = a*y[i+1] + (1-a)*z[i]
plt.plot(x, y)
plt.plot(x, z)
plt.legend(['data', 'moving average'])
plt.show()
Of course you could then change a adaptively, e.g. depending on the local variance. Also note that this a priori has a small bias that depends on a and the step size in x.
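As a rough sketch of how the "different widths in different regions" requirement could be combined with this direct approach (the breakpoint at x=20 and the factors 0.05/0.3 are arbitrary choices, not from the answer), the smoothing factor can simply be made a function of x:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 60, 1000)
y = np.sin(x**2)

# Smoothing factor as a function of x: heavier smoothing (smaller a) early on,
# lighter smoothing later. The breakpoint and values are arbitrary.
a = np.where(x < 20, 0.05, 0.3)

z = np.zeros_like(y)
z[0] = y[0]
for i in range(1, len(y)):
    z[i] = a[i]*y[i] + (1 - a[i])*z[i-1]

plt.plot(x, y, label='data')
plt.plot(x, z, label='adaptive moving average')
plt.legend()
plt.show()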

Tkinter Python - creating points on a canvas to obtain a Sierpinski triangle

I want to make a program which plots a Sierpinski triangle (of any modulo). To do it I've used Tkinter. The program generates the fractal by moving a point randomly while always keeping it within the triangle. After repeating the process many times, the fractal appears.
However, there's a problem: I don't know how to plot individual points on a canvas in Tkinter. The rest of the program is OK, but I had to "cheat" and plot the points by drawing tiny lines instead. It works more or less, but it doesn't have as much resolution as it could.
Is there a function to plot points on a canvas, or another tool to do it (using Python)? Ideas for improving the rest of the program are also welcome.
Thanks. Here's what I have:
from tkinter import *
import random
import math

def plotpoint(x, y):
    global canvas
    point = canvas.create_line(x-1, y-1, x+1, y+1, fill = "#000000")

x = 0  #Initial coordinates
y = 0
#x and y will always be in the interval [0, 1]
mod = int(input("What is the modulo of the Sierpinski triangle that you want to generate? "))
points = int(input("How many points do you want the triangle to have? "))
tkengine = Tk()  #Window in which the triangle will be generated
window = Frame(tkengine)
window.pack()
canvas = Canvas(window, height = 700, width = 808, bg = "#FFFFFF")  #The dimensions of the canvas make the triangle look equilateral
canvas.pack()
for t in range(points):
    #Procedure for placing the points
    while True:
        #First, randomly choose one of the mod(mod+1)/2 triangles of the first step. a and b are two vectors which point to the chosen triangle. a goes one triangle to the right and b one up-right. The algorithm gives the same probability to every triangle, although it's not efficient.
        a = random.randint(0, mod-1)
        b = random.randint(0, mod-1)
        if a + b < mod:
            break
    #The previous point is dilated towards the origin of coordinates so that the big triangle of step 0 becomes the small one at the bottom-left of step one (divide by modulus). Then the vectors are added in order to move the point to the same place in another triangle.
    x = x / mod + a / mod + b / 2 / mod
    y = y / mod + b / mod
    #Coordinates [0,1] converted to pixels, for plotting in the canvas.
    X = math.floor(x * 808)
    Y = math.floor((1-y) * 700)
    plotpoint(X, Y)
tkengine.mainloop()
If you want to plot pixels, a canvas is probably the wrong choice. You can create a PhotoImage and modify individual pixels. It's a little slow if you plot each pixel individually, but you can get dramatic speedups if you only call the put method once for each row of the image.
Here's a complete example:
from tkinter import *
import random
import math

def plotpoint(x, y):
    global the_image
    the_image.put(('#000000',), to=(x, y))

x = 0
y = 0
mod = 3
points = 100000
tkengine = Tk()  #Window in which the triangle will be generated
window = Frame(tkengine)
window.pack()
the_image = PhotoImage(width=809, height=700)
label = Label(window, image=the_image, borderwidth=2, relief="raised")
label.pack(fill="both", expand=True)
for t in range(points):
    while True:
        a = random.randint(0, mod-1)
        b = random.randint(0, mod-1)
        if a + b < mod:
            break
    x = x / mod + a / mod + b / 2 / mod
    y = y / mod + b / mod
    X = math.floor(x * 808)
    Y = math.floor((1-y) * 700)
    plotpoint(X, Y)
tkengine.mainloop()
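A minimal sketch of the row-at-a-time put mentioned above (the checkerboard pattern and the 100x100 size are just placeholders): each row is assembled as one space-separated string of colors and written with a single put call, instead of one call per pixel.

from tkinter import Tk, PhotoImage, Label

root = Tk()
img = PhotoImage(width=100, height=100)
Label(root, image=img).pack()

for y in range(100):
    # One space-separated string of colors forms a whole row of the image.
    row = " ".join("#000000" if (x + y) % 2 == 0 else "#ffffff" for x in range(100))
    img.put((row,), to=(0, y))   # one put() per row instead of one per pixel

root.mainloop()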
You can use canvas.create_oval with the same coordinates for the two corners of the bounding box:
from tkinter import *
import random
import math

def plotpoint(x, y):
    global canvas
    # point = canvas.create_line(x-1, y-1, x+1, y+1, fill = "#000000")
    point = canvas.create_oval(x, y, x, y, fill="#000000", outline="#000000")

x = 0  #Initial coordinates
y = 0
#x and y will always be in the interval [0, 1]
mod = int(input("What is the modulo of the Sierpinski triangle that you want to generate? "))
points = int(input("How many points do you want the triangle to have? "))
tkengine = Tk()  #Window in which the triangle will be generated
window = Frame(tkengine)
window.pack()
canvas = Canvas(window, height = 700, width = 808, bg = "#FFFFFF")  #The dimensions of the canvas make the triangle look equilateral
canvas.pack()
for t in range(points):
    #Procedure for placing the points
    while True:
        #First, randomly choose one of the mod(mod+1)/2 triangles of the first step. a and b are two vectors which point to the chosen triangle. a goes one triangle to the right and b one up-right. The algorithm gives the same probability to every triangle, although it's not efficient.
        a = random.randint(0, mod-1)
        b = random.randint(0, mod-1)
        if a + b < mod:
            break
    #The previous point is dilated towards the origin of coordinates so that the big triangle of step 0 becomes the small one at the bottom-left of step one (divide by modulus). Then the vectors are added in order to move the point to the same place in another triangle.
    x = x / mod + a / mod + b / 2 / mod
    y = y / mod + b / mod
    #Coordinates [0,1] converted to pixels, for plotting in the canvas.
    X = math.floor(x * 808)
    Y = math.floor((1-y) * 700)
    plotpoint(X, Y)
tkengine.mainloop()
with a depth of 3 and 100,000 points, this gives:
I finally found a solution: if a 1x1 point is to be placed at pixel (x, y), a command that does exactly that is:
point = canvas.create_line(x, y, x+1, y+1, fill = "colour")
The oval is a good idea for 2x2 points.
Something remarkable about the original program is that it uses a lot of RAM if every point is treated as a separate object.

Selecting colors that are furthest apart

I'm working on a project that requires me to select "unique" colors for each item. At times there could be upwards of 400 items. Is there some way out there of selecting the 400 colors that differ the most? Is it as simple as just changing the RGB values by a fixed increment?
You could come up with an equal distribution of 400 colours by incrementing red, green and blue in turn by 34.
That is:
You know you have three colour channels: red, green and blue
You need 400 distinct combinations of R, G and B
So on each channel the number of increments you need is the cube root of 400, i.e. about 7.36
To span the range 0..255 with 7.36 increments, each increment must be about 255/7.36, i.e. about 34
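A quick sketch of that arithmetic (the step of 34 comes from the reasoning above; taking the first 400 of the resulting combinations is an illustrative choice):

from itertools import product

# Stepping each channel by 34 gives 8 levels (0, 34, ..., 238) per channel,
# i.e. 8**3 = 512 combinations, comfortably more than the 400 needed.
levels = range(0, 256, 34)
colors = list(product(levels, levels, levels))[:400]

print(len(colors), colors[:3])   # 400 [(0, 0, 0), (0, 0, 34), (0, 0, 68)]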
HSL or HSV would probably be a better representation than RGB for this task.
You may find that changing the hue gives better perceived variability, so set up your increments so that for every X units of change in S and L you change Y units of hue (with Y < X), and tune X and Y so you cover the spectrum with your desired number of samples.
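A minimal sketch of the hue-first idea, assuming the standard-library colorsys module and a fixed saturation and value (the specific s and v are arbitrary):

import colorsys

def distinct_colors(n, s=0.65, v=0.9):
    """Spread n colors evenly around the hue circle at fixed saturation and value."""
    colors = []
    for i in range(n):
        h = i / n                                 # hue in [0, 1)
        r, g, b = colorsys.hsv_to_rgb(h, s, v)
        colors.append((int(r * 255), int(g * 255), int(b * 255)))
    return colors

print(distinct_colors(5))

For 400 items, hue alone makes neighbouring indices nearly indistinguishable, so in practice you would also step S and V between hue sweeps, as suggested above (and as the final code below does).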
Here is my final code. Hopefully it helps someone down the road.
from PIL import Image, ImageDraw
import math, colorsys, os.path

# number of color circles needed
qty = 400
# the lowest value (V in HSV) can go
vmin = 30
# calculate how much to increment value by
vrange = 100 - vmin
if (qty >= 72):
    vdiff = math.floor(vrange / (qty / 72))
else:
    vdiff = 0
# set options
sizes = [16, 24, 32]
border_color = '000000'
border_size = 3
# initialize variables
hval = 0
sval = 50
vval = vmin
count = 0
while count < qty:
    im = Image.new('RGBA', (100, 100), (0, 0, 0, 0))
    draw = ImageDraw.Draw(im)
    draw.ellipse((5, 5, 95, 95), fill='#'+border_color)
    r, g, b = colorsys.hsv_to_rgb(hval/360.0, sval/100.0, vval/100.0)
    r = int(r*255)
    g = int(g*255)
    b = int(b*255)
    draw.ellipse((5+border_size, 5+border_size, 95-border_size, 95-border_size), fill=(r, g, b))
    del draw
    hexval = '%02x%02x%02x' % (r, g, b)
    for size in sizes:
        result = im.resize((size, size), Image.ANTIALIAS)
        result.save(str(qty)+'/'+hexval+'_'+str(size)+'.png', 'PNG')
    if hval + 10 < 360:
        hval += 10
    else:
        if sval == 50:
            hval = 0
            sval = 100
        else:
            hval = 0
            sval = 50
            vval += vdiff
    count += 1
I came across this problem a few times in my projects when I wanted to display, say, clusters of points. I found that the best way to go is to use the colormaps from matplotlib (https://matplotlib.org/stable/tutorials/colors/colormaps.html):
colors = plt.get_cmap("hsv")(np.linspace(0, 1, n_colors))
This will output RGBA colors, so you can get the RGB values with just
rgb = colors[:,:3]
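A small usage sketch of that approach (n_colors is whatever count you need; the hex conversion via matplotlib.colors.to_hex is an addition, not part of the answer above):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import to_hex

n_colors = 400
colors = plt.get_cmap("hsv")(np.linspace(0, 1, n_colors))   # shape (400, 4), RGBA in [0, 1]
rgb = colors[:, :3]
hex_colors = [to_hex(c) for c in colors]                    # e.g. '#ff0000', ...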
