PyQt creating color circle

I want to add a color circle to the widget placeholder there:
I already tried this library:
https://gist.github.com/tobi08151405/7b0a8151c9df1a41a87c1559dac1243a
But if the window wasn't square, the color circle didn't work.
I have already tried the other solution, but there is no method to get the color value.
How do you create a "color circle" in PyQt?
Could you recommend/show me a way of creating my own, so I can add it there?
Thanks!

The widget assumes that its shape is always square; the code provides a custom AspectLayout for that, but it's not necessary.
The problem comes from the fact that when the shape is not square, the computation of the color is wrong: coordinates are not properly mapped when one dimension is much bigger than the other. For instance, if the widget is much wider than tall, the x coordinate is "shifted", since the circle (which is now an actual ellipse) is drawn centered, while the color function uses the minimum size.
The solution is to create an internal QRect that is always centered and use it for both painting and computation:
class ColorCircle(QWidget):
    # ...
    def resizeEvent(self, ev: QResizeEvent) -> None:
        size = min(self.width(), self.height())
        self.radius = size / 2
        self.square = QRect(0, 0, size, size)
        self.square.moveCenter(self.rect().center())

    def paintEvent(self, ev: QPaintEvent) -> None:
        # ...
        p.setPen(Qt.transparent)
        p.setBrush(hsv_grad)
        p.drawEllipse(self.square)
        p.setBrush(val_grad)
        p.drawEllipse(self.square)
        # ...

    def map_color(self, x: int, y: int) -> QColor:
        x -= self.square.x()
        y -= self.square.y()
        # ...
Note that the code uses numpy, but only for functions that do not really require such a huge library, in an application that clearly doesn't need numpy's performance.
For instance, line_circle_inter uses a complex way to compute the position of the "cursor" (the small circle), but that's absolutely unnecessary, as the Hue and Saturation values already provide usable coordinates: the Hue indicates the angle within the circle (starting from the 12 o'clock position, counter-clockwise), while the Saturation is the distance from the center.
QLineF provides a convenience function, fromPolar(), which returns a line with a given length and angle: the length is the radius multiplied by the Saturation, and the angle is the Hue multiplied by 360 (plus 90°, as Qt angles always start at 3 o'clock); then we can translate that line to the center of the circle, and the cursor will be positioned at the second point of the segment:
def paintEvent(self, event):
    # ...
    p.setPen(Qt.black)
    p.setBrush(self.selected_color)
    line = QLineF.fromPolar(self.radius * self.s, 360 * self.h + 90)
    line.translate(self.square.center())
    p.drawEllipse(line.p2(), 10, 10)
The map_color function can use the same logic, but inverted: we construct a line starting from the center and ending at the mouse cursor position; then the Saturation is the length divided by the radius (clamping the value to 1.0, as that's the maximum possible value), while the Hue is the line angle (minus 90°, as above) divided by 360 and wrapped to the positive 0.0-1.0 range.
def map_color(self, x: int, y: int) -> tuple:
    line = QLineF(QPointF(self.rect().center()), QPointF(x, y))
    s = min(1.0, line.length() / self.radius)
    h = (line.angle() - 90) / 360 % 1.
    return h, s, self.v
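
For reference, here is a minimal self-contained sketch of such a widget built from the pieces above (assuming PyQt5; the gradient construction, signal name, and mouse handling are my own choices, not the gist's exact code):

import sys
from PyQt5.QtCore import Qt, QRect, QPointF, QLineF, pyqtSignal
from PyQt5.QtGui import QColor, QPainter, QConicalGradient, QRadialGradient
from PyQt5.QtWidgets import QApplication, QWidget

class ColorCircle(QWidget):
    colorChanged = pyqtSignal(QColor)  # hypothetical signal name

    def __init__(self, parent=None):
        super().__init__(parent)
        self.h, self.s, self.v = 0.0, 0.0, 1.0
        self.radius = 0
        self.square = QRect()

    def resizeEvent(self, ev):
        size = min(self.width(), self.height())
        self.radius = size / 2
        self.square = QRect(0, 0, size, size)
        self.square.moveCenter(self.rect().center())

    def paintEvent(self, ev):
        p = QPainter(self)
        p.setRenderHint(QPainter.Antialiasing)
        center = QPointF(self.square.center())
        # hue wheel: the HSV hue sweep is piecewise linear in RGB with
        # knots every 60 degrees, so seven stops reproduce it exactly
        hsv_grad = QConicalGradient(center, 90)
        for deg in range(0, 361, 60):
            hsv_grad.setColorAt(deg / 360, QColor.fromHsvF((deg % 360) / 360, 1, self.v))
        # saturation: opaque center fading to transparent at the rim
        val_grad = QRadialGradient(center, self.radius)
        val_grad.setColorAt(0.0, QColor.fromHsvF(0, 0, self.v, 1))
        val_grad.setColorAt(1.0, Qt.transparent)
        p.setPen(Qt.transparent)
        p.setBrush(hsv_grad)
        p.drawEllipse(self.square)
        p.setBrush(val_grad)
        p.drawEllipse(self.square)
        # selection cursor placed from the current hue/saturation
        p.setPen(Qt.black)
        p.setBrush(QColor.fromHsvF(self.h, self.s, self.v))
        line = QLineF.fromPolar(self.radius * self.s, 360 * self.h + 90)
        line.translate(center)
        p.drawEllipse(line.p2(), 6, 6)

    def map_color(self, x, y):
        line = QLineF(QPointF(self.square.center()), QPointF(x, y))
        s = min(1.0, line.length() / self.radius)
        h = (line.angle() - 90) / 360 % 1.
        return h, s, self.v

    def mousePressEvent(self, ev):
        self.h, self.s, self.v = self.map_color(ev.x(), ev.y())
        self.colorChanged.emit(QColor.fromHsvF(self.h, self.s, self.v))
        self.update()

    mouseMoveEvent = mousePressEvent  # drag to keep selecting

if __name__ == '__main__':
    app = QApplication(sys.argv)
    w = ColorCircle()
    w.colorChanged.connect(lambda c: print(c.name()))
    w.resize(300, 200)  # deliberately not square
    w.show()
    sys.exit(app.exec_())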

Related

How to translate points on image after cropping it and resizing it?

I am creating a program which allows a user to annotate images with points.
The program allows the user to zoom into an image so they can annotate more precisely.
The program zooms into an image by doing the following:
Find the center of the image
Find minimum and maximum coordinates of the new cropped image relative to the center
Crop the image
Resize the image to the original size
For this I have written the following Python code:
import cv2

def zoom_image(original_image, cut_off_percentage, list_of_points):
    height, width = original_image.shape[:2]
    center_x, center_y = int(width/2), int(height/2)
    half_new_width = center_x - int(center_x * cut_off_percentage)
    half_new_height = center_y - int(center_y * cut_off_percentage)
    min_x, max_x = center_x - half_new_width, center_x + half_new_width
    min_y, max_y = center_y - half_new_height, center_y + half_new_height
    # I want to include max coordinates in the new image, hence +1
    cropped = original_image[min_y:max_y+1, min_x:max_x+1]
    new_height, new_width = cropped.shape[:2]
    resized = cv2.resize(cropped, (width, height))
    translate_points(list_of_points, height, width, new_height, new_width, min_x, min_y)
I want to resize the image to the original width and height so the user always works on the same "surface" regardless of how zoomed the image is.
The problem I encountered is how to correctly scale the points (annotations) when doing this. My algorithm was the following:
Translate the points on the original image by subtracting min_x from the x coordinate and min_y from the y coordinate
Calculate constants for scaling the x and y coordinates of the points
Multiply the coordinates by those constants
For this I use the following Python code:
import cv2

def translate_points(list_of_points, height, width, new_height, new_width, min_x, min_y):
    # Calculate constants for scaling points
    scale_x, scale_y = width / new_width, height / new_height
    # Translate and scale points
    for point in list_of_points:
        point.x = (point.x - min_x) * scale_x
        point.y = (point.y - min_y) * scale_y
This code doesn't work. If I zoom in once, the pixel offset is hard to detect, but it is there. If I keep zooming in, it becomes much easier to see the "drift" of the points. Here are images to provide examples. On the original image (1440x850) I placed a point in the middle of the blue crosshair. The more I zoom into the image, the easier it is to see that the algorithm doesn't work with bigger cut-offs.
Original image. The blue crosshair is the middle point of the image. The red angles indicate what the borders will be after the image is zoomed once.
Image after zooming in once.
Image after zooming in 5 times. Clearly, the green point is no longer in the middle of the image.
The cut_off_percentage I used is 15% (meaning that I keep 85% of the width and height of the original image, calculated from the center).
I have also tried the following library: Augmentit python library
The library has functions for cropping images and resizing them together with points. The library also causes the points to drift. This is expected, since the code I implemented and the library's functions use the same algorithm.
Additionally, I have checked whether this is a rounding problem. It is not. The library rounds the points after multiplying the coordinates by the scales. Regardless of how they are rounded, the points are still off by 4-5 px. This increases the more I zoom into the picture.
EDIT: A more detailed explanation is given here, since I didn't understand the given answer.
The following is an image of a right human hand.
Image of a hand in my program
The original dimensions of this image are 1440 pixels in width and 850 pixels in height. As you can see in this image, I have annotated the right wrist at location (756.0, 685.0). To check whether my program works correctly, I opened this exact image in GIMP and placed a white point at location (756.0, 685.0). The result is the following:
Image of a hand in GIMP
The coordinates in my program work correctly. Now, if I calculate the parameters given in the first answer according to its code, I get the following:
vec = [756, 685]
hh = 425
hw = 720
cov = [720, 425]
These parameters make sense to me. Now I want to zoom the image to a scale of 1.15. I crop the image by choosing a center point and calculating low and high values which indicate what rectangle of the image to keep and what to cut. In the following image you can see what is kept after cutting (everything inside the red rectangle).
What is kept when cutting
The lows and highs when cutting are:
xb = [95, 1349]
yb = [56, 794]
Size of the cropped image: 1254 x 738
This cropped image will be resized back to the original size. However, when I do that, my annotation gets completely wrong coordinates when using the parameters described above.
After zoom
This is the code I used to crop, resize, and rescale the points, based on the first answer:
height, width = image.shape[:2]  # note: shape is (height, width)
center_x, center_y = int(width / 2), int(height / 2)
scale = 1.15
scaled_width = int(center_x / scale)
scaled_height = int(center_y / scale)
xlow = center_x - scaled_width
xhigh = center_x + scaled_width
ylow = center_y - scaled_height
yhigh = center_y + scaled_height
xb = [xlow, xhigh]
yb = [ylow, yhigh]
cropped = image[yb[0]:yb[1], xb[0]:xb[1]]
resized = cv2.resize(cropped, (width, height), interpolation=cv2.INTER_CUBIC)

# Rescaling points
cov = (width / 2, height / 2)
height, width = resized.shape[:2]  # same size as the original image
hw = width / 2
hh = height / 2
for point in points:
    x, y = point.scx, point.scy
    x -= xlow
    y -= ylow
    x -= cov[0] - (hw / scale)
    y -= cov[1] - (hh / scale)
    x *= scale
    y *= scale
    x = int(x)
    y = int(y)
    point.set_coordinates(x, y)
So this really is an integer rounding issue. It's magnified at high zoom levels, because being off by 1 pixel at 20x zoom throws you off much further. I tried out two versions of my crop-and-zoom GUI: one with int rounding, another without.
You can see that the one with int rounding keeps approaching the correct position as the zoom grows, but as soon as the zoom takes another step, it rebounds back to being wrong. The non-rounded version sticks right up against the mid-lines (denoting the proper position) the whole time.
Note that the resized rectangle (the one drawn on the non-zoomed image) blurs past the midlines. This is because of the resize interpolation from OpenCV. The yellow rectangle that I'm using to check that my points scale correctly is redrawn on the zoomed frame, so it stays crisp.
With Int Rounding
Without Int Rounding
I have the center-of-view locked to the bottom-right corner of the rectangle for this demo.
import cv2
import numpy as np

# clamp value
def clamp(val, low, high):
    if val < low:
        return low
    if val > high:
        return high
    return val

# bound the center-of-view
def boundCenter(cov, scale, hh, hw):
    # scale half res
    scaled_hw = int(hw / scale)
    scaled_hh = int(hh / scale)
    # bound
    xlow = scaled_hw
    xhigh = (2 * hw) - scaled_hw
    ylow = scaled_hh
    yhigh = (2 * hh) - scaled_hh
    cov[0] = clamp(cov[0], xlow, xhigh)
    cov[1] = clamp(cov[1], ylow, yhigh)

# do a zoomed view
def zoomView(orig, cov, scale, hh, hw):
    # calculate crop
    scaled_hh = int(hh / scale)
    scaled_hw = int(hw / scale)
    xlow = cov[0] - scaled_hw
    xhigh = cov[0] + scaled_hw
    ylow = cov[1] - scaled_hh
    yhigh = cov[1] + scaled_hh
    xb = [xlow, xhigh]
    yb = [ylow, yhigh]
    # crop and resize
    copy = np.copy(orig)
    crop = copy[yb[0]:yb[1], xb[0]:xb[1]]
    display = cv2.resize(crop, (width, height), interpolation=cv2.INTER_CUBIC)
    return display

# draw vector shape
def drawVec(img, vec, pos, cov, hh, hw, scale):
    con = []
    for point in vec:
        # unpack point
        x, y = point
        x += pos[0]
        y += pos[1]
        # here's the int version
        # Note: this is the same as xlow and ylow from the above function
        # x -= cov[0] - int(hw / scale)
        # y -= cov[1] - int(hh / scale)
        # rescale point
        x -= cov[0] - (hw / scale)
        y -= cov[1] - (hh / scale)
        x *= scale
        y *= scale
        x = int(x)
        y = int(y)
        # add
        con.append([x, y])
    con = np.array(con)
    cv2.drawContours(img, [con], -1, (0, 200, 200), -1)

# font stuff
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 1
fontColor = (255, 100, 0)
thickness = 2

# draw blank
res = (800, 1200, 3)
blank = np.zeros(res, np.uint8)
print(blank.shape)

# draw a rectangle on the original
cv2.rectangle(blank, (100, 100), (400, 200), (200, 150, 0), -1)

# vectored shape
# comparison shape
bshape = [[100, 100], [400, 100], [400, 200], [100, 200]]
bpos = [0, 0]  # offset
# random shape
vshape = [[148, 89], [245, 179], [299, 67], [326, 171], [385, 222], [291, 235], [291, 340], [229, 267], [89, 358], [151, 251], [57, 167], [167, 164]]
vpos = [100, 100]  # offset

# get original image res
height, width = blank.shape[:2]
hh = int(height / 2)
hw = int(width / 2)

# center of view
cov = [600, 400]
camera_spd = 5

# scale
scale = 1
scale_step = 0.2

# loop
done = False
while not done:
    # crop and show image
    display = zoomView(blank, cov, scale, hh, hw)
    # drawVec(display, vshape, vpos, cov, hh, hw, scale)
    drawVec(display, bshape, bpos, cov, hh, hw, scale)

    # draw a dot in the middle
    cv2.circle(display, (hw, hh), 4, (0, 0, 255), -1)

    # draw center lines
    cv2.line(display, (hw, 0), (hw, height), (0, 0, 255), 1)
    cv2.line(display, (0, hh), (width, hh), (0, 0, 255), 1)

    # draw zoom text
    cv2.putText(display, "Zoom: " + str(scale), (15, 40), font,
                fontScale, fontColor, thickness, cv2.LINE_AA)

    # show
    cv2.imshow("Display", display)
    key = cv2.waitKey(1)

    # check keys
    done = key == ord('q')

    # Note: if you're actually gonna make a GUI
    # use the keyboard module or something else for this
    # wasd to move center-of-view
    if key == ord('d'):
        cov[0] += camera_spd
    if key == ord('a'):
        cov[0] -= camera_spd
    if key == ord('w'):
        cov[1] -= camera_spd
    if key == ord('s'):
        cov[1] += camera_spd

    # z,x to decrease/increase zoom (lower bound is 1.0)
    if key == ord('x'):
        scale += scale_step
    if key == ord('z'):
        scale -= scale_step
    scale = round(scale, 2)

    # bound cov
    boundCenter(cov, scale, hh, hw)
Edit: Explanation of the drawVec parameters
img: The OpenCV image to be drawn on
vec: A list of [x,y] points
pos: The offset to draw those points at
cov: Center-Of-View, where the middle of our zoomed display is pointed at
hh: Half-Height, the height of "img" divided by 2
hw: Half-Width, the width of "img" divided by 2
I have looked through my code and realized where I was making the mistake which caused the points to be offset.
In my program, I have a canvas of a specific size. The size of the canvas is a constant and is always larger than the images being drawn on it. When the program draws an image on the canvas, it first resizes that image so it fits on the canvas. The size of the resized image is somewhat smaller than the size of the canvas. An image is usually drawn starting from the top-left corner of the canvas. Since I wanted to always draw the image in the center of the canvas, I shifted the location from the top-left corner of the canvas to another point. This is what I didn't account for when doing the image zooming.
def zoom(image, ratio, points, canvas_off_x, canvas_off_y):
    height, width = image.shape[:2]  # note: shape is (height, width)
    new_width, new_height = int(ratio * width), int(ratio * height)
    center_x, center_y = int(new_width / 2), int(new_height / 2)
    radius_x, radius_y = int(width / 2), int(height / 2)
    min_x, max_x = center_x - radius_x, center_x + radius_x
    min_y, max_y = center_y - radius_y, center_y + radius_y
    img_resized = cv2.resize(image, (new_width, new_height), interpolation=cv2.INTER_LINEAR)
    img_cropped = img_resized[min_y:max_y+1, min_x:max_x+1]
    for point in points:
        x, y = point.get_original_coordinates()
        x -= canvas_off_x
        y -= canvas_off_y
        x = int((x * ratio) - min_x + canvas_off_x)
        y = int((y * ratio) - min_y + canvas_off_y)
        point.set_scaled_coordinates(x, y)
    return img_cropped
In the code above, canvas_off_x and canvas_off_y are the offsets from the top-left corner of the canvas.
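
For illustration, a hypothetical usage of the zoom function above (the Point stand-in and the offset values are assumptions, not the program's real class):

import cv2

class Point:
    # minimal stand-in for the program's point class
    def __init__(self, x, y):
        self.ox, self.oy = x, y
        self.sx = self.sy = None
    def get_original_coordinates(self):
        return self.ox, self.oy
    def set_scaled_coordinates(self, x, y):
        self.sx, self.sy = x, y

image = cv2.imread("hand.png")   # assumed input file
annotations = [Point(756, 685)]  # the wrist annotation from above
zoomed = zoom(image, 1.15, annotations, 40, 25)  # assumed canvas offsets
print(annotations[0].sx, annotations[0].sy)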

Python | Rotate Rectangle and get new boundary dimensions

So we have rectangle A and we rotate it around its corner by x degrees. Now I want to know how to calculate the boundaries of the new rectangle.
What I mean by boundaries (blue rect):
Known values are the inner rectangle's width/height/center/corners.
Thanks in advance!
The bounding rectangle has dimensions:
New_Height = Old_Width * Abs(Sin(Fi)) + Old_Height * Abs(Cos(Fi))
New_Width = Old_Width * Abs(Cos(Fi)) + Old_Height * Abs(Sin(Fi))
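In Python, a minimal sketch of these two formulas (the function name is mine):

import math

def rotated_bounding_box(old_width, old_height, fi_degrees):
    # axis-aligned bounds of a rectangle rotated by angle fi
    fi = math.radians(fi_degrees)
    new_width = old_width * abs(math.cos(fi)) + old_height * abs(math.sin(fi))
    new_height = old_width * abs(math.sin(fi)) + old_height * abs(math.cos(fi))
    return new_width, new_height

print(rotated_bounding_box(4, 2, 90))  # ~(2.0, 4.0): a quarter turn swaps the sides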

How to get cursor coordinates relative to matrix scale in pyglet/opengl?

I am making a 2D game in pyglet and use both glTranslatef and glScalef:
def background_motion(dt):
    if stars.left:
        pyglet.gl.glTranslatef(stars.speed, 0, 0)
        stars.translation[0] += stars.speed
    if stars.right:
        pyglet.gl.glTranslatef(-stars.speed, 0, 0)
        stars.translation[0] -= stars.speed
    if stars.up:
        pyglet.gl.glTranslatef(0, -stars.speed, 0)
        stars.translation[1] -= stars.speed
    if stars.down:
        pyglet.gl.glTranslatef(0, stars.speed, 0)
        stars.translation[1] += stars.speed

pyglet.clock.schedule_interval(background_motion, 0.05)

@window.event
def on_mouse_scroll(x, y, scroll_x, scroll_y):
    if scroll_y > 0:
        stars.scale += 0.01
    elif scroll_y < 0:
        stars.scale -= 0.01

@window.event
def on_draw():
    window.clear()
    pyglet.gl.glScalef(stars.scale, stars.scale, 1)
    stars.image.draw()
    for s in game.ships:
        s.draw()
    pyglet.gl.glPushMatrix()
    pyglet.gl.glLoadIdentity()
    # HUD Start
    overlay.draw(stars.image.x, stars.image.y, game.ships, stars.scale, stars.image.width)
    if game.pause:
        pause_text.draw()
    # HUD End
    pyglet.gl.glPopMatrix()
    stars.scale = 1
However, I also need the cursor coordinates relative to the background. For the movement I simply added the translation to the x, y coordinates, which works, but only when I don't scale the matrix:
@window.event
def on_mouse_motion(x, y, dx, dy):
    if player.course_setting:
        player.projected_heading = (x - stars.translation[0],
                                    y - stars.translation[1])
How can I get the cursor coordinates accounting for scale?
You'll have to unproject the pointer position. Projection happens as follows:
p_eye = M · p
p_clip = P · p_eye
At this point the primitive is clipped, but we can ignore this for the moment. After clipping comes the homogeneous divide, which brings the coordinates into NDC space, i.e. the viewport is treated as a cuboid of dimensions [-1,1]×[-1,1]×[0,1]:
p_NDC = p_clip / p_clip.w
From there it's mapped into pixel dimensions. I'm going to omit this step here.
Unprojecting means doing these operations in reverse. There's a small trick in there regarding the homogeneous divide, though: it is a kind of "antisymmetric" operation (not the proper term for it, but it gets the point across) and happens at the end of both projection and unprojection. Unprojection hence is
p_NDC.w = 1
p_eye' = inv(P) · p_NDC
p' = inv(M) · p_eye'
p = p' / p'.w
All of this has been wrapped into unproject functions for your convenience by GLU (if you insist on using the fixed-function matrix stack) or GLM (though not by my linmath.h).
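
As a minimal numpy sketch of those four steps (P and M are the 4×4 projection and modelview matrices; this is not the GLU/GLM API itself):

import numpy as np

def unproject(ndc_x, ndc_y, ndc_z, P, M):
    # reverse the projection chain: NDC -> eye -> world
    p_ndc = np.array([ndc_x, ndc_y, ndc_z, 1.0])  # w set to 1
    p_eye = np.linalg.inv(P) @ p_ndc
    p_world = np.linalg.inv(M) @ p_eye
    return p_world / p_world[3]                   # homogeneous divide last

# The pixel position still has to be mapped to NDC first, e.g. for a
# window of size w x h: ndc_x = 2 * mx / w - 1, and similarly for y
# (mind which corner the window's y-axis starts from).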

TkInter python - creating points on a canvas to obtain a Sierpinski triangle

I want to make a program which plots a Sierpinski triangle (of any modulo). In order to do it I've used Tkinter. The program generates the fractal by moving a point randomly, always keeping it inside the triangle. After repeating the process many times, the fractal appears.
However, there's a problem. I don't know how to plot points on a canvas in Tkinter. The rest of the program is OK, but I had to "cheat" in order to plot the points, by drawing small lines instead of points. It works more or less, but it doesn't have as much resolution as it could have.
Is there a function to plot points on a canvas, or another tool to do it (using Python)? Ideas for improving the rest of the program are also welcome.
Thanks. Here's what I have:
from tkinter import *
import random
import math

def plotpoint(x, y):
    global canvas
    point = canvas.create_line(x-1, y-1, x+1, y+1, fill = "#000000")

x = 0  # Initial coordinates
y = 0
# x and y will always be in the interval [0, 1]
mod = int(input("What is the modulo of the Sierpinski triangle that you want to generate? "))
points = int(input("How many points do you want the triangle to have? "))
tkengine = Tk()  # Window in which the triangle will be generated
window = Frame(tkengine)
window.pack()
canvas = Canvas(window, height = 700, width = 808, bg = "#FFFFFF")  # The dimensions of the canvas make the triangle look equilateral
canvas.pack()
for t in range(points):
    # Procedure for placing the points
    while True:
        # First, randomly choose one of the mod(mod+1)/2 triangles of the first
        # step. a and b are two vectors which point to the chosen triangle.
        # a goes one triangle to the right and b one up-right. The algorithm
        # gives the same probability to every triangle, although it's not efficient.
        a = random.randint(0, mod-1)
        b = random.randint(0, mod-1)
        if a + b < mod:
            break
    # The previous point is dilated towards the origin of coordinates so that
    # the big triangle of step 0 becomes the small one at the bottom-left of
    # step one (divide by modulus). Then the vectors are added in order to
    # move the point to the same place in another triangle.
    x = x / mod + a / mod + b / 2 / mod
    y = y / mod + b / mod
    # Coordinates [0,1] converted to pixels, for plotting in the canvas.
    X = math.floor(x * 808)
    Y = math.floor((1-y) * 700)
    plotpoint(X, Y)
tkengine.mainloop()
If you want to plot pixels, a canvas is probably the wrong choice. You can create a PhotoImage and modify individual pixels. It's a little slow if you plot each pixel individually, but you can get dramatic speedups if you only call the put method once for each row of the image.
Here's a complete example:
from tkinter import *
import random
import math

def plotpoint(x, y):
    global the_image
    the_image.put(('#000000',), to=(x,y))

x = 0
y = 0
mod = 3
points = 100000
tkengine = Tk()  # Window in which the triangle will be generated
window = Frame(tkengine)
window.pack()
the_image = PhotoImage(width=809, height=700)
label = Label(window, image=the_image, borderwidth=2, relief="raised")
label.pack(fill="both", expand=True)
for t in range(points):
    while True:
        a = random.randint(0, mod-1)
        b = random.randint(0, mod-1)
        if a + b < mod:
            break
    x = x / mod + a / mod + b / 2 / mod
    y = y / mod + b / mod
    X = math.floor(x * 808)
    Y = math.floor((1-y) * 700)
    plotpoint(X, Y)
tkengine.mainloop()
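
For the row-at-a-time speedup mentioned above, the idea is roughly the following sketch (my own; row_colors is assumed to be a list of Tk color strings for one scanline):

def plot_row(image, y, row_colors):
    # one put() call per row: the colors for a row are grouped in braces
    image.put('{' + ' '.join(row_colors) + '}', to=(0, y))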
You can use canvas.create_oval with the same coordinates for the two corners of the bounding box:
from tkinter import *
import random
import math

def plotpoint(x, y):
    global canvas
    # point = canvas.create_line(x-1, y-1, x+1, y+1, fill = "#000000")
    point = canvas.create_oval(x, y, x, y, fill="#000000", outline="#000000")

x = 0  # Initial coordinates
y = 0
# x and y will always be in the interval [0, 1]
mod = int(input("What is the modulo of the Sierpinski triangle that you want to generate? "))
points = int(input("How many points do you want the triangle to have? "))
tkengine = Tk()  # Window in which the triangle will be generated
window = Frame(tkengine)
window.pack()
canvas = Canvas(window, height = 700, width = 808, bg = "#FFFFFF")  # The dimensions of the canvas make the triangle look equilateral
canvas.pack()
for t in range(points):
    # Procedure for placing the points
    while True:
        # First, randomly choose one of the mod(mod+1)/2 triangles of the first
        # step. a and b are two vectors which point to the chosen triangle.
        # a goes one triangle to the right and b one up-right. The algorithm
        # gives the same probability to every triangle, although it's not efficient.
        a = random.randint(0, mod-1)
        b = random.randint(0, mod-1)
        if a + b < mod:
            break
    # The previous point is dilated towards the origin of coordinates so that
    # the big triangle of step 0 becomes the small one at the bottom-left of
    # step one (divide by modulus). Then the vectors are added in order to
    # move the point to the same place in another triangle.
    x = x / mod + a / mod + b / 2 / mod
    y = y / mod + b / mod
    # Coordinates [0,1] converted to pixels, for plotting in the canvas.
    X = math.floor(x * 808)
    Y = math.floor((1-y) * 700)
    plotpoint(X, Y)
tkengine.mainloop()
With a depth of 3 and 100,000 points, this gives:
Finally found a solution: if a 1x1 point is to be placed at pixel (x, y), a command which does exactly that is:
point = canvas.create_line(x, y, x+1, y+1, fill = "colour")
The oval is a good idea for 2x2 points.
Something remarkable about the original program is that it uses a lot of RAM if every point is treated as a separate object.

Perlin Noise understanding

I found two tutorials that explain how Perlin noise works, but in the first tutorial I ran into the incomprehensible mystery of gradients, and in the second one the mystery of surflets.
First case
The first tutorial is located here: catlikecoding.com/unity/tutorials/noise. At first the author explains value noise, which is completely understandable, because all we need to do is draw a grid of random colors and then just interpolate between the colors.
But when it comes to Perlin noise, we have to deal with gradients, not single colors. At first I thought of gradients as colors, so if we have two gradients and want to interpolate between them, we take a point of the first gradient and interpolate it with the respective point of the second gradient. But if the gradients are the same, the result will be the same as the gradients.
In the tutorial the author does it another way. We have a 1D grid which consists of stripes that are all filled with the same gradient, and each gradient can be represented as a transition from 0 to 1 (here 0 is black and 1 is white). Then the author says:
Now every stripe has the same gradient, except that they are offset
from one another. So for every t0, the gradient to the right of it is
t1 = t0 - 1. Let's smoothly interpolate them.
So it means that we have to interpolate between a gradient which is represented as a transition from 0 to 1 and a gradient which is represented as a transition from -1 to 0.
It implies that a gradient doesn't start at a position with value 0 and doesn't stop at a position with value 1. It starts somewhere at -1 and ends somewhere at 2, or maybe it has no start and end points at all. We can only see the 0 to 1 range, and I can't understand why it is like this. Where did the idea of a continuous gradient come from? I thought we only had a gradient from 0 to 1 for every stripe and that's all, don't we?
When I asked the author about all this, he answered as if it were something obvious:
The gradient to the right is a visual reference. It’s the gradient for
the next higher integer. You’re right that it goes negative to the
left. They all do.
So t0 is the gradient that’s zero at the lattice point on the left
side of the region between two integers. And t1 is the gradient that’s
zero at the lattice point on the right side of the same region.
Gradient noise is obtained by interpolating between these two
gradients in between lattice points. And yes, that can produce
negative results, which end up black. That’s why the next step is to
scale and offset the result.
Now I feel like it is impossible for me to understand how this works, so I just have to believe and repeat after smarter guys. But hope dies last, so I beg you to explain it to me somehow.
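
To make the quoted explanation concrete, here is a small worked example (my own, assuming both lattice points happen to get gradient +1 and using a simple smoothstep blend):

# gradient noise between lattice points 0 and 1, sampled at p = 0.25
p = 0.25
t0 = p        # signed distance from the left lattice point (0)
t1 = p - 1.0  # signed distance from the right lattice point (1): -0.75
v0 = 1 * t0   # left ramp: zero at 0, rising to the right
v1 = 1 * t1   # right ramp: zero at 1, negative to the left of it
t = p * p * (3 - 2 * p)     # smoothstep weight: 0.15625
value = v0 + t * (v1 - v0)  # 0.25 + 0.15625 * (-1.0) = 0.09375
# negative results are possible, hence the final scale-and-offset step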
Second case
The second tutorial is located here: eastfarthing.com/blog/2015-04-21-noise/ and it's much less sophisticated than the previous one.
The only problem I encountered is that I can't understand the next paragraph and what's going on after it:
So given this, we can just focus on the direction of G and always use
unit length vectors. If we clamp the product of the falloff kernel and
the gradient to 0 at all points beyond the 2×2 square, this gives us
the surflet mentioned in that cryptic sentence.
I'm not sure whether the problem is my poor math or my poor English, so I ask you to explain what this actually means in simple words (see the sketch below for one reading).
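A sketch of what that sentence seems to describe (the function names are mine; the kernel is the 1 - (3 - 2|t|)t² falloff that also appears in the code below):

def falloff(t):
    # smooth kernel: 1 at t = 0, fading to 0 at |t| >= 1
    t = abs(t)
    return 0.0 if t >= 1 else 1 - (3 - 2 * t) * t * t

def surflet(dx, dy, gx, gy):
    # (dx, dy): offset from the lattice point; (gx, gy): its unit gradient.
    # The product of the separable falloff kernel and the gradient dot
    # product vanishes outside the 2x2 square around the lattice point.
    return falloff(dx) * falloff(dy) * (dx * gx + dy * gy)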
Here is some code I have written so far; it relates to the second case:
import sys
import random
import math
from PyQt4.QtGui import *
from PyQt4.QtCore import pyqtSlot

class Example(QWidget):
    def __init__(self):
        super(Example, self).__init__()
        self.gx = 1
        self.gy = 0
        self.lbl = QLabel()
        self.tlb = None
        self.image = QImage(512, 512, QImage.Format_RGB32)
        self.hbox = QHBoxLayout()
        self.pixmap = QPixmap()
        self.length = 1
        self.initUI()

    def mousePressEvent(self, QMouseEvent):
        px = QMouseEvent.pos().x()
        py = QMouseEvent.pos().y()
        size = self.frameSize()
        self.gx = px - size.width()/2
        self.gy = py - size.height()/2
        h = (self.gx**2 + self.gy**2)**0.5
        self.gx /= h
        self.gy /= h
        self.fillImage()

    def wheelEvent(self, event):
        self.length += (event.delta() * 0.001)
        print(self.length)

    def initUI(self):
        self.hbox = QHBoxLayout(self)
        self.pixmap = QPixmap()
        self.move(300, 200)
        self.setWindowTitle('Red Rock')
        self.addedWidget = None
        self.fillImage()
        self.setLayout(self.hbox)
        self.show()

    def fillImage(self):
        step = 128
        for x in range(0, 512, step):
            for y in range(0, 512, step):
                rn = random.randrange(0, 360)
                self.gx = math.cos(math.radians(rn))
                self.gy = math.sin(math.radians(rn))
                for x1 in range(0, step):
                    t = -1 + (x1 / step) * 2
                    color = (1 - (3 - 2 * abs(t)) * t**2)
                    for y1 in range(0, step):
                        t1 = -1 + (y1 / step) * 2
                        color1 = (1 - (3 - 2 * abs(t1)) * t1**2)
                        result = (255/2) + (color * color1 * (t * self.gx + t1 * self.gy)) * (255/2)
                        result = int(result)  # qRgb expects ints
                        self.image.setPixel(x + x1, y + y1, qRgb(result, result, result))
        self.pixmap = self.pixmap.fromImage(self.image)
        if self.lbl == None:
            self.lbl = QLabel(self)
        else:
            self.lbl.setPixmap(self.pixmap)
        if self.addedWidget == None:
            self.hbox.addWidget(self.lbl)
            self.addedWidget = True
        self.repaint()
        self.update()

def main():
    app = QApplication(sys.argv)
    ex = Example()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
float Noise::perlin1D(glm::vec3 point, float frequency)
{
    // map the point to the frequency space
    point *= frequency;
    // get the base integer the point exists on
    int i0 = static_cast<int>(floorf(point.x));
    // distance from the left integer to the point
    float t0 = point.x - static_cast<float>(i0);
    // distance from the right integer to the point
    float t1 = t0 - 1.f;
    // make sure the base integer is in the range of the hash function
    i0 &= hashMask;
    // get the right integer (already in range of the hash function)
    int i1 = i0 + 1;
    // choose a pseudorandom gradient for the left and the right integers
    float g0 = gradients1D[hash[i0] & gradientsMask1D];
    float g1 = gradients1D[hash[i1] & gradientsMask1D];
    // take the dot product between our gradients and our distances to
    // get the influence values (gradients are more influential the
    // closer they are to the point)
    float v0 = g0 * t0;
    float v1 = g1 * t1;
    // map the point to a smooth curve with first and second derivatives of 0
    float t = smooth(t0);
    // interpolate our influence values along the smooth curve
    return glm::mix(v0, v1, t) * 2.f;
}
The code above is a commented version of the code in question, rewritten in C++. Obviously all credit goes to catlikecoding.
We've given the function a point p. Let's assume the point p is fractional, so for example if p is .25 then the integer to the left of p is 0 and the integer to the right of p is 1. Let's call these integers l and r respectively.
Then t0 is the distance from l to p and t1 is the distance from r to p. The distance is negative for t1, since you have to move in a negative direction to get from r to p.
If we continue on to the Perlin noise part of this implementation, g0 and g1 are pseudorandom gradients in one dimension. Once again "gradient" may be confusing here, since g0 and g1 are floats, but a gradient is simply a direction, and in one dimension you can only go positive or negative, so these gradients are +1 and -1. We take the dot product between the gradients and the distances, but in one dimension this is simply multiplication. The result of the dot product is the two floats v0 and v1. These are the influence values of our Perlin noise implementation. Finally, we smoothly interpolate between these influence values to produce a smooth noise function. Let me know if this helps! This Perlin noise explanation was very helpful in my understanding of this code.
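
For completeness, a rough Python transcription of the C++ above (my own; the hash table and gradient set are simplified stand-ins, not catlikecoding's exact tables):

import math
import random

random.seed(0)
hash_table = list(range(256))
random.shuffle(hash_table)
hash_mask = 255
gradients = (-1.0, 1.0)  # in 1D a gradient is just a direction

def smooth(t):
    # 6t^5 - 15t^4 + 10t^3: zero first and second derivatives at 0 and 1
    return t * t * t * (t * (t * 6 - 15) + 10)

def perlin1d(x, frequency=1.0):
    x *= frequency
    i0 = math.floor(x)
    t0 = x - i0    # distance from the left lattice point
    t1 = t0 - 1.0  # (negative) distance from the right lattice point
    i0 &= hash_mask
    i1 = (i0 + 1) & hash_mask
    g0 = gradients[hash_table[i0] & 1]
    g1 = gradients[hash_table[i1] & 1]
    v0 = g0 * t0   # influence of the left gradient
    v1 = g1 * t1   # influence of the right gradient
    t = smooth(t0)
    return (v0 + t * (v1 - v0)) * 2.0  # lerp, scaled to roughly [-1, 1]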
