basic fractal coloring problems - graphics

I am trying to get more comfortable with the math behind fractal coloring and to understand the coloring algorithms better. I am following this paper:
http://jussiharkonen.com/files/on_fractal_coloring_techniques%28lo-res%29.pdf
The paper gives specific parameters for each of the functions; however, when I use the same parameters, my results are not quite right, and I have no idea what could be going on.
To start, I am using the iteration count coloring algorithm with the following Julia set:
c = 0.5 + 0.25i and p = 2
with the coloring algorithm:
The coloring function simply returns the number of
elements in the truncated orbit divided by 20
And the palette function:
I(u) = k(u − u0),
where k = 2.5 and u0 = 0, was used.
And with a palette being white at 0 and 1, and interpolating to black in-between.
and following this algorithm:
Set z0 to correspond to the position of the pixel in the complex plane.
Calculate the truncated orbit by iterating the formula zn = f(zn−1) starting
from z0 until either
• |zn| > M, or
• n = Nmax,
where Nmax is the maximum number of iterations.
Using the coloring and color index functions, map the resulting truncated
orbit to a color index value.
Determine an RGB color of the pixel by using the palette function
Using this my code looks like the following:
float izoom = pow(1.001, zoom);
vec2 z = focusPoint + (uv * 4.0 - 2.0) / izoom;
vec2 c = vec2(0.5, 0.25);
const float B = 2.0;
float l = 0.0; // iteration count; must be initialized before l++
for (int i = 0; i < 100; i++)
{
    z = vec2(z.x*z.x - z.y*z.y, 2.0*z.x*z.y) + c;
    if (length(z) > 10.0) break;
    l++;
}
float ind = basicindex(l);
vec4 col = color(ind);
and have the following index and coloring functions:
float basicindex(float val) {
    return val / 20.0;
}

vec4 color(float index) {
    float r = 2.5 * index;
    float g = r;
    float b = g;
    vec3 v = 0.5 - 0.5 * sin(3.14/2.0 + 3.14 * vec3(r, g, b));
    return vec4(1.0 - v, 1.0);
}
The paper provides the following image:
https://imgur.com/YIZMhaa
While my code produces:
https://imgur.com/OrxdMsN
I get the correct results by using k = 1.0 instead of 2.5, however I would prefer to understand why my results are incorrect. When extending this to the smooth coloring algorithms, my results are still incorrect so I would like to figure this out first.
Let me know if this isn't the correct place for this kind of question and I can move it to the math stack exchange. I wasn't sure which place was more appropriate.

Your image is perfectly implemented for Figure 3.3 in the paper. The other image you posted uses a different routine.
Your figure seems to have that bit of perspective code at the top, but remove that and they should be the same.
If your objection is to the color extremes, you set those with the "0.5 - 0.5 * ..." part of your code. This makes the darkest black 0.5, when in the example image you're trying to duplicate the darkest black should be 1 and the lightest white should be 0.
You're making the whiteness equal to the distance from 0.5.
If you ignore the fractal altogether, you are getting a bunch of values that can be normalized between 0 and 1, and you're coloring those in some particular way. Clearly the image you are duplicating is linear between 0 and 1, so putting black at 0.5 cannot be correct.
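For illustration, a minimal sketch of such a linear grayscale palette in JavaScript (the function name is mine, not from the paper): the normalized index u in [0,1] maps straight to gray, with 0 white and 1 black:

// Linear palette sketch: index 0 is white, index 1 is black.
function grayPalette(u) {
    var v = Math.round(255 * (1 - Math.min(Math.max(u, 0), 1))); // clamp, then invert
    return 'rgb(' + v + ',' + v + ',' + v + ')';
}

The full demo below does essentially that, with a logarithmic remap of the iteration count: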
o = {
    length : 500,
    width : 500,
    c : [.5, .25], // c = x + iy will be [x, y]
    maxIterate : 100,
    canvas : null
}

function point(pos, color){
    var c = 255 - Math.round((1 + Math.log(color)/Math.log(o.maxIterate)) * 255);
    c = c.toString(16);
    if (c.length == 1) c = '0'+c;
    o.canvas.fillStyle = "#"+c+c+c;
    o.canvas.fillRect(pos[0], pos[1], 1, 1);
}

function conversion(x, y, R){
    var m = R / o.width;
    var x1 = m * (2 * x - o.width);
    var y2 = m * (o.width - 2 * y);
    return [x1, y2];
}

function f(z, c){
    return [z[0]*z[0] - z[1]*z[1] + c[0], 2 * z[0] * z[1] + c[1]];
}

function abs(z){
    return Math.sqrt(z[0]*z[0] + z[1]*z[1]);
}

function init(){
    var R = (1 + Math.sqrt(1+4*abs(o.c))) / 2,
        z, x, y, i;
    o.canvas = document.getElementById('a').getContext("2d");
    for (x = 0; x < o.width; x++){
        for (y = 0; y < o.length; y++){
            i = 0;
            z = conversion(x, y, R);
            while (i < o.maxIterate && abs(z) < R){
                z = f(z, o.c);
                if (abs(z) > R) break;
                i++;
            }
            if (i) point([x, y], i / o.maxIterate);
        }
    }
}
init();

<canvas id="a" width="500" height="500"></canvas>
via: http://jsfiddle.net/3fnB6/29/

Related

Raytracer renders objects too large

I am following this course to learn computer graphics and write my first ray tracer.
I already have some visible results, but they seem to be too large.
The overall algorithm the course outlines is this:
Image Raytrace (Camera cam, Scene scene, int width, int height)
{
    Image image = new Image (width, height);
    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++) {
            Ray ray = RayThruPixel (cam, i, j);
            Intersection hit = Intersect (ray, scene);
            image[i][j] = FindColor (hit);
        }
    return image;
}
I perform all calculations in camera space (where the camera is at (0, 0, 0)). Thus RayThruPixel returns a ray in camera coordinates, Intersect returns an intersection point also in camera coordinates, and the image pixel array is a direct mapping from the intersection results.
The below image is the rendering of a sphere at (0, 0, -40000) world coordinates and radius 0.15, and camera at (0, 0, 2) world coordinates looking towards (0, 0, 0) world coordinates. I would normally expect the sphere to be a lot smaller given its small radius and far away Z coordinate.
The same thing happens with rendering triangles too. In the below image I have 2 triangles that form a square, but it's way too zoomed in. The triangles have coordinates between -1 and 1, and the camera is looking from world coordinates (0, 0, 4).
This is what the square is expected to look like:
Here is the code snippet I use to determine the collision with the sphere. I'm not sure if I should divide the radius by the z coordinate here - without it, the circle is even larger:
Sphere* sphere = dynamic_cast<Sphere*>(object);
float t;
vec3 p0 = ray->origin;
vec3 p1 = ray->direction;
float a = glm::dot(p1, p1);
vec3 center2 = vec3(modelview * object->transform * glm::vec4(sphere->center, 1.0f)); // camera coords
float b = 2 * glm::dot(p1, (p0 - center2));
float radius = sphere->radius / center2.z;
float c = glm::dot((p0 - center2), (p0 - center2)) - radius * radius;
float D = b * b - 4 * a * c;
if (D > 0) {
    // two roots
    float sqrtD = glm::sqrt(D);
    float root1 = (-b + sqrtD) / (2 * a);
    float root2 = (-b - sqrtD) / (2 * a);
    if (root1 > 0 && root2 > 0) {
        t = glm::min(root1, root2);
        found = true;
    }
    else if (root2 < 0 && root1 >= 0) {
        t = root1;
        found = true;
    }
    else {
        // should not happen, implies that both roots are negative
    }
}
else if (D == 0) {
    // one root
    float root = -b / (2 * a);
    t = root;
    found = true;
}
else if (D < 0) {
    // no roots
    // continue;
}
if (found) {
    hitVector = p0 + p1 * t;
    hitNormal = glm::normalize(hitVector - center2);
}
Here I generate the ray going through the relevant pixel:
Ray* RayThruPixel(Camera* camera, int x, int y) {
    const vec3 a = eye - center;
    const vec3 b = up;
    const vec3 w = glm::normalize(a);
    const vec3 u = glm::normalize(glm::cross(b, w));
    const vec3 v = glm::cross(w, u);
    const float aspect = ((float)width) / height;
    float fovyrad = glm::radians(camera->fovy);
    const float fovx = 2 * atan(tan(fovyrad * 0.5) * aspect);
    const float alpha = tan(fovx * 0.5) * (x - (width * 0.5)) / (width * 0.5);
    const float beta = tan(fovyrad * 0.5) * ((height * 0.5) - y) / (height * 0.5);
    return new Ray(/* origin= */ vec3(modelview * vec4(eye, 1.0f)),
                   /* direction= */ glm::normalize(vec3(modelview * glm::normalize(vec4(alpha * u + beta * v - w, 1.0f)))));
}
And intersection with a triangle:
Triangle* triangle = dynamic_cast<Triangle*>(object);
// vertices in camera coords
vec3 vertex1 = vec3(modelview * object->transform * vec4(*vertices[triangle->index1], 1.0f));
vec3 vertex2 = vec3(modelview * object->transform * vec4(*vertices[triangle->index2], 1.0f));
vec3 vertex3 = vec3(modelview * object->transform * vec4(*vertices[triangle->index3], 1.0f));
vec3 N = glm::normalize(glm::cross(vertex2 - vertex1, vertex3 - vertex1));
float D = -glm::dot(N, vertex1);
float m = glm::dot(N, ray->direction);
if (m == 0) {
    // no intersection because ray parallel to plane
}
else {
    float t = -(glm::dot(N, ray->origin) + D) / m;
    if (t < 0) {
        // no intersection because ray goes away from triangle plane
    }
    vec3 Phit = ray->origin + t * ray->direction;
    vec3 edge1 = vertex2 - vertex1;
    vec3 edge2 = vertex3 - vertex2;
    vec3 edge3 = vertex1 - vertex3;
    vec3 c1 = Phit - vertex1;
    vec3 c2 = Phit - vertex2;
    vec3 c3 = Phit - vertex3;
    if (glm::dot(N, glm::cross(edge1, c1)) > 0
        && glm::dot(N, glm::cross(edge2, c2)) > 0
        && glm::dot(N, glm::cross(edge3, c3)) > 0) {
        found = true;
        hitVector = Phit;
        hitNormal = N;
    }
}
Given that the output image is a circle, and that the same problem happens with triangles as well, my guess is the problem isn't from the intersection logic itself, but rather something wrong with the coordinate spaces or transformations. Could calculating everything in camera space be causing this?
I eventually figured it out by myself. I first noticed the problem was here:
return new Ray(/* origin= */ vec3(modelview * vec4(eye, 1.0f)),
/* direction= */ glm::normalize(vec3( modelview *
glm::normalize(vec4(alpha * u + beta * v - w, 1.0f)))));
When I removed the direction vector transformation (leaving it at just glm::normalize(alpha * u + beta * v - w)) I noticed the problem disappeared - the square was rendered correctly. I was prepared to accept it as an answer, although I wasn't completely sure why.
Then I noticed that after doing transformations on the object, the camera wasn't positioned properly, which makes sense - we're not pointing the rays in the correct direction.
I realized that my entire approach of doing the calculations in camera space was wrong. If I still wanted to use this approach, the rays would have to be transformed, but in a different way that would involve some complex math I wasn't ready to deal with.
I instead changed my approach to do transformations and intersections in world space and only use camera space at the lighting stage. We have to use camera space at some point, since we want to actually look in the direction of the object we are rendering.
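For what it's worth, the usual convention when a ray does have to be moved between spaces is to transform the origin as a point (w = 1) and the direction as a vector (w = 0), normalizing only the resulting 3-component direction afterwards. A minimal sketch in JavaScript (the matrix helper and names are mine, for illustration only):

// Multiply a 4x4 row-major matrix M by a 4-component vector v.
function mulMat4Vec4(M, v) {
    return M.map(function (row) {
        return row[0]*v[0] + row[1]*v[1] + row[2]*v[2] + row[3]*v[3];
    });
}

// Transform a ray: origin as a point (w = 1), direction as a vector (w = 0).
function transformRay(M, origin, direction) {
    var o = mulMat4Vec4(M, [origin[0], origin[1], origin[2], 1]);        // picks up translation
    var d = mulMat4Vec4(M, [direction[0], direction[1], direction[2], 0]); // rotation/scale only
    var len = Math.hypot(d[0], d[1], d[2]);
    return {
        origin: [o[0], o[1], o[2]],
        direction: [d[0] / len, d[1] / len, d[2] / len] // normalize after transforming
    };
}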

Optimize quadratic curve tracing using numeric methods

I am trying to trace quadratic bezier curves, placing "markers" at a given step-length distance. I tried the naive way:
const p = toPoint(map, points[section + 1]);
const p2 = toPoint(map, points[section]);
const {x: cx, y: cy} = toPoint(map, cp);
const ll1 = toLatLng(map, p),
      ll2 = toLatLng(map, p2),
      llc = toLatLng(map, { x: cx, y: cy });
const lineLength = quadraticBezierLength(
    ll1.lat, ll1.lng,
    llc.lat, llc.lng,
    ll2.lat, ll2.lng
);
for (let index = 0; index < Math.floor(lineLength / distance); index++) {
    const t = distance / lineLength;
    const markerPoint = getQuadraticPoint(
        t * index,
        p.x, p.y,
        cx, cy,
        p2.x, p2.y
    );
    const markerLatLng = toLatLng(map, markerPoint);
    markers.push(markerLatLng);
}
This approach does not work, since the relationship between the parameter t and the arc length L of a quadratic curve is not linear. I could not find a formula that would give me a good approximation, so I am looking at solving this problem with numeric methods [Newton]. One simple option I am considering is to split the curve into x [for instance 10] times more pieces than needed; then, using the same quadraticBezierLength() function, calculate the distance to each of those points and choose the point whose length is closest to distance * index.
This, however, would be a huge overkill in terms of algorithmic complexity. I could probably start comparing points for index + 1 from the subset after/without the point I already selected, thus skipping the beginning of the set. That would lower the complexity somewhat, yet it is still very inefficient.
Any ideas and/or suggestions?
Ideally, I want a function that would take d - distance along the curve, p0, cp, p1 - three points defining a quadratic bezier curve and return an array of coordinates, implemented with the least complexity possible.
OK, I found an analytic formula for the 2D quadratic bezier curve here:
Calculate the length of a segment of a quadratic bezier
So the idea is simply to binary search the parameter t until the analytically obtained arclength matches the wanted length...
C++ code:
//---------------------------------------------------------------------------
float x0,x1,x2,y0,y1,y2; // control points
float ax[3],ay[3];       // coefficients
//---------------------------------------------------------------------------
void get_xy(float &x,float &y,float t) // get point on curve from parameter t=<0,1>
{
    float tt=t*t;
    x=ax[0]+(ax[1]*t)+(ax[2]*tt);
    y=ay[0]+(ay[1]*t)+(ay[2]*tt);
}
//---------------------------------------------------------------------------
float get_l_naive(float t) // get arclength from parameter t=<0,1>
{
    // naive iteration
    float x0,x1,y0,y1,dx,dy,l=0.0,dt=0.001;
    get_xy(x1,y1,t);
    for (int e=1;e;)
    {
        t-=dt; if (t<0.0){ e=0; t=0.0; }
        x0=x1; y0=y1; get_xy(x1,y1,t);
        dx=x1-x0; dy=y1-y0;
        l+=sqrt((dx*dx)+(dy*dy));
    }
    return l;
}
//---------------------------------------------------------------------------
float get_l(float t) // get arclength from parameter t=<0,1>
{
    // analytic formula from: https://stackoverflow.com/a/11857788/2521214
    float ax,ay,bx,by,A,B,C,b,c,u,k,cu,cb;
    ax=x0-x1-x1+x2;
    ay=y0-y1-y1+y2;
    bx=x1+x1-x0-x0;
    by=y1+y1-y0-y0;
    A=4.0*((ax*ax)+(ay*ay));
    B=4.0*((ax*bx)+(ay*by));
    C=     (bx*bx)+(by*by);
    b=B/(2.0*A);
    c=C/A;
    u=t+b;
    k=c-(b*b);
    cu=sqrt((u*u)+k);
    cb=sqrt((b*b)+k);
    return 0.5*sqrt(A)*((u*cu)-(b*cb)+(k*log(fabs((u+cu))/(b+cb))));
}
//---------------------------------------------------------------------------
float get_t(float l0) // get parameter t=<0,1> from arclength
{
    float t0,t,dt,l;
    for (t=0.0,dt=0.5;dt>1e-10;dt*=0.5)
    {
        t0=t; t+=dt;
        l=get_l(t);
        if (l>l0) t=t0;
    }
    return t;
}
//---------------------------------------------------------------------------
void set_coef() // compute coefficients from control points
{
    ax[0]=( x0);
    ax[1]=+(2.0*x1)-(2.0*x0);
    ax[2]=( x2)-(2.0*x1)+( x0);
    ay[0]=( y0);
    ay[1]=+(2.0*y1)-(2.0*y0);
    ay[2]=( y2)-(2.0*y1)+( y0);
}
//---------------------------------------------------------------------------
//---------------------------------------------------------------------------
Usage:
Set the control points x0,y0,...
Then you can use t=get_t(wanted_arclength) freely.
In case you want to use get_l_naive and/or get_xy you have to call set_coef first.
In case you want to tweak speed/accuracy you can play with the target accuracy of the binary search, currently set to 1e-10.
Here is an optimized version (with the get_l and get_t functions merged):
//---------------------------------------------------------------------------
float get_t(float l0) // get parameter t=<0,1> from arclength
{
    float t0,t,dt,l;
    float ax,ay,bx,by,A,B,C,b,c,u,k,cu,cb,cA;
    // precompute get_l(t) constants
    ax=x0-x1-x1+x2;
    ay=y0-y1-y1+y2;
    bx=x1+x1-x0-x0;
    by=y1+y1-y0-y0;
    A=4.0*((ax*ax)+(ay*ay));
    B=4.0*((ax*bx)+(ay*by));
    C=     (bx*bx)+(by*by);
    b=B/(2.0*A);
    c=C/A;
    k=c-(b*b);
    cb=sqrt((b*b)+k);
    cA=0.5*sqrt(A);
    // bin search t so get_l == l0
    for (t=0.0,dt=0.5;dt>1e-10;dt*=0.5)
    {
        t0=t; t+=dt;
        // l=get_l(t);
        u=t+b; cu=sqrt((u*u)+k);
        l=cA*((u*cu)-(b*cb)+(k*log(fabs((u+cu))/(b+cb))));
        if (l>l0) t=t0;
    }
    return t;
}
//---------------------------------------------------------------------------
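Since the question's code is JavaScript, here is a direct JS port of the same analytic-length binary search (a sketch; the factory name and point format are mine). p0, cp, p1 are {x, y} control points; it does not handle the degenerate case where the points are collinear such that A = 0:

// Returns a function mapping wanted arclength l0 to parameter t in <0,1>.
function makeArcLengthToT(p0, cp, p1) {
    var ax = p0.x - 2*cp.x + p1.x, ay = p0.y - 2*cp.y + p1.y;
    var bx = 2*cp.x - 2*p0.x,      by = 2*cp.y - 2*p0.y;
    var A = 4 * (ax*ax + ay*ay);
    var B = 4 * (ax*bx + ay*by);
    var C = bx*bx + by*by;
    var b = B / (2*A), c = C / A, k = c - b*b;
    var cb = Math.sqrt(b*b + k), cA = 0.5 * Math.sqrt(A);
    function arcLen(t) { // analytic arclength for parameter t
        var u = t + b, cu = Math.sqrt(u*u + k);
        return cA * (u*cu - b*cb + k * Math.log(Math.abs(u + cu) / (b + cb)));
    }
    return function (l0) { // binary search t so arcLen(t) == l0
        var t = 0;
        for (var dt = 0.5; dt > 1e-10; dt *= 0.5) {
            if (arcLen(t + dt) <= l0) t += dt;
        }
        return t;
    };
}

With that, placing markers becomes tAt = makeArcLengthToT(p, {x: cx, y: cy}, p2) followed by markerT = tAt(distance * index).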
For now, I came up with the below:
for (let index = 0; index < Math.floor(numFloat * times); index++) {
    const t = distance / lineLength / times;
    const l1 = toLatLng(map, p), lcp = toLatLng(map, new L.Point(cx, cy));
    const lutPoint = getQuadraticPoint(
        t * index,
        p.x, p.y,
        cx, cy,
        p2.x, p2.y
    );
    const lutLatLng = toLatLng(map, lutPoint);
    const length = quadraticBezierLength(l1.lat, l1.lng, lcp.lat, lcp.lng, lutLatLng.lat, lutLatLng.lng);
    lut.push({t: t * index, length});
}
const lut1 = lut.filter(({length}) => !isNaN(length));
console.log('lookup table:', lut1);

for (let index = 0; index < Math.floor(numFloat); index++) {
    const t = distance / lineLength;
    // find t closest to distance * index
    const markerT = lut1.reduce((a, b) => {
        return a.t && Math.abs(b.length - distance * index) < Math.abs(a.length - distance * index) ? b.t : a.t || 0;
    });
    const markerPoint = getQuadraticPoint(
        markerT,
        p.x, p.y,
        cx, cy,
        p2.x, p2.y
    );
    const markerLatLng = toLatLng(map, markerPoint);
}
I think it's just that my Bezier curve length function is not working as I expected.
function quadraticBezierLength(x1, y1, x2, y2, x3, y3) {
    let a, b, c, d, e, u, a1, e1, c1, d1, u1, v1x, v1y;
    v1x = x2 * 2;
    v1y = y2 * 2;
    d = x1 - v1x + x3;
    d1 = y1 - v1y + y3;
    e = v1x - 2 * x1;
    e1 = v1y - 2 * y1;
    c1 = a = 4 * (d * d + d1 * d1);
    c1 += b = 4 * (d * e + d1 * e1);
    c1 += c = e * e + e1 * e1;
    c1 = 2 * Math.sqrt(c1);
    a1 = 2 * a * (u = Math.sqrt(a));
    u1 = b / u;
    a = 4 * c * a - b * b;
    c = 2 * Math.sqrt(c);
    return (
        (a1 * c1 + u * b * (c1 - c) + a * Math.log((2 * u + u1 + c1) / (u1 + c))) /
        (4 * a1)
    );
}
I believe that the full curve length is correct, but the partial length that is being calculated for the lookup table is wrong.
If I am right, you want points that are equally spaced in terms of curvilinear abscissa (rather than at constant Euclidean distance, which would be a very different problem).
Computing the curvilinear abscissa s as a function of the curve parameter t is indeed an option, but that leads you to the resolution of the equation s(t) = Sk/n for integer k, where S is the total length (or s(t) = kd if a step is imposed). This is not convenient because s(t) is not available as a simple function and is transcendental.
A better method is to solve the differential equation

dt/ds = 1/(ds/dt) = 1/√((dx/dt)² + (dy/dt)²)

using your preferred ODE solver (e.g. RK4). This lets you impose your fixed step on s and is computationally efficient.
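A minimal sketch of that approach for the quadratic case (function names are mine): integrate dt/ds with classic RK4, emitting a point after every fixed arclength step d. It assumes the derivative never vanishes (true for a non-degenerate quadratic bezier):

// B(t) = (1-t)^2 p0 + 2(1-t)t cp + t^2 p1, points as {x, y}.
function bezierPoint(p0, cp, p1, t) {
    const s = 1 - t;
    return {
        x: s * s * p0.x + 2 * s * t * cp.x + t * t * p1.x,
        y: s * s * p0.y + 2 * s * t * cp.y + t * t * p1.y
    };
}

// dt/ds = 1 / |B'(t)|, with B'(t) = 2(1-t)(cp - p0) + 2t(p1 - cp).
function dtds(p0, cp, p1, t) {
    const dx = 2 * (1 - t) * (cp.x - p0.x) + 2 * t * (p1.x - cp.x);
    const dy = 2 * (1 - t) * (cp.y - p0.y) + 2 * t * (p1.y - cp.y);
    return 1 / Math.hypot(dx, dy);
}

// March along the curve in equal arclength steps d, using RK4 on dt/ds.
function equalArcLengthPoints(p0, cp, p1, d) {
    const pts = [];
    let t = 0;
    while (t <= 1) {
        pts.push(bezierPoint(p0, cp, p1, t));
        const k1 = dtds(p0, cp, p1, t);
        const k2 = dtds(p0, cp, p1, t + 0.5 * d * k1);
        const k3 = dtds(p0, cp, p1, t + 0.5 * d * k2);
        const k4 = dtds(p0, cp, p1, t + d * k3);
        t += (d / 6) * (k1 + 2 * k2 + 2 * k3 + k4);
    }
    return pts;
}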

Color interpolation with a set of 2D points

I am trying to model a physical problem of photoelasticity on a surface. I managed to get an array of X,Y coordinates of the surface, and for each point I have a corresponding color in RGB format. I used Python's scatter to plot this, and the result is already great, but there are still some discontinuities because of the lack of resolution, which I cannot improve :( I just wanted to ask how I can generate the same surface plot in a continuous way, with new "in-between" points whose colors are interpolated with respect to the neighboring points. I am not necessarily looking to code it in Python; every software is welcome. Thanks!
I wrote this in javascript: https://jsfiddle.net/87nw05kz/
The important part is calculateColor. It finds the inverse square of the distance from the pixel being shaded to each color, and uses that to decide how much each color should affect that pixel (this is inverse-distance weighting, also known as Shepard's method).
// Euclidean distance helper (assumed; the snippet calls it but does not define it).
function distance(x1, y1, x2, y2) {
    return Math.hypot(x2 - x1, y2 - y1);
}

function calculateColor(x, y) {
    let total = 0;
    for (let i = 0; i < colors.length; i++) {
        let c = colors[i];
        let d = distance(c.x, c.y, x, y);
        if (d === 0) {
            return c;
        }
        d = 1 / (d * d); // inverse-square weight
        c.d = d;
        total += d;
    }
    let r = 0, g = 0, b = 0;
    for (let i = 0; i < colors.length; i++) {
        let c = colors[i];
        let ratio = c.d / total;
        r += ratio * c.r;
        g += ratio * c.g;
        b += ratio * c.b;
    }
    r = Math.floor(r);
    g = Math.floor(g);
    b = Math.floor(b);
    return {r: r, g: g, b: b};
}
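A minimal usage sketch (assuming a colors array of {x, y, r, g, b} points and a canvas element; the element id is mine): shade every pixel with the weighted color:

const cnv = document.getElementById('out'); // assumed <canvas id="out">
const ctx = cnv.getContext('2d');
const img = ctx.createImageData(cnv.width, cnv.height);
for (let y = 0; y < cnv.height; y++) {
    for (let x = 0; x < cnv.width; x++) {
        const {r, g, b} = calculateColor(x, y);
        const i = 4 * (y * cnv.width + x);
        img.data[i] = r;
        img.data[i + 1] = g;
        img.data[i + 2] = b;
        img.data[i + 3] = 255; // opaque
    }
}
ctx.putImageData(img, 0, 0);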

RGB to HSL conversion

I'm creating a Color Picker tool and for the HSL slider, I need to be able to convert RGB to HSL. When I searched SO for a way to do the conversion, I found this question HSL to RGB color conversion.
While it provides a function to do conversion from RGB to HSL, I see no explanation to what's really going on in the calculation. To understand it better, I've read the HSL and HSV on Wikipedia.
Later, I've rewritten the function from the "HSL to RGB color conversion" using the calculations from the "HSL and HSV" page.
I'm stuck at the calculation of hue when R is the max value. See the calculation from the "HSL and HSV" page:
This is from another wiki page that's in Dutch:
and this is from the answers to "HSL to RGB color conversion":
case r: h = (g - b) / d + (g < b ? 6 : 0); break; // d = max-min = c
I've tested all three with a few RGB values and they seem to produce similar (if not exactly the same) results. What I'm wondering is: are they performing the same thing? Will I get different results for some specific RGB values? Which one should I be using?
hue = (g - b) / c; // dutch wiki
hue = ((g - b) / c) % 6; // eng wiki
hue = (g - b) / c + (g < b ? 6 : 0); // SO answer
function rgb2hsl(r, g, b) {
    // see https://en.wikipedia.org/wiki/HSL_and_HSV#Formal_derivation
    // convert r,g,b [0,255] range to [0,1]
    r = r / 255,
    g = g / 255,
    b = b / 255;
    // get the min and max of r,g,b
    var max = Math.max(r, g, b);
    var min = Math.min(r, g, b);
    // lightness is the average of the largest and smallest color components
    var lum = (max + min) / 2;
    var hue;
    var sat;
    if (max == min) { // no saturation
        hue = 0;
        sat = 0;
    } else {
        var c = max - min; // chroma
        // saturation is simply the chroma scaled to fill
        // the interval [0, 1] for every combination of hue and lightness
        sat = c / (1 - Math.abs(2 * lum - 1));
        switch (max) {
            case r:
                // hue = (g - b) / c;
                // hue = ((g - b) / c) % 6;
                // hue = (g - b) / c + (g < b ? 6 : 0);
                break;
            case g:
                hue = (b - r) / c + 2;
                break;
            case b:
                hue = (r - g) / c + 4;
                break;
        }
    }
    hue = Math.round(hue * 60); // °
    sat = Math.round(sat * 100); // %
    lum = Math.round(lum * 100); // %
    return [hue, sat, lum];
}
I've been reading several wiki pages and checking different calculations, and creating visualizations of RGB cube projection onto a hexagon. And I'd like to post my understanding of this conversion. Since I find this conversion (representations of color models using geometric shapes) interesting, I'll try to be as thorough as I can be. First, let's start with RGB.
RGB
Well, this doesn't really need much explanation. In its simplest form, you have 3 values, R, G, and B in the range of [0,255]. For example, 51,153,204. We can represent it using a bar graph:
RGB Cube
We can also represent a color in a 3D space. We have three values R, G, B that correspond to X, Y, and Z. All three values are in the [0,255] range, which results in a cube. But before creating the RGB cube, let's work in 2D space first. Pairwise combinations of R, G, B give us: RG, RB, GB. If we were to graph these on a plane, we'd get the following:
These are the first three sides of the RGB cube. If we place them on a 3D space, it results in a half cube:
If you check the above graph, mixing two colors gives us a new color at (255,255), and these are Yellow, Magenta, and Cyan. Again, pairwise combinations of these give us: YM, YC, and MC. These are the missing sides of the cube. Once we add them, we get a complete cube:
And the position of 51,153,204 in this cube:
Projection of RGB Cube onto a hexagon
Now that we have the RGB cube, let's project it onto a hexagon. First, we tilt the cube by 45° on the x axis, and then 35.264° on the y axis. After the second tilt, the black corner is at the bottom and the white corner is at the top, and they both pass through the z axis.
As you can see, we get the hexagon look we want, with the correct hue order, when we look at the cube from the top. But we need to project this onto a real hexagon. What we do is draw a hexagon that is the same size as the cube's top view. All the corners of the hexagon correspond to the corners of the cube and their colors, and the top corner of the cube, which is white, is projected onto the center of the hexagon. Black is omitted. And if we map every color onto the hexagon, we get the look on the right.
And the position of 51,153,204 on the hexagon would be:
Calculating the Hue
Before we make the calculation, let's define what hue is.
Hue is roughly the angle of the vector to a point in the projection, with red at 0°.
... hue is how far around that hexagon’s edge the point lies.
This is the calculation from the HSL and HSV wiki page. We'll be using it in this explanation.
Examine the hexagon and the position of 51,153,204 on it.
First, we scale the R, G, B values to fill the [0,1] interval.
R = 51 / 255 = 0.2
G = 153 / 255 = 0.6
B = 204 / 255 = 0.8
Next, find the max and min values of R, G, B
M = max(R, G, B) = max(0.2, 0.6, 0.8) = 0.8
m = min(R, G, B) = min(0.2, 0.6, 0.8) = 0.2
Then, calculate C (chroma). Chroma is defined as:
... chroma is roughly the distance of the point from the origin.
Chroma is the relative size of the hexagon passing through a point ...
C = OP / OP'
C = M − m = 0.8 − 0.2 = 0.6
Now we have the R, G, B, and C values. Checking the conditions, M = B holds for 51,153,204, so we'll be using H' = (R − G) / C + 4.
Let's check the hexagon again. (R - G) / C gives us the length of BP segment.
segment = (R - G) / C = (0.2 - 0.6) / 0.6 = -0.6666666666666666
We'll place this segment on the inner hexagon. Starting point of the hexagon is R (red) at 0°. If the segment length is positive, it should be on RY, if negative, it should be on RM. In this case, it is negative -0.6666666666666666, and is on the RM edge.
Next, we need to shift the position of the segment, or rather P₁, towards B (because M = B). Blue is at 240°. The hexagon has 6 sides, and each side corresponds to 60°; 240 / 60 = 4. So we need to shift (increment) P₁ by 4 (which is 240°). After the shift, P₁ will be at P, and we'll get the length of RYGCP.
segment = (R - G) / C = (0.2 - 0.6) / 0.6 = -0.6666666666666666
RYGCP = segment + 4 = 3.3333333333333335
The circumference of the hexagon is 6, which corresponds to 360°. 51,153,204's distance to 0° is 3.3333333333333335. If we multiply 3.3333333333333335 by 60, we get its position in degrees.
H' = 3.3333333333333335
H = H' * 60 = 200°
In the case of M = R, since we place one end of the segment at R (0°), we don't need to shift the segment if its length is positive; the position of P₁ is already positive. But if the segment length is negative, we need to shift it by 6, because a negative value means that the angular position is greater than 180° and we need to do a full rotation.
So, neither the Dutch wiki solution hue = (g - b) / c; nor the Eng wiki solution hue = ((g - b) / c) % 6; will work for negative segment length. Only the SO answer hue = (g - b) / c + (g < b ? 6 : 0); works for both negative and positive values.
JSFiddle: Test all three methods for rgb(255,71,99)
JSFiddle: Find a color's position in RGB Cube and hue hexagon visually
Working hue calculation:
console.log(rgb2hue(51,153,204));
console.log(rgb2hue(255,71,99));
console.log(rgb2hue(255,0,0));
console.log(rgb2hue(255,128,0));
console.log(rgb2hue(124,252,0));
function rgb2hue(r, g, b) {
    r /= 255;
    g /= 255;
    b /= 255;
    var max = Math.max(r, g, b);
    var min = Math.min(r, g, b);
    var c = max - min;
    var hue;
    if (c == 0) {
        hue = 0;
    } else {
        switch (max) {
            case r:
                var segment = (g - b) / c;
                var shift = 0 / 60; // R° / (360° / hex sides)
                if (segment < 0) { // hue > 180, full rotation
                    shift = 360 / 60; // R° / (360° / hex sides)
                }
                hue = segment + shift;
                break;
            case g:
                var segment = (b - r) / c;
                var shift = 120 / 60; // G° / (360° / hex sides)
                hue = segment + shift;
                break;
            case b:
                var segment = (r - g) / c;
                var shift = 240 / 60; // B° / (360° / hex sides)
                hue = segment + shift;
                break;
        }
    }
    return hue * 60; // hue is in [0,6], scale it up
}
This page provides a function for conversion between color spaces, including RGB to HSL.
function RGBToHSL(r, g, b) {
    // Make r, g, and b fractions of 1
    r /= 255;
    g /= 255;
    b /= 255;
    // Find greatest and smallest channel values
    let cmin = Math.min(r, g, b),
        cmax = Math.max(r, g, b),
        delta = cmax - cmin,
        h = 0,
        s = 0,
        l = 0;
    // Calculate hue
    // No difference
    if (delta == 0)
        h = 0;
    // Red is max
    else if (cmax == r)
        h = ((g - b) / delta) % 6;
    // Green is max
    else if (cmax == g)
        h = (b - r) / delta + 2;
    // Blue is max
    else
        h = (r - g) / delta + 4;
    h = Math.round(h * 60);
    // Make negative hues positive behind 360°
    if (h < 0)
        h += 360;
    // Calculate lightness
    l = (cmax + cmin) / 2;
    // Calculate saturation
    s = delta == 0 ? 0 : delta / (1 - Math.abs(2 * l - 1));
    // Multiply l and s by 100
    s = +(s * 100).toFixed(1);
    l = +(l * 100).toFixed(1);
    return "hsl(" + h + "," + s + "%," + l + "%)";
}
Hue in HSL is like an angle in a circle. Relevant values for such an angle reside in the 0..360 interval; however, negative values might come out of the calculation. That's why those three formulas are different: they do the same thing in the end, they just handle the values outside the 0..360 interval differently. Or, to be precise, the 0..6 interval, which is eventually multiplied by 60 to give 0..360.
hue = (g - b) / c; // dutch wiki
does nothing with negative values and presumes the subsequent code can handle negative H values.
hue = ((g - b) / c) % 6; // eng wiki
uses the % operator to fit the values inside the 0..6 interval. (Note that in JavaScript the remainder of a negative number stays negative, so this variant still needs the final h += 360 fix, as in the code above.)
hue = (g - b) / c + (g < b ? 6 : 0); // SO answer
takes care of negative values by adding +6 to make them positive.
You see that these are mostly cosmetic differences. Either the second formula (with the negative-hue fix) or the third will work fine for you.
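A quick sketch to see where they diverge, using rgb(255,71,99) (red is max and g < b, so the segment is negative):

// Compare the three hue formulas for a red-max color with g < b.
function hues(r, g, b) {
    r /= 255; g /= 255; b /= 255;
    var c = Math.max(r, g, b) - Math.min(r, g, b); // chroma; red is max here
    return {
        dutch: ((g - b) / c) * 60,                   // ≈ -9.1, negative
        eng:   (((g - b) / c) % 6) * 60,             // ≈ -9.1, JS % keeps the sign
        so:    ((g - b) / c + (g < b ? 6 : 0)) * 60  // ≈ 350.9, wrapped into [0,360)
    };
}
console.log(hues(255, 71, 99));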
Continuing from my comment, the English version looks correct, but I'm not sure what's happening in the Dutch version as I don't understand the WIKI page.
Here is an ES6 version that I made from the English WIKI page, along with some sample data that appear to match the WIKI examples (give or take Javascript's numeric accuracy). Hopefully it may be of use while creating your own function.
// see: https://en.wikipedia.org/wiki/RGB_color_model
// see: https://en.wikipedia.org/wiki/HSL_and_HSV
// expects R, G, B, Cmax and chroma to be in number interval [0, 1]
// returns undefined if chroma is 0, or a number interval [0, 360] degrees
function hue(R, G, B, Cmax, chroma) {
    let H;
    if (chroma === 0) {
        return H;
    }
    if (Cmax === R) {
        H = ((G - B) / chroma) % 6;
    } else if (Cmax === G) {
        H = ((B - R) / chroma) + 2;
    } else if (Cmax === B) {
        H = ((R - G) / chroma) + 4;
    }
    H *= 60;
    return H < 0 ? H + 360 : H;
}

// returns the average of the supplied number arguments
function average(...theArgs) {
    return theArgs.length ? theArgs.reduce((p, c) => p + c, 0) / theArgs.length : 0;
}

// expects R, G, B, Cmin, Cmax and chroma to be in number interval [0, 1]
// type is by default 'bi-hexcone' equation
// set 'luma601' or 'luma709' for alternatives
// see: https://en.wikipedia.org/wiki/Luma_(video)
// returns a number interval [0, 1]
function lightness(R, G, B, Cmin, Cmax, type = 'bi-hexcone') {
    if (type === 'luma601') {
        return (0.299 * R) + (0.587 * G) + (0.114 * B);
    }
    if (type === 'luma709') {
        return (0.2126 * R) + (0.7152 * G) + (0.0722 * B); // Rec. 709 blue weight is 0.0722
    }
    return average(Cmin, Cmax);
}

// expects L and chroma to be in number interval [0, 1]
// returns a number interval [0, 1]
function saturation(L, chroma) {
    return chroma === 0 ? 0 : chroma / (1 - Math.abs(2 * L - 1));
}

// returns the value to a fixed number of digits
function toFixed(value, digits) {
    return Number.isFinite(value) && Number.isFinite(digits) ? value.toFixed(digits) : value;
}
// expects R, G, and B to be in number interval [0, 1]
// returns a Map of H, S and L in the appropriate interval and digits
function RGB2HSL(R, G, B, fixed = true) {
    const Cmin = Math.min(R, G, B);
    const Cmax = Math.max(R, G, B);
    const chroma = Cmax - Cmin;
    // default 'bi-hexcone' equation
    const L = lightness(R, G, B, Cmin, Cmax);
    // H in degrees interval [0, 360]
    // L and S in interval [0, 1]
    return new Map([
        ['H', toFixed(hue(R, G, B, Cmax, chroma), fixed && 1)],
        ['S', toFixed(saturation(L, chroma), fixed && 3)],
        ['L', toFixed(L, fixed && 3)]
    ]);
}

// expects value to be number in interval [0, 255]
// returns normalised value as a number interval [0, 1]
function colourRange(value) {
    return value / 255;
}

// expects R, G, and B to be in number interval [0, 255]
function RGBdec2HSL(R, G, B) {
    return RGB2HSL(colourRange(R), colourRange(G), colourRange(B));
}

// converts a hexidecimal string into a decimal number
function hex2dec(value) {
    return parseInt(value, 16);
}

// slices a string into an array of paired characters
function pairSlicer(value) {
    return value.match(/../g);
}

// prepend '0's to the start of a string and make specific length
function prePad(value, count) {
    return ('0'.repeat(count) + value).slice(-count);
}

// format hex pair string from value
function hexPair(value) {
    return hex2dec(prePad(value, 2));
}

// expects R, G, and B to be hex string in interval ['00', 'FF']
// without a leading '#' character
function RGBhex2HSL(R, G, B) {
    return RGBdec2HSL(hexPair(R), hexPair(G), hexPair(B));
}

// expects RGB to be a hex string in interval ['000000', 'FFFFFF']
// with or without a leading '#' character
function RGBstr2HSL(RGB) {
    const hex = prePad(RGB.charAt(0) === '#' ? RGB.slice(1) : RGB, 6);
    return RGBhex2HSL(...pairSlicer(hex).slice(0, 3));
}

// expects value to be a Map object
function logIt(value) {
    console.log(value);
    document.getElementById('out').textContent += JSON.stringify([...value]) + '\n';
}
logIt(RGBstr2HSL('000000'));
logIt(RGBstr2HSL('#808080'));
logIt(RGB2HSL(0, 0, 0));
logIt(RGB2HSL(1, 1, 1));
logIt(RGBdec2HSL(0, 0, 0));
logIt(RGBdec2HSL(255, 255, 254));
logIt(RGBhex2HSL('BF', 'BF', '00'));
logIt(RGBstr2HSL('008000'));
logIt(RGBstr2HSL('80FFFF'));
logIt(RGBstr2HSL('8080FF'));
logIt(RGBstr2HSL('BF40BF'));
logIt(RGBstr2HSL('A0A424'));
logIt(RGBstr2HSL('411BEA'));
logIt(RGBstr2HSL('1EAC41'));
logIt(RGBstr2HSL('F0C80E'));
logIt(RGBstr2HSL('B430E5'));
logIt(RGBstr2HSL('ED7651'));
logIt(RGBstr2HSL('FEF888'));
logIt(RGBstr2HSL('19CB97'));
logIt(RGBstr2HSL('362698'));
logIt(RGBstr2HSL('7E7EB8'));
<pre id="out"></pre>

Please explain this color blending mode formula so I can replicate it in PHP/ImageMagick

I've been trying to use ImageMagick to replicate Photoshop's Colour Blend Mode. I found the following formulas in an online guide, but I don't know what they mean. Do I just need to swap certain channels?
A while ago I reverse engineered Photoshop blending modes.
Have a look here:
http://www.kineticsystem.org/?q=node/13
And here below is the code I use to convert between HSY (Hue, Saturation, Luminosity) and RGB (Red, Green, Blue). Photoshop uses something called hexacones to calculate the saturation.
Giovanni
/**
 * This is the formula used by Photoshop to convert a color from
 * RGB (Red, Green, Blue) to HSY (Hue, Saturation, Luminosity).
 * The hue is calculated using the hexacone approximation of the saturation
 * cone.
 * @param rgb The input color RGB normalized components.
 * @param hsy The output color HSY normalized components.
 */
public static void rgbToHsy(double rgb[], double hsy[]) {
    double r = Math.min(Math.max(rgb[0], 0d), 1d);
    double g = Math.min(Math.max(rgb[1], 0d), 1d);
    double b = Math.min(Math.max(rgb[2], 0d), 1d);
    double h;
    double s;
    double y;
    // For saturation equal to 0 any value of hue is valid.
    // In this case we choose 0 as a default value.
    if (r == g && g == b) {            // Limit case.
        s = 0d;
        h = 0d;
    } else if ((r >= g) && (g >= b)) { // Sector 0: 0° - 60°
        s = r - b;
        h = 60d * (g - b) / s;
    } else if ((g > r) && (r >= b)) {  // Sector 1: 60° - 120°
        s = g - b;
        h = 60d * (g - r) / s + 60d;
    } else if ((g >= b) && (b > r)) {  // Sector 2: 120° - 180°
        s = g - r;
        h = 60d * (b - r) / s + 120d;
    } else if ((b > g) && (g > r)) {   // Sector 3: 180° - 240°
        s = b - r;
        h = 60d * (b - g) / s + 180d;
    } else if ((b > r) && (r >= g)) {  // Sector 4: 240° - 300°
        s = b - g;
        h = 60d * (r - g) / s + 240d;
    } else {                           // Sector 5: 300° - 360°
        s = r - g;
        h = 60d * (r - b) / s + 300d;
    }
    // R, G and B are the constant luma weights (defined elsewhere in the original class).
    y = R * r + G * g + B * b;
    // Approximation errors can cause values to exceed bounds.
    hsy[0] = h % 360;
    hsy[1] = Math.min(Math.max(s, 0d), 1d);
    hsy[2] = Math.min(Math.max(y, 0d), 1d);
}
/**
 * This is the formula used by Photoshop to convert a color from
 * HSY (Hue, Saturation, Luminosity) to RGB (Red, Green, Blue).
 * The hue is calculated using the hexacone approximation of the saturation
 * cone.
 * @param hsy The input color HSY normalized components.
 * @param rgb The output color RGB normalized components.
 */
public static void hsyToRgb(double hsy[], double rgb[]) {
    double h = hsy[0] % 360;
    double s = Math.min(Math.max(hsy[1], 0d), 1d);
    double y = Math.min(Math.max(hsy[2], 0d), 1d);
    double r;
    double g;
    double b;
    double k; // Intermediate variable.
    if (h >= 0d && h < 60d) {           // Sector 0: 0° - 60°
        k = s * h / 60d;
        b = y - R * s - G * k;
        r = b + s;
        g = b + k;
    } else if (h >= 60d && h < 120d) {  // Sector 1: 60° - 120°
        k = s * (h - 60d) / 60d;
        g = y + B * s + R * k;
        b = g - s;
        r = g - k;
    } else if (h >= 120d && h < 180d) { // Sector 2: 120° - 180°
        k = s * (h - 120d) / 60d;
        r = y - G * s - B * k;
        g = r + s;
        b = r + k;
    } else if (h >= 180d && h < 240d) { // Sector 3: 180° - 240°
        k = s * (h - 180d) / 60d;
        b = y + R * s + G * k;
        r = b - s;
        g = b - k;
    } else if (h >= 240d && h < 300d) { // Sector 4: 240° - 300°
        k = s * (h - 240d) / 60d;
        g = y - B * s - R * k;
        b = g + s;
        r = g + k;
    } else {                            // Sector 5: 300° - 360°
        k = s * (h - 300d) / 60d;
        r = y + G * s + B * k;
        g = r - s;
        b = r - k;
    }
    // Approximation errors can cause values to exceed bounds.
    rgb[0] = Math.min(Math.max(r, 0d), 1d);
    rgb[1] = Math.min(Math.max(g, 0d), 1d);
    rgb[2] = Math.min(Math.max(b, 0d), 1d);
}
Wikipedia has a good article on blend modes
http://en.wikipedia.org/wiki/Blend_modes
They give formulas for Multiply, Screen and Overlay modes.
Multiply
Formula: Result Color = (Top Color) * (Bottom Color) / 255
Screen
Formula: Result Color = 255 - [((255 - Top Color) * (255 - Bottom Color)) / 255]
Overlay
Formula: Result Color = if (Bottom Color < 128)
    then (2 * Top Color * Bottom Color / 255)
    else (255 - 2 * (255 - Top Color) * (255 - Bottom Color) / 255)
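In code, those three per-channel formulas are one-liners; a sketch in JavaScript (channel values in [0, 255]):

// Per-channel blend formulas from the Wikipedia article.
const multiply = (top, bottom) => (top * bottom) / 255;
const screen   = (top, bottom) => 255 - ((255 - top) * (255 - bottom)) / 255;
const overlay  = (top, bottom) =>
    bottom < 128
        ? (2 * top * bottom) / 255
        : 255 - (2 * (255 - top) * (255 - bottom)) / 255;

console.log(multiply(200, 100)); // ≈ 78.4
console.log(screen(200, 100));   // ≈ 221.6
console.log(overlay(200, 100));  // ≈ 156.9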
A is the Foreground pixel, B is the Background pixel, C is the new pixel. H is the Hue value for each pixel, S is the Saturation value, L is the Luminance value, and Y is the Brightness value. (Not sure what the difference is between luminance and brightness, though.)
Anyway, in the first example the Hue (H) and Saturation (S) values of the new pixel (C) are copied from the Foreground pixel (A), while the Brightness (Y) value of the new pixel is taken from the Luminance (L) value of the Background (B) pixel.
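So, for that Color blend, a minimal sketch in JavaScript (it assumes rgbToHsy/hsyToRgb ports of the Java functions above, keeping the same in/out array signature):

// Color blend: take H and S from the foreground, Y from the background.
// rgbToHsy and hsyToRgb are assumed ports of the Java functions above.
function colorBlend(fgRgb, bgRgb) {
    const fg = [0, 0, 0], bg = [0, 0, 0];
    rgbToHsy(fgRgb, fg);                  // fg = [H, S, Y] of the foreground (A)
    rgbToHsy(bgRgb, bg);                  // bg = [H, S, Y] of the background (B)
    const out = [0, 0, 0];
    hsyToRgb([fg[0], fg[1], bg[2]], out); // H, S from A; Y from B
    return out;                           // normalized RGB of the new pixel (C)
}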
These color blending formulas are quite tricky if you also need to incorporate the alpha channel. I was not able to reproduce the blending of Photoshop, but Gimp works like this:
Color mix_hsv(
    ColorMixMode::Enum mode, // blending mode
    Color cd,                // destination color (bottom pixel)
    Color cs)                // source color (top pixel)
{
    // Modify the source color
    float dh, ds, dv; // destination hsv
    float sh, ss, sv; // source hsv
    cd.GetHsv(dh, ds, dv);
    cs.GetHsv(sh, ss, sv);
    switch (mode) {
        case HUE:        cs.InitFromHsv(sh, ds, dv); break;
        case SATURATION: cs.InitFromHsv(dh, ss, dv); break;
        case COLOR:      cs.InitFromHsv(sh, ss, dv); break;
        case LUMINOSITY: cs.InitFromHsv(dh, ds, sv); break;
    }
    cs.A = std::min(cd.A, cs.A);

    // Blend the modified source color onto the destination color
    unsigned char cd_A_orig = cd.A;
    cd = mix(NORMAL, cd, cs); // normal blending
    cd.A = cd_A_orig;
    return cd;
}
If you use premultiplied alpha, don't forget to correctly handle it in the above code. I was not able to find the code for blending in Gimp's source code, but the resulting images are very similar.
Photoshop's color blending is clearly different, so if anyone finds a way to implement it, please let us all know :o)
Miso
