Raytracer renders objects too large

I am following this course to learn computer graphics and write my first ray tracer.
I already have some visible results, but they seem to be too large.
The overall algorithm the course outlines is this:
Image Raytrace (Camera cam, Scene scene, int width, int height)
{
    Image image = new Image(width, height);
    for (int i = 0; i < height; i++)
        for (int j = 0; j < width; j++) {
            Ray ray = RayThruPixel(cam, i, j);
            Intersection hit = Intersect(ray, scene);
            image[i][j] = FindColor(hit);
        }
    return image;
}
I perform all calculations in camera space (where the camera is at (0, 0, 0)). Thus RayThruPixel returns a ray in camera coordinates, Intersect returns an intersection point also in camera coordinates, and the image pixel array is filled directly from the intersection results.
The image below is the rendering of a sphere at world coordinates (0, 0, -40000) with radius 0.15, and the camera at world coordinates (0, 0, 2) looking towards (0, 0, 0). I would normally expect the sphere to be a lot smaller given its small radius and far-away Z coordinate.
The same thing happens when rendering triangles. In the image below I have 2 triangles that form a square, but it is far too zoomed in. The triangles have coordinates between -1 and 1, and the camera is looking from world coordinates (0, 0, 4).
This is what the square is expected to look like:
Here is the code snippet I use to determine the collision with the sphere. I'm not sure if I should divide the radius by the z coordinate here - without it, the circle is even larger:
Sphere* sphere = dynamic_cast<Sphere*>(object);
float t;
vec3 p0 = ray->origin;
vec3 p1 = ray->direction;
float a = glm::dot(p1, p1);
vec3 center2 = vec3(modelview * object->transform * glm::vec4(sphere->center, 1.0f)); // camera coords
float b = 2 * glm::dot(p1, (p0 - center2));
float radius = sphere->radius / center2.z;
float c = glm::dot((p0 - center2), (p0 - center2)) - radius * radius;
float D = b * b - 4 * a * c;
if (D > 0) {
    // two roots
    float sqrtD = glm::sqrt(D);
    float root1 = (-b + sqrtD) / (2 * a);
    float root2 = (-b - sqrtD) / (2 * a);
    if (root1 > 0 && root2 > 0) {
        t = glm::min(root1, root2);
        found = true;
    }
    else if (root2 < 0 && root1 >= 0) {
        t = root1;
        found = true;
    }
    else {
        // should not happen, implies that both roots are negative
    }
}
else if (D == 0) {
    // one root
    float root = -b / (2 * a);
    t = root;
    found = true;
}
else if (D < 0) {
    // no roots
    // continue;
}
if (found) {
    hitVector = p0 + p1 * t;
    hitNormal = glm::normalize(result->hitVector - center2);
}
Here I generate the ray going through the relevant pixel:
Ray* RayThruPixel(Camera* camera, int x, int y) {
    const vec3 a = eye - center;
    const vec3 b = up;
    const vec3 w = glm::normalize(a);
    const vec3 u = glm::normalize(glm::cross(b, w));
    const vec3 v = glm::cross(w, u);
    const float aspect = ((float)width) / height;
    float fovyrad = glm::radians(camera->fovy);
    const float fovx = 2 * atan(tan(fovyrad * 0.5) * aspect);
    const float alpha = tan(fovx * 0.5) * (x - (width * 0.5)) / (width * 0.5);
    const float beta = tan(fovyrad * 0.5) * ((height * 0.5) - y) / (height * 0.5);
    return new Ray(/* origin= */ vec3(modelview * vec4(eye, 1.0f)),
                   /* direction= */ glm::normalize(vec3(modelview * glm::normalize(vec4(alpha * u + beta * v - w, 1.0f)))));
}
And intersection with a triangle:
Triangle* triangle = dynamic_cast<Triangle*>(object);
// vertices in camera coords
vec3 vertex1 = vec3(modelview * object->transform * vec4(*vertices[triangle->index1], 1.0f));
vec3 vertex2 = vec3(modelview * object->transform * vec4(*vertices[triangle->index2], 1.0f));
vec3 vertex3 = vec3(modelview * object->transform * vec4(*vertices[triangle->index3], 1.0f));
vec3 N = glm::normalize(glm::cross(vertex2 - vertex1, vertex3 - vertex1));
float D = -glm::dot(N, vertex1);
float m = glm::dot(N, ray->direction);
if (m == 0) {
    // no intersection because ray parallel to plane
}
else {
    float t = -(glm::dot(N, ray->origin) + D) / m;
    if (t < 0) {
        // no intersection because ray goes away from triangle plane
    }
    vec3 Phit = ray->origin + t * ray->direction;
    vec3 edge1 = vertex2 - vertex1;
    vec3 edge2 = vertex3 - vertex2;
    vec3 edge3 = vertex1 - vertex3;
    vec3 c1 = Phit - vertex1;
    vec3 c2 = Phit - vertex2;
    vec3 c3 = Phit - vertex3;
    if (glm::dot(N, glm::cross(edge1, c1)) > 0
        && glm::dot(N, glm::cross(edge2, c2)) > 0
        && glm::dot(N, glm::cross(edge3, c3)) > 0) {
        found = true;
        hitVector = Phit;
        hitNormal = N;
    }
}
Given that the output image is a circle, and that the same problem happens with triangles as well, my guess is the problem isn't from the intersection logic itself, but rather something wrong with the coordinate spaces or transformations. Could calculating everything in camera space be causing this?

I eventually figured it out by myself. I first noticed the problem was here:
return new Ray(/* origin= */ vec3(modelview * vec4(eye, 1.0f)),
/* direction= */ glm::normalize(vec3( modelview *
glm::normalize(vec4(alpha * u + beta * v - w, 1.0f)))));
When I removed the direction vector transformation (leaving it as just glm::normalize(alpha * u + beta * v - w)) I noticed the problem disappeared - the square was rendered correctly. I was prepared to accept that as the answer, although I wasn't completely sure why.
Then I noticed that after applying transformations to the object, the camera was no longer positioned correctly relative to it, which makes sense - the rays were not being pointed in the right direction.
I realized that my entire approach of doing the calculations in camera space was wrong. If I still wanted to use this approach, the rays would have to be transformed, but in a different way involving some more complex math I wasn't ready to deal with.
I instead changed my approach to do transformations and intersections in world space and only use camera space at the lighting stage. We have to use camera space at some point, since we want to actually look in the direction of the object we are rendering.
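To illustrate the world-space approach, here is a minimal sketch of how the ray generation might look. It reuses the eye/center/up, width/height and fovy names from the snippets above; the function name and the +0.5 pixel-center offset are my additions, so treat it as an illustration of the idea rather than my exact final code:
// Sketch: build the camera basis directly in world space and shoot the ray
// in world coordinates, so no modelview transform is applied to the direction.
Ray* RayThruPixelWorld(Camera* camera, int x, int y) {
    const vec3 w = glm::normalize(eye - center);        // points away from the view direction
    const vec3 u = glm::normalize(glm::cross(up, w));   // camera right
    const vec3 v = glm::cross(w, u);                    // camera up

    const float aspect  = (float)width / height;
    const float fovyrad = glm::radians(camera->fovy);
    const float fovxrad = 2.0f * atan(tan(fovyrad * 0.5f) * aspect);

    // Offsets of the pixel center from the image center, scaled by the FOV.
    const float alpha = tan(fovxrad * 0.5f) * ((x + 0.5f) - width * 0.5f) / (width * 0.5f);
    const float beta  = tan(fovyrad * 0.5f) * (height * 0.5f - (y + 0.5f)) / (height * 0.5f);

    // Origin is the eye itself; the direction is already a world-space vector.
    return new Ray(/* origin= */ eye,
                   /* direction= */ glm::normalize(alpha * u + beta * v - w));
}
With this, object vertices and sphere centers only need object->transform (no modelview) before intersection, and the hit point can be transformed into camera space later, at the lighting stage.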

Related

Upscaling using color interpolation for lighting?

I'm writing a lighting system for 2D games using a rather common method of 2D radiosity. The idea is to generate a JFA voronoi of the game scene (black, alpha = 1.0 for occluders and color, alpha = 1.0 for emitters) and generate an SDF from the JFA. Next you raymarch every pixel on screen for N rays with M max steps on the SDF with random angle offsets for each pixel. You then sample the emitter/occluder surface at the end point of each ray, step back into empty space and sample again for light emitted in the nearest empty space. This gives you a nice result as seen below:
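For clarity, the per-pixel gather described above boils down to something like the following CPU-style sketch. sampleSDF, sampleScene and rand01 are hypothetical helpers standing in for the texture reads and the RNG; this is not the actual shader:
#include <cmath>
#include <glm/glm.hpp>
using glm::vec2; using glm::vec3;

// Hypothetical helpers standing in for texture fetches and the RNG.
float sampleSDF(vec2 p);    // distance to the nearest surface at p
vec3  sampleScene(vec2 p);  // emitter color at p (occluders are black)
float rand01();             // uniform random value in [0, 1)

// Gather incoming light for one pixel: N rays, each sphere-traced for at most
// M steps against the SDF; at the surface, sample the emitter/occluder color.
vec3 GatherGI(vec2 pixel, int N, int M) {
    const float PI = 3.14159265f;
    vec3 total(0.0f);
    for (int i = 0; i < N; ++i) {
        float angle = 2.0f * PI * (i + rand01()) / N;   // random angle offset per ray
        vec2 dir(std::cos(angle), std::sin(angle));
        vec2 p = pixel;
        for (int step = 0; step < M; ++step) {
            float d = sampleSDF(p);
            if (d < 0.001f) {                 // reached an emitter or occluder
                total += sampleScene(p);      // (the real shader also steps back into
                break;                        //  empty space and samples the GI there)
            }
            p += dir * d;                     // sphere-tracing step
        }
    }
    return total / float(N);
}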
That isn't the problem - it works great. The problem is efficiency. The idea behind fixing this is to render the GI at 1/N sample size (width/N, height/N) and then upscale the GI using interpolation, as I've done below:
This is where the problem lies. I've accomplished the upscaling using weighted color interpolation, but it produces these nasty artifacts near occluders:
Here's the full shader:
The uniforms passed in are the downsampled GI texture (in_GIField), the scene texture containing emitters/occluders only (gm_BaseTexture), the signed distance field (in_SDField), the resolution (in_Screen) and the downsample ratio (in_Sample).
/*
UPSCALING SHADER:
Find the nearest 4 bounding samples to the current pixel (xyDelta & xyShift).
Calculate all of the samples' weights based on whether they're marchable or source pixels.
Finally, perform a composite weighted interpolation for the current pixel to the nearest 4 samples.
*/
varying vec2 in_Coord;
uniform float in_Sample;
uniform vec2 in_Screen;
uniform sampler2D in_GIField;
uniform sampler2D in_SDField;

#define TPI 9.4247779607693797153879301498385
#define PI 3.1415926535897932384626433832795
#define TAU 6.2831853071795864769252867665590
#define EPSILON 0.001 // floating point precision check
#define dot(f) dot(f,f) // shorthand dot of a single float

float ATAN2(float yy, float xx) { return mod(atan(yy, xx), TAU); }
float DIRECT(vec2 v1, vec2 v2) { vec2 v3 = v2 - v1; return ATAN2(-v3.y, v3.x); }
float DIFFERENCE(float src, float dst) { return mod(dst - src + TPI, TAU) - PI; }
float V2_F16(vec2 v) { return v.x + (v.y / 255.0); }
float VMAX(vec3 v) { return max(v.r, max(v.g, v.b)); }
vec2 SAMPLEXY(vec2 xycoord) { return (floor(xycoord / in_Sample) * in_Sample) + (in_Sample * 0.5); }
vec3 TONEMAP(vec3 color, float dist) { return color * (1.0 / (1.0 + dot(dist / min(in_Screen.x, in_Screen.y)))); }

float TESTMARCH(vec2 pix, vec2 end) {
    float aspect = in_Screen.x / in_Screen.y,
          dst = distance(pix, end);
    vec2 dir = normalize((end * in_Screen) - (pix * in_Screen)) / in_Screen;
    for (float i = 0.0; i < in_Sample; i += 1.0) {
        vec2 test = vec2(pix.x * aspect, pix.y) + (dir * (i / in_Screen));
        test.x /= aspect;
        vec4 sourceCol = texture2D(gm_BaseTexture, test);
        float source = max(sourceCol.r, max(sourceCol.g, sourceCol.b));
        if (source < EPSILON && sourceCol.a > 0.0) return 0.0;
    }
    return 1.0;
}

vec3 WCOMPOSITE(vec3 colors[4], float weights[4], vec2 uv) {
    // (uv * A * B) + (B * (1.0 - A)) //0, 2, 1, 3
    float weightA = (uv.y * weights[0] * weights[2]) + (weights[2] * (1.0 - weights[0])),
          weightB = (uv.y * weights[1] * weights[3]) + (weights[3] * (1.0 - weights[1]));
    vec3 colorA = mix(colors[0], colors[2], weightA),
         colorB = mix(colors[1], colors[3], weightB);
    return mix(colorA, colorB, uv.x);
}

void main() {
    vec2 xyCoord = in_Coord * in_Screen;
    vec2 xyLight = SAMPLEXY(xyCoord);
    vec2 xyDelta = sign(sign(xyCoord - xyLight) - 1.0);
    vec2 xyShift[4];
    xyShift[0] = vec2(0., 0.) + xyDelta;
    xyShift[1] = vec2(1., 0.) + xyDelta;
    xyShift[2] = vec2(0., 1.) + xyDelta;
    xyShift[3] = vec2(1., 1.) + xyDelta;
    vec2 xyField[4]; vec3 xyColor[4]; float notSource[4]; float xyWghts[4];
    for (int i = 0; i < 4; i++) {
        xyField[i] = (xyLight + (xyShift[i] * in_Sample)) * (1.0 / in_Screen);
        xyColor[i] = texture2D(in_GIField, xyField[i]).rgb;
        notSource[i] = 1.0 - sign(texture2D(gm_BaseTexture, xyField[i]).a);
        xyWghts[i] = TESTMARCH(in_Coord, xyField[i]) * sign(VMAX(xyColor[i])) * notSource[i];
    }
    vec2 uvCoord = mod(xyCoord - xyLight, in_Sample) * (1.0 / in_Sample);
    vec3 xyFinal = WCOMPOSITE(xyColor, xyWghts, uvCoord);
    vec4 xySource = texture2D(gm_BaseTexture, in_Coord);
    float isSource = sign(xySource.a);
    gl_FragColor = vec4((isSource * xySource.rgb) + ((1.0 - isSource) * xyFinal), 1.0);
}
EDIT: This DOES produce the intended result in empty space, but ends up with nasty artifacting near emitters and occluders. I tried to solve this in the for-loop in the main function by weighting out the emitter/occluder (source pixels in the scene texture) colors, but this isn't working.
See the shader code attached (Shadertoy). I noticed that the weighting function will actually produce some colors with a weight of 0 (as expected, given how it was originally written). I currently don't have a solution for how to remove colors from the interpolation process entirely.
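For reference, the blend in WCOMPOSITE is essentially a bilinear interpolation of the four surrounding GI samples, with each corner gated by a 0/1 validity weight. A plain C++ transcription of that idea (the names are mine; the corner order follows xyShift: 0 = (0,0), 1 = (1,0), 2 = (0,1), 3 = (1,1)):
#include <glm/glm.hpp>
using glm::vec2; using glm::vec3;
using glm::mix;

// Bilinear blend of 4 corner samples, where each corner also carries a
// validity weight that biases the two column mixes, as in WCOMPOSITE.
vec3 WeightedBilinear(const vec3 color[4], const float weight[4], vec2 uv) {
    float wA = uv.y * weight[0] * weight[2] + weight[2] * (1.0f - weight[0]);
    float wB = uv.y * weight[1] * weight[3] + weight[3] * (1.0f - weight[1]);
    vec3 colA = mix(color[0], color[2], wA);   // left column
    vec3 colB = mix(color[1], color[3], wB);   // right column
    return mix(colA, colB, uv.x);              // horizontal mix
}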
Full Source Code
Full Color Shader Code

Path tracing cosine hemisphere sampling and emissive objects

I'm building my own path tracer by self-learning from online resources. But I find that my implementation has an issue with emissive objects in the scene, especially in a dark environment (no skybox).
For example, in the following environment:
The box in the middle is the only light source in the environment, with an emission value of (3.0, 3.0, 3.0); all other objects have an emission value of (0.0, 0.0, 0.0). I was expecting the light to scatter smoothly on the walls, but it looks like it is biased towards one direction.
My cosine sampling function is (modified from lwjgl3-demos):
float3 SampleHemisphere3(float3 norm, float alpha = 0.0)
{
    float3 randomVec = rand3();
    float r = saturate(pow(randomVec.x, 1.0 / (1.0 + alpha)));
    float angle = randomVec.y * PI_TWO;
    float sr = saturate(sqrt(1.0 - r * r));
    float3 ph = float3(sr * cos(angle), sr * sin(angle), r);
    float3 tangent = normalize(randomVec * 2.0 - 1.0);
    float3 bitangent = cross(norm, tangent);
    tangent = cross(norm, bitangent);
    return mul(ph, float3x3(tangent, bitangent, norm));
}
This is how I compute the shading and next ray info:
float3 Shade(inout Ray ray, HitInfo hit)
{
    ray.origin = hit.pos + hit.norm * 1e-5;
    ray.dir = normalize(SampleHemisphere3(hit.norm, 0.0));
    ray.energy *= 2.0 * hit.colors.albedo * saturate(dot(hit.norm, ray.dir));
    return hit.colors.emission;
}
And the recursion happens here:
// generate ray from camera
Ray ray = CreateCameraRay(camera, PixelCenter);
// trace ray
float3 color = 0.0;
for (int i = 0; i < _TraceDepth; i++)
{
    // get nearest ray hit
    HitInfo hit = Trace(ray);
    // accumulate color
    color += ray.energy * Shade(ray, hit);
    // if ray has no energy, stop tracing
    if (!any(ray.energy))
        break;
}
// write to frame target
_FrameTarget[id.xy] = float4(color, 1.0);
I learned the last two functions from GPU Path Tracing in Unity.
Here is another example of the similar error:
I feel that the problem is caused by the cosine weighted hemisphere sampling, but I have no idea how to fix it.
What should I do to get distributed light effect from emissive objects on the diffuse surfaces? Do I have to specify light sources and shapes and sample from them directly instead of emissive objects?
Edit:
It is indeed the cosine weighted sampling that is causing the problem.
Instead of:
float3 tangent = normalize(randomVec * 2.0 - 1.0);
I should have another vector of independent random values:
float3 tangent = normalize(rand3() * 2.0 - 1.0);
Now it shows:
Still not perfect, because there is clearly a cross shape (probably caused by the sparsity of floating-point values).
How can I further improve this?
Edit 2:
After some more debugging and experimenting, I figured out the "solution", but I don't understand the reason behind it.
The random value generator is from this Shadertoy project, because I see that GLSL-PathTracer is also using it.
Here is part of it:
void rng_initialize(float2 p, int frame)
{
    //white noise seed
    RandomSeed = uint4(p, frame, p.x + p.y);
}

void pcg4d(inout uint4 v)
{
    v = v * 1664525u + 1013904223u;
    v.x += v.y * v.w;
    v.y += v.z * v.x;
    v.z += v.x * v.y;
    v.w += v.y * v.z;
    v = v ^ (v >> 16u);
    v.x += v.y * v.w;
    v.y += v.z * v.x;
    v.z += v.x * v.y;
    v.w += v.y * v.z;
}

float3 rand3()
{
    pcg4d(RandomSeed);
    return float3(RandomSeed.xyz) / float(0xffffffffu);
}

float4 rand4()
{
    pcg4d(RandomSeed);
    return float4(RandomSeed) / float(0xffffffffu);
}
At initialization, I pass float2(id.xy) from SV_DispatchThreadID and current frame counter to rng_initialize.
And here is my new cosine weighted hemisphere sampling function:
float3 SampleHemisphere3(float3 norm, float alpha = 0.0)
{
    float4 rand = rand4();
    float r = pow(rand.w, 1.0 / (1.0 + alpha));
    float angle = rand.y * PI_TWO;
    float sr = sqrt(1.0 - r * r);
    float3 ph = float3(sr * cos(angle), sr * sin(angle), r);
    float3 tangent = normalize(rand.zyx + rand3() - 1.0);
    float3 bitangent = cross(norm, tangent);
    tangent = cross(norm, bitangent);
    return mul(ph, float3x3(tangent, bitangent, norm));
}
And the results are: (which looks much better)
My discoveries from the experiments are:
r in the sampling function has to depend on the w component of the random values.
angle can use any of x, y, z.
tangent has to depend on both the current xyz values and a new vector of random xyz values. The order doesn't matter, so I use zyx here. Missing either the current xyz or the new xyz results in a cross shape on the wall.
I'm not sure if this is a correct solution, but as far as my eyes can tell, it solves the problem.
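For comparison, the textbook cosine-weighted hemisphere sampler draws two independent uniform numbers and builds an orthonormal basis around the normal. A C++-style sketch (rand01() is a hypothetical uniform generator standing in for rand3()/rand4()):
#include <algorithm>
#include <cmath>
#include <glm/glm.hpp>
using glm::vec3;

// Hypothetical uniform generator in [0, 1).
float rand01();

// Build any orthonormal basis (t, b) around n.
static void OrthonormalBasis(const vec3& n, vec3& t, vec3& b) {
    // Pick a helper axis that is not parallel to n.
    vec3 a = (std::fabs(n.x) > 0.9f) ? vec3(0, 1, 0) : vec3(1, 0, 0);
    t = glm::normalize(glm::cross(a, n));
    b = glm::cross(n, t);
}

// Cosine-weighted sample about the normal: pdf = cos(theta) / pi.
vec3 SampleCosineHemisphere(const vec3& n) {
    float u1 = rand01();
    float u2 = rand01();
    float r   = std::sqrt(u1);            // radius on the unit disk
    float phi = 2.0f * 3.14159265f * u2;  // azimuth
    float x = r * std::cos(phi);
    float y = r * std::sin(phi);
    float z = std::sqrt(std::max(0.0f, 1.0f - u1)); // cos(theta)
    vec3 t, b;
    OrthonormalBasis(n, t, b);
    return glm::normalize(x * t + y * b + z * n);
}
If I understand the math right, the pdf of this distribution is cos(theta)/pi, so for a Lambertian BRDF (albedo/pi) the throughput factor reduces to just the albedo, while the 2.0 * albedo * cos factor in Shade corresponds to uniform hemisphere sampling (which is effectively what alpha = 0 gives).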

What is the best algorithm for an antialiased line and a non-antialiased line

I'm new to shaders and I have been messing about with the website Shadertoy. I'm trying to understand graphics (and the graphics pipeline): drawing lines, interpolation, rasterization, etc. I've written two line functions that return a color if the pixel being processed is on the line. Here is the Shadertoy code, using a fragment shader:
struct Vertex {
    vec2 p;
    vec4 c;
};

vec4 overlay(vec4 c1, vec4 c2) {
    return vec4((1.0 - c2.w) * c1.xyz + c2.w * c2.xyz, 1.0);
}

vec4 drawLineA(Vertex v1, Vertex v2, vec2 pos) {
    vec2 a = v1.p;
    vec2 b = v2.p;
    vec2 r = floor(pos);
    vec2 diff = b - a;
    if (abs(diff.y) < abs(diff.x)) {
        if (diff.x < 0.0) {
            Vertex temp1 = v1;
            Vertex temp2 = v2;
            v1 = temp2;
            v2 = temp1;
            a = v1.p;
            b = v2.p;
            diff = b - a;
        }
        float m = diff.y / diff.x;
        float q = r.x - a.x;
        if (floor(m * q + a.y) == r.y && a.x <= r.x && r.x <= b.x) {
            float h = q / diff.x;
            return vec4((1.0 - h) * v1.c + h * v2.c);
        }
    } else {
        if (diff.y < 0.0) {
            Vertex temp1 = v1;
            Vertex temp2 = v2;
            v1 = temp2;
            v2 = temp1;
            a = v1.p;
            b = v2.p;
            diff = b - a;
        }
        float m = diff.x / diff.y;
        float q = r.y - a.y;
        if (floor(m * q + a.x) == r.x && a.y <= r.y && r.y <= b.y) {
            float h = q / diff.y;
            return vec4((1.0 - h) * v1.c + h * v2.c);
        }
    }
    return vec4(0, 0, 0, 0);
}

vec4 drawLineB(Vertex v1, Vertex v2, vec2 pos) {
    vec2 a = v1.p;
    vec2 b = v2.p;
    vec2 l = b - a;
    vec2 r = pos - a;
    float h = dot(l, r) / dot(l, l);
    vec2 eC = a + h * l;
    if (floor(pos) == floor(eC) && 0.0 <= h && h <= 1.0) {
        return vec4((1.0 - h) * v1.c + h * v2.c);
    }
    return vec4(0, 0, 0, 0);
}

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float t = iTime;
    float r = 300.0;
    Vertex v1 = Vertex(vec2(400, 225), vec4(1, 0, 0, 1));
    Vertex v2 = Vertex(vec2(400.0 + r * cos(t), 225.0 + r * sin(t)), vec4(0, 1, 0, 1));
    vec4 col = vec4(0, 0, 0, 1);
    col = overlay(col, drawLineA(v1, v2, fragCoord));
    col = overlay(col, drawLineB(v1, v2, fragCoord));
    // Output to screen
    fragColor = col;
}
However, the line functions I have written are neither fast nor antialiased. What is the fastest algorithm for both antialiased and aliased lines, and how should I implement it? Thanks.
A fragment shader is really not the right approach for this. A lot of what is on Shadertoy is just a toy or code golf, showing solutions that work around the limitations of the platform but are terribly inefficient in real-world scenarios.
All graphics APIs provide dedicated interfaces for drawing line segments - just search for "API_NAME draw line", e.g. "webgl draw line". In cases where those do not suffice, triangle strips with either MSAA or custom in-shader AA are used.
If you're really just looking for an efficient algorithm, the Wikipedia page on line drawing algorithms has you covered.
As the other answer says, shaders are not very well suited to this.
Line rasterization is done behind the scenes with hardware interpolators on the graphics card these days. The shader is invoked for each pixel of the rendered primitive, which in your case means it is called for every pixel of the screen, and all of that is invoked for each line you render - massively slower than the native way.
If you truly want to learn rasterization, do it on the CPU side instead. The best algorithm for lines depends on the hardware architecture you are targeting.
For sequential processing it is:
DDA - this one has subpixel precision (a minimal sketch follows below).
In the past Bresenham was faster, but IIRC that has not been true since the 386...
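A minimal C++ sketch of DDA (putPixel is a hypothetical framebuffer write, not from the question's code):
#include <algorithm>
#include <cmath>
#include <cstdint>

// Hypothetical framebuffer write.
void putPixel(int x, int y, uint32_t color);

// Simple floating-point DDA: step one pixel along the major axis,
// accumulate the fractional step along the minor axis.
void DrawLineDDA(float x0, float y0, float x1, float y1, uint32_t color) {
    float dx = x1 - x0, dy = y1 - y0;
    int steps = (int)std::ceil(std::max(std::fabs(dx), std::fabs(dy)));
    if (steps == 0) { putPixel((int)std::lround(x0), (int)std::lround(y0), color); return; }
    float xInc = dx / steps, yInc = dy / steps;
    float x = x0, y = y0;
    for (int i = 0; i <= steps; ++i) {
        putPixel((int)std::lround(x), (int)std::lround(y), color);
        x += xInc;
        y += yInc;
    }
}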
For parallel processing you just compute the distance of each pixel to the line (more or less like you do now).
So if you insist on using shaders for this, you can speed things up by using a geometry shader and processing only the fragments (pixels) that are near your line. See:
cubic curves rendering in GLSL
So you simply create an OOBB around your line and render it by emitting 2 triangles per line; then in the fragment shader you compute the distance to the line and set the color accordingly...
For antialiasing you simply change the color of pixels within the last pixel of distance from the edge. So if your line has half-width w and the distance of the fragment to the line is d, then:
if (d>w) discard; // fragment too far
d=(w-d)/pixel_size; // distance from edge in pixels
frag_color = vec4(r,g,b,min(1.0,d)); // use transparency/blending
As you can see, antialiasing is just rendering with blending modulated by the subpixel position/distance of the pixel relative to the rasterized object; the same technique can be used with DDA.
There are also ray tracing methods of rendering lines, but they are pretty much the same as finding the distance to the line... however, instead of a 2D pixel position you are checking against a 3D ray, which slightly complicates the math.
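And a C++ sketch of the parallel/distance-based variant described above: compute the distance from the pixel to the line segment and turn the last pixel of that distance into coverage (the names and the pixel_size = 1 assumption are mine):
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Distance from point p to the segment a-b.
static float DistToSegment(Vec2 p, Vec2 a, Vec2 b) {
    float lx = b.x - a.x, ly = b.y - a.y;
    float rx = p.x - a.x, ry = p.y - a.y;
    float len2 = lx * lx + ly * ly;
    float h = (len2 > 0.0f) ? std::clamp((rx * lx + ry * ly) / len2, 0.0f, 1.0f) : 0.0f;
    float cx = a.x + h * lx - p.x, cy = a.y + h * ly - p.y;
    return std::sqrt(cx * cx + cy * cy);
}

// Coverage/alpha for a line of half-width w (in pixels), fading over the last pixel.
float LineCoverage(Vec2 p, Vec2 a, Vec2 b, float w) {
    float d = DistToSegment(p, a, b);
    if (d > w) return 0.0f;            // too far: fully outside
    return std::min(1.0f, w - d);      // inside: fade within the last pixel of the edge
}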

basic fractal coloring problems

I am trying to get more comfortable with the math behind fractal coloring and to understand the coloring algorithms much better. I am following this paper:
http://jussiharkonen.com/files/on_fractal_coloring_techniques%28lo-res%29.pdf
The paper gives specific parameters for each of the functions; however, when I use the same parameters, my results are not quite right. I have no idea what could be going on.
To start, I am using the iteration count coloring algorithm with the following Julia set:
c = 0.5 + 0.25i and p = 2
with the coloring algorithm:
The coloring function simply returns the number of
elements in the truncated orbit divided by 20
And the palette function:
I(u) = k(u − u0),
where k = 2.5 and u0 = 0, was used.
And with a palette being white at 0 and 1, and interpolating to black in-between.
and following this algorithm:
1. Set z0 to correspond to the position of the pixel in the complex plane.
2. Calculate the truncated orbit by iterating the formula zn = f(zn−1) starting from z0 until either
   • |zn| > M, or
   • n = Nmax,
   where Nmax is the maximum number of iterations.
3. Using the coloring and color index functions, map the resulting truncated orbit to a color index value.
4. Determine an RGB color of the pixel by using the palette function.
Using this my code looks like the following:
float izoom = pow(1.001, zoom);
vec2 z = focusPoint + (uv * 4.0 - 2.0) * 1.0 / izoom;
vec2 c = vec2(0.5f, 0.25f);
const float B = 2.0;
float l;
for (int i = 0; i < 100; i++)
{
    z = vec2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;
    if (length(z) > 10.0) break;
    l++;
}
float ind = basicindex(l);
vec4 col = color(ind);
and have the following index and coloring functions:
float basicindex(float val) {
    return val / 20.0;
}

vec4 color(float index) {
    float r = 2.5 * index;
    float g = r;
    float b = g;
    vec3 v = 0.5 - 0.5 * sin(3.14 / 2.0 + 3.14 * vec3(r, g, b));
    return vec4(1.0 - v, 1.0);
}
The paper provides the following image:
https://imgur.com/YIZMhaa
While my code produces:
https://imgur.com/OrxdMsN
I get the correct results by using k = 1.0 instead of 2.5, however I would prefer to understand why my results are incorrect. When extending this to the smooth coloring algorithms, my results are still incorrect so I would like to figure this out first.
Let me know if this isn't the correct place for this kind of question and I can move it to the math stack exchange. I wasn't sure which place was more appropriate.
Your image is perfectly implemented for Figure 3.3 in the paper. The other image you posted uses a different routine.
Your figure seems to have that bit of perspective code there at top, but remove that and they should be the same.
If your objection is to the color extremes, you set that with the "0.5 - 0.5 * ..." part of your code. This makes the darkest black 0.5, when in the example image you're trying to duplicate the darkest black should be 1 and the lightest white should be 0.
You're making the whiteness equal to the distance from 0.5.
If you ignore the fractal altogether, you are getting a bunch of values that can be normalized between 0 and 1, and you're coloring those in some particular way. Clearly the image you are duplicating is linear between 0 and 1, so putting black at 0.5 cannot be correct.
o = {
    length : 500,
    width : 500,
    c : [.5, .25], // c = x + iy will be [x, y]
    maxIterate : 100,
    canvas : null
}

function point(pos, color){
    var c = 255 - Math.round((1 + Math.log(color)/Math.log(o.maxIterate)) * 255);
    c = c.toString(16);
    if (c.length == 1) c = '0'+c;
    o.canvas.fillStyle="#"+c+c+c;
    o.canvas.fillRect(pos[0], pos[1], 1, 1);
}

function conversion(x, y, R){
    var m = R / o.width;
    var x1 = m * (2 * x - o.width);
    var y2 = m * (o.width - 2 * y);
    return [x1, y2];
}

function f(z, c){
    return [z[0]*z[0] - z[1]*z[1] + c[0], 2 * z[0] * z[1] + c[1]];
}

function abs(z){
    return Math.sqrt(z[0]*z[0] + z[1]*z[1]);
}

function init(){
    var R = (1 + Math.sqrt(1+4*abs(o.c))) / 2,
        z, x, y, i;
    o.canvas = document.getElementById('a').getContext("2d");
    for (x = 0; x < o.width; x++){
        for (y = 0; y < o.length; y++){
            i = 0;
            z = conversion(x, y, R);
            while (i < o.maxIterate && abs(z) < R){
                z = f(z, o.c);
                if (abs(z) > R) break;
                i++;
            }
            if (i) point([x, y], i / o.maxIterate);
        }
    }
}
init();
<canvas id="a" width="500" height="500"></canvas>
via: http://jsfiddle.net/3fnB6/29/

How to draw partial-ellipse in CF? (Graphics.DrawArc in full framework)

I hope there will be an easy answer; oftentimes, something stripped out of the Compact Framework can still be done in a seemingly roundabout manner that works just as well as the full framework (or can even be made more efficient).
Simply put, I wish to be able to do a function similar to System.Drawing.Graphics.DrawArc(...) in Compact Framework 2.0.
It is for a UserControl's OnPaint override, where an arc is being drawn inside an ellipse I already filled.
Essentially (close pseudo code, please ignore imperfections in parameters):
FillEllipse(ellipseFillBrush, largeEllipseRegion);
DrawArc(arcPen, innerEllipseRegion, startAngle, endAngle); //not available in CF
I am only drawing arcs in 90-degree sections, e.g. the bottom-right corner of the ellipse's arc, or the top-left. If the answer for ANY angle is really roundabout, difficult, or inefficient, while there's an easy solution for doing just a corner of an ellipse, I'm fine with the latter, though the former would help anyone else who has a similar question.
I use this code, then use FillPolygon or DrawPolygon with the output points:
private Point[] CreateArc(float StartAngle, float SweepAngle, int PointsInArc, int Radius, int xOffset, int yOffset, int LineWidth)
{
    if (PointsInArc < 0)
        PointsInArc = 0;
    if (PointsInArc > 360)
        PointsInArc = 360;
    Point[] points = new Point[PointsInArc * 2];
    int xo;
    int yo;
    int xi;
    int yi;
    float degs;
    double rads;
    for (int p = 0; p < PointsInArc; p++)
    {
        degs = StartAngle + ((SweepAngle / PointsInArc) * p);
        rads = (degs * (Math.PI / 180));
        xo = (int)(Radius * Math.Sin(rads));
        yo = (int)(Radius * Math.Cos(rads));
        xi = (int)((Radius - LineWidth) * Math.Sin(rads));
        yi = (int)((Radius - LineWidth) * Math.Cos(rads));
        xo += (Radius + xOffset);
        yo = Radius - yo + yOffset;
        xi += (Radius + xOffset);
        yi = Radius - yi + yOffset;
        points[p] = new Point(xo, yo);
        points[(PointsInArc * 2) - (p + 1)] = new Point(xi, yi);
    }
    return points;
}
I had exactly this problem, and my team and I solved it by creating an extension method for the Compact Framework Graphics class.
I hope this helps someone, because I put a lot of work into getting to this nice solution.
Mauricio de Sousa Coelho
Embedded Software Engineer
public static class GraphicsExtension
{
    // Implements the native Graphics.DrawArc as an extension
    public static void DrawArc(this Graphics g, Pen pen, float x, float y, float width, float height, float startAngle, float sweepAngle)
    {
        //Configures the number of degrees for each line in the arc
        int degreesForNewLine = 5;
        //Calculates the number of points in the arc based on the degrees for new line configuration
        int pointsInArc = Convert.ToInt32(Math.Ceiling(sweepAngle / degreesForNewLine)) + 1;
        //Minimum points for an arc is 3
        pointsInArc = pointsInArc < 3 ? 3 : pointsInArc;
        float centerX = (x + width) / 2;
        float centerY = (y + height) / 2;
        Point previousPoint = GetEllipsePoint(x, y, width, height, startAngle);
        //Floating point precision error occurs here
        double angleStep = sweepAngle / pointsInArc;
        Point nextPoint;
        for (int i = 1; i < pointsInArc; i++)
        {
            //Increments the angle and gets the ellipse point associated with the incremented angle
            nextPoint = GetEllipsePoint(x, y, width, height, (float)(startAngle + angleStep * i));
            //Connects the two points with a straight line
            g.DrawLine(pen, previousPoint.X, previousPoint.Y, nextPoint.X, nextPoint.Y);
            previousPoint = nextPoint;
        }
        //Guarantees connection with the last point so that accumulated errors cannot
        //cause discontinuities in the drawing
        nextPoint = GetEllipsePoint(x, y, width, height, startAngle + sweepAngle);
        g.DrawLine(pen, previousPoint.X, previousPoint.Y, nextPoint.X, nextPoint.Y);
    }

    // Retrieves a point of an ellipse with equation:
    private static Point GetEllipsePoint(float x, float y, float width, float height, float angle)
    {
        return new Point(Convert.ToInt32(((Math.Cos(ToRadians(angle)) * width + 2 * x + width) / 2)), Convert.ToInt32(((Math.Sin(ToRadians(angle)) * height + 2 * y + height) / 2)));
    }

    // Converts an angle in degrees to the same angle in radians.
    private static float ToRadians(float angleInDegrees)
    {
        return (float)(angleInDegrees * Math.PI / 180);
    }
}
Following up from #ctacke's response, which created an arc-shaped polygon for a circle (height == width), I edited it further and created a function for creating a Point array for a curved line, as opposed to a polygon, and for any ellipse.
Note: StartAngle here is NOON position, 90 degrees is the 3 o'clock position, so StartAngle=0 and SweepAngle=90 makes an arc from noon to 3 o'clock position.
The original DrawArc method has the 3 o'clock as 0 degrees, and 90 degrees is the 6 o'clock position. Just a note in replacing DrawArc with CreateArc followed by DrawLines with the resulting Point[] array.
I'd play with this further to change that, but why break something that's working?
private Point[] CreateArc(float StartAngle, float SweepAngle, int PointsInArc, int ellipseWidth, int ellipseHeight, int xOffset, int yOffset)
{
    if (PointsInArc < 0)
        PointsInArc = 0;
    if (PointsInArc > 360)
        PointsInArc = 360;
    Point[] points = new Point[PointsInArc];
    int xo;
    int yo;
    float degs;
    double rads;
    //could have WidthRadius and HeightRadius be parameters, but easier
    // for maintenance to have the diameters sent in instead, matching closer
    // to DrawEllipse and similar methods
    double radiusW = (double)ellipseWidth / 2.0;
    double radiusH = (double)ellipseHeight / 2.0;
    for (int p = 0; p < PointsInArc; p++)
    {
        degs = StartAngle + ((SweepAngle / PointsInArc) * p);
        rads = (degs * (Math.PI / 180));
        xo = (int)Math.Round(radiusW * Math.Sin(rads), 0);
        yo = (int)Math.Round(radiusH * Math.Cos(rads), 0);
        xo += (int)Math.Round(radiusW, 0) + xOffset;
        yo = (int)Math.Round(radiusH, 0) - yo + yOffset;
        points[p] = new Point(xo, yo);
    }
    return points;
}
