I am working on implementing Alchemy AO and reading through the paper, and it says to sample each point by: considering a
Disk of radius r and center C that is parallel to the image plane,
select a screen-space point Q uniformly at random on its projection, and
then read a depth or position buffer to find the camera-space scene
point P = (xp, yp, z(Q)) on that ray.
I am wondering how you would go about selecting a screen-space point in that manner? I have made an attempt below, but since my result appears quite incorrect, I think it's the wrong approach.
vec3 Position = depthToPosition(uvCoords);
int turns = 16;
float screen_radius = (sampleRadius * 100.0 / Position.z); //ball around the point
const float disk = (2.0 * PI) / turns;
ivec2 px = ivec2(gl_FragCoord.xy);
float phi = (30u * px.x ^ px.y + 10u * px.x * px.y); //per-pixel hash for a random rotation angle at each pixel
for (int i = 0; i < samples; ++i)
{
float theta = disk * float(i+1) + phi;
vec2 samplepoint = vec2(cos(theta), sin(theta));
}
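For reference, this is my current understanding of what "uniformly at random on its projection" could look like in code: a minimal sketch, assuming the projected disk radius is already expressed in UV units and that a pair of uniform random numbers in [0, 1) is available (both assumptions are mine, not the paper's):

const float TWO_PI = 6.28318530718;

vec2 sampleDiskUV(vec2 centerUV, float screenRadiusUV, vec2 rnd)  // rnd in [0,1)^2
{
    // sqrt() on the radius gives a uniform distribution over the disk's area,
    // instead of clustering samples near the centre
    float r     = screenRadiusUV * sqrt(rnd.x);
    float theta = TWO_PI * rnd.y;
    return centerUV + r * vec2(cos(theta), sin(theta));
}

Reading the depth/position buffer at the returned UV would then give the camera-space point P for that sample. If I understand the SAO sample code correctly, a common variant walks a spiral instead: the radius grows with the sample index while the angle advances by a fixed step plus the per-pixel phi, which is close to what my loop above attempts but with the missing radius term included.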
I am following this course to learn computer graphics and write my first ray tracer.
I already have some visible results, but they seem to be too large.
The overall algorithm the course outlines is this:
Image Raytrace (Camera cam, Scene scene, int width, int height)
{
Image image = new Image (width, height) ;
for (int i = 0 ; i < height ; i++)
for (int j = 0 ; j < width ; j++) {
Ray ray = RayThruPixel (cam, i, j) ;
Intersection hit = Intersect (ray, scene) ;
image[i][j] = FindColor (hit) ;
}
return image ;
}
I perform all calculations in camera space (where the camera is at (0, 0, 0)). Thus RayThruPixel returns a ray in camera coordinates, Intersect returns an intersection point also in camera coordinates, and the image pixel array is a direct mapping of the intersection results.
The image below is the rendering of a sphere at world coordinates (0, 0, -40000) with radius 0.15, with the camera at world coordinates (0, 0, 2) looking towards (0, 0, 0). I would normally expect the sphere to be a lot smaller given its small radius and far-away Z coordinate.
The same thing happens when rendering triangles too. In the image below I have 2 triangles that form a square, but it's way too zoomed in. The triangles have coordinates between -1 and 1, and the camera is looking from world coordinates (0, 0, 4).
This is what the square is expected to look like:
Here is the code snippet I use to determine the collision with the sphere. I'm not sure if I should divide the radius by the z coordinate here - without it, the circle is even larger:
Sphere* sphere = dynamic_cast<Sphere*>(object);
float t;
vec3 p0 = ray->origin;
vec3 p1 = ray->direction;
float a = glm::dot(p1, p1);
vec3 center2 = vec3(modelview * object->transform * glm::vec4(sphere->center, 1.0f)); // camera coords
float b = 2 * glm::dot(p1, (p0 - center2));
float radius = sphere->radius / center2.z;
float c = glm::dot((p0 - center2), (p0 - center2)) - radius * radius;
float D = b * b - 4 * a * c;
if (D > 0) {
// two roots
float sqrtD = glm::sqrt(D);
float root1 = (-b + sqrtD) / (2 * a);
float root2 = (-b - sqrtD) / (2 * a);
if (root1 > 0 && root2 > 0) {
t = glm::min(root1, root2);
found = true;
}
else if (root2 < 0 && root1 >= 0) {
t = root1;
found = true;
}
else {
// should not happen, implies that both roots are negative
}
}
else if (D == 0) {
// one root
float root = -b / (2 * a);
t = root;
found = true;
}
else if (D < 0) {
// no roots
// continue;
}
if (found) {
hitVector = p0 + p1 * t;
hitNormal = glm::normalize(hitVector - center2);
}
Here I generate the ray going through the relevant pixel:
Ray* RayThruPixel(Camera* camera, int x, int y) {
const vec3 a = eye - center;
const vec3 b = up;
const vec3 w = glm::normalize(a);
const vec3 u = glm::normalize(glm::cross(b, w));
const vec3 v = glm::cross(w, u);
const float aspect = ((float)width) / height;
float fovyrad = glm::radians(camera->fovy);
const float fovx = 2 * atan(tan(fovyrad * 0.5) * aspect);
const float alpha = tan(fovx * 0.5) * (x - (width * 0.5)) / (width * 0.5);
const float beta = tan(fovyrad * 0.5) * ((height * 0.5) - y) / (height * 0.5);
return new Ray(/* origin= */ vec3(modelview * vec4(eye, 1.0f)), /* direction= */ glm::normalize(vec3( modelview * glm::normalize(vec4(alpha * u + beta * v - w, 1.0f)))));
}
And intersection with a triangle:
Triangle* triangle = dynamic_cast<Triangle*>(object);
// vertices in camera coords
vec3 vertex1 = vec3(modelview * object->transform * vec4(*vertices[triangle->index1], 1.0f));
vec3 vertex2 = vec3(modelview * object->transform * vec4(*vertices[triangle->index2], 1.0f));
vec3 vertex3 = vec3(modelview * object->transform * vec4(*vertices[triangle->index3], 1.0f));
vec3 N = glm::normalize(glm::cross(vertex2 - vertex1, vertex3 - vertex1));
float D = -glm::dot(N, vertex1);
float m = glm::dot(N, ray->direction);
if (m == 0) {
// no intersection because ray parallel to plane
}
else {
float t = -(glm::dot(N, ray->origin) + D) / m;
if (t < 0) {
// no intersection because ray goes away from triangle plane
}
vec3 Phit = ray->origin + t * ray->direction;
vec3 edge1 = vertex2 - vertex1;
vec3 edge2 = vertex3 - vertex2;
vec3 edge3 = vertex1 - vertex3;
vec3 c1 = Phit - vertex1;
vec3 c2 = Phit - vertex2;
vec3 c3 = Phit - vertex3;
if (glm::dot(N, glm::cross(edge1, c1)) > 0
&& glm::dot(N, glm::cross(edge2, c2)) > 0
&& glm::dot(N, glm::cross(edge3, c3)) > 0) {
found = true;
hitVector = Phit;
hitNormal = N;
}
}
Given that the output image is a circle, and that the same problem happens with triangles as well, my guess is the problem isn't from the intersection logic itself, but rather something wrong with the coordinate spaces or transformations. Could calculating everything in camera space be causing this?
I eventually figured it out by myself. I first noticed the problem was here:
return new Ray(/* origin= */ vec3(modelview * vec4(eye, 1.0f)),
/* direction= */ glm::normalize(vec3( modelview *
glm::normalize(vec4(alpha * u + beta * v - w, 1.0f)))));
When I removed the direction vector transformation (leaving it at just glm::normalize(alpha * u + beta * v - w)) I noticed the problem disappeared - the square was rendered correctly. I was prepared to accept it as an answer, although I wasn't completely sure why.
Then I noticed that after doing transformations on the object, the camera wasn't positioned properly, which makes sense - we're not pointing the rays in the correct direction.
I realized that my entire approach of doing the calculations in camera space was wrong. If I still wanted to use this approach, the rays would have to be transformed, but in a different way that would involve some complex math I wasn't ready to deal with.
I instead changed my approach to do transformations and intersections in world space and only use camera space at the lighting stage. We have to use camera space at some point, since we want to actually look in the direction of the object we are rendering.
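For reference, the "different way" of transforming the rays is not actually that exotic: the usual convention is to transform the ray origin as a point (w = 1) and the direction as a vector (w = 0), and only normalize the 3-component result. Building vec4(direction, 1.0), as the original code did, treats the direction as a point and therefore picks up the translation part of the matrix. A minimal sketch in GLSL-style syntax (GLM reads almost identically; xform is a placeholder for whatever matrix maps between the two spaces):

void transformRay(mat4 xform, inout vec3 origin, inout vec3 dir)
{
    origin = vec3(xform * vec4(origin, 1.0));          // point: translation applies
    dir    = normalize(vec3(xform * vec4(dir, 0.0)));  // vector: translation is ignored
}

With that convention, tracing in camera space or in world space gives the same image, as long as every object and every ray end up expressed in the same space.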
I'm building my own path tracer, teaching myself from online resources. But I find that my implementation has an issue with emissive objects in the scene, especially in a dark environment (no skybox).
For example, in the following environment:
The box in the middle is the only light source in the environment, with an emission value of (3.0, 3.0, 3.0); all other objects have an emission value of (0.0, 0.0, 0.0). I was expecting the light to scatter smoothly on the walls, but it looks like it is biased towards one direction.
My cosine sampling function is (modified from lwjgl3-demos):
float3 SampleHemisphere3(float3 norm, float alpha = 0.0)
{
float3 randomVec = rand3();
float r = saturate(pow(randomVec.x, 1.0 / (1.0 + alpha)));
float angle = randomVec.y * PI_TWO;
float sr = saturate(sqrt(1.0 - r * r));
float3 ph = float3(sr * cos(angle), sr * sin(angle), r);
float3 tangent = normalize(randomVec * 2.0 - 1.0);
float3 bitangent = cross(norm, tangent);
tangent = cross(norm, bitangent);
return mul(ph, float3x3(tangent, bitangent, norm));
}
This is how I compute the shading and next ray info:
float3 Shade(inout Ray ray, HitInfo hit)
{
ray.origin = hit.pos + hit.norm * 1e-5;
ray.dir = normalize(SampleHemisphere3(hit.norm, 0.0));
ray.energy *= 2.0 * hit.colors.albedo * saturate(dot(hit.norm, ray.dir));
return hit.colors.emission;
}
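(A side note on the weighting in Shade, shown as a GLSL-style sketch below with made-up names: the factor 2.0 * albedo * cos(theta) is the standard Monte Carlo throughput for a Lambertian BRDF under uniform hemisphere sampling, where the pdf is 1/(2*PI); with a genuinely cosine-weighted sampler the pdf is cos(theta)/PI, so the cosine and the 2 cancel and the weight reduces to the albedo alone.)

// Lambertian throughput weight = f * cos(theta) / pdf, with f = albedo / PI
vec3 weightUniformHemisphere(vec3 albedo, float cosTheta)  // pdf = 1 / (2 * PI)
{
    return 2.0 * albedo * max(cosTheta, 0.0);
}
vec3 weightCosineHemisphere(vec3 albedo)                   // pdf = cos(theta) / PI
{
    return albedo;  // the cosine and PI cancel against the pdf
}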
And the recursion happens here:
// generate ray from camera
Ray ray = CreateCameraRay(camera, PixelCenter);
// trace ray
float3 color = 0.0;
for (int i = 0; i < _TraceDepth; i++)
{
// get nearest ray hit
HitInfo hit = Trace(ray);
// accumulate color
color += ray.energy * Shade(ray, hit);
// if ray has no energy, stop tracing
if(!any(ray.energy))
break;
}
// write to frame target
_FrameTarget[id.xy] = float4(color, 1.0);
I learned the last two functions from GPU Path Tracing in Unity.
Here is another example of a similar error:
I feel that the problem is caused by the cosine weighted hemisphere sampling, but I have no idea how to fix it.
What should I do to get distributed light effect from emissive objects on the diffuse surfaces? Do I have to specify light sources and shapes and sample from them directly instead of emissive objects?
Edit:
It is indeed the cosine weighted sampling that is causing the problem.
Instead of:
float3 tangent = normalize(randomVec * 2.0 - 1.0);
I should have another vector of independent random values:
float3 tangent = normalize(rand3() * 2.0 - 1.0);
Now it shows:
Still not perfect, because there is clearly a cross shape (probably caused by the sparsity of floating-point values).
How can I further improve this?
Edit 2:
After some more debugging and experiments, I figured out the "solution", but I don't understand the reason behind it.
The random value generator is from this Shadertoy project, because I saw that GLSL-PathTracer also uses it.
Here is part of it:
void rng_initialize(float2 p, int frame)
{
//white noise seed
RandomSeed = uint4(p, frame, p.x + p.y);
}
void pcg4d(inout uint4 v)
{
v = v * 1664525u + 1013904223u;
v.x += v.y * v.w;
v.y += v.z * v.x;
v.z += v.x * v.y;
v.w += v.y * v.z;
v = v ^ (v >> 16u);
v.x += v.y * v.w;
v.y += v.z * v.x;
v.z += v.x * v.y;
v.w += v.y * v.z;
}
float3 rand3()
{
pcg4d(RandomSeed);
return float3(RandomSeed.xyz) / float(0xffffffffu);
}
float4 rand4()
{
pcg4d(RandomSeed);
return float4(RandomSeed) / float(0xffffffffu);
}
At initialization, I pass float2(id.xy) from SV_DispatchThreadID and the current frame counter to rng_initialize.
And here is my new cosine weighted hemisphere sampling function:
float3 SampleHemisphere3(float3 norm, float alpha = 0.0)
{
float4 rand = rand4();
float r = pow(rand.w, 1.0 / (1.0 + alpha));
float angle = rand.y * PI_TWO;
float sr = sqrt(1.0 - r * r);
float3 ph = float3(sr * cos(angle), sr * sin(angle), r);
float3 tangent = normalize(rand.zyx + rand3() - 1.0);
float3 bitangent = cross(norm, tangent);
tangent = cross(norm, bitangent);
return mul(ph, float3x3(tangent, bitangent, norm));
}
And the results are as follows (they look much better):
My discoveries from the experiments are:
r in the sampling function has to depend on the w component of the random values.
angle can be any of x, y, z.
tangent has to depend on both the current xyz values and a new vector of random xyz values. The order doesn't matter, so I use zyx here. Missing either the current xyz or the new xyz results in a cross shape on the wall.
I'm not sure if this is a correct solution, but as far as my eyes can tell, it solves the problem.
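For anyone with the same issue: a way to sidestep the random-tangent problem entirely is to build the tangent frame deterministically from the normal (the "branchless ONB" from Duff et al., "Building an Orthonormal Basis, Revisited") and combine it with the standard cosine-weighted mapping, which needs only two uniform random numbers per sample. A GLSL-style sketch of that combination (my own port, not the HLSL above):

const float PI = 3.14159265359;

// Cosine-weighted direction around n, using a deterministic orthonormal basis.
// u is a pair of uniform random numbers in [0, 1).
vec3 sampleCosineHemisphere(vec3 n, vec2 u)
{
    // branchless ONB: tangent t and bitangent bt built from n alone
    float s = (n.z >= 0.0) ? 1.0 : -1.0;
    float a = -1.0 / (s + n.z);
    float b = n.x * n.y * a;
    vec3 t  = vec3(1.0 + s * n.x * n.x * a, s * b, -s * n.x);
    vec3 bt = vec3(b, s + n.y * n.y * a, -n.y);

    // cosine-weighted sample in tangent space: pdf = cos(theta) / PI
    float r   = sqrt(u.x);
    float phi = 2.0 * PI * u.y;
    vec3 d    = vec3(r * cos(phi), r * sin(phi), sqrt(max(0.0, 1.0 - u.x)));

    return normalize(d.x * t + d.y * bt + d.z * n);
}

Because the frame depends only on the normal, the sample distribution cannot pick up correlations from the random number generator, which removes the most likely source of the cross-shaped pattern.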
I'm new to shaders and I have been messing about with the website Shadertoy. I'm trying to understand graphics (and the graphics pipeline), such as drawing lines, interpolation, rasterization, etc. I've written two line functions that return a color if the pixel being processed is on the line. Here is the Shadertoy code, using fragment shaders:
struct Vertex {
vec2 p;
vec4 c;
};
vec4 overlay(vec4 c1, vec4 c2) {
return vec4((1.0 - c2.w) * c1.xyz + c2.w * c2.xyz, 1.0);
}
vec4 drawLineA(Vertex v1, Vertex v2, vec2 pos) {
vec2 a = v1.p;
vec2 b = v2.p;
vec2 r = floor(pos);
vec2 diff = b - a;
if (abs(diff.y) < abs(diff.x)) {
if (diff.x < 0.0) {
Vertex temp1 = v1;
Vertex temp2 = v2;
v1 = temp2;
v2 = temp1;
a = v1.p;
b = v2.p;
diff = b - a;
}
float m = diff.y / diff.x;
float q = r.x - a.x;
if (floor(m * q + a.y) == r.y && a.x <= r.x && r.x <= b.x) {
float h = q / diff.x;
return vec4((1.0 - h) * v1.c + h * v2.c);
}
} else {
if (diff.y < 0.0) {
Vertex temp1 = v1;
Vertex temp2 = v2;
v1 = temp2;
v2 = temp1;
a = v1.p;
b = v2.p;
diff = b - a;
}
float m = diff.x / diff.y;
float q = r.y - a.y;
if (floor(m * q + a.x) == r.x && a.y <= r.y && r.y <= b.y) {
float h = q / diff.y;
return vec4((1.0 - h) * v1.c + h * v2.c);
}
}
return vec4(0,0,0,0);
}
vec4 drawLineB(Vertex v1, Vertex v2, vec2 pos) {
vec2 a = v1.p;
vec2 b = v2.p;
vec2 l = b - a;
vec2 r = pos - a;
float h = dot(l,r) / dot (l,l);
vec2 eC = a + h * l;
if (floor(pos) == floor(eC) && 0.0 <= h && h <= 1.0 ) {
return vec4((1.0 - h) * v1.c + h * v2.c);
}
return vec4(0,0,0,0);
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
float t = iTime;
float r = 300.0;
Vertex v1 = Vertex(vec2(400,225), vec4(1,0,0,1));
Vertex v2 = Vertex(vec2(400.0 + r*cos(t) ,225.0 + r*sin(t)), vec4(0,1,0,1));
vec4 col = vec4(0,0,0,1);
col = overlay(col,drawLineA(v1, v2, fragCoord));
col = overlay(col,drawLineB(v1, v2, fragCoord));
// Output to screen
fragColor = col;
}
However, the line functions I have written are not fast and do not do antialiasing. What is the fastest algorithm for both antialiased and aliased lines, and how should I implement it? Thanks.
A fragment shader is really not the right approach for this. A lot of what is on Shadertoy is really just a toy / code golf, showing solutions that work around the limitations of the platform but are terribly inefficient in real-world scenarios.
All graphics APIs provide dedicated interfaces for drawing line segments; just search for "API_NAME draw line", e.g. "webgl draw line". In cases where those do not suffice, triangle strips with either MSAA or custom in-shader AA are used.
If you're really just looking for an efficient algorithm, the Wikipedia page has you covered.
As the other answer says, shaders are not very good for this.
Line rasterization is done behind the scenes by hardware interpolators on the graphics card these days. A fragment shader is invoked for every pixel of the rendered primitive, which in your case means it is called for every pixel of the screen, and all of that is repeated for each line you render, which is massively slower than the native path.
If you truly want to learn rasterization, do it on the CPU side instead. The best algorithm for lines depends on the hardware architecture you are targeting.
For sequential processing it is:
DDA - this one has subpixel precision (see the sketch just below this list)
In the past Bresenham was faster, but IIRC that has not been true since the i386...
For parallel processing you just compute the distance of each pixel to the line (more or less what you do now).
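Here is a minimal sketch of the DDA idea, written in GLSL-style syntax only to match the rest of this post (you would run this on the CPU); plot() is a hypothetical pixel-write helper, not a real GL call:

void drawLineDDA(vec2 a, vec2 b)            // assumes a != b
{
    vec2  d     = b - a;
    float steps = max(abs(d.x), abs(d.y));  // one step per pixel along the major axis
    vec2  inc   = d / steps;                // sub-pixel increment per step
    vec2  p     = a;
    for (float i = 0.0; i <= steps; i += 1.0)
    {
        plot(ivec2(floor(p + 0.5)));        // hypothetical: write the rounded pixel
        p += inc;
    }
}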
So if you insist on using shaders for this, you can speed things up by using a geometry shader and processing only the fragments (pixels) that are near your line. See:
cubic curves rendering in GLSL
So, simply put, you create an OBB (oriented bounding box) around your line and render it by emitting 2 triangles per line; then in the fragment shader you compute the distance to the line and set the color accordingly...
For antialiasing you simply change the color for pixels within the last pixel of distance from the edge. So if your line has half-width w and the distance of the fragment to the line is d, then:
if (d>w) discard; // fragment too far
d=(w-d)/pixel_size; // distance from edge in pixels
frag_color = vec4(r,g,b,min(1.0,d)); // use transparency/blending
As you can see, antialiasing is just rendering with blending modulated by the subpixel position/distance of the pixel relative to the rasterized object; the same technique can be used with DDA.
There are also ray-tracing methods of rendering lines, but they are pretty much the same as finding the distance to a line; however, instead of a 2D pixel position you are checking against a 3D ray, which slightly complicates the math.
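To make the distance-based approach concrete, here is a minimal Shadertoy-style sketch (function names and constants are my own choices, not taken from the question):

float segmentDistance(vec2 p, vec2 a, vec2 b)
{
    vec2 ab = b - a;
    float h = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);  // closest-point parameter
    return length(p - a - h * ab);
}

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 a = vec2(100.0, 100.0);
    vec2 b = vec2(700.0, 350.0);
    float halfWidth = 2.0;                                    // line half-width in pixels
    float d = segmentDistance(fragCoord, a, b);
    // antialiasing: fade out over the last pixel of distance from the edge
    float alpha = 1.0 - smoothstep(halfWidth - 1.0, halfWidth, d);
    fragColor = vec4(vec3(alpha), 1.0);
}

The same h parameter can drive the per-vertex colour interpolation that the question's drawLineB already does.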
Weird thing: I keep getting Processing or Java to crash with this code, which is based on sample code from the Processing website.
On PC it doesn't work at all; on one Mac it works for 5 seconds until it crashes, and on another Mac it just crashes and gives me this:
libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: RtApiCore::probeDeviceOpen: the device (2) does not support the requested channel count.
Could not run the sketch (Target VM failed to initialize).
Do you think it's a problem with the library or with the code?
If it's a problem with the library, could you recommend the best sound library to do something like this?
Thank you :)
import processing.sound.*;
FFT fft;
AudioIn in;
int bands = 512;
float[] spectrum = new float[bands];
void setup() {
size(900, 600);
background(255);
// Create an Input stream which is routed into the Amplitude analyzer
fft = new FFT(this, bands);
in = new AudioIn(this, 0);
// start the Audio Input
in.start();
// patch the AudioIn
fft.input(in);
}
void draw() {
background(255);
int midPointW = width/2;
int midPointH = height/2;
float angle = 1;
fft.analyze(spectrum);
//float radius = 200;
for(int i = 0; i < bands; i++){
// create the actions for placing points on a circle
float radius = spectrum[i]*height*10;
//float radius = 10;
float endX = midPointW+sin(angle) * radius*10;
float endY = midPointH+cos(angle) * radius*10;
float startX = midPointW+sin(angle) * radius*5;
float startY = midPointH+cos(angle) * radius*5;
// The result of the FFT is normalized
// draw the line for frequency band i scaling it up by 5 to get more amplitude.
line( startX, startY, endX, endY);
angle = angle + angle;
println(endX, "" ,endY);
// if(angle > 360){
// angle = 0;
// }
}
}
If you print the values you use, like angle and the start x, y, you'll notice that:
the start/end x, y values become NaN (not a number, i.e. invalid)
angle quickly goes to Infinity (but not beyond)
One of the main issues is this line:
angle = angle + angle;
You're increasing this value exponentially, which you probably don't want.
Additionally, bear in mind that trigonometric functions such as sin() and cos() use radians, not degrees, so values tend to be small. You can constrain the values to 360 degrees or TWO_PI radians using the modulo operator (%) or the constrain() function:
angle = (angle + 0.01) % TWO_PI;
You were very close though, as your commented-out angle > 360 check shows. Not sure why you left it commented out.
Here's your code with the tweak and comments:
import processing.sound.*;
FFT fft;
AudioIn in;
int bands = 512;
float[] spectrum = new float[bands];
void setup() {
size(900, 600);
background(255);
// Create an Input stream which is routed into the Amplitude analyzer
fft = new FFT(this, bands);
in = new AudioIn(this, 0);
// start the Audio Input
in.start();
// patch the AudioIn
fft.input(in);
}
void draw() {
background(255);
int midPointW = width/2;
int midPointH = height/2;
float angle = 1;
fft.analyze(spectrum);
//float radius = 200;
for (int i = 0; i < bands; i++) {
// create the actions for placing points on a circle
float radius = spectrum[i] * height * 10;
//float radius = 10;
float endX = midPointW + (sin(angle) * radius * 10);
float endY = midPointH + (cos(angle) * radius * 10);
float startX = midPointW + (sin(angle) * radius * 5);
float startY = midPointH + (cos(angle) * radius * 5);
// The result of the FFT is normalized
// draw the line for frequency band i scaling it up by 5 to get more amplitude.
line( startX, startY, endX, endY);
//angle = angle + angle;
angle = (angle + 0.01) % TWO_PI;//linearly increase the angle and constrain it to a 360 degrees (2 * PI)
}
}
void exit() {
in.stop();//try to cleanly stop the audio input
super.exit();
}
The sketch ran for more than 5 minutes, but when closing the sketch I still encountered JVM crashes on OS X.
I haven't used this sound library much and haven't looked into its internals, but it might be a bug.
If this is still causing problems, for pragmatic reasons I'd recommend installing a different Processing library for FFT sound analysis via the Contribution Manager.
Here are a couple of libraries:
Minim - provides some nice linear and logarithmic averaging functions that can help in visualisations
Beads - feature rich but more Java like syntax. There's also a free book on it: Sonifying Processing
Both libraries provide FFT examples.
I am working with OpenGL ES 2.0 on an Android device.
I am trying to get a sphere up and running and drawing. Currently I almost have a sphere, but clearly it's being done very, very wrong.
In my app, I hold a list of Vector3's, which I convert to a ByteBuffer along the way, and pass to OpenGL.
I know my code is okay, since I have a cube and a tetrahedron drawing nicely.
The two parts I changed were:
Determining the vertices
Drawing the vertices.
Here are the code snippets in question. What am I doing wrong?
Determining the polar coordinates:
private void ConstructPositionVertices()
{
for (float latitude = 0.0f; latitude < (float)(Math.PI * 2.0f); latitude += 0.1f)
{
for (float longitude = 0.0f; longitude < (float)(2.0f * Math.PI); longitude += 0.1f)
{
mPositionVertices.add(ConvertFromSphericalToCartesian(1.0f, latitude, longitude));
}
}
}
Converting from Polar to Cartesian:
public static Vector3 ConvertFromSphericalToCartesian(float inLength, float inPhi, float inTheta)
{
float x = inLength * (float)(Math.sin(inPhi) * Math.cos(inTheta));
float y = inLength * (float)(Math.sin(inPhi) * Math.sin(inTheta));
float z = inLength * (float)Math.cos(inTheta);
Vector3 convertedVector = new Vector3(x, y, z);
return convertedVector;
}
Drawing the circle:
inGL.glDrawArrays(GL10.GL_TRIANGLES, 0, numVertices);
Obviously I omitted some code, but I am positive my mistake lies somewhere in these snippets.
I do nothing more with the points than pass them to OpenGL, and then call GL_TRIANGLES, which should connect the points for me... right?
EDIT:
A picture might be nice!
Your z must be calculated using phi: float z = inLength * (float)Math.cos(inPhi);
Also, the points generated are not triangles, so it would be better to use GL_LINE_STRIP.
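To make the first point concrete, here is the corrected conversion as a GLSL-style sketch (the code in the question is Java; only the z term changes):

// Spherical -> Cartesian with phi as the polar angle and theta as the azimuth.
vec3 sphericalToCartesian(float len, float phi, float theta)
{
    return len * vec3(sin(phi) * cos(theta),
                      sin(phi) * sin(theta),
                      cos(phi));  // was cos(theta) in the question
}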
Using a triangle strip on a polar sphere is as easy as drawing points in pairs, for example:
const float GL_PI = 3.141592f;
GLfloat x, y, z, alpha, beta; // Storage for coordinates and angles
GLfloat radius = 60.0f;
const int gradation = 20;
for (alpha = 0.0; alpha < GL_PI; alpha += GL_PI/gradation)
{
glBegin(GL_TRIANGLE_STRIP);
for (beta = 0.0; beta < 2.01*GL_PI; beta += GL_PI/gradation)
{
x = radius*cos(beta)*sin(alpha);
y = radius*sin(beta)*sin(alpha);
z = radius*cos(alpha);
glVertex3f(x, y, z);
x = radius*cos(beta)*sin(alpha + GL_PI/gradation);
y = radius*sin(beta)*sin(alpha + GL_PI/gradation);
z = radius*cos(alpha + GL_PI/gradation);
glVertex3f(x, y, z);
}
glEnd();
}
The first point follows the formula directly, and the second one is shifted by a single step of the alpha angle (i.e., it lies on the next parallel).