Ray Generation Inconsistency - graphics

I have written code that generates a ray from the "eye" of the camera to the viewing plane some distance away from the camera's eye:
R3Ray ConstructRayThroughPixel(...)
{
    R3Point p;
    double increments_x = (lr.X() - ul.X())/(double)width;
    double increments_y = (ul.Y() - lr.Y())/(double)height;
    p.SetX( ul.X() + ((double)i_pos+0.5)*increments_x );
    p.SetY( lr.Y() + ((double)j_pos+0.5)*increments_y );
    p.SetZ( lr.Z() );
    R3Vector v = p-camera_pos;
    R3Ray new_ray(camera_pos,v);
    return new_ray;
}
ul is the upper left corner of the viewing plane and lr is the lower right corner of the viewing plane. They are defined as follows:
R3Point org = scene->camera.eye + scene->camera.towards * radius;
R3Vector dx = scene->camera.right * radius * tan(scene->camera.xfov);
R3Vector dy = scene->camera.up * radius * tan(scene->camera.yfov);
R3Point lr = org + dx - dy;
R3Point ul = org - dx + dy;
Here, org is the center of the viewing plane, radius is the distance between the viewing plane and the camera eye, and dx and dy are the displacements in the x and y directions from the center of the viewing plane.
The ConstructRayThroughPixel(...) function works perfectly for a camera whose eye is at (0,0,0). However, when the camera is at some different position, not all needed rays are produced for the image.
Any suggestions as to what could be going wrong? Maybe there is something wrong with my equations?
Thanks for the help.

Here's a quibble that may have nothing to do with your problem:
When you do this:
R3Vector dx = scene->camera.right * radius * tan(scene->camera.xfov);
R3Vector dy = scene->camera.up * radius * tan(scene->camera.yfov);
I assume that the right and up vectors are normalized, right? In that case you want sin not tan. Of course, if the fov angles are small it won't make much difference.

The reason my code wasn't working was that I was treating the x, y, z values separately. This is wrong, since the camera can be facing in any direction: if it were facing down the x-axis, the x coordinates would all be the same, producing increments of 0 (which is incorrect). Instead, what should be done is an interpolation of the corner points (where the points have x, y, z coordinates). Please see the answer in the related post: 3D coordinate of 2D point given camera and view plane
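As a rough sketch of that corner-interpolation approach (Python-style pseudocode, not the original R3 code; it assumes the other two corners ur = org + dx + dy and ll = org - dx - dy are computed the same way as ul and lr, and that points are plain (x, y, z) tuples):

def lerp(a, b, t):
    # component-wise linear interpolation between two 3D points
    return tuple(a[k] + (b[k] - a[k]) * t for k in range(3))

def construct_ray_through_pixel(eye, ul, ur, ll, lr, i, j, width, height):
    # fractions across the viewing plane, sampled at pixel centers
    u = (i + 0.5) / width
    v = (j + 0.5) / height
    top = lerp(ul, ur, u)        # point on the top edge of the plane
    bottom = lerp(ll, lr, u)     # point on the bottom edge of the plane
    p = lerp(top, bottom, v)     # 3D point on the viewing plane behind pixel (i, j)
    direction = tuple(p[k] - eye[k] for k in range(3))
    return eye, direction        # ray origin and (unnormalized) direction

Because the corners already encode the camera's position and orientation, this works no matter which way the camera faces.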

Related

Pixel space depth offset in vertex shader

I'm trying to draw simple scaled points in my custom graphics engine. The points are scaled in pixel space, and the radius of the points are in pixels, but the position of the points fed to the draw function are in world coordinates.
So far, everything is working great, except for a depth clipping issue. The points are of constant size, regardless of how far away they are, which is done by offsetting the vertices in projected/clip space. However, when they are close to surfaces, they partially intersect them in the depth buffer.
Since these points represent world coordinates, I want them to use the depth buffer, and be hidden behind objects that are in front of them. However, when the point is close to a surface, I want to push it toward the camera, so it doesn't partially intersect it. I think it is easier to just always do this push, regardless of the point being close to a surface. What makes the most sense to me is to just push it by its radius, so that all of its vertices are exactly far enough away to avoid clipping into nearby surfaces.
The easiest way I've found to do this is to simply subtract from the Z value in the vertex shader, after transforming into view-projection space. However, I'm having some trouble converting my pixel radius into a depth offset. Regardless of the math I use, what works close up never seems to work far away. I'm thinking maybe this is due to how the z-buffer is non-linear, but I could be wrong.
Currently, the closest I've been to solving this is the following:
proj_vertex_pos.z -= point_pixel_radius / proj_vertex_pos.w * 100.0
I'm honestly not sure why 100.0 helps make this work yet. I added it simply because dividing the radius by w was too small of a value. Can anyone point me in the right direction? How do I convert my pixel distance into a depth distance? Especially if the depth distance changes scale depending on which depth you are at? Or am I just way off?
The solution was to convert my pixel space radius into world space units, since the z-buffer is still in world space, even after transforming by the view-projection transform. This can be done by converting pixels into a factor (factor = pixels / screen_size), then convert the factor into world space units, which was a little more involved - I had to calculate the world-space size of the screen at a given distance, then multiply the factor by that to get world units. I can post the related code if anyone needs it. There's probably a simpler way to calculate it, but my brain always goes straight for factors.
The reason I was getting different results at different distances was mainly because I was only offsetting the z component of the clip position by the result. It's also necessary to offset the w component, to make the depth offset work at any distance (linear). However, in order to offset the w component, you first have to scale xy by w, modify w as needed, then divide xy by the new w. This resulted in making the math pretty involved, so I changed the strategy to offset the vertex before clip space, which requires calculating the distance to the camera in Z space manually, but it honestly ended up being about the same amount of math either way.
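Before the shaders, here is a minimal CPU-side sketch of the pixel-to-world conversion described above (Python, with illustrative names; half_fov_x is assumed to be the camera's horizontal half field of view, so tan(half_fov_x) plays the role of FieldFactor.x in the shaders below):

import math

def pixel_radius_to_world(pixel_radius, screen_width_px, half_fov_x, cam_z_dist):
    # fraction of the screen covered by the radius (pixel space -> factor space)
    factor = pixel_radius / screen_width_px
    # world-space width of the visible screen at this camera-Z distance
    world_screen_width = 2.0 * cam_z_dist * math.tan(half_fov_x)
    # radius in world units at that distance
    return factor * world_screen_width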
Here is the final vertex shader at the moment. Hopefully the global values make sense. I did not modify this to post it, so please forgive any silliness in my comments. EDIT: I had to make some edits to this, because I was accidentally moving the vertex along the camera-Z direction instead of directly toward the camera:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // compute offset from vertex to camera
    float3 to_cam_offset = Scene.CamPos - vin.Position.xyz;
    // compute the Z distance of the camera from the vertex
    float cam_z_dist = -dot( Scene.CamZ, to_cam_offset );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this removes it from pixel space into factor-space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space-width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera z distance, which gives us the world radius, in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * cam_z_dist * 2.0;
    // finally, push the vertex toward the camera by the world radius
    // + note: moving by radius will only work with surfaces facing the camera, since we are moving toward the camera, rather than away from the surface
    // + because of this, we also multiply by another 4, to compensate for nearby surface angles, but there is no scale that would work for every angle
    float3 offset = normalize(to_cam_offset) * (radius_world * -4.0);
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera", and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz + offset, 1.0) );
    // calculate radius of point, in clip space from our radius factor
    // + we scale by 2 to convert pixel radius into clip-radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute scaled clip-space offset and apply it to our clip-position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top left, 1,-1 = bottom right, 1,1 = top right (note: in clip-space, +1 = top, -1 = bottom)
    // + we scale by clipping depth (part of clip_radius) to retain constant scale, but this will give us a VERY LARGE result
    // + we scale by inverse resolution (clip_radius) to convert our input screen scale (eg, 1->1024) into a clip scale (eg, 0.001 to 1.0 )
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}
Here is the other version that offsets z & w instead of changing things in world space. After edits above, this is probably the more optimal solution:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera", and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz, 1.0) );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this removes it from pixel space into factor-space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space-width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera z distance, which gives us the world radius, in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * pin.ClipPos.w * 2.0;
    // offset depth by our world radius
    // + we scale this extra to compensate for surfaces with high angles relative to the camera (since we are moving directly at it)
    // + notice we have to make the perspective divide before modifying w, then re-apply it after, or xy will be off
    pin.ClipPos.xy /= pin.ClipPos.w;
    pin.ClipPos.z -= radius_world * 10.0;
    pin.ClipPos.w -= radius_world * 10.0;
    pin.ClipPos.xy *= pin.ClipPos.w;
    // calculate radius of point, in clip space from our radius factor
    // + we scale by 2 to convert pixel radius into clip-radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute scaled clip-space offset and apply it to our clip-position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top left, 1,-1 = bottom right, 1,1 = top right (note: in clip-space, +1 = top, -1 = bottom)
    // + we scale by clipping depth (part of clip_radius) to retain constant scale, but this will give us a VERY LARGE result
    // + we scale by inverse resolution (clip_radius) to convert our input screen scale (eg, 1->1024) into a clip scale (eg, 0.001 to 1.0 )
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}

Drawing a circle without using a function for it

So I was wondering how a circle() function works, and how I can draw a circle without using it (I wanted to do something related to it). Does anyone know this stuff?
A classic way of rasterizing a circle is using the Midpoint Circle Algorithm.
It works by tracking the pixels which are as close to the x² + y² = r² isoline as possible. This can even be done with purely integer calculations, which is particularly suitable for devices with low computation power.
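As a minimal sketch of that algorithm (Python, integer-only, not tied to any particular drawing library):

def midpoint_circle(cx, cy, r):
    # returns the pixels of a circle of radius r centered on (cx, cy)
    points = []
    x, y = r, 0
    err = 1 - r                      # decision variable for the midpoint test
    while x >= y:
        # collect the eight symmetric octant points
        points += [(cx + x, cy + y), (cx + y, cy + x),
                   (cx - y, cy + x), (cx - x, cy + y),
                   (cx - x, cy - y), (cx - y, cy - x),
                   (cx + y, cy - x), (cx + x, cy - y)]
        y += 1
        if err < 0:
            err += 2 * y + 1         # midpoint was inside the circle: keep x
        else:
            x -= 1
            err += 2 * (y - x) + 1   # midpoint was outside: step x inward
    return points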
A circle is the set of points located at a constant distance from another point, called the center.
If you can draw lines defined by two points, you can draw the representation of a circle on a canvas, knowing its center, and its radius.
The approach is to determine a set of consecutive points located on the circumference, then join them with lines.
For instance, in Python (which reads like pseudocode):
import math

def make_circle(center, radius, num_points=40):
    """Returns a sequence of points on the circumference."""
    points = []          # circumference points only; the center itself is not one of them
    d_theta = 2 * math.pi / num_points
    cx, cy = center
    for idx in range(num_points + 1):   # num_points + 1 so the last point closes the loop
        theta = idx * d_theta
        points.append((cx + math.cos(theta) * radius, cy + math.sin(theta) * radius))
    return points
And if you want to try it, here it is: circles codeskulptor.
You will see that for display purposes, 40 points on the circumference is enough to give an acceptable rendition.
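For example, consecutive points can then be paired up and passed to whatever line-drawing primitive the canvas offers:

pts = make_circle((100, 100), 50)
segments = list(zip(pts, pts[1:]))   # consecutive point pairs, ready for a draw-line call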

Having the coordinates of the two triangles of a twisted triangle prism, how can I know if a point is inside it?

Here some examples of twisted triangle prisms.
I want to know if a moving triangle will hit a certain point. That's why I need to solve this problem.
The idea is that a triangle with random coordinates becomes another random triangle, with all of its vertices moving between the two positions.
related: How to determine point/time of intersection for ray hitting a moving triangle?
One of my students made this little animation in Mathematica.
It shows the twisting of a prism to the Schönhardt polyhedron.
See the Wikipedia page for its significance.
It would be easy to determine if a particular point is inside the polyhedron.
But whether it is inside a particular smooth twisting, as in your image, depends on the details (the rate) of the twisting.
Let the bottom triangle lie in the plane z = 0 with rotation angle 0, and let the top triangle have rotation angle Fi. The height of the twisted prism is Hgt.
The rotation angle depends linearly on height, so the layer at height h has rotation angle
a(h) = Fi * h / Hgt
If the point coordinates are (x, y, z), then shift the point to z = 0 and rotate its (x, y) coordinates about the rotation axis (rx, ry) by the angle -a(z):
t = -a(z) = - Fi * z / Hgt
xn = rx + (x-rx) * Cos(t) - (y-ry) * Sin(t)
yn = ry + (x-rx) * Sin(t) + (y-ry) * Cos(t)
Then check whether (xn, yn) lies inside the bottom triangle.
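A minimal sketch of that check (Python; the parameter names are illustrative, the bottom triangle is assumed to lie in the plane z = 0 as above, and (rx, ry) is the rotation axis):

import math

def point_in_twisted_prism(px, py, pz, bottom_tri, rx, ry, fi, hgt):
    # reject points outside the prism's height range
    if pz < 0 or pz > hgt:
        return False
    # un-twist the point back to the z = 0 layer
    t = -fi * pz / hgt
    xn = rx + (px - rx) * math.cos(t) - (py - ry) * math.sin(t)
    yn = ry + (px - rx) * math.sin(t) + (py - ry) * math.cos(t)
    return point_in_triangle((xn, yn), bottom_tri)

def point_in_triangle(p, tri):
    # the sign of each edge's cross product tells which side of that edge p lies on
    def edge_side(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1 = edge_side(tri[0], tri[1], p)
    d2 = edge_side(tri[1], tri[2], p)
    d3 = edge_side(tri[2], tri[0], p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)   # inside (or on an edge) if all signs agree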

camera in ray tracing

I'm completely stuck with the camera in ray tracing. Please take a look at my calculations and point out where the error is. I'm using a left-handed coordinate system.
x, y // range [0..S) x [0..S) // pixel coordinates
Now, let's transform the pixel coordinates to parametric coordinates on the camera plane:
xp = x/S * 2 - 1;
yp = y/S * 2 - 1;
xp, yp // range [-1..1] x [-1..1]
Calculation of the camera basis:
//eye - camera position
//up - camera up vector
//look_at - camera target point
vec3 w = normalize(look_at-eye);
vec3 u = cross(up,w);
vec3 v = cross(w,u);
So the ray direction should have the following coordinates:
vec3 dir = look_at - eye + xp*u + yp*v;
ray3 ray = {eye, normalize(dir)};
I think the mistake is here:
vec3 dir = look_at - eye + xp*u + yp*v;
The image plane should have a normal vector w, and either be between the eye and the look at point (the more common way in ray tracers), or be behind the eye (more closely models an actual pinhole camera). So let's create a scalar zoom_factor. A positive number will put the plane in front of the eye, and a negative one will put it behind the eye (and flip the image).
The center of the image plane is thus:
eye + zoom_factor*w
A point (xp, yp) on the image plane is thus:
eye + zoom_factor*w + xp*u + yp*v
Now you want the direction to be from the eye to this point on this image plane:
vec3 dir = eye + zoom_factor*w + xp*u + yp*v - eye;
The eyes cancel, so it simplifies to:
vec3 dir = zoom_factor*w + xp*u + yp*v
This assumes xp and yp are each in a range like (-0.5, 0.5). Note that (0, 0) is the middle of the image plane with this arrangement.
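Here is a minimal sketch of the corrected ray construction (Python; it assumes xp and yp have already been mapped to the camera-plane range as in the question, and that zoom_factor is positive):

import math

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(a):
    l = math.sqrt(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)
    return (a[0] / l, a[1] / l, a[2] / l)

def primary_ray(eye, look_at, up, xp, yp, zoom_factor):
    # camera basis, as in the question (u is normalized here in case up is not
    # exactly perpendicular to w)
    w = normalize(sub(look_at, eye))
    u = normalize(cross(up, w))
    v = cross(w, u)
    # direction from the eye through the point (xp, yp) on the image plane
    dir_x = zoom_factor * w[0] + xp * u[0] + yp * v[0]
    dir_y = zoom_factor * w[1] + xp * u[1] + yp * v[1]
    dir_z = zoom_factor * w[2] + xp * u[2] + yp * v[2]
    return eye, normalize((dir_x, dir_y, dir_z))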

Issues with bullet entry points for "shoulder mounted" guns

I'm making a SHMUP game that has a space ship. That space ship currently fires a main cannon from its center point. The sprite that represents the ship has a center based registration point. 0,0 is center of the ship.
When I fire the main cannon I make a bullet, set its x & y coordinates to match the avatar's, and add it to the display list. This works fine.
I then made two new functions called fireLeftCannon and fireRightCannon. These create a bullet and add it to the display list, but with the x, y values offset from the ship's center (this.x +(-) 10 and this.y + 15). This creates a sort of triangle of bullet entry points.
Similar to this:
   ▲
▲   ▲
The game tick function adjusts the avatar's rotation to always point at the cursor; this is my aiming method. When I shoot straight up, all 3 bullets fire up in the expected pattern. However, when I rotate and face right, the entry points do not rotate. This is not an issue for the center-point main cannon.
My question is: how do I use the current center position (this.x, this.y) and adjust it based on my current rotation to place a new bullet so that it is angled correctly?
Thanks a lot in advance.
Tyler
EDIT
OK, I tried your solution and it didn't work. Here is my bullet move code:
var pi:Number = Math.PI
var _xSpeed:Number = Math.cos((_rotation - 90) * (pi/180) );
var _ySpeed:Number = Math.sin((_rotation - 90) * (pi / 180) );
this.x += (_xSpeed * _bulletSpeed );
this.y += (_ySpeed * _bulletSpeed );
And I tried adding your code to the left shoulder cannon:
_bullet.x = this.x + Math.cos( StaticMath.ToRad(this.rotation) ) * ( this.x - 10 ) - Math.sin( StaticMath.ToRad(this.rotation)) * ( this.x - 10 );
_bullet.y = this.y + Math.sin( StaticMath.ToRad(this.rotation)) * ( this.y + 15 ) + Math.cos( StaticMath.ToRad(this.rotation)) * ( this.y + 15 );
This is placing the shots a good deal away from the ship and sometimes off screen.
How am I messing up the translation code?
What you need to start with is, to be precise, the coordinates of your cannons in the ship's coordinate system (or “frame of reference”). This is like what you have now but starting from 0, not the ship's position, so they would be something like:
(0, 0) -- center
(10, 15) -- left shoulder
(-10, 15) -- right shoulder
Then what you need to do is transform those coordinates into the coordinate system of the world/scene; this is the same kind of thing your graphics library is doing to draw the sprite.
In your particular case, the intervening transformations are
world ←translation→ ship position ←rotation→ ship positioned and rotated
So given that you have coordinates in the third frame (how the ship's sprite is drawn), you need to apply the rotation, and then apply the translation, at which point you're in the first frame. There are two approaches to this: one is matrix arithmetic, and the other is performing the transformations individually.
For this case, it is simpler to skip the matrices unless you already have a matrix library handy, in which case you should use it: calculate the "ship's coordinate transformation matrix" once per frame and then use it for all bullets etc.
I'll now explain doing it directly.
The general method of applying a rotation to coordinates (in two dimensions) is this (where (x1,y1) is the original point and (x2,y2) is the new point):
x2 = cos(angle)*x1 - sin(angle)*y1
y2 = sin(angle)*x1 + cos(angle)*y1
Whether this is a clockwise or counterclockwise rotation will depend on the “handedness” of your coordinate system; just try it both ways (+angle and -angle) until you have the right result. Don't forget to use the appropriate units (radians or degrees, but most likely radians) for your angles given the trig functions you have.
Now, you need to apply the translation. I'll continue using the same names, so (x3,y3) is the rotated-and-translated point. (dx,dy) is what we're translating by.
x3 = dx + x2
y3 = dy + y2
As you can see, that's very simple; you could easily combine it with the rotation formulas.
I have described transformations in general. In the particular case of the ship bullets, it works out to this in particular:
bulletX = shipPosX + cos(shipAngle)*gunX - sin(shipAngle)*gunY
bulletY = shipPosY + sin(shipAngle)*gunX + cos(shipAngle)*gunY
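As a small sketch of those two formulas wrapped in a helper (Python-style pseudocode; the names are illustrative and the angle is in radians):

import math

def gun_to_world(ship_x, ship_y, ship_angle, gun_x, gun_y):
    # rotate the gun offset (given in the ship's frame) by the ship's angle,
    # then translate by the ship's world position
    c, s = math.cos(ship_angle), math.sin(ship_angle)
    bullet_x = ship_x + c * gun_x - s * gun_y
    bullet_y = ship_y + s * gun_x + c * gun_y
    return bullet_x, bullet_y

# e.g. the left-shoulder cannon at offset (10, 15):
# bx, by = gun_to_world(ship_x, ship_y, math.radians(ship_rotation_degrees), 10, 15)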
If your bullets are turning the wrong direction, negate the angle.
If you want to establish a direction-dependent initial velocity for your bullets (e.g. always-firing-forward guns) then you just apply the rotation but not the translation to the velocity (gunVelX, gunVelY).
bulletVelX = cos(shipAngle)*gunVelX - sin(shipAngle)*gunVelY
bulletVelY = sin(shipAngle)*gunVelX + cos(shipAngle)*gunVelY
If you were to use vector and matrix math, you would be doing all the same calculations as here, but they would be bundled up in single objects rather than pairs of x's and y's and four trig functions. It can greatly simplify your code:
shipTransform = translate(shipX, shipY)*rotate(shipAngle)
bulletPos = shipTransform*gunPos
I've given the explicit formulas because knowing how the bare arithmetic works is useful to the conceptual understanding.
Response to edit:
In the code you edited into your question, you are adding what I assume is the ship position into the coordinates you multiply by sin/cos. Don't do that; multiply only the offset of the gun position from the ship center by sin/cos, and only then add that to the ship position. Also, you are using the x offset in both terms of the first line and the y offset in both terms of the second, where each line should use the x offset in its first term and the y offset in its second (x y; x y, not x x; y y). Here is your code edited to fix those two things:
_bullet.x = this.x + Math.cos( StaticMath.ToRad(this.rotation)) * (-10) - Math.sin( StaticMath.ToRad(this.rotation)) * (+15);
_bullet.y = this.y + Math.sin( StaticMath.ToRad(this.rotation)) * (-10) + Math.cos( StaticMath.ToRad(this.rotation)) * (+15);
This is the code for a gun at offset (-10, 15).
