I'm trying to draw simple scaled points in my custom graphics engine. The points are scaled in pixel space, and the radius of the points is in pixels, but the positions of the points fed to the draw function are in world coordinates.
So far, everything is working great, except for a depth clipping issue. The points are of constant size, regardless of how far away they are, which is done by offsetting the vertices in projected/clip space. However, when they are close to surfaces, they partially intersect them in the depth buffer.
Since these points represent world coordinates, I want them to use the depth buffer, and be hidden behind objects that are in front of them. However, when the point is close to a surface, I want to push it toward the camera, so it doesn't partially intersect it. I think it is easier to just always do this push, regardless of the point being close to a surface. What makes the most sense to me is to just push it by its radius, so that all of its vertices are exactly far enough away to avoid clipping into nearby surfaces.
The easiest way I've found to do this is to simply subtract from the Z value in the vertex shader, after transforming into view-projection space. However, I'm having some trouble converting my pixel radius into a depth offset. Regardless of the math I use, what works close up never seems to work far away. I'm thinking this may be due to how the z-buffer is non-linear, but I could be wrong.
Currently, the closest I've been to solving this is the following:
proj_vertex_pos.z -= point_pixel_radius / proj_vertex_pos.w * 100.0
I'm honestly not sure why 100.0 helps make this work yet. I added it simply because dividing the radius by w was too small a value. Can anyone point me in the right direction? How do I convert my pixel distance into a depth distance, especially if the depth distance changes scale depending on which depth you are at? Or am I just way off?
The solution was to convert my pixel-space radius into world-space units, since clip-space z and w are still in world-space units after the view-projection transform (before the perspective divide). This can be done by converting pixels into a factor (factor = pixels / screen_size), then converting the factor into world-space units, which was a little more involved: I had to calculate the world-space size of the screen at a given distance, then multiply the factor by that to get world units. There's probably a simpler way to calculate it, but my brain always goes straight for factors.
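For reference, here's a minimal sketch of that conversion (Python used as pseudocode; the names radius_px, screen_width_px and fov_x are illustrative, not the engine's actual globals, and a symmetric perspective frustum is assumed):

import math

def pixel_radius_to_world(radius_px, screen_width_px, fov_x, cam_z_dist):
    # pixels -> screen factor (what fraction of the screen width the radius covers)
    radius_fac = radius_px / screen_width_px
    # world-space width of the view frustum at the vertex's camera-Z distance
    world_width = 2.0 * cam_z_dist * math.tan(fov_x / 2.0)
    # factor -> world units
    return radius_fac * world_width

In the shaders below, Scene.FieldFactor.x appears to play the role of tan(fov_x / 2) (my reading of the comments).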
The reason I was getting different results at different distances was mainly that I was only offsetting the z component of the clip position. It's also necessary to offset the w component to make the depth offset work at any distance (linearly). However, in order to offset the w component, you first have to divide xy by w, modify w as needed, then multiply xy by the new w. This made the math fairly involved, so I changed strategy and offset the vertex before clip space instead, which requires manually calculating the vertex's Z distance from the camera, but it honestly ended up being about the same amount of math either way.
Here is the final vertex shader at the moment. Hopefully the global values make sense. I did not modify this to post it, so please forgive any silliness in my comments. EDIT: I had to make some edits to this, because I was accidentally moving the vertex along the camera-Z direction instead of directly toward the camera:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // compute offset from vertex to camera
    float3 to_cam_offset = Scene.CamPos - vin.Position.xyz;
    // compute the Z distance of the camera from the vertex
    float cam_z_dist = -dot( Scene.CamZ, to_cam_offset );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this takes it from pixel space into factor space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera Z distance, which gives us the radius in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * cam_z_dist * 2.0;
    // finally, push the vertex toward the camera by the world radius
    // + note: moving by the radius will only work for surfaces facing the camera, since we move toward the camera rather than away from the surface
    // + because of this, we also multiply by another 4 to compensate for nearby surface angles, but there is no scale that would work for every angle
    float3 offset = normalize(to_cam_offset) * (radius_world * -4.0);
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera", and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz + offset, 1.0) );
    // calculate the clip-space radius of the point from our radius factor
    // + we scale by 2 to convert the pixel radius into a clip radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute scaled clip-space offset and apply it to our clip position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top-left, 1,-1 = bottom-right, 1,1 = top-right (note: in clip space, +1 = top, -1 = bottom)
    // + we scale by the clip depth (part of clip_radius) to retain constant size, but this gives a VERY LARGE result
    // + we scale by the inverse resolution (also part of clip_radius, via radius_fac) to convert screen scale (e.g. 1->1024) into clip scale (e.g. 0.001 to 1.0)
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}
Here is the other version, which offsets z & w instead of changing things in world space. After the edits above, this is probably the better solution:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera", and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz, 1.0) );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this takes it from pixel space into factor space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera Z distance (ClipPos.w), which gives us the radius in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * pin.ClipPos.w * 2.0;
    // offset depth by our world radius
    // + we scale this extra to compensate for surfaces at steep angles to the camera (since we are moving directly toward it)
    // + notice we have to apply the perspective divide before modifying w, then undo it afterwards, or xy will be off
    pin.ClipPos.xy /= pin.ClipPos.w;
    pin.ClipPos.z -= radius_world * 10.0;
    pin.ClipPos.w -= radius_world * 10.0;
    pin.ClipPos.xy *= pin.ClipPos.w;
    // calculate the clip-space radius of the point from our radius factor
    // + we scale by 2 to convert the pixel radius into a clip radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute scaled clip-space offset and apply it to our clip position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top-left, 1,-1 = bottom-right, 1,1 = top-right (note: in clip space, +1 = top, -1 = bottom)
    // + we scale by the clip depth (part of clip_radius) to retain constant size, but this gives a VERY LARGE result
    // + we scale by the inverse resolution (also part of clip_radius, via radius_fac) to convert screen scale (e.g. 1->1024) into clip scale (e.g. 0.001 to 1.0)
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}
Related
So I was wondering: how does a circle() function work, and how can I draw a circle without using it? (I wanted to do something related to it.) Does anyone know this stuff?
A classic way of rasterizing a circle is using the Midpoint Circle Algorithm.
It works by tracking the pixels which are as close to the x² + y² = r² isoline as possible. This can even be done with purely integer calculations, which is particularly suitable for low-computation-power devices.
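For illustration, here is a minimal Python sketch of the usual formulation (my own, not taken from a specific source): it walks one octant with an integer decision variable and mirrors each pixel into the other seven octants.

def midpoint_circle(cx, cy, r):
    """Integer-only midpoint circle rasterisation around (cx, cy)."""
    pixels = []
    x, y = r, 0
    err = 1 - r  # decision variable: negative while the midpoint is inside the circle
    while x >= y:
        # mirror the current octant point into all eight octants
        for sx, sy in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            pixels.append((cx + sx, cy + sy))
        y += 1
        if err < 0:
            err += 2 * y + 1
        else:
            x -= 1
            err += 2 * (y - x) + 1
    return pixels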
A circle is the set of points located at a constant distance from another point, called the center.
If you can draw lines defined by two points, you can draw the representation of a circle on a canvas, knowing its center, and its radius.
The approach is to determine a set of consecutive points located on the circumference, then join them with lines.
For instance, in Python (which reads like pseudocode):
import math

def make_circle(center, radius, num_points=40):
    """Returns a sequence of points on the circumference."""
    points = []
    d_theta = 2 * math.pi / num_points
    cx, cy = center
    for idx in range(num_points + 1):
        theta = idx * d_theta
        points.append((cx + math.cos(theta) * radius,
                       cy + math.sin(theta) * radius))
    return points
And if you want to try it, here it is: circles codeskulptor.
You will see that for display purposes, 40 points on the circumference is enough to give an acceptable rendition.
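To actually draw the result, you then join consecutive points with line segments. A sketch, where draw_line stands in for whatever line primitive your canvas provides (a hypothetical call, not a real API):

pts = make_circle((100, 100), 40)
for p1, p2 in zip(pts, pts[1:]):
    draw_line(p1, p2)  # hypothetical canvas call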
I have a 4x4 camera matrix comprised of right, up, forward and position vectors.
I raytrace the scene with the following code, which I found in a tutorial but don't entirely understand:
for (int i = 0; i < m_imageSize.width; ++i)
{
    for (int j = 0; j < m_imageSize.height; ++j)
    {
        u = (i + .5f) / (float)(m_imageSize.width - 1) - .5f;
        v = (m_imageSize.height - 1 - j + .5f) / (float)(m_imageSize.height - 1) - .5f;
        Ray ray(cameraPosition, normalize(u*cameraRight + v*cameraUp + 1 / tanf(m_verticalFovAngleRadian) * cameraForward));
        // ... trace the ray for pixel (i, j) ...
    }
}
I have a couple of questions:
How can I find the focal length of my raytracing camera?
Where is my image plane?
Why does cameraForward need to be multiplied by 1/tanf(m_verticalFovAngleRadian)?
Focal length is a property of lens systems. The camera model that this code uses, however, is a pinhole camera, which does not use lenses at all. So, strictly speaking, the camera does not really have a focal length. The corresponding optical properties are instead expressed as the field of view (the angle that the camera can observe; usually the vertical one). You could calculate the focal length of a camera that has an equivalent field of view with the following formula (see Wikipedia):
FOV = 2 * arctan(x / (2 * f))

FOV  diagonal field of view
x    diagonal of film; by convention 24x36 mm -> x = 43.266 mm
f    focal length
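As a quick worked example (my own illustration, using the convention above), solving the formula for f gives f = x / (2 * tan(FOV / 2)):

import math

def equivalent_focal_length(fov_radians, x_mm=43.266):
    """Focal length whose diagonal field of view equals fov_radians."""
    return x_mm / (2.0 * math.tan(fov_radians / 2.0))

print(equivalent_focal_length(math.radians(46.8)))  # ~50 mm, the classic "normal" lens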
There is no unique image plane. Any plane that is perpendicular to the view direction can be seen as the image plane. In fact, the projected images differ only in their scale.
For your last question, let's take a closer look at the code:
u = (i + .5f) / (float)(m_imageSize.width - 1) - .5f;
v = (m_imageSize.height - 1 - j + .5f) / (float)(m_imageSize.height - 1) - .5f;
These formulas calculate u/v coordinates between -0.5 and 0.5 for every pixel, assuming that the entire image fits in the box between -0.5 and 0.5.
u*cameraRight + v*cameraUp
... is just placing the x/y coordinates of the ray on the pixel.
... + 1 / tanf(m_verticalFovAngleRadian) *cameraForward
... is defining the depth component of the ray and ultimately the depth of the image plane you are using. Basically, this is making the ray steeper or shallower. Assume that you have a very small field of view, then 1/tan(fov) is a very large number. So, the image plane is very far away, which produces exactly this small field of view (when keeping the size of the image plane constant since you already set the x/y components). On the other hand, if the field of view is large, the image plane moves closer. Note that this notion of image plane is only conceptual. As I said, all other image planes are equally valid and would produce the same image. Another way (and maybe a more intuitive one) to specify the ray would be
u * tanf(m_verticalFovAngleRadian) * cameraRight
+ v * tanf(m_verticalFovAngleRadian) * cameraUp
+ 1 * cameraForward
As you see, this is exactly the same ray (just scaled). The idea here is to set the conceptual image plane to a depth of 1 and scale the x/y components to adapt the size of the image plane. tan(fov) (with fov being the half field of view) is exactly the size of the half image plane at a depth of 1. Just draw a triangle to verify that. Note that this code is only able to produce square image planes. If you want to allow rectangular ones, you need to take into account the ratio of the side lengths.
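Here is a small Python sketch of that rectangular variant (my own illustration; it follows the half-FOV convention of the answer above, with vectors as plain 3-tuples, so treat the exact FOV mapping as approximate):

import math

def make_ray_dir(i, j, width, height, half_fov_y, right, up, forward):
    """Unnormalised ray direction for pixel (i, j); vectors are 3-tuples."""
    u = (i + 0.5) / width - 0.5
    v = (height - 1 - j + 0.5) / height - 0.5
    t = math.tan(half_fov_y)
    aspect = width / height            # scale x so pixels stay square
    sx, sy = u * t * aspect, v * t     # conceptual image plane at depth 1
    return tuple(sx * right[k] + sy * up[k] + forward[k] for k in range(3))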
I have the plane equation describing the points belonging to a plane in 3D, and the origin of the normal (X, Y, Z). This should be enough to generate something like a 3D arrow. In PCL this is possible via the viewer, but I would like to actually store those 3D points inside the cloud. How do I generate them, then? A cylinder with a cone on top?
To generate a line perpendicular to the plane:
You have the plane equation. This gives you the direction of the normal to the plane. If you used PCL to get the plane, this is in ModelCoefficients. See the details here: SampleConsensusModelPerpendicularPlane
The first step is to make a line along the normal (i.e., perpendicular to the plane) at the point you mention (X,Y,Z). Let (NORMAL_X,NORMAL_Y,NORMAL_Z) be the normal you got from your plane equation. Something like:
pcl::PointXYZ pnt_on_line;
for (double distfromstart = 0.0; distfromstart < LINE_LENGTH; distfromstart += DISTANCE_INCREMENT)
{
    pnt_on_line.x = X + distfromstart * NORMAL_X;
    pnt_on_line.y = Y + distfromstart * NORMAL_Y;
    pnt_on_line.z = Z + distfromstart * NORMAL_Z;
    my_cloud.points.push_back(pnt_on_line);
}
Now you want to put a hat on your arrow, and pnt_on_line contains the end of the line, exactly where you want to put it. To make the cone you could loop over angle and distance along the arrow, calculate a local x, y and z from that, and convert them to points in point-cloud space: the z part would be converted into your point cloud's frame of reference by multiplying with the normal vector as above, while the x and y parts would be multiplied into vectors perpendicular to this normal vector. To get these, choose an arbitrary unit vector perpendicular to the normal vector (for your x axis) and take its cross product with the normal vector to find the y axis.
The second part of this explanation is fairly terse but the first part may be the more important.
Update
So possibly the best way to describe how to do the cone is to start with a cylinder, which is an extension of the line described above. In the case of the line, we have (part of) a one-dimensional manifold embedded in 3D space; that is, we have one variable that we loop over, adding points. The cylinder is a two-dimensional object, so we have to loop over two dimensions: the angle and the distance. In the case of the line we already have the distance, so the above loop would now look like:
for (double distfromstart = 0.0; distfromstart < LINE_LENGTH; distfromstart += DISTANCE_INCREMENT)
{
    for (double angle = 0.0; angle < 2 * M_PI; angle += M_PI / 8)
    {
        // calculate coordinates of point and add to cloud
    }
}
Now, to calculate the coordinates of the new point: we already have the point on the line, so we just need to add a vector to it to move it away from the line in the direction given by the angle. Let's say the radius of our cylinder will be 0.1, and let's call the orthonormal basis perpendicular to the normal of the plane perpendicular_1 and perpendicular_2 (that is, two vectors of length 1, perpendicular to each other and to the vector (NORMAL_X,NORMAL_Y,NORMAL_Z); we will see how to calculate them later):
// calculate coordinates of point and add to cloud
pnt_on_cylinder.x = pnt_on_line.x + perpendicular_1.x * 0.1 * cos(angle) + perpendicular_2.x * 0.1 * sin(angle);
pnt_on_cylinder.y = pnt_on_line.y + perpendicular_1.y * 0.1 * cos(angle) + perpendicular_2.y * 0.1 * sin(angle);
pnt_on_cylinder.z = pnt_on_line.z + perpendicular_1.z * 0.1 * cos(angle) + perpendicular_2.z * 0.1 * sin(angle);
my_cloud.points.push_back(pnt_on_cylinder);
Actually, this is a vector summation, and if we were to write the operation with vectors (with radius = 0.1) it would look like:
pnt_on_line + (perpendicular_1 * cos(angle) + perpendicular_2 * sin(angle)) * radius
Now I said I would talk about how to calculate perpendicular_1 and perpendicular_2. Let K be any unit vector that is not parallel to (NORMAL_X,NORMAL_Y,NORMAL_Z) (this can be found by trying e.g. (1,0,0) then (0,1,0)).
Then
perpendicular_1 = normalise(K X (NORMAL_X,NORMAL_Y,NORMAL_Z))
perpendicular_2 = perpendicular_1 X (NORMAL_X,NORMAL_Y,NORMAL_Z)
Here X is the vector cross product and the above are vector equations. perpendicular_1 needs the normalisation because K is generally not perpendicular to the normal, so the cross product is not unit length; perpendicular_2 then comes out unit length automatically (assuming the plane normal itself is unit length). Note also that the original calculation of pnt_on_line involved a scalar multiplication and a vector summation (I am just writing this for completeness of the exposition).
If you can manage this, then the cone is easy: just change a couple of things in the double loop. The radius changes along the arrow's length until it is zero at the end, and distfromstart no longer starts at 0. A sketch of the whole cone loop follows below.
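Here is a self-contained Python sketch of that double loop for the cone (my own illustration of the approach above; the helper names and default parameters are made up for the example, and the plane normal is assumed to be unit length):

import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalise(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def make_cone(tip, normal, length=0.3, base_radius=0.1, n_angle=16, n_dist=10):
    """Points on a cone whose tip sits at `tip`, pointing along unit `normal`."""
    # pick any vector not parallel to the normal, then build the orthonormal basis
    k = (1.0, 0.0, 0.0) if abs(normal[0]) < 0.9 else (0.0, 1.0, 0.0)
    p1 = normalise(cross(k, normal))
    p2 = cross(p1, normal)  # already unit length because p1 is perpendicular to the normal
    pts = []
    for di in range(1, n_dist + 1):
        d = di * length / n_dist       # distance back from the tip
        r = base_radius * d / length   # radius grows away from the tip
        centre = tuple(tip[c] - d * normal[c] for c in range(3))
        for ai in range(n_angle):
            a = 2 * math.pi * ai / n_angle
            pts.append(tuple(centre[c] + r * (math.cos(a) * p1[c] + math.sin(a) * p2[c])
                             for c in range(3)))
    return pts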
We are programming a 2D game in XNA. We have polygons which define our level elements. They are triangulated so that we can easily render them. Now I would like to write a shader which renders the polygons as outlined textures: in the middle of the polygon one would see the texture, and on the border it should somehow glow.
My first idea was to walk along the polygon and draw a quad on each line segment with a specific texture. This works but looks strange for small corners where the textures are forced to overlap.
My second approach was to mark all border vertices with some kind of normal pointing out of the polygon. Passing this to the shader would interpolate the normals across the edges of the triangulation, and I could use the interpolated "normal" as a value for shading. I have not been able to test it yet, but would that work? A special property of the triangulation is that all vertices are on the border, so there are no vertices inside the polygon.
Do you guys have a better idea for what I want to achieve?
Here's a picture of what it looks like right now with the quad solution:
You could render your object twice: a bigger, stretched version behind the first one. Not ideal, since a complex object cannot be stretched uniformly to create a border.
If you have access to your screen buffer you can render your glow components into a rendertarget and align a full-screen quad to your viewport and add a fullscreen 2D silhouette filter to it.
This way you gain perfect control over the edge by defining its radius, colour, blur. With additional output values such as the RGB values from the object render pass you can even have different advanced glows.
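As a rough CPU-side illustration of such a silhouette filter (my own sketch in Python/NumPy, not XNA code; a real implementation would do this in a pixel shader on the rendertarget): dilate and blur the object's alpha mask, then keep only the halo outside the original silhouette.

import numpy as np

def silhouette_glow(alpha, radius=4):
    """alpha: 2-D array, 1 inside the object, 0 outside."""
    glow = alpha.astype(float)
    for _ in range(radius):  # crude iterative dilation + blur
        padded = np.pad(glow, 1, mode="edge")
        glow = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                padded[1:-1, :-2] + padded[1:-1, 2:] +
                padded[1:-1, 1:-1]) / 5.0
    return np.clip(glow - alpha, 0.0, 1.0)  # border-only glow intensity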
I think RenderMonkey had some examples in its shader editor. It's definitely a good starting point to work with and try things out.
Probably you want to calculate a new border vertex list (easy to fill, for example, with a triangle strip together with the originals). If you use a constant border width and a convex polygon, it's just (see the sketch below):
B_new = B - (BtoA.normalised() + BtoC.normalised()).normalised() * width;
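For illustration, here is a small Python sketch applying that per-vertex formula to a convex, consistently wound polygon (the helper names are mine):

import math

def unit(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def offset_convex(poly, width):
    """B_new = B - normalise(normalise(B->A) + normalise(B->C)) * width."""
    out = []
    for i, b in enumerate(poly):
        a = poly[i - 1]                # previous vertex (wraps around)
        c = poly[(i + 1) % len(poly)]  # next vertex
        to_a = unit((a[0] - b[0], a[1] - b[1]))
        to_c = unit((c[0] - b[0], c[1] - b[1]))
        d = unit((to_a[0] + to_c[0], to_a[1] + to_c[1]))
        out.append((b[0] - d[0] * width, b[1] - d[1] * width))
    return out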
If not, it can get more complicated; here is my old but pretty universal solution:
// Helper function. To work right, v1 must come before v2 in the vertex list,
// and the vertices must be consistently wound (anti?)clockwise!
float vectorAngle(Vector2 v1, Vector2 v2){
    float alpha;
    if (!v1.isNormalised())
        v1.normalise();
    if (!v2.isNormalised())
        v2.normalise();
    alpha = v1.dotProduct(v2);
    // rotate v1 by 90 degrees so we can recover the sign of the angle
    float help = v1.x;
    v1.x = v1.y;
    v1.y = -help;
    float angle = Math::ACos(alpha);
    if (v1.dotProduct(v2) < 0){
        angle = -angle;
    }
    return angle;
}

// Normally, don't use this directly!
Vector2 calculateBorderPoint(Vector2 vec1, Vector2 vec2, float width1, float width2){
    vec1.normalise();
    vec2.normalise();
    float cos = vec1.dotProduct(vec2); // cosine of the angle between the two (normalised) vectors (remember your math lessons)
    float csc = 1.0f / Math::sqrt(1.0f - cos*cos); // cosecant of the angle; this blows up if the angle is 180!!!
    // And the rest of the magic
    Vector2 difference = (vec1 * csc * width2) + (vec2 * csc * width1);
    // If you use only convex polygons (all angles < 180; exactly 180 not allowed in this case), just return the value; if not, you need some more magic.
    // Both of the next fixes need ordered vertex lists!
    // The output vector always points to the inside of the angle, so:
    if (Math::vectorAngle(vec1, vec2) > 180.0f) // note: the function can only know the angle is over 180 if you use ordered vertices (all vertices always wound (anti?)clockwise!)
        difference = -difference;
    // OK, and if the angle was 180...
    // Note that this can fix the situation ONLY if you use ordered vertices (all vertices always wound (anti?)clockwise!)
    if (difference.isNaN()){
        float width = (width1 + width2) / 2.0; // if the angle is 180 and the border widths differ, you cannot get a perfect answer ;)
        difference = vec1 * width;
        // Just turn the vector -90 degrees
        float swapHelp = difference.y;
        difference.y = -difference.x;
        difference.x = swapHelp;
    }
    // If you want the output outside the old polygon rather than inside, just "return -difference;"
    return difference;
}

// Use this =)
Vector2 calculateBorderPoint(Vector2 A, Vector2 B, Vector2 C, float widthA, float widthB){
    return B + calculateBorderPoint(A - B, C - B, widthA, widthB);
}
Your second approach should be possible:
Mark the outer vertices (on the border) with 1 and the inner vertices (inside) with 0.
In the pixel shader you can then choose to highlight those fragments whose interpolated value is greater than 0.9f or 0.8f.
It should work.
I have a system that requires moving an image on the screen. I am currently using a png and just placing it at the desired screen coordinates.
Because of a combination of the screen resolution and the required frame rate, some frames are identical because the image has not yet moved a full pixel. Unfortunately, the resolution of the screen is not negotiable.
I have a general understanding of how sub-pixel rendering works to smooth out edges, but I have been unable to find a resource (if one exists) on how to use shading to translate an image by less than a single pixel.
Ideally, this would be usable with any image but if it was only possible with a simple shape like a circle or a ring, that would also be acceptable.
Sub-pixel interpolation is relatively simple. Typically you apply what amounts to an all-pass filter with a constant phase shift, where the phase shift corresponds to the required sub-pixel image shift. Depending on the required image quality you might use e.g. a 5 point Lanczos or other windowed sinc function and then apply this in one or both axes depending on whether you want an X shift or a Y shift or both.
E.g. for a 0.5 pixel shift the coefficients might be [ 0.06645, 0.18965, 0.27713, 0.27713, 0.18965 ]. (Note that the coefficients are normalised, i.e. their sum is equal to 1.0.)
To generate a horizontal shift you would convolve these coefficients with the pixels from x - 2 to x + 2, e.g.
const float kCoeffs[5] = { 0.06645f, 0.18965f, 0.27713f, 0.27713f, 0.18965f };

for (int y = 0; y < height; ++y)          // for each row
{
    for (int x = 2; x < width - 2; ++x)   // for each col (apart from a 2 pixel border)
    {
        float p = 0.0f;                   // convolve pixel with Lanczos coeffs
        for (int dx = -2; dx <= 2; ++dx)
            p += in[y][x + dx] * kCoeffs[dx + 2];
        out[y][x] = p;                    // store interpolated pixel
    }
}
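If you want the coefficients for an arbitrary sub-pixel shift rather than the fixed 0.5 above, you can generate them from a windowed sinc and renormalise. A Python sketch (my own, using a Lanczos-2 kernel; its coefficients will differ from the ones quoted above, which appear to come from a different window):

import math

def lanczos(x, a=2):
    """Lanczos kernel: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def shift_coeffs(shift, a=2):
    """Normalised taps that resample a row at fractional offset `shift`."""
    taps = [lanczos(t + shift, a) for t in range(-a, a + 1)]
    s = sum(taps)
    return [t / s for t in taps]  # normalise so the taps sum to 1.0

print(shift_coeffs(0.5))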
Conceptually, the operation is very simple. First you scale up the image (using any method of interpolation, as you like), then you translate the result, and finally you subsample down to the original image size.
The scale factor depends on the precision of the sub-pixel translation you want. If you want to translate by 0.5 pixels, you scale up the original image by a factor of 2 and then translate the resulting image by 1 pixel; if you want to translate by 0.25 pixels, you scale up by a factor of 4, and so on.
Note that this implementation is not efficient because when you scale up you end up calculating pixel values that you won't actually use because they're just dropped when you subsample back to the original image size. The implementation in Paul's answer is more efficient.
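For completeness, here is a tiny NumPy sketch of that scale-translate-subsample pipeline (my own illustration; it uses nearest-neighbour upscaling and a box-filter downsample, the crudest valid choices):

import numpy as np

def subpixel_shift_x(img, shift_px, factor=4):
    """Shift a 2-D image along x by shift_px, a multiple of 1/factor pixels."""
    up = np.repeat(img, factor, axis=1)                       # scale up
    up = np.roll(up, int(round(shift_px * factor)), axis=1)   # translate
    return up.reshape(img.shape[0], -1, factor).mean(axis=2)  # subsample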