What are the near and far clipping distances in 3D graphics?
If it makes a difference, I am using Ogre 3D render engine.
Near clipping distance and far clipping distance refer to the near and far plane of the viewing frustum.
Anything closer to the eye than the near clipping distance isn't displayed (it's too close), and anything further away from the eye than the far clipping distance isn't displayed either (it's too far away).
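For intuition, here is a minimal sketch (plain Python, not Ogre-specific; the NEAR/FAR values are made up) of the depth test the frustum effectively performs. In Ogre itself you set the two distances per camera with Camera::setNearClipDistance and Camera::setFarClipDistance.

    NEAR, FAR = 0.1, 1000.0   # assumed distances from the eye, in world units

    def depth_visible(z_view):
        # z_view: positive distance from the eye along the view direction
        return NEAR <= z_view <= FAR

    print(depth_visible(0.05))    # False: closer than the near plane
    print(depth_visible(50.0))    # True: within the frustum's depth range
    print(depth_visible(5000.0))  # False: beyond the far plane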
I currently have a system in place to near-clip vertices, but it only seems to work in view space.
The system is: given a triangle, the near plane and the near plane's normal, I check how many of the triangle's points are behind the clipping plane.
If 1 is behind: I create 2 new vertices at the intersections with the near plane and re-form two triangles.
If 2 are behind: I create new vertices at the intersections with the near plane and re-form a single triangle.
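A simplified sketch of that system (NumPy; assuming view space with the camera looking down +z and the near plane at z = NEAR; a real version would also preserve winding order):

    import numpy as np

    NEAR = 0.1  # assumed near-plane distance in view space

    def near_clip_triangle(tri):
        # tri: list of 3 np.array vertices; returns 0, 1 or 2 triangles
        inside = [v for v in tri if v[2] >= NEAR]
        outside = [v for v in tri if v[2] < NEAR]

        def intersect(a, b):
            t = (NEAR - a[2]) / (b[2] - a[2])  # where edge (a,b) crosses the plane
            return a + t * (b - a)

        if not outside:                        # fully in front: keep as is
            return [tri]
        if not inside:                         # fully behind: discard
            return []
        if len(outside) == 1:                  # 1 behind: 2 new vertices,
            o, (i0, i1) = outside[0], inside   # quad split into 2 triangles
            n0, n1 = intersect(o, i0), intersect(o, i1)
            return [[n0, i0, i1], [n0, i1, n1]]
        i = inside[0]                          # 2 behind: 1 clipped triangle
        return [[intersect(outside[0], i), i, intersect(outside[1], i)]]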
But the problem is, when I try to use this system in homogeneous clip space it doesn't work. I can't find the intersections, I don't know what the near plane is in homogeneous clip space, and I don't know how to handle all the vertices having different w values.
So what algorithm would I use to actually near-clip triangles in homogeneous clip space?
So far I've looked at many, many articles trying to find the answer, but I find they don't give explanations I'm able to understand, instead just spouting loads of math formulas.
Honestly I think this whole problem just stems from my lack of knowledge on 3D maths.
I'm trying to infer an object's direction of movement using dense optical flow in OpenCV. I'm using calcOpticalFlowFarneback() to get flow coordinates and cartToPolar() to acquire vector angles which would indicate direction.
To interpret the results I need to know the reference point for measuring the angle. I have found this blog post indicating that the range of angles is 360°. That suggests the angle is measured around the full unit circle, but I couldn't make out much more than that.
The documentation for cartToPolar() doesn't cover this and my attempts at testing it have failed.
It seems that the angle produced by cartToPolar() is measured on the unit circle rotated clockwise by 90°, centered on the image coordinate origin in the top-left corner.
I came to this conclusion by using the dense optical flow example provided by OpenCV. I replaced the line hsv[...,0] = ang*180/np.pi/2 with hsv[...,0] = ang*180/np.pi to get correct angle conversion from radians. Then I tested a video with people moving from top right to bottom left and vice versa. I sampled the dominant color with GIMP and got RGB values which I converted to HSV values. Hue value corresponds to the angle in degrees.
People moving from top right to bottom left produced an angle of about 300° and people moving the other way round produced an angle of about 120°. This hinted at the way the unit circle is positioned.
Looking at the code, fastAtan32f is used to compute the angles, and that seems to be an atan2 implementation.
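A quick way to verify the convention is to feed cartToPolar() the four axis-aligned unit vectors (a small test, assuming the Python bindings):

    import numpy as np
    import cv2

    # Unit vectors pointing right, down, left and up in image coordinates
    # (remember the image y axis grows downward).
    x = np.array([1.0, 0.0, -1.0, 0.0], dtype=np.float32)
    y = np.array([0.0, 1.0, 0.0, -1.0], dtype=np.float32)

    mag, ang = cv2.cartToPolar(x, y, angleInDegrees=True)
    print(ang.ravel())  # [  0.  90. 180. 270.]

So 0° points right and 90° points down on screen, which is what you'd expect from atan2(y, x) with a downward y axis.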
I have a question about graphics clipping.
The question is: why do we need line clipping or polygon clipping?
Can we just rasterize everything and then discard the pixels that fall outside the clipping window?
Thanks
You could do that, but as people in the comments have said, it is slower.
You can clip against the far, left/right, and top/bottom planes in screen space.
The problem is if a 3D object is partially behind the camera. You cannot "rasterize everything" behind a camera because typical 3D projection equations that divide by z do not make sense behind the camera - your points/vertices will be inverted/upside-down. If you have color/texture mapping it'll look weird. So at the very least your program will have to clip by the near plane and interpolate all color/texture data to the newly clipped points.
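A tiny numeric illustration of that inversion (a made-up pinhole model with focal length 1, not any real API):

    def project(x, y, z, f=1.0):
        # naive perspective divide with no clipping; camera looks down +z
        return (f * x / z, f * y / z)

    print(project(1.0, 1.0,  2.0))   # (0.5, 0.5): in front of the camera, fine
    print(project(1.0, 1.0, -2.0))   # (-0.5, -0.5): behind the camera the point
                                     # lands on the opposite side, inverted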
An exception is if you're doing raytracing/raycasting - rays do not go behind the camera so it works.
I am looking for an algorithm for the following problem:
Given:
A 3D triangle mesh. The mesh represents a part of the surface of the earth.
A polyline (a connected series of line segments) whose vertices are always on an edge or on a vertex of a triangle of the mesh. The polyline represents the centerline of a road on the surface of the earth.
I need to calculate and display the road i.e. add half of the road's width on each side of the center line, calculate the resulting vertices in the corresponding triangles of the mesh, fill the area of the road and outline the sides of the road.
What is the simplest and/or most effective strategy to do this? How do I store the data of the road most efficiently?
I see 2 options here:
render a thick polyline with a road texture
While rendering the polyline you need a TBN matrix, so use:
the polyline tangent as the tangent
the surface normal as the normal
binormal = tangent × normal
Then shift the actual point position p to:
p0 = p + d*binormal
p1 = p - d*binormal
and render the textured line (p0, p1). This approach is not a precise match to the surface mesh, so you need to disable depth testing or use some sort of blending. Also, on sharp turns it could miss some parts of the curve (in that case you can render a rectangle or disc instead of a line).
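A minimal sketch of that offset computation (NumPy; the function name is mine):

    import numpy as np

    def road_edge_points(p, tangent, normal, half_width):
        # p: centerline point; tangent: unit polyline direction at p;
        # normal: unit surface normal at p (all 3-component np.array)
        binormal = np.cross(tangent, normal)
        binormal /= np.linalg.norm(binormal)   # re-normalize for safety
        return p + half_width * binormal, p - half_width * binormal

    # flat ground (normal +z), road heading along +x, road 2 units wide
    p0, p1 = road_edge_points(np.array([0.0, 0.0, 0.0]),
                              np.array([1.0, 0.0, 0.0]),
                              np.array([0.0, 0.0, 1.0]), 1.0)
    print(p0, p1)   # [ 0. -1.  0.] [0. 1. 0.]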
create the mesh by shifting the polyline to each side by half the road width
This produces an accurate road fit to the mesh, but due to your constraints the shape of the road can be badly distorted in some cases unless the mesh is re-triangulated. I see it like this:
for each road segment, cast 2 lines shifted by half the road width (green, brown)
find their intersections (aqua dots) with the mesh edge shared with the current road control point (red dot)
take the average point (magenta dot) of the intersections and use that as the road mesh vertex. If one of the intersections lies outside the shared mesh edge, ignore it. If both intersections lie outside the shared edge, find the closest intersection with a different edge.
As you can see, this can lead to serious road-thickness distortions in some cases (big differences between the intersection points, or one of the intersection points lying outside the surface mesh edge).
If you need accurate road thickness then use the intersection of the cast lines as the road control point instead. To make this possible, either use blending or disable depth testing while rendering, or add the point to the surface mesh by re-triangulating it. Of course such an action will also affect the road mesh, so you need to iterate a few times ...
Another way is to use a blended texture for the road (like sprites) and compute the texture coordinates for the control points. If the road is too thick then thin it by shifting the texture coordinates ... To make this work you need to select the farthest intersection point instead of the average ... Compute the real half-width of the road and from that compute the texture coordinate.
If you drop the constraint that road mesh vertices must lie on surface mesh edges or vertices, then you can simply use the intersection of the shifted lines alone. That gets rid of the thickness artifacts and simplifies things a lot.
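For those shifted-line intersections a standard 2D line-line intersection does the job; a sketch (helper names are mine, working in the ground plane):

    import numpy as np

    def shift_segment(a, b, half_width):
        # offset 2D segment (a, b) sideways by half_width
        d = b - a
        n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # left-hand normal
        return a + half_width * n, b + half_width * n

    def line_intersection(p1, p2, p3, p4):
        # intersection of infinite lines (p1,p2) and (p3,p4); None if parallel
        d1, d2 = p2 - p1, p4 - p3
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-12:
            return None
        t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / denom
        return p1 + t * d1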
I have a polygon where each vertex has texture coordinates. How can I calculate the texture coordinates of any point lying inside the polygon? (I know the point's coordinates.)
Thanks
p.s. sorry for my english...
If you don't care about perspective then it's simple enough: simply find the barycentric coordinates of the point and do a linear interpolation of the vertices' texture coordinates. Here's a nice tutorial about this.
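A minimal sketch of that (2D NumPy; for a polygon with more than 3 vertices you'd triangulate first and pick the triangle containing the point):

    import numpy as np

    def barycentric(p, a, b, c):
        # barycentric coordinates (u, v, w) of 2D point p in triangle (a, b, c)
        v0, v1, v2 = b - a, c - a, p - a
        d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
        d20, d21 = v2 @ v0, v2 @ v1
        denom = d00 * d11 - d01 * d01
        v = (d11 * d20 - d01 * d21) / denom
        w = (d00 * d21 - d01 * d20) / denom
        return 1.0 - v - w, v, w

    def interp_uv(p, tri, uvs):
        # linearly interpolate per-vertex texture coordinates at point p
        u, v, w = barycentric(p, *tri)
        return u * uvs[0] + v * uvs[1] + w * uvs[2]

    tri = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    uvs = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    print(interp_uv(np.array([0.25, 0.25]), tri, uvs))  # [0.25 0.25]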
If, however, you're trying to do this in a scene with a perspective projection, you'll probably want to worry about perspective correction as well. Here's some reading about this issue.
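A sketch of the perspective-correct variant, reusing barycentric() from above (ws are the clip-space w values of the three vertices, which I'm assuming you still have):

    def interp_uv_persp(p, tri, uvs, ws):
        # interpolate uv/w and 1/w with screen-space barycentrics, then divide
        u, v, w = barycentric(p, *tri)
        inv_w = u / ws[0] + v / ws[1] + w / ws[2]
        uv = u * uvs[0] / ws[0] + v * uvs[1] / ws[1] + w * uvs[2] / ws[2]
        return uv / inv_w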