So I am in an intro to Computer Graphics class and was struggling to convert object coordinates to world coordinates and so on. Would it just be a transform matrix that converts points from object space to world space? If anyone has any resources on this, that would be extremely helpful.
The object-to-world transformation is fairly straightforward. When you open a modeler (say Blender), you create an object, say a dog.
Let's assume the dog is posed such that its face points in the -z direction, its paws are at height 0, and its torso is centered along the xz plane (i.e. half its body is on one side of the plane and the other half on the other).
That's model space: your dog is centered. However, you probably don't want your dog there; your dog is somewhere different in world space, and he's likely in a different orientation as well.
Say we want our dog to be at location (36, 43, 0) with his nose pointing in the direction (1, 0, -1). This means the dog needs to be moved to (36, 43, 0), which just means adding that vector to the model's position, and we want to rotate him by 45 degrees so that he looks in the desired direction.
The matrix transformation that encodes that rotation and translation is the model-to-world transformation.
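For concreteness, here is a minimal sketch of that matrix in C++ using GLM (my choice of library, not something from the post above). The rotation axis (+Y) and the -45 degree angle are my assumptions, chosen so the model's -z nose ends up pointing toward (1, 0, -1):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Model-to-world matrix for the dog example: rotate first, then translate.
glm::mat4 modelToWorld =
    glm::translate(glm::mat4(1.0f), glm::vec3(36.0f, 43.0f, 0.0f)) *
    glm::rotate(glm::mat4(1.0f), glm::radians(-45.0f), glm::vec3(0.0f, 1.0f, 0.0f));

// A model-space point goes to world space as a homogeneous point with w = 1:
glm::vec4 worldNose = modelToWorld * glm::vec4(0.0f, 0.5f, -1.0f, 1.0f);

Because the matrices multiply on the left of the point, the rightmost factor (the rotation) is applied first, which matches "rotate him, then move him into place".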
Related
So a polygon mesh is defined as the following:
class Triangle {
    int vertices[3];  // vertex indices
    float nx, ny, nz; // face-plane normal
};
1. Is this a convenient way to represent a mesh used with flat shading? Explain.
2. Suggest an object for which this is a good mesh format when used with Gouraud shading. Explain.
3. Suggest an object for which this is a bad mesh format when used with Gouraud shading. Explain.
So for 1, I said yes, because the face-plane normal can easily be converted to a point in the middle of the face. But I read somewhere that normals don't have positions?
For 2 I said a ball, because the angles between its faces are more gentle.
And for 3 a box, because its angles are steeper.
I don't know; I don't think I really understand what the normal vector is.
#1: mostly yes.
From a geometry-computation point of view this is OK; from a rendering point of view, however, having the triangles in index-only form can sometimes be problematic (it depends on the rendering engine, HW, etc.). It is usually faster to have the triangle points directly in vector form instead of just indices; sometimes a triangle contains both, but that wastes space.
#2: depends on how you classify what is OK and what is not.
Smooth objects like a sphere will look visibly faceted, while flat-sided meshes like a cube will be rendered without visible distortion in shape (but with flat-shaded colors only, so the lighting will still be wrong).
So the answer depends on what you want to achieve: less lighting error, better shape recognition, or something else. Basically, using one normal per face turns Gouraud shading into flat shading.
Lighting can be improved by dividing big flat surfaces into more triangles.
#3: unanswerable, for exactly the same reasons as #2.
So if you want to answer #2 and #3, you need to clarify what 'good' and 'bad' mean...
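To make the "one normal per face" idea concrete, here is a minimal sketch (my own, not from the answer above) of how that face-plane normal is typically computed from the three vertex positions. For Gouraud shading you would instead average the face normals of all triangles sharing a vertex and store one normal per vertex; note that a normal has a direction but no position of its own, it is simply attached to a face or a vertex.

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3 &a, const Vec3 &b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static Vec3 normalize(const Vec3 &v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Face normal of triangle (p0, p1, p2); counter-clockwise winding assumed,
// so the normal points out of the front side of the face.
Vec3 faceNormal(const Vec3 &p0, const Vec3 &p1, const Vec3 &p2) {
    return normalize(cross(sub(p1, p0), sub(p2, p0)));
}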
I made an object tracker that calculates the position of an object recorded in a live camera feed using stereoscopic cameras. The math was simple once you know the camera distance and orientation. However, I now thought it would be nice to be able to quickly extract all these parameters, so that when I change my setup or cameras I can quickly calibrate the tracker again.
To calculate the object position I made some simplifications/assumptions which made the math easier: the cameras are in the same YZ plane, so there is only a distance in x between them, and their tilt is also just in the XY plane.
To reverse the triangulation, I thought a test pattern (a square) of 4 points whose distances to each other I know would suffice. Ideally I would like to get the cameras' positions (their distances to the test pattern and to each other), their rotation in X (and maybe Y and Z if applicable/possible), as well as their view angle (to translate pixel positions to real-world distances; that should be a camera constant, but in case I change cameras it is quite a bit of work to determine accurately).
I started with the same trigonometric calculations, but I always end up missing parameters. I am wondering if there is an existing solution or a solid approach. If I need to add parameters (like distances; they are easy enough to measure), that's no problem (my calculations didn't give me any simple equations with that possibility, though).
I also read about homography in OpenCV, but it seems it applies to 2D space only, or not?
Any help is appreciated!
A bit of background
I am writing a simple ray tracer in C++. I have most of the core complete, but I don't understand how to retrieve the world coordinate of a pixel on the image plane. I need this location so that I can cast the ray into the world.
Currently I have a camera with a position (aka my perspective reference point) and a direction vector which is not normalized. The direction's length signifies where the center of the image plane is and which way the camera is facing.
There are other values associated with the camera but they should not be relevant.
My image coordinates will range from -1 to 1, and the perspective (focal length) will change based on the length of the direction vector associated with the camera.
What I need help with
I need to go from pixel coordinates (say 0 to 255 in an image 256 pixels on each side) to my world coordinates.
I also want to write this so that no matter where the camera is placed and which way it is directed, I can find the pixel's location in world coordinates. (Currently the camera will almost always be centered at the origin, looking down the negative z axis, but I would like to program this with future changes in mind.) It is also important to know whether this code should be pushed down into my threaded code as well, or whether it should be calculated by the main thread and the resulting ray then used in the threaded code.
(Image omitted: a diagram of a camera and its image plane, source: in.tum.de. I did not make this image; it is only there to give an idea of what I need.)
Please leave comments if you need any additional info. Otherwise I would like a simple theory/code example of what to do.
Basically you have to do the inverse of the V * MVP process, i.e. invert the transformation that maps a point into unit-cube (normalized device) coordinates. Look at the following URLs for programming help:
http://nehe.gamedev.net/article/using_gluunproject/16013/
https://sites.google.com/site/vamsikrishnav/gluunproject
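If you prefer to build the ray directly from the camera description in the question instead of inverting the matrices (the gluUnProject route above), a minimal sketch could look like this. It assumes GLM, a square image plane spanning [-1, 1] on both axes, world up = +Y, and that the un-normalized camera direction reaches the center of the image plane, as described in the question; none of this code comes from the original posts.

#include <glm/glm.hpp>

// World-space point on the image plane for pixel (px, py).
glm::vec3 pixelToWorld(const glm::vec3 &camPos, const glm::vec3 &camDir,
                       int px, int py, int width, int height)
{
    glm::vec3 forward = glm::normalize(camDir);
    glm::vec3 right   = glm::normalize(glm::cross(forward, glm::vec3(0.0f, 1.0f, 0.0f)));
    glm::vec3 up      = glm::cross(right, forward);

    // Map the pixel center into [-1, 1]; flip y so row 0 is the top of the image.
    float x =   2.0f * (px + 0.5f) / width  - 1.0f;
    float y = -(2.0f * (py + 0.5f) / height - 1.0f);

    // Walk from the eye to the plane center (camDir), then across the plane.
    return camPos + camDir + x * right + y * up;
}

The primary ray for that pixel is then origin = camPos and direction = normalize(pixelToWorld(...) - camPos). Since the camera basis depends only on the camera, it can be computed once by the main thread, with only the per-pixel part done inside the threaded code.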
The issue we are trying to solve is locating a point in two different representations of a plane. The first plane is rotated to create perspective; the second is a 2D view of that same plane. We have 4 points on each of the planes that we know to be equivalent. The question is: if we have an arbitrary point in plane 1, how do we find the corresponding point in plane 2?
It is probably best to illustrate the use case in order to clarify the question. We have two images:
(Image 1: the projective plane)
(Image 2: a 2D layout diagram of the space)
So the givens that we have are the red squares from both pictures. Note that, if possible, I'd like the 2D space to not necessarily be a square. These are available to us ahead of time and known. I also have green dots laid out on the plane in the first image. I'd like to be able to project a dot from image 1 onto the space in image 2.
Note also that for image 1 I do not have a defined window or eye position. I just know that the red square in image 1 is a transform of the red square from image 2, and that image 2 is in 2D space.
This is a special case of finding mappings between quadrilaterals that preserve straight lines. These are generally called homographic or projective transforms. Here, one of the quads is a square, so this is a popular special case. You can google these terms ("quad to quad", etc) to find explanations and code, but here are some for you.
Perspective Transform Estimation
a gaming forum discussion
extracting a quadrilateral image to a rectangle
Projective Mappings for Image Warping by Paul Heckbert.
The math isn't particularly pleasant, but it isn't that hard either. You can also find some code from one of the above links.
Update
And this is one of my favorites: Computing a projective transformation
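As a concrete sketch of the quad-to-quad mapping (using OpenCV, which is my choice here rather than something the answer prescribes; the corner coordinates are made-up placeholders):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main()
{
    // The four red-square corners in the perspective image (plane 1) and the same
    // four corners in the flat layout (plane 2), listed in corresponding order.
    std::vector<cv::Point2f> srcCorners = { {102, 55}, {410, 70}, {480, 320}, {60, 300} };
    std::vector<cv::Point2f> dstCorners = { {0, 0}, {100, 0}, {100, 100}, {0, 100} };

    // 3x3 projective (homography) matrix that maps plane 1 onto plane 2.
    cv::Mat H = cv::getPerspectiveTransform(srcCorners, dstCorners);

    // Map an arbitrary green dot from plane 1 into plane 2.
    std::vector<cv::Point2f> dots = { {250, 180} }, mapped;
    cv::perspectiveTransform(dots, mapped, H);

    // mapped[0] now holds the dot's position in the 2D layout.
    return 0;
}

Nothing forces the destination quad to be a square; any four corresponding corners define the mapping, which matches the "not necessarily a square" requirement in the question.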
In working with textures, does "UVW mapping" mean the same thing as "UV mapping"?
If so why are there two terms, and what is the "W"?
If not, what's the difference between them?
[Wikipedia currently isn't illuminating on this question: http://en.wikipedia.org/wiki/Talk:UVW_mapping]
U and V are the coordinates for a 2D map. Adding the W component adds a third dimension.
It's tedious, to say the least, to actually hand-generate a 3D texture map, but such maps are useful if you have a procedural way to generate texture data. E.g. if you wanted your object to look like a solid chunk of marble, it may be easiest to "model" the marble "texture" as a 3D procedural texture and then use 3D coordinates to draw data out of that procedural texture.
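A minimal sketch of that idea (mine, not the answerer's): a toy "marble" function evaluated directly at 3D texture coordinates, so no unwrapped 2D image is needed. Real marble shaders perturb the bands with 3D noise (e.g. Perlin); a plain sine keeps the example self-contained.

#include <cmath>

struct Color { float r, g, b; };

// Toy procedural "marble": grey-blue bands defined everywhere in (u, v, w) space.
Color marble(float u, float v, float w)
{
    float bands = 0.5f + 0.5f * std::sin(10.0f * (u + 0.3f * v + 0.2f * w));
    return { 0.2f + 0.8f * bands, 0.2f + 0.8f * bands, 0.3f + 0.7f * bands };
}

// At shading time, a surface point's 3D texture coordinate (often just its
// object-space position) is fed straight into the function:
//     Color c = marble(p.x, p.y, p.z);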
UVW is to XYZ as XYZ is to world coordinates. Since XYZ was already being used to refer to world coordinates, UV is used to refer to the X and Y (2D) coordinates of a flat map. By extrapolation, the W is the Z in XYZ.
UVW implies a more complex 2D representation which is, in effect, the skin of the object that has been 'unwrapped' from the 3D object. Imagine a box 'unwrapped'. You now have a flat UVW map that you can paint on to your heart's content and then wrap back onto the six-sided box with no distortion. In short, the UVW map knows where to rewrap the x, y and z points to re-form the box.
Now imagine a sphere 'unwrapped'. You might end up with something like a Mercator projection. The hitch is that when you wrap this 2D representation back onto the sphere, you will get some distortion.
The term UV mapping is very commonly used. I don't hear the term UVW as often except as described above.
The term procedural mapping can be misleading. Simply put, it means the computer is following some algorithm to paint a realistic representation of a material, like wood, onto the object, giving you the impression that the grain travels completely through the wood so it can be seen properly on both sides of the object. Procedural mapping can use images or not, or a combination of approaches; it all depends on the 'procedure'.
Lastly, there is no requirement to transform a '3D procedural texture' to 'UVW' first, since UVW and XYZ mean effectively the same thing: they are either referring to the world, or to an unwrapped image of an object in the world, or for that matter to a 'chunk' of the world, as in the sky. The point is that UV or UVW refers to image/texture mapping.