I am trying to write a method that calculates the view-frustum corners for a camera class I am working on. I know the MVP matrix is correct: it is pushed into my vertex shader and everything renders correctly on screen, and I also calculate the world-space frustum planes from this matrix and they are correct. Those methods are unit tested in a few different camera configurations and give the expected results.
I am attempting to calculate the frustum corners by transforming the NDC corners by the inverse of the MVP matrix.
// Fill with the corners in NDC.
QVector< Vector3f > corners( 8 );
corners[0] <<  1.0f,  1.0f,  1.0f;
corners[1] << -1.0f,  1.0f,  1.0f;
corners[2] <<  1.0f, -1.0f,  1.0f;
corners[3] << -1.0f, -1.0f,  1.0f;
corners[4] <<  1.0f,  1.0f, -1.0f;
corners[5] << -1.0f,  1.0f, -1.0f;
corners[6] <<  1.0f, -1.0f, -1.0f;
corners[7] << -1.0f, -1.0f, -1.0f;

// Then transform back into world space.
for ( Vector3f& c : corners ) {
    c = mult( mvp.inverse(), c );
}
Looks respectable enough, but if I put in my MVP (column-major):
-1.68199 0.0 1.68199 0.0
-0.720796 3.00332 -0.720796 0.0
-0.671735 -0.322433 -0.671735 35.8534
-0.669589 -0.321403 -0.669589 37.3363
This is a camera with near clip at 0.8, far at 500.0, FOV at 35.0°, viewport size of 800x600, positioned at [25.0, 12.0, 25.0] looking at the origin (y is the up axis). I get these corners:
-0.988515, 0.00116634, -0.393982
-0.393982, 0.00116634, -0.988515
-0.845201, -0.595974, -0.250669
-0.250669, -0.595974, -0.845201
30.2115, 14.9772, 30.8061
30.8061, 14.9772, 30.2115
30.3548, 14.38, 30.9494
30.9494, 14.38, 30.3548
Don't stare at the numbers too long (you'll go mad); just note that no corners are hundreds of units from each other, which is what you would expect with a far plane of 500.0. What am I not understanding about this procedure?
Without looking at your numbers too closely, I'll just make a wild guess:
I hope you didn't forget to divide your transformed corners by their 4th component after extending them to 4D homogeneous coordinates (by adding a 1) and multiplying them by the inverse MVP.
This is necessary because the MVP matrix (specifically its projection part) and its inverse are not affine transformations, and so they produce a w-value other than 1 when applied to a homogeneous vector. The same happens when you (or rather your shader) apply the usual forward MVP multiplication: the fixed-function graphics hardware just does the division by w for you (the so-called perspective division, which is what actually makes farther-away objects appear smaller on screen).
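Here is a minimal sketch of the corrected loop, assuming an Eigen-style Matrix4f/Vector4f API next to your Vector3f (your mult() helper is replaced by plain matrix-vector multiplication):

for ( Vector3f& c : corners ) {
    Vector4f h( c.x(), c.y(), c.z(), 1.0f ); // extend to homogeneous coordinates
    h = mvp.inverse() * h;                   // transform by the inverse MVP; w is no longer 1
    c = h.head<3>() / h.w();                 // the missing step: divide by w
}

With this divide in place, the far-plane corners should land hundreds of units apart, as you expected.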
Can you please post code for drawing a basic wireframe sphere without texturing it? I found plenty of examples, but they use three kinds of buffers: normal, texture and vertex. Is there a simple, comprehensive way to draw a sphere using GL_TRIANGLE_FAN or GL_TRIANGLE_STRIP, using only a vertex and fragment shader?
Thank you!
void DrawSphere(GLdouble radius, int longitudeSubdiv, int latitudeSubdiv)
{
    // issue corresponding GL command
    //glPolygonMode(GL_BACK, GL_FILL);
    //gluSphere(m_quadrObj, radius, longitudeSubdiv, latitudeSubdiv);
    float color1[3] = {1.0f, 0.0f, 0.0f};
    float shininess = 64.0f;
    float specularColor[] = {1.0f, 1.0f, 1.0f, 1.0f};
    glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, shininess); // range 0 ~ 128
    glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, specularColor);

    glPushMatrix();
    glTranslatef(1, 1, 1); // *
    glColor3fv(color1);
    gluSphere(m_quadrObj, radius, longitudeSubdiv, latitudeSubdiv);
    glPopMatrix();
    //glColor3fv(color2);
}
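Not a full answer, but here is a minimal sketch (function name and layout are my own, untested assumptions) of building a latitude/longitude sphere as bare xyz positions, one GL_TRIANGLE_STRIP per stack, so only a vertex buffer plus your vertex/fragment shaders are needed:

#include <cmath>
#include <vector>

// Each stack is one GL_TRIANGLE_STRIP of 2*(slices+1) vertices,
// so stack k starts at vertex k * 2 * (slices + 1).
std::vector<float> BuildSphereStrips(float radius, int slices, int stacks)
{
    const float kPi = 3.14159265f;
    std::vector<float> verts;
    for (int i = 0; i < stacks; ++i) {
        float phi0 = kPi * i / stacks;       // latitude of this ring
        float phi1 = kPi * (i + 1) / stacks; // latitude of the next ring
        for (int j = 0; j <= slices; ++j) {
            float theta = 2.0f * kPi * j / slices; // longitude
            for (float phi : { phi0, phi1 }) {     // zig-zag between the two rings
                verts.push_back(radius * std::sin(phi) * std::cos(theta));
                verts.push_back(radius * std::cos(phi));
                verts.push_back(radius * std::sin(phi) * std::sin(theta));
            }
        }
    }
    return verts;
}

Upload the result once with glBufferData, then draw each stack k with glDrawArrays(GL_TRIANGLE_STRIP, k * 2 * (slices + 1), 2 * (slices + 1)). With glPolygonMode(GL_FRONT_AND_BACK, GL_LINE) you get the wireframe look without any normal or texture buffers.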
I am trying to implement box select in a 3D world. Basically: click, hold the mouse, then release it, get a box, and then box-select. To start, I'm trying to figure out how to get the coordinates of the clicks in 3D.
I have ray picking, and it is not getting the right coordinate (it returns an origin and a direction). It keeps returning the same origin no matter what the screen X/Y is (although the direction is different).
I've also tried:
D3DXVECTOR3 ori = D3DXVECTOR3(sx, sy, 0.0f);
D3DXVECTOR3 out;
D3DXVec3Unproject(&out, &ori, &viewPort, &projectionMat, &viewMat, &worldMat);
And it gets the same thing: the coordinates are very close to each other no matter what the input coordinates are (and they are wrong). It's almost like it's returning the eye position instead of the actual world coordinate.
How do I turn 2D screen coordinates into 3D using DirectX 9.0c?
This is called picking in Direct3D. To select a model in 3D space, you mainly need three steps:
Generate the picking ray
Transform the picking ray and the model you want to pick into the same space
Do an intersection test of the picking ray and the model
Generate the picking ray
When we click the mouse on the screen (say at the point s), a model is selected when its projection onto the projection window surrounds the point s.
So, in order to generate the picking ray for given screen coordinates (x, y), we first need to transform (x, y) onto the projection window; this can be done by inverting the viewport transformation. Another thing: the point on the projection window was scaled by the projection matrix, so we should divide it by the scale factors, proj(0, 0) and proj(1, 1).
In DirectX the camera in view space is always placed at the origin, so the picking ray starts from the origin, and the projection window is the plane z = 1. This is what the code below does.
Ray CalcPickingRay(LPDIRECT3DDEVICE9 Device, int screen_x, int screen_y)
{
    float px = 0.0f;
    float py = 0.0f;

    // Get viewport
    D3DVIEWPORT9 vp;
    Device->GetViewport(&vp);

    // Get projection matrix
    D3DXMATRIX proj;
    Device->GetTransform(D3DTS_PROJECTION, &proj);

    // Invert the viewport transform, then undo the projection scaling.
    px = ((( 2.0f * screen_x) / vp.Width)  - 1.0f) / proj(0, 0);
    py = (((-2.0f * screen_y) / vp.Height) + 1.0f) / proj(1, 1);

    Ray ray;
    ray._origin    = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
    ray._direction = D3DXVECTOR3(px, py, 1.0f);
    return ray;
}
Transform the picking ray and the model into the same space
We usually do this by transforming the picking ray into world space: simply take the inverse of your view matrix, then apply that inverse matrix to your picking ray.
// transform the ray from view space to world space
void TransformRay(Ray* ray, D3DXMATRIX* invertViewMatrix)
{
    // transform the ray's origin, w = 1.
    D3DXVec3TransformCoord(&ray->_origin, &ray->_origin, invertViewMatrix);

    // transform the ray's direction, w = 0.
    D3DXVec3TransformNormal(&ray->_direction, &ray->_direction, invertViewMatrix);

    // normalize the direction
    D3DXVec3Normalize(&ray->_direction, &ray->_direction);
}
Do the intersection test
If everything above went well, you can now do the intersection test. This is a ray-box intersection, so you can use the function D3DXBoxBoundProbe. You can change the visual mode of your box to verify that the picking really works, for example by setting the fill mode to solid or wire-frame when D3DXBoxBoundProbe returns TRUE.
You can perform the picking in response to WM_LBUTTONDOWN.
case WM_LBUTTONDOWN:
{
    // Get the screen point
    int iMouseX = (short)LOWORD(lParam);
    int iMouseY = (short)HIWORD(lParam);

    // Calculate the picking ray
    Ray ray = CalcPickingRay(g_pd3dDevice, iMouseX, iMouseY);

    // Transform the ray from view space to world space:
    // get the view matrix
    D3DXMATRIX view;
    g_pd3dDevice->GetTransform(D3DTS_VIEW, &view);

    // invert it
    D3DXMATRIX viewInverse;
    D3DXMatrixInverse(&viewInverse, 0, &view);

    // apply it to the ray
    TransformRay(&ray, &viewInverse);

    // collision detection: ray vs. box
    if (D3DXBoxBoundProbe(&box.minPoint, &box.maxPoint, &ray._origin, &ray._direction))
    {
        g_pd3dDevice->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID);
    }
    break;
}
It turns out I was handling the problem the wrong (opposite) way. Turning 2D into 3D didn't make sense in the end. As it turns out, converting the vertices from 3D to 2D and then checking whether they are inside the 2D box was the right answer!
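For completeness, here is a minimal sketch of that 3D-to-2D approach (dragRect and the parameter names are illustrative, not from my code): project each vertex with D3DXVec3Project and test the result against the drag rectangle.

bool InsideBoxSelect(const D3DXVECTOR3& worldPos, const RECT& dragRect,
                     const D3DVIEWPORT9& vp, const D3DXMATRIX& proj,
                     const D3DXMATRIX& view, const D3DXMATRIX& world)
{
    // project the world-space vertex to screen space
    D3DXVECTOR3 screenPos;
    D3DXVec3Project(&screenPos, &worldPos, &vp, &proj, &view, &world);

    // ignore points behind the camera / outside the depth range
    if (screenPos.z < 0.0f || screenPos.z > 1.0f)
        return false;

    // plain 2D point-in-rectangle test against the drag box
    return screenPos.x >= dragRect.left && screenPos.x <= dragRect.right &&
           screenPos.y >= dragRect.top  && screenPos.y <= dragRect.bottom;
}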
Hello, I'm working on a project and I have a problem: the screen/ground shakes when I move the main object.
Normally, when I move with the "w" key, I don't get the problem, but if I move while rotating the camera, I do.
To see it: rotate by some angle, like 30°, with the right mouse button (do not release the button) and keep the w key pressed while rotating the object. You will see the ground shake.
I think the problem is in my lookAt calculation:
gluLookAt(sin(rot*PI/180)*(10-fabs(roty)/4) + movex,
          3 - (roty/2),
          cos(rot*PI/180)*(10-(fabs(roty)/4)) + movez,
          -sin(rot*PI/180)*6 + movex,
          roty,
          -cos(rot*PI/180)*6 + movez,
          0, 1, 0);
Here is my rotation function. I draw everything after this function:
void System::rotater() {
    if (mouseStates[2][0]==1 && mx!=savex && mx!=mouseStates[2][1]) {
        rot += (mx - mouseStates[2][1]) * 90 / glutGet(GLUT_WINDOW_WIDTH) / 2;
        if (rot > 360) rot -= 360;
        if (rot < 0)   rot = 360 + rot;
    }
    glRotatef(rot, 0, 1, 0);
}
And lastly, here is my move handling:
if (a==87 || a==119) { // 'W' or 'w'
    movex -= sin(rot*PI/180)/3;
    movez -= cos(rot*PI/180)/3;
}
I've found that screen-shaking issues commonly come down to the order in which you process events and update objects and the camera before drawing the scene (or the physics engine is causing it :P). With a high frame rate you might not notice small imperfections; you might want to add a Sleep/usleep to help expose these issues. For example, if your camera position is based on the camera rotation, but you update the position before processing the mouse events, the camera position/rotation can be a frame out of sync.
This is a good reason to process input, movement and rendering in clearly separate stages, as in the sketch below.
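A minimal sketch of that loop ordering (the function names are illustrative):

while (running) {
    ProcessInput();   // poll mouse/keyboard and update rotation state first
    UpdateCamera();   // derive the camera position from this frame's rotation
    UpdateObjects();  // move the robot/world using the same frame's input
    RenderScene();    // only now draw, so nothing is a frame out of sync
}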
Your lookAt call does look rather complex. lookAt is very useful when you want a camera that looks at something, but in this case you're after a camera that looks from a camera position. This is easier to implement by manually constructing the inverse transformations to position the camera before drawing the scene. For example, if you want to move the camera left, translate everything else right (hence the inverse)...
glLoadIdentity();
glRotatef(camera.rotX, 1.0f, 0.0f, 0.0f);
glRotatef(camera.rotY, 0.0f, 1.0f, 0.0f);
glRotatef(camera.rotZ, 0.0f, 0.0f, 1.0f);
glTranslatef(-camera.posX, -camera.posY, -camera.posZ);
I'm learning OpenGL 3.3, using some tutorials (http://opengl-tutorial.org). In the tutorial I'm using, there is a vertex shader which does the following:
Tutorial Shader source
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
void main(){
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
}
Yet, when I try to emulate the same behavior in my application, I get the following:
error: implicit cast from "vec4" to "vec3".
After seeing this, I wasn't sure if it was because I was using version 4.2 shaders as opposed to 3.3, so I changed everything to match what the author had been using, but still received the same error afterward.
So, I changed my shader to do this:
My (latest) Source
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
uniform mat4 MVP;
void main()
{
vec4 a = vec4(vertexPosition_modelspace, 1);
gl_Position.xyz = MVP * a;
}
Which, of course, still produces the same error.
Does anyone know why this is the case, as well as what a solution might be to this? I'm not sure if it could be my calling code (which I've posted, just in case).
Calling Code
static const GLfloat T_VERTEX_BUF_DATA[] =
{
// x, y, z
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f
};
static const GLushort T_ELEMENT_BUF_DATA[] =
{ 0, 1, 2 };
void TriangleDemo::Run(void)
{
glClear(GL_COLOR_BUFFER_BIT);
GLuint matrixID = glGetUniformLocation(mProgramID, "MVP");
glUseProgram(mProgramID);
glUniformMatrix4fv(matrixID, 1, GL_FALSE, &mMVP[0][0]); // This sends our transformation to the MVP uniform matrix, in the currently bound vertex shader
const GLuint vertexShaderID = 0;
glEnableVertexAttribArray(vertexShaderID);
glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
glVertexAttribPointer(
vertexShaderID, // Index of the vertex attribute to configure (matches layout(location = 0) in the vertex shader)
3, // Specify the number of indices per vertex in the vertex buffer
GL_FLOAT, // Type of value the vertex buffer is holding as data
GL_FALSE, // Normalized?
0, // Amount of stride
(void*)0 ); // Offset within the array buffer
glDrawArrays(GL_TRIANGLES, 0, 3); //0 => start index of the buffer, 3 => number of vertices
glDisableVertexAttribArray(vertexShaderID);
}
void TriangleDemo::Initialize(void)
{
glGenVertexArrays(1, &mVertexArrayID);
glBindVertexArray(mVertexArrayID);
glGenBuffers(1, &mVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, mVertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(T_VERTEX_BUF_DATA), T_VERTEX_BUF_DATA, GL_STATIC_DRAW );
mProgramID = LoadShaders("v_Triangle", "f_Triangle");
glm::mat4 projection = glm::perspective(45.0f, 4.0f / 3.0f, 0.1f, 100.0f); // field of view, aspect ratio (4:3), 0.1 units near, to 100 units far
glm::mat4 view = glm::lookAt(
glm::vec3(4, 3, 3), // Camera is at (4, 3, 3) in world space
glm::vec3(0, 0, 0), // and looks at the origin
glm::vec3(0, 1, 0) // this is the up vector - the head of the camera is facing upwards. We'd use (0, -1, 0) to look upside down
);
glm::mat4 model = glm::mat4(1.0f); // set model matrix to identity matrix, meaning the model will be at the origin
mMVP = projection * view * model;
}
Notes
I'm in Visual Studio 2012
I'm using Shader Maker for the GLSL editing
I can't say what's wrong with the tutorial code.
In "My latest source" though, there's
gl_Position.xyz = MVP * a;
which looks weird because you're assigning a vec4 to a vec3.
EDIT
I can't reproduce your problem.
I have used a trivial fragment shader for testing...
#version 330 core
void main()
{
}
Testing "Tutorial Shader source":
3.3.11762 Core Profile Context
Log: Vertex shader was successfully compiled to run on hardware.
Log: Fragment shader was successfully compiled to run on hardware.
Log: Vertex shader(s) linked, fragment shader(s) linked.
Testing "My latest source":
3.3.11762 Core Profile Context
Log: Vertex shader was successfully compiled to run on hardware.
WARNING: 0:11: warning(#402) Implicit truncation of vector from size 4 to size 3.
Log: Fragment shader was successfully compiled to run on hardware.
Log: Vertex shader(s) linked, fragment shader(s) linked.
And the warning goes away after replacing gl_Position.xyz with gl_Position.
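For reference, this is the asker's latest shader with that one-line fix applied:

#version 330 core

layout(location = 0) in vec3 vertexPosition_modelspace;

uniform mat4 MVP;

void main()
{
    vec4 a = vec4(vertexPosition_modelspace, 1.0);
    gl_Position = MVP * a; // vec4 = mat4 * vec4, no truncation
}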
What's your setup? Do you have a correct version of OpenGL context? Is glGetError() silent?
Finally, are your GPU drivers up-to-date?
I've had problems with some GPUs (ATi ones, I believe) not liking integer literals where a float is expected. Try changing
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
To
gl_Position = MVP * vec4(vertexPosition_modelspace, 1.0);
I just came across this error message on an ATI Radeon HD 7900 with latest drivers installed while compiling some sample code associated with the book "3D Engine Design for Virtual Globes" (http://www.virtualglobebook.com).
Here is the original fragment shader line:
fragmentColor = mix(vec3(0.0, intensity, 0.0), vec3(intensity, 0.0, 0.0), (distanceToContour < dF));
The solution is to cast the offending Boolean expression into float, as in:
fragmentColor = mix(vec3(0.0, intensity, 0.0), vec3(intensity, 0.0, 0.0), float(distanceToContour < dF));
The manual for mix (http://www.opengl.org/sdk/docs/manglsl) states:
For the variants of mix where a is genBType, elements for which a[i] is false, the result for that element is taken from x, and where a[i] is true, it will be taken from y.
So, since a Boolean blend value should be accepted by the compiler without comment, I think this should go down as an AMD/ATI driver issue.
I am building a robot in OpenGL and it should move and rotate. When I press f the robot should move forward, if I press t it should rotate 15° about its own local axis, and then if I press f again it should walk on. So far the robot walks and rotates, but the problem is that it is not rotating about its local axis; it is orbiting around (0,0,0). I think I don't understand how the composition of translation and rotation has to be set up to get the desired effect.
For now I am trying with just a scaled sphere. I am adding the display function here so that it is clearer:
void display()
{
    glEnable(GL_DEPTH_TEST); // need depth test to correctly draw 3D objects
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glShadeModel(GL_SMOOTH);

    // All color and material stuff goes here
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_NORMALIZE); // normalize normals
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);

    // set up the parameters for lighting
    GLfloat light_ambient[]  = {0, 0, 0, 1};
    GLfloat light_diffuse[]  = {.6, .6, .6, 1};
    GLfloat light_specular[] = {1, 1, 1, 1};
    GLfloat light_pos[]      = {10, 10, 10, 1};
    glLightfv(GL_LIGHT0, GL_AMBIENT,  light_ambient);
    glLightfv(GL_LIGHT0, GL_DIFFUSE,  light_diffuse);
    glLightfv(GL_LIGHT0, GL_SPECULAR, light_specular);

    GLfloat mat_specular[] = {.9, .9, .9, 1};
    GLfloat mat_shine[]    = {10};
    glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR,  mat_specular);
    glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, mat_shine);
    // color specs end

    //glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); // comment this line to enable polygon shades

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90, 1, 1, 100);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
    gluLookAt(0, 0, 30, 0, 0, 0, 0, 1, 0);
    glRotatef(x_angle, 0, 1, 0); // this is just for mouse handling
    glRotatef(y_angle, 1, 0, 0); // this is just for mouse handling
    glScalef(scale_size, scale_size, scale_size); // for zooming effect

    draw_coordinate();

    // Drawing using VBO starts here
    glTranslatef(walk*sin(M_PI*turn/180), 0, walk*cos(M_PI*turn/180));
    glRotatef(turn, 0, 1, 0);
    draw_sphere(3, 1, 1);

    glDisableClientState(GL_VERTEX_ARRAY); // disable the vertex array on the client side
    glDisableClientState(GL_NORMAL_ARRAY); // disable the normal array on the client side

    glutSwapBuffers();
}
OpenGL's glRotatef always rotates around (0,0,0). You have to translate the rotation point to the origin, do the rotation, and then translate back.
...
glTranslatef(walk*sin(M_PI*turn/180), 0, walk*cos(M_PI*turn/180));
glTranslatef(-x_rot, -y_rot, -z_rot);
glRotatef(turn, 0, 1, 0);
glTranslatef(x_rot, y_rot, z_rot);
...
So in your case x_rot = walk*sin(M_PI*turn/180), y_rot = 0 and z_rot = walk*cos(M_PI*turn/180), and the above becomes:
...
glRotatef(turn, 0, 1, 0);
glTranslatef(walk*sin(M_PI*turn/180), 0, walk*cos(M_PI*turn/180));
...
If your robot doesn't rotate about its own axis, translate the robot to the origin, rotate it, and then translate it back to its original position. Keep your translation, rotation, scaling and drawing inside
glPushMatrix();
    // ...your rotation, translation, scaling and drawing go here...
glPopMatrix();
This keeps the rest of the scene the same. If you don't understand these functions, then look here.
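For example, a minimal sketch of this pattern using the values from the question (turn and walk as in the original display() code):

glPushMatrix();
    glTranslatef(walk*sin(M_PI*turn/180), 0, walk*cos(M_PI*turn/180)); // move to the robot's position
    glRotatef(turn, 0, 1, 0);  // rotate about the robot's own y axis
    draw_sphere(3, 1, 1);      // draw in local coordinates
glPopMatrix(); // restore, so the ground and the rest of the scene are unaffected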