Screen shakes when the object moves - visual-c++

Hello, I'm working on a project and I have a problem:
the screen/ground shakes when I move the main object.
Normally when I move with the "W" key I don't get the problem,
but if I move while rotating the camera, I do.
To reproduce it: rotate about 30 degrees with the right mouse button (do not release the button)
and hold the "W" key while rotating the object.
You will see the shake at the ground.
I think the problem is in my lookAt calculation:
gluLookAt(sin(rot*PI/180)*(10-fabs(roty)/4) + movex, 3-(roty/2), cos(rot*PI/180)*(10-(fabs(roty)/4)) + movez,
          -sin(rot*PI/180)*6 + movex, roty, -cos(rot*PI/180)*6 + movez,
          0, 1, 0);
Here is my rotation function. I draw everything after this function.
void System::rotater(){
    if(mouseStates[2][0]==1 && mx != savex && mx != mouseStates[2][1]){
        rot += (mx - mouseStates[2][1]) * 90 / glutGet(GLUT_WINDOW_WIDTH) / 2;
        if(rot > 360) rot -= 360;
        if(rot < 0)   rot = 360 + rot;
    }
    glRotatef(rot, 0, 1, 0);
}
And finally, my movement handling is here:
if(a==87 || a==119){ // 'W' or 'w'
    movex -= sin(rot*PI/180)/3;
    movez -= cos(rot*PI/180)/3;
}

In my experience, screen-shaking issues are commonly to do with the order in which you process events, update objects and the camera, and then draw the scene (or the physics engine is causing it :P). With a high framerate you might not notice small imperfections, so you might want to add a Sleep/usleep to help expose these issues. For example, if your camera position is based on the camera rotation, but you update the position before processing the mouse events, the camera position/rotation can be a frame out of sync.
This is a good reason to process inputs, movement and rendering in clearly separate stages.
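For illustration, a minimal sketch of that separation with hypothetical function names (none of these come from the question's code):
void frame() {
    processInput();  // read mouse/keyboard; finish updating rot, movex, movez
    updateCamera();  // derive the eye position from the final rot for this frame
    renderScene();   // everything drawn this frame sees one consistent camera
}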
Your lookAt call does look rather complex. lookAt is very useful when you want a camera that looks at something, but in this case you're after a camera that looks from a camera position. This is easier to implement by manually constructing the inverse transformations to position the camera before drawing the scene. For example, if you want to move the camera left, translate everything else right (hence the inverse)...
glLoadIdentity();
glRotatef(camera.rotX, 1.0f, 0.0f, 0.0f);
glRotatef(camera.rotY, 0.0f, 1.0f, 0.0f);
glRotatef(camera.rotZ, 0.0f, 0.0f, 1.0f);
glTranslatef(-camera.posX, -camera.posY, -camera.posZ);

Related

LWJGL Fullscreen while keeping aspect ratio?

I want to have a fullscreen mode that keeps the aspect ratio by adding black bars on either side. I tried just creating a display mode, but I can't make it fullscreen unless it's a pre-approved resolution, and when I use a bigger display mode than the native resolution the pixels become messed up and lines appear between all of the tiles in the game for some reason.
I think I need to use FBOs to render the scene to a texture instead of the window, and then just use a fullscreen-approved resolution and render the texture properly stretched out in the center of the screen, but I just don't understand how to render to a texture in order to do that, or how to stretch an image. Could someone please help me?
EDIT
I got fullscreen working, but it makes everything look broken: there are random lines on the edges of anything that's drawn to the window. There are no glitchy lines when it's in native resolution, though. Here's my code:
Display.setTitle("Mega Man");
try{
    Display.setDisplayMode(Display.getDesktopDisplayMode());
    Display.create();
}catch(LWJGLException e){
    e.printStackTrace();
}
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
try{ Display.setFullscreen(true); }catch(Exception e){}
int sh = Display.getHeight();
int sw = WIDTH * sh / HEIGHT;
GL11.glViewport(Display.getWidth()/2 - sw/2, 0, sw, sh);
Screenshot of the glitchy fullscreen here: http://sta.sh/021fohgnmxwa
EDIT
Here is the texture rendering code that I use to draw everything:
public static void DrawQuadTex(Texture tex, int x, int y, float width, float height, float texWidth, float texHeight, float subx, float suby, float subd, String mirror){
    if (tex == null){ return; }
    if (mirror == null){ mirror = ""; }
    // subx, suby, and subd grab sprites from a sprite sheet. subd is both the width and the
    // height of the sprite, as only square, power-of-two images are displayed properly.
    int xinner = 0;
    int xouter = (int) width;
    int yinner = 0;
    int youter = (int) height;
    if (mirror.indexOf("h") > -1){
        xinner = xouter;
        xouter = 0;
    }
    if (mirror.indexOf("v") > -1){
        yinner = youter;
        youter = 0;
    }
    tex.bind();
    glTranslatef(x, y, 0);
    glBegin(GL_QUADS);
    glTexCoord2f(subx/texWidth, suby/texHeight);
    glVertex2f(xinner, yinner);
    glTexCoord2f((subx+subd)/texWidth, suby/texHeight);
    glVertex2f(xouter, yinner);
    glTexCoord2f((subx+subd)/texWidth, (suby+subd)/texHeight);
    glVertex2f(xouter, youter);
    glTexCoord2f(subx/texWidth, (suby+subd)/texHeight);
    glVertex2f(xinner, youter);
    glEnd();
    glLoadIdentity();
}
Just to keep it clean, I'll give you a real answer and not just a comment.
The aspect ratio problem can be solved with the help of glViewport. With this method you can decide which area of the surface will be rendered to; the default viewport covers the whole surface.
Since the second problem with the corrupt rendering (also described here: https://stackoverflow.com/questions/28846531/sprite-game-in-full-screen-aliasing-issue) appeared after changing the viewport, I'll give my thoughts about it in this answer as well.
Without knowing exactly how the rendering code for the tile background looks, I would guess that the problem is due to a difference in resolution between the glViewport and glOrtho calls.
Example: if the glOrtho resolution is half the viewport resolution, then each OpenGL unit is actually 2 pixels. If you then render one tile between x=0 and x=9 and the next one between x=10 and x=19, you will get an empty space between them.
To solve this you can change the resolutions so that they are the same. Or you can render the tiles to overlap: the first one from x=0 to x=10, the second from x=10 to x=20, and so on, as sketched below.
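A hedged sketch of that overlap idea for fixed-size tiles (tileCount, TILE_SIZE, tiles and drawTile are illustrative names, not from the question):
for (int i = 0; i < tileCount; i++) {
    int x0 = i * TILE_SIZE;       // 0, 10, 20, ...
    int x1 = (i + 1) * TILE_SIZE; // 10, 20, 30, ...; shared with the next tile's left edge
    drawTile(tiles[i], x0, x1);   // adjacent tiles share an edge, so no seam can open up
}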
Without seeing the tile rendering code, I can't verify that this is the problem, though.

unity3d: Use main camera's depth buffer for rendering another camera view

After my main camera renders, I'd like to use (or copy) its depth buffer to a (disabled) camera's depth buffer.
My goal is to draw particles onto a smaller render target (using a separate camera) while using the depth buffer after opaque objects are drawn.
I can't do this in a single camera, since the goal is to use a smaller render target for the particles for performance reasons.
Replacement shaders in Unity aren't an option either: I want my particles to use their existing shaders. I just want the depth buffer of the particle camera to be overwritten with a subsampled version of the main camera's depth buffer before the particles are drawn.
I didn't get any reply to my earlier question; hence, the repost.
Here's the script attached to my main camera. It renders all the non-particle layers and I use OnRenderImage to invoke the particle camera.
public class MagicRenderer : MonoBehaviour {
    public Shader particleShader; // shader that uses the main camera's depth buffer to depth-test particle Z
    public Material blendMat;     // material that uses a simple blend shader
    public int downSampleFactor = 1;
    private RenderTexture particleRT;
    private static GameObject pCam;

    void Awake () {
        // make the main camera's depth buffer available to shaders via _CameraDepthTexture
        camera.depthTextureMode = DepthTextureMode.Depth;
    }

    // Update is called once per frame
    void Update () {
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest) {
        // create tmp RT
        particleRT = RenderTexture.GetTemporary (Screen.width / downSampleFactor, Screen.height / downSampleFactor, 0);
        particleRT.antiAliasing = 1;
        // create particle cam
        Camera pCam = GetPCam ();
        pCam.CopyFrom (camera);
        pCam.clearFlags = CameraClearFlags.SolidColor;
        pCam.backgroundColor = new Color (0.0f, 0.0f, 0.0f, 0.0f);
        pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
        pCam.useOcclusionCulling = false;
        pCam.targetTexture = particleRT;
        pCam.depth = 0;
        // Draw to particleRT's colorBuffer using mainCam's depth buffer
        // ?? - how do I transfer this camera's depth buffer to pCam?
        pCam.Render ();
        // pCam.RenderWithShader (particleShader, "Transparent"); // I don't want to replace the shaders my particles use, so shader replacement isn't an option.
        // blend mainCam's colorBuffer with particleRT's colorBuffer
        // Graphics.Blit(pCam.targetTexture, src, blendMat);
        // copy the resulting buffer to the destination
        Graphics.Blit (pCam.targetTexture, dest);
        // clean up
        RenderTexture.ReleaseTemporary(particleRT);
    }

    static public Camera GetPCam() {
        if (!pCam) {
            GameObject oldpcam = GameObject.Find("pCam");
            Debug.Log (oldpcam);
            if (oldpcam) Destroy(oldpcam);
            pCam = new GameObject("pCam");
            pCam.AddComponent<Camera>();
            pCam.camera.enabled = false;
            pCam.hideFlags = HideFlags.DontSave;
        }
        return pCam.camera;
    }
}
I have a few additional questions:
1) Why does camera.depthTextureMode = DepthTextureMode.Depth; end up drawing all the objects in the scene just to write to the Z-buffer? Using Intel GPA, I see two passes before OnRenderImage gets called:
(i) a Z-prepass, which writes only to the depth buffer, and
(ii) a color pass, which writes to both the color and depth buffers.
2) I re-rendered the opaque objects to pCam's RT using a replacement shader that writes (0,0,0,0) to the color buffer with ZWrite On (to overcome the depth-buffer transfer problem). After that, I reset the culling mask and clear flags as follows:
pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.clearFlags = CameraClearFlags.Nothing;
and rendered them using pCam.Render().
I thought this would render the particles, using their existing shaders, with a ZTest against the re-rendered depth.
Unfortunately, what I notice is that the depth-stencil buffer is cleared before the particles are drawn (in spite of me not clearing anything).
Why does this happen?
It's been 5 years, but I developed an almost complete solution for rendering particles in a smaller separate render target. I'm writing this for future visitors; a lot of knowledge is still required.
Copying the depth
First, you have to get the scene depth in the resolution of your smaller render texture.
This can be done by creating a new render texture with the color format "depth".
To write the scene depth to the low resolution depth, create a shader that just outputs the depth:
struct fragOut{
    float depth : DEPTH;
};

sampler2D _LastCameraDepthTexture;

fragOut frag (v2f i){
    fragOut tOut;
    tOut.depth = tex2D(_LastCameraDepthTexture, i.uv).x;
    return tOut;
}
_LastCameraDepthTexture is automatically filled by Unity, but there is a downside: it only comes for free if the main camera renders with deferred rendering.
For forward shading, Unity seems to render the scene again just for the depth texture.
Check the frame debugger to confirm.
Then, add a post processing effect to the main camera that executes the shader:
protected virtual void OnRenderImage(RenderTexture pFrom, RenderTexture pTo) {
    Graphics.Blit(pFrom, mSmallerSceneDepthTexture, mRenderToDepthMaterial);
    Graphics.Blit(pFrom, pTo);
}
You can probably do this without the second blit, but it was easier for me for testing.
Using the copied depth for rendering
To use the new depth texture for your second camera, call
mSecondCamera.SetTargetBuffers(mParticleRenderTexture.colorBuffer, mSmallerSceneDepthTexture.depthBuffer);
Keep targetTexture empty.
You then must ensure that the second camera does not clear the depth, only the color.
To do this, disable clearing on the second camera completely and clear manually like this:
Graphics.SetRenderTarget(mParticleRenderTexture);
GL.Clear(false, true, Color.clear);
I also recommend rendering the second camera by hand. Disable it and call
mSecondCamera.Render();
after clearing.
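Put together, a minimal sketch of the per-frame order described above (mSecondCamera, mParticleRenderTexture and mSmallerSceneDepthTexture follow this answer's names; the surrounding MonoBehaviour is assumed):
void RenderParticleLayer() {
    // clear only the color of the particle target; the copied scene depth survives
    Graphics.SetRenderTarget(mParticleRenderTexture);
    GL.Clear(false, true, Color.clear);
    // draw into the particle color buffer while depth-testing against the copied depth
    mSecondCamera.SetTargetBuffers(mParticleRenderTexture.colorBuffer,
                                   mSmallerSceneDepthTexture.depthBuffer);
    mSecondCamera.Render();
}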
Merging
Now you have to merge the main view and the separate layer.
Depending on your rendering, you will probably end up with a render texture with so-called premultiplied alpha.
To mix it with the rest, use a post-processing step on the main camera with:
fixed4 tBasis = tex2D(_MainTex, i.uv);
fixed4 tToInsert = tex2D(TransparentFX, i.uv);
// beware premultiplied alpha in the insert
tBasis.rgb = tBasis.rgb * (1.0f - tToInsert.a) + tToInsert.rgb;
return tBasis;
Additive materials work out of the box, but alpha-blended ones do not.
You have to create a shader with custom blending to get working alpha-blended materials. The blending is:
Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha
This changes how the alpha channel is modified for every blend that is performed.
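For orientation, a hedged sketch of where that blend state sits in a ShaderLab pass (the surrounding structure and the shader name are assumptions, not taken from this answer):
Shader "Custom/FxAlphaBlendedSeparateAlpha" {
    SubShader {
        Tags { "Queue" = "Transparent" }
        ZWrite Off
        // RGB blends normally; alpha accumulates coverage for the later merge
        Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha
        Pass {
            // ... the particle vertex/fragment program goes here ...
        }
    }
}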
Results
(Screenshots omitted: add-blended in front of alpha-blended, and alpha-blended in front of add-blended, each captured as the fx layer's RGB and alpha channels.)
I did not test yet if the performance actually increases.
If anyone has a simpler solution, let me know please.
I managed to reuse the camera Z-buffer "manually" in the shader used for rendering. See http://forum.unity3d.com/threads/reuse-depth-buffer-of-main-camera.280460/ for more.
Just alter the particle shader you already use for particle rendering.

2D Screen coordinate to 3D position Directx 9 / Box Select

I am trying to implement box select in a 3D world: click, hold and drag the mouse, release it, get a box, and then select what's inside. To start, I'm trying to figure out how to get the coordinates of the clicks in 3D.
I have ray picking, and it is not getting the right coordinate (it returns an origin and a direction). It keeps returning the same origin no matter what the screen X/Y is (although the direction is different).
I've also tried:
D3DXVECTOR3 ori = D3DXVECTOR3(sx, sy, 0.0f);
D3DXVECTOR3 out;
D3DXVec3Unproject(&out, &ori, &viewPort, &projectionMat, &viewMat, &worldMat);
And it produces the same thing: the coordinates are very close to each other no matter what the screen coordinates are (and they are wrong). It's almost like it's returning the eye position instead of the actual world coordinate.
How do I turn 2D screen coordinates into 3D using DirectX 9c?
This is called picking in Direct3D. To select a model in 3D space, you mainly need 3 steps:
Generate the picking ray
Transform the picking ray and the model you want to pick into the same space
Do an intersection test of the picking ray and the model
Generate the picking ray
When we click the mouse on the screen (say at point s), a model is selected when its box projects onto the area surrounding s on the projection window.
So, in order to generate the picking ray from the given screen coordinates (x, y), we first need to transform (x, y) to the projection window; this can be done by inverting the viewport transformation. Another thing: the point on the projection window was scaled by the projection matrix, so we should divide it by the scale factors.
In DirectX, the camera is always placed at the origin (in view space), so the picking ray starts from the origin, and the projection window is the near clip plane (z = 1). This is what the code below does.
Ray CalcPickingRay(LPDIRECT3DDEVICE9 Device, int screen_x, int screen_y)
{
    float px = 0.0f;
    float py = 0.0f;

    // Get the viewport
    D3DVIEWPORT9 vp;
    Device->GetViewport(&vp);

    // Get the projection matrix
    D3DXMATRIX proj;
    Device->GetTransform(D3DTS_PROJECTION, &proj);

    px = ((( 2.0f * screen_x) / vp.Width)  - 1.0f) / proj(0, 0);
    py = (((-2.0f * screen_y) / vp.Height) + 1.0f) / proj(1, 1);

    Ray ray;
    ray._origin    = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
    ray._direction = D3DXVECTOR3(px, py, 1.0f);
    return ray;
}
Transform the picking ray and model into the same space.
We usually achieve this by transforming the picking ray into world space: simply take the inverse of your view matrix, then apply that inverse matrix to your picking ray.
// transform the ray from view space to world space
void TransformRay(Ray* ray, D3DXMATRIX* invertViewMatrix)
{
    // transform the ray's origin, w = 1.
    D3DXVec3TransformCoord(&ray->_origin, &ray->_origin, invertViewMatrix);
    // transform the ray's direction, w = 0.
    D3DXVec3TransformNormal(&ray->_direction, &ray->_direction, invertViewMatrix);
    // normalize the direction
    D3DXVec3Normalize(&ray->_direction, &ray->_direction);
}
Do the intersection test
If everything above went well, you can do the intersection test now. This is a ray-box intersection, so you can use the function D3DXBoxBoundProbe. You can change the visual mode of your box to see whether the picking really worked, for example by setting the fill mode to solid or wire-frame when D3DXBoxBoundProbe returns TRUE.
You can perform the picking in response to WM_LBUTTONDOWN.
case WM_LBUTTONDOWN:
{
    // Get the screen point
    int iMouseX = (short)LOWORD(lParam);
    int iMouseY = (short)HIWORD(lParam);

    // Calculate the picking ray
    Ray ray = CalcPickingRay(g_pd3dDevice, iMouseX, iMouseY);

    // transform the ray from view space to world space
    // get the view matrix
    D3DXMATRIX view;
    g_pd3dDevice->GetTransform(D3DTS_VIEW, &view);

    // invert it
    D3DXMATRIX viewInverse;
    D3DXMatrixInverse(&viewInverse, 0, &view);

    // apply it to the ray
    TransformRay(&ray, &viewInverse);

    // collision detection
    if(D3DXBoxBoundProbe(&box.minPoint, &box.maxPoint, &ray._origin, &ray._direction))
    {
        g_pd3dDevice->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID);
    }
    break;
}
It turns out I was approaching the problem the wrong way around. Turning 2D into 3D didn't make sense in the end; converting the vertices from 3D to 2D and then testing whether they fall inside the 2D box was the right answer!
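For what it's worth, a minimal sketch of that 3D-to-2D test using D3DXVec3Project (the drag rectangle, viewport and matrices are assumed to exist in the surrounding code; InsideSelectionBox is an illustrative name):
bool InsideSelectionBox(const D3DXVECTOR3& worldPos, const RECT& dragRect,
                        const D3DVIEWPORT9& vp, const D3DXMATRIX& proj,
                        const D3DXMATRIX& view, const D3DXMATRIX& world)
{
    // project the world-space vertex to screen space
    D3DXVECTOR3 screen;
    D3DXVec3Project(&screen, &worldPos, &vp, &proj, &view, &world);
    // then it is a plain 2D point-in-rectangle test against the drag box
    return screen.x >= dragRect.left && screen.x <= dragRect.right &&
           screen.y >= dragRect.top  && screen.y <= dragRect.bottom;
}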

trouble with composite rotation and translation in opengl, moving object about its own local axis is not working

I am building a robot in OpenGL and it should move and rotate. When I press f the robot should move forward, if I press t it should rotate 15 degrees about its own local axis, and then if I press f again it should walk in the new direction. I have it partially working: the robot walks and rotates, but the problem is that it is not rotating with respect to its own local axis; it follows (0,0,0). I think I don't understand how the composition of translation and rotation has to be made to get the desired effect.
I am experimenting now with just a scaled sphere. I am adding the display function here so that it is clearer for you:
void display()
{
    glEnable(GL_DEPTH_TEST); // need depth test to correctly draw 3D objects
    glClearColor(0,0,0,1);
    glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
    glShadeModel(GL_SMOOTH);

    // All color and material stuff goes here
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_NORMALIZE); // normalize normals
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);

    // set up the parameters for lighting
    GLfloat light_ambient[]  = {0,0,0,1};
    GLfloat light_diffuse[]  = {.6,.6,.6,1};
    GLfloat light_specular[] = {1,1,1,1};
    GLfloat light_pos[]      = {10,10,10,1};
    glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
    glLightfv(GL_LIGHT0, GL_SPECULAR, light_specular);

    GLfloat mat_specular[] = {.9, .9, .9, 1};
    GLfloat mat_shine[]    = {10};
    glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, mat_specular);
    glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, mat_shine);
    // color specs end //////////////////////////////////////

    //glPolygonMode(GL_FRONT_AND_BACK,GL_LINE); // comment this line to enable polygon shades

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90, 1, 1, 100);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
    gluLookAt(0,0,30,0,0,0,0,1,0);

    glRotatef(x_angle, 0,1,0); // this is just for mouse handling
    glRotatef(y_angle, 1,0,0); // this is just for mouse handling
    glScalef(scale_size, scale_size, scale_size); // for zooming effect
    draw_coordinate();

    // Drawing using VBOs starts here
    glTranslatef(walk*sin(M_PI*turn/180), 0, walk*cos(M_PI*turn/180));
    glRotatef(turn, 0,1,0);
    draw_sphere(3,1,1);

    glDisableClientState(GL_VERTEX_ARRAY); // disable the vertex array on the client side
    glDisableClientState(GL_NORMAL_ARRAY); // disable the normal array on the client side
    glutSwapBuffers();
}
glRotatef from OpenGL rotates around (0,0,0). You have to translate the rotation point to the origin and then do the rotation:
...
glTranslatef(walk*sin(M_PI*turn/180), 0, walk*cos(M_PI*turn/180));
glTranslatef(-x_rot, -y_rot, -z_rot);
glRotatef(turn, 0, 1, 0);
glTranslatef(x_rot, y_rot, z_rot);
...
So in your case x_rot = walk*sin(M_PI*turn/180), y_rot = 0 and z_rot = walk*cos(M_PI*turn/180). The above becomes:
...
glRotatef(turn, 0, 1, 0);
glTranslatef(walk*sin(M_PI*turn/180), 0, walk*cos(M_PI*turn/180));
...
If your robot doesn't rotate about its own axis, then translate the robot to the origin, rotate it, and translate it back to its original position. Keep your translation, rotation, scaling and drawing inside:
glPushMatrix();
    // ...your rotation, translation, scaling and drawing go here...
glPopMatrix();
This keeps the rest of the scene unchanged.
If you don't understand these functions, then look here.
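As a concrete sketch of that pattern with the question's own variables (only the push/pop wrapping is new):
glPushMatrix();
    glTranslatef(walk*sin(M_PI*turn/180), 0, walk*cos(M_PI*turn/180)); // place the robot
    glRotatef(turn, 0, 1, 0); // spin about the robot's own Y axis
    draw_sphere(3, 1, 1);     // the scaled sphere from the question
glPopMatrix(); // the rest of the scene is unaffected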

How do you rotate SVG images in processing.js

I am just starting with processing.js and I have been having trouble because every time I rotate an image it also changes its location on the screen. What Processing seems to do is rotate my image around the point I told it to place it at, instead of rotating it first around its own axis and then placing it where I told it to (which I figured cannot be done in that way/order).
This is the code:
PShape s;
float angle = 0.1; // radians

void setup(){
    size(400,350);
    frameRate(30); // 30 frames per second
    s = loadShape("sensor.svg");
    s.rotate(angle); // I change this angle manually or with my clickMouse function (not shown)
}

void draw(){ // shape(shape, x, y, width, height);
    smooth();
    fill(153);
    ellipse(200, 350/2, 100, 100);
    shape(s, 200, 350/2, 20, 20);
    ellipse(200, 350/2, 2, 2);
}
What I am basically trying to do is make this "sensor" image rotate with the correct orientation around the circle (ellipse) that I drew, perhaps with a click function that rotates the SVG image around the circle. Instead, it rotates around the coordinates given to the shape(image, x_coord, y_coord, width, height) call. If anyone has any suggestions, I would be so happy! I hope my question makes sense; if it doesn't, I would be more than happy to clarify any part of it.
Thanks! :)
It's much easier not to rotate your shape, but to rotate the coordinate system.
void draw() {
    translate(s.width/2, s.height/2);
    rotate(PI/4);
    shape(s);
    resetMatrix();
    // keep on drawing here
}
This first moves the coordinate system so that (0,0) is on top of the center of your shape, then rotates the entire coordinate system by 45 degrees, then draws your shape. Then you reset the coordinate system and keep drawing as usual.
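Applied to the question's sketch, the same idea pivots the sensor around the circle's center at (200, 175). This is a sketch; pushMatrix/popMatrix play the same role as resetMatrix but restore the previous state instead of wiping it:
void draw() {
    fill(153);
    ellipse(200, 350/2, 100, 100);
    pushMatrix();
    translate(200, 350/2);      // move the origin onto the circle's center
    rotate(angle);              // rotate the coordinate system
    shape(s, -10, -10, 20, 20); // offset by half its drawn size so its center sits on the pivot
    popMatrix();
    ellipse(200, 350/2, 2, 2);
}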
