I am trying to implement box select in a 3D world. Basically: click, hold the mouse, release it to get a box, and then box-select everything inside. To start, I'm trying to figure out how to get the coordinates of the clicks in 3D.
I have ray picking, and it's not giving me the right coordinates (it returns an origin and a direction). It keeps returning the same origin no matter what the screen X/Y is (although the direction is different).
I've also tried:
D3DXVECTOR3 ori = D3DXVECTOR3(sx, sy, 0.0f); // screen-space point; z = 0.0f corresponds to the near plane
D3DXVECTOR3 out;
D3DXVec3Unproject(&out, &ori, &viewPort, &projectionMat, &viewMat, &worldMat);
And it gives much the same result: the output coordinates are very close to each other no matter what screen coordinates I pass in (and they are wrong). It's almost as if it's returning the eye position instead of the actual world coordinate.
How do I turn 2D screen coordinates into 3D using DirectX 9.0c?
This is called picking in Direct3D. To select a model in 3D space, you mainly need three steps:
Generate the picking ray
Transform the picking ray and the model you want to pick in the same space
Do an intersection test between the picking ray and the model
Generate the picking ray
When we click the mouse on the screen (say at the point s), the model is selected if its bounding box projects onto the area surrounding s on the projection window.
So, to generate the picking ray from the given screen coordinates (x, y), we first need to transform (x, y) back onto the projection window; this can be done by inverting the viewport transformation. Another thing to note is that the point on the projection window was scaled by the projection matrix, so we have to divide by the scale factors proj(0, 0) and proj(1, 1).
In view space the camera is always placed at the origin, so the picking ray starts from the origin, and the projection window is taken as the plane z = 1. This is what the code below does.
Ray CalcPickingRay(LPDIRECT3DDEVICE9 Device, int screen_x, int screen_y)
{
float px = 0.0f;
float py = 0.0f;
// Get viewport
D3DVIEWPORT9 vp;
Device->GetViewport(&vp);
// Get Projection matrix
D3DXMATRIX proj;
Device->GetTransform(D3DTS_PROJECTION, &proj);
px = ((( 2.0f * screen_x) / vp.Width) - 1.0f) / proj(0, 0);
py = (((-2.0f * screen_y) / vp.Height) + 1.0f) / proj(1, 1);
Ray ray;
ray._origin = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
ray._direction = D3DXVECTOR3(px, py, 1.0f);
return ray;
}
Transform the picking ray and the model into the same space
We usually do this by transforming the picking ray into world space: simply take the inverse of your view matrix, then apply that inverse matrix to your picking ray.
// transform the ray from view space to world space
void TransformRay(Ray* ray, D3DXMATRIX* invertViewMatrix)
{
// transform the ray's origin, w = 1.
D3DXVec3TransformCoord(
&ray->_origin,
&ray->_origin,
invertViewMatrix);
// transform the ray's direction, w = 0.
D3DXVec3TransformNormal(
&ray->_direction,
&ray->_direction,
invertViewMatrix);
// normalize the direction
D3DXVec3Normalize(&ray->_direction, &ray->_direction);
}
Do the intersection test
If everything above went well, you can do the intersection test now. This is a ray-box intersection, so you can use the function D3DXBoxBoundProbe. You can change the visual appearance of your box to check whether the picking really works, for example by setting the fill mode to solid or wireframe when D3DXBoxBoundProbe returns TRUE.
You can perform the picking in response to WM_LBUTTONDOWN.
case WM_LBUTTONDOWN:
{
// Get screen point
int iMouseX = (short)LOWORD(lParam) ;
int iMouseY = (short)HIWORD(lParam) ;
// Calculate the picking ray
Ray ray = CalcPickingRay(g_pd3dDevice, iMouseX, iMouseY) ;
// transform the ray from view space to world space
// get view matrix
D3DXMATRIX view;
g_pd3dDevice->GetTransform(D3DTS_VIEW, &view);
// inverse it
D3DXMATRIX viewInverse;
D3DXMatrixInverse(&viewInverse, 0, &view);
// apply on the ray
TransformRay(&ray, &viewInverse) ;
// collision detection
D3DXVECTOR3 v(0.0f, 0.0f, 0.0f);
if(D3DXBoxBoundProbe(&box.minPoint, &box.maxPoint, &ray._origin, &ray._direction))
{
g_pd3dDevice->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID);
}
break ;
}
It turns out I was handling the problem the wrong way around, and turning 2D into 3D didn't make sense in the end. Converting the vertices from 3D to 2D instead, then checking whether they fall inside the 2D box, was the right answer!
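For reference, here is a minimal sketch of that 3D-to-2D approach. It assumes a hypothetical selection rectangle (selMinX, selMinY, selMaxX, selMaxY) built from the mouse-down and mouse-up points, and uses D3DXVec3Project to do the screen-space projection:
// Returns true if a world-space point projects inside the 2D selection rectangle.
// selMinX/selMinY/selMaxX/selMaxY are assumed to come from the mouse-down/mouse-up points.
bool IsInsideSelectionBox(LPDIRECT3DDEVICE9 device, const D3DXVECTOR3& worldPos,
                          float selMinX, float selMinY, float selMaxX, float selMaxY)
{
    D3DVIEWPORT9 vp;
    D3DXMATRIX proj, view, world;
    device->GetViewport(&vp);
    device->GetTransform(D3DTS_PROJECTION, &proj);
    device->GetTransform(D3DTS_VIEW, &view);
    D3DXMatrixIdentity(&world); // or the object's world matrix if worldPos is in model space

    // Project the 3D point into screen space (x, y in pixels, z in [0, 1]).
    D3DXVECTOR3 screenPos;
    D3DXVec3Project(&screenPos, &worldPos, &vp, &proj, &view, &world);

    // Reject points outside the depth range (e.g. behind the near plane).
    if (screenPos.z < 0.0f || screenPos.z > 1.0f)
        return false;

    return screenPos.x >= selMinX && screenPos.x <= selMaxX &&
           screenPos.y >= selMinY && screenPos.y <= selMaxY;
}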
I would like to rotate my watch's display by a variable number of degrees, like a real-life compass does. So far I have only discovered this function within the Samsung API:
screen.lockOrientation("portrait-secondary");
I would like more control than this; if that means using the native API, that's fine, but I need help on where to look.
You may try using the evas_map_util_rotate() function to rotate an object. It rotates the map by a given angle around the center coordinates of the rotation (the rotation point). A positive angle rotates the map clockwise, while a negative angle rotates it counter-clockwise.
Please have a look at the Modifying a Map with Utility Functions section of this link. It contains an example that shows how to rotate an object around its center point by 45 degrees clockwise.
You may also use the sample code snippet below.
/**
 * @param[in] object_to_rotate The object you want to rotate
 * @param[in] degree The degree you want to rotate
 * @param[in] cx The rotation's center horizontal position
 * @param[in] cy The rotation's center vertical position
 */
void view_rotate_object(Evas_Object *object_to_rotate, double degree, Evas_Coord cx, Evas_Coord cy)
{
Evas_Map *m = NULL;
if (object_to_rotate == NULL) {
dlog_print(DLOG_ERROR, LOG_TAG, "object is NULL");
return;
}
m = evas_map_new(4);
evas_map_util_points_populate_from_object(m, object_to_rotate);
evas_map_util_rotate(m, degree, cx, cy);
evas_object_map_set(object_to_rotate, m);
evas_object_map_enable_set(object_to_rotate, EINA_TRUE);
evas_map_free(m);
}
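As a quick usage sketch (my_object here is a hypothetical handle to the Evas object you want to rotate), you can fetch the object's geometry and rotate it 45 degrees clockwise around its own center:
Evas_Coord x, y, w, h;
evas_object_geometry_get(my_object, &x, &y, &w, &h); /* current position and size */
view_rotate_object(my_object, 45.0, x + (w / 2), y + (h / 2));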
Hope it'll help.
I want to have a fullscreen mode that keeps the aspect ratio by adding black bars on either side. I tried just creating a display mode, but I can't make it fullscreen unless it's a pre-approved resolution, and when I use a display mode bigger than the native resolution the pixels become messed up, and lines appear between all of the tiles in the game for some reason.
I think I need to use FBOs to render the scene to a texture instead of the window, then use an approved fullscreen resolution and render the texture stretched out properly in the center of the screen, but I just don't understand how to render to a texture in order to do that, or how to stretch an image. Could someone please help me?
EDIT
I got fullscreen working, but it makes everything look broken. There are random lines on the edges of anything that's drawn to the window. There are no glitchy lines when it's at native resolution, though. Here's my code:
Display.setTitle("Mega Man");
try{
Display.setDisplayMode(Display.getDesktopDisplayMode());
Display.create();
}catch(LWJGLException e){
e.printStackTrace();
}
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,WIDTH,HEIGHT,0,1,-1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
try{Display.setFullscreen(true);}catch(Exception e){}
int sh=Display.getHeight();
int sw=WIDTH*sh/HEIGHT;
GL11.glViewport(Display.getWidth()/2-sw/2, 0, sw, sh);
Screenshot of the glitchy fullscreen here: http://sta.sh/021fohgnmxwa
EDIT
Here is the texture rendering code that I use to draw everything:
public static void DrawQuadTex(Texture tex, int x, int y, float width, float height, float texWidth, float texHeight, float subx, float suby, float subd, String mirror){
if (tex==null){return;}
if (mirror==null){mirror = "";}
//subx, suby, and subd are to grab sprites from a sprite sheet. subd is the measure of both the width and length of the sprite, as only images with dimensions that are the same and are powers of 2 are properly displayed.
int xinner = 0;
int xouter = (int) width;
int yinner = 0;
int youter = (int) height;
if (mirror.indexOf("h")>-1){
xinner = xouter;
xouter = 0;
}
if (mirror.indexOf("v")>-1){
yinner = youter;
youter = 0;
}
tex.bind();
glTranslatef(x,y,0);
glBegin(GL_QUADS);
glTexCoord2f(subx/texWidth,suby/texHeight);
glVertex2f(xinner,yinner);
glTexCoord2f((subx+subd)/texWidth,suby/texHeight);
glVertex2f(xouter,yinner);
glTexCoord2f((subx+subd)/texWidth,(suby+subd)/texHeight);
glVertex2f(xouter,youter);
glTexCoord2f(subx/texWidth,(suby+subd)/texHeight);
glVertex2f(xinner,youter);
glEnd();
glLoadIdentity();
}
Just to keep it clean, I'll give you a real answer and not just a comment.
The aspect ratio problem can be solved with the help of glViewport. With this call you decide which area of the surface will be rendered to. The default viewport covers the whole surface.
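As a rough sketch of that idea (written as plain C-style GL calls, which map one-to-one onto LWJGL's GL11 methods; screenWidth/screenHeight stand for the real display size and WIDTH/HEIGHT for your virtual resolution):
/* Fit the virtual WIDTH x HEIGHT view into the real screen, preserving aspect ratio. */
float scale = (float)screenWidth / WIDTH;
if ((float)screenHeight / HEIGHT < scale)
    scale = (float)screenHeight / HEIGHT;   /* use whichever axis is tighter */
int viewW = (int)(WIDTH * scale);
int viewH = (int)(HEIGHT * scale);
/* Center the viewport; the uncovered border is left as black bars. */
glViewport((screenWidth - viewW) / 2, (screenHeight - viewH) / 2, viewW, viewH);
/* Keep projecting the same virtual resolution onto that viewport. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);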
Since the second problem with the corrupt rendering (also described here: https://stackoverflow.com/questions/28846531/sprite-game-in-full-screen-aliasing-issue) appeared after changing the viewport, I will give my thoughts about it in this answer as well.
Without knowing exactly how the rendering code for the tile background looks, I would guess that the problem is due to a difference in resolution between the glViewport and glOrtho calls.
Example: if the glOrtho resolution is half the viewport resolution, then each OpenGL unit is actually 2 pixels. If you then render a tile between x=0 and x=9 and the next one between x=10 and x=19, you will get an empty space between them.
To solve this you can change the resolutions so that they are the same. Or you can render the tiles so they overlap: the first one x=0 to x=10, the second one x=10 to x=20, and so on.
Without seeing the tile rendering code I can't verify that this is the problem, though.
I use the following fragment shader, which applies a fog effect, to draw my scene:
precision mediump float;
uniform int EnableFog;
uniform float FogMinDist;
uniform float FogMaxDist;
varying lowp vec4 DestinationColor;
varying float EyeToVertexDist;
float computeFogFactor()
{
float fogFactor = 1.0;
if (EnableFog != 0)
{
//Use a slightly lower value than FogMaxDist to get a better fog effect - it makes the far end disappear quicker.
float fogMaxDistABitCloser = FogMaxDist * 0.98;
fogFactor = (fogMaxDistABitCloser - EyeToVertexDist) / (fogMaxDistABitCloser - FogMinDist);
fogFactor = clamp(fogFactor, 0.0, 1.0);
}
return fogFactor;
}
void main(void)
{
float fogFactor = computeFogFactor();
gl_FragColor = DestinationColor * fogFactor;
}
And I enable alpha blending:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The result is the following scene:
My problem is with the places where the lines overlap - the color there looks darker than the color of either line:
How can I fix it?
As already described in the comment, you are blending the newly drawn line with the background, which may already contain colours from another object at certain pixels - in your case where the lines overlap. To solve this you will either have to draw your lines without overlapping or make your drawing independent of the current buffer state.
In your specific case you may pass the background colour to your fragment shader via some uniform or even a texture and then do your blending manually in the fragment shader.
In general you might want to draw the grid to a frame buffer object (FBO) with an attached texture, and then draw the whole texture in a single draw call using your fog shader and blending. The drawing to the FBO should then be done with blending disabled.
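A minimal sketch of that FBO setup, written as plain C-style GL ES 2.0 calls (fboWidth/fboHeight are assumed to match your render size):
/* Create a texture to hold the grid's colour output. */
GLuint gridTex, gridFbo;
glGenTextures(1, &gridTex);
glBindTexture(GL_TEXTURE_2D, gridTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, fboWidth, fboHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Attach it to a framebuffer object. */
glGenFramebuffers(1, &gridFbo);
glBindFramebuffer(GL_FRAMEBUFFER, gridFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gridTex, 0);

/* Draw the grid into the FBO with blending disabled, so overlapping lines just overwrite. */
glDisable(GL_BLEND);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
/* ... draw the grid lines here ... */

/* Switch back to the default framebuffer and composite the texture once, with blending. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
/* ... draw a full-screen quad sampling gridTex, applying the fog shader ... */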
There are other ways such as drawing the grid to a stencil buffer first and then redraw a full-screen rect applying a colour with your shader and blending.
After my main camera renders, I'd like to use (or copy) its depth buffer to a (disabled) camera's depth buffer.
My goal is to draw particles onto a smaller render target (using a separate camera) while using the depth buffer after opaque objects are drawn.
I can't do this in a single camera, since the goal is to use a smaller render target for the particles for performance reasons.
Replacement shaders in Unity aren't an option either: I want my particles to use their existing shaders - I just want the depth buffer of the particle camera to be overwritten with a subsampled version of the main camera's depth buffer before the particles are drawn.
I didn't get any reply to my earlier question; hence, the repost.
Here's the script attached to my main camera. It renders all the non-particle layers and I use OnRenderImage to invoke the particle camera.
public class MagicRenderer : MonoBehaviour {
public Shader particleShader; // shader that uses the main camera's depth buffer to depth test particle Z
public Material blendMat; // material that uses a simple blend shader
public int downSampleFactor = 1;
private RenderTexture particleRT;
private static GameObject pCam;
void Awake () {
// make the main cameras depth buffer available to the shaders via _CameraDepthTexture
camera.depthTextureMode = DepthTextureMode.Depth;
}
// Update is called once per frame
void Update () {
}
void OnRenderImage(RenderTexture src, RenderTexture dest) {
// create tmp RT
particleRT = RenderTexture.GetTemporary (Screen.width / downSampleFactor, Screen.height / downSampleFactor, 0);
particleRT.antiAliasing = 1;
// create particle cam
Camera pCam = GetPCam ();
pCam.CopyFrom (camera);
pCam.clearFlags = CameraClearFlags.SolidColor;
pCam.backgroundColor = new Color (0.0f, 0.0f, 0.0f, 0.0f);
pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.useOcclusionCulling = false;
pCam.targetTexture = particleRT;
pCam.depth = 0;
// Draw to particleRT's colorBuffer using mainCam's depth buffer
// ?? - how do i transfer this camera's depth buffer to pCam?
pCam.Render ();
// pCam.RenderWithShader (particleShader, "Transparent"); // I don't want to replace the shaders my particles use, so shader replacement isn't an option.
// blend mainCam's colorBuffer with particleRT's colorBuffer
// Graphics.Blit(pCam.targetTexture, src, blendMat);
// copy resulting buffer to destination
Graphics.Blit (pCam.targetTexture, dest);
// clean up
RenderTexture.ReleaseTemporary(particleRT);
}
static public Camera GetPCam() {
if (!pCam) {
GameObject oldpcam = GameObject.Find("pCam");
Debug.Log (oldpcam);
if (oldpcam) Destroy(oldpcam);
pCam = new GameObject("pCam");
pCam.AddComponent<Camera>();
pCam.camera.enabled = false;
pCam.hideFlags = HideFlags.DontSave;
}
return pCam.camera;
}
}
I've a few additional questions:
1) Why does camera.depthTextureMode = DepthTextureMode.Depth; end up drawing all the objects in the scene just to write to the Z-buffer? Using Intel GPA, I see two passes before OnRenderImage gets called:
(i) a Z-prepass, which only writes to the depth buffer
(ii) a color pass, which writes to both the color and depth buffers.
2) I re-rendered the opaque objects into pCam's RT using a replacement shader that writes (0,0,0,0) to the color buffer with ZWrite On (to work around the depth buffer transfer problem). After that, I reset the culling mask and clear flags as follows:
pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.clearFlags = CameraClearFlags.Nothing;
and rendered them using pCam.Render().
I thought this would render the particles using their existing shaders with the ZTest.
Unfortunately, what I notice is that the depth-stencil buffer is cleared before the particles are drawn (despite me not clearing anything).
Why does this happen?
It's been 5 years, but I developed an almost complete solution for rendering particles into a smaller, separate render target. I'm writing this for future visitors. A lot of knowledge is still required.
Copying the depth
First, you have to get the scene depth in the resolution of your smaller render texture.
This can be done by creating a new render texture with the color format "depth".
To write the scene depth into the low-resolution depth texture, create a shader that just outputs the depth:
struct fragOut{
float depth : DEPTH;
};
sampler2D _LastCameraDepthTexture;
fragOut frag (v2f i){
fragOut tOut;
tOut.depth = tex2D(_LastCameraDepthTexture, i.uv).x;
return tOut;
}
_LastCameraDepthTexture is automatically filled by Unity, but there is a downside.
It only comes for free if the main camera renders with deferred rendering.
For forward shading, Unity seems to render the scene again just for the depth texture.
Check the frame debugger.
Then, add a post processing effect to the main camera that executes the shader:
protected virtual void OnRenderImage(RenderTexture pFrom, RenderTexture pTo) {
Graphics.Blit(pFrom, mSmallerSceneDepthTexture, mRenderToDepthMaterial);
Graphics.Blit(pFrom, pTo);
}
You can probably do this without the second blit, but it was easier for me for testing.
Using the copied depth for rendering
To use the new depth texture for your second camera, call
mSecondCamera.SetTargetBuffers(mParticleRenderTexture.colorBuffer, mSmallerSceneDepthTexture.depthBuffer);
Keep targetTexture empty.
You then must ensure the second camera does not clear the depth, only the color.
To do this, disable clearing on the second camera completely and clear manually like this:
Graphics.SetRenderTarget(mParticleRenderTexture);
GL.Clear(false, true, Color.clear);
I recommend also rendering the second camera by hand. Disable it and call
mSecondCamera.Render();
after clearing.
Merging
Now you have to merge the main view and the separate layer.
Depending on your rendering, you will probably end up with a render texture with so-called premultiplied alpha.
To mix this with the rest, use a post-processing step on the main camera with
fixed4 tBasis = tex2D(_MainTex, i.uv);
fixed4 tToInsert = tex2D(TransparentFX, i.uv);
//beware premultiplied alpha in insert
tBasis.rgb = tBasis.rgb * (1.0f- tToInsert.a) + tToInsert.rgb;
return tBasis;
Additive materials work out of the box, but alpha-blended ones do not.
You have to create a shader with custom blending to get working alpha-blended materials. The blending is
Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha
This changes how the alpha channel is modified by every blend operation.
Results
Screenshots (not shown here): additive blended in front of alpha blended, and alpha blended in front of additive blended, each with the fx layer's RGB and alpha shown separately.
I have not yet tested whether the performance actually increases.
If anyone has a simpler solution, please let me know.
I managed to reuse the camera's Z-buffer "manually" in the shader used for rendering. See http://forum.unity3d.com/threads/reuse-depth-buffer-of-main-camera.280460/ for more.
Just alter the shader you already use for particle rendering.
I am building a robot in OpenGL and it should move and rotate. When I press f the robot should move forward, and if I press t he should rotate 15 degrees about his own local axis, and then if I press f again he should walk in the new direction. I have the robot walking and rotating, but the problem is that he is not rotating with respect to his local axis - he rotates around (0,0,0). I think I don't understand how the composition of translation and rotation has to be done so that I get the desired effect.
For now I am testing with just a scaled sphere. I am adding the display function here so that it is clearer for you guys:
void display()
{
glEnable(GL_DEPTH_TEST); // need depth test to correctly draw 3D objects
glClearColor(0,0,0,1);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glShadeModel(GL_SMOOTH);
//All color and material stuffs go here
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_NORMALIZE); // normalize normals
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
// set up the parameters for lighting
GLfloat light_ambient[] = {0,0,0,1};
GLfloat light_diffuse[] = {.6,.6,.6,1};
GLfloat light_specular[] = {1,1,1,1};
GLfloat light_pos[] = {10,10,10,1};
glLightfv(GL_LIGHT0,GL_AMBIENT, light_ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, light_specular);
GLfloat mat_specular[] = {.9, .9, .9,1};
GLfloat mat_shine[] = {10};
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, mat_specular);
glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, mat_shine);
//color specs ends ////////////////////////////////////////
//glPolygonMode(GL_FRONT_AND_BACK,GL_LINE); // comment this line to enable polygon shades
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(90, 1, 1, 100);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
gluLookAt(0,0,30,0,0,0,0,1,0);
glRotatef(x_angle, 0, 1,0); // this is just for mouse handling
glRotatef(y_angle, 1,0,0); // this is just for mouse handling
glScalef(scale_size, scale_size, scale_size); // for zooming effect
draw_coordinate();
//Drawing using VBO starts here
glTranslatef(walk*sin(M_PI*turn/180),0,walk*cos(M_PI*turn/180));
glRotatef(turn,0,1,0);
draw_sphere(3,1,1);
glDisableClientState(GL_VERTEX_ARRAY); // disable the vertex array on the client side
glDisableClientState(GL_NORMAL_ARRAY); // disable the normal array on the client side
glutSwapBuffers();
}
The rotate function from OpenGL is one that rotates around (0,0,0). You have to translate the rotation point to the origin and then do the rotation.
...
glTranslatef(walk*sin(M_PI*turn/180),0,walk*cos(M_PI*turn/180));
glTranslatef(-x_rot,-y_rot,-z_rot);
glRotatef(turn,0,1,0);
glTranslatef(x_rot,y_rot,z_rot);
...
So in your case x_rot = walk*sin(M_PI*turn/180), y_rot = 0 and z_rot = walk*cos(M_PI*turn/180), so the first two translations cancel out and the above becomes:
...
glRotatef(turn,0,1,0);
glTranslatef(walk*sin(M_PI*turn/180), 0, walk*cos(M_PI*turn/180));
...
If your robot doesn't rotate about its own axis, then translate the robot to the origin, rotate it, and translate it back to its original position. Keep your translation, rotation, scaling and drawing inside
glPushMatrix();
........your rotation, translation, scaling and drawing go here..........
glPopMatrix();
This keeps the rest of the scene unaffected; a concrete sketch follows below.
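For instance, here is a minimal sketch of the drawing part of display() rewritten this way (assuming the sphere model is centred on its own origin; draw_sphere and the walk/turn variables are the ones from the question):
glPushMatrix();
    // Move the robot to its current position...
    glTranslatef(walk * sin(M_PI * turn / 180), 0, walk * cos(M_PI * turn / 180));
    // ...and rotate it about its own Y axis. OpenGL applies the transform specified
    // last (the rotation) to the vertices first, so the model spins around its own
    // origin and is then moved into place, instead of orbiting (0,0,0).
    glRotatef(turn, 0, 1, 0);
    draw_sphere(3, 1, 1);
glPopMatrix();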
If you don't understand these functions, then look here.