I use a 3D object as a container for a few meshes, and I am using an orthographic camera. I rotate the container by 90° about the Y axis like this:
meshContainer.rotation.x = 0;
meshContainer.rotation.y = - 90 * Math.PI / 180;
meshContainer.rotation.z = 0;
camera.updateProjectionMatrix();
I have to do operations on the meshes inside the container depending on their orientation relative to the camera. So, I do this test:
var vector = new THREE.Vector3( 0, 0, -1 );
vector.applyQuaternion(camera.quaternion);
if (vector.dot(geom.faces[i].normal) < 0) { ... }
geom is the geometry of one mesh inside the container.
I just want to check whether each face is 'looking at' the camera or not. It works fine when I rotate the container with the mouse controls, but not with a rotation like the one above. When I look at the mesh geometries, it seems that the face normals are not updated when I rotate the container.
I tried this:
mesh[value].geometry.computeFaceNormals();
mesh[value].geometry.computeVertexNormals();
on each mesh after the container rotation, but with no results.
Does anybody know what to do?
Change the camera position instead of the mesh container and everything works fine.
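As an illustration of that idea, a minimal sketch (assuming the same variable names as above): orbit the camera about the Y axis instead of rotating meshContainer, so the face normals stay valid in world space and the dot test above keeps working.
// sketch only: move the camera -90° around the Y axis, keeping its distance
var angle = -90 * Math.PI / 180;
var p = camera.position;
var radius = Math.sqrt(p.x * p.x + p.z * p.z); // horizontal distance to the origin
p.set(radius * Math.sin(angle), p.y, radius * Math.cos(angle));
camera.lookAt(meshContainer.position);
// camera.updateProjectionMatrix() is only needed when projection parameters change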
I cannot figure out how to put the object back at the three.js scene origin after having moved it. What happens is that I built a scene with an object at the origin and a camera pointing at it.
I am using the mouse to move around the object in the scene.
Then I try to reset the object position so that it goes back to the scene origin, with the camera pointed at it, like this:
camera.position.x = 10;
camera.position.y = 10;
camera.position.z = 10;
camera.lookAt( scene.position );
group.position.set(0, 0, 0);
It seems fine at first, but when I try to rotate the camera around the object with the mouse, the object shifts back to its previous position and the camera is not centered on it.
I was using controls. Just do this:
controls.target.set( 0, 0, 0 );
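If the object also has to move back and the change should take effect immediately, a minimal follow-up sketch (assuming OrbitControls-style controls, since the post only says "controls"):
controls.update();              // apply the new target right away
group.position.set( 0, 0, 0 );  // put the object back at the scene origin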
After my main camera renders, I'd like to use (or copy) its depth buffer to a (disabled) camera's depth buffer.
My goal is to draw particles onto a smaller render target (using a separate camera) while using the depth buffer after opaque objects are drawn.
I can't do this in a single camera, since the goal is to use a smaller render target for the particles for performance reasons.
Replacement shaders in Unity aren't an option either: I want my particles to use their existing shaders. I just want the depth buffer of the particle camera to be overwritten with a subsampled version of the main camera's depth buffer before the particles are drawn.
I didn't get any reply to my earlier question; hence, the repost.
Here's the script attached to my main camera. It renders all the non-particle layers and I use OnRenderImage to invoke the particle camera.
public class MagicRenderer : MonoBehaviour {
public Shader particleShader; // shader that uses the main camera's depth buffer to depth test particle Z
public Material blendMat; // material that uses a simple blend shader
public int downSampleFactor = 1;
private RenderTexture particleRT;
private static GameObject pCam;
void Awake () {
// make the main camera's depth buffer available to the shaders via _CameraDepthTexture
camera.depthTextureMode = DepthTextureMode.Depth;
}
// Update is called once per frame
void Update () {
}
void OnRenderImage(RenderTexture src, RenderTexture dest) {
// create tmp RT
particleRT = RenderTexture.GetTemporary (Screen.width / downSampleFactor, Screen.height / downSampleFactor, 0);
particleRT.antiAliasing = 1;
// create particle cam
Camera pCam = GetPCam ();
pCam.CopyFrom (camera);
pCam.clearFlags = CameraClearFlags.SolidColor;
pCam.backgroundColor = new Color (0.0f, 0.0f, 0.0f, 0.0f);
pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.useOcclusionCulling = false;
pCam.targetTexture = particleRT;
pCam.depth = 0;
// Draw to particleRT's colorBuffer using mainCam's depth buffer
// ?? - how do i transfer this camera's depth buffer to pCam?
pCam.Render ();
// pCam.RenderWithShader (particleShader, "Transparent"); // I don't want to replace the shaders my particles use, so shader replacement isn't an option.
// blend mainCam's colorBuffer with particleRT's colorBuffer
// Graphics.Blit(pCam.targetTexture, src, blendMat);
// copy resulting buffer to destination
Graphics.Blit (pCam.targetTexture, dest);
// clean up
RenderTexture.ReleaseTemporary(particleRT);
}
static public Camera GetPCam() {
if (!pCam) {
GameObject oldpcam = GameObject.Find("pCam");
Debug.Log (oldpcam);
if (oldpcam) Destroy(oldpcam);
pCam = new GameObject("pCam");
pCam.AddComponent<Camera>();
pCam.camera.enabled = false;
pCam.hideFlags = HideFlags.DontSave;
}
return pCam.camera;
}
}
I've a few additional questions:
1) Why does camera.depthTextureMode = DepthTextureMode.Depth; end up drawing all the objects in the scene just to write to the Z-buffer? Using Intel GPA, I see two passes before OnRenderImage gets called:
(i) a Z-prepass, which only writes to the depth buffer;
(ii) a color pass, which writes to both the color and the depth buffer.
2) I re-rendered the opaque objects to pCam's RT using a replacement shader that writes (0,0,0,0) to the color buffer with ZWrite On (to work around the depth buffer transfer problem). After that, I reset the culling mask and clear flags as follows:
pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.clearFlags = CameraClearFlags.Nothing;
and rendered them using pCam.Render().
I thought this would render the particles using their existing shaders with the ZTest.
Unfortunately, what I notice is that the depth-stencil buffer is cleared before the particles are drawn (in spite of me not clearing anything).
Why does this happen?
It's been 5 years, but I developed an almost complete solution for rendering particles into a smaller, separate render target. I'm writing this down for future visitors. A fair amount of knowledge is still required.
Copying the depth
First, you have to get the scene depth in the resolution of your smaller render texture.
This can be done by creating a new render texture with the color format "depth".
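For example, the low-resolution depth texture can be created like this (a sketch; mSmallerSceneDepthTexture and downSampleFactor are placeholder names):
// create a low-resolution render texture that stores depth only
mSmallerSceneDepthTexture = new RenderTexture(
    Screen.width / downSampleFactor,
    Screen.height / downSampleFactor,
    24,                              // depth buffer bits
    RenderTextureFormat.Depth);
mSmallerSceneDepthTexture.Create();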
To write the scene depth to the low resolution depth, create a shader that just outputs the depth:
struct fragOut{
float depth : DEPTH;
};
sampler2D _LastCameraDepthTexture;
fragOut frag (v2f i){
fragOut tOut;
tOut.depth = tex2D(_LastCameraDepthTexture, i.uv).x;
return tOut;
}
_LastCameraDepthTexture is automatically filled by Unity, but there is a downside.
It only comes for free if the main camera renders with deferred rendering.
For forward shading, Unity seems to render the scene again just for the depth texture.
Check the frame debugger.
Then, add a post processing effect to the main camera that executes the shader:
protected virtual void OnRenderImage(RenderTexture pFrom, RenderTexture pTo) {
Graphics.Blit(pFrom, mSmallerSceneDepthTexture, mRenderToDepthMaterial);
Graphics.Blit(pFrom, pTo);
}
You can probably do this without the second blit, but it was easier for me for testing.
Using the copied depth for rendering
To use the new depth texture for your second camera, call
mSecondCamera.SetTargetBuffers(mParticleRenderTexture.colorBuffer, mSmallerSceneDepthTexture.depthBuffer);
Keep targetTexture empty.
You then must ensure the second camera does not clear the depth, only the color.
To do this, disable clearing on the second camera completely and clear manually like this:
Graphics.SetRenderTarget(mParticleRenderTexture);
GL.Clear(false, true, Color.clear);
I recommend also rendering the second camera by hand. Disable it and call
mSecondCamera.Render();
after clearing.
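Pulled together, the setup looks roughly like this (a sketch using the placeholder names from above):
// one-time setup
mSecondCamera.clearFlags = CameraClearFlags.Nothing; // never let the camera clear on its own
mSecondCamera.enabled = false;                       // we render it by hand
mSecondCamera.SetTargetBuffers(mParticleRenderTexture.colorBuffer,
                               mSmallerSceneDepthTexture.depthBuffer);
// every frame, before the particles are drawn
Graphics.SetRenderTarget(mParticleRenderTexture);
GL.Clear(false, true, Color.clear);                  // clear color only, keep the copied depth
mSecondCamera.Render();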
Merging
Now you have to merge the main view and the separate layer.
Depending on your rendering, you will probably end up with a render texture with so-called premultiplied alpha.
To mix this with the rest, use a post processing step on the main camera with
fixed4 tBasis = tex2D(_MainTex, i.uv);
fixed4 tToInsert = tex2D(TransparentFX, i.uv);
//beware premultiplied alpha in insert
tBasis.rgb = tBasis.rgb * (1.0f- tToInsert.a) + tToInsert.rgb;
return tBasis;
Additive materials work out of the box, but alpha-blended ones do not.
You have to create a shader with custom blending to get working alpha-blended materials. The blending is
Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha
This changes how the alpha channel is blended in every blend operation that is performed.
Results
(Screenshots omitted: additive blended in front of alpha blended, and alpha blended in front of additive, each shown as the fx layer's RGB and its alpha.)
I have not yet tested whether the performance actually increases.
If anyone has a simpler solution, let me know please.
I managed to reuse the camera's Z-buffer "manually" in the shader used for rendering. See http://forum.unity3d.com/threads/reuse-depth-buffer-of-main-camera.280460/ for more.
Just alter the particle shader you already use for particle rendering.
I am trying to implement box select in a 3D world. Basically, click, hold the mouse, then release it to get a box and box-select. To start, I'm trying to figure out how to get the coordinates of the clicks in 3D.
I have ray picking, and it is not giving me the right coordinate (it returns an origin and a direction). It keeps returning the same origin no matter what the screen X/Y is (although the direction is different).
I've also tried:
D3DXVECTOR3 ori = D3DXVECTOR3(sx, sy, 0.0f);
D3DXVECTOR3 out;
D3DXVec3Unproject(&out, &ori, &viewPort, &projectionMat, &viewMat, &worldMat);
And it gives much the same thing: the coordinates are very close to each other no matter which screen coordinates I pass in (and they are wrong). It's almost as if it returns the eye position instead of the actual world coordinate.
How do I turn 2D screen coordinates into 3D using DirectX 9.0c?
This is called picking in Direct3D. To select a model in 3D space, you mainly need three steps:
Generate the picking ray
Transform the picking ray and the model you want to pick in the same space
Do an intersection test of the picking ray and the model
Generate the picking ray
When we click the mouse on the screen (say the point is s), the model is selected when its bounding volume projects onto the area surrounding the point s on the projection window.
So, in order to generate the picking ray for the given screen coordinates (x, y), we first need to transform (x, y) to the projection window; this is the inverse of the viewport transformation. Another thing: the point on the projection window was scaled by the projection matrix, so we should divide it by the scale factors.
In DirectX, the camera is always placed at the origin in view space, so the picking ray starts at the origin, and the projection window is the plane z = 1. This is what the code below does.
Ray CalcPickingRay(LPDIRECT3DDEVICE9 Device, int screen_x, int screen_y)
{
float px = 0.0f;
float py = 0.0f;
// Get viewport
D3DVIEWPORT9 vp;
Device->GetViewport(&vp);
// Get Projection matrix
D3DXMATRIX proj;
Device->GetTransform(D3DTS_PROJECTION, &proj);
px = ((( 2.0f * screen_x) / vp.Width) - 1.0f) / proj(0, 0);
py = (((-2.0f * screen_y) / vp.Height) + 1.0f) / proj(1, 1);
Ray ray;
ray._origin = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
ray._direction = D3DXVECTOR3(px, py, 1.0f);
return ray;
}
Transform the picking ray and model into the same space.
We usually do this by transforming the picking ray into world space: simply take the inverse of your view matrix and apply it to your picking ray.
// transform the ray from view space to world space
void TransformRay(Ray* ray, D3DXMATRIX* invertViewMatrix)
{
// transform the ray's origin, w = 1.
D3DXVec3TransformCoord(
&ray->_origin,
&ray->_origin,
invertViewMatrix);
// transform the ray's direction, w = 0.
D3DXVec3TransformNormal(
&ray->_direction,
&ray->_direction,
invertViewMatrix);
// normalize the direction
D3DXVec3Normalize(&ray->_direction, &ray->_direction);
}
Do intersection test
If everything above went well, you can do the intersection test now. This is a ray-box intersection, so you can use the function D3DXBoxBoundProbe. To see whether the picking really works, you can change the visual mode of your box, for example set the fill mode to solid or wire-frame when D3DXBoxBoundProbe returns TRUE.
You can perform the picking in response to WM_LBUTTONDOWN.
case WM_LBUTTONDOWN:
{
// Get screen point
int iMouseX = (short)LOWORD(lParam) ;
int iMouseY = (short)HIWORD(lParam) ;
// Calculate the picking ray
Ray ray = CalcPickingRay(g_pd3dDevice, iMouseX, iMouseY) ;
// transform the ray from view space to world space
// get view matrix
D3DXMATRIX view;
g_pd3dDevice->GetTransform(D3DTS_VIEW, &view);
// inverse it
D3DXMATRIX viewInverse;
D3DXMatrixInverse(&viewInverse, 0, &view);
// apply on the ray
TransformRay(&ray, &viewInverse) ;
// collision detection
D3DXVECTOR3 v(0.0f, 0.0f, 0.0f);
if(D3DXBoxBoundProbe(&box.minPoint, &box.maxPoint, &ray._origin, &ray._direction))
{
g_pd3dDevice->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID);
}
break ;
}
It turns out I was handling the problem the wrong way round. Turning 2D into 3D didn't make sense in the end; converting the vertices from 3D to 2D and then checking whether they fall inside the 2D box was the right answer!
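For reference, a minimal sketch of that 3D-to-2D test using D3DXVec3Project (the viewport and matrices are assumed to be the same ones passed to D3DXVec3Unproject above; the function name and the drag rectangle are placeholders):
// project a world-space point to screen space and test it against the drag rectangle
bool IsInsideDragBox(const D3DXVECTOR3& worldPos, const RECT& dragRect,
                     D3DVIEWPORT9& viewPort, D3DXMATRIX& projectionMat,
                     D3DXMATRIX& viewMat, D3DXMATRIX& worldMat)
{
    D3DXVECTOR3 screenPos;
    D3DXVec3Project(&screenPos, &worldPos, &viewPort,
                    &projectionMat, &viewMat, &worldMat);
    return screenPos.x >= dragRect.left && screenPos.x <= dragRect.right &&
           screenPos.y >= dragRect.top  && screenPos.y <= dragRect.bottom &&
           screenPos.z >= 0.0f && screenPos.z <= 1.0f; // in front of the camera
}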
I am building a robot in OpenGL and it should move and rotate. When I press f the robot should move forward, if I press t it should rotate 15° about its own local axis, and then if I press f it should walk again. So far the robot walks and rotates, but the problem is that it does not rotate about its own local axis; it orbits (0,0,0). I think I don't understand how the translation and the rotation have to be composed so that I get the desired effect.
For now I am testing with just a scaled sphere. I am adding the display function here so that it is clearer:
void display()
{
glEnable(GL_DEPTH_TEST); // need depth test to correctly draw 3D objects
glClearColor(0,0,0,1);
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glShadeModel(GL_SMOOTH);
//All color and material stuffs go here
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_NORMALIZE); // normalize normals
glEnable(GL_COLOR_MATERIAL);
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
// set up the parameters for lighting
GLfloat light_ambient[] = {0,0,0,1};
GLfloat light_diffuse[] = {.6,.6,.6,1};
GLfloat light_specular[] = {1,1,1,1};
GLfloat light_pos[] = {10,10,10,1};
glLightfv(GL_LIGHT0,GL_AMBIENT, light_ambient);
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_SPECULAR, light_specular);
GLfloat mat_specular[] = {.9, .9, .9,1};
GLfloat mat_shine[] = {10};
glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, mat_specular);
glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, mat_shine);
//color specs ends ////////////////////////////////////////
//glPolygonMode(GL_FRONT_AND_BACK,GL_LINE); // comment this line to enable polygon shades
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(90, 1, 1, 100);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
gluLookAt(0,0,30,0,0,0,0,1,0);
glRotatef(x_angle, 0, 1,0); // this is just for mouse handling
glRotatef(y_angle, 1,0,0); // this is just for mouse handling
glScalef(scale_size, scale_size, scale_size); // for zooming effect
draw_coordinate();
//Drawing using VBO starts here
glTranslatef(walk*sin(M_PI*turn/180),0,walk*cos(M_PI*turn/180));
glRotatef(turn,0,1,0);
draw_sphere(3,1,1);
glDisableClientState(GL_VERTEX_ARRAY); // disable the vertex array on the client side
glDisableClientState(GL_NORMAL_ARRAY); // disable the normal array on the client side
glutSwapBuffers();
}
The rotate function in OpenGL (glRotatef) rotates around (0,0,0). You have to translate the rotation point to the origin and then do the rotation.
...
glTranslatef(walk*sin(M_PI*turn/180),0,walk*cos(M_PI*turn/180));
glTranslatef(-x_rot,-y_rot,-z_rot);
glRotatef(turn,0,1,0);
glTranslatef(x_rot,y_rot,z_rot);
...
So in your case x_rot = walk*sin(M_PI*turn/180), y_rot = 0 and z_rot = walk*cos(M_PI*turn/180), and the above becomes:
...
glRotatef(turn,0,1,0);
glTranslatef(walk*sin(M_PI*turn/180),0,walk*cos(M_PI*turn/180));
...
If your robot doesn't rotate about its own axis, translate the robot to the origin, rotate it, and then translate it back to its original position (see the sketch below). Keep your translation, rotation, scaling and drawing inside
glPushMatrix();
........your rotation,translation,scalling,drawing goes here..........
glPopMatrix();
This keeps the rest of the scene unchanged.
If you don't understand these functions, look here.
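A minimal sketch of that pattern (cx, cy, cz stand for the robot's current centre and draw_robot() is a placeholder for your drawing code):
glPushMatrix();                      /* keep this transform local to the robot */
glTranslatef(cx, cy, cz);            /* put the pivot back where the robot is */
glRotatef(turn, 0.0f, 1.0f, 0.0f);   /* rotate about the robot's local Y axis */
glTranslatef(-cx, -cy, -cz);         /* move the robot's centre to the origin */
draw_robot();
glPopMatrix();                       /* restore the matrix for the rest of the scene */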
I am just starting with processing.js and I have been having trouble because every time I rotate an image it also changes its location on the screen. What Processing seems to do is rotate my image around the point where I told it to place it, instead of rotating it first around its own axis and then placing it where I told it to (which I figured cannot be done in that order).
This is the code
PShape s;
float angle = 0.1; //rads
s = loadShape("sensor.svg");
s.rotate(angle);
//I change this angle manually or with my clickMouse function, which isn't shown.
void setup(){
size(400,350);
frameRate(30); //30 frames per seconds
}
void draw(){ //shape( shape, x, y, width, height);
smooth();
fill(153);
ellipse(200, 350/2, 100, 100);
shape(s, 200, 350/2, 20, 20);
ellipse(200, 350/2, 2, 2);
}
What I am basically trying to do is make this 'sensor' image rotate with the correct orientation around the circle (ellipse) that I drew. That's the idea; right now it's doing neither. Maybe a click function could rotate the SVG image around the circle. Instead, it rotates around the coordinates passed to the shape(image, x_coord, y_coord, width, height) call. If anyone has any suggestions, I would be very happy! I hope my question makes sense; if it doesn't, I would be more than happy to clarify any part of it.
Thanks! :)
It's much easier not to rotate your shape, but to rotate the coordinate system.
void draw() {
translate(s.width/2,s.height/2);
rotate(PI/4);
shape(s);
resetMatrix();
// keep on drawing here
}
This first moves the coordinate system so that (0,0) is on top of the center of your shape, then rotates the entire coordinate system by 45 degrees, then draws your shape. Then you reset the coordinate system and keep drawing as usual.
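Adapted to the coordinates in the question, a sketch (assuming the same PShape s and an angle variable updated elsewhere) that uses pushMatrix()/popMatrix() so the rest of the drawing is unaffected:
void draw() {
  smooth();
  fill(153);
  ellipse(200, 350/2, 100, 100);  // the circle from the question
  pushMatrix();                   // keep the transform local to the sensor
  translate(200, 350/2);          // move the origin to the circle's centre
  rotate(angle);                  // spin the coordinate system
  shape(s, -10, -10, 20, 20);     // draw the 20x20 sensor centred on the new origin
  popMatrix();                    // restore the coordinate system
  ellipse(200, 350/2, 2, 2);
}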