I have this problem on a practice midterm that I don't understand.
void main(void) {
    int i;
    for (i = 0; i < gl_VerticesIn; i++) {
        gl_Position = gl_PositionIn[i];
        EmitVertex();
    }
    EndPrimitive();
    for (i = 0; i < gl_VerticesIn; i++) {
        gl_Position = gl_PositionIn[i];
        gl_Position.xy = gl_Position.yx;
        EmitVertex();
    }
    EndPrimitive();
}
I have been reading documentation, and I think this is part of a geometry shader, and that it is swapping the x and y coordinates of each point, but I don't have any way to verify this. I tried it in a program and it made slight differences to the coloring of the scene, but it didn't seem to change the geometry at all, so if someone could help explain this, that would be awesome. Thanks!
This is indeed part of a geometry shader.
The first part of the shader (everything up to the first EndPrimitive()) is the simplest possible pass-through geometry shader: it emits the input primitive completely unchanged.
The second part is almost the same, except for the .xy = .yx swizzle. It emits the geometry a second time with the x and y coordinates swapped, which mirrors that copy across the line connecting the bottom-left and top-right corners of the screen. For example, a vertex that would land at (0.8, 0.2) ends up at (0.2, 0.8).
So the geometry is duplicated, and the copy is mirrored across the diagonal of the screen.
I want to have a fullscreen mode that keeps the aspect ratio by adding black bars on either side. I tried just creating a display mode, but I can't make it fullscreen unless it's a pre-approved resolution, and when I use a display bigger than the native resolution the pixels get messed up and lines appear between all of the tiles in the game for some reason.
I think I need to use FBOs to render the scene to a texture instead of the window, then use a fullscreen-approved resolution and render the texture, properly stretched, in the center of the screen. But I just don't understand how to render to a texture in order to do that, or how to stretch an image. Could someone please help me?
EDIT
I got fullscreen working, but it makes everything look broken. There are random lines on the edges of anything that's drawn to the window. There are no glitchy lines when it's in native resolution, though. Here's my code:
Display.setTitle("Mega Man");
try {
    Display.setDisplayMode(Display.getDesktopDisplayMode());
    Display.create();
} catch (LWJGLException e) {
    e.printStackTrace();
}
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, WIDTH, HEIGHT, 0, 1, -1);
glMatrixMode(GL_MODELVIEW);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
try { Display.setFullscreen(true); } catch (Exception e) {}
int sh = Display.getHeight();
int sw = WIDTH * sh / HEIGHT;
GL11.glViewport(Display.getWidth() / 2 - sw / 2, 0, sw, sh);
Screenshot of the glitchy fullscreen here: http://sta.sh/021fohgnmxwa
EDIT
Here is the texture rendering code that I use to draw everything:
public static void DrawQuadTex(Texture tex, int x, int y, float width, float height,
        float texWidth, float texHeight, float subx, float suby, float subd, String mirror) {
    if (tex == null) { return; }
    if (mirror == null) { mirror = ""; }
    // subx, suby, and subd are used to grab sprites from a sprite sheet; subd is both the
    // width and the height of the sprite, since only square, power-of-two images display properly.
    int xinner = 0;
    int xouter = (int) width;
    int yinner = 0;
    int youter = (int) height;
    if (mirror.indexOf("h") > -1) {
        xinner = xouter;
        xouter = 0;
    }
    if (mirror.indexOf("v") > -1) {
        yinner = youter;
        youter = 0;
    }
    tex.bind();
    glTranslatef(x, y, 0);
    glBegin(GL_QUADS);
    glTexCoord2f(subx / texWidth, suby / texHeight);
    glVertex2f(xinner, yinner);
    glTexCoord2f((subx + subd) / texWidth, suby / texHeight);
    glVertex2f(xouter, yinner);
    glTexCoord2f((subx + subd) / texWidth, (suby + subd) / texHeight);
    glVertex2f(xouter, youter);
    glTexCoord2f(subx / texWidth, (suby + subd) / texHeight);
    glVertex2f(xinner, youter);
    glEnd();
    glLoadIdentity();
}
Just to keep things clean, I'll give you a real answer and not just a comment.
The aspect ratio problem can be solved with the help of glViewport. This call lets you decide which area of the surface is rendered to; the default viewport covers the whole surface.
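For example, here is a minimal sketch of a centered, aspect-preserving viewport built from the LWJGL calls you already use (WIDTH and HEIGHT are your game resolution constants; vw and vh are just local names I made up):
int dw = Display.getWidth();
int dh = Display.getHeight();
// Try filling the full display width first, keeping the WIDTH:HEIGHT ratio.
int vw = dw;
int vh = dw * HEIGHT / WIDTH;
if (vh > dh) {          // too tall for the display: fill the full height instead
    vh = dh;
    vw = dh * WIDTH / HEIGHT;
}
// Center the viewport; the leftover area becomes the black bars.
GL11.glViewport((dw - vw) / 2, (dh - vh) / 2, vw, vh);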
Since the second problem, the corrupt rendering (also described here: https://stackoverflow.com/questions/28846531/sprite-game-in-full-screen-aliasing-issue), appeared after changing the viewport, I will give my thoughts on it in this answer as well.
Without knowing exactly how the rendering code for the tile background looks, I would guess that the problem is due to a difference in resolution between the glViewport and glOrtho calls.
Example: if the glOrtho resolution is half the viewport resolution, then each OpenGL unit is actually 2 pixels. If you then render a tile between x=0 and x=9 and the next one between x=10 and x=19, you will get an empty space between them.
To solve this you can either change the resolutions so that they are the same, or render the tiles so that they share edges: the first one from x=0 to x=10, the second from x=10 to x=20, and so on, as in the sketch below.
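A minimal sketch of that second option; drawTile, tileSize, rows, and cols are illustrative names, not from your code:
// Draw tiles edge to edge: each tile starts exactly where the previous one ended,
// so no empty column or row appears even when one OpenGL unit spans several pixels.
int tileSize = 16;
int rows = 14, cols = 25;   // whatever the map size is
for (int row = 0; row < rows; row++) {
    for (int col = 0; col < cols; col++) {
        // drawTile(x1, y1, x2, y2) is a hypothetical helper that fills that rectangle
        drawTile(col * tileSize, row * tileSize,
                 (col + 1) * tileSize, (row + 1) * tileSize);
    }
}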
Without seeing the tile rendering code I can't verify that this is the problem, though.
I am trying to implement box select in a 3D world. Basically: click, hold and drag the mouse, then release it to get a box, and select whatever is inside that box. To start, I'm trying to figure out how to get the coordinates of the clicks in 3D.
I have ray picking, and it is not giving me the right coordinate (it returns an origin and a direction). It keeps returning the same origin no matter what the screen X/Y is (although the direction is different).
I've also tried:
D3DXVECTOR3 ori = D3DXVECTOR3(sx, sy, 0.0f);
D3DXVECTOR3 out;
D3DXVec3Unproject(&out, &ori, &viewPort, &projectionMat, &viewMat, &worldMat);
And I get the same thing: the output coordinates are very close to each other no matter what the input coordinates are (and they are wrong). It's almost like it's returning the eye position instead of the actual world coordinate.
How do I turn 2D screen coordinates into 3D using DirectX 9.0c?
This is called picking in Direct3D. To select a model in 3D space, you mainly need three steps:
Generate the picking ray
Transform the picking ray and the model you want to pick into the same space
Do an intersection test between the picking ray and the model
Generate the picking ray
When we click the mouse on the screen (say at point s), a model is selected when its projection onto the projection window covers the area around s.
So, to generate the picking ray from the given screen coordinates (x, y), we first need to transform (x, y) back onto the projection window; this is the inverse of the viewport transformation. The point on the projection window was also scaled by the projection matrix, so we divide by the projection's scale factors, proj(0, 0) and proj(1, 1).
In view space the camera sits at the origin, so the picking ray starts at the origin, and the projection window is the plane z = 1. This is what the code below does.
Ray CalcPickingRay(LPDIRECT3DDEVICE9 Device, int screen_x, int screen_y)
{
    float px = 0.0f;
    float py = 0.0f;

    // Get the viewport
    D3DVIEWPORT9 vp;
    Device->GetViewport(&vp);

    // Get the projection matrix
    D3DXMATRIX proj;
    Device->GetTransform(D3DTS_PROJECTION, &proj);

    // Inverse viewport transform, then undo the projection's scale factors
    px = ((( 2.0f * screen_x) / vp.Width)  - 1.0f) / proj(0, 0);
    py = (((-2.0f * screen_y) / vp.Height) + 1.0f) / proj(1, 1);

    Ray ray;
    ray._origin    = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
    ray._direction = D3DXVECTOR3(px, py, 1.0f);
    return ray;
}
Transform the picking ray and model into the same space.
We usually do this by transforming the picking ray into world space: take the inverse of your view matrix and apply that inverse to the picking ray.
// transform the ray from view space to world space
void TransformRay(Ray* ray, D3DXMATRIX* invertViewMatrix)
{
    // transform the ray's origin, w = 1.
    D3DXVec3TransformCoord(
        &ray->_origin,
        &ray->_origin,
        invertViewMatrix);

    // transform the ray's direction, w = 0.
    D3DXVec3TransformNormal(
        &ray->_direction,
        &ray->_direction,
        invertViewMatrix);

    // normalize the direction
    D3DXVec3Normalize(&ray->_direction, &ray->_direction);
}
Do the intersection test
If everything above went well, you can do the intersection test now. This is a ray-box intersection, so you can use the function D3DXBoxBoundProbe. To see whether the picking really works, you can change the visual appearance of your box, for example by setting the fill mode to solid or wireframe when D3DXBoxBoundProbe returns TRUE.
You can perform the picking in response to WM_LBUTTONDOWN.
case WM_LBUTTONDOWN:
{
    // Get the screen point
    int iMouseX = (short)LOWORD(lParam);
    int iMouseY = (short)HIWORD(lParam);

    // Calculate the picking ray
    Ray ray = CalcPickingRay(g_pd3dDevice, iMouseX, iMouseY);

    // Transform the ray from view space to world space:
    // get the view matrix
    D3DXMATRIX view;
    g_pd3dDevice->GetTransform(D3DTS_VIEW, &view);

    // invert it
    D3DXMATRIX viewInverse;
    D3DXMatrixInverse(&viewInverse, 0, &view);

    // apply it to the ray
    TransformRay(&ray, &viewInverse);

    // Collision detection: ray vs. axis-aligned box
    if (D3DXBoxBoundProbe(&box.minPoint, &box.maxPoint, &ray._origin, &ray._direction))
    {
        g_pd3dDevice->SetRenderState(D3DRS_FILLMODE, D3DFILL_SOLID);
    }
    break;
}
It turns out I was handling the problem the opposite way. Turning 2D into 3D didn't make sense for box select in the end. Converting the vertices from 3D to 2D and then checking whether they fall inside the 2D box was the right answer!
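For reference, here is a rough sketch of that 3D-to-2D test using D3DXVec3Project; the names (IsInsideSelection, worldPos, selection) are illustrative, not from my actual code:
// Project a world-space point to screen space and test it against the 2D selection box.
// 'selection' is the rectangle dragged out with the mouse, in screen pixels.
bool IsInsideSelection(LPDIRECT3DDEVICE9 device, const D3DXVECTOR3& worldPos, const RECT& selection)
{
    D3DVIEWPORT9 vp;
    device->GetViewport(&vp);

    D3DXMATRIX proj, view, world;
    device->GetTransform(D3DTS_PROJECTION, &proj);
    device->GetTransform(D3DTS_VIEW, &view);
    D3DXMatrixIdentity(&world);   // or use the model's world matrix

    // screenPos.x / .y come out in screen pixels, .z in the [0, 1] depth range.
    D3DXVECTOR3 screenPos;
    D3DXVec3Project(&screenPos, &worldPos, &vp, &proj, &view, &world);

    if (screenPos.z < 0.0f || screenPos.z > 1.0f)   // outside the depth range (e.g. behind the camera)
        return false;

    return screenPos.x >= selection.left && screenPos.x <= selection.right &&
           screenPos.y >= selection.top  && screenPos.y <= selection.bottom;
}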
I'm trying to build a simple pass-through geometry shader, but I can't make it work with
glDrawElements(GL_TRIANGLES, fooSize, GL_UNSIGNED_INT, NULL);
although it does work with
glDrawArrays(GL_TRIANGLES, 0, foo_INDEX);
The geometry shader is:
#version 400
#extension GL_EXT_geometry_shader4 : enable

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main() {
    for(int i = 0; i < gl_VerticesIn; i++) {
        gl_Position = gl_PositionIn[i];
        EmitVertex();
    }
    EndPrimitive();
}
So, does anyone have any idea why this geometry shader works with glDrawArrays but doesn't work with glDrawElements? Please.
You have written a very confused and confusing geometry shader.
Your first line indicates that you're using GLSL version 4.00. This language has geometry shaders as a core feature; no extensions required. Your second line then enables the EXT_geometry_shader4 extension.
The problem comes from the fact that core geometry shaders and EXT_geometry_shader4 are very different things.
For example, there is no gl_VerticesIn in core geometry shaders. The number of vertices you get is defined by your layout() in; statement. You used triangles, so the answer is 3. You can also find it by using gl_in.length(), where gl_in is an interface block array containing the default vertex input values. Similarly, gl_PositionIn doesn't exist in core geometry shaders; you use gl_in[index].gl_Position.
So your geometry shader's code seems to be designed for EXT_geometry_shader4... except for these lines:
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
These are pure core geometry shaders; in EXT_geometry_shader4, these parameters are specified at link time in OpenGL code, not in the shader itself.
So you've thoroughly confused the compiler by having half of it use the EXT version and half use the core version.
A proper core version of what you're trying to do would be this:
#version 400

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

void main() {
    for(int i = 0; i < 3; i++) { // You used triangles, so it's always 3
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
I have no idea if this will fix your problem, but these are problems with your shader.
I am just starting with processing.js, and I have been having trouble because every time I rotate an image it also changes its location on the screen. What Processing seems to do is rotate my image around the point where I told it to place it, instead of rotating it first around its own axis and then placing it where I told it to (which I figured cannot be done in that way/order).
This is the code:
PShape s;
float angle = 0.1; // radians

void setup(){
  size(400, 350);
  frameRate(30); // 30 frames per second
  s = loadShape("sensor.svg");
  s.rotate(angle);
  // I change this angle manually or with my clickMouse function, which isn't shown.
}

void draw(){ // shape(shape, x, y, width, height);
  smooth();
  fill(153);
  ellipse(200, 350/2, 100, 100);
  shape(s, 200, 350/2, 20, 20);
  ellipse(200, 350/2, 2, 2);
}
What I am basically trying to do is make this "sensor" image rotate in the correct orientation around the circle (ellipse) that I drew. That's the idea; right now it's doing neither. Maybe with a click function that rotates the SVG image around the circle. Instead it rotates around the coordinates passed to shape(image, x_coord, y_coord, width, height). If anyone has any suggestions, I would be so happy! Hope my question makes sense; if it doesn't, I would be more than happy to clarify any part of it.
Thanks! :)
It's much easier not to rotate your shape, but to rotate the coordinate system.
void draw() {
  translate(s.width/2, s.height/2);
  rotate(PI/4);
  shape(s);
  resetMatrix();
  // keep on drawing here
}
This first moves the coordinate system so that (0,0) is on top of the center of your shape, then rotates the entire coordinate system by 45 degrees, then draws your shape. Then you reset the coordinate system and keep drawing as usual.
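Applied to your sketch, a minimal variant of draw() might look like this (assuming the sensor should sit on the edge of the 100-pixel circle, i.e. at radius 50, and using pushMatrix()/popMatrix() so the rest of the drawing is unaffected):
void draw() {
  smooth();
  fill(153);
  ellipse(200, 350/2, 100, 100);

  pushMatrix();              // save the current coordinate system
  translate(200, 350/2);     // put the origin at the circle's center
  rotate(angle);             // rotate the whole coordinate system
  translate(50, 0);          // move out to the circle's edge (assumed radius 50)
  shapeMode(CENTER);         // draw the shape centered on the new origin
  shape(s, 0, 0, 20, 20);    // the sensor orbits the circle and stays oriented with it
  popMatrix();               // restore the coordinate system

  ellipse(200, 350/2, 2, 2);
}
With this approach you would drop the s.rotate(angle) call in setup(), since the rotation now happens in draw().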
I made some simple shading in GLSL of a checkerboard:
f(P) = ( floor(P.x) + floor(P.y) + floor(P.z) ) mod 2
It seems to work well, except that I can see the interior of the objects, but I want to see only the front faces.
Any ideas on how to fix this? Thanks!
Teapot (glutSolidTeapot()):
Cube (glutSolidCube):
The vertex shader file is:
varying float x, y, z;

void main(){
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    x = gl_Position.x;
    y = gl_Position.y;
    z = gl_Position.z;
}
And the fragment shader file is:
varying float x, y, z;

void main(){
    float _x = x;
    float _y = y;
    float _z = z;
    _x = floor(_x);
    _y = floor(_y);
    _z = floor(_z);
    float sum = (_x + _y + _z);
    sum = mod(sum, 2.0);
    gl_FragColor = vec4(sum, sum, sum, 1.0);
}
The shaders are not the problem - the face culling is.
You should either disable face culling (which is not recommended, for performance reasons):
glDisable(GL_CULL_FACE);
or use glCullFace and glFrontFace to set the culling mode, for example:
glEnable(GL_CULL_FACE); // enables face culling
glCullFace(GL_BACK); // tells OpenGL to cull back faces (the sane default setting)
glFrontFace(GL_CW); // tells OpenGL which faces are considered 'front' (use GL_CW or GL_CCW)
The argument to glFrontFace depends on application conventions, i.e. the matrix handedness.