Frustum position, origin and culling

I am writing a simple program that uses perspective projection and I have a bunch of objects drawn in my scene. For perspective projection I am using the following code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluLookAt(eyePosX, eyePosY, eyePosZ, centerPosX, centerPosY, centerPosZ, 0.0, 1.0, 0.0);
glFrustum(frustumLeft,frustumRight,frustumBottom,frustumTop,frustumNear,frustumFar);
When I have an object drawn with a certain offset on the X axis, so that it does not fit inside the frustum, the object is still drawn, but it appears elongated and is not culled by the frustum.
What are the coordinates of the frustum's 8 corner points in XYZ space, with respect to eyePosX/Y/Z and frustumLeft/Right/Bottom/Top/Near/Far?
How can I tell OpenGL to perform the culling of the objects that are not inside the frustum?

"For perspective projection I am using the following code:"
There are two possibilities. The first is that you really didn't mean to do what this code does. The second is that you did mean it, but don't fully understand what you've done.
Let's cover the first one now. The look-at matrix should never go into the GL_PROJECTION matrix. Also, in call order the look-at matrix should always come after the projection matrix (later matrix calls are multiplied on the right, so they are applied to vertices before the projection). These should always hold unless you're doing something special.
Which leads to the second. If you really intend to rotate and offset the post-projective space, then you cannot expect geometry to be culled against the frustum. Why?
Because OpenGL doesn't do frustum culling. It culls against whatever post-T&L vertex positions you provide. If you rotate the view outside of the frustum, then that's what gets drawn. OpenGL doesn't draw what isn't visible; if you change the view post-projection so that things that wouldn't have been visible are visible now, then you've changed what is and is not visible.
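A minimal sketch of the conventional split the answer is describing, reusing the question's variable names: the projection matrix holds only glFrustum, while gluLookAt (and any per-object model transforms) go into GL_MODELVIEW. With this arrangement, geometry outside the frustum is clipped as you would expect.

/* Projection only: defines the view volume that geometry is clipped against. */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(frustumLeft, frustumRight, frustumBottom, frustumTop, frustumNear, frustumFar);

/* View (and later model) transforms: applied to vertices before the projection. */
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(eyePosX, eyePosY, eyePosZ,
          centerPosX, centerPosY, centerPosZ,
          0.0, 1.0, 0.0);
/* ... per-object glTranslate/glRotate/glScale and draw calls follow here ... */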

Related

What is the Godot way to define (and refer) custom rectangle areas?

I'm coming from the Phaser + Tiled world where, if I need some rectangular area in the game world (like the player's area, a spawning area, and so on), I can just draw a rectangle in Tiled, then get its coordinates from Phaser.js and use them as I need. I seem to be stuck doing similar things in Godot.
For some tasks I can use an Area2D with a rectangle inside and collision events. But that is not always enough.
How can I just define a rectangle on screen and get its coordinates? For a Sprite object and for a Node2D I cannot get a bounding rectangle. I can use an Area2D + rectangle and refer to the rectangle's 'extent' property to get width/height, but that seems like overhead to me, since Area2D is meant for collision detection.
What can I do in general? And what could be done in Godot for the following scenarios?
Camera limits. I have a Sprite with a background gradient which I scale to the needed world size, and I'd like to set the camera limits to that Sprite's width/height.
Hero movement limits. Half of the world is not accessible to the player, so any move to x > MIDDLE shall be denied. I could just set up a constant MIDDLE in the code, but I'd like to draw the allowed area as a rectangle and refer to its coordinates.
Spawn area. Mark some place in the world (it could be just a point, not a rectangle) where new objects shall be created by code.
You could use the Rect2 class in a script to define custom rectangles. You can then use it to check whether it contains a Vector2 or another Rect2.
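In Godot itself you would build that Rect2 in a GDScript (or C#) script and query it with its containment checks. Since the code samples on this page are C-style, here is the underlying idea sketched in C++ instead; the struct and function names are mine and are not Godot API, they just mirror what a Rect2-style region gives you for movement limits and spawn points.

#include <cstdlib>

struct Vec2 { float x, y; };

struct RectArea {                       // stand-in for a Rect2-style region (position + size)
    Vec2 pos;                           // top-left corner
    Vec2 size;                          // width / height

    bool containsPoint(Vec2 p) const {  // e.g. "is the hero still inside the allowed half?"
        return p.x >= pos.x && p.x <= pos.x + size.x &&
               p.y >= pos.y && p.y <= pos.y + size.y;
    }

    Vec2 randomPointInside() const {    // e.g. pick a spawn position inside the area
        float rx = static_cast<float>(rand()) / RAND_MAX;
        float ry = static_cast<float>(rand()) / RAND_MAX;
        return { pos.x + rx * size.x, pos.y + ry * size.y };
    }
};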

Strategy for rotating objects with normal maps without screwing up lighting

I'm building a simulation with directional lighting where multiple sprites will be rendered in the scene. The sprites will be simple geometry layered with a normal map.
I'm using this example as a starting point for my code.
The problem is that if an object is rotated, the lighting within the scene no longer reflects off the normals on the rotated sprite correctly. For instance, lighting up the right-hand side of an object that is rotated by 180° will brighten the left-hand side of the object. I tested this by rotating block1 in the above example.
(The original post shows two screenshots: one with no rotation and one with a 180° rotation.)
I understand why this is happening. But I'm not sure how to approach resolving it. Is it possible to allow rotated normal maps to correctly reflect light? How?

How can I create an image morpher inside a graphics shader?

Image morphing is mostly a graphic-design effect that adapts one picture into another using control points chosen by the artist, who matches the eyes and other key zones of one portrait with the corresponding zones of the other; an algorithm then warps the whole picture so it transitions from one image to the other.
I would like to do something a bit similar with a shader that can load any two images, automatically pick zones of the most similar colors in roughly the same areas of each picture, and morph the two pictures in real time. Perhaps a shader-based version would be a lot faster at the task? Except I don't really understand how it would work at all.
Please don't worry about giving a complete description of the process; even some vague background concepts and keywords for how to attempt a 2D texture morph in a graphics shader would be great.
There are more morphing methods out there; the one you are describing is based on geometry.
1. morph by interpolation
You have 2 data sets with similar properties (for example, 2 images that are both 2D) and interpolate between them by some parameter. In the case of 2D images you can use linear interpolation if both images have the same resolution, or trilinear interpolation (bilinear in space plus linear in the morph parameter) if not.
So you just pick corresponding pixels from each image and interpolate the actual color for some parameter t = <0,1>. For the same resolution, something like this:
// cross-fade: blend the two source pixels by parameter t in <0,1>
for (y = 0; y < img1.height; y++)
    for (x = 0; x < img1.width; x++)
        img.pixel[x][y] = (1.0 - t) * img1.pixel[x][y] + t * img2.pixel[x][y];
where img1, img2 are the input images and img is the output. Beware that t is a float, so you need to cast properly to avoid integer rounding problems, or use an integer scale t = <0,256> and correct the result by shifting right 8 bits (or dividing by 256). For different sizes you need to bilinearly interpolate the corresponding (x,y) position in both source images first.
All this can be done very easily in a fragment shader. Just bind img1, img2 to texture units 0, 1, pick the texel from each, interpolate, and output the final color. The bilinear coordinate interpolation is done automatically by GLSL because texture coordinates are normalized to <0,1> no matter the resolution. In the vertex shader you just pass the texture and vertex coordinates through. And on the main-program side you just draw a single quad covering the final image output...
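A minimal sketch of that cross-fade fragment shader, written here as a desktop GLSL 330 source string so it fits the C-style snippets on this page; the uniform and varying names (img1, img2, t, uv) are my assumptions, not something from the answer.

// Cross-fade two textures by uniform t; bilinear filtering hides resolution differences.
static const char* kMorphFragmentShader = R"(#version 330 core
in vec2 uv;                // texture coordinate passed through from the vertex shader
out vec4 fragColor;
uniform sampler2D img1;    // bound to texture unit 0
uniform sampler2D img2;    // bound to texture unit 1
uniform float t;           // morph parameter in [0,1]
void main()
{
    vec4 a = texture(img1, uv);
    vec4 b = texture(img2, uv);
    fragColor = mix(a, b, t);   // (1-t)*a + t*b
}
)";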
2. morph by geometry
You have 2 polygons (or matching point sets) and interpolate their positions between the two. For example, something like this: Morph a cube to coil. This is suited for vector graphics. You just need point correspondence, and then the interpolation is similar to #1.
// linear interpolation of matching points from the two input sets
for (i = 0; i < points; i++)
{
    p(i).x = (1.0 - t) * p1(i).x + t * p2(i).x;
    p(i).y = (1.0 - t) * p1(i).y + t * p2(i).y;
}
where p1(i), p2(i) are the i-th points from each input geometry set and p(i) is the corresponding point in the final result...
To enhance the visual appearance, the linear interpolation can be exchanged for a specific trajectory (like Bézier curves) so the morph looks nicer. For example, see
Path generation for non-intersecting disc movement on a plane
To accomplish this on the GPU you could use a geometry shader (or maybe even a tessellation shader): pass both polygons as a single primitive, and let the geometry shader interpolate the actual polygon and emit it for rasterization.
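A simpler variant (my suggestion, not what the answer describes) is the classic morph-target / blend-shape approach: pass the matching vertex of the second polygon as an extra attribute and mix the two positions in the vertex shader. A sketch, again as a GLSL 330 string with names of my own choosing:

// Vertex shader that blends between two matching vertex positions by uniform t.
static const char* kMorphVertexShader = R"(#version 330 core
layout(location = 0) in vec2 posA;   // vertex of polygon 1
layout(location = 1) in vec2 posB;   // matching vertex of polygon 2
uniform float t;                     // morph parameter in [0,1]
uniform mat4 mvp;
void main()
{
    vec2 p = mix(posA, posB, t);     // (1-t)*posA + t*posB
    gl_Position = mvp * vec4(p, 0.0, 1.0);
}
)";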
3. morph by particle swarms
In this case you find corresponding pixels in the source images by matching colors. Then you handle each pixel as a particle and create its path from its position in img1 to its position in img2, driven by the parameter t. It is the same as #2, but instead of polygon areas you have just points. Each particle has a color and a position, and you interpolate both, because there is only a very slim chance that you will get exact color matches with matching counts (the histograms would have to be the same), which is improbable.
4. hybrid morphing
It is any combination of #1, #2 and #3.
I am sure there are more methods for morphing; these are just the ones I know of. Also, morphing can be done not only in the spatial domain...

Inner and Outer Glow Implementation using OpenGL ES 3.0

I want to implement an inner and outer glow for a rendered 3D object. The glow is to be applied only to the 3D models that have glow enabled, not to the entire scene.
There is a post on Stack Overflow that talks about implementing it by modifying the mesh, which in my opinion is difficult and computationally intensive.
I was wondering if it can be achieved through multi-pass rendering? Something like a bloom effect that's applied to specific objects in the scene and only to their inner and outer boundaries.
I assume you want the glow only near the object's contours?
I did an outer glow using a multi-pass approach (after all "regular" drawing):
1. Draw the object to a texture (cleared to fully transparent) using a constant-output shader that writes the glow color, marking the stencil buffer in the process. Use an EQUAL depth test if you only want a glow around the part of the object that is actually visible on screen, obviously using the depth buffer from the normal scene drawing.
2. Separable Gaussian blur on this texture (two 1D passes).
3. Blend the result into the output buffer for all pixels that do not have the stencil buffer marked in step 1.
For an inner + outer glow, you could do an edge detection on the result of step 1, keeping only marked pixels near the boundary, followed by the blur and an unmasked blend.
You could also try to combine the edge detection and blurring by using a filter that scales its output based on the variance of all samples in its radius. It would be non-separable though...
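For reference, a sketch of one direction of the separable blur in step 2, written for GLSL ES 3.00 since the question targets OpenGL ES 3.0. The uniform names and the 5-tap binomial kernel are my assumptions; run the pass twice, once with a horizontal texel step and once with a vertical one.

// One 1D pass of the separable Gaussian blur over the glow texture from step 1.
static const char* kBlurFragmentShaderES3 = R"(#version 300 es
precision mediump float;
in vec2 vUv;
out vec4 oColor;
uniform sampler2D uGlowTex;   // texture rendered in step 1
uniform vec2 uTexelStep;      // (1/width, 0) for the horizontal pass, (0, 1/height) for the vertical
void main()
{
    float w[3] = float[3](0.375, 0.25, 0.0625);   // binomial 1-4-6-4-1 kernel / 16
    vec4 sum = texture(uGlowTex, vUv) * w[0];
    for (int i = 1; i < 3; ++i)
    {
        sum += texture(uGlowTex, vUv + float(i) * uTexelStep) * w[i];
        sum += texture(uGlowTex, vUv - float(i) * uTexelStep) * w[i];
    }
    oColor = sum;
}
)";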

At which stage of the pipeline should I do culling and clipping, and how should I reconstruct triangles after clipping?

I'm trying to implement a graphics pipeline in software. I have some problems with clipping and culling now.
Basically, there are two main concerns:
When should back-face culling take place? In eye coordinates, clip coordinates or window coordinates? I initially did the culling in eye coordinates, thinking this would relieve the burden on the clipping stage, since many back-facing triangles would already have been discarded. But later I realized that this way each vertex needs two matrix multiplications, namely left-multiply by the model-view matrix --> cull --> left-multiply by the perspective matrix, which increases the overhead to some extent.
How do I do clipping and reconstruct the triangles? As far as I know, clipping happens in clip coordinates (after the perspective transformation), in other words in homogeneous coordinates, where each vertex is tested for rejection by comparing its x, y, z components against its w component. So far so good, right? But after that I need to reconstruct the triangles that have had one or two vertices discarded. I googled that the Liang-Barsky algorithm would be helpful in this case, but in clip coordinates which clipping planes should I use? Should I just record the clipped triangles and reconstruct them in NDC?
Any ideas would be helpful. Thanks.
(1)
Back-face culling can occur wherever you want.
On the 3dfx hardware, and probably the other cards of that era that did rasterisation only, it was implemented in window coordinates. As you say, that leaves you processing some vertices you never use, but you need to weigh that against your other costs.
You can also cull in world coordinates: you know the location of the camera, so you know a vector from the camera to the face (just use any of its vertices). Then you can test the dot product of that vector against the face normal.
When I was implementing a software rasteriser for a z80-based micro I went a step beyond that and transformed the camera into model space. So you get the inverse of the model matrix (which was cheap in this case because they were guaranteed to be orthonormal, so the transpose would do), apply that to the camera and then cull from there. It's still a vector difference and a dot product but if you're using the surface normals only for culling then it saves having to transform each and every one of them for the benefit of the camera. For that particular renderer I was then able to work forward from which faces are visible to determine which vertices are visible and transform only those to window coordinates.
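A minimal sketch of the world-space test described above; the types and winding convention are my own assumptions. A triangle is back-facing when the vector from the camera to any of its vertices points the same way as the face normal.

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// v0, v1, v2 are assumed to be in counter-clockwise order when seen from the front.
bool isBackFacing(Vec3 v0, Vec3 v1, Vec3 v2, Vec3 cameraPos)
{
    Vec3 normal = cross(sub(v1, v0), sub(v2, v0)); // face normal (unnormalised; only the sign matters)
    Vec3 toFace = sub(v0, cameraPos);              // camera -> face vector
    return dot(normal, toFace) >= 0.0f;            // facing away (or edge-on): cull it
}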
(2)
A variant on Sutherland-Hodgman polygon clipping is the thing I remember seeing most often. You do a forward scan around the outside of the polygon, checking each edge in turn against the plane and adjusting appropriately.
So e.g. you start with the convex polygon between points (V1, V2, V3). For each clipping plane in turn you'd do something like:
for (Vn in input vertices)
{
    if (Vn is on the good side of the plane)
        add Vn to output vertices
    if (edge from Vn to Vn+1 intersects plane)   // or from Vn back to the first vertex for the last edge
    {
        find point of intersection, I
        add I to output vertices
    }
}
And repeat for each plane. If you're worried about repeated costs then you either need to adopt a structure with an extra level of indirection between faces and edges or just keep a cache. You'd probably do something like dash round the vertices once marking them as in or out, then cache the point of intersection per edge, looked up via the key (v1, v2). If you've set yourself up with the extra level of indirection then store the result in the edge object.
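A concrete sketch of one such clip pass in homogeneous clip space, under my own naming; the six frustum planes in clip coordinates are x <= w, x >= -w, y <= w, y >= -w, z <= w, z >= -w, and each one only changes the signed "inside" distance function. After all six passes the surviving convex polygon can be triangulated as a fan (V0, Vi, Vi+1), which is the "reconstruct triangles" step.

#include <vector>

struct Vec4 { float x, y, z, w; };

static Vec4 lerp(const Vec4& a, const Vec4& b, float t)
{
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z), a.w + t * (b.w - a.w) };
}

// Clip a convex polygon against one plane, given a signed distance that is >= 0 inside.
std::vector<Vec4> clipAgainstPlane(const std::vector<Vec4>& in, float (*dist)(const Vec4&))
{
    std::vector<Vec4> out;
    for (size_t i = 0; i < in.size(); ++i)
    {
        const Vec4& a = in[i];
        const Vec4& b = in[(i + 1) % in.size()];          // wrap around for the last edge
        float da = dist(a), db = dist(b);
        if (da >= 0.0f) out.push_back(a);                 // keep vertices on the good side
        if ((da >= 0.0f) != (db >= 0.0f))                 // edge crosses the plane
            out.push_back(lerp(a, b, da / (da - db)));    // add the intersection point
    }
    return out;
}

// Example distance function for the right plane, x <= w:
float insideRight(const Vec4& v) { return v.w - v.x; }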
