How to do occlusion culling in world space

How to do occlusion culling in world space?
Yesterday I had an interview with a game development company.
The interviewer asked me, "What is the advantage of doing occlusion culling in screen space?"
Maybe he meant as opposed to occlusion culling in world space.
But I didn't know occlusion culling could be done in world space.
How is that done?

Related

How to find a viewpoint in a polygon (possibly a concave polygon) from which the viewpoint can see the most edges of the polygon?

As the title says: in a room (a convex or non-convex polygon), how do I find a point such that, when I stand at that point, I can see all the walls (edges) of the room?
This is not a trivial problem; for most polygons, there is no point from which all edges (or all corners) can be seen.
There is ample literature on the topic; you can search for the art gallery problem or the museum problem.
Art Gallery Problem on Wikipedia
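If you only need some point that sees every wall (rather than the point that sees the most edges), the problem reduces to computing the polygon's kernel: the intersection of the inner half-planes of all edges, which is non-empty exactly when the polygon is star-shaped. Below is a minimal C++ sketch of that idea, assuming a simple counter-clockwise polygon; the L-shaped room and all names are purely illustrative.

    #include <cstdio>
    #include <vector>

    struct Pt { double x, y; };

    // > 0 when c lies to the left of the directed edge a -> b.
    static double side(const Pt& a, const Pt& b, const Pt& c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // One Sutherland-Hodgman step: clip a convex region to the
    // half-plane on the left of edge a -> b.
    static std::vector<Pt> clip(const std::vector<Pt>& poly, Pt a, Pt b) {
        std::vector<Pt> out;
        for (size_t i = 0; i < poly.size(); ++i) {
            Pt p = poly[i], q = poly[(i + 1) % poly.size()];
            double dp = side(a, b, p), dq = side(a, b, q);
            if (dp >= 0) out.push_back(p);
            if ((dp < 0) != (dq < 0)) {            // edge crosses the line
                double t = dp / (dp - dq);
                out.push_back({p.x + t * (q.x - p.x), p.y + t * (q.y - p.y)});
            }
        }
        return out;
    }

    // Kernel of a simple counter-clockwise polygon: intersect the inner
    // half-planes of all edges, starting from a huge bounding square.
    static std::vector<Pt> kernel(const std::vector<Pt>& room) {
        std::vector<Pt> k = {{-1e6,-1e6}, {1e6,-1e6}, {1e6,1e6}, {-1e6,1e6}};
        for (size_t i = 0; i < room.size() && !k.empty(); ++i)
            k = clip(k, room[i], room[(i + 1) % room.size()]);
        return k;
    }

    int main() {
        // A concave, L-shaped room, listed counter-clockwise.
        std::vector<Pt> room = {{0,0}, {4,0}, {4,2}, {2,2}, {2,4}, {0,4}};
        std::vector<Pt> k = kernel(room);
        if (k.empty()) { printf("no single point sees every wall\n"); return 0; }
        Pt c = {0, 0};                             // centroid of the kernel
        for (const Pt& p : k) { c.x += p.x / k.size(); c.y += p.y / k.size(); }
        printf("stand at (%g, %g) to see every wall\n", c.x, c.y);
    }

If the kernel comes back empty, no single viewpoint sees every wall, and you are back in art-gallery territory (placing multiple guards).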

Ray Marching advantages over Rasterization

I'd like to know: what is the reason to use ray marching/ray casting over rasterization? Is it better only in specific cases?
Thanks for your answers.
Ray casting and rasterization are two totally different methods of rendering. Rasterization is designed to be very fast, and lighting is typically computed on a per-fragment basis in a fragment shader (or pixel shader). Ray casting (a type of ray tracing) actually simulates light rays in a sense, creating a more accurate render, with a much larger computation time.
The main advantage of ray tracing (in general) is the quality of the image (compare a ray-traced image vs. a rasterized one). Ray tracing actually simulates a ray of light from the camera through each and every pixel of your screen, while also taking into account natural phenomena like reflection and refraction when computing the final color for the pixel, whereas in rasterization the 3D objects are simply squashed onto the pixels of the screen.
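As a rough illustration of the per-pixel loop described above, here is a miniature ray caster in C++: one ray per pixel, a single hard-coded sphere, and a Lambert term for shading. The scene, resolution, and ASCII output are assumptions made for the sketch; a rasterizer would instead project the object's triangles onto the screen and shade the covered pixels.

    #include <cmath>
    #include <cstdio>

    struct V { double x, y, z; };
    static V      sub(V a, V b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static double dot(V a, V b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static V      norm(V a) { double l = std::sqrt(dot(a, a)); return {a.x/l, a.y/l, a.z/l}; }

    // Smallest positive t with |o + t*d - c| = r, or -1 if the ray misses.
    static double hitSphere(V o, V d, V c, double r) {
        V oc = sub(o, c);
        double b = dot(oc, d), disc = b*b - (dot(oc, oc) - r*r);
        if (disc < 0) return -1;
        double t = -b - std::sqrt(disc);
        return t > 0 ? t : -1;
    }

    int main() {
        const int W = 64, H = 32;
        V cam = {0, 0, 0}, center = {0, 0, -3}, light = norm({1, 1, 1});
        for (int y = 0; y < H; ++y) {
            for (int x = 0; x < W; ++x) {
                // One ray from the camera through this pixel.
                V d = norm({(x - W/2) / double(H), -(y - H/2) / double(H), -1});
                double t = hitSphere(cam, d, center, 1.0);
                if (t < 0) { putchar(' '); continue; }   // ray missed: background
                V p = {cam.x + t*d.x, cam.y + t*d.y, cam.z + t*d.z};
                V n = norm(sub(p, center));
                double l = dot(n, light);                // Lambert shading term
                putchar(".:-=+*#%@"[l < 0 ? 0 : int(l * 8.99)]);
            }
            putchar('\n');
        }
    }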

How does pixel shading work for ambient occlusion volumes?

I'm trying to understand the ambient occlusion technique described here, but I have trouble comprehending what exactly the pixel shader is doing.
Is the pixel shader invoked on points that belong to the surfaces of occlusion volumes? Can anyone explain on a simple scene (like a cube corner seen from inside) how pixels get their AO values?
(Crossposted from game stackexchange)
Basically, the pixel shader is responsible for computing the light reflected by an object, depending on the angle at which its polygons face the virtual light source. In this case, AO is made possible with the help of the pixel shader estimating the light and its amount at every single pixel. Note: pixel shading runs in real time, so if the object or the viewpoint moves, the lighting moves too, producing realistic surfaces.
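The paper in question derives an analytic occlusion value per occlusion volume, which is beyond a short sketch; but as a hedged illustration of the general per-pixel pattern this answer describes, here is a toy pixel-shader-style function written in C++. The ao input is hypothetical: it stands for whatever occlusion value the technique computed for this pixel, and it attenuates only the ambient term.

    #include <algorithm>
    #include <cstdio>
    #include <initializer_list>

    struct Vec3 { float x, y, z; };
    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  scale(Vec3 v, float s) { return {v.x*s, v.y*s, v.z*s}; }
    static Vec3  add(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }

    // Conceptually runs once per covered pixel. 'ao' stands for whatever
    // occlusion value the AO technique produced for this pixel (a
    // hypothetical input here); it darkens only the ambient light.
    static Vec3 shadePixel(Vec3 n, Vec3 toLight, Vec3 albedo, float ao) {
        float ndotl = std::max(dot(n, toLight), 0.0f); // angle to the light
        Vec3 ambient = scale(albedo, 0.2f * ao);       // occluded ambient term
        Vec3 direct  = scale(albedo, ndotl);           // direct diffuse term
        return add(ambient, direct);
    }

    int main() {
        // Same surface point, fully open (ao = 1) vs. half occluded (ao = 0.5).
        for (float ao : {1.0f, 0.5f}) {
            Vec3 c = shadePixel({0, 1, 0}, {0, 1, 0}, {1.0f, 0.2f, 0.2f}, ao);
            printf("ao = %.1f -> (%.2f, %.2f, %.2f)\n", ao, c.x, c.y, c.z);
        }
    }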

Why is collision difficult to effectively compute in graphics engines?

From the oldest games to the most modern, it seems like you can still see through walls, or more often the ground, from some camera positions.
Why is collision difficult to effectively compute in graphics engines?
Is it accumulating rounding error or loss of precision leading to a mis-rendered view?
This is not actually collision in the explicit sense. The camera position is probably not actually "inside" the wall or the ground in those situations, but it is simply very close to it.
In computer 3D graphics the camera has a concept of a near plane and a far plane. Only geometry located between these two planes will be visible, while the rest will be clipped. If you are too close to something and align the camera correctly, then chances are that some parts of the geometry will be too close to the camera as defined by the near plane and as a result that geometry will not be rendered.
Now, the distance to this near plane can be set by the developers, and it can be set to be very short - short enough to ensure that situations like these cannot occur. However, the depth buffer or z buffer that is used to determine which objects are closest to the camera during rendering, and thus which objects to render and which not to render, is closely related to the near and far plane distances.
In graphics hardware the depth buffer is represented using a fixed number of bits per pixel, for example 32 bits. These 32 bits must be enough to accurately represent the entire span between the near plane and the far plane. The mapping is also not linear: there is more precision closer to the camera. As a result, choosing a very small near-plane distance will greatly reduce the overall precision of the depth buffer. This can cause annoying flickering throughout the entire scene wherever two objects are very close to each other.
You can read more about this issue in articles on depth buffer precision, as well as in section 12.040 of the OpenGL FAQ.
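To make the precision trade-off concrete, here is a small C++ sketch that quantizes the conventional (non-reversed) perspective depth mapping to 24 bits and measures how far apart two surfaces near the far plane must be before they receive different depth values. The far-plane distance, test distance, and bit count are assumptions chosen for illustration; shrinking the near plane from 1.0 to 0.01 makes the resolvable separation roughly a hundred times worse.

    #include <cstdio>
    #include <initializer_list>

    // Conventional (non-reversed) perspective depth in [0, 1] for a point
    // at view-space distance d, given near plane n and far plane f.
    static double depth01(double d, double n, double f) {
        return (1.0 / n - 1.0 / d) / (1.0 / n - 1.0 / f);
    }

    int main() {
        const double f = 1000.0;          // far plane (assumed for the demo)
        const double levels = 16777216.0; // a 24-bit integer depth buffer
        for (double n : {1.0, 0.01}) {
            // How far apart must two surfaces near the far plane be before
            // they quantize to different 24-bit depth values?
            double d = 900.0, step = 0.0;
            long base = long(depth01(d, n, f) * levels);
            while (long(depth01(d + step, n, f) * levels) == base)
                step += 0.001;
            printf("near = %-4g -> resolvable separation at d = 900: %.3f units\n",
                   n, step);
        }
    }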
It's not really about difficulty (though computing collision/clipping of non-convex objects certainly isn't easy); you only have roughly ~33 ms to compute the whole frame, so some compromises have to be made (the collision mesh is not the same as the mesh you actually see). If there is no time for a precise solution that satisfies all constraints (camera distance, objects that must remain visible, collision avoidance), you have to fall back to some "easy" solution, like letting the camera see through the wall.

Question on Specular reflection behaviour?

Why is specularly reflected light a bright color (usually white), while other parts of the object reflect the perceived color wavelength?
From a physical perspective, this is because:
specular reflection results from light bouncing off the surface of the material;
diffuse reflection results from light bouncing around inside the material.
Say you have a piece of red plastic with a smooth surface. The plastic is red because it contains a red dye or pigment. Incoming light that enters the plastic tends to be reflected if red, or absorbed if it is not; this red light bounces around inside the plastic and makes it back out in a more or less random direction (which is why this component is called "diffuse").
On the other hand, some of the incoming light never makes it into the plastic to begin with: it bounces off the surface, instead. Because the surface of the plastic is smooth, its direction is not randomized: it reflects off in a direction based on the mirror reflection angle (which is why it is called "specular"). Since it never hits any of the colorant in the plastic, its color is not changed by selective absorption like the diffuse component; this is why specular reflection is usually white.
I should add that the above is a highly simplified version of reality: there are plenty of cases that are not covered by these two possibilities. However, they are common enough and generally applicable enough for computer graphics work: the diffuse+specular model can give a good visual approximation of many surfaces, especially when combined with other cheap approximations like bump mapping, etc.
Edit: a reference in response to Ayappa's comment -- the mechanism that generally gives rise to specular highlights is called Fresnel reflection. It is a classical phenomenon, depending solely on the refractive index of the material.
If the surface of the material is optically smooth (e.g., a high-quality glass window), the Fresnel reflection will produce a true mirror-like image. If the material is only partly smooth (like semigloss paint) you will get a specular highlight, which may be narrow or wide based on how smooth it is at the microscopic level. If the material is completely rough (either at a microscopic level or at some larger scale which is smaller than your image resolution), then the Fresnel reflection becomes effectively diffuse, and cannot be readily distinguished from other forms of diffuse reflection.
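To put numbers on this: at normal incidence the Fresnel reflectance follows directly from the refractive indices, and Schlick's approximation gives a common estimate of the angular falloff. A small C++ sketch, with air-to-glass indices assumed:

    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    // Fresnel reflectance at normal incidence, from the refractive
    // indices of the two media (unpolarized light).
    static double f0(double n1, double n2) {
        double r = (n1 - n2) / (n1 + n2);
        return r * r;
    }

    // Schlick's approximation of reflectance at angle theta to the normal.
    static double schlick(double F0, double cosTheta) {
        return F0 + (1.0 - F0) * std::pow(1.0 - cosTheta, 5.0);
    }

    int main() {
        const double PI = 3.14159265358979323846;
        double F0 = f0(1.0, 1.5);   // air into glass/plastic, n ~ 1.5
        printf("F0 = %.3f\n", F0);  // ~0.04: about 4% reflects head-on
        for (double deg : {0.0, 45.0, 75.0, 89.0})
            printf("%5.1f deg: %.3f\n", deg,
                   schlick(F0, std::cos(deg * PI / 180.0)));
    }

Note how the reflectance climbs toward 1.0 at grazing angles, which is why even dull materials show strong reflections edge-on.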
It's a question of wavelength absorption vs. reflection.
First, specular reflections do not exist as a distinct phenomenon in the real world. Everything you see is mostly reflected light (the rest being emissive or other), including diffuse lighting. Realistically, there is no real difference between diffuse and specular lighting: it's all reflection. Also keep in mind that real-world lighting is not clamped to the 0-1 range the way pixels are.
Diffusion of light reflected off of a surface is caused by the microscopic roughness of the surface (microfacets). Imagine a surface is made up of millions of microscopic mirrors. If they are all aligned, you get a perfect polished mirror. If they are all randomly oriented, light is scattered in every direction and the resulting reflection is "blurred". Many formulas in computer graphics try to model this microscopic surface roughness, like Oren–Nayar, but usually the simple Lambert model is used because it is computationally cheap.
Colors are a result of wavelength absorption vs. reflection. When light energy hits a material, some of that energy is absorbed by the material. Not all wavelengths are absorbed at the same rate, however. If white light bounces off of a surface which absorbs red wavelengths, you will see a green-blue color. The more a surface absorbs light, the darker its color will appear, as less and less light energy is returned. Most of the absorbed light energy is converted to thermal energy, which is why black materials heat up in the sun faster than white materials.
Specular in computer graphics is meant to simulate a strong direct light source reflecting off of a surface, as it might in the real world. Realistically though, you would have to reflect the entire scene at high dynamic range in lighting and color depth; specular highlights would then simply be the result of light sources being much brighter than the rest of the reflected scene, returning much more light energy after one or more reflections than the rest of the light in the scene. That would be quite computationally painful, though! Not feasible for real-time graphics just yet. Lighting with HDR environment maps was an attempt to properly simulate this.
Additional references and explanations:
Specular Reflections:
Specular reflections only differ from diffuse reflections by the roughness of a reflective surface. There is no inherent difference between them, both terms refer to reflected light. Also note that diffusion in this context simply means the scattering of light, and diffuse reflection should not be confused with other forms of light diffusion such as subsurface diffusion (commonly called subsurface scattering or SSS). Specular and diffuse reflections could be replaced with terms like "sharp" reflections and "blurry" reflections of light.
Electromagnetic Energy Absorption by Atoms:
Atoms seek a balanced energy state, so if you add energy to an atom, it will seek to discharge it. When energy like light is passed to an atom, some of the energy is absorbed and excites the atom, causing a gain in thermal energy (heat); the rest is reflected or transmitted (passes "through"). Atoms absorb energy at different wavelengths at different rates, and the reflected light, with its intensity modified per wavelength, is what gives color. How much energy an atom can absorb depends on its current energy state and atomic structure.
So, in a very, very simple model, ignoring angle of incidence and other factors: say I shine RGB(1,1,1) on a surface which absorbs RGB(0.5, 0, 0.75) of that energy. Assuming no transmittance is occurring, the reflected light value is RGB(0.5, 1.0, 0.25).
Now say you shine a light of RGB(2,2,2) on the same surface. The surface's properties have not changed. The reflected light is RGB(1.5, 2.0, 1.25). If the sensor receiving this reflected light clamps at 1.0, then the perceived light is RGB(1,1,1), or white, even though the material is colored.
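Here is that worked example as a tiny C++ sketch, using the answer's deliberately simplified model in which the surface subtracts a fixed amount of energy per channel and the sensor clamps to [0, 1]:

    #include <algorithm>
    #include <cstdio>
    #include <initializer_list>

    struct RGB { double r, g, b; };

    // The answer's simple model: the surface removes a fixed amount of
    // energy per channel (no transmittance), and the sensor clamps to [0, 1].
    static RGB reflect(RGB light, RGB absorbed) {
        return {light.r - absorbed.r, light.g - absorbed.g, light.b - absorbed.b};
    }
    static RGB clampSensor(RGB c) {
        return {std::min(std::max(c.r, 0.0), 1.0),
                std::min(std::max(c.g, 0.0), 1.0),
                std::min(std::max(c.b, 0.0), 1.0)};
    }

    int main() {
        RGB absorbed = {0.5, 0.0, 0.75};
        for (RGB light : {RGB{1, 1, 1}, RGB{2, 2, 2}}) {
            RGB seen = clampSensor(reflect(light, absorbed));
            printf("light (%g,%g,%g) -> perceived (%g,%g,%g)\n",
                   light.r, light.g, light.b, seen.r, seen.g, seen.b);
        }
    }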
Some references:
Page at www.physicsclassroom.com
Page on Ask a Scientist
Wikipedia: Atoms
Wikipedia: Energy Levels
