I'm trying to produce a shader to replicate a white plastic object with a coloured light inside, either a shader that is translucent, so that if I put a light inside the object the light shows through, or a shader that fakes the effect of a light inside.
The effect I'm going for is kind of like a light going through a lampshade, similar to these pictures:
Ideally I would be able to control the strength and colour of the light, to get it to pulse and rotate through some nice bright fluoro colours.
Though I'm not sure where to start!
My question is: does anyone know which techniques I should be researching to produce such a shader, or have an example of the same or a similar shader I can use as a starting point? Or even, if you want, provide a shader that might do the job.
You might want to do some research on subsurface scattering to get an idea of how to recreate this kind of effect. Subsurface scattering is important for rendering realistic skin, but in that case you are generally dealing with a light in front of or behind a translucent object rather than inside it. The same basic principles apply, but some of the tricks and hacks used for real-time subsurface scattering approximations may not work for your case.
Nice photos. It looks like the type of translucent plastic you use can make a big difference. What I see is that the brightness of the plastic at each point is based on the angle between the ray from the light source to that point, and the surface normal at that point. (The viewer angle is irrelevant.)
When the vector from internal light source to surface point is nearly parallel to the surface normal vector, the surface point is bright; when they're nearly perpendicular to each other, the surface point is dark. So try using the dot product of those two vectors. Don't forget to normalize.
In other words, it's basically diffuse reflection, except that you're adding the effect of internal light sources (transmitted) to the effect of external light sources (reflected). See Lambertian reflectance as a starting point.
You may also want to add a little specular reflection on top of that.
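In C++-style code the idea might look like this: a minimal sketch of the transmitted-diffuse term only (specular left out; the vector helpers and the transmitted()/strength names are mine, not from any particular engine):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

    // Transmitted brightness at one surface point, for a light *inside* the object:
    // bright where the light-to-point direction lines up with the outward normal,
    // dark where they are perpendicular.
    float transmitted(Vec3 lightPos, Vec3 surfacePoint, Vec3 normal, float strength)
    {
        Vec3 toPoint = normalize(sub(surfacePoint, lightPos)); // ray from internal light
        float t = dot(toPoint, normalize(normal));             // ~1 parallel, ~0 perpendicular
        return strength * std::fmax(t, 0.0f);                  // clamp the back side to 0
    }

Scaling the light's colour by this brightness, and animating strength and colour over time, gives the pulsing effect asked for.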
The third image is more complex: I think it's showing the shadows of the inner surfaces on the outer ones.
You can also fake this effect by transferring the diffuse lighting from the back face to the front face. More specifically, you mix the lighting on both sides using some transfer function. This method is only applicable to thin-walled objects, though.
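A hedged sketch of that transfer, assuming a thin wall and a made-up blend factor transfer (0..1) that you would tune by eye:

    #include <cmath>

    struct Vec3 { float x, y, z; };
    float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Blend the diffuse term of the front side with the diffuse term the *back*
    // side would get, so some light appears to leak through the thin wall.
    // normal and toLight are unit vectors; transfer is a tuning value, not physical.
    float thinWallDiffuse(Vec3 normal, Vec3 toLight, float transfer)
    {
        float front = std::fmax(dot(normal, toLight), 0.0f);   // lit side
        Vec3 flipped{-normal.x, -normal.y, -normal.z};
        float back  = std::fmax(dot(flipped, toLight), 0.0f);  // light arriving from behind
        return front + (back - front) * transfer;              // linear mix of the two sides
    }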
My raytracer has a point light source. It works as it should and illuminates the scene, but there is a problem: the light source itself is not visible. I would like to add glowing objects to the raytracer, for example a sphere that would look like the sun.
I need any object to be able to glow, whether it's a triangle (or a line?).
Which algorithm should I use?
Sorry for my poor English)
You add a sphere which emits light, so once your ray hits it, that light is added/multiplied into the ray colour... The glow is done either by atmospheric scattering or just by a semi-transparent corona texture rendered in another pass... Games usually also use bloom filtering, which I hate, as it's too slow and often ugly if overused.
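As a rough sketch of the emissive part (the Hit structure and its field names are mine, not from any particular raytracer):

    struct Vec3 { float x, y, z; };
    Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

    // What the nearest intersection gives back.
    struct Hit {
        Vec3 lit;      // colour from the usual point-light shading
        Vec3 emission; // {0,0,0} for non-glowing objects; bright yellow for a "sun"
    };

    // Final ray colour: the usual shading plus whatever the object emits itself.
    Vec3 rayColor(const Hit& h)
    {
        return add(h.lit, h.emission);
    }

The corona/bloom part is then a separate image-space pass on top of this.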
So a polygon mesh is defined as the following:
class Triangle {
    int vertices[3];  // vertex indices
    float nx, ny, nz; // face-plane normal
};
1. Is this a convenient way to represent a mesh used with flat shading? Explain.
2. Suggest an object for which this is a good mesh format when used with Gouraud shading. Explain.
3. Suggest an object for which this is a bad mesh format when used with Gouraud shading. Explain.
So for 1, I said yes, because the face-plane normal can easily be converted to a point in the middle of the face. But I read somewhere that normals don't have positions?
For 2 I said a ball: gentler angles.
And for 3 a box: steeper angles.
I don't know, I don't think I really understand what the normal vector is.
Mostly yes. For geometry computations this is OK; from the rendering side, however, having triangles in index-only form can sometimes be problematic (it depends on the rendering engine, HW, etc.). It is usually faster to have the triangle points directly in vector form instead of just indexes; sometimes a triangle contains both, but that wastes space (see the sketch after this answer).
Whether it is OK for Gouraud shading depends on how you classify what is OK and what is not. Smooth objects like a sphere will look visibly faceted, while flat-sided meshes like a cube will be rendered without visible distortion in shape (but with flat-shaded colours only, so the lighting will be wrong). So the answer depends on what you want to achieve: less lighting error, better shape recognition, and so on. Basically, using one normal per face turns Gouraud shading into flat shading. The lighting can be improved by dividing big flat surfaces into more triangles.
#3 is unanswerable, for exactly the same reasons as #2. So if you want to answer #2 and #3, you need to clarify what "good" and "bad" mean...
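To make the trade-off in the first point concrete, here is a sketch of the layouts side by side (all names are mine):

    #include <cstdint>

    struct Vec3 { float x, y, z; };

    // Indexed form (as in the question): compact, vertices shared between faces.
    struct IndexedTriangle {
        std::uint32_t vertices[3]; // indices into a shared vertex array
        Vec3 normal;               // one normal per face -> flat shading
    };

    // Expanded form: each triangle stores its points directly.
    // Often faster to feed to a renderer, but duplicates shared vertices.
    struct ExpandedTriangle {
        Vec3 points[3];
        Vec3 normal;
    };

    // For proper Gouraud shading you want a normal per *vertex*, not per face,
    // averaged from the adjacent faces.
    struct GouraudVertex {
        Vec3 position;
        Vec3 normal;
    };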
There are some 3D applications which can cast a shadow or silhouette below a 3D model. They render fast and smoothly, and I wonder what the standard technique is for getting a model's shadow/silhouette.
For example, is there a C++ library, like libigl or CGAL, that can produce the shadow/silhouette quickly? Or maybe GLSL shading is used? Any hint about the standard technology stack would be appreciated.
For rendering, it's trivial. Just project the vertices to the surface (for the case of the XY plane, this just entails setting the Z coordinate to 0) and render the triangles. There'll be a lot of overlap, but since you're just rendering that won't matter.
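A minimal sketch of that projection (hypothetical names; this is the XY-plane case from above):

    struct Vec3 { float x, y, z; };

    // Flatten a triangle straight down onto the XY plane (z = 0),
    // then render the projected triangles in the shadow colour.
    // Overlapping projected triangles are harmless when only rendering.
    void projectToGround(Vec3 tri[3])
    {
        for (int i = 0; i < 3; ++i)
            tri[i].z = 0.0f;
    }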
If you're trying to build a set of polygons representing the silhouette shape, you'll need to instead union the projected triangles using something like the Vatti clipping algorithm.
Computing shadows is a vast and difficult topic. In the real world, light sources are extended, so shadow edges are not sharp (there is a penumbra). Then there are cast shadows, and even self-shadows.
If you limit yourself to point light sources (hence sharp shadows), there is a simple principle: if you place an observer at the light source, the faces it sees are illuminated by that light source. Conversely, the hidden surfaces are in shadow.
For correct rendering, the shadowed areas should be back-projected to the scene and painted black.
By nature, the ray-tracing techniques make this process easy to implement.
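The "observer at the light source" rule can be tested per face with a single dot product; a sketch, assuming outward-facing normals:

    struct Vec3 { float x, y, z; };
    float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // A face is lit by a point light if the light lies on the side its
    // (outward) normal points toward; otherwise it shadows itself.
    // Cast shadows from *other* geometry still need the back-projection
    // step or ray tests against the rest of the scene.
    bool facesLight(Vec3 faceCenter, Vec3 faceNormal, Vec3 lightPos)
    {
        return dot(faceNormal, sub(lightPos, faceCenter)) > 0.0f;
    }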
I'm programming a simple model loader in WebGL (using OpenGL-style shaders). I've implemented Phong shading in the fragment shader. However, when I load objects larger than a simple monkey/cube and turn the camera away from the light source, the meshes look strange (aliased?). Some of them are even lit although they should be dark (black).
The lit side is OK:
The other side is wrong:
I calculate the normals for every vertex the same way, so the normals should be OK (when I turn the camera to the lit side of the car, everything looks right).
Thank you very much for your tips.
This looks like a single-sided vs. two-sided lighting issue to me. If your mesh consists of only a single "layer" of faces, those faces have normals pointing in only one direction. If single-sided lighting is used, then the back face, i.e. the side from which the normal points away, will look weird whenever the light is on that side.
There are three ways to overcome this:
Use two-sided illumination (sketched after this list).
Draw the object twice: first with back faces culled, then with the normals flipped and the front faces culled.
Give the mesh thickness so that there are two sides (you should then enable backface culling).
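The first option boils down to flipping the normal when you are shading a back face. A minimal sketch (names are mine; in a GLSL fragment shader the gl_FrontFacing built-in gives you the same information directly):

    struct Vec3 { float x, y, z; };
    float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Two-sided lighting: when the normal points away from the viewer,
    // we are looking at the back of the face, so shade with the flipped normal.
    Vec3 twoSidedNormal(Vec3 n, Vec3 toViewer)
    {
        if (dot(n, toViewer) < 0.0f)
            return {-n.x, -n.y, -n.z};
        return n;
    }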
I think I found a bug in my Collada parser: I do not respect the exported normals but calculate new ones instead. This causes inverted normals from time to time (the door mesh of this car, for example). Anyway, two-sided rendering has to be implemented too.
Thank you.
I am using Java to write a very primitive 3D graphics engine based on The Black Art of 3D Game Programming from 1995. I have gotten to the point where I can draw single color polygons to the screen and move the camera around the "scene". I even have a Z buffer that handles translucent objects properly by sorting those pixels by Z, as long as I don't show too many translucent pixels at once. I am at the point where I want to add lighting. I want to keep it simple: ambient light seems simple enough, and directional light should be fairly simple too. But I really want point lighting with the ability to move the light source around and cast very primitive shadows (mostly I don't want light shining through walls).
My problem is that I don't know the best way to approach this. I imagine a point light source casting rays at regular angles, and if these rays intersect a polygon it will light that polygon and stop moving forward. However, when I think about a scene with multiple light sources and multiple polygons with all those rays, I imagine it will get very slow. I also don't know how to handle a case where a polygon is far enough away from a light source that it falls in between two rays. I would give each light source a maximum distance, and if I gave it enough rays, there should be no point within that distance where any two rays are far enough apart to miss a polygon, but that only increases my problem with the number of calculations to perform.
My question to you is: Is there some trick to point light sources to speed them up or just to organize it better? I'm afraid I'll just get a nightmare of nested for loops. I can't use openGL or Direct3D or any other cheats because I want to write my own.
If you want to see my results so far, here is a youtube video. I have already fixed the bad camera rotation. http://www.youtube.com/watch?v=_XYj113Le58&feature=plcp
Lighting for real-time 3D applications is (or rather, has generally been in the past) done with very simple approximations - see http://en.wikipedia.org/wiki/Shading. Shadows are expensive, and in rasterizing 3D engines they have generally been accomplished via shadow maps and shadow volumes. Point lights make shadows even more expensive.
Dynamic real-time light sources have only recently become a common feature in games, simply because they place such a heavy burden on the rendering system, and those games leverage dedicated graphics cards. So I think you may struggle to get good performance out of your engine if you decide to include dynamic, shadow-casting point lights.
Today it is commonplace for lighting to be applied in two ways:
Traditionally this has been "forward rendering". In this method, for every vertex (if you are doing the lighting per vertex) or fragment (if you are doing it per pixel) you calculate the contribution of each light source.
More recently, "deferred" lighting has become popular, wherein the geometry and extra data like normals and colour info are all rendered to intermediate buffers, which are then used to calculate the lighting contributions. This way the lighting calculations are not dependent on the geometry count. It does, however, have a lot of other overhead.
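As a very rough sketch of the deferred idea, with a plain Lambert term standing in for the full shading model (the buffer layout and all names are mine):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Vec3 { float x, y, z; };
    float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

    // One texel per screen pixel, written by the geometry pass.
    struct GBufferTexel { Vec3 position, normal, albedo; };
    struct PointLight   { Vec3 position; float intensity; };

    // Lighting pass: cost scales with pixels x lights, independent of triangle count.
    // 'brightness' must be presized to gbuf.size() by the caller.
    void lightingPass(const std::vector<GBufferTexel>& gbuf,
                      const std::vector<PointLight>& lights,
                      std::vector<float>& brightness)
    {
        for (std::size_t i = 0; i < gbuf.size(); ++i) {
            float b = 0.0f;
            for (const PointLight& L : lights) {
                Vec3 toLight = normalize(sub(L.position, gbuf[i].position));
                b += L.intensity * std::fmax(dot(gbuf[i].normal, toLight), 0.0f); // Lambert
            }
            brightness[i] = b;
        }
    }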
There are a lot of options. Implementing anything much more complex than some of the basic models that have been used by dedicated graphics cards over the past couple of years is going to be challenging, however!
My suggestion would be to start out with something simple - basic lighting without shadows. From there you can extend and optimize.
What are you doing the ray-triangle intersection test for? Are you trying to light only triangles which the light would reach? Ray-triangle intersections for every light with every poly are going to be very expensive, I think. For lighting without shadows, you would typically just iterate through every face (or, if you are doing it per vertex, through every vertex) and calculate and add the lighting contribution per light. You would do this just before you start rasterizing, as you have to pass through all the polys in any case.
You can calculate the lighting using any illumination model, even something very simple like Lambertian reflectance, which shades the surface based on the dot product of the surface normal and the direction vector from the surface to the light. Make sure your vectors are in the same space! This is possibly why you are getting the strange results that you are. If your surface normal is in world space, be sure to calculate a world-space light vector. There are a bunch of advantages to calculating lighting in certain spaces; you can have a look at that later on, but for now I suggest you just get the basics up and running. Also have a look at Blinn-Phong - this is the shading model graphics cards used for many years.
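Put together, the per-vertex version of that loop might look like the following sketch (names are mine; everything is assumed to already be in world space, per the caveat above):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

    struct PointLight { Vec3 position; float intensity; };

    // Lambertian brightness for one vertex, summed over all lights plus ambient,
    // computed once per vertex just before rasterizing.
    float vertexBrightness(Vec3 position, Vec3 normal,
                           const std::vector<PointLight>& lights, float ambient)
    {
        float b = ambient;
        for (const PointLight& L : lights) {
            Vec3 toLight = normalize(sub(L.position, position)); // surface -> light
            b += L.intensity * std::fmax(dot(normal, toLight), 0.0f);
        }
        return b < 1.0f ? b : 1.0f; // clamp so the final colour doesn't overflow
    }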
For lighting with shadows - look into the links I posted. They were developed because realistic lighting is so expensive to calculate.
By the way, LaMothe wrote a follow-up book called Tricks of the 3D Game Programming Gurus: Advanced 3D Graphics and Rasterization. It takes you through every step of programming a 3D engine. I am not sure what the Black Art book covers.