I am implementing the bi-directional path tracing algorithm (using pbrt as a source), but am seeing very hard shadows with a cornell box case that I have set up.
Here is what the rendered result looks like:
The light source can be thought of as a thin box, with only one side emitting light. All other five sides act as occluders.
I would expect to see more detail in the dark areas (if you look closely, there is a sphere in the room). This specific output is from s == 1, which equates to regular path tracing, and was generated with 100 samples/pixel. The output from s == 0 looks even darker, with just one bright spot on the ceiling.
Any ideas why this may be happening? The shadows seem to be in the right place, but the lighting contributions seem too low.
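For anyone digging into the connection step: when a light-subpath vertex is joined to an eye-subpath vertex, the contribution is scaled by a geometry term, and dropping it (or mishandling the associated solid-angle-to-area pdf conversion) is a classic source of connections that come out too dark. Here is a minimal sketch in plain C++ (hypothetical `Vec3` type, not pbrt's actual `Vector3f`):

```cpp
#include <cassert>
#include <cmath>

// Minimal 3-vector for illustration (not pbrt's Vector3f).
struct Vec3 {
    double x, y, z;
};

Vec3 sub(const Vec3 &a, const Vec3 &b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
double length(const Vec3 &v) { return std::sqrt(dot(v, v)); }

// Geometry term used when connecting a light-subpath vertex x (normal nx)
// to an eye-subpath vertex y (normal ny):
//   G = |cos(theta_x)| * |cos(theta_y)| / r^2
// This, together with a visibility test between x and y, scales every
// s/t connection; omitting either piece skews the brightness.
double geometryTerm(const Vec3 &x, const Vec3 &nx,
                    const Vec3 &y, const Vec3 &ny) {
    Vec3 d = sub(y, x);
    double r = length(d);
    if (r == 0.0) return 0.0;
    Vec3 w = {d.x / r, d.y / r, d.z / r};  // unit direction from x towards y
    double cosX = std::fabs(dot(nx, w));
    double cosY = std::fabs(dot(ny, {-w.x, -w.y, -w.z}));
    return cosX * cosY / (r * r);
}
```

Checking the term for a simple facing-pair configuration (distance 2, both cosines 1) gives 1/4, which is an easy hand-verifiable case while debugging.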
Is there a way to set multiple focal lengths with one projector and software?
As shown in the following illustration, the image from one projector is partially in focus and partially out of focus.
Assuming that the screen output from the projector is viewed with the camera again, is there a way to correct the part out of focus with software?
No, this is not doable in software alone.
You would need to change the direction of the light coming out of the projector's emitter so that, after passing through the optics, it focuses farther away. Software can only change the color of each pixel.
What you need is to tweak the projector's optics by changing the focal length of its lens system. You can do that by adding another lens in front of the projector (with the right focal length at the right distance). My bet is you need a concave lens (negative focal length). However, you need to make sure the cooling of the projector itself is not affected, so the lens must not reflect too much light back; also keep in mind this will most likely create some chromatic focusing problems. I would simply test this by holding such a lens in hand and watching what happens to the image focus while moving it around. However, I have quite a lot of lenses at my disposal, which I assume most people do not.
However, you can test this in software using any optics-lab application, or even write your own and simulate the projector there. After obtaining the proper parameters for your new lens, you can buy it anywhere eyeglasses are made or sold, unless the focal length is too unusual.
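As a rough sanity check before simulating or buying anything: two thin lenses in close contact combine as 1/f = 1/f1 + 1/f2 (their powers in diopters simply add). A minimal sketch with made-up focal lengths, not tied to any particular projector:

```cpp
#include <cassert>
#include <cmath>

// Two thin lenses placed in contact combine as 1/f = 1/f1 + 1/f2,
// i.e. their optical powers (diopters) add. Focal lengths in metres;
// a concave lens has a negative focal length.
double combinedFocalLength(double f1, double f2) {
    return 1.0 / (1.0 / f1 + 1.0 / f2);
}
```

For example, a hypothetical 0.1 m projector lens combined with a concave -0.5 m add-on gives a combined focal length of 0.125 m: the focus moves farther away, which is consistent with the concave-lens suggestion above.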
Another option is to tweak the projector's lens system, which is most likely a telephoto design, so if you can slightly adjust the distance between the lenses mechanically, that could do the trick. However, the lens movement range usually corresponds to the lens diameters and apertures, so it is possible such a change will cut off parts of the image at the outer borders. This also usually means loss of warranty, as you need to open the device itself, and if not done properly you could damage the lenses for good, so I do not advise this unless really necessary.
What would be the best way to simulate an LED strip light in three.js, like in the image? Are there any examples around of something similar?
It really depends on what kind of realism you want. There are always ways of making effects more realistic, but it quickly becomes rather complicated.
If we limit ourselves to THREE.js out-of-the-box functionality, though, we have the following light types at our disposal:
AmbientLight
DirectionalLight
HemisphereLight
PointLight
RectAreaLight
SpotLight
And out of these, I would personally recommend RectAreaLight, because it represents an area light: a rectangle that emits light uniformly across its face, whose shape you can specify to match that of your LED strip lights. Lamps such as those in your picture perhaps do not emit light completely uniformly, but depending on your goal, it might be a close enough approximation. You can also combine several area lights to achieve certain effects.
This is really confusing the heck out of me. Take a look at this screenshot:
Sometimes the GtkFrames in my program look like the ones on the left side, and sometimes they look like the ones on the right side, or even a mix of the two! The program is exactly the same. Just running the same program multiple times yields very different looks for the GtkFrames! How can that be?
It seems that there are two different designs of GtkFrame:
The first one has its label centered at the top of the frame and smoothly dissolves towards the bottom, so that the frame doesn't completely enclose the GtkFrame's contents.
The second design has its label left-aligned at the top of the frame and draws a border around the complete GtkFrame.
The problem is that GTK+ seems to choose one of the two designs at random. I don't see any pattern in which design I'm going to get, which is really confusing me.
Can somebody shed some light on this mystery? What is going on here? Is there a way to force GTK+ to use a certain design?
I'm using GTK+ 2.24.10 with the Adwaita theme on Linux Mint.
Good day! I was wondering how I can implement an effect like the ones in sweepstakes where you scratch off a grey part to reveal a number underneath. How can I implement that in Unity? I don't have any clue where to start. Thanks in advance.
The idea is: I have two overlapping objects, A and B, where clicking on a part of B removes it and shows the corresponding part of A. Both A and B are sprite images.
This is not really related to Unity as such. It relates more to a general technique for the visual effect you would like to achieve.
So, let's skip the Unity part.
But even then the question is very general and hard to respond to. There are many ways to achieve this, depending on the result you want.
You could place a quad with a grey texture on top of whatever number box you have and then either use shaders to reveal the number as you "scratch" it, or remove the grey quad entirely when it is clicked (a different behaviour), or take the old-school approach and replace pixels as you "scratch" the box.
Those are only a few ideas.
But still, the question is very general and hard to answer, as it concerns a general idea rather than a concrete problem.
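The old-school pixel-replacement idea above can be sketched outside any engine: treat the grey cover as an alpha mask and clear the alpha of every pixel within a brush radius around each scratch point. A minimal sketch in plain C++ (hypothetical `CoverLayer` type, not Unity's sprite API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// The grey cover layer as a square alpha mask: 255 = opaque grey,
// 0 = scratched away, revealing the layer underneath.
struct CoverLayer {
    int size;
    std::vector<uint8_t> alpha;  // one alpha byte per pixel
    explicit CoverLayer(int s) : size(s), alpha(s * s, 255) {}

    // Punch a transparent hole of the given radius around (cx, cy),
    // e.g. in response to a click or drag event.
    void scratch(int cx, int cy, int radius) {
        for (int y = 0; y < size; ++y)
            for (int x = 0; x < size; ++x) {
                int dx = x - cx, dy = y - cy;
                if (dx * dx + dy * dy <= radius * radius)
                    alpha[y * size + x] = 0;
            }
    }

    // Fraction of the cover already scratched off; handy for deciding
    // when to auto-reveal the whole prize.
    double scratchedFraction() const {
        int cleared = 0;
        for (uint8_t a : alpha)
            if (a == 0) ++cleared;
        return double(cleared) / alpha.size();
    }
};
```

In an engine you would upload the modified alpha mask back to the cover sprite's texture each frame (or after each scratch), leaving the underlying sprite untouched.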
Look into how to build a fog-of-war shader. I would achieve this by rendering A and B with two separate cameras; then, as you scratch the cover off, the layer underneath is revealed in the scratched area.
I am maintaining a multiplatform codebase for Xbox360 and WinXP. I am seeing an issue on the XP side that appears to be related to D3DRS_VIEWPORTENABLE on the Xbox360 version not having an equivalent on WinXP D3D9. This article had an interesting idea, but the only way to construct an identity matrix there is to supply negative numbers to D3DVIEWPORT9::X and D3DVIEWPORT9::Height, which are unsigned. (I tried to put in negative numbers anyway, but nothing interesting happened.)
So, how does one emulate the behavior of D3DRS_VIEWPORTENABLE under WinXP/D3D9?
(For clarity, the result I'm seeing is that a 2d screen-aligned quad works fine on Xbox360 but is offset/stretched on WinXP. In fact, the (0, 0) starts in the center of the screen on WinXP instead of in the lower-left corner like on the Xbox360 as a result of applying the viewport transform.)
Update: I didn't have an Xbox360 devkit at the time I wrote up this question, but I've since gotten one. I commented out the disabling of the D3DRS_VIEWPORTENABLE state, and the exact same behavior resulted on the Xbox360 as on the WinXP build. So, there must be some DirectX magic to bridge the gap here for emulating D3DRS_VIEWPORTENABLE being turned off on WinXP.
Instead of thinking about how you could put negatives into the viewport matrix, think about it from the projection matrix.
The viewport matrix is applied directly after the projection matrix. So imagine setting the viewport to identity and then multiplying it into the wvp (world-view-projection) matrix, i.e.
world * view * projection * viewport
You can now set the viewport to anything you want.
Of course, this isn't actually the best way to attack the problem either. Some drivers will probably make optimisations based upon entries in the viewport (they may not actually do a full matrix multiply, for one). I wouldn't personally use the above approach; I can foresee too many issues coming from it.
So where does that leave you?
Well, actually it's still pretty simple. If you multiply the projection matrix by the inverse of the viewport matrix, then when the viewport "matrix" is applied they cancel each other out and you are left with the direct output of the projection. The viewport has now effectively been "disabled".
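That cancellation is easy to verify numerically. Below is a minimal sketch in plain C++ using the row-vector convention of D3D9's fixed-function pipeline; `viewportMatrix` and `inverseViewportMatrix` are illustrative names, not D3DX functions:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Mat4 = std::array<double, 16>;  // row-major, row-vector convention (v * M)

// Row-vector matrix product: out = a * b.
Mat4 mul(const Mat4 &a, const Mat4 &b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
    return r;
}

// The transform D3D9 applies for a viewport (X, Y, Width, Height, MinZ, MaxZ):
// scale by (W/2, -H/2, MaxZ - MinZ), then translate by (X + W/2, Y + H/2, MinZ).
Mat4 viewportMatrix(double X, double Y, double W, double H,
                    double minZ, double maxZ) {
    return {W / 2,     0,         0,           0,
            0,         -H / 2,    0,           0,
            0,         0,         maxZ - minZ, 0,
            X + W / 2, Y + H / 2, minZ,        1};
}

// Inverse of that scale+translate transform. Appending this to the
// projection matrix cancels the implicit viewport stage, emulating
// D3DRS_VIEWPORTENABLE = FALSE.
Mat4 inverseViewportMatrix(double X, double Y, double W, double H,
                           double minZ, double maxZ) {
    double sx = 2 / W, sy = -2 / H, sz = 1 / (maxZ - minZ);
    return {sx,                0,                 0,          0,
            0,                 sy,                0,          0,
            0,                 0,                 sz,         0,
            -(X + W / 2) * sx, -(Y + H / 2) * sy, -minZ * sz, 1};
}
```

Multiplying `inverseViewportMatrix(...)` by `viewportMatrix(...)` for the same viewport parameters yields the identity, which is exactly the "cancel each other out" behaviour described above.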