Transparency in Progressive Photon Mapping in CUDA

I am working on a project based on OptiX. I need to use progressive photon mapping, so I am trying to use the Progressive Photon Mapping sample, but the transparency material is not implemented.
I've googled a lot and also tried to understand other samples that contain a transparency material (e.g. Glass, Tutorial, Whitted). In the end, I arrived at the following approach:
1. Find the hit point (intersection point) (h below)
2. Generate another ray from that point
3. Use the color returned by the newly generated ray
Below you can find the code for that part, but I do not understand why I get a black color (0.f, 0.f, 0.f) for the newly generated ray (step 3 above).
// h: hit point, t: refracted ray direction (computed earlier)
optix::Ray ray( h, t, rtpass_ray_type, scene_epsilon );
HitPRD refr_prd;
refr_prd.ray_depth = hit_prd.ray_depth+1;   // track recursion depth
refr_prd.importance = importance;
rtTrace( top_object, ray, refr_prd );       // trace the secondary (refracted) ray
result += (1.0f - reflection) * refraction_color * refr_prd.attenuation;
Any ideas will be appreciated.
Please note that refr_prd.attenuation should contain some color after rtTrace() is called. I've mentioned reflection and refraction_color only to help you better understand the procedure; you can simply ignore them.

There are a number of methods to diagnose your problem.
Isolate the contribution of the refracted ray by removing any contribution of the reflection ray.
Make sure you have a miss program. HitPRD::attenuation needs to be written to by all of your closest-hit programs and your miss program. If you suspect the miss program is being called, set your miss color to something obviously bad ([1,0,1] is my favorite).
Use rtPrintf in combination with rtContextSetPrintLaunchIndex or setPrintLaunchIndex to print out the individual values of the product and see which term is zero for a given pixel. If you don't restrict the output to a given launch index, you will get too much output. You probably want to print out the ray depth as well.

Related

How would I construct an integer optimization model corresponding to a graph

Suppose we're given some sort of graph where the feasible region of our optimization problem is given. For example: here is an image
How would I go about constructing these constraints in an integer optimization problem? Does anyone have any tips? Thanks!
Mate, I agree with the others that you should be a little more specific than that paint-ish picture ;). In particular, you neither specify an objective (or objective direction) nor give any context for why this graph should involve integer variables, apart from the existence of disjunctive feasible sets, which may be modeled with MIP techniques. It seems like your problem is the formalization of what you have conceptualized. However, in case you are just being lazy and are really only interested in modelling disjunctive regions, you should look into disjunctive programming techniques such as "big-M" (note: big-M reformulations can be numerically problematic). You should aim for a convex-hull reformulation if you can attain one (fairly easily).
Back to your picture, it is quite clear that you have a problem in two real dimensions (let's say in R^2), where the constraints bounding the feasible set are linear (the lines making up the feasible polygons).
So you know that you have two dimensions and need two real continuous variables, say x[1] and x[2], to formulate each of your linear constraints (a[i,1]*x[1] + a[i,2]*x[2] <= rhs[i] for each index i corresponding to one of the lines in your graph). Additionally, your variables seem to be constrained to the first orthant, so x[1] >= 0 and x[2] >= 0 should hold. Now, to add disjunctions you want some constraints that only hold when a certain condition is true. Therefore, you can add two binary decision variables, say y[1] and y[2], and an additional constraint y[1] + y[2] = 1, to state that only one set of constraints can be active at a time. You should be able to implement this with the help of big-M by reformulating the constraints as follows:
If your line bounds things from above:
a[i,1]*x[1] + a[i,2]*x[2] - rhs[i] <= M*(1 - y[1]) if i corresponds to the one polygon,
a[i,1]*x[1] + a[i,2]*x[2] - rhs[i] <= M*(1 - y[2]) if i corresponds to the other polygon,
and if your line bounds things from below:
-M*(1 - y[1]) <= -a[i,1]*x[1] - a[i,2]*x[2] + rhs[i] if i corresponds to the one polygon,
-M*(1 - y[2]) <= -a[i,1]*x[1] - a[i,2]*x[2] + rhs[i] if i corresponds to the other polygon.
It is important that M is sufficiently large, but not too large to cause numerical issues.
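As a minimal sketch of this big-M pattern (the polygons, coefficients and the objective below are made-up placeholders, and PuLP is just one convenient way to write it down):

from pulp import LpProblem, LpVariable, LpMaximize, LpBinary

M = 1000  # big enough for these made-up constraints, but not needlessly huge

prob = LpProblem("disjunctive_regions", LpMaximize)
x1 = LpVariable("x1", lowBound=0)
x2 = LpVariable("x2", lowBound=0)
y1 = LpVariable("y1", cat=LpBinary)
y2 = LpVariable("y2", cat=LpBinary)

prob += x1 + x2          # placeholder objective
prob += y1 + y2 == 1     # exactly one polygon is active

# made-up polygon 1: x1 + x2 <= 4 and x1 <= 3, enforced only when y1 = 1
prob += x1 + x2 - 4 <= M * (1 - y1)
prob += x1 - 3 <= M * (1 - y1)

# made-up polygon 2: x1 + 2*x2 <= 10 and x1 >= 5, enforced only when y2 = 1
prob += x1 + 2 * x2 - 10 <= M * (1 - y2)
prob += -M * (1 - y2) <= x1 - 5

prob.solve()
print(x1.value(), x2.value(), y1.value(), y2.value())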
That being said, I am by no means an expert on these disjunctive programming techniques, so feel free to chime in, add corrections or make things clearer.
Also, a more elaborate question typically yields more elaborate and satisfying answers ;) If you had gone to the effort of making up a true small example problem you likely would have gotten a full formulation of your problem or even an executable piece of code in no time.

Adjusting initial bullet angle to match user-set distance (scope zero) (for math gods)

So my question is pretty specific, which means it was pretty hard to find anything that could help me on Google or Stack Overflow.
I want to give users the ability to set the distance/range on their guns. I have almost everything I need to make this happen; I just don't have the angle that I need to add to the direction the bullet is fired in. I don't know what equation/formula I would need to get this. I am not looking for anything code-specific, just an idea of what to do and how to do it.
Since I do not know what formula to use, I just started messing around with some numbers with this formula I found:
(This formula applies to real-world sniping)
Range = 1000 * ActualTargetHeight/TargetHeightInMils(on the scope)
BulletDrop = BulletDropSpeed*Range^2/2*VelocityOfTheBullet
MilsToRaiseScope = 1000 * BulletDrop * RangeToTarget
I just replaced Range with whatever zero the user is on.
I have a feeling I would just toss the MilsToRaiseScope into a trigonometry function. But I'm not sure.
If anyone is confused as to what I'm talking about, you can find an example of what I want in Battlefield 4 or any of the Arma games. With snipers, you can zero in the scope on to whatever distance you need so you won't have to adjust for bullet drop on the scope.
Sorry for the long question, just want to make sure everyone understands! :)
A mil corresponds to a (military) angular measurement unit of 1/1000 of a radian, so it is a ready-to-use angle.
The second formula looks strange. Height loss depends on the time of flight:
dH = g*t^2/2 = g * (Range / VelocityOfTheBullet)^2 / 2
where g is 9.81 m/s^2.
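To turn this into the elevation for a given zero distance, here is a minimal no-drag sketch in Python (using the exact projectile-motion result R = v^2 * sin(2*theta) / g; the 850 m/s and 600 m values are made-up examples):

import math

G = 9.81  # m/s^2

def zero_angle(zero_range, muzzle_velocity):
    """Elevation angle (radians) so the bullet returns to barrel height
    exactly at zero_range, ignoring drag: R = v^2 * sin(2*theta) / g."""
    arg = G * zero_range / muzzle_velocity ** 2
    if arg > 1.0:
        raise ValueError("zero_range is beyond the no-drag maximum range")
    return 0.5 * math.asin(arg)

angle = zero_angle(600.0, 850.0)   # e.g. zeroing an 850 m/s rifle at 600 m
print(math.degrees(angle))         # degrees of elevation to add
print(angle * 1000.0)              # roughly the same value expressed in mils

For the small angles involved, this comes out essentially equal to dH / Range from the drop formula above.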
I am using a 2D table lookup for this.
I generate the table by doing a whole bunch of test firings at different angles, and I record the path of the bullet for each angle.
Determining this analytically can get quite complex if aerodynamic drag is involved.
It is also discussed on this game-specific question.
For inspiration, this animated gif is great.
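A rough Python sketch of that lookup-table idea (the drag model and every constant here are made up; the point is just "fire at many angles, record where the bullet comes back down, then interpolate the angle for a requested zero distance"):

import math

G = 9.81        # m/s^2
DRAG_K = 4e-4   # made-up quadratic-drag coefficient, per metre
DT = 0.01       # simulation time step, seconds

def range_for_angle(angle, muzzle_velocity):
    """One 'test firing': integrate until the bullet is back at barrel height."""
    x, y = 0.0, 0.0
    vx = muzzle_velocity * math.cos(angle)
    vy = muzzle_velocity * math.sin(angle)
    while y >= 0.0 or vy > 0.0:
        speed = math.hypot(vx, vy)
        vx -= DRAG_K * speed * vx * DT
        vy -= (G + DRAG_K * speed * vy) * DT
        x += vx * DT
        y += vy * DT
    return x

def build_table(muzzle_velocity, max_mils=20.0, step_mils=0.5):
    """(range, elevation-in-mils) pairs from many test firings."""
    mils = [i * step_mils for i in range(1, int(max_mils / step_mils) + 1)]
    return [(range_for_angle(m / 1000.0, muzzle_velocity), m) for m in mils]

def mils_for_range(table, zero_range):
    """Interpolate the table to get the elevation for a requested zero distance."""
    for (r0, m0), (r1, m1) in zip(table, table[1:]):
        if r0 <= zero_range <= r1:
            return m0 + (m1 - m0) * (zero_range - r0) / (r1 - r0)
    return None  # requested zero is outside the table

table = build_table(850.0)
print(mils_for_range(table, 600.0))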

TypeError: src data type = 23 is not supported

Hope you're all having a great day so far.
I have a Python 3.6 script that applies random (well, in fact it's more like exhaustively all existing) sequences of OpenCV image transformations to an image, compares the result to a sought-after image, and tries the next sequence.
Image transformations include:
Thresholding
Morphological transformation
Smoothing
Playing with colors (with cvtColor, but also hand-made algorithm)
Playing with gradients
I won't show the code for this since it's basically just a heavy set of loops and arrays, which I don't think is related to my issue.
Obviously enough, most of the tried combinations aren't valid since, for example, converting BGR to GRAY won't work well if it comes after an earlier conversion from BGR to GRAY. I know it's not very Pythonic of me, since it goes against EAFP thinking, but because catching the exceptions costs a lot, happens pretty often, and sometimes only after some heavy processing, I wanted to add a few conditions to prevent most of them.
To do so, I sorted my array of functions so that, by checking whether the next index falls within a certain range, I can check the validity of the upcoming transformation and abort if it is invalid:
if steps[it] >= THREE_CHANNELS_LIMIT:
    if len(cur_img.shape) == 3:
        if steps[it] >= SINGLE_CHANNEL_LIMIT:
            break
        elif cur_img.dtype not in BGR_DEPTHS:
            break
Here steps[it] is the index of the next function to execute, and THREE_CHANNELS_LIMIT and SINGLE_CHANNEL_LIMIT are the indices at the borders of the corresponding function ranges.
The above code prevents single-channel transformations from being applied to a multi-channel numpy image.
Now to my issue: from the logged exceptions, I can see that a few functions, the OpenCV morphological functions, are still throwing errors:
TypeError: src data type = 23 is not supported
I think this is probably an issue with pixel depth. However, I have no idea what type 23 is or means, and I would like to know in order to estimate how often the issue occurs and decide whether I should add another condition or just let the try-except statement deal with it.
I searched the web and found many type = 17, type = 18 or type = 0 issues, but I can't seem to find this one.
Is there a file somewhere listing all the types OpenCV uses in its error messages? Or maybe one of you knows about this specific one, which would do the trick for my present case?
And sorry for my inaccurate English. My current spellchecker just underlines everything, so I might have left many typos, too.
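For anyone digging into this kind of message: the number appears to come from NumPy's internal type number for the array's dtype (dtype.num), which would be consistent with the commonly reported 17 (object), 18 (str) and 0 (bool) cases. Under that assumption, a quick way to print the mapping is:

import numpy as np

# Print NumPy's internal type numbers (dtype.num) for common dtypes,
# so a "src data type = N" value can be matched to an actual dtype.
for name in ("bool", "uint8", "int8", "uint16", "int16", "int32", "int64",
             "float16", "float32", "float64", "object"):
    dt = np.dtype(name)
    print(dt.num, dt.name)

print(np.dtype("float16").num)  # prints 23 (half-precision floats)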

GIMP's method of layer compositing/blending

In my quest to add alpha capacity to my image blending tools in Matlab, I've come across a bit of a snag. Among others, I've been using these links as my references as to how foreground and background alpha plays into the composition of both the output color data and output alpha.
My original approach was to simply use a Src-Over composition for the "normal" blend mode and a Src-Atop composition for other modes. When compared to the output from GIMP, this produced similar, but differing, results. The output alpha matches, but the RGB data differs.
Specifically, the foreground's color influence over the background is zero where the background alpha is zero. After spending a few hours looking naively through the GIMP 2.8.10 source, I notice a few things that confuse me.
Barring certain modes and a few ancillary things that happen during export that I haven't gleaned in the code yet, the approach is approximately thus:
if ~normalmode
FGalpha = min(FGalpha, BGalpha); % << why this?
end
FGalpha = FGalpha * mask * opacity;
OUTalpha = BGalpha + (1 - BGalpha) * FGalpha;
ratio = FGalpha / (OUTalpha + eps);
OUT = OUT * ratio + BG * (1 - ratio);
if normalmode
OUT = cat(3, OUT, OUTalpha);
else
OUT = cat(3, OUT, BGalpha);
end
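As a minimal numpy transcription of the pseudocode above (just to make the described behavior runnable; I'm assuming the OUT on the right-hand side of the color line is the mode-specific blend of FG and BG, which for "normal" mode is simply FG):

import numpy as np

def gimp_legacy_composite(blend, FG, FGalpha, BG, BGalpha,
                          mask=1.0, opacity=1.0, normalmode=False):
    # colors in 0..1, straight (non-premultiplied) alpha
    if not normalmode:
        FGalpha = np.minimum(FGalpha, BGalpha)   # the puzzling min()
    FGalpha = FGalpha * mask * opacity

    OUTalpha = BGalpha + (1.0 - BGalpha) * FGalpha
    ratio = FGalpha / (OUTalpha + np.finfo(float).eps)

    OUT = blend(FG, BG) * ratio + BG * (1.0 - ratio)

    # legacy behavior: non-normal modes keep the background alpha
    return (OUT, OUTalpha) if normalmode else (OUT, BGalpha)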
The point of curiosity is that I don't understand conceptually why one would take the minimum of the layer alphas for composition. Certainly, this approach produces results which match GIMP, but I'm uncomfortable establishing it as the default behavior if I don't understand the reasoning.
This may be best asked of a GIMP forum somewhere, but I figured it would be more fruitful to approach a general audience. To clarify and summarize:
Does it make sense that colors in a transparent BG region are unaffected by multiplication with an opaque foreground color? Wouldn't this risk causing bleeding of unaltered data near hard mask edges with some future operation?
Although I haven't found anything, are there other applications out there that use this approach?
Am I wrong to use GIMP's behavior as a reference? I don't have PS to compare against, and ImageMagick is so flexible that it doesn't really suggest a particular expected behavior. Certainly, GIMP has some things it does incorrectly; maybe this is something else that may change.
EDIT:
I can at least answer the last question by obviating it. I've decided to add support for both SVG 1.2 and legacy GIMP methods. The GEGL methods to be used by GIMP in the future follow the SVG methods, so I figure that suggests the propriety of the legacy methods.
For what it's worth, the SVG methods are all based on a Porter-Duff Src-Over composition. If you refer to the documentation, the fact that the blend math is the same gets obscured because the blend and composition are algebraically combined using premultiplied alpha to reduce the overall computational cost. With the exception of SoftLight, the core blend math is the same as that used by GIMP and elsewhere.
Any other blend operation (e.g. PinLight, Hue) can be made compatible by just doing:
As = Sa * (1 - Da);
Ad = Da * (1 - Sa);
Ab = Sa * Da;
Ra = As + Ad + Ab; % output alpha
Rc = ( f(Sc,Dc)*Ab + Sc*As + Dc*Ad ) / Ra;
and then doing some algebra if you want to simplify it.
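As a minimal numpy sketch of that recipe (the blend function f and the random inputs are placeholders; Sc/Dc and Sa/Da are straight, non-premultiplied color and alpha in 0..1):

import numpy as np

def blend_composite(f, Sc, Sa, Dc, Da):
    # apply blend f(Sc, Dc) and composite per the formulas above
    As = Sa * (1.0 - Da)   # source-only region
    Ad = Da * (1.0 - Sa)   # destination-only region
    Ab = Sa * Da           # overlap region
    Ra = As + Ad + Ab      # output alpha (same as Src-Over)
    Rc = (f(Sc, Dc) * Ab + Sc * As + Dc * Ad) / (Ra + np.finfo(float).eps)
    return Rc, Ra

# example: a multiply blend on random data
rng = np.random.default_rng(0)
Sc, Dc = rng.random((4, 4, 3)), rng.random((4, 4, 3))
Sa, Da = rng.random((4, 4, 1)), rng.random((4, 4, 1))
Rc, Ra = blend_composite(lambda s, d: s * d, Sc, Sa, Dc, Da)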

Software to draw graphical models in plate notation

So I see graphical models expressed in plate notation in research papers and online all the time (for example: http://www.cs.princeton.edu/~blei/papers/BleiNgJordan2003.pdf).
Is there a quick and easy way to produce these?? I've searched and searched but all I've found are solutions like GraphViz which are really way more powerful than what I need (and hence much more difficult to use). PGF/Tikz seems like my best bet, but again it seems like overkill.
Maybe my best bet is to just produce them in Inkscape, or bite the bullet and learn PGF/Tikz. They're just so popular that I thought there would be a simpler way to churn them out, but maybe not... TIA.
GraphViz really isn't that hard to learn. The basic language is really simple for these kinds of graphs. It took me just a few moments to replicate (more or less) the first example from that PDF, and the nice thing about it is that, due to its simplicity, it's quite easy to generate graphs procedurally from some other data source.
digraph fig1 {
    rankdir = LR;  // order things from left to right

    // define alpha and beta as existing; not strictly necessary,
    // but helps if you want to assign them specific shapes or colours
    α [shape=circle];
    β [shape=circle];

    subgraph cluster_M  // names beginning with "cluster" get a box drawn, an odd hack
    {
        label = "M";
        θ [shape=circle];

        subgraph cluster_N
        {
            label = "N";
            z [shape=circle];
            w [shape=circle, style=filled];
            z -> w;  // quite literally z points at w
        }
        θ -> z;
    }
    α -> θ;
    β -> w;
}
compiled with
dot -Tpng input.txt -o graph.png
it comes out looking like this. If having the labels below the bubbles is important, you could do that with a couple of extra lines; similarly, if specific placement of nodes is important, you can adjust that too. In fact, if you don't specify an image format, the default behaviour of dot is to output a version of the input file with coordinates for the position of each element.
Check out the excellent TikZ package by Laura Dietz, available from http://www.mpi-inf.mpg.de/~dietz/probabilistic-models-tikz.zip. A PDF with some examples is available at http://www.mpi-inf.mpg.de/~dietz/probabilistic-models-tikz.pdf.
Here is a more refined fork of Dietz's scripts: https://github.com/jluttine/tikz-bayesnet
I really like GLE (Graphics Layout Engine). It's what Christopher Bishop used in his book, "Pattern Recognition and Machine Learning". It has a simple syntax with variables, loops, and functions, and it supports TeX equations. Results output as either pdf or eps and look very nice.
Lots of examples are available, including this Bayes net from PRML.
As a complement to other answers: a "low-skills" approach I've used is to draw them in Google Slides, with some add-on for producing the formulas.

Resources