I am building a simple tool for manipulating point clouds. I want to be able to do a polygonal selection on mouse move.
I am working with VTK 5.10 and QVTKWidget in Ubuntu 12.04.
To do this, I built a polygonalSelector class by modifying the test file TestPolygonSelection.cxx at https://github.com/Kitware/VTK/blob/master/Rendering/Core/Testing/Cxx/TestPolygonSelection.cxx
The modifications are those needed to use VTK 5.10 instead of VTK 6 (note that I need to use the old version), as described here: http://www.vtk.org/Wiki/VTK/VTK_6_Migration/Replacement_of_SetInput
In the code, a polygonal shape is drawn and then a vtkHardwareSelector object (vtkNew<vtkHardwareSelector> hardSel;) is used for its GeneratePolygonSelection method.
The problem I am running into is when the following condition is tested:
if (hardSel->CaptureBuffers())
Internally, CaptureBuffers() contains code that does this:
vtkRenderWindow *rwin = this->Renderer->GetRenderWindow();
int rgba[4];
rwin->GetColorBufferSizes(rgba);
if (rgba[0] < 8 || rgba[1] < 8 || rgba[2] < 8)
{
  vtkErrorMacro("Color buffer depth must be atleast 8 bit. "
                "Currently: " << rgba[0] << ", " << rgba[1] << ", " << rgba[2]);
  return false;
}
I never get past this point because it always returns false. I have no clue how to set the color buffer sizes, and I have not been able to find any information online that clears this up.
This is the error output:
vtkHardwareSelector (0x168dcd0): Color buffer depth must be atleast 8 bit. Currently: 17727456, 0, 23649488
On debugging, the rgba array is never changed (it stays the same before and after the call to rwin->GetColorBufferSizes(rgba)).
The documentation for vtkRenderWindow states:
virtual int vtkRenderWindow::GetColorBufferSizes ( int * rgba )
Get the size of the color buffer. Returns 0 if not able to determine otherwise sets R G B and A into buffer.
Implemented in vtkOpenGLRenderWindow.
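Based on that description, one quick thing to try is checking the return value directly. This is only a small sketch; renderWindow here is assumed to be the result of the QVTKWidget's GetRenderWindow():

// Minimal check (sketch): per the documentation quoted above, a return value
// of 0 means the sizes could not be determined and rgba is left untouched,
// which would explain the garbage numbers printed in the error output.
int rgba[4] = { -1, -1, -1, -1 };
int ok = renderWindow->GetColorBufferSizes(rgba);
if (ok == 0)
{
  // The render window could not report its color buffer sizes
  // (e.g. the OpenGL context may not be current / realized yet).
}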
Do I need to use vtkOpenGLRenderWindow?
On its Class reference it states that "Application programmers should normally use vtkRenderWindow instead of the OpenGL specific version."
Any ideas?
EDIT
I believe the problem stems from the VTK 5.10 vs VTK 6 differences.
I did manage to implement the polygonal selection using a different approach.
If anyone intends to implement some kind of polygonal selection in the future, they might find these steps useful (a rough code sketch follows the list):
I sub-classed vtkInteractorStyleDrawPolygon and implemented the following steps inside the OnLeftButtonUp() method:
Get points on button release: std::vector<vtkVector2i> points = this->GetPolygonPoints();
Insert the points into a vtkDoubleArray
Insert the vtkDoubleArray into a vtkPolygon
Get the polygon's numPoints, normal and bounds
Get a pointer pts to the double array inside the polygon's data:
pts = static_cast<double*>(polygon->GetPoints()->GetData()->GetVoidPointer(0));
For each point P in the vtkPolyData, do:
inside = polygon->PointInPolygon(P, numPoints, pts, bounds, normal);
Add the point to a vtkSelection when inside == 1
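For reference, here is a rough sketch of how those steps fit together (VTK 5.10 style). This is not my exact code: the class name PolygonalSelector, the PolyData member holding the cloud, and the world-to-display projection of each point are assumptions added for illustration.

// Rough sketch only. PolygonalSelector subclasses vtkInteractorStyleDrawPolygon.
void PolygonalSelector::OnLeftButtonUp()
{
  // 1. Polygon vertices in display (pixel) coordinates.
  std::vector<vtkVector2i> points = this->GetPolygonPoints();

  // 2./3. Copy them into a vtkPolygon (z = 0).
  vtkSmartPointer<vtkPolygon> polygon = vtkSmartPointer<vtkPolygon>::New();
  for (size_t i = 0; i < points.size(); ++i)
  {
    polygon->GetPoints()->InsertNextPoint(points[i].GetX(), points[i].GetY(), 0.0);
  }

  // 4. numPoints, normal and bounds of the polygon.
  int numPoints = polygon->GetPoints()->GetNumberOfPoints();
  double normal[3], bounds[6];
  polygon->GetPoints()->GetBounds(bounds);
  vtkPolygon::ComputeNormal(polygon->GetPoints(), normal);

  // 5. Raw pointer to the polygon's coordinates.
  double* pts = static_cast<double*>(
    polygon->GetPoints()->GetData()->GetVoidPointer(0));

  // 6./7. Project every cloud point to display coordinates and test it.
  vtkRenderer* renderer = this->GetCurrentRenderer();
  for (vtkIdType id = 0; id < this->PolyData->GetNumberOfPoints(); ++id)
  {
    double world[3], display[3];
    this->PolyData->GetPoint(id, world);
    renderer->SetWorldPoint(world[0], world[1], world[2], 1.0);
    renderer->WorldToDisplay();
    renderer->GetDisplayPoint(display);
    display[2] = 0.0; // drop the depth value so the polygon's z bounds match

    if (polygon->PointInPolygon(display, numPoints, pts, bounds, normal) == 1)
    {
      // collect id, e.g. into a vtkIdTypeArray feeding a vtkSelectionNode/vtkSelection
    }
  }

  // Let the base class finish its own handling.
  vtkInteractorStyleDrawPolygon::OnLeftButtonUp();
}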
I want to use the built-in derivative functions:
vec3 dpdx = dFdx(p);
vec3 dpdy = dFdy(p);
inside a compute shader. However, I get the following error:
Message ID name: UNASSIGNED-CoreValidation-Shader-InconsistentSpirv
Message: Validation Error: [ UNASSIGNED-CoreValidation-Shader-InconsistentSpirv ] Object 0: handle = 0x5654380d4dd8, name = Logical device: GeForce GT 1030, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0x6bbb14 | SPIR-V module not valid: OpEntryPoint Entry Point <id> '5[%main]'s callgraph contains function <id> 46[%BiplanarMapping_s21_vf3_vf3_f1_], which cannot be used with the current execution modes:
Derivative instructions require DerivativeGroupQuadsNV or DerivativeGroupLinearNV execution mode for GLCompute execution model: DPdx
Derivative instructions require DerivativeGroupQuadsNV or DerivativeGroupLinearNV execution mode for GLCompute execution model: DPdy
%BiplanarMapping_s21_vf3_vf3_f1_ = OpFunction %v4float None %41
Severity: VK_DEBUG_UTILS_MESSAGE_SEVERITY_ERROR_BIT_EXT
I don't seem to find anything on the topic when I search online.
Derivative functions only work in a fragment shader. The derivatives are based on the rate-of-change of the value across the primitive being rendered. Obviously compute shaders don't render primitives, so there is nothing to compute.
Apparently, NVIDIA has an extension that provides some derivative computation capabilities for compute shaders. That's where the weird error comes from.
Derivatives in fragment shaders are computed by taking the difference of the same value between adjacent invocations. As such, you can emulate this by using shared variables.
First, you have to make sure that the spatially adjacent invocations are in the same work group. So your work group size needs to be some multiple of 2x2 invocations. Then, you need a shared variable array, which you index by invocations within a work group. Each invocation should write its own value to its own index.
To compute the derivative, issue a barrier (with memoryBarrierShared) after writing the values to the shared variables. Then take the difference between one invocation's value and the adjacent one's in the same 2x2 quad. You should make sure that all invocations in the same quad get the same value, by always subtracting the lower-index value from the higher-index value within the quad. Something like this:
uvec2 quadBase = (gl_LocalInvocationID.xy / 2) * 2; // lower-left invocation of this 2x2 quad
/*type*/ derFdX = variable[quadBase.x + 1][quadBase.y + 0] - variable[quadBase.x + 0][quadBase.y + 0];
/*type*/ derFdY = variable[quadBase.x + 0][quadBase.y + 1] - variable[quadBase.x + 0][quadBase.y + 0];
The NVIDIA extension basically does this for you, though it's probably more efficient since it wouldn't need the shared variable.
I'm having a problem: I need to make the words I took from an external file NOT overlap each other. I have over 50 words that get random text sizes and positions when the sketch runs, but they overlap.
How can I make them NOT overlap each other? The result would probably look like a word cloud.
If you think my code would help, here it is:
String[] words;
int index = 0;

void setup()
{
  size(500, 500);
  background(255);
  String[] lines = loadStrings("alice_just_text.txt");
  String entireplay = join(lines, " "); // joins the lines into one string
  words = splitTokens(entireplay, ",.?!:-;:()03 "); // splits it into words
  for (int i = 0; i < 50; i++) {
    float x = random(width);
    float y = random(height);
    int index = int(random(words.length));
    textSize(random(60)); // random font size
    fill(0);
    textAlign(CENTER);
    text(words[index], x, y, width/2, height/2);
    println(words[index]);
    index++;
  }
}
Stack Overflow isn't really designed for general "how do I do this" type questions. You'll have much better luck if you post a more specific "I tried X, expected Y, but got Z instead" type question. But I'll try to help in a general sense:
You need to break your problem down into smaller pieces and then take on those pieces one at a time.
For example, you can isolate your problem to making sure rectangles don't overlap, which you can break down even further. There are a number of ways to do that:
You could use a grid to lay out your rectangles. Figure out how many squares a line of text takes up, then find a place in your grid where that word will fit. You could use something like a 2D array of boolean values, for example.
Or you could generate a random location, and then check whether there's already a rectangle there. If so, pick a new random location until you find a clear spot.
In any case, you'll probably need to use collision detection (either point-rectangle or rectangle-rectangle) to determine whether your rectangles are overlapping.
Start small. Create a small example program that just shows two rectangles on the screen. Hardcode their positions at first, but make it so they turn red if they're colliding. Work your way up from there. Make it so you can add rectangles using the mouse, but only let the user add them if there is no overlap. Then add the random location choosing. If you get stuck on a specific step, then post an MCVE and we'll go from there. Good luck.
So I am writing a volume ray caster (for the first time ever) in Java, learning from the code of the great VTK toolkit written in C++.
Everything works almost exactly like VTK, except that I get these strange artifacts, looking like elevation lines on the volume. I've noticed that VTK also shows them while the image is being manipulated, but they disappear when the image is static.
I've looked through the code multiple times and can't find the source of the artifacts. Maybe it is something simple that a computer graphics expert knows off the top of their head? :)
More info on my implementation
I am using the gradient method for normal calculations (a standard from what I've found on the internet)
I am using trilinear interpolation for ray point values
This "elevation line" artifacts look like value rounding errors, but I can't find any in my code
Increasing the resolution of the render does not solve the problem
The artifacts do not seem to be "facing" any fixed direction, like the camera position
I'm not attaching the code since it is huge :)
EDIT (ray composite loop)
while (Geometry.pointInsideCuboid(cuboid, position) && result.a > MINIMAL_OPACITY) {
    if (currentVoxel.notEquals(previousVoxel)) {
        final float value = VoxelUtils.interpolate(position, voxels, buffer);
        color = colorLUT.getColor(value);
        opacity = opacityLUT.getOpacityFromLut(value);
        if (enableShading) {
            final Vector3D normal = VoxelUtils.getNormal(position, voxels, buffer);
            final float cos = normal.dot(light.fixedDirection);
            final float gradientOpacity = cos < 0 ? 0 : cos;
            opacity *= gradientOpacity;
            if (cos > 0)
                color = color.clone().shade(cos, colorLUT.diffuse, colorLUT.specular);
        }
        previousVoxel.setTo(currentVoxel);
    }
    if (opacity > 0)
        result.accumulate(color, opacity);
    position.add(rayStep);
    currentVoxel.fromVector(position);
}
Does anyone know how I can convert from image coordinates acquired like this:
private void renderWindowControl1_Click(object sender, System.EventArgs e)
{
    int[] lastPos = this.renderWindowControl1.RenderWindow.GetInteractor().GetLastEventPosition();
    Z1TxtBox.Text = (_Slice1 + 1).ToString();
    X1TxtBox.Text = lastPos[0].ToString();
    Y1TxtBox.Text = (512 - lastPos[1]).ToString();
}
into physical coordinates.
Thanks, Tal
VTK may have an elegant method call, but in general, in order to convert between a click and a physical location, you will need to use the information in your image's Image Plane module (specifically Equation C.7.6.2.1-1):
http://dicom.nema.org/medical/dicom/current/output/html/part03.html#sect_C.7.6.2
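Paraphrasing that section of the standard (my own shorthand, not a verbatim quote), a pixel index pair (i, j), with i the column index and j the row index, maps to the patient-space point

\[ P = S + i\,\Delta_i\,\vec{X} + j\,\Delta_j\,\vec{Y} \]

where \(\vec{X}\) and \(\vec{Y}\) are the row and column direction cosines from Image Orientation (Patient), \(\Delta_i\) and \(\Delta_j\) are the Pixel Spacing values, and \(S\) is the Image Position (Patient).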
Here are some insights I got from working on this project:
int[] lastPos = this.renderWindowControl1.RenderWindow.GetInteractor().GetLastEventPosition();
returns the pixel location of the click in the control. This is a problem because if the user zooms in, lastPos does not represent the location in the DICOM image.
The solution I found was to use the vtkPropPicker class. Code examples can be found here and here.
The image_coordinate values are in world coordinates but without the origin offset, which means that:
1. If we want to get the pixel location (in the 512x512 grid), the x,y values should be normalized by the pixel spacing and image orientation. The values of these parameters can be acquired using the equation mentioned in the answer above (Equation C.7.6.2.1-1):
vtkDICOMImageReader reader;
reader.GetPixelSpacing();
reader.GetImageOrientationPatient();
2. If we need the world physical location, we should add the origin offset for x and y:
reader.GetDataOrigin();
As for the Z axis: I didn't need it, so I am not sure.
That is my two cents on the matter; maybe there are more elegant ways, but I haven't found them.
I'm trying to render the "mount" scene from Eric Haines' Standard Procedural Database (SPD), but the refraction part just doesn't want to co-operate. I've tried everything I can think of to fix it.
This one is my render (with Watt's formula):
(source: philosoraptor.co.za)
This is my render using the "normal" formula:
(source: philosoraptor.co.za)
And this one is the correct render:
(source: philosoraptor.co.za)
As you can see, there are only a couple of errors, mostly around the poles of the spheres. This makes me think that refraction, or some precision error, is to blame.
Please note that there are actually 4 spheres in the scene, their NFF definitions (s x_coord y_coord z_coord radius) are:
s -0.8 0.8 1.20821 0.17
s -0.661196 0.661196 0.930598 0.17
s -0.749194 0.98961 0.930598 0.17
s -0.98961 0.749194 0.930598 0.17
That is, there is a fourth sphere behind the more obvious three in the foreground. It can be seen in the gap left between these three spheres.
Here is a picture of that fourth sphere alone:
(source: philosoraptor.co.za)
And here is a picture of the first sphere alone:
(source: philosoraptor.co.za)
You'll notice that many of the oddities present in both my version and the correct version are missing. We can conclude that these effects are the result of interactions between the spheres; the question is which interactions?
What am I doing wrong? Below are some of the potential errors I've already considered:
Refraction vector formula.
As far as I can tell, this is correct. It's the same formula used by several websites and I verified the derivation personally. Here's how I calculate it:
double sinI2 = eta * eta * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - sqrt(1.0f - sinI2)));
transmit = transmit.normalise();
I found an alternate formula in 3D Computer Graphics, 3rd Ed by Alan Watt. It gives a closer approximation to the correct image:
double etaSq = eta * eta;
double sinI2 = etaSq * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - (sqrt(1.0f - sinI2) / etaSq)));
transmit = transmit.normalise();
The only difference is that I'm dividing by eta^2 at the end.
Total internal reflection.
I tested for this, using the following conditional before the rest of my intersection code:
if (sinI2 <= 1)
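For context, here is a sketch of how that check slots in around the transmission code above. It just recombines the snippets already shown; the comments mark the only parts I am assuming about the surrounding code.

// Sketch combining the snippets above: refract only when there is no
// total internal reflection, otherwise only the reflected ray contributes.
double sinI2 = eta * eta * (1.0f - cosI * cosI);
if (sinI2 <= 1)
{
    Vector transmit = (v * eta) + (n * (eta * cosI - sqrt(1.0f - sinI2)));
    transmit = transmit.normalise();
    // ...spawn and trace the refracted ray...
}
else
{
    // total internal reflection: skip the refracted ray
}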
Calculation of eta.
I use a stack-like approach for this problem:
/* Entering object. */
if (r.normal.dot(r.dir) < 0)
{
    double eta1 = r.iorStack.back();
    double eta2 = m.ior;
    eta = eta1 / eta2;
    r.iorStack.push_back(eta2);
}
/* Exiting object. */
else
{
    double eta1 = r.iorStack.back();
    r.iorStack.pop_back();
    double eta2 = r.iorStack.back();
    eta = eta1 / eta2;
}
As you can see, this stores the previous objects that contained this ray in a stack. When exiting, the code pops the current IOR off the stack and uses it, along with the IOR under it, to compute eta. As far as I know this is the most correct way to do it.
This works for nested transmitting objects. However, it breaks down for intersecting transmitting objects. The problem here is that you need to define the IOR for the intersection independently, which the NFF file format does not do. It's unclear then, what the "correct" course of action is.
Moving the new ray's origin.
The new ray's origin has to be moved slightly along the transmitted path so that it doesn't intersect at the same point as the previous one.
p = r.intersection + transmit * 0.0001f;
p += transmit * 0.01f;
I've tried making this value smaller (0.001f) and (0.0001f) but that makes the spheres appear solid. I guess these values don't move the rays far enough away from the previous intersection point.
EDIT: The problem here was that the reflection code was doing the same thing. So when an object is reflective as well as refractive, the origin of the ray ends up in completely the wrong place.
Number of ray bounces.
I've artificially limited the number of ray bounces to 4. I tested raising this limit to 10, but that didn't fix the problem.
Normals.
I'm pretty sure I'm calculating the normals of the spheres correctly. I take the intersection point, subtract the centre of the sphere and divide by the radius.
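In code that is just the following (a sketch; sphere.centre, sphere.radius and the Vector subtraction/scalar-division operators are assumed names, in the style of the snippets above):

// Outward unit normal at the hit point; dividing by the radius normalises
// the vector because the intersection point lies on the sphere's surface.
Vector normal = (r.intersection - sphere.centre) / sphere.radius;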
Just a guess based on doing an image diff (and without reading the rest of your question). The problem looks to me to be the refraction on the back side of the sphere. You might be:
doing it backwards, e.g. reversing (or not reversing) the indices of refraction.
missing it entirely?
One way to check for this would be to look at the mount through a cube that is almost facing the camera. If the refraction is correct, the picture should be offset slightly but otherwise un-altered. If it's not right, then the picture will seem slightly tilted.
So after more than a year, I finally figured out what was going on here. Clear minds and all that. I was completely off track with the formula. I'm instead using a formula by Heckbert now, which I am sure is correct because I proved it myself using geometry and discrete math.
Here's the correct vector calculation:
double c1 = v.dot(n) * -1;
double c1Sq = pow(c1, 2);
/* Heckbert's formula requires eta to be eta2 / eta1, so I have to flip it here. */
eta = 1 / eta;
double etaSq = pow(eta, 2);

if (etaSq + c1Sq >= 1)
{
    Vector transmit = (v / eta) + (n / eta) * (c1 - sqrt(etaSq - 1 + c1Sq));
    transmit = transmit.normalise();
    ...
}
else
{
    /* Total internal reflection. */
}
In the code above, eta is eta1 (the IOR of the surface from which the ray is coming) over eta2 (the IOR of the destination surface), v is the incident ray and n is the normal.
There was another problem, which confused things some more. I had to flip the normal when exiting an object (which is obvious; I missed it because the other errors were obscuring it).
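A hedged sketch of that fix, in the same notation as above (v is the incident direction, r.normal the outward geometric normal; this is the step just before computing c1):

// Flip the normal when the ray is leaving the object, so that c1 = -v.dot(n)
// stays positive and the transmitted direction points to the correct side.
Vector n = r.normal;
if (v.dot(n) > 0)  /* exiting the object */
{
    n = n * -1;
}
double c1 = v.dot(n) * -1;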
Lastly, my line of sight algorithm (to determine whether a surface is illuminated by a point light source) was not properly passing through transparent surfaces.
So now my images line up properly :)