I am presently trying to use POV-Ray to generate some caustic patterns. I need to design a lens of arbitrary shape. The lens can be thought of as a sheet of glass: the incident surface is a plane, and the refractive surface has varying height. I have saved this varying Z coordinate as a height map. I was wondering what the best way is to create this object and to specify the parameters of the lens, such as its refractive index. I want to place a screen behind the lens, where the desired caustic pattern can be observed.
Thanks!
POV-Ray has a height_field object for this very purpose.
You should be able to use your height map with it directly (assuming it's a grayscale bitmap) and assign whatever refractive index you desire via the object's interior { ior ... } block.
Make an intersection of a box and the height field to trim it into a finite slab with a flat incident face.
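For concreteness, here is a rough scene sketch along those lines (untested; the file name "lens_height.png", the scales, the photon count, and the ior value are all placeholders to tune). It uses POV-Ray's photon mapping, which is what actually produces sharp caustics on the screen:

    // sketch: height-field lens casting a caustic onto a screen
    global_settings {
      photons { count 2000000 }            // photon map drives the caustics
    }

    camera { location <0.5, 3, -3> look_at <0.5, 0, 0.5> }

    light_source {
      <0.5, 20, 0.5> color rgb 1
      photons { refraction on reflection off }
    }

    // The lens: the height field's flat base is the incident surface and
    // its height-mapped top is the refracting surface; intersecting with
    // a box trims it into a clean slab, as suggested above.
    intersection {
      height_field {
        png "lens_height.png"              // your grayscale height map
        smooth
        scale <1, 0.2, 1>                  // limit the relief to 0.2 units
      }
      box { <0, -1, 0>, <1, 1, 1> }
      pigment { color rgbf <1, 1, 1, 1> }  // fully transparent glass
      interior { ior 1.5 }                 // refractive index goes here
      photons { target refraction on reflection off }
    }

    // the screen behind (here: below) the lens where the pattern forms
    plane { y, -2 pigment { color rgb 1 } }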
Related
I have point clouds of different primitive objects (cone, plane, torus, cylinder, sphere, ellipsoid). They all vary in orientation, position, and scaling. Furthermore, each is initialized with its own set of parameters (e.g. height, radius, etc.), so their shapes can be quite different (some cones are tall, others are short and fat).
Now to my question:
I am trying to find the objects' "principal components". Using PCA doesn't lead to good results, since rotated primitives can have their main variation in any direction (which need not be along the length of the object).
The only option I see is to somehow use the symmetry of my primitives. Isn't there a method based on inertia? Maybe some way to find the main symmetry axis and two others perpendicular to it?
Can you give me some advice or point me to papers or implementations (maybe even in Python)?
Thanks a lot, Merlin.
PS: This is what I get if I just apply a PCA. Especially for cones it doesn't really work: only cones that are almost identical in shape share the same orientation, but I need them all to point in one direction (e.g. up).
So you have cones and just need to rotate them all into the same direction?
If so, you can fit a triangle to them and point the peak (found e.g. with the perpendicular bisectors of the sides) along your main axis.
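Another option, if plain PCA already finds the right axes but with arbitrary signs, is to orient each eigenvector by the third moment (skewness) of the points along it; an asymmetric shape like a cone then always ends up pointing the same way. A minimal numpy sketch (untested; the function name is mine):

    import numpy as np

    def oriented_principal_axes(points):
        # points: (N, 3) array; returns a (3, 3) array whose rows are the
        # principal axes, sorted by decreasing variance, each flipped so
        # the skewness of the cloud along it is non-negative.
        centered = points - points.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
        axes = eigvecs[:, np.argsort(eigvals)[::-1]].T
        for i, axis in enumerate(axes):
            if ((centered @ axis) ** 3).mean() < 0:  # flip by third moment
                axes[i] = -axis
        return axes

Note that for shapes with no skew along an axis (cylinders, tori), the sign stays ambiguous, which is fine since for those shapes either direction is equivalent.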
You have an interesting problem. The commonly used shape descriptors (e.g. VFH) that are sensitive to pose rather than shape (which is what you would want, really) would not be invariant to stretching of the shape.
I think to succeed at this you need to be clearer about the invariants that you are trying to maintain when a shape changes. Is it a topological invariant? If so, then here is a good starting point: https://www.google.com.tr/search?q=topologically+invariant+shape+descriptor
I decided to just stick with simple PCA, since it's the only method that is totally generic and doesn't depend on prior (expert) knowledge about the data.
In my Android mapping activity, I have a parallelogram-shaped area, and I want to tell whether points (i.e. LatLng values) are inside it. I've tried using:
bounds = new LatLngBounds.Builder()
    .include(latlngNW)
    .include(latlngNE)
    .include(latlngSW)
    .include(latlngSE)
    .build();
and later
if (bounds.contains(currentLatLng)) {
.....
}
but it is not that accurate. Do I need to create equations for lines connecting the four corners?
Thanks in advance.
LatLngBounds appears to create a lat/lng-aligned box around the points included, so for a rotated parallelogram it also contains area outside the shape. Given the shape I'm trying to monitor is a parallelogram, you do need to create an equation for each edge of the shape and use if statements to determine on which side of each line a point lies.
Not an easy solution!
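For a convex quadrilateral like a parallelogram, those side-of-line tests collapse into one cross-product loop. A rough sketch (treating latitude/longitude as planar coordinates, which is reasonable over small areas; the corners must be passed in order around the shape, e.g. NW, NE, SE, SW):

    // true if p lies inside (or on) the convex polygon given by corners
    static boolean contains(LatLng p, LatLng[] corners) {
        double sign = 0;
        for (int i = 0; i < corners.length; i++) {
            LatLng a = corners[i];
            LatLng b = corners[(i + 1) % corners.length];
            // z-component of (b - a) x (p - a): which side of edge a->b?
            double cross = (b.longitude - a.longitude) * (p.latitude - a.latitude)
                         - (b.latitude - a.latitude) * (p.longitude - a.longitude);
            if (cross != 0) {
                if (sign == 0) sign = Math.signum(cross);
                else if (Math.signum(cross) != sign) return false; // outside
            }
        }
        return true; // p was on the same side of every edge
    }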
If you wish to build a parallelogram-shaped bounding "box" from a collection of points, and you know the desired angles of the parallelogram's sides, your best bet is probably to define a 2D linear shear transform which maps one of those angles to horizontal and the other to vertical. One may then feed the transformed points into normal "bounding box" routines, and feed the corners of the resulting box through the inverse of the above transform to get a bounding parallelogram.
Note that this approach is generally only suitable for parallelograms, not trapezoids. There are a few special cases where it could be used to find bounding trapezoids [e.g. if the top and bottom were horizontal, and the sides were supposed to converge at a known point (x0, y0), one could map x' = (x-x0)/(y-y0)], but for many kinds of trapezoids, the trapezoid formed by inverse-mapping the corners of a horizontal/vertical bounding rectangle may not properly bound the points that are supposed to be within it.
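A sketch of the parallelogram case in plain Java (no mapping API; u and v are the known direction vectors of the two sides, and the returned corners follow the axis-aligned box in sheared (s, t) space):

    // Bound points with a parallelogram whose sides run along u and v.
    static double[][] boundingParallelogram(double[][] pts, double[] u, double[] v) {
        double det = u[0] * v[1] - u[1] * v[0];     // invert the matrix [u v]
        double sMin = Double.POSITIVE_INFINITY, sMax = Double.NEGATIVE_INFINITY;
        double tMin = Double.POSITIVE_INFINITY, tMax = Double.NEGATIVE_INFINITY;
        for (double[] p : pts) {                    // shear into (s, t) space
            double s = ( v[1] * p[0] - v[0] * p[1]) / det;
            double t = (-u[1] * p[0] + u[0] * p[1]) / det;
            sMin = Math.min(sMin, s); sMax = Math.max(sMax, s);
            tMin = Math.min(tMin, t); tMax = Math.max(tMax, t);
        }
        return new double[][] {                     // inverse map: s*u + t*v
            { sMin * u[0] + tMin * v[0], sMin * u[1] + tMin * v[1] },
            { sMax * u[0] + tMin * v[0], sMax * u[1] + tMin * v[1] },
            { sMax * u[0] + tMax * v[0], sMax * u[1] + tMax * v[1] },
            { sMin * u[0] + tMax * v[0], sMin * u[1] + tMax * v[1] },
        };
    }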
I'm not really sure if this fits here or better in a scientific computer-science or math forum, but since I'm searching for a concrete algorithm...
I have a 3D model which is defined either by a mesh or as an algebraic variety, and I want to remesh/approximate this thing using only a fixed, chosen type of congruent tile, e.g. isosceles triangles with a certain ratio of side length to base length. Is there an algorithm for that, or does anyone know the right name for this problem? I found some algorithms that come close to what I need, but they all mesh with some tolerance in the edge lengths and allow tiles of different sizes.
For freeform shapes, tiling is achieved via rather complicated algorithms. In real-world architecture there is a method of tiling with as many identical tiles as possible while still approximating the shape, but there are angle tolerances and all sorts of other tolerances you can manipulate. Look up paneling of freeform shapes.
In working with textures, does "UVW mapping" mean the same thing as "UV mapping"?
If so why are there two terms, and what is the "W"?
If not, what's the difference between them?
[Wikipedia currently isn't illuminating on this question: http://en.wikipedia.org/wiki/Talk:UVW_mapping]
U and V are the coordinates for a 2D map. Adding the W component adds a third dimension.
It's tedious, to say the least, to hand-generate a 3D texture map, but 3D maps are useful if you have a procedural way to generate texture data. E.g. if you wanted your object to look like a solid chunk of marble, it may be easiest to "model" the marble "texture" as a 3D procedural texture and then use 3D coordinates to draw data out of that procedural texture.
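As a toy illustration of the idea (Python; the formula is made up): the "texture" is just a function of position, and you shade each surface point by evaluating it at that point's 3D coordinates, so no unwrapping is involved.

    import math

    def marble(x, y, z):
        # toy solid texture: a 0..1 "marble" intensity for any point in space
        return 0.5 + 0.5 * math.sin(10.0 * x + 4.0 * math.sin(3.0 * y + 2.0 * z))

    # sample the solid texture at a surface point's XYZ position; the grain
    # automatically lines up on every side of the object
    intensity = marble(0.2, -1.3, 0.7)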
UVW is to texture space as XYZ is to world space. Since XYZ was already being used to refer to world coordinates, UV is used to refer to the X and Y (2D) coordinates of a flat map. By extension, the W is the texture-space equivalent of Z.
UVW implies a more complex 2D representation which is, in effect, the skin of the object that has been 'unwrapped' from the 3D object. Imagine a box 'unwrapped': you now have a flat UVW map that you can paint on to your heart's content and then wrap back onto the six-sided box with no distortion. In short, the UVW map knows where to rewrap the x, y, and z points to re-form the box.
Now imagine a sphere 'unwrapped'. You might end up with something like a Mercator projection. The hitch is that when you wrap this 2D representation back onto the sphere, you will get some distortion.
The term UV mapping is very commonly used. I don't hear the term UVW as often except as described above.
The term procedural mapping can be misleading. Simply put, it means the computer follows some algorithm to paint a realistic representation of a material, like wood, onto the object, giving you the impression that the grain travels completely through the wood, so it can be seen properly on both sides of the object. Procedural mapping can use images or not, or a combination of approaches... it all depends on the 'procedure'.
Lastly, there is no requirement to transform a '3D procedural texture' to 'UVW' first, since UVW and XYZ mean effectively the same thing: they refer either to the world, or to an unwrapped image of an object in the world, or for that matter to a 'chunk' of the world, as in the sky. The point is that UV or UVW refers to image/texture mapping.
I'm learning XNA by doing and, as the title states, I'm trying to see if there's a way to fill a 2D area that is defined by a collection of vertices on a plane. I want to fill it with a color, not a file-based texture.
For an example, take a rounded rectangle whose vertices are defined by four quarter-circle triangle fans. The vertices are defined by building a collection of triangles, but the triangles may not be adjacent.
Additionally, I would like to fill it with more than a single color -- e.g. divide the bounded area into four vertical bands and give each a different color. You don't have to provide the code; pointing me towards resources will help a great deal. I can be handy with Google (which I did try first, but failed miserably).
This is as much an exploration into "what's appropriate for XNA" as it is the implementation of it. Being pretty new to XNA, I'm wanting to also learn what should and shouldn't be done on top of what can and can't be done.
Not too much, but here's a start:
The color fill is accomplished by using a shader. Riemer's XNA Tutorials on pixel shaders are a great resource on the topic.
You need to calculate the geometry and build up vertex buffers to hold it. Note that all vector geometry in XNA is in 3D, but using a camera fixed to a plane will simulate 2D.
To add different colors to different triangles, you basically need to group the geometry into separate vertex buffers. Then, using a shader with a color parameter, set the appropriate color for each buffer before passing it to the graphics device. Alternatively, you can use a vertex format containing color information, which lets you assign a color to each vertex.
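A minimal sketch of the per-vertex-color route (XNA 4.0 style; the coordinates and colors are placeholders, and this belongs in your Draw code):

    // BasicEffect can consume per-vertex colors directly, no custom shader needed
    var effect = new BasicEffect(GraphicsDevice)
    {
        VertexColorEnabled = true,                      // use the color channel
        Projection = Matrix.CreateOrthographicOffCenter(
            0, GraphicsDevice.Viewport.Width,
            GraphicsDevice.Viewport.Height, 0, 0, 1)    // camera fixed to a plane
    };

    var vertices = new[]
    {
        new VertexPositionColor(new Vector3(100, 100, 0), Color.Red),
        new VertexPositionColor(new Vector3(300, 100, 0), Color.Green),
        new VertexPositionColor(new Vector3(200, 300, 0), Color.Blue),
    };

    foreach (var pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();
        GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, 1);
    }

For the banded fill, either split the triangles at the band boundaries and give each band's vertices its own color, or keep one buffer per band and set the shader's color parameter between draws.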