I have an array of mouse points, a stroke width, and a softness. I can draw soft circles and soft lines. Which algorithm should I use for drawing my array of points? I want crossed lines to look nice as well as end points.
I would definitely choose Bezier curves for this purpose, and in particular I would implement a piecewise cubic Bezier - it is easy to implement and to grasp, and it is widely used, for example in 3D Studio Max and Photoshop.
Here is a good source for it:
http://local.wasp.uwa.edu.au/~pbourke/surfaces_curves/bezier/cubicbezier.html
Assuming the points are ordered, set the four control points as follows.
First define the tangents at point P[i] and at point P[i+1]:
T1 = (P[i+1] - P[i-1])
T2 = (P[i+2] - P[i])
Then, to create the piecewise segment between the two points, use:
Control Point Q1: P[i]
Control Point Q2: the point lying along the tangent from Q1 => Q1 + 0.3T1
Control Point Q3: the point lying along the tangent to Q4 => Q4 - 0.3T2
Control Point Q4: P[i+1]
The factor 0.3 in 0.3T is arbitrary; it gives the tangents enough 'strength' without overshooting. You can use more elaborate methods that also take care of acceleration (C2 continuity).
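A minimal Python sketch of this scheme, assuming the tangents are simply clamped at the stroke's endpoints (my own choice for handling P[i-1] and P[i+2] at the ends) and using the same 0.3 factor:

def cubic_bezier(q1, q2, q3, q4, t):
    # evaluate a cubic Bezier at parameter t in [0, 1]
    s = 1.0 - t
    x = s**3 * q1[0] + 3 * s**2 * t * q2[0] + 3 * s * t**2 * q3[0] + t**3 * q4[0]
    y = s**3 * q1[1] + 3 * s**2 * t * q2[1] + 3 * s * t**2 * q3[1] + t**3 * q4[1]
    return (x, y)

def smooth_stroke(points, samples_per_segment=16, k=0.3):
    # turn an ordered list of mouse points into a denser, smoothed polyline
    out = []
    n = len(points)
    for i in range(n - 1):
        p_prev = points[max(i - 1, 0)]                # clamp at the stroke ends
        p0, p1 = points[i], points[i + 1]
        p_next = points[min(i + 2, n - 1)]
        t1 = (p1[0] - p_prev[0], p1[1] - p_prev[1])   # tangent at p0
        t2 = (p_next[0] - p0[0], p_next[1] - p0[1])   # tangent at p1
        q1 = p0
        q2 = (p0[0] + k * t1[0], p0[1] + k * t1[1])
        q3 = (p1[0] - k * t2[0], p1[1] - k * t2[1])
        q4 = p1
        for s in range(samples_per_segment):
            out.append(cubic_bezier(q1, q2, q3, q4, s / samples_per_segment))
    out.append(points[-1])
    return out

The resulting dense point list can then be stamped with the soft brush.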
Enjoy
Starting from Gooch & Gooch's Non-Photorealistic Rendering, you might find Pham's work useful - see the PDF explaining the algorithm.
There's a nice overview article by Tateosian which explains the additional techniques in less detail, with pretty pictures. Bezier curve drawing alone doesn't produce the effects you want (depending on how fancy you want to get), but I'd certainly start with Paul's work and see whether drawing it with your soft brush is good enough.
Be warned there are lots of patents in this space, sigh.
I think maybe you're looking for a spline algorithm.
Here is a spline tutorial, which you might find helpful:
http://www.doc.ic.ac.uk/~dfg/AndysSplineTutorial/index.html
The subject is also covered in most books on graphics programming.
Cheers.
I figured it out - use a very soft gradient circle, draw repeatedly to make a stroke, blend using multiply.
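For anyone wanting to try that approach, here is a rough NumPy sketch (the falloff shape, the 0.5 darkening factor, and the assumption that stamps stay inside the canvas are all my own choices):

import numpy as np

def soft_stamp(radius, softness):
    # a (2r+1) x (2r+1) multiply-blend stamp: darkest at the centre, 1.0 at the edge
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    d = np.sqrt(x * x + y * y) / radius
    alpha = np.clip(1.0 - d, 0.0, 1.0) ** softness   # soft gradient circle
    return 1.0 - 0.5 * alpha

def draw_stroke(canvas, points, radius=8, softness=2.0):
    # canvas: float array initialised to 1.0 (white); points: the smoothed stroke
    stamp = soft_stamp(radius, softness)
    for px, py in points:
        x0, y0 = int(px) - radius, int(py) - radius
        region = canvas[y0:y0 + stamp.shape[0], x0:x0 + stamp.shape[1]]
        region *= stamp[:region.shape[0], :region.shape[1]]   # multiply blend
    return canvas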
I am looking for an algorithm that, given two meshes, can clip one using the other.
The simplest form of this is clipping a mesh using a plane. I've already implemented that by following something similar to what is described here.
What it does is basically inspect all mesh vertices and triangles with respect to the plane (the plane's normal and a point on it are given). If a triangle lies completely above the plane, it is left untouched. If it falls completely below the plane, it is discarded. If some of the triangle's edges intersect the plane, the intersection points are computed and added as new vertices. Finally, a cap is generated to close the hole where the mesh was cut.
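For reference, a small NumPy sketch of that per-triangle step (only the vertex classification and the edge/plane intersections; re-triangulating the result, capping the hole, and robust handling of vertices lying exactly on the plane are left out):

import numpy as np

def clip_triangle(tri, plane_point, plane_normal, eps=1e-9):
    # tri: (3, 3) array of vertices; returns the part of the triangle above the plane
    d = (tri - plane_point) @ plane_normal           # signed distance of each vertex
    if np.all(d >= -eps):
        return tri                                   # completely above: keep as is
    if np.all(d <= eps):
        return None                                  # completely below: discard
    kept = []
    for i in range(3):
        j = (i + 1) % 3
        if d[i] >= -eps:
            kept.append(tri[i])                      # vertex stays
        if (d[i] > eps) != (d[j] > eps):             # edge crosses the plane
            t = d[i] / (d[i] - d[j])
            kept.append(tri[i] + t * (tri[j] - tri[i]))
    return np.array(kept)                            # 3 or 4 vertices to re-triangulate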
The problem is that the algorithm assumes that the plane is unlimited, therefore whatever is in its path is clipped. In the simplest form, I need an extension of this without the assumption of a plane of "infinite" size.
To clarify, imagine that we have a 3D model of a desk with 2 boxes on it. The boxes are adjacent (but not touching or stacked). The user will define a cutting plane of a limited width and height underneath the first box and performs the cut. We end up with a desk model (mesh) with a box on it and another box (mesh) that can be freely moved around/manipulated.
In the general form, I'd like the user to be able to define a bounding box for the box he/she wants to separate from the desk model and perform the cut using that bounding box.
If I could extend the algorithm I already have to an algorithm with limited-sized planes, that would be great for now.
What you're looking for are constructive solid geometry/boolean algorithms with arbitrary meshes. It's considerably more complex than slicing meshes by an infinite plane.
Among the earliest and simplest research in this area, and a good starting point, is Constructive Solid Geometry for Polyhedral Objects by Laidlaw, Trumbore, and Hughes:
http://cs.brown.edu/~jfh/papers/Laidlaw-CSG-1986/main.htm
More elaborate solutions build on this with a variety of data structures.
The real complexity of the operation lies in the slicing algorithm to slice one triangle against another. The nightmare of implementing robust CSG lies in numerical precision. It's easy when you involve objects far more complex than a cube to run into cases where a slice is made just barely next to a vertex (at which point you have the tough decision of merging the new split vertex or not prior to carrying out more splits), where polygons are coplanar (or almost), etc.
So I suggest initially erring on the side of very high-precision floating-point numbers, possibly even higher than double precision, and focusing on getting something working correctly and robustly. You can optimize later (a first pass should be to use an accelerator like an octree/kd-tree/BVH), but you'll avoid many headaches this way in your first iteration.
This is vastly simpler to implement at render time if you're writing a raytracer rather than modeling software. With raytracers, all you have to do for this kind of arbitrary clipping is treat the object being subtracted as though its polygons were flipped during the culling process. It's easy to solve robustly at the ray level, but quite a bit harder to do robustly at the geometric level.
Another thing you can do to make your life so much easier if you can afford it is to voxelize your object, find subtractions/additions/unions of voxels, and then translate the voxels back into a mesh. This is so much easier to make robust, but harder to do efficiently and the voxel->polygon conversion can get quite involved if you want better results than what marching cubes provide.
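As a toy illustration of why the voxel route is so much easier to make robust: once both meshes are voxelised into boolean occupancy grids of the same resolution, the boolean operations themselves become trivial - the voxelisation and the conversion back to a mesh are the hard parts. The grids below are just made-up stand-ins:

import numpy as np

a = np.zeros((64, 64, 64), dtype=bool)       # voxelised mesh A (stand-in)
b = np.zeros((64, 64, 64), dtype=bool)       # voxelised mesh B (stand-in)
a[8:40, 8:40, 8:40] = True
b[24:56, 24:56, 24:56] = True

union        = a | b
intersection = a & b
difference   = a & ~b                        # "clip A using B"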
It's a really tough area to do extremely well and it requires perseverance, which is why things like this exist: http://carve-csg.com/about.
If someone is interested, there is currently a solution for this problem in the CGAL library. It allows clipping one triangular mesh using another mesh as the bounding volume. A usage example can be found here.
I have got point clouds of different primitive objects (cone, plane, torus, cylinder, sphere, ellipsoid). They all vary in orientation, position and scaling. Furthermore, all of them are initialized with a unique set of parameters (e.g. height, radius, etc.) so that their shape can be quite different (some cones are tall, others are short and fat).
Now to my question:
I am trying to find the objects' "principal components". Using PCA doesn't lead to good results, since rotated primitives can have their main variation in any direction (which doesn't necessarily have to be along the length of the object).
The only chance I see is to somehow use the symmetry of my primitives. Isn't there a method based on inertia? Maybe some way to find the main symmetry axis and two others perpendicular to it?
Can you give me some advice or point me to papers or implementations (maybe even python)?
Thanks a lot, Merlin.
PS: This is what I get if I only apply a PCA. Especially for cones this doesn't really work. Only cones that are almost identical in shape share the same orientation, but I need them all to point in one direction (e.g. up).
So you got cones and just need to rotate them all in the same direction?
If so, you can fit a triangle to them and point the apex (found, e.g., with the perpendicular bisectors of the sides) along your main axis.
You have an interesting problem. Commonly used shape descriptors (e.g. VFH) that are invariant to shape but not to pose (which is what you would really want) would not be invariant to stretching of the shape.
I think to succeed at this you need to be clearer about the invariants that you are trying to maintain when a shape changes. Is it a topological invariant? If so, then here is a good starting point: https://www.google.com.tr/search?q=topologically+invariant+shape+descriptor
I decided to just stick to simple PCA since it's the only method that is totally generic and doesn't depend on prior (expert) knowledge about the data.
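For completeness, a minimal NumPy sketch of that approach: the eigenvectors of the point cloud's covariance matrix (closely related to the inertia tensor) give candidate axes, sorted by variance. Note the inherent sign ambiguity - each axis is only defined up to a flip, which is part of why the cones end up pointing in different directions.

import numpy as np

def principal_axes(points):
    # points: (N, 3) array; returns a (3, 3) array with the axes as rows,
    # largest-variance direction first
    centred = points - points.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order].T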
I am using Java to write a very primitive 3D graphics engine based on The Black Art of 3D Game Programming from 1995. I have gotten to the point where I can draw single color polygons to the screen and move the camera around the "scene". I even have a Z buffer that handles translucent objects properly by sorting those pixels by Z, as long as I don't show too many translucent pixels at once. I am at the point where I want to add lighting. I want to keep it simple, and ambient light seems simple enough, directional light should be fairly simple too. But I really want point lighting with the ability to move the light source around and cast very primitive shadows ( mostly I don't want light shining through walls ).
My problem is that I don't know the best way to approach this. I imagine a point light source casting rays at regular angles, and if one of these rays intersects a polygon it will light that polygon and stop moving forward. However, when I think about a scene with multiple light sources and multiple polygons with all those rays, I imagine it will get very slow. I also don't know how to handle the case where a polygon is far enough away from a light source that it falls in between two rays. I would give each light source a maximum distance, and if I gave it enough rays, then there should be no point within that distance where two rays are far enough apart to miss a polygon, but that only increases my problem with the number of calculations to perform.
My question to you is: Is there some trick to point light sources to speed them up, or just to organize them better? I'm afraid I'll just get a nightmare of nested for loops. I can't use OpenGL or Direct3D or any other cheats because I want to write my own.
If you want to see my results so far, here is a youtube video. I have already fixed the bad camera rotation. http://www.youtube.com/watch?v=_XYj113Le58&feature=plcp
Lighting for real-time 3D applications is (or rather, has in the past generally been) done with very simple approximations - see http://en.wikipedia.org/wiki/Shading. Shadows are expensive, and in rasterizing 3D engines they have generally been accomplished via shadow maps and shadow volumes. Point lights make shadows even more expensive.
Dynamic real-time light sources have only recently become a common feature in games, simply because they place such a heavy burden on the rendering system, and those games lean on dedicated graphics cards. So I think you may struggle to get good performance out of your engine if you decide to include dynamic, shadow-casting point lights.
Today it is commonplace for lighting to be applied in two ways:
Traditionally this has been "forward rendering". In this method, for every vertex (if you are doing the lighting per vertex) or fragment (if you are doing it per-pixel) you would calculate the contribution of each light source.
More recently, "deferred" lighting has become popular, wherein the geometry and extra data like normals and colour info are all rendered to intermediate buffers, which are then used to calculate lighting contributions. This way, the lighting calculations do not depend on the geometry count. It does, however, have a lot of other overhead.
There are a lot of options. Implementing anything much more complex than some of the basic models that have been used by dedicated graphics cards over the past couple of years is going to be challenging, however!
My suggestion would be to start out with something simple - basic lighting without shadows. From there you can extend and optimize.
What are you doing the ray-triangle intersection test for? Are you trying to light only triangles that the light would reach? Ray-triangle intersections for every light with every poly are going to be very expensive, I think. For lighting without shadows, you would typically just iterate through every face (or, if you are doing it per vertex, through every vertex) and calculate and add the lighting contribution per light - you would do this just before you start rasterizing, since you have to pass through all polys in any case.
You can calculate the lighting by making use of any illumination model, something very simple like Lambertian reflectance, which shades the surface based upon the dot product of the surface normal and the direction vector from the surface to the light. Make sure your vectors are in the same space! This is possibly why you are getting the strange results that you are: if your surface normal is in world space, be sure to calculate the light vector in world space too. There are advantages to calculating lighting in certain spaces; you can have a look at that later on, but for now I suggest you just get the basics up and running. Also have a look at Blinn-Phong - this is the shading model graphics cards used for many years.
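As a sketch of just that Lambertian term (plain Python, per vertex, one point light; the normal is assumed to be already normalised, and attenuation and specular terms are left out):

import math

def lambert(vertex, normal, light_pos, light_intensity):
    # direction from the surface point towards the light, normalised
    l = [light_pos[i] - vertex[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in l)) or 1.0
    l = [c / length for c in l]
    # dot product of the surface normal and the light direction,
    # clamped so back-facing surfaces receive no light
    n_dot_l = sum(normal[i] * l[i] for i in range(3))
    return max(n_dot_l, 0.0) * light_intensity

The contributions of all lights are summed per vertex (or per pixel) and then multiplied with the surface colour.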
For lighting with shadows - look into the links I posted. They were developed because realistic lighting is so expensive to calculate.
By the way, LaMothe had a follow-up book called Tricks of the 3D Game Programming Gurus: Advanced 3D Graphics and Rasterization.
This takes you through every step of programming a 3d engine. I am not sure what the black art book covers.
What kind of algorithms would generate random "goo balls" like those in World of Goo? I'm using Processing, but any generic algorithm would do.
I guess it boils down to how to "randomly" make balls that are kind of round, but not perfectly round, and still look realistic?
Thanks in advance!
The thing that makes objects realistic in World of Goo is not their shape, but the fact that the behavior of objects is a (more or less) realistic simulation of 2D physics, especially
bending, stretching, compressing (elastic deformation)
breaking due to stress
and all of the above with proper simulation of dynamics, with no perceivable shortcuts
So, try to make the behavior of your objects realistic and that will make them look (feel) realistic.
Not sure if this is what you're looking for since I can't look at that site from work. :)
A circle is just a special case of an ellipse, where the major and minor axes are equal. A squished ball shape is an ellipse where one of the axes is longer than the other. You can generate different lengths for the axes and rotate the ellipse around to get these kinds of irregular shapes.
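A quick NumPy sketch of that idea - random semi-axes plus a random rotation give a family of "squished ball" outlines (the parameter ranges are arbitrary):

import numpy as np

def random_ellipse(n_points=64, rng=None):
    rng = rng or np.random.default_rng()
    a = rng.uniform(0.8, 1.2)                  # semi-major axis
    b = rng.uniform(0.5, 1.0)                  # semi-minor axis
    phi = rng.uniform(0.0, np.pi)              # orientation
    t = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    x, y = a * np.cos(t), b * np.sin(t)
    xr = x * np.cos(phi) - y * np.sin(phi)     # rotate the whole outline
    yr = x * np.sin(phi) + y * np.cos(phi)
    return np.column_stack((xr, yr))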
Maybe metaballs (wiki) are something to start from, but I'm not sure.
Otherwise I would suggest a particle approach, in which a ball is composed of many particles that stick together, giving it an irregular outline (mind that this needs a minimal physics engine to handle the spring forces that keep all the particles together).
As Unreason said, World of Goo is not so much about shape, but physics simulation.
But an easy way to create ball-like irregular shapes could be to start with n vertices (points) V_1, V_2 ... V_n on a circle and apply some random deformation to them. There are many ways to do that, ranging from simply moving some single vertices around to complex physical simulations.
Some ideas:
1) Choose a random vertex V_i, choose a random vector T, apply that vector as a translation (movement) to V_i, and apply T to all other vertices V_j too, but scaled down depending on the "distance" from V_i (where distance could be the absolute difference between j and i, or the actual geometric distance from V_j to V_i). For the scaling factor you could use any function f with f(0) = 1 that decreases with increasing distance (basically a radial basis function); see the sketch after these ideas.
for each V_j:
    V_j = V_j + scalingFactor(distance(V_i, V_j)) * T
2) You move V_i as in 1), but now you simulate spring-like connections between all neighbouring vertices and iteratively move all vertices based on the forces created by the stretched springs.
3) For more round shapes you can do 1) or 2) on the control points of a B-spline curve.
Beware of self-intersections when you move vertices too much.
Just some rough ideas, not tested...
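A rough NumPy sketch of idea 1), using a Gaussian falloff over the wrap-around index distance as the radial basis function (all parameter values are made up):

import numpy as np

def goo_outline(n=48, bumps=5, strength=0.25, sigma=4.0, rng=None):
    rng = rng or np.random.default_rng()
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    v = np.column_stack((np.cos(t), np.sin(t)))            # unit-circle vertices
    idx = np.arange(n)
    for _ in range(bumps):
        i = rng.integers(n)                                # random vertex V_i
        translation = rng.normal(scale=strength, size=2)   # random vector T
        d = np.minimum(np.abs(idx - i), n - np.abs(idx - i))   # index distance, wrapping around
        falloff = np.exp(-(d / sigma) ** 2)                # 1 at V_i, decaying outwards
        v += falloff[:, None] * translation
    return v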
I'm toying with the idea of volumetric particles. By 'volumetric' I don't mean an actual 3D model per particle - that is usually more expensive and harder to blend with other particles. What I mean is 2D particles that look as close to volumetric as possible.
Right now what I/we have tried is particles with an additional local Z texture (spherical, for example), and we compute the alpha transparency from the combination of the alpha value and the Z proximity, which benefits from the fact that the particle does not have a single planar Z.
I think a cool addition would be interaction with lighting (and shadows as well), but here the question is what the lighting formula should look like, taking transparency into account - let's assume that we are talking about smoke and dust/clouds and not additive blending. Any suggestions would be welcome.
I also thought about adding normals, so I could actually squeeze everything into two textures:
Diffuse & Alpha texture.
Normal & 256 level precision Z channel texture.
I ask this question to see what other directions can be thought of and to get your ideas regarding the proper lighting equation that might be used.
It sounds like you are asking for information on techniques for the simulation of participating media: "Participating media may absorb, emit and/or scatter light. The simplest participating medium only absorbs light. That means that light passing through the medium is attenuated depending on the density of the medium."
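The absorption-only case described in that quote is just the Beer-Lambert law; a tiny sketch of how the attenuation might be evaluated per particle (the names and values here are illustrative, not from any particular engine):

import math

def transmittance(absorption_coeff, density, thickness):
    # fraction of light that survives a straight path through the medium
    return math.exp(-absorption_coeff * density * thickness)

# e.g. the opacity used when compositing a smoke particle of some depth:
# alpha = 1.0 - transmittance(sigma_a, local_density, particle_depth)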
Here are some links to example images and to Frisvad, Christensen, and Jensen's SIGGRAPH 2007 paper (including the PDF).
A nice paper on using spherical billboards to represent volumetric effects:
http://www.iit.bme.hu/~szirmay/firesmoke_link.htm
It doesn't handle participating media, though.
See Volume Rendering and Voxel.