I am trying to create a sphere with lines and dots on it. But since I plot the sphere, the lines, and the points separately in the code, I don't think Mayavi treats them as a single figure, so the lines and points on the rear face of the sphere are also visible. Is there a way to make the whole scene opaque and show only one side at a time? (Setting opacity only makes an individual element, i.e. the sphere, a line, or a point, opaque; it does not affect the figure as a whole.)
This is what I am getting now:
This is what I want it to look like. As I rotate this sphere, it should only show the points on the surface that is facing us.
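For context, here is a simplified sketch of the kind of setup I mean, with the sphere, the points, and a line created as separate Mayavi objects (the coordinates are made up for illustration):

    import numpy as np
    from mayavi import mlab

    # The sphere surface, built from spherical coordinates
    phi, theta = np.mgrid[0:np.pi:50j, 0:2 * np.pi:50j]
    x = np.sin(phi) * np.cos(theta)
    y = np.sin(phi) * np.sin(theta)
    z = np.cos(phi)
    mlab.mesh(x, y, z, color=(0.8, 0.8, 0.8), opacity=1.0)

    # A few dots on the sphere, plotted as a separate object
    pts = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -1.0]])
    mlab.points3d(pts[:, 0], pts[:, 1], pts[:, 2], scale_factor=0.08, color=(1, 0, 0))

    # A great-circle arc, again a separate object
    t = np.linspace(0.0, np.pi, 100)
    mlab.plot3d(np.cos(t), np.zeros_like(t), np.sin(t), tube_radius=0.01, color=(0, 0, 1))

    mlab.show()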
Related
I'm not sure how to use a geometry shader to solve this problem:
I have two radii of a circle; each radius is a line, and each line is made up of a list of points, where every point has a color.
How can I fill the area between the two lines with lines whose point colors are interpolated between the colors of the corresponding points (at equal radius) on the two lines?
This may be complicated to describe, so I'll show it with a picture:
gradient radii
I think the best way to solve this is with a geometry shader, but if you know a better approach with good performance, I'm happy to hear it.
Thanks.
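To make the fill I mean concrete, here is a small CPU-side sketch that blends both position and color between corresponding points of the two radii (made-up data, NumPy only for illustration; I would like a geometry shader to emit these intermediate lines instead):

    import numpy as np

    # Hypothetical input: two radii, each a list of points with a color per point
    line_a_pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
    line_a_col = np.array([[1.0, 0.0, 0.0], [1.0, 0.5, 0.0], [1.0, 1.0, 0.0]])
    line_b_pts = np.array([[0.0, 0.0], [0.7, 0.7], [1.4, 1.4]])
    line_b_col = np.array([[0.0, 0.0, 1.0], [0.0, 0.5, 1.0], [0.0, 1.0, 1.0]])

    def fill_between(pa, ca, pb, cb, steps=10):
        """Generate intermediate lines: each one blends the corresponding
        points and colors of the two radii by the same factor t."""
        lines = []
        for i in range(1, steps):
            t = i / steps
            pts = (1.0 - t) * pa + t * pb
            col = (1.0 - t) * ca + t * cb
            lines.append((pts, col))
        return lines

    for pts, col in fill_between(line_a_pts, line_a_col, line_b_pts, line_b_col):
        print(pts[-1], col[-1])  # outermost point and color of each generated line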
I am working on a 3d application and am currently looking for a way to project a line segment defined by two points in screen-space onto a three-dimensional polygonal mesh (in my case a triangle mesh). The goal is to find the intersection points in world-space of the line segment with the edges of the mesh.
I can only think of two ways to do this, but neither is ideal. The first is to sample the line segment (in screen-space) at small intervals and ray trace at those intervals to find the world-space coordinates where the ray hits the mesh, but this does not easily give me the intersection points of the line segment with the mesh edges.
The other way I can think of is to somehow back-project the mesh into screen-space, find the intersections there (in 2d) and then project those intersection points back to 3d. The problem with this is that the screen-space coordinate system may change between the selection of the first and second endpoints of the line segment (due to moving the camera).
If any of that was confusing, then here is an image that approximately shows what I'm trying to do (the white dots indicate the points that I want to find). However, in my case the yellow curve is simply a line segment.
[Yunjin Lee, et al. "Mesh scissoring with minima rule and part salience." 2005]
Any help is very much appreciated.
Here's my suggestion:
Project the screen line into world space (getting a plane in world space).
Intersect the plane with the triangles in the mesh, getting a set of edges.
Add the edges to a data structure that keeps only the parts of the edges that are closest to the camera plane (see the diagram below, in which the red line segments and their endpoints are the ones we want to keep). This is like building up an image via a Z-buffer, except that because we know that this set is piecewise linear, we don't have to rasterize it, we can just maintain a sorted list of endpoints.
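A rough sketch of the second step, clipping one triangle against the world-space plane (the plane is given as a point and a normal; the function name is mine):

    import numpy as np

    def plane_triangle_intersection(p0, n, tri, eps=1e-9):
        """Return the segment (two points) where the plane (point p0, normal n)
        cuts the triangle tri (a 3x3 array of vertices), or None if it misses."""
        d = np.dot(tri - p0, n)              # signed distance of each vertex
        points = []
        for i in range(3):
            a, b = tri[i], tri[(i + 1) % 3]
            da, db = d[i], d[(i + 1) % 3]
            if abs(da) < eps:                # vertex lies on the plane
                points.append(a)
            elif da * db < 0:                # edge crosses the plane
                t = da / (da - db)
                points.append(a + t * (b - a))
        # Keep two distinct points if we found them
        uniq = []
        for p in points:
            if not any(np.allclose(p, q) for q in uniq):
                uniq.append(p)
        return (uniq[0], uniq[1]) if len(uniq) >= 2 else None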
I have two objects: a sphere and an object that I created using surface reconstruction, so we do not know the object's equation. I want to know the intersecting points on the sphere when the object and the sphere intersect. If we had a sphere and a cylinder, we could solve the equations and figure out the area and all that, but the problem here is that the object is not uniform.
Is there a way to find out the intersecting points or area on the sphere?
I'd start by finding the intersection of triangles with the sphere. First find the intersection of each triangle's plane and the sphere, which gives a circle. Then find the circle's intersection/s with the triangle edges in 2D using line/circle tests. The result will be many arcs which I guess you could approximate with lines. I'm not really sure where to go from here without knowing the end goal.
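A small sketch of that first step, intersecting a triangle's plane with the sphere to get the circle's centre and radius (the names are mine):

    import numpy as np

    def plane_sphere_circle(plane_point, plane_normal, sphere_center, sphere_radius):
        """Intersect a plane with a sphere.

        Returns (circle_center, circle_radius), or None if the plane misses
        the sphere."""
        n = plane_normal / np.linalg.norm(plane_normal)
        # Signed distance from the sphere centre to the plane
        dist = np.dot(sphere_center - plane_point, n)
        if abs(dist) > sphere_radius:
            return None
        circle_center = sphere_center - dist * n
        circle_radius = np.sqrt(sphere_radius ** 2 - dist ** 2)
        return circle_center, circle_radius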
If it's surface area you're after, maybe a numerical approach would be better. I'd cover the sphere in points and count the number inside the non-uniform object. To find if a point is inside, maybe trace outwards and count the intersections with the surface (if it's odd, the point is inside). You could use the stencil buffer for this if you wanted (similar to stencil shadows).
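A rough sketch of the point-scattering idea, with a basic ray/triangle parity test in plain NumPy (a stencil-buffer version would do the same counting on the GPU; `triangles` here is assumed to be an (N, 3, 3) array of the reconstructed surface):

    import numpy as np

    def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
        """Moller-Trumbore ray/triangle test; True if the ray hits the triangle."""
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = np.dot(e1, p)
        if abs(det) < eps:
            return False
        inv = 1.0 / det
        s = origin - v0
        u = np.dot(s, p) * inv
        if u < 0 or u > 1:
            return False
        q = np.cross(s, e1)
        v = np.dot(direction, q) * inv
        if v < 0 or u + v > 1:
            return False
        return np.dot(e2, q) * inv > eps

    def fraction_of_sphere_inside(center, radius, triangles, samples=2000, seed=0):
        """Scatter points on the sphere and count how many fall inside the mesh
        (an odd number of ray crossings means the point is inside)."""
        rng = np.random.default_rng(seed)
        dirs = rng.normal(size=(samples, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        pts = center + radius * dirs
        ray_dir = np.array([1.0, 0.0, 0.0])      # arbitrary fixed ray direction
        inside = 0
        for p in pts:
            crossings = sum(ray_hits_triangle(p, ray_dir, *tri) for tri in triangles)
            inside += crossings % 2
        return inside / samples  # multiply by 4*pi*r^2 for an area estimate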
If you want the volume of intersection a quick google search gives "carve", a mesh based CSG library.
Starting with triangles versus the sphere will give you the points of intersection.
You can take the arcs of intersection with each surface and combine them to make fences around the sphere. Ideally your reconstructed object will be in winged-edge format so you could just step from one fence segment to the next, but with reconstructed surfaces I guess you might need to apply some slightly fuzzy logic.
You can determine which side of each fence is inside the reconstructed object and which side is out by factoring in the surface normals along the fence.
You can then cut the sphere along the fences and add the internal bits to the display.
For the other side of things you could remove any triangle completely inside the sphere and cut those that intersect.
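To illustrate the normal test mentioned above: the object's surface normal points out of it, so the tangent on the sphere that is perpendicular to the fence and has a negative dot product with that normal faces into the object (my own sketch of that step):

    import numpy as np

    def inside_side(fence_point, fence_dir, sphere_center, object_normal):
        """Return the unit tangent on the sphere, perpendicular to the fence at
        fence_point, that points into the reconstructed object (i.e. against
        the object's outward surface normal)."""
        radial = fence_point - sphere_center
        radial /= np.linalg.norm(radial)           # sphere normal at the point
        side = np.cross(radial, fence_dir)         # tangent, perpendicular to fence
        side /= np.linalg.norm(side)
        return side if np.dot(side, object_normal) < 0 else -side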
I have a program which is able to parse and interpret the OBJ file format in an OpenGL context.
I created a little project in Blender containing a simple sphere with 'Hair' particles on it.
After conversion (separating the particles from the sphere), my particles form a new mesh. So I have two meshes in my project (named 'Sphere' and 'Hair'). When I export the 'Sphere' mesh to an OBJ file (File/Export/Wavefront (.obj)) with 'Include Normals' selected, the file contains all the normal information (e.g. vn 0.5889 0.14501 0.45455, ...).
When I try to do the same thing with the particles, also selecting 'Include Normals', I get no normals in the OBJ file. (Before exporting I selected the right mesh.)
So I don't understand why normal properties are not exported for a particle mesh.
Above is the general Blender render of my hair particles. As you can see, all the particles react to the light, so Blender does use normal properties for those particles.
The picture above shows (in Blender's 'Edit mode', after conversion) that the particles are made up of several lines. In my OpenGL program I use GL_LINES to render the same particles. I just want normal information so I can handle lighting on my particles.
Do you have an idea how to export normal properties for particle meshes?
Thanks in advance for your help.
You are trying to give normals to lines. Let's think about what that means.
When we talk about normal vectors on a surface, we mean vectors pointing out of the surface.
For triangles, once we define one side to be the "front" face, there is exactly one normal. For lines, any vector perpendicular to the line counts as a normal: there are infinitely many, and any one will "do".
What are some reasons we care about normals in graphics?
Lighting: e.g. diffuse lighting is approximated by using the dot product of the normal with the incident light vector. This doesn't apply to hair though!
Getting a transformation matrix: for this you can pick any normal (do you want to transform into hair-space?)
In short: you either can pick any perpendicular vector for your normal (it's easy to calculate this) or just not use normals at all for your hair. It depends on what you are trying to do.
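If you do go with an arbitrary perpendicular vector, one common trick is to cross the line direction with the coordinate axis it is least aligned with, for example (a generic sketch, nothing Blender- or OBJ-specific):

    import numpy as np

    def any_normal(line_dir):
        """Return an arbitrary unit vector perpendicular to line_dir."""
        d = line_dir / np.linalg.norm(line_dir)
        # Pick the coordinate axis least aligned with the line to avoid a
        # degenerate cross product
        axis = np.zeros(3)
        axis[np.argmin(np.abs(d))] = 1.0
        n = np.cross(d, axis)
        return n / np.linalg.norm(n)

    print(any_normal(np.array([0.0, 0.0, 1.0])))   # some vector in the xy-plane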
I'm using DirectX 10 to simulate a water surface, and I now have a height map, which is a 2D array of the heights (y) at the points (x, z). But to draw it on the screen, I must turn it into a mesh, or build an index buffer so I can draw it as a triangle topology.
The data is too large to do this by hand. Is there any method for drawing it on the screen? I hope it's easy to implement. If there is a function included in DirectX 10 that can do this, that would be best for me.
Create a mesh that forms a grid of squares (each made of two triangles) and set all vertex y values to 0. In the vertex shader, sample the heightmap and add the stored value to the y of the vertex.
This might help you.
P.S.: If the area you want it to cover is too big, you should take a look at terrain LOD techniques (they should work the same for water).
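A sketch of the flat grid and index generation (in Python just to show the index pattern; in your D3D10 code you would fill the vertex and index buffers the same way and let the vertex shader add the sampled height):

    def make_grid(width, depth, spacing=1.0):
        """Build a flat grid of width x depth quads, two triangles each.

        Returns (vertices, indices): vertices are (x, 0, z) tuples and indices
        are triangle-list indices into that vertex array."""
        cols, rows = width + 1, depth + 1
        vertices = [(x * spacing, 0.0, z * spacing)
                    for z in range(rows) for x in range(cols)]
        indices = []
        for z in range(depth):
            for x in range(width):
                i = z * cols + x
                indices += [i, i + cols, i + 1]             # first triangle of the quad
                indices += [i + 1, i + cols, i + cols + 1]  # second triangle
        return vertices, indices

    verts, idx = make_grid(512, 512)
    print(len(verts), len(idx) // 3)   # 513*513 vertices, 512*512*2 triangles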
I'm sure you can make a mesh out of it. I doubt you can generate the heightmap for a water surface that is too large to "meshify".
Why are you looking at diamond-square? For a 512x512 heightmap, all you need to do is define a set of points and then generate the triangles for it. It's really very simple.