I have a set of Minecraft blocks and for each of them I want a volume (of an arbitrary size) containing the voxelized Minecraft block.
A Minecraft block isn't just an AABB that fills the whole block. It is generally a set of child AABBs that can be translated/scaled/rotated (so they become OBBs) and can have a different color/texture for every face. Here's an example:
I have already developed a class called ModelVoxelizer that takes a 3D triangular model and produces its voxelization using the OpenGL graphics pipeline. The issue with it is that it only marks the contour voxels of the model in the volume. Instead, I want the Minecraft block voxelization to be filled inside.
A slice of the volume I'd (hopefully) get with my current ModelVoxelizer by voxelizing the 3 OBBs (I don't always have AABBs!) that compose the Minecraft block above.
A slice of the volume I want (basically the one above, but filled). The voxels inside should have a color averaged from the textures of the faces.
The problem:
So my problem is a rasterization problem: I have the volume (the 3D grid) and an OBB (a part of a Minecraft block), and I have to check which voxels are inside the OBB. For those that are inside/colliding, I have to interpolate the values of the faces of the OBB (texture/color) based on the distance of the voxel from those faces.
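Here's a minimal sketch of the per-voxel test I have in mind, assuming the OBB is described by a center, three orthonormal axes and half-extents, and that each face's texture has been reduced to a single average color (the names, like voxelInsideOBB, are just placeholders):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Hypothetical OBB description: center, three orthonormal axes, half-extents,
// and one average color per face (stand-in for sampling the actual textures).
struct OBB {
    Vec3  center;
    Vec3  axis[3];          // local X, Y, Z directions (unit length)
    float halfExtent[3];    // half sizes along each axis
    Vec3  faceColor[6];     // -X, +X, -Y, +Y, -Z, +Z
};

// Returns true if the voxel center lies inside the OBB; if so, outputs a color
// blended from each pair of opposite faces according to the voxel's position
// along that axis, averaged over the three axes.
bool voxelInsideOBB(const Vec3& voxelCenter, const OBB& obb, Vec3& outColor)
{
    Vec3 d { voxelCenter.x - obb.center.x,
             voxelCenter.y - obb.center.y,
             voxelCenter.z - obb.center.z };

    float t[3];
    for (int i = 0; i < 3; ++i) {
        t[i] = dot(d, obb.axis[i]);                // signed offset along axis i
        if (std::fabs(t[i]) > obb.halfExtent[i])   // outside this slab -> outside the box
            return false;
    }

    outColor = {0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 3; ++i) {
        float w = t[i] / obb.halfExtent[i] * 0.5f + 0.5f;  // 0 at the -face, 1 at the +face
        const Vec3& cNeg = obb.faceColor[2 * i];
        const Vec3& cPos = obb.faceColor[2 * i + 1];
        outColor.x += ((1.0f - w) * cNeg.x + w * cPos.x) / 3.0f;
        outColor.y += ((1.0f - w) * cNeg.y + w * cPos.y) / 3.0f;
        outColor.z += ((1.0f - w) * cNeg.z + w * cPos.z) / 3.0f;
    }
    return true;
}
```

The outer loop would only need to visit the voxels whose centers fall inside the OBB's world-space bounding box and call this test for each of them.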
Is it an already known issue? Am I trying to re-invent the wheel?
I appreciate any kind of suggestion about this, thank you.
Related
STL is the most popular 3D model file format for 3D printing. It records the triangular surfaces that make up a 3D shape.
I read the specification of the STL file format. It is a rather simple format. Each triangle is represented by 12 floating-point numbers. The first 3 define the normal vector, and the next 9 define the three vertices. But here's one question. Three vertices are sufficient to define a triangle; the normal vector can be computed by taking the cross product of two edge vectors (each pointing from one vertex to another).
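For example, here is a sketch of that computation, assuming the vertices are listed counter-clockwise when seen from outside (the right-hand-rule convention STL uses); the Vec3 helpers are just for illustration:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Recompute the facet normal from the three vertices, assuming counter-clockwise
// winding when viewed from outside the solid.
Vec3 facetNormal(const Vec3& v0, const Vec3& v1, const Vec3& v2)
{
    Vec3 n = cross(sub(v1, v0), sub(v2, v0));
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```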
I know that a normal vector can be useful in rendering, and by including a normal vector the program doesn't have to compute the normals every time it loads the same model. But I wonder what would happen if the creation software included wrong normal vectors on purpose? Would it produce wrong results in the rendering software?
On the other hand, three vertices say everything about a triangle. Including normal vectors allows logical conflicts in the information and increases the file size by 33%. Normal vectors can be computed by the rendering software in a reasonable amount of time if necessary. So why should the format include them? The format was created in 1987 for stereolithographic 3D printing. Was computing normal vectors too costly for computers back then?
I read in a thread that Autodesk Meshmixer disregards the normal vector and reconstructs the triangles from the vertices alone. Providing a wrong normal vector doesn't seem to change the result.
Why do Stereolithography (.STL) files require each triangle to have a normal vector?
At least when using Cura to slice a model, the direction of the surface normal can make a difference. I have regularly run into STL files that look just fine when rendered as solid objects in any viewer, but because some faces have the wrong surface-normal direction, the slicer "thinks" that a region (typically concave) which should be empty is part of the interior, and it creates a "top layer" covering up the details of the concave region. (And this was with an STL exported from a Meshmixer file that was imported from some SketchUp source.)
FWIW, Meshmixer has a FlipSurfaceNormals tool to help deal with this.
I'm new to WebGL and for an assignment I'm trying to write a function which takes as argument an object, let's say "objectA". ObjectA will not be rendered but if it overlaps with another object in the scene, let’s say “objectB”, the part of objectB which is inside objectA will disappear. So the effect is that there is a hole in ObjectB without modifying its mesh structure.
I've managed to get it working in my own render engine, based on ray tracing, which gives the following effect:
(image) Initial scene:
(image) With objectA removed:
In the first image, the green sphere is "objectA" and the blue cube is "objectB".
So now I'm trying to program it in WebGL, but I'm a bit stuck. Because WebGL is based on rasterization rather than ray tracing, it has to be calculated in another way. A possibility could be to modify the Z-buffer algorithm, where the fragments with a z-value lying inside objectA will be ignored.
The algorithm that I have in mind works as follows: normally, only the fragment with the smallest z-value is stored at a particular pixel, containing the colour and z-value. A first modification is that, for each pixel, a list of all fragments belonging to that pixel is maintained; no fragments are discarded. Secondly, each fragment stores an extra parameter identifying the object it belongs to. Next, the fragments are sorted in increasing order according to their z-value.
Then, if the first fragment belongs to objectA, it will be ignored. If the next one belongs to objectB, it will be ignored as well. If the third one belongs to objectA and the fourth one to objectB, the fourth one will be chosen because it lies outside objectA.
So the first fragment belonging to objectB will be chosen, with the constraint that the number of preceding fragments belonging to objectA is even. If it is odd, the fragment lies inside objectA and will be ignored.
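To make the idea concrete, here's a small CPU-side sketch of that selection rule (not the WebGL implementation itself; WebGL has no built-in per-pixel fragment lists, so it would need multiple passes or extra buffers). The Fragment struct and Obj tag are placeholders:

```cpp
#include <algorithm>
#include <vector>

enum class Obj { A, B };

struct Fragment {
    float    z;      // depth, smaller = closer to the camera
    unsigned color;  // packed RGBA
    Obj      owner;  // which object produced this fragment
};

// Pick the color for one pixel: the nearest fragment of objectB that is not
// "inside" objectA, i.e. the number of objectA fragments in front of it is even.
// Returns false if the pixel should keep the background color.
bool resolvePixel(std::vector<Fragment> frags, unsigned& outColor)
{
    std::sort(frags.begin(), frags.end(),
              [](const Fragment& a, const Fragment& b) { return a.z < b.z; });

    int aCount = 0;                       // objectA boundaries crossed so far
    for (const Fragment& f : frags) {
        if (f.owner == Obj::A) {
            ++aCount;
        } else {                          // objectB fragment
            if (aCount % 2 == 0) {        // even -> outside objectA, keep it
                outColor = f.color;
                return true;
            }                             // odd -> inside objectA, skip it
        }
    }
    return false;
}
```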
Is this somehow possible in WebGL? I've also tried to implement it via a stencil buffer, based on this blog:
WebGL : How do make part of an object transparent?
But this is written for OpenGL. I translated the code to WebGL instructions, but it didn't work at all, and I'm not sure whether it would work with a 3D object instead of a 2D triangle.
Thanks a lot in advance!
Why wouldn't you write a raytracer inside the fragment shader (aka pixel shader)?
So you would need to render a fullscreen quad (two triangles) and then the fragment shader would be responsible for raytracing. There are plenty of resources to read/learn from.
These links might be useful:
Distance functions - by iq
How shadertoy works
Simple webgl raytracer
EDIT:
Raytracing with SDFs (signed distance functions, which lend themselves naturally to constructive solid geometry, CSG) is a good way to handle what you need, and is how intersecting objects is generally achieved in that setting. Intersections, and boolean operators in general, for mesh geometry (i.e. made of polygons) are not done during rendering; rather, special algorithms do all the processing ahead of rendering, so the resulting mesh actually exists in memory, its topology is actually calculated, and it is then just rendered.
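As a rough illustration of the SDF route, here are the standard sphere and box distance functions (the same forms iq lists on the page linked above), with the hole cut as a one-line boolean subtraction; the radii and extents are arbitrary:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float length3(const Vec3& v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Signed distance to a sphere of radius r centered at the origin.
float sdSphere(const Vec3& p, float r) { return length3(p) - r; }

// Signed distance to an axis-aligned box with half-extents b centered at the origin.
float sdBox(const Vec3& p, const Vec3& b)
{
    Vec3 q  { std::fabs(p.x) - b.x, std::fabs(p.y) - b.y, std::fabs(p.z) - b.z };
    Vec3 qc { std::max(q.x, 0.0f), std::max(q.y, 0.0f), std::max(q.z, 0.0f) };
    return length3(qc) + std::min(std::max(q.x, std::max(q.y, q.z)), 0.0f);
}

// "objectB minus objectA": keep the cube everywhere except where the sphere is.
float sdSceneWithHole(const Vec3& p)
{
    float dBox    = sdBox(p, {1.0f, 1.0f, 1.0f});   // objectB, the cube
    float dSphere = sdSphere(p, 0.8f);              // objectA, the cutter
    return std::max(dBox, -dSphere);                // CSG subtraction
}
```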
Depending on the specific scenario that you have, you might be able to achieve the effect under some requirements and restrictions.
There are a few important things to take into account: depth peeling (i.e. storing depth values of multiple fragments per single pixel), triangle winding (CW or CCW), and polygon face orientation (front-facing or back-facing).
Say, for example, that both of your objects are convex. Then rendering the back-facing polygons of ObjectA, then of ObjectB, then the front-facing polygons of A, then of B, might achieve the desired effect (I'm not including full calculations for all the cases of overlap that can exist).
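In terms of GL state, that pass ordering is just face culling plus draw order. Here's a sketch in desktop-OpenGL calls (WebGL exposes the same state through gl.enable(gl.CULL_FACE) / gl.cullFace(...)), where drawObjectA/drawObjectB are assumed helpers that issue the actual draw calls:

```cpp
#include <GL/gl.h>

// Assumed helpers, not part of any real API: each binds the object's buffers
// and shader and issues its draw call.
void drawObjectA();
void drawObjectB();

void drawWithHoleSketch()
{
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    // Pass 1: only back-facing triangles survive (front faces are culled).
    glCullFace(GL_FRONT);
    drawObjectA();
    drawObjectB();

    // Pass 2: only front-facing triangles survive (back faces are culled).
    glCullFace(GL_BACK);
    drawObjectA();
    drawObjectB();
}
```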
Under some other sets of restrictions you might be able to achieve the effect.
In your specific example in the question, you have shown the front-facing polygons of the cube, and in the second image you can see the back face of the cube. That already implies that you have at least two depth values per pixel stored somehow.
There is also a distinction between intersecting in screen space, intersecting volumes, and intersecting faces. Your example works with faces and is the hardest case. There are two sub-cases: the one you've shown, where mesh A's pixels that are inside mesh B are simply discarded (i.e. you drilled a hole in its surface), and the case where you do a boolean operation that never puts a hole in the surface, only in the volume. That case is usually done with an algorithm that computes an output mesh. SDFs are great for volumes. Screen-space intersection is achieved by simply using the depth test to discard some fragments.
Again, there are too many scenarios; it depends on what you're trying to achieve and on the constraints you're working with.
Back story: I'm creating a Three.js based 3D graphing library. Similar to sigma.js, but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js, with a single particle representing a single node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 centered at (0,0,0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified, or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the input data, but instead positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of/behind other particles.
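For reference, here's a sketch of the camera-fitting math I mean (this isn't graphosaurus' actual code): place the camera on the +Z side of the data at distance radius / sin(fovY / 2) from the bounding sphere's center, so the whole sphere fits in the vertical field of view (the horizontal FOV would need the same check with the aspect ratio).

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct CameraFit {
    Vec3  position;   // where to put the camera (looking back toward the center)
    float distance;   // distance from the camera to the bounding sphere's center
};

// Fit a perspective camera to a non-empty point cloud: build a crude bounding
// sphere around the centroid, then back the camera off along +Z until the
// sphere fits inside the vertical field of view.
CameraFit fitCameraToPoints(const std::vector<Vec3>& pts, float fovYRadians)
{
    Vec3 c {0.0f, 0.0f, 0.0f};
    for (const Vec3& p : pts) { c.x += p.x; c.y += p.y; c.z += p.z; }
    c.x /= pts.size(); c.y /= pts.size(); c.z /= pts.size();

    float radius = 0.0f;
    for (const Vec3& p : pts) {
        float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
        radius = std::max(radius, std::sqrt(dx*dx + dy*dy + dz*dz));
    }

    float distance = radius / std::sin(fovYRadians * 0.5f);
    return { { c.x, c.y, c.z + distance }, distance };
}
```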
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values on my dataset, the problem went away. Since my dataset is user dependent, I need to determine an algorithm to generate these values dynamically.
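One way to generate them dynamically (a sketch, not the library's code): derive near and far from the camera's distance to the data's bounding sphere, which keeps the depth range as tight as the data allows; the padding factors below are arbitrary.

```cpp
#include <algorithm>

struct NearFar { float nearPlane, farPlane; };

// Tight near/far planes for a camera sitting camDistance away from the center
// of a bounding sphere (radius) that encloses all of the data. The 0.9/1.1
// padding and the 0.01 floor are arbitrary safety margins.
NearFar nearFarFromBoundingSphere(float camDistance, float radius)
{
    float nearPlane = std::max((camDistance - radius) * 0.9f, 0.01f); // must stay > 0
    float farPlane  = (camDistance + radius) * 1.1f;
    return { nearPlane, farPlane };
}
```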
I noticed that in Solution 2 the flickering only occurs when the camera is moving. One possible reason could be that, when the camera position is changing rapidly, different transforms get applied to different particles. So if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the issue. That means applying the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.
I want to import a set of 3D geometries into the current scene. The imported geometry contains tons of basic components, which together may represent an entire building. The product manager wants the entire building to be displayed as a 3D miniature (the colors and textures must correspond to the original building).
The problem: is there any algorithm which can handle this large amount of data at a reasonable cost in time and memory?
// worst case: there may be a billion triangles in the imported data
And, by the way, I am considering another solution, using a kind of texture mapping:
1. Take enough snapshots of the imported objects with the software renderer.
2. Apply the images to a surface.
3. Use some shader tricks to produce effects like bump mapping, so that when the view position changes the texture changes too, making the viewer feel as if they were looking at a 3D scene.
My modeller and renderer are ACIS and HOOPS. Any ideas?
An option is to generate side views of the building at a suitable resolution using the rendering engine, and map them as textures onto a parallelepiped.
The next level of refinement is to obtain a bump or elevation map that you can use for embossing. Not the easiest to do.
If the modeler allows it, you can slice the volume using a 2D grid of "voxels" (actually prisms). You can do that by repeatedly cutting the model in two with a plane. And in every prism, find the vertex closest to the observer. This will give you a 2D map of elevations, with the desired resolution.
Alternatively, intersect parallel "rays" (linear objects) with the solid and keep the first endpoint.
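To illustrate the ray variant, here's a sketch of building such an elevation map by firing parallel rays over a regular grid; firstHit is a placeholder for whatever ray/solid intersection call your modeller (ACIS in your case) exposes, not an actual ACIS API:

```cpp
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };

// firstHit(origin, direction, tOut): returns true and the parametric distance of
// the first intersection with the solid, or false if the ray misses.
// This stands in for the modeller's own ray-fire API.
using FirstHitFn = std::function<bool(const Vec3&, const Vec3&, float&)>;

// Build a w x h elevation map of one side view of the model by firing parallel
// rays (here along -Z) through a grid covering [minX,maxX] x [minY,maxY].
std::vector<float> elevationMap(int w, int h,
                                float minX, float maxX, float minY, float maxY,
                                float startZ, const FirstHitFn& firstHit)
{
    std::vector<float> height(w * h, 0.0f);        // 0 where the ray misses
    const Vec3 dir { 0.0f, 0.0f, -1.0f };

    for (int j = 0; j < h; ++j) {
        for (int i = 0; i < w; ++i) {
            Vec3 origin { minX + (maxX - minX) * (i + 0.5f) / w,
                          minY + (maxY - minY) * (j + 0.5f) / h,
                          startZ };
            float t;
            if (firstHit(origin, dir, t))
                height[j * w + i] = startZ - t;    // z of the first surface hit
        }
    }
    return height;
}
```

The resulting heightmap can then be fed to an embossing or bump-mapping shader on the parallelepiped.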
It can also be that your modeller includes a true voxel model, or that rendering can be done with a Z-buffer that you can access.
In a real-time graphics application, I believe a frame buffer is the memory that holds the final rasterised image that will be displayed for a single frame.
References to deep frame buffers seem to imply there's some caching going on (vertex and material info), but it's not clear what this data is used for, or how.
What specifically is a deep frame buffer in relation to a standard frame buffer, and what are its uses?
Thank you.
Google is your friend.
It can mean two things:
You're storing more than just RGBA per pixel. For example, you might be storing normals or other lighting information so you can do re-lighting later.
Interactive Cinematic Relighting with Global Illumination
Deep Image Compositing
You're storing more than one color and depth value per pixel. This is useful, for example, to support order-independent transparency.
A z-buffer is similar to a color buffer, which is usually used to store the "image" of a 3D scene, but instead of storing color information (in the form of a 2D array of RGB pixels), it stores the distance from the camera to the object visible through each pixel of the framebuffer.
Traditionally, a z-buffer only stores the distance from the camera to the nearest object in the 3D scene for any given pixel in the frame. The good thing about this technique is that if 2 images have been rendered along with their z-buffers, they can be re-composed in a 2D program, with pixels from image A that are "in front" of the pixels from image B composited on top in the recomposed image. To decide whether these pixels are in front, we can use the information stored in the images' respective z-buffers. For example, imagine we want to compose the pixels from images A and B at pixel coordinates (100, 100). If the distance (z value) stored in the z-buffer at coordinates (100, 100) is 9.13 for image A and 5.64 for image B, then in the recomposed image C, at pixel coordinates (100, 100), we should put the pixel from image B (because it corresponds to a surface in the 3D scene that is in front of the object visible through that pixel in image A).
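That comparison, applied to every pixel, is all the 2D program has to do. A small sketch, assuming both renders are stored as flat arrays of color + z of the same size:

```cpp
#include <cstddef>
#include <vector>

// One pixel of a conventional render: a color plus the z value from the z-buffer.
struct PixelWithDepth {
    unsigned rgba;
    float    z;      // distance to the camera; smaller = closer
};

// Hard (opaque) depth compositing of two pre-rendered images of the same size:
// for every pixel, keep whichever of A or B is closer to the camera.
std::vector<PixelWithDepth> compositeByDepth(const std::vector<PixelWithDepth>& a,
                                             const std::vector<PixelWithDepth>& b)
{
    std::vector<PixelWithDepth> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = (a[i].z <= b[i].z) ? a[i] : b[i];  // e.g. 5.64 (image B) beats 9.13 (image A)
    return out;
}
```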
Now this works great when objects are opaque, but not when they are transparent. When objects are transparent (such as when we render volumes, clouds, or layers of transparent surfaces) we need to store more than one z value. Also note that "opacity" changes as the density of the volumetric object or the number of transparent layers increases. Anyway, all of this is just to say that a deep image or deep buffer is technically just like a z-buffer, but rather than storing only one depth (z) value per pixel it stores more than one, and it also stores the opacity of the object at each of these depth values.
Once we have stored this information, it is possible in post-production to properly (that is, accurately) recompose 2 or more images together with transparency. For instance, if you render 2 clouds and these clouds overlap in depth, their visibility will be properly recomposed as if they had been rendered together in the same scene.
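As a sketch of what that recomposition looks like (this is not Prman's or any compositor's actual API, just an illustration): each deep pixel holds a list of (z, color, opacity) samples, and compositing merges the two lists, sorts them front to back, and applies the usual "over" operator.

```cpp
#include <algorithm>
#include <vector>

struct DeepSample {
    float z;          // depth of this sample
    float rgb[3];     // color of this sample (not premultiplied here)
    float alpha;      // opacity at this depth
};

// Merge the samples of two deep pixels and flatten them front-to-back with the
// "over" operator. Writes the final RGB for that pixel (over a black background).
void flattenDeepPixel(std::vector<DeepSample> a, const std::vector<DeepSample>& b,
                      float outRGB[3])
{
    a.insert(a.end(), b.begin(), b.end());
    std::sort(a.begin(), a.end(),
              [](const DeepSample& s0, const DeepSample& s1) { return s0.z < s1.z; });

    float accum[3] = {0.0f, 0.0f, 0.0f};
    float transmittance = 1.0f;               // how much light still gets through
    for (const DeepSample& s : a) {
        for (int c = 0; c < 3; ++c)
            accum[c] += transmittance * s.alpha * s.rgb[c];
        transmittance *= (1.0f - s.alpha);
    }
    for (int c = 0; c < 3; ++c) outRGB[c] = accum[c];
}
```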
Why would we use such a technique at all? Often because rendering scenes containing volumetric elements is generally slow. Thus it's good to render them separately from the other objects in the scene, so that if you need to make tweaks to the solid objects you do not need to re-render the volumetric elements again.
This technique was mostly made popular by Pixar, in the renderer they develop and sell (Prman). Avatar (Weta Digital in NZ) was one of the first films to make heavy use of deep compositing.
See: http://renderman.pixar.com/resources/current/rps/deepCompositing.html
The cons of this technique: deep images are very heavy. They require storing many depth values per pixel (and these values are stored as floats). It's not uncommon for such images to weigh from a few hundred megabytes to a couple of gigabytes, depending on the image resolution and the scene's depth complexity. Also, you can recompose volume objects properly, but they won't cast shadows on each other, which you would get if you rendered the objects together in the same scene. This makes scene management slightly more complex than usual, but it is generally dealt with properly.
A lot of this information can be found on scratchapixel.com (for future reference).