How to compute a 3D miniature model from a large set of 3D geometric models

I want to import a set of 3D geometries into the current scene. The imported geometry contains tons of basic components which together may represent an entire building. The Product Manager wants the entire building to be displayed as a 3D miniature (colors and textures must correspond to the original building).
The problem: is there any algorithm which can handle this large amount of data at a reasonable time and memory cost?
// worst case: there may be a billion triangles in the imported data
And, by the way, I am considering another solution, using a kind of texture mapping:
1 take enough snapshots of the imported objects with the software renderer.
2 apply the images to a surface.
3 use some shader tricks to perform effects like bump mapping: when the view position changes, the texture is altered so that the viewer feels as if he were looking at a 3D scene.
My modeller and renderer are ACIS and HOOPS; any ideas?

An option is to generate side views of the building at a suitable resolution using the rendering engine, and map them as textures onto a parallelepiped.
The next level of refinement is to obtain a bump or elevation map that you can use for embossing. Not the easiest to do.
If the modeler allows it, you can slice the volume using a 2D grid of "voxels" (actually prisms); you can do that by repeatedly cutting the model in two with a plane. Then, in every prism, find the vertex closest to the observer. This will give you a 2D map of elevations at the desired resolution.
Alternatively, intersect parallel "rays" (linear objects) with the solid and keep the nearest intersection point.
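To make the ray-casting variant concrete, here is a minimal, hedged sketch that builds an elevation map by firing one vertical ray per texel against a plain triangle soup. Vec3, Tri and all names here are local helpers, not ACIS or HOOPS types; with a billion triangles you would of course bucket the triangles into the texel grid first, so each ray only tests the few triangles above its texel.

    // Minimal sketch: elevation map via one vertical ray per texel
    // (Moeller-Trumbore ray/triangle test). All types are local helpers.
    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <vector>

    struct Vec3 { double x, y, z; };
    Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
    double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    struct Tri { Vec3 a, b, c; };

    // Distance t to the hit along direction d, or infinity on a miss.
    double hit(const Tri& t, Vec3 o, Vec3 d) {
        const double inf = std::numeric_limits<double>::infinity();
        Vec3 e1 = sub(t.b, t.a), e2 = sub(t.c, t.a), p = cross(d, e2);
        double det = dot(e1, p);
        if (std::fabs(det) < 1e-12) return inf;   // ray parallel to triangle
        Vec3 s = sub(o, t.a);
        double u = dot(s, p) / det;
        if (u < 0 || u > 1) return inf;
        Vec3 q = cross(s, e1);
        double v = dot(d, q) / det;
        if (v < 0 || u + v > 1) return inf;
        double tt = dot(e2, q) / det;
        return tt >= 0 ? tt : inf;
    }

    // n x n heights over the footprint [x0,x1] x [y0,y1], rays fired from z = zTop.
    std::vector<double> elevationMap(const std::vector<Tri>& tris, int n,
                                     double x0, double x1, double y0, double y1,
                                     double zTop) {
        const double inf = std::numeric_limits<double>::infinity();
        std::vector<double> h(size_t(n) * n, -inf);  // -inf marks "no geometry"
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < n; ++i) {
                Vec3 o{x0 + (x1 - x0) * (i + 0.5) / n,
                       y0 + (y1 - y0) * (j + 0.5) / n, zTop};
                double best = inf;
                for (const Tri& t : tris)   // brute force; bin per texel in practice
                    best = std::min(best, hit(t, o, {0, 0, -1}));
                if (best < inf) h[size_t(j) * n + i] = zTop - best;
            }
        return h;
    }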
It can also be that your modeler includes a true voxel model, or that rendering can be done with a Z-buffer that you can access.

Related

Is there any code for an interactive plotting application for two-dimensional curves?

Plotting packages offer a variety of methods for displaying data. Write an interactive plotting application for two-dimensional curves. Your application should allow the user to choose the mode (line strip or polyline display of the data, bar charts or pie charts), colours, and line styles.
You should start with GUI editing like this:
Does anyone know of a low level (no frameworks) example of a drag & drop, re-order-able list?
and change it to your primitives (more points per primitive instead of one ... handle each point as a (sub)object so you can change its position later).
Then just add tools like add object, delete object, etc. For a hand-drawing tool, use piecewise cubic interpolation.
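One concrete (and hedged) option for that hand-drawing tool is a Catmull-Rom spline, a piecewise cubic that passes through every recorded mouse point, so the stroke stays faithful to the input. Point2 and the function names below are local helpers, not part of any API:

    #include <vector>

    struct Point2 { double x, y; };

    // Evaluate the Catmull-Rom segment between p1 and p2 at t in [0,1].
    Point2 catmullRom(Point2 p0, Point2 p1, Point2 p2, Point2 p3, double t) {
        double t2 = t * t, t3 = t2 * t;
        auto blend = [&](double a, double b, double c, double d) {
            return 0.5 * ((2*b) + (-a + c) * t
                          + (2*a - 5*b + 4*c - d) * t2
                          + (-a + 3*b - 3*c + d) * t3);
        };
        return { blend(p0.x, p1.x, p2.x, p3.x), blend(p0.y, p1.y, p2.y, p3.y) };
    }

    // Expand a hand-drawn polyline into a dense, smooth polyline for rendering.
    // stepsPerSeg must be >= 1.
    std::vector<Point2> smoothStroke(const std::vector<Point2>& pts, int stepsPerSeg) {
        std::vector<Point2> out;
        if (pts.size() < 2) return pts;
        for (size_t i = 0; i + 1 < pts.size(); ++i) {
            // duplicate endpoints so the first/last segments have 4 control points
            Point2 p0 = pts[i ? i - 1 : 0];
            Point2 p3 = pts[(i + 2 < pts.size()) ? i + 2 : pts.size() - 1];
            for (int s = 0; s <= stepsPerSeg; ++s)
                out.push_back(catmullRom(p0, pts[i], pts[i + 1], p3,
                                         double(s) / stepsPerSeg));
        }
        return out;
    }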
The grid can be done like this:
How to draw dynamic 2D grid that adjusts according to camera zoom: OpenGL
Mouse zooming/panning is also important
Zooming graphics based on current mouse position
Putting all of the above together into a simple editor looks like this:
Using the GPU for curve rendering might give you a nice speed and functionality boost:
Is it possible to express "t" variable from Cubic Bezier Curve equation?
Mouse selection of objects might be a speed problem if your scene contains too many objects; in such a case it is best to use index buffers, where you can mouse-select with pixel-perfect precision almost for free, in O(1):
OpenGL 3D-raypicking with high poly meshes
The example is for 3D; in 2D it is much simpler ...
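A minimal sketch of that 2D case, assuming a software canvas: next to the visible image you keep a same-sized buffer of shape ids, fill it with the same rasterization calls you use for display, and picking becomes one array lookup. IndexBuffer and fillId are illustrative names, not part of any particular API:

    #include <cstdint>
    #include <vector>

    struct IndexBuffer {
        int w, h;
        std::vector<uint32_t> id;            // 0 = background, else shape id
        IndexBuffer(int w, int h) : w(w), h(h), id(size_t(w) * h, 0) {}

        // Example "rasterizer": stamp a filled rectangle with the given id.
        // In a real editor, every display primitive gets an id-writing twin.
        void fillId(int x0, int y0, int x1, int y1, uint32_t shapeId) {
            for (int y = y0; y < y1 && y < h; ++y)
                for (int x = x0; x < x1 && x < w; ++x)
                    if (x >= 0 && y >= 0) id[size_t(y) * w + x] = shapeId;
        }

        // O(1), pixel-perfect pick at the mouse position.
        uint32_t pick(int mx, int my) const {
            if (mx < 0 || my < 0 || mx >= w || my >= h) return 0;
            return id[size_t(my) * w + mx];
        }
    };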
Also do not forget to implement save/load functionality to some vector file format. I recommend SVG: it might be complicated to start with, but you can quickly check its contents in any SVG viewer or browser, or even in Notepad, as it is just a text file. If you use just the basic path elements and ignore the rest of the SVG features, you will see that parsing and creating SVG is not that hard; for example, see these (and the sketch after them):
Get Vertices/Edges From BMP or SVG (C#)
Discrete probability distribution plot with given values
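As an illustration of how little SVG you need, here is a hedged sketch that saves a set of polylines using only basic path elements ("M x y L x y ..."), which any browser can display; every name here is a local helper:

    #include <cstdio>
    #include <vector>

    struct Point2 { double x, y; };

    void saveSvg(const char* filename,
                 const std::vector<std::vector<Point2>>& shapes,
                 int width, int height) {
        FILE* f = std::fopen(filename, "w");
        if (!f) return;
        std::fprintf(f, "<svg xmlns=\"http://www.w3.org/2000/svg\" "
                        "width=\"%d\" height=\"%d\">\n", width, height);
        for (const auto& shape : shapes) {
            if (shape.empty()) continue;
            std::fprintf(f, "<path d=\"M %g %g", shape[0].x, shape[0].y);
            for (size_t i = 1; i < shape.size(); ++i)
                std::fprintf(f, " L %g %g", shape[i].x, shape[i].y);
            std::fprintf(f, "\" fill=\"none\" stroke=\"black\"/>\n");
        }
        std::fprintf(f, "</svg>\n");
        std::fclose(f);
    }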
For really big datasets you might want to use spatial subdivision techniques (a Bounding Volume/Area Hierarchy, or a quadtree) to speed up these operations...
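A minimal quadtree sketch along those lines: shapes are stored by bounding box, and a range query only descends into overlapping quadrants. Rect and QuadTree are local helper types; a shape spanning several quadrants may be reported more than once, so callers may need to deduplicate ids:

    #include <memory>
    #include <utility>
    #include <vector>

    struct Rect {
        double x0, y0, x1, y1;
        bool overlaps(const Rect& r) const {
            return x0 < r.x1 && r.x0 < x1 && y0 < r.y1 && r.y0 < y1;
        }
    };

    struct QuadTree {
        static const int kMaxItems = 8;
        static const int kMaxDepth = 8;          // guards against infinite splits
        Rect bounds;
        int depth;
        std::vector<std::pair<Rect, int>> items; // bounding box -> shape id
        std::unique_ptr<QuadTree> child[4];

        QuadTree(Rect b, int d = 0) : bounds(b), depth(d) {}

        void insert(const Rect& box, int id) {
            if (child[0]) {
                for (auto& c : child)            // push into every quadrant hit
                    if (c->bounds.overlaps(box)) c->insert(box, id);
                return;
            }
            items.push_back({box, id});
            if (items.size() > kMaxItems && depth < kMaxDepth) split();
        }

        void query(const Rect& area, std::vector<int>& out) const {
            if (!bounds.overlaps(area)) return;
            for (const auto& it : items)
                if (it.first.overlaps(area)) out.push_back(it.second);
            if (child[0])
                for (const auto& c : child) c->query(area, out);
        }

        void split() {
            double mx = 0.5 * (bounds.x0 + bounds.x1);
            double my = 0.5 * (bounds.y0 + bounds.y1);
            child[0].reset(new QuadTree({bounds.x0, bounds.y0, mx, my}, depth + 1));
            child[1].reset(new QuadTree({mx, bounds.y0, bounds.x1, my}, depth + 1));
            child[2].reset(new QuadTree({bounds.x0, my, mx, bounds.y1}, depth + 1));
            child[3].reset(new QuadTree({mx, my, bounds.x1, bounds.y1}, depth + 1));
            for (auto& it : items)
                for (auto& c : child)
                    if (c->bounds.overlaps(it.first)) c->insert(it.first, it.second);
            items.clear();
        }
    };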
More in-depth implementation details about 2D vector graphics editors depend on the language, OS, gfx API and GUI API you are using, and the task you are aiming for ...

Why is a normal vector necessary for STL files?

STL is the most popular 3D model file format for 3D printing. It records the triangular faces that make up a 3D shape.
I read the specification of the STL file format. It is a rather simple format. Each triangle is represented by 12 floating-point numbers: the first 3 define the normal vector, and the next 9 define the three vertices. But here's one question: three vertices are sufficient to define a triangle, and the normal vector can be computed by taking the cross product of two edge vectors (each pointing from one vertex to another).
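For reference, here is a minimal sketch of the computation the question describes: the normal is the normalized cross product of two edge vectors. By the STL convention the vertices are listed counter-clockwise when seen from outside, so this also yields the outward orientation:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 triangleNormal(Vec3 a, Vec3 b, Vec3 c) {
        Vec3 u = {b.x - a.x, b.y - a.y, b.z - a.z};  // edge a->b
        Vec3 v = {c.x - a.x, c.y - a.y, c.z - a.z};  // edge a->c
        Vec3 n = {u.y * v.z - u.z * v.y,             // cross product u x v
                  u.z * v.x - u.x * v.z,
                  u.x * v.y - u.y * v.x};
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0) { n.x /= len; n.y /= len; n.z /= len; }
        return n;
    }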
I know that a normal vector can be useful in rendering, and that by including a normal vector, the program doesn't have to compute the normal vectors every time it loads the same model. But I wonder what would happen if the creation software included wrong normal vectors on purpose. Would it produce wrong results in the rendering software?
On the other hand, three vertices say everything about a triangle. Including normal vectors allows logical conflicts in the information and increases the file size by 33%. Normal vectors can be computed by the rendering software in a reasonable amount of time if necessary. So why does the format include them? The format was created in 1987 for stereolithographic 3D printing. Was computing normal vectors too costly for computers back then?
I read in a thread that Autodesk Meshmixer disregards the normal vector and builds triangles according to the vertices. Providing a wrong normal vector doesn't seem to change the result.
Why do Stereolithography (.STL) files require each triangle to have a normal vector?
At least when using Cura to slice a model, the direction of the surface normal can make a difference. I have regularly run into STL files that look just fine when rendered as solid objects in any viewer, but because some faces have the wrong direction of the surface normal, the slicer "thinks" that a region (typically concave) which should be empty is part of the interior, and creates a "top layer" covering up the details of the concave region. (And this was with an STL exported from a Meshmixer file that was imported from some SketchUp source.)
FWIW, Meshmixer has a FlipSurfaceNormals tool to help deal with this.

What formula or algorithm can I use to draw a 3D Sphere without using OpenGL-like libs?

I know that there are 4 techniques to draw 3D objects:
(1) Wireframe Modeling and rendering, (2) Additive Modeling, (3) Subtractive Modeling, (4) Splines and curves.
Then, those models go through a hidden-surface-removal algorithm.
Am I correct?
That being the case, what formula or algorithm can I use to draw a 3D sphere?
I am using a low-level library named WinBGIm, from the University of Colorado.
there are 4 techniques to draw 3D objects:
(1) Wireframe Modeling and rendering, (2) Additive Modeling, (3) Subtractive Modeling, (4) Splines and curves.
These are modelling techniques and not rendering techniques. They allow you to mathematically define your mesh's geometry. How you render this data onto a 2D canvas is another story.
There are two fundamental approaches to rendering 3D models on a 2D canvas.
Ray Tracing
The basic idea of ray tracing is to pass a ray from the camera's origin through the point on the canvas whose colour needs to be determined, find which models get hit by it, pick the closest one, and determine how it is lit to compute the colour there. Lighting is computed by further tracing rays from the hit point to all the light sources in the scene. Notice that this approach eliminates the need for hidden-surface-determination algorithms like back-face culling, the Z-buffer, etc., since the basic idea, ray casting, is itself a hidden-surface algorithm.
There are packages, libraries, etc. that help you do this; however, it is common for ray tracers to be written from scratch as a college-level project. This approach takes more time to render (not to code), but the results are generally more pleasing than those of the approach below. It is more popular when you want to render non-interactive visuals like movies.
Rasterization
This approach takes the primitives (triangles and quads) that define the models in the scene, samples them at regular intervals (the screen pixels they cover) and writes the result into a colour buffer. Hidden surfaces are usually eliminated using the Z-buffer: a buffer that stores the depth of the fragment at each pixel, where the closer one wins when writing to the colour buffer (a sketch of this test follows below).
Rasterization is the more popular approach, with cheap hardware support for it available on most modern computers due to the years of research and money that have gone into it. Libraries like OpenGL and Direct3D are readily available to facilitate development. Although the results are less pleasing than ray tracing, it is faster to render and thus is widely used in interactive, real-time rendering like games.
If you do not want to use those libraries, then you have to do what is commonly known as software rendering, i.e. you will end up doing what these libraries do.
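A minimal sketch of the Z-buffer test mentioned above, with Framebuffer as a local helper type: a fragment's colour is written only if it is closer than whatever was drawn at that pixel before:

    #include <cstdint>
    #include <limits>
    #include <vector>

    struct Framebuffer {
        int w, h;
        std::vector<uint32_t> color;
        std::vector<float> depth;
        Framebuffer(int w, int h)
            : w(w), h(h), color(size_t(w) * h, 0),
              depth(size_t(w) * h, std::numeric_limits<float>::infinity()) {}

        void writeFragment(int x, int y, float z, uint32_t rgba) {
            size_t i = size_t(y) * w + x;
            if (z < depth[i]) {   // the closer fragment wins
                depth[i] = z;
                color[i] = rgba;
            }
        }
    };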
What formula or algorithm can I use to draw a 3D Sphere?
It depends on which one of the above you choose. If you simply rasterize a 3D sphere in 2D with orthographic projection, all you have to do is draw a circle on the canvas.
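A minimal sketch of that orthographic case with simple shading added: inside the silhouette circle the sphere's depth is z = sqrt(r^2 - x^2 - y^2), and (x, y, z)/r is the unit normal, which gives Lambert shading. setPixel is a placeholder; with WinBGIm you would plug putpixel in there:

    #include <cmath>

    void drawSphere(int cx, int cy, int r,
                    void (*setPixel)(int x, int y, int gray)) {
        // Unit light direction, pointing from the surface toward the light.
        const double lx = 0.57735, ly = -0.57735, lz = 0.57735;
        for (int y = -r; y <= r; ++y)
            for (int x = -r; x <= r; ++x) {
                double d2 = double(r) * r - double(x) * x - double(y) * y;
                if (d2 < 0) continue;             // outside the silhouette circle
                double z = std::sqrt(d2);
                double nx = x / double(r), ny = y / double(r), nz = z / double(r);
                double lambert = nx * lx + ny * ly + nz * lz;
                if (lambert < 0) lambert = 0;     // facing away from the light
                setPixel(cx + x, cy + y, int(255 * lambert));
            }
    }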
If you are looking for hidden lines removal (drawing the edges rather than the inside of the faces), the solution is easy: "back face culling".
Every edge of your model belongs to two faces. For every face you can compute the normal vector and check whether it is facing the observer (by the sign of the dot product of the normal and the direction of the projection line); in other words, whether the observer is located in the outer half-space defined by the plane of the face. An edge is then wholly visible if and only if it belongs to at least one front face.
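In code the test reads as below; view is the projection direction, pointing from the observer into the scene, and the types are local helpers:

    struct Vec3 { double x, y, z; };

    double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Front-facing: the normal points against the projection direction.
    bool frontFacing(Vec3 faceNormal, Vec3 view) {
        return dot(faceNormal, view) < 0;
    }

    // An edge is drawn if at least one of its two adjacent faces is front.
    bool edgeVisible(Vec3 n1, Vec3 n2, Vec3 view) {
        return frontFacing(n1, view) || frontFacing(n2, view);
    }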
The usual discretization of the sphere is made by drawing equidistant parallels and meridians. It may be advantageous to adjust the spacing of the parallels so that all tiles have about the same area.
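A small sketch of that adjustment: the area of a spherical band is proportional to the difference of the sines of its bounding latitudes, so picking latitudes with equally spaced sines makes all bands (and hence all tiles, for a fixed number of meridians per band) about the same area:

    #include <cmath>
    #include <vector>

    // Latitudes (radians, -pi/2 .. +pi/2) bounding `bands` equal-area bands.
    std::vector<double> equalAreaParallels(int bands) {
        std::vector<double> lat(bands + 1);
        for (int k = 0; k <= bands; ++k)
            lat[k] = std::asin(2.0 * k / bands - 1.0);
        return lat;
    }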

Detecting objects near cursor to snap to - any alternatives to picking ray?

In the problem of detecting objects near the mouse cursor to snap to (in a 3D view), we are using the picking-ray method (which basically forms a 3D region around the cursor's immediate neighborhood and then detects objects present in that region).
I wonder if it is the only way to solve the task. Can I use, for example, the view matrix to get the 2D coordinates of the object in view space, then search for any objects in the cursor's vicinity?
I am not happy with the picking-ray method because it is relatively expensive, so the question is essentially whether any space-transformation-based method will generally be faster. I am new to 3D programming, so please give me a direction to dig into.
You can probably speed up the ray-picking process by forming a hierarchy of nested bounding boxes around the objects and checking the rays for intersection with the bounding boxes first. This way, you can spare a lot of intersection tests.
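The per-node test that makes such a hierarchy pay off is the standard ray/AABB "slab" test; a minimal sketch, assuming the ray direction has no exactly-zero components (otherwise the reciprocal needs special-casing):

    #include <algorithm>

    struct Aabb { double lo[3], hi[3]; };

    // o = ray origin, invDir = componentwise 1/direction (precomputed per ray).
    bool rayHitsBox(const double o[3], const double invDir[3], const Aabb& b) {
        double t0 = 0.0, t1 = 1e30;
        for (int a = 0; a < 3; ++a) {
            double tn = (b.lo[a] - o[a]) * invDir[a];
            double tf = (b.hi[a] - o[a]) * invDir[a];
            if (tn > tf) std::swap(tn, tf);
            t0 = std::max(t0, tn);
            t1 = std::min(t1, tf);
            if (t0 > t1) return false;   // slabs do not overlap: the ray misses
        }
        return true;                     // only then descend into the children
    }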
There is an alternative, exploiting the available rendering engine: instead of rendering to the screen with the normal rendering attributes, you can render the same view to an off-screen buffer, using flat shading and setting a different color for every object. You will obtain an object map that instantaneously tells you the object id for any pixel.
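A hedged sketch of reading such an object map, assuming the off-screen pass has already filled a pixel buffer with object ids encoded as flat 24-bit colours; for snapping, scan a small window around the cursor and keep the closest non-background id:

    #include <cstdint>
    #include <vector>

    uint32_t colorToId(uint32_t rgb) { return rgb & 0xFFFFFF; } // 24-bit id

    struct ObjectMap {
        int w, h;
        std::vector<uint32_t> pixels;   // filled by the off-screen render pass
    };

    // Nearest object within `radius` pixels of the cursor; 0 = nothing there.
    uint32_t pickNear(const ObjectMap& map, int mx, int my, int radius) {
        uint32_t best = 0;
        int bestD2 = radius * radius + 1;
        for (int dy = -radius; dy <= radius; ++dy)
            for (int dx = -radius; dx <= radius; ++dx) {
                int x = mx + dx, y = my + dy;
                if (x < 0 || y < 0 || x >= map.w || y >= map.h) continue;
                uint32_t id = colorToId(map.pixels[size_t(y) * map.w + x]);
                int d2 = dx * dx + dy * dy;
                if (id != 0 && d2 < bestD2) { best = id; bestD2 = d2; }
            }
        return best;
    }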

How to pick a vector graphics element quickly in lots of elements

How do I quickly "pick" a 2D element among a large number of vector graphics elements, such as polylines, polygons, curves, etc.?
In Qt, QGraphics can do this easily, but in my program I don't need this class; I just need QPaint and QWidget. I want to manage and render the element data myself.
So..
What related graphics knowledge do I need to search for on Google? A BSP-tree? An R-tree?
Give me some advice, Thanks!
It seems that an R-tree is more suited for picking than a BSP-tree. According to the Wikipedia article on Spatial Indexing, the R-tree is
Typically the preferred method for indexing spatial data. Objects (shapes, lines and points) are grouped using the minimum bounding rectangle (MBR). Objects are added to an MBR within the index that will lead to the smallest increase in its size.
But are you sure it's worth your while to implement the creation, maintenance, and use of the R-tree rather than using QGraphics?
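For flavour, the "smallest increase in its size" rule from the quote is easy to state in code; a minimal sketch with Mbr as a local helper type:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Mbr { double x0, y0, x1, y1; };

    double area(const Mbr& r) { return (r.x1 - r.x0) * (r.y1 - r.y0); }

    Mbr merge(const Mbr& a, const Mbr& b) {
        return { std::min(a.x0, b.x0), std::min(a.y0, b.y0),
                 std::max(a.x1, b.x1), std::max(a.y1, b.y1) };
    }

    // Index of the child MBR that grows the least when absorbing `box`;
    // `children` must be non-empty.
    size_t chooseSubtree(const std::vector<Mbr>& children, const Mbr& box) {
        size_t best = 0;
        double bestGrowth = 1e300;
        for (size_t i = 0; i < children.size(); ++i) {
            double growth = area(merge(children[i], box)) - area(children[i]);
            if (growth < bestGrowth) { bestGrowth = growth; best = i; }
        }
        return best;
    }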
