Does anybody know how to create a procedural skydome, or where I can find information on how to do one? Any help is welcome.
See this thread from GameDev. There's some example code in C++ in there too.
A skydome is simply a sphere drawn around the entire level. Just draw a sphere, and make sure back-face culling is off and front-face culling is on (since you're viewing the sphere from the inside).
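In OpenGL terms the culling setup is just a couple of lines (a sketch; drawSkydome() is a hypothetical stand-in for however you submit the sphere geometry):

    // We're inside the sphere, so cull front faces instead of back faces:
    // only the inward-facing side of the dome should be rasterized.
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);
    drawSkydome();        // hypothetical: submit the sphere mesh here
    glCullFace(GL_BACK);  // restore normal culling for the rest of the scene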
Procedurally generating a sphere is straightforward; my usual approach is to start with a hardcoded icosahedron and subdivide the faces until the required detail is reached (a sketch of the subdivision pass follows the link below). There is a thread on gamedev about generating a sphere:
http://www.gamedev.net/community/forums/topic.asp?topic_id=537269
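Here's a minimal sketch of the subdivision pass, assuming the hardcoded icosahedron data already exists; each pass splits every triangle into four and pushes the new vertices back onto the unit sphere (shared edge vertices aren't deduplicated, to keep it short):

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Triangle { Vec3 a, b, c; };

    static Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }

    // Midpoint of an edge, projected back onto the unit sphere.
    static Vec3 midpointOnSphere(const Vec3& a, const Vec3& b) {
        return normalize({ (a.x + b.x) * 0.5f,
                           (a.y + b.y) * 0.5f,
                           (a.z + b.z) * 0.5f });
    }

    // One subdivision pass: every triangle becomes four smaller ones.
    std::vector<Triangle> subdivide(const std::vector<Triangle>& in) {
        std::vector<Triangle> out;
        out.reserve(in.size() * 4);
        for (const Triangle& t : in) {
            Vec3 ab = midpointOnSphere(t.a, t.b);
            Vec3 bc = midpointOnSphere(t.b, t.c);
            Vec3 ca = midpointOnSphere(t.c, t.a);
            out.push_back({ t.a, ab, ca });
            out.push_back({ ab, t.b, bc });
            out.push_back({ ca, bc, t.c });
            out.push_back({ ab, bc, ca });
        }
        return out;
    }

Run it a few times over the 20 icosahedron faces and you get a nicely uniform sphere.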
I'm not sure that really answers your question; seeing your response to the other answer makes me think there is some confusion about what a skydome is. To reiterate: it's just a sphere, and the important bit is the texture you draw on it.
I saw an infographic online that I wanted to use as a challenge to learn d3.js. The original infographic is here:
http://www.shah3d.com/wp-content/uploads/2012/11/IG-WWF-Dehahs.png
I've made a start here:
http://www.tips-for-excel.com/d3test/arc/arc%20test.html
You'll notice that the original has nice lines that link an arc from the bottom with an arc along the top. So far I can only think of painfully placing lots of circles to achieve this effect, hence the odd red circle currently in my visual. What would people's best methods be to replicate the original graphic? Which element would make this task easier? Arcs? Lines? I imagine I'll have to manipulate my data so that the lines go where they're meant to.
Happy to give more info if needed and thanks for taking time to read this.
There are a few existing diagrams to work off of from d3's galleries:
http://bl.ocks.org/4062006
http://bost.ocks.org/mike/uberdata/
Both use the length of the chord as the width of the stroke (initially), but you can tinker with that, for sure.
Graphically, you could use arcs, or full circles with a clip path around them. As for the 'best' way to do it, that may fall out of the 'requirements' of how your graphic needs to behave (animation, relative arc placement, etc.).
Personally, I'd go with path arcs.
I would like to draw millions of line segments to the screen.
Most of the time the user will see only a certain area of the "universe", but the user should have the ability to "zoom out" to see all line segments at once.
My understanding is that the primitive is a "triangle", so I will have to express my line segments as triangles. Millions of triangles.
Is XNA the right tool for this job, or will it be too slow?
Additional Detail:
This is not for a game, but for a program that models some processes.
I do not care which language to use (XNA was recommended to me)
P.S.: Please let me know if you need additional detail.
My understanding is that the primitive is a "triangle", so I will have to express my line segments as triangles. Millions of triangles.
Incorrect: XNA can perfectly well draw lines for you, in the following manner:
GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.LineList, vertexOffset, 0, numVertices, startIndex, primitiveCount);
(Or PrimitiveType.LineStrip if the end vertex of line1 is the start vertex of line2).
Is XNA the right tool for this job, or will it be too slow?
XNA is "a tool", and if you're drawing a lot of lines this is definately going to be faster than GDI+ and easy to implement than C++ in combo with Unmannaged D3D. Drawing a line is a very cheap operation. I would advice you to just install XNA and do a quick prototype to see how many lines you can draw at the same time. (My guess is at least 1 million). Then see if you really need to use the advanced techniques described by the other posters.
Also the "Polyline simplification" technique suggested by Felice Pollano doesn't work for individual lines, only for models made up of triangles (you can exchange a lot of small triangles for a few bigger once to increase performance but decrease visuals, if you're zoomed out pritty far nobody will notice) It also won't work for "thickened up lines" because they will always consist of 2 triangles. (Unless if you allow bended lines).
When you zoom in to the details, a simple bounding-box check of whether each segment is visible avoids drawing invisible objects. When the user zooms all the way out, you should apply a polyline simplification algorithm such as http://www.softsurfer.com/Archive/algorithm_0205/algorithm_0205.htm to avoid having too many things to draw.
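The algorithm described at that link is Douglas-Peucker; here is a rough, untuned 2D sketch of it (tol is the maximum deviation you're willing to lose):

    #include <cmath>
    #include <vector>

    struct Pt { double x, y; };

    // Perpendicular distance from p to the infinite line through a and b.
    static double perpDist(const Pt& p, const Pt& a, const Pt& b) {
        double dx = b.x - a.x, dy = b.y - a.y;
        double len = std::hypot(dx, dy);
        if (len == 0.0) return std::hypot(p.x - a.x, p.y - a.y);
        return std::fabs(dy * (p.x - a.x) - dx * (p.y - a.y)) / len;
    }

    // Recursive step: keep the farthest point if it deviates more than tol.
    static void simplify(const std::vector<Pt>& pts, size_t first, size_t last,
                         double tol, std::vector<Pt>& out) {
        double maxDist = 0.0;
        size_t index = first;
        for (size_t i = first + 1; i < last; ++i) {
            double d = perpDist(pts[i], pts[first], pts[last]);
            if (d > maxDist) { maxDist = d; index = i; }
        }
        if (maxDist > tol) {
            simplify(pts, first, index, tol, out);
            out.push_back(pts[index]);
            simplify(pts, index, last, tol, out);
        }
    }

    // Douglas-Peucker polyline simplification: endpoints are always kept.
    std::vector<Pt> douglasPeucker(const std::vector<Pt>& pts, double tol) {
        std::vector<Pt> out;
        if (pts.empty()) return out;
        out.push_back(pts.front());
        if (pts.size() > 1) {
            simplify(pts, 0, pts.size() - 1, tol, out);
            out.push_back(pts.back());
        }
        return out;
    }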
This guy had the same problem, and this might help (here are the sources).
Yes, as Felice alludes to, simplifying the problem-set is the name of the game. There are obvious limits to hardware and algorithms, so the only way to draw "more stuff" is actually to draw "less stuff".
You can use techniques such as dividing your scene into an octree so that you can do view-frustum culling (a sketch of the per-object test follows the quote below). There are tons of techniques for scaling back what you're drawing. One of my favorites is the use of impostors to create a composite scene which is easier to draw. Here's a paper which explains the technique:
http://academic.research.microsoft.com/Paper/1241430.aspx
Impostors are image-based primitives commonly used to replace complex geometry in order to reduce the rendering time needed for displaying complex scenes. However, a big problem is the huge amount of memory required for impostors. This paper presents an algorithm that automatically places impostors into a scene so that a desired frame rate and image quality is always met, while at the same time not requiring enormous amounts of impostor memory. The low memory requirements are provided by a new placement method and through the simultaneous use of other acceleration techniques like visibility culling and geometric levels of detail.
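As for the view-frustum culling mentioned above, the per-object test is tiny; here's a sketch assuming you've already extracted the six frustum planes with normals pointing inward:

    struct Plane { float nx, ny, nz, d; };  // n . p + d >= 0 means "inside"
    struct AABB  { float minX, minY, minZ, maxX, maxY, maxZ; };

    // Conservative AABB-vs-frustum test: for each plane, take the box
    // corner farthest along the plane normal (the "positive vertex").
    // If even that corner is outside a plane, the whole box is outside.
    bool boxVisible(const AABB& b, const Plane planes[6]) {
        for (int i = 0; i < 6; ++i) {
            const Plane& p = planes[i];
            float px = (p.nx >= 0.0f) ? b.maxX : b.minX;
            float py = (p.ny >= 0.0f) ? b.maxY : b.minY;
            float pz = (p.nz >= 0.0f) ? b.maxZ : b.minZ;
            if (p.nx * px + p.ny * py + p.nz * pz + p.d < 0.0f)
                return false;  // completely outside this plane
        }
        return true;  // inside or intersecting (conservatively visible)
    }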
Is there a formula that I can use to calculate texture coordinates for a complex object, not something simple like a cube or sphere?
The texture coordinates are usually set manually by whoever creates the model, using the modelling package.
There are ways of automating the whole process, to a great extent. The results may not be much use if somebody is going to draw the texture based on the UV coordinates, and if you ask the impossible (e.g., mapping a sphere exactly, with no distortion and no seams) then you may not get perfect results -- but for processes such as light mapping this is a common approach.
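(For the simple shapes there really is a formula; e.g. a spherical projection, sketched below, which also illustrates why formulas stop working for general objects: it has a seam where the angle wraps and heavy distortion at the poles.)

    #include <cmath>

    // Spherical projection: map a normalized direction to (u, v) in [0, 1].
    // The atan2 wrap-around produces a visible seam, and the poles are
    // heavily distorted -- which is why complex meshes get unwrapped instead.
    void sphereUV(float x, float y, float z, float& u, float& v) {
        const float PI = 3.14159265358979f;
        u = 0.5f + std::atan2(z, x) / (2.0f * PI);
        v = std::acos(y) / PI;  // assumes (x, y, z) has unit length
    }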
Levy's LSCM is one approach, as used in Blender, for example. See http://alice.loria.fr/index.php/publications.html?Paper=lscm#2002
Direct3D9 has a UV unwrap tool in its D3DX library; I'm not sure what algorithm it uses, and the documentation isn't amazing, but it does work. See
http://msdn.microsoft.com/en-us/library/bb206321(VS.85).aspx
(Most 3D modelling packages have some kind of automated UV unwrap, too, but in general they never seem to have had too much time spent on them. Presumably the expectation is that somebody will want to go through and fix it up by hand afterwards anyway.)
I'm raytracing and would like to speed it up via some acceleration structure (kd-tree, BVH, whatever). I don't want to code it up myself. What I've tried so far:
Yanking the kd-tree out of pbrt. There are so many intra-dependencies that I couldn't succeed at this without pulling all of pbrt into my code.
CGAL's AABB tree. Frustratingly, this seems to return only the point of intersection. Without knowing which triangle the point came from, I can't efficiently interpolate color over the triangle. I'd love to just extend the notion of "Point" with color, but this doesn't seem possible without writing a lot of template code from scratch.
Writing my own. Okay so I wrote my own grid acceleration class, and it works, but it's nasty and inefficient.
So, if anyone can suggest a simple library that I can use for this purpose, I'd really appreciate it! All I need is: given a triangle soup and a ray, find the closest intersection and return the index of that triangle -- essentially an accelerated version of the brute-force sketch below.
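For concreteness, here's a brute-force sketch of exactly the interface I want (Moller-Trumbore per triangle; the ad-hoc vector type is just for illustration):

    #include <cstddef>
    #include <limits>
    #include <vector>

    struct V3 { double x, y, z; };
    static V3 sub(V3 a, V3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static V3 cross(V3 a, V3 b) { return { a.y * b.z - a.z * b.y,
                                           a.z * b.x - a.x * b.z,
                                           a.x * b.y - a.y * b.x }; }
    static double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    struct Tri { V3 a, b, c; };

    // Moller-Trumbore ray/triangle intersection; returns distance t, or -1.
    static double intersect(V3 orig, V3 dir, const Tri& t) {
        const double EPS = 1e-12;
        V3 e1 = sub(t.b, t.a), e2 = sub(t.c, t.a);
        V3 p = cross(dir, e2);
        double det = dot(e1, p);
        if (det > -EPS && det < EPS) return -1.0;  // ray parallel to triangle
        double inv = 1.0 / det;
        V3 s = sub(orig, t.a);
        double u = dot(s, p) * inv;
        if (u < 0.0 || u > 1.0) return -1.0;
        V3 q = cross(s, e1);
        double v = dot(dir, q) * inv;
        if (v < 0.0 || u + v > 1.0) return -1.0;
        double dist = dot(e2, q) * inv;
        return dist > EPS ? dist : -1.0;
    }

    // Returns the index of the closest hit triangle, or -1 if none.
    long closestHit(V3 orig, V3 dir, const std::vector<Tri>& soup) {
        long best = -1;
        double bestT = std::numeric_limits<double>::max();
        for (size_t i = 0; i < soup.size(); ++i) {
            double t = intersect(orig, dir, soup[i]);
            if (t >= 0.0 && t < bestT) { bestT = t; best = (long)i; }
        }
        return best;
    }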
Jaco Bikker wrote this series of tutorials: http://www.devmaster.net/articles/raytracing_series/part7.php
They're very helpful and he includes code at the end for a ray tracer using a kd-tree.
You might be able to use that.
The G3D engine has a ray tracing implementation. Not sure how efficient it is, though. It shouldn't be too much trouble to use the tree implementation without the rest of the library.
There's a lot of literature on collision detection, and I've read at least a big enough portion of it to be fairly familiar with most techniques. However, there's something that has eluded me for a while, and I figured, since StackOverflow provides access to a large group of brilliant minds at once, I'd ask here first before digging around in the bookshelf.
In this day and age, more and more work is being delegated to the GPU rather than the CPU, and in a lot of cases this is a good thing. For example, there are geometry shaders to create new geometry, or (slightly less new, but still quite fascinating) vertex shaders that you can throw a bunch of vertices at, and something elegant will come out of it. What I was considering, though: since these primitives exist only on the graphics hardware, how would you perform reliable collision detection with them? Let's assume I have some kind of extremely simplified mesh which is displaced in a vertex shader (I don't have a concrete problem; I'm more playing with the idea, and I haven't gotten very deep into geometry shaders yet).
What I've considered so far is separate 'rendering' passes from suitable angles, with shading more or less turned off and perhaps a lower-resolution mesh: render the inside of my second primitive (with faces flipped inward) along with the mesh I want to test against, and execute an occlusion query for the mesh. If the mesh is completely occluded, there'd be no intersection. This would of course require that my second primitive is convex.
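In OpenGL terms, I picture the query part looking roughly like this (just a sketch; drawInsideOfPrimitive() and drawTestMesh() are hypothetical stand-ins for the actual draw calls, and depth testing is assumed to be enabled):

    // Lay down the depth of the (inward-facing) convex primitive first,
    // with color writes off since we only care about occlusion.
    GLuint query;
    glGenQueries(1, &query);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawInsideOfPrimitive();   // hypothetical: inward-flipped faces

    // Count how many samples of the test mesh survive the depth test.
    glBeginQuery(GL_SAMPLES_PASSED, query);
    drawTestMesh();            // hypothetical: the displaced mesh
    glEndQuery(GL_SAMPLES_PASSED);

    GLuint samples = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);  // note: stalls
    bool fullyOccluded = (samples == 0);  // fully occluded => no intersection
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDeleteQueries(1, &query);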
Somehow I get the feeling that this kind of test will be extremely expensive as the number of primitives increases (even if a large portion can be culled directly). Does anyone else have another idea or technique? I'm more familiar with OpenGL and Cg than DirectX, but if you have some examples in DirectX, I guess I'll be able to figure out the OpenGL counterparts.
All ideas are appreciated, so please brainstorm. :)
It sounds like Dan Horn's article “Stream Reduction Operations for GPGPU Applications” in GPU Gems 2 is exactly what you want. Like all chapters, it's freely available online.