I'm new to glTF and I have a very basic, and maybe naive, question. Sorry, and thanks for your understanding and your help.
We have a C++ application where we handle geometry primitive entities, like boxes, cones, cylinders, and so forth.
For visualizing the geometry entities we currently use Coin3D, which has corresponding geometry shapes: Box, Cone, and so on.
We now would like to add a glTF exporter too, and I have started to explore the glTF specs.
I must say that, in the official documentation and on the web, I could not find any support in glTF for basic geometric shapes.
Therefore, my questions are:
Is it true that glTF has no notion of, let's say, a Box or a Cone? Or did I miss something obvious?
If the answer to 1) is "NO", are there tested/supported/suggested implementations for basic shapes? I have only found some "example" shapes, like the Box here, but I could not find any collection of implementations of basic shapes. Again, did I miss something?
Are there any best practices, or documentation, on how to implement basic geometry shapes in glTF?
The short answer is that you're correct: glTF does not currently store basic geometric shapes directly as a box, cone, cylinder, etc. The format is intended to be a runtime delivery format, not an asset interchange format.
As such, the internal data structures within glTF are designed to mimic the raw data that would typically be fed into a GPU using a graphics API such as OpenGL, WebGL, etc. Entire blocks of glTF data can often be pulled off a disk or network and handed over directly to a graphics API for rendering, with minimal pre-processing.
This means that all of your basic shapes must arrive as the GPU expects to find them: triangulated. Even a simple box is made up of twelve triangles, and because its sides don't share normal vectors, the normal vertex attributes differ, so triangles from different sides of the box don't share vertices (again, because the GPU wouldn't accept that as raw input). The benefit is that a WebGL client doesn't have to think very hard about what to do when it receives a glTF file; it can just start cramming data into the graphics pipeline to get things moving.
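To make that concrete, here is a minimal sketch of the kind of flat, GPU-ready arrays a box has to be expanded into before an exporter can write them into glTF buffers and accessors (the `BoxMesh` struct and `makeUnitBox` helper are hypothetical names, not part of any glTF SDK):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: expand a unit box into the flat arrays glTF expects.
// 24 vertices (4 per face) rather than 8, because faces cannot share normals.
struct BoxMesh {
    std::vector<float>    positions; // x,y,z per vertex -> accessor VEC3/FLOAT
    std::vector<float>    normals;   // x,y,z per vertex -> accessor VEC3/FLOAT
    std::vector<uint16_t> indices;   // 36 indices = 12 triangles -> SCALAR/UNSIGNED_SHORT
};

BoxMesh makeUnitBox()
{
    BoxMesh m;
    // For each face: outward normal n and two in-plane axes u, v with u x v = n,
    // so the winding below comes out counter-clockwise, as glTF requires.
    const float axes[6][3][3] = {
        {{ 1,0,0}, {0,1,0}, {0,0,1}}, {{-1,0,0}, {0,0,1}, {0,1,0}},
        {{0, 1,0}, {0,0,1}, {1,0,0}}, {{0,-1,0}, {1,0,0}, {0,0,1}},
        {{0,0, 1}, {1,0,0}, {0,1,0}}, {{0,0,-1}, {0,1,0}, {1,0,0}},
    };
    const float corners[4][2] = {{-1,-1}, {1,-1}, {1,1}, {-1,1}};
    for (int f = 0; f < 6; ++f) {
        const float *n = axes[f][0], *u = axes[f][1], *v = axes[f][2];
        const uint16_t base = static_cast<uint16_t>(m.positions.size() / 3);
        for (const auto& c : corners) {
            for (int i = 0; i < 3; ++i) {
                m.positions.push_back(0.5f * (n[i] + c[0] * u[i] + c[1] * v[i]));
                m.normals.push_back(n[i]); // every vertex of this face shares n
            }
        }
        for (int i : {0, 1, 2, 0, 2, 3}) // two triangles per face
            m.indices.push_back(static_cast<uint16_t>(base + i));
    }
    return m;
}
```

From there, an exporter writes `positions`, `normals`, and `indices` into a binary buffer and describes them with bufferViews and accessors; cones, cylinders, and friends are the same exercise with more triangles.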
For a broader overview, the ever-popular glTF - What the Duck diagram is widely considered an excellent starting point, and the glTF Tutorials are a good follow-up to that.
Related
Can anyone help me find resources for learning how to implement transformation functions (translation, rotation and scaling) using the Vulkan API on the Windows platform?
There are very few resources on Vulkan as far as I know, and I couldn't find a single one about transformations in Vulkan.
If you are looking for source code, the Vulkan Cookbook has code samples implementing each of these transformations (translation, rotation and scaling).
If you are looking for an explanation of why they are implemented the way they are, you can look at any resource describing these transformations. Translation, rotation and scaling behave exactly the same no matter what API you are using; this is pure linear algebra. There are differences in projection matrices due to the different coordinate systems being used (Vulkan has depth in a [0; 1] range while OpenGL uses [-1; 1], and framebuffer space is inverted along the Y axis). But the same methodology applies when you construct them yourself; you just need to take these different coordinate systems into account.
The best resource I have encountered describing why the matrices are constructed the way they are is Frank D. Luna's Introduction to 3D Game Programming with DirectX.
If you're looking for something like the equivalent of glTranslatef, glScalef, and glRotatef, you're out of luck. Nothing like that exists in Vulkan (or in modern OpenGL, for that matter). With modern rendering APIs you manage transforms (view, model, projection) on the CPU side using your own preferred library (I use and recommend GLM). Transforms are passed to the shaders using uniform blocks, typically as mat4 values, where they can then be used to transform input vertices.
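For instance, a CPU-side setup with GLM might look like the sketch below (`buildMvp` is a made-up helper name; the Vulkan-specific parts are the depth-range define and the Y flip mentioned in the previous answer):

```cpp
// GLM emits [0; 1] clip-space depth (Vulkan-style) instead of OpenGL's [-1; 1]
// when this is defined before including the headers.
#define GLM_FORCE_DEPTH_ZERO_TO_ONE
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 buildMvp(float aspect, float angleRadians)
{
    // Model: scale, then rotate about Y, then translate into the scene.
    glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f))
                    * glm::rotate(glm::mat4(1.0f), angleRadians, glm::vec3(0.0f, 1.0f, 0.0f))
                    * glm::scale(glm::mat4(1.0f), glm::vec3(2.0f));

    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 0.0f),   // eye
                                 glm::vec3(0.0f, 0.0f, -1.0f),  // look target
                                 glm::vec3(0.0f, 1.0f, 0.0f));  // up

    glm::mat4 proj = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 100.0f);
    proj[1][1] *= -1.0f; // compensate for Vulkan's Y-inverted clip space

    return proj * view * model; // upload via a uniform block as a mat4
}
```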
If you are asking how to implement these yourself, you will have to study some linear algebra; the previous answers covered this.
If not, then I suggest using GLM. It is a library originally written for OpenGL, so you have to be careful when using it with Vulkan: the Y axis is flipped, so keep that in mind.
I am writing a program that will output 3D model files based on simple geometric shapes (e.g. rectangular prisms and cylinders) with known coordinates in 3-dimensional space. As an example, imagine creating a 3D model of Stonehenge. This question suggests that OBJ files are the easiest to generate, but I'm struggling to find a good tutorial or easy-to-use library for doing so.
Can anyone either
(1) describe step-by-step how to create a simple file OR
(2) point me to a tutorial that describes how to do so
Notes:
* Using a GUI-based program to draw such files is not an option for me
* I have no prior experience with 3D modeling
* Other formats such as WRL or DAE would work for me as well
EDIT:
I do not need to use textures, just combinations of simple geometric shapes positioned in 3D space.
I strongly recommend using an ASCII exchange format. There are many out there; I usually use these:
*.x DirectX object (its text form looks like C++ source code)
This one is the easiest to implement! But there are not many tools that can handle it. If you do not want to spend too much time coding, then this is the right choice. Just copy the templates (at the start) from any *.x file to get started.
here are some specs
*.iges common and importable on most CAD/CAM platforms (Catia included)
This one is a bit complicated, but for export purposes it is not that bad. It supports volume operations like +, -, &, ^, which are VERY HARD to implement properly, but you do not have to use them :)
*.dxf AutoCAD exchange format
This one is even more complicated than IGES. I do not recommend using it.
*.ac AC3D
I first saw this one in FlightGear.
here are some specs
At first glance it is quite easy, but the sub-object implementation is really tricky. Unless you use that, you should be fine.
This approach is easily verifiable in Notepad or by loading into some 3D model viewer. Choose the format that is most suitable for your needs and code save/load functions for your app's internal model class/struct. This way you will be compatible with other software and eliminate the incompatibility problems that come with creating "almost known" binary formats like 3ds, ...
In your case I would use IGES (Initial Graphics Exchange Specification)
For export you do not need to implement everything, just a few basic shapes, so it would not be too difficult. I code importers, which are much, much more complicated. My IGES loader class is about 30 KB of C++ source code; look here for more info.
You did not provide any info about your 3D mesh model structure and capabilities (what primitives you use, whether your objects are simple or in a skeleton hierarchy, whether you are using textures, and more), so it is impossible to answer precisely.
Anyway, export often looks like this:
create the header and structure of the target file format
if the format has any directory structure, fill it and write it (IGES)
for sub-objects, do not forget to add transformation matrices ...
write the chunks you need (point list, face list, normals, ...)
With ASCII formats you can do all of this inside a string variable, so you can easily insert into or modify it. Do everything in memory and write the whole thing to the file at the end; this is fast and also adds the capability to work with memory instead of files, which is handy if you want to pack many files into a single package file like *.pak, or send/receive files through IPC or a LAN ...
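As a concrete illustration of the build-in-memory approach, here is a minimal sketch that dumps a triangle mesh to Wavefront OBJ (picked because the question mentions it; the `Mesh` struct is a hypothetical stand-in for your app's internal model):

```cpp
#include <fstream>
#include <string>
#include <vector>

// Hypothetical stand-in for your app's internal model class/struct.
struct Mesh {
    std::vector<float> positions; // x,y,z per vertex
    std::vector<int>   triangles; // 3 vertex indices per face, 0-based
};

std::string meshToObj(const Mesh& m)
{
    std::string s = "# exported\n"; // build the whole file in memory first
    for (size_t i = 0; i + 2 < m.positions.size(); i += 3)
        s += "v " + std::to_string(m.positions[i]) + " "
                  + std::to_string(m.positions[i + 1]) + " "
                  + std::to_string(m.positions[i + 2]) + "\n";
    for (size_t i = 0; i + 2 < m.triangles.size(); i += 3)
        s += "f " + std::to_string(m.triangles[i] + 1) + " "     // OBJ indices
                  + std::to_string(m.triangles[i + 1] + 1) + " " // are 1-based
                  + std::to_string(m.triangles[i + 2] + 1) + "\n";
    return s; // the string can also go over IPC/LAN or into a *.pak
}

void saveObj(const Mesh& m, const std::string& path)
{
    std::ofstream(path) << meshToObj(m); // single write at the end
}
```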
[Edit1] more about IGES
file format specs
I learned IGES from this PDF ... I have no clue where I got it, but it was the first valid link I found on Google today. I am sure there is a no-registration link out there too. It is about 13.7 MB, and the original name is IGES5-3_forDownload.pdf.
win32 viewer
This is a free IGES viewer. I do not like the interface and handling, but it works. It is necessary to have a functional viewer for testing your exports ...
examples
Here are many tutorial files for many entities. There are 3 sub-links (igs, peek, gif) where you can see each example file in several forms for better understanding.
exporting to IGES
You did not provide any info about your 3D mesh's internal structure, so I cannot help with the export itself. There are many ways to export the same shape, so pick the one that is closest to your app's 3D mesh representation. For example, you can use:
point cloud
rotation surfaces
rectangle (QUAD) surfaces
border-line representation (non-solid)
trim surfaces, and many more ...
As the title says, I'd like to program a 3D game (probably a BattleZone clone), but without the use of an API like OpenGL, DirectX, and the like. At the heart of the matter, I'd just like to learn how to draw basic 3D shapes to the screen and manipulate them. I don't care if it looks like crap. I've used OpenGL to achieve similar ends before, but didn't really learn about these topics.
The problem is, I have no idea where to start. I downloaded the Doom source code, but it's a bit over my head. Although I've programmed a bit, graphical matters are very much out of my depth.
I'd be very grateful if anyone could offer links or code (in any language) that would help me along in my purpose.
Sounds like an exciting project. I did something similar in the late 90's. Before OpenGL and DirectX became popular, there were a ton of great books on the subject.
Fundamentally you will have to learn how to:
Represent 3D geometry
Transform that geometry (translate and rotate)
Project that geometry onto a 2D screen.
Each of those major topics has many sub-topics. For example, complex objects can be constructed from a number of polygons; you may want to limit polygons to triangles only, or support other polygons; and you may want to load common model formats (e.g. .obj files) so that you can create models with off-the-shelf tools.
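As a tiny taste of topic 3, a bare-bones pinhole projection of a camera-space point onto the screen might look like this (hypothetical `Vec2`/`Vec3` types; real engines do this with 4x4 matrices, but the idea is the same):

```cpp
struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

// Project a camera-space point (camera at the origin, looking down +Z)
// to pixel coordinates. Assumes p.z > 0, i.e. the point is in front of us.
Vec2 projectPerspective(const Vec3& p, double focalLength, int screenW, int screenH)
{
    // Perspective divide: farther points (larger z) shrink toward the center.
    const double sx = (p.x / p.z) * focalLength;
    const double sy = (p.y / p.z) * focalLength;
    // Shift to pixel coordinates; screen Y grows downward, hence the minus.
    return { screenW * 0.5 + sx, screenH * 0.5 - sy };
}
```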
The topics are way too broad for a detailed answer here. Whole books are written on the subject, including
Black Art of 3D Game Programming (Book, amazingly still available)
For a good introduction to the general topics, have a look at:
http://en.wikipedia.org/wiki/3D_projection
http://en.wikipedia.org/wiki/Orthographic_projection
http://en.wikipedia.org/wiki/Transformation_matrix#Perspective_projection
Doom, which you already looked at, used a restricted 2.5D approach (sector-based rendering driven by floor and ceiling heights) and does not allow rendering of arbitrary 3D shapes (e.g., you will not find a bridge in Doom that you can walk under).
I have the second edition of Computer Graphics: Principles and Practice in C, and it uses SRGP (Simple Raster Graphics Package) and SPHIGS, which is built on top of SRGP. If you look up articles and papers in graphics research you'll see that both of these libraries are used a lot, and they are far more direct and low-level than the APIs you mentioned. I'm having a hard time locating them, so if you do, please post a link. Note that the third edition uses WPF, so I cannot guarantee much as to its usefulness, and I don't know if the second edition is still in print, but I have found numerous references to the book, and it has its own page on Wikipedia.
Another option is the Win32 API, which again does not provide much in terms of rendering, but it is trivial to draw dots and lines onto a window. I have written a few tutorials on it, but I didn't cover drawing pixels and lines, so they'll only be useful if you have trouble with the basics of setting up a window. Note that it is not intended for real-time rendering, so it may get slow.
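For a sense of scale, the actual dot-and-line part is only this much GDI (a sketch; `paintDemo` is a made-up helper you would call from your WM_PAINT handler):

```cpp
#include <windows.h>

// Plot a pixel and a line with GDI. Simple, but not built for real-time speed.
void paintDemo(HWND hwnd)
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    SetPixel(hdc, 100, 100, RGB(255, 0, 0)); // one red pixel
    MoveToEx(hdc, 10, 10, nullptr);          // move the pen to a start point
    LineTo(hdc, 200, 150);                   // line with the currently selected pen
    EndPaint(hwnd, &ps);
}
```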
Finally, you can look at X11 programming, the windowing foundation of most Unix-like operating systems with a GUI. I haven't found the libraries for Windows, but then again I didn't invest much time in it. I know it is available through Cygwin and on Linux in general, though, and I believe it would be very interesting to look at the core of graphics since you're already looking under the hood of 3D graphics.
I've been studying 3D graphics on my own for a while now and I want to get a greater understanding of just how everything works. What I would like to do is create a simple game without using DirectX or OpenGL. I understand most of the math, I believe, but the problem I am running up against is that I do not know how to get control of the pixels being displayed in a window.
How do I specify what color I want each pixel in my window to be?
I understand I will probably run into issues with buffers and image shearing and probably terrible efficiency problems, but I want to create my own program so that I can see, from the very lowest level of a high-level language, how the rendering process works. I really have no idea where to start, though. I've figured out how to output BMPs, but I would like a running program spitting out 20+ frames per second. How do I accomplish this?
You could pick an environment that allows you to fill an array with pixel values and display it as a bitmap. That way you come closest to poking RGB values into video memory. WPF, Silverlight, and HTML5/JavaScript can do this. If you do not make it full-screen, these technologies should suffice for now.
In WPF and Silverlight, use the WriteableBitmap.
In HTML5, use the canvas element.
Then it is up to you to implement the logic for drawing lines, circles, Bézier curves, and 3D projections.
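For example, the classic Bresenham line algorithm drops straight into such a pixel array (a sketch; it assumes a row-major buffer of width W and that both endpoints lie inside it):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Integer Bresenham: set pixels along the line (x0,y0)-(x1,y1), any octant.
void drawLine(std::vector<uint32_t>& px, int W,
              int x0, int y0, int x1, int y1, uint32_t color)
{
    const int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    const int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        px[y0 * W + x0] = color;               // plot the current pixel
        if (x0 == x1 && y0 == y1) break;
        const int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; } // step in x
        if (e2 <= dx) { err += dx; y0 += sy; } // step in y
    }
}
```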
This is a lot of fun and you will learn a lot.
I'm reading between the lines that you're more interested in having full control over the rendering process from a low level, rather than having a specific interest in how to achieve that on one specific platform.
If that's the case then you will probably get good bang for your buck looking at a library like SDL, which provides you with a frame buffer that you can render to directly but abstracts away a lot of the platform-specific issues. It has been around for quite a while and there are some good tutorials to give you an idea of whether it's the kind of thing you're looking for - see this tutorial and the subsequent one in the same series, which should be enough to get you up and running.
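A minimal SDL2 skeleton of that idea looks roughly like this (assuming SDL2 is installed and linked; the gradient fill is just a placeholder for your own rasterizer):

```cpp
#include <SDL.h>
#include <cstdint>
#include <vector>

int main(int, char**)
{
    const int W = 640, H = 480;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* win = SDL_CreateWindow("software renderer",
        SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, W, H, 0);
    SDL_Renderer* ren = SDL_CreateRenderer(win, -1, 0);
    SDL_Texture* tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
        SDL_TEXTUREACCESS_STREAMING, W, H);

    std::vector<uint32_t> pixels(W * H); // your CPU-side frame buffer
    bool running = true;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e))
            if (e.type == SDL_QUIT) running = false;

        // "Set this pixel to this color": write 0xAARRGGBB values directly.
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                pixels[y * W + x] = 0xFF000000 | ((x & 0xFF) << 16) | (y & 0xFF);

        SDL_UpdateTexture(tex, nullptr, pixels.data(), W * int(sizeof(uint32_t)));
        SDL_RenderCopy(ren, tex, nullptr, nullptr);
        SDL_RenderPresent(ren); // one frame; easily 20+ fps at this resolution
    }
    SDL_Quit();
    return 0;
}
```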
You say you want to create some kind of rendering engine, meaning designing your own pipeline and matrix classes, which you then use to transform 3D coordinates into 2D points.
Once you have the 2D points you were looking for, you can draw them; on Windows, for instance, you can select a brush and draw your triangles, coloring them at the same time.
I do not know why you would need bitmaps, but if you want to practice texturing, you can also do that yourself, although of course on a weak computer this may cost you a significant number of frames per second.
If your aim is to understand how rendering works at the lowest level, this is without doubt good practice.
So I stumbled upon this "new" graphics engine/technology called Unlimited Detail.
This seems pretty interesting, provided it's real and not a fake.
They have some videos explaining the technology but they only scratch the surface.
What do you think about it? Is it programmatically possible?
Or is it just a scam for investors?
Update:
Since the only answer was based on voxels I have to copy this from their site:
Unlimited Detail's method is very different from any 3D method that has been invented so far. The three current systems used in 3D graphics are ray tracing, polygons, and point cloud/voxels; they all have strengths and weaknesses. Polygons run fast but have poor geometry; ray tracing and voxels have perfect geometry but run very slowly.
Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine
The underlying technology is related to something called sparse voxel octrees (see, e.g., this paper), which aren't anything incredibly amazing. What the video doesn't tell you is that these are not at all suited for things that need to be animated, so they're of limited use for anything that uses procedural animation (ragdoll physics, etc.). So they're very inflexible: you can get great detail, but you get it in a completely static world.
A rough summary of where things stand with this technology in mainstream games is here. You will also want to check out Samuli Laine's work; he's a Finnish researcher who is focusing a great deal of his attention on this subject and is unlocking some of the secrets to implementing it well.
Update: Yes, the website says it's not "voxel-based". I suspect this is merely an issue of semantics, however, in that what they're using are essentially voxels, but because it's not exactly a voxel they feel safe in being able to claim that it's not voxel-based. In any case, the magic isn't in how similar to a voxel it is -- it's how they select which voxels to actually show. This is the primary determinant of speed.
Right now, there is no incredibly fast way to show voxels (or something approximating a voxel). So either they have developed a completely new, non-peer-reviewed method for filtering voxels (or something like them), or they're lying.
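For reference, the textbook form of a sparse voxel octree node is tiny (an illustrative sketch only; Euclideon's actual data layout is not public):

```cpp
#include <array>
#include <cstdint>
#include <memory>

// Each node covers a cube of space; each child covers one octant of it.
// Empty space costs nothing: absent octants are simply null pointers.
struct SvoNode {
    uint8_t childMask = 0;                          // bit i set => octant i occupied
    std::array<std::unique_ptr<SvoNode>, 8> child;  // allocated only where geometry is
    uint32_t color = 0;                             // averaged color for coarse LOD
};
```

Rendering then reduces to deciding which occupied octants to descend into for each screen region, which matches the point above: the speed hinges entirely on how voxels are selected for display.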
You might find more detail in the following patents:
"A Computer Graphics Method For Rendering Three Dimensional Scenes"
"A Method For Efficent Streaming Of Octree Data For Access"
- Each voxel (they call it a "node") is represented as a single bit, along with information about voxels at a finer level of detail.
The full text can be viewed online here:
https://www.lens.org/lens/search?q=Euclideon+Pty+Ltd&l=en
or
http://worldwide.espacenet.com/searchResults?submitted=true&query=EUCLIDEON