I've written a fairly simplistic renderer of my own using OpenGL ES 2.0. Essentially, it's just a class for rendering quads from a given sprite texture. To elaborate, it's really just a single object that accepts objects representing quads. Each quad object maintains a world transform and an object transform matrix, provides methods for transforming them over a given number of frames, and specifies texture offsets into the sprite. The quad class also maintains a list of transform operations to execute on its matrices. The renderer class then reads all of these properties from each quad and sets up a VBO to draw every quad in the render list.
For example:
Quad* q1 = new Quad();
Quad* q2 = new Quad();
q1->translate(vector3( .1,  .3, 0), 30); // Move the quad to the right and up for 30 frames.
q2->translate(vector3(-.1, -.3, 0), 30); // Move the quad down and to the left for 30 frames.
Renderer renderer;
renderer.addQuads({q1, q2});
It's more complex than this, but you get the general idea.
From an implementation perspective, on each frame it transforms the base vertices of each object according to its queued instructions, loads them all into a VBO along with per-vertex info such as the alpha value, and hands that to a shader program to draw all of the quads at once.
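Stripped down, the per-frame upload looks roughly like this (the names and the interleaved layout are simplified here, not the exact code):

#include <GLES2/gl2.h>
#include <vector>

// One interleaved vertex: position, texture coordinate into the sprite, alpha.
struct Vertex { float x, y, z, u, v, a; };

// The renderer gathers six pre-transformed vertices per quad (two triangles),
// re-uploads the whole batch into one VBO and issues a single draw call.
void drawQuadBatch(const std::vector<Vertex>& batch, GLuint vbo, GLuint program,
                   GLuint posLoc, GLuint uvLoc, GLuint alphaLoc)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, batch.size() * sizeof(Vertex),
                 batch.data(), GL_DYNAMIC_DRAW);              // streamed every frame

    glUseProgram(program);
    glVertexAttribPointer(posLoc,   3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)0);
    glVertexAttribPointer(uvLoc,    2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)(3 * sizeof(float)));
    glVertexAttribPointer(alphaLoc, 1, GL_FLOAT, GL_FALSE, sizeof(Vertex), (const void*)(5 * sizeof(float)));
    glEnableVertexAttribArray(posLoc);
    glEnableVertexAttribArray(uvLoc);
    glEnableVertexAttribArray(alphaLoc);

    glDrawArrays(GL_TRIANGLES, 0, (GLsizei)batch.size());
}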
This obviously isn't what I would call a rendering engine, but it performs a similar task, just for rendering 2D quads instead of 3D geometry. I'm just curious whether I'm on the right track for developing a makeshift rendering engine. I agree that in most cases it's great to use an established rendering engine to get started, but from my point of view I like to have some understanding of how things are implemented, as opposed to learning something prebuilt first and only then learning how it works.
The problem with this approach is that adding new geometry, textures or animations requires writing code. It should be possible to create content for a game engine using established tools like 3ds Max, Maya or Blender, which are completely interactive. That requires reading and parsing some standard file format such as COLLADA. I don't want to squash your desire to learn by implementing things yourself, but you really should take a look at the PowerVR SDK, which provides a lot of the important parts for building game engines. The source code is provided and it's free.
Plotting packages offer a variety of methods for displaying data. Write an interactive plotting application for two-dimensional curves. Your application should allow the user to choose the display mode (line strip or polyline display of the data, bar chart or pie chart), colours, and line styles.
You should start with the GUI editing part, like this:
Does anyone know of a low level (no frameworks) example of a drag & drop, re-order-able list?
and change it to your primitives (more points per primitive instead of one ... handle each point as a (sub)object so you can change its position later).
Then just add tools like add object, delete object, ... For a hand-drawing tool, use piecewise cubic interpolation.
The grid can be done like this:
How to draw dynamic 2D grid that adjusts according to camera zoom: OpenGL
Mouse zooming/panning is also important:
Zooming graphics based on current mouse position
Putting all of the above together into a simple editor looks like this:
Using the GPU for curve rendering might give you a nice speed and functionality boost:
Is it possible to express "t" variable from Cubic Bezier Curve equation?
Mouse selection of objects might become a speed problem if your scene contains too many objects, so in such a case it's best to use an index buffer, with which you can mouse-select with pixel-perfect precision almost for free, in O(1):
OpenGL 3D-raypicking with high poly meshes
The example is for 3D; in 2D it is much simpler ...
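A minimal sketch of the readback side of that technique (the ID encoding is an assumption; beforehand you render each object into an off-screen buffer with its index written into the RGB channels as a flat color):

#include <GLES2/gl2.h>
#include <cstdint>

// Pixel-perfect picking: the scene was just rendered into an off-screen buffer with
// each object's index encoded as a color (id = 1 + r + g*256 + b*65536, 0 = background).
// Reading the single pixel under the mouse returns the picked object in O(1).
int pickObjectAt(int mouseX, int mouseY, int viewportHeight)
{
    std::uint8_t px[4] = {0, 0, 0, 0};
    glReadPixels(mouseX, viewportHeight - 1 - mouseY,      // GL origin is bottom-left
                 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, px);
    int id = px[0] | (px[1] << 8) | (px[2] << 16);
    return id - 1;                                         // -1 means nothing was hit
}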
Also do not forget to implement save/load functionality to some vector file format. I recommend SVG. It might be complicated to start with, but you can quickly check its contents in any SVG viewer or browser, or even in Notepad, as it's just a text file. If you use just the basic path elements and ignore the rest of the SVG features, you will see that parsing and creating SVG is not that hard. For example, see these:
Get Vertices/Edges From BMP or SVG (C#)
Discrete probability distribution plot with given values
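For example, dumping a single polyline as a basic SVG path takes only a few lines (the canvas size and styling below are arbitrary placeholders):

#include <cstdio>
#include <vector>

struct Point { double x, y; };

// Minimal SVG export: one polyline as a <path> element ("M x y L x y ...").
// All other SVG features are ignored.
void saveAsSvg(const char* filename, const std::vector<Point>& pts)
{
    if (pts.empty()) return;
    FILE* f = fopen(filename, "w");
    if (!f) return;
    fprintf(f, "<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"800\" height=\"600\">\n");
    fprintf(f, "  <path fill=\"none\" stroke=\"black\" d=\"M %g %g", pts[0].x, pts[0].y);
    for (size_t i = 1; i < pts.size(); i++)
        fprintf(f, " L %g %g", pts[i].x, pts[i].y);
    fprintf(f, "\"/>\n</svg>\n");
    fclose(f);
}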
For really big datasets you might want to use spatial subdivision techniques (a bounding volume/area hierarchy, or a quad tree) to ease up the operations ...
More in-depth implementation details about 2D vector graphics editors depend on the language, OS, graphics API and GUI API you are using and the task you are aiming for ...
If this question is off, please let me know as I don't want to clutter the platform with off-topic questions!
Anyway, I'm having a hard time finding information about what's actually going on when an image is rendered, something I started wondering about because of some code I've written.
Say I wanted to add the numbers 5 and 3. The CPU would write 5 to one register and 3 to another one. The ALU would take care of the calculation and output 8. That's fine, the CPU uses MOVE and ADD to produce a result.
What I can't find any information on, however, is what's going on when I want to draw a rectangle. There are importable frameworks for most programming languages that let you do this. In SpriteKit (Swift & ObjC), for example, you would write something like
let node = SKSpriteNode(color: .white, size: CGSize(width: 200, height: 300))
and add node to an SKScene (just a scene containing childNodes), and a white rectangle would "magically" get rendered. What I would like to know is what goes on under the hood. Why does this exact framework let you draw a rectangle? What is the assembly code (say, for an Intel Core M) that makes the GPU calculate what this rectangle will look like? And how does SpriteKit build on the basics of Swift/Objective-C to actually do this (and could I do it myself)?
Maybe a weird question, but I feel like I have to know (yes, sometimes I'm too curious). Thank you.
P.S. I would love a really detailed answer, not "the CPU 'tells' the GPU to draw a rectangle" - CPUs can't talk!
There are many ways to render a convex polygon. The most used in the past was the scan-line algorithm, where you simply rasterize all the lines of the circumference into left/right buffers and then render using horizontal lines, interpolating the other coordinates along the way (like z, r, g, b, tx, ty, nx, ny, nz, ...). This was suited to single-threaded CPU-based software rendering.
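A rough sketch of that scan-line idea for a flat-colored convex polygon (the per-scanline interpolation of z, color, texture coordinates etc. is left out, and putPixel() stands in for whatever framebuffer access you have):

#include <algorithm>
#include <climits>
#include <vector>

struct P { int x, y; };

// Rasterize every edge into per-scanline left/right (min/max x) buffers,
// then fill horizontal spans between them.
void fillConvexPolygon(const std::vector<P>& poly, int height,
                       void (*putPixel)(int x, int y))
{
    std::vector<int> xMin(height, INT_MAX), xMax(height, INT_MIN);

    for (size_t i = 0; i < poly.size(); i++)
    {
        P a = poly[i], b = poly[(i + 1) % poly.size()];
        if (a.y > b.y) std::swap(a, b);
        if (a.y == b.y)                                            // horizontal edge
        {
            if (a.y >= 0 && a.y < height)
            {
                xMin[a.y] = std::min({xMin[a.y], a.x, b.x});
                xMax[a.y] = std::max({xMax[a.y], a.x, b.x});
            }
            continue;
        }
        for (int y = std::max(a.y, 0); y <= std::min(b.y, height - 1); y++)
        {
            int x = a.x + (b.x - a.x) * (y - a.y) / (b.y - a.y);   // edge x at this scanline
            xMin[y] = std::min(xMin[y], x);
            xMax[y] = std::max(xMax[y], x);
        }
    }
    for (int y = 0; y < height; y++)
        for (int x = xMin[y]; x <= xMax[y]; x++)                   // empty on untouched lines
            putPixel(x, y);
}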
With parallelization (as on a GPU) a different approach became more popular. It renders only triangles (so you need to triangulate your polygons), and it works like this:
compute AABB
so simply the min and max of the x,y coordinates of the triangle vertices.
loop through AABB
this is done in parallel by the GPU interpolators. Each interpolated (looped) "pixel" is called a fragment (as it usually contains more than just a color).
for each fragment
compute the barycentric coordinates and from the result decide whether the fragment is inside (s+t<=1) or outside (s+t>1) the triangle. If inside, invoke the fragment shader.
All of this happens just before the fragment shader stage, and usually all of it (or most of it) is implemented in hardware, so there is no code for it.
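A software version of that inside/outside test for a single fragment would look like this (the GPU does the equivalent in fixed-function hardware):

struct V2 { float x, y; };

// Solve P = A + s*(B-A) + t*(C-A) for the barycentric coordinates (s,t);
// the fragment at P is inside the triangle when s >= 0, t >= 0 and s + t <= 1.
bool fragmentInsideTriangle(V2 p, V2 a, V2 b, V2 c)
{
    float d1x = b.x - a.x, d1y = b.y - a.y;    // B - A
    float d2x = c.x - a.x, d2y = c.y - a.y;    // C - A
    float dpx = p.x - a.x, dpy = p.y - a.y;    // P - A

    float det = d1x * d2y - d1y * d2x;         // 2 * signed area, 0 => degenerate triangle
    if (det == 0.0f) return false;

    float s = (dpx * d2y - dpy * d2x) / det;
    float t = (d1x * dpy - d1y * dpx) / det;
    return (s >= 0.0f) && (t >= 0.0f) && (s + t <= 1.0f);
}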
Nowadays GPU rendering is done by passing the geometry to the gfx driver itself. What the driver does under the hood is just guesswork for us, but most likely it too just passes the geometry and configuration settings to the right places on the GPU (memory, registers, ...).
What is the minimum configuration for the program I need to build 3D graphics from scratch? For example, I have only SFML for working with 2D graphics and I need to implement a Camera object that can move & rotate in space.
Where do I start, and how do I implement vector3d -> vector2d conversion functions and the other necessary things?
All I have for now is:
the angles Phi, Xi, epsilon1-3 and some object that I can draw on the screen with the following formula:
x = center.x + scale.x * dot(point[i], epsilon1)
y = center.y + scale.y * dot(point[i], epsilon2)
But this way I'm just transforming the "world" axes, not the object's points.
First you need to implement transform matrix and vector math:
Mathematically compute a simple graphics pipeline
Understanding 4x4 homogenous transform matrices
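As a bare-bones illustration of what those links cover, transforming a point by a 4x4 homogeneous matrix and projecting it to 2D boils down to this (row-major layout and a naive pinhole projection are assumed):

struct Vec3 { float x, y, z; };

// Apply a row-major 4x4 transform to a point (w assumed to be 1).
Vec3 transform(const float m[16], Vec3 p)
{
    Vec3 r;
    r.x = m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3];
    r.y = m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7];
    r.z = m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11];
    return r;
}

// Project a camera-space point to pixel coordinates.
// focal controls the field of view; (cx, cy) is the screen center.
bool project(Vec3 p, float focal, float cx, float cy, float& sx, float& sy)
{
    if (p.z <= 0.0f) return false;     // behind the camera; real code needs clipping
    sx = cx + focal * p.x / p.z;
    sy = cy - focal * p.y / p.z;       // flip y: screen y usually grows downward
    return true;
}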
The rest depends on the kind of rendering you want to achieve:
boundary polygonal mesh rendering
This kind of rendering is the native one for today's gfx cards. You need to implement buffers for:
depth (for filled polygons without z-sorting)
screen (to avoid flickering and also serves as Canvas)
shadow, stencil, aux (for advanced rendering techniques)
They usually have the same resolution as the target rendering area. You also need to implement rendering of the supported primitives, at least point, line and triangle. See:
Algorithm to fill triangle
On top of all this you can add textures, shaders and whatever else you want ...
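The buffers listed above are in the end just 2D arrays the rasterizer consults before writing; a depth-tested pixel write, for instance, reduces to something like this (the layout is an assumption):

#include <cstdint>
#include <vector>

// Screen + depth buffer pair: a pixel is written only if it is closer than what is
// already stored, which is what makes filled polygons without z-sorting work.
struct Buffers
{
    int w, h;
    std::vector<std::uint32_t> color;   // screen / canvas, 0xAARRGGBB
    std::vector<float>         depth;   // one depth value per pixel

    Buffers(int width, int height)
        : w(width), h(height),
          color(width * height, 0u),
          depth(width * height, 1e30f) {}    // "infinitely far" clear value

    void putPixel(int x, int y, float z, std::uint32_t rgba)
    {
        if (x < 0 || y < 0 || x >= w || y >= h) return;
        int i = y * w + x;
        if (z >= depth[i]) return;           // depth test failed
        depth[i] = z;
        color[i] = rgba;
    }
};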
(back)ray tracing
This kind of rendering is very different, and current gfx HW is not built for it. It involves implementing ray/primitive intersection computations, Snell's law and an analytical representation of meshes. This way you can also do multi-spectral rendering and more physically accurate effects/processes. See:
How can I render an 'atmosphere' over a rendering of the Earth in Three.js? (hybrid approach #1+#2)
Algorithm for 2D Raytracer
How to implement 2D raycasting light effect in GLSL
Multi-Band Image raster to RGB
The difference between a 2D and a 3D ray tracer is almost none; the only difference is how to compute the perpendicular vector ...
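The core building block in either case is the ray/primitive intersection; for triangle meshes a Möller–Trumbore style test is the usual choice, roughly like this:

#include <cmath>

struct V3 { float x, y, z; };
static V3    sub  (V3 a, V3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static V3    cross(V3 a, V3 b) { return { a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x }; }
static float dot  (V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and the ray parameter t when origin + t*dir hits triangle (v0,v1,v2)
// in front of the origin.
bool rayTriangle(V3 origin, V3 dir, V3 v0, V3 v1, V3 v2, float& t)
{
    const float EPS = 1e-7f;
    V3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    V3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;    // ray parallel to the triangle plane
    float inv = 1.0f / det;
    V3 s = sub(origin, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    V3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > EPS;                            // hit must lie in front of the origin
}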
There are also other rendering methods, like volume rendering and hybrid methods, but their implementation is usually task oriented and a generic description would most likely just mislead ... Here are some 3D ray tracers of mine:
back raytrace through 3D mesh
back raytrace through 3D volume
I want to import a set of 3D geometries into the current scene. The imported geometry contains tons of basic components, which may represent an
entire building. The product manager wants the entire building to be displayed
as a 3D miniature (colors and textures must correspond to the original building).
The problem: is there any algorithm which can handle this large amount of data at a reasonable time and memory cost?
//worst case: there may be a billion triangle surfaces in the imported data
And, by the way, I am considering another solution: using a type of texture mapping:
1 take enough snapshots of the imported objects with the software renderer.
2 apply the images to a surface.
3 use some shader tricks to perform effects like bump mapping; when the view position changes, the texture changes and makes the viewer feel as if he were looking at a 3D scene.
----my modeller and renderer are ACIS and HOOPS; any ideas?
An option is to generate side views of the building at a suitable resolution using the rendering engine, and map them as textures onto a parallelepiped.
The next level of refinement is to obtain a bump or elevation map that you can use for embossing. Not the easiest to do.
If the modeler allows it, you can slice the volume using a 2D grid of "voxels" (actually prisms). You can do that by repeatedly cutting the model in two with a plane. And in every prism, find the vertex closest to the observer. This will give you a 2D map of elevations, with the desired resolution.
Alternatively, intersect parallel "rays" (linear objects) with the solid and keep the first endpoint.
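Conceptually that is just a 2D loop that keeps the nearest hit per grid cell; a sketch, with intersectModel() standing in for whatever ray/section query your modeler (ACIS in your case) actually exposes:

#include <vector>

// Bake an elevation map: shoot one ray per grid cell, parallel to the view direction,
// and store the depth of the first hit (a large value marks cells that miss the model).
std::vector<float> bakeElevationMap(int gridW, int gridH,
                                    bool (*intersectModel)(int ix, int iy, float& depth))
{
    const float FAR_AWAY = 1e30f;
    std::vector<float> elevation(gridW * gridH, FAR_AWAY);
    for (int iy = 0; iy < gridH; iy++)
        for (int ix = 0; ix < gridW; ix++)
        {
            float d;
            if (intersectModel(ix, iy, d))     // nearest endpoint of the intersection
                elevation[iy * gridW + ix] = d;
        }
    return elevation;
}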
It can also be that your modeler includes a true voxel model, or that rendering can be done with a Z-buffer that you can access.
Is it possible to export or convert my 3D models into GLSL ES 2.0? Is there any converter or exporter tool/add-on for editor programs like Blender/3ds Max/Maya that creates GLSL ES 2.0 code?
I'd like to create my models conveniently in any of the above mentioned editors and then I'd like to export/convert them into GLSL ES 2.0.
I already have a template WebGL code that displays my shaders. I want to replace my fragment shader and vertex shader parts with the GLSL ES code created automatically by a converter or an exporter tool.
I'd like to do something like this (but for GLSL ES 2.0):
Blender to GLSL
You're comparing apples with cars here. OpenGL is a drawing API; GLSL is a programming language for implementing shader code.
3D models are neither of those. The sole question "how can I convert my 3D model to OpenGL?" makes no sense.
Is it possible?
No, because that's not the purpose of GLSL.
Choose a model file format (preferably one for which implementing a reading parser is straightforward), implement the parser, fill in appropriate data structures and feed those into the right parts of OpenGL, making the right calls to draw them.
OpenGL itself doesn't deal with models, scenes or even files. GLSL is not even a file format, it's a language.
I'd start with OBJ or STL files. They're reasonably easy to read and interpret and match very closely the primitive types OpenGL uses.
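To give an idea of the scale, a bare-bones OBJ reader that only handles "v" and triangular "f" lines fits in a few dozen lines; materials, normals, texture coordinates, "f 1/1/1" style faces and negative indices are deliberately ignored in this sketch:

#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// Collect "v x y z" vertices and triangular "f a b c" faces (OBJ indices are 1-based).
bool loadObj(const std::string& path,
             std::vector<Vec3>& vertices, std::vector<unsigned>& indices)
{
    std::ifstream in(path);
    if (!in) return false;
    std::string line;
    while (std::getline(in, line))
    {
        std::istringstream ls(line);
        std::string tag;
        ls >> tag;
        if (tag == "v")
        {
            Vec3 v;
            ls >> v.x >> v.y >> v.z;
            vertices.push_back(v);
        }
        else if (tag == "f")
        {
            unsigned a, b, c;
            if (ls >> a >> b >> c)
            {
                indices.push_back(a - 1);
                indices.push_back(b - 1);
                indices.push_back(c - 1);
            }
        }
    }
    return true;
}

The resulting vertex and index arrays map directly onto glBufferData plus glDrawElements with GL_TRIANGLES.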
Probably the hardest format to read is .blend files; effectively a .blend file is a dump of the Blender process memory image. It takes a fully featured Blender (or something very similar to it) to make sense of a .blend file.
Update due to comment:
Please, please carefully read what the exporter script you linked to does: it takes an object's material settings (not the model itself) and generates GLSL code that, when used in the right framework (i.e. appropriate uniform and attribute names, matrix setup, etc.), will result in shading operations that resemble those material settings as closely as possible. The script does not export a model!
You asked about exporting a 3D model. That would be the mesh of the model and its attributes to place it in the world. Materials are not what's stored in an OBJ or STL file. They're textures, and yes, shaders. But they're completely independent of the model data itself. It's perfectly possible to use the same material settings on multiple models, or to freely exchange a model's material (textures and shaders), as long as the model provides all the vertex attributes required to make the material work.
Update 2 due to comment:
Do you even understand what a shader does? If not, here's a short synopsis: You have vertex attribute data (in buffers). These indexed attributes are submitted to OpenGL. Using a call to glDrawElements or glDrawArrays the attributes are interpreted as primitives (points, lines or triangles (or quads on older OpenGL versions)). Each primitive is then subjected to a number of transformations.
Mandatory: The first step is the vertex shader, whose responsibility is to determine each vertex's final position in the viewport.
Optional: After vertex shading, the primitives formed by the vertices undergo tessellation shading. Tessellation is used to refine geometry, for example adding detail to terrain or making curved surfaces smoother.
Optional: Next comes geometry shading, which can replace a single vertex with a (small) number of vertices. A geometry shader may even change the primitive type, so a single point could be replaced with a triangle, for example (useful for rendering particle systems).
Mandatory: The last step is fragment shading of the primitive. After a primitive's position in the viewport has been determined, each of the pixels it covers is processed as one or more fragments. The fragment shader is a program that determines the final color and translucency in the target framebuffer.
Each shading step is controlled by a user-defined program. It is these programs, called shaders, that are written in GLSL. Not geometry, not models. Programs! And very simple programs at that. They don't produce geometry from nothing; they always process already existing geometry passed to OpenGL.
Shaders are not used for defining or storing models. They just modify them at rendering time.
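To make that concrete, here is about the smallest complete shader pair there is, embedded and compiled from the application side (GLSL ES 1.00 syntax; error checking omitted). Note that nothing in it describes a model; it only positions vertices that were already handed to OpenGL and colors the fragments they cover:

#include <GLES2/gl2.h>

static const char* kVertexSrc =
    "attribute vec3 aPosition;\n"
    "uniform mat4 uMvp;\n"
    "void main() { gl_Position = uMvp * vec4(aPosition, 1.0); }\n";

static const char* kFragmentSrc =
    "precision mediump float;\n"
    "uniform vec4 uColor;\n"
    "void main() { gl_FragColor = uColor; }\n";

GLuint buildProgram()
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &kVertexSrc, nullptr);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &kFragmentSrc, nullptr);
    glCompileShader(fs);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    return prog;
}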
Have a look at http://www.inka3d.com which converts your Maya shaders to GLSL. For the models do you need WebGL or OpenGL ES 2.0?