I have been tearing my hair out for a while over this. I need an OpenGL 3.2 Core (no deprecated stuff!) way to efficiently render many sprites, using batching (no instancing).
I've seen examples that do this with geometry alone, but mine also needs textures, and I don't know how to wire those in.
I need a well-done example of it working in action. Looking at how other libs like MonoGame do it isn't much help either, because all I'm interested in is the GL code, and it has to contain no deprecated stuff.
Basically I want to be able to efficiently render thousands+ of sprites, all having textures. The texture is just a spritesheet, so I just need to tell it to render a region of that spritesheet.
I'm disappointed in the amount of material available for the programmable pipeline, to the point where it seems like it'd be so much easier to just say screw it and use the fixed pipeline, even though I definitely don't want to do that.
So yeah, any full examples that do what I want? Or could somebody more knowledgeable write one up? :)
A lot of the examples are "oh, here's how you render 1 triangle". Well, that's great, except nobody needs to render only 1 triangle/quad, and the sprites need to be textured on top of that!
An example that uses VBOs/VAOs/EBOs, please.
ALSO: this means the code can't use glTexCoordPointer and that stuff, just raw VBOs/VAOs...
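To be concrete, here's the rough shape of what I mean, sketched in Python with PyOpenGL plus numpy (just my choice for the sketch; any language works). I haven't verified it end to end, the names are made up, and a complete, known-good version of exactly this is what I'm asking for:

```python
import ctypes
import numpy as np
from OpenGL.GL import *  # assumes a 3.2 core context is already current

VERT = """#version 150 core
in vec2 position;
in vec2 texcoord;
out vec2 v_uv;
uniform mat4 u_projection;
void main() {
    v_uv = texcoord;
    gl_Position = u_projection * vec4(position, 0.0, 1.0);
}"""

FRAG = """#version 150 core
in vec2 v_uv;
out vec4 frag_color;
uniform sampler2D u_sheet;
void main() { frag_color = texture(u_sheet, v_uv); }"""

def build_batch(sprites, sheet_w, sheet_h):
    """Four vertices per sprite, interleaved (x, y, u, v), six indices per quad.
    Each sprite is (x, y, w, h, sx, sy, sw, sh): a screen rect plus a
    spritesheet region in pixels; UVs are the region divided by sheet size."""
    verts = np.empty((len(sprites) * 4, 4), dtype=np.float32)
    idx = np.empty(len(sprites) * 6, dtype=np.uint32)  # uint32: >16k sprites ok
    for i, (x, y, w, h, sx, sy, sw, sh) in enumerate(sprites):
        u0, v0 = sx / sheet_w, sy / sheet_h
        u1, v1 = (sx + sw) / sheet_w, (sy + sh) / sheet_h
        verts[i*4:i*4+4] = [(x, y, u0, v0), (x+w, y, u1, v0),
                            (x+w, y+h, u1, v1), (x, y+h, u0, v1)]
        b = i * 4
        idx[i*6:i*6+6] = [b, b+1, b+2, b, b+2, b+3]
    return verts, idx

def make_vao(program):
    # One VAO remembering the layout; attribute offsets match (x, y, u, v).
    vao = glGenVertexArrays(1)
    vbo, ebo = glGenBuffers(2)
    glBindVertexArray(vao)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo)
    stride = 4 * 4  # four floats per vertex
    for name, offset in (("position", 0), ("texcoord", 8)):
        loc = glGetAttribLocation(program, name)
        glEnableVertexAttribArray(loc)
        glVertexAttribPointer(loc, 2, GL_FLOAT, GL_FALSE, stride,
                              ctypes.c_void_p(offset))
    return vao, vbo, ebo

def draw_batch(vao, vbo, ebo, verts, idx):
    # Re-upload the whole batch each frame, then one draw call for every
    # sprite; no instancing, no deprecated client-side arrays.
    glBindVertexArray(vao)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, verts.nbytes, verts, GL_DYNAMIC_DRAW)
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo)
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, idx.nbytes, idx, GL_DYNAMIC_DRAW)
    glDrawElements(GL_TRIANGLES, len(idx), GL_UNSIGNED_INT, None)
```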
I saw this question and decided to write a little program that does some "sprite" rendering using points and gl_PointSize. I'm not quite sure what you mean by "batching" as opposed to "instancing", but my program uses the glDrawArraysInstanced() call so that I can render multiple points without needing my VBO to be variable-sized. My code doesn't texture the sprites either, but that's easy enough to add: bind your texture, then upload the index of the texture unit it's bound to (not the texture object's name) to a GLSL sampler2D using glUniform1i.
Anyway, here's the program I wrote: http://litherum.blogspot.com/2013/02/sprites-in-opengl-programmable-pipeline.html Hope you can learn from it!
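For the texturing step the answer calls easy to add, here is a minimal sketch of what that looks like, again with made-up names and assuming PyOpenGL; call the helper with the shader program already bound via glUseProgram:

```python
from OpenGL.GL import *

POINT_FRAG = """#version 150 core
uniform sampler2D u_sprite;
out vec4 frag_color;
void main() {
    // gl_PointCoord supplies per-point UVs, so a point scaled with
    // gl_PointSize can be textured just like a quad.
    frag_color = texture(u_sprite, gl_PointCoord);
}"""

def bind_sprite_texture(program, texture, unit=0):
    glActiveTexture(GL_TEXTURE0 + unit)
    glBindTexture(GL_TEXTURE_2D, texture)
    # The sampler2D uniform receives the texture *unit* index,
    # not the texture object's name.
    glUniform1i(glGetUniformLocation(program, "u_sprite"), unit)
```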
I've been studying 3D graphics on my own for a while now and I want to get a greater understanding of just how everything works. What I would like to do is create a simple game without using DirectX or OpenGL. I understand most of the math, I believe, but the problem I'm running up against is that I do not know how to get control of the pixels being displayed in a window.
How do I specify what color I want each pixel in my window to be?
I understand I will probably run into issues with buffering, screen tearing, and probably terrible efficiency, but I want to create my own program so that I can see, from the lowest level a high-level language allows, how the rendering process works. I really have no idea where to start, though. I've figured out how to output BMPs, but I would like a running program spitting out 20+ frames per second. How do I accomplish this?
You could pick an environment that lets you fill an array with pixel values and display it as a bitmap. That way you come closest to poking RGB values straight into video memory. WPF, Silverlight, and HTML5/JavaScript can all do this. If you do not make it full-screen, these technologies should suffice for now.
In WPF and Silverlight, use the WriteableBitmap.
In HTML5, use the canvas element.
Then it is up to you to implement the logic to draw lines, circles, bezier curves, 3D projections.
This is a lot of fun and you will learn a lot.
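To make that concrete in one more environment, here is a minimal sketch of the same fill-an-array-and-blit idea in Python with pyglet (pyglet is my assumption, not something the answer mentioned; the pure-Python inner loop is slow, and numpy would be the usual fix):

```python
import pyglet

W, H = 320, 240
window = pyglet.window.Window(W, H)
pixels = bytearray(W * H * 3)  # raw RGB bytes: your "video memory"

def shade(dt):
    # Write whatever your own rasterizer produces; here, a moving gradient.
    shade.t = getattr(shade, "t", 0.0) + dt
    for y in range(H):
        row = y * W * 3
        for x in range(W):
            pixels[row + x*3] = (x + int(shade.t * 60)) % 256  # R channel
            pixels[row + x*3 + 1] = y % 256                    # G channel

@window.event
def on_draw():
    window.clear()
    # Wrap the raw bytes as an image and blit it to the window.
    pyglet.image.ImageData(W, H, "RGB", bytes(pixels)).blit(0, 0)

pyglet.clock.schedule_interval(shade, 1 / 30)  # aim for ~30 fps
pyglet.app.run()
```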
I'm reading between the lines that you're more interested in having full control over the rendering process from a low level, rather than having a specific interest in how to achieve that on one specific platform.
If that's the case then you will probably get good bang for your buck looking at a library like SDL, which provides you with a frame buffer you can render to directly but abstracts away a lot of the platform-specific issues. It has been around for quite a while and there are some good tutorials to give you an idea of whether it's the kind of thing you're looking for - see this tutorial and the subsequent one in the same series, which should be enough to get you up and running.
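As a taste of SDL's draw-directly-to-the-framebuffer model, here is a minimal sketch using pygame, a Python wrapper over SDL (the choice of pygame is mine, not the answer's):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))
clock = pygame.time.Clock()
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    # set_at() pokes one pixel; a real renderer would use pygame.surfarray
    # to treat the whole framebuffer as a numpy array instead.
    for x in range(320):
        screen.set_at((x, 120), (255, 0, 0))  # a hand-drawn scanline
    pygame.display.flip()
    clock.tick(60)
pygame.quit()
```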
You say you want to create some kind of rendering engine, meaning designing your own pipeline and matrix classes, which you then use to transform 3D coordinates into 2D screen points.
Once you have the 2D points you were looking for, you can, for instance on Windows, select a brush and draw your triangles, coloring them at the same time.
I'm not sure why you would need bitmaps, but if you want to practice texturing you can do that yourself as well, although on a weak computer it may cut your frames per second significantly.
If your aim is to understand how rendering works at the lowest level, this is without doubt good practice.
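A one-function sketch of the core of that pipeline, the perspective projection that turns a 3D coordinate into a 2D screen point (a real engine wraps this in matrix classes and adds rotation, translation, and clipping):

```python
def project(point, focal_length, screen_w, screen_h):
    """Project a camera-space 3D point (camera at origin, looking down -z)
    onto a screen_w x screen_h pixel grid."""
    x, y, z = point
    # Perspective divide: things twice as far away appear half as big.
    sx = (x * focal_length) / -z
    sy = (y * focal_length) / -z
    # Map from camera space to pixel coordinates (origin at screen center,
    # pixel y axis pointing down).
    return screen_w / 2 + sx, screen_h / 2 - sy

print(project((1.0, 1.0, -5.0), 300.0, 640, 480))  # -> (380.0, 180.0)
```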
Can someone explain in simple terms what an EGL Image is, and give an example of when I would want to use one? I know it can significantly increase performance, but I'm having trouble finding documentation. Before I try and really figure it out, I want to make sure it's even worth learning.
Thanks.
So a year later, I have become more of an expert in Android graphics and actually wrote a white paper about EGL Images. If my company lets me publish the paper externally, I'll post it here. For the time being, here is a short answer.
An EGL Image is simply a texture whose content can be updated without having to re-upload it to VRAM (meaning no call to glTexImage2D). One of the only drawbacks, besides increased code complexity, is that the application developer has to handle synchronization themselves. In apps that I've written, I had to implement my own "internal" swapchain of EGL Images and manage all the locking primitives myself. Thus, a call to eglSwapBuffers swaps the front and back framebuffers as usual, but in a separate thread there are two EGL Images swapping front-to-back as new content becomes available.
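The EGL calls themselves are C-side and can't be shown compactly here, but the locking/swap pattern described above can be, sketched in Python with plain placeholder objects standing in for the two EGL Images (all names here are invented):

```python
import threading

class TwoImageSwapchain:
    def __init__(self, front, back):
        self.front, self.back = front, back  # stand-ins for two EGL Images
        self.lock = threading.Lock()
        self.back_ready = False

    def publish(self):
        # Producer thread: finished writing new content into 'back'.
        with self.lock:
            self.back_ready = True

    def acquire_front(self):
        # Consumer (render) thread: swap in the newest image, if any,
        # then sample from 'front' knowing the producer won't touch it.
        with self.lock:
            if self.back_ready:
                self.front, self.back = self.back, self.front
                self.back_ready = False
            return self.front
```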
The Khronos documentation is at:
http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_image.txt
http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_image_base.txt
On the other hand, given how opaque most Khronos documentation is, this may not help much. I haven't been able to figure them out myself.
I'd like to implement an old-film effect on pictures. Does anyone know a library, or even the rare maths involved? I'd like it to cope with red shift for over-exposure and the rest. Even if you don't know the maths or a library, a pointer to any technical doc would be appreciated.
Clarification: I need to write these routines for a project of my own. I'd like to know what kind of processing has to be done, and how. The environment and system don't matter; I just need some hints on how to process the RGB data.
You mention Magic Bullet from Red Giant Software in your comments. There's an impressive amount of image processing know-how behind the development of Magic Bullet. You'd probably have an easier time implementing a host interface for After Effects or Final Cut Pro plug-ins and using Magic Bullet.
If you want to see some source code in action, examine the open source projects that do image processing like GIMP, CinePaint, FreeFrame, etc.
You could try the AS3 BitmapData noise() function: http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/flash/display/BitmapData.html#noise%28%29
If you can find a plugin for Paint.NET, I'm sure you could just use that DLL in your program. Is this the sort of thing you're looking for?
Rare maths?
Create several different transparent PNGs with scratch and dust marks on them. Take a pic, adjust the hue, saturation and brightness (algorithms for this aren't that complex) to fade out the pic, then overlay one or more of the scratchy PNGs. The more scratch/dust PNGs you have, the more random the effect you can create.
Not much math here IMHO.
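For what little math there is, here is a sketch of that recipe using Pillow (PIL); the file names are hypothetical:

```python
import random
from PIL import Image, ImageEnhance

def old_film(photo_path, scratch_paths):
    img = Image.open(photo_path).convert("RGB")
    # Fade the picture: drop saturation and a little contrast.
    img = ImageEnhance.Color(img).enhance(0.4)
    img = ImageEnhance.Contrast(img).enhance(0.85)
    # Red shift for the over-exposed look: boost R, cut B, per channel.
    r, g, b = img.split()
    r = r.point(lambda v: min(255, int(v * 1.15)))
    b = b.point(lambda v: int(v * 0.85))
    img = Image.merge("RGB", (r, g, b)).convert("RGBA")
    # Overlay a random transparent scratch/dust layer for variety.
    scratches = Image.open(random.choice(scratch_paths)).convert("RGBA")
    return Image.alpha_composite(img, scratches.resize(img.size))

old_film("photo.jpg", ["scratches1.png", "dust2.png"]).save("aged.png")
```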
Hey guys, I would like to develop a light/laser show editor and simulator, and for this, of course, I am going to have to learn some graphics programming. I am thinking about using C# and XNA.
I was just wondering what aspects of graphics programming I should research or focus on given the project I am working on. I am new to graphics programming, so I don't know much about it, but, for example, I imagine one thing I might look into would (possibly?) be volumetric lighting.
For example, what would be a practical way to go about rendering a 'laser' of varied width/color? I read somewhere to just draw a cylinder and apply a shader to it; I would like to confirm that this is the way to go.
Given that this seems like a big project, I was thinking about starting off by creating light sources and giving them properties so that I can easily manipulate them. I have (mis)read that only a certain number of lights can be rendered at any given time, I believe eight. Does this only apply to ambient lights? Given this possible limitation, and the fact that most of the lights I will use will be directional (such as headlights or lasers), what would be a different way to render these? Is that what volumetric lighting is for?
I'd just like to get some things clear before I dive into it. Since I'm new to this I probably didn't make the best use of words, so if something doesn't make sense please let me know. Thanks and sorry for my ignorance.
The answer to this depends on the level of sophistication you need in your display simulation. Computer graphics is ultimately a simulation of the transport of light; that simulation can be as sophisticated as calculating the fraction of laser light deflected by particles in the atmosphere toward the viewer's eyepoint, or as simple as drawing a line. Try the cylinder effect and see if it works for your project. If you need something more sophisticated, look into shader programming (using Nvidia Cg, for example) and volumetric shading, as you mentioned; post-processing glow effects may also be useful. For OpenGL's fixed-function pipeline there is a limit of eight light sources in a scene, but you could conceivably work around this limit by doing your own shading logic.
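To make "doing your own shading logic" concrete: here is a sketch of a fragment shader that loops over however many lights you upload, written in GLSL rather than the Cg the answer mentions, with made-up uniform names and a deliberately crude falloff (embedded as a Python string, matching the other sketches in this thread):

```python
# MAX_LIGHTS here is your own cap, not anything imposed by the GPU's
# fixed-function light limit.
LASER_LIGHT_FRAG = """#version 150 core
#define MAX_LIGHTS 64
uniform int u_light_count;
uniform vec3 u_light_pos[MAX_LIGHTS];
uniform vec3 u_light_color[MAX_LIGHTS];
in vec3 v_world_pos;
out vec4 frag_color;
void main() {
    vec3 total = vec3(0.0);
    for (int i = 0; i < u_light_count; ++i) {
        float d = distance(u_light_pos[i], v_world_pos);
        total += u_light_color[i] / (1.0 + d * d);  // simple falloff
    }
    frag_color = vec4(total, 1.0);
}"""
```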
Well, if it's just for light show simulations, I'd imagine you're going to need a lot of custom lighting effects - so regardless of whether you decide to use XNA or straight DirectX, your best bet would be to start by learning shader languages and how to program various lighting effects with them. Once you can reproduce the type of laser lighting you want, you can experiment with the polygons you want to use to represent the lasers. (I've used the cylinder method in some of my work for personal purposes, but I'm not sure how well straight cylinders will fit your purpose.)
Although it's faster, I think it's best not to use vanilla hardware lighting, because of its limitations. Pixel shaders can help with your task. You may also want to choose OpenGL because of its portability and its clarity in rendering methods. I worked on Direct3D for several years before switching to OpenGL; OpenGL functions and states are easier to learn, and rendering methods (like multi-pass rendering) are a lot clearer. If you like to code in C# (which I don't recommend for these tasks), you can use the CsGL library to access OpenGL functions.
I would like to create a cross-platform drawing program. The one requirement for my app is that I have pixel-level precision over the canvas. For instance, I want to write my own line-drawing algorithm rather than rely on someone else's. I do not want any form of anti-aliasing (again, pixel-level control is required). I would like the user's interactions on the screen to be quick and responsive (pending my ability to write fast algorithms).
Ideally, I would like to write this in Python, or perhaps Java as a second choice. The ability to easily make the final app cross-platform is a must. I will submit to different APIs on different OSes if necessary, as long as I can write an abstraction layer around them. Any ideas?
addendum: I need the ability to draw on-screen. Drawing out to a file I've got figured out.
I just this week put together some slides and demo code for doing 2d graphics using OpenGL from python using the library pyglet. Here's a representative post: Pyglet week 2, better vertex throughput (or 3D stuff using the same basic ideas)
It is very fast (relatively speaking, for Python); I have managed to get around 1,000 independently positioned and oriented objects moving around the screen, each with about 50 vertices.
It is very portable; all the code I have written in this environment works on Windows, Linux, and Mac (and even obscure environments like PyPy) without me ever having to think about it.
Some of these posts are very old, with broken links between them. You should be able to find all the relevant posts using the 'graphics' tag.
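For flavor, roughly how that kind of batched geometry looks in pyglet's 1.x-style API (newer pyglet versions replaced batch.add(); the sizes and shapes here are invented):

```python
import random
import pyglet
from pyglet import gl

window = pyglet.window.Window(800, 600)
batch = pyglet.graphics.Batch()

# 1,000 small quads, all submitted to the GPU together via the batch.
for _ in range(1000):
    x, y = random.uniform(0, 800), random.uniform(0, 600)
    batch.add(4, gl.GL_QUADS, None,
              ('v2f', (x, y, x + 5, y, x + 5, y + 5, x, y + 5)),
              ('c3B', (255, 255, 255) * 4))

@window.event
def on_draw():
    window.clear()
    batch.draw()

pyglet.app.run()
```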
The Pyglet library for Python might suit your needs. It lets you use OpenGL, a cross-platform graphics API. You can disable anti-aliasing and capture regions of the screen to a buffer or a file. In addition, you can use its event handling, resource loading, and image manipulation systems. You can probably also tie it into PIL (Python Imaging Library), and definitely into Cairo, a popular cross-platform vector graphics library.
I mention Pyglet instead of pure PyOpenGL because Pyglet handles a lot of ugly OpenGL stuff transparently with no effort on your part.
A friend and I are currently working on a drawing program using Pyglet. There are a few quirks - for example, OpenGL is always double-buffered on OS X, so we have to draw everything twice, once for the current frame and again for the other frame, since they are flipped whenever the display refreshes. You can look at our current progress in this Subversion repository. (Splatterboard.py in trunk is the file you'll want to run.) If you're not up on using svn, I would be happy to email you a .zip of the latest source. Feel free to steal code if you look into it.
If language choice is open, a Flash file created with Haxe might have a place. Haxe is free and a full, dynamic programming language. Then there's the related Neko, a virtual machine (like Java's, Ruby's, or Parrot) that runs on Mac, Windows and Linux. Being in some ways a new, improved form of Flash, it can naturally draw stuff. http://haxe.org/
Qt's canvas and QPainter are very good for this job if you'd like to use C++, and they are cross-platform.
There is a Python binding for Qt, but I've never used it.
As for Java: with SWT, pixel-level manipulation of a canvas is somewhat difficult and slow, so I would not recommend it. On the other hand, Swing's Canvas is pretty good and responsive. I've never used the AWT option, but you probably don't want to go there.
I would recommend wxPython.
It's beautifully cross-platform, and you can get per-pixel control; if you change your mind about that, you can use it with libraries such as pyglet or AGG.
You can find some useful examples for just what you are trying to do in the docs and demos download.