I have a new project but I'm really not sure where to start, other than I think GNAT Ada would be good. I taught myself Ada three years ago and I have managed some static graphics with GNAT but this is different.
Please bear with me, all I need is a pointer or two towards where I might start, I'm not asking for a solution. My history is in back-end languages that are now mostly obsolete, so graphics is still a bit of a challenge.
So, the project:
With a static background image (a photograph) - and a moveable line with an adjustable cursor somewhere between the ends of the line. I need to rotate the line and adjust the length, as well as to move it around the screen and slide the cursor along the line; I have no problem calculating the positions of each element of the line. Once in place, I need to report on the position of the cursor relative to the overall length of the line. I can probably handle the reporting with what I already know but I have no clue as to how to create a graphic that I can slide around over another image. In the past I have failed to detect mouse events in GNAT Ada and I am sure I will need to get on top of that - in fact if I could, I would probably manage to control the line but doing it over an existing image is beyond me.
If I am wrong to choose GNAT Ada for this, please suggest an alternative.
I have looked on Stack Overflow for anything similar but have found answers only for BlackBerry and Java, neither of which seems relevant.
For background, this will be a useful means of measuring relative lengths of the features of insect bodies from photographs, hopefully to set up some definitive identification guides for closely-related species.
With a static background image (a photograph)
So first you need a window to put your interface in. You can get this from any GUI framework (link as given by trashgod in the comments).
and a moveable line with an adjustable cursor somewhere between the ends of the line. I need to rotate the line and adjust the length, as well as to move it around the screen and slide the cursor along the line; I have no problem calculating the positions of each element of the line.
These are affine transformations. They are commonly employed in low-level graphics rendering. As Zerte suggested, you could use OpenGL; however, modern OpenGL has a steep learning curve for beginners.
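To make the math concrete, here is a small hedged sketch in Python (not Ada, but the formulas carry over directly) of rotating, scaling and translating a line's endpoints about a pivot; the function name and numbers are purely illustrative.

    import math

    def transform_point(p, angle, scale, pivot, offset):
        """Rotate p about pivot by angle (radians), scale about the same
        pivot, then translate by offset - one combined affine transform."""
        x, y = p[0] - pivot[0], p[1] - pivot[1]
        cos_a, sin_a = math.cos(angle), math.sin(angle)
        xr = scale * (x * cos_a - y * sin_a)
        yr = scale * (x * sin_a + y * cos_a)
        return (xr + pivot[0] + offset[0], yr + pivot[1] + offset[1])

    # A line from (0, 0) to (100, 0), rotated 30 degrees about its midpoint,
    # stretched by 1.5 and nudged 20 pixels right and 10 pixels down.
    p0, p1 = (0.0, 0.0), (100.0, 0.0)
    mid = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
    angle = math.radians(30)
    new_p0 = transform_point(p0, angle, 1.5, mid, (20, 10))
    new_p1 = transform_point(p1, angle, 1.5, mid, (20, 10))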
GtkAda includes a binding to the Cairo graphics library, which supports such transformations, so you can create a window with GtkAda, give it a Cairo surface, and then render your image and line on it. Cairo does have a learning curve, and I have never used the Ada binding, so I cannot really give an opinion about how complex this will be.
Another library that fully supports what you want to do is SDL, which has Ada bindings here. The difference from GtkAda is that SDL is a pure graphics drawing library, so you need to "draw" any interactive controls yourself. On the other hand, setting up a window via SDL and drawing things will be somewhat simpler than doing it via GtkAda and Cairo.
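For a feel of what the SDL route involves, here is a minimal sketch using pygame, SDL's Python binding, rather than the Ada bindings (the calls have direct SDL equivalents); it loads a background photograph and redraws a line over it every frame. The filename is a placeholder.

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((800, 600))
    background = pygame.image.load("photo.png").convert()  # placeholder filename

    line_start, line_end = (100, 100), (400, 300)

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        screen.blit(background, (0, 0))                                 # repaint the photograph
        pygame.draw.line(screen, (255, 0, 0), line_start, line_end, 2)  # overlay the line
        pygame.display.flip()                                           # present the finished frame

    pygame.quit()

Because the whole frame is redrawn from the background each time, moving or rotating the line is just a matter of changing its endpoints before the next flip.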
SFML, which has also been mentioned in the comments, is on the same level as SDL. I do not know it well enough to give a more informed opinion, but what I said about SDL will most probably also apply to SFML.
In the past I have failed to detect mouse events in GNAT Ada and I am sure I will need to get on top of that - in fact if I could, I would probably manage to control the line but doing it over an existing image is beyond me.
HID event processing is handled by whatever GUI library you use. If you use SDL, you'll get mouse events from SDL; if you use GTK, you'll get mouse events from GTK, and so on.
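As an illustration of what that looks like in practice, here is a hedged pygame (SDL) event loop in Python; the same poll-and-branch idea applies to whichever library you end up choosing.

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((800, 600))

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.MOUSEBUTTONDOWN:
                print("button", event.button, "pressed at", event.pos)
            elif event.type == pygame.MOUSEMOTION and event.buttons[0]:
                print("dragging at", event.pos)  # left button held down while moving

    pygame.quit()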
Related
I have been working through the book The Haskell School of Expression by Paul Hudak and using its associated GTK-based graphics library Graphics.SOE.Gtk (link to documentation) for small 2D drawing experiments.
This library is very basic, however, and only really has the ability to draw shapes. At the moment I am writing some programs which require particular GUI widgets such as buttons and text boxes. My question is: is it possible to use the drawing capabilities of the SOE library alongside the GUI widgets found in gtk2hs? E.g. be able to write a program where I can click a button which causes the program to draw a triangle in another container in the same window.
I have searched online for a way to do this, but most tutorials suggest using Cairo to do any graphics drawing with GTK; the SOE graphics API has the appearance of being a relatively self-contained thing.
No, there's not a really meaningful way for soegtk and regular gtk to interact. The reason is that soegtk keeps all its data types abstract; this is good practice from a "makes it easy for the implementor to change the implementation without changing the interface" point of view, but it can be a bit limiting from a "I'm just a user that wants to munge things in ways the interface doesn't promise to allow" point of view.
You could:
make a copy of the text of the single module in the soegtk package, and adjust the export line to export more things and happily break any abstraction boundaries you dislike
interact non-meaningfully; e.g. have your gtk button open an soegtk window with the graphics of interest
learn a different drawing library, say, cairo or diagrams
I've been studying 3D graphics on my own for a while now and I want to get a greater understanding of just how everything works. What I would like to do is to create a simple game without using DirectX or OpenGL. I understand most of the math I believe, but the problem I am running up against is I do not know how to get control of the pixels being displayed in a window.
How do I specify what color I want each pixel in my window to be?
I understand I will probably run into issues with buffers and image shearing and probably terrible efficiency problems, but I want to create my own program so that I could see from the very lowest level, of the high level language, how the rendering process works. I really have no idea where to start though. I've figured out how to output BMPs, but I would like to have a running program spitting out 20+ frames per second. How do I accomplish this?
You could pick an environment that allows you to fill an array with values for pixels and display it as a bitmap. This way you come closest to poking RGB values into video memory. WPF, Silverlight and HTML5/JavaScript can do this. If you do not make it full screen, these technologies should suffice for now.
In WPF and Silverlight, use the WriteableBitmap.
In HTML5, use the canvas element.
Then it is up to you to implement the logic to draw lines, circles, bezier curves, 3D projections.
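The same pixel-array idea can be sketched in Python with pygame and NumPy (an assumption on my part - the original suggestions were WPF/Silverlight/HTML5, but the principle is identical): fill an array with RGB values and blit it to the window.

    import numpy as np
    import pygame

    pygame.init()
    width, height = 320, 240
    screen = pygame.display.set_mode((width, height))

    # A (width, height, 3) array of RGB values - here a simple horizontal gradient.
    pixels = np.zeros((width, height, 3), dtype=np.uint8)
    pixels[:, :, 0] = np.linspace(0, 255, width, dtype=np.uint8)[:, None]

    pygame.surfarray.blit_array(screen, pixels)  # copy the array onto the window
    pygame.display.flip()

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        pygame.time.wait(20)  # don't burn the CPU while idle
    pygame.quit()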
This is a lot of fun and you will learn a lot.
I'm reading between the lines that you're more interested in having full control over the rendering process from a low level, rather than having a specific interest in how to achieve that on one specific platform.
If that's the case then you will probably get a good bang for your buck looking at a library like SDL which provides you with a frame buffer that you can render to directly but abstracts away a lot of the platform specifics issues. It has been around for quite a while and there are some good tutorials to give you an idea of whether it's the kind of thing you're looking for - see this tutorial and the subsequent one in the same series, which should be enough to get you up and running.
You say you want to create some kind of rendering engine, meaning designing your own pipeline and matrix classes, which you then use to transform 3D coordinates into 2D points.
Once you have the 2D points you are looking for, you can draw them directly: on Windows, for instance, you can select a brush and draw your triangles, colouring them at the same time.
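As a rough sketch of the 3D-to-2D step (in Python, with an invented focal length and screen centre purely for illustration), a simple pinhole projection looks like this:

    def project(point3d, focal_length=256.0, center=(320, 240)):
        """Project a 3D point onto 2D screen coordinates with a simple
        pinhole/perspective projection (camera at the origin looking down +z)."""
        x, y, z = point3d
        if z <= 0:
            raise ValueError("point is behind the camera")
        screen_x = center[0] + focal_length * x / z
        screen_y = center[1] - focal_length * y / z  # flip y: screen y grows downward
        return (screen_x, screen_y)

    # A triangle in 3D space becomes three 2D points you can hand to a drawing call.
    triangle = [(-1.0, 0.0, 5.0), (1.0, 0.0, 5.0), (0.0, 1.0, 4.0)]
    points2d = [project(v) for v in triangle]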
I do not know why you would need bitmaps, but if you want to practice texturing, say, you can also do that yourself, although of course on a weak computer this may cut your frames per second significantly.
If your aim is to understand how rendering works at the lowest level, this is without doubt good practice.
I've been playing with pygame (on Debian/Lenny).
It seems to work nicely, except for annoying tearing of blits (fullscreen or windowed mode).
I'm using the default SDL X11 driver. Googling suggests that it's a known issue with SDL that X11 provides no vsync facility (even with a display created with FULLSCREEN|DOUBLEBUF|HWSURFACE flags), and I should use the "dga" driver instead.
However, running
SDL_VIDEODRIVER=dga ./mygame.py
throws during pygame initialisation with
pygame.error: No available video device
(despite xdpyinfo showing an XFree86-DGA extension present).
So: what's the trick to getting tear-free vsynced flips? Either by getting this dga thing working or some other mechanism?
The best way to keep tearing to a minimum is to keep your frame rate as close to the screen's frequency as possible. The SDL library doesn't have a vsync unless you're running OpenGL through it, so the only way is to approximate the frame rate yourself.
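In pygame, for example, frame-rate approximation is usually done with a Clock; a minimal sketch (the 60 is an assumed target matching a typical screen refresh):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        # ... draw the frame here ...
        pygame.display.flip()
        clock.tick(60)  # sleep so the loop runs at roughly 60 frames per second
    pygame.quit()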
The SDL hardware double buffer isn't guaranteed, although it's nice when it works. I've seldom seen it in action.
In my experience with SDL you have to use OpenGL to completely eliminate tearing. It's a bit of an adjustment, but drawing simple 2D textures isn't all that complicated, and you get a few other bonuses along the way, like rotation, scaling, blending and so on.
However, if you still want to use software rendering, I'd recommend dirty rectangle updating. It's also a bit difficult to get used to, but it saves loads of processing, which may make it easier to keep the updates up to pace, and it avoids the whole screen tearing (unless you're scrolling the whole play area or something). It also keeps the time spent drawing to the buffer to a minimum, which may prevent the blit from taking place while the screen is updating - the cause of the tearing.
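A hedged sketch of what dirty-rectangle updating looks like in pygame (the moving rectangle stands in for whatever sprite you are animating):

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    background = pygame.Surface((640, 480))
    background.fill((0, 0, 0))

    ball = pygame.Rect(10, 10, 32, 32)

    # One frame: erase the old position, draw the new one, then push only
    # the rectangles that actually changed instead of the whole screen.
    dirty = []
    dirty.append(screen.blit(background, ball, ball))  # restore background under the old ball
    ball.move_ip(5, 3)                                 # move the ball
    dirty.append(pygame.draw.rect(screen, (255, 255, 0), ball))
    pygame.display.update(dirty)                       # update only the dirty rectangles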
Well my eventual solution was to switch to Pyglet, which seems to support OpenGL much better than Pygame, and doesn't have any flicker problems.
Use the SCALED flag and vsync=True when calling set_mode and you should be all set (at least on any systems which actually support this; in some scenarios SDL still can't give you a VSync-capable surface but they are increasingly rare).
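In code that is just (a sketch for pygame 2; vsync remains a request the driver may decline):

    import pygame

    pygame.init()
    # vsync is a hint, not a guarantee - SDL falls back silently when the
    # driver cannot provide a vsynced surface.
    screen = pygame.display.set_mode((640, 480), pygame.SCALED, vsync=1)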
I'm interested in learning about the different layers of abstraction available for making graphical applications.
I see a lot of terms thrown around: At the highest level of abstraction, I hear about things like C#, .NET, pyglet and pygame. Further down, I hear about DirectX and OpenGL. Then there's DirectDraw, SDL, the Win32 API, and still other multi-platform libraries like WxWidgets.
How can I get a good sense of where one of these layers ends and where the next one begins? What is the "lowest possible level" way of creating a window in Windows, in C? What about C++? (A code sample would be divine.) What about in X11? Are the Windows implementations of OpenGL and DirectX built on top of the Win32 API? Where can I begin to learn about these things?
There's another question on SO where Programming Windows is suggested. What about for Linux? Is there an equivalent such book?
I'm aware that this is very low-level, and that there are many friendlier tools available, but I would like to at least learn the basics of what's going on beneath the surface. As much as I'd like to begin slinging windows and vectors right off the bat, starting with something like pygame is too high-level for me; I really need to make the full conceptual circuit of how you draw stuff on a computer.
I will certainly appreciate suggestions for books and resources, but I think it would be stupendously cool if the answers to this question filled up with lots of different ways to get to "Hello world" with different approaches to graphics programming. C? C++? Using OpenGL? Using DirectX? On Windows XP? On Ubuntu? Maybe I ask for too much.
The lowest level would be the graphics card's video RAM. When the computer first starts, the graphics card is typically set to the 80x25 character legacy mode.
You can write text with a BIOS-provided interrupt at this point. You can also change the foreground and background color from a palette of 16 distinctive colors. You can access ports/registers to change the display mode. At this point you could, say, load a different font into the display memory and still use the 80x25 mode (OS installations usually do this), or you can go ahead and enable VGA/SVGA. It's quite complicated; that's what drivers are for.
Once the card's in the 'higher' mode you'd change what's on screen by accessing the memory mapped to the video card. It's stored horizontally pixel by pixel with some 'dirty regions' of pixels that aren't mapped to screen at the end of each line which you have to compensate for. But yeah, you could copy the pixels of an image in memory directly to the screen.
For things like DirectX and OpenGL, rather than writing directly to the screen, commands are sent to the graphics card and it updates its screen automatically. Commands like "Hey you, draw this image I've loaded into the VRAM here, here and here" or "Draw these triangles with this transformation matrix..." take a fraction of the time compared to going pixel by pixel. The CPU will thank you.
DirectX/OpenGL is a programmer friendly library for sending those commands to the card with all the supporting functions to help you get it done smoothly. A more direct approach would only be unproductive.
SDL is an abstraction layer, so without bothering to read up on it I'd guess it has different ways of working on each system. On one it might use semi-direct screen writing, on another Direct3D, etc. Whatever's fastest, as long as the code stays cross-platform.
Then there are GDI/GDI+ and the X Window System, which are designed specifically to draw windows. Originally they drew using the pixel-by-pixel method (which was good enough because they only had to redraw when a button was pressed or a window moved, etc.), but now they use Direct3D/OpenGL for accelerated drawing (and special effects). Optimizations depend on the versions and implementations of these libraries.
So if you want the most power and speed, DirectX/OpenGL is the way to go. SDL is certainly useful for getting the most from a cross-platform environment and integrates with OpenGL anyway. The windowing system comes last, but don't underestimate it - especially with the stuff Microsoft has been coming up with lately.
Michael Abrash's Graphics Programming 'Black Book' is a great place to start. Plus you can download it for free!
If you really want to start at the bottom then drawing a line is the most basic operation. Computer graphics is simply about filling in pixels on a grid (screen), so you need to work out which pixels to fill in to get a line that goes from (x0,y0) to (x1,y1).
Check out Bresenham's algorithm to get a feel for what is involved.
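For reference, a straightforward Python implementation of the all-quadrant form of Bresenham's algorithm (integer arithmetic only) might look like this:

    def bresenham_line(x0, y0, x1, y1):
        """Yield the grid points of a line from (x0, y0) to (x1, y1)."""
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy
        while True:
            yield (x0, y0)
            if x0 == x1 and y0 == y1:
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy

    print(list(bresenham_line(0, 0, 6, 3)))
    # [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2), (5, 3), (6, 3)]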
Being a good graphics and image-processing programmer doesn't require this low-level knowledge, but I do hate being clueless about the insides of what I'm using. I see two ways to chase this: from the high level down, or from the bottom level up.
Top-down is a matter of following how the action traces from a high-level graphics operation, such as drawing a circle, down to the hardware. Get to know OpenGL well. Then the source to Mesa (free!) provides a peek at how OpenGL can be implemented in software. The source to Xorg would be next, first to see how the action goes from API calls through the client side to the X server. Finally you dive into a device driver that interfaces with the hardware.
Bottom up: build your own graphics hardware. Think of ways it could connect to a computer - how to handle massive numbers of pixels through a few byte-size registers, how DMA would work. Write a device driver, and try designing a graphics library that might be useful for app programmers.
The bottom-up way is how I learned, years ago when it was a possibility with the slow 8-bit microprocessors. The direct experience with circuitry and hardware-software interfacing gave me a good appreciation of the difficult design decisions - e.g. whether to paint rectangles using clever hardware, in the device driver, or at a higher level. None of this is of practical everyday value, but it provided a foundation of knowledge to understand newer technology.
See the Open GPU Documentation section:
http://developer.amd.com/documentation/guides/Pages/default.aspx
HTH
On MS Windows it is easy: you use what the API provides, whether it is the standard Windows programming API or the DirectX family of APIs. That's what you use, and they are well documented.
In an X Window environment you use whatever X11 libraries are provided. If you want to understand the principles behind windowing on X, I suggest that you do this - never mind that many others tell you not to; it will really help you to understand graphics and windowing under X. You can read the documentation on X programming (google for it). (After this exercise you will appreciate the higher-level libraries!)
Apart from the above, the absolute lowest level you can go to (excluding chip level) is calling the interrupts that switch to the various graphics modes available - there are several - and then writing to the screen buffers, but for this you would have to use assembler; anything else would be too slow. Going this way will not be portable at all.
Another post mentions Abrash's Black Book - an excellent resource.
Edit: As for books on programming Linux: it is a community thing; there are many how-tos around. Also find a forum, join it, and as long as you act civilized you will get all the help you could ever need.
Right off the bat, I'd say "you're asking too much." From what little experience I've had, I would recommend reading some tutorials or getting a book on either DirectX or OpenGL to start out. To go any lower than that would be pretty complex. Most of the books I've seen on OGL or DX have pretty good introductions that explain what the functions/classes do.
Once you get the hang of one of these, you can always dig into the libraries to see what exactly they're doing to go lower.
Or, if you really, absolutely MUST learn the LOWEST level... read the book in the above post.
libX11 is the lowest-level library for X11. I believe OpenGL/DirectX talk to the driver/hardware directly (or emulate unsupported operations), so they would be the lowest-level libraries.
If you want to start with very low level programming, look for x86 assembly code for VGA and fire up a copy of dosbox or similar.
The Vulkan API gives you very low-level access to most, if not all, features of the GPU, both computational and graphical; it works on AMD and Nvidia GPUs (though not all of them).
You can also use CUDA, but it only works on Nvidia GPUs and gives access to the computational features only, with no video output.
I would like to create a cross-platform drawing program. The one requirement for writing my app is that I have pixel-level precision over the canvas. For instance, I want to write my own line-drawing algorithm rather than rely on someone else's. I do not want any form of anti-aliasing (again, pixel-level control is required). I would like the user's interactions on the screen to be quick and responsive (pending my ability to write fast algorithms).
Ideally, I would like to write this in Python, or perhaps Java as a second choice. The ability to easily make the final app cross-platform is a must. I will submit to different APIs on different OSes if necessary, as long as I can write an abstraction layer around them. Any ideas?
addendum: I need the ability to draw on-screen. Drawing out to a file I've got figured out.
I just this week put together some slides and demo code for doing 2d graphics using OpenGL from python using the library pyglet. Here's a representative post: Pyglet week 2, better vertex throughput (or 3D stuff using the same basic ideas)
It is very fast (relatively speaking, for Python); I have managed to get around 1,000 independently positioned and oriented objects moving around the screen, each with about 50 vertices.
It is very portable; all the code I have written in this environment works on Windows, Linux and Mac (and even obscure environments like PyPy) without me ever having to think about it.
Some of these posts are very old, with broken links between them. You should be able to find all the relevant posts using the 'graphics' tag.
The Pyglet library for Python might suit your needs. It lets you use OpenGL, a cross-platform graphics API. You can disable anti-aliasing and capture regions of the screen to a buffer or a file. In addition, you can use its event handling, resource loading, and image manipulation systems. You can probably also tie it into PIL (Python Imaging Library), and definitely Cairo, a popular cross-platform vector graphics library.
I mention Pyglet instead of pure PyOpenGL because Pyglet handles a lot of ugly OpenGL stuff transparently with no effort on your part.
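To give a sense of how little boilerplate is involved, a minimal Pyglet window (a hedged sketch - the label is just a stand-in for your own drawing code) looks like this:

    import pyglet

    window = pyglet.window.Window(640, 480, caption="pyglet demo")
    label = pyglet.text.Label("Hello, world",
                              x=window.width // 2, y=window.height // 2,
                              anchor_x="center", anchor_y="center")

    @window.event
    def on_draw():
        window.clear()   # called every frame by pyglet's event loop
        label.draw()

    pyglet.app.run()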
A friend and I are currently working on a drawing program using Pyglet. There are a few quirks - for example, OpenGL is always double buffered on OS X, so we have to draw everything twice, once for the current frame and again for the other frame, since they are flipped whenever the display refreshes. You can look at our current progress in this subversion repository. (Splatterboard.py in trunk is the file you'll want to run.) If you're not up on using svn, I would be happy to email you a .zip of the latest source. Feel free to steal code if you look into it.
If language choice is open, a Flash file created with Haxe might have a place. Haxe is free, and a full, dynamic programming language. Then there's the related Neko, a virtual machine (like Java's, Ruby's, Parrot...) to run on Mac, Windows and Linux. Being in some ways a new improved form of Flash, naturally it can draw stuff. http://haxe.org/
Qt's Canvas and QPainter are very good for this job if you'd like to use C++, and it is cross-platform.
There is a Python binding for Qt, but I've never used it.
As for Java, pixel-level manipulation of a canvas using SWT is somewhat difficult and slow, so I would not recommend it. On the other hand, Swing's canvas is pretty good and responsive. I've never used the AWT option, but you probably don't want to go there.
I would recommend wxPython
It's beautifully cross-platform and you can get per-pixel control, and if you change your mind about that you can use it with libraries such as pyglet or AGG.
You can find some useful examples for just what you are trying to do in the docs and demos download.
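For a taste of the per-pixel route, here is a minimal hedged wxPython sketch (class and window names are made up for illustration) that plots individual points in a paint handler:

    import wx

    class Canvas(wx.Frame):
        def __init__(self):
            super().__init__(None, title="Pixel canvas", size=(320, 240))
            self.Bind(wx.EVT_PAINT, self.on_paint)

        def on_paint(self, event):
            dc = wx.PaintDC(self)
            dc.SetPen(wx.Pen(wx.BLACK))
            for x in range(200):          # plot pixels one at a time - no anti-aliasing
                dc.DrawPoint(x, x // 2)

    app = wx.App()
    Canvas().Show()
    app.MainLoop()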