With the popularity of the Apple iPhone, the potential of the Microsoft Surface, and the sheer fluidity and innovation of the interfaces pioneered by Jeff Han of Perceptive Pixel ...
What are good examples of Graphical User Interfaces which have evolved beyond the
Windows, Icons, Menus/Mouse, and Pointer (WIMP) paradigm?
Are you only interested in GUIs? A lot of research has been done and continues to be done on tangible interfaces, for example, which fall outside of that category (although they can include computer graphics). The User Interface Wikipedia page might be a good place to start. You might also want to explore the ACM CHI Conference. I used to know some of the people who worked on zooming interfaces; the Human-Computer Interaction Lab at the University of Maryland also has a bunch of links which you may find interesting.
Lastly, I will point out that a lot of innovative user interface ideas work better in demos than they do in real use. I bring that up because your example, as a couple of commenters have pointed out, might, if applied inappropriately, be tiring to use for any extended period of time. Note that light pens were, for the most part, replaced by mice. Good design sometimes goes against naive intuition (mine, anyway). There is a nice rant on this topic with regard to 3D graphics on useit.com.
Technically, the interface you are looking for may be called a Post-WIMP user interface, according to a paper of the same name by Andries van Dam. The reason we need other paradigms is that WIMP is not good enough, especially for specific applications such as 3D model manipulation.
To those who think that UI research builds only cool-looking but impractical demos: the first mouse was bulky, and it took decades for it to become prevalent. Also, Douglas Engelbart, its inventor, thought people would use a mouse and a chorded keyset (a short form of keyboard) at the same time. This shows that even a pioneer of the field had the wrong vision of the future.
Since we are still in the WIMP era, there are diverse opinions on what the future will look like (and most of them must be wrong). Please search for these keywords in Google for more details.
Programming by example/demonstration
In short, in this paradigm, users show what they want to do and the computer learns new behaviors.
3D User Interfaces
I guess everybody knows and has seen many examples of this interface before. Despite a lot of heated debate about its usefulness, some of the ongoing 3D-interface research has made it into many leading operating systems. The state of the art could be BumpTop. See also: Zooming User Interfaces
Pen-based/Sketch-based/Gesture-based Computing
Though this interface may use the same hardware setup as WIMP, instead of pointing and clicking, users issue commands through strokes, which carry richer information.
Direct-touch User Interface
This is like Microsoft's Surface or Apple's iPhone, but it doesn't have to be on a tabletop. The interactive surface can be vertical, say a wall, or not flat at all.
Tangible User Interface
This has already been mentioned in another answer. It can work well with a touch surface, a computer vision system, or augmented reality.
Voice User Interface, Mobile computing, Wearable Computers, Ubiquitous/Pervasive Computing, Human-Robot Interaction, etc.
Further information:
Noncommand User Interface by Jakob Nielsen (1993) is another seminal paper on the topic.
If you want some theoretical concepts on GUIs, consider looking at vis, by Tuomo Valkonen. Tuomo has been extremely critical of the WIMP concept for a long time; he developed the Ion window manager, one of the many tiling window managers around. Tiling WMs are actually a performance win for the user when used right.
Vis is the idea of a UI that actually adapts to the needs of the particular user and their environment, including vision impairment, tactile preferences (mouse or keyboard), preferred language (to better suit right-to-left languages), preferred visual presentation (button order, Mac-style or Windows-style), better use of available space, corporate identity, etc. The UI definition is presentation-free; the only things allowed are input/output parameters and their relationships. The layout algorithms and ergonomic constraints of the GUI itself are defined exactly once, at the system level and in the user's preferences. Essentially, this allows for any kind of GUI as long as the data to be shown is clearly defined. A GUI for a mobile device is equally possible, as is a text-terminal UI or a voice interface.
How about mouse gestures?
A somewhat unknown, relatively new and highly underestimated UI feature.
They tend to have a somewhat steeper learning curve than icons because of their invisibility (if nobody tells you they exist, they stay invisible), but they can be a real time saver for the more experienced user (I get really aggravated when I have to browse without mouse gestures).
It's kind of like the hotkey for the mouse.
Sticking to GUIs puts limits on the physical properties of the hardware. Users have to be able to read a screen and respond in some way. The iPhone, for example: its interface is the whole top surface, so physical size and the IxD are opposing factors.
Around Christmas I wrote a paper exploring the potential for a wearable BCI-controlled device. Now, I'm not suggesting we're ready to start building such devices, but the lessons learnt are valid. I found that most users liked the idea of using language as the primary interaction medium. Crucially though, all expressed concerns about ambiguity and confirmation.
The WIMP paradigm is one that relies on very precise, definite actions - usually button pressing. Additionally, as Nielsen reminds us, good feedback is essential. WIMP systems are usually pretty good at immediately announcing the receipt and outcome of a user's actions (or at least have the potential to be).
To escape these paired requirements, it seems we really need to write software that users can trust. This might mean being context aware, or it might mean having some sort of structured query language based on a subset of English, or it might mean something entirely different. What it certainly means though, is that we'd be free of the desktop and finally be able to deploy a seamlessly integrated computing experience.
NUI Group people work primarily on multi-touch interfaces and you can see some nice examples of modern, more human-friendly designs (not counting the endless photo-organizing-app demos ;) ).
People are used to WIMP; the other main issue is that most of the other "cool" interfaces require specialized hardware.
I'm not in journalism; I write software for a living.
vim!
It's definitely outside the realm of WIMP, but whether it's beyond it or way behind it is up to judgment!
I would recommend the following paper:
Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., and Zigelbaum, J. 2008. Reality-based interaction: a framework for post-WIMP interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 201-210. see DOI
Where is the Body of Knowledge for programmers interested in developing applications that simulate traditional artists' materials and tools, such as simulating natural paints?
Is there any substantial body of knowledge or resource for software engineers interested in creating applications that reproduce the effect of painting and drawing media such as watercolor, oil, chalk, charcoal and color pencil?
Clearly the knowledge exists and is shared by software engineers at Adobe, Corel, etc. But out here in the open, where is this information?
So far I've only come across fragmentary knowledge of a little technique here or there, but have not yet found any substantial resource. If you know where I need to look, please point me there.
Where are the best academic resources? Are there any blogs that specialize in this area? Are there organizations that specialize in this?
Eureka! NPR - for those of us heretofore uninitiated - refers to Non-Photorealistic Rendering, the "area of computer graphics that makes images resembling traditional artistic works" (Mould).
ACM Digital Library appears to be a very extensive source for research and academic papers on NPR. See here, here, and here for a good index of NPAR papers; search for 'NPR' or 'non-photorealistic rendering' on this page.
This page at the ACM Digital Library shows an example of the ways to access the material: a member (who has paid the annual membership fee) can purchase a paper for $10, a non-member for $15; alternatively, you can rent an article such as this for $2.99 for 24 hours.
Non-photorealistic Rendering, Bruce Gooch, Amy Gooch - 2001: A preview of this book is on Google Books and it looks like a bulls-eye for subject matter expertise.
Non-Photorealistic Computer Graphics: Modeling, Rendering, and Animation (The Morgan Kaufmann Series in Computer Graphics) by Thomas Strothotte. This book looks like a goldmine. It covers things like stippling, drawing incorrect lines, drawing artistic lines, simulating painting with wet paint, simulating pencils, strokes and textures, etc.
Microsoft and Adobe have a visible presence in this field of work, and of course employ people who are prominently involved in the NPR field; you will see their names (or their employers') often appear as participants and sponsors of events like the upcoming NPAR 2011 (sponsored by Adobe).
Microsoft has the impressive Project Gustav (video) - a software application dedicated exclusively to very realistically simulating traditional artistic tools such as paint, chalk, pencil, etc.
Detail Preserving Paint Modeling of 3D Brushes.
So I stumbled upon this "new" graphics engine/technology called Unlimited Detail.
This seems to be pretty interesting, provided it's real and not a fake.
They have some videos explaining the technology but they only scratch the surface.
What do you think about it? Is it programmatically possible?
Or is it just a scam for investors?
Update:
Since the only answer was based on voxels I have to copy this from their site:
Unlimited Detail's method is very different from any 3D method that has been invented so far. The three current systems used in 3D graphics are ray tracing, polygons, and point cloud/voxels; they all have strengths and weaknesses. Polygons run fast but have poor geometry; ray tracing and voxels have perfect geometry but run very slowly.
Unlimited Detail is a fourth system, which is more like a search algorithm than a 3D engine
The underlying technology is related to something called sparse voxel octrees (see, e.g., this paper), which aren't anything incredibly amazing. What the video doesn't tell you is that these are not at all suited for things that need to be animated, so they're of limited use for anything that uses procedural animation (e.g., all ragdoll physics, etc.). So they're very inflexible. You can get great detail, but you get it in a completely static world.
A rough summary of where things stand with this technology in mainstream games is here. You will also want to check out Samuli Laine's work; he's a Finnish researcher who is focusing a great deal of his attention on this subject and is unlocking some of the secrets to implementing it well.
Update: Yes, the website says it's not "voxel-based". I suspect this is merely an issue of semantics, however, in that what they're using are essentially voxels, but because it's not exactly a voxel they feel safe in being able to claim that it's not voxel-based. In any case, the magic isn't in how similar to a voxel it is -- it's how they select which voxels to actually show. This is the primary determinant of speed.
Right now, there is no incredibly fast way to show voxels (or something approximating a voxel). So either they have developed a completely new, non-peer-reviewed method for filtering voxels (or something like them), or they're lying.
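For context, a typical sparse-voxel-octree node is nothing exotic: a bitmask saying which of the eight children exist, plus the packed children themselves. This is a common textbook layout with made-up field names, not Euclideon's (their exact data structure isn't public):

    #include <stdint.h>

    /* One node of a sparse voxel octree in a common layout: a bitmask says
       which of the 8 children exist, and the existing children are packed
       contiguously starting at first_child.  Euclideon's real structure is
       not public; this is only illustrative. */
    typedef struct {
        uint8_t  child_mask;   /* bit i set => child i exists             */
        uint8_t  is_leaf;      /* leaves carry colour instead of children */
        uint16_t colour;       /* e.g. RGB565 leaf / averaged colour      */
        uint32_t first_child;  /* index of the first existing child node  */
    } SvoNode;

    /* How many children come before child i: popcount of the lower bits
       of the mask (GCC/Clang builtin). */
    static int child_offset(const SvoNode *n, int i)
    {
        return __builtin_popcount(n->child_mask & ((1u << i) - 1u));
    }

As noted above, the layout itself isn't the interesting part; the speed has to come from how you select, traverse, and stream these nodes.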
You might find more detail in the following patents:
"A Computer Graphics Method For Rendering Three Dimensional Scenes"
"A Method For Efficent Streaming Of Octree Data For Access"
- Each voxel (they call it a "node") is represented as a single bit, along with information about voxels at a finer level of detail.
The full-text can be viewed online here:
https://www.lens.org/lens/search?q=Euclideon+Pty+Ltd&l=en
or
http://worldwide.espacenet.com/searchResults?submitted=true&query=EUCLIDEON
I am in the process of developing a game, and after two months of work (not full time, mind you), I have come to realise that our specs for the game are lacking a lot of detail. I am not a professional game developer; this is only a hobby.
What I would like help or advice with is this: what are the major components that you find in games, that either have to be developed or already exist as libraries? The objective of this question is for me to be able to specify more aspects of the game.
So far, we have specified pretty much only how we would handle the visuals, completely forgetting everything about game logic (AI, entity interactions, quest logic such as deciding whether or not a quest is completed).
So far, I have found those points:
Physics (collision detection, actual forces, etc.)
AI (pathfinding, objectives, etc.)
Model management
Animation management
Scene management
Combat management
Inventory management
Camera (make sure not to render everything that is in the scene)
Heightmaps
Entity communication (player with NPCs, enemies, other players, etc.)
Game state
Game state save system
In order to reduce the scope of this question, I'd like you to specifically discuss aspects related to developing an RPG-type game. I will also point out that I am using XNA to develop this game, but I have almost no grasp of all the available classes yet (pretty much only the Game component and some classes related to it, such as GameTime, SpriteBatch, and GraphicsDeviceManager).
You have a decent list, but you are missing storage (save/load), text (text is important in RPGs: Unicode, font rendering), probably a macro system for text (something that replaces tokens like {player} with the player character's name), and, most important of all, content-generation tools (map editor, character editor, dialog editor), because RPGs need content (or auto-generation tools if you prefer). By the way, do you have links to your work?
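To make the text-macro point concrete, here is a tiny sketch of token replacement; the {player} token and the function name are made up for illustration:

    #include <stdio.h>
    #include <string.h>

    /* Expand "{player}" tokens in a dialog template into a fixed-size buffer.
       A real system would handle many tokens and proper grammar, but the core
       is just a scan-and-copy like this. */
    void expand_player(const char *tmpl, const char *player_name,
                       char *out, size_t out_size)
    {
        const char *token = "{player}";
        size_t token_len = strlen(token);
        size_t used = 0;

        while (*tmpl && used + 1 < out_size) {
            if (strncmp(tmpl, token, token_len) == 0) {
                size_t n = strlen(player_name);
                if (used + n >= out_size)
                    n = out_size - 1 - used;      /* truncate if needed */
                memcpy(out + used, player_name, n);
                used += n;
                tmpl += token_len;
            } else {
                out[used++] = *tmpl++;
            }
        }
        out[used] = '\0';
    }

    int main(void)
    {
        char line[128];
        expand_player("Welcome back, {player}!", "Ayla", line, sizeof line);
        puts(line);   /* prints: Welcome back, Ayla! */
        return 0;
    }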
I do this exact stuff for a living so if you need more pointers perhaps I can help.
I don't know if this is any help, but I have been reading articles from http://www.gamasutra.com/ for many years.
You won't have a perfect set of tools from the beginning, but your list covers most of the usual trouble for RUNNING the game. But have you worked out what each one of the items stands for? How much have you built already? "Inventory management" sounds very heavy, but some games just need a simple "array" of objects; that takes an hour to program plus some graphical integration (if you have your GUI management done already).
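To show how light-weight that can be, a "simple array" inventory might be nothing more than this (names are illustrative, not from any particular engine):

    #define MAX_INVENTORY 32

    typedef struct {
        int item_id;    /* index into your item definition table */
        int quantity;
    } InventorySlot;

    typedef struct {
        InventorySlot slots[MAX_INVENTORY];
        int           count;
    } Inventory;

    /* Stack onto an existing slot if the id matches, otherwise take the
       next free slot.  Returns 0 on success, -1 if the bag is full. */
    int inventory_add(Inventory *inv, int item_id, int quantity)
    {
        for (int i = 0; i < inv->count; i++) {
            if (inv->slots[i].item_id == item_id) {
                inv->slots[i].quantity += quantity;
                return 0;
            }
        }
        if (inv->count >= MAX_INVENTORY)
            return -1;
        inv->slots[inv->count].item_id  = item_id;
        inv->slots[inv->count].quantity = quantity;
        inv->count++;
        return 0;
    }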
How to start planning
When I develop games in my spare time, I usually get an idea because another game lacks some function/option. Then I start up whatever development tool I am currently using and try to see if I can make a prototype showing this idea. It's not always about fancy graphics, but most often it's more about finding out how to solve a certain problem. Green and red boxes will help you most of the way, but otherwise, use Google Images and do a quick search for prototype graphics. But remember that these images are probably copyrighted, so only use them for internal test purposes and to explain to your graphic artists what type of game/graphics you want to make.
Secondly, you'll find that you need to find/build tools to create the "maps/missions/quests" too. Today many develop their own "object script" where they can easily add new content/path to a game.
Many of the ideas we (my friends and I) have been testing started with a certain prototype of the interface, to see if it's possible to generate that sort of screen output first. Then we build a quick'n'dirty map/level editor that can supply us with test maps.
No game logic at this point, still figuring out if the game-engine in general is running.
My first game-algorithm problem
Back when I was in my teens I had a Commodore 64, and I was wondering: how do you sort 10 numbers in order for a high-score table? It took me quite a while to find a "scalable" way of doing this, but I learned a lot about programming too.
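For what it's worth, the kind of solution I eventually arrived at amounts to inserting the new score into a sorted array, something like this (a modern C sketch, not anything that ran on the C64):

    #define SCORES 10

    /* table[] is kept sorted highest-first.  Insert the new score in place
       and shift everything below it down one slot. */
    void insert_score(int table[SCORES], int new_score)
    {
        int i = SCORES - 1;
        if (new_score <= table[i])
            return;                              /* not good enough */
        while (i > 0 && table[i - 1] < new_score) {
            table[i] = table[i - 1];             /* shift lower scores down */
            i--;
        }
        table[i] = new_score;
    }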
The second problem I found
How do I make a tank/cannon fire a bullet in the correct direction when I fly my helicopter around the screen?
I sat down and drew quick sketches of the actual problem, looked at the bullet lines, tried some theories of my own and found something that seemed to work (by dividing and multiplying positions, etc.). Later on, in school, I discovered this was more or less Pythagoras. LOL!
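In modern terms, all that dividing and multiplying of positions boils down to normalising a direction vector; an illustrative sketch (not the original code):

    #include <math.h>

    /* Aim a bullet from the cannon at (cx, cy) toward the helicopter at
       (hx, hy).  The distance comes straight from Pythagoras; dividing by it
       gives a unit direction vector that is then scaled by the bullet speed. */
    void aim_bullet(float cx, float cy, float hx, float hy, float speed,
                    float *vx, float *vy)
    {
        float dx = hx - cx, dy = hy - cy;
        float dist = sqrtf(dx * dx + dy * dy);   /* Pythagoras */
        if (dist > 0.0f) {
            *vx = dx / dist * speed;
            *vy = dy / dist * speed;
        } else {
            *vx = *vy = 0.0f;                    /* target is on the cannon */
        }
    }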
Years and many game attempts later
I played "Dune" and the later C&C + the new game Warcraft (v1/v2) - I remember it started to annoyed me how the lame AI worked. The path finding algorithms were frustrating for the player, I thought. They moved in direction of target position and then found a wall, but if the way was to complex, the object just stopped. Argh!
So I first sat with large amounts of paper, then I tried to draw certain scenarios where an "object" (tank/ork/soldier) would go from A to B and then suddenly there was a "structure" (building/other object) in the pathway - what then?
I learned about A* pathfinding (after first solving it on my own in a similar way, then later reading about why it works). It is a fairly CPU-heavy way of finding a path, but I learned a lot from the process of "cracking this nut". These thoughts have helped me a lot when developing other game algorithms over time.
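For anyone curious, the core of A* fits on a page; this toy version runs on a tiny hard-coded grid and uses a linear scan instead of a proper priority queue, which is exactly the part a real game would optimise:

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define W 8
    #define H 8

    /* '#' cells are blocked, '.' cells are walkable. */
    static const char *map[H] = {
        "........",
        ".######.",
        ".#....#.",
        ".#.##.#.",
        ".#.#..#.",
        ".#.####.",
        ".#......",
        "........",
    };

    static int heuristic(int x, int y, int gx, int gy)
    {
        return abs(gx - x) + abs(gy - y);        /* Manhattan distance */
    }

    /* Returns the path length from (sx,sy) to (gx,gy), or -1 if unreachable. */
    int astar(int sx, int sy, int gx, int gy)
    {
        int g[H][W], open[H][W] = {{0}}, closed[H][W] = {{0}};
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                g[y][x] = INT_MAX;

        g[sy][sx] = 0;
        open[sy][sx] = 1;

        for (;;) {
            /* Pick the open cell with the lowest f = g + h (linear scan). */
            int bx = -1, by = -1, bf = INT_MAX;
            for (int y = 0; y < H; y++)
                for (int x = 0; x < W; x++)
                    if (open[y][x] && g[y][x] + heuristic(x, y, gx, gy) < bf) {
                        bf = g[y][x] + heuristic(x, y, gx, gy);
                        bx = x; by = y;
                    }
            if (bx < 0) return -1;               /* open set empty: no path */
            if (bx == gx && by == gy) return g[by][bx];

            open[by][bx] = 0;
            closed[by][bx] = 1;

            static const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
            for (int i = 0; i < 4; i++) {
                int nx = bx + dx[i], ny = by + dy[i];
                if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                if (map[ny][nx] == '#' || closed[ny][nx]) continue;
                if (g[by][bx] + 1 < g[ny][nx]) { /* found a cheaper route */
                    g[ny][nx] = g[by][bx] + 1;
                    open[ny][nx] = 1;
                }
            }
        }
    }

    int main(void)
    {
        printf("path length: %d\n", astar(0, 0, 7, 7));   /* prints 14 */
        return 0;
    }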
So what I am saying is: I think you'll have to think more of:
How is the game to be played?
What does the user experience look like?
Why would the user want to come back to the game?
What requirements are needed? Broadband? 19" monitor with 1280x1024?
An RPG, yes - but will it be multi-user or single?
Do we need a fast network/server setup or do we need to develop a strong AI for the NPCs?
And much more...
I am not sure this is what you asked for, but I hope you can use it somehow?
There are hundreds of components needed to make a game, from time management to audio. You'll probably need to roll your own GUI, as native OS controls are very non-gamey. You will probably also need all kinds of tools to generate your worlds, exporters to convert models and textures into something suitable for your game etc.
I would strongly recommend that you start with one of the many free or cheap game engines that are out there. Loads of them come with the source code, so you can learn how they have been put together as you go.
When you think you are ready, you can start to replace parts of the engine you are using to better suit your needs.
I agree with Robert Gould's post, especially about tools, and I'd also add:
Scripting
Memory Management
Network - especially replication of game object states and match-making
oh and don't forget Localisation - particularly for text strings
Effects and effect timers (could be magical effects, could just be stuff like being stunned.)
Character professions, skills, spells (if that kind of game).
World creation tools, to make it easy for non-programmer builders.
Think about whether or not you want PvP. If so, you need to really think about how you're going to do your combat system and any limits you want on who can attack whom.
Equipment, "treasure", values of things and how you want to do the economy.
This is an older question, but IMHO now there is a better answer: use Unity (or something akin to it). It gives you 90% of what you need to make a game up front, so you can jump in and focus directly on the part you care about, which is the gameplay. When you run aground because there's something it doesn't do out of the box, you can usually find a resource in the Asset Store for free or cheap that will save you a lot of work.
I would also add that if you're not working on your game full-time, be mindful of the complexity and the time-frame of the task. If you try to integrate so many different frameworks into your RPG, you can easily end up with several years' worth of work; it may be more advisable to start small and develop only the "core" of your game first, not bothering about physics, for example. You can still add that in the second version.
Despite all the advances in 3D graphic engines, it strikes me as odd that the same level of attention hasn't been given to audio. Modern games do real-time rendering of 3D scenes, yet we still get more-or-less pre-canned audio accompanying those scenes.
Imagine - if you will - a 3D engine that models not just the physical appearance of items, but also their audio properties. And from these models it can dynamically generate audio based on the materials that come into contact, their velocity, distance from your virtual ears, etcetera. Now, when you're crouching behind the sandbags with bullets flying over your head, each one will yield a unique and realistic sound.
The obvious application of such a technology would be gaming, but I'm sure there are many other possibilities.
Is such a technology being actively developed? Does anyone know of any projects that attempt to achieve this?
Thanks,
Kent
I once did some research toward improving OpenAL, and the problem with simulating 3D audio is that so many of the cues that your mind uses — the slightly different attenuation at various angles, the frequency difference between sounds in front of you and those behind you — are quite specific to your own head and are not quite the same for anyone else!
If you want, say, a pair of headphones to really make it sound like a creature is in the leaves ahead and in front of the character in a game, then you actually have to take that player into a studio, measure how their own particular ears and head change the amplitude and phase of the sound at different distances (amplitude and phase are different, and are both quite important to the way your brain processes sound direction), and then teach the game to attenuate and phase-shift the sounds for that particular player.
There do exist "standard heads" that have been mocked up with plastic and used to get generic frequency-response curves for the various directions around the head, but an average or standard will never sound quite right to most players.
Thus the current technology is basically to sell the player five cheap speakers, have them place them around their desk, and then the sounds — while not particularly well reproduced — actually do sound like they're coming from behind or beside the player because, well, they are coming from the speaker behind the player. :-)
But some games do bother to be careful to compute how sound would be muffled and attenuated through walls and doors (which can get difficult to simulate, because the ear receives the same sound at a few milliseconds different delay through various materials and reflective surfaces in the environment, all of which would have to be included if things were to sound realistic). They tend to keep their libraries under wraps, however, so public reference implementations like OpenAL tend to be pretty primitive.
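To give a rough flavour of the amplitude-and-delay cues described above, here is a very crude two-ear model: constant-power panning for level plus Woodworth's approximation of the interaural time difference, with none of the per-person HRTF detail. The constants and the function are purely illustrative:

    #include <math.h>

    #define SPEED_OF_SOUND 343.0f   /* m/s */
    #define HEAD_RADIUS    0.09f    /* ~9 cm, a crude "standard head" */

    /* Very rough two-ear cue model.  A positive itd means the left ear hears
       the sound that much later.  Real HRTF data (like the KEMAR set linked
       below) captures far more than this. */
    void ear_cues(float azimuth_rad, float distance_m,
                  float *gain_l, float *gain_r, float *itd_s)
    {
        float base  = 1.0f / (1.0f + distance_m);      /* distance roll-off     */
        float pan   = sinf(azimuth_rad);               /* -1 = left, +1 = right */
        float theta = (pan + 1.0f) * 0.78539816f;      /* map to [0, pi/2]      */

        *gain_l = base * cosf(theta);                  /* constant-power pan    */
        *gain_r = base * sinf(theta);
        *itd_s  = (HEAD_RADIUS / SPEED_OF_SOUND)
                  * (azimuth_rad + sinf(azimuth_rad)); /* Woodworth model       */
    }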
Edit: here is a link to an online data set that I found at the time, that could be used as a starting point for creating a more realistic OpenAL sound field, from MIT:
http://sound.media.mit.edu/resources/KEMAR.html
Enjoy! :-)
Aureal did this back in 1998. I still have one of their cards, although I'd need Windows 98 to run it.
Imagine ray-tracing, but with audio. A game using the Aureal API would provide geometric environment information (e.g. a 3D map) and the audio card would ray-trace sound. It was exactly like hearing real things in the world around you. You could focus your eyes on the sound sources and attend to given sources in a noisy environment.
As I understand it, Creative destroyed Aureal by means of legal expenses in a series of patent infringement claims (which were all rejected).
In the public domain world, OpenAL exists - an audio version of OpenGL. I think development stopped a long time ago. They had a very simple 3D audio approach, no geometry - no better than EAX in software.
EAX 4.0 (and I think there is a later version?) has finally - after a decade - incorporated, I think, some of the geometric ray-tracing approach Aureal used (Creative bought up their IP after they folded).
The Source (Half-Life 2) engine on the SoundBlaster X-Fi already does this.
It really is something to hear. You can definitely hear the difference between an echo against concrete vs wood vs glass, etc...
A little-known side area is VoIP. While game audio software is actively developed, you are also likely to spend time talking to others while you are gaming.
Mumble ( http://mumble.sourceforge.net/ ) is software that uses plugins to determine who is in-game with you. It then positions their audio in a 360-degree field around you, so someone to your left sounds like they are on the left, and someone behind you sounds like they are behind you. This made a creepily realistic addition, and while trying it out it led to funny games of "Marco Polo".
Audio took a massive step backwards in Vista, where hardware was no longer allowed to accelerate it. This killed EAX as it was in the XP days. Software wrappers are gradually being built now.
Very interesting field indeed. So interesting that I'm going to do my master's thesis on this subject - in particular, its use in first-person shooters.
My literature research so far has made it clear that this particular field has little theoretical background. Not a lot of research has been done in this field, and most theory is based on movie-audio theory.
As for practical applications, I haven't found any so far. Of course, there are plenty of titles and packages which support real-time audio-effect processing and apply it depending on the general surroundings of the listener, e.g. the listener enters a hall, so an echo/reverb effect is applied to the sound samples. This is rather crude. An analogy for visuals would be to subtract 20% of the RGB value of the entire image when someone turns off (or shoots ;) ) one of five lightbulbs in the room. It's a start, but not very realistic at all.
The best work I found was a 2007 PhD thesis by Mark Nicholas Grimshaw, University of Waikato, called The Acoustic Ecology of the First-Person Shooter.
This huge paper proposes a theoretical setup for such an engine, as well as formulating a wealth of taxonomies and terms for analysing game audio. He also argues that the importance of audio for first-person shooters is greatly overlooked, as audio is a powerful force for immersion in the game world.
Just think about it. Imagine playing a game on a monitor with no sound but picture-perfect graphics. Next, imagine hearing realistic game sounds all around you while closing your eyes. The latter will give you a much greater sense of "being there".
So why haven't game developers dived into this wholeheartedly already? I think the answer is clear: it's much harder to sell. Improved images are easy to sell: you just show a picture or movie and it's easy to see how much prettier it is. It's even easily quantifiable (e.g. more pixels = better picture). For sound it's not so easy. Realism in sound is much more subconscious, and therefore harder to market.
The effects the real world has on sounds are subconsciously perceived. Most people never even notice most of them. Some of these effects cannot even be heard consciously. Still, they all play a part in the perceived realism of the sound. There is an easy experiment you can do yourself which illustrates this. Next time you're walking on the sidewalk, listen carefully to the background sounds of the environment: wind blowing through leaves, all the cars on distant roads, etc. Then listen to how this sound changes when you walk nearer to or further from a wall, or when you walk under an overhanging balcony, or even when you pass an open door. Do it, listen carefully, and you'll notice a big difference in sound - probably much bigger than you ever remembered.
In a game world, these types of changes aren't reflected. And even though you don't (yet) consciously miss them, you do subconsciously, and this has a negative effect on your level of immersion.
So, how good does audio have to be in comparison to the image? More practically: which physical effects in the real world contribute the most to the perceived realism? Does this perceived realism depend on the sound and/or the situation? These are the questions I wish to answer with my research. After that, my idea is to design a practical framework for an audio engine which could variably apply some effects to some or all game audio, depending (dynamically) on the amount of available computing power. Yup, I'm setting the bar pretty high :)
I'll be starting in September 2009. If anyone's interested, I'm thinking about setting up a blog to share my progress and findings.
Janne Louw
(BSc Computer Sciences Universiteit Leiden, The Netherlands)
I'm interested in learning about the different layers of abstraction available for making graphical applications.
I see a lot of terms thrown around: At the highest level of abstraction, I hear about things like C#, .NET, pyglet and pygame. Further down, I hear about DirectX and OpenGL. Then there's DirectDraw, SDL, the Win32 API, and still other multi-platform libraries like WxWidgets.
How can I get a good sense of where one of these layers ends and where the next one begins? What is the "lowest possible level" way of creating a window in Windows, in C? What about C++? (A code sample would be divine.) What about in X11? Are the Windows implementations of OpenGL and DirectX built on top of the Win32 API? Where can I begin to learn about these things?
There's another question on SO where Programming Windows is suggested. What about for Linux? Is there an equivalent such book?
I'm aware that this is very low-level, and that there are many friendlier tools available, but I would like to at least learn the basics of what's going on beneath the surface. As much as I'd like to begin slinging windows and vectors right off the bat, starting with something like pygame is too high-level for me; I really need to make the full conceptual circuit of how you draw stuff on a computer.
I will certainly appreciate suggestions for books and resources, but I think it would be stupendously cool if the answers to this question filled up with lots of different ways to get to "Hello world" with different approaches to graphics programming. C? C++? Using OpenGL? Using DirectX? On Windows XP? On Ubuntu? Maybe I ask for too much.
The lowest level would be the graphics card's video RAM. When the computer first starts, the graphics card is typically set to the 80x25 character legacy mode.
You can write text with a BIOS-provided interrupt at this point. You can also change the foreground and background color from a palette of 16 distinct colors. You can use ports/registers to change the display mode. At this point you could, say, load a different font into the display memory and still use the 80x25 mode (OS installations usually do this), or you can go ahead and enable VGA/SVGA. It's quite complicated; that's what drivers are for.
Once the card is in a "higher" mode, you change what's on screen by writing to the memory mapped to the video card. It's stored horizontally, pixel by pixel, with some "dirty regions" of pixels at the end of each line that aren't mapped to the screen, which you have to compensate for. But yeah, you could copy the pixels of an image in memory directly to the screen.
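As a concrete (if dated) example, in DOS-era VGA mode 13h "writing to the screen" really was just writing bytes into that mapped memory. Something like this, assuming a 16-bit real-mode compiler such as Turbo C (e.g. under DOSBox):

    /* VGA mode 13h: 320x200, 256 colours, framebuffer memory-mapped at
       segment 0xA000, one byte per pixel, stored row after row. */
    unsigned char far *vga = (unsigned char far *)0xA0000000L;

    void put_pixel(int x, int y, unsigned char colour)
    {
        vga[y * 320 + x] = colour;   /* offset = row * screen_width + column */
    }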
For things like DirectX and OpenGL, rather than writing directly to the screen, commands are sent to the graphics card and it updates the screen itself. Commands like "Hey you, draw this image I've loaded into VRAM here, here and here" or "Draw these triangles with this transformation matrix..." take a fraction of the time compared to going pixel by pixel. The CPU will thank you.
DirectX/OpenGL is a programmer friendly library for sending those commands to the card with all the supporting functions to help you get it done smoothly. A more direct approach would only be unproductive.
SDL is an abstraction layer, so without bothering to read up on it I'd guess it has different ways of working on each system: on one it might use semi-direct screen writing, on another Direct3D, etc. Whatever's fastest, as long as the code stays cross-platform.
Then there are GDI/GDI+ and the X Window System. They're designed specifically for drawing windows. Originally they drew using the pixel-by-pixel method (which was good enough because they only had to redraw when a button was pressed or a window moved, etc.), but now they use Direct3D/OpenGL for accelerated drawing (and special effects). Optimizations depend on the versions and implementations of these libraries.
So if you want the most power and speed, DirectX/OpenGL is the way to go. SDL is certainly useful for getting the most from a cross-platform environment and integrates with OpenGL anyway. The windowing system comes last, but don't underestimate it. Especially with the stuff Microsoft's coming up with lately.
Michael Abrash's Graphics Programming 'Black Book' is a great place to start. Plus you can download it for free!
If you really want to start at the bottom then drawing a line is the most basic operation. Computer graphics is simply about filling in pixels on a grid (screen), so you need to work out which pixels to fill in to get a line that goes from (x0,y0) to (x1,y1).
Check out Bresenham's algorithm to get a feel for what is involved.
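A minimal C version looks roughly like this, with put_pixel standing in for whatever framebuffer write you end up with:

    #include <stdlib.h>

    /* Plot a line from (x0,y0) to (x1,y1) using integer arithmetic only. */
    void draw_line(int x0, int y0, int x1, int y1,
                   void (*put_pixel)(int x, int y))
    {
        int dx =  abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;                           /* error term */

        for (;;) {
            put_pixel(x0, y0);
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }   /* step in x */
            if (e2 <= dx) { err += dx; y0 += sy; }   /* step in y */
        }
    }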
Being a good graphics and image-processing programmer doesn't require this low-level knowledge, but I do hate being clueless about the insides of what I'm using. I see two ways to chase this: top-down or bottom-up.
Top-down is a matter of following how the action traces from a high-level graphics operation, such as drawing a circle, down to the hardware. Get to know OpenGL well. Then the source to Mesa (free!) provides a peek at how OpenGL can be implemented in software. The source to Xorg would be next, first to see how the action goes from API calls through the client side to the X server. Finally you dive into a device driver that interfaces with the hardware.
Bottom up: build your own graphics hardware. Think of ways it could connect to a computer - how to handle massive numbers of pixels through a few byte-size registers, how DMA would work. Write a device driver, and try designing a graphics library that might be useful for app programmers.
The bottom-up way is how I learned, years ago, when it was a possibility with the slow 8-bit microprocessors. The direct experience with circuitry and hardware-software interfacing gave me a good appreciation of the difficult design decisions - e.g. whether to paint rectangles using clever hardware, in the device driver, or at a higher level. None of this is of practical everyday value, but it provided a foundation of knowledge for understanding newer technology.
see Open GPU Documentation section:
http://developer.amd.com/documentation/guides/Pages/default.aspx
HTH
On MS Windows it is easy: you use what the API provides, whether that is the standard Windows programming API or the DirectX family of APIs; they are well documented.
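For reference, a bare-bones Win32 window in C looks like this; the ANSI entry points are used explicitly so it builds whether or not UNICODE is defined:

    #include <windows.h>

    /* Register a window class, create a window, and pump messages until the
       user closes it. */
    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
        return DefWindowProcA(hwnd, msg, wp, lp);
    }

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmdLine, int nShow)
    {
        WNDCLASSA wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
        wc.lpszClassName = "HelloWindowClass";
        RegisterClassA(&wc);

        CreateWindowA("HelloWindowClass", "Hello, Win32",
                      WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                      CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                      NULL, NULL, hInst, NULL);

        MSG msg;
        while (GetMessageA(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageA(&msg);
        }
        return (int)msg.wParam;
    }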
In an X Window environment you use whatever X11 libraries are provided. If you want to understand the principles behind windowing on X, I suggest you do this, never mind that many others tell you not to; it will really help you understand graphics and windowing under X. You can read the documentation on X programming (Google for it). (After this exercise you will appreciate the higher-level libraries!)
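And the Xlib equivalent, which is about as low as you normally go on X11 without speaking the wire protocol yourself (build with cc hello.c -lX11):

    #include <X11/Xlib.h>
    #include <stdio.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);      /* connect to the X server */
        if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

        int screen = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen),
                                         0, 0, 640, 480, 1,
                                         BlackPixel(dpy, screen),
                                         WhitePixel(dpy, screen));
        XSelectInput(dpy, win, ExposureMask | KeyPressMask);
        XMapWindow(dpy, win);                   /* make the window visible */

        XEvent ev;
        for (;;) {
            XNextEvent(dpy, &ev);
            if (ev.type == KeyPress) break;     /* quit on any key press */
        }
        XCloseDisplay(dpy);
        return 0;
    }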
Apart from the above, the absolute lowest level you can go to (excluding chip level) is to call the interrupts that switch to the various graphics modes available - there are several - and then write to the screen buffers; for this you would have to use assembler, as anything else would be too slow. Going this way is not portable at all.
Another post mentions Abrash's Black Book - an excellent resource.
Edit: As for books on programming Linux: it is a community thing, there are many howto's around; also find a forum, join it, and as long as you act civilized you will get all the help you can ever need.
Right off the bat, I'd say "you're asking too much." From what little experience I've had, I would recommend reading some tutorials or getting a book on either DirectX or OpenGL to start out. To go any lower than that would be pretty complex. Most of the books I've seen on OGL or DX have pretty good introductions that explain what the functions/classes do.
Once you get the hang of one of these, you could always dig in to the libraries to see what exactly they're doing to go lower.
Or, if you really, absolutely MUST learn the LOWEST level... read the book in the above post.
libX11 is the lowest-level library for X11. I believe OpenGL/DirectX talk to the driver/hardware directly (or emulate unsupported operations), so they would be the lowest-level libraries there.
If you want to start with very low level programming, look for x86 assembly code for VGA and fire up a copy of dosbox or similar.
The Vulkan API gives you very low-level access to most, if not all, features of the GPU, both computational and graphical; it works on AMD and Nvidia GPUs (though not all of them).
You can also use CUDA, but it only works on Nvidia GPUs and gives access to computational features only, with no video output.