I have a noobish question for any graphics programmer.
I am confused about how some games (like Crysis) can support both DirectX 9 (on XP) and DirectX 10 (on Vista).
What I understand so far is that if you write a DX10 app, then it can only run on Vista.
Maybe they have two code bases, one written in DX9 and another in DX10? But isn't that overkill?
They have two rendering pipelines, one using DX9 calls and one using DX10 calls. The APIs are not compatible, though the majority of a game engine can be reused for either. If you want some open-source examples of how different rendering pipelines are done, look at something like Ogre3D, which supports OpenGL, DX9, and (soon) DX10 rendering.
The rendering layer of games is usually a fairly well isolated/abstracted part of the whole application. As far as the game engine is concerned, each frame you are simply building up a list of conceptual objects (trees, characters, etc.). If the game engine chooses to render a particular object, then it's up to the rendering layer how to actually translate that intent into DX draw calls. A DX10 rendering layer will generate a different set of draw calls than a DX9 layer, but conceptually they are still performing the same action: 'render this tree'.
Rendering is nicely abstracted because it's rare that you want to get any information back from the rendering layer; once the 'render this tree' action is performed, the game engine just assumes the rendering looks correct. There is little need to handle different potential results from DX9/DX10 rendering calls because 99.9% of the information is going from the engine to the graphics system, and the 0.1% that comes back likely takes the same form in both APIs.
The application setup is a little more icky, because you've got to ask the system whether or not DX10 is supported and gracefully fall back on DX9 otherwise, but this is standard fare for application setup (in the same way that the game has to pick a resolution, refresh rate, input device, etc.).
It is likely that they have an abstraction layer and they develop against that. At run-time they instantiate the concrete engine that wraps DX9 or DX10.
I imagine their abstraction sits very close to the DirectX layer, providing sensible manual implementations of DX10-only functions when running on DX9, or enhanced logic when running on DX10.
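To illustrate the pattern both answers describe, here is a minimal sketch (in Python for brevity; every class and method name here is hypothetical, not from any real engine):

```python
# Minimal sketch of a pluggable rendering backend. All names are
# illustrative; real engines expose far richer interfaces.
from abc import ABC, abstractmethod

class Renderer(ABC):
    """The engine codes against this interface and nothing else."""
    @abstractmethod
    def draw_object(self, obj):
        """Translate 'render this tree' into API-specific draw calls."""

class DX9Renderer(Renderer):
    def draw_object(self, obj):
        print(f"DX9 draw calls for {obj}")   # stand-in for real DX9 calls

class DX10Renderer(Renderer):
    def draw_object(self, obj):
        print(f"DX10 draw calls for {obj}")  # stand-in for real DX10 calls

def create_renderer(dx10_supported: bool) -> Renderer:
    # Decided once at startup; the rest of the engine never knows
    # which backend it is talking to.
    return DX10Renderer() if dx10_supported else DX9Renderer()

renderer = create_renderer(dx10_supported=False)
for obj in ["tree", "character"]:
    renderer.draw_object(obj)                # engine-side code is identical
```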
I am looking for a way to have an animated character in my game. I would like to have a running animation loop when he is running, a jump animation when he jumps etc.
Does the awe6 framework provide anything like this? Maybe using spritesheets, or separate images for each frame.
If I have to use my own system, are there any popular libraries that can help do this, and that work well with awe6? And how would I use it with the framework?
awe6 is not something that works out of the box; it is an architecture for you to build on. It is not designed to meet your requirements without work on your part, and integrating another framework or target runtime for what you want requires at least a good understanding of awe6. There is a wiki, however; you can start with http://code.google.com/p/awe6/wiki/QuickStart
There are many popular frameworks out there with a lower barrier to entry, such as haxepunk.com, haxeflixel.com, and https://github.com/aduros/flambe. They all have their own strengths and weaknesses for you to weigh, and all of them can easily handle what you describe.
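Whichever framework you pick, spritesheet animation reduces to the same pattern: advance a frame index by elapsed time and draw the corresponding sub-image. A rough illustration (in Python; all names are made up, and the logic ports directly to Haxe):

```python
# Frame-looping sketch. Frames are sub-images cut from the spritesheet.
class SpriteAnimation:
    def __init__(self, frames, fps=12, loop=True):
        self.frames = frames            # list of sub-images
        self.frame_time = 1.0 / fps     # seconds each frame stays visible
        self.loop = loop
        self.elapsed = 0.0

    def update(self, dt):
        # advance by wall-clock time, so animation speed is
        # independent of the render frame rate
        self.elapsed += dt

    def current_frame(self):
        index = int(self.elapsed / self.frame_time)
        if self.loop:
            index %= len(self.frames)                 # run cycle repeats
        else:
            index = min(index, len(self.frames) - 1)  # hold last jump frame
        return self.frames[index]

# Switching state (run vs. jump) is just swapping which animation
# gets updated and drawn:
animations = {"run": SpriteAnimation(["r0", "r1", "r2"]),
              "jump": SpriteAnimation(["j0", "j1"], loop=False)}
state = "run"
animations[state].update(0.25)
print(animations[state].current_frame())   # -> "r0" (3 frames in, wrapped)
```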
I am designing a little soccer game where the game engine (that computes player moves etc.) runs on a server, and rendering and keyboard/mouse handling are done by the client. For the server (Haskell) I want to use:
Happstack for client-server communication
Yampa / Reactimate for the game engine
Every 20ms or so, the client should send keyboard and mouse events to the server via HTTP GET, receive the current game status (JSON-encoded ball and player positions) and render it. I am thinking about using SDL infrastructure for the game loop, input handling and rendering.
The server basically runs two threads: a Happstack server receives the HTTP GET, puts the keyboard/mouse commands in a queue, reads the current game status from a second queue, and answers the HTTP GET request.
The second thread runs a Yampa game engine, as described in the Yampa Arcade paper: the game engine computes each new round as quickly as possible (no ticks) and puts the result in the render queue.
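To make the data flow concrete, here is a rough sketch of that two-thread design (in Python rather than Haskell, with hypothetical state and helper names):

```python
# Conceptual sketch of the two-thread server. The real thing would be
# Happstack + Yampa in Haskell; this only shows the queue wiring.
import json
import queue
import threading
import time

input_q = queue.Queue()    # keyboard/mouse commands from the client
render_q = queue.Queue()   # game states produced by the engine

def handle_get(commands):
    """What the HTTP handler thread does for each GET."""
    input_q.put(commands)            # enqueue the client's input
    state = render_q.get()           # read the current game status
    return json.dumps(state)         # answer the GET with JSON

def engine_loop():
    """The game-engine thread: compute rounds, publish results."""
    state = {"ball": [0, 0], "players": []}
    while True:
        while not input_q.empty():   # consume any pending input
            state["last_input"] = input_q.get()
        # ... advance the simulation here ...
        render_q.put(dict(state))    # NB: unbounded; if the engine
                                     # outpaces the client, this queue
                                     # grows, which is exactly the Chan
                                     # problem raised below
        time.sleep(0.001)            # tiny yield, only so this sketch
                                     # doesn't spin flat out

threading.Thread(target=engine_loop, daemon=True).start()
print(handle_get({"key": "left"}))   # -> JSON-encoded game state
```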
General question: Does this look like a feasible architecture?
Specific question: How would one design the server-side render queue? Would one use a Chan for this? If the game engine is on average quicker than the "ticking" on the client side, the queue will get longer and longer. How could this be handled with Chan?
Your comments are very welcome!
Could you explain a bit more about the game itself? When I think of a soccer game, I think of a game that requires real-time feedback, where input should be handled instantaneously, and I would expect player input to be sent over the network immediately. 20ms is quite a delay, and I believe it would be noticeable: when the player holds down a key trying to move their character, it will probably feel jerky, the kind of jerkiness experienced with certain types of garbage collectors.
I also do not understand why you would want to use HTTP for such a game (or any game, for that matter); almost all games use UDP, and I would probably go down that route for your type of game. This tutorial looks great for learning about that kind of stuff.
I would also question your choice of network data format: why would you want a format that requires non-trivial parsing/formatting on every receive/send? I'd imagine that sending lots of data frequently would add up to significant time. If I were going to use strings, I would use the simplest format that requires minimal parsing. A related system I once worked on, a multi-process real-time system using sockets to communicate, originally used XML strings as its network data format; it was terribly inefficient, and that was with all the processes on the same machine.
Regarding Yampa and server-side rendering: if we think of FRP in the context of games as a means of implementing game logic and entities, most networked games have both server and client entities. Typically, objects that are renderable are client entities and non-renderable ones are server entities, and I guess some entities have a representation on both sides. In that case you probably want Yampa running on both the server and the client, and I would try to avoid anything related to rendering on the server side; renderable objects should predominantly stick to the client side, I believe. Is there a specific reason why you want render commands coming from the server?
If you only ever want to give out the latest game state, don't use a Chan or a queue; use a SampleVar: http://hackage.haskell.org/packages/archive/base/latest/doc/html/Control-Concurrent-SampleVar.html
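SampleVar itself is Haskell-specific, but its semantics are easy to illustrate: a one-place variable whose writer overwrites any unread value, so the reader always sees the latest state and nothing can back up. A rough Python analogue of that behavior, just to show why the queue can't grow:

```python
# Sketch of SampleVar-like semantics: a one-slot mailbox where the
# writer overwrites any unread value, so the reader always gets the
# latest game state and the backlog can never grow.
import threading

class LatestVar:
    def __init__(self):
        self._cond = threading.Condition()
        self._value = None
        self._filled = False

    def write(self, value):
        with self._cond:
            self._value = value          # overwrite, never enqueue
            self._filled = True
            self._cond.notify()

    def take(self):
        with self._cond:
            while not self._filled:
                self._cond.wait()        # block until a value arrives
            self._filled = False         # empty the slot on read
            return self._value

latest = LatestVar()
latest.write({"tick": 1})
latest.write({"tick": 2})                # overwrites; tick 1 is dropped
print(latest.take())                     # -> {'tick': 2}
```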
In case you're interested, I also wrote a similar server-client based soccer game in Haskell once. You can find the source code at github (server, client). As I was quite a Haskell beginner back then, I ran into some problems regarding threading (and blogged about them) and never really finished the project, but you can at least see from the code how not to do it. (In the end I ditched the server-client architecture and wrote freekick2.) I do think the architecture itself is feasible, though.
However, like snk_kid writes, I don't know why you'd want to use HTTP. To have it running across a network without (noticeable) latency, you'll probably have to use UDP as well as client side prediction (here's some informative material).
I'm trying to write a WinForms application that can demonstrate how two oscillating colors will result in a third color. For this I need to be able to switch between two colors very fast (at more than 50 fps). I'm really hoping to do this in managed code.
Right now I'm drawing two small rectangular bitmaps with solid colors on top of each other. Using GDI+ DrawImage with two in-memory bitmaps in a control with double buffering enabled doesn't cut it and results in flickering/tearing at high speeds. A timer connected to a slider triggers the switching.
Is this a sensible approach?
Will GDI and BitBlt be better?
Does WPF perform better?
What about DirectX or other technologies?
I would really appreciate feedback, TIA!
I have never had good luck with GDI for high-speed graphics, so I used DirectX instead. MS has since dropped support for Managed DirectX, however, so you may need to do this in unmanaged C++.
Just write your controller in C#, then have a very thin layer of managed C++ that just calls to the unmanaged C++ DLL that has DirectX support.
You will need to get exclusive control of the computer so that no other application can really use the CPU; otherwise you will find that your framerate can drop, or at least not be very consistent.
If you use an older version of DirectX, such as DirectX 9.0c, it may still have .NET support; I used that to get a framerate of about 70 frames/second for a music program.
Flicker should be avoidable with a double-buffered approach (and by this I don't mean just setting the rendering control's DoubleBuffered property to True - ironically, this will have no effect on flicker).
Tearing can be dealt with via DirectX, but only if you synchronize your frame rate with your monitor's refresh rate. This may not be possible, especially if you need to achieve a specific frame rate (and it doesn't happen to be your monitor's refresh rate).
I don't think WPF gets around the fundamental tearing problem (but I could be wrong).
This will work with GDI, but you won't be able to control flicker, so it's kind of out of the question. DirectX may be a lot of extra fluff just for showing two non-flickering images. Perhaps SDL will work well enough? It's cross-platform and you can literally code this effect in less than 30 lines of code.
http://cs-sdl.sourceforge.net/index.php/SimpleExample
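To make that concrete, here is roughly what the whole effect looks like in pygame, Python's SDL binding (a sketch only; the linked cs-sdl example is the C# equivalent, and the window size and frame cap here are arbitrary):

```python
# Double-buffered color oscillation with SDL via pygame: fill the back
# buffer, flip once per frame, and let the clock cap the frame rate.
import pygame

pygame.init()
screen = pygame.display.set_mode((200, 200), pygame.DOUBLEBUF)
clock = pygame.time.Clock()
colors = [(255, 0, 0), (0, 0, 255)]   # the two oscillating colors
frame = 0

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill(colors[frame % 2])    # alternate colors every frame
    pygame.display.flip()             # swap back/front buffers: no flicker
    frame += 1
    clock.tick(60)                    # cap at 60 fps (>50 fps requested)

pygame.quit()
```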
Let me explain what I mean by "two-dimensional code editor": imagine using Inkscape or GIMP with a big (say, infinite) canvas. The "T - add text" tool is used to write the code. Additionally, all function definitions would be framed, and links would connect the called functions.
In other words: you have a very large sheet of (virtual) paper where you can write.
It would be really useful. I don't want to write code as a long list of lines, especially now that big monitors are cheaper.
Is such a code editor out there?
What's your opinion? Would you use a 2d code editor?
I've written 3 or 4 visual editors, and my second one worked like this; it was for Java and C++ (never published, though I did use it for some published research work).
I still don't much like writing my code 'as a long list of lines'. My point is, after trying a system like this, I tried a windowed system (class outlines in windows, right-click to open code editors), then a tree-based system...
In the long run (I wrote several apps using all of those), the tree-based system with non-overlapping windows felt at once the most scalable (to different monitor sizes) and, above all, the most productive: dragging the text boxes, links, and/or windows in the first version was necessary work that added little to the programming experience, so it felt wasteful.
If you want to try some of this stuff out, you can google "antegram for java" (Java only), "antegram for web" (JavaScript/PHP/ActionScript), and "ee-ide" (on oogtech.org). I'm not sure if I could dig up the original C++/Java textbox-and-links editor (which could collapse graphs as well and had an infinite canvas, so it's pretty close to what you describe).
I'm not working on this as much as I used to, as few programmers other than me ever seemed to like it, but if you like working the tree way, or feel like adding stuff for your own purposes, ee-ide would be the way to go, as it's nicely modular and easy to extend compared to the rest.
On the commercial side, you can configure Visual Studio to work with UML-like diagrams. I have a feeling it might be a little too heavy (although it's definitely more coding- than UML-oriented), but I'm not sure; I haven't really tried it yet.
This probably doesn't answer your question exactly, but anyway.
Have a look at the NodeBox beta. It is a visual programming environment mostly for creating generative graphics. You can program and edit the nodes with Python code, and connect and reuse them in multiple ways. (Windows and Mac OS)
Also worth mentioning (in terms of concept) is Field. It is for programming performances and arranges bits of code on a stage/timeline. Very interesting but also very confusing. (Mac OS only)
The third one is vvvv. It is used a lot by graphical artists to create real-time 3D visuals. Node-based. (Windows only)
NodeBox and Field are open-source, so if you are looking to create something yourself you can see how it's done there.
Check this out. I came across it today and remembered this question.
Code Bubbles
Developers spend significant time reading and navigating code fragments spread across multiple locations. The file-based nature of contemporary IDEs makes it prohibitively difficult to create and maintain a simultaneous view of such fragments. We propose a novel user interface metaphor for code understanding and maintenance based on collections of lightweight, editable fragments called bubbles, which form concurrently visible working sets.
The essential goal of this project is to make it easier for developers to see many fragments of code (or other information) at once without having to navigate back and forth. Each of these fragments is shown in a bubble.
A bubble is a fully editable and interactive view of a fragment such as a method or collection of member variables. Bubbles, in contrast to windows, have minimal border decoration, avoid clipping their contents by using automatic code reflow and elision, and do not overlap but instead push each other out of the way. Bubbles exist in a large, pannable 2-D virtual space where a cluster of bubbles comprises a concurrently visible working set. Bubbles support a lightweight grouping mechanism, and further support connections between them.
A quantitative user study indicates that Code Bubbles increased performance significantly for two controlled code understanding tasks. A qualitative user study with 23 professional developers indicates substantial interest and enthusiasm for the approach, despite the radical departure from what developers are used to.
http://www.cs.brown.edu/people/acb/codebubbles_site.htm
At one point, LabVIEW had a programming mode like this: you connected program blocks together graphically.
It's been so long since I've used LabVIEW that I don't know if it is still the same.
For me, the MVVM pattern means that there's no code behind the UI controls anyway. The logic is all in a class with properties.
The properties use WPF databinding to update the UI controls. For example, on the form (or window, page, whatever), MySearchButton.IsEnabled is bound to the ViewModel.MySearchButtonIsEnabled property. So the app logic runs in the ViewModel class, just sets its own properties, and the UI updates automatically.
Although this is specific to MS WPF, the pattern actually stems from Smalltalk and is found across the development field as MVP. Without WPF, one would need to write the databinding or 'presenter' logic by hand, which is common.
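For what it's worth, that hand-written binding logic can be sketched in a few lines; here is a language-agnostic illustration in Python (all names hypothetical, not the WPF API):

```python
# Observable-property sketch: the view model exposes properties that
# notify bound listeners, so UI code only subscribes and never polls.
class Observable:
    def __init__(self, value):
        self._value = value
        self._listeners = []

    def bind(self, listener):
        self._listeners.append(listener)
        listener(self._value)            # push the initial value

    def set(self, value):
        if value != self._value:
            self._value = value
            for listener in self._listeners:
                listener(value)          # notify every bound control

class SearchViewModel:
    def __init__(self):
        self.search_enabled = Observable(False)

    def on_text_changed(self, text):
        # app logic only sets its own properties; the UI follows
        self.search_enabled.set(bool(text.strip()))

# The view binds a control property to the view-model property:
vm = SearchViewModel()
vm.search_enabled.bind(lambda enabled: print("button enabled:", enabled))
vm.on_text_changed("soccer")             # -> button enabled: True
```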
This means the UI can be torn off and a new one pasted in really quickly, with little code knowledge needed from the UI guy - who, in an ideal world, is a crack creative guy who drives a 70s Citroen.
So my point is that, although it sounds like a neat innovation, a 2D editor like this would be assisting a coding style that is no longer considered optimal.
I would like to create a cross-platform drawing program. The one requirement for my app is that I have pixel-level precision over the canvas. For instance, I want to write my own line-drawing algorithm rather than rely on someone else's. I do not want any form of anti-aliasing (again, pixel-level control is required). I would like the user's interactions on the screen to be quick and responsive (pending my ability to write fast algorithms).
Ideally, I would like to write this in Python, or perhaps Java as a second choice. The ability to easily make the final app cross-platform is a must. I will submit to different APIs on different OSes if necessary, as long as I can write an abstraction layer around them. Any ideas?
addendum: I need the ability to draw on-screen. Drawing out to a file I've got figured out.
I just this week put together some slides and demo code for doing 2d graphics using OpenGL from python using the library pyglet. Here's a representative post: Pyglet week 2, better vertex throughput (or 3D stuff using the same basic ideas)
It is very fast (relatively speaking, for Python); I have managed to get around 1,000 independently positioned and oriented objects moving around the screen, each with about 50 vertices.
It is very portable; all the code I have written in this environment works on Windows, Linux, and Mac (and even obscure environments like PyPy) without me ever having to think about it.
Some of these posts are very old, with broken links between them. You should be able to find all the relevant posts using the 'graphics' tag.
The Pyglet library for Python might suit your needs. It lets you use OpenGL, a cross-platform graphics API. You can disable anti-aliasing and capture regions of the screen to a buffer or a file. In addition, you can use its event handling, resource loading, and image manipulation systems. You can probably also tie it into PIL (Python Imaging Library), and definitely Cairo, a popular cross-platform vector graphics library.
I mention Pyglet instead of pure PyOpenGL because Pyglet handles a lot of ugly OpenGL stuff transparently with no effort on your part.
A friend and I are currently working on a drawing program using Pyglet. There are a few quirks - for example, OpenGL is always double buffered on OS X, so we have to draw everything twice, once for the current frame and again for the other frame, since they are flipped whenever the display refreshes. You can look at our current progress in this subversion repository. (Splatterboard.py in trunk is the file you'll want to run.) If you're not up on using svn, I would be happy to email you a .zip of the latest source. Feel free to steal code if you look into it.
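To make the "pixel-level, no anti-aliasing" requirement concrete, here is a minimal sketch in the pyglet 1.x style (treat the exact calls as version-bound; later pyglet releases reworked this API):

```python
# Plot individual, un-antialiased pixels as GL points with pyglet 1.x.
import pyglet
from pyglet import gl

window = pyglet.window.Window(320, 240)

def put_pixel(x, y, color):
    # one aliased point per pixel; no smoothing, no blending
    pyglet.graphics.draw(1, gl.GL_POINTS,
                         ('v2i', (x, y)),
                         ('c3B', color))

@window.event
def on_draw():
    window.clear()
    # e.g. a hand-rolled horizontal line, one pixel at a time;
    # your own line algorithm would compute these coordinates
    for x in range(50, 270):
        put_pixel(x, 120, (255, 255, 255))

pyglet.app.run()
```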
If the language choice is open, a Flash file created with Haxe might have a place. Haxe is free and a full, dynamic programming language. Then there's the related Neko, a virtual machine (like Java's, Ruby's, or Parrot) that runs on Mac, Windows, and Linux. Being in some ways a new, improved form of Flash, it can naturally draw stuff. http://haxe.org/
Qt's Canvas and QPainter are very good for this job if you'd like to use C++, and they are cross-platform.
There is a Python binding for Qt, but I've never used it.
As for Java: with SWT, pixel-level manipulation of a canvas is somewhat difficult and slow, so I would not recommend it. On the other hand, Swing's Canvas is pretty good and responsive. I've never used the AWT option, but you probably don't want to go there.
I would recommend wxPython
It's beautifully cross-platform, and you can get per-pixel control; if you change your mind about that, you can use it with libraries such as pyglet or AGG.
You can find some useful examples for just what you are trying to do in the docs and demos download.