Is a drag-and-drop interface simply infeasible in Pygame?

I'm building a simple RPG using Pygame and would like to implement a drag-and-drop inventory. However, even with the consideration of blitting a separate surface, it seems that the entire screen will need to be recalculated every single time the user drags an item around. Would it be best to allow a limited range of motion, or is it simply not feasible to implement such an interface?

Redrawing most or all of the screen every frame is a very normal thing, across all windowing systems. It is rarely an issue, since most objects on screen can be drawn quickly.
To make this practical, organize all of the game objects that have to be drawn so that they can be quickly found and drawn in the right order. This often means that objects of a particular type are grouped into some sort of layer. The drawing code can go through each layer and, for each object in it, ask the object to draw itself. If a particular layer is costly to draw because it has a lot of objects, it can store a prerendered surface and blit that instead.
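As a rough illustration (assuming pygame; the colored rectangles are just stand-ins for real game objects), a layered scene can be redrawn in full every frame with a LayeredUpdates group:

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()

    def make_sprite(color, rect):
        # Placeholder "game object": a flat-colored rectangle.
        sprite = pygame.sprite.Sprite()
        sprite.image = pygame.Surface(rect.size)
        sprite.image.fill(color)
        sprite.rect = rect
        return sprite

    # LayeredUpdates keeps sprites sorted by layer and draws them back-to-front.
    world = pygame.sprite.LayeredUpdates()
    world.add(make_sprite((40, 120, 40), pygame.Rect(0, 400, 640, 80)), layer=0)     # terrain
    world.add(make_sprite((200, 60, 60), pygame.Rect(100, 350, 50, 50)), layer=1)    # actor
    world.add(make_sprite((230, 230, 230), pygame.Rect(20, 20, 200, 120)), layer=2)  # inventory panel

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        screen.fill((80, 80, 160))   # sky / background
        world.draw(screen)           # redraws every object, in layer order
        pygame.display.flip()
        clock.tick(60)
    pygame.quit()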
A really simple hack to get a similar effect is to capture the screen at the start of a drag to a surface, and then blit that every frame instead of the whole game. This obviously only makes sense in a game where dragging also means that the rest of the game is effectively paused.
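A minimal sketch of that hack, again assuming pygame; the dragged "item" is just a colored rectangle standing in for an inventory icon:

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()

    item = pygame.Rect(300, 220, 40, 40)   # stand-in for the dragged inventory icon
    background = None                      # snapshot taken when a drag begins
    dragging = False

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.MOUSEBUTTONDOWN and item.collidepoint(event.pos):
                screen.fill((30, 30, 30))      # redraw the scene once without the item...
                background = screen.copy()     # ...and freeze it as the drag backdrop
                dragging = True
            elif event.type == pygame.MOUSEBUTTONUP:
                dragging = False
            elif event.type == pygame.MOUSEMOTION and dragging:
                item.move_ip(*event.rel)       # follow the mouse

        if dragging:
            screen.blit(background, (0, 0))    # cheap: one full-screen blit of the snapshot
        else:
            screen.fill((30, 30, 30))          # stand-in for the normal full game render
        pygame.draw.rect(screen, (200, 170, 60), item)
        pygame.display.flip()
        clock.tick(60)
    pygame.quit()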

There are many GUI examples on pygame.org, as well as libraries for GUIs.

Related

Render loop vs. explicitly calling update method

I am working on a 3D simulation program using OpenGL which uses a render loop with a fixed framerate to keep the screen updated as the world changes. Standard procedure really, and for a typical video game this is certainly the best approach (I originally took this code from an OpenGL game tutorial). But for me, the 3D scene will not be changing as rapidly and unpredictably as in a computer game. It will be possible for the 3D scene itself to change from time to time, but in general it won't change between render calls (it's more of a visualisation tool for geometric problems). The user will be able to control the position/orientation of the camera, but in general there will be times when the camera won't move for several seconds/minutes (potentially hundreds of render calls), and since the 3D scene is likely to be static for the majority of the time, I wonder if I really need a continuous render loop...?
My thinking is that I will remove the automatic render loop and instead explicitly call my update method when either:
The 3D scene changes (very rare)
The camera moves (somewhat rare)
As I will be using this largely for research purposes, the scene/camera is likely to stay in one state for several minutes at a time and it seems silly to be continuously updating the frame buffer when it's not changing.
My question then is, is this a good approach? All the online tutorials for 3D graphics rendering seem to deal with game design but that's not really my requirement. In other words, what are the pros and cons of using a render loop vs. manually calling "update()" whenever something changes?
Thanks
There's no problem with this approach; in fact many 3D apps, like 3ds Max, use explicit rendering. You just pick what is better for your needs: in most games the scene changes every frame, so it's better to have an update loop, but if you were doing, say, a chess game without an animated UI, you could also render explicitly, only when the scene changes.
For apps with rare changes, like 3ds Max or Blender, it is better to render only on change. This way you save CPU/GPU time but also power, and your PC doesn't heat up so much.
With explicit rendering you can also use some performance tricks, like drawing a simplified scene while the camera moves. Then, when the camera stops, you render the full scene once more in the background and replace the low-quality rendering with the new one.
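The pattern itself is independent of the toolkit. As a rough sketch of the idea, assuming pygame for brevity (any windowing or OpenGL binding with a blocking event wait works the same way), you block on the event queue and redraw only when something actually changed:

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    camera_x = 0

    def render():
        # Stand-in for the real scene draw, offset by the camera position.
        screen.fill((20, 20, 20))
        pygame.draw.circle(screen, (220, 220, 80), (320 + camera_x, 240), 50)
        pygame.display.flip()

    render()                                   # draw the initial frame once
    running = True
    while running:
        event = pygame.event.wait()            # sleeps until something happens
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:     # "camera moved" -> redraw exactly once
            if event.key == pygame.K_LEFT:
                camera_x -= 10
                render()
            elif event.key == pygame.K_RIGHT:
                camera_x += 10
                render()
    pygame.quit()

With a real OpenGL context the body of render() would issue GL calls and swap buffers, but the control flow stays the same: the process sits idle while neither the scene nor the camera changes.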

Best PyQt/PySide widget for Pixel drawing?

I'm teaching myself stuff like the A* algorithm, and working with small-ish matrices. To this end, I want a direct way of seeding the matrices I'm sending into the function, and I thought a small app that allowed limited-color, pixel-by-pixel painting would be great.
Basically this: http://www.youtube.com/watch?v=19h1g22hby8
What is a good widget to use, as both the pixels themselves and the canvas for them? I'm very comfortable with QPushButtons and the like, but I'm not that used to a graphics scene. Is that the way to go?
I'd guess something that has built-in methods for detecting when the mouse is hovering on top, and that changes colors quickly... but that makes it seem like a giant QGridLayout with flat QPushButtons might do the trick, and yet that seems far from optimal.
The QGraphicsView with underlying QGraphicsScene would be perfect for this. I would start by adding a whole bunch of QGraphicsRectItem instances to the QGraphicsScene.
Qt already handles drawing, moving and selection of the QGraphicsRectItem instances. You can catch other events, or change the default handling (for example, disabling moving), by overriding mouseMoveEvent() and others.
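A minimal sketch of that setup, assuming PyQt5 (PySide differs only in the imports); the grid size and colors are arbitrary placeholders:

    import sys
    from PyQt5.QtWidgets import (QApplication, QGraphicsScene, QGraphicsView,
                                 QGraphicsRectItem)
    from PyQt5.QtGui import QBrush, QColor

    CELL = 20    # on-screen size of one "pixel"
    GRID = 16    # 16x16 matrix

    class Cell(QGraphicsRectItem):
        """One matrix cell; clicking toggles between free (white) and wall (black)."""
        def __init__(self, row, col):
            super().__init__(col * CELL, row * CELL, CELL, CELL)
            self.filled = False
            self.setBrush(QBrush(QColor("white")))

        def mousePressEvent(self, event):
            self.filled = not self.filled
            self.setBrush(QBrush(QColor("black" if self.filled else "white")))

    if __name__ == "__main__":
        app = QApplication(sys.argv)
        scene = QGraphicsScene()
        for r in range(GRID):
            for c in range(GRID):
                scene.addItem(Cell(r, c))
        view = QGraphicsView(scene)
        view.show()
        sys.exit(app.exec_())

Reading the grid back out into a matrix for the A* function is then just a walk over scene.items(), checking each cell's filled flag.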

How do I create a real-time rendering window from scratch?

I've been studying 3D graphics on my own for a while now and I want to get a greater understanding of just how everything works. What I would like to do is to create a simple game without using DirectX or OpenGL. I understand most of the math I believe, but the problem I am running up against is I do not know how to get control of the pixels being displayed in a window.
How do I specify what color I want each pixel in my window to be?
I understand I will probably run into issues with buffers and image shearing and probably terrible efficiency problems, but I want to create my own program so that I can see, from the very lowest level available to a high-level language, how the rendering process works. I really have no idea where to start, though. I've figured out how to output BMPs, but I would like to have a running program spitting out 20+ frames per second. How do I accomplish this?
You could pick an environment that allows you to fill an array with values for pixels and display it as a bitmap. That way you come closest to poking RGB values into video memory. WPF, Silverlight, and HTML5/JavaScript can all do this. If you do not make it full screen, these technologies should suffice for now.
In WPF and Silverlight, use the WriteableBitmap.
In HTML5, use the canvas element.
Then it is up to you to implement the logic to draw lines, circles, Bézier curves, and 3D projections.
This is a lot of fun and you will learn a lot.
I'm reading between the lines that you're more interested in having full control over the rendering process from a low level, rather than having a specific interest in how to achieve that on one specific platform.
If that's the case then you will probably get a good bang for your buck looking at a library like SDL, which provides you with a frame buffer that you can render to directly but abstracts away a lot of the platform-specific issues. It has been around for quite a while and there are some good tutorials to give you an idea of whether it's the kind of thing you're looking for - see this tutorial and the subsequent one in the same series, which should be enough to get you up and running.
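If you want to try the idea quickly from Python, here is a minimal sketch assuming pygame (which wraps SDL) and numpy: you fill a plain RGB array yourself every frame and blit it to the window, which is about as close to poking pixel values as you get without talking to the driver.

    import numpy as np
    import pygame

    W, H = 320, 240
    pygame.init()
    screen = pygame.display.set_mode((W, H))
    clock = pygame.time.Clock()
    t = 0

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        # Build the frame as a plain (width, height, 3) array of RGB bytes.
        x = np.arange(W)[:, None]
        y = np.arange(H)[None, :]
        frame = np.zeros((W, H, 3), dtype=np.uint8)
        frame[..., 0] = (x + t) % 256          # red ramp scrolling over time
        frame[..., 1] = (y + t) % 256          # green ramp
        pygame.surfarray.blit_array(screen, frame)

        pygame.display.flip()
        t += 2
        clock.tick(60)                         # aim for ~60 frames per second

    pygame.quit()

From there, drawing a line or a projected triangle is just a matter of deciding which array cells to set.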
You say you want to create some kind of rendering engine, meaning designing your own pipeline and matrix classes, which you then use to transform 3D coordinates into 2D points.
Once you have the 2D points you've been looking for, you can, for instance on Windows, select a brush and draw your triangles while coloring them at the same time.
I do not know why you would need bitmaps, but if you want to practice texturing, for example, you can also do that yourself, although of course on a weak computer this might cut your frames per second significantly.
If your aim is to understand how rendering works at the lowest level, this is without doubt good practice.
Jt Schwinschwiga

Draw from a separate thread with NSOpenGLLayer

I'm working on an app which needs to draw with OpenGL at a refresh rate at least equal to the refresh rate of the monitor. And I need to perform the drawing in a separate thread so that drawing is never locked by intense UI actions.
Currently I'm using an NSOpenGLView in combination with CVDisplayLink and I'm able to achieve 60-80 FPS without any problem.
Since I also need to display some Cocoa controls on top of this view, I tried to subclass NSOpenGLView and make it layer-backed, following Apple's LayerBackedOpenGLView example.
The result isn't satisfactory and I get a lot of artifacts.
Therefore I've solved the problem by using a separate NSWindow to host the Cocoa controls and adding this window as a child window of the main window containing the NSOpenGLView.
It works fine and I'm able to get roughly the same FPS as the initial implementation.
Since I consider this solution quite a dirty hack, I'm looking for an alternative, cleaner way of achieving what I need.
Few days ago I came across NSOpenGLLayer and I thought that it could be used as a viable solution for my problem.
So finally, after all this preamble, here comes my question:
is it possible to draw to an NSOpenGLLayer from a separate thread using a CVDisplayLink callback?
So far I've tried to implement this but I'm not able to draw from the CVDisplayLink callback. I can only -setNeedsDisplay:TRUE on the NSOpenGLLayer from the CVDisplayLink callback and then perform the drawing in -drawInOpenGLContext:pixelFormat:forLayerTime:displayTime: when it gets called automatically by Cocoa. But I suppose that this way I'm drawing from the main thread, right?
After googling for this I've even found this post in which the user claims that under Lion drawing can occur only inside -drawInOpenGLContext:pixelFormat:forLayerTime:displayTime:.
I'm on Snow Leopard at the moment but the app should run flawlessly even on Lion.
Am I missing something?
Yes, it is possible, though not recommended. Call display on the layer from within your CVDisplayLink callback. This will cause canDrawInContext:... to be called and, if it returns YES, drawInContext:... will be called, all on whatever thread called display. To make the rendered image visible on screen, you have to call [CATransaction flush]. This method has been suggested on the Apple mailing list, though it is not completely problem free (the display method of other views may get called on your background thread as well, and not all views support rendering from a background thread).
The recommended way is to make the layer asynchronous and render the OpenGL context on the main thread. If you cannot achieve a good framerate that way, because your main thread is busy elsewhere, it is recommended to instead move everything else (pretty much your whole application logic) to other threads (e.g. using Grand Central Dispatch) and keep only user input and drawing code on the main thread. If your window is very big, you may still not get anything better than 30 FPS (one frame every two screen refreshes), but that comes from the fact that CALayer composition is a rather expensive process and has been optimized for more or less static layers (e.g. layers containing a picture), not for layers updating themselves at 60 FPS.
E.g. if you are writing a 3D game, you are advised not to mix CALayers with OpenGL content at all. If you need Cocoa UI elements, either keep them separated from your OpenGL content (e.g. split the window horizontally into a part that displays only OpenGL and a part that only displays controls) or draw all controls yourself (which is pretty common for games).
Last but not least, the two window approach is not as exotic as you may think, that's how VLC (the video player) draws its controls over the video image (which is also rendered by OpenGL on Mac).

How do I project lines dynamically on to 3D terrain?

I'm working on a game in XNA for Xbox 360. The game has 3D terrain with a collection of static objects that are connected by a graph of links. I want to draw the links connecting the objects as lines projected on to the terrain. I also want to be able to change the colors etc. of links as players move their selection around, though I don't need the links to move. However, I'm running into issues making this work correctly and efficiently.
Some ideas I've had are:
1) Render quads to a separate render target, and use the texture as an overlay on top of the terrain. I currently have this working, generating the texture only for the area currently visible to the camera to minimize aliasing. However, I'm still getting aliasing issues -- the lines look jaggy, and the game chugs frequently when moving the camera. EDIT: it chugs all the time; I just don't have a frame-rate counter on Xbox, so I only notice it when things move.
2) Bake the lines into a texture ahead of time. This could increase performance, but makes the aliasing issue worse. Also, it doesn't let me dynamically change the properties of the lines without much munging.
3) Make geometry that matches the shape of the terrain by tessellating the line-quads over the terrain. This option seems like it could help, but I'm unsure if I should spend time trying it out if there's an easier way.
Is there some magical way to do this that I haven't thought of? Is one of these paths the best when done correctly?
Your 1) is a fairly good solution. You can reduce the jagginess by filtering -- first, make sure to use bilinear sampling when using the overlay. Then, try blurring the overlay after drawing it but before using it; if you choose a proper filter, it will remove the aliasing.
If it's taking too much time to render the overlay, try reducing its resolution. Without the antialiasing filter, that would just make it jaggier, but with a good filter, it might even look better.
I don't know why the game would chug only when moving the camera. Remember, you should have a separate camera for the overlay -- orthographic, and pointing down onto the terrain.
Does XNA have a shadowing library? If so, you could just pretend the lines are shadows.
