Efficiently painting a custom QGraphicsItem - PyQt4

I have a custom QGraphicsItem which has a QPainterPath as member.
As the mouse is dragged, the movement is traced onto this path.
The paint() method in the QGraphicsItem draws the whole path.
The purpose of this setup is that the whole path gets drawn to an image when the mouse is released (the image needs to stay free for other drawing as long as possible, so I can't draw on it directly).
The issue is that, as the path gets longer, updating the graphics on the QGraphicsScene gets visibly slower.
Is there a way to optimize and speed up the paint() method while keeping the path, so it can still be drawn directly to an image?
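For context, here is a minimal PyQt4 sketch of the setup described above; the class name, the fixed drawing area, and the release-time commit are reconstructions from the description, not the asker's actual code:

```python
from PyQt4 import QtCore, QtGui

class PathItem(QtGui.QGraphicsItem):
    """Sketch of the described item: traces the mouse into a QPainterPath."""

    def __init__(self, image, parent=None):
        super(PathItem, self).__init__(parent)
        self.path = QtGui.QPainterPath()  # grows while the mouse is dragged
        self.image = image                # target QImage, only drawn to on release

    def boundingRect(self):
        # Assumption: the item covers the same area as the target image.
        return QtCore.QRectF(self.image.rect())

    def paint(self, painter, option, widget=None):
        # The whole path is repainted on every update; this is the part
        # that gets slower as the path grows.
        painter.drawPath(self.path)

    def mousePressEvent(self, event):
        self.path.moveTo(event.pos())
        self.update()

    def mouseMoveEvent(self, event):
        self.path.lineTo(event.pos())
        self.update()

    def mouseReleaseEvent(self, event):
        # Only now is the whole path committed to the image
        # (assuming item coordinates line up with image coordinates).
        painter = QtGui.QPainter(self.image)
        painter.drawPath(self.path)
        painter.end()
```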

Related

Erase Pixels From Sprite Cocos2d-JS

I'm getting the feeling this won't be possible, but worth asking anyway I guess.
I have a background sprite and a foreground sprite, both are the same size as the window/view.
As the player sprite moves across the screen I want to delete the pixels it touches to reveal the background sprite.
This is not just for display purposes: I want the gaps the player has drawn or "dug" out of the foreground layer to let enemies travel through, or objects fall into, so hit detection against the foreground layer will be needed.
This is quite complex, and maybe Cocos2d-JS is not the best platform for it. If it isn't possible, could you recommend another platform that would make this effect easier to achieve?
I believe it's possible, but I'm not capable of giving you a proper answer.
All I can say is that you'll most likely have two choices:
a. Make a physics polygonal shape and deform it, then use it as a "filter" to display your terrain image (here's a proof-of-concept example in another language using Box2D).
b. Directly manipulate pixels and use a mask for collision detection (here's pixel-perfect collision detection in cocos2d-js; sadly I have no info on modifying pixels).
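To make option (b) a little more concrete, here is a minimal sketch of the mask idea in plain Python rather than Cocos2d-JS; the class, the circular "dig" brush, and all names are purely illustrative assumptions:

```python
class TerrainMask(object):
    """Boolean 'solid' mask kept alongside the foreground sprite."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.solid = [[True] * width for _ in range(height)]

    def dig(self, cx, cy, radius):
        """Clear a circular hole around (cx, cy), mirroring the erased pixels."""
        for y in range(max(0, cy - radius), min(self.height, cy + radius + 1)):
            for x in range(max(0, cx - radius), min(self.width, cx + radius + 1)):
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius * radius:
                    self.solid[y][x] = False

    def blocked(self, x, y):
        """Hit test used by enemies/falling objects: is this pixel still foreground?"""
        if 0 <= x < self.width and 0 <= y < self.height:
            return self.solid[y][x]
        return True  # treat out-of-bounds as solid
```

Whatever erases the foreground pixels on screen would call dig() with the same coordinates, so the visual hole and the collision data stay in sync.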

SDL2 / Surface / Texture / Render

I'm trying to learn SDL2. The main difference (as far as I can see) between the old SDL and SDL2 is that the old SDL had the window represented by its surface, all pictures were surfaces, and all image operations and blits were surface to surface. In SDL2 we have surfaces and textures. If I understand it correctly, surfaces are in RAM and textures are in graphics memory. Is that right?
My goal is to make an object-oriented wrapper for SDL2, because I had a similar thing for SDL. I want to have a window class and a picture class (with a private texture and surface). The window will have its contents represented by an instance of the picture class, and all blits will be picture-to-picture object blits. How should I organize these picture operations:
Should pixel manipulation be done at the surface level?
If I want to copy part of one picture to another without rendering it, should that be done at the surface level?
Should I blit a surface to a texture only when I want to render it on the screen?
Is it better to render everything to one surface and then render that to the window texture, or to render each picture to the window texture separately?
Generally, when should I use a surface and when should I use a texture?
Thank you for your time; all help and suggestions are welcome :)
First I need to clarify some misconceptions: texture-based rendering does not work the way the old surface rendering did. While you can use SDL_Surfaces as source or destination, SDL_Textures are meant to be used as a source for rendering, and the complementary SDL_Renderer is used as the destination. Generally you will have to choose between the old rendering framework, which runs entirely on the CPU, and the new one, which targets the GPU, but mixing is possible.
So, for your questions:
Textures do not provide direct access to their pixels, so pixel manipulation is better done on surfaces.
It depends. It does not hurt to copy between textures if it is not done very often and you want to render the result accelerated later.
When working with textures you always render via the SDL_Renderer, and it is always better to pre-load surfaces into textures.
As I explained in the first paragraph, there is no window texture. You can use either entirely surface-based rendering or entirely texture-based rendering. If you really need both (direct pixel access and accelerated rendering), it is better to do as you said: blit everything to one surface and then upload it to a texture (see the sketch after this list).
Lastly, you should use textures whenever you can. Surfaces are the exception: use them when you need intensive pixel manipulation or have to deal with legacy code.
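As an illustration of the last few points, here is a minimal sketch using PySDL2, whose raw bindings mirror the C API; the window size, fill colour, and two-second delay are arbitrary choices, not anything SDL2 requires:

```python
import sdl2

sdl2.SDL_Init(sdl2.SDL_INIT_VIDEO)
window = sdl2.SDL_CreateWindow(b"demo",
                               sdl2.SDL_WINDOWPOS_CENTERED,
                               sdl2.SDL_WINDOWPOS_CENTERED,
                               640, 480, sdl2.SDL_WINDOW_SHOWN)
renderer = sdl2.SDL_CreateRenderer(window, -1, sdl2.SDL_RENDERER_ACCELERATED)

# CPU side: compose everything (pixel manipulation, surface-to-surface blits)
# on one SDL_Surface in main memory.
surface = sdl2.SDL_CreateRGBSurface(0, 640, 480, 32, 0, 0, 0, 0)
sdl2.SDL_FillRect(surface, None,
                  sdl2.SDL_MapRGB(surface.contents.format, 40, 120, 200))
# ... further SDL_BlitSurface() calls onto `surface` would go here ...

# GPU side: upload the finished surface once, then render the texture.
texture = sdl2.SDL_CreateTextureFromSurface(renderer, surface)
sdl2.SDL_FreeSurface(surface)  # the pixels now live in the texture

sdl2.SDL_RenderClear(renderer)
sdl2.SDL_RenderCopy(renderer, texture, None, None)
sdl2.SDL_RenderPresent(renderer)

sdl2.SDL_Delay(2000)
sdl2.SDL_DestroyTexture(texture)
sdl2.SDL_DestroyRenderer(renderer)
sdl2.SDL_DestroyWindow(window)
sdl2.SDL_Quit()
```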

Draw an image in -drawRect: or load the same image from a file, which is more efficient?

Say I want to display a coordinate graph in a UIView that gets updated over time. Assume I am implementing -drawRect: of the UIView and other methods to update the graph.
Since the coordinate frame in the graph stays the same over time, would it be more efficient to have two UIViews: one (view1) that loads the coordinate frame from an image file, so the frame doesn't need to be redrawn every time the graph updates, and another (view2) that implements -drawRect: and is added as a subview of view1? Or is it better to have only one UIView where the entire graph is drawn in -drawRect:?
The above is just a specific example. What I am wondering is whether it is a good design pattern to split static UI elements from dynamic ones as much as possible where -drawRect: is involved, and whether doing so substantially saves CPU (or GPU) resources.
Any insight would be appreciated. Thanks.
My experience is that it is often best to have an offscreen buffer that you draw to, and then copy some or all of the offscreen buffer to the screen. There are a number of ways to do this, and it depends on the type of view you're using. For example, if you have an NSOpenGLView, you might draw to a texture-backed FBO and then use that texture to draw a textured quad to the screen. If you are working with CGImages for the static part, you might draw to a CGBitmapContext in main memory and then turn that bitmap context into a CGImage to draw to the view.
How you draw to your offscreen buffer will also depend on what you're drawing. For something like a graph, you might draw the background once to your offscreen buffer and then add points to a curve you're drawing over it as time progresses, and just copy the portion you need to screen. For something like a game where all the characters and objects in the scene are moving, you might need to draw everything in the scene on every frame.
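The offscreen-buffer pattern itself is platform-agnostic; here is a rough sketch of it using PyQt4's QImage/QPainter purely for illustration (the CGBitmapContext/FBO details above are the Apple-side equivalents, and every name below is made up):

```python
from PyQt4 import QtCore, QtGui

class GraphBuffer(object):
    def __init__(self, size):
        self.buffer = QtGui.QImage(size, QtGui.QImage.Format_ARGB32)
        self._draw_static_background()   # the coordinate frame is drawn exactly once

    def _draw_static_background(self):
        p = QtGui.QPainter(self.buffer)
        p.fillRect(self.buffer.rect(), QtCore.Qt.white)
        p.drawRect(self.buffer.rect().adjusted(10, 10, -10, -10))  # the "frame"
        p.end()

    def add_point(self, prev_pt, new_pt):
        # Only the new curve segment is drawn into the buffer as time progresses.
        p = QtGui.QPainter(self.buffer)
        p.drawLine(prev_pt, new_pt)
        p.end()

    def paint_to(self, painter, dirty_rect):
        # The on-screen paint handler just copies (part of) the buffer.
        painter.drawImage(dirty_rect, self.buffer, dirty_rect)
```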

Can I remap mouse coordinates when using Gdiplus::SetPageScale with a GDI+ function?

I want to add zoom capability to an app which, at its core, is a spf graph app. Currently I have no zoom, just the ability to select/move and multi-select objects on the graph in the graph window. I started to write my own code to scale the objects and then work out mouse coordinates to map clicks and redraws correctly. I didn't complete this, because I found the Gdiplus::SetPageScale function, which scales the window fine, but I cannot see any GDI+ function I can use to map the mouse click coordinates from world coordinates to page coordinates. I tried TransformPoints(Gdiplus::CoordinateSpaceWorld, ::Gdiplus::CoordinateSpacePage, points, 2), but this does not convert the points and the returned points are (0,0).
So is this even possible with GDI+, or do I need to write this mapping myself? Any advice is appreciated!
You don't want to use Graphics::SetPageScale() in this case. The much more general way is to use the Matrix class instead. Its Scale, Translate and Rotate methods are handy to get the matrix you need. You'll want to use the Scale() method here, possibly Translate() to change the origin.
Before you start drawing, activate the matrix with the Graphics::SetTransform() method. Anything you draw will now automatically be scaled according to the arguments you passed to the Matrix::Scale() method. Mapping a mouse position is now exceedingly simple with the Matrix::TransformPoints() method; the exact same transform that was used while drawing is applied to the mouse coordinates. Even going back from graph coordinates to mouse coordinates is simple: use the Matrix::Invert() method to obtain the inverse transform.
When GDI+ draws, it applies a world transform (which is controlled by Graphics::SetTransform, ScaleTransform, etc.) followed by the page transform (which is controlled by Graphics::SetPageScale and Graphics::SetPageUnit) to transform the points to device coordinates.
So it normally goes like this: World coordinates --[World transform]--> Page coordinates --[Page transform]--> Device coordinates
You can use Graphics::TransformPoints the way you wanted, to map mouse coordinates to world coordinates, but you have to specify Device coordinates as the source space and World coordinates as the destination space.
However, there are good reasons to do it as Hans describes with a Matrix you store separately, most notably that you shouldn't be holding on to your Graphics object for long enough to process mouse input (nor should there be a need to create one then).
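Here is a plain-Python sketch of the bookkeeping both answers describe: keep the zoom/pan transform yourself, apply it while drawing, and apply its inverse to mouse (device) coordinates. The class is only an illustration of the math behind Matrix::Scale/Translate/Invert/TransformPoints, not GDI+ code:

```python
class ViewTransform(object):
    def __init__(self, scale=1.0, offset_x=0.0, offset_y=0.0):
        self.scale = scale          # zoom factor (what Matrix::Scale would hold)
        self.offset_x = offset_x    # pan / origin shift (Matrix::Translate)
        self.offset_y = offset_y

    def world_to_device(self, x, y):
        """Used while drawing: graph coordinates -> pixel coordinates."""
        return x * self.scale + self.offset_x, y * self.scale + self.offset_y

    def device_to_world(self, px, py):
        """Used for mouse clicks: the inverse transform (Matrix::Invert)."""
        return (px - self.offset_x) / self.scale, (py - self.offset_y) / self.scale
```

Storing this transform separately from any Graphics object matches the advice in the last paragraph above: the same numbers drive both drawing and hit testing, without holding on to a Graphics object to process mouse input.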

2d tile based game design, how do I draw a map with a viewport?

I've been struggling with this for a while.
Presently, I have a grid of 100 by 100 tiles, which belong to a Map.
The Map implements IDrawable. I call Draw() and it draws itself at (0, 0), which is fine.
However, I want to expand this to draw essentially a viewport. The player will be drawn on the screen in the middle, and thus I want to display say, 10 tiles in each direction (rather than the entire map).
I'm having trouble thinking up the architecture for this one. I'm in the mindset that things should draw themselves, i.e. I say player1.Draw() and it draws itself. This would have worked before, when it drew the player at x,y on the screen, but with a viewport it will no longer know where to draw itself.
So should the viewport be told to draw, and examine every object in the game and draw those which are visible? Should the map tiles be objects that are subjected to this? Or should the viewport intelligently draw the map by coupling both together?
I'd love to know how typical scrolling tile games accomplish this.
If it matters, I'm using XNA
Edit to add: Could you do something like the HTML rendering approach, where you tell things to draw, they return a graphic of themselves, and then the parent places that graphic in the correct location? I'm thinking: if I had two viewports side by side for split-screen, how would I stop them from drawing outside their edges?
Possible design:
There's a 2D "world" that contains object instances.
"Object instance" is a sprite reference + its coordinates in the world.
When you draw the scene, you request the list of visible objects that exist in a given 2D area, and only then do you draw them (see the tile-range sketch after this list).
With such a design the world can be very large.
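A minimal sketch of that "visible objects in a given 2D area" step for the tile layer, in plain Python rather than XNA; the tile size and the draw_tile callback are assumptions, while the 100x100 grid comes from the question:

```python
TILE_SIZE = 32          # assumed tile size in pixels
MAP_W = MAP_H = 100     # tiles, per the question

def visible_tile_range(camera_x, camera_y, view_w, view_h):
    """camera_x/y is the world-pixel position of the viewport's top-left corner,
    e.g. camera_x = player_x - view_w // 2 to keep the player centred."""
    first_col = max(0, int(camera_x) // TILE_SIZE)
    first_row = max(0, int(camera_y) // TILE_SIZE)
    last_col = min(MAP_W, int(camera_x + view_w) // TILE_SIZE + 1)
    last_row = min(MAP_H, int(camera_y + view_h) // TILE_SIZE + 1)
    return first_col, first_row, last_col, last_row

def draw_map(camera_x, camera_y, view_w, view_h, draw_tile):
    c0, r0, c1, r1 = visible_tile_range(camera_x, camera_y, view_w, view_h)
    for row in range(r0, r1):
        for col in range(c0, c1):
            # Screen position = world position minus the camera offset.
            draw_tile(col, row,
                      col * TILE_SIZE - camera_x,
                      row * TILE_SIZE - camera_y)
```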
I'm in the mindset that things should draw themselves, ie I say player1.Draw() and it draws itself.
Visible things should draw themselves; objects outside of the viewport are not visible.
how would I stop them drawing outside the edges?
Not sure about XNA, but OpenGL has a "scissor test"/"glViewport" and Direct3D 9 has a "SetViewport" method that allows you to use only part of the screen/window for rendering. There are also clip planes and the stencil buffer (using the stencil buffer for 2D clipping is overkill, though). You could also render to a texture and then render that texture. There are many ways to deal with this.
So should the viewport be told to draw, and examine every object in the game and draw those which are visible?
For a large world, you shouldn't examine every object, because that will be slow. You should be able to find the visible objects without testing every one of them. For that you'll need some kind of space partitioning - quad trees (because we are in 2D), k-d trees, etc. This way you should be able to handle a few thousand (or even hundreds of thousands) of objects, as long as you don't see them all at once.
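The answer names quad trees and k-d trees; a uniform grid of buckets is an even simpler form of space partitioning that already works well for a tile-style world. A hypothetical sketch, not XNA code:

```python
from collections import defaultdict

CELL = 256  # bucket size in world pixels (assumption)

class GridIndex(object):
    def __init__(self):
        self.buckets = defaultdict(list)

    def insert(self, obj, x, y):
        # Objects are indexed by position; large or moving objects would need
        # to be inserted into every cell they overlap and re-inserted on move.
        self.buckets[(int(x) // CELL, int(y) // CELL)].append(obj)

    def query(self, x, y, w, h):
        """Return objects whose bucket overlaps the given rectangle (the viewport)."""
        found = []
        for cx in range(int(x) // CELL, int(x + w) // CELL + 1):
            for cy in range(int(y) // CELL, int(y + h) // CELL + 1):
                found.extend(self.buckets[(cx, cy)])
        return found
```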
Should the map tiles be objects that are subjected to this?
If you keep drawing invisible things, FPS will drop.
and they return a graphic of themselves
For a 2D game this may be very slow. Remember the KISS principle.
Some basic ideas, not specifically for XNA:
Objects draw themselves to a "virtual screen" in world coordinates; they don't draw themselves to the screen directly.
Drawable objects get a "graphics context" object which offers a drawing API. The "graphics context" knows about the current viewport bounds and performs the coordinate transformation from world coordinates to screen coordinates for every drawing operation. The graphics context also does the actual drawing to the screen (or to a background screen buffer, if you need double buffering).
When you have many objects outside the visible bounds of your viewport, then as a performance optimization your drawing loop can do an up-front bounds check for your objects and test whether they are completely outside the visible area. If so, there is no need to let them draw themselves.
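Putting the last two ideas together, here is a rough sketch of such a "graphics context" plus the bounds check in the drawing loop (plain Python, with all names being illustrative assumptions):

```python
class ViewportContext(object):
    def __init__(self, view_x, view_y, view_w, view_h):
        self.view_x, self.view_y = view_x, view_y   # viewport origin in world coordinates
        self.view_w, self.view_h = view_w, view_h

    def to_screen(self, world_x, world_y):
        # World -> screen transform applied for every drawing operation.
        return world_x - self.view_x, world_y - self.view_y

    def is_visible(self, x, y, w, h):
        # Axis-aligned bounds check used by the drawing loop's culling pass.
        return (x + w > self.view_x and x < self.view_x + self.view_w and
                y + h > self.view_y and y < self.view_y + self.view_h)

def draw_scene(ctx, objects):
    for obj in objects:
        if ctx.is_visible(obj.x, obj.y, obj.w, obj.h):
            obj.draw(ctx)   # the object draws itself via ctx.to_screen()
```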
