What is the most efficient way to manipulate raw pixels in an SDL2 window in Rust?

I would like to efficiently manipulate (on the CPU) the raw pixels that are displayed in an SDL2 window. There is a lot of information on SDL2 out there, but it's mostly for C++ and the relationship to the Rust bindings is not always clear.
I found out that from the window, I can create a canvas, which can create a texture, whose raw data can be manipulated as expected. That texture must be copied into the canvas to display it.
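Concretely, here is roughly what I am doing at the moment (a minimal sketch using the sdl2 crate; the window size, pixel format and gradient fill are arbitrary choices of mine):

use sdl2::event::Event;
use sdl2::pixels::PixelFormatEnum;

fn main() -> Result<(), String> {
    let sdl = sdl2::init()?;
    let video = sdl.video()?;
    let window = video
        .window("pixels", 800, 600)
        .build()
        .map_err(|e| e.to_string())?;
    let mut canvas = window.into_canvas().build().map_err(|e| e.to_string())?;
    let texture_creator = canvas.texture_creator();
    // A streaming texture is intended for frequent CPU-side updates.
    let mut texture = texture_creator
        .create_texture_streaming(PixelFormatEnum::RGB24, 800, 600)
        .map_err(|e| e.to_string())?;
    let mut event_pump = sdl.event_pump()?;

    'running: loop {
        for event in event_pump.poll_iter() {
            if let Event::Quit { .. } = event {
                break 'running;
            }
        }
        // Mutate the raw pixels; `pitch` is the length of one row in bytes.
        texture.with_lock(None, |buffer: &mut [u8], pitch: usize| {
            for y in 0..600 {
                for x in 0..800 {
                    let offset = y * pitch + x * 3;
                    buffer[offset] = x as u8;     // R
                    buffer[offset + 1] = y as u8; // G
                    buffer[offset + 2] = 0;       // B
                }
            }
        })?;
        canvas.copy(&texture, None, None)?;
        canvas.present();
    }
    Ok(())
}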
That works, but it seems like it should be possible to directly access a pixel buffer holding the contents of the window and mutate that. Is that the case? That would perhaps be a bit faster and would automatically resize with the window. Some other answers led me to believe that a Surface already exposes these raw pixels, but:
These answers always cautioned that using a Surface is inefficient. Is that true even for my use case?
I could not find out how to get the right Surface. There is Window's surface method, but it borrows the EventPump, so how can I use the EventPump in the main loop afterwards?
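For reference, this is how far I got with the Surface route, using a plain Window without a canvas. Scoping the borrow compiles, but I don't know whether this is the intended pattern:

'running: loop {
    for event in event_pump.poll_iter() {
        if let Event::Quit { .. } = event {
            break 'running;
        }
    }
    {
        // The surface ref borrows the EventPump immutably, so this scope
        // has to end before poll_iter borrows the pump mutably again.
        let mut surface = window.surface(&event_pump)?;
        surface.with_lock_mut(|pixels: &mut [u8]| {
            for byte in pixels.iter_mut() {
                *byte = 0x7f; // mutate the window's pixels directly
            }
        });
        surface.update_window()?; // the SDL_UpdateWindowSurface step
    }
}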

Related

XOrg server code that draws the mouse pointer

I'm writing an OpenGL application in Linux using Xlib and GLX. I would like to use the mouse pointer to draw and to drag objects in the window. But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or reading directly from /dev/input/event*
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen? So I could use the same type of code to place the graphics on the screen and have the mouse pointer and the graphic objects positions always perfectly aligned.
Even a pointer to the relevant source file(s) of XOrg would be great.
So my question is: what code is used by the XOrg server to actually draw the mouse pointer on the screen?
If everything goes right, no code at all is drawing the mouse pointer. So-called "hardware cursor" support has been around for decades. Essentially it's what's known as a "sprite engine" in the hardware: it takes a small picture and a pair of values (x, y) saying where that picture shall appear on the screen. In every frame the graphics hardware sends to the display, the cursor image is overlaid at that position.
The graphics system constantly updates the position values based on the input device movements.
Of course there is also graphics hardware that does not have this kind of "sprite engine". But then the trick is to update often, to update fast, and to update late.
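As a toy software model of what such an overlay does (purely illustrative; real hardware does this during scanout, not in a copy loop):

// Each frame, the "sprite engine" combines the framebuffer with a small
// cursor image at position (cx, cy); nothing is ever drawn into the frame.
fn compose(frame: &[u32], fb_w: usize, cursor: &[u32], cur_w: usize,
           cx: usize, cy: usize, out: &mut Vec<u32>) {
    out.clear();
    out.extend_from_slice(frame);
    let fb_h = frame.len() / fb_w;
    let cur_h = cursor.len() / cur_w;
    for y in 0..cur_h {
        for x in 0..cur_w {
            let (fx, fy) = (cx + x, cy + y);
            if fx >= fb_w || fy >= fb_h {
                continue; // clip the cursor at the screen edges
            }
            let px = cursor[y * cur_w + x];
            if px >> 24 != 0 {
                out[fy * fb_w + fx] = px; // non-transparent cursor pixel wins
            }
        }
    }
}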
But no matter what method I use to draw or move graphic objects, there is always a very noticeable lag between the actual mouse pointer position (as drawn by the X server) and the positions of the objects I draw with the pointer coordinates I get from Xlib (XQueryPointer or the X events) or reading directly from /dev/input/event*
Yes, that happens if you read it and integrate it into your image at the wrong time. The key ingredient for minimizing latency is to draw as late as possible, integrating as much input for as long as possible before you absolutely have to draw things to meet the V-Sync deadline. And the most important trick is not to draw what has already happened, but to draw what the state of affairs will be at the moment the picture appears on screen. I.e. you have to predict the input for the next couple of frames drawn and use that.
The Kalman filter has become the de-facto standard method for this.
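To sketch the idea, a minimal per-axis constant-velocity Kalman filter could look like this (illustrative only; the noise parameters q and r are made-up tuning knobs):

// One filter per axis; state = (position, velocity), constant-velocity model.
struct Kalman1D {
    p: f64,             // estimated position
    v: f64,             // estimated velocity
    cov: [[f64; 2]; 2], // state covariance
    q: f64,             // process noise (how much we trust the model)
    r: f64,             // measurement noise (how much we trust the samples)
}

impl Kalman1D {
    fn new(q: f64, r: f64) -> Self {
        Kalman1D { p: 0.0, v: 0.0, cov: [[1.0, 0.0], [0.0, 1.0]], q, r }
    }

    // Feed one pointer sample z taken dt seconds after the previous one.
    fn update(&mut self, z: f64, dt: f64) {
        // Predict: advance state and covariance by dt.
        self.p += self.v * dt;
        let [[p00, p01], [p10, p11]] = self.cov;
        let c00 = p00 + dt * (p01 + p10) + dt * dt * p11 + self.q;
        let c01 = p01 + dt * p11;
        let c10 = p10 + dt * p11;
        let c11 = p11 + self.q;
        // Update: blend in the measurement.
        let s = c00 + self.r;
        let (k0, k1) = (c00 / s, c10 / s);
        let y = z - self.p;
        self.p += k0 * y;
        self.v += k1 * y;
        self.cov = [
            [(1.0 - k0) * c00, (1.0 - k0) * c01],
            [c10 - k1 * c00, c11 - k1 * c01],
        ];
    }

    // Extrapolate where the pointer will be `lookahead` seconds from now.
    fn predict(&self, lookahead: f64) -> f64 {
        self.p + self.v * lookahead
    }
}

With one filter per axis, you feed in pointer samples as they arrive and draw at predict(lookahead), where lookahead is your estimate of the time until the frame actually appears on screen.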

How can I show an image from raw pixel data in GTK using Haskell?

I would like to create an image in Haskell using the Rasterific library and then display that image in a GTK window; the Rasterific library lets me generate an RGBA-formatted image with 32-bit pixel depth, but I am having trouble figuring out how I can take this raw image and display it in a window or DrawingArea or whatever in GTK.
(I've spent a lot of time looking through the documentation, but I've been having a hard time seeing how to fit the parts together, especially since the Haskell documentation is often non-existent and at some point the cairo library gets involved in a way that's not entirely clear to me.)
I wrote a package called AC-EasyRaster-GTK for this exact purpose.
It's a wrapper around gtk2hs. That library gives all the necessary parts, but it's not actually all that easy to figure out. So I wrote a library so I wouldn't have to keep looking this stuff up!
ib_new gives you a new image buffer, ib_write_pixel lets you write a pixel, and ib_display will start the GTK event loop, display the bitmap in a window, and block the calling thread until the user clicks close. Sadly, there's no easy way to chuck an entire array at GTK. (It demands a particular pixel order, which varies by platform...)
I'm sure there's a better way to do this, but I'm not finding it either. You can iterate over all the pixels in the original image using something like forM_ (range ((0,0),(w,h))) and draw them onto a Cairo surface using something like this (the Cairo calls should be right, but I'm just guessing about the Rasterific functions):
drawPixel color x y = do
  -- set the source colour, then fill a 1x1 rectangle at (x, y)
  setSourceRGBA (red color) (green color) (blue color) (alpha color)
  rectangle x y 1 1
  fill

Graphical transformation handles in Haskell

I am experimenting with creating GUI and graphics based applications in Haskell using gtk2hs and cairo. Currently I am working on a program where a user can create and manipulate simple geometric shapes on screen.
The three manipulations I want the user to be able to perform are translation, rotation and scaling. The ideal implementation would have the transformation handles present in most image manipulation programs such as Photoshop:
(i.e. where the object can be translated by dragging somewhere inside it, scaled by dragging the appropriate white box, and rotated by clicking and dragging in the direction of rotation outside of the object's box)
I cannot find a simple way of doing this "out-of-the-box" in either the gtk or cairo documentation, and have been unable to find a suitable library by searching on google. Does anyone know of a Haskell API which would allow me to manipulate graphics in this way or, failing that, know how I would go about implementing my own version of this type of functionality in Haskell?
There are no built-in widgets for this; you'll have to build it yourself by drawing all the appropriate elements (e.g. the actual shape, a bounding box or similar, rectangles on the corners and edges of the bounding box, etc.) and handling mouse events by checking whether they fall on these elements or not. It should not be difficult, though it may be a bit tedious; see the sketch below for the core hit-testing.
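The mouse-down classification is language-agnostic; a rough sketch (in Rust for concreteness; the names, handle size and rotation margin are all arbitrary):

// What a click on the selection should do, based on where it lands.
enum DragMode { Translate, Scale(usize), Rotate, None }

struct BBox { x: f64, y: f64, w: f64, h: f64 }

const HANDLE: f64 = 4.0;  // half-size of a corner handle, in pixels
const MARGIN: f64 = 16.0; // rotation ring just outside the box

fn hit_test(b: &BBox, mx: f64, my: f64) -> DragMode {
    // Corner handles first: they sit on top of everything else.
    let corners = [(b.x, b.y), (b.x + b.w, b.y),
                   (b.x, b.y + b.h), (b.x + b.w, b.y + b.h)];
    for (i, (cx, cy)) in corners.iter().copied().enumerate() {
        if (mx - cx).abs() <= HANDLE && (my - cy).abs() <= HANDLE {
            return DragMode::Scale(i);
        }
    }
    // Inside the box: translate.
    if mx >= b.x && mx <= b.x + b.w && my >= b.y && my <= b.y + b.h {
        return DragMode::Translate;
    }
    // A ring just outside the box: rotate.
    if mx >= b.x - MARGIN && mx <= b.x + b.w + MARGIN
        && my >= b.y - MARGIN && my <= b.y + b.h + MARGIN {
        return DragMode::Rotate;
    }
    DragMode::None
}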

Z-fighting Direct3D9, only with dynamic buffer

I lock and fill a vertex buffer every frame in Direct3D9 with data from my blendshape code. My shading uses two steps, so I render once with one shader, then draw an additive blend with my other shader.
For reasons beyond me, the data in my vertex buffer is (apparently) slightly different between those two draw calls, because I get flickering z-fighting where the second pass sometimes renders 'behind' the first.
This is all done in one thread, and the buffer is unlocked a long time before the render calls. Additionally, no changes to any shader instruction take place, so the data should be exactly the same in both calls. If the blendshape happens not to change, no z-fighting takes place.
For now I 'push' the depth a little in my shader, but this is a very inelegant solution.
Why might this data be changed? Why may DirectX make changes to the data in my buffer after I unlock it? Can I force it not to change it?
1st: Are you sure the data is really changed by D3D, or is this just an assumption? I'm sure D3D doesn't change your data.
2nd: As you said, you have two different shaders drawing your geometry. They may have different transformation operations, or the transformations in your shaders may compile differently because of optimization; that's why your transformed vertices may differ slightly (but enough for z-fighting). I suggest using two passes in one shader/technique.
Or, if you still want to use two shaders, you had better use shared code for the transformation and other identical operations.
I can assure you that the D3D runtime will not change any data you pass in via a vertex buffer; I did the same thing as you when rendering two-layer terrain, with no z-fighting. But there are indeed some render states that will change things while rasterizing the triangles into pixels: D3DRS_DEPTHBIAS and D3DRS_SLOPESCALEDEPTHBIAS in D3D9, or the equivalent values in the D3D10_RASTERIZER_DESC structure. You should check whether these render states are being changed.
You also need to be sure that all of the transform matrices and other constants involved in the position calculation in the shader are precisely equal, otherwise there will be z-fighting.
I suggest you use a graphics debugging tool to check this: PIX, or PerfHUD or Nsight if you are using an NVIDIA card.
I'm sorry for my poor English; it must be hard to understand. But I hope this helps you, thanks.

Advanced Text Rendering with Direct3D

Let me describe the "battlefield" of my task:
Multi-room audio/video chat with more than 1M users;
Custom Direct3D renderer;
What I need to implement is a TextOverVideo feature. The text itself goes over the network and is to be rendered on the recipient side with the Direct3D renderer. AFAIK, it is common in game development to create your own texture with letters/numbers and draw those items. Because our application must support many languages, we ought to use something standard; that's why I've been working with the ID3DXFont interface, but I've found some unsatisfying limitations.
What I've faced is a lack of scalability. E.g. if the user resizes the video window, I have to re-create the D3DXFont with a new D3DXFONT_DESC while he's doing so. I think that is unacceptable.
That is why the ONLY solution I see (given my skills) is to somehow render the text to a texture and then draw a sprite with scaling, translation, etc.
So, I'm not sure I'm going in the right direction. Please help with advice, experience, literature, sources...
Your question is a bit unclear. As I understand it, you want an easily scalable font.
I think it is unacceptable
As far as I know, this is standard behavior for fonts - even for system fonts. They aren't supposed to be easily scalable.
Possible solutions:
Use ID3DXRenderTarget to render the text onto a texture. The font will be filtered when you scale it up too much; some people will think that looks ugly.
Write a custom library that supports vector fonts, i.e. one that can extract the outline from a font and build text geometry from it. It will be MUCH slower than ID3DXFont (which is already slower than traditional "texture" fonts), but the text will scale cleanly. With this approach you are very likely to get visible artifacts ("noise") for small text, so I wouldn't use it unless you want huge letters (40+ pixels). The FreeType library has functions for processing font outlines.
Or you could try D3DXCreateText. This creates a 3D text mesh for ONE string, and it won't be fast at all.
I'd forget about it. As long as the user is happy with overall performance, improving the font rendering routines (so their behavior looks nicer to you) is not worth the effort.
--EDIT--
About ID3DXRenderTarget.
Even if you use ID3DXRenderTarget, you'll still need ID3DXFont; i.e. you use ID3DXFont to render the text onto a texture, and then use that texture to blit the text onto the screen.
Because you said that performance is critical, you can delay the creation of the new ID3DXFont until the user stops resizing the video. I.e. while the user is resizing, you keep using the old font, but upscale it via the texture; there will be filtering, of course. Once the user stops resizing, you create the new font when you have time, along the lines of the debounce sketch below (you could probably do that in a separate thread, but I'm not sure about that). Or you could simply always render the text at the same resolution as the video; that way you won't have to worry about resizing it (it will still be filtered, along with the video). Some video players work this way.
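A hypothetical debounce for that (the 250 ms quiet period is an arbitrary choice):

use std::time::{Duration, Instant};

// Rebuild the font only once resizing has been quiet for a while;
// until then, keep drawing with the old, upscaled glyph texture.
struct FontRebuilder {
    last_resize: Option<Instant>,
    pending_size: u32,
}

impl FontRebuilder {
    const QUIET: Duration = Duration::from_millis(250);

    fn on_resize(&mut self, new_size: u32) {
        self.last_resize = Some(Instant::now());
        self.pending_size = new_size;
    }

    // Call once per frame; returns the size to rebuild at, if it's time.
    fn poll(&mut self) -> Option<u32> {
        match self.last_resize {
            Some(t) if t.elapsed() >= Self::QUIET => {
                self.last_resize = None;
                Some(self.pending_size)
            }
            _ => None, // still resizing, or nothing pending
        }
    }
}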
A few more things about ID3DXFont. There is one problem with it: it is slow in situations where you need a lot of text (but you still need it, because it supports Unicode, and writing a texture font with Unicode support is a pain). Last time I worked with it, I optimized things by caching commonly used strings in textures: any string that was drawn for more than 3 frames in a row was rendered onto a D3DFMT_A8R8G8B8 texture/render target, and from then on I copied that string from the texture instead of using ID3DXFont; strings that weren't rendered for a while were removed from the texture. That gave a serious boost. This solution, however, is tricky: monitoring empty space in the texture, removing unused strings, and defragmenting the texture isn't exactly trivial (there is nothing exceptionally complicated, but it is easy to make a mistake). You won't need such a complicated system unless your screen is literally covered with text; the bookkeeping boils down to something like the sketch below.
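A hypothetical sketch of that bookkeeping (the bake/evict thresholds are made up, and the actual texture atlas management, which is the hard part, is omitted):

use std::collections::HashMap;

struct Entry {
    consecutive_frames: u32, // frames in a row this string was drawn
    last_used_frame: u64,    // last frame the string was drawn at all
    cached: bool,            // true once the string lives in the atlas
}

struct TextCache {
    entries: HashMap<String, Entry>,
    frame: u64,
}

impl TextCache {
    const BAKE_AFTER: u32 = 3;    // frames in a row before baking
    const EVICT_AFTER: u64 = 120; // frames of disuse before eviction

    fn draw(&mut self, text: &str) {
        let frame = self.frame;
        let e = self.entries.entry(text.to_string()).or_insert(Entry {
            consecutive_frames: 0,
            last_used_frame: frame,
            cached: false,
        });
        // Count consecutive frames; reset if the string skipped a frame.
        e.consecutive_frames =
            if frame == e.last_used_frame + 1 { e.consecutive_frames + 1 } else { 1 };
        e.last_used_frame = frame;
        if e.cached {
            // blit the pre-rendered string from the atlas (not shown)
        } else if e.consecutive_frames >= Self::BAKE_AFTER {
            e.cached = true; // render the string into the atlas here
        } else {
            // slow path: draw with the font interface directly
        }
    }

    fn end_frame(&mut self) {
        self.frame += 1;
        let frame = self.frame;
        // Evict strings that haven't been drawn recently.
        self.entries.retain(|_, e| frame - e.last_used_frame < Self::EVICT_AFTER);
    }
}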
ID3DXFont fonts are flat, always parallel to the screen. D3DXCreateText produces meshes that can be scaled and rotated.
Texture fonts are fuzzy and don't look very clear. Not good for an app that uses lots of small text.
I am writing an app that can create 500 text meshes, each mesh averaging 3,000-5,000 vertices. The text meshes are created once, then are static. I get 700 fps on a GeForce 8800.
