How can I show an image from raw pixel data in GTK using Haskell?

I would like to create an image in Haskell using the Rasterific library and then display that image in a GTK window. Rasterific lets me generate an RGBA-formatted, 32-bit-pixel-depth image, but I am having trouble figuring out how to take this raw image and display it in a window or DrawingArea or whatever in GTK.
(I've spent a lot of time looking through the documentation, but I've had a hard time seeing how the parts fit together, especially since the Haskell documentation is often non-existent and at some point the Cairo library gets involved in a way that's not entirely clear to me.)

I wrote a package called AC-EasyRaster-GTK for this exact purpose.
It's a wrapper around gtk2hs, which provides all the necessary parts but isn't actually all that easy to figure out. So I wrote a library so I wouldn't have to keep looking this stuff up!
ib_new gives you a new image buffer, ib_write_pixel lets you write a pixel, and ib_display starts the GTK event loop, displays the bitmap in a window, and blocks the calling thread until the user clicks close. Sadly, there's no easy way to hand an entire array to GTK in one go. (It demands a particular pixel order, which varies by platform...)
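For what it's worth, if you do want to hand Cairo a whole buffer yourself (an image surface in its ARGB32 format), the pixel-order problem is mostly premultiplication plus byte order: ARGB32 stores premultiplied alpha in a native-endian 32-bit word, so on little-endian machines the bytes land as B, G, R, A. A pure sketch of the per-pixel conversion (the function name is mine, not from any library):

```haskell
import Data.Word (Word8, Word32)
import Data.Bits (shiftL, (.|.))

-- Convert one straight-alpha RGBA pixel (as Rasterific produces) into the
-- premultiplied ARGB32 value Cairo expects.  The result is a native-endian
-- 32-bit word: alpha in the top byte, then red, green, blue.
rgbaToArgb32 :: Word8 -> Word8 -> Word8 -> Word8 -> Word32
rgbaToArgb32 r g b a =
      (fromIntegral a `shiftL` 24)
  .|. (premul r `shiftL` 16)
  .|. (premul g `shiftL` 8)
  .|.  premul b
  where
    -- premultiply a channel by alpha, with rounding: round (c * a / 255)
    premul :: Word8 -> Word32
    premul c = (fromIntegral c * fromIntegral a + 127) `div` 255
```

For example, fully opaque red comes out as 0xFFFF0000, and anything with alpha 0 collapses to 0 because of the premultiplication.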

I'm sure there's a better way to do this, but I'm not finding it either. You can iterate over all the pixels in the original image using something like forM_ (range ((0,0),(w-1,h-1))) and draw each one onto a Cairo surface with something like this (the Cairo calls should be right, though I'm guessing at the Rasterific accessors; note that it must be fill, not paint, since paint ignores the path and covers the whole surface):
drawPixel color x y = do
  setSourceRGBA (red color) (green color) (blue color) (alpha color)
  rectangle (fromIntegral x) (fromIntegral y) 1 1
  fill

Related

Moving a graphic over a background image

I have a new project but I'm really not sure where to start, other than I think GNAT Ada would be good. I taught myself Ada three years ago and I have managed some static graphics with GNAT but this is different.
Please bear with me, all I need is a pointer or two towards where I might start, I'm not asking for a solution. My history is in back-end languages that are now mostly obsolete, so graphics is still a bit of a challenge.
So, the project:
With a static background image (a photograph) - and a moveable line with an adjustable cursor somewhere between the ends of the line. I need to rotate the line and adjust the length, as well as to move it around the screen and slide the cursor along the line; I have no problem calculating the positions of each element of the line. Once in place, I need to report on the position of the cursor relative to the overall length of the line. I can probably handle the reporting with what I already know but I have no clue as to how to create a graphic that I can slide around over another image. In the past I have failed to detect mouse events in GNAT Ada and I am sure I will need to get on top of that - in fact if I could, I would probably manage to control the line but doing it over an existing image is beyond me.
If I am wrong to choose GNAT Ada for this, please suggest an alternative.
I have looked in Stackoverflow for anything similar but have found answers only for Blackberry and Java, neither of which seems relevant.
For background, this will be a useful means of measuring relative lengths of the features of insect bodies from photographs, hopefully to set up some definitive identification guides for closely-related species.
With a static background image (a photograph)
So first you need a window to put your interface in. You can get this from any GUI framework (link as given by trashgod in the comments).
and a moveable line with an adjustable cursor somewhere between the ends of the line. I need to rotate the line and adjust the length, as well as to move it around the screen and slide the cursor along the line; I have no problem calculating the positions of each element of the line.
These are affine transformations, which are commonly employed in low-level graphics rendering. You can, as Zerte suggested, employ OpenGL; however, modern OpenGL has a steep learning curve for beginners.
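The endpoint arithmetic behind those transformations is small enough to sketch directly. Here it is in Haskell (matching the other questions on this page; the formulas translate line-for-line into Ada), with function names of my own invention:

```haskell
-- Rotate a point around a centre (cx, cy) by theta radians.
-- Applying this to both endpoints rotates the whole line.
rotateAround :: Double -> (Double, Double) -> (Double, Double) -> (Double, Double)
rotateAround theta (cx, cy) (x, y) =
  ( cx + dx * cos theta - dy * sin theta
  , cy + dx * sin theta + dy * cos theta )
  where
    dx = x - cx
    dy = y - cy

-- Scale a point about a centre by factor s; applied to both endpoints
-- (with the centre at the line's midpoint), this adjusts the line's length.
scaleAround :: Double -> (Double, Double) -> (Double, Double) -> (Double, Double)
scaleAround s (cx, cy) (x, y) = (cx + s * (x - cx), cy + s * (y - cy))
```

Translation is then just adding an offset to both endpoints, and redrawing the background photograph plus the transformed line on each change gives the "sliding over an image" effect.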
GtkAda includes a binding to the Cairo graphics library which supports such transformations, so you can create a window with GtkAda with a Cairo surface and then render your image & line on it. Cairo does have a learning curve and I never used the Ada binding, so I cannot really give an opinion about how complex this will be.
Another library that fully supports what you want to do is SDL, which has Ada bindings. The difference from GtkAda is that SDL is a pure graphics drawing library, so you need to "draw" any interactive controls yourself. On the other hand, setting up a window via SDL and drawing things will be somewhat simpler than doing it via GtkAda & Cairo.
SFML, which has also been mentioned in the comments, is on the same level as SDL. I do not know it well enough to give a more informed opinion, but what I said about SDL will most probably also apply to SFML.
In the past I have failed to detect mouse events in GNAT Ada and I am sure I will need to get on top of that - in fact if I could, I would probably manage to control the line but doing it over an existing image is beyond me.
HID event processing is handled by whatever GUI library you use. If you use SDL, you'll get mouse events from SDL; if you use GTK, you'll get mouse events from GTK, and so on.
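Once you are receiving mouse events, the "position of the cursor relative to the overall length of the line" that the question asks to report is a one-line projection. A sketch (again in Haskell to match the rest of this page; the formula carries over to Ada unchanged, and the name is mine):

```haskell
-- Fraction of the way along the segment a -> b at which point p falls,
-- obtained by projecting p onto the segment: 0.0 is at a, 1.0 is at b.
cursorFraction :: (Double, Double) -> (Double, Double) -> (Double, Double) -> Double
cursorFraction (ax, ay) (bx, by) (px, py) =
  ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
  where
    dx = bx - ax
    dy = by - ay
```

Clamping the result to [0, 1] keeps the cursor between the endpoints while the user drags it.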

What is the most efficient way to manipulate raw pixels in an SDL2 window in Rust?

I would like to efficiently manipulate (on the CPU) the raw pixels that are displayed in an SDL2 window. There is a lot of information on SDL2 out there, but it's mostly for C++ and the relationship to the Rust bindings is not always clear.
I found out that from the window, I can create a canvas, which can create a texture, whose raw data can be manipulated as expected. That texture must be copied into the canvas to display it.
That works, but it seems like it should be possible to directly access a pixel buffer with the contents of the window and mutate that. Is that the case? That would perhaps be a bit faster and would automatically resize with the window. Some other answers led me to believe that a Surface already exposes these raw pixels, but:
- those answers always cautioned that using a Surface is inefficient; is that true even for my use case?
- I could not find out how to get the right Surface. There is Window's surface method, but it borrows an EventPump, so how can I use the EventPump in the main loop afterwards?

Graphical transformation handles in Haskell

I am experimenting with creating GUI and graphics based applications in Haskell using gtk2hs and cairo. Currently I am working on a program where a user can create and manipulate simple geometric shapes on screen.
The three manipulations I want the user to be able to do are: translation, rotation and scaling. The ideal implementation of this would have the transformation handles present in most image manipulation programs such as photoshop:
(i.e. where the object can be translated by dragging somewhere inside it, scaled by dragging the appropriate white box, and rotated by clicking and dragging in the direction of rotation outside of the object's box)
I cannot find a simple way of doing this "out-of-the-box" in either the gtk or cairo documentation, and have been unable to find a suitable library by searching on google. Does anyone know of a Haskell API which would allow me to manipulate graphics in this way or, failing that, know how I would go about implementing my own version of this type of functionality in Haskell?
There are no built-in widgets for this; you'll have to build it yourself by drawing all the appropriate elements (e.g. the actual shape, a bounding box or similar, rectangles on the corners and edges of the bounding box, etc.) and handling mouse events by checking whether they fall on these elements or not. It should not be difficult... though it may be a bit tedious.
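A minimal sketch of the hit-testing side, assuming square scale handles centred on the corners of the bounding box and a fixed-width "rotation ring" outside it. All the names and sizes here are my own choices, not a gtk2hs or Cairo API:

```haskell
import Data.Maybe (listToMaybe)

data Hit = ScaleHandle Int | TranslateArea | RotateArea | Outside
  deriving (Eq, Show)

-- Classify a click at (mx, my) against a bounding box (x, y, w, h).
-- Scale handles are hs-pixel squares centred on the four corners; clicks
-- inside the box translate; clicks within `ring` pixels outside it rotate.
hitTest :: Double -> (Double, Double, Double, Double) -> (Double, Double) -> Hit
hitTest hs (x, y, w, h) (mx, my)
  | Just i <- cornerHit = ScaleHandle i
  | inside              = TranslateArea
  | nearBox             = RotateArea
  | otherwise           = Outside
  where
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    -- index of the first corner handle containing the click, if any
    cornerHit = listToMaybe
      [ i | (i, (cx, cy)) <- zip [0 ..] corners
          , abs (mx - cx) <= hs / 2 && abs (my - cy) <= hs / 2 ]
    inside  = mx >= x && mx <= x + w && my >= y && my <= y + h
    ring    = 12  -- width of the rotation ring, in pixels
    nearBox = mx >= x - ring && mx <= x + w + ring
           && my >= y - ring && my <= y + h + ring
```

Wiring this into a gtk2hs motionNotifyEvent / buttonPressEvent handler then reduces the whole interaction to: hit-test on press, apply the corresponding transformation on drag, redraw.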

With the Haskell graphics library Gloss, is it possible to mask a picture to only display in a certain extent (i.e. within a rectangle)?

I have been using the Gloss Library for some game programming, and have got to the point where I am having the most difficulty laying out different elements on the screen. I was wondering whether it was possible to limit a Picture type to display only a certain rectangular area of the screen. The library already has the concept of a rectangular area with the Extent type, but there does not appear to be any way to 'subtract' from pictures.
If there was a way of doing this then it seems like creating a View type or similar that takes over responsibility for a certain area of the screen — which can also contain additional views, and with suitable coordinate substitutions between them etc — would be an achievable and sensible goal. But without a way to limit drawing areas it doesn't seem like this would be possible within the Gloss framework.
It seems that clipping is not supported in Gloss.
Nevertheless, the recursive drawing of views, each with its own relative coordinate system, still seems a viable and useful goal, and I am part way through writing code for this now.
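For what it's worth, the coordinate substitution between nested views is just a translate-and-scale pair. A sketch, assuming each view is positioned by a centre point and a uniform scale (the View type and names are mine, not from Gloss):

```haskell
-- A view occupies part of its parent's coordinate space: a centre in
-- parent coordinates plus a uniform scale factor.
data View = View { viewCentre :: (Float, Float), viewScale :: Float }

-- Map a point from parent coordinates into the view's local coordinates
-- (the substitution to apply to mouse events before passing them down).
toLocal :: View -> (Float, Float) -> (Float, Float)
toLocal (View (cx, cy) s) (x, y) = ((x - cx) / s, (y - cy) / s)

-- The inverse, for placing local geometry back in the parent; in Gloss
-- terms this corresponds to drawing the child's Picture under
-- Translate cx cy . Scale s s.
fromLocal :: View -> (Float, Float) -> (Float, Float)
fromLocal (View (cx, cy) s) (x, y) = (cx + s * x, cy + s * y)
```

Composing toLocal down a chain of nested views gives each child its own coordinate system; only the clipping to the view's rectangle is missing, which is the part Gloss does not provide.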

OpenGLUT: Texture for a sphere: what's the correct way to do it?

I need to apply a texture to a sphere, and I want to learn a good way to do it: the methods I've read about don't seem to take advantage of GLUT, they only use GL features. Maybe I'm trying something that isn't available, but I want to call glutSolidSphere and have the texture coordinates generated for me. I have loaded the .bmp, but the image doesn't rotate when the solid sphere rotates; instead, the image (a face) always faces the screen, as if it were "floating" on the surface. How can I stick it to the solid? Thanks a bundle.
OK, I got it: instead of calling glutSolidSphere, use gluSphere and call gluQuadricTexture on the quadric. Then the face stays in its position, and you can rotate the camera view (with gluLookAt) around it and see the texture wrapping the solid.
