I am experimenting with creating GUI- and graphics-based applications in Haskell using gtk2hs and cairo. Currently I am working on a program where a user can create and manipulate simple geometric shapes on screen.
The three manipulations I want the user to be able to perform are translation, rotation and scaling. The ideal implementation of this would have the transformation handles present in most image manipulation programs such as Photoshop:
(i.e. where the object can be translated by dragging somewhere inside it, scaled by dragging the appropriate white box, and rotated by clicking and dragging in the direction of rotation outside of the object's box)
I cannot find a simple way of doing this "out of the box" in either the gtk or cairo documentation, and I have been unable to find a suitable library by searching on Google. Does anyone know of a Haskell API which would allow me to manipulate graphics in this way or, failing that, know how I would go about implementing my own version of this type of functionality in Haskell?
There are no built-in widgets for this; you'll have to build it yourself by drawing all the appropriate elements (e.g. the actual shape, a bounding box or similar, rectangles on the corners and edges of the bounding box, etc.) and handling mouse events by checking whether the events fall on these elements or not. It should not be difficult... though it may be a bit tedious.
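For instance, the mouse-event side could start with a pure hit test over the handle geometry. This is only a minimal sketch; the Handle, Corner and BBox types and all names here are hypothetical, not part of gtk2hs or cairo:

import Data.List (find)

-- Hypothetical types for this sketch.
data Corner = NW | NE | SW | SE deriving (Eq, Show)
data Handle = Translate | Scale Corner | Rotate deriving (Eq, Show)

-- Bounding box as (minX, minY, maxX, maxY).
type BBox = (Double, Double, Double, Double)

handleSize, rotateBand :: Double
handleSize = 8   -- side length of the white corner boxes
rotateBand = 20  -- how far outside the box still counts as "rotate"

corners :: BBox -> [(Corner, (Double, Double))]
corners (x0, y0, x1, y1) =
  [(NW, (x0, y0)), (NE, (x1, y0)), (SW, (x0, y1)), (SE, (x1, y1))]

-- Classify a mouse position: corner handles win over the interior,
-- and the interior wins over the surrounding rotation band.
hitTest :: BBox -> (Double, Double) -> Maybe Handle
hitTest box@(x0, y0, x1, y1) (mx, my)
  | Just (c, _) <- cornerHit = Just (Scale c)
  | inside                   = Just Translate
  | nearBox                  = Just Rotate
  | otherwise                = Nothing
  where
    half      = handleSize / 2
    cornerHit = find (\(_, (cx, cy)) -> abs (mx - cx) <= half
                                     && abs (my - cy) <= half)
                     (corners box)
    inside    = mx >= x0 && mx <= x1 && my >= y0 && my <= y1
    nearBox   = mx >= x0 - rotateBand && mx <= x1 + rotateBand
             && my >= y0 - rotateBand && my <= y1 + rotateBand

You would call hitTest from the button-press handler, remember which handle (if any) started the drag, and then interpret subsequent motion events accordingly.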
I would like to create an image in Haskell using the Rasterific library and then display that image in a GTK window; the Rasterific library lets me generate an RGBA-formatted, 32-bit pixel depth image, but I am having trouble figuring out how I can take this raw image and display it in a window or DrawingArea or whatever in GTK.
(I've spent a lot of time looking through the documentation, but I've been having a hard time seeing how to fit the parts together, especially since the Haskell documentation is often non-existent and at some point the cairo library gets involved in a way that's not entirely clear to me.)
I wrote a package called AC-EasyRaster-GTK for this exact purpose.
It's a wrapper around gtk2hs, which provides all the necessary parts but isn't actually all that easy to figure out. So I wrote a library so I wouldn't have to keep looking this stuff up!
ib_new gives you a new image buffer, ib_write_pixel lets you write a pixel, and ib_display will start the GTK event loop, display the bitmap in a window, and block the calling thread until the user clicks close. Sadly, there's no easy way to chuck an entire array at GTK. (It demands a particular pixel order, which varies by platform...)
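To give a feel for the shape of the API, here is a sketch. Only the three function names above come from the answer; the module name, argument order, types and the colour representation are all my assumptions, so check the package's Hackage docs before relying on any of it:

import Graphics.EasyRaster.GTK  -- assumed module name

main :: IO ()
main = do
  buf <- ib_new (320, 240)                       -- assumed: size as a pair
  mapM_ (\p -> ib_write_pixel buf p (shade p))   -- assumed argument order
        [ (x, y) | x <- [0 .. 319], y <- [0 .. 239] ]
  ib_display buf                                 -- shows the window, blocks until closed

-- Hypothetical pixel function; the real colour type is whatever
-- AC-EasyRaster-GTK actually uses.
shade :: (Int, Int) -> (Int, Int, Int)
shade (x, y) = (x `mod` 256, y `mod` 256, 128)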
I'm sure there's a better way to do this, but I'm not finding it either. You can iterate over all the pixels in the original image using something like forM_ (range ((0,0),(w-1,h-1))) (note that range includes both endpoints) and draw each one onto a Cairo surface using something like this: (the Cairo calls are correct, but I'm just guessing about the Rasterific functions)
drawPixel color x y = do
  setSourceRGBA (red color) (green color) (blue color) (alpha color)
  rectangle x y 1 1
  fill  -- fill just the 1x1 rectangle; paint would flood the whole clip region
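Putting it together, something along these lines should work. Rasterific renders to a JuicyPixels Image PixelRGBA8, so pixelAt from Codec.Picture can supply the colours; treat the exact loop as a sketch:

import Codec.Picture (Image, PixelRGBA8 (..), imageHeight, imageWidth, pixelAt)
import Control.Monad (forM_)
import Graphics.Rendering.Cairo

-- Draw a JuicyPixels image pixel-by-pixel onto the current Cairo target.
drawImage :: Image PixelRGBA8 -> Render ()
drawImage img =
  forM_ [ (x, y) | y <- [0 .. imageHeight img - 1]
                 , x <- [0 .. imageWidth img - 1] ] $ \(x, y) -> do
    let PixelRGBA8 r g b a = pixelAt img x y
        c v = fromIntegral v / 255   -- Word8 component -> Double in [0,1]
    setSourceRGBA (c r) (c g) (c b) (c a)
    rectangle (fromIntegral x) (fromIntegral y) 1 1
    fill

This is slow (one rectangle per pixel); for anything large you would want to copy the image's bytes into a Cairo image surface instead, but the above is the most direct translation of the idea.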
I have been working through the book The Haskell School of Expression by Paul Hudak, using its associated gtk-based graphics library Graphics.SOE.Gtk (link to documentation) for small 2D drawing experiments.
This library is very basic, however, and only really has the ability to draw shapes. At the moment I am writing some programs which require particular GUI widgets such as buttons and text boxes. My question is: is it possible to use the drawing capabilities of the SOE library alongside the GUI widgets found in gtk2hs? E.g. could I write a program where clicking a button causes a triangle to be drawn in another container in the same window?
I have searched online for a way to do this, but most tutorials suggest using cairo to do any graphics drawing with Gtk; SOE's graphics API has the appearance of being a relatively self-contained thing.
No, there's not really a meaningful way for soegtk and regular gtk to interact. The reason is that soegtk keeps all its data types abstract; this is good practice from a "makes it easy for the implementor to change the implementation without changing the interface" point of view, but it can be a bit limiting from a "I'm just a user who wants to munge things in ways the interface doesn't promise to allow" point of view.
You could:
make a copy of the text of the single module in the soegtk package, and adjust the export line to export more things and happily break any abstraction boundaries you dislike
interact non-meaningfully; e.g. have your gtk button open an soegtk window with the graphics of interest
learn a different drawing library, say, cairo or diagrams (see the sketch below for the cairo route)
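For the third option, a minimal gtk2hs/cairo sketch of the asker's example (a button that makes a triangle appear elsewhere in the same window) might look like this. It uses the gtk2-era expose/DrawWindow API to match soegtk's generation of gtk2hs; in gtk3 you would connect to the draw signal instead:

import Control.Monad (when)
import Control.Monad.Trans (liftIO)
import Data.IORef
import Graphics.Rendering.Cairo
import Graphics.UI.Gtk

main :: IO ()
main = do
  initGUI
  window <- windowNew
  vbox   <- vBoxNew False 0
  canvas <- drawingAreaNew
  button <- buttonNewWithLabel "Draw triangle"
  drawIt <- newIORef False            -- has the button been pressed yet?
  widgetSetSizeRequest canvas 200 200
  boxPackStart vbox canvas PackGrow 0
  boxPackStart vbox button PackNatural 0
  containerAdd window vbox
  on button buttonActivated $ do
    writeIORef drawIt True
    widgetQueueDraw canvas            -- ask GTK to repaint the drawing area
  on canvas exposeEvent $ liftIO $ do
    go <- readIORef drawIt
    when go $ do
      dw <- widgetGetDrawWindow canvas
      renderWithDrawable dw $ do
        setSourceRGB 0 0.5 1
        moveTo 100 20 >> lineTo 180 180 >> lineTo 20 180
        closePath
        fill
    return True
  on window objectDestroy mainQuit
  widgetShowAll window
  mainGUI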
I have been using the Gloss Library for some game programming, and have got to the point where I am having the most difficulty laying out different elements on the screen. I was wondering whether it was possible to limit a Picture type to display only a certain rectangular area of the screen. The library already has the concept of a rectangular area with the Extent type, but there does not appear to be any way to 'subtract' from pictures.
If there was a way of doing this, then it seems like creating a View type or similar that takes over responsibility for a certain area of the screen (which could also contain additional views, with suitable coordinate transformations between them, etc.) would be an achievable and sensible goal. But without a way to limit drawing areas, it doesn't seem like this would be possible within the Gloss framework.
It seems that clipping is not supported in Gloss.
Nevertheless, the recursive drawing of views, each with its own relative coordinate system, does still seem to be a viable and useful goal, and I am part way through writing code for this now.
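A sketch of that idea, assuming a hypothetical View type (Translate and Pictures are real Gloss constructors; everything else here is made up for illustration):

import Graphics.Gloss

-- Hypothetical view tree: each view positions its own local origin
-- relative to its parent and draws its body in local coordinates.
data View = View
  { viewOffset   :: (Float, Float)  -- origin of this view in parent coords
  , viewBody     :: Picture         -- content, in local coordinates
  , viewChildren :: [View]          -- nested views, relative to this one
  }

-- Flatten a view tree into one Picture by translating each subtree
-- into place. Gloss has no clipping, so children can still draw
-- outside their parent's region; this handles only the coordinates.
drawView :: View -> Picture
drawView (View (dx, dy) body children) =
  Translate dx dy (Pictures (body : map drawView children))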
I've been studying 3D graphics on my own for a while now and I want to get a greater understanding of just how everything works. What I would like to do is to create a simple game without using DirectX or OpenGL. I understand most of the math I believe, but the problem I am running up against is I do not know how to get control of the pixels being displayed in a window.
How do I specify what color I want each pixel in my window to be?
I understand I will probably run into issues with buffers and image shearing and probably terrible efficiency problems, but I want to create my own program so that I can see, from the very lowest level of a high-level language, how the rendering process works. I really have no idea where to start, though. I've figured out how to output BMPs, but I would like to have a running program spitting out 20+ frames per second. How do I accomplish this?
You could pick an environment that allows you to fill an array with pixel values and display it as a bitmap. This is the closest you can get to poking RGB values into video memory. WPF, Silverlight, and HTML5/JavaScript can all do this. If you do not need full screen, these technologies should suffice for now.
In WPF and Silverlight, use a WriteableBitmap.
In HTML5, use the canvas element.
Then it is up to you to implement the logic to draw lines, circles, Bézier curves, 3D projections, and so on.
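For instance, the line-drawing step is usually Bresenham's algorithm. Here is a pure sketch, written in Haskell for consistency with the rest of this page; the same idea ports directly to any of the environments above, and you feed the resulting points to whatever pixel-writing primitive you have:

-- Bresenham's line algorithm as a pure function: returns every (x, y)
-- on the line from (x0, y0) to (x1, y1), handling all octants.
line :: (Int, Int) -> (Int, Int) -> [(Int, Int)]
line (x0, y0) (x1, y1) = go x0 y0 (dx + dy)
  where
    dx = abs (x1 - x0)
    dy = negate (abs (y1 - y0))
    sx = if x0 < x1 then 1 else -1
    sy = if y0 < y1 then 1 else -1
    go x y err
      | x == x1 && y == y1 = [(x, y)]
      | otherwise          = (x, y) : go x' y' err''
      where
        e2 = 2 * err
        (x', err')  = if e2 >= dy then (x + sx, err + dy) else (x, err)
        (y', err'') = if e2 <= dx then (y + sy, err' + dx) else (y, err')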
This is a lot of fun and you will learn a lot.
I'm reading between the lines that you're more interested in having full control over the rendering process from a low level, rather than having a specific interest in how to achieve that on one specific platform.
If that's the case then you will probably get good bang for your buck looking at a library like SDL, which provides you with a frame buffer that you can render to directly but abstracts away a lot of the platform-specific issues. It has been around for quite a while and there are some good tutorials to give you an idea of whether it's the kind of thing you're looking for; see this tutorial and the subsequent one in the same series, which should be enough to get you up and running.
You say you want to create some kind of rendering engine, meaning designing your own pipeline and matrix classes, which you then use to transform 3D coordinates into 2D points.
Once you have the 2D points you've been looking for, you can, on Windows for instance, select a brush and draw your triangles, colouring them at the same time.
I do not know why you would need bitmaps, but if you want to practise texturing, you can do that yourself as well, although on a weak computer this may reduce your frame rate significantly.
If your aim is to understand how rendering works at the lowest level, this is without doubt good practice.
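The core of that 3D-to-2D step is a projection. A minimal perspective projection, sketched in Haskell for consistency with the rest of this page (camera at the origin looking down +z; the focal length f is a free parameter):

-- Project a camera-space point onto the image plane at distance f.
-- Points at or behind the camera (z <= 0) are rejected here rather
-- than clipped properly, which a real pipeline would have to do.
project :: Float -> (Float, Float, Float) -> Maybe (Float, Float)
project f (x, y, z)
  | z <= 0    = Nothing
  | otherwise = Just (f * x / z, f * y / z)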
I'm building a simple RPG using Pygame and would like to implement a drag-and-drop inventory. However, even accounting for blitting to a separate surface, it seems that the entire screen will need to be recalculated every single time the user drags an item around. Would it be best to allow only a limited range of motion, or is it simply not feasible to implement such an interface?
Redrawing most or all of the screen is a very normal thing, across all windowing systems. This is rarely an issue, since most objects on screen can be drawn quickly.
To make this practical, it's necessary to organize all of the game objects that have to be drawn in such a way that they can be quickly found and drawn in the right order. This often means that objects of a particular type are grouped into some sort of layer. The drawing code can then go through each layer and, for each object in it, ask the object to draw itself. If a particular layer is costly to draw because it's got a lot of objects, it can store a prerendered surface and blit that instead.
A really simple hack to get a similar effect is to capture the screen to a surface at the start of a drag, and then blit that every frame instead of redrawing the whole game. This obviously only makes sense in a game where dragging also means that the rest of the game is effectively paused.
There are many GUI examples on pygame.org, as well as libraries for GUIs.