Why doesn't Gloss render in native resolution? - haskell

On OSX (with 2560 x 1600 native resolution), Gloss displays everything at zoom-factor 2x.
Giving a window size of 300 x 300 to the display function creates a window of 600 x 600. All content in that window is also twice as big (in each dimension), regardless of whether drawn with Gloss or loaded as a sprite (I'm using Juicy for that). Scaling the content down does not give the same clean result as when displayed in the actual native resolution.
Is there a way to make Gloss render in full native resolution?
I'm still new to Gloss and hope I haven't missed anything obvious.
Here is the code...
module Main where
import Graphics.Gloss
import Graphics.Gloss.Juicy
import Codec.Picture
main :: IO ()
main = loadJuicy "someimg.png" >>= maybe ( print "Nope" ) displayImg
displayImg :: Picture -> IO ()
displayImg p = display ( InWindow "Image" ( 300, 300 ) ( 100, 100 ) ) white ( pictures [ p, translate 32 32 $ circleSolid 4 ] )
... and the corresponding render (screenshot omitted) shows everything scaled 2x.
Update:
This seems to be a general issue with OpenGL and Retina displays (specifically, the way OSX computes pixels internally). Since, as I understand it, Gloss doesn't really allow low-level access, my guess is that this is not fixable.
Update 2:
This seems to be an issue specific to GLUT as the underlying backend for Gloss. Rebuilding Gloss with GLFW enabled and GLUT disabled should fix it.

Gloss can render at native resolution under OSX with a HiDPI (Retina) display when the default window management backend, GLUT, is exchanged for GLFW. To do this, rebuild Gloss with the appropriate flags:
cabal install -f -GLUT -f GLFW
(Note: with GLFW I could no longer use some Gloss modules, e.g. Graphics.Gloss.Data.Picture or, perhaps more importantly, Graphics.Gloss.Juicy. Using only Graphics.Gloss.Rendering works, though. Regarding resolution: make sure to draw your pictures at the framebuffer size, not the window size, as the two may differ.)
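To illustrate that last point, here is a minimal, self-contained sketch of deriving the HiDPI scale factor from the framebuffer and window sizes (as a backend like GLFW would report them) and mapping window coordinates to framebuffer coordinates. All names here are illustrative, not part of the Gloss or GLFW APIs:

```haskell
-- Minimal sketch: on a 2x Retina display the framebuffer is twice the
-- window size in each dimension, so content drawn at window size comes
-- out blurry. Derive the per-axis scale factor and apply it to points.

-- | Scale factor per axis: framebuffer size / window size.
--   On a 2x Retina display this is (2.0, 2.0).
scaleFactor :: (Int, Int) -> (Int, Int) -> (Float, Float)
scaleFactor (fbW, fbH) (winW, winH) =
  ( fromIntegral fbW / fromIntegral winW
  , fromIntegral fbH / fromIntegral winH )

-- | Map a point given in window coordinates to framebuffer coordinates.
toFramebuffer :: (Float, Float) -> (Float, Float) -> (Float, Float)
toFramebuffer (sx, sy) (x, y) = (sx * x, sy * y)

main :: IO ()
main = do
  -- A 300x300 window backed by a 600x600 framebuffer, as on Retina.
  let s = scaleFactor (600, 600) (300, 300)
  print s                           -- (2.0,2.0)
  print (toFramebuffer s (32, 32))  -- (64.0,64.0)
```

In a real GLFW program you would query both sizes each frame (they can change when the window moves between displays) and draw against the framebuffer size.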

Related

What is the difference between gi-cairo and cairo libraries

I am making a program to draw some vector graphics on top of a GTK program written in Haskell.
I upgraded my program to the gi-gtk library as a replacement for Gtk2Hs, to benefit from GTK 3. When I look at tutorials/examples about drawing in GTK windows with cairo and/or diagrams, both gi-cairo and cairo (from Gtk2Hs) are needed at the same time.
For example, I see:
import GI.Gtk
import qualified GI.Cairo (Context(..))
import qualified Graphics.Rendering.Cairo as Cairo
import qualified Graphics.Rendering.Cairo.Internal as Cairo (Render(runRender))
import qualified Graphics.Rendering.Cairo.Types as Cairo (Cairo(Cairo))
and I don't understand why GI.Cairo (gi-cairo) and Graphics.Rendering.Cairo (cairo) must be imported at the same time.
Does GI.Cairo aim to replace Graphics.Rendering.Cairo or to complement it?
Is Graphics.Rendering.Cairo still maintained, or would it be a good idea to use another library?
Any information/explanation about these two libraries would be helpful.
TL;DR: treat anything based on GTK2HS as deprecated. Use things with the gi- prefix instead.
Originally there were the GTK2HS libraries, which were a manually written binding to GTK (I think, not sure about the details).
Part of this effort was a binding to the Cairo library. The Cairo functions all take a Cairo context object as an argument, which is used to carry state like the current colour and line style. The Haskell binding wraps this up using a Reader monad inside the Render newtype so you don't have to worry about the context object.
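That wrapping idea can be sketched in a few lines. This is a simplified model, not the real GTK2HS types: the point is just that a Render newtype over a Reader carries the context so drawing operations never pass it explicitly.

```haskell
-- Simplified model (illustrative, not the actual GTK2HS API) of how a
-- Render newtype threads the Cairo context implicitly via a Reader monad.
import Control.Monad.Trans.Reader (ReaderT (..), ask)

-- A stand-in for the foreign Cairo context pointer.
newtype CairoCtx = CairoCtx String deriving (Show, Eq)

-- Drawing actions read the context implicitly instead of taking it
-- as an argument everywhere.
newtype Render a = Render { runRender :: ReaderT CairoCtx IO a }

-- A drawing operation can fetch the context when it needs it.
currentCtx :: Render CairoCtx
currentCtx = Render ask

-- Run a Render action against a concrete context; the real libraries
-- expose similar entry points for kicking off a drawing block.
renderWith :: CairoCtx -> Render a -> IO a
renderWith ctx (Render r) = runReaderT r ctx

main :: IO ()
main = do
  ctx <- renderWith (CairoCtx "surface-1") currentCtx
  print ctx
```

The real binding additionally gives Render its Monad instance and wraps each C drawing call so it reads the context from the environment.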
However, maintaining this was slow and mandraulic, and there was an obvious better way: GTK has built-in support for other language bindings via introspection metadata (GObject Introspection) embedded in the source code. So it is perfectly possible to generate all the GTK bindings automatically, and also to extend this to other libraries that use the same metadata system. This is what gi-gtk and its relatives do.
Unfortunately Cairo doesn't work like that. It doesn't include the introspection metadata for the actual drawing API, so the gi-cairo interface on its own is of no use for drawing.
For some time the only way around this was a manual bodge to bridge the gap between the gi-gtk family and the GTK2HS Cairo library, but no longer. Now there is a complete solution:
The gi-cairo-render library, which is essentially the same as the old GTK2HS library but using the gi-cairo version of the context object.
The associated gi-cairo-connector library which lets you switch between functions that require an explicit Cairo context object and functions that work in the Render monad.
You most often need an explicit Cairo context for drawing text using Pango. The Pango library has its own Context object (confusingly, both types are called Context and you have to import them qualified to disambiguate). You get the Pango context from a Cairo context using createContext, and to do this in the middle of a Render action you have to extract the current context from the Render monad using one of the Connector functions.
The code you have quoted uses the old manual bodge; the Render and runRender internal functions are used to get at the Cairo context in the GTK2HS version of the Cairo binding. If you look at the code that calls these functions, you will probably see it doing something unsafe with pointers to coerce between the GI.Cairo.Context type and the Graphics.Rendering.Cairo.Types.Cairo context type, which both point to the same thing under the hood.

what is the replacement for gtk.gdk.get_default_root_window().get_pointer() which is now deprecated

I am using Python 3 and GTK 3, running Debian Buster. I need to find the mouse pointer location on the screen globally, not just within the application's own windows.
I used to use gtk.gdk.get_default_root_window().get_pointer() and it works, but it was marked as deprecated in an error message. I searched all over the place and found references in the GTK 3 documentation to "devices" but got lost pretty fast. Experimenting, I was not able to find out how to do this. I might have succeeded if I had understood how to get a "device": not a specific device, but just the "device" representing the mouse pointer as it appears on the screen. I do not care what device is putting the pointer there. For now, I did what I needed using XWindows calls, but this seems too OS-specific and I would like to stick with GTK for this.
Absolutely obvious:
GdkDisplay * gdk_display_get_default (void);
// then pass it to
GdkSeat * gdk_display_get_default_seat (GdkDisplay *display);
// then pass it to
GdkDevice * gdk_seat_get_pointer (GdkSeat *seat);
// and finally, pass it to
void gdk_device_get_position (GdkDevice *device, ... );
Honestly, I don't fully know what I'm doing here, and the GDK documentation doesn't have a single place describing all those Screens, Displays, Monitors, etc., but try this.

Unbound module graphics in ocaml

I'm using ocaml toplevel and used:
#load "graphics.cma";;
The library got loaded, but when I'm trying:
open Graphics;;
I'm getting an Unbound module Graphics error.
I used #list to list all packages and "graphics" was there in the list.
I have seen all the related answers but still don't get why I'm getting this error.
I don't know what the symbol ** means in your code snippet (perhaps you tried to use some sort of markup), but these symbols shouldn't be there:
# #load "graphics.cma";;
# open Graphics;;
# open_graph "";;
- : unit = ()
#
Make sure that you input this directive literally (including the leading #): #load "graphics.cma";;
If this still doesn't work, you can try #require "graphics";;. This is, by the way, a preferred way to load libraries and packages in modern OCaml.

How to exit Haxe/OpenFL program?

I am making a game using Haxe, OpenFL (Formerly NME) and HaxeFlixel.
However, problem is, I can't seem to find a good way to make a Flixel button that will shutdown the game when pressed. I was planning to make a "quit" button on the main menu.
Is there any simple method to do so or is it impossible?
It depends on the compilation target. I'm going to assume you're compiling to CPP (e.g. a Windows EXE), in which case you should just be able to use the following:
import flash.system.System; // Or nme.system.System if you're using NME
...
// in your FlxButton callback:
System.exit(0);
I can't test right now, so I don't know what effect this would have in Flash (i.e. you may have to wrap it in a conditional compilation flag for cpp), but I do know that it won't work on iOS.

OpenGL shaders without glut?

I'm unable to use GLTools stock shaders in an OpenGL context that was not created by glut.
I'm learning OpenGL from the Superbible 5th edition. The provided GLShaderManager and gltMakeCube (from GLTools) allow me to render some cubes (GLBatch::Draw()) in a window created by glut, using the following code from the Superbible:
gltSetWorkingDirectory(values[0]);
glutInit(&count, values);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize(1200, 400);
glutCreateWindow("Test");
I'm trying to remove the glut code from this program, following the example here http://www.opengl.org/wiki/Programming_OpenGL_in_Linux:_Programming_Animations_with_GLX_and_Xlib to create a window and attach a context to it. (I'm running Ubuntu)
I've managed to create a window and a context, and to call glClearColor successfully to change the window's background color. However, my cubes are no longer being drawn. The example program linked above works properly, but it doesn't seem to use a shader, instead relying on calls to glColor3f and glVertex3f to build a cube.
Why doesn't my technique of rendering with GLTools' stock shaders work in a window/context that's not created by glut?
