Gtk2Hs : Missing functions when migrating from Gtk2 to Gtk3 - haskell

I wrote a Haskell program with Gtk2Hs against Gtk2. But when I tried to build my program against Gtk3, GHC complained about functions which don't exist anymore:
• Variable not in scope:
widgetGetSize :: GtkGL.GLDrawingArea -> IO (Integer, Integer)
• Variable not in scope:
renderWithDrawable :: t1 -> Render () -> IO ()
Do you know if there are functions in Gtk3 which could replace them?
Is there another way in Gtk3 to get the size of a widget?
Note: I can still build my program with Gtk2, but I would like to anticipate a full migration to Gtk3.

GtkGLArea — A widget for custom drawing with OpenGL
GtkGLArea is a widget that allows drawing with OpenGL.
GtkGLArea sets up its own GdkGLContext for the window it creates, and
creates a custom GL framebuffer that the widget will do GL rendering
onto. It also ensures that this framebuffer is the default GL
rendering target when rendering.
In order to draw, you have to connect to the “render” signal, or
subclass GtkGLArea and override the GtkGLAreaClass.render() virtual
function.
The GtkGLArea widget ensures that the GdkGLContext is associated with
the widget's drawing area, and it is kept updated when the size and
position of the drawing area changes.
GtkWidget Size
To get the widget size, use the GtkAllocation getters: gtk_widget_get_allocated_width(), gtk_widget_get_allocated_height(), and gtk_widget_get_allocation(). Note that GtkWidget also has size-request methods, but the allocated size may differ from the requested size.
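In the Haskell gtk3 package the equivalents appear to be the allocation and DrawWindow functions. A minimal sketch, assuming widgetGetAllocation, widgetGetWindow and renderWithDrawWindow exist under those names in your installed gtk3 binding (check the Graphics.UI.Gtk and Graphics.UI.Gtk.Cairo haddocks for the exact signatures):

import Graphics.UI.Gtk
import Graphics.Rendering.Cairo (Render)

-- Replacement for widgetGetSize: read the widget's current allocation.
-- (Assumes widgetGetAllocation :: WidgetClass w => w -> IO Allocation,
-- where Allocation is a Rectangle, as in the gtk3 package.)
getWidgetSize :: WidgetClass w => w -> IO (Int, Int)
getWidgetSize widget = do
  Rectangle _ _ w h <- widgetGetAllocation widget
  return (w, h)

-- Replacement for renderWithDrawable: draw on the widget's DrawWindow.
-- (Assumes widgetGetWindow and renderWithDrawWindow from the gtk3 package.)
paintWidget :: WidgetClass w => w -> Render () -> IO ()
paintWidget widget render = do
  mWin <- widgetGetWindow widget
  case mWin of
    Just win -> renderWithDrawWindow win render
    Nothing  -> return ()  -- widget is not realized yet

For a GLDrawingArea you would pass the widget itself; the Int pair can be converted with fromIntegral if the caller expects Integer.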

Related

unity3d: Use main camera's depth buffer for rendering another camera view

After my main camera renders, I'd like to use (or copy) its depth buffer to a (disabled) camera's depth buffer.
My goal is to draw particles onto a smaller render target (using a separate camera) while using the depth buffer after opaque objects are drawn.
I can't do this in a single camera, since the goal is to use a smaller render target for the particles for performance reasons.
Replacement shaders in Unity aren't an option either: I want my particles to use their existing shaders. I just want the depth buffer of the particle camera to be overwritten with a subsampled version of the main camera's depth buffer before the particles are drawn.
I didn't get any reply to my earlier question; hence, the repost.
Here's the script attached to my main camera. It renders all the non-particle layers and I use OnRenderImage to invoke the particle camera.
public class MagicRenderer : MonoBehaviour {
    public Shader particleShader; // shader that uses the main camera's depth buffer to depth test particle Z
    public Material blendMat;     // material that uses a simple blend shader
    public int downSampleFactor = 1;
    private RenderTexture particleRT;
    private static GameObject pCam;

    void Awake () {
        // make the main camera's depth buffer available to the shaders via _CameraDepthTexture
        camera.depthTextureMode = DepthTextureMode.Depth;
    }

    // Update is called once per frame
    void Update () {
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest) {
        // create tmp RT
        particleRT = RenderTexture.GetTemporary (Screen.width / downSampleFactor, Screen.height / downSampleFactor, 0);
        particleRT.antiAliasing = 1;
        // create particle cam
        Camera pCam = GetPCam ();
        pCam.CopyFrom (camera);
        pCam.clearFlags = CameraClearFlags.SolidColor;
        pCam.backgroundColor = new Color (0.0f, 0.0f, 0.0f, 0.0f);
        pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
        pCam.useOcclusionCulling = false;
        pCam.targetTexture = particleRT;
        pCam.depth = 0;
        // Draw to particleRT's colorBuffer using mainCam's depth buffer
        // ?? - how do I transfer this camera's depth buffer to pCam?
        pCam.Render ();
        // pCam.RenderWithShader (particleShader, "Transparent"); // I don't want to replace the shaders my particles use, so shader replacement isn't an option.
        // blend mainCam's colorBuffer with particleRT's colorBuffer
        // Graphics.Blit(pCam.targetTexture, src, blendMat);
        // copy resulting buffer to destination
        Graphics.Blit (pCam.targetTexture, dest);
        // clean up
        RenderTexture.ReleaseTemporary(particleRT);
    }

    static public Camera GetPCam() {
        if (!pCam) {
            GameObject oldpcam = GameObject.Find("pCam");
            Debug.Log (oldpcam);
            if (oldpcam) Destroy(oldpcam);
            pCam = new GameObject("pCam");
            pCam.AddComponent<Camera>();
            pCam.camera.enabled = false;
            pCam.hideFlags = HideFlags.DontSave;
        }
        return pCam.camera;
    }
}
I've a few additional questions:
1) Why does camera.depthTextureMode = DepthTextureMode.Depth; end up drawing all the objects in the scene just to write to the Z-buffer? Using Intel GPA, I see two passes before OnRenderImage gets called:
(i) Z-PrePass, that only writes to the depth buffer
(ii) Color pass, that writes to both the color and depth buffer.
2) I re-rendered the opaque objects to pCam's RT using a replacement shader that writes (0,0,0,0) to the colorBuffer with ZWrite On (to overcome the depth buffer transfer problem). After that, I reset the culling mask and clear flags as follows:
pCam.cullingMask = 1 << LayerMask.NameToLayer ("Particles");
pCam.clearFlags = CameraClearFlags.Nothing;
and rendered them using pCam.Render().
I thought this would render the particles using their existing shaders with the ZTest.
Unfortunately, what I notice is that the depth-stencil buffer is cleared before the particles are drawn (in spite of me not clearing anything).
Why does this happen?
It's been 5 years, but I developed an almost complete solution for rendering particles into a smaller, separate render target. I'm writing this for future visitors. A lot of knowledge is still required.
Copying the depth
First, you have to get the scene depth in the resolution of your smaller render texture.
This can be done by creating a new render texture with the color format "depth".
To write the scene depth to the low resolution depth, create a shader that just outputs the depth:
struct fragOut {
    float depth : DEPTH;
};

sampler2D _LastCameraDepthTexture;

fragOut frag (v2f i) {
    fragOut tOut;
    tOut.depth = tex2D(_LastCameraDepthTexture, i.uv).x;
    return tOut;
}
_LastCameraDepthTexture is automatically filled by Unity, but there is a downside.
It only comes for free if the main camera renders with deferred rendering.
For forward shading, Unity seems to render the scene again just for the depth texture.
Check the frame debugger.
Then, add a post processing effect to the main camera that executes the shader:
protected virtual void OnRenderImage(RenderTexture pFrom, RenderTexture pTo) {
    Graphics.Blit(pFrom, mSmallerSceneDepthTexture, mRenderToDepthMaterial);
    Graphics.Blit(pFrom, pTo);
}
You can probably do this without the second blit, but it was easier for me for testing.
Using the copied depth for rendering
To use the new depth texture for your second camera, call
mSecondCamera.SetTargetBuffers(mParticleRenderTexture.colorBuffer, mSmallerSceneDepthTexture.depthBuffer);
Keep targetTexture empty.
You then must ensure the second camera does not clear the depth, only the color.
For this, disable clearing on the second camera completely and clear manually like this:
Graphics.SetRenderTarget(mParticleRenderTexture);
GL.Clear(false, true, Color.clear);
I also recommend rendering the second camera by hand. Disable it and call
mSecondCamera.Render();
after clearing.
Merging
Now you have to merge the main view and the separate layer.
Depending on your rendering, you will probably end up with a render texture with so-called premultiplied alpha.
To mix this with the rest, use a post processing step on the main camera with
fixed4 tBasis = tex2D(_MainTex, i.uv);
fixed4 tToInsert = tex2D(TransparentFX, i.uv);
// beware premultiplied alpha in insert
tBasis.rgb = tBasis.rgb * (1.0f - tToInsert.a) + tToInsert.rgb;
return tBasis;
Additive materials work out of the box, but alpha-blended ones do not.
You have to create a shader with custom blending to get working alpha-blended materials. The blending is
Blend SrcAlpha OneMinusSrcAlpha, One OneMinusSrcAlpha
This changes how the alpha channel is modified for every blend operation that is performed.
Results
[Screenshots: additive blended in front of alpha blended, and alpha blended in front of additive blended, each shown with the fx layer's RGB and alpha channels.]
I did not test yet if the performance actually increases.
If anyone has a simpler solution, let me know please.
I managed to reuse the camera's Z-buffer "manually" in the shader used for rendering. See http://forum.unity3d.com/threads/reuse-depth-buffer-of-main-camera.280460/ for more.
Just alter the particle shader you use already for particle rendering.

Creating UI dynamically in android

I want to make a tablet UI like the one shown in the images. This UI will repeat according to the number of items, and the position of each block (i.e. small or big block) can change dynamically. Please suggest which view/layout I should use.
Any suggestion would be appreciated... thanks in advance.
If you are targeting API v14 or above, you can use GridLayout: http://developer.android.com/reference/android/widget/GridLayout.html
Using a LinearLayout to which each category is added, with a RelativeLayout for each category, is a good option. You should customize the onLayout method of the RelativeLayout as a function of the number of products it contains.
You can use fragments or a ListView as well.
EDIT after image change: You can just extend RelativeLayout and override the onLayout method. This will perform better than nested layouts, and I can't think of anything else that would get you there with less effort.
EDIT for elaboration:
Here is an example class I use. Basically, in the onLayout method you make every decision and place each child view in a rectangle by calling its layout method with the rectangle's coordinates. See how I use the child view's layout method to dictate the rectangle.
public class KnockPile extends HandPile
{
    private static final String TAG = "KnockPile";
    static final int EXPECTED_CARDS = 3;
    int mColoring; // this is the coloring group this pile is associated with.
                   // every card added to this pile should be set with this color.

    public KnockPile(Context context, GameActivity ref, int ingroup)
    {
        super(context);
        mColoring = ingroup;
        mActivityReference = ref;
        setWillNotDraw(false);
        setClickable(true);
        setOnDragListener(this);
        mCardBorders = new ArrayList<Integer>();
        final LayoutTransition transition = new LayoutTransition();
        setLayoutTransition(transition);
        //transition.enableTransitionType(LayoutTransition.CHANGING);
        //transition.disableTransitionType(LayoutTransition.CHANGE_APPEARING);
    }

    /**
     * This overrides the card ordering of HandPile. This method lays the cards out in a linear,
     * possibly overlapping fashion. It is not dependent on the orientation of the screen.
     *
     * @see com.kavhe.kondi.gin.layouts.HandPile#onLayout(boolean, int, int, int, int)
     */
    @Override
    protected void onLayout(boolean changed, int l, int t, int r, int b) {
        int availablewidth = getMeasuredWidth(); // this is the maximum available
        int distance = availablewidth / Math.max(getChildCount(), 1); // the horizontal distance between two cards
        Log.v(TAG, "available width onLayout is: " + availablewidth + " and distance is " + distance);
        int cardheight = getHeight();
        int cardwidth = cardheight * 3 / 4;
        if (distance > cardwidth) distance = cardwidth;
        mCardBorders.clear();
        for (int i = 0; i < mCards.size(); i++)
        {
            // save this border into a global variable so that it can be used when one of the cards
            // is dragged by the user, to show that the user can insert at that location
            mCardBorders.add(i * distance);
            mCards.get(i).layout(i * distance, 0, cardwidth + i * distance, cardheight);
        }
    }
}
and this is from the documentation:
protected void onLayout (boolean changed, int left, int top, int right, int bottom)
Called from layout when this view should assign a size and position to each of its children. Derived classes with children should override this method and call layout on each of their children.
Parameters:
changed: This is a new size or position for this view
left: Left position, relative to parent
top: Top position, relative to parent
right: Right position, relative to parent
bottom: Bottom position, relative to parent
It is also possible with a TableLayout, using its row span property.
Are the "products" always going to fit nicely on lines as you have drawn? If then, you can use LinearLayouts (Vertical and Horizontal) nested in each other.
To design such UIs for tablets, you should use Fragments and add/remove your respective layouts inside the fragment. If you are using RelativeLayouts, then proper usage of RelativeLayout.LayoutParams is required.
If you are looking for a good example which uses Fragments, try the Google I/O app source. It uses various dynamic UI features.
Hope it helps!
With the help of StaggeredGridView I solved my requirement.
Thanks to all for help and support.

Why Android ScrollView calls onScrollChanged() when the height of its child is less than the screen?

I have a ScrollView with a RelativeLayout as its only child.
Lately I noticed that the method:
@Override
protected void onScrollChanged(int l, int t, int oldl, int oldt) {}
is called even when the child height is less than the ScrollView height as you can see below:
this refers to my ScrollView and rl refers to the RelativeLayout.
Is this a normal behavior or am I missing something?
Edit:
I'm constantly adding and removing views from the RelativeLayout so disabling the scroll/touch etc. each time doesn't seem like a good solution.
My current fix is to check whether the scroll actually changed anything; if not, I just ignore it:
// in case no actual scroll has occurred - just return.
if (t == oldt && l == oldl) {
    return;
}
This is normal behavior.
If you want to stop the ScrollView from scrolling, then try this:
scroll.setEnabled(false);
scroll.setFocusable(false);
scroll.setHorizontalScrollBarEnabled(false);
scroll.setVerticalScrollBarEnabled(false);
Hope this will work.

FunGen in Ubuntu

I've just installed Ubuntu successfully, mainly to make it easier to work with Haskell libraries than in Windows.
When I run some Haskell code I was working on, it just knocks my socks off. I'm using the FunGen library for my game and I got this error when I tried to run it:
freeglut (FunGen app): ERROR: Internal error <FBConfig with necessary capabilities not found> in function fgOpenWindow
X Error of failed request: BadWindow (invalid Window parameter)
Major opcode of failed request: 4 (X_DestroyWindow)
Resource id in failed request: 0x0
Serial number of failed request: 33
Current serial number in output stream: 36
After some web searching, I found a way to fix this in C code (using GLUT_DOUBLE instead of GL_DOUBLE), and I am using the type Graphics.Rendering.OpenGL.GLdouble in my Haskell code.
A little more research told me that type GLdouble = Double, so this isn't the cause; in addition, I just took the GLdouble part out of the code and it still does not work.
So, here is some simple code that drives me to the previous error:
module Main where

import Graphics.UI.Fungen

width, height :: Int
width = 600
height = 400

w = fromIntegral width
h = fromIntegral height

main :: IO ()
main = do
  let winConfig = ((200, 200), (width, height), "game")
      gameMap   = textureMap 0 w h w h
  funInit winConfig gameMap [] () () [] gameCycle (Timer 30) []

gameCycle :: IOGame () () () () ()
gameCycle = do
  showFPS TimesRoman24 (w - 40, 0) 1.0 0.0 0.0
About versions, I've got: freeglut3 2.6.0-1ubuntu2, GHC 6.12.3, fungen 0.3, Haskell GLUT 2.2.2.0, and Ubuntu 11.04.
Has this happened to anyone else?
Just a guess, but skimming https://bugs.freedesktop.org/show_bug.cgi?id=24226 and http://ubuntuforums.org/archive/index.php/t-333966.html makes it sound like you might get results by trying different GL/GLUT initialization parameters. See FunGEn's Graphics/UI/Fungen/Init.hs and GLUT's initialization API. Maybe have FunGEn's funInit explicitly set indirect mode:
initialize "FunGen app" ["-indirect"]

GLSL - Front vs. Back faces of polygons

I made some simple shading in GLSL of a checkerboard:
f(P) = [ floor(Px)+floor(Py)+floor(Pz) ] mod 2
It seems to work well except for the fact that I see the interior of the objects, but I want to see only the front faces.
Any ideas how to fix this? Thanks!
[Screenshots: teapot (glutSolidTeapot) and cube (glutSolidCube), both showing interior faces.]
The vertex shader file is:
varying float x, y, z;

void main() {
    gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
    x = gl_Position.x;
    y = gl_Position.y;
    z = gl_Position.z;
}
And the fragment shader file is:
varying float x, y, z;

void main() {
    float _x = x;
    float _y = y;
    float _z = z;
    _x = floor(_x);
    _y = floor(_y);
    _z = floor(_z);
    float sum = (_x + _y + _z);
    sum = mod(sum, 2.0);
    gl_FragColor = vec4(sum, sum, sum, 1.0);
}
The shaders are not the problem - the face culling is.
You should either disable face culling (not recommended, since it hurts performance):
glDisable(GL_CULL_FACE);
or use glCullFace and glFrontFace to set the culling mode, i.e.:
glEnable(GL_CULL_FACE); // enables face culling
glCullFace(GL_BACK); // tells OpenGL to cull back faces (the sane default setting)
glFrontFace(GL_CW); // tells OpenGL which faces are considered 'front' (use GL_CW or GL_CCW)
The argument to glFrontFace depends on application conventions, i.e. the matrix handedness.
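For anyone doing the same from Haskell (as in the Gtk2Hs and FunGen questions above), the equivalent setup through the OpenGL package's state variables would look roughly like this; to the best of my knowledge cullFace and frontFace are that binding's names for the glCullFace/glFrontFace state:

import Graphics.Rendering.OpenGL

-- Enable back-face culling so only front faces are rasterized.
setupCulling :: IO ()
setupCulling = do
  cullFace  $= Just Back  -- Just Back culls back faces; Nothing disables culling
  frontFace $= CCW        -- counter-clockwise winding is treated as 'front'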
