Transparency not working as desired with Haskell OpenGL

I've made this polyhedron with Haskell OpenGL.
The sides of the polyhedron are transparent:
renderPrimitive Quads $ do
  materialDiffuse Front $= col
  ......
where col is a color with "a" component equal to 0.2.
And in the main function, the blending is enabled:
blend $= Enabled
blendFunc $= (SrcAlpha, OneMinusSrcAlpha)
One can see that the sides are indeed transparent. However, the edges of the polyhedron are not visible through the interior. Why is that? I have tried numerous initialDisplayMode settings, such as:
initialDisplayMode $= [RGBAMode, DoubleBuffered, WithDepthBuffer, WithAlphaComponent]
I have these material colors and light colors:
clearColor $= Color4 0 0 0 0
materialSpecular Front $= white
materialShininess Front $= 50
lighting $= Enabled
light (Light 0) $= Enabled
position (Light 0) $= Vertex4 0 0 100 1
ambient (Light 0) $= white
diffuse (Light 0) $= white
specular (Light 0) $= white

Related

Blended lines do not look as expected

I use the following fragment shader, which applies a fog effect, to draw my scene:
precision mediump float;
uniform int EnableFog;
uniform float FogMinDist;
uniform float FogMaxDist;
varying lowp vec4 DestinationColor;
varying float EyeToVertexDist;

float computeFogFactor()
{
    float fogFactor = 1.0;
    if (EnableFog != 0)
    {
        // Use a slightly lower value of FogMaxDist to get a better fog effect:
        // it makes the far end disappear sooner.
        float fogMaxDistABitCloser = FogMaxDist * 0.98;
        fogFactor = (fogMaxDistABitCloser - EyeToVertexDist) / (fogMaxDistABitCloser - FogMinDist);
        fogFactor = clamp(fogFactor, 0.0, 1.0);
    }
    return fogFactor;
}

void main(void)
{
    float fogFactor = computeFogFactor();
    gl_FragColor = DestinationColor * fogFactor;
}
And I enable alpha blending:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
The result is the following scene:
My problem is with the places where the lines overlap: there the color looks darker than the color of either line:
How can I fix it?
As already described in the comments, you are blending the newly drawn line with the background, which at certain pixels may already contain colours from another object; in your case, that happens where the lines overlap. To solve this you will either have to draw your lines without overlapping, or make your drawing independent of the current buffer state.
In your specific case you could pass the background colour to your fragment shader via a uniform (or even a texture) and then do the blending manually in the fragment shader.
In general you might want to draw the grid to a framebuffer object (FBO) with an attached texture, and then draw the whole texture in a single draw call using your fog shader and blending. The drawing into the FBO should then be done with blending disabled.
There are other ways, such as drawing the grid into a stencil buffer first and then redrawing a full-screen rectangle, applying a colour with your shader and blending.
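To make the FBO approach concrete, here is a minimal C sketch of the two passes (assumptions: an ES 2.0 context; drawGridLines() and drawFullScreenQuad() are hypothetical helpers standing in for your existing line drawing and a textured full-screen quad; width and height are the viewport size):

GLuint fbo, gridTex;

/* Colour texture that will receive the grid. */
glGenTextures(1, &gridTex);
glBindTexture(GL_TEXTURE_2D, gridTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* FBO with that texture as its colour attachment. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gridTex, 0);

/* Pass 1: render the grid into the texture with blending disabled,
   so overlapping lines overwrite each other instead of darkening. */
glDisable(GL_BLEND);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);
drawGridLines();

/* Pass 2: composite the texture over the scene as one full-screen quad
   with blending enabled, so every covered pixel is blended exactly once. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullScreenQuad(gridTex);

The key design point is that the blend against the rest of the scene happens exactly once per pixel, in the composite pass.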

Trouble with composite rotation and translation in OpenGL: moving an object about its own local axis is not working

I am building a robot in OpenGL and it should move and rotate. When I press f the robot should move forward; if I press t it should rotate 15° about its own local axis, and if I then press f again it should keep walking. I have this partly working: the robot walks and rotates, but the problem is that it is not rotating about its own local axis; it rotates around (0,0,0) instead. I think I don't understand how the composition of translation and rotation has to be set up to get the effect I want.
For now I am testing with just a scaled sphere. I am adding the display function here so that it is clearer:
void display()
{
    glEnable(GL_DEPTH_TEST); // need depth test to correctly draw 3D objects
    glClearColor(0, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glShadeModel(GL_SMOOTH);

    // All color and material stuff goes here
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_NORMALIZE); // normalize normals
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);

    // set up the parameters for lighting
    GLfloat light_ambient[] = {0, 0, 0, 1};
    GLfloat light_diffuse[] = {.6, .6, .6, 1};
    GLfloat light_specular[] = {1, 1, 1, 1};
    GLfloat light_pos[] = {10, 10, 10, 1};
    glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
    glLightfv(GL_LIGHT0, GL_SPECULAR, light_specular);

    GLfloat mat_specular[] = {.9, .9, .9, 1};
    GLfloat mat_shine[] = {10};
    glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, mat_specular);
    glMaterialfv(GL_FRONT_AND_BACK, GL_SHININESS, mat_shine);
    // color specs end here

    //glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); // comment this line to enable polygon shades

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(90, 1, 1, 100);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
    gluLookAt(0, 0, 30, 0, 0, 0, 0, 1, 0);

    glRotatef(x_angle, 0, 1, 0); // this is just for mouse handling
    glRotatef(y_angle, 1, 0, 0); // this is just for mouse handling
    glScalef(scale_size, scale_size, scale_size); // for zooming effect

    draw_coordinate();

    // Drawing using VBO starts here
    glTranslatef(walk * sin(M_PI * turn / 180), 0, walk * cos(M_PI * turn / 180));
    glRotatef(turn, 0, 1, 0);
    draw_sphere(3, 1, 1);

    glDisableClientState(GL_VERTEX_ARRAY); // disable the vertex array on the client side
    glDisableClientState(GL_NORMAL_ARRAY); // disable the normal array on the client side

    glutSwapBuffers();
}
The rotate function in OpenGL is one that rotates around (0,0,0). You have to translate the rotation point to the origin, do the rotation, and then translate back:
...
glTranslatef(walk*sin(M_PI*turn/180),0,walk*cos(M_PI*turn/180));
glTranslatef(-x_rot,-y_rot,-z_rot);
glRotatef(turn,0,1,0);
glTranslatef(x_rot,y_rot,z_rot);
...
So in your case x_rot = walk*sin(M_PI*turn/180), y_rot = 0 and z_rot = walk*cos(M_PI*turn/180). The above becomes:
...
glRotatef(turn,0,1,0);
glTranslatef(walk*sin(M_PI*turn/180), 0, walk*cos(M_PI*turn/180));
...
If your robot doesn't rotate about its own axis, then translate the robot to the origin, rotate it, and translate it back to its original position. Keep your translation, rotation, scaling and drawing inside
glPushMatrix();
... your rotation, translation, scaling and drawing go here ...
glPopMatrix();
This keeps the rest of the scene unchanged.
If you don't understand these functions then look here.
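To make that concrete, a minimal sketch of the pattern in the display function might look like this (robotX, robotZ and turn are illustrative variable names, and drawRobot() is a hypothetical call standing in for whatever actually draws the robot):

glPushMatrix();
    /* Position the robot where it currently stands... */
    glTranslatef(robotX, 0.0f, robotZ);
    /* ...then rotate. Because transforms issued later are applied to the
       geometry first, this rotation happens about the robot's own origin,
       i.e. its local axis, before the translation moves it into place. */
    glRotatef(turn, 0.0f, 1.0f, 0.0f);
    drawRobot();
glPopMatrix();   /* the transforms above do not leak into the rest of the scene */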

RenderMan incident vector inconsistencies

I am working on translating a series of light and surface shaders from 3Delight to PRMan, and I have discovered a difference between the two that I cannot work out. It seems that when a surface shader is being evaluated for transmission opacity due to a trace in a light shader, the incident vector I in PRMan is being set to the surface's normal.
In my example scene, there is a hemisphere floating above a disc. A distant light from above is projecting traced transmission values onto the surfaces behind them (a little backwards for a light, but this is a demo). The surface on the hemisphere is rendering as a solid coloured by its normals when viewed by the camera, but with the opacity of the incident direction when queried for transmission.
This is what I expect it to look like, and what I receive from 3Delight:
Note that the floor is a solid, nearly pure green: the colour we would expect if the incident direction is vertical. However, this is what I receive when I render the exact same scene with PRMan:
It appears to be projecting the normals.
I have attempted fetching values via rayinfo and calculating a new I, but those values all match with what I is actually set to. I have also noticed discrepancies with E, but I have not been able to nail down what it is being set to in PRMan.
Q: How can I get the incident vector I that I am expecting?
Contents of scene.rib:
Display "falloff.tiff" "framebuffer" "rgba"
Projection "perspective" "fov" [17]
Format 400 400 1
ShadingRate 0.25
PixelSamples 3 3
# Move the camera
Translate 0 -0.65 10
Rotate 30 -1 0 0
Option "searchpath" "string shader" ".:&"
WorldBegin
LightSource "projector" "projector_light"
"point to" [0 -1 0]
Surface "matte"
TransformBegin
Rotate 90 1 0 0
Disk 0 1.25 360
TransformEnd
Surface "inspect_incident"
Attribute "visibility" "integer transmission" [1]
Attribute "shade" "string transmissionhitmode" "shader"
TransformBegin
Translate 0 1 0
Rotate -90 1 0 0
Sphere 1 0 1 360
TransformEnd
WorldEnd
Contents of projector.sl:
light projector(
    float intensity = 1;
    color lightcolor = 1;
    point from = point "shader" (0,0,0);
    point to = point "shader" (0,0,1);
    float maxdist = 1e12;
) {
    uniform vector dir = normalize(to - from);
    solar(dir, 0.0) {
        Cl = intensity * lightcolor * (1 - transmission(Ps, Ps - dir * maxdist));
    }
}
Contents of inspect_incident.sl:
class inspect_incident() {
    public void opacity(output color Oi) {
        vector In = normalize(I);
        Oi = color((In + 1) / 2);
    }
    public void surface(output color Ci, Oi) {
        vector Nn = normalize(N);
        Ci = color((Nn + 1) / 2);
        Oi = 1;
    }
}
Quoting the documentation for the special __computesOpacity parameter of surface shaders:
A value of 0 means that the shader does not compute opacity (i.e. Oi == Os). This can be used to override a transmission hit mode of "shader". For such shaders, the opacity() method will be skipped for transmission rays.
A value of 1 means that the shader does indeed compute opacity. Such shaders will be run to evaluate their opacity for transmission rays. That result may be cached by the renderer, and thus must be view-independent.
A value of 2 means that the shader computes opacity in a view-dependent manner. As such, the renderer will avoid caching opacity for transmission rays. The opacity is still cached for the purpose of controlling continuations on diffuse and specular rays, but view-dependent shadows may be generated using areashadow() or transmission(). For mode 2, the opacity() method must only depend on view-dependent entities within a check for raytype == "transmission".
And quoting Brian from Pixar:
Set it to 2 to do what you want. What you're seeing is that the renderer runs opacity() once with the dome's I, and caches that.

A Question on OpenGL ES 2.0 and Alpha / Stencil Tests

I have a quad covering the area between -0.5, 0.5 and 0.5, -0.5 on a cleared viewport with a stencil and alpha buffer. In the fragment shader I apply a texture which happens to have a shape -- in this case a circle -- outside of which it is fully transparent.
I am trying to figure out how I can essentially "cut" that opaque textured shape out of the next draw of the shape: I draw the first quad, then draw it again offset to some degree (say between -0.3, 0.5 and 0.8, -0.5), and only the part of the second quad's texture that does not overlap the first shape's opaque region should be drawn.
It is easy enough to do this with a stencil buffer in a way that applies to the whole quad and is blind to the texture; however, I would like it to apply to the texture's opaque shape.
So, as an example of what I am after: what actually gets rendered of the conceptual circle texture would, in that case, be a crescent. I am not sure which tests I should be using for this.
I think you want to stick with the stencil buffer, but the alpha test isn't available in ES 2.0 per the philosophy that anything that can be done in a shader isn't supplied as fixed functionality.
Instead, you can insert one of your own choosing inside the fragment shader, thanks to the discard keyword. Supposing you had the most trivial textured fragment shader:
varying mediump vec2 texCoordVarying;
uniform sampler2D tex2D;

void main()
{
    gl_FragColor = texture2D(tex2D, texCoordVarying);
}
You could throw in an alpha test so that pixels with an alpha of less than 0.1 don't proceed down the pipeline, and hence don't affect the stencil buffer with:
varying mediump vec2 texCoordVarying;
uniform sampler2D tex2D;

void main()
{
    mediump vec4 colour = texture2D(tex2D, texCoordVarying);
    if (colour.a > 0.1)
        gl_FragColor = colour;
    else
        discard;
}

How to change properties of DrawingArea in Gtk2Hs

Can someone please point me in the right direction when it comes to changing properties of an element in Gtk2Hs?
For example, how do I change the background-color of a DrawingArea?
There are various methods for modifying a widget's style. For example to modify the background style you can use widgetModifyBg (corresponding to the C function gtk_widget_modify_bg()). In principle, if you change the style for one state (e.g. StateNormal) then you should also change it for the others.
I would suggest you describe the styles you want in an RC file, and then load that file from your application, but it seems that functions like gtk_rc_parse() are not bound in gtk2hs.
Here's an example:
import Graphics.UI.Gtk

main = do
  initGUI
  window <- windowNew
  window `onDestroy` mainQuit
  drawingArea <- drawingAreaNew
  window `containerAdd` drawingArea
  widgetModifyBg drawingArea StateNormal (Color 0xffff 0 0)
  widgetShowAll window
  mainGUI
If you need to do custom drawing based on a widget's styles, you can do that using widgetGetState, the widgetStyle property and the styleGet* family of functions (e.g. styleGetText). Here's an example of that:
import Graphics.Rendering.Cairo
import Graphics.UI.Gtk hiding (fill)
import Graphics.UI.Gtk.Gdk.Events (Event(Expose))

expose widget rect = do
  state <- widgetGetState widget
  style <- widget `get` widgetStyle
  (Color red green blue) <- styleGetText style state
  drawWindow <- widgetGetDrawWindow widget
  renderWithDrawable drawWindow $ do
    moveTo 50 50
    setFontSize 20
    setSourceRGB (fromIntegral red / 0xffff)
                 (fromIntegral green / 0xffff)
                 (fromIntegral blue / 0xffff)
    showText "O HAI"
    fill
  return False

main = do
  initGUI
  window <- windowNew
  window `onDestroy` mainQuit
  drawingArea <- drawingAreaNew
  drawingArea `onExpose` \(Expose sent area region count) ->
    expose drawingArea area
  window `containerAdd` drawingArea
  widgetShowAll window
  mainGUI
