Apply color using VAO in JOGL

I am trying to color my polygons in JOGL. I have stored the vertices in an array, an index array for the triangle order, and a color array. The code is as follows, but the problem I am facing is that the triangles are drawn white, not with the colors from the color buffer.
float f[] = {1000,2000,-4000,-2000,-2000,-4000,2000,-2000,-4000,1000,-4000,-4000};
FloatBuffer buffer = GLBuffers.newDirectFloatBuffer(12);
this.coordCount = 12;
buffer.put(f);
buffer.rewind();
int indx[] = {0,1,2,1,3,2};
IntBuffer indxBuffer = GLBuffers.newDirectIntBuffer(6); // Total number of indices
this.indexCount = 6;
indxBuffer.put(indx);
indxBuffer.rewind();
float color[] = {1,0,1,0,0,0,0,0,0,1,0,0};
FloatBuffer colorBuffer = GLBuffers.newDirectFloatBuffer(12);
colorBuffer.put(color);
colorBuffer.rewind();
gl.glDisable(GL.GL_TEXTURE_2D);
gl.glEnableClientState(GLPointerFunc.GL_COLOR_ARRAY);
gl.glEnableClientState(GLPointerFunc.GL_VERTEX_ARRAY);
gl.glFrontFace(GL.GL_CCW);
gl.glVertexPointer(3, GL.GL_FLOAT, 0, buffer);
gl.glColorPointer(3, GL.GL_FLOAT, 0, colorBuffer);
gl.glDrawElements(GL.GL_TRIANGLES, this.indexCount, GL.GL_UNSIGNED_INT, indxBuffer);
gl.glDisableClientState(GLPointerFunc.GL_COLOR_ARRAY);
gl.glDisableClientState(GLPointerFunc.GL_VERTEX_ARRAY);
gl.glEnable(GL.GL_TEXTURE_2D);
I am doing this rendering on the NASA World Wind globe, but I don't think that should cause any problems. Can someone help me figure out the problem? I have been stuck on this for a while.
Thanks,

Got the solution. I just had to enable color material:
gl.glEnable(GL2.GL_COLOR_MATERIAL);
gl.glColorMaterial(GL2.GL_FRONT_AND_BACK, GL2.GL_DIFFUSE);
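For context: under the fixed-function pipeline, per-vertex colors supplied through glColorPointer are ignored while lighting is enabled unless color material tracking is switched on (or lighting is disabled outright). Below is a minimal sketch of where the two calls might sit relative to the draw code from the question; the placement is illustrative, not anything World Wind-specific:
gl.glEnable(GL2.GL_COLOR_MATERIAL);                        // let vertex colors drive the lit material
gl.glColorMaterial(GL2.GL_FRONT_AND_BACK, GL2.GL_DIFFUSE);

gl.glEnableClientState(GLPointerFunc.GL_VERTEX_ARRAY);
gl.glEnableClientState(GLPointerFunc.GL_COLOR_ARRAY);
gl.glVertexPointer(3, GL.GL_FLOAT, 0, buffer);
gl.glColorPointer(3, GL.GL_FLOAT, 0, colorBuffer);
gl.glDrawElements(GL.GL_TRIANGLES, this.indexCount, GL.GL_UNSIGNED_INT, indxBuffer);

gl.glDisableClientState(GLPointerFunc.GL_COLOR_ARRAY);
gl.glDisableClientState(GLPointerFunc.GL_VERTEX_ARRAY);
gl.glDisable(GL2.GL_COLOR_MATERIAL);                       // restore previous state
Temporarily disabling lighting with gl.glDisable(GL2.GL_LIGHTING) around the draw would also make the raw vertex colors visible, as long as the previous state is restored afterwards.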


Is it possible to test if an arbitrary pixel is modifiable by the shader?

I am writing a spatial shader in Godot to pixelate an object.
Previously, I tried to write outside of an object; however, that is only possible in CanvasItem shaders, and now I am going back to 3D shaders due to rendering annoyances (I am unable to selectively hide items without using the culling mask, which, being limited to 20 layers, is not an extensible solution).
My naive approach:
Define a pixel "cell" resolution (i.e. 3x3 real pixels)
For each fragment:
If the entire "cell" of real pixels is within the model's draw bounds, color the current pixel to match the cell's lower-left pixel (the pixel whose coordinates are a multiple of the cell resolution).
If any pixel of the current "cell" is out of the draw bounds, set alpha to 1 to erase the entire cell.
Pseudo-code, for people asking for code of the likely non-existent functionality that I am seeking:
int cell_size = 3;

fragment {
    // Check every real pixel in this fragment's cell; if any of them falls
    // outside the object being drawn, mark the whole cell for erasure.
    int erase_pixel = 0;
    vec2 cell_origin = FRAGCOORD.xy - mod(FRAGCOORD.xy, float(cell_size));
    for (int y = 0; y < cell_size; y++) {
        for (int x = 0; x < cell_size; x++) {
            if (uv_in_model(cell_origin + vec2(float(x), float(y))) == false) {
                erase_pixel = 1;
            }
        }
    }
    albedo.a = float(erase_pixel);
}
tl;dr, is it possible to know if any given point will be called by the fragment function?
On your object's material there should be a property called Next Pass. Add a new Spatial Material in this section, open up flags and check transparent and unshaded, and then right-click it to bring up the option to convert it to a Shader Material.
Now, open up the new Shader Material's Shader. The last process should have created a Shader formatted with a fragment() function containing the line vec4 albedo_tex = texture(texture_albedo, base_uv);
In this line, you can replace "texture_albedo" with "SCREEN_TEXTURE" and "base_uv" with "SCREEN_UV". This should make the new shader look like nothing has changed, because the next pass material is just sampling the screen from the last pass.
Above that, make a variable called something along the lines of "pixelated" and set it to the following expression:
vec2 pixelated = floor(SCREEN_UV * scale) / scale; where scale is a float or vec2 containing the pixel size. Finally replace SCREEN_UV in the albedo_tex definition with pixelated.
After this, you can have a float depth which samples DEPTH_TEXTURE with pixelated like this:
float depth = texture(DEPTH_TEXTURE, pixelated).r;
This depth value will be very large for pixels that are just trying to render the background onto your object. So, add a conditional statement:
if (depth > 100000.0f) { ALPHA = 0.0f; }
As long as the flags on this new next pass shader were set correctly (transparent and unshaded) you should have a quick-and-dirty pixelator. I say this because it has some minor artifacts around the edges, but you can make scale a uniform variable and set it from the editor and scripts, so I think it works nicely.
"Testing if a pixel is modifiable" in your case means testing if the object should be rendering it at all with that depth conditional.
Here's the full shader with my modifications from the comments
// NOTE: Shader automatically converted from Godot Engine 3.4.stable's SpatialMaterial.
shader_type spatial;
render_mode blend_mix, depth_draw_opaque, cull_back, unshaded;

// The size of pixelated blocks on the screen relative to pixels
uniform int scale;

void vertex() {
}

// vec2 representation of one, used for calculation
const vec2 one = vec2(1.0f, 1.0f);

void fragment() {
    // Scale SCREEN_UV up to the size of the viewport over the pixelation scale;
    // keep scale a multiple of 2 to avoid artefacts
    vec2 pixel_scale = VIEWPORT_SIZE / float(scale * 2);
    vec2 pixelated = SCREEN_UV * pixel_scale;
    // Truncate the decimal place from the pixelated UVs and then shift them over by half a pixel
    pixelated = pixelated - mod(pixelated, one) + one / 2.0f;
    // Scale the pixelated UVs back down to the screen
    pixelated /= pixel_scale;
    vec4 albedo_tex = texture(SCREEN_TEXTURE, pixelated);
    ALBEDO = albedo_tex.rgb;
    ALPHA = 1.0f;
    float depth = texture(DEPTH_TEXTURE, pixelated).r;
    if (depth > 10000.0f)
    {
        ALPHA = 0.0f;
    }
}

3D vessel surface reconstruction

I have a 3D vascular free-hand ultrasound volume containing one vessel, and I am trying to reconstruct the surface of the vessel. The 3D volume is constructed from a stack of 2D images/B-scans, and the contour of the vessel in each B-scan has been segmented; that is, I have an ellipse representing the contour of the vessel in each B-scan in the volume. I have tried to reconstruct the contour of the vessel by following the VTK example of 'GenerateModelsFromLabels.cxx' (http://www.vtk.org/Wiki/VTK/Examples/Cxx/Medical/GenerateModelsFromLabels). However, the result is not a smooth surface from one frame to another as I would have hoped for it to be. It is discontinuous and irregular, and the surface doesn't connect the vessel contours between two adjacent frames in the volume if the displacement between the ellipses is large. In my approach, I basically used DiscreteMarchingCubes -> WindowedSincPolyDataFilter -> GeometryFilter.
I played around with the passband, smoothingIterations and featureAngle parameters, and I was able to obtain the best following result:
As you can see, it is not a smooth continuous surface; there are a lot of uninterpolated "holes" between adjacent frames, but it is all right. Can it be made better? I also tried using a 3D Delaunay triangulation, but it only gave me the convex hull, which is not the output I expected. I would like to know if there is a better approach to reconstructing a surface that closely follows the contour of the vessel from one B-scan to the next in a volume?
A minimal working example is shown below:
vtkSmartPointer<vtkImageData> vesselVolume =
vtkSmartPointer<vtkImageData>::New();
int totalImages = 210;
for (int z = 0; z < totalImages; z++)
{
    std::string strFile = "E:/datasets/vasc/rendering/contour/" + std::to_string(z + 1) + ".png";
    cv::Mat im = cv::imread(strFile, CV_LOAD_IMAGE_GRAYSCALE);
    if (z == 0)
    {
        vesselVolume->SetExtent(0, im.cols, 0, im.rows, 0, totalImages - 1);
        vesselVolume->SetSpacing(1, 1, 1);
        vesselVolume->SetOrigin(0, 0, 0);
        vesselVolume->AllocateScalars(VTK_UNSIGNED_CHAR, 0);
    }
    std::vector<cv::Point2i> locations; // output, locations of non-zero pixels
    cv::findNonZero(im, locations);
    for (int nzi = 0; nzi < locations.size(); nzi++)
    {
        unsigned char* pixel = static_cast<unsigned char*>(vesselVolume->GetScalarPointer(locations[nzi].x, locations[nzi].y, z));
        pixel[0] = 255;
    }
}
vtkSmartPointer<vtkDiscreteMarchingCubes> discreteCubes =
vtkSmartPointer<vtkDiscreteMarchingCubes>::New();
discreteCubes->SetInputData(vesselVolume);
discreteCubes->GenerateValues(1, 255, 255);
discreteCubes->ComputeNormalsOn();
vtkSmartPointer<vtkWindowedSincPolyDataFilter> smoother =
vtkSmartPointer<vtkWindowedSincPolyDataFilter>::New();
unsigned int smoothingIterations = 10;
double passBand = 2;
double featureAngle = 360.0;
smoother->SetInputConnection(discreteCubes->GetOutputPort());
smoother->SetNumberOfIterations(smoothingIterations);
smoother->BoundarySmoothingOff();
//smoother->FeatureEdgeSmoothingOff();
smoother->FeatureEdgeSmoothingOn();
smoother->SetFeatureAngle(featureAngle);
smoother->SetPassBand(passBand);
smoother->NonManifoldSmoothingOn();
smoother->BoundarySmoothingOn();
smoother->NormalizeCoordinatesOn();
smoother->Update();
vtkSmartPointer<vtkThreshold> selector =
vtkSmartPointer<vtkThreshold>::New();
selector->SetInputConnection(smoother->GetOutputPort());
selector->SetInputArrayToProcess(0, 0, 0,
vtkDataObject::FIELD_ASSOCIATION_CELLS,
vtkDataSetAttributes::SCALARS);
vtkSmartPointer<vtkMaskFields> scalarsOff =
vtkSmartPointer<vtkMaskFields>::New();
// Strip the scalars from the output
scalarsOff->SetInputConnection(selector->GetOutputPort());
scalarsOff->CopyAttributeOff(vtkMaskFields::POINT_DATA,
vtkDataSetAttributes::SCALARS);
scalarsOff->CopyAttributeOff(vtkMaskFields::CELL_DATA,
vtkDataSetAttributes::SCALARS);
vtkSmartPointer<vtkGeometryFilter> geometry =
vtkSmartPointer<vtkGeometryFilter>::New();
geometry->SetInputConnection(scalarsOff->GetOutputPort());
geometry->Update();
vtkSmartPointer<vtkPolyDataMapper> mapper =
vtkSmartPointer<vtkPolyDataMapper>::New();
mapper->SetInputConnection(geometry->GetOutputPort());
mapper->ScalarVisibilityOff();
mapper->Update();
vtkSmartPointer<vtkRenderWindow> renderWindow =
vtkSmartPointer<vtkRenderWindow>::New();
vtkSmartPointer<vtkRenderWindowInteractor> renderWindowInteractor =
vtkSmartPointer<vtkRenderWindowInteractor>::New();
renderWindowInteractor->SetRenderWindow(renderWindow);
vtkSmartPointer<vtkRenderer> renderer =
vtkSmartPointer<vtkRenderer>::New();
renderWindow->AddRenderer(renderer);
renderer->SetBackground(.2, .3, .4);
vtkSmartPointer<vtkActor> actor =
vtkSmartPointer<vtkActor>::New();
actor->SetMapper(mapper);
renderer->AddActor(actor);
renderer->ResetCamera();
renderWindow->Render();
renderWindowInteractor->Start();
Assuming that your problem is hand shake between slices, one possible way to improve your result is to apply slice-to-slice registration. It should be easy to try using ImageJ. Use the transforms between slices to also transform your labeled images, then run the transformed label images through your current pipeline.
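To make the "transform your labeled images" step concrete, here is a minimal sketch of the idea using plain arrays (written in Java for brevity; the per-slice integer translations dx/dy are assumed to come out of the registration step, and all names are illustrative). Real registration output may also include rotations and sub-pixel shifts, which need proper resampling; tools like ImageJ can apply the same transform directly to the label stack.
// Sketch: apply per-slice translations (from slice-to-slice registration)
// to the binary label masks before running them through the meshing pipeline.
// labels[z][y][x] is the segmented contour mask; dx[z], dy[z] are the offsets.
static boolean[][][] shiftSlices(boolean[][][] labels, int[] dx, int[] dy) {
    int depth = labels.length, rows = labels[0].length, cols = labels[0][0].length;
    boolean[][][] out = new boolean[depth][rows][cols];
    for (int z = 0; z < depth; z++) {
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                int sx = x - dx[z], sy = y - dy[z]; // pull from the source position
                if (sx >= 0 && sx < cols && sy >= 0 && sy < rows) {
                    out[z][y][x] = labels[z][sy][sx];
                }
            }
        }
    }
    return out;
}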

Color over color: how to get the resulting color?

I have the following task:
let's say there are two colors: color1 and color2
color1 is semi-transparent (color2 may be too)
I know the ARGB values of color1 and color2
how do I get the ARGB value of the color produced by overlaying color1 on color2, and vice versa?
Here is an image of what I am looking for:
And here is a code snippet (C#):
private Color getOverlapColor(Color frontColor, Color backColor)
{
//return...
}
The simplest way is to assume a linear scale.
int blend(int front, int back, int alpha)
=> back + (front - back) * alpha / 255;
And then:
Color getOverlapColor(Color front, Color back)
{
    var r = blend(front.Red, back.Red, front.Alpha);
    var g = blend(front.Green, back.Green, front.Alpha);
    var b = blend(front.Blue, back.Blue, front.Alpha);
    var a = unsure;
    return new Color(r, g, b, a);
}
I'm unsure about how to calculate the resulting alpha:
if both front.Alpha and back.Alpha are 0, the resulting is also 0.
if front.Alpha is 0, the result is back.Alpha.
if front.Alpha is 255, the value of back.Alpha doesn't matter.
if front.Alpha and back.Alpha are both 50%, the result must be larger than 50%.
But I'm sure someone already figured out all of the above. Some SVG renderer, or GIMP, or some other image processing library should already have this code, carefully tested and proven in practice.
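For what it's worth, the standard Porter-Duff "source over" operator answers exactly this, including the resulting alpha. A minimal sketch (written in Java, assuming non-premultiplied 8-bit ARGB channels; names are illustrative):
// Porter-Duff "source over" for non-premultiplied channels in 0..255.
// front and back are {alpha, red, green, blue}.
static int[] over(int[] front, int[] back) {
    double af = front[0] / 255.0;
    double ab = back[0] / 255.0;
    double ao = af + ab * (1.0 - af);      // resulting alpha
    int[] out = new int[4];
    out[0] = (int) Math.round(ao * 255.0);
    for (int i = 1; i < 4; i++) {
        double c = ao == 0.0 ? 0.0
                : (front[i] * af + back[i] * ab * (1.0 - af)) / ao;
        out[i] = (int) Math.round(c);
    }
    return out;
}
This matches the expectations listed above: 0 over 0 stays 0, a fully transparent front returns the back color, a fully opaque front wins outright, and two 50% alphas combine to 75%.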

How to remove the effect of light / shadow on my model in XNA?

I am developing a small game and I would like to draw a ground field (land) with a repeated texture. My problem is the rendered result: it gives the impression that everything around my cubes is in a light shadow.
Is it possible to make the lighting uniform or remove the shadow effect in my draw function?
Sorry for my bad english..
Here is a screenshot to better understand my problem.
Here my code draw function (instancing model with vertexbuffer)
// Draw Function (instancing model - vertexbuffer)
public void DrawModelHardwareInstancing(Model model, Texture2D texture, Matrix[] modelBones,
                                        Matrix[] instances, Matrix view, Matrix projection)
{
    if (instances.Length == 0)
        return;

    // If we have more instances than room in our vertex buffer, grow it to the necessary size.
    if ((instanceVertexBuffer == null) ||
        (instances.Length > instanceVertexBuffer.VertexCount))
    {
        if (instanceVertexBuffer != null)
            instanceVertexBuffer.Dispose();

        instanceVertexBuffer = new DynamicVertexBuffer(Game.GraphicsDevice, instanceVertexDeclaration,
                                                       instances.Length, BufferUsage.WriteOnly);
    }

    // Transfer the latest instance transform matrices into the instanceVertexBuffer.
    instanceVertexBuffer.SetData(instances, 0, instances.Length, SetDataOptions.Discard);

    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (ModelMeshPart meshPart in mesh.MeshParts)
        {
            // Tell the GPU to read from both the model vertex buffer plus our instanceVertexBuffer.
            Game.GraphicsDevice.SetVertexBuffers(
                new VertexBufferBinding(meshPart.VertexBuffer, meshPart.VertexOffset, 0),
                new VertexBufferBinding(instanceVertexBuffer, 0, 1)
            );
            Game.GraphicsDevice.Indices = meshPart.IndexBuffer;

            // Set up the instance rendering effect.
            Effect effect = meshPart.Effect;
            //effect.CurrentTechnique = effect.Techniques["HardwareInstancing"];
            effect.Parameters["World"].SetValue(modelBones[mesh.ParentBone.Index]);
            effect.Parameters["View"].SetValue(view);
            effect.Parameters["Projection"].SetValue(projection);
            effect.Parameters["Texture"].SetValue(texture);

            // Draw all the instance copies in a single call.
            foreach (EffectPass pass in effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                Game.GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0,
                                                            meshPart.NumVertices, meshPart.StartIndex,
                                                            meshPart.PrimitiveCount, instances.Length);
            }
        }
    }
}
// ### END FUNCTION DrawModelHardwareInstancing
The problem is the cube mesh you are using. The normals are averaged, but I guess you want them to be orthogonal to the faces of the cubes.
You will have to use a total of 24 vertices (4 for each side) instead of 8 vertices. Each corner will have 3 vertices with the same position but different normals, one for each adjacent face:
If the FBX exporter cannot be configured to correctly export the normals simply create your own cube mesh:
var vertices = new VertexPositionNormalTexture[24];
// Initialize the vertices, set position and texture coordinates
// ...
// Set normals
// front face
vertices[0].Normal = new Vector3(1, 0, 0);
vertices[1].Normal = new Vector3(1, 0, 0);
vertices[2].Normal = new Vector3(1, 0, 0);
vertices[3].Normal = new Vector3(1, 0, 0);
// back face
vertices[4].Normal = new Vector3(-1, 0, 0);
vertices[5].Normal = new Vector3(-1, 0, 0);
vertices[6].Normal = new Vector3(-1, 0, 0);
vertices[7].Normal = new Vector3(-1, 0, 0);
// ...
It looks like you've got improperly calculated / no normals.
Look at this example, specifically part 3.
A normal is a vector that describes the direction a surface faces at a vertex/poly, which in turn determines how light reflects off it.
I like this picture to demonstrate: the blue lines are the normal direction at each particular point on the curve.
In XNA, you can calculate the normal of a polygon with vertices vert1,vert2,and vert3 like so:
Vector3 dir = Vector3.Cross(vert2 - vert1, vert3 - vert1);
Vector3 norm = Vector3.Normalize(dir);
In a lot of cases this is done automatically by modelling software so the calculation is unnecessary. You probably do need to perform that calculation if you're creating your cubes in code though.
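If you do build the cube (or any mesh) in code, the per-face normals can be computed from the index buffer with that same cross-product idea. Here is a small sketch using plain float[3] vectors (written in Java for brevity; in XNA the two key steps are exactly Vector3.Cross and Vector3.Normalize):
// Sketch: one normal per triangle of an indexed triangle list.
static float[][] faceNormals(float[][] positions, int[] indices) {
    float[][] normals = new float[indices.length / 3][];
    for (int t = 0; t < indices.length; t += 3) {
        float[] a = positions[indices[t]];
        float[] b = positions[indices[t + 1]];
        float[] c = positions[indices[t + 2]];
        // Edge vectors b - a and c - a
        float[] e1 = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        float[] e2 = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
        // Their cross product is perpendicular to the triangle
        float nx = e1[1] * e2[2] - e1[2] * e2[1];
        float ny = e1[2] * e2[0] - e1[0] * e2[2];
        float nz = e1[0] * e2[1] - e1[1] * e2[0];
        float len = (float) Math.sqrt(nx * nx + ny * ny + nz * nz);
        normals[t / 3] = new float[] { nx / len, ny / len, nz / len };
    }
    return normals;
}
For the flat-shaded cube described above, each of the 24 vertices then simply takes the normal of the face it belongs to.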

BlackBerry - image 3D transform

I know how to rotate an image by any angle with drawTexturedPath:
int displayWidth = Display.getWidth();
int displayHeight = Display.getHeight();
int[] x = new int[] { 0, displayWidth, displayWidth, 0 };
int[] y = new int[] { 0, 0, displayHeight, displayHeight };
int angle = Fixed32.toFP( 45 );
int dux = Fixed32.cosd(angle );
int dvx = -Fixed32.sind( angle );
int duy = Fixed32.sind( angle );
int dvy = Fixed32.cosd( angle );
graphics.drawTexturedPath( x, y, null, null, 0, 0, dvx, dux, dvy, duy, image);
but what I need is a 3D projection of a simple image with a 3D transformation (something like this)
Can you please advise me how to do this with drawTexturedPath (I'm almost sure it's possible)?
Are there any alternatives?
The method used by this function (two walk vectors) is the same as the old-school coding trick used for the famous 'rotozoomer' effect. rotozoomer example video
This method is a very fast way to rotate, zoom, and skew an image. The rotation is done simply by rotating the walk vectors. The zooming is done simply by scaling the walk vectors. The skewing is done by rotating the walk vectors with respect to one another (e.g. so they no longer make a 90 degree angle).
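To make that concrete, here is a small sketch building on the question's own snippet. Treating the walk vectors as per-screen-pixel steps through the texture, rotation comes from cosd/sind and zooming from dividing the vectors by a zoom factor (the zoom handling here is my own assumption, shown only to illustrate the idea):
int angle = Fixed32.toFP(45);   // rotation in degrees, fixed point
int zoom  = Fixed32.toFP(2);    // 2x zoom-in (illustrative)

// Rotate the walk vectors, then shorten them: a smaller step through the
// texture per screen pixel makes the image appear larger (zoomed in).
int dux = Fixed32.div(Fixed32.cosd(angle), zoom);
int dvx = Fixed32.div(-Fixed32.sind(angle), zoom);
int duy = Fixed32.div(Fixed32.sind(angle), zoom);
int dvy = Fixed32.div(Fixed32.cosd(angle), zoom);

graphics.drawTexturedPath(x, y, null, null, 0, 0, dvx, dux, dvy, duy, image);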
Nintendo built hardware into the SNES to apply the same effect to any of the sprites and/or backgrounds. This made way for some very cool effects.
One big shortcoming of this technique is that you cannot perspectively warp a texture. To do that, the walk vectors have to be changed slightly for every new horizontal line (hard to explain without a drawing).
On the SNES they overcame this by altering the walk vectors on every scanline (in those days one could set an interrupt when the monitor was drawing any scanline). This mode was later referred to as Mode 7 (since it behaved like a new virtual kind of graphics mode). The most famous games using this mode were Mario Kart and F-Zero.
So to get this working on the BlackBerry, you'll have to draw your image "displayHeight" times (i.e. one scanline of the image each time). This is the only way to achieve the desired effect. (This will undoubtedly cost you a performance hit, since you are now calling the drawTexturedPath function many times with new values instead of just once.)
I guess with a bit of googling you can find some formulas (or even an implementation) for how to calculate the varying walk vectors. With a bit of paper (given you're not too bad at math) you might deduce it yourself too. I've done it myself when I was making games for the Game Boy Advance, so I know it can be done.
Be sure to precalc everything! Speed is everything (especially on slow machines like phones)
EDIT: I did some googling for you. Here's a detailed explanation of how to create the Mode 7 effect; it will help you achieve the same with the BlackBerry function. Mode 7 implementation
With the following code you can skew your image and get a perspective like effect:
int displayWidth = Display.getWidth();
int displayHeight = Display.getHeight();
int[] x = new int[] { 0, displayWidth, displayWidth, 0 };
int[] y = new int[] { 0, 0, displayHeight, displayHeight };
int dux = Fixed32.toFP(-1);
int dvx = Fixed32.toFP(1);
int duy = Fixed32.toFP(1);
int dvy = Fixed32.toFP(0);
graphics.drawTexturedPath( x, y, null, null, 0, 0, dvx, dux, dvy, duy, image);
This will skew your image at a 45° angle; if you want a certain angle you just need to use some trigonometry to determine the lengths of your vectors.
Thanks for answers and guidance, +1 to you all.
Mode 7 was the way I chose to implement the 3D transformation, but unfortunately I couldn't get drawTexturedPath to resize my scanlines... so I fell back to a simple drawImage.
Assuming you have a Bitmap inBmp (the input texture), create a new Bitmap outBmp (the output texture).
Bitmap mInBmp = Bitmap.getBitmapResource("map.png");
int inHeight = mInBmp.getHeight();
int inWidth = mInBmp.getWidth();
int outHeight = 0;
int outWidth = 0;
int outDrawX = 0;
int outDrawY = 0;
Bitmap mOutBmp = null;
public Scr() {
    super();
    mOutBmp = getMode7YTransform();
    outWidth = mOutBmp.getWidth();
    outHeight = mOutBmp.getHeight();
    outDrawX = (Display.getWidth() - outWidth) / 2;
    outDrawY = Display.getHeight() - outHeight;
}
Somewhere in code create a Graphics outBmpGraphics for outBmp.
Then do the following in an iteration from the starting y up to (texture height) * (y transform factor):
1. Create a Bitmap lineBmp = new Bitmap(width, 1) for one line
2. Create a Graphics lineBmpGraphics from lineBmp
3. Paint line i from the texture to lineBmpGraphics
4. Encode lineBmp to an EncodedImage img
5. Scale img according to Mode 7
6. Paint img to outBmpGraphics
Note: Richard Puckett's PNGEncoder BB port used in my code
private Bitmap getMode7YTransform() {
    Bitmap outBmp = new Bitmap(inWidth, inHeight / 2);
    Graphics outBmpGraphics = new Graphics(outBmp);
    for (int i = 0; i < inHeight / 2; i++) {
        Bitmap lineBmp = new Bitmap(inWidth, 1);
        Graphics lineBmpGraphics = new Graphics(lineBmp);
        lineBmpGraphics.drawBitmap(0, 0, inWidth, 1, mInBmp, 0, 2 * i);
        PNGEncoder encoder = new PNGEncoder(lineBmp, true);
        byte[] data = null;
        try {
            data = encoder.encode(true);
        } catch (IOException e) {
            e.printStackTrace();
        }
        EncodedImage img = PNGEncodedImage.createEncodedImage(data, 0, -1);
        float xScaleFactor = ((float) (inHeight / 2 + i)) / (float) inHeight;
        img = scaleImage(img, xScaleFactor, 1);
        int startX = (inWidth - img.getScaledWidth()) / 2;
        int imgHeight = img.getScaledHeight();
        int imgWidth = img.getScaledWidth();
        outBmpGraphics.drawImage(startX, i, imgWidth, imgHeight, img, 0, 0, 0);
    }
    return outBmp;
}
Then just draw it in paint()
protected void paint(Graphics graphics) {
    graphics.drawBitmap(outDrawX, outDrawY, outWidth, outHeight, mOutBmp, 0, 0);
}
To scale, I did something similar to the method described in Resizing a Bitmap using .scaleImage32 instead of .setScale:
private EncodedImage scaleImage(EncodedImage image, float ratioX, float ratioY) {
    int currentWidthFixed32 = Fixed32.toFP(image.getWidth());
    int currentHeightFixed32 = Fixed32.toFP(image.getHeight());
    double w = (double) image.getWidth() * ratioX;
    double h = (double) image.getHeight() * ratioY;
    int width = (int) w;
    int height = (int) h;
    int requiredWidthFixed32 = Fixed32.toFP(width);
    int requiredHeightFixed32 = Fixed32.toFP(height);
    int scaleXFixed32 = Fixed32.div(currentWidthFixed32, requiredWidthFixed32);
    int scaleYFixed32 = Fixed32.div(currentHeightFixed32, requiredHeightFixed32);
    EncodedImage result = image.scaleImage32(scaleXFixed32, scaleYFixed32);
    return result;
}
See also
J2ME Mode 7 Floor Renderer - something much more detailed & exciting if you're writing a 3D game!
You want to do texture mapping, and that function won't cut it. Maybe you can kludge your way around it but the better option is to use a texture mapping algorithm.
This involves, for each row of pixels, determining the edges of the shape and where on the shape those screen pixels map to (the texture pixels). It's not so hard actually but may take a bit of work. And you'll be drawing the pic only once.
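As a rough illustration of the per-row idea, here is a sketch of the classic perspective "floor" mapping, where each screen row below the horizon samples one row of the texture at a perspective-correct distance (written in Java; the camera model and all names are assumptions for illustration, and a general textured-quad mapper additionally interpolates the left/right edges per row):
// Sketch: perspective floor mapping, one texture row per screen row.
// screen and tex are ARGB pixel arrays; texW and texH are powers of two
// so the bitwise AND wraps texture coordinates.
static void drawFloor(int[] screen, int screenW, int screenH,
                      int[] tex, int texW, int texH,
                      double camX, double camY, double camHeight,
                      double focal, int horizon) {
    for (int y = horizon + 1; y < screenH; y++) {
        // Distance of the floor strip visible on this scanline
        double dist = camHeight * focal / (y - horizon);
        int v = ((int) (camY + dist)) & (texH - 1);
        for (int x = 0; x < screenW; x++) {
            int u = ((int) (camX + (x - screenW / 2) * dist / focal)) & (texW - 1);
            screen[y * screenW + x] = tex[v * texW + u];
        }
    }
}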
GameDev has a bunch of articles with sourcecode here:
http://www.gamedev.net/reference/list.asp?categoryid=40#212
Wikipedia also has a nice article:
http://en.wikipedia.org/wiki/Texture_mapping
Another site with 3d tutorials:
http://tfpsly.free.fr/Docs/TomHammersley/index.html
In your place I'd seek out a simple demo program that does something close to what you want and use its source as a base to develop your own - or even find a portable source library; I'm sure there must be a few.
