How to Create RGB Bitmap Using Visual C++?

I've been given an assignment to create an RGB bitmap image, provided that the configuration values are given.
And I've been told to use Visual C++ along with OpenCV to create the image.
As I'm new to both Visual C++ and OpenCV, how do I use those tools to create a bitmap? Is there any tutorial that I can use?

This is a really broad question, because I have no idea where your image data is coming from. Are you reading in other image data and saving it to a bitmap? Are you transforming it somehow? If not, are you just programmatically filling in the pixels (i.e. creating a bitmap filled with a specific color)? I'll assume for a minute that the latter is the case.
// Create an empty 3-channel image (note OpenCV stores the channels in BGR order)
IplImage* img = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);

// Iterate over all of the rows of the image
for (int y = 0; y < 480; ++y)
{
    // Iterate over all of the columns of each row
    for (int x = 0; x < 640; ++x)
    {
        // Set each pixel to solid red
        ((uchar *)(img->imageData + y * img->widthStep))[x * img->nChannels + 0] = 0;   // B
        ((uchar *)(img->imageData + y * img->widthStep))[x * img->nChannels + 1] = 0;   // G
        ((uchar *)(img->imageData + y * img->widthStep))[x * img->nChannels + 2] = 255; // R
    }
}

// Save the image data as a bitmap (the .bmp extension selects the format)
cvSaveImage("ImAfraidICantLetYouDoThatDave.bmp", img);

// Clean up our memory
cvReleaseImage(&img);
I based all of this on code available at:
http://www.cs.iit.edu/~agam/cs512/lect-notes/opencv-intro/opencv-intro.html
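If you end up on a newer OpenCV release (2.0+), the C++ API makes this much shorter. A minimal sketch, assuming the goal is still a solid red 640x480 bitmap (the filename is arbitrary):
#include <opencv2/opencv.hpp>

int main()
{
    // cv::Mat stores pixels in BGR order, so solid red is (0, 0, 255).
    cv::Mat img(480, 640, CV_8UC3, cv::Scalar(0, 0, 255));

    // The .bmp extension tells imwrite to encode a Windows bitmap.
    cv::imwrite("red.bmp", img);
    return 0;
}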
Since you're new to Visual Studio, a fair warning: getting OpenCV set up under Windows is less than trivial. There are plenty of tutorials out there though, so I'm sure you can figure it out. I hope this is helpful.

Related

Custom filter bank is not generating the expected output

Please refer to this article.
I have implemented section 4.1 (Pre-processing).
The preprocessing step aims to enhance image features along a set of chosen directions. First, the image is grey-scaled and filtered with a sharpening filter (we subtract from the image its local-mean filtered version), thus eliminating the DC component.
We selected 12 non-overlapping filters, to analyze 12 different directions, rotated 15° with respect to each other.
Since the given formula in the article is incorrect, I have tried two different sets of formulas.
The first set of formulas:
The second set of formulas:
The expected output should be:
Neither of them gives proper results.
Can anyone suggest any modifications?
GitHub Repository is here.
The most relevant part of the source code is here:
public List<Bitmap> Apply(Bitmap bitmap)
{
    Kernels = new List<KassWitkinKernel>();
    double degrees = FilterAngle;
    KassWitkinKernel kernel;
    for (int i = 0; i < NoOfFilters; i++)
    {
        kernel = new KassWitkinKernel();
        kernel.Width = KernelDimension;
        kernel.Height = KernelDimension;
        kernel.CenterX = (kernel.Width) / 2;
        kernel.CenterY = (kernel.Height) / 2;
        kernel.Du = 2;
        kernel.Dv = 2;
        kernel.ThetaInRadian = Tools.DegreeToRadian(degrees);
        kernel.Compute();
        //SleuthEye
        kernel.Pad(kernel.Width, kernel.Height, WidthWithPadding, HeightWithPadding);
        Kernels.Add(kernel);
        degrees += degrees;
    }

    List<Bitmap> list = new List<Bitmap>();
    Bitmap image = (Bitmap)bitmap.Clone();
    //PictureBoxForm f = new PictureBoxForm(image);
    //f.ShowDialog();
    Complex[,] cImagePadded = ImageDataConverter.ToComplex(image);
    Complex[,] fftImage = FourierTransform.ForwardFFT(cImagePadded);
    foreach (KassWitkinKernel k in Kernels)
    {
        Complex[,] cKernelPadded = k.ToComplexPadded();
        Complex[,] convolved = Convolution.ConvolveInFrequencyDomain(fftImage, cKernelPadded);
        Bitmap temp = ImageDataConverter.ToBitmap(convolved);
        list.Add(temp);
    }
    return list;
}
Perhaps the first thing that should be mentioned is that the filters should be generated with angles which increase in FilterAngle (in your case 15 degree) increments. This can be accomplished by modifying KassWitkinFilterBank.Apply as follows (see this commit):
public List<Bitmap> Apply(Bitmap bitmap)
{
    // ...
    // The generated template filter from the equations gives a line at 45 degrees.
    // To get the filter to highlight lines starting with an angle of 90 degrees
    // we should start with an additional 45 degrees offset.
    double degrees = 45;
    KassWitkinKernel kernel;
    for (int i = 0; i < NoOfFilters; i++)
    {
        // ... setup filter (unchanged)

        // Now increment the angle by FilterAngle
        // (not "+= degrees", which doubles the value at each step)
        degrees += FilterAngle;
    }
    // ...
}
This should give you the following result:
It is not quite the result from the paper and the differences between the images are still quite subtle, but you should be able to notice that the scratch line is most intense in the 8th figure (as would be expected since the scratch angle is approximately 100-105 degrees).
To improve the result, we should feed the filters with a pre-processed image in the same way as described in the paper:
First, the image is grey-scaled and filtered with a sharpening filter (we subtract from the image its local-mean filtered version), thus eliminating the DC component
When you do so, you will get a matrix of values, some of which will be negative. As a result, this intermediate result is not suitable to be stored as a Bitmap. As a general rule when performing image processing, you should keep all intermediate results in double or Complex as appropriate, and only convert the final result back to Bitmap for visualization.
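To make the sharpening step concrete, here is a minimal sketch of "subtract the local-mean filtered version" operating on double[,] (the window size and the border handling are my own assumptions, not necessarily what the repository does):
public static double[,] Sharpen(double[,] image, int windowSize)
{
    int height = image.GetLength(0);
    int width = image.GetLength(1);
    int r = windowSize / 2;
    double[,] result = new double[height, width];
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            // Local mean over a windowSize x windowSize box, clamped at the borders.
            double sum = 0;
            int count = 0;
            for (int dy = -r; dy <= r; dy++)
            {
                for (int dx = -r; dx <= r; dx++)
                {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < height && xx >= 0 && xx < width)
                    {
                        sum += image[yy, xx];
                        count++;
                    }
                }
            }
            // Subtracting the local mean removes the DC component; the result
            // can be negative, which is exactly why double[,] is used here.
            result[y, x] = image[y, x] - sum / count;
        }
    }
    return result;
}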
Integrating your changes to add image sharpening from your GitHub repository while keeping intermediate results as doubles can be achieved by changing the input bitmap and temporary image variables to use the double[,] datatype instead of Bitmap in the KassWitkinFilterBank.Apply method (see this commit):
public List<Bitmap> Apply(double[,] bitmap)
{
    // [...]
    double[,] image = (double[,])bitmap.Clone();
    // [...]
}
which should give you the following result:
Or to better highlight the difference, here is figure 1 (0 degrees) on the left, next to figure 8 (105 degrees) on the right:

two meshes, same texture, different offset?

Using three.js, I'm working on a web page to display a flip cube (a.k.a. magic cube; see e.g. the video on this page).
On a flip cube, there are typically images that are spread out across multiple pieces of the cube. For example, the boat image shown above is spread across the faces of four cubelets. In three.js terms, there are multiple meshes that need to use the same image for their material texture, but each at a different offset.
As far as I understand it, in three.js, offset is a property of a texture, not of a material or a mesh. Therefore, it would appear that you cannot have a single texture that is used at a different offset in two different places.
So does that mean that in order to have different parts of the boat image shown on four different faces, I have to create four separate textures, meaning that we load the boat image into memory four times? I'm hoping that's not the case.
Here's a relevant piece of the code:
// create an array with the textures
var textureArray = [];
var texNames = ['boat', 'camels', 'elephants', 'hippo',
                'natpark', 'ostrich', 'coatofarms-w', 'kenyamap-w', 'nairobi-w'];
texNames.map(function(texName) {
    textureArray.push(THREE.ImageUtils.loadTexture(
        'images/256/' + texName + '.jpg' ));
});

// Create a material for each texture.
for (var x = 0; x <= 1; x++) {
    for (var y = 0; y <= 1; y++) {
        for (var z = 0; z <= 1; z++) {
            var materialArray = [];
            textureArray.map(function(tex) {
                // Learned: cannot set this offset for one material,
                // without it affecting all materials that use this texture.
                tex.offset.x = x * 0.2;
                tex.offset.y = y * 0.2;
                materialArray.push(new THREE.MeshBasicMaterial( { map: tex }));
            });
            var cubeMaterial = new THREE.MeshFaceMaterial(materialArray.slice(0, 6));
            var cube = new THREE.Mesh( cubeGeom, cubeMaterial );
            cube.position.set(x * 50 - 25, y * 50 - 25, z * 50 - 25);
            scene.add(cube);
        }
    }
}
If you look at it on http://www.huttar.net/lars-kathy/tmp/flipcube.html, you'll see that all the texture images are displayed offset by the same amount on each cubelet face, even though they are set to different offsets on different cubelets. This seems to confirm that you can't have different uses of the same texture with different offsets.
How can I get different meshes to use the same texture at different offsets, so I don't have to load the same image multiple times into multiple textures?
What you say is true. Instead of adjusting the texture offsets, adjust the face vertex UVs of the geometry.
EDIT: There is another solution more in line with what you want to do. You can clone an existing texture like so:
var tex2 = texture.clone();
Cloning a texture will result in the loaded image being reused, and the new texture can have its own offsets. Do not try to clone the texture until the image loads, however.
With this alternate approach, you do not have to adjust UVs, and you do not have to load an image more than once.
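For illustration, a minimal sketch against the r.58-era API (the callback form of loadTexture, the needsUpdate flag on the clone, and the 0.5 offset are illustrative, not code from the question):
var baseTex = THREE.ImageUtils.loadTexture('images/256/boat.jpg', undefined, function () {
    // Clone only after the image has loaded; the clone shares the same
    // image data but gets its own offset/repeat settings.
    var tex2 = baseTex.clone();
    tex2.needsUpdate = true; // required so the clone is uploaded to the GPU
    tex2.offset.x = 0.5;     // independent of baseTex.offset
    // ...build materials with baseTex and tex2 as usual.
});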
three.js r.58

Is it possible to calculate the width of an Excel column using .NET or the OpenXML framework, without using the System.Drawing objects Graphics and Bitmap?

I'm developing a class which allows users to create Excel spreadsheets on the fly (using the OpenXML API), and I need to calculate column widths so that they auto-fit the widest cell in the column.
I have the following code to calculate each column's width (using the formula from here and this tutorial):
private double CalculateColumnWidth(int textLength)
{
    var font = new System.Drawing.Font("Calibri", 11);
    float digitMaximumWidth = 0;
    // Dispose the scratch Bitmap as well as the Graphics.
    using (var bitmap = new Bitmap(200, 200))
    using (var graphics = Graphics.FromImage(bitmap))
    {
        for (var i = 0; i < 10; ++i)
        {
            var digitWidth = graphics.MeasureString(i.ToString(), font).Width;
            if (digitWidth > digitMaximumWidth)
                digitMaximumWidth = digitWidth;
        }
    }
    return Math.Truncate((textLength * digitMaximumWidth + 5.0) / digitMaximumWidth * 256.0) / 256.0;
}
This works fine; the only question is:
Is there any way to get rid of the Bitmap and Graphics objects, which I don't really need, to calculate the Excel column width? Why is the Graphics object necessary to do this?
Thanks in advance
"Column width measured as the number of characters of the maximum digit width of the numbers 0, 1, 2, …, 9 as rendered in the normal style's font. There are 4 pixels of margin padding (two on each side), plus 1 pixel padding for the gridlines.
Reference: http://msdn.microsoft.com/en-us/library/documentformat.openxml.spreadsheet.column.aspx
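To make the formula concrete: assuming a maximum digit width of 7 pixels, a column that must fit 10 characters would get width = Truncate((10 * 7 + 5) / 7 * 256) / 256 = Truncate(2742.86) / 256 = 2742 / 256 ≈ 10.71.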
You need to calculate the width of each digit 0-9 and determine which of those has the largest width. An easy way to accomplish this in .NET is to use MeasureString on a System.Drawing.Graphics; creating a Graphics requires a valid Bitmap or window handle. If your main process contains a window (i.e. you are a desktop Windows app), you can construct the Graphics object without a Bitmap using:
Graphics graphics = Graphics.FromHwnd(Process.GetCurrentProcess().MainWindowHandle);
It is also possible to use classes in System.Windows.Media (part of WPF); see: http://stackoverflow.com/questions/1528525/alternatives-to-system-drawing-for-use-with-asp-net
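For instance, a minimal sketch of measuring the widest digit with WPF's FormattedText, which needs neither Graphics nor Bitmap (assumes references to PresentationCore and WindowsBase; note WPF measures in device-independent units, so the result can differ slightly from GDI+ points):
using System.Globalization;
using System.Windows;
using System.Windows.Media;

private static double GetMaximumDigitWidth()
{
    var typeface = new Typeface("Calibri");
    double maximum = 0;
    for (int i = 0; i <= 9; i++)
    {
        // Measure each digit at size 11, mirroring the question's font.
        var formatted = new FormattedText(i.ToString(),
            CultureInfo.InvariantCulture, FlowDirection.LeftToRight,
            typeface, 11.0, Brushes.Black);
        if (formatted.Width > maximum)
            maximum = formatted.Width;
    }
    return maximum;
}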

Pattern Recognition for image comparison in .NET

Can anybody share code or an algorithm (using pattern recognition) for image comparison in .NET?
I need to compare two images of different resolutions and textures and find the difference. Right now I have code to find the difference between two images using C#:
// Load the images.
Bitmap bm1 = (Bitmap)Image.FromFile(txtFile1.Text);
Bitmap bm2 = (Bitmap)Image.FromFile(txtFile2.Text);

// Make a difference image covering the overlapping area.
int wid = Math.Min(bm1.Width, bm2.Width);
int hgt = Math.Min(bm1.Height, bm2.Height);
Bitmap bm3 = new Bitmap(wid, hgt);

// Create the difference image: matching pixels get eq_color,
// differing pixels get ne_color.
bool are_identical = true;
Color eq_color = Color.White;
Color ne_color = Color.Red;
for (int x = 0; x < wid; x++)
{
    for (int y = 0; y < hgt; y++)
    {
        if (bm1.GetPixel(x, y).Equals(bm2.GetPixel(x, y)))
        {
            bm3.SetPixel(x, y, eq_color);
        }
        else
        {
            bm3.SetPixel(x, y, ne_color);
            are_identical = false;
        }
    }
}

// Display the result, making the "equal" color transparent so only
// the differences stay visible.
picResult.Image = bm3;
Bitmap logo = new Bitmap(picResult.Image);
logo.MakeTransparent(logo.GetPixel(1, 1));
picResult.Image = (Image)logo;
//this.Cursor = Cursors.Default;

// Images of different sizes are never identical.
if ((bm1.Width != bm2.Width) || (bm1.Height != bm2.Height))
{
    are_identical = false;
}
if (are_identical)
{
    MessageBox.Show("The images are identical");
}
else
{
    MessageBox.Show("The images are different");
}

bm1.Dispose();
bm2.Dispose();
But this only compares correctly if the two images have the same resolution and size. And if there is a shadow on one image (even though the two images are otherwise the same), it reports differences between the images. So I am trying to compare using pattern recognition instead.
As nailxx said, there is no "100% working free code" or anything like that. Some years ago I helped implement a "face recognition" app, and one of the things we used was "Local binary patterns". It's not too easy, but it gave quite good results. Find a paper about it here:
Local binary patterns
Edit: I'm afraid I can't find the paper that I used back in those days; it was shorter and focused on the LBP itself rather than on how to use it with textures.
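To give a rough idea of the basic operator, here is a minimal 3x3 LBP sketch (illustrative only; it assumes a grayscale input so the R channel equals intensity, and GetPixel is far too slow for production use):
static int[] LbpHistogram(Bitmap gray)
{
    // One bin per possible 8-bit LBP code; the histograms of two images
    // can then be compared (e.g. with a chi-square distance).
    int[] histogram = new int[256];
    int[,] offsets = { {-1,-1}, {0,-1}, {1,-1}, {1,0}, {1,1}, {0,1}, {-1,1}, {-1,0} };
    for (int y = 1; y < gray.Height - 1; y++)
    {
        for (int x = 1; x < gray.Width - 1; x++)
        {
            int center = gray.GetPixel(x, y).R;
            int code = 0;
            // Set one bit per neighbor that is at least as bright as the center.
            for (int i = 0; i < 8; i++)
            {
                if (gray.GetPixel(x + offsets[i, 0], y + offsets[i, 1]).R >= center)
                    code |= 1 << i;
            }
            histogram[code]++;
        }
    }
    return histogram;
}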
Your request is a really complex scientific task (not even an engineering one).
The basic obvious algorithm is the following:
1. Somehow select all objects in both images being compared. This part is relatively simple and can be solved in many ways.
2. Compare all objects. This part is a task for scientists, considering the fact that objects can be shifted, rotated, resized, and so on. :)
However, the comparison can be solved in the case where you have a fixed number of entities to recognize, like "circle", "triangle", "rectangle", "line".

BlackBerry - image 3D transform

I know how to rotate an image by any angle with drawTexturedPath:
int displayWidth = Display.getWidth();
int displayHeight = Display.getHeight();
int[] x = new int[] { 0, displayWidth, displayWidth, 0 };
int[] y = new int[] { 0, 0, displayHeight, displayHeight };
int angle = Fixed32.toFP( 45 );
int dux = Fixed32.cosd(angle );
int dvx = -Fixed32.sind( angle );
int duy = Fixed32.sind( angle );
int dvy = Fixed32.cosd( angle );
graphics.drawTexturedPath( x, y, null, null, 0, 0, dvx, dux, dvy, duy, image);
but what I need is a 3D projection of a simple image with a 3D transformation (something like this).
Can you please advise me how to do this with drawTexturedPath (I'm almost sure it's possible)?
Are there any alternatives?
The method used by this function (two walk vectors) is the same as the old-school coding trick used for the famous 'rotozoomer' effect. rotozoomer example video
This method is a very fast way to rotate, zoom, and skew an image. The rotation is done simply by rotating the walk vectors. The zooming is done simply by scaling the walk vectors. The skewing is done by rotating the walkvectors in respect to one another (e.g. they don't make a 90 degree angle anymore).
Nintendo had made hardware in their SNES to use the same effect on any of the sprites and or backgrounds. This made way for some very cool effects.
One big shortcoming of this technique is that one cannot perspectively warp a texture. To do this, the walk vectors should be changed slightly for every new horizontal line (hard to explain without a drawing).
On the SNES they overcame this by altering the walk vectors on every scanline (in those days one could set an interrupt that fired when the monitor was drawing any scanline). This mode was later referred to as MODE 7 (since it behaved like a new virtual kind of graphics mode). The most famous games using this mode were Mario Kart and F-Zero.
So to get this working on the BlackBerry, you'll have to draw your image "displayHeight" times (i.e. one scanline of the image at a time). This is the only way to achieve the desired effect. (This will undoubtedly cost you a performance hit, since you are now calling the drawTexturedPath function many times with new values instead of just once.)
I guess with a bit of googling you can find some formulas (or even an implementation) for how to calculate the varying walk vectors. With a bit of paper (given you're not too bad at math) you might deduce it yourself too. I've done it myself when I was making games for the Game Boy Advance, so I know it can be done.
Be sure to precalculate everything! Speed is everything (especially on slow machines like phones).
EDIT: I did some googling for you. Here's a detailed explanation of how to create the MODE 7 effect; it will help you achieve the same with the BlackBerry function: Mode 7 implementation
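To sketch what "changing the walk vectors every scanline" might look like with this API (untested, and the perspective factor below is a crude placeholder rather than a properly derived MODE 7 formula):
int displayWidth = Display.getWidth();
int displayHeight = Display.getHeight();
for (int line = 0; line < displayHeight; line++) {
    // One-pixel-high strip of the screen for this scanline.
    int[] rowX = new int[] { 0, displayWidth, displayWidth, 0 };
    int[] rowY = new int[] { line, line, line + 1, line + 1 };
    // Placeholder: rows near the top are "far away", so they step
    // through the texture faster (coarser sampling).
    int scale = Fixed32.div(Fixed32.toFP(displayHeight), Fixed32.toFP(line + 1));
    int dux = scale;           // horizontal texture step along the row
    int dvx = 0;
    int duy = 0;
    int dvy = Fixed32.toFP(1); // advance one texture row per screen row
    graphics.drawTexturedPath(rowX, rowY, null, null, 0, 0,
            dvx, dux, dvy, duy, image);
}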
With the following code you can skew your image and get a perspective-like effect:
int displayWidth = Display.getWidth();
int displayHeight = Display.getHeight();
int[] x = new int[] { 0, displayWidth, displayWidth, 0 };
int[] y = new int[] { 0, 0, displayHeight, displayHeight };
int dux = Fixed32.toFP(-1);
int dvx = Fixed32.toFP(1);
int duy = Fixed32.toFP(1);
int dvy = Fixed32.toFP(0);
graphics.drawTexturedPath( x, y, null, null, 0, 0, dvx, dux, dvy, duy, image);
This will skew your image at a 45° angle; if you want a specific angle, you just need to use some trigonometry to determine the lengths of your vectors.
Thanks for the answers and guidance, +1 to you all.
MODE 7 was the way I chose to implement the 3D transformation, but unfortunately I couldn't make drawTexturedPath resize my scanlines... so I came down to a simple drawImage.
Assuming you have a Bitmap inBmp (input texture), create a new Bitmap outBmp (output texture).
Bitmap mInBmp = Bitmap.getBitmapResource("map.png");
int inHeight = mInBmp.getHeight();
int inWidth = mInBmp.getWidth();
int outHeight = 0;
int outWidth = 0;
int outDrawX = 0;
int outDrawY = 0;
Bitmap mOutBmp = null;

public Scr() {
    super();
    mOutBmp = getMode7YTransform();
    outWidth = mOutBmp.getWidth();
    outHeight = mOutBmp.getHeight();
    outDrawX = (Display.getWidth() - outWidth) / 2;
    outDrawY = Display.getHeight() - outHeight;
}
Somewhere in the code, create a Graphics outBmpGraphics for outBmp.
Then do the following in an iteration from the starting y to (texture height) * (y transform factor):
1. Create a Bitmap lineBmp = new Bitmap(width, 1) for one line.
2. Create a Graphics lineBmpGraphics from lineBmp.
3. Paint line i from the texture to lineBmpGraphics.
4. Encode lineBmp to an EncodedImage img.
5. Scale img according to MODE 7.
6. Paint img to outBmpGraphics.
Note: Richard Puckett's PNGEncoder BlackBerry port is used in my code.
private Bitmap getMode7YTransform() {
    Bitmap outBmp = new Bitmap(inWidth, inHeight / 2);
    Graphics outBmpGraphics = new Graphics(outBmp);
    for (int i = 0; i < inHeight / 2; i++) {
        Bitmap lineBmp = new Bitmap(inWidth, 1);
        Graphics lineBmpGraphics = new Graphics(lineBmp);
        lineBmpGraphics.drawBitmap(0, 0, inWidth, 1, mInBmp, 0, 2 * i);
        PNGEncoder encoder = new PNGEncoder(lineBmp, true);
        byte[] data = null;
        try {
            data = encoder.encode(true);
        } catch (IOException e) {
            e.printStackTrace();
        }
        EncodedImage img = PNGEncodedImage.createEncodedImage(data, 0, -1);
        float xScaleFactor = ((float) (inHeight / 2 + i)) / (float) inHeight;
        img = scaleImage(img, xScaleFactor, 1);
        int startX = (inWidth - img.getScaledWidth()) / 2;
        int imgHeight = img.getScaledHeight();
        int imgWidth = img.getScaledWidth();
        outBmpGraphics.drawImage(startX, i, imgWidth, imgHeight, img, 0, 0, 0);
    }
    return outBmp;
}
Then just draw it in paint()
protected void paint(Graphics graphics) {
    graphics.drawBitmap(outDrawX, outDrawY, outWidth, outHeight, mOutBmp, 0, 0);
}
To scale, I do something similar to the method described in Resizing a Bitmap using .scaleImage32 instead of .setScale:
private EncodedImage scaleImage(EncodedImage image, float ratioX, float ratioY) {
    int currentWidthFixed32 = Fixed32.toFP(image.getWidth());
    int currentHeightFixed32 = Fixed32.toFP(image.getHeight());
    double w = (double) image.getWidth() * ratioX;
    double h = (double) image.getHeight() * ratioY;
    int width = (int) w;
    int height = (int) h;
    int requiredWidthFixed32 = Fixed32.toFP(width);
    int requiredHeightFixed32 = Fixed32.toFP(height);
    int scaleXFixed32 = Fixed32.div(currentWidthFixed32, requiredWidthFixed32);
    int scaleYFixed32 = Fixed32.div(currentHeightFixed32, requiredHeightFixed32);
    EncodedImage result = image.scaleImage32(scaleXFixed32, scaleYFixed32);
    return result;
}
See also
J2ME Mode 7 Floor Renderer - something much more detailed & exciting if you're writing a 3D game!
You want to do texture mapping, and that function won't cut it. Maybe you can kludge your way around it, but the better option is to use a texture-mapping algorithm.
This involves, for each row of pixels, determining the edges of the shape and where on the shape those screen pixels map to (the texture pixels). It's not so hard, actually, but may take a bit of work. And you'll be drawing the picture only once.
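As a sketch of that per-row inner loop (my own illustrative code: it assumes 32-bit pixel arrays, xRight > xLeft, and that the edge interpolation supplying xLeft/xRight and the u/v endpoints happens elsewhere):
// Affine-map one horizontal span: step linearly through the texture
// between the (u, v) coordinates at the two ends of the span.
static void drawSpan(int[] dst, int dstWidth, int sy,
                     int xLeft, int xRight,
                     float uLeft, float vLeft, float uRight, float vRight,
                     int[] tex, int texWidth) {
    if (xRight <= xLeft) return; // degenerate span
    float du = (uRight - uLeft) / (xRight - xLeft);
    float dv = (vRight - vLeft) / (xRight - xLeft);
    float u = uLeft, v = vLeft;
    for (int sx = xLeft; sx <= xRight; sx++) {
        dst[sy * dstWidth + sx] = tex[(int) v * texWidth + (int) u];
        u += du;
        v += dv;
    }
}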
GameDev has a bunch of articles with sourcecode here:
http://www.gamedev.net/reference/list.asp?categoryid=40#212
Wikipedia also has a nice article:
http://en.wikipedia.org/wiki/Texture_mapping
Another site with 3d tutorials:
http://tfpsly.free.fr/Docs/TomHammersley/index.html
In your place I'd seek out a simple demo program that does something close to what you want and use its source as a base to develop my own - or even find a portable source library; I'm sure there must be a few.
