I use the following code to display OpenCV's IplImage. It works well for RGB images, but for a grayscale image (only one channel) the image is drawn upside down (inverted).
if( m_img && m_img->depth == IPL_DEPTH_8U )
{
    uchar buffer[sizeof(BITMAPINFOHEADER) + 1024];
    BITMAPINFO* bmi = (BITMAPINFO*)buffer;
    int bmp_w = m_img->width, bmp_h = m_img->height;

    FillBitmapInfo( bmi, bmp_w, bmp_h, Bpp(), m_img->origin );

    from_x = MIN( MAX( from_x, 0 ), bmp_w - 1 );
    from_y = MIN( MAX( from_y, 0 ), bmp_h - 1 );

    int sw = MAX( MIN( bmp_w - from_x, w ), 0 );
    int sh = MAX( MIN( bmp_h - from_y, h ), 0 );

    SetDIBitsToDevice(
        dc, x, y, sw, sh, from_x, from_y, from_y, sh,
        m_img->imageData + from_y*m_img->widthStep,
        bmi, DIB_RGB_COLORS );
}
I looked at the SetDIBitsToDevice documentation (linked here) and it appears that lpvBits is expected to be RGB data. How can I modify this for a grayscale image?
Thanks
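A likely cause of the flip: in OpenCV's Win32 highgui, FillBitmapInfo fills a 256-entry grayscale palette when Bpp() returns 8, and sets biHeight negative for top-down images (IplImage origin == 0). If either part is missing, 8-bit images come out inverted or miscolored. Here is that logic modeled as a Python sketch (the function name and return shape are illustrative, not a real API):

```python
def fill_bitmap_info(width, height, bpp, origin):
    """Model of the two BITMAPINFOHEADER details that matter here.

    For 8-bit images a 256-entry grayscale palette must follow the
    header, and biHeight must be negative for a top-down DIB
    (origin == 0 in IplImage terms) or positive for bottom-up.
    """
    assert bpp in (8, 24, 32)
    # top-down DIB when origin == 0, bottom-up when origin == 1
    bi_height = abs(height) if origin else -abs(height)
    palette = []
    if bpp == 8:
        # identity gray ramp: palette index i maps to RGB (i, i, i)
        palette = [(i, i, i) for i in range(256)]
    return bi_height, palette
```

If your FillBitmapInfo omits the sign flip (or the palette) for the 8-bit case, grayscale frames will show exactly the symptom described, while 24-bit RGB frames still look fine.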
Use GDI+:
Load the colored Bitmap.
Create a ColorMatrix with 30% red, 59% green, 11% blue.
Create an ImageAttributes.
Set the ColorMatrix into the ImageAttributes.
Create a new bitmap.
Use DrawImage with the ImageAttributes to draw the colored bitmap into the new bitmap.
There are a bunch of C# code samples around, but the GDI+ interface in C++ is nearly identical.
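The ColorMatrix in those steps just applies the classic luma weights to every pixel. A minimal sketch of the arithmetic and of the 5x5 matrix layout (pure Python, for illustration only):

```python
def to_gray(r, g, b):
    """Classic luma conversion: 30% red, 59% green, 11% blue."""
    return 0.30 * r + 0.59 * g + 0.11 * b

def gray_color_matrix():
    """5x5 GDI+-style ColorMatrix that writes the luma into R, G and B.

    Rows are the input channels (R, G, B, A, W); each of the first
    three output columns receives the same weighted sum, so the
    result is gray.
    """
    w = (0.30, 0.59, 0.11)
    return [
        [w[0], w[0], w[0], 0, 0],
        [w[1], w[1], w[1], 0, 0],
        [w[2], w[2], w[2], 0, 0],
        [0,    0,    0,    1, 0],
        [0,    0,    0,    0, 1],
    ]
```

The weights sum to 1, so white stays white and black stays black after the conversion.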
Related
I'm trying to get the pixels of an ellipse from an image.
For example, I draw an ellipse on a random image (sample geeksforgeeks code):
import cv2
path = r'C:\Users\Rajnish\Desktop\geeksforgeeks\geeks.png'
image = cv2.imread(path)
window_name = 'Image'
center_coordinates = (120, 100)
axesLength = (100, 50)
angle = 0
startAngle = 0
endAngle = 360
color = (0, 0, 255)
thickness = 5
image = cv2.ellipse(image, center_coordinates, axesLength,
angle, startAngle, endAngle, color, thickness)
cv2.imshow(window_name, image)
It gives output like below:
Now I want to get the pixel values along the boundary line of the ellipse. If possible, I would like to get the pixels drawn by cv2.ellipse() back as an array of coordinates.
Can anyone help me with this, please?
There is probably no direct OpenCV way to get these points of the ellipse, but you can extract them indirectly like this:
import numpy as np

mask = cv2.inRange(image, np.array(color), np.array(color))
contour = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2][0]
contour will store the outer points of your red ellipse.
Here, what I have done is create a mask image of the ellipse and find the outermost contour's points, which is the required result.
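The mask step of this answer can be sanity-checked with plain NumPy, without contour tracing: collect the coordinates of every pixel that exactly matches the drawing color. The image contents below are illustrative test data, not the question's actual image.

```python
import numpy as np

def ellipse_boundary_pixels(image, color):
    """Return (row, col) coordinates of pixels exactly matching color.

    This is the mask step of the answer above; cv2.findContours then
    orders the outer boundary of that mask into a contour.
    """
    mask = np.all(image == np.asarray(color), axis=-1)
    return np.argwhere(mask)

# tiny example: a 4x4 BGR image with two red (0, 0, 255) pixels
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1, 2] = (0, 0, 255)
img[3, 0] = (0, 0, 255)
pts = ellipse_boundary_pixels(img, (0, 0, 255))
```

Note that thick ellipse outlines (thickness=5 in the question) produce a band of matching pixels, which is why taking the external contour of the mask is useful: it reduces the band to its outer boundary.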
If you want to obtain points (locations) on an ellipse, you can use the ellipse2Poly() function.
If ellipse2Poly()'s argument types are inconvenient, calculating the points yourself is straightforward.
This sample code is C++, but the calculation itself should be clear.
//Degree -> Radian
inline double RadFromDeg( double Deg ){ return CV_PI*Deg/180.0; }
//Just calculate points mathematically.
// Arguments are the same as cv::ellipse2Poly (although the ellipse parameters are a cv::RotatedRect).
void My_ellipse2Poly(
const cv::RotatedRect &EllipseParam,
double StartAngle_deg,
double EndAngle_deg,
double DeltaAngle_deg,
std::vector< cv::Point2d > &DstPoints
)
{
double Cos,Sin;
{
double EllipseAngleRad = RadFromDeg(EllipseParam.angle);
Cos = cos( EllipseAngleRad );
Sin = sin( EllipseAngleRad );
}
//Here you could reserve the destination vector's size; omitted in this sample.
DstPoints.clear();
const double HalfW = EllipseParam.size.width * 0.5;
const double HalfH = EllipseParam.size.height * 0.5;
for( double deg=StartAngle_deg; deg<EndAngle_deg; deg+=DeltaAngle_deg )
{
double rad = RadFromDeg( deg );
double u = cos(rad) * HalfW;
double v = sin(rad) * HalfH;
double x = u*Cos + v*Sin + EllipseParam.center.x;
double y = u*Sin - v*Cos + EllipseParam.center.y;
DstPoints.emplace_back( x,y );
}
}
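For reference, here is the same parametric calculation in Python (function and variable names are mine). I use the standard rotation signs here, as cv::ellipse2Poly does; the C++ above flips the sign of v, which traces the same full ellipse in the opposite direction, so for a 0-360 sweep the point sets agree.

```python
import math

def ellipse_points(center, axes, angle_deg, start_deg, end_deg, delta_deg):
    """Sample points on a rotated ellipse, mirroring My_ellipse2Poly.

    center: (cx, cy); axes: (full_width, full_height); angles in degrees,
    matching cv2.ellipse's conventions (image y grows downward).
    """
    cx, cy = center
    half_w, half_h = axes[0] * 0.5, axes[1] * 0.5
    cos_a = math.cos(math.radians(angle_deg))
    sin_a = math.sin(math.radians(angle_deg))
    pts = []
    deg = start_deg
    while deg < end_deg:
        t = math.radians(deg)
        u = math.cos(t) * half_w   # point on the axis-aligned ellipse
        v = math.sin(t) * half_h
        # rotate by the ellipse angle, then translate to the center
        pts.append((u * cos_a - v * sin_a + cx,
                    u * sin_a + v * cos_a + cy))
        deg += delta_deg
    return pts
```

With the question's parameters (center (120, 100), axes (100, 50) as half-lengths, i.e. (200, 100) full), a 90-degree step yields the four extreme points of the ellipse.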
I am using Processing with the Minim library, trying to create a 3D real-time visualisation of live audio input.
I have boxes drawn that respond to the kick, snare, and hi-hats of the audio input. I would like these boxes to also rotate in response to the kick etc. How can I make these boxes rotate?
if ( beat.isKick() ) kickSize = 200;
if ( beat.isSnare() ) snareSize = 250;
if ( beat.isHat() ) hatSize = 200;
translate ( width/4, height/4);
box(kickSize);
translate( - width/4, - height/4);
translate ( width/2, height/3);
sphere(snareSize);
translate( - width/2, - height/3);
translate ( 3*width/4, height/4);
box(hatSize);
translate( - 3*width/4, - height/4);
kickSize = constrain(kickSize * 0.95, 1, 32);
snareSize = constrain(snareSize * 0.95, 1, 32);
hatSize = constrain(hatSize * 0.95, 1, 32);
Use pushMatrix()/popMatrix() calls to isolate the coordinate system for each object:
pushMatrix();
translate ( width/4, height/4);
box(kickSize);
popMatrix();
pushMatrix();
translate ( width/2, height/3);
sphere(snareSize);
popMatrix();
pushMatrix();
translate ( 3*width/4, height/4);
box(hatSize);
popMatrix();
Have a look at the 2D transformations Processing tutorial for more details.
The same principle applies to 3D.
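For the rotation itself, the usual pattern is to keep a rotation state per shape, bump its speed on each beat, and let it decay, exactly as the sketch already does with kickSize. In the Processing sketch you would call rotateY(kickAngle) between pushMatrix() and box(). The per-frame state update can be sketched in Python (class and field names are illustrative):

```python
class BeatRotation:
    """Per-shape rotation state: speed jumps on a beat, then decays."""

    def __init__(self, kick_speed=0.3, decay=0.95):
        self.angle = 0.0
        self.speed = 0.0
        self.kick_speed = kick_speed
        self.decay = decay

    def on_beat(self):
        # analogous to "if ( beat.isKick() ) kickSize = 200;"
        self.speed = self.kick_speed

    def update(self):
        # called once per frame, like the constrain() lines above
        self.angle += self.speed
        self.speed *= self.decay
        return self.angle

rot = BeatRotation()
rot.on_beat()
for _ in range(10):
    rot.update()
```

Because the angle keeps accumulating while the speed decays, each beat gives the box a spin burst that eases off, instead of a jarring fixed-angle jump.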
I would like to draw a sphere in pure OpenGL ES 2.0, without any engine. I wrote the following code:
int GenerateSphere (int Slices, float radius, GLfloat **vertices, GLfloat **colors) {
srand(time(NULL));
int i=0, j = 0;
int Parallels = Slices ;
float tempColor = 0.0f;
int VerticesCount = ( Parallels + 1 ) * ( Slices + 1 );
float angleStep = (2.0f * M_PI) / ((float) Slices);
// Allocate memory for buffers
if ( vertices != NULL ) {
*vertices = malloc ( sizeof(GLfloat) * 3 * VerticesCount );
}
if ( colors != NULL) {
*colors = malloc( sizeof(GLfloat) * 4 * VerticesCount);
}
for ( i = 0; i < Parallels+1; i++ ) {
for ( j = 0; j < Slices+1 ; j++ ) {
int vertex = ( i * (Slices + 1) + j ) * 3;
(*vertices)[vertex + 0] = radius * sinf ( angleStep * (float)i ) *
sinf ( angleStep * (float)j );
(*vertices)[vertex + 1] = radius * cosf ( angleStep * (float)i );
(*vertices)[vertex + 2] = radius * sinf ( angleStep * (float)i ) *
cosf ( angleStep * (float)j );
if ( colors ) {
int colorIndex = ( i * (Slices + 1) + j ) * 4;
tempColor = (float)(rand()%100)/100.0f;
(*colors)[colorIndex + 0] = 0.0f;
(*colors)[colorIndex + 1] = 0.0f;
(*colors)[colorIndex + 2] = 0.0f;
(*colors)[colorIndex + (rand()%4)] = tempColor;
(*colors)[colorIndex + 3] = 1.0f;
}
}
}
return VerticesCount;
}
I'm drawing it using the following code:
glDrawArrays(GL_TRIANGLE_STRIP, 0, userData->numVertices);
Here userData->numVertices is the VerticesCount returned by GenerateSphere.
But what appears on screen is a series of triangles, not a sphere approximation!
I think I need to enumerate the vertices and use glDrawElements() with an index array, but I don't know how to build it.
How can I draw a sphere approximation? How do I specify the vertex order (the indices, in OpenGL ES 2.0 terms)?
Before you start with anything in OpenGL ES, here is some advice:
Avoid burning CPU/GPU cycles unnecessarily.
Removing intense calculation loops by generating the shapes offline in another program will surely help. Such programs also provide additional details about the shapes/meshes beyond the exported collection of [x, y, z] points that make them up.
I went through all this pain a while back, because I kept searching for algorithms to render spheres and then trying to optimize them. I just want to save you time. Just use Blender, then parse the .obj files it exports with your favorite programming language (I use Perl). Here are the steps to render a sphere (use glDrawElements, because the obj file contains the array of indices):
1) Download and install Blender.
2) From the menu, add sphere and then reduce the number of rings and segments.
3) Select the entire shape and triangulate it.
4) Export an obj file and parse it for the meshes.
You should be able to grasp the logic to render sphere from this file: http://pastebin.com/4esQdVPP. It is for Android, but the concepts are same.
Hope this helps.
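If you do want to generate the indices yourself instead of exporting from Blender, the vertex grid built by GenerateSphere maps to two triangles per grid cell. A sketch of the index generation (Python, mirroring GenerateSphere's (Parallels+1) x (Slices+1) row-major vertex layout):

```python
def sphere_indices(parallels, slices):
    """Index buffer for glDrawElements(GL_TRIANGLES, ...).

    Vertices are laid out row-major as in GenerateSphere:
    vertex(i, j) = i * (slices + 1) + j.
    Each grid cell (i, j) becomes two triangles.
    """
    idx = []
    for i in range(parallels):
        for j in range(slices):
            a = i * (slices + 1) + j          # top-left of the cell
            b = (i + 1) * (slices + 1) + j    # bottom-left of the cell
            idx += [a, b, a + 1,              # first triangle
                    a + 1, b, b + 1]          # second triangle
    return idx
```

With such an index array, glDrawElements(GL_TRIANGLES, ...) draws the grid as a closed surface, instead of the stray triangles that GL_TRIANGLE_STRIP produces when the vertex order does not form a valid strip.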
I struggled with spheres and other geometric shapes too. After working at it a while, I created an Objective-C class that generates coordinates, normals, and texture coordinates, using both indexed and non-indexed mechanisms; the class is here:
http://www.whynotsometime.com/Why_Not_Sometime/Code_Snippets.html
What is interesting, for seeing the triangles that make up the geometry, is to reduce the resolution (set the resolution property before generating the coordinates). You can also use GL_LINE_STRIP instead of GL_TRIANGLES to see a bit more.
I agree with wimp's comment that calculating the coordinates generally happens only once, so not many CPU cycles are used. And sometimes one does want to draw only a ball, or a world, or...
I have a problem with my canvas.
My canvas initially has width = 1300, height = 500.
Then I resize it to width = 800, height = 500.
I tried setViewBox to zoom it, but the mouse position no longer lines up with the elements when I drag them.
#canvas.resize(800, 500)
#canvas.setViewBox(0, 0, ??, ??)
How do I calculate these values?
Thanks for your help. :)
You can calculate the necessary dimensions using an approach like this:
function recalculateViewBox( canvas )
{
var max_x = 0, max_y = 0;
canvas.forEach( function( el )
{
var box = el.getBBox();
max_x = Math.max( max_x, box.x2 );
max_y = Math.max( max_y, box.y2 );
} );
if ( max_x && max_y )
canvas.setViewBox( 0, 0, max_x, max_y );
}
Essentially, you simply walk the contents of the canvas and construct a meta-bounding box, then adjust the viewBox to it.
If you wanted to get a little fancy, you could always animate the viewBox so that it transitions fluidly to its new size. Not functionally important, but moderately sexy...
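The bounding-box union is easy to sanity-check outside Raphael; in this sketch the (x, y, x2, y2) tuples play the role of getBBox() results, and the data is illustrative:

```python
def union_viewbox(boxes):
    """Compute a (0, 0, w, h) viewBox covering all element boxes.

    Each box is (x, y, x2, y2), like Raphael's getBBox() fields.
    """
    max_x = max((b[2] for b in boxes), default=0)
    max_y = max((b[3] for b in boxes), default=0)
    return (0, 0, max_x, max_y)
```

For the original question, if nothing on the canvas has moved, the answer is simply canvas.setViewBox(0, 0, 1300, 500): the viewBox keeps the drawing's original coordinate system while the resized 800x500 canvas scales it down, and mouse coordinates stay consistent with element coordinates.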
I have a problem with my program, written in Visual C++ using OpenCV:
I have to capture frames from a webcam and find all the various rectangles in them (the color doesn't matter).
I tried to modify the C sample squares.c, but it doesn't work well, because the program waits for a key press (anything other than 'q') to continue.
This is the code. Can someone tell me where the problem is?
Thank you in advance.
//
// Object Detection of squares
// Take images from webcam and find the square in them
//
//
#include "stdafx.h"
#include <stdio.h>
#include <math.h>
#include <string.h>
int thresh = 50;
IplImage* img = 0;
IplImage* img0 = 0;
CvMemStorage* storage = 0;
//const char* wndname = "Square Detection Demo with Webcam";
// helper function:
// finds a cosine of angle between vectors
// from pt0->pt1 and from pt0->pt2
double angle( CvPoint* pt1, CvPoint* pt2, CvPoint* pt0 )
{
double dx1 = pt1->x - pt0->x;
double dy1 = pt1->y - pt0->y;
double dx2 = pt2->x - pt0->x;
double dy2 = pt2->y - pt0->y;
return (dx1*dx2 + dy1*dy2)/sqrt((dx1*dx1 + dy1*dy1)*(dx2*dx2 + dy2*dy2) + 1e-10);
}
// returns sequence of squares detected on the image.
// the sequence is stored in the specified memory storage
CvSeq* findSquares4( IplImage* img, CvMemStorage* storage )
{
CvSeq* contours;
int i, c, l, N = 11;
CvSize sz = cvSize( img->width & -2, img->height & -2 );
IplImage* timg = cvCloneImage( img ); // make a copy of input image
IplImage* gray = cvCreateImage( sz, 8, 1 );
IplImage* pyr = cvCreateImage( cvSize(sz.width/2, sz.height/2), 8, 3 );
IplImage* tgray;
CvSeq* result;
double s, t;
// create empty sequence that will contain points -
// 4 points per square (the square's vertices)
CvSeq* squares = cvCreateSeq( 0, sizeof(CvSeq), sizeof(CvPoint), storage );
// select the maximum ROI in the image
// with the width and height divisible by 2
cvSetImageROI( timg, cvRect( 0, 0, sz.width, sz.height ));
// down-scale and upscale the image to filter out the noise
cvPyrDown( timg, pyr, 7 );
cvPyrUp( pyr, timg, 7 );
tgray = cvCreateImage( sz, 8, 1 );
// find squares in every color plane of the image
for( c = 0; c < 3; c++ )
{
// extract the c-th color plane
cvSetImageCOI( timg, c+1 );
cvCopy( timg, tgray, 0 );
// try several threshold levels
for( l = 0; l < N; l++ )
{
// hack: use Canny instead of zero threshold level.
// Canny helps to catch squares with gradient shading
if( l == 0 )
{
// apply Canny. Take the upper threshold from slider
// and set the lower to 0 (which forces edges merging)
cvCanny( tgray, gray, 0, thresh, 5 );
// dilate canny output to remove potential
// holes between edge segments
cvDilate( gray, gray, 0, 1 );
}
else
{
// apply threshold if l!=0:
// tgray(x,y) = gray(x,y) < (l+1)*255/N ? 255 : 0
cvThreshold( tgray, gray, (l+1)*255/N, 255, CV_THRESH_BINARY );
}
// find contours and store them all as a list
cvFindContours( gray, storage, &contours, sizeof(CvContour),
CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0) );
// test each contour
while( contours )
{
// approximate contour with accuracy proportional
// to the contour perimeter
result = cvApproxPoly( contours, sizeof(CvContour), storage,
CV_POLY_APPROX_DP, cvContourPerimeter(contours)*0.02, 0 );
// square contours should have 4 vertices after approximation
// relatively large area (to filter out noisy contours)
// and be convex.
// Note: absolute value of an area is used because
// area may be positive or negative - in accordance with the
// contour orientation
if( result->total == 4 &&
fabs(cvContourArea(result,CV_WHOLE_SEQ)) > 1000 &&
cvCheckContourConvexity(result) )
{
s = 0;
printf("nested for loop up to 5\t\n");
for( i = 0; i < 5; i++ )
{
// find minimum angle between joint
// edges (maximum of cosine)
if( i >= 2 )
{
t = fabs(angle(
(CvPoint*)cvGetSeqElem( result, i ),
(CvPoint*)cvGetSeqElem( result, i-2 ),
(CvPoint*)cvGetSeqElem( result, i-1 )));
s = s > t ? s : t;
}
}
// if cosines of all angles are small
// (all angles are ~90 degree) then write quadrangle
// vertices to resultant sequence
if( s < 0.3 )
for( i = 0; i < 4; i++ )
cvSeqPush( squares,
(CvPoint*)cvGetSeqElem( result, i ));
}
// take the next contour
contours = contours->h_next;
}
}
}
// release all the temporary images
cvReleaseImage( &gray );
cvReleaseImage( &pyr );
cvReleaseImage( &tgray );
cvReleaseImage( &timg );
return squares;
}
// the function draws all the squares in the image
void drawSquares( IplImage* img, CvSeq* squares )
{
CvSeqReader reader;
IplImage* cpy = cvCloneImage( img );
int i;
// initialize reader of the sequence
cvStartReadSeq( squares, &reader, 0 );
// read 4 sequence elements at a time (all vertices of a square)
for( i = 0; i < squares->total; i += 4 )
{
CvPoint pt[4], *rect = pt;
int count = 4;
// read 4 vertices
CV_READ_SEQ_ELEM( pt[0], reader );
CV_READ_SEQ_ELEM( pt[1], reader );
CV_READ_SEQ_ELEM( pt[2], reader );
CV_READ_SEQ_ELEM( pt[3], reader );
// draw the square as a closed polyline
cvPolyLine( cpy, &rect, &count, 1, 1, CV_RGB(0,255,0), 3, CV_AA, 0 );
}
cvSaveImage("squares.jpg",cpy);
//show the resultant image
//cvShowImage( wndname, cpy );
cvReleaseImage( &cpy );
//return cpy;
}
int _tmain(int argc, _TCHAR* argv[])
{
int key = 0;
IplImage* frame =0;
IplImage* squares=0;
// create memory storage that will contain all the dynamic data
storage = cvCreateMemStorage(0);
CvCapture *camera = cvCreateCameraCapture(CV_CAP_ANY); /* use USB camera */
frame = cvQueryFrame(camera);
frame = cvQueryFrame(camera);
frame = cvQueryFrame(camera);
while(key!='q'){
frame = cvQueryFrame(camera);
frame = cvQueryFrame(camera);
if(frame!=NULL){
printf("Got frame\t\n");
cvSaveImage("frame.jpg", frame);
/*img0*/ img = cvLoadImage("frame.jpg");
//img = cvCloneImage( img0 );
cvNamedWindow( "img0", CV_WINDOW_AUTOSIZE);
cvShowImage("img0",/*img0*/img);
// find and draw the squares
drawSquares( img, findSquares4( img, storage ) );
squares = cvLoadImage("squares.jpg");
// create window and a trackbar (slider)
//with parent "image" and set callback
//(the slider regulates upper threshold,
//passed to Canny edge detector)
cvNamedWindow( "main", CV_WINDOW_AUTOSIZE);
cvShowImage("main", squares);
/* wait for key.
Also the function cvWaitKey takes care of event processing */
key = cvWaitKey(0);
}
}
// release both images
cvReleaseImage( &img );
cvReleaseImage( &img0 );
cvReleaseCapture(&camera);
cvDestroyWindow("main");
cvDestroyWindow("img0");
// clear memory storage - reset free space position
cvClearMemStorage( storage );
return 0;
}
I believe your problem is here:
/* wait for key.
Also the function cvWaitKey takes care of event processing */
key = cvWaitKey(0);
Try changing 0 to 10.
I see some other problems in your code. For instance, you create windows inside the while loop, which is not good; move the cvNamedWindow() calls outside the loop. Also, I'm not sure why you query the camera for several frames and then don't process them?
If your problem is that the window disappears without waiting for any key press, you can add a cvWaitKey(0) at the end of the code.
A getch() at the end would also help; make sure you include <conio.h> in the headers.