Why can't I render onto a Skia canvas holding an 8-bit grayscale bitmap - skia

I am trying to do some basic drawing with Skia. Since I'm working on grayscale images, I want to use the corresponding color type. The minimal example I want to use is:
int main(int argc, char* const argv[])
{
    int width = 1000;
    int height = 1000;
    float linewidth = 10.0f;
    SkImageInfo info = SkImageInfo::Make(
        width,
        height,
        SkColorType::kAlpha_8_SkColorType,
        SkAlphaType::kPremul_SkAlphaType
    );
    SkBitmap img;
    img.allocPixels(info);
    SkCanvas canvas(img);
    canvas.drawColor(SK_ColorBLACK);
    SkPaint paint;
    paint.setColor(SK_ColorWHITE);
    paint.setAlpha(255);
    paint.setAntiAlias(false);
    paint.setStrokeWidth(linewidth);
    paint.setStyle(SkPaint::kStroke_Style);
    canvas.drawCircle(500.0f, 500.0f, 100.0f, paint);
    bool success = SkImageEncoder::EncodeFile("B:\\img.png", img,
        SkImageEncoder::kPNG_Type, 100);
    return 0;
}
But the saved image does not contain the circle that was drawn. If I replace kAlpha_8_SkColorType with kN32_SkColorType, I get the expected result. How can I draw the circle onto an 8-bit grayscale image? I'm working with Visual Studio 2013 on a 64-bit Windows machine.
[image: kN32_SkColorType result]
[image: kAlpha_8_SkColorType result]

You should use kGray_8_SkColorType instead of kAlpha_8_SkColorType.
kAlpha_8_SkColorType is used for bitmap masks; it stores only alpha coverage, not color.
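For illustration of the memory layout (this is not Skia API code): kGray_8_SkColorType stores a single 8-bit luminance byte per pixel, so a non-antialiased white stroke circle on black reduces to writing 255 wherever a pixel lies within half the stroke width of the radius. A minimal hand-rolled sketch:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Rasterize a white stroke circle into an 8-bit grayscale buffer
// (one luminance byte per pixel, the same layout kGray_8_SkColorType uses).
std::vector<uint8_t> strokeCircleGray8(int width, int height,
                                       float cx, float cy,
                                       float radius, float strokeWidth)
{
    std::vector<uint8_t> pixels(width * height, 0); // black background
    const float half = strokeWidth / 2.0f;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float dist = std::hypot(x - cx, y - cy);
            if (std::fabs(dist - radius) <= half)
                pixels[y * width + x] = 255; // white stroke
        }
    }
    return pixels;
}
```

With kGray_8_SkColorType Skia fills exactly this kind of buffer for you; with kAlpha_8_SkColorType the bytes are interpreted as coverage for masking, which is why the color information is lost.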

how to draw a line in a video with opencv 3.0.0 using c++

Please let me know if you have source code to draw a line in a video with OpenCV 3.0.0 using C++.
Cordially
First of all, you should consider that a video is basically just a series of images displayed quickly one after another. Therefore, you only need to know how to draw a line onto an image to draw it in a video (do the same for each frame). The cv::line function is documented here: http://docs.opencv.org/3.0-beta/modules/imgproc/doc/drawing_functions.html.
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char** argv)
{
    // read the camera input
    VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;
    Mat frame;
    // create a window
    namedWindow("Result", 1);
    while (true) {
        // grab and retrieve each frame of the video sequentially
        cap >> frame;
        if (frame.empty())
            break;
        // draw a line onto the frame
        line(frame, Point(0, frame.rows / 2), Point(frame.cols, frame.rows / 2), Scalar(0), 3);
        // display the result
        imshow("Result", frame);
        // wait some time for the frame to render; exit on any key press
        if (waitKey(30) >= 0)
            break;
    }
    return 0;
}
This will draw a horizontal, black, 3 pixel thick line on the video-feed from your webcam.

Direct3D Window->Bounds.Width/Height differs from real resolution

I noticed a strange behaviour with Direct3D while doing this tutorial.
The dimensions I am getting from the Window object differ from the configured Windows resolution: there I set 1920*1080, but the width and height from the Window object are 1371*771.
CoreWindow^ Window = CoreWindow::GetForCurrentThread();
// set the viewport
D3D11_VIEWPORT viewport = { 0 };
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = Window->Bounds.Width; //should be 1920, actually is 1371
viewport.Height = Window->Bounds.Height; //should be 1080, actually is 771
I am developing on an Alienware 14; maybe this causes the problem, but I could not find any answers yet.
CoreWindow sizes, pointer locations, etc. are not expressed in pixels. They are expressed in Device Independent Pixels (DIPS). To convert to/from pixels you need to use the Dots Per Inch (DPI) value.
inline int ConvertDipsToPixels(float dips) const
{
    return int(dips * m_DPI / 96.f + 0.5f);
}
inline float ConvertPixelsToDips(int pixels) const
{
    return (float(pixels) * 96.f / m_DPI);
}
m_DPI comes from DisplayInformation::GetForCurrentView()->LogicalDpi and you get the DpiChanged event when and if it changes.
See DPI and Device-Independent Pixels for more details.
You should take a look at the Direct3D UWP Game templates on GitHub, and check out how this is handled in Main.cpp.
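As a concrete check (the 140% display scale here is an assumption that happens to match the numbers in the question, i.e. LogicalDpi = 134.4), the same conversions written as free functions recover the native resolution from the reported DIP bounds:

```cpp
// DIP <-> pixel conversion, parameterized on the DPI value instead of a
// member variable; 96 DPI is the baseline where 1 DIP == 1 pixel.
inline int ConvertDipsToPixels(float dips, float dpi)
{
    return int(dips * dpi / 96.f + 0.5f);
}

inline float ConvertPixelsToDips(int pixels, float dpi)
{
    return float(pixels) * 96.f / dpi;
}
```

At 140% scaling (dpi = 134.4), the roughly 1371 x 771 DIP bounds map back to the 1920 x 1080 physical pixels the display actually has.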

XLib Window background has colours inverted

I'm almost there with my little "window background from PNG image" project on Linux. I use the pure X11 API and the minimal LodePNG library to load the image. The problem is that the background is the negative of the original PNG image, and I don't know what could be causing it.
This is basically the code that loads the image then creates the pixmap and applies the background to the window:
// required headers
// global variables
Display *display;
Window window;
int window_width = 600;
int window_height = 400;
// main entry point
// load the image with lodePNG (I didn't modify its code)
vector<unsigned char> image;
unsigned width, height;
//decode
unsigned error = lodepng::decode(image, width, height, "bg.png");
if(!error)
{
// And here is where I apply the image to the background
Screen* screen = NULL;
screen = DefaultScreenOfDisplay(display);
// Creating the pixmap
Pixmap pixmap = XCreatePixmap(
display,
XDefaultRootWindow(display),
width,
height,
DefaultDepth(display, 0)
);
// Creating the graphic context
XGCValues gr_values;
gr_values.function = GXcopy;
gr_values.background = WhitePixelOfScreen(display);
// Creating the image from the decoded PNG image
XImage *ximage = XCreateImage(
display,
CopyFromParent,
DisplayPlanes(display, 0),
ZPixmap,
0,
(char*)image.data(), // note: &image would point at the vector object, not the pixel bytes
width,
height,
32,
4 * width
);
// Place the image into the pixmap
XPutImage(
display,
pixmap,
gr_context,
ximage,
0, 0,
0, 0,
window_width,
window_height
);
// Set the window background
XSetWindowBackgroundPixmap(display, window, pixmap);
// Free up used resources
XFreePixmap(display, pixmap);
XFreeGC(display, gr_context);
}
The image is decoded (possibly incorrectly) and then applied to the background but, as I said, the image colors are inverted and I don't know why.
MORE INFO
After decoding, I re-encoded the same image into a PNG file, and it is identical to the original, so it looks like the problem is not related to LodePNG but to the way I use Xlib to place the image on the window.
EVEN MORE INFO
Now I compared the inverted image with the original one and found out that somewhere in my code RGB is converted to BGR. If a pixel in the original image is 95, 102, 119, in the inverted one it is 119, 102, 95.
I found the solution here. I am not sure if it is the best way, but it is certainly the simplest.
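The usual culprit for this symptom is byte order: lodepng::decode produces RGBA bytes, while a 32-bit ZPixmap on a little-endian X server is read as BGRA. Swapping the red and blue bytes in place before calling XCreateImage is one simple fix (a sketch, assuming a tightly packed 4-bytes-per-pixel buffer):

```cpp
#include <utility>
#include <vector>

// Swap the R and B bytes of every RGBA pixel in place, turning the
// buffer into the BGRA layout a little-endian 32-bit ZPixmap expects.
void rgbaToBgra(std::vector<unsigned char>& image)
{
    for (std::size_t i = 0; i + 3 < image.size(); i += 4)
        std::swap(image[i], image[i + 2]);
}
```

Running this on the decoded buffer turns the 95, 102, 119 pixel from the question into 119, 102, 95, which is exactly the inversion observed.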

Converting image from Triclops into opencv RGB image Mat.

I want to use OpenCV to do some processing on rectified images from the Bumblebee2 camera. I am using FlyCapture2 and Triclops to grab images from the sensor and rectify them. I want to convert the TriclopsColorImage into a cv::Mat to use with OpenCV.
From the TriclopsColorImage object I can get the following:
int nrows; // The number of rows in the image
int ncols; //The number of columns in the image
int rowinc; //The row increment of the image
unsigned char * blue; //pixel data for the blue band of the image
unsigned char * red; //pixel data for the red band of the image
unsigned char * green; //pixel data for the green band of the image
I don't know how to convert this information into a cv::Mat image so that I can work on it. Can someone please point me in the right direction?
I haven't tested this and I don't know what version of OpenCV you're using, but something like the following should point you in the right direction. So, assuming your variable names from the question:
cv::Mat R(nrows, ncols, CV_8UC1, red, rowinc);
cv::Mat G(nrows, ncols, CV_8UC1, green, rowinc);
cv::Mat B(nrows, ncols, CV_8UC1, blue, rowinc);
std::vector<cv::Mat> array_to_merge;
array_to_merge.push_back(B);
array_to_merge.push_back(G);
array_to_merge.push_back(R);
cv::Mat colour;
cv::merge(array_to_merge, colour);
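cv::merge simply interleaves the three planar buffers into one B,G,R,B,G,R,... buffer, which is the pixel layout of a CV_8UC3 cv::Mat. Stripped of OpenCV, the operation amounts to this (a sketch assuming rowinc == ncols, i.e. no row padding):

```cpp
#include <vector>

// Interleave separate B, G and R planes into a single BGR buffer,
// matching the memory layout of a CV_8UC3 cv::Mat.
std::vector<unsigned char> mergeBGR(const unsigned char* blue,
                                    const unsigned char* green,
                                    const unsigned char* red,
                                    int npixels)
{
    std::vector<unsigned char> bgr(3 * npixels);
    for (int i = 0; i < npixels; ++i) {
        bgr[3 * i + 0] = blue[i];
        bgr[3 * i + 1] = green[i];
        bgr[3 * i + 2] = red[i];
    }
    return bgr;
}
```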
Here is an alternate solution I came up with after a while.
// Create a cv::Mat to hold the rectified color image.
cv::Mat cvColorImage(colorImage.nrows, colorImage.ncols, CV_8UC3);
unsigned char* bp = colorImage.blue;
unsigned char* gp = colorImage.green;
unsigned char* rp = colorImage.red;
for (int row = 0; row < colorImage.nrows; row++) {
    for (int col = 0; col < colorImage.ncols; col++) {
        // OpenCV stores color pixels in BGR order
        cvColorImage.at<cv::Vec3b>(row, col)[0] = bp[col];
        cvColorImage.at<cv::Vec3b>(row, col)[1] = gp[col];
        cvColorImage.at<cv::Vec3b>(row, col)[2] = rp[col];
    }
    // rowinc may be larger than ncols, so advance each plane by the row increment
    bp += colorImage.rowinc;
    gp += colorImage.rowinc;
    rp += colorImage.rowinc;
}
cv::imshow("colorimage", cvColorImage);
cv::waitKey(300);

cvFindContours always returns 0 - OpenCV

I'm calling the cvFindContours function inside a separate thread that I've created to handle all OpenCV work while another is kept for OpenGL stuff.
I noticed that cvFindContours always returns 0 when this code is executed inside a separate thread. It worked fine before, when executed in the main thread itself. I used breakpoints and watches to evaluate value changes; every other variable gets a value except contourCount (value: 0).
Any clue?
// header includes goes here
CvCapture* capture = NULL;
IplImage* frame = NULL;
IplImage* image;
IplImage* gray;
IplImage* grayContour;
CvMemStorage *storage;
CvSeq *firstcontour=NULL;
CvSeq *polycontour=NULL;
int contourCount = 0;
DWORD WINAPI startOCV(LPVOID vpParam){
capture = cvCaptureFromCAM(0); // NOTE 1
capture = cvCaptureFromCAM(0);
frame = cvQueryFrame(capture);
image = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U,3);
gray = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U,1);
grayContour = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U,1);
storage = cvCreateMemStorage (0);
firstcontour=NULL;
while(1){
frame = cvQueryFrame(capture);
cvCopy(frame,image);
cvCvtColor(image,gray,CV_BGR2GRAY);
cvSmooth(gray,gray,CV_GAUSSIAN,3);
cvThreshold (gray, gray, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);
cvNot(gray,gray);
cvCopy(gray,grayContour);
contourCount=cvFindContours (grayContour, storage, &firstcontour, sizeof (CvContour),
CV_RETR_CCOMP);
polycontour=cvApproxPoly(firstcontour,sizeof(CvContour),storagepoly,CV_POLY_APPROX_DP,3,1); // Error starts here (Pls refer to stack trace)
}
// goes on...
}
int main(int argc, char** argv){
DWORD qThreadID;
HANDLE ocvThread = CreateThread(0,0,startOCV, NULL,0, &qThreadID);
initGL(argc, argv); //some GL intitialization functions
glutMainLoop(); // draw some 3D objects
CloseHandle(ocvThread);
return 0;
}
NOTE1: these lines had to be duplicated due to the error mentioned at How to avoid "Video Source -> Capture source" selection in OpenCV 2.3.0 - Visual C++ 2008
Environment:
OpenCV 2.3.0
Visual C++ 2008
EDIT
Traces
opencv_core230d.dll!cv::error(const cv::Exception & exc={...}) Line 431 C++
opencv_imgproc230d.dll!cvPointSeqFromMat(int seq_kind=20480, const void * arr=0x00000000, CvContour * contour_header=0x01a6f514, CvSeqBlock * block=0x01a6f4f4) Line 47 + 0xbd bytes C++
opencv_imgproc230d.dll!cvApproxPoly(const void * array=0x00000000, int header_size=88, CvMemStorage * storage=0x017e7b40, int method=0, double parameter=3.0000000000000000, int parameter2=1) Line 703 + 0x28 bytes C++
Project.exe!startOCV(void * vpParam=0x00000000) Line 267 + 0x24 bytes C++
All of this boils down to the assertion CV_Assert( arr != 0 && contour_header != 0 && block != 0 ) in cvPointSeqFromMat, which fails because the arr it requires is NULL.
Your variable contourCount is not doing what you think it's doing. From the contours.cpp source file:
/*F///////////////////////////////////////////////////////////////////////////////////////
// Name: cvFindContours
// Purpose:
// Finds all the contours on the bi-level image.
// Context:
// Parameters:
// img - source image.
// Non-zero pixels are considered as 1-pixels
// and zero pixels as 0-pixels.
// step - full width of source image in bytes.
// size - width and height of the image in pixels
// storage - pointer to storage where will the output contours be placed.
// header_size - header size of resulting contours
// mode - mode of contour retrieval.
// method - method of approximation that is applied to contours
// first_contour - pointer to first contour pointer
// Returns:
// CV_OK or error code
// Notes:
//F*/
You are getting CV_OK == 0, which means it ran successfully. cvFindContours does not return the number of contours found; it merely lets you know whether it failed. You should use the CvSeq* first_contour to figure out the number of contours detected.
Hope that helps!
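Sibling contours at the same level are chained through the CvSeq::h_next pointer, so counting them is a linked-list walk. A sketch with a stand-in struct (in real code the node type is CvSeq and the head is the firstcontour pointer that cvFindContours filled in):

```cpp
// Stand-in for CvSeq: only the h_next sibling link matters for counting.
struct Seq {
    Seq* h_next;
};

// Count the contours in a sibling chain starting at `first` (may be null).
int countContours(const Seq* first)
{
    int n = 0;
    for (const Seq* s = first; s != nullptr; s = s->h_next)
        ++n;
    return n;
}
```

Note that this counts only the top-level siblings; with CV_RETR_CCOMP, holes hang off v_next and would need a recursive walk to be included.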
