Does OpenCV use the graphics card by default?

I'm doing circle detection on a video feed and I am finding that OpenCV plays my video back extremely slowly. I was wondering whether OpenCV makes use of the graphics card or just the CPU. Is there a way to tell OpenCV to use the graphics card?

OpenCV can be compiled with GPU support using CUDA. Some methods, like SURF keypoint extraction, have a GPU implementation. See here for more information.
In order to use the GPU support, you need to have CUDA installed and compile the OpenCV source code with the WITH_CUDA flag set in CMake.
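For illustration, a minimal sketch of what the CUDA path looks like from Python, assuming OpenCV was built with WITH_CUDA=ON (a stock pip install of opencv-python is CPU-only, and the file name here is a placeholder):

    import cv2

    # The cv2.cuda module only does real work in a CUDA-enabled build.
    if cv2.cuda.getCudaEnabledDeviceCount() > 0:
        frame = cv2.imread("frame.png")          # placeholder input frame
        gpu_frame = cv2.cuda_GpuMat()
        gpu_frame.upload(frame)                  # copy the frame into GPU memory
        gpu_gray = cv2.cuda.cvtColor(gpu_frame, cv2.COLOR_BGR2GRAY)
        gray = gpu_gray.download()               # copy the result back to the CPU
    else:
        print("No CUDA device visible to OpenCV; everything runs on the CPU.")

Note that uploading and downloading frames has a cost of its own, so the GPU path only pays off when enough processing happens on the device per frame.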

Related

Compatibility of Webgazer on tablet

Will the webgazer library work on an Android tablet, and will the calibration be good? What accuracy can the calibration achieve?
I tried to calibrate the webgazer library, but there was too much disturbance in the calibration.

Does it matter if faces are slightly distorted for OpenCV Face Detection & Recognition?

I am writing a program using OpenCV face detection & recognition, using this as a guide.
Does it matter if the faces are distorted? As in, I'm thinking of placing the camera* over a peephole in a door, and there is intrinsic distortion in that. Will OpenCV still be able to detect & recognise faces?
System: Raspberry Pi 4 OS
Python Version: 3.x
*PS: If anyone can recommend a good RPi camera that would work well over a peephole, that would be great. I'm currently thinking of the RPi V2 Camera.
Thanks! :-)
Firstly, camera quality is not that important for detecting faces (or other objects): I have worked with worse cameras (low resolution, around 0.5 Mpx), and even on those the results are fine. Detection quality mainly depends on the algorithm you use. The popular algorithms are:
Haar Cascade Face Detector in OpenCV
Deep Learning based Face Detector in OpenCV
HoG Face Detector in Dlib
Deep Learning based Face Detector in Dlib
According to the documentation you shared, Haar Cascade is the algorithm you are thinking of using. Haar Cascade can work faster, but it also has some problems (occlusion, many faces in the frame, some distorted face images, etc.). There is very good documentation comparing these algorithms at this link.
Here is also a tutorial about Haar Cascade face detection.
I don't think using a peephole camera will be a problem for detecting faces.
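As a rough sketch of the Haar Cascade route (file names are placeholders; the cascade XML ships with the opencv-python package):

    import cv2

    # Load the frontal-face cascade bundled with OpenCV
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_cascade = cv2.CascadeClassifier(cascade_path)

    img = cv2.imread("door_frame.jpg")           # placeholder peephole frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # scaleFactor and minNeighbors usually need tuning for a distorted peephole view
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("faces.jpg", img)

If the peephole distortion turns out to hurt detection, calibrating the camera and undistorting frames with cv2.undistort before running the cascade is a common workaround.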

Create a 3D model of a piece of equipment from 2D images

GOAL: I have to create a 3D model of a machine part. I have about 25 images of the same part taken from different angles.
Progress: I am able to extract the coordinates of a label that is on the machine for most of the images.
Problem: I have no idea how to proceed. I have read a bit about aerotriangulation, but I couldn't figure out how to implement it. I would really appreciate it if you could guide me in the right direction.
It would be really helpful if you could provide your solutions using Python and OpenCV.
Edit: Sorry, but I cannot upload the code for this one as it is confidential (don't blame me, please, I am just an intern). I can say that I cropped a template of the label from one image and then used SIFT to match that template against all the images to get the coordinates of the label.
If you want to implement things yourself with OpenCV, I would recommend looking at SIFT (or SURF) features, RANSAC and the epipolar constraint. I believe the OpenCV cookbook describes those. Warning: math involved. And I don't know how to do dense mapping in OpenCV.
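As a rough starting point (image names are placeholders, and the thresholds are just typical values), this is how SIFT matching plus a RANSAC-estimated fundamental matrix enforces the epipolar constraint between two of your views:

    import cv2
    import numpy as np

    img1 = cv2.imread("view_01.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view_02.jpg", cv2.IMREAD_GRAYSCALE)

    # SIFT keypoints and descriptors in both views
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # RANSAC fits the fundamental matrix and rejects matches that
    # violate the epipolar constraint
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers1 = pts1[mask.ravel() == 1]
    inliers2 = pts2[mask.ravel() == 1]
    print(f"{len(inliers1)} inlier correspondences out of {len(good)} matches")

From the inlier correspondences (and known camera intrinsics) you can recover relative poses and triangulate sparse 3D points, which is essentially what the SfM tools mentioned below automate.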
I know the GUI program VisualSFM, which can automatically recreate a 3D model from images. It uses SfM and other command-line utilities behind the scenes. Since everything is open source, you could create a Python wrapper around the actual libraries (I found https://github.com/mapillary/OpenSfM by asking Google). VisualSFM prints the commands it calls, so a hacky way could be to call the same commands from Python.
If it is a simple shape and you don't want to automate it, it could be faster to model it yourself (and the result could look better). In 1.5 weeks I managed to learn the basics of Blender and to model a guitar necklace: https://youtu.be/BCGKsh51TNA . I would now be able to do it in less than an hour. How long are you ready to invest to find a solution with OpenCV?

Does webcam quality affect Computer Vision?

I am new to OpenCV and Computer Vision, and at the moment I am working on a program which needs to find circles while capturing video.
I am using OpenCV for Python with the HoughCircles function to find the needed shape.
I actually need to capture the video from a webcam, because the camera has to be perpendicular to the horizontal sheet of paper where I am placing the circles.
However, when I try to capture the video from this webcam (a Tecknet, around £10-$12), rather than recognising only the needed circles it also reports hundreds of additional ones. I compared this with my MacBook Pro webcam, which recognises the circles in the video perfectly.
Before I proceed with working on this, I'd like some feedback from you, as I am a beginner and I thought that any webcam would be OK.
Is it actually the quality of the webcam? Is it the function I am using in OpenCV, or can other factors such as lighting conditions have an influence?
Thank you in advance.
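For context, the kind of HoughCircles call being described usually looks like the sketch below; the camera index and parameter values are assumptions, but pre-blurring the frame and raising param2 / minDist are typically what separate a clean detection from hundreds of spurious circles on a noisy, cheap webcam:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)              # webcam index is an assumption
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)         # smooth out sensor noise first

    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                               param1=100, param2=60,   # higher param2 rejects weak circles
                               minRadius=10, maxRadius=80)
    if circles is not None:
        for x, y, r in np.uint16(np.around(circles[0])):
            cv2.circle(frame, (x, y), r, (0, 255, 0), 2)
    cap.release()

Even lighting over the sheet of paper matters as well, since the edge-detection stage inside HoughCircles is sensitive to shadows and glare.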

Displaying a DICOM dataset using VTK

I want to process a DICOM dataset and display it using VTK.
How can I know in advance whether the graphics card will be able to display the volume?
I've tried using glGetIntegerv(GL_MAX_TEXTURE_BUFFER_SIZE_EXT, size), which gives you the maximum number of texels that the graphics card can render, and then comparing it with the output of m_vtkImageReader->GetOutput()->GetDimensions(dimensions). I thought that if dimensions.x * dimensions.y * dimensions.z > size then VTK would throw an error, but that didn't happen.
I'll be glad to hear about other approaches, or maybe someone can point out where I'm wrong.
VTK provides both GPU-based and non-GPU volume rendering. You may try using vtkSmartVolumeMapper. This mapper selects the best of the available VTK mappers for your card. It displays volumes fine even on a notebook with a UniChrome video card with 32 MB of memory.
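A minimal setup through the Python bindings, assuming the DICOM series sits in a placeholder directory and using simple ad-hoc transfer functions, could look like this:

    import vtk

    reader = vtk.vtkDICOMImageReader()
    reader.SetDirectoryName("dicom_series/")   # placeholder path to the series
    reader.Update()

    # vtkSmartVolumeMapper picks a GPU ray-cast path if the card can handle
    # the volume and falls back to a CPU mapper otherwise.
    mapper = vtk.vtkSmartVolumeMapper()
    mapper.SetInputConnection(reader.GetOutputPort())

    # Crude grayscale transfer functions just to get something on screen
    color = vtk.vtkColorTransferFunction()
    color.AddRGBPoint(0, 0.0, 0.0, 0.0)
    color.AddRGBPoint(1000, 1.0, 1.0, 1.0)
    opacity = vtk.vtkPiecewiseFunction()
    opacity.AddPoint(0, 0.0)
    opacity.AddPoint(1000, 0.5)

    volume_property = vtk.vtkVolumeProperty()
    volume_property.SetColor(color)
    volume_property.SetScalarOpacity(opacity)
    volume_property.SetInterpolationTypeToLinear()

    volume = vtk.vtkVolume()
    volume.SetMapper(mapper)
    volume.SetProperty(volume_property)

    renderer = vtk.vtkRenderer()
    renderer.AddVolume(volume)
    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    window.Render()
    interactor.Start()

Because the mapper chooses the rendering path itself, you generally do not need to query texture limits by hand before rendering.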
