Does webcam quality affect computer vision? - python-3.x

I am new to OpenCV and computer vision, and at the moment I am working on a program that needs to find circles while capturing video.
I am using OpenCV for Python with the HoughCircles function to find the needed shape.
I actually need to capture the video from a webcam, because the camera needs to be perpendicular to the horizontal sheet of paper on which I am placing circles.
However, when I capture video from this webcam (a TeckNet, around £10-$12), instead of recognising only the needed circles it detects hundreds of additional ones. I've compared this with my MacBook Pro's webcam, which recognises the circles in the video perfectly.
Before I proceed with this, I'd like to get some feedback from you guys, as I am a beginner and I thought any webcam would be fine.
Is it actually the quality of the webcam? Is it the function I am using in OpenCV, or can other factors such as lighting conditions influence the result?
Thank you in advance.

Related

Does it matter if faces are slightly distorted for OpenCV Face Detection & Recognition?

I am writing a program using OpenCV face detection & recognition, using this as a guide
Does it matter if the faces are distorted? I'm thinking of placing the camera* over a peephole in a door, and there is intrinsic distortion in that. Will OpenCV still be able to detect and recognise faces?
System: Raspberry Pi 4 OS
Python Version: 3.x
*PS: If anyone can recommend a good RPi camera that would work well over a peephole, that would be great. I'm currently thinking of the RPi V2 Camera.
Thanks! :-)
Firstly, camera quality is not that important for detecting faces (or other objects): I have worked with worse cameras (low resolution, around 0.5 MP), and even on those the results were fine. Detection quality mainly depends on the algorithm you use. The popular algorithms are:
Haar Cascade Face Detector in OpenCV
Deep Learning based Face Detector in OpenCV
HoG Face Detector in Dlib
Deep Learning based Face Detector in Dlib
According to the documentation you shared, a Haar cascade is the algorithm you are thinking of using. Haar cascades can run faster, but they also have some weaknesses (occlusion, many faces in the frame, some distorted face images, etc.). There is very good documentation comparing these algorithms at this link.
Here is also a tutorial about haar cascade face detection.
I don't think using a peephole camera will be a problem for detecting faces.

Previous steps before calculating disparity? Is rectification needed?

I want to do stereo vision and finally find the real distance from the cameras to the objects. I have done image rectification. Now I want to calculate disparity. My question is: to compute disparity, do I need to rectify the images first? Thank you!
Yes, disparity needs rectified images. Since stereo matching is done along epipolar lines, rectification ensures that the distortions are removed, and hence the algorithm can search for matching blocks along a straight line. At a basic level you can try the StereoBM matcher provided by OpenCV on the rectified stereo image pair.
Raw frames from camera -> Rectification -> Disparity map -> Depth perception.
This will be the pipeline for any passive stereo camera.

Using microphone input to create a music visualization in real time on a 3D globe

I am involved in a side project that has a loop of LEDs around 1.5m in diameter with a rotor on the bottom which spins the loop. A raspberry pi controls the LEDs so that they create what appears to be a 3D globe of light. I am interested in a project that takes a microphone input and turns it into a column of pixels which is rendered on the loop in real time. The goal of this is to see if we can have it react to music in real-time. So far I've come up with this idea:
Using an FFT to quickly turn the input sound into a mapping from pixels to colours, based on the amplitude of the resulting spectrum at each frequency: the equator of the globe would respond to the strength of low-frequency sound, progressing upwards towards the poles, which would respond to high-frequency sound.
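The frequency-to-row mapping described here can be sketched with plain numpy (the sample rate, buffer size, row count, and the 440 Hz test tone are all illustrative assumptions; a real buffer would come from an audio capture library):

```python
import numpy as np

SAMPLE_RATE = 44_100   # Hz, typical microphone capture rate (assumption)
N_ROWS = 16            # LED rows from equator (low freq) to pole (high freq)

# Stand-in for one microphone buffer: a pure 440 Hz tone
t = np.arange(2048) / SAMPLE_RATE
buffer = np.sin(2 * np.pi * 440 * t)

# Magnitude spectrum for the positive frequencies
spectrum = np.abs(np.fft.rfft(buffer))

# Split the spectrum into N_ROWS bands: band 0 (lowest frequencies) drives
# the equator, the last band drives the pole
bands = np.array_split(spectrum, N_ROWS)
row_levels = np.array([band.mean() for band in bands])

# Normalise to 0..1 so each row's level can map to an LED brightness/colour
row_levels /= row_levels.max()
print(np.argmax(row_levels))  # which row lights up brightest
```

For a 440 Hz tone the brightest row is the equatorial band; an FFT of this size is cheap even on a Raspberry Pi, so the latency budget is dominated by audio buffering, not the transform.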
I can think of a few potential problems, including:
Performance on a Raspberry Pi. If the response lags too far behind the music, it won't seem to the observer to be responding to the specific song they are also hearing.
Without detecting the beat or some overall characteristic of the music that people understand it might be difficult for the observers to understand the output is correlated to the music.
The rotor has different speeds, so the image is only stationary if the rate of spin is matched perfectly to the refresh rate of the LEDs. This is a problem, but also possibly helpful, because I might be able to turn down both the refresh rate and the rotor speed to reduce the computational load on the Raspberry Pi.
With that backstory, I should probably now ask a question. In general, how would you go about doing this? I have some experience with parallel computing and numerical methods, but I am totally ignorant of music and tone and what-not. Part of my problem is that although I know my Raspberry Pi is the newest model, I am not sure what its parallel capabilities are. I need to find a few Linux-friendly tools or libraries that can do an FFT on an ARM processor and handle the post-processing in real time. I think a delay of ~0.25 s or so would be acceptable. I feel like I'm in over my head, so I thought I'd ask you guys for input.
Thanks!

Displaying a DICOM dataset using VTK

I want to process a DICOM dataset and display it using VTK.
How can I know in advance whether the graphics card will be able to display the volume?
I've tried using glGetIntegerv(GL_MAX_TEXTURE_BUFFER_SIZE_EXT, size), which gives you the maximum number of texels the graphics card can render, and then compared it with the output of m_vtkImageReader->GetOutput()->GetDimensions(dimensions). I thought that if dimensions.x*dimensions.y*dimensions.z > size then VTK would throw an error, but that didn't happen.
I'll be glad to hear about other approaches, or maybe someone can point out where I'm wrong.
VTK provides both GPU-based and CPU-based volume rendering. You may try vtkSmartVolumeMapper: this mapper selects the best of VTK's mappers for your card. It displayed volumes fine even on a notebook with a UniChrome video card with 32 MB of memory.

SDL: FPS problems with simple bitmap

I am currently working on a game in SDL which has destructible terrain. At the moment the terrain is one large (5000*500, for testing) bitmap which is randomly generated.
Each frame the main surface is cleared and the terrain bitmap is blitted into it. The current resolution is 1200 * 700, so when I was testing 1200 * 500 pixels were visible at most of the points.
Now the problem is: The FPS are already dropping! I thought one simple bitmap shouldn't show any effect - but I am already falling down to ~24 FPS with this!
Why is blitting & drawing a bitmap of that size so slow?
Am I taking a false approach at destructible terrain?
How have games like Worms done this? The FPS seem really high although there's definitely a lot of pixels drawn in there
Whenever you initialize a surface, do it the following way:
SDL_Surface* tempSurface = IMG_Load("./path/to/image/image.jpg_or_whatever"); /* IMG_Load is from the SDL_image library; use SDL_LoadBMP() for plain .bmp files */
SDL_Surface* mySurface = SDL_DisplayFormat(tempSurface);
SDL_FreeSurface(tempSurface);
The SDL_DisplayFormat() function converts the pixel format of your surface to the format the video surface uses. If you don't do this, SDL performs that conversion every time the surface is blitted.
And always remember: just blit the necessary parts that really are visible to the player.
That's my first guess as to why you are having performance problems. Post your code or ask more specific questions if you want more tips. Good luck with your game.
If you redraw the whole screen each frame you will always get low FPS. Redraw only the parts of the screen that have changed. You can also try SDL_HWSURFACE to use hardware surfaces, but it won't work on every graphics card.
2D in SDL is pretty slow and there isn't much you can do to make it faster (on Windows, at least, it uses GDI for drawing by default). Your options are:
Go OpenGL and start using textured quads for sprites.
Try SFML. It provides a hardware-accelerated 2D environment.
Use SDL 1.3. Grab a source snapshot: it is unstable and still under development, but hardware-accelerated 2D is supposed to be one of its main selling points.
