Compatibility of Webgazer on tablet - web

Will the WebGazer library work on an Android tablet, and will the calibration be reliable? What accuracy can the calibration achieve?
I tried to calibrate the WebGazer library, but there was too much noise during calibration.

Related

Does it matter if faces are slightly distorted for OpenCV Face Detection & Recognition?

I am writing a program that uses OpenCV face detection & recognition, using this as a guide.
Does it matter if the faces are distorted? I'm thinking of placing the camera* over a peephole in a door, which introduces intrinsic distortion. Will OpenCV still be able to detect & recognise?
System: Raspberry Pi 4 OS
Python Version: 3.x
*PS: If anyone can recommend a good RPi camera that would work well over a peephole, that would be great. I'm currently thinking of the RPi Camera V2.
Thanks! :-)
Firstly, camera quality is not that important for detecting faces (or other objects); I have worked with worse cameras (low resolutions around 0.5 MP), and even on those the results were fine. Detection quality mainly depends on the algorithm you use. The popular algorithms are:
Haar Cascade Face Detector in OpenCV
Deep Learning based Face Detector in OpenCV
HoG Face Detector in Dlib
Deep Learning based Face Detector in Dlib
According to the documentation you shared, the Haar Cascade is the algorithm you are planning to use. Haar Cascades can be faster, but they also have some weaknesses (occlusion, many faces in one frame, some distorted face images, etc.). There is a very good comparison of these algorithms in this link.
There is also a tutorial about Haar Cascade face detection here.
I don't think using a peephole camera will be a problem for detecting faces.
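For reference, here is a minimal Haar Cascade detection sketch in Python/OpenCV. This is only a sketch: the bundled cascade file and camera index 0 are assumptions, and the detectMultiScale parameters are just starting values to tune against the peephole distortion.

```python
# Minimal Haar Cascade face detection sketch.
# Assumes OpenCV's bundled frontal-face cascade and camera index 0.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Loosen scaleFactor / minNeighbors if the peephole distortion hurts detection.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```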

Using microphone input to create a music visualization in real time on a 3D globe

I am involved in a side project that has a loop of LEDs around 1.5 m in diameter with a rotor on the bottom that spins the loop. A Raspberry Pi controls the LEDs so that they create what appears to be a 3D globe of light. I want to take a microphone input and turn it into a column of pixels that is rendered on the loop in real time, to see if we can have it react to music as it plays. So far I've come up with this idea:
Use an FFT to quickly turn the input sound into a spectrum, then map pixels to colors based on the amplitude at each frequency, so the equator of the globe responds to the strength of the low-frequency content, progressing upwards towards the poles, which respond to the high frequencies.
I can think of a few potential problems, including:
Performance on a Raspberry Pi. If the response lags too far behind the music, it won't seem to the observer to be responding to the specific song they are hearing.
Without detecting the beat or some overall characteristic of the music that people recognise, it might be difficult for observers to tell that the output is correlated with the music.
The rotor has different speeds, so the image is only stationary if the rate of spin is matched perfectly to the refresh rate of the LEDs. This is a problem, but possibly also helpful, because I might be able to turn down both the refresh rate and the rotor speed to reduce the computational load on the Raspberry Pi.
With that backstory, I should probably now ask a question. In general, how would you go about doing this? I have some experience with parallel computing and numerical methods, but I am totally ignorant of music and tone and what-not. Part of my problem is that I know the Raspberry Pi is the newest model, but I am not sure what its parallel capabilities are. I need to find a few Linux-friendly tools or libraries that can do an FFT on an ARM processor and handle the post-processing in real time. I think a delay of about 0.25 s would be acceptable. I feel like I'm in over my head, so I thought I'd ask you for input.
Thanks!
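Here is a minimal sketch of the FFT-to-pixel-row mapping described above, assuming NumPy and a mono audio block that has already been captured from the microphone; the block size, number of rows, and normalisation are all illustrative choices.

```python
# Sketch: map an audio block's spectrum onto LED "rows" of the globe
# (low frequencies near the equator, high frequencies towards the poles).
import numpy as np

SAMPLE_RATE = 44100  # assumed capture rate
BLOCK_SIZE = 1024    # ~23 ms of audio per update at 44.1 kHz
NUM_ROWS = 32        # pixel rows from equator to pole (illustrative)

def spectrum_to_rows(samples: np.ndarray) -> np.ndarray:
    """Return one brightness level (0..1) per LED row for this audio block."""
    windowed = samples * np.hanning(len(samples))
    magnitudes = np.abs(np.fft.rfft(windowed))
    # Split the spectrum into NUM_ROWS bands; each band drives one row.
    bands = np.array_split(magnitudes, NUM_ROWS)
    levels = np.array([band.mean() for band in bands])
    peak = levels.max()
    return levels / peak if peak > 0 else levels

# Example with a synthetic 440 Hz tone standing in for the microphone input.
t = np.arange(BLOCK_SIZE) / SAMPLE_RATE
print(spectrum_to_rows(np.sin(2 * np.pi * 440 * t)))
```

At 1024 samples per block, each update covers roughly 23 ms of audio, which is well inside the ~0.25 s latency budget mentioned above, so the FFT itself is unlikely to be the bottleneck.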

Energy simulation of a RISC-V chip

Can the RISC-V simulators estimate the energy consumption of a Rocket chip?
For instance, is there a way to produce traces that can be fed to McPAT?
To estimate Rocket Chip's energy, we use Chisel's Verilog backend to generate RTL which we feed into CAD tools for gate-level simulation.
The simulators provided by Berkeley (QEMU, Rocket Chip, spike) currently do not support interfacing with McPAT, but this could be a great community contribution for those without access to CAD tools or wanting to simulate at a higher rate.

Windows Phone 8 get device orientation

For a Windows Phone application I need the device orientation (e.g. 0, 90, 180, and 270 degrees), but the layout is fixed to portrait.
At the moment I am calculating the orientation from the accelerometer readings using atan2(-acceleration_x, acceleration_y). Is there a built-in method to get the device orientation instead of manually using the acceleration sensor?
Regards,
Edit: someone asked a similar question almost the same time here: Device Orientation in Windows Phone 8
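For illustration only, here is the atan2-based calculation from the question sketched in Python (the actual Windows Phone code would of course be C#); snapping to the nearest quarter turn is an assumed choice.

```python
# Sketch: bucket the accelerometer angle into 0/90/180/270 degrees,
# using the same atan2(-ax, ay) formula as in the question.
import math

def device_orientation(ax: float, ay: float) -> int:
    angle = math.degrees(math.atan2(-ax, ay)) % 360  # 0 = portrait in this convention
    # Snap to the nearest quarter turn.
    return int(round(angle / 90.0)) % 4 * 90

print(device_orientation(0.0, 1.0))   # 0   (portrait upright)
print(device_orientation(-1.0, 0.0))  # 90  (rotated a quarter turn)
```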

opencv use graphics card by default

I'm doing circle detection on a video feed and I am finding that OpenCV plays my video back extremely slowly. I was wondering whether OpenCV makes use of the graphics card or just the CPU. Is there a way to tell OpenCV to use the graphics card?
OpenCV can be compiled with GPU support using CUDA. Some methods, like SURF point extraction, have a GPU implementation. See here for more information.
In order to use the GPU support, you need to have CUDA installed and compile the OpenCV source code with the WITH_CUDA flag set in CMake.
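As a rough sketch (assuming a CUDA-enabled OpenCV build with Python bindings; a stock prebuilt package usually does not include CUDA), this is how you would check for a usable GPU and move one step of the pipeline onto it:

```python
# Sketch: verify a CUDA-enabled OpenCV build and run one operation on the GPU.
import cv2

print(cv2.cuda.getCudaEnabledDeviceCount())  # 0 means no usable CUDA device/build

frame = cv2.imread("frame.png")  # placeholder input frame
gpu_frame = cv2.cuda_GpuMat()
gpu_frame.upload(frame)

# Grayscale conversion on the GPU, then download the result back to the CPU
# before running the circle detection.
gpu_gray = cv2.cuda.cvtColor(gpu_frame, cv2.COLOR_BGR2GRAY)
gray = gpu_gray.download()
```

Whether this speeds up your case depends on which steps of the circle-detection pipeline actually have CUDA implementations in your OpenCV version.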

Resources