Can I get network camera direction? - onvif

I am selecting a network camera.
It turns out that network cameras are standardized by ONVIF.
In the ONVIF standard, PTZ control is specified as a speed and an operating time.
I want to get the camera orientation (angle).
Can I get the camera orientation with the ONVIF standard?
Furthermore, I want to change the orientation of the camera by giving an orientation (angle).
First of all, is there any camera that can feed back its angle?

I want to get the camera orientation (angle).
You can look at the ONVIF GetStatus function (in the PTZ service) to get the current position; a combined sketch of both calls follows the AbsoluteMove link below. It doesn't work with all ONVIF-certified cameras for some reason... (I had trouble with Foscam and Amcrest cameras in the past.)
https://www.onvif.org/onvif/ver20/ptz/wsdl/ptz.wsdl#op.GetStatus
I want to change the orientation of the camera by giving an orientation (angle).
You can look at the ONVIF AbsoluteMove function to set the position of your camera to the absolute values (angle) you want.
https://www.onvif.org/onvif/ver20/ptz/wsdl/ptz.wsdl#op.AbsoluteMove
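As a sketch of both calls, here is how they look with the third-party python-onvif-zeep package (an assumption; any ONVIF client is similar). Note that in the generic ONVIF position space, pan/tilt values are normalized to [-1, 1] rather than degrees, so the mapping to physical angles is camera-specific:

    # Sketch using the third-party python-onvif-zeep package (pip install onvif-zeep).
    # Host, port and credentials are placeholders.
    from onvif import ONVIFCamera

    cam = ONVIFCamera('192.168.0.10', 80, 'admin', 'password')
    media = cam.create_media_service()
    ptz = cam.create_ptz_service()
    token = media.GetProfiles()[0].token  # first media profile

    # GetStatus: read the current pan/tilt/zoom position.
    status = ptz.GetStatus({'ProfileToken': token})
    print(status.Position.PanTilt.x, status.Position.PanTilt.y, status.Position.Zoom.x)

    # AbsoluteMove: drive the camera to an absolute position. In the generic
    # ONVIF space these values are normalized ([-1, 1] for pan/tilt, [0, 1]
    # for zoom), not degrees.
    req = ptz.create_type('AbsoluteMove')
    req.ProfileToken = token
    req.Position = status.Position  # reuse the status structure as a template
    req.Position.PanTilt.x = 0.5
    req.Position.PanTilt.y = -0.25
    ptz.AbsoluteMove(req)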
Here's the list of all "available" ONVIF operations. I say "available" because, just like GetStatus earlier, some cameras will return an error when you call them, for some reason...
https://www.onvif.org/onvif/ver20/util/operationIndex.html

Related

Is Onvif TranslationSpaceFov translation space required to centre camera view via conversion of x,y coords

This link poses the same question and provides a solution. I need to understand how TranslationSpaceFov relates to the solution:
Converting x/y values from on screen click to ONVIF PTZ pan/tilt values
Does my camera need to provide this translation space?
My camera does not provide this translation space, can I add it?
If you want to do a go-to-center action, the camera has to have this translation space in its PTZ configuration.
When the camera doesn't provide this space, you simply cannot add or implement it yourself; if you tried, the action would be very inaccurate.
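For illustration, when a camera does advertise the FOV translation space, a click-to-center action is typically a RelativeMove whose pan/tilt translation is the click's normalized offset from the frame center, expressed in that space. A sketch, again assuming the third-party python-onvif-zeep package; the space URI is the one defined by the ONVIF PTZ specification:

    # Sketch of a click-to-center RelativeMove in the FOV translation space.
    # Assumes python-onvif-zeep and a camera that advertises this space.
    from onvif import ONVIFCamera

    FOV_SPACE = 'http://www.onvif.org/ver10/tptz/PanTiltSpaces/TranslationSpaceFov'

    def center_on_click(ptz, token, click_x, click_y, frame_w, frame_h):
        """Pan/tilt so that the clicked pixel moves to the image center."""
        # Normalized offset of the click from the frame center, in [-1, 1].
        dx = (click_x - frame_w / 2) / (frame_w / 2)
        dy = (frame_h / 2 - click_y) / (frame_h / 2)  # screen y grows downward
        req = ptz.create_type('RelativeMove')
        req.ProfileToken = token
        req.Translation = {'PanTilt': {'x': dx, 'y': dy, 'space': FOV_SPACE}}
        ptz.RelativeMove(req)

    cam = ONVIFCamera('192.168.0.10', 80, 'admin', 'password')  # placeholders
    ptz = cam.create_ptz_service()
    token = cam.create_media_service().GetProfiles()[0].token
    center_on_click(ptz, token, click_x=320, click_y=120, frame_w=1280, frame_h=720)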

Quality of webcam affects Computer Vision?

I am new to OpenCV and computer vision, and at the moment I am working on a program which needs to find circles while capturing video.
I am using OpenCV for Python with the HoughCircles function to find the needed shape.
I actually need to capture the video from a webcam, because it needs to be perpendicular to the horizontal sheet of paper on which I am placing circles.
However, when I try to capture the video from this webcam (a Tecknet, around £10/$12), instead of recognising only the needed circles it detects hundreds of additional ones. I've compared this with my MacBook Pro webcam, which recognises the circles in the video perfectly.
Before I proceed with this, I'd like to get some feedback from you guys, as I am a beginner and I thought that any webcam would be OK.
Is it actually the quality of the webcam? Is it the function I am using in OpenCV, or can other factors, such as lighting conditions, have an influence?
Thank you in advance.
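For reference, spurious detections from cv2.HoughCircles are usually reduced by smoothing the input and raising the accumulator threshold. A minimal sketch; the camera index and parameter values are illustrative, not tuned:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)  # index of the external webcam; adjust as needed
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.medianBlur(gray, 5)  # suppress sensor noise before the transform
        circles = cv2.HoughCircles(
            gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
            param1=100,  # Canny high threshold used internally
            param2=40,   # accumulator threshold: raise it to reject weak circles
            minRadius=10, maxRadius=100)
        if circles is not None:
            for x, y, r in np.round(circles[0]).astype(int):
                cv2.circle(frame, (x, y), r, (0, 255, 0), 2)
        cv2.imshow('circles', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

A noisier sensor produces more false edges, so a cheap webcam genuinely can need stronger blurring and a higher param2 than a built-in one.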

Capturing only pixels from Google Glass camera

I would like to capture only a few pixels from the Google Glass camera at regular intervals, to obtain color data over time. Is there a way, to save battery life, to capture only a few pixels rather than taking a full image every time and having to post-process it (which is much more intensive and battery-consuming)? Perhaps this is fixed at the hardware level, and thus I cannot do such a thing.
As an alternative, I was hoping the light sensor would give RGB data, but it appears to provide only a monochromatic light level, in units of lux.

Kinect Background Removal and Particle dynamics using opengl?

I'm making an application with the Kinect SDK 1.8, OpenGL and DirectShow.
My objective is to extract the human with the Kinect background-removal API and add an AVI video's frame as the background using DirectShow's IVMRWindowlessControl9::GetCurrentImage. Then I draw about 1000 square textures with some transparency to make a fire (particle dynamics).
I add the video background based on the extracted image's alpha: if a pixel's alpha is 0, I put the video frame's pixel there.
When the Kinect doesn't detect a human and the video frame's pixels fill the whole background, the fire comes out very well.
The problem is this: when the Kinect detects a human, and the video frame's pixels fill the background except for the human area, the fire still comes out, but it consists of only a few particles, and sometimes particles that weren't shown while the human was detected appear and then disappear in a moment.
Once the Kinect can no longer detect the human, the hidden particles show up again and it runs very well.
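For reference, the compositing rule described above amounts to a per-pixel alpha test. A minimal NumPy sketch with hypothetical array shapes (the real pipeline works on Direct3D/OpenGL surfaces, but the rule is the same):

    import numpy as np

    h, w = 480, 640
    extracted = np.zeros((h, w, 4), dtype=np.uint8)        # BGRA from background removal
    video_frame = np.full((h, w, 3), 128, dtype=np.uint8)  # current AVI frame (BGR)

    transparent = extracted[:, :, 3] == 0   # mask of pixels with alpha == 0
    composite = extracted[:, :, :3].copy()
    composite[transparent] = video_frame[transparent]  # substitute the video pixel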

Gyroscope sensor: what is the axis around which the device's rotating?

I am coding an app like "iHandy Level Free" on Google Play.
I am using the gyroscope sensor, but I don't know which axis the device is rotating around, because when I rotate or tilt the device, all three values x, y, z change.
Thanks.
Android follows the ENU (East, North, Up) convention; have a look at this application note: http://www.st.com/st-web-ui/static/active/jp/resource/technical/document/application_note/DM00063297.pdf
So you will get the biggest value on the axis around which the device is being rotated.
It is not possible to get a zero value around any axis, no matter how gently you move the device. You are bound to get some angular rate around the stationary axes (the ones you assume to be stationary).
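To illustrate the point, integrating the angular rates over time makes the rotation axis obvious: the axis with the largest accumulated angle is the one the device rotated around, while the others pick up only noise. A minimal Python sketch with made-up sample data:

    import math

    # Hypothetical gyroscope samples: (dt seconds, rate around x, y, z in rad/s),
    # following Android's ENU axis convention (x east, y north, z up).
    samples = [(0.02, 0.00, 0.00, 0.52),   # rotating mostly around z
               (0.02, 0.01, 0.00, 0.50),
               (0.02, 0.00, 0.01, 0.51)]

    angle = [0.0, 0.0, 0.0]  # integrated rotation around each axis, radians
    for dt, wx, wy, wz in samples:
        for i, rate in enumerate((wx, wy, wz)):
            angle[i] += rate * dt  # angular rate x time = rotation increment

    print([round(math.degrees(a), 2) for a in angle])
    # The z entry dominates; the small x/y values are the noise mentioned above.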
