This is a follow-up to my previous question,
OpenCV PS 3 Eye
Can someone suggest a library that would allow me to grab frames from the camera without too much fuss (like the videoInput lib for Windows) and pass them to OpenCV within my application?
I had a parallel problem using a completely different webcam: it worked well in cheese etc., and v4l-info showed a proper setup, but OpenCV would fail with:
HIGHGUI ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Unable to stop the stream.: Bad file descriptor
After much flailing I found that at least one guy had similar problems with webcams in various applications.
In blind faith I promptly punched in export LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so and, poof, it worked.
The OpenCV v4l2 interface is not as robust as its v4l implementation, and the export is a quick workaround (OpenCV appears to fall back to v4l).
A quick browse of opencv/modules/highgui/src/cap_v4l.cpp suggests that OpenCV would prefer to use v4l2.
I'm running Ubuntu Lucid (2.6.32-28-generic x86_64) with libv4l-0 v0.6.4-1ubuntu1 and OpenCV pulled from the HEAD of the repo a few days ago.
In the course of explaining this I've resolved my issue. It turns out that OpenCV forces the resolution of a v4l2 device to 640x480 by default, and my device has a maximum resolution of 320x240, which caused the failure when testing for the format type in opencv::highgui::cap_v4l::try_palette_v4l2. I changed DEFAULT_V4L_WIDTH and DEFAULT_V4L_HEIGHT accordingly.
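For what it's worth, in later OpenCV versions with the cv2 Python bindings you can request a supported resolution from code instead of patching the defines. A rough sketch (device index 0 and the 320x240 mode are from my case; whether the backend honors the request varies):

import cv2

cap = cv2.VideoCapture(0)  # V4L2 device 0
# Ask for a mode the camera actually supports before the first grab.
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)
ok, frame = cap.read()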
I wanted to install the OV7251 camera driver to work with a module I've recently purchased, the Arducam OV7251 MIPI, as I need to perform a SLAM-like technique called visual inertial navigation (VIN), and global-shutter cameras are preferred for this. As for my system, I'm using ROS Kinetic on an RPi 3B+ running Ubuntu 16.04. I am using this camera because it is near my price point (under $20) and goes through the RPi's CSI port, which sources say is easier and faster than going through USB.
I wanted to take this camera and publish its data to a topic so that the repository I'm using for VIN, OpenVINS, can track the camera's position. Now, the camera I'm using doesn't have much available for it other than the manufacturer's GitHub page, which does not work on Ubuntu and cannot connect to ROS. I'm fairly inexperienced with RPis, ROS included, since I originally wanted to do this on an Arduino (which turned out to be impossible), so I doubt I could write even a simple ROS node, let alone one that talks to the CSI port.
Currently I am unable to find many libraries for this, and the help given to me so far has proved insubstantial. The camera does not have a driver natively supported on the RPi, which is why no /dev/video device shows up, cheese turns up nothing, and the command vcgencmd get_camera reports no detected devices. Someone suggested kernel hacking in order to enable the module in menuconfig, using sources like the ones here. While I do not know much about kernel hacking, he recommended that I follow this guide: after running the defconfig line, I should search for "OV7251" in menuconfig and modularize the only entry that popped up. Despite flashing and repeating this process multiple times to ensure I did not choose the wrong branch (rpi-5.4.y) or the wrong model (RPi 3B+), I ended up stuck on the rainbow screen after every reboot. I know that the rainbow screen means either low power, which it isn't, because it ran fine before, or a kernel error, which seems most likely.
Now, while I would definitely like to fix the rainbow-screen error, I would also like to know: after installing the OV7251 driver, how do I get it working with ROS to send data to topics? Since I doubt I could write my own node, is there a library I could look for to do this? Would libraries that previously failed because of the missing driver suddenly work now, or would I have to take an existing one and modify it? In any case, a more low-level tutorial on accomplishing this would be quite handy, seeing as I am new to this.
But in case this is not a software problem, and this camera is unsupported for good reason: is there any other cheap global-shutter camera I could work with? I couldn't find many in my searches, but maybe you all have better luck or experience in this field. I did manage to find another library by this same manufacturer that supports my camera model and even has a ROS node that works on Ubuntu. However, I believe that if that can be done, it should also be doable over just the CSI port rather than by buying an additional $40 USB camera HAT for the Pi; and along with that, I am starting to doubt the quality of this company's repositories.
Yet the fact that I am finding so little information about using this camera on the CSI port of an RPi, given how well-known this company is, makes me worry that it could be impossible. If it is, do link me to some other good and hopefully well-documented cameras, which could very well be a lot to ask for. And if it is simply impossible to get the results I want with the constraints I have set, how badly would a rolling-shutter camera affect VIN's performance, and is there any special dataset designed for rolling shutter that could minimize the drop in quality? This terrain is all too new to me.
Ok, so I got an RPi engineer to add a dtoverlay for the OV7251 to the RPi firmware, and the most recent rpi-update includes the overlay in the kernel.
I ran sudo rpi-update to install the update, then enabled the overlay by adding dtoverlay=ov7251 to /boot/config.txt (I edited it with sudo nano /boot/config.txt). The repository has only one dependency, v4l-utils, which is installed easily enough with sudo apt-get install v4l-utils. Finally, I ran sudo reboot to apply the changes.
And in order to pull the images into ROS, I edited a v4l2 node called usb_cam to accept the pixel format that the OV7251 camera uses (Y10). My fork can be found here. To install it (since the docs for the original repo say very little about installation), I ran:
cd ~/catkin_ws/src
git clone https://github.com/ai-are-better-than-humans/usb_cam.git
cd ..
catkin_make
and then after that all you have to do is run roslaunch usb_cam usb_cam-test.launch to start the node. Mine started out dark, so I had to go into the launch file and adjust the brightness for a bit. While you're there, make sure the pixel_format parameter has a value of Y10.
You should get a sensor_msgs::Image message published on a topic named "<camera_name>/image_raw"; you can run rqt_graph to visualize it. Big thanks to 6by9 over at the Raspberry Pi forums; I don't think I could have gotten it done without him, and he did a lot of work that I'm very thankful for. Thought I'd share the knowledge back here in case anyone finds it useful.
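If you want to sanity-check the stream from Python, here is a minimal rospy sketch (the topic name usb_cam/image_raw and the node name are assumptions based on the default launch file; adjust them to whatever rqt_graph shows):

import rospy
from sensor_msgs.msg import Image

def on_image(msg):
    # Log basic frame info so we know data is flowing
    rospy.loginfo("frame %dx%d, encoding=%s", msg.width, msg.height, msg.encoding)

rospy.init_node("ov7251_listener")
rospy.Subscriber("usb_cam/image_raw", Image, on_image)
rospy.spin()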
EDIT: I hear you can also compile with catkin_make --pkg usb_cam -DCMAKE_BUILD_TYPE=Release instead of catkin_make if the node takes too much CPU. Also, if you see a ton of error messages while compiling, it's fine, it should still work; but if you want to get rid of them you can refer to this answer from a ROS thread:
It looks like you need to install libavcodec. I don't know the exact command to install it off the top of my head, but the format will look like this:
sudo apt-get install libavcodec
The exact package name might not be libavcodec. It may look something like libavcodec-VERSION-NUMBER or libavcodec-dev. In these situations you can search for packages with a command like this:
apt-cache search libavcodec
This will find all packages whose text contains "libavcodec", which should find the correct package for you to install.
I have seen different modules like OpenCV and VideoCapture for taking fast shots from the computer's webcam, but these are only for Python 2. I thought I would make it work with Pygame, but I got many errors, and I found several pages, including Pygame's own website, saying that its camera module only works on Linux.
Are there any modules for Python 3.4 for Windows that can quickly take shots from the webcam?
OpenCV can apparently be installed on Windows with Python 3, according to this answer here.
After OpenCV, my second recommendation is to use GStreamer, and this is apparently possible on your specific platform according to this answer.
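For what it's worth, here is a minimal sketch of a quick webcam shot with the cv2 bindings, assuming OpenCV is installed for Python 3 on Windows (device index 0 means the first webcam):

import cv2

cap = cv2.VideoCapture(0)  # open the first webcam
if not cap.isOpened():
    raise RuntimeError("could not open the webcam")

ok, frame = cap.read()  # grab a single frame
if ok:
    cv2.imwrite("shot.png", frame)  # save it to disk
cap.release()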
I am using the Freescale GPU SDK, the OpenGL ES APIs for drawing, and the GStreamer APIs for camera streaming on an ARM architecture. In my case it is possible to do them separately, but I want to know: is there any way to show the camera stream and draw something on top of it?
Thanks in advance.
Some of Freescale's processors (such as the i.MX6) have multiple framebuffer overlays (/dev/fb0, /dev/fb1, /dev/fb2, ...).
You can then stream the camera content to fb1 and draw on fb0, for example.
Note that these framebuffers are not all activated by default.
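For illustration, a generic pipeline along those lines (a sketch, not Freescale-specific: fbdevsink comes from gst-plugins-bad, and the i.MX BSPs of that era ship their own sinks, such as mfw_v4lsink for GStreamer 0.10 pipelines):

gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! fbdevsink device=/dev/fb1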
It depends on your concrete root file system, but if you are using the one generated with the Freescale Yocto for i.MX6, the default configuration is at /usr/share/vssconfig.
In that file you can specify which framebuffer gstreamer uses. By default /dev/fb0 is the BACKGROUND framebuffer and /dev/fb1 is the FOREGROUND framebuffer.
You can make GStreamer draw to /dev/fb0 while you draw with cairo over /dev/fb1 (mmap /dev/fb1 and use cairo_image_surface_create_for_data), controlling the transparency level with ioctls() on /dev/fb1.
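A rough Python sketch of that overlay idea using pycairo and mmap (the 800x480 ARGB mode is a made-up example; real code should query the framebuffer geometry with the FBIOGET_VSCREENINFO ioctl):

import mmap
import os
import cairo  # pycairo

# Hypothetical mode; query the real one with FBIOGET_VSCREENINFO.
WIDTH, HEIGHT, BYTES_PER_PIXEL = 800, 480, 4
STRIDE = WIDTH * BYTES_PER_PIXEL

fb = os.open("/dev/fb1", os.O_RDWR)
buf = mmap.mmap(fb, HEIGHT * STRIDE)

# Wrap the mapped framebuffer memory in a cairo surface and draw into it.
surface = cairo.ImageSurface.create_for_data(buf, cairo.FORMAT_ARGB32, WIDTH, HEIGHT, STRIDE)
ctx = cairo.Context(surface)
ctx.set_source_rgb(1, 1, 1)
ctx.set_font_size(32)
ctx.move_to(40, 60)
ctx.show_text("camera overlay")
surface.flush()  # pixels land directly in /dev/fb1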
In fact, I don't really know the behavior of X11. That's why I suggest you disable X11 and do direct rendering with OpenGL via a DRI (Direct Rendering Infrastructure) driver and DRM (Direct Rendering Manager) on one of the two framebuffers, and stream your camera to the other fb. (Maybe I am wrong, and I hope someone else will correct me if that is the case.)
Here is some documentation (in French) on how DRM and DRI work.
I have faced this problem in the past.
I had to stream video with GStreamer and draw text over it with Pango. The first thing I did was to generate a minimal image (with GStreamer enabled, of course) but without any X11 libraries. For me (it may be different on your module), GStreamer used the /dev/fb1 node by default, and I then used /dev/fb0 for the Pango rendering.
It was quite easy to do after several tests, so I also suggest you experiment: try different things and different ways, and I hope it will work as you want.
While trying to use OpenCV for face detection on Windows, I need to pull in almost all the libraries (2d, 3d, ml, gui, etc.); otherwise my program won't run. I am not really sure why I need any GUI for something as algorithmic as object detection. What is the minimal set of libraries required, and is there a special way to build OpenCV so that there aren't so many dependencies?
You need opencv_core for base objects like cv::Mat, opencv_imgproc for thresholds, histograms and other image pre-processing, and opencv_highgui for reading, writing and displaying images and for using video streams from cameras and video files. That's all I can tell you without knowing how you run OpenCV on Windows and which version of OpenCV you are using. As far as I know there is no way of building only some parts of OpenCV.
Generally, from my experience, you only need to add the libraries associated with the headers you are using. So if you have problems tracking them down, avoid the catch-all #include "opencv2/opencv.hpp" and take the slightly harder route of #include "opencv2/core/core.hpp" etc.
Yes, you can build OpenCV without certain library features. OpenCV uses CMake, which requires a little learning if you don't know it already, but you can uncheck OpenCV components you don't need in the CMake build configuration.
You can get away without using highgui in your app if you can read images with some other library (though I'm not sure whether you can build OpenCV without it).
Also, you will need to #include "opencv2/objdetect/objdetect.hpp" for support of Haar cascade classifiers (as of OpenCV 2.3.1).
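To give an idea of how small the required surface is, here is a sketch in Python for brevity (the equivalent C++ calls live in the same modules); it touches only core, imgproc, objdetect and image I/O functionality, and writes the result to disk so no highgui window is ever opened. The cascade and image paths are placeholders:

import cv2

# Placeholder paths: the cascade XML ships in OpenCV's data directory.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces and draw rectangles around them.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("out.jpg", img)  # no GUI needed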
I'm attempting to open a video file using OpenCV 1.0's highgui.cvCreateFileCapture(path) function on a Fedora 11 system. Unfortunately, this function always returns null. I've tried it on a few different video formats, and I've even taken the steps recommended on the OpenCV wiki (http://opencv.willowgarage.com/wiki/VideoCodecs) to use mencoder to transcode to RAWI420, as follows:
$ mencoder in.avi -ovc raw -vf format=i420 -o out.avi
This seems to have had no effect, so I'm a bit stuck. No error is produced; null (or, since I'm using the Python wrapper, None) is returned. I have ffmpeg, ffmpeg-devel and ffmpeg-libs installed, so I think I should have the appropriate codec support. Does anyone know how this could be resolved, or failing that, what steps I could take to debug the issue?
I was having this problem on Ubuntu 10.10, and for me it was a problem with the libraries. I couldn't find out which library was missing, but I discovered that executing the installation scripts for openFrameworks before compiling OpenCV worked!
It depends on how you installed OpenCV. OpenCV can use one of several different engines for reading video files, including ffmpeg, gstreamer and (I believe) xine. Make sure that your installation is indeed using ffmpeg as the engine; the easiest way I can think of to check is to run "ldd programname" and see whether ffmpeg is listed among the dependencies. Furthermore, you need to make sure that the engine is capable of processing your video codec.
OpenCV is unfortunately very quiet about what causes errors. Returning NULL could mean "unable to handle codec", and it could mean "access denied". One option is to run your process through strace and see what the system calls are returning. Worst case, you'll need to use a debugger and step through the code as you call cvCreateFileCapture. Hope this helps.
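As a starting point, here is a tiny probe that mirrors the asker's wrapper calls (the import line matches the old SWIG bindings of that era and may differ on your install):

from opencv import highgui  # old SWIG bindings; adjust to your install

for path in ("in.avi", "out.avi"):
    cap = highgui.cvCreateFileCapture(path)
    print path, "->", ("opened" if cap is not None else "None (codec/backend problem?)")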
OpenCV is already on version 2.x.
Do yourself a favour and update it to at least version 2.1.