OpenCV 1.0: cvCreateFileCapture Always Returns Null under Fedora 11

I'm attempting to open a video file using OpenCV 1.0's highgui.cvCreateFileCapture(path) function on a Fedora 11 system. Unfortunately, this function always returns null. I've tried it on a few different video formats, and I've even taken the steps recommended on the OpenCV wiki (http://opencv.willowgarage.com/wiki/VideoCodecs) to use mencoder to transcode to RAWI420, as follows:
$ mencoder in.avi -ovc raw -vf format=i420 -o out.avi
This seems to have had no effect, so I'm a bit stuck. No error is produced; null (or, since I'm using the Python wrapper, None) is returned. I have ffmpeg, ffmpeg-devel and ffmpeg-libs installed, so I think I should have appropriate codec support. Does anyone know how this could be resolved, or, in lieu of a resolution, what steps could be taken to debug the issue?
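For reference, a minimal reproduction sketch (assuming the OpenCV 1.0 SWIG-based Python wrapper, whose module layout may differ slightly between builds):

from opencv import highgui

capture = highgui.cvCreateFileCapture("out.avi")
if capture is None:
    print("cvCreateFileCapture returned None")  # the failure described above
else:
    frame = highgui.cvQueryFrame(capture)
    print("grabbed a frame: %s" % (frame is not None))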

I was having this problem on Ubuntu 10.10, and for me it was a problem with the libraries. I couldn't find out which library was missing, but I discovered that executing the installation scripts for openFrameworks before compiling OpenCV worked!

It depends on how you installed OpenCV. OpenCV can use one of several different engines for reading video files, including ffmpeg, GStreamer, and (I believe) Xine. Make sure that your installation is indeed using ffmpeg as the engine. The easiest way I can think of to check is to run "ldd programname" and see whether the ffmpeg libraries are listed among the dependencies. Furthermore, you need to make sure that the engine is capable of processing your video codec.
OpenCV is unfortunately very quiet about what causes errors. Returning NULL could mean "unable to handle codec", or it could mean "access denied". One option is to run your process through strace and see what the system calls are returning. Worst case, you'll need to use a debugger and step through the code as you call cvCreateFileCapture. Hope this helps.
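For example (the program and file names here are hypothetical; for the Python wrapper, run ldd against the highgui shared object rather than an executable):
$ ldd ./myprogram | grep -i libav
$ strace -f -e trace=open python myscript.py 2>&1 | grep out.avi
The first shows whether the ffmpeg libraries (libavcodec/libavformat) are linked in at all; the second shows whether the video file is even being opened successfully.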

OpenCV is already at version 2.x.
Do yourself a favour and update to version 2.1 (at least).


What exact form of *.wav file is supported by wxWidgets?

The wxBell() command does nothing on Linux (Ubuntu), and I read a suggestion to use wxSound.
Now I found a license-free sound sample for a 'wrong answer' sound here:
http://www.orangefreesounds.com/wrong-answer-sound-effect/
Unfortunately, that is in *.mp3 format, so I found an online conversion program here:
https://www.online-convert.com/result/57548c3f-6cf3-40b5-9dcc-f7c3e5f03ab3
It offers various options, like 32-bit floating point; signed or unsigned integers of 8, 16, 24 or 32 bits; either little or big endian; different sample rates; etc.
But when the constructor of wxSound tries to read the converted file, I get: Sound file '../wrong-answer.wav' is in unsupported format. (At least it can find it.)
This is while I can play at least one of those converted files (16-bit signed integer, 44100 Hz, mono) by double-clicking on it in Nautilus. (The video player seems to be called Totem.)
But the big question is: what bit resolution, sampling rate, number of channels and PCM format will be acceptable to wxSound?
And this is a lot of hassle for a simple beeping/buzzing sound. Even my ZX Spectrum could do this in 1983 without extra resource files: there you had a BEEP command to which you could pass the frequency and duration. Isn't something similar possible without having to use SDL (a Linux-native API call, for instance)?
Bonus points: is there a solution that works over ssh, now that we all work from home? The software runs on a company server; we get the GUI at home with ssh -X, but sound?
wxBell() uses the "bell" configured in your desktop environment, so its behaviour depends on the platform.
As for wxSound, it is unfortunately a bit difficult to say exactly what it doesn't like in your file, because it performs several checks, but normally it shouldn't fail on valid WAV data. If you built wxWidgets yourself, the simplest way to find out what's wrong is to run the program under gdb, do "b wxSound::LoadWAV" and execute this function step by step to see which check fails.
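As a cross-check, you can sidestep the online converter and generate a known-good file yourself. Here is a minimal sketch using Python's standard wave module (the assumption being that plain uncompressed 16-bit PCM, mono, 44100 Hz is the most conservative WAV layout and the most likely to satisfy wxSound's checks):

import math
import struct
import wave

# Synthesize a 0.3 s, 880 Hz beep as 16-bit signed little-endian PCM.
rate, duration, freq = 44100, 0.3, 880.0
samples = b"".join(
    struct.pack("<h", int(32000 * math.sin(2 * math.pi * freq * n / rate)))
    for n in range(int(rate * duration))
)
w = wave.open("beep.wav", "wb")
w.setnchannels(1)    # mono
w.setsampwidth(2)    # 2 bytes = 16-bit samples
w.setframerate(rate)
w.writeframes(samples)
w.close()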
Tips I got from here:
https://trac.wxwidgets.org/ticket/14899
Try installing "oss-compat" first, then reboot and test.
Also try installing "alsa-oss".
I haven't checked yet whether this works.

Install OV7251 driver on RPi 3B+ for use with ROS

I want to install the OV7251 camera driver to work with a module I've recently purchased, the Arducam OV7251 MIPI, as I need to perform a SLAM-like technique called Visual Inertial Navigation (VIN), and global-shutter cameras are preferred for this. As far as my system goes, I'm using ROS Kinetic on an RPi 3B+ running Ubuntu 16.04. I chose this camera because it is near my price point (under $20) and goes through the RPi's CSI port, which sources say is easier and faster than going through USB.
I want to take this camera and publish its data to a topic so that the repository I'm using for VIN, OpenVINS, can track the camera's position. The camera I'm using doesn't have much documentation beyond the manufacturer's GitHub page, which does not work on Ubuntu and cannot connect to ROS. I'm fairly inexperienced with RPis, ROS included, since I originally wanted to do this on an Arduino (which turned out to be impossible), so I doubt I could write even a simple ROS node, let alone one that connects to the CSI port.
Currently I am unable to find many libraries for this, and the help I've been given has proved insubstantial. The camera does not have natively supported drivers on the RPi, which is why I cannot find any /dev/video devices, cheese turns up nothing, and the command vcgencmd get_camera reports no detected devices. Someone suggested kernel hacking in order to enable the module in menuconfig, using libraries like the ones here. While I do not know much about kernel hacking, he recommended that I follow this guide, and that after running the defconfig line I should search for "OV7251" in menuconfig and modularize the only entry that popped up. Despite flashing and repeating this process multiple times to ensure I did not choose the wrong branch (rpi-5.4.y) or wrong model (RPi 3B+), I ended up stuck on the rainbow screen after every reboot. I know the rainbow screen means either low power, which it isn't because it ran before, or a kernel error, which seems most likely.
Now, while I would most definitely like to fix the rainbow-screen error, I would also like to know: after installing the OV7251 driver, how do I get it working with ROS to send data to topics? Since I doubt I could write my own node, is there a library I could look for to do this? Would libraries that did not work previously, due to the missing driver, suddenly work now, or would I have to take an existing one and modify it? In any case, a more low-level tutorial would be quite handy, seeing as I am new to this.
But in case this is not a software problem, and the camera is unsupported for good reason, is there any other cheap global-shutter camera I can work with? I couldn't find many in my searches, but maybe you all have better luck/experience in this field. I did manage to find another library by the same manufacturer which happens to support my camera model and even has a ROS node that works on Ubuntu. However, I believe that if that can be done, then it should also be possible over just the CSI port, rather than buying an additional $40 USB camera hat for the Pi; and along with that, I am starting to doubt the validity of this company's repositories.
Yet the fact that I'm finding so little information about using this camera on the CSI port of an RPi, given how well known this company is, scares me: it could be impossible. If it is, do link me some other good and hopefully well-documented cameras, though that could be a lot to ask for. And if it is simply impossible to get the results I want with the constraints I have set, how badly would a rolling-shutter camera affect VIN's performance, and is there any special dataset designed for rolling shutter that could minimize the drop in quality? This terrain is all new to me.
OK, so I got an RPi engineer to add a dtoverlay for the OV7251 to the RPi's firmware, and the most recent rpi-update has the overlay in the kernel.
I ran sudo rpi-update to install the update, then added dtoverlay=ov7251 to /boot/config.txt (editing it with sudo nano /boot/config.txt) to enable the overlay. The repository has only one dependency, v4l-utils, which is installed easily enough with sudo apt-get install v4l-utils. Finally, I ran sudo reboot to apply the changes.
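In summary, the whole sequence was (tee is used here only as a non-interactive way of appending the overlay line):
$ sudo rpi-update
$ echo "dtoverlay=ov7251" | sudo tee -a /boot/config.txt
$ sudo apt-get install v4l-utils
$ sudo reboot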
To pull the images into ROS, I edited a v4l2 node called usb_cam to accept the pixel format that the OV7251 camera uses (Y10). My fork can be found here. To install it (since the docs for the original repo say very little about installation), I ran:
cd ~/catkin_ws/src
git clone https://github.com/ai-are-better-than-humans/usb_cam.git
cd ..
catkin_make
and after that, all you have to do is run roslaunch usb_cam usb_cam-test.launch to start the node. Mine started out dark, so I had to go into the launch file and play with the brightness for a bit. While you're there, make sure the pixel_format parameter has a value of Y10.
You should get sensor_msgs::Image messages published to a topic named "<camera_name>/image_raw"; you can run rqt_graph to visualize it. Big thanks to 6by9 over at the Raspberry Pi forums; I don't think I could have gotten it done without him. He did a lot of work that I'm very thankful for. Thought I'd share the knowledge back here in case anyone finds it useful.
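For anyone wanting a quick sanity check from the Python side, here is a minimal subscriber sketch (assuming the node was launched with its default camera name usb_cam, so the topic is /usb_cam/image_raw):

import rospy
from sensor_msgs.msg import Image

# Log the size and encoding of each frame published by usb_cam.
def on_image(msg):
    rospy.loginfo("got %dx%d frame, encoding=%s", msg.width, msg.height, msg.encoding)

rospy.init_node("ov7251_listener")
rospy.Subscriber("/usb_cam/image_raw", Image, on_image)
rospy.spin()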
EDIT: I hear you can also compile with catkin_make --pkg usb_cam -DCMAKE_BUILD_TYPE=Release instead of catkin_make if the node takes too much CPU. Also, if you see a ton of error messages while compiling, it's fine; it should still work. If you want to get rid of them, you can refer to this answer from a ROS thread:
It looks like you need to install libavcodec. I don't know the exact command to install it off the top of my head, but the format will look like this:
sudo apt-get install libavcodec
The exact package name might not be libavcodec. It may look something like libavcodec-VERSION-NUMBER or libavcodec-dev. In these situations you can search for packages with a command like this:
apt-cache search libavcodec
This will find all packages whose text contains "libavcodec", which should find the correct package for you to install.

How do I play a wav file from a Free Pascal application running on Linux?

I have a multi-platform application written in Free Pascal. This application plays a short sound on certain events. On Windows, I can do this with MMSystem and sndPlaySound('sound.wav'). However, I don't know how to do this on Linux without external libraries.
I have a solution that plays it with SDL and OpenAL, but I don't want a dependency on these libraries just to play one short sound. Is there a Linux command-line player that is present on most distros by default? The file format doesn't matter; I will convert it.
mplayer is both command-line and graphical. You can start it on a tty or a pty.
You could try aplay, but that has a dependency on ALSA. Maybe sox?
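For example, assuming a file called sound.wav, either of these plays it from the shell (play is the sox front-end), and both can be launched from Free Pascal with ExecuteProcess:
$ aplay sound.wav
$ play sound.wav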
The program mplayer, "the movie player", gives you the option of using a graphical user interface or the console, so I would imagine it has a solution to your problem.
Are you looking to BEEP, BLEEP, BOOP and BOP (and low-frequency fart)? Use sox. If you're looking to play a file: use sox or SDL.
You need a for-looped array to get a sort-of piano effect, like a song. It's ugly, messy, and can't be tweaked much, like the old PC speaker, but it's passable.
beep is probably what you want, though. Install the package, make sure there's a speaker on your motherboard (no hookup? use sox), and enable the pcspkr module (on Ubuntu it's blacklisted by default). If beep produces nothing, try sox.
At least you'll have something. Yes, you can check for loaded modules and installed packages; I believe I've done both.

Grabbing Images from a Webcam to be used with OpenCV

This is a follow-up to my previous question,
OpenCV PS 3 Eye
Can someone suggest a library that would allow me to grab frames from a camera without too much fuss (like the videoInput lib for Windows) and pass them to OpenCV within my application?
I had a parallel problem with a completely different webcam: it worked well in cheese etc., v4l-info showed a proper setup, but OpenCV would fail with:
HIGHGUI ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Unable to stop the stream.: Bad file descriptor
After much flailing I found that at least one guy had similar problems with webcams in various applications.
In blind faith I promptly punched in export LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so and «poof» it worked.
The OpenCV v4l2 interface is not as robust as the v4l implementation, and the export is a quick workaround (OpenCV appears to revert to v4l).
With a quick browse of opencv/modules/highgui/src/cap_v4l.cpp, it would appear that OpenCV would like to use v4l2.
I'm running Ubuntu Lucid 2.6.32-28-generic x86_64, libv4l-0 v0.6.4-1ubuntu1, with OpenCV pulled from the HEAD of the repo a few days ago.
In the course of explaining this I've resolved my issue. It turns out that OpenCV forces the resolution on a v4l2 device to 640x480 by default, and my device had a maximum resolution of 320x240, which caused the fault when testing for the format type in opencv::highgui::cap_v4l::try_palette_v4l2. I changed DEFAULT_V4L_WIDTH and DEFAULT_V4L_HEIGHT.
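For reference, once capture works (with the LD_PRELOAD workaround if needed), grabbing and displaying frames takes only a few lines. A sketch using the OpenCV 2.x-era cv Python module:

import cv

capture = cv.CaptureFromCAM(0)      # 0 = first V4L device
cv.NamedWindow("webcam")
while True:
    frame = cv.QueryFrame(capture)  # None if the grab fails
    if frame is None:
        break
    cv.ShowImage("webcam", frame)
    if cv.WaitKey(10) == 27:        # Esc quits
        break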

How to programmatically create videos?

Is there a freely available library to create an MPEG (or any other simple video format) out of an image sequence?
It must run on Linux too, and ideally have Python bindings.
I know there's mencoder (part of the mplayer project) and ffmpeg, both of which can do this.
ffmpeg is a great (open-source) program for building all kinds of video and for converting one type of video (a sequence of images, in this case) into other types.
Usually it is used from the command line, but that is really just a wrapper around its internal libraries, which are expressly available for use from within another program.
There are also Python bindings that wrap the C API, though that particular project doesn't seem to be getting the best support (there are probably other projects out there doing the same thing).
There's also this link where someone has used ffmpeg to do something similar to what you're looking for.
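For example, a minimal command line (assuming frames named img0001.png, img0002.png, ...) would be:
$ ffmpeg -r 25 -i img%04d.png output.mpg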
GStreamer is a popular choice. It's a full multimedia framework much like DirectShow or QuickTime, has the advantage of having legally licensed codecs available, and has excellent Python bindings.
In C++, OpenCV (the open-source computer vision library originally from Intel) lets you create an AVI file and just push frames into it...
but it's like shooting a fly with a cannon.
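Still, it is only a few lines. A sketch using the modern cv2 API (the file names are hypothetical; on old 2.x builds the fourcc helper is cv2.cv.CV_FOURCC instead):

import cv2

frames = [cv2.imread("img%04d.png" % i) for i in range(1, 101)]
frames = [f for f in frames if f is not None]   # skip missing files
h, w = frames[0].shape[:2]
# Motion-JPEG at 25 fps; the frame size must match the input images.
writer = cv2.VideoWriter("out.avi", cv2.VideoWriter_fourcc(*"MJPG"), 25.0, (w, h))
for f in frames:
    writer.write(f)
writer.release()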
Not a library, but mplayer has the ability to encode JPEG sequences to almost any format. It runs on Linux, Windows, BSD and other platforms, and you can write a Python script if you want to drive it from Python.
ffmpeg has an API and also Python bindings; it seems to be the way to go! Thanks.
ffmpeg minimal runnable C example
I have provided a full runnable example at: How to resize a picture using ffmpeg's sws_scale()?
