Beaglebone Black Video Capture: Error "select timeout" - linux

Hey, I'm following Derek Molloy's tutorial:
http://derekmolloy.ie/beaglebone/beaglebone-video-capture-and-image-processing-on-embedded-linux-using-opencv/#comment-30209
I'm using a Logitech C310 webcam, which is supported by the Linux UVC driver.
root@beaglebone:/boneCV# v4l2-ctl --all
Driver Info (not using libv4l2):
    Driver name   : uvcvideo
    Card type     : UVC Camera (046d:081b)
    Bus info      : usb-musb-hdrc.1.auto-1
    Driver version: 3.8.13
    Capabilities  : 0x84000001
        Video Capture
        Streaming
Format Video Capture:
    Width/Height  : 640/480
    Pixel Format  : 'YUYV'
    Field         : None
    Bytes per Line: 1280
    Size Image    : 614400
    Colorspace    : SRGB
Crop Capability Video Capture:
    Bounds      : Left 0, Top 0, Width 640, Height 480
    Default     : Left 0, Top 0, Width 640, Height 480
    Pixel Aspect: 1/1
Video input : 0 (Camera 1: ok)
Streaming Parameters Video Capture:
    Capabilities     : timeperframe
    Frames per second: 30.000 (30/1)
    Read buffers     : 0
Priority: 2
So the camera is detected by the BeagleBone with no problem.
But when I try to capture video, I simply get this error:
root@beaglebone:/boneCV# ./capture -f -c 600 -o > output.raw
Force Format 1
select timeout
Looking at other threads, people don't seem to know how to answer this question. Can anyone with experience on this project help me out?

If you compare the image size of YUYV with that of MJPEG, you will notice that the former is much larger than the latter: at 640x480, YUYV needs 614400 bytes per frame (see the v4l2-ctl output above), which at 30 fps is about 18.4 MB/s. The BBB has limited bandwidth on its USB port, and that's why you cannot operate your camera in YUYV format; MJPEG outputs a compressed video stream instead. According to Matthew's bandwidth tests, the usable limit is about 13.2 MB/s. Also note that different OpenCV versions tend to override the resolution you set with the v4l2-ctl command, so you have to set the resolution in the boneCV code itself. I'm not sure how it's done in C++, but for Python, see "Changing camera resolution in opencv code".
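If you go through OpenCV rather than the boneCV capture program, a minimal Python sketch (assuming a reasonably recent OpenCV whose V4L2 backend honors the FOURCC property) to request MJPEG and pin the resolution might look like:
import cv2

cap = cv2.VideoCapture(0)
# Request compressed MJPEG frames so the stream fits the BBB's ~13.2 MB/s USB budget.
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
ok, frame = cap.read()
print('frame captured' if ok else 'capture failed')
cap.release()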

Well, I can say the issue is resolved. After rebooting and trying the camera again several hours later, it magically seems to work.
The only thing I changed was to simplify the capture call; it is now:
./capture -o > output.raw
I haven't converted the raw file to MPEG-4 yet, since I'm installing ffmpeg as I type this, but I can confirm that grabbing still images is working, and the file size of output.raw confirms that it is indeed capturing video as well. If anyone finds this and is stuck, I will be glad to help as much as I can.
Strangely, it only seems to capture video after running the picture-grabber program first, so the grabber must be initializing something that the capture program isn't.
UPDATE: It turns out that the YUYV video mode is not working but MJPEG is. Running the grabber first switched the camera into MJPEG mode, and that's why capture worked afterwards. I'm not sure yet why YUYV doesn't work.
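For anyone else who lands here: assuming your v4l2-ctl build supports --set-fmt-video, you should be able to force MJPEG from the command line before capturing, instead of relying on the grabber's side effect:
v4l2-ctl --device=/dev/video0 --set-fmt-video=width=640,height=480,pixelformat=MJPG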

Related

capture v4l : no capture device - but Video Capture Multiplanar?

I would like to capture images from a camera. I used the usual v4lgrab example (https://www.kernel.org/doc/html/v5.4/media/uapi/v4l/capture.c.html), but it fails: I get a message that it is not a capture device.
Taking a closer look, the device reports "Video Capture Multiplanar".
I've tried to read up on this but don't understand it. I am also new to V4L, so can someone give me a hint on how to proceed?
v4l2-dbg -D
Driver info:
    Driver name   : mxc-isi-cap
    Card type     : mxc-isi-cap
    Bus info      : platform:32e00000.isi:cap_devic
    Driver version: 5.4.24
    Capabilities  : 0x84201000
        Video Capture Multiplanar
        Streaming
        Extended Pix Format
        Device Capabilities
Do I understand it right that this is a device driver issue and has nothing to do with the sensor chip?

Python Sounddevice Callback returning an array with zeros

I am trying the Python sounddevice library to stream audio from the microphone:
self.audio_streamer = sd.Stream(device=self.input_device, channels=self.channels,
                                samplerate=self.sampling_rate, dtype='int16',
                                callback=self.update_audio_feed, blocksize=self.audio_block_size,
                                latency='low')

def update_audio_feed(self, indata, outdata, frames, time, status):
    print("update_audio_feed")
    if status:
        print(status, file=sys.stderr)
    print(indata)
    outdata.fill(0)
Output:
indata is always an array of zeros in the callback:
update_audio_feed
[[0]
[0]
[0]
...
[0]
[0]
[0]]
Sounddevice is detecting the mic fine but not getting the signal:
Device Info: {'name': 'MacBook Pro Microphone', 'hostapi': 0, 'max_input_channels': 1, 'max_output_channels': 0, 'default_low_input_latency': 0.04852607709750567, 'default_low_output_latency': 0.01, 'default_high_input_latency': 0.05868480725623583, 'default_high_output_latency': 0.1, 'default_samplerate': 44100.0}
Sampling rate: 44100.0
The issue on my Mac was a security/permissions issue. When I tried running the Python script through the Visual Studio console, it did not work... but when I ran it from the macOS Terminal, it prompted for microphone access and everything started to work.
More details here :
https://www.reddit.com/r/MacOS/comments/9lwyz0/mojave_not_privacy_settings_blocking_all_mic/
https://github.com/spatialaudio/python-sounddevice/issues/267
I've been using sounddevice without major issues on a number of Macs for a few months now.
Firstly, have you tried the wire.py example? That works out of the box for me.
Two things that I noticed in your code:
I haven't tried specifying the blocksize; I have only used the default value of 0. I could well believe that may be causing you issues.
You've specified a "low"-latency Stream. At least on OS X 10.13 this produces very unstable audio (lots of input underflows), and input underflows often mean indata is filled with zeros. If stable audio is important to you, I would recommend considering latency options higher than "high"; for reference, Audacity uses 100 ms and obtains stable audio. A minimal sketch follows below.
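As a rough illustration (not the poster's exact code; the channel count and sample rate are assumptions), a pass-through stream using the default blocksize and a fixed 100 ms latency would look like:
import sys
import sounddevice as sd

def callback(indata, outdata, frames, time, status):
    if status:  # input underflows reported here usually mean indata arrived zero-filled
        print(status, file=sys.stderr)
    outdata[:] = indata  # copy the mic signal straight to the output (a "wire")

# blocksize=0 (the default) and latency=0.1 s (roughly what Audacity uses)
# are the settings suggested above.
with sd.Stream(channels=1, samplerate=44100, dtype='int16',
               blocksize=0, latency=0.1, callback=callback):
    sd.sleep(3000)  # run the stream for three seconds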
For those interested in this problem in the future, you may wish to look at the issue posted on sounddevice at GitHub.
I had the same issue on macOS, but I was running the script from VS Code. VS Code never asked for microphone permission, so it assumed it had the permission (which it didn't), and the callback received an array of zeros.
I switched to running the script from the Terminal and everything changed: I got a permission request and everything went well.

How to capture still image from webcam on linux

I am trying to write a C++/Qt program for Linux in which I take a still photo from a webcam, apply some transformations (cropping, resizing, etc.), and save it to a JPEG file.
But I have encountered some problems. The main problem is that the standard UVC (USB video device class) Linux driver currently does not support direct still-image capture: http://www.ideasonboard.org/uvc/ .
So there are two possible ways to capture a still image: you can take one frame from the camera's video stream, or you can take a separate photo, like a portable digital camera does. The second way is not supported by the Linux UVC driver, so taking a frame from the stream is the only option. The problem is that the size of such a photo can't be bigger than the size of the video in the preview window; so if I want to take a 2-megapixel photo, I must start the video stream at 1600x1200, which is not very convenient (at least in Qt, the size of the video stream depends on the preview window size).
I know there is the Video4Linux2 API, which may be helpful for this task, but I don't know how to use it. I am currently learning GStreamer, but I can't yet figure out how to do what I need with these tools.
So I will appreciate any help. I think it is not a hard problem for people who know Linux, GStreamer, the v4l2 API, and other Linux-specific things.
By the way, the program will be used only with the Logitech C270 HD webcam.
Please help me: I don't know which API or framework can do this. Maybe you know.
Unfortunately the V4L2 calls in OpenCV did not work for still-image capture with any camera I have tried out of the box using the UVC driver.
To debug the issue I have been playing with trying to accomplish this in C code calling V4L2 directly.
I have been playing with the example code found here. It uses the method of pulling frames from the video stream.
You can compile it with:
gcc -O2 -Wall `pkg-config --cflags --libs libv4l2` filename.c -o filename
I have experimented with 3 logitech cameras. The best of the lot seems to be the Logitech C910. But even it has significant issues.
Here are the problems I have encountered trying to accomplish your same task with this code.
It works pretty much every time with width and height set to 1920x1080.
When I query other possibilities directly from the command line using for example:
v4l2-ctl --list-formats-ext
and I try some of the other "available" smaller sizes, it hangs in the select call, waiting for the camera to release the buffer.
Also when I try to set other sizes directly from the command line using for example:
v4l2-ctl -v height=320 -v width=240 -v pixelformat=YUYV
Then check with
v4l2-ctl -V
I find that it returns the correct pixel format but quite often not the correct size.
Apparently this camera, which is listed on the UVC site as UVC and therefore v4l2 compatible, is not up to snuff. I suspect it is just as bad with other cameras; the other two I tried were also listed as compatible on the site but had worse problems.
I did some more testing on the Logitech C910 after I posted this. I thought I would post the results in case they help someone else out.
I wrote a script to test the v4l2 grabber code mentioned above on all the formats the camera claims to support when queried through v4l2. Here are the results:
640x480 => Hangs on clearing buffer
160x120 => Works
176x144 => Works
320x176 => Works
320x240 => Works
432x240 => Works
352x288 => Works
544x288 => Works
640x360 => Works
752x416 => Hangs on clearing buffer
800x448 => Hangs on clearing buffer
864x480 => Works
960x544 => Works
1024x576 => Works
800x600 => Works
1184x656 => Works
960x720 => Works
1280x720 => Works
1392x768 => Works
1504x832 => Works
1600x896 => Works
1280x960 => Works
1712x960 => Works
1792x1008 => Works
1920x1080 => Works
1600x1200 => Works
2048x1536 => Works
2592x1944 => Hangs on clearing buffer.
It turns out that the default setting of 640x480 doesn't work, and that is what trapped me and most others who have posted on message boards.
Since it is grabbing a video frame, the first frame it grabs after starting up may have incorrect exposure (often black or close to it). I believe this is because the camera is being used as a video camera: it adjusts exposure as it goes and doesn't care about the first frames. This also trapped me and others who saw a first frame that was black or nearly black and thought it was some kind of error; later frames have the correct exposure.
It turns out that OpenCV with the Python wrappers works fine with this camera if you avoid the land mines listed above and ignore all the error messages. The error messages are due to the fact that while the camera accepts v4l2 commands, it doesn't respond correctly. So if you set the width, it actually gets set correctly, but the camera responds with an incorrect width.
To run it under OpenCV with the Python wrappers you can do the following:
import cv2

cap = cv2.VideoCapture(0)  # ignore the v4l2 error messages
cap.set(3, 960)   # set the width: important, because the 640x480 default times out
cap.set(4, 544)   # set the height: ignore the errors / False return value
for _ in range(5):
    cap.read()    # discard a few warm-up frames while the exposure settles
r, frame = cap.read()
if r:
    cv2.imwrite("test.jpg", frame)
cap.release()
Download and install mplayer, then run:
mplayer -vo png -frames 1 tv://
This might give a green-screen output, as the camera is not yet ready on the first frame. In that case grab more frames:
mplayer -vo png -frames 2 tv://
You can keep increasing the number of frames until the camera gives correct images.
What about this program?
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture webcam;
    webcam.open(0);                  // open the default camera
    Mat frame;
    char key;
    while (true)
    {
        webcam >> frame;             // grab the next frame from the video stream
        imshow("My Webcam", frame);
        key = waitKey(10);
        if (key == 's')              // press 's' to save the current frame and quit
            break;
    }
    imwrite("webcam_capture.jpg", frame);
    webcam.release();
    return 0;
}
This will capture a picture at the maximum size allowed by your webcam. You can then add effects to or resize the captured image with Qt. And OpenCV is very easy to integrate with Qt. :)

DirectShow continuous capture

I have an MP4 capture application in DirectShow. In my application I need to capture 30 minutes (or some dynamic value) of video continuously.
For that I made a waitable-timer routine: after 30 minutes I want to stop the capture and then start it again... but after I stop the capture, the stream does not get added to the file on the next start. To start the next capture I have to release all the capture variables, get the devices again, and rebuild the graph; only then can I start capturing.
Can't I simply stop the capture, rename the output file, and start capturing again? Is anything additional needed to do that?
Please help me with this.
Thanks
Edit :
Below is the graph I use for recording:
Video Source --> x264vfw H.264/MPEG-4 AVC Codec ---------> GDCL MPEG-4 Multiplexer --> File Writer
                                                               ^
Audio Source --> ACM Wrapper --> Monogram AAC Encoder ---------|
We do something similar to capture DV-AVIs. Have you tried to:
stop the graph
remove File-Writer
create new File-Writer (and configure)
connect File-Writer to mux
and start again
If this does not work, then there is something wrong with the muxer or one of the other filters. You can easily test this: just replace the muxer with an audio renderer and a video renderer and try play, stop, play.
You could also just try another MP4 mux filter, like the Monogram MP4 mux.

Determine whether an audio file is encoded in Apple Lossless (ALAC)

I have a number of audio files with the .m4a suffix, each encoded either in AAC or in Apple Lossless (ALAC), and I want to pick out only the ones encoded in Apple Lossless. Is there any way to determine this? I tried FFmpeg, but it says all of them are encoded in AAC.
Edit: I am currently on Windows.
If you have the FFmpeg package, you should have ffprobe.
Give this a try:
ffprobe -v error -select_streams a:0 -show_entries stream=codec_name -of default=noprint_wrappers=1:nokey=1 file.m4a
-v error: to hide the startup text
-select_streams a:0: to select the first audio track
-show_entries stream=codec_name: to display only the codec type
-of default=noprint_wrappers=1:nokey=1: to remove extra formatting
This will print out just aac or alac. Perfect for scripting.
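Since the output is a single token, it drops straight into a script. Here is a rough Python sketch of that (the folder layout is an assumption; it requires ffprobe on the PATH):
import pathlib
import subprocess

def codec_of(path):
    # Run the ffprobe command from above; it prints just the codec name.
    result = subprocess.run(
        ['ffprobe', '-v', 'error', '-select_streams', 'a:0',
         '-show_entries', 'stream=codec_name',
         '-of', 'default=noprint_wrappers=1:nokey=1', str(path)],
        capture_output=True, text=True)
    return result.stdout.strip()

for f in pathlib.Path('.').glob('*.m4a'):
    if codec_of(f) == 'alac':
        print(f)  # keep only the Apple Lossless files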
Here is a file that has a description of M4A (the best I could find so far) on page 67:
http://iweb.dl.sourceforge.net/project/audiotools/audio%20formats%20reference/2.14/audioformats_2.14_letter.pdf
A typical M4A begins with an 'ftyp' atom indicating its file type...
10.2.1 The ftyp atom
[0-31]    ftyp Length         [32-63]   'ftyp' (0x66747970)
[64-95]   Major Brand         [96-127]  Major Brand Version
[128-159] Compatible Brand 1  ...
The 'Major Brand' and 'Compatible Brand' fields are ASCII strings. 'Major Brand Version' is an integer.
At first I figured 'ftyp' would be where the format is determined, but judging by this list, that is more like the file type itself (already known to be M4A):
http://www.ftyps.com/index.html
http://www.ftyps.com/what.html describes a bit more of the format.
If ftyp doesn't differentiate, then I think the 'Major Brand' field might refer to the FourCCs on this page:
http://wiki.multimedia.cx/index.php?title=QuickTime_container
The one for Apple Lossless is 'alac', and AAC is probably 'mp4a'.
Apple's Lossless-format open-source page indicates that the ftyp is 'alac' (slightly contradicting the above):
http://alac.macosforge.org/trac/browser/trunk/ALACMagicCookieDescription.txt
So far what I can tell is that the 4 bytes following ftyp are always (in a smallish sample) 'M4A '.
Somewhere in the first ~200 (hex) bytes or so there is an ASCII 'mp4a' for AAC compression or an 'alac' for Apple Lossless. The 'alac' always seems to come in pairs ~30 bytes apart ('mp4a' only once).
Sorry that's not more specific; if I find the exact location or prefix I'll update again. (My guess is the earlier part of the header has a size specified somewhere.)
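To make the byte-scanning idea above concrete, here is a rough Python sketch of that heuristic. It simply searches the start of the file for the 'alac' or 'mp4a' tags described above; it is not a proper atom parser, and the 64 KB search window is an assumption:
def guess_m4a_codec(path, search_bytes=64 * 1024):
    # Heuristic from above: ALAC files carry 'alac' tags near the start,
    # AAC files carry 'mp4a'. Check 'alac' first since it is the distinctive tag.
    with open(path, 'rb') as f:
        head = f.read(search_bytes)
    if b'alac' in head:
        return 'alac'
    if b'mp4a' in head:
        return 'aac'
    return 'unknown'

print(guess_m4a_codec('file.m4a'))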
You can do it with Core Audio.
Something like:
#include <AudioToolbox/AudioToolbox.h>

CFStringRef pathToFile; // set this to the path of the file to check
CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, pathToFile,
                                                      kCFURLPOSIXPathStyle, false);

ExtAudioFileRef inputFile;
ExtAudioFileOpenURL(inputFileURL, &inputFile);

AudioStreamBasicDescription fileDescription;
UInt32 propertySize = sizeof(fileDescription);
ExtAudioFileGetProperty(inputFile,
                        kExtAudioFileProperty_FileDataFormat,
                        &propertySize,
                        &fileDescription);

if (fileDescription.mFormatID == kAudioFormatAppleLossless) {
    // the file is Apple Lossless
}
On a Mac, you select the file you want, right-click it, and click "Get Info"; a window pops up with extra information about the file. Next to "Codecs:" it should say either "AAC" or "Apple Lossless".
I hope this helps the Mac users out there who had the same question (and possibly Windows users in some way, even though I am not familiar with that OS).
Try using http://sourceforge.net/projects/mediainfo/
"MediaInfo is a convenient unified display of the most relevant technical and tag data for video and audio files." - SourceForge project description
This is how the info is displayed:
General
Complete name : C:\Downloads\recit24bit.m4a
Format : MPEG-4
Format profile : Apple audio with iTunes info
Codec ID : M4A
File size : 2.62 MiB
Duration : 9s 9ms
Overall bit rate : 2 441 Kbps
Track name : 24 bit recital ALAC Test File
Performer : N\A
Comment : Test File
Audio
ID : 1
Format : ALAC
Codec ID : alac
Codec ID/Info : Apple Lossless Format
Duration : 9s 9ms
Bit rate mode : Variable
Bit rate : 2 438 Kbps
Channel(s) : 2 channels
Sampling rate : 22.7 KHz
Bit depth : 24 bits
Stream size : 2.62 MiB (100%)
Language : English
Check the Audio section for the codec/encoding details.
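MediaInfo also ships a command-line version; assuming it is installed and on your PATH, something like the following should print just the audio format (ALAC or AAC), which is handy for filtering many files:
mediainfo --Inform="Audio;%Format%" file.m4a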
