My goal is to stream a USB-Webcam from a Raspberry using VLC. The generated stream should be able to be shown using simple HTML visible on the most popular browsers.
So I use a simple <video> element in my HTML:
<video id="video" src="http://quarkcam:8080" autoplay="true" width="800" height="600" controls></video>
The VLC command to create the stream is the following (using Ogg, which seemed to be the right choice for compatibility; feel free to correct me on this):
cvlc v4l2:///dev/video0 :v4l2-standard= :v4l2-width=800 :v4l2-height=600 :live-caching=100 :sout="#transcode{vcodec=theo,vb=800,acodec=vorb,ab=128,channels=2,samplerate=4410,scodec=none,fps=15}:http{mux=ogg,dst=:8080/}" :no-sout-all :sout-keep
While this technically works, I have to reduce the resolution to 800x600 and the framerate to 15 fps on a Raspberry Pi 4 to avoid constant buffering (theoretical maximum from the webcam: 30 fps at 1600x1200).
Are there better options for vcodec that would produce a stream that better fits the Pi's hardware capabilities and can STILL be simply included in an HTML page? I do NOT need to get the maximum out of the hardware, but I would at least like a stable 30 fps stream.
The Raspberry Pi 4, while much more powerful than previous iterations, is still not capable of transcoding video and streaming it at the same time, at least on the default Pi OS.
I suggest you use MotionEyeOS - https://github.com/motioneye-project/motioneyeos. It is supposed to work on the Raspberry Pi 4 with Raspbian OS kernel 4.19 (raspbian).
https://github.com/motioneye-project/motioneyeos/wiki/Installation
A well-documented tutorial is given here: https://randomnerdtutorials.com/install-motioneyeos-on-raspberry-pi-surveillance-camera-system/
Thank You,
Have a Great Day
Naveen
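Another direction worth trying, as an untested sketch (exact option names may need adjusting for your VLC build): many UVC webcams can deliver MJPEG directly, so you could skip transcoding on the Pi entirely and let VLC's multipart-JPEG mux pass the frames through. Note that this drops the audio.
cvlc v4l2:///dev/video0 :v4l2-width=1600 :v4l2-height=1200 :v4l2-fps=30 :v4l2-chroma=MJPG :sout="#standard{access=http,mux=mpjpeg,dst=:8080/}"
On the HTML side an MJPEG stream is embedded with an <img> tag rather than <video>, for example <img src="http://quarkcam:8080" width="800" height="600">. If the camera cannot output MJPG natively, putting a #transcode{vcodec=MJPG} stage in front of the standard module should still be considerably lighter than encoding Theora.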
Related
We have a setup with a Windows 7 machine where we installed Dante Virtual Soundcard and start that soundcard with ASIO capabilities. The soundcard will receive audio over the network from a Tesira server. We want to capture the audio to files (strongly preferring each channel in a separate file). The files will be played back at a later moment. There will likely be 6 channels or more.
In the same setup we use ffmpeg to capture some video, which is working fine with DirectShow. So for audio we wanted to use the same setup, since ffmpeg is able to record audio as well. However, there seems to be no option to select the ASIO devices which the virtual soundcard presumably creates. So the question is what command line to use for ffmpeg, or what to install? Or which other program can record ASIO from the command line?
I already tried installing:
Asio4all (actually wrong way around)
sox (don't know why actually)
HiFi Cable Asio Bridge (from VB-audio, not enough channels even with donate version)
Voicemeeter (from VB-Audio, not enough channels and actually mixes down)
O Deus ASIO Link: this might be an interesting option, but it did not let me configure any routes. Any suggestions?
One thing I noticed is that the virtual soundcard can also be set to use WDM. Then I can see the devices with ffmpeg -list_devices true -f dshow -i dummy, but recording does not yield any result: I have to press Ctrl-C to make it stop (instead of q), and the file is zero bytes. Supposedly this is because the data over the network is all ASIO-formatted and the Tesira server cannot send "WDM data". FFmpeg stops at selecting the capture pin for audio only.
EDIT:
I ran ffmpeg with high verbosity and when selecting the WDM soundcard it stops at "Selecting pin Capture on audio only". Also, when requesting the options it gives the same line 22 times: min ch=1 bits=8 rate= 11025 max ch=2 bits=16 rate= 44100
You might use Voicemeeter instead of HiFi Cable / ASIO Bridge. Voicemeeter is a virtual audio device mixer able to connect everything together: any audio point, in any interface, and any app (including ASIO DAWs)... Download & user manual at www.voicemeeter.com
To answer my own question: it is not possible to capture sound from an ASIO device with ffmpeg. Maybe I will write the code for it if I need it...
I could, however, solve my issues by separating the two streams of audio data we have (AVB and Dante). These were on the same switch; maybe it is a bug in the firmware, maybe a misconfiguration.
Thanks for your help!
How do I get the output from an ASIO device to IceCast2 or FFMpeg?
Duplicate?
And if not, post the output of ffmpeg -f dshow -list_options true -i audio="your_device_name_in_dshow"
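If the virtual soundcard does enumerate there, a recording attempt would look roughly like this (the device name is a placeholder; and, as the thread concludes, this only works for the WDM path, not for ASIO):
ffmpeg -f dshow -i audio="Dante Virtual Soundcard" capture.wav
Splitting a multichannel recording into one file per channel can then be done afterwards, for example with ffmpeg's channelsplit audio filter.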
For a computer vision project that I am working on I need to grab images using a Logitech C920 webcam. I am using OpenCV's VideoCapture to do that, but the problem I am facing is that the image I take at a certain moment does not show the latest thing the camera sees. That is, if I take an image at timestamp t, it shows what the camera saw at timestamp (t - delta), so to speak.
I did this by writing a program that increments a counter and shows it on the screen. I pointed the camera at the screen and let it record. When the counter reached a certain value, say 10000, it would grab an image and save it with the filename "counter_value.png" (e.g. 10000.png). That way I was able to compare the current value of the counter with the current value seen by the camera. I noticed that most of the time the delay is about 4-5 frames, but it is not a fixed value.
I saw similar posts about this issue, but none of them really helped. Some people recommended putting the frame-grabbing routine into a separate thread and updating a "current_frame" Mat variable. I tried that, but for some reason the issue is still present. Someone else mentioned that the camera worked well on Windows (but I need to use Linux). I tried running the same code on Windows and indeed the delay was only about 1 frame (which might as well be because the camera did not see the counter since the screen did not update fast enough).
I then decided to run a simple webcam viewer based only on V4L2 code, thinking that the issue might be coming from OpenCV. I again experienced the same delay, which makes me believe that the driver is using some sort of buffer to cache the images.
I am new to V4L2 and I really need to solve this problem as soon as possible, so my questions to you guys are:
Has anyone found a solution for getting the latest image using V4L2 (and maybe OpenCV)?
If there is no way to solve it using V4L2, does anyone know another alternative to fixing this issue on Linux?
Regards,
Mihai
It looks like there will always be a delay between the VideoCapture::grab() call and the moment the frame is actually taken. This is because of frame buffering done at the hardware/OS level, and you cannot avoid it.
OpenCV provides the VideoCapture::get(CV_CAP_PROP_POS_MSEC) method to give you the exact time a frame was captured, but this is only possible if the camera supports it.
Recently a problem was discovered in the V4L OpenCV implementation:
http://answers.opencv.org/question/61099/is-it-possible-to-get-frame-timestamps-for-live-streaming-video-frames-on-linux/
And a few days ago a fix was merged:
https://github.com/Itseez/opencv/pull/3998
In the end, if you have the right setup, you can know the time at which a frame was taken (and therefore compensate).
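A minimal sketch of how that timestamp can be read (assuming a capture backend that actually reports it; otherwise get() simply returns 0):
#include <cstdio>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::VideoCapture cap(0);                          // default camera
    if (!cap.isOpened())
        return 1;
    cv::Mat frame;
    for (int i = 0; i < 10; ++i) {
        if (!cap.grab())                              // latch a frame as close to "now" as possible
            continue;
        double t_ms = cap.get(CV_CAP_PROP_POS_MSEC);  // capture time in ms, 0 if unsupported
        cap.retrieve(frame);                          // decode the latched frame
        std::printf("frame %d captured at %.1f ms\n", i, t_ms);
    }
    return 0;
}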
It is possible the problem is with the Linux UVC driver, but I have been using Microsoft LifeCam Cinemas for machine vision on Ubuntu 12.04 and 14.04 machines, and have not seen a 4-5 frame delay. I operate them in low light conditions, though, in which case they reduce the frame rate to 7.5 fps.
One other possible culprit is a delay in the webcam depending on which format is used. The C920 appears to support H.264 (which few webcams do), so Logitech may have put most of its effort into making that work well, yet OpenCV appears not to support H.264 on Linux; see this answer for the formats it does support. The same question also has an answer with a kernel hack(!) to fix an issue with the UVC driver.
PS: to check the format actually used in my case, I added
fprintf(stderr, ">>> palette: %d\n", capture->palette);
at this line in the OpenCV code.
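A less intrusive check (assuming the camera is /dev/video0 and v4l2-ctl from v4l-utils is installed) is to ask the driver which format is currently negotiated while the capture program is running:
v4l2-ctl --device=/dev/video0 --get-fmt-video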
I have a very complicated audio setup for a project. Here's what we have:
3 applications playing sound
2 applications recording sound
2 sound cards
I don't really have the code to any of these applications. All I want to do is monitor and control the audio streams. Here are a few examples of operations I'd like to do while the applications are running:
Mute one of the incoming audio streams.
Have one of the incoming audio streams do a "solo" (be the only stream that can "talk").
Get a graph (about 30 seconds worth) of the audio that each stream produced.
Send one of the audio streams to soundcard #1, but all three audio streams to soundcard #2.
I would likely switch audio streams every 2 minutes or so with one of the operations listed above. A GUI would be preferred. I started looking at the sound systems in Linux and it gets extremely complex; I feel like there have been many new advances in the past few years. I see jack, pulseaudio, artsd, and several other packages. They all show some promise, but where should I start? Is there something someone already built that can help?
PulseAudio should be able to let you do all that. You'll need to configure a custom pipeline for splitting the app's audio for task 4, and I'm not exactly certain how you'd accomplish task 3, but I do know that it's capable of all sorts of audio stream handling via its volume control (pavucontrol).
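For reference, the command-line equivalents of what pavucontrol does can be sketched with pactl; the stream index 3 and the sink names below are placeholders, and the first command shows the real ones on your system:
pactl list short sink-inputs
pactl set-sink-input-mute 3 1
pactl move-sink-input 3 alsa_output.card2.analog-stereo
pactl load-module module-combine-sink sink_name=both slaves=card1_sink,card2_sink
The first line lists the playback streams, the second mutes one of them, the third routes a stream to a particular card, and the combine-sink module creates a virtual sink that feeds both cards at once, which is one way to cover the "all three streams to soundcard #2" case.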
I use Jack, which is quite simple to install and use, even if it requires more effort to configure with Flash and Firefox...
You can try the latest Ubuntu Studio distribution and see if it solves your problem (for the GUI, look at "patchage").
I'm trying to get images from a Minoru 3D webcam, which is actually two Vimicro webcams plus a USB hub in a single package. The problem is, OpenCV always takes streams at maximum resolution, making simultaneous capture from the two webcams impossible (due to USB constraints). How do I set the resolution or FPS? For some reason, the OpenCV calls
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 320 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 240 );
don't work. I don't have to use OpenCV; any other library doing the same job is fine with me. The webcam uses the UVC driver from kernel 2.6.30, with v4l2. I tried the custom module here: http://linuxtv.org/hg/~pinchartl/uvcvideo on my Ubuntu box with a 2.6.27 kernel.
I used luvcview and v4l2cam for my purposes; the latter is specifically written for the Minoru.
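If you want to see which modes the cameras actually expose and pin them to a lower one before your program opens them, v4l2-ctl (from v4l-utils) can do that; whether OpenCV keeps the setting when it opens the device depends on the backend, so treat this as a sketch:
v4l2-ctl --device=/dev/video0 --list-formats-ext
v4l2-ctl --device=/dev/video0 --set-fmt-video=width=320,height=240,pixelformat=YUYV
v4l2-ctl --device=/dev/video1 --set-fmt-video=width=320,height=240,pixelformat=YUYV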
I have a capture card that captures SDI video with embedded audio. I have source code for a Linux driver, which I am trying to enhance to add video4linux2 support. My changes are based on the vivi example.
The problem I've come up against is that all the examples I can find deal with only video or only audio. Even on the client side, everything seems to assume v4l is just video, like ffmpeg's libavdevice.
Do I need to have my driver create two separate devices, a v4l2 device and an alsa device? It seems like this makes the job of keeping audio and video in sync much more difficult.
I would prefer some way for each buffer passed between the driver and the app (through v4l2's mmap interface) to contain a frame, plus some audio that matches up (with respect to time) with that frame.
Or perhaps have each buffer contain a flag indicating if it is a video frame, or a chunk of audio. Then the time stamps on the buffers could be used to sync things up.
But I don't see a way to do this with the V4L2 API spec, nor do I see any examples of v4l2-enabled apps (gstreamer, ffmpeg, transcode, etc) reading both audio and video from a single device.
Generally, the audio capture part of a device shows up as a separate device. It's usually a different physical device (possibly sharing a card), which makes sense. I'm not sure how much help that is, but it's how all of the software I'm familiar with works...
There are some spare or reserved fields in the v4l2 buffers that can be used to pass audio or other data from the driver to the calling application via pointers to mmapped buffers.
I modified the BT8x8 driver to use this approach to pass data from an A/D card synchronized to the video on Ubuntu 6.06.
It worked OK, but the effort of maintaining my modified driver caused me to abandon this approach.
If you are still interested I could dig out the details.
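For what it's worth, a rough sketch of what the application side of that trick can look like; using the reserved field this way is purely a private convention between a modified driver and its application, not something the V4L2 spec defines:
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Dequeue one video buffer and read back the driver-private hint
 * (here: an offset into a separately mmapped audio region) that a
 * modified driver stashed in the reserved field. */
static int dequeue_frame(int fd, unsigned int *audio_offset)
{
    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;

    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)
        return -1;

    *audio_offset = buf.reserved;    /* driver-private convention */
    /* buf.timestamp is what you would use to line the audio chunk
     * up with the video frame. */

    /* ... process the frame in the mmapped buffer buf.index, then requeue ... */
    return ioctl(fd, VIDIOC_QBUF, &buf);
}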
If you want your driver to play nicely with gstreamer etc., a separate audio device is generally what is expected.
On most of the cheap v4l2 capture cards the audio is only an analog pass-through with a volume control, requiring a jumper cable to capture the audio via the sound card's line input.