A similar question has already been asked for the .NET platform but I am on Debian Linux.
I am trying to find a solution for burning a video DVD directly from a camera attached to a capture card. The card outputs an MPEG-2 stream and I want to write it directly to a DVD disc without creating any intermediate files.
The reason is so that when the recording is finished, the DVD can be very quickly finalized and ejected.
I have been looking at command-line tools like cdrecord and dvdauthor, but I don't think they can do this. Any suggestions?
As in a data DVD or a video DVD?
A video DVD might need some work; a data DVD, however, can easily be done by piping the output of mkisofs to growisofs.
See man growisofs and man mkisofs.
edit:
mkisofs -r /media/cam/ | growisofs -Z /dev/dvd=/dev/fd/0
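A minimal sketch of the whole one-pass burn, under the assumption that /media/cam/ is where the capture lands and /dev/dvd is the burner (the volume label CAM_CAPTURE is just an example). growisofs can read a premastered image from stdin via /dev/fd/0, and -dvd-compat closes the disc as part of the burn, which is what makes the finalize-and-eject step quick:

```shell
# Pipe the ISO image straight into the burner; no intermediate file on disk.
mkisofs -r -V CAM_CAPTURE /media/cam/ | growisofs -dvd-compat -Z /dev/dvd=/dev/fd/0
# The session is already closed, so the disc can be ejected immediately.
eject /dev/dvd
```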
I've been able to stream audio from an input device on Windows to a Linux machine using LineInCode, plink (PuTTY) and PulseAudio, but unfortunately LineInCode doesn't offer an option to choose the Windows output device, so I decided to make a program that does.
A program developed by Matthew van Eerde already does most of the work. You can select an output device and record a WAV file. So instead of writing to a file, I should send the data to stdout and let plink and pacat do the rest. The audio recorded with his program is of type WAVE_FORMAT_EXTENSIBLE (SubFormat), and it needs to be streamed to pacat as PCM. So my question is: how do I convert from the SubFormat to a PCM audio format?
Here's the command with linco:
linco.exe -B 16 -C 2 -R 44100 | plink -v 192.168.11.5 -l armbian -pw 1234 "cat - | pacat --playback"
PS: I've tried to be as concise as I could; sorry for the long post. If you have an idea of how to shorten it, please let me know.
Here's the project link: https://github.com/rsegecin/WLStream
The format recorded from the Windows output device is 32-bit little-endian floating-point PCM, so pacat needs to be configured accordingly to accept it. I posted the project on GitHub. I also needed to set the output stream to binary mode and use fwrite, because printf wasn't keeping up with the data rate.
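For reference, a hedged sketch of what the full pipeline could look like, assuming the WLStream binary writes the raw float capture to stdout and that the sample rate and channel count below match what it records; pacat accepts raw samples when told the format explicitly:

```shell
WLStream.exe | plink -v 192.168.11.5 -l armbian -pw 1234 "pacat --playback --format=float32le --rate=44100 --channels=2"
```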
See you there.
I was wondering how it is possible to capture video from a USB camera connected to my Linux machine using C++ and the terminal alone, or perhaps a bash script. I can see the terminal, but I don't think an echo would provide me with video or frames. Help would be extremely appreciated.
Thank you
Take a look at this page. v4l2grab is a program that reads raw images from a V4L2 device and converts them to JPEG, and it runs in a terminal.
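If you'd rather not write the V4L2 code yourself, ffmpeg can do the same capture from a terminal. This is a sketch; the device path, frame rate, and size are examples to adjust for your camera:

```shell
# Record ten seconds from the first USB camera into an MKV file.
ffmpeg -f v4l2 -framerate 25 -video_size 640x480 -i /dev/video0 -t 10 capture.mkv
```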
I'm working on an audio recognition project.
For testing, I'd like to be able to have a program:
load audio data from a file
provide it to the Linux kernel, as if it were coming from a microphone
have any user-space program that samples the microphone obtain data sourced from my file.
Is that possible in Linux without having to write a new kernel module?
EDIT: I guess that solution won't work... but see my comment below.
This should be simple under Linux.
Here are the steps:
make a named pipe with mkfifo (mkfifo ~/audio_out.pipe)
cat the audiofile into this pipe (cat test.wav > ~/audio_out.pipe)
point the program you want to listen with at this pipe as its input. You may have to make a symlink for programs that aren't flexible enough to read from an arbitrary device.
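The steps above can be sketched end to end. This uses a stand-in file so it runs without real audio, and a plain cat as the "listening" program; the paths are just examples:

```shell
# Re-create the pipe cleanly, then push a stand-in "audio file" through it.
rm -f /tmp/audio_out.pipe
mkfifo /tmp/audio_out.pipe

# Writer: blocks until a reader opens the other end of the pipe.
printf 'stand-in audio bytes' > /tmp/audio_out.pipe &

# Reader: whatever program should "hear" the audio; here it just saves to a file.
cat /tmp/audio_out.pipe > /tmp/received.bin
wait
```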
I hope I got your question right.
I am working on an embedded Linux device that will read video, process and modify every frame, and then output a USB video stream. I don't know how to make USB video from a sequence of frames. Can someone point me to where to start?
Take a look at http://electron.mit.edu/~gsteele/ffmpeg/
It shows you how to make a video from a sequence of images using ffmpeg and mencoder.
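As a concrete example, an ffmpeg invocation along these lines turns a numbered frame sequence into a video; the filenames, frame rate, and codec here are assumptions, not a definitive recipe:

```shell
# frame_0001.jpg, frame_0002.jpg, ... -> out.mp4 at 25 fps
ffmpeg -framerate 25 -i frame_%04d.jpg -c:v libx264 -pix_fmt yuv420p out.mp4
```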
Yes, take a look at OpenCV.
There is lots of code around to show you how to use the library. For instance, take a look at: OpenCV: process every frame
How can the Linux frame buffer, on Cell Linux, be captured to obtain either screenshots or movies?
Is there a tool to do this for a running program, or must the program writing to, and presumably controlling, the frame buffer also handle capture and recording? If so, how would the program do so?
There are many tools for doing so, for example FBGrab and fbdump. Look at the sources for those two; it would be pretty easy to extend either one, or to write your own, to capture video instead of just snapshots.
However, I would recommend that the program writing to the framebuffer also be the one recording, so that frame capture can be synchronized with frame writes (and doesn't capture partway through a write, skip frames, etc.).
You could use ffmpeg or avconv (e.g. avconv -f fbdev -i /dev/fb0 mymovie.flv).
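For single screenshots rather than movies, the same fbdev input works with a frame limit. This is an untested sketch; the framebuffer device node may differ on Cell Linux:

```shell
# Grab exactly one frame from the framebuffer and save it as a PNG.
ffmpeg -f fbdev -i /dev/fb0 -frames:v 1 screenshot.png
```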