Record video in background with mencoder - linux

I have a USB TV stick, a Sundtek MediaTV Pro III, which has an analog input.
With the following command, recording works perfectly:
mencoder tv:// -tv driver=v4l2:width=720:height=576:outfmt=uyvy:device=/dev/video0:input=1:fps=25:adevice=/dev/dsp0:audiorate=48000:amode=1:forceaudio:immediatemode=0 -ffourcc DX50 -ovc lavc -lavcopts vcodec=mpeg4:mbd=2:turbo:vbitrate=1200:keyint=15 -oac mp3lame -noskip -o video1.avi
The only problem I have is that I can hear the sound while recording.
This is kind of annoying because I want to be able to watch a movie (a file, not with the USB stick) while I am recording the analog TV stream.
How can I record without hearing the sound?

Try this:
/opt/bin/mediaclient -c external -d /dev/video0
This tells the driver not to play back audio through the speakers.
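To put the two pieces together: a minimal sketch (assuming the same device paths as above) that disables the audio monitoring and then runs the recording in the background, so you can watch something else in the meantime:
# tell the driver not to route the stick's audio to the speakers
/opt/bin/mediaclient -c external -d /dev/video0
# start the recording detached from the terminal; messages go to record.log
nohup mencoder tv:// -tv driver=v4l2:width=720:height=576:outfmt=uyvy:device=/dev/video0:input=1:fps=25:adevice=/dev/dsp0:audiorate=48000:amode=1:forceaudio:immediatemode=0 -ffourcc DX50 -ovc lavc -lavcopts vcodec=mpeg4:mbd=2:turbo:vbitrate=1200:keyint=15 -oac mp3lame -noskip -o video1.avi < /dev/null > record.log 2>&1 &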

Related

Play video file and audio file simultaneously from Linux command line

I would like to play a separate video stream and audio stream simultaneously from the Linux command line, using e.g. cvlc or mpv.
More specifically, I would like to play a YouTube video in a high-quality format, using youtube-dl along with a player.
More details:
I am using this command to play back a YouTube video on my PC:
youtube-dl -i <youtube.com/url> -o - | mpv -
Let's say the following formats are available for a YouTube video:
249 webm audio only tiny 62k , opus # 50k (48000Hz), 14.14MiB
251 webm audio only tiny 158k , opus #160k (48000Hz), 35.68MiB
303 webm 1920x1080 1080p60 4429k , vp9, 60fps, video only, 536.78MiB
299 mp4 1920x1080 1080p60 6901k , avc1.64002a, 60fps, video only, 884.09MiB
22 mp4 1280x720 720p 1339k , avc1.64001F, 30fps, mp4a.40.2#192k (44100Hz) (best)
youtube-dl would automatically choose the last entry of this list, as it is a format that includes video and audio in one file.
Is there a way I can play the formats 303 and 251 on my pc?
If I wanted to download them, I would use:
youtube-dl -i <youtube.com/url> -f 303+bestaudio
What youtube-dl does in this case is download the video and the audio file separately and merge them into one file using ffmpeg.
But I can't figure out whether it is possible to play back both streams without first downloading them to a file.
Alright, I think I figured out a solution.
The command I use is as follows:
ffmpeg -loglevel quiet -i $(youtube-dl -g youtube.com/url -f 303) -i $(youtube-dl -g youtube.com/url -f bestaudio) -f matroska -c copy - | mpv -
The youtube-dl -g option just returns the URL of the video or audio stream.
In this case the URLs are passed to ffmpeg, which does the merging.
-f matroska tells ffmpeg to use the mkv container format.
-c copy says that no re-encoding should be done.
Edit:
For some reason, on my system the terminal input is broken after ffmpeg exits. For now I work around this by typing reset, until I find a better solution to this issue.
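For convenience, here is a small bash wrapper sketch (the function name and the default formats are my own choices, not part of the original commands) that runs the pipeline and resets the terminal afterwards:
# play a YouTube video with separately selected video/audio formats
# usage: ytplay <url> [video-format] [audio-format]
ytplay() {
    local url="$1" vfmt="${2:-303}" afmt="${3:-bestaudio}"
    ffmpeg -loglevel quiet \
        -i "$(youtube-dl -g "$url" -f "$vfmt")" \
        -i "$(youtube-dl -g "$url" -f "$afmt")" \
        -f matroska -c copy - | mpv -
    reset  # work around the broken terminal input after ffmpeg exits
}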

How to improve my current method of generating an audio transcript from an online video

I am working on a project that needs to download and analyze online videos. Most of the videos are hosted on YouTube but some are hosted on Facebook.
My task is to generate an audio transcript for every video.
My colleague was running a series of programs sequentially, given some {link}:
youtube-dl -f '(mp4)[height = 360][width = 640]' {link} -o '{out_1}.%(ext)s'
ffmpeg -i {out_1} -vn {out_2}.wav
sox {out_2} {out_3} channels 1 rate 16000
pocketsphinx_continuous -infile {out_3} -samprate 16000 -hmm {ACOUSTICMODEL} -dict {DICTIONARY} -lm {LANGMODEL} -fwdflat yes -bestpath yes 2> error.log | tee {out_4}.log && rm error.log
Note that there's an extra ffmpeg step to extract the audio, instead of simply downloading audio directly with youtube-dl, because the video is needed as well.
Everything works correctly as far as I can tell, but I've never really dealt with audio before so I'm not sure if this is the best way to go about it.
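For reference, here is the same pipeline as a single bash sketch (the concrete file names stand in for {out_1}..{out_4} and are my own):
#!/bin/bash
# usage: ./transcribe.sh <video-url>
link="$1"
youtube-dl -f '(mp4)[height = 360][width = 640]' "$link" -o 'video.%(ext)s'
ffmpeg -i video.mp4 -vn audio.wav                 # extract the audio track
sox audio.wav audio16k.wav channels 1 rate 16000  # downmix to mono, resample to 16 kHz
pocketsphinx_continuous -infile audio16k.wav -samprate 16000 \
    -hmm "$ACOUSTICMODEL" -dict "$DICTIONARY" -lm "$LANGMODEL" \
    -fwdflat yes -bestpath yes 2> error.log | tee transcript.log && rm error.log
As a side note, ffmpeg can do the downmix and resampling itself (-vn -ac 1 -ar 16000), which would fold the sox step into the ffmpeg step.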

No data written to stdin or stderr from ffmpeg

I have a dummy client that is supposed to simulate a video recorder; on this client I want to simulate a video stream. I have gotten as far as creating a video from bitmap images that I generate in code.
The dummy client is a Node.js application running on a Raspberry Pi 3 with the latest version of Raspbian Lite.
In order to use the video I have created, I need to get ffmpeg to dump the video to pipe:1. The problem is that I need -f rawvideo as an input parameter, or else ffmpeg can't understand my video; but when I have that parameter set, ffmpeg refuses to write anything to stdout.
ffmpeg is running with these parameters:
ffmpeg -r 15 -f rawvideo -s 3840x2160 -pixel_format rgba -i pipe:0 -r 15 -vcodec h264 pipe:1
Can anybody help with a solution to my problem?
Edit:
Maybe I should explain a bit more.
The system I am creating is set up so that, instead of my streaming server asking the video recorder for a video stream, it is the recorder that tells the server that there is a stream.
I have solved my problem on my own. (-:
I now have two solutions:
1. Change -f rawvideo to -f data; that works for me anyway.
2. Encode my bitmaps as JPEG in code and pipe the JPEG images to stdin. This also requires changing the ffmpeg parameters to -r 4 -f mjpeg -i pipe:0 -r 4 -vcodec copy -f mjpeg pipe:1, and it is by far the slowest thing I have ever done; I also can't use a 4K input.
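For anyone hitting the same wall: when ffmpeg writes to a pipe it cannot infer the container from a file name, so the output usually needs an explicit -f as well. A sketch of the original command with a raw H.264 output format added (an assumption on my part, not something I have verified on this setup):
ffmpeg -r 15 -f rawvideo -s 3840x2160 -pixel_format rgba -i pipe:0 -r 15 -vcodec h264 -f h264 pipe:1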
Thanks @Mulvya for trying to help.
@eFox, thanks for editing my spelling and grammar mistakes.

Piping output from aplay to arecord in CentOS

I am trying to automate some tests for a WebSocket client. This client connects to a server on command, and the server is basically a speech-to-text engine. The client supports audio streaming from a microphone, so that people can record themselves in real time and transmit the audio to the engine. I am running the client in a CentOS VM which does not have a physical sound card, so I decided to simulate one using
modprobe snd-dummy
My plan is to pipe the output of
aplay audioFile.raw
to the input of
arecord test.raw -r 8000 -t raw
so that I can use it to simulate the microphone feature. I read online that the file plugin for ALSA can pipe the results of one command to the next, so I made the following modifications to the .asoundrc file in my root directory:
pcm.!default {
    type hw
    card 0
}
pcm.Ted {
    type file
    slave mySlave
    file "| arecord test.raw -r 8000 -t raw"
}
pcm_slave.mySlave {
    pcm "hw:0,0"
}
ctl.!default {
    type hw
    card 0
}
When I try the following command:
aplay audioFile.raw -D Ted
It seems to run fine, but test.raw seems to contain only silence... Does anyone know what I am doing wrong? I am very new to ALSA, so if anyone can point me in the right direction it would be greatly appreciated. Thanks!
Issue fixed: instead of using snd-dummy I used snd-aloop, and the audio now pipes correctly. Refer to this question:
Is it possible to arecord output from dummy card?
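For completeness, a sketch of the snd-aloop route (this relies on the standard loopback layout, where audio played on device 0 shows up for capture on device 1):
modprobe snd-aloop
# play into one end of the loopback...
aplay -D hw:Loopback,0,0 audioFile.raw &
# ...and record from the other end
arecord -D hw:Loopback,1,0 test.raw -r 8000 -t raw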

ffmpeg: get all sound devices (input/output)

I have downloaded the static build of ffmpeg for Windows and am trying to list all my sound devices (input/output). I googled and found a command to retrieve audio devices, but when I use it (ffmpeg arecord -l), it shows this error:
Unrecognized option 'l'.
Error splitting the argument list: Option not found
What am I missing here?
arecord is the command-line sound recorder and player for the ALSA sound card driver, which is only available on Linux.
On Windows you can list the dshow devices with:
ffmpeg -list_devices true -f dshow -i dummy
See the Windows section of https://trac.ffmpeg.org/wiki/Capture/Desktop
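Once you have the device names, you can capture from one of them, for example (the device name here is hypothetical; substitute one from the list):
ffmpeg -f dshow -i audio="Microphone (Realtek High Definition Audio)" output.wav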
