I tried live video streaming with NodeJS and the ffmpeg encoder. It works, but with a lag of around 2 seconds and some distortion as well. The lag does not matter, since there will always be some, but I need to eliminate the video distortion as much as possible. So what would be suitable bit rates, and is there a better encoder for this? ffmpeg encodes to mpegts, so is there a format preferable to mpegts? Please help.
My encoding command was:
ffmpeg -s 640x480 -f dshow -i video="HP HD Webcam":audio="Microphone (Realtek High Definition Audio)" -preset ultrafast -qp 0 -f mpegts -v:b 800 -r 100 http://localhost:8082/abc/640/480/
You didn't set a video codec, so it used mpeg2 (the default for mpegts). You want to use H264, so use -c:v libx264:
ffmpeg -s 640x480 -f dshow -i video="HP HD Webcam":audio="Microphone (Realtek High Definition Audio)" -c:v libx264 -preset ultrafast -qp 0 -f mpegts -v:b 800 -r 100 http://localhost:8082/abc/640/480/
And then it should be fine. In addition, the green boxes sound like bugs (overflows?), so perhaps file a bug about them on the ffmpeg bug tracker.
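As an aside, -v:b isn't a valid option (bitrate is set with -b:v, and a plain 800 would mean 800 bits per second), and any bitrate is ignored anyway because -qp 0 forces lossless encoding, whose very high bitrate is a likely cause of the distortion if the link can't keep up. A capped-bitrate variant along these lines might be a better starting point (a sketch only, untested with your devices):
ffmpeg -f dshow -s 640x480 -i video="HP HD Webcam":audio="Microphone (Realtek High Definition Audio)" -c:v libx264 -preset veryfast -tune zerolatency -b:v 800k -maxrate 800k -bufsize 1600k -r 30 -c:a aac -b:a 128k -f mpegts http://localhost:8082/abc/640/480/
-tune zerolatency disables x264's lookahead and B-frames, which also helps with the lag.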
I'm trying to calculate the audio + visual difference between a harshly compressed video file and one that hasn't been.
I'm using pipes because ultimately I want this to take its source from a camera stream.
I've managed to get the video results that I'm looking for, but I'm struggling with the audio.
I've added a line to invert the phase of the compressed audio, so that when they add up in the blend they should almost cancel each other out, but that doesn't happen.
ffmpeg -i input.avi -f avi -c:v libxvid -qscale:v 30 -c:a wmav1 - | \
ffmpeg -i - -f avi -af "aeval='-val(0)':c=same" - | \
ffmpeg -i input.avi -i - -filter_complex "blend=all_mode=difference" -c:v libx264 -crf 18 -f avi - | \
ffplay -
I can still hear all the audio, when what I should be hearing is solely compression artifacts. Thanks.
To preface, I'm not sure your method would identify audio compression 'artifacts'.
Your command doesn't perform any audio comparison, it only inverts a single channel. Also, the audio and video are compressed twice and the codecs the last ffmpeg command receives are the default AVI codecs of mpeg4 and mp3.
Use
ffmpeg -i input.avi -f matroska -c:v libxvid -qscale:v 30 -c:a wmav1 - |\
ffmpeg -i input.avi -i - -filter_complex "[0][1]blend=all_mode=difference;[1]aselect=gt(n\,0),asetpts=PTS-STARTPTS[1a];[0][1a]amerge,aeval=val(0)-val(1):c=mono" -c:v rawvideo -c:a pcm_s16le -f matroska - |\
ffplay -
I assume your audio is mono. If your audio has N channels, your aeval will need N expressions, where the Mth expression is val(M-1)-val(N+M-1).
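For example, if the audio were stereo (N=2), the merged stream would have 4 channels and the aeval part of the filtergraph would become something like:
[0][1a]amerge,aeval=val(0)-val(2)|val(1)-val(3):c=stereo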
I also trim out the first encoded audio frame in order to mitigate the encoder delay that Paul mentioned, and it seems to work here.
There might be some delay introduced with encoded audio samples. Also your command is incorrect.
I have a webcam and a separate mic. I want to record what is happening.
It almost works; however, the audio seems to play too quickly and parts are missing when it plays over the video.
This is the command I am currently using to get it partially working:
ffmpeg -thread_queue_size 1024 -f alsa -ac 1 -i plughw:1,0 -f video4linux2 -thread_queue_size 1024 -re -s 1280x720 -i /dev/video0 -r 25 -f avi -q:a 2 -acodec libmp3lame -ab 96k out.mp4
I have tried other arguments, but I am unsure whether the problem is the formats I am using or incorrect parameter settings.
Also, the next part would be how to stream it. Every time I try going through RTP it complains about multiple streams. I tried html (html://localhost:50000/live_feed) and rts (rts://localhost:5000) as well, but it didn't like the format.
Edit: I am running this on a Raspberry Pi 3.
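In case it helps with the streaming part: ffmpeg's plain rtp muxer only accepts one stream per output, which is most likely the "multiple streams" complaint. A common workaround is to wrap both audio and video in MPEG-TS over RTP; a rough sketch (untested on a Pi 3, and the encoder settings are just placeholders) would be:
ffmpeg -f alsa -ac 1 -thread_queue_size 1024 -i plughw:1,0 -f video4linux2 -thread_queue_size 1024 -s 1280x720 -i /dev/video0 -c:v libx264 -preset ultrafast -tune zerolatency -c:a aac -b:a 96k -f rtp_mpegts rtp://localhost:5000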
I used FFmpeg to generate a test clip with color bars and a tone. I also made a special filter to dump out the raw audio data to check it. I was surprised to find that there is significant noise riding on the audio tone after it has gone through the AAC codec. Is this expected? Is there a way to prevent it?
To make the test file I used:
ffmpeg -f lavfi -i "smptehdbars=duration=600:size=1280x720:rate=59.94" -qscale:v 1 -pix_fmt yuv420p smpte_r59_720.mp4
then
ffmpeg -i smpte_r59_720.mp4 -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=600" -qscale:v 1 -vcodec copy -c:a aac -b:a 192k -shortest -map 0:0 -map 1:0 smpte_r59_720T.mp4
and then
ffmpeg -i smpte_r59_720T.mp4 -y -map 0 -acodec aac -vcodec libx264 -crf 23 -bsf:v h264_mp4toannexb smpte_r59_720T.ts
(Trying to do this all in one step kept failing.)
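For reference, a single-step version along these lines may also work (untested; the h264_mp4toannexb filter shouldn't be needed when libx264 writes straight to MPEG-TS):
ffmpeg -f lavfi -i "smptehdbars=duration=600:size=1280x720:rate=59.94" -f lavfi -i "sine=frequency=1000:sample_rate=48000:duration=600" -pix_fmt yuv420p -c:v libx264 -crf 23 -c:a aac -b:a 192k -shortest smpte_r59_720T.ts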
Other variations on this have varying degrees of noise, sometimes above nominal amplitude and sometimes below.
After finding this problem I pulled a third-party test tone .WAV file with a 44.1 kHz sample rate from the web and checked it. The raw file is clean, but the encoded file I made from it has noise.
(Screenshot: noise on the TS file made from the SMPTE bars)
(Screenshot: clean audio from the MP4)
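If it helps to quantify the difference, the astats filter can print per-channel peak and RMS levels for both files, e.g.:
ffmpeg -i smpte_r59_720T.ts -map 0:a -af astats -f null -
ffmpeg -i smpte_r59_720T.mp4 -map 0:a -af astats -f null -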
I am using the following command to take an mp3 audio file and make a video out of it (by using a static jpg picture). My aim is to get an output file that is as small as possible while the audio still has acceptable quality.
frequency="11000"
bitrate="45000"
avconv -loop 1 -i a.jpg -i audio.mp3 -shortest -r 1 -metadata STEREO_MODE=mono -c:v libx264 -ar "$frequency" -b:a "$bitrate" -ac 0 result.mkv
My questions are:
1. How can I make the resulting file mono?
2. Is it possible to reduce the bitrate further? I would like to use values below 45000, too.
3. My aim is to get control of the parameters that influence the file size most significantly. Presently I know that the frequency is quite important. Are there any other parameters that would help me get a very small output file with still acceptable quality?
Thanks in advance.
Since you are encoding to a compressed audio codec, the sampling frequency doesn't directly affect the file size. However, a frequency of 11 kHz will reduce the quality of music.
Instead, I'd suggest
frequency="22050"
bitrate="48000"
ffmpeg -loop 1 -i a.jpg -i audio.mp3 -shortest -r 1 -c:v libx264 -crf 28 \
-ar "$frequency" -b:a "$bitrate" -ac 1 result.mkv
The CRF parameter controls video quality: lower values produce better quality but larger files. You'll get more savings from controlling that than from the audio bitrate, which is at the lower end anyway.
If your build has libfdk_aac included, you can instead use
frequency="22050"
bitrate="32000"
ffmpeg -loop 1 -i a.jpg -i audio.mp3 -shortest -r 1 -c:v libx264 -crf 28 \
-ar "$frequency" -c:a libfdk_aac -profile:a aac_he_v2 -b:a "$bitrate" -ac 1 result.mkv
I'm trying to compare latency between different video codecs using ffmpeg and mplayer's benchmark.
I am using this command line to generate and send the stream:
ffmpeg -s 1280x720 -r 100 -f x11grab -i :0.0 -vcodec mpeg2video -b:v 8000 -f mpegts udp://localhost:4242
And I'm successfully using ffplay to receive and read it in real time:
ffplay -an -sn -fflags nobuffer -i udp://localhost:4242?listen
Now, instead of playing the stream with ffplay, I'd like to use the mplayer benchmark to get some information on the latency:
mplayer -msglevel all=6 -benchmark udp://localhost:4242
But I get this output instead:
Playing udp://localhost:4242.
get_path('sub/') -> '/home/XXXXX/.mplayer/sub/'
STREAM_UDP, URL: udp://localhost:4242
Filename for url is now udp://localhost:4242
Listening for traffic on localhost:4242 ...
Timeout! No data from host localhost
udp_streaming_start failed
No stream found to handle url udp://localhost:4242
I tried with the RTP protocol instead; that didn't work either.
Does anyone have an idea what I'm doing wrong?
Thanks for the answers,
I actually tried a lot of different codecs, especially vp9, h264 and mpeg2, but the best low latency I got was with mpeg2video. Here are 3 of the command lines I used. I read the ffmpeg streaming guide and the different codecs' encoding guides to try to get the best parameters for each of them, but the difference is noticeable:
ffmpeg -an -sn -s 1280x720 -r 30 -f x11grab -i :0.0 -vcodec libx264 -crf 18 -tune zerolatency -preset ultrafast -pix_fmt yuv420p -profile:v baseline -b:v 8000 -f mpegts -threads 4 udp://127.0.0.1:4242
ffmpeg -s 1280x720 -r 30 -f x11grab -i :0.0 -vcodec mpeg2video -b:v 800k -f mpegts -threads 8 udp://127.0.0.1:4242
ffmpeg -t 5 -s 1280x720 -r 30 -f x11grab -i :0.0 -vcodec libvpx-vp9 -an -crf 18 -b:v 1M -f webm -threads 8 udp://127.0.0.1:4242
On localhost I get close to no latency at all with mpeg2video, whereas I get almost 1 second of latency with h264. I heard vp9 could have very low latency too, but apparently I don't know how to use its options in ffmpeg, because I get really bad latency values...
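For what it's worth, libvpx-vp9 latency is dominated by its lookahead and speed settings; flags along these lines (the values are guesses, untested here) are what is usually suggested for low delay:
ffmpeg -an -s 1280x720 -r 30 -f x11grab -i :0.0 -c:v libvpx-vp9 -deadline realtime -cpu-used 8 -lag-in-frames 0 -b:v 1M -f webm udp://127.0.0.1:4242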
Anyway, to get back to the topic: 127.0.0.1 instead of localhost doesn't help, and ffmpeg://udp://ip:port doesn't work either... I think I may have a wrong mplayer configuration; maybe I should try compiling it myself.
But actually, I don't even know whether mplayer would give me the information I want (the average number of milliseconds a codec needs to encode/decode a frame, so that I can compare my different codecs precisely).
EDIT: Sorry about that... ffmpeg://udp://ip_addr works; I had made a typo.
Thanks a lot. Still, the video quality is really awful in mplayer compared to ffplay...
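On the per-frame timing question: ffmpeg itself can report how long decoding and encoding take without involving mplayer, via its -benchmark_all option; a rough sketch:
ffmpeg -benchmark_all -f x11grab -s 1280x720 -r 30 -i :0.0 -t 10 -c:v mpeg2video -b:v 8M -f null -
Plain -benchmark prints only the overall totals at the end.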