FFmpeg stream dynamic png - node.js

I would like to know if it's possible to stream a png or any other kind of image using ffmpeg. I would like to continuously generate the image with node.js, updating it every 10 seconds. I would like to display game stats with it in a corner and mix in some background music or pre-recorded commentary. Additionally, I would like to mix in a video, and the image should act like an overlay.
I am also not sure whether this is possible with a transparent png image.
I couldn't get my head around doing the mixing with ffmpeg and it looks very complicated, so I would like to get some help with it.
I have video files stored in a folder that I would like to stream continuously while mixing different music and an image onto them. I would like to have it all working continuously without stopping the stream.
Is this possible with the ffmpeg CLI on Linux, or can I not avoid using a desktop Windows PC for such a thing?

Well, after digging through the documentation and asking for help on IRC, I came up with the following command.
First I store the list of tracks in a txt file such as:
playlist.txt
file 'song1.mp3'
file 'song2.mp3'
file 'song3.mp3'
Then I want to concatenate the tracks, so I use the concat demuxer (-f concat) and specify the playlist txt file as input.
The second thing is using a static image as an input (I'll call it image.png here) that I can manually update.
ffmpeg -re -y -f concat -safe 0 -i playlist.txt -framerate 1 -loop 1 -f image2 -i image.png \
-vcodec libx264 -pix_fmt yuv420p -preset ultrafast -r 12 -g 24 -b:v 4500k \
-acodec libmp3lame -ar 44100 -threads 6 -qscale 3 -b:a 128k -bufsize 512k \
-f flv "rtmp://"
The rest specifies the output codecs and other settings for streaming.
That's what I came up with so far. I'm not sure if there's a better way of doing this, but right now it's sufficient for my needs.
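For the actual video/overlay mixing I haven't settled on a final command yet, but based on the documentation for the overlay filter, something along these lines should work (videos.txt and image.png are placeholder names here; the videos are concatenated the same way as the music, and the looping image is stacked on top of them):
ffmpeg -re -y -f concat -safe 0 -i playlist.txt \
-re -f concat -safe 0 -i videos.txt \
-framerate 1 -loop 1 -f image2 -i image.png \
-filter_complex "[1:v][2:v]overlay=W-w-10:10[out]" \
-map "[out]" -map 0:a \
-vcodec libx264 -pix_fmt yuv420p -preset ultrafast -r 12 -g 24 -b:v 4500k \
-acodec libmp3lame -ar 44100 -threads 6 -b:a 128k -bufsize 512k \
-f flv "rtmp://"
The overlay filter respects the png's alpha channel, so a transparent image should simply show the video underneath. To update the image without stopping the stream, the idea would be to have node.js write the new png to a temporary file and then rename it over image.png, so ffmpeg should never read a half-written file.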

Related

Corrupted output when resizing with scale_npp filter ffmpeg

I'm trying to transcode a single video file into multiple variants with different resolutions/bitrates using GPU acceleration - in one command.
The encoding/decoding part is working great and produces results as expected.
Main issue
However, when I try to resize using the scale_npp filter, things start to turn green.
The resulting output from ffmpeg is just a green image.
This looks similar to Issue1 and Issue2.
I already asked in the ffmpeg forum, but there is no answer for this issue.
The command I am using for the conversion:
ffmpeg -y -vsync 0 -hwaccel cuvid -c:v h264_cuvid -i input.mp4 -vf scale_npp=1280:720 -c:a copy -c:v h264_nvenc -b:v 360k -hls_time 10 -hls_segment_filename output/ts_%03d.ts output/m3.m3u8
My output video looks like this:
https://i.stack.imgur.com/EbVfV.jpg
I appreciate your help.
Thanks
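One way to narrow this down (untested suggestion): run the same GPU decode/scale/encode chain to a plain MP4 instead of HLS, e.g.
ffmpeg -y -vsync 0 -hwaccel cuvid -c:v h264_cuvid -i input.mp4 -vf scale_npp=1280:720 -c:a copy -c:v h264_nvenc -b:v 360k test_720p.mp4
If test_720p.mp4 is also green, the problem is in the decode/scale_npp path rather than in the HLS segmenting.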

ffmpeg cutting the last 2 seconds of audio

When I record both audio and video using ffmpeg the audio recording cuts out for the last two seconds of the video.
ffmpeg \
-f v4l2 -i /dev/video0 \
-f alsa -i hw:2 \
samples/video.mp4
I have tried using different audio and video codecs, as well as different video formats, and I have noticed that with the mpg format instead of mp4 the audio works better.
I have also tried using different codecs with mp4 and checked the compatibility tables on Wikipedia, but they don't seem to matter much.
In the end, adding the following options seems to solve the problem:
-preset ultrafast -threads 0
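In other words, the recording command that now works looks like this (the extra options go in front of the output file):
ffmpeg \
-f v4l2 -i /dev/video0 \
-f alsa -i hw:2 \
-preset ultrafast -threads 0 \
samples/video.mp4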

Unable to find suitable output format for 'libmp3lame' and 'flv'

Currently I am trying to set up a livestream using ffmpeg in Kubuntu. I got really far, but unfortunately I cannot figure out one bit that produces output format errors. Here's the code I am using for my .sh file:
#! /bin/bash
# streaming on Ubuntu via ffmpeg.
# see http://ubuntuguide.org/wiki/Screencasts for full documentation
# input resolution, currently fullscreen.
# you can set it manually in the format "WIDTHxHEIGHT" instead.
INRES="1920x1200"
# output resolution.
# keep the aspect ratio the same or your stream will not fill the display.
OUTRES="1280x720"
# input audio. You can use "/dev/dsp" for your primary audio input.
#INAUD="pulse"
# target fps
FPS="30"
# video preset quality level.
# more FFMPEG presets available in /usr/share/ffmpeg
QUAL="ultrafast"
# stream key. You can set this manually, or reference it from a hidden file
# like what is done here.
STREAM_KEY=$(cat ~/.twitch_key)
# stream url. Note the formats for twitch.tv and justin.tv
# twitch:"rtmp://live.twitch.tv/app/$STREAM_KEY"
# justin:"rtmp://live.justin.tv/app/$STREAM_KEY"
STREAM_URL="rtmp://live-cdg.twitch.tv/app/$STREAM_KEY"
ffmpeg \
-f alsa -ac 2 -i "$INAUD" \
-f x11grab -s "$INRES" -r "$FPS" -i :50.0 \
-vcodec libx264 -s "$OUTRES" -preset "$QUAL" -crf 22\
-acodec libmp3lame -threads 6 -q:a 0 -b:a 160k \
-f flv -ar 44100 "$STREAM_URL"
Now the issue is that whenever I run the .sh file, I get an error at the end saying
Unable to find a suitable output format for 'libmp3lame'
libmp3lame: Invalid argument
So I decided to troubleshoot by removing the audio line at the bottom, and the error just turned into
Unable to find a suitable output format for 'flv'
flv: Invalid argument
Something tells me this is because the stream key is not defined properly, but I have no idea whatsoever how to fix this.
So does anyone have an idea?
Thanks in advance!
Misterff1
Insert a space before the backslash here:
-crf 22\
so it becomes
-crf 22 \
Without the space, the backslash only escapes the newline, so bash joins "22" and "-acodec" into the single word "22-acodec". ffmpeg then sees "libmp3lame" as a stray output filename, which is why it cannot find a suitable output format for it. Once the audio line is removed, "flv" ends up in that position instead, hence the second error.
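With the space added, the ffmpeg call in the script becomes:
ffmpeg \
-f alsa -ac 2 -i "$INAUD" \
-f x11grab -s "$INRES" -r "$FPS" -i :50.0 \
-vcodec libx264 -s "$OUTRES" -preset "$QUAL" -crf 22 \
-acodec libmp3lame -threads 6 -q:a 0 -b:a 160k \
-f flv -ar 44100 "$STREAM_URL"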

FFMPEG encode audio and forced subtitles at same time?

I'm using the latest static build of ffmpeg for Windows.
My input file (.mkv) is:
[video] - 1080, V_MPEG4/ISO/AVC, 14.6 Mbps, ID#0
[audio] - DTS 5.1, 1510 Kbps, ID#1
[subtitles] - S_TEXT/ASS Lossless English, ID#14
My problem is this: I convert the audio so that my target player, an XB1 console (media support faq), is able to play the audio/video. However, sometimes it's rather difficult to hear, or parts may be in a foreign language, so I want to force the English subtitles into the mix at the same time as I convert the audio.
Currently, for the audio, I use the following command:
ffmpeg -i input.mkv -codec copy -acodec ac3 output.mkv
Can I somehow tie in the forced subtitles (onto the video) in order to save an extra process of taking the output.mkv and trying to force subtitles on?
Edit: I've tried using the following command to extract the subtitles so I can edit them:
ffmpeg -i Movie.mkv -map 0:s:14 subs.srt
However I get the error: Stream map '0:s:14' matches no streams
Edit 2: I attempted to extract the subtitles and succeeded with
ffmpeg -i input.mkv -map 0:14 -c copy subtitles.ass
but I'm still looking to force the subtitles onto the video, nonetheless!
Also - a little bonus to this question - can I somehow extract the .ass file and edit it so it only produces subtitles for the foreign parts - so English audio doesn't have subtitles during the movie but foreign audio does?
Cheers
Edit 3:
When I try to use both of the commands at once (my earlier-mentioned audio conversion and the one from the ffmpeg wiki)
ffmpeg -i input.mkv -codec copy -acodec ac3 -vf "ass=subs.ass" output.mkv
I get the following error from ffmpeg:
Filtergraph 'ass=subs.ass' was defined for video output stream 0:0 but codec copy was selected.
Filtering and streamcopy cannot be used together.
Since your media player does not support subtitles, the text has to be burnt onto the video image. For that, use
ffmpeg -i input.mkv -vf "ass=subs.ass" -c:v libx264 -crf 20 -c:a ac3 output.mkv
This will re-encode the video, since text is being added. The CRF value controls the video quality. Lower values produce better quality but larger files. 18 to 28 is a decent range to try.
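For the bonus question, a rough workflow (untested; the stream index 0:14 comes from the Edit 2 above) would be: extract the track, delete the Dialogue lines for the English-audio parts in a text editor (an .ass file is plain text), and then burn the edited file while converting the audio:
ffmpeg -i input.mkv -map 0:14 -c copy subs.ass
(edit subs.ass by hand, keeping only the Dialogue lines you want shown)
ffmpeg -i input.mkv -vf "ass=subs.ass" -c:v libx264 -crf 20 -c:a ac3 output.mkv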

Problems with point to point streaming using FFmpeg

I want to live stream video from a webcam and sound from a microphone from one computer to another, but there are some problems.
When I use this command line:
ffmpeg.exe -f dshow -rtbufsize 500M -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234
The FFmpeg console starts filling with yellow warning messages and the stream becomes unstable: http://s16.postimg.org/qglcgr345/Untitled.png
To solve this problem I added a new parameter to the command line to set the frame rate, -r 25:
ffmpeg.exe -f dshow -rtbufsize 500M -r 25 -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234
After I added -r 25, the problem with the yellow messages disappeared, but another problem appeared. When I freshly start FFmpeg with this command line, video and sound look synchronous, but after one or two minutes a lag of about 25 seconds appears between video and sound; the sound falls behind the video. I have tried different protocols (UDP, TCP, RTP), but the problems are the same. Please help me!
I found the answer to my problem with "-r" and the asynchronous audio and video. For anyone interested, the answer is here: https://trac.ffmpeg.org/wiki/DirectShow (in the section "Specifying input framerate").
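In short (if I read that section correctly): instead of forcing the rate with -r, tell the dshow device itself which frame rate to deliver. Untested, and assuming the camera supports 25 fps natively, the command would look something like:
ffmpeg.exe -f dshow -rtbufsize 500M -framerate 25 -i video="Camera":audio="Microphone" -c:v mpeg4 -c:a mp2 -f mpegts udp://127.0.0.1:1234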
