I'm dealing with UDP/RTP multi-program transport streams in DirectShow.
I want to decode, in a single graph, the audio channels carried by different programs.
How can I configure the Demultiplexer to achieve this?
Using GraphEdit, the basic graph composed of:
Network receiver ---> MS Demultiplexer ---> PSI parser
allows me to see the program list and the audio/video channels associated with each program.
If I select a program and its audio and video PIDs in the PSI parser properties, the content is rendered.
Now, how can I render multiple channels from different programs at the same time, in the same graph?
I tried:
1) Via the PSI parser properties dialog. The first configuration works, but as soon as I configure the second audio/video/program, the rendering of the old content is replaced by the new configuration. Building the graph via the API with this approach gives the same result: only the first configuration works. If I add other pins, I can render content only if the configuration is the same as that of the first pin; if the audio/video PID belongs to a different program, it is not rendered.
2) Cascading two (or more) demuxes, configuring the first to forward the packets belonging to the selected program and the second to extract audio and video from the stream it receives. For this configuration, the first demux's output pin media type is "transport stream", mapped as "Transport packet (complete)"; the mapped PID is the program PID identified by the PSI parser.
Result: the graph runs, but I get a black window and no audio.
Can you help, please?
How about adding a tee filter after the demux and then adding multiple parsers to the output pins from the tee? I think that might work.
The way I do it now is to use FFmpeg to generate multiple outputs, then use separate FFmpeg instances to encode those streams separately. The only possible issue is that I use Linux, so this may not be ideal for other operating systems.
Here is the master FFmpeg command:
/usr/bin/ffmpeg -f mpegts -i "udp://#server_ip:8080?overrun_nonfatal=1&reuse=1" \
-map 0:p:1 -copyinkf -vcodec copy -acodec copy -f mpegts "udp://server_ip:8001" \
-map 0:p:2 -copyinkf -vcodec copy -acodec copy -f mpegts "udp://server_ip:8002" \
...
-map 0:p:10 -copyinkf -vcodec copy -acodec copy -f mpegts "udp://server_ip:8010"
Then you can have single FFmpeg instances running something like this:
/usr/bin/ffmpeg -i "udp://#server_ip:8001" -vcodec libx264 -acodec libmp3lame -f mpegts rtmp://other_server:port
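If you don't already know which program numbers exist in the source (the N in -map 0:p:N), ffprobe can list them. A small sketch, using the same placeholder source URL as above:
/usr/bin/ffprobe -v error -show_programs "udp://#server_ip:8080?overrun_nonfatal=1&reuse=1"
Each [PROGRAM] block in the output carries a program_id, which should be the value the p: specifier matches.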
Hope this helps point someone in the right direction. I wish it had been explained this simply when I needed help.
I am trying to insert many miniclipX.mp4 files into a main.mp4 video. Although I have been able to do this using this solution, I seem to suffer from generation loss.
The command I am using (within a Python script, in a loop at many different intervals) is:
ffmpeg -i main.mp4 -i miniclipX.mp4 -filter_complex "[0:v]drawbox=t=fill:enable='between(t,5,6.4)'[bg];[1:v]setpts=PTS+5/TB[fg];[bg][fg]overlay=x=(W-w)/2:y=(H-h)/2:eof_action=pass;[1:a]adelay=5s:all=1[a1];[0:a][a1]amix" output.mp4
(Then renaming output.mp4 to main.mp4 within a loop)
Would there be any way to either:
A) Reduce generation loss by implementing certain flags
or
B) Include many different input files and many different -filter_complex chains in a single command to achieve what I am after?
Because you did not provide the ffmpeg log (and therefore there is no info about your ffmpeg version or your inputs), for this answer I'll assume all videos have the same width and height.
Example to show miniclip1.mp4 at 5 seconds and miniclip2.mp4 at 10 seconds:
ffmpeg -i main.mp4 -i miniclip1.mp4 -i miniclip2.mp4 -filter_complex
"[1:v]setpts=PTS+5/TB[offset1];[0:v][offset1]overlay=x=(W-w)/2:y=(H-h)/2:eof_action=pass[bg];
[2:v]setpts=PTS+10/TB[offset2];[bg][offset2]overlay=x=(W-w)/2:y=(H-h)/2:eof_action=pass;
[1:a]adelay=5s:all=1[a1];
[2:a]adelay=10s:all=1[a2];
[0:a][a1][a2]amix=inputs=3"
output.mp4
Command was broken into multiple lines so it is easier to read. Make it one line when executing.
I have a single video with no audio tracks and want to add several audio tracks sequentially (each track starts immediately after the other).
The basic case might look something like this:
|-----------VIDEO-----------VIDEO-------------VIDEO-----------VIDEO-----------|
|---FULL AUDIO TRACK 1---|---FULL AUDIO TRACK 2---|---PARTIAL AUDIO TRACK 3---|
Here is my attempt to achieve this:
ffmpeg -i video.mov -i audio1.mp3 -i audio2.mp3 -i audio3.mp3 -map 0:0 -map 1:0 -map 2:0 -map 3:0 out.mp4
Of course it doesn't produce the desired result: it only uses the first music clip in out.mp4, and no other audio track starts when it ends.
Question 1
What am I missing in order to add multiple audio tracks sequentially? I assume it's a matter of specifying the start and end points of the audio clips, but I'm coming up short on locating the syntax.
...
In addition, I'm looking for a way to ensure that the video ends with the full duration of AUDIO TRACK 3, as seen below:
|-----------VIDEO-----------VIDEO-------------VIDEO-----------VIDEO-----------|
|---FULL AUDIO TRACK 1---|---PARTIAL AUDIO TRACK 2---|---FULL AUDIO TRACK 3---|
In this case, AUDIO TRACK 2 gets trimmed so that the full AUDIO TRACK 3 is pinned to the end.
Question 2
Can this type of audio pinning be done in FFmpeg, or would I have to trim AUDIO TRACK 2 with another program first?
Use the atrim, asetpts, and concat filters:
ffmpeg -i video.mov -i audio1.mp3 -i audio2.mp3 -i audio3.mp3
-filter_complex "[2:a]atrim=duration=5,asetpts=PTS-STARTPTS[a2];[1:a][a2][3:a]concat=n=3:a=1:v=0[a]"
-map 0:v -map "[a]" -c copy -c:a aac -shortest output.mp4
atrim trims the audio. You can also use the start and/or end options if you prefer them over duration.
asetpts resets the timestamps (required by concat).
concat concatenates each audio segment.
If you want to automate this you'll have to script it. You can get the duration of each input with ffprobe:
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 input.mp4
Then use that to determine the duration of whatever audio stream you want to trim.
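For Question 2 (keeping the full AUDIO TRACK 3 pinned to the end), one way is to compute the trim length for track 2 as the video duration minus the durations of tracks 1 and 3. A rough sketch of that in a shell script, reusing the file names from the question; dur is just a hypothetical helper around the ffprobe call above, and bc does the arithmetic:
# helper: print a file's duration in seconds
dur() { ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "$1"; }
V=$(dur video.mov); A1=$(dur audio1.mp3); A3=$(dur audio3.mp3)
A2TRIM=$(echo "$V - $A1 - $A3" | bc)   # how much of audio2 to keep
ffmpeg -i video.mov -i audio1.mp3 -i audio2.mp3 -i audio3.mp3 \
-filter_complex "[2:a]atrim=duration=${A2TRIM},asetpts=PTS-STARTPTS[a2];[1:a][a2][3:a]concat=n=3:a=1:v=0[a]" \
-map 0:v -map "[a]" -c:v copy -c:a aac -shortest output.mp4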
What I want is to be able to create a livestream from an Ubuntu 14.04 server to an RTMP server (like Twitch) and to be able to use NodeJS to control visual aspects (adding layers, text, images) and add different sources (video files, other livestreams, etc.). Like having OBS running on a server.
What I've done/researched so far:
FFmpeg
With ffmpeg I can create video file streams like this:
ffmpeg -re -i video.mp4 -c:v libx264 -preset fast -c:a aac -ab 128k -ar 44100 -f flv rtmp://example.com
Also, using filter_complex, I can create something close to a layer, as this tutorial explains:
https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
But I found the following problems:
The streams that I create with ffmpeg only last until the video file is over; if I wanted to stream multiple video files (a dynamic playlist), the stream would be interrupted between each file (one possible workaround is sketched after this list);
The manipulation is very limited as far as I can tell; I can't edit the filter_complex once ffmpeg is executing;
I can't display text or create animated overlays, like sliding text.
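One possible workaround for the first problem (a sketch, assuming a playlist.txt that you generate yourself and the same codec settings and RTMP URL as above) is the concat demuxer, which reads a list of files as one continuous input, so ffmpeg keeps streaming across file boundaries:
# playlist.txt contains one line per clip, for example:
#   file 'video1.mp4'
#   file 'video2.mp4'
ffmpeg -re -f concat -safe 0 -i playlist.txt -c:v libx264 -preset fast \
-c:a aac -ab 128k -ar 44100 -f flv rtmp://example.com
This does not address the live-editing problem, but it avoids interrupting the stream between files.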
I tried to search for any CLI/NodeJS package that is able to create a continuous video stream and manipulate it to use as the input source for ffmpeg, which then streams to the RTMP server.
Can someone give me more information about what I am trying to do?
I'm playing with github.com/fluent-ffmpeg/node-fluent-ffmpeg to see if I have a different outcome.
I'm using ffmpeg to build a short hunk of video from a machine-generated png. This is working, but the video now needs to have a soundtrack (an [audio] field) for some of the other things I'm doing with it. I don't actually want any sound in the video, so: is there a way to get ffmpeg to simply set up an empty soundtrack property in the video, perhaps as part of the call that creates the video? I guess I could make an n-second long silent mp3 and bash it in, but is there a simpler / more direct way? Thanks!
Thanks to @Alvaro for the links; one of these worked after a bit of massaging. It does seem to be a two-step process: first make the soundtrack-less video and then do:
ffmpeg -ar 44100 -acodec pcm_s16le -f s16le -ac 2 -channel_layout stereo
-i /dev/zero -i in.mp4 -vcodec copy -acodec libfaac -shortest out.mp4
The silence comes from /dev/zero and -shortest makes the process stop at the end of the video. Argument order is significant here; -shortest needs to be down near the output file spec.
This assumes that your ffmpeg installation has libfaac enabled, which it might not. But otherwise, this seems to work.
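If libfaac isn't available in your build, a variation on the same idea that should work with a stock ffmpeg is to generate the silence with the anullsrc lavfi source and encode it with the built-in aac encoder (a sketch, reusing the in.mp4/out.mp4 names from above):
ffmpeg -i in.mp4 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
-map 0:v -map 1:a -c:v copy -c:a aac -shortest out.mp4
Here -shortest again stops the (otherwise infinite) silent source at the end of the video.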
I guess you need to create a media file with both an audio and a video stream. As far as I know, there is not a direct way.
If you know your video's duration, first create the dummy audio, and then, when you create the video, join the audio part.
On Superuser you can find more info: link1 link2
I'm trying to broadcast an application's audio output to a media server like Adobe FMS, Red5, or IceCast.
Is there a tool that can help me accomplish this, or a library that can help with building a custom solution on Linux/Windows?
Thanks.
ffmpeg is an excellent option for this; transcoding and feeding into other streaming servers is where it shines. Currently I'm using the following command to transcode an RTMP stream to 16x9 pixels in raw RGB24 format while also deleting the audio channel[s]:
ffmpeg -re -i http://192.168.99.148:8081/ -an -vf scale=16:9 -pix_fmt rgb24 -f rawvideo udp://127.0.0.1:4000
Of course the possibilities are limitless; if you can give more specific info about your case, I might be able to help you construct the needed commands.
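For the audio-only case in the question, one possible starting point on Linux is to capture the application's output from PulseAudio (the ".monitor" source of the sink the application plays into) and push it to an Icecast mount point. A hedged sketch; the source name, host, credentials, and mount point are placeholders you would need to adapt:
# list PulseAudio sources and pick the .monitor source of the right sink
pactl list short sources
# capture that monitor source and send it to an Icecast mount
ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor \
-c:a libmp3lame -b:a 128k -content_type audio/mpeg \
-f mp3 icecast://source:hackme@icecast_host:8000/app_audio.mp3
For RTMP servers such as FMS or Red5, the same capture input can instead be sent with -f flv to an rtmp:// URL.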