Broadcast a program's audio to a media server - Linux

I'm trying to broadcast an application's audio output to a media server like Adobe FMS, Red5, or Icecast.
Is there a tool that can help me accomplish this, or a library that could help with building a custom solution on Linux/Windows?
Thanks.

ffmpeg is an excellent option for this; transcoding and feeding into other streaming servers is where it shines. I'm currently using the following command to transcode a network stream down to 16x9 pixels in raw RGB24 format while also dropping the audio channel(s):
ffmpeg -re -i http://192.168.99.148:8081/ -an -vf scale=16:9 -pix_fmt rgb24 -f rawvideo udp://127.0.0.1:4000
Of course the possibilities are nearly limitless; if you can give more specific information about your case, I might be able to help you construct the needed commands.
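For the original question (capturing a running application's audio on Linux and pushing it to Icecast), something along these lines should work on a PulseAudio system. This is a minimal sketch, not a tested recipe: the monitor source name, server address, mount point, and source password are all placeholders you would substitute with your own. First find the monitor source that carries the playback audio:
pactl list short sources
Then capture it and push MP3 to the Icecast mount point (ffmpeg has had a native icecast:// protocol since version 2.4):
ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor \
-acodec libmp3lame -ab 128k -f mp3 icecast://source:hackme@192.168.1.10:8000/stream.mp3
To isolate a single application rather than the whole desktop output, you can move that application to its own null sink with pactl and capture that sink's monitor instead.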

Related

OBS alternative for server: creating a continuous video stream to an RTMP server and being able to manipulate it using NodeJS

What I want is to be able to create a livestream from an Ubuntu 14.04 server to an RTMP server (like Twitch), and to be able to use NodeJS to control visual aspects (adding layers, text, images) and to add different sources (video files, other livestreams, etc.). Like having OBS running on a server.
What I've done/researched so far:
FFmpeg
With ffmpeg I can create streams from video files like this:
ffmpeg -re -i video.mp4 -c:v libx264 -preset fast -c:a aac -ab 128k -ar 44100 -f flv rtmp://example.com
Also, using filter_complex, I can create something close to a layer, as this tutorial explains:
https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
But I found the following problems:
The streams that I create with ffmpeg only last until the video file is over; if I wanted to stream multiple video files (a dynamic playlist), the stream would be interrupted between each file (the concat sketch below addresses this);
The manipulation is very limited as far as I can tell; I can't edit the filter_complex once ffmpeg is executing;
I can't display text or create animated overlays, like sliding text.
I tried to search for any CLI/NodeJS package that is able to create a continuous video stream and manipulate it for use as the input source for ffmpeg, which then streams to the RTMP server.
Can someone point me in the right direction for what I am trying to do?
I'm playing with github.com/fluent-ffmpeg/node-fluent-ffmpeg to see if I get a different outcome.
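For what it's worth, the playlist problem mentioned above is usually handled with ffmpeg's concat demuxer, which reads a text file listing the sources and feeds them to the encoder as one continuous stream, so the RTMP session never drops between files. A minimal sketch (file names and the endpoint are placeholders): put the sources in a playlist.txt,
file 'video1.mp4'
file 'video2.mp4'
and stream it with:
ffmpeg -re -f concat -safe 0 -i playlist.txt -c:v libx264 -preset fast -c:a aac -ab 128k -ar 44100 -f flv rtmp://example.com
As for the other two problems: the drawtext filter can overlay text (and animate it via its x/y expressions), assuming your ffmpeg build includes libfreetype, and the zmq/azmq filters accept commands over a socket to change some filter parameters while ffmpeg is running, which covers part of the "edit filter_complex during execution" requirement.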

Azure live streaming with already encoded content

I have been looking into Azure's live streaming features and am very impressed with what they offer.
What I would like to know is whether we can live stream from an already encoded video asset rather than a live recording.
For example, say I want to stream an event at a specific time using existing VOD content on Azure.
I'm not sure if there is any support in Wirecast to do this.
Any help or suggestion would be appreciated.
Thanks
I tested this right after reading your question, but so far I have failed to publish an existing/already-uploaded asset as a streaming source using Azure Media Services alone.
In the case of Wirecast, it can serve media files for streaming, as the manual describes on page 36:
Wirecast uses the concept of a shot to construct presentations. A shot contains media, along with the settings for that media. In its simplest form, a shot contains one piece of media such as a photo or a video clip. But it can also be something more complex, like a live camera with a title, and background music, or even a Playlist of shots.
But if you only want to serve a file without editing, you can use a simple encoder program like FFmpeg from your computer (or a virtual machine) to transmit it, as the documentation below suggests:
https://azure.microsoft.com/ko-kr/blog/azure-media-services-rtmp-support-and-live-encoders/
The FFmpeg command-line example at the above link is as follows:
C:\tools\ffmpeg\bin\ffmpeg.exe -v verbose -i MysampleVideo.mp4 -strict -2 -c:a aac -b:a 128k -ar 44100 -r 30 -g 60 -keyint_min 60 -b:v 400000 -c:v libx264 -preset medium -bufsize 400k -maxrate 400k -f flv rtmp://channel001-streamingtest.channel.media.windows.net:1935/live/a9bcd589da4b42409936940/mystream1
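One caveat about that example: without -re, FFmpeg reads the file as fast as the network allows rather than in real time, which matters when you are simulating a live event from VOD content. A hedged Linux variant of the same command with the read-rate flag added (the ingest URL is the one from the blog post):
ffmpeg -v verbose -re -i MysampleVideo.mp4 -strict -2 -c:a aac -b:a 128k -ar 44100 \
-r 30 -g 60 -keyint_min 60 -b:v 400000 -c:v libx264 -preset medium \
-bufsize 400k -maxrate 400k -f flv rtmp://channel001-streamingtest.channel.media.windows.net:1935/live/a9bcd589da4b42409936940/mystream1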

Demultiplexing UDP/RTP multi-program transport streams

I'm dealing with UDP/RTP multi-program transport streams in DirectShow.
I wish to decode, in a single graph, the audio channels carried by different programs.
How can I configure the demultiplexer to achieve this?
Using GraphEdit, the basic graph composed of:
Network receiver ---> MS Demultiplexer ---> PSI parser
allows me to see the program list and the audio/video channels associated with each program.
If I select the program, audio, and video PIDs in the PSI parser properties, the contents are rendered.
Now, how can I render multiple channels from different programs at the same time, in the same graph?
I tried:
1) Via the PSI parser properties dialog. The 1st configuration is OK, but as soon as I configure the 2nd audio/video/program, the old content's rendering is replaced by the new configuration. Building a graph via the API with this approach gives the same result: only the 1st configuration works. If I add other pins, I can render content only if the configuration is the same as the 1st pin's. If the audio/video PID belongs to a different program, it is not rendered.
2) Cascading two (or more) demuxes, configuring the 1st to forward packets belonging to the specific program and the 2nd to extract audio and video from the received stream. For this configuration, the output pin media type is "transport stream", mapped to "Transport packet (complete)"; the PID is the program PID identified by the PSI parser.
Result: the graph runs, but I get a black window and no audio.
Can you help, please?
How about adding a tee filter after the demux and then adding multiple parsers to the output pins from the tee? I think that might work.
The way I do it now is to use ffmpeg to generate multiple outputs, then use separate FFmpeg instances to encode those streams separately. The only possible caveat is that I use Linux, and this may not be ideal for other operating systems.
Here is the master FFmpeg command:
/usr/bin/ffmpeg -f mpegts -i "udp://@server_ip:8080?overrun_nonfatal=1&reuse=1" \
-map 0:p:1 -copyinkf -vcodec copy -acodec copy -f mpegts "udp://server_ip:8001" \
-map 0:p:2 -copyinkf -vcodec copy -acodec copy -f mpegts "udp://server_ip:8002" \
...
-map 0:p:10 -copyinkf -vcodec copy -acodec copy -f mpegts "udp://server_ip:8010"
Then you can have individual FFmpeg instances running something like this (note that RTMP outputs need the flv muxer, so -f flv rather than -f mpegts):
/usr/bin/ffmpeg -i "udp://@server_ip:8001" -vcodec libx264 -acodec libmp3lame -f flv rtmp://other_server:port
Hope this helps point someone in the right direction. I wish it had been explained this simply when I needed help.

ffmpeg: How to assign an empty soundtrack to a video?

I'm using ffmpeg to build a short hunk of video from a machine-generated PNG. This works, but the video now needs to have a soundtrack (an [audio] field) for some of the other things I'm doing with it. I don't actually want any sound in the video, so: is there a way to get ffmpeg to simply set up an empty soundtrack in the video, perhaps as part of the call that creates it? I guess I could make an n-second-long silent MP3 and bash it in, but is there a simpler / more direct way? Thanks!
Thanks to @Alvaro for the links; one of these worked after a bit of massaging. It does seem to be a two-step process: first make the soundtrack-less video, and then do:
ffmpeg -ar 44100 -acodec pcm_s16le -f s16le -ac 2 -channel_layout stereo \
-i /dev/zero -i in.mp4 -vcodec copy -acodec libfaac -shortest out.mp4
The silence comes from /dev/zero and -shortest makes the process stop at the end of the video. Argument order is significant here; -shortest needs to be down near the output file spec.
This assumes that your ffmpeg installation has libfaac installed, which it might not. But otherwise, this seems to be working.
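On reasonably recent ffmpeg builds, the same result is possible in one step with the built-in anullsrc generator, which avoids the /dev/zero plumbing entirely. A sketch, with the native aac encoder swapped in for libfaac (older builds may want -strict -2 for it):
ffmpeg -i in.mp4 -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
-map 0:v -map 1:a -vcodec copy -acodec aac -shortest out.mp4
Here too, -shortest stops the generated silence at the end of the video.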
I guess you need to properly create a media file with both audio and video streams. As far as I know, there is no direct way.
If you know your video's duration, first create the dummy audio, and afterwards, when you create the video, join in the audio part.
On Super User, you can find more info: link1, link2

Download ONLY the audio from a YouTube video

I know that there are a million ways to download a video from YouTube and then convert it to audio or do further processing on it. But recently I was surprised to see an app called YoutubeToMp3 on Mac actually showing "Skipping X MB of video" and supposedly only downloading the audio from the video, without the need to use bandwidth to download the entire video and then convert it. I was wondering if this is actually correct and possible at all, because I can't find any way to do that. Do you have any ideas?
EDIT:
After some tests, here is some additional information on the topic. The video I tried to get the audio from is just a sample MP4 file from the internet:
http://download.wavetlan.com/SVV/Media/HTTP/MP4/ConvertedFiles/MediaCoder/MediaCoder_test6_1m9s_XVID_VBR_306kbps_320x240_25fps_MPEG1Layer3_CBR_320kbps_Stereo_44100Hz.mp4
I tried
ffmpeg -i "input" out.mp3
ffmpeg -i "input" -vn out.mp3
ffmpeg -i "input" -vn -ac 2 -ar 44100 -ab 320k -f mp3 output.mp3
ffmpeg -i "input" -vn -acodec copy output.mp3
Unfortunately, none of these commands seems to use less bandwidth. They all download the entire video. Now that you have the video, can you confirm whether there is actually a command that downloads only the audio stream from it and lowers the bandwidth usage? Thanks!
After a lot of research I found out that this is not possible, so I developed an alternative approach:
Download the mp4 header
Parse the header and get the locations of the audio bytes
Download the audio bytes with http range requests and offsets
Assemble the audio bytes and wrap them in a simple ADTS container to produce a playable m4a file
That way, only bandwidth for the audio bytes is used. If you find a better way of doing it, please let me know.
For a sample Android APP and implementation check out:
https://github.com/feribg/audiogetter/blob/master/audiogetter/src/main/java/com/github/feribg/audiogetter/tasks/download/VideoTask.java
FFmpeg is capable of accepting a URL as input. If the URL is seekable, then FFmpeg could theoretically skip all the video frames, and it would then need to download only the data for the audio stream.
Try using
ffmpeg -i http://myvideo.avi out.mp3
and see if it takes less bandwidth.
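One further note on the commands listed in the question: -acodec copy into a .mp3 file will fail whenever the source audio is not MP3, and most MP4 files carry AAC audio, which needs an M4A container instead. Stream copying at least avoids the re-encode, even though it does not by itself reduce the download:
ffmpeg -i "input" -vn -acodec copy output.m4a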
