I have been looking into Azure's live streaming features and am very impressed with what they offer.
What I would like to know is whether we can live stream from an already-encoded video asset rather than a live recording.
For example, suppose I want to stream an event at a specific time using existing VOD content on Azure.
I'm not sure whether Wirecast has any support for doing this.
Any help or suggestion would be appreciated.
Thanks
I tested this right after reading your question, but I was not able to publish an existing/already-uploaded asset as a live streaming source using Azure Media Services alone.
As for Wirecast, it can serve media files for streaming, as the manual describes on page 36:
Wirecast uses the concept of a shot to construct presentations. A shot contains media,
along with the settings for that media. In its simplest form, a shot contains one piece of
media such as a photo or a video clip. But it can also be something more complex, like a
live camera with a title, and background music, or even a Playlist of shots.
But if you only want to serve a file without editing, you can use a simple encoder program like FFmpeg on your computer (or a virtual machine) to transmit it, as the documentation below suggests.
https://azure.microsoft.com/ko-kr/blog/azure-media-services-rtmp-support-and-live-encoders/
The FFmpeg command-line example from that link is as follows:
C:\tools\ffmpeg\bin\ffmpeg.exe -v verbose -i MysampleVideo.mp4 -strict -2 -c:a aac -b:a 128k -ar 44100 -r 30 -g 60 -keyint_min 60 -b:v 400000 -c:v libx264 -preset medium -bufsize 400k -maxrate 400k -f flv rtmp://channel001-streamingtest.channel.media.windows.net:1935/live/a9bcd589da4b42409936940/mystream1
Related
What I want is to be able to create a livestream from an Ubuntu 14.04 server to an RTMP server (like Twitch), and to be able to use NodeJS to control visual aspects (adding layers, text, images) and to add different sources (video files, other livestreams, etc.). Like having OBS running on a server.
What I've done/researched so far:
FFmpeg
With ffmpeg I can create streams from video files like this:
ffmpeg -re -i video.mp4 -c:v libx264 -preset fast -c:a aac -ab 128k -ar 44100 -f flv rtmp://example.com
Also, using filter_complex, I can create something close to a layer, as this tutorial explains:
https://trac.ffmpeg.org/wiki/Create%20a%20mosaic%20out%20of%20several%20input%20videos
But I found the following problems:
The streams that I create with ffmpeg only last until the video file is over; if I wanted to stream multiple video files (a dynamic playlist), the stream would be interrupted between each file;
Manipulation is very limited as far as I can tell; I can't edit the filter_complex once ffmpeg is executing;
I can't display text or create animated overlays, like sliding text.
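For what it's worth, the first limitation (the stream ending between files) can be worked around with ffmpeg's concat demuxer, which plays a list of files as one continuous input. A minimal sketch, where the file names and the RTMP URL are placeholders:

```shell
# List the files to play back-to-back (names are placeholders)
cat > playlist.txt <<'EOF'
file 'video1.mp4'
file 'video2.mp4'
EOF

# Stream the whole list as one uninterrupted RTMP session
ffmpeg -re -f concat -safe 0 -i playlist.txt -c:v libx264 -preset fast \
  -c:a aac -ab 128k -ar 44100 -f flv rtmp://example.com
```

Note that the concat demuxer expects all entries to share the same codec parameters, and the list can't be edited mid-stream, so this only addresses the first problem.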
I tried to search for any CLI/NodeJS package that is able to create a continuous video stream and manipulate it for use as the input source for ffmpeg, which then streams to the RTMP server.
Can someone give me more information about what I am trying to do?
I'm playing with github.com/fluent-ffmpeg/node-fluent-ffmpeg to see if I get a different outcome.
I need to make a video that will play on iPhone and Android, but the problem is that when I press play on the phone, it takes a minimum of 7 seconds to start.
So maybe I need to fix something in this code to make the video play on phones (maybe another format is needed):
ffmpeg -i VIDEO -c:v libx264 -s 640x480 -strict experimental -c:a aac VIDEO.MP4
There must be a way to make the video start playing without a delay.
I tried an FLV file and it worked fine on Android, but the iPhone can't play it.
If you're referring to a progressive download scenario then you can use:
-movflags faststart
Run a second pass moving the index (moov atom) to the beginning of the
file. This operation can take a while, and will not work in various
situations such as fragmented output, thus it is not enabled by
default.
Source
The moov atom is generally at the end of the file, and in that case a full download is required before playback. Moving it to the start with the aforementioned flag allows playback to start immediately.
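As a concrete example, the flag can be applied in a re-mux without re-encoding; the file names here are placeholders:

```shell
# Copy the streams untouched (-c copy) and move the moov atom to the
# front of the file so playback can begin before the download finishes
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4
```

You can sanity-check the result by looking for the moov marker near the start of the file, e.g. `head -c 1024 output.mp4 | grep -a moov`.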
I'm using ffmpeg to build a short hunk of video from a machine-generated png. This is working, but the video now needs to have a soundtrack (an [audio] field) for some of the other things I'm doing with it. I don't actually want any sound in the video, so: is there a way to get ffmpeg to simply set up an empty soundtrack property in the video, perhaps as part of the call that creates the video? I guess I could make an n-second long silent mp3 and bash it in, but is there a simpler / more direct way? Thanks!
Thanks to @Alvaro for the links; one of them worked after a bit of massaging. It does seem to be a two-step process: first make the soundtrack-less video, and then do:
ffmpeg -ar 44100 -acodec pcm_s16le -f s16le -ac 2 -channel_layout stereo -i /dev/zero -i in.mp4 -vcodec copy -acodec libfaac -shortest out.mp4
The silence comes from /dev/zero and -shortest makes the process stop at the end of the video. Argument order is significant here; -shortest needs to be down near the output file spec.
This assumes that your ffmpeg installation has libfaac available, which it might not. But otherwise, this seems to work.
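On builds without libfaac (it was dropped from ffmpeg long ago), the same thing can be done in one pass with ffmpeg's built-in anullsrc silence generator and the native aac encoder. A sketch, with in.mp4/out.mp4 as placeholder names:

```shell
# Generate stereo silence as input 0, copy the video stream from in.mp4,
# and let -shortest stop the output at the video's end
ffmpeg -f lavfi -i anullsrc=r=44100:cl=stereo -i in.mp4 \
  -c:v copy -c:a aac -shortest out.mp4
```

Since anullsrc is infinite, -shortest is what keeps the output the same length as the video.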
I guess you need to create a media file properly, with both an audio and a video stream. As far as I know, there is no direct way.
If you know your video's duration, first create the dummy audio, and then, when you create the video, join the audio part to it.
You can find more info on Super User: link1 link2
I'm trying to broadcast an application's audio output to a media server like Adobe FMS, Red5, or IceCast.
Is there a tool that can help me accomplish this, or a library that can help with building a custom solution in linux/windows?
Thanks.
ffmpeg is an excellent option for this; transcoding and feeding into other streaming servers is where it shines. Currently I'm using the following command to transcode an RTMP stream to 16x9 pixels in raw RGB24 format while also deleting the audio channel(s):
ffmpeg -re -i http://192.168.99.148:8081/ -an -vf scale=16:9 -pix_fmt rgb24 -f rawvideo udp://127.0.0.1:4000
Of course the possibilities are limitless; if you can give more specific info about your case, I might be able to help you construct the needed commands.
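Since you mentioned IceCast specifically: ffmpeg can also push encoded audio straight to an Icecast mount point via its icecast:// protocol. A sketch, where the input file, credentials, host, port, and mount name are all placeholders (capturing a live application's output would need a capture input such as PulseAudio or a virtual audio device instead of a file):

```shell
# Encode a placeholder audio input to MP3 and send it to an Icecast mount;
# "source:hackme@localhost:8000/stream" is made up
ffmpeg -re -i input.wav -c:a libmp3lame -b:a 128k \
  -content_type audio/mpeg -f mp3 \
  icecast://source:hackme@localhost:8000/stream
```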
I know that there are a million ways to download a video from YouTube and then convert it to audio or do further processing on it. But recently I was surprised to see a Mac app called YoutubeToMp3 actually showing "Skipping X MB of video" and supposedly downloading only the audio from the video, without using bandwidth to download the entire video and then convert it. I was wondering whether this is actually correct and possible at all, because I can't find any way to do that. Do you have any ideas?
EDIT:
After some tests here is some additional information on the topic. The video which I tried to get the audio from is just a sample mp4 file from the internet:
http://download.wavetlan.com/SVV/Media/HTTP/MP4/ConvertedFiles/MediaCoder/MediaCoder_test6_1m9s_XVID_VBR_306kbps_320x240_25fps_MPEG1Layer3_CBR_320kbps_Stereo_44100Hz.mp4
I tried
ffmpeg -i "input" out.mp3
ffmpeg -i "input" -vn out.mp3
ffmpeg -i "input" -vn -ac 2 -ar 44100 -ab 320k -f mp3 output.mp3
ffmpeg -i "input" -vn -acodec copy output.mp3
Unfortunately, none of these commands seems to use less bandwidth; they all download the entire video. Now that you have the video, can you confirm whether there is actually a command that downloads only the audio stream from it and lowers the bandwidth usage? Thanks!
After a lot of research I found out that this is not possible and developed an alternative approach:
Download the mp4 header
Parse the header and get the locations of the audio bytes
Download the audio bytes with http range requests and offsets
Assemble the audio bytes and wrap them in a simple ADTS container to produce a playable m4a file
That way only bandwidth for the audio bytes is used. If you find a better approach of doing it please let me know.
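Step 3 boils down to HTTP Range requests. A minimal sketch with curl, where the URL and byte offsets are hypothetical; a real implementation would compute the offsets from the sample tables parsed out of the moov atom:

```shell
# Fetch only bytes 1000-2000 of the remote file (offsets are made up;
# they would come from parsing the mp4 header)
curl -s -r 1000-2000 -o audio_chunk.bin http://example.com/video.mp4
```

The server has to support byte ranges (Accept-Ranges: bytes) for this to work.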
For a sample Android APP and implementation check out:
https://github.com/feribg/audiogetter/blob/master/audiogetter/src/main/java/com/github/feribg/audiogetter/tasks/download/VideoTask.java
FFmpeg is capable of accepting a URL as input. If the URL is seekable, then FFmpeg could theoretically skip all the video frames, and thus it would only need to download the data for the audio stream.
Try using
ffmpeg -i http://myvideo.avi out.mp3
and see if it takes less bandwidth.