We have been encoding and distributing videos for some years now, using FFmpeg to produce H.264/MP4 files that have worked great for us. We use Flowplayer in HTML5 mode and fall back to Flash for browsers that do not support it natively.
We use CloudFront to serve our files from an S3 bucket and have been using HTTP progressive streaming.
Recently we started distributing the files in Flash mode over RTMP instead, using a CloudFront streaming distribution pointing to the same Amazon S3 bucket.
All was good for some weeks, until yesterday when we noticed a couple of files with audio sync issues in RTMP mode.
The same file has no sync problems in Flash with a direct URL to the file.
What could be the cause?
The file is not working when streamed via RTMP, but works with HTTP progressive streaming.
You can see the sync issue 15 seconds into the video:
rtmp://s2xe2avk54qztf.cloudfront.net:1935/cfx/st/mp4:95fvOY255bdPspO3z6tEvGi3Em7/default.mp4
http://media.shootitlive.com/95fvOY255bdPspO3z6tEvGi3Em7/default.mp4
Another file that has no sync issue at all:
rtmp://s2xe2avk54qztf.cloudfront.net:1935/cfx/st/mp4:P4EuH2TZxfV6BvpupP6dxrrs7gD/default.mp4
http://media.shootitlive.com/P4EuH2TZxfV6BvpupP6dxrrs7gD/default.mp4
Both files have the same video and audio format and were encoded the exact same way with FFmpeg. It's not player related, as we see the audio sync issue in several players and when playing the stream in VLC.
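One thing that could be worth checking (my own guess, not something confirmed above) is whether the audio and video tracks in the problem file start at different offsets. Progressive MP4 playback honours the container's edit list, but an RTMP origin may ignore it, which would produce exactly this kind of constant offset. A minimal ffprobe check from Node, assuming ffprobe is on the PATH and default.mp4 is a local copy of the problem file:

import { execFile } from "node:child_process";

// Compare per-stream start times; a noticeable difference between the audio
// and video start_time is a common cause of drift that only shows up when
// the MP4 edit list is ignored (e.g. by an RTMP origin).
execFile(
  "ffprobe",
  [
    "-v", "error",
    "-show_entries", "stream=index,codec_type,start_time",
    "-of", "json",
    "default.mp4", // hypothetical local copy of the problem file
  ],
  (err, stdout) => {
    if (err) throw err;
    const { streams } = JSON.parse(stdout) as {
      streams: { index: number; codec_type: string; start_time: string }[];
    };
    for (const s of streams) {
      console.log(`stream ${s.index} (${s.codec_type}): start_time=${s.start_time}`);
    }
  }
);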
I'm looking for a way to display multiple camera streams (up to 200 cameras) in a single web application (only a single stream will be visible at any given time).
My initial plan was to connect the web app to the cameras using RTSP streams, but this protocol is not supported by most browsers. I have found some sources suggesting it should be possible with a third-party plugin, but so far no luck.
Another idea I had was to deploy a Kubernetes cluster with a transcoding service for each camera that converts an RTSP stream into an HLS stream, which is usable in a web app. But this means defining a hard link between each transcoder pod and each camera.
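To make that idea concrete, here is a rough sketch of what each transcoder pod could run (the camera URL and output path are placeholders; it assumes ffmpeg is available in the container and that the cameras already send H.264/AAC). It repackages the RTSP feed into a short rolling HLS playlist that the web app can fetch over plain HTTP:

import { spawn } from "node:child_process";

// Placeholder values; each pod would get its own camera URL and output directory.
const cameraUrl = "rtsp://camera-17.local:554/stream1";
const outputDir = "/var/www/hls/camera-17";

// Repackage (no re-encode) the RTSP feed into a rolling HLS playlist.
const ffmpeg = spawn("ffmpeg", [
  "-rtsp_transport", "tcp",   // TCP is usually more reliable than UDP inside a cluster
  "-i", cameraUrl,
  "-c", "copy",               // no transcoding needed if the camera sends H.264/AAC
  "-f", "hls",
  "-hls_time", "2",           // short segments keep latency down
  "-hls_list_size", "5",
  "-hls_flags", "delete_segments",
  `${outputDir}/index.m3u8`,
]);

ffmpeg.stderr.on("data", (chunk) => process.stderr.write(chunk));
ffmpeg.on("exit", (code) => console.log(`ffmpeg exited with code ${code}`));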
So my question: is there an easy way of using RTSP streams in a web app? Or what do you think is a viable way to handle this many cameras in a web app?
So many thanks!
I am new to RTMP and live streaming.
I have my RTMP server, but the issue is distribution; I was looking for a simple RTMP streaming CDN that can support audio streaming with HLS or DASH.
Or something free similar to YouTube Live, but for audio, and with embeddable HTML.
Currently (January 2022), most CDNs support only file-based streaming protocols like HLS/DASH/CMAF; even if you publish the stream via RTMP or WebRTC, the CDN converts it to these protocols.
If you want to build a low-latency live streaming application, HTTP-FLV (with latency comparable to RTMP) is recommended, and you need a CDN that supports HTTP-FLV rather than RTMP. HTTP-FLV works well on PC and mobile; please read this post.
You could build your own CDN from an open-source media-server cluster, such as SRS Edge, to deliver HTTP-FLV on AWS EC2.
For a CDN that supports HTTP-FLV, you could check Tencent Cloud Streaming Services, which supports publishing via RTMP and delivery via HLS/HTTP-FLV/WebRTC.
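On the playback side (my own addition, not a requirement of the services above), HTTP-FLV is usually consumed in the browser with a Media Source Extensions based player such as flv.js. A minimal sketch, assuming a <video id="player"> element on the page and a hypothetical HTTP-FLV URL:

import flvjs from "flv.js";

// Attach an HTTP-FLV live stream to a <video> element via Media Source Extensions.
if (flvjs.isSupported()) {
  const video = document.getElementById("player") as HTMLVideoElement;
  const player = flvjs.createPlayer({
    type: "flv",
    isLive: true,
    url: "https://cdn.example.com/live/mystream.flv", // hypothetical HTTP-FLV endpoint
  });
  player.attachMediaElement(video);
  player.load();
  player.play();
}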
I need to generate .ism files from MP4 files that are uploaded to Azure Blob Storage. Ideally, as soon as the user uploads an MP4 file to blob storage, I should be able to fire up an Azure Function that does the conversion.
Can someone please help me with how to do the conversion from MP4 to .ism?
Note: I do not want to use Azure Media Services; it is too expensive.
The .ism file probably won't help you much at all for this situation.
If you are trying to avoid using AMS completely and just do static packaging, you should generate HLS or DASH content directly into storage blobs. You could do that with FFmpeg or the Shaka Packager tool from an existing MP4 file. There are lots of OSS solutions out there that can generate static HLS and DASH content if that is your goal.
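As a rough illustration of that static-packaging route (my own sketch, with hypothetical paths; it assumes ffmpeg is installed and the input is already H.264/AAC), the MP4 can be repackaged into a VOD HLS rendition without re-encoding, and the resulting folder is what you would copy up to blob storage:

import { execFile } from "node:child_process";
import { mkdirSync } from "node:fs";
import { promisify } from "node:util";

const run = promisify(execFile);

// Repackage an already-encoded MP4 into a static (VOD) HLS rendition.
// The output directory is what gets uploaded to the storage container.
async function packageToHls(inputMp4: string, outputDir: string): Promise<void> {
  mkdirSync(outputDir, { recursive: true });
  await run("ffmpeg", [
    "-i", inputMp4,
    "-c", "copy",                 // no re-encode, just repackaging into segments
    "-f", "hls",
    "-hls_playlist_type", "vod",
    "-hls_time", "6",
    "-hls_segment_filename", `${outputDir}/segment_%03d.ts`,
    `${outputDir}/index.m3u8`,
  ]);
}

packageToHls("input.mp4", "./hls-out").catch(console.error);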
The .ism file is primarily a feature of AMS - it provides the information that the streaming endpoint (origin server) needs to dynamically package standard MP4 files on the fly into MPEG-DASH, HLS, and Smooth Streaming, and to add DRM encryption for Widevine, FairPlay, and PlayReady. If you have no need for multi-format dynamic packaging from MP4, then AMS is probably not the right solution for your needs.
If you can share: which parts are too expensive for you? The encoding, the monthly cost of the streaming endpoint (standard endpoint cost?), or the overall egress bandwidth needed to deliver content from Azure (which won't go away with a storage-based solution and is normally 90% of the cost of streaming if you have popular content)?
If you are trying to avoid encoding costs, you can encode locally or with ffmpeg on a server VM at your own cost, and then upload and stream with AMS. I have a good sample of that here: https://github.com/Azure-Samples/media-services-v3-dotnet/tree/main/Streaming/StreamExistingMp4
Thanks,
John
I am working on a VoD project in Node.js which must offer customers videos to buy or subscribe to.
Videos are hosted on a streaming server (a server like Red5, but not exactly Red5) that provides an interactive player, adaptive bitrate streaming, enhanced speed using a CDN, etc.
The problem I have is that users are able to download the videos, since they can easily obtain the video URLs.
According to the below question:
Is there a way a video file on a remote server can be downloaded in chunks using Node.js and piped through to a client, without storing any data on the server, …?
The request npm package has been suggested.
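As context for that suggestion (and as my own rough sketch only, with hypothetical URLs and no authentication shown): the request package has since been deprecated, but the same piping idea works with Node's built-in modules. The server fetches the file from the streaming server and pipes it straight through to the client, so the real URL is never exposed and nothing is written to disk:

import http from "node:http";
import https from "node:https";

// Hypothetical origin URL; in practice this is the streaming server's address,
// which stays hidden from the client behind this proxy endpoint.
const ORIGIN = "https://streaming-server.example.com/videos/lesson1.mp4";

http.createServer((clientReq, clientRes) => {
  // Forward the Range header so seeking keeps working through the proxy.
  const headers: Record<string, string> = {};
  if (clientReq.headers.range) headers.Range = clientReq.headers.range;

  https.get(ORIGIN, { headers }, (upstream) => {
    clientRes.writeHead(upstream.statusCode ?? 200, upstream.headers);
    upstream.pipe(clientRes); // chunks flow straight through, nothing stored on the server
  });
}).listen(3000);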
Now my questions are:
Is the suggested solution a wise choice to adopt for my scenario?
Following the suggested solution, would it still be possible to use the server's features like adaptive bitrate streaming, ...?
You may also encrypt each segment with AES to prevent copying.
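To expand on that a little (the specific mechanism is my own addition): ffmpeg's HLS muxer can encrypt segments with AES-128 via a key-info file whose first line is the key URI the player will request, second line the local key file, and an optional third line a hex IV. A small sketch with hypothetical paths; the key URI would point at an endpoint that only hands out the key to entitled users:

import { execFileSync } from "node:child_process";
import { randomBytes } from "node:crypto";
import { writeFileSync } from "node:fs";

// Generate a 16-byte AES-128 key and the key-info file ffmpeg's HLS muxer expects.
writeFileSync("enc.key", randomBytes(16));
writeFileSync(
  "enc.keyinfo",
  "https://api.example.com/keys/lesson1.key\n" + // key URI the player fetches (protect this endpoint)
    "enc.key\n"                                  // local path to the key file used for encryption
);

// Repackage the MP4 into AES-128 encrypted HLS segments.
execFileSync("ffmpeg", [
  "-i", "lesson1.mp4",          // hypothetical source video
  "-c", "copy",
  "-f", "hls",
  "-hls_time", "6",
  "-hls_playlist_type", "vod",
  "-hls_key_info_file", "enc.keyinfo",
  "index.m3u8",
]);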
I have an Android application (client), an ASP.NET Web API web server (server), and a Windows Azure Media Services (WAMS) account.
What I want: to upload a 3-30 second video from the client to the server and have it encoded with WAMS, then available for streaming via HLSv3, as quickly as possible. Ideally a video preview image would be generated as well. "As fast as possible" means something like a sub-one-minute turnaround. That's likely not realistic, I realize, but the faster the better.
Where I'm at: we upload the video to the server as a stream, which then stores it in Azure blob storage. The server returns to the client indicating upload success. The server has an action that kicks off the encoding, which then gets called. I run a custom encoding task based on the H264 Adaptive Bitrate MP4 Set 720p preset, modified to take a 640x480 video and crop it to 480x480 at the same time as encoding. Then I run a thumbnail job that generates one thumbnail at 480x480. Depending on the reserved encoder quality this can take ~2 to ~5 minutes; the encoding job itself is only 30-60 seconds of that, and the rest is a mix of queue time, publishing time, and communication delay.
What can I do to improve the client-upload-to-video-streamable turnaround time? Where are the bottlenecks in the encoding process? Is there a reasonable maximum speed that can be achieved? Are there config settings that can be tweaked to improve performance?
Reduce the number of jobs
The first thing that springs to mind is given you're only interested in a single thumbnail, you should be able to consolidate your encode and thumbnail jobs by adding something like this to the MediaFile element of your encode preset:
<MediaFile ThumbnailTime="00:00:00"
ThumbnailMode="BestFrame"
ThumbnailJpegCompression="75"
ThumbnailCodec="Jpeg"
ThumbnailSize="480, 480"
ThumbnailEmbed="False">
The thumbnail will end up in the output asset along with your video streams.
Reduce the number of presets in the task
Another thing to consider is that the preset that you linked to has multiple presets defined within it in order to produce audio streams at different bitrates. My current understanding is that each of these presets is processed sequentially by the encode unit.
The first preset defines the video streams, and also specifies that each video stream should have the audio muxed in at 96kbps. This means that your video files will be larger than they probably need to be, and some time will be taken up in the muxing process.
The second and third presets just define the audio streams to output - these wouldn't contain any video. The first of these outputs the audio at 96kbps, the second at 56kbps.
Assuming you're happy with a fixed audio quality of 96kbps, I would suggest removing the audio from the video streams and the last of the audio streams (56kbps) - that would save the same audio stream being encoded twice, and audio being muxed in with the video. (Given what I can tell from your usage, you probably don't need that anyway)
The side benefit of this would be that your encoder output file size will go down marginally, and hence the cost of encodes will too.
Workflow optimisation
The only other point I would make is regarding the workflow by which you get your video files into Azure in the first place. You say that you're uploading them into blob storage - I assume that you're subsequently copying them into an AMS asset so they can be configured as inputs for the job. If that's right, you may save a bit of time by uploading directly into an asset.
Hope that helps, and good luck!