How can I stream and have the video available to download using Azure Media Services?

I need to stream a TV signal (I have the rights) using Azure Media Services. At the same time, I need the content to be accessible as a video that can be downloaded, at least in part. But how can I access part of this continuous video? I thought an encoding job was the right tool, but I can't find a way. Is there any way to do it?

Solution 1: Use FFmpeg to download any Azure Media Services video or live stream.
For this you need FFmpeg installed; it works whether you are on Windows, Linux, or macOS.
Download the latest FFmpeg here: https://ffmpeg.org/download.html
You also need the Azure Media Services smooth streaming URL of the video you are watching. Typically, this URL ends with 'manifest'.
Example (illustrative): https://<account>.streaming.media.azure.net/<locator-id>/video.ism/manifest
For a step-by-step procedure to download a video or a live stream, refer to this post: https://anduin.aiursoft.com/post/2020/5/15/download-any-azure-media-service-video-or-live-stream-with-ffmpeg
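As a minimal Node.js/TypeScript sketch of that approach (the manifest URL is a placeholder, and it assumes ffmpeg is on your PATH; appending (format=m3u8-aapl) asks the streaming endpoint for an HLS rendition that ffmpeg's HLS demuxer can read):

```typescript
// download-ams-stream.ts - download an AMS video or live stream with ffmpeg.
// Assumes ffmpeg is on PATH; the manifest URL below is a placeholder.
import { spawn } from "node:child_process";

const manifestUrl =
  "https://<account>.streaming.media.azure.net/<locator-id>/video.ism/manifest";

// Appending (format=m3u8-aapl) asks the streaming endpoint to repackage the
// stream as HLS, which ffmpeg's HLS demuxer understands.
const hlsUrl = `${manifestUrl}(format=m3u8-aapl)`;

// -c copy remuxes the already-encoded audio and video into an MP4 file
// without re-encoding, so the saved copy is fast to produce and lossless.
const ffmpeg = spawn("ffmpeg", ["-i", hlsUrl, "-c", "copy", "output.mp4"], {
  stdio: "inherit",
});

ffmpeg.on("close", (code) => console.log(`ffmpeg exited with code ${code}`));
```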
Solution 2: A live event can be set to either pass-through (an on-premises live encoder sends a multiple-bitrate stream) or live encoding (an on-premises live encoder sends a single-bitrate stream). For details about live streaming in Media Services v3, see Live events and live outputs.
Pass-through:
When using the pass-through Live Event, you rely on your on-premises live encoder to generate a multiple bitrate video stream and send that as the contribution feed to the Live Event (using RTMP or fragmented-MP4 input protocol). The Live Event then carries through the incoming video streams to the dynamic packager (Streaming Endpoint) without any further transcoding.
Live Encoding:
When using cloud encoding with Media Services, you would configure your on-premises live encoder to send a single bitrate video as the contribution feed (up to 32Mbps aggregate) to the Live Event (using RTMP or fragmented-MP4 input protocol). The Live Event transcodes the incoming single bitrate stream into multiple bitrate video streams at varying resolutions to improve delivery and makes it available for delivery to playback devices via industry standard protocols like MPEG-DASH, Apple HTTP Live Streaming (HLS), and Microsoft Smooth Streaming.
For more details, refer to this document.
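As a rough sketch of how the two modes differ at creation time, here is what a Live Event definition might look like with the @azure/arm-mediaservices v3 management SDK. The resource names are placeholders, and the exact encodingType enum spellings vary across SDK/API versions:

```typescript
// create-live-event.ts - sketch: pass-through vs. live encoding at creation time.
// Placeholders throughout; enum spellings differ across SDK/API versions.
import { AzureMediaServices } from "@azure/arm-mediaservices";
import { DefaultAzureCredential } from "@azure/identity";

const client = new AzureMediaServices(new DefaultAzureCredential(), "<subscription-id>");

async function createLiveEvent(passThrough: boolean): Promise<void> {
  await client.liveEvents.beginCreateAndWait("<resource-group>", "<account>", "myLiveEvent", {
    location: "West Europe",
    // The contribution feed from the on-premises encoder arrives over RTMP.
    input: { streamingProtocol: "RTMP" },
    encoding: {
      // Pass-through: the multi-bitrate contribution feed is carried through
      // untouched. Standard: the cloud transcodes a single-bitrate feed into
      // a multi-bitrate ladder.
      encodingType: passThrough ? "PassthroughStandard" : "Standard",
    },
  });
}
```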

Related

Select different audio stream in Azure Communication Services

I am researching a scenario where I set up a videoconference call app (like https://github.com/Azure-Samples/communication-services-web-calling-hero) with Azure Communication Services (ACS) and substitute the incoming audio stream with a different incoming audio stream (for example, a translated English stream). In Azure Media Services this is possible, but it lacks the videoconference functionality.
I cannot find any documentation on how to handle multiple audio streams for a single incoming video signal. Is this already possible in the current preview, or should I switch to AWS Chime or Jitsi?

Real-Time Audio Encoding With Azure Media Services

Is it possible to get real-time audio encoding using Azure Media Services? We have an ASP.NET MVC C# site where we want to allow users to upload an audio file and then immediately play it back using a standard HTML5 audio tag.
I know I can upload the audio asset to Azure and then ask it to encode the file into an MP3 so that it can be played using the audio tag, but there may be too much delay in that process. Is there a way to upload the asset and then ask Azure for an MP3 stream that it encodes in real time, so that I can play it to the user immediately after the upload completes?
If it cannot be done with azure is there a different service that offers that capability?
Currently, we do not provide a real-time transcoding option where the playback request triggers a real-time transcode.
An option for you may be to run ffmpeg directly in an Azure Function.
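A minimal sketch of that idea with the Node.js v4 programming model for Azure Functions: the uploaded WAV body is piped through an ffmpeg child process and returned as MP3. It assumes an ffmpeg binary is available on the function host; the function name and everything else here is illustrative:

```typescript
// transcode-function.ts - sketch of "ffmpeg in an Azure Function" (Node.js v4 model).
// Assumes an ffmpeg binary is available on the function host.
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";
import { spawn } from "node:child_process";

app.http("transcodeToMp3", {
  methods: ["POST"],
  authLevel: "function",
  handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
    // Read the uploaded WAV file from the request body.
    const wav = Buffer.from(await request.arrayBuffer());
    context.log(`Transcoding ${wav.length} bytes of WAV to MP3`);

    // Pipe it through ffmpeg: WAV on stdin, MP3 on stdout (stderr discarded
    // so an unread pipe cannot stall the process).
    const ffmpeg = spawn("ffmpeg", ["-i", "pipe:0", "-f", "mp3", "pipe:1"], {
      stdio: ["pipe", "pipe", "ignore"],
    });
    ffmpeg.stdin.end(wav);

    const chunks: Buffer[] = [];
    for await (const chunk of ffmpeg.stdout) {
      chunks.push(chunk as Buffer);
    }

    return {
      status: 200,
      headers: { "Content-Type": "audio/mpeg" },
      body: Buffer.concat(chunks),
    };
  },
});
```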

Multiplexing several ingested audio streams and one video stream with Azure Media Services

We are looking into developing a web application that streams video from one source and audio from several locations. In the future, we may consider streaming video from multiple locations also.
The content will be delivered to multiple clients and must therefore be packaged as one output stream. From what we could find in the Media Services docs, there is no built-in way to multiplex the incoming streams into one output stream for delivery to users.
How could one multiplex several AV sources with Azure Media Services? Please note that "locations" is used deliberately to signify that the AV sources will be in different physical locations, so the multiplexing cannot be done locally on one computer.
Azure Media Services supports Adobe RTMP and Microsoft Smooth (fMP4) ingest. The Microsoft Smooth protocol will allow you to send independent streams containing video or audio, that are synchronized by timestamp. RTMP will support multiple audio tracks, but I don't believe multiple video tracks are supported.
When you create a Channel for Smooth (fMP4) ingest, you will have access to an ingest endpoint to which you can send media from multiple endpoints, for example: http://domain/ingest.isml/Streams(video_camera_angle1), .../Streams(video_camera_angle2), .../Streams(audio_en), .../Streams(audio_sp), .../Streams(audio_fr).
Azure Media Services supports 4 egress protocols: Apple HLS, Adobe HDS, Microsoft Smooth, and MPEG-DASH. All of them support multiple audio tracks. Today, I believe only Microsoft Smooth and Apple HLS support multiple video tracks.

Do Windows Azure live media encoders provide live transcoding?

I have a simple question: I want to stream live video + audio. I would like to use Windows Azure for that, mainly because it seems to provide HLS with AES protection (which I have not encountered in open-source solutions) and pricing per streaming user that is clear to managers. But I am troubled by the following quote:
Currently, Media Services does not provide a live transcoding service. You can use one of the following third-party live encoders that output RTMP or Smooth Streaming formats: Elemental, Envivio, Cisco, and RGB encoders output Smooth Streaming; Adobe Flash Live, Wirecast, and Teradek encoders output RTMP.
And a few lines after
You can deliver your live stream in any of the following formats: Smooth Streaming, DASH and HLS. When doing live streaming, HLS is packaged dynamically, and the default HLS packaging ratio is 3 Smooth fragments to 1 HLS segment (3:1).
...
Configure a live transcoder.
Every time you reconfigure the transcoder, call the Reset method on the channel.
So no transcoding is provided, yet I am supposed to set up a transcoder... What? How?
In FFmpeg there are two types of transcoding:
1) from one encoded data format to another (say, raw PCM data to encoded MP3 frames)
2) from one frame/packet container to another (say, MP4 frames of already-encoded audio/video repacked into FLV frames with the same encoded data in them)
Do they mean that they provide frame repacking from RTMP to HLS, but no live encoding into another compression type (say, from Speex audio to AAC)?
As I answered on your other post, you can use a tool like Wirecast 6 to encode your live stream and push it to the Azure ingest URL. We will give you a publish URL that can dynamically package the content into HLS, Smooth Streaming, and DASH.
For more information, please refer to this post: http://azure.microsoft.com/blog/2014/09/10/getting-started-with-live-streaming-using-the-azure-management-portal/
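If you prefer a command-line encoder over Wirecast, ffmpeg can push an RTMP contribution feed as well. A sketch, with the ingest URL as a placeholder:

```typescript
// push-rtmp.ts - push a local file as a live RTMP contribution feed to a
// (placeholder) Azure ingest URL, as an alternative to a commercial encoder.
import { spawn } from "node:child_process";

const ingestUrl = "rtmp://<channel-ingest-host>:1935/live/<stream-key>";

// -re reads the input at its native frame rate to simulate a live source;
// H.264 video + AAC audio in an FLV wrapper is what RTMP expects.
spawn(
  "ffmpeg",
  ["-re", "-i", "input.mp4", "-c:v", "libx264", "-c:a", "aac", "-f", "flv", ingestUrl],
  { stdio: "inherit" }
);
```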
Yes. The second type of transcoding you describe is better named transpackaging, because no video coding is done.
Transcoding is not provided; transpackaging is provided.
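In ffmpeg terms, the difference between the two looks like this (a small illustrative sketch):

```typescript
// remux-vs-transcode.ts - the two operations, expressed as ffmpeg invocations.
import { spawn } from "node:child_process";

// Transpackaging (remuxing): compressed frames are copied unchanged into a
// new container; fast and lossless, but the codecs themselves do not change.
const transpackage = ["-i", "input.mp4", "-c", "copy", "output.flv"];

// Transcoding: streams are decoded and re-encoded into different codecs
// (e.g. audio to AAC); CPU-intensive and lossy, but changes the compression.
const transcode = ["-i", "input.mp4", "-c:v", "libx264", "-c:a", "aac", "output2.flv"];

// Run one of them; swap in `transcode` to re-encode instead of remux.
spawn("ffmpeg", transpackage, { stdio: "inherit" });
```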

Stream recorded audio from browser to server

I would like to live stream recorded audio from the browser to the server and play it. The server will end up being an embedded device that plays these audio streams.
So far I've successfully recorded audio, encoded it into a WAVE file, and played it in the browser using the Web Audio API, following this tutorial.
Now I have a stream of WAV-encoded blobs. I tried finding ways to stream these to a Node.js backend over a WebSocket connection and play them using an npm module, but I haven't had any luck.
Does anyone know of any resources or modules I should look into? Maybe I should try a different approach? The audio needs to be played on the server relatively soon after it is recorded in the browser.
I'm currently doing this with some software that allows streaming to internet radio servers from your web browser.
I use the Web Audio API along with getUserMedia to get live PCM audio data from the sound device. From there, I convert the data from 32-bit float to 16-, 12-, or 8-bit samples depending on the amount of bandwidth available. These converted integer samples are written to a stream set up with BinaryJS, which wraps streams on both the Node.js side and the client. As a bonus, with BinaryJS you can have as many streams open as you want, so I use a second stream over the same WebSocket connection for control data.
http://demo.audiopump.co:3000/
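For reference, here is a browser-side sketch of the capture-and-downconvert step described above, using a plain WebSocket in place of BinaryJS; the server URL is a placeholder:

```typescript
// capture-and-send.ts - capture PCM with getUserMedia + Web Audio, downconvert
// 32-bit float samples to 16-bit integers, and send them over a WebSocket.
const socket = new WebSocket("ws://localhost:8080/audio");
socket.binaryType = "arraybuffer";

// Call start() from a click handler; browsers require a user gesture
// before audio capture can begin.
async function start(): Promise<void> {
  const media = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(media);

  // ScriptProcessorNode is deprecated in favour of AudioWorklet, but it keeps
  // this sketch short: 4096-sample buffers, mono in, mono out.
  const processor = ctx.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (event) => {
    const float32 = event.inputBuffer.getChannelData(0);
    // Scale [-1, 1] floats to 16-bit signed integers.
    const int16 = new Int16Array(float32.length);
    for (let i = 0; i < float32.length; i++) {
      const s = Math.max(-1, Math.min(1, float32[i]));
      int16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
    }
    if (socket.readyState === WebSocket.OPEN) {
      socket.send(int16.buffer);
    }
  };

  source.connect(processor);
  processor.connect(ctx.destination);
}
```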
