Real-Time Audio Encoding With Azure Media Services

Is it possible to get real-time audio encoding using Azure Media Services? We have an ASP.NET MVC C# site where we want to allow our users to upload an audio file and then immediately play that audio file back using a standard HTML5 audio tag.
I know I can upload the audio asset to Azure and then ask it to encode it into an MP3 file so that it can be played using the audio tag, but there may be too much of a delay in that process. Is there a way to upload the asset and then ask Azure for an MP3 stream that it would encode in real time, so that I can play it to the user immediately after the upload completes?
If it cannot be done with Azure, is there a different service that offers that capability?

Currently, we do not provide a real-time transcoding option where the playback request triggers a real-time transcode.
An option for you may be to run ffmpeg directly in an Azure Function.
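Not an official sample, but a minimal sketch of the "ffmpeg in an Azure Function" idea, assuming the in-process C# model with Azure Storage blob bindings and an ffmpeg binary deployed with the function app (or on its PATH); the container names and the 128k bitrate are placeholders:

```csharp
using System;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TranscodeToMp3
{
    [FunctionName("TranscodeToMp3")]
    public static async Task Run(
        [BlobTrigger("uploads/{name}")] Stream input, string name,
        [Blob("encoded/{name}.mp3", FileAccess.Write)] Stream output,
        ILogger log)
    {
        // Stage the uploaded blob as a temp file so ffmpeg can read it.
        var inPath = Path.Combine(Path.GetTempPath(), name);
        var outPath = inPath + ".mp3";
        using (var fs = File.Create(inPath))
            await input.CopyToAsync(fs);

        // Run ffmpeg (assumed to be deployed with the app or available on the PATH).
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            Arguments = $"-y -i \"{inPath}\" -codec:a libmp3lame -b:a 128k \"{outPath}\"",
            UseShellExecute = false,
            RedirectStandardError = true
        };
        using (var proc = Process.Start(psi))
        {
            var stderr = await proc.StandardError.ReadToEndAsync();
            proc.WaitForExit();
            if (proc.ExitCode != 0)
                throw new InvalidOperationException($"ffmpeg failed: {stderr}");
        }

        // Write the MP3 back to blob storage, ready for the HTML5 <audio> tag.
        using (var fs = File.OpenRead(outPath))
            await fs.CopyToAsync(output);
        log.LogInformation($"Encoded {name} to MP3.");
    }
}
```

Once the function completes, the MP3 blob can be served straight from storage (or via a CDN) to the audio element, so the delay is roughly the length of a single ffmpeg run.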

Related

How can I stream and have the video available to download using Azure Media Services

I need to stream a TV signal (I have the rights) using Azure Media Services, and at the same time I need it to be available as a video that can be accessed and downloaded, at least in part. How can I access part of this continuous video? I thought a job encoder was the tool, but I can't find a way. Is there any way to do it?
Solution 1: Use FFmpeg to download any Azure Media Services video or live stream.
For this you need to have FFmpeg installed; it does not matter whether you are using Windows, Linux, or macOS.
Download latest FFmpeg here: https://ffmpeg.org/download.html
You also need the Azure Media Services Smooth Streaming URL of the video you are watching. Typically, this URL ends with 'manifest', for example: https://<your-streaming-endpoint>/<locator-id>/<asset-name>.ism/manifest
Refer to this document for the step-by-step procedure to download a video or live stream:
1) https://anduin.aiursoft.com/post/2020/5/15/download-any-azure-media-service-video-or-live-stream-with-ffmpeg
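As a rough illustration of Solution 1 from a C# point of view, the sketch below shells out to FFmpeg to copy a stream into a local MP4; the manifest URL is a placeholder, and appending "(format=m3u8-aapl)" asks the streaming endpoint for an HLS manifest that FFmpeg can read directly:

```csharp
using System.Diagnostics;

class DownloadStream
{
    static void Main()
    {
        // Placeholder Smooth Streaming manifest URL with the HLS format suffix appended.
        var manifestUrl = "https://<your-streaming-endpoint>/<locator-id>/<asset>.ism/manifest(format=m3u8-aapl)";

        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            // -c copy keeps the original audio/video tracks without re-encoding.
            Arguments = $"-i \"{manifestUrl}\" -c copy output.mp4",
            UseShellExecute = false
        };
        Process.Start(psi)?.WaitForExit();
    }
}
```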
Solution 2: A live event can be set to either pass-through (an on-premises live encoder sends a multiple-bitrate stream) or live encoding (an on-premises live encoder sends a single-bitrate stream). For details about live streaming in Media Services v3, see Live events and live outputs.
Live Event:
When using the pass-through Live Event, you rely on your on-premises live encoder to generate a multiple bitrate video stream and send that as the contribution feed to the Live Event (using RTMP or fragmented-MP4 input protocol). The Live Event then carries through the incoming video streams to the dynamic packager (Streaming Endpoint) without any further transcoding.
Live Encoding:
When using cloud encoding with Media Services, you would configure your on-premises live encoder to send a single bitrate video as the contribution feed (up to 32Mbps aggregate) to the Live Event (using RTMP or fragmented-MP4 input protocol). The Live Event transcodes the incoming single bitrate stream into multiple bitrate video streams at varying resolutions to improve delivery and makes it available for delivery to playback devices via industry standard protocols like MPEG-DASH, Apple HTTP Live Streaming (HLS), and Microsoft Smooth Streaming.
For more details, refer to the Live events and live outputs documentation.
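Not part of the original answer, but a sketch of how the pass-through vs. live-encoding choice surfaces when creating a live event with the Media Services v3 .NET SDK (Microsoft.Azure.Management.Media); the resource group, account, region, and event names are placeholders, and authentication setup is omitted:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Management.Media;
using Microsoft.Azure.Management.Media.Models;

class LiveEventSample
{
    static async Task CreateLiveEventAsync(IAzureMediaServicesClient client)
    {
        var liveEvent = new LiveEvent(
            location: "West Europe",
            input: new LiveEventInput(streamingProtocol: LiveEventInputProtocol.RTMP),
            // LiveEventEncodingType.None = pass-through (the on-premises encoder sends multiple bitrates).
            // LiveEventEncodingType.Standard = cloud live encoding of a single-bitrate contribution feed.
            encoding: new LiveEventEncoding(encodingType: LiveEventEncodingType.None));

        await client.LiveEvents.CreateAsync(
            "myResourceGroup", "myMediaAccount", "myLiveEvent", liveEvent, autoStart: false);
    }
}
```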

Select a different audio stream in Azure Communication Services

I am researching a scenario where I set up a videoconference call app (like https://github.com/Azure-Samples/communication-services-web-calling-hero) with Azure Communication Services (ACS) and substitute the incoming audio stream with a different incoming audio stream (for example, a translated English stream). In Azure Media Services this is possible, but it lacks the videoconference functionality.
I cannot find any documentation on how to handle multiple audio streams for a single incoming video signal. Is this already possible in the current preview? Or should I switch to AWS Chime or Jitsi?

Azure Media Services for transcoding and delivering audio

I have a common use-case scenario where I want to do the following:
Upload an audio file (WAV/MP3).
Transcode it to 128k or 192k MP3.
Store the audio asset.
Allow the audio asset to be streamed.
Support streaming actions such as play, pause, and seek.
The documentation for Azure Media Services suggests it might be able to support this, but I am not too sure; it seems to focus on video content. Does anyone have experience with this?
You can manage audio and encode audio-only assets with Azure Media Services.
WAV is a supported input format/container for an input asset. To see the full list of supported formats, check the following link:
https://azure.microsoft.com/en-us/documentation/articles/media-services-media-encoder-standard-formats/
Check https://github.com/Azure/azure-content/blob/master/articles/media-services/media-services-custom-mes-presets-with-dotnet.md#audio_only for the audio-only preset options you can use to encode an audio-only output.
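As a rough companion to the linked article, here is a sketch of submitting an audio-only encode job with the legacy Media Services v2 .NET SDK; the preset file name ("AudioOnlyPreset.json") and asset names are placeholders, and the preset JSON itself would come from the audio-only section of that article:

```csharp
using System;
using System.IO;
using System.Linq;
using System.Threading;
using Microsoft.WindowsAzure.MediaServices.Client;

class AudioEncodeSample
{
    static void EncodeAudioOnly(CloudMediaContext context, IAsset inputAsset)
    {
        // Pick the latest Media Encoder Standard processor.
        IMediaProcessor processor = context.MediaProcessors
            .Where(p => p.Name == "Media Encoder Standard")
            .ToList()
            .OrderBy(p => new Version(p.Version))
            .Last();

        IJob job = context.Jobs.Create("Audio-only encode");

        // "AudioOnlyPreset.json" would hold one of the audio-only presets from the linked doc.
        ITask task = job.Tasks.AddNew("Audio encode task",
            processor,
            File.ReadAllText("AudioOnlyPreset.json"),
            TaskOptions.None);

        task.InputAssets.Add(inputAsset);
        task.OutputAssets.AddNew("Audio-only output", AssetCreationOptions.None);

        job.Submit();
        job.GetExecutionProgressTask(CancellationToken.None).Wait();
    }
}
```

The encoded output asset can then be published with a streaming locator so the player gets play, pause, and seek over standard streaming protocols.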

How to stream an audio MP3 file on the web

We all know about gaana.com and saavn.com; those websites stream MP3 audio files to the client side but don't allow users to grab the audio files. We would like to know what technology they use to stream the MP3 files.
Are they using a streaming server or something else?
Can you describe the technology they use to stream the audio files?
We are also creating a web app where audio files will be streamed to the client side, and like gaana.com and saavn.com, we don't want to allow users to download our MP3 files.
We are also curious: if we want to stream our MP3 files in three different qualities, what should we do? Should we convert all the MP3 files into the three different qualities and upload them to the server, or does another solution exist for this purpose?
If you want to run your own streaming server, you can use DeeFuzzer (https://pypi.python.org/pypi/DeeFuzzer/), a Python-based streaming server, or you can also use ffmpeg or even VLC.
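For the "three different qualities" part of the question, one common approach is simply to pre-encode each file at several bitrates and let the player or server pick one; a rough C# sketch that shells out to ffmpeg, with arbitrary file names and bitrates:

```csharp
using System.Diagnostics;

class MakeRenditions
{
    static void Main()
    {
        var source = "track.mp3"; // placeholder input file

        // Produce one MP3 per target bitrate, e.g. track_64k.mp3, track_128k.mp3, track_192k.mp3.
        foreach (var bitrate in new[] { "64k", "128k", "192k" })
        {
            var psi = new ProcessStartInfo
            {
                FileName = "ffmpeg",
                Arguments = $"-y -i \"{source}\" -codec:a libmp3lame -b:a {bitrate} \"track_{bitrate}.mp3\"",
                UseShellExecute = false
            };
            Process.Start(psi)?.WaitForExit();
        }
    }
}
```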

Multiplexing several ingested audio streams and one video stream with Azure Media Services

We are looking into developing a web application that streams video from one source and audio from several locations. In the future, we may consider streaming video from multiple locations also.
The content will be delivered to multiple clients and must thus be packaged as one output stream. From what we could find in the Media Services docs, there is no built-in way to multiplex the incoming streams into one output stream to be delivered to users.
How could one multiplex several AV sources with Azure Media Services? Please note that "locations" was used deliberately to signify that the AV sources will be in different physical locations and, as such, the multiplexing cannot be done locally on one computer.
Azure Media Services supports Adobe RTMP and Microsoft Smooth (fMP4) ingest. The Microsoft Smooth protocol will allow you to send independent streams containing video or audio, that are synchronized by timestamp. RTMP will support multiple audio tracks, but I don't believe multiple video tracks are supported.
When you create a Channel for Smooth (fMP4) ingest, you get an ingest endpoint to which you can send media from multiple endpoints, for example, http://domain/ingest.isml/Streams(video_camera_angle1), .../Streams(video_camera_angle2), .../Streams(audio_en), .../Streams(audio_sp), .../Streams(audio_fr).
Azure Media Services supports 4 egress protocols: Apple HLS, Adobe HDS, Microsoft Smooth, and MPEG-DASH. All of them support multiple audio tracks. Today, I believe only Microsoft Smooth and Apple HLS support multiple video tracks.
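As a small, hypothetical illustration of the per-track publishing points described above (the base ingest URL and track names are placeholders):

```csharp
using System;

class IngestUrls
{
    static void Main()
    {
        // Each physical location pushes its own stream to the same ingest endpoint,
        // distinguished by the Streams(...) name; timestamps keep them synchronized.
        var ingestBase = "http://<your-ingest-domain>/ingest.isml";
        var tracks = new[] { "video_camera_angle1", "video_camera_angle2", "audio_en", "audio_sp", "audio_fr" };

        foreach (var track in tracks)
            Console.WriteLine($"{ingestBase}/Streams({track})");
    }
}
```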
