Azure Communication Services (Calling SDK) - How many video streams are supported?

I am very confused about the Calling SDK specs. They are clear about the fact that only one video stream can be rendered at a time, see here...
BUT when I try out the following sample, I get video streams for all members of the group call. When I try the other example (both from MS), it behaves as written in the specs... So I am totally confused: why can this other example render more than one video stream in parallel? Can anybody tell me how to understand this? Is it possible or not?
EDIT: I found out that both examples work with multiple video streams. So it is cool that the service provides more than the specs say, but I do not get why the specs describe a limitation that does not exist...

Only one video stream is officially supported by the ACS Web (JS) Calling SDK. Multiple video streams can be rendered for incoming calls, but A/V quality is not guaranteed at this stage for more than one video. Support for 4 (2x2) and 9 (3x3) streams is on the roadmap, and we'll publish support as network bandwidth paired with quality assurance testing and verification is identified and completed.
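For anyone trying to reproduce this, the pattern below is roughly what puts several participants on screen at once. It is a minimal sketch assuming the public @azure/communication-calling browser SDK (CallClient, VideoStreamRenderer); the token, group ID, and container element are placeholders:

```javascript
// Sketch: render a video element for every remote participant in a group call.
// Assumes the public @azure/communication-calling browser SDK; the token,
// group ID, and container element are placeholders.
const { CallClient, VideoStreamRenderer } = require('@azure/communication-calling');
const { AzureCommunicationTokenCredential } = require('@azure/communication-common');

async function joinAndRenderAll(token, groupId, container) {
  const callClient = new CallClient();
  const callAgent = await callClient.createCallAgent(
    new AzureCommunicationTokenCredential(token)
  );
  const call = callAgent.join({ groupId });

  call.on('remoteParticipantsUpdated', ({ added }) => {
    added.forEach((participant) => {
      participant.on('videoStreamsUpdated', ({ added: streams }) => {
        streams.forEach(async (stream) => {
          // One renderer per remote stream is what shows several videos
          // in parallel, even though only one is officially supported.
          const renderer = new VideoStreamRenderer(stream);
          const view = await renderer.createView();
          container.appendChild(view.target);
        });
      });
    });
  });
}
```

Whether the additional streams keep acceptable quality is exactly the part the specs do not guarantee.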

Related

Getting insights from Azure Video Analyzer using an IP camera (no IoT Edge capability) live stream

I want to use an IP camera without IoT Edge support to live stream video footage to Azure, and I want to get insights from the video using Azure Video Analyzer for Media (aka Video Indexer).
I have come across 2 possible ways to achieve this in Azure:
I came across the LiveStreamAnalysis GitHub repo, but the functions are not getting deployed because it uses an older version of Media Services (v2). I read about the newer version of Media Services but didn't find a live stream sample to start with.
I found the Video Analyzer (preview) documentation, but I am only able to stream and record the live stream using a simulated IP camera live stream.
I want to do further analysis on the video using the Video Indexer APIs, but I didn't find any way to achieve that with the 2nd approach. It is only explained using IoT Edge device pipelines and workflows.
How can I achieve this?
Thank you for bringing (1) to our attention.
I reached out to the relevant contact.
There is no other built-in integration; your (2) option uses Azure Video Analyzer (not for Media), which is a different service.
The most promising path at the moment is (1), pending a fix.
Regarding (1), I am the main author of the Live Stream Analysis sample, and it is true that the functions need to be updated to use the AMS v3 API, and the Logic Apps need to be updated too to use these new functions. I started the work to build the new functions, but the work is not complete. They will be posted to https://github.com/Azure-Samples/media-services-v3-dotnet-core-functions-integration/tree/main/Functions
You can see that there is a SubmitSubclipJob, which will be key for the sample.

Automatically convert any given video to a format that can be posted on Instagram

Imagine I am given a video in any arbitrary format; what is the optimal way to convert it to a format that can be posted on Instagram?
The video format must conform to this
Instagram Video Specifications
I'm looking at FFmpeg, but I felt I should ask first about existing solutions in the NodeJS community before re-inventing the wheel.
My tech stack is NodeJS.
Please note that I have attempted to search for solutions with no results.
Any ideas on this would be truly appreciated.
TL;DR: Yes, command-line FFmpeg would be my preferred option. But run it as a background worker.
This is what I would do:
For each video required, post the source URL to a queue (Azure Service Bus, RabbitMQ, etc...)
Create a background worker process (Docker container?) to pop each message off
Run the file through FFMPEG
Upload the result to Instagram
If there are no messages, just wait patiently in the background worker.
Generally it's a good idea to separate front-end APIs from back-end workers so they can scale independently.
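Here is a minimal sketch of the worker side, assuming RabbitMQ via the amqplib package and ffmpeg on the PATH. The queue name, message shape, and output settings (H.264/AAC MP4 scaled to 1080px wide) are illustrative, not Instagram's authoritative spec, so check their current requirements:

```javascript
// Background worker: pop a job, transcode with FFmpeg, acknowledge.
// Assumes RabbitMQ (amqplib) and ffmpeg on PATH; queue name, message shape,
// and output settings are illustrative, not Instagram's official spec.
const amqp = require('amqplib');
const { execFile } = require('child_process');

function transcode(inputUrl, outputPath) {
  return new Promise((resolve, reject) => {
    execFile('ffmpeg', [
      '-y', '-i', inputUrl,
      '-c:v', 'libx264', '-profile:v', 'high', '-pix_fmt', 'yuv420p',
      '-vf', 'scale=1080:-2',        // fit width, keep aspect, even height
      '-c:a', 'aac', '-b:a', '128k',
      '-movflags', '+faststart',     // web-friendly MP4 layout
      outputPath,
    ], (err) => (err ? reject(err) : resolve(outputPath)));
  });
}

async function runWorker() {
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();
  await channel.assertQueue('video-jobs');
  channel.consume('video-jobs', async (msg) => {
    const { sourceUrl, outputPath } = JSON.parse(msg.content.toString());
    try {
      await transcode(sourceUrl, outputPath);
      // Uploading the result to Instagram would happen here.
      channel.ack(msg);
    } catch (err) {
      channel.nack(msg, false, false); // drop/dead-letter the failed job
    }
  });
}

runWorker().catch(console.error);
```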

How do online radios live stream music, and are there resources to build one with Node.js?

I'm a little curious about how live streaming web applications work. Recently I have wanted to build something like an online radio that can live stream to all clients: music, speech, etc. I'm quite familiar with Java Spring MVC and Node.js. If there are resources using these technologies, it would be really helpful for me to see how this works. Thanks in advance.
There are two good articles about it:
Streaming Audio on the Web with NodeJS
Using NodeJS to Stream a Radio Broadcast
You may also find this module helpful:
https://www.npmjs.com/package/websockets-streaming-audio
The best way to do this is to use Node.js as your source application, and leave the actual serving of streams to existing servers. There is no reason to re-invent streaming on the web if you can get all the flexibility you need by writing the source end.
The flow will look like this:
Your Radio Source App --> Icecast (or similar) --> Listeners
Inside your app itself:
Raw audio sources --> Codecs (MP3, AAC w/ADTS, etc.) --> Icecast Source Client
Basically, you'll need to create a raw PCM audio stream using whatever method you want for your use case. From there, you'll send that stream off to a handful of codecs, configured with different bitrates. What bitrate and quality you use is up to you, based on your users' available bandwidth and the quality tradeoff you prefer. These days, I usually have 64k streams for bad mobile connections and 256k streams for good connections. As long as you have at least a 128k stream in there, you'll be putting out acceptable quality.
The Icecast source client can be a simple HTTP PUT these days. The old method is very similar... instead of PUT, the verb was SOURCE. (There are some other minor differences as well, but that's the gist.)
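Here is a minimal sketch of that source-client flow in Node.js, assuming Icecast 2.4+ (which accepts PUT source connections) and ffmpeg on the PATH; the host, mount point, and password are placeholders:

```javascript
// Icecast source client sketch: encode raw PCM with ffmpeg, push the result
// to an Icecast mount over HTTP PUT. Host, mount, and credentials are
// placeholders; assumes Icecast 2.4+ and ffmpeg on PATH.
const http = require('http');
const { spawn } = require('child_process');

// Encode 44.1 kHz stereo 16-bit PCM from stdin to 128k MP3 on stdout.
const encoder = spawn('ffmpeg', [
  '-f', 's16le', '-ar', '44100', '-ac', '2', '-i', 'pipe:0',
  '-c:a', 'libmp3lame', '-b:a', '128k', '-f', 'mp3', 'pipe:1',
]);

const req = http.request({
  method: 'PUT',
  host: 'icecast.example.com',
  port: 8000,
  path: '/radio.mp3', // the mount point
  headers: {
    Authorization: 'Basic ' + Buffer.from('source:hackme').toString('base64'),
    'Content-Type': 'audio/mpeg',
    'Ice-Name': 'My Node Radio',
    Expect: '100-continue',
  },
});

// Stream the encoded audio to Icecast for as long as the source runs.
encoder.stdout.pipe(req);

// Feed your raw PCM source into encoder.stdin, e.g.
// somePcmSource.pipe(encoder.stdin);  // somePcmSource is whatever produces
// your raw audio: a playout engine, a sound card capture, etc.
```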

Stream WebCam using socket.io

I have been trying to implement a web application that will be able to handle the following scenario:
Streaming video/audio from a client to other clients (actually a particular set of them, no broadcasting) and server at the same time. The data source would be a webcam of the client.
This streamed data has to be displayed in real time in the other clients' browsers and be saved on the server side for 'archiving' purposes.
It has to be implemented in node.js + socket.io environment.
To put it in a more specific context... The scenario is that there is a guy who creates a room for the users he chooses. After the chosen users join the room, the creator starts streaming video/audio from his/her built-in devices (webcam). All of the guests receive the data in real time; moreover, the data is sent to the server, where it is stored so it can be recovered after the stream ends and the room is closed.
I was thinking about mixing Socket.IO with WebRTC. In theory the combination of these two seem just perfect for the job.
Socket.IO is great for gathering a specific set of users by assigning sockets to a room, and for the signaling process demanded by WebRTC.
At the same time, WebRTC is awesome for P2P connections between users gathered in the same room; it is also really easy to get access to the webcam and other built-in devices that I might want to use.
So yeah, everything is looking pretty decent in theory but I would really need to see some code in action so I could actually try to implement it on my own. Moreover, I see some issues:
How do I save the stream that is sent over the P2P connection? Obviously the server does not have access to it. I was thinking that I might treat the server as another 'guest', so it would be just another endpoint of the P2P connection with the creator of the room. Somehow it feels edgy, though.
Wouldn't it be better to treat the server as the middleman between the creator and the clients? There might be some, probably insignificant, delay compared to P2P, but presumably it would be the same for all the clients. (I tried that, but I can't get the streaming from the webcam to the server working; that, however, is the topic for a different question, as I am having problems with processing the MediaStream.)
I was looking for some nice solutions, but without any success. I have seen that there is a nice P2P solution made for Socket.IO: http://socket.io/blog/socket-io-p2p/ . The thing is, I don't think it will handle the data stream well. The examples mention only a simple chat app, and I need something a little heavier than that.
I would be really thankful for some specific examples, docs, whatever may lead me a little closer to the implementation of it as I really don't know how to approach it.
Thanks in advance :)
Your task can be solved by using one of the open-source WebRTC servers.
For example, Kurento.
You can implement several streaming schemas:
One to one
One to many
Many to many
(diagram: WebRTC server schema)
Clients connect to each other through the WebRTC server.
So, on the server side you can record the stream, or send it for transcoding.
WebSocket is used for communicating with the server.
You can find examples matching your task.
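For instance, recording the presenter's stream server-side looks roughly like this with the kurento-client npm package. This is a hedged sketch: the Kurento URI, the recording path, and the Socket.IO event names are placeholders, and the ICE candidate exchange is omitted for brevity:

```javascript
// Sketch: record a presenter's WebRTC stream server-side with Kurento,
// using Socket.IO for signaling. Kurento URI, recording path, and event
// names are placeholders; ICE candidate exchange is omitted for brevity.
const kurento = require('kurento-client');
const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  socket.on('start', async (sdpOffer) => {
    const client = await kurento('ws://localhost:8888/kurento');
    const pipeline = await client.create('MediaPipeline');
    const webRtc = await pipeline.create('WebRtcEndpoint');
    const recorder = await pipeline.create('RecorderEndpoint', {
      uri: 'file:///tmp/room-recording.webm',
    });

    // Media flow: browser -> WebRtcEndpoint -> RecorderEndpoint (disk).
    // Viewers would get their own WebRtcEndpoints connected to the same
    // source endpoint (not shown).
    await webRtc.connect(recorder);
    await recorder.record();

    // Standard SDP offer/answer exchange over the socket.
    const sdpAnswer = await webRtc.processOffer(sdpOffer);
    await webRtc.gatherCandidates();
    socket.emit('sdpAnswer', sdpAnswer);
  });
});
```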
Video streaming to multiple users is a really hard problem that unfortunately requires extensive infrastructure. You will not be able to stream video data through a websocket. WebRTC is also not a viable solution for what you are describing because, as you mentioned, the WebRTC protocol is P2P: the streaming user would need to make a direct connection to all the 'viewers'. This obviously will not scale beyond a few 'viewers'. WebRTC is more for direct video calls, as in Skype, for example.
Here is an article describing the architecture used by a somewhat popular live streaming service. As you can see achieving live video at any sort of scale will require considerable resources.

Using nodeJS for streaming videos

I am planning to write a NodeJS server for streaming videos. One of my critical requirements is
to prevent video download (as much as possible), something similar to safaribooksonline.com.
I am planning to use Amazon S3 for storage and NodeJS for streaming the videos to the client.
I want to know if NodeJS is the right tool for streaming videos (max size 100 MB) for an application expecting a lot of users. If not, what are the alternatives?
Let me know if any additional details are required.
In very simple terms, you can't prevent video download. If a rogue client wants to do it, they generally can: the video has to make it to the client for the client to be able to play it back.
What is most commonly done is to encrypt the video so the downloaded version is unplayable without the right decryption key. A DRM system will allow the client to play the video without being able to copy it (depending on how determined the user is; a high quality camera pointed at a high quality screen is hard to protect against! In these scenarios, other tracing technologies come into play).
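As a rough illustration of the encryption idea, here is a hedged sketch of AES-128 encrypted HLS packaging. This is plain encryption, a deterrent rather than a full DRM system; the paths and the key-serving URL are placeholders, and ffmpeg must be on the PATH:

```javascript
// Sketch: package a video as AES-128 encrypted HLS with FFmpeg. This is
// basic encryption, not full DRM, but the downloaded segments are unplayable
// without the key. Paths and the key-serving URL are placeholders.
const { execFileSync } = require('child_process');
const { randomBytes } = require('crypto');
const fs = require('fs');

// 16-byte AES key, plus a key-info file telling ffmpeg where players will
// fetch the key (line 1) and where the key lives locally (line 2).
fs.writeFileSync('enc.key', randomBytes(16));
fs.writeFileSync('enc.keyinfo', 'https://example.com/keys/enc.key\nenc.key\n');

execFileSync('ffmpeg', [
  '-i', 'input.mp4',
  '-c', 'copy',                        // just segment + encrypt, no re-encode
  '-hls_time', '6',                    // ~6 second segments
  '-hls_key_info_file', 'enc.keyinfo',
  '-hls_playlist_type', 'vod',
  'out.m3u8',
]);
```

Serving the key only to authenticated users then becomes your access-control point.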
As others have mentioned in the comments, streaming servers are not simple: they have to handle a wide range of encoders, packaging formats, streaming formats, etc. to reach as many clients as possible, and they have quite complicated mechanisms to ensure speed and reduce file storage requirements.
It might be an idea to look at some open source streaming servers to get a feel for the area, for example:
VideoLan (http://www.videolan.org/vlc/streaming.html)
GStreamer (https://gstreamer.freedesktop.org)
You can still use Node.js for the main web server component of your solution and just hand off the video streaming to specialised streaming engines, if this meets your needs.
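If you do keep some video serving in Node.js itself, the core of it is honoring HTTP Range requests so the browser's video element can seek. Here is a minimal sketch using only core modules; the file path is a placeholder, and in production the heavy lifting would go to a streaming server or CDN as suggested above:

```javascript
// Sketch: serve an MP4 with HTTP Range support so <video> can seek.
// The file path is a placeholder; core modules only.
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  const path = 'video.mp4';
  const { size } = fs.statSync(path);
  const range = req.headers.range;

  if (!range) {
    res.writeHead(200, { 'Content-Type': 'video/mp4', 'Content-Length': size });
    return fs.createReadStream(path).pipe(res);
  }

  // Parse "bytes=start-end"; an omitted end means "to the last byte".
  const [startStr, endStr] = range.replace('bytes=', '').split('-');
  const start = parseInt(startStr, 10);
  const end = endStr ? parseInt(endStr, 10) : size - 1;

  res.writeHead(206, {
    'Content-Range': `bytes ${start}-${end}/${size}`,
    'Accept-Ranges': 'bytes',
    'Content-Length': end - start + 1,
    'Content-Type': 'video/mp4',
  });
  fs.createReadStream(path, { start, end }).pipe(res);
}).listen(8080);
```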
