I need to record a streamed video (say I have a URL) and save the last N minutes of it to Azure.
My guess is that I need to use Azure Media Services for that.
I've already created an Azure media service account.
Could anybody give me a hint where to start?
Update:
I'd prefer to use C#
The stream can be from blob:http://ipcamlive.com/a5fe3312-2a33-4b53-8b83-42af7928abb0 or from any web camera. Currently I'm not sure about the video format.
You haven't really given much info on the type of video or the language you'll use, so it's probably best to start with the Azure Media Services documentation.
Here you can find a tutorial on encoding from an HTTPS source using .NET.
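As a rough idea of what that tutorial boils down to with the v3 .NET SDK (Microsoft.Azure.Management.Media): you submit a job whose input is an HTTPS URL. This is only a sketch; the resource group, account, transform, asset and job names are placeholders, and `client` is assumed to be an already-authenticated IAzureMediaServicesClient (the tutorial covers that setup).

```csharp
// Sketch of submitting an encoding job from an HTTPS source with the AMS v3 .NET SDK.
// All names below are placeholders; `client` is an authenticated IAzureMediaServicesClient.
using Microsoft.Azure.Management.Media;
using Microsoft.Azure.Management.Media.Models;

var input = new JobInputHttp(files: new[]
{
    "https://example.com/path/to/recorded-segment.mp4"   // placeholder source URL
});

Job job = await client.Jobs.CreateAsync(
    "myResourceGroup",
    "myMediaServicesAccount",
    "myEncodingTransform",                        // a Transform created beforehand
    "myEncodingJob",
    new Job
    {
        Input = input,
        Outputs = new JobOutput[]
        {
            new JobOutputAsset("myOutputAsset")   // an output Asset created beforehand
        }
    });
```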
If you can give more info on what you're looking to do, you'll likely get better hints; right now this smells like an XY Problem.
Related
I'm currently working on a module that analyzes statistics for videos from Azure Media Services. I want to ask how I can get data like average viewing time, number of views, and other things like that. I'm pretty sure there has to be a very easy way to get this data, but I cannot find it. I found that Application Insights could be useful, and that I may have to track this information manually. I'm working on .NET 6. An example of code would be awesome. Thanks in advance!
PS: https://github.com/Azure-Samples/media-services-javascript-azure-media-player-application-insights-plugin/blob/master/options.md
I have found that Application Insights could be useful for my problem. Some classes like TelemetryClient (from the Microsoft.ApplicationInsights package) seem useful, but I can't find clear information about them.
No, there is no concept of client side analytics or viewer analytics in Azure Media Services. You have to track and log things on your own on the client side. App Insights is a good solution for this, and there are some older samples out there showing how to do that with a player application.
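Since you're on .NET 6 and already looking at TelemetryClient, here is a minimal sketch of the kind of client-side tracking meant above. The connection string, event name, properties and metric values are all made up for illustration; the watch time has to be measured by your own player/client code.

```csharp
// Minimal sketch: raise your own playback event and push it to Application Insights.
// Connection string, event name, properties and metrics are placeholders.
using System.Collections.Generic;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

var configuration = TelemetryConfiguration.CreateDefault();
configuration.ConnectionString = "<your-application-insights-connection-string>";
var telemetry = new TelemetryClient(configuration);

// Call something like this whenever a viewing session ends.
telemetry.TrackEvent(
    "VideoWatched",                                   // hypothetical event name
    properties: new Dictionary<string, string>
    {
        ["videoId"] = "asset-123",                    // placeholder
        ["player"] = "azure-media-player"
    },
    metrics: new Dictionary<string, double>
    {
        ["watchTimeSeconds"] = 245                    // measured by your client code
    });

telemetry.Flush();                                    // make sure the event is sent
```

Average viewing time and view counts would then be computed from the customEvents table in Application Insights (for example with a Kusto query), since Media Services itself does not record them.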
Take a look at this sample - https://learn.microsoft.com/en-us/samples/azure-samples/media-services-javascript-azure-media-player-application-insights-plugin/media-services-javascript-azure-media-player-application-insights-plugin/
Just a warning: it is very old and probably very out of date. I would not use much of the code from that sample, as it uses SDKs from four years ago. Just use it as high-level guidance for what the architecture might look like.
Another solution would be to look at a 3rd-party service like Mux.com/Data that can plug into any player framework for client analytics.
I want to use an IP camera without IoT Edge support to live stream the video footage to Azure, and I want to get insights from the video using Azure Video Analyzer for Media (aka Video Indexer).
I have come across 2 possible ways to achieve it in Azure:
1. I came across the LiveStreamAnalysis GitHub repo, but the functions are not getting deployed as it uses the older version of Media Services (v2). I read about the newer version of Media Services but didn't find a live stream sample to start with.
2. I found the Video Analyzer (preview) documentation, but I am only able to stream and record the live stream using a simulated IP camera.
I want to do further analysis on the video using the Video Indexer APIs, but I didn't find any way to achieve it with the 2nd approach; it is only explained for IoT Edge device pipelines and workflows.
How can I achieve this?
Thank you for bringing (1) to our attention. I reached out to the relevant contact.
There isn't another built-in integration; your (2) option is using Azure Video Analyzer (not "for Media"), which is a different service.
The most promising path at the moment is (1), pending a fix.
Regarding (1), I am the main author of the Live Stream Analysis sample, and it is true that the functions need to be updated to use the AMS v3 API, and the Logic Apps need to be updated too to use these new functions. I started the work to build the new functions, but it is not complete. They will be posted to https://github.com/Azure-Samples/media-services-v3-dotnet-core-functions-integration/tree/main/Functions
You can see that there is a SubmitSubclipJob which will be key for the sample.
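The rough idea behind it, if it helps while the port is unfinished, is to submit a v3 job whose input is the live archive asset clipped to an absolute time window. A sketch with the v3 .NET SDK follows; the resource group, account, transform and asset names are placeholders, and `client` is assumed to be an already-authenticated IAzureMediaServicesClient.

```csharp
// Sketch of the subclip idea with the v3 API: a job whose input is the live
// archive asset, clipped to a time window. All names are placeholders.
using System;
using Microsoft.Azure.Management.Media;
using Microsoft.Azure.Management.Media.Models;

var input = new JobInputAsset(
    assetName: "liveArchiveAsset",                          // asset recorded by the live output
    start: new AbsoluteClipTime(TimeSpan.FromMinutes(0)),   // clip window start
    end: new AbsoluteClipTime(TimeSpan.FromMinutes(5)));    // clip window end

Job subclipJob = await client.Jobs.CreateAsync(
    "myResourceGroup",
    "myMediaServicesAccount",
    "mySubclipTransform",             // e.g. built on the CopyAllBitrateNonInterleaved preset
    "subclip-job-1",
    new Job
    {
        Input = input,
        Outputs = new JobOutput[] { new JobOutputAsset("subclipOutputAsset") }
    });
```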
I am very confused about the Calling SDK specs. They are clear about the fact that only one video stream can be rendered at a time, see here...
BUT when I try out the following sample, I get video streams for all members of the group call. When I try the other example (both from MS), it behaves as written in the specs... So I am totally confused about why this other example can render more than one video stream in parallel. Can anybody tell me how to understand this? Is it possible or not?
EDIT: I found out that both examples work with multiple video streams. So it is cool that the service provides more than the specs say, but I don't understand why the specs describe a limitation that doesn't seem to exist...
Only one video stream is officially supported in the ACS Web (JS) Calling SDK. Multiple video streams can be rendered for incoming calls, but A/V quality is not guaranteed at this stage for more than one video. Support for 4 (2x2) and 9 (3x3) streams is on the roadmap, and we'll publish support as network bandwidth paired with quality assurance testing and verification is identified and completed.
Context
I have a mobile app that lets our users automatically capture the name plate of our products. For this I use the Azure Cognitive Services OCR service.
I am a bit worried that customers might capture pictures of insufficient quality or of the wrong area of the product (where there is no name plate). To analyse whether this is the case, it would be handy to have a copy of the captured pictures so we can learn what went well and what went wrong.
Question
Is it possible to not only process an uploaded picture but also store it in Azure Storage so that I can analyse it at a later point in time?
What I've tried so far
I configured the diagnostic settings so that logs and metrics are stored in Azure Storage. As the name suggests, these are only logs and metrics, not the actual image, so this does not solve my issue.
Remarks
I know that I can implement that manually in the app, but I think it would be better if I only have to upload the picture once.
I'm aware that there are data protection considerations to take into account.
No, you can't add automatic logging based only on the OCR operation; you have to implement it yourself.
But to avoid uploading the picture twice, as you said, you could put that logic on the server side: send the image to your own API, and in the API, take the image and send it to OCR while storing it in parallel.
But I guess, based on your question, that you might not have any server-side component in your app?
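If you do have (or add) an API, something along these lines could work. This is only a sketch, assuming an ASP.NET Core controller with a BlobContainerClient (Azure.Storage.Blobs) and a ComputerVisionClient (Microsoft.Azure.CognitiveServices.Vision.ComputerVision) registered in DI; the route and blob naming scheme are made up for illustration.

```csharp
// Sketch: receive the picture once, then store it and send it to OCR in parallel.
// Clients, route and naming scheme are assumptions, not your actual setup.
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

[ApiController]
[Route("api/nameplates")]
public class NameplateController : ControllerBase
{
    private readonly BlobContainerClient _container;   // registered in DI at startup
    private readonly IComputerVisionClient _vision;    // registered in DI at startup

    public NameplateController(BlobContainerClient container, IComputerVisionClient vision)
    {
        _container = container;
        _vision = vision;
    }

    [HttpPost]
    public async Task<IActionResult> AnalyzeAsync(IFormFile picture)
    {
        // Buffer the upload once so the blob upload and the OCR call can each
        // read their own copy of the bytes.
        using var buffer = new MemoryStream();
        await picture.CopyToAsync(buffer);
        byte[] bytes = buffer.ToArray();

        // Store the original picture and send it to OCR in parallel.
        Task storeTask = _container.UploadBlobAsync(
            $"{Guid.NewGuid()}.jpg",                    // hypothetical naming scheme
            new MemoryStream(bytes));

        Task<ReadInStreamHeaders> ocrTask = _vision.ReadInStreamAsync(
            new MemoryStream(bytes));                   // Computer Vision "Read" call

        await Task.WhenAll(storeTask, ocrTask);

        // Read is asynchronous on the service side; the text result is fetched
        // later via the operation location returned here.
        return Ok(new { operationLocation = ocrTask.Result.OperationLocation });
    }
}
```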
I am looking for a solution for video chat in Xamarin.Forms with an Azure backend. Azure currently does not support WebRTC, so I plan to create 2 live streaming channels for the users: take one user's camera for one live streaming channel, and do the same for the other user. Before I test this, I want to know whether it will work and whether the performance will be good or bad.
Or can I go with SignalR?
Unfortunately, I think neither Azure Media Services nor SignalR will give you the low latency you need for a live video chat application.
I think your best bet when running on Azure will be to grab a virtual machine and install a third-party product such as:
Kurento
jitsi
Wowza (which I think also offers their product as a SaaS)
Any other product you might find
Hope it helps!