Is it possible to retrieve the full list of default recognized classes of Azure Video Indexer?
Azure Video Analyzer for Media, a.k.a. Video Indexer, supports thousands of class labels for video-frame classification, referred to as Labels. Although the full list is not available online, you can easily infer the classes relevant to your data with a few API calls... Feel free to reach out to customer support at visupport#microsoft.com for additional assistance.
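For example, here is a minimal sketch (assuming the public Video Indexer REST endpoints for account access tokens and the video index; the location, account ID, API key, and video ID are placeholders) that collects the distinct Label names already produced for a video you have indexed:

```python
# Minimal sketch: collect the Labels Video Indexer produced for an already-indexed
# video. Endpoint shapes follow the public Video Indexer REST API; LOCATION,
# ACCOUNT_ID, API_KEY and the video ID are placeholders you must supply.
import requests

LOCATION = "trial"                       # e.g. "trial" or your Azure region
ACCOUNT_ID = "<your-account-id>"         # placeholder
API_KEY = "<your-api-management-key>"    # placeholder (Ocp-Apim-Subscription-Key)
BASE = "https://api.videoindexer.ai"

def get_access_token():
    r = requests.get(
        f"{BASE}/Auth/{LOCATION}/Accounts/{ACCOUNT_ID}/AccessToken",
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        params={"allowEdit": "false"},
    )
    r.raise_for_status()
    return r.json()

def labels_for_video(video_id, token):
    r = requests.get(
        f"{BASE}/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos/{video_id}/Index",
        params={"accessToken": token},
    )
    r.raise_for_status()
    index = r.json()
    # Labels sit under each video's insights; collect the distinct names.
    labels = set()
    for video in index.get("videos", []):
        for label in video.get("insights", {}).get("labels", []):
            labels.add(label["name"])
    return labels

if __name__ == "__main__":
    token = get_access_token()
    print(labels_for_video("<your-video-id>", token))   # placeholder video ID
```

Running this over a handful of representative videos gives you the subset of the Label vocabulary that actually shows up for your content.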
Related
I am trying to create a custom transform to detect and replace PII information in videos using Video Indexer and Media Services, but I am not able to find the correct workflow to combine the services. Is it Video Indexer detects insights (OCR) -> Text Analytics detects PII -> Media Services encodes and blurs (or overlays) the regions in the video? There is no sample for Media Services to blur arbitrary regions, only face redaction.
Media Services API only supports the detection of faces and the blurring of them in a two-pass or single-pass process.
The two-pass process returns a JSON file with bounding boxes that can be used to adjust the positioning and choose which areas are blurred or not blurred. That file can be updated and then used in the second pass.
https://learn.microsoft.com/en-us/azure/media-services/latest/analyze-face-redaction-concept
also see the JSON schema here - https://learn.microsoft.com/en-us/azure/media-services/latest/analyze-face-redaction-concept#elements-of-the-output-json-file
The current .NET sample for this only shows the single-pass mode being used, though. I don't yet have a detailed sample showing how to edit and re-submit the job for the second pass, but I can help with that if you are interested in the details.
The current sample uses the "Redact" mode, but you would want to start with the "Analyze" mode if you merely wanted the JSON file with the bounding boxes to be used for blurring adjustments.
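For illustration, here is a minimal sketch of creating the first-pass Transform with the FaceDetectorPreset in "Analyze" mode, assuming a recent (track 2) azure-mgmt-media Python SDK; the subscription, resource group, and account values are placeholders:

```python
# Minimal sketch (not the official sample): create a Transform that runs the
# FaceDetectorPreset in "Analyze" mode, so the first pass only emits the JSON
# annotations file with face bounding boxes. Assumes a recent azure-mgmt-media
# SDK; subscription/resource group/account names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices
from azure.mgmt.media.models import Transform, TransformOutput, FaceDetectorPreset

SUBSCRIPTION_ID = "<subscription-id>"       # placeholder
RESOURCE_GROUP = "<resource-group>"         # placeholder
ACCOUNT_NAME = "<media-services-account>"   # placeholder

client = AzureMediaServices(DefaultAzureCredential(), SUBSCRIPTION_ID)

# First pass: "Analyze" only detects faces and writes the bounding-box JSON;
# you can then edit that file and run a second pass in "Redact" mode.
analyze_preset = FaceDetectorPreset(
    resolution="SourceResolution",
    mode="Analyze",
)

client.transforms.create_or_update(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    "FaceAnalyzeTransform",
    Transform(outputs=[TransformOutput(preset=analyze_preset)]),
)
```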
There is no support to blur text or OCR related data directly in the service or in Video Indexer.
I am using the Azure Face API to tell two different persons' faces apart.
It was easy to use thanks to the good documentation on the Microsoft Azure API website.
But I get a different confidence score between my API call and the demo on the website: https://azure.microsoft.com/en-us/services/cognitive-services/face/#demo
My code is simple.
First I get the face IDs of the two uploaded images using the face detection API.
Then I just send the two face IDs to the face verify API and get back a confidence score that represents the similarity of the two faces.
I always get a lower confidence score from my API call than from the demo on the Azure website, about 20% lower.
For example, I get 0.65123 from my API call while the demo gives a higher number like 0.85121.
This is the Azure Face API specification to verify two faces:
https://learn.microsoft.com/en-us/rest/api/cognitiveservices/face/face/verifyfacetoface
I have no clue why this happens. I don't resize or crop the images when uploading.
I use exactly the same images for this test.
Is it possible for MS Azure to manipulate the values for their own interests?
I wonder if anyone has the same issue? If yes, please share your experience with me.
Different 'detectionModel' values can be provided. To use and compare different detection models, please refer to How to specify a detection model (there is also a minimal sketch after the list below).
'detection_02': detection model released in May 2019, with improved accuracy compared to detection_01. When you use the Face - Detect API, you can assign the model version with the detectionModel parameter. The available values are:
detection_01
detection_02
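For comparison purposes, here is a minimal sketch (not official sample code) of the detect -> verify flow over the public Face API v1.0 REST endpoints, pinning detectionModel so your call and the demo use the same detection model. The endpoint, key, and image URLs are placeholders:

```python
# Minimal sketch of the detect -> verify flow, pinning the detection model so the
# result is comparable with the demo. Endpoint, key and image URLs are placeholders;
# the REST paths follow the public Face API v1.0.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-face-api-key>"                                       # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}

def detect_face_id(image_url, detection_model="detection_02"):
    """Return the faceId of the first face found in the image."""
    r = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        headers=HEADERS,
        params={"returnFaceId": "true", "detectionModel": detection_model},
        json={"url": image_url},
    )
    r.raise_for_status()
    return r.json()[0]["faceId"]

def verify(face_id1, face_id2):
    """Return the verification result, including the confidence score."""
    r = requests.post(
        f"{ENDPOINT}/face/v1.0/verify",
        headers=HEADERS,
        json={"faceId1": face_id1, "faceId2": face_id2},
    )
    r.raise_for_status()
    return r.json()  # e.g. {"isIdentical": ..., "confidence": ...}

id1 = detect_face_id("https://example.com/person-a.jpg")  # placeholder URL
id2 = detect_face_id("https://example.com/person-b.jpg")  # placeholder URL
print(verify(id1, id2))
```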
Is it possible to define custom/special entities to be used for entity recognition within the Azure Text Analytics API?
NER (Named Entity Recognition) allows you to discover a wide range of entities, but for our purposes we're focusing on some model-specific entities (e.g. brand and product names) which we need to relate to the overall sentiment. General NER might not be enough for our purposes, since we're looking for very specific appreciation/criticism terms during topic generation.
The theme has already been presented in different flavors with no answers so far:
in 2016 it seemed to be an "upcoming" feature: Customizing the Named Entity Recognition model in Azure ML
in 2018 someone was searching for a much more specialized version capable of physically locating custom entities' spatial positions within documents: Documentation / Examples for Custom Entity Detection (Azure NLP / Text Analytics)
Here is what I am trying to do.
I am analyzing videos and based on my analysis I know at certain time intervals I need to capture a screenshot. I want this to be taken care of as part of encoding but I don't see any documentation that lets me achieve it in v3. Is this even possible in v3?
This feature (key frames) is available in Video Indexer. More info here:
https://learn.microsoft.com/en-us/azure/media-services/video-indexer/scenes-shots-keyframes
You can use the v3 APIs to generate thumbnails at fixed intervals. In the sample here, you can see how the PngImage and PngFormat elements are used. You can also output JPEG images - the schema details are here.
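As a sketch of what such a custom v3 Transform could look like with the azure-mgmt-media Python SDK (a recent version is assumed, and the subscription/resource group/account values are placeholders), emitting one PNG thumbnail every 10 seconds:

```python
# Minimal sketch: a custom v3 Transform that emits PNG thumbnails at a fixed
# interval, using the PngImage / PngFormat elements mentioned above. Assumes a
# recent azure-mgmt-media SDK; subscription/resource group/account names are
# placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices
from azure.mgmt.media.models import (
    Transform, TransformOutput, StandardEncoderPreset,
    PngImage, PngLayer, PngFormat,
)

SUBSCRIPTION_ID = "<subscription-id>"       # placeholder
RESOURCE_GROUP = "<resource-group>"         # placeholder
ACCOUNT_NAME = "<media-services-account>"   # placeholder

client = AzureMediaServices(DefaultAzureCredential(), SUBSCRIPTION_ID)

thumbnail_preset = StandardEncoderPreset(
    codecs=[
        PngImage(
            start="PT0S",    # start at the beginning of the video
            step="PT10S",    # one thumbnail every 10 seconds
            range="100%",    # cover the whole video
            layers=[PngLayer(width="50%", height="50%")],
        )
    ],
    formats=[PngFormat(filename_pattern="Thumbnail-{Basename}-{Index}{Extension}")],
)

client.transforms.create_or_update(
    RESOURCE_GROUP,
    ACCOUNT_NAME,
    "FixedIntervalThumbnails",
    Transform(outputs=[TransformOutput(preset=thumbnail_preset)]),
)
```

Swap PngImage/PngFormat for JpgImage/JpgFormat if you prefer JPEG output, per the schema linked above.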
I've started experimenting with Azure ML Studio: playing with templates, uploading data into it, and immediately starting to work with it.
The problem is, I can't seem to figure out how to tie these algorithms to real-time data. Can I define a data source as input, or can I configure Azure ML Studio so that it runs on data that I've specified?
Azure ML studio is for experimenting to find a proper solution to the problem set you have. You can upload data to sample, split and train your algorithms to obtain “trained models”. Once you feel comfortable with the results, you can turn that “training experiment” to a “Predictive Experiment”. From there on, your experiment will not be training but be predicting results based on user input.
To do so, you can publish the experiment as a web service. Once you've published it, you can find your web service under the Web Services tab and run samples with it. There's a manual input dialog (the entry boxes here depend on the features you were using in your data samples), some documentation, and REST API info for single-query and batch-query processing with the web service. Under batch you can even find sample code to connect to the published web service.
From here on, any platform that can talk to a REST API can call the published web service and get the results.
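For example, here is a minimal sketch of a single (Request-Response) call to such a published web service, assuming the classic Studio request schema; the endpoint URL, API key, and column names are placeholders you would copy from your own web service's API help page:

```python
# Minimal sketch: calling a published Azure ML Studio web service (Request-Response
# mode) over REST. The endpoint URL, API key and column names are placeholders;
# copy the real ones from the web service's API help page in the Studio portal.
import requests

URL = ("https://<region>.services.azureml.net/workspaces/<workspace-id>"
       "/services/<service-id>/execute?api-version=2.0&details=true")  # placeholder
API_KEY = "<web-service-api-key>"                                      # placeholder

payload = {
    "Inputs": {
        "input1": {
            # Column names must match the features your experiment expects.
            "ColumnNames": ["feature_1", "feature_2"],
            "Values": [["1.0", "2.0"]],
        }
    },
    "GlobalParameters": {},
}

response = requests.post(
    URL,
    headers={"Content-Type": "application/json", "Authorization": f"Bearer {API_KEY}"},
    json=payload,
)
response.raise_for_status()
print(response.json())  # scored results come back under "Results"
```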
Find below the article about converting from training to predictive experiments
https://azure.microsoft.com/en-us/documentation/articles/machine-learning-walkthrough-5-publish-web-service/
Hope this helps!