Different face matching confidence scores between the Azure Face verify API and the Azure demo

I am using the Azure Face API to compare two persons' faces.
It was easy to use thanks to the good documentation on the Microsoft Azure API website.
But I get a different confidence score from my API call than from the demo on the website: https://azure.microsoft.com/en-us/services/cognitive-services/face/#demo
My code is simple.
First I get the face IDs of the two uploaded images using the face detection API.
Then I send the two face IDs to the face verify API and get back a confidence score that represents the similarity of the two faces.
The confidence score from my API call is always lower than the one from the demo on the Azure website, by about 20%.
For example, I get 0.65123 from the API call while the demo shows a higher number like 0.85121.
This is the Azure Face API specification for verifying two faces:
https://learn.microsoft.com/en-us/rest/api/cognitiveservices/face/face/verifyfacetoface
I have no clue why this happens. I don't resize or crop the images on upload,
and I use exactly the same images for both tests.
Is it possible that Microsoft Azure manipulates the values in its own interest?
Has anyone had the same issue? If so, please share your experience with me.

A likely cause is that different 'detectionModel' values are being used. To use and compare different detection models, please refer to "How to specify a detection model".
When you use the Face - Detect API, you can assign the model version with the detectionModel parameter. The available values are:
detection_01
detection_02
'detection_02' is the detection model released in May 2019, with improved accuracy compared to detection_01.
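
For what it's worth, here is a minimal sketch of how you could compare the two detection models on the same image pair yourself, assuming the v1.0 REST endpoints; the endpoint, key, and file names below are placeholders:

import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
KEY = "<your-face-api-key>"                                       # placeholder

def detect_face_id(image_path, detection_model):
    """Detect faces with a given detectionModel and return the first faceId."""
    with open(image_path, "rb") as f:
        data = f.read()
    resp = requests.post(
        ENDPOINT + "/face/v1.0/detect",
        params={"returnFaceId": "true", "detectionModel": detection_model},
        headers={"Ocp-Apim-Subscription-Key": KEY,
                 "Content-Type": "application/octet-stream"},
        data=data,
    )
    resp.raise_for_status()
    return resp.json()[0]["faceId"]  # assumes at least one face was found

def verify_confidence(face_id1, face_id2):
    """Call Face - Verify on two faceIds and return the confidence score."""
    resp = requests.post(
        ENDPOINT + "/face/v1.0/verify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={"faceId1": face_id1, "faceId2": face_id2},
    )
    resp.raise_for_status()
    return resp.json()["confidence"]

for model in ("detection_01", "detection_02"):
    id1 = detect_face_id("face_a.jpg", model)
    id2 = detect_face_id("face_b.jpg", model)
    print(model, verify_confidence(id1, id2))

If the two scores differ in the same way your API call differs from the demo, the model version is the explanation.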

Related

Accuracy of Cognito and Comprehend for PII detection

I have been through the documentation of both AWS Cognito and Azure Comprehend, trying to understand the accuracy (both TPR and FPR) of the two services when it comes to identifying PII and PHI inside a document without performing custom training. Unfortunately, I wasn't able to find any numbers, and I don't have enough data to build my own confusion matrix. Do any of you have an idea, even an indicative one, of their performance?
Thanks!

Get model download statistics for TFhub

I'm working on a tool for TF Hub models and would like to test it on a representative sample of models. Is there a way to programmatically download a full list of TF Hub models along with their download numbers (to judge popularity), other than scraping the website?
Also, I've noticed that the download numbers for some models (Ex: https://tfhub.dev/tensorflow/efficientnet/lite0/classification/2) are missing, and on a day-to-day basis, the download numbers sometimes go down. Is there any explanation for this?
There is no API to download all models or model stats. The bug regarding the decreasing download counts should be fixed by now.

Is it possible to store uploaded pictures for OCR analysis in Azure Storage for later debugging and analysis?

Context
I have a mobile app that lets our users automatically capture the name plate of our products. For this I use the Azure Cognitive Services OCR service.
I am a bit worried that customers might capture pictures of insufficient quality or of the wrong area of the product (where there is no name plate). To analyse whether this is the case, it would be handy to have a copy of the captured pictures so we can learn what went well or what went wrong.
Question
Is it possible to not only process an uploaded picture but also to store it in Azure Storage so that I can analyse it at a later point in time?
What I've tried so far
I configured the Diagnostic settings so that logs and metrics are stored in Azure Storage. As the name suggests, this stores only logs and metrics, not the actual images, so it does not solve my issue.
Remarks
I know that I can implement this manually in the app, but I think it would be better if I only had to upload the picture once.
I'm aware that there are data protection considerations that must be made.
No, you can't enable automatic image logging for the OCR operation; you have to implement it yourself.
But to avoid uploading the picture twice, as you said, you could put the logic on the server side: send the image to your own API, and in that API store it while forwarding it to the OCR service in parallel.
That said, based on your question, I guess you might not have any server-side component in your app?
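
To make that concrete, here is a rough sketch of such a relay endpoint, assuming Flask, the azure-storage-blob SDK, and the synchronous Computer Vision OCR endpoint; the connection string, container, endpoint, and key are placeholders, not something the OCR service provides out of the box:

import uuid
import requests
from flask import Flask, request, jsonify
from azure.storage.blob import BlobServiceClient

app = Flask(__name__)

BLOB_CONN_STR = "<storage-connection-string>"   # placeholder
CONTAINER = "ocr-uploads"                       # placeholder container name
VISION_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
VISION_KEY = "<your-vision-key>"                # placeholder

blob_service = BlobServiceClient.from_connection_string(BLOB_CONN_STR)

@app.route("/nameplate", methods=["POST"])
def nameplate():
    image_bytes = request.get_data()  # raw image bytes from the single upload

    # Keep a copy in Blob Storage for later debugging and analysis.
    blob_name = f"{uuid.uuid4()}.jpg"
    blob_service.get_blob_client(CONTAINER, blob_name).upload_blob(image_bytes)

    # Forward the same bytes to the OCR service.
    ocr = requests.post(
        VISION_ENDPOINT + "/vision/v3.2/ocr",
        headers={"Ocp-Apim-Subscription-Key": VISION_KEY,
                 "Content-Type": "application/octet-stream"},
        data=image_bytes,
    )
    ocr.raise_for_status()
    return jsonify({"blob": blob_name, "ocr": ocr.json()})

The app would then upload the picture once to this endpoint instead of calling the OCR service directly.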

Deployment of a Tensorflow object detection model and serving predictions

I have a Tensorflow object detection model deployed on Google Cloud Platform's ML Engine. I have come across posts suggesting TensorFlow Serving + Docker for better performance. I am new to Tensorflow and want to know what the best way to serve predictions is. Currently, the ML Engine online predictions have a latency of more than 50 seconds. My use case is a user uploading pictures from a mobile app and getting a suitable response based on the prediction result, so I am expecting the prediction latency to come down to 2-3 seconds. What else can I do to make the predictions faster?
Google Cloud ML Engine has recently released GPUs support for Online Prediction (Alpha). I believe that our offering may provide the performance improvements you're looking for. Feel free to sign up here: https://docs.google.com/forms/d/e/1FAIpQLSexO16ULcQP7tiCM3Fqq9i6RRIOtDl1WUgM4O9tERs-QXu4RQ/viewform?usp=sf_link
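
If you do try the TensorFlow Serving + Docker route mentioned in the question, a minimal client sketch against its REST predict endpoint could look roughly like this (the model name, port, and preprocessing are assumptions that depend on your SavedModel's signature):

import json
import requests
import numpy as np
from PIL import Image

def predict(image_path, model_name="detector", host="http://localhost:8501"):
    # TensorFlow Serving's REST API expects a JSON body of the form
    # {"instances": [...]}; an object detection SavedModel typically takes
    # a batch of image tensors.
    img = np.array(Image.open(image_path).convert("RGB"))
    payload = {"instances": [img.tolist()]}
    resp = requests.post(f"{host}/v1/models/{model_name}:predict",
                         data=json.dumps(payload))
    resp.raise_for_status()
    return resp.json()["predictions"]

print(predict("sample.jpg"))

Measuring this against your current ML Engine latency should tell you quickly whether self-hosted serving is worth the operational overhead.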

Using real-time data with Azure Machine Learning Studio?

I've started experimenting with Azure ML Studio: playing with templates, uploading data into it, and immediately working with it.
The problem is, I can't seem to figure out how to tie these algorithms to real-time data. Can I define a data source as input, or can I configure Azure ML Studio so that it runs on data that I've specified?
Azure ML Studio is for experimenting to find a proper solution to the problem set you have. You can upload data, then sample, split, and train your algorithms to obtain "trained models". Once you feel comfortable with the results, you can turn that "training experiment" into a "predictive experiment". From then on, your experiment will no longer be training but predicting results based on user input.
To do so, you publish the experiment as a web service. Once it is published, you can find it under the web services tab and run samples with it. There's a manual input dialog (the entry boxes depend on the features you were using in your data samples), some documentation, and REST API info for single-query and BATCH query processing. Under batch you can even find sample code to connect to the published web service.
From there, any platform that can talk to a REST API can call the published web service and get the results.
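
As an illustration, a single request-response call from Python could look roughly like this; the URL, API key, and column names are placeholders for the values that the web service's API info page generates for your experiment:

import requests

URL = "https://<region>.services.azureml.net/workspaces/<workspace-id>/services/<service-id>/execute?api-version=2.0&details=true"  # placeholder
API_KEY = "<your-web-service-api-key>"  # placeholder

payload = {
    "Inputs": {
        "input1": {
            # Column names must match the features your experiment expects.
            "ColumnNames": ["feature_1", "feature_2"],
            "Values": [["1.0", "2.0"]],
        }
    },
    "GlobalParameters": {},
}

resp = requests.post(
    URL,
    headers={"Authorization": "Bearer " + API_KEY,
             "Content-Type": "application/json"},
    json=payload,
)
resp.raise_for_status()
print(resp.json()["Results"]["output1"])

A scheduled job or a small service wrapping this call is the usual way to feed the published model with fresh data.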
Find below the article about converting a training experiment to a predictive experiment:
https://azure.microsoft.com/en-us/documentation/articles/machine-learning-walkthrough-5-publish-web-service/
Hope this helps!
