Our use case is as follows:
We have a list of faces in our system.
A user will upload one image.
We want to show the list of faces that match the uploaded image with, say, >0.8 confidence.
Looking into how to do this, my understanding is as follows:
Using the Face - Detect API, we first need to upload all images, including the image we want to verify.
We can add all faces from our system to one PersonGroup (identified by a personGroupId).
We then need to call the Face - Verify API, passing the image to verify and the personGroupId, to start comparing.
In the response we will get all faceIds with isIdentical and confidence data?
Is this the right way?
After applying filters, our system can have around 1,000-3,000 images.
BTW, in the linked documentation it is mentioned that a faceId expires 24 hours after the detection call :(
We also need to take care of performance here, so we are thinking of making an async call and storing the result somewhere in our system so it can be retrieved later.
What would be the best approach for this?
Pricing
I can see that the first 30,000 transactions are free (with a limit of 20 per minute).
Face Storage costs 16.53/month for 1,000 images. Does that mean the Face - Detect API stores images in Azure Blob storage? If yes, will the faceId still be deleted after 24 hours?
Face Storage stores images sized up to 4 MB each, whereas Face - Detect says it accepts images up to 6 MB.
I might be missing something here; it would be great if someone could shed some light on it.
Let's see the process that you will need to implement.
In the documentation here it says:
Face APIs cover the following categories:
...
FaceList: Used to manage a FaceList for Find Similar.
(Large)PersonGroup: Used to manage a (Large)PersonGroup dataset for Identification.
(Large)PersonGroup Person: Used to manage (Large)PersonGroup Person Faces for Identification.
In your case, it looks like you want to identify faces so you will use PersonGroup with PersonGroup Person items inside.
Step 1 - Generate your list of known faces
Details
So first you need to store your known faces in a group (called PersonGroup or LargePersonGroup depending on the number of items you have to store), in order to query these items with the image uploaded by your user. The group persists the items; there is no "24-hour limit" with those groups.
If you want to understand the differences between "normal" and "large-scale" groups, see the reference here: there are some differences that you must consider, in particular regarding the training process.
So let's use a normal PersonGroup, not a large one. Please note that the number of items depends on your subscription:
Free-tier subscription quota: 1,000 person groups. Each holds up to 1,000 persons.
S0-tier subscription quota: 1,000,000 person groups. Each holds up to 10,000 persons.
Actions
Please also note that I'm pointing to the API operations here, but all these actions can be performed in any language via those API calls, or directly with the SDK provided for some languages (see the list here). A minimal code sketch of the whole sequence is shown after the list below.
Create a PersonGroup with the PersonGroup - Create operation. You will specify a personGroupId in your request, which you will use below.
Then, for each person among your known faces:
Create a Person with the PersonGroup Person - Create operation, passing the previous personGroupId in the request. You will get a personId GUID value as a result, like "25985303-c537-4467-b41d-bdb45cd95ca1".
Add the faces of this person to the newly created Person by calling the PersonGroup Person - Add Face operation, providing the personGroupId, the personId, additional optional information in the request, and your image URL in the body.
Note that for this operation:
Valid image size is from 1KB to 4MB. Only one face is allowed per image.
Finally, once you have added your persons with their faces:
Call the PersonGroup - Train operation.
Check the training status with the PersonGroup - Get Training Status operation.
Then you are ready to identify people based on this group!
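As a rough illustration only, here is a minimal Python sketch of that Step 1 sequence against the Face API v1.0 REST endpoints. The endpoint, key, group id and image URLs are placeholders, and the SDKs expose the same operations if you prefer them:

import time
import requests

# Placeholder endpoint and key - replace with your own Face resource values
ENDPOINT = "https://westeurope.api.cognitive.microsoft.com/face/v1.0"
HEADERS = {"Ocp-Apim-Subscription-Key": "<your-face-api-key>",
           "Content-Type": "application/json"}
PERSON_GROUP_ID = "sample_group"

# PersonGroup - Create
requests.put(f"{ENDPOINT}/persongroups/{PERSON_GROUP_ID}",
             headers=HEADERS, json={"name": "Known faces"})

# For each known person: PersonGroup Person - Create, then PersonGroup Person - Add Face
known_people = {"john": ["https://example.com/john1.jpg"],   # hypothetical image URLs
                "anna": ["https://example.com/anna1.jpg"]}
for name, image_urls in known_people.items():
    person = requests.post(f"{ENDPOINT}/persongroups/{PERSON_GROUP_ID}/persons",
                           headers=HEADERS, json={"name": name}).json()
    for url in image_urls:
        # 1 KB - 4 MB per image, one face per image
        requests.post(f"{ENDPOINT}/persongroups/{PERSON_GROUP_ID}"
                      f"/persons/{person['personId']}/persistedFaces",
                      headers=HEADERS, json={"url": url})

# PersonGroup - Train, then poll PersonGroup - Get Training Status
requests.post(f"{ENDPOINT}/persongroups/{PERSON_GROUP_ID}/train", headers=HEADERS)
while True:
    training = requests.get(f"{ENDPOINT}/persongroups/{PERSON_GROUP_ID}/training",
                            headers=HEADERS).json()
    if training["status"] in ("succeeded", "failed"):
        break
    time.sleep(1)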
Step 2 - Search for this faceId among your known faces
Easy, just 2 actions here:
Call the Face - Detect operation to find faces inside your image. The result will be an array of items containing a faceId and other attributes.
If you have detected faces, call the Face - Identify operation with the following parameters (a code sketch follows the request sample below):
faceIds: the faceId value(s) returned by the detect operation
personGroupId: the Id of the group you have created during step 1
confidenceThreshold: your confidence threshold, like 0.8
maxNumOfCandidatesReturned: Number of candidates returned (between 1 and 100, default is 10)
Request sample:
{
"personGroupId": "sample_group",
"faceIds": [
"c5c24a82-6845-4031-9d5d-978df9175426",
"65d083d4-9447-47d1-af30-b626144bf0fb"
],
"maxNumOfCandidatesReturned": 1,
"confidenceThreshold": 0.8
}
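Continuing the same hedged sketch (reusing the placeholder ENDPOINT, HEADERS and PERSON_GROUP_ID from Step 1), the two Step 2 calls could look like this in Python:

# Face - Detect on the uploaded image (an image URL is assumed here for simplicity)
detected = requests.post(f"{ENDPOINT}/detect", headers=HEADERS,
                         json={"url": "https://example.com/uploaded.jpg"}).json()
face_ids = [face["faceId"] for face in detected]   # valid for 24 hours only

if face_ids:
    # Face - Identify against the trained PersonGroup
    results = requests.post(f"{ENDPOINT}/identify", headers=HEADERS, json={
        "personGroupId": PERSON_GROUP_ID,
        "faceIds": face_ids,
        "maxNumOfCandidatesReturned": 10,
        "confidenceThreshold": 0.8,
    }).json()
    for result in results:
        for candidate in result["candidates"]:
            # candidate["personId"] maps back to a Person created in Step 1
            print(result["faceId"], candidate["personId"], candidate["confidence"])

Each returned personId can then be looked up (PersonGroup Person - Get) to retrieve the name you stored in Step 1.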
Other questions
Face Storage costs 16.53/month for 1,000 images. Does that mean the Face - Detect API stores images in Azure Blob storage? If yes, will the faceId still be deleted after 24 hours?
The Face - Detect API does not store the image. The storage cost applies to using PersonGroups or FaceLists.
Face Storage stores images sized up to 4 MB each, whereas Face - Detect says it accepts images up to 6 MB.
As said, storage is about persisting faces, as when you use PersonGroup Person - Add Face, where the limit is 4 MB, not 6 MB.
Related
Sometimes we query on one column and get the value in another, so I thought of sharing this finding I came across.
I have created a logic app with the trackedProperties "MessageId" and attached it to a Log Analytics workspace (via diagnostic settings).
How to add tracked properties to a Log Analytics workspace in a logic app:
"trackedProperties": {
"MessageId": "#{json(xml(triggerBody())).ABC.DEF.MessageID}"
}
When I queried in Log Analytics, I saw 2 trackedProperties columns, named trackedProperties_MessageId_g and trackedProperties_MessageId_s.
Significance of the above 2 column names: when you provide a GUID value, it populates trackedProperties_MessageId_g, and when you provide a string, it populates trackedProperties_MessageId_s.
Thanks for sharing your finding(s). Yes, AFAIK, when you send a particular field/column to Log Analytics, its name is changed based on its type. This is true for almost any field/column. However, there are some fields/columns, called reserved, that you can send without a name change, provided you send them with the right type of course. An MVP, Stanislav Zhelyazkov, has covered this topic here.
If you were expecting only 1 trackedProperty rather than the 2 columns trackedProperties_MessageId_g and trackedProperties_MessageId_s, then I suggest you share your feedback in this UserVoice / feedback forum. The responsible product/feature team would check whether it is feasible to resolve this by adding some kind of check in the background, and if it is, they would triage and prioritize the feedback based on various factors such as the number of votes it receives, priority, pending backlog items, etc.
I want to get the workload for each team member from the Azure DevOps API, so that I can visualize it like what you can see in the picture and here: https://learn.microsoft.com/en-us/azure/devops/boards/sprints/adjust-work?view=azure-devops
I already saw that there is a capacity endpoint: https://learn.microsoft.com/en-us/rest/api/azure/devops/work/capacities/list?view=azure-devops-rest-5.1.
But this only shows the available hours for each member in a week. I want all work items per member (hours summed up).
Is there a possible way to achieve this? Am I missing something?
I'm afraid there is no REST API that can get this value directly at present. Before the value is displayed on the page, it is computed several times on the backend. After you have assigned the work items to members, the page will show as follows: all work items per member (hours summed up) and the capacity hours.
If you want to get the value via a REST API, you can capture the internal API with Fiddler, then follow the format of that API to pass your values. Or you can get the member capacity and the iteration work days, then use a script to calculate the value manually, for example as sketched below.
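For the manual route, here is a hedged Python sketch that pulls the iteration's tasks with a WIQL query and sums Remaining Work per assignee. The organization, project, iteration path and PAT are placeholders, and the field names assume the standard Task "Remaining Work" field:

import requests

# Hypothetical values - replace with your own organization, project and PAT
ORG = "https://dev.azure.com/your-org"
PROJECT = "YourProject"
ITERATION = r"YourProject\Sprint 1"
AUTH = ("", "<personal-access-token>")          # basic auth with a PAT

# WIQL: ids of all tasks in the iteration
wiql = {"query": f"SELECT [System.Id] FROM WorkItems "
                 f"WHERE [System.IterationPath] = '{ITERATION}' "
                 f"AND [System.WorkItemType] = 'Task'"}
ids = [str(w["id"]) for w in requests.post(
    f"{ORG}/{PROJECT}/_apis/wit/wiql?api-version=5.1",
    auth=AUTH, json=wiql).json()["workItems"]]

# Fetch assignee and Remaining Work for those ids, then sum per member
workload = {}
if ids:
    items = requests.get(
        f"{ORG}/{PROJECT}/_apis/wit/workitems?ids={','.join(ids)}"
        "&fields=System.AssignedTo,Microsoft.VSTS.Scheduling.RemainingWork"
        "&api-version=5.1", auth=AUTH).json()["value"]
    for item in items:
        fields = item["fields"]
        member = fields.get("System.AssignedTo", {}).get("displayName", "Unassigned")
        workload[member] = workload.get(member, 0) + fields.get(
            "Microsoft.VSTS.Scheduling.RemainingWork", 0)

print(workload)   # e.g. {"Jane Doe": 26.0, "Unassigned": 4.0}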
I have a question regarding the Python API of Interactive Brokers.
Can multiple asset and stock contracts be passed into the reqMktData() function to obtain the last prices? (I can set snapshot=True in reqMktData to get the last price. You can assume that I have subscribed to the appropriate data services.)
To put things in perspective, this is what I am trying to do:
1) Call reqMktData, get last prices for multiple assets.
2) Feed the data into my prediction engine, and do something
3) Go to step 1.
When I contacted Interactive Brokers, they said:
"Only one contract can be passed to reqMktData() at one time, so there is no bulk request feature in requesting real time data."
Obviously, one way to get around this is to use a loop, but this is too slow. Another way is multithreading, but this is a lot of work, plus I can't afford the extra expense of a new computer. I am not interested in either one.
Any suggestions?
You can only specify 1 contract in each reqMktData call. There is no choice but to use a loop of some type. The speed shouldn't be an issue as you can make up to 50 requests per second, maybe even more for snapshots.
The speed issue could be that you want too much data (>50 requests/s), or that you're using an old version of the IB Python API; check connection.py for lock.acquire (I've deleted all of them). Also, if there has been no trade for more than 10 seconds, IB will wait for a trade before sending a snapshot, so test with active symbols.
However, what you should do is request live streaming data by setting snapshot to false and just keep track of the last price in the stream. You can stream up to 100 tickers with the default minimums. You keep them separate by using unique ticker ids.
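A minimal sketch of that streaming approach with the ibapi package could look like the following; the TWS port, client id and symbols are assumptions, and error handling is omitted:

import threading
from ibapi.client import EClient
from ibapi.contract import Contract
from ibapi.wrapper import EWrapper

class LastPriceApp(EWrapper, EClient):
    def __init__(self, symbols):
        EClient.__init__(self, self)
        self.symbols = symbols        # unique ticker id -> symbol
        self.last_prices = {}         # unique ticker id -> last price

    def nextValidId(self, orderId):
        # Connection is ready: start one streaming request per contract
        for req_id, symbol in self.symbols.items():
            contract = Contract()
            contract.symbol = symbol
            contract.secType = "STK"
            contract.exchange = "SMART"
            contract.currency = "USD"
            # snapshot=False -> continuous stream of ticks
            self.reqMktData(req_id, contract, "", False, False, [])

    def tickPrice(self, reqId, tickType, price, attrib):
        # tickType 4 = LAST (68 = DELAYED_LAST)
        if tickType in (4, 68):
            self.last_prices[reqId] = price

app = LastPriceApp({1: "AAPL", 2: "MSFT", 3: "IBM"})   # assumed symbols
app.connect("127.0.0.1", 7497, clientId=1)             # assumed TWS paper-trading port
threading.Thread(target=app.run, daemon=True).start()
# ...later, read app.last_prices and feed it into the prediction engine...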
Is it possible to get data from Google Analytics that contains more than the usual limits found in the Google Analytics API?
I am using the node-googleanalytics library to pull data from Google Analytics for use in Node.js projects. When I query more than 7 dimensions or more than 10 metrics, an error message is displayed: [Error: Requested 8 dimensions; only 7 are allowed.] or [Error: Requested 11 metrics; only 10 are allowed.]. Is it possible to get more than 7 dimensions and 10 metrics?
Those are limits imposed by the Core Reporting API for Google Analytics. The latest Reference Guide (v3) specifies:
You can supply a maximum of 7 dimensions in any query.
You can supply a maximum of 10 metrics for any query.
With regard to dimensions, the limits are fixed (i.e. a maximum of 7 dimensions for any query), but there are programmatic workarounds. If you have specific dimensions that can be used to identify a user (e.g. a session ID and browser timestamp), you can execute multiple queries and then patch them together (a sketch of this idea follows below).
I built a python program that will do exactly this: https://github.com/aiqui/ga-download
This program can bring together multiple groups of dimensions, so that any number of dimensions can be downloaded and combined into a single CSV file.
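To illustrate the "patch them together" idea (independently of that program), here is a hedged sketch with made-up data: two queries that each respect the 7-dimension/10-metric limits but share a user-identifying custom dimension, merged afterwards with pandas:

import pandas as pd

# Hypothetical results of two separate Core Reporting API queries.
# Both include the same user-identifying custom dimension (here ga:dimension1,
# assumed to hold a session id), so their rows can be matched up afterwards.
query_a = pd.DataFrame({
    "ga:dimension1": ["sess-1", "sess-2"],
    "ga:source": ["google", "(direct)"],
    "ga:sessions": [1, 1],
})
query_b = pd.DataFrame({
    "ga:dimension1": ["sess-1", "sess-2"],
    "ga:deviceCategory": ["mobile", "desktop"],
    "ga:pageviews": [5, 3],
})

# Patch the two result sets together on the shared dimension
combined = query_a.merge(query_b, on="ga:dimension1", how="inner")
print(combined)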
This is our situation:
We store user messages in Table Storage. The PartitionKey is the UserId and the RowKey is used as a message ID.
When a user opens their message panel, we want to just .Take(x) messages; we don't care about the sort order. But what we have noticed is that the time it takes to get the messages varies a lot with the number of messages we take.
We did some small tests:
We did 50 * .Take(X) and compared the differences:
So we did .Take(1) 50 times and .Take(100) 50 times etc.
To make an extra check we did the same test 5 times.
Here are the results:
As you can see there are some HUGE differences. The difference between 1 and 2 is very strange. The same for 199-200.
Does anybody have any clue how this is happening? The Table Storage is on a live server btw, not development storage.
Many thanks.
(In the results chart above: X = number of Takes, Y = test number.)
Update
The problem only seems to occur when I'm using a wireless network. When I'm using the cable, the times are normal.
Possibly the data is retrieved in batches of a certain size x. When you request x+1 rows, it would have to fetch two batches and then drop a certain number.
Try running your test with increments of 1 as the Take() parameter, to confirm or dismiss this assumption.
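The tests above were presumably run with the .NET client, but as a rough illustration of such a sweep, here is a hedged Python sketch using the azure-data-tables package (the connection string, table name and partition key are placeholders). The page size mimics the Take() value; a jump in timing at a fixed size would support the batching hypothesis:

import itertools
import time
from azure.data.tables import TableClient

client = TableClient.from_connection_string(
    conn_str="<storage-connection-string>", table_name="Messages")
partition_key = "<user-id>"

for take in range(1, 201):                                  # increments of 1
    start = time.perf_counter()
    pages = client.query_entities(f"PartitionKey eq '{partition_key}'",
                                  results_per_page=take)
    batch = list(itertools.islice(pages, take))             # client-side "take"
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"Take({take}): {elapsed_ms:.1f} ms for {len(batch)} entities")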