I want to get the workload for each team member from the Azure DevOps API, so that I can visualize it like what you can see in the picture and here: https://learn.microsoft.com/en-us/azure/devops/boards/sprints/adjust-work?view=azure-devops
I already saw that there is a capacity endpoint: https://learn.microsoft.com/en-us/rest/api/azure/devops/work/capacities/list?view=azure-devops-rest-5.1.
But this only shows the available hours for each member in a week. I want all work items per member (hours summed up).
Is there a possible way to achieve this? Am I missing something?
I'm afraid there is no REST API that can get this value directly at present. Before the value is displayed on the page, it is computed several times at the backend. After you assign the work items to members, the page will show as follows. It will show all work items per member (hours summed up) and capacity hours.
If you want to get the value via REST API, you can capture the calls the page makes with Fiddler, then follow the format of those calls and pass your own values. Or you can get the member capacity and the iteration work days, then use a script to calculate this value manually, as sketched below.
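For the "hours summed up per member" half, a minimal sketch is below, assuming a PAT with work-item read scope. It queries the sprint's tasks via the documented WIQL endpoint and sums Remaining Work per assignee; the organization, project, and iteration names are placeholders, and you would combine the result with the capacities endpoint you already found.

import base64
import requests

ORG, PROJECT = "myorg", "myproject"          # placeholders
ITERATION_PATH = f"{PROJECT}\\Sprint 1"      # placeholder iteration path
PAT = "..."                                  # personal access token

auth = {"Authorization": "Basic " + base64.b64encode(f":{PAT}".encode()).decode()}
base = f"https://dev.azure.com/{ORG}/{PROJECT}"

# Find the ids of all tasks in the iteration via WIQL
wiql = {
    "query": "SELECT [System.Id] FROM WorkItems "
             "WHERE [System.WorkItemType] = 'Task' "
             f"AND [System.IterationPath] = '{ITERATION_PATH}'"
}
ids = [str(wi["id"]) for wi in requests.post(
    f"{base}/_apis/wit/wiql?api-version=5.1", json=wiql, headers=auth
).json()["workItems"]]

# Fetch assignee and Remaining Work for those ids (batches of up to 200 ids)
fields = "System.AssignedTo,Microsoft.VSTS.Scheduling.RemainingWork"
items = requests.get(
    f"{base}/_apis/wit/workitems?ids={','.join(ids)}&fields={fields}"
    "&api-version=5.1", headers=auth
).json()["value"]

# Sum the remaining hours per member
workload = {}
for item in items:
    f = item["fields"]
    member = f.get("System.AssignedTo", {}).get("displayName", "Unassigned")
    workload[member] = workload.get(member, 0) + f.get(
        "Microsoft.VSTS.Scheduling.RemainingWork", 0)
print(workload)  # e.g. {'Jane Doe': 14.0, 'Unassigned': 6.0}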
I am trying to create a dashboard for my services in Azure. I added an Azure Metrics Chart for each service and later wanted to add under it specific details about the operations included in the service.
But when I try to get those from the logs, I get a much higher number of requests made. KQL:
requests
| where cloud_RoleName startswith "notificationengine"
| summarize Count = count() by operation_Name
| order by Count
And result:
The problem is that with some metrics charts I get values with minimal differences or exactly the same, while with some, like the one I have shown, I get completely different values. I tried to modify the KQL or search for what might be wrong, but never got anywhere.
My guess is that these are 2 different values, but in that case why are both labeled as "requests", and if so, what are the actual differences?
I created an Azure Function App with 2 HTTP trigger functions whose names start with "HttpTrigger" and ran both functions a couple of times.
Test Case 1:
In the Logs workspace, the request counts for the two functions whose names start with "HttpTrigger":
But I have pinned the requests-count chart of only 1 function to the Azure dashboard:
I believe you have probably written the query for the requests of all the services/applications that start with "notificationengine", but pinned only some of the apps'/services' log charts to the dashboard.
Test Case 2:
We have SharePoint 2016 hosted on-premises with a minimal set of services running on the server. Resource utilization is very low and the user base is around 100. There are no workflows or any other resource-consuming services running.
We use a list to store and update information for certain users with the help of a form for the end user. Recently, the time taken for a list data update has increased to over 6 seconds.
Example:
https://sitename_url/_api/web/lists/GetByTitle('WFListInfo')/items(15207)
This list has about 15 fields, mostly single line of text, number, or DateTime.
The indexing is set to automatic.
As part of the review, we conducted a few checks and re-indexed the database on our cluster; however, there is no improvement.
Looking forward to any help / suggestions. Thank you.
The use case is as follows:
We have a list of faces in our system
A user will upload one image
We want to show the list of faces that match the uploaded image with, say, >0.8 confidence
Looking at how to do this, I understood the following:
Using the Face Detect API, we need to first upload all images, including the image we want to verify
We can add all the faces from our system to one PersonGroup (identified by a personGroupId)
We then need to call the Face Verify API, passing the image to verify and the personGroupId to start comparing
In response we will get all faceIds with isIdentical and confidence data?
Is this the right way?
After applying filters, our system can have, say, around 1000-3000 images.
BTW, in the given link it is mentioned that the faceId expires 24 hours after the detection call :(
We also need to take care of performance in this case, so we are thinking of making an async call and then storing the result somewhere in our system where it can be retrieved later on.
What can be the best approach for this?
Pricing
I can see that the first 30,000 transactions are free (with a limit of 20 per minute)
Face Storage costs 16.53/month for 1,000 images; does that mean the Face Detect API will store images in Azure Blob storage? If yes, will the faceId still be deleted after 24 hours?
Face Storage stores images sized up to 4 MB each, whereas Face Detect says it can accept up to 6 MB
I might be missing something here; it would be great if someone could shed some light on it
Let's see the process that you will need to implement.
In the documentation here, it says:
Face APIs cover the following categories:
...
FaceList: Used to manage a FaceList for Find Similar.
(Large)PersonGroup: Used to manage a (Large)PersonGroup dataset for Identification.
(Large)PersonGroup Person: Used to manage (Large)PersonGroup Person Faces for Identification.
In your case, it looks like you want to identify faces so you will use PersonGroup with PersonGroup Person items inside.
Step 1 - Generate your list of known faces
Details
So first you need to store your known faces in a group (called PersonGroup or LargePersonGroup given the number of items you have to store), in order to query these items with the image uploaded by your user. It will persist the items; there is no "24-hour limit" with those groups.
If you want to understand the differences between "normal" and "large-scale" groups, see reference here: there are some differences that you must consider, in particular regarding the training process.
So let's use a normal PersonGroup, not large. Please note that the number of items depends on your subscription:
Free-tier subscription quota: 1,000 person groups. Each holds up to 1,000 persons.
S0-tier subscription quota: 1,000,000 person groups. Each holds up to 10,000 persons.
Actions
Please also note that here I'm pointing to the API operations, but all these actions can be performed in any language through those API calls, or directly with the provided SDKs for some languages (see the list here).
Create a PersonGroup with the PersonGroup - Create operation. You will specify a personGroupId in your request, which you will use below
Then for each person of your known faces:
Create a Person with the PersonGroup Person - Create operation, giving the previous personGroupId in the request. You will get a personId GUID value as a result, like "25985303-c537-4467-b41d-bdb45cd95ca1"
Add the faces of this user to its newly created Person by calling the PersonGroup Person - Add Face operation, providing the personGroupId, the personId, additional optional information in the request, and your image URL in the body.
Note that for this operation:
Valid image size is from 1KB to 4MB. Only one face is allowed per image.
Finally, once you have added your persons with their faces:
Call PersonGroup - Train operation
Check the training status with PersonGroup - Get Training Status operation
Then you are ready to identify people based on this group!
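A minimal sketch of these Step 1 calls with direct REST requests is below; the subscription key, endpoint region, group id, person name, and image URL are all placeholders, so adjust them to your resource.

import time
import requests

KEY = "your-face-api-key"                                       # placeholder
BASE = "https://westus.api.cognitive.microsoft.com/face/v1.0"   # adjust region
HEADERS = {"Ocp-Apim-Subscription-Key": KEY}
GROUP_ID = "known_faces"                                        # your personGroupId

# PersonGroup - Create
requests.put(f"{BASE}/persongroups/{GROUP_ID}",
             headers=HEADERS, json={"name": "Known faces"})

# For each known person: PersonGroup Person - Create, then Add Face
person = requests.post(f"{BASE}/persongroups/{GROUP_ID}/persons",
                       headers=HEADERS, json={"name": "Jane Doe"}).json()
person_id = person["personId"]
requests.post(
    f"{BASE}/persongroups/{GROUP_ID}/persons/{person_id}/persistedFaces",
    headers=HEADERS, json={"url": "https://example.com/jane.jpg"})

# PersonGroup - Train, then poll Get Training Status until done
requests.post(f"{BASE}/persongroups/{GROUP_ID}/train", headers=HEADERS)
while True:
    status = requests.get(f"{BASE}/persongroups/{GROUP_ID}/training",
                          headers=HEADERS).json()["status"]
    if status in ("succeeded", "failed"):
        break
    time.sleep(1)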
Step 2 - Search this FaceId inside your known faces
Easy, just 2 actions here:
Call the Face - Detect operation to find the faces inside your image. The result will be an array of items containing faceId and other attributes
If you have detected faces, call the Face - Identify operation with the following parameters:
faceIds: the values from the detect operation
personGroupId: the Id of the group you have created during step 1
confidenceThreshold: your confidence threshold, like 0.8
maxNumOfCandidatesReturned: Number of candidates returned (between 1 and 100, default is 10)
Request sample:
{
"personGroupId": "sample_group",
"faceIds": [
"c5c24a82-6845-4031-9d5d-978df9175426",
"65d083d4-9447-47d1-af30-b626144bf0fb"
],
"maxNumOfCandidatesReturned": 1,
"confidenceThreshold": 0.8
}
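In code, the same two actions could look like the sketch below; the key, endpoint, and group id are the same placeholders as in the Step 1 sketch, and the uploaded image URL is again a placeholder.

import requests

KEY = "your-face-api-key"                                       # placeholder
BASE = "https://westus.api.cognitive.microsoft.com/face/v1.0"   # adjust region
HEADERS = {"Ocp-Apim-Subscription-Key": KEY}
GROUP_ID = "known_faces"

# Face - Detect: find the faces in the uploaded image
faces = requests.post(f"{BASE}/detect", headers=HEADERS,
                      json={"url": "https://example.com/uploaded.jpg"}).json()
face_ids = [f["faceId"] for f in faces]

# Face - Identify: match the detected faces against the trained group
if face_ids:
    results = requests.post(f"{BASE}/identify", headers=HEADERS, json={
        "personGroupId": GROUP_ID,
        "faceIds": face_ids,
        "maxNumOfCandidatesReturned": 1,
        "confidenceThreshold": 0.8,
    }).json()
    # Each result maps a faceId to candidate personIds with confidence scores
    print(results)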
Other questions
Face Storage costs 16.53/month for 1,000 images; does that mean the Face Detect API will store images in Azure Blob storage? If yes, will the faceId still be deleted after 24 hours?
The Face Detect API does not store the image. The storage cost is for using PersonGroups or FaceLists.
Face Storage stores images sized up to 4 MB each, whereas Face Detect says it can accept up to 6 MB
As said, storage is about persisting faces, as when you use PersonGroup Person - Add Face, where the limit is 4 MB, not 6.
I have a question regarding the Python API of Interactive Brokers.
Can multiple asset and stock contracts be passed into the reqMktData() function to obtain the last prices? (I can set snapshot=True in reqMktData to get the last price. You can assume that I have subscribed to the appropriate data services.)
To put things in perspective, this is what I am trying to do:
1) Call reqMktData, get last prices for multiple assets.
2) Feed the data into my prediction engine, and do something
3) Go to step 1.
When I contacted Interactive Brokers, they said:
"Only one contract can be passed to reqMktData() at one time, so there is no bulk request feature in requesting real time data."
Obviously one way to get around this is to use a loop, but this is too slow. Another way is multithreading, but this is a lot of work, plus I can't afford the extra expense of a new computer. I am not interested in either one.
Any suggestions?
You can only specify 1 contract in each reqMktData call, so there is no choice but to use a loop of some type. The speed shouldn't be an issue, as you can make up to 50 requests per second, maybe even more for snapshots.
The speed issue could be that you want too much data (>50/s), or that you're using an old version of the IB Python API; check connection.py for lock.acquire calls, I've deleted all of them. Also, if there has been no trade for >10 seconds, IB will wait for a trade before sending a snapshot, so test with active symbols.
However, what you should really do is request live streaming data by setting snapshot to false and just keep track of the last price in the stream. You can stream up to 100 tickers with the default limits, and you keep them separate by using unique ticker ids, as in the sketch below.
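A minimal sketch of that streaming approach with the official ibapi package is below, assuming TWS or IB Gateway is listening on 127.0.0.1:7497; the symbols are placeholders.

import threading
import time
from ibapi.client import EClient
from ibapi.wrapper import EWrapper
from ibapi.contract import Contract

class LastPriceTracker(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
        self.last_prices = {}   # ticker id -> last traded price
        self.symbols = {}       # ticker id -> symbol, for readability

    def tickPrice(self, reqId, tickType, price, attrib):
        # tickType 4 = LAST; keep only the most recent trade price
        if tickType == 4:
            self.last_prices[reqId] = price

def stock(symbol):
    c = Contract()
    c.symbol = symbol
    c.secType = "STK"
    c.exchange = "SMART"
    c.currency = "USD"
    return c

app = LastPriceTracker()
app.connect("127.0.0.1", 7497, clientId=1)
threading.Thread(target=app.run, daemon=True).start()
time.sleep(1)  # give the connection a moment to complete the handshake

# One reqMktData call per contract, each with a unique ticker id;
# snapshot=False keeps the stream open so tickPrice updates continuously.
for req_id, sym in enumerate(["AAPL", "MSFT", "IBM"], start=1):
    app.symbols[req_id] = sym
    app.reqMktData(req_id, stock(sym), "", False, False, [])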
When using Python to make a GET request to the Instagram API, I pass the required variables as shown below:
photos = api.media_search(lat=latitude, lng=longitude, distance=distance, count=count)
I have attempted to set the count parameter to over 100, but the API returns a maximum of 100 results.
Is this a limitation set for the API or am I doing something wrong?
The Instagram API documentation says there is a max value for count on each endpoint. From the docs:
On views where pagination is present, we also support the "count" parameter. Simply set this to the number of items you'd like to receive. Note that the default values should be fine for most applications - but if you decide to increase this number there is a maximum value defined on each endpoint.
However, I couldn't find any indication of that number in the documentation, neither for the media request nor for other requests. So I would assume that they don't guarantee any specific number.
They do specify that if the application is in sandbox mode, the data is restricted to the 20 most recent media.
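If you want to check the effective cap empirically, a minimal sketch against the raw v1 endpoint that media_search wraps is below; the access token and coordinates are placeholders, and the server enforces its own maximum regardless of the count you pass.

import requests

ACCESS_TOKEN = "..."  # placeholder token

resp = requests.get(
    "https://api.instagram.com/v1/media/search",
    params={
        "lat": 48.858844,          # placeholder coordinates
        "lng": 2.294351,
        "distance": 1000,
        "count": 200,              # requested, but capped server-side
        "access_token": ACCESS_TOKEN,
    },
)
print(len(resp.json()["data"]))   # tops out at 100 in the question's tests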