I have to find out the time spent by a resource on a task.
Let's say:
There are 3 tasks (Task-A, Task-B, Task-C)
There are 3 users (User-A, User-B, User-C)
All the tasks have an Original Estimate of 8 hours
On Day 1, all the users have worked 2 hours on their respective tasks
So I should get a result of 2 hours for each user.
I am using an Azure DevOps work item query to calculate and display the results.
I don't understand what should be done to calculate the daily work done on a task.
For this issue, you can add the Completed Work and Remaining Work columns to the query. These show the hours spent on a task and the remaining work hours directly in the query.
Based on these two fields, you can calculate the daily hours spent on a work item.
Azure DevOps query to find daily hours spent on a work item
AFAIK, Azure DevOps does not focus on micromanaging or on low-value metrics.
So Azure DevOps doesn't track the time spent on a per-day basis; it keeps track of the total time spent. If you want a per-person, per-day value, you'll have to go through the iterations/work item history and calculate the running difference.
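A minimal sketch of that running-difference calculation against the work item revisions REST API (the organization, project, work item id and PAT below are placeholders; adjust the api-version to one your organization supports):

```python
# Sketch: per-day hours derived from the change history of Completed Work.
# ORG/PROJECT/WORK_ITEM_ID/PAT are placeholders; needs a PAT with work item read scope.
from collections import defaultdict

import requests

ORG, PROJECT, WORK_ITEM_ID, PAT = "my-org", "my-project", 123, "<personal-access-token>"

url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/"
       f"workItems/{WORK_ITEM_ID}/revisions?api-version=6.0")
revisions = requests.get(url, auth=("", PAT)).json()["value"]

hours_per_day = defaultdict(float)
previous = 0.0
for rev in revisions:  # revisions are returned oldest first
    fields = rev["fields"]
    completed = float(fields.get("Microsoft.VSTS.Scheduling.CompletedWork", previous))
    day = fields["System.ChangedDate"][:10]      # YYYY-MM-DD
    hours_per_day[day] += completed - previous   # running difference per day
    previous = completed

for day, hours in sorted(hours_per_day.items()):
    print(day, hours)
```

Repeating this per work item (or per assignee) gives the per-person, per-day numbers the question asks for.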
If you are really looking for per-person time reporting on work items, I suggest that you take a look at a third-party tool like Timetracker:
Timetracker
New to version 5.0! Individual, team, and custom reports powered by version 3 of the REST-based reporting API. 7+ customizable widget types that let you see the data that you need, how you need it. In addition to the six default reports in Reporting, users can create custom reports for individuals or teams.
Hope this helps.
You can use Analytics Views and Power BI. For example, you can add your custom view and then use it in Power BI.
Related
We run an eCommerce store and constantly create & test new Facebook Ads. Now I am looking for a good way to create a score for these ads. Basically it is a very simple problem, but I can't get my head around it.
For each Facebook Ad, I have this data:
Budget Spent
Impressions
Clicks
Product Page Visited
Purchases
The most important event obviously is the purchase. So in a perfect world with a huge amount of data per ad, I would simply calculate the cost per purchase (= Budget Spent / Purchases) and know which is my best ad. But here comes the problem...
For each ad we don't have much data. So let's say we have:
AD 1
50€ Budget Spent
10.000 Impressions
200 Clicks
50 Product Page Visits
2 Purchases
= 25€ per purchase
AD 2
50€ Budget Spent
10.000 Impressions
400 Clicks
130 Product Page Visits
1 Purchase
= 50€ per purchase
Simply based on the cost per purchase, I would choose AD1. But when I am looking at the data of the previous steps (clicks and product-page-visits), AD2 looks more promising.
How can I create a score value that tells me which ad will likely generate the better cost per purchase in the long run, also considering the values of the previous steps?
That score value should take the data of the previous steps into account. So if we have little purchase data, it should rely strongly on the earlier-step data, and if we have a lot of purchase data, it should rely less on the earlier-step data (and more on the actual cost per purchase).
I am thinking of something like:
Use the click rate as the base score (because we have a lot of data for this)
Then modify that score with the values of the following steps, but in a weighted way: the more data we have on the following steps, the more the score moves toward those values.
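One way to formalize this weighting (just a sketch of my own idea; prior_rate and prior_visits are assumed tuning values, not anything from Facebook's tooling) is to smooth the purchase rate per product-page visit toward a baseline rate, so ads with few purchases lean on the upstream funnel and ads with many purchases converge to their real cost per purchase:

```python
# Sketch: smoothed cost per purchase.
# prior_rate  = assumed baseline purchase rate per product-page visit
# prior_visits = how many "pseudo visits" the baseline is worth (tuned from history)
def estimated_cost_per_purchase(budget, visits, purchases,
                                prior_rate=0.03, prior_visits=100):
    # Beta-style smoothing: prior_visits extra visits at the baseline rate.
    smoothed_rate = (purchases + prior_rate * prior_visits) / (visits + prior_visits)
    expected_purchases = smoothed_rate * visits
    return budget / expected_purchases if expected_purchases else float("inf")

print("AD 1:", round(estimated_cost_per_purchase(50, 50, 2), 2))   # observed 25 EUR/purchase
print("AD 2:", round(estimated_cost_per_purchase(50, 130, 1), 2))  # observed 50 EUR/purchase
```

With these made-up priors AD 2 comes out ahead; as an ad accumulates real purchases, the smoothing washes out and the score converges to the plain cost per purchase.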
Thanks in advance for your help!
Best Regards
Patrick
I am a daily user of Azure and quite familiar with Logic Apps and other Azure tools.
I have 2 requirements:
1. Get an email every day containing a list of the top 25 resources that consumed the most credit on the previous day.
2. Get the total amount spent on the previous day by all the resources in my subscription.
I achieved part 2 by setting up a budget and reading the daily spend from it through the Budgets API.
I want to achieve part 1 of my requirement using any API provided by Azure.
Please help or drop any questions. I'll be happy to explain.
I think you are looking for the Usage Details API, see here.
Its List operation allows you to get the usage details, and you can filter by date (startDate and endDate in the headers). There is a $top parameter to limit the number of results, but it looks like they are not ordered by amount, so you may have to do your own sort to limit the output to the 25 highest costs.
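A rough sketch of that flow (a hedged example rather than a verified script: the usageDetails filter syntax, the property names such as cost and instanceName, and the api-version differ between versions, so check them against the docs for the version you use):

```python
# Sketch: list yesterday's usage details, sum cost per resource, keep the top 25.
# Assumes a bearer token with Cost Management read access on the subscription.
from collections import defaultdict
from datetime import date, timedelta

import requests

SUBSCRIPTION_ID = "<subscription-id>"
TOKEN = "<bearer-token>"
yesterday = (date.today() - timedelta(days=1)).isoformat()

url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
       f"/providers/Microsoft.Consumption/usageDetails")
params = {
    "api-version": "2019-10-01",
    "$filter": (f"properties/usageStart ge '{yesterday}' "
                f"and properties/usageEnd le '{yesterday}'"),
}
headers = {"Authorization": f"Bearer {TOKEN}"}

cost_per_resource = defaultdict(float)
while url:
    data = requests.get(url, params=params, headers=headers).json()
    for item in data.get("value", []):
        props = item["properties"]
        cost_per_resource[props.get("instanceName", item["name"])] += props.get("cost", 0.0)
    url, params = data.get("nextLink"), None  # follow paging; filter is baked into nextLink

# The API does not order by amount, so sort locally and keep the 25 highest.
for name, cost in sorted(cost_per_resource.items(), key=lambda kv: kv[1], reverse=True)[:25]:
    print(f"{cost:12.2f}  {name}")
```

The resulting list can then be dropped into the daily email, for example from a Logic App or an Azure Function on a timer.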
I need to keep a 28-day history for some dashboard data. Essentially I have an event/action that is recorded through our BI system. I want to count the number of events, and the distinct users who perform that event, for the past 1 day, 7 days and 28 days. I also use grouping sets (cube) to get fully segmented data by country/browser/platform etc.
The old way was to do this by keeping a 28-day history per user, for all segments. So if a user accessed the site from mobile and desktop every day for all 28 days they would have 54 rows in the DB. This ends up being a large table, and it is time consuming even to calculate approx_distinct, let alone distinct. But the issue is that I also wish to calculate approx_percentiles.
So I started investigating the use of HyperLogLog: https://prestodb.io/docs/current/functions/hyperloglog.html
This works great; it is much more efficient to store the sketches daily rather than the entire list of unique users per day. As I am using approx_distinct, the values are close enough and it works.
I then noticed a similar function for medians: qdigest.
https://prestodb.io/docs/current/functions/qdigest.html
Unfortunately the documentation on this page is not nearly as good as on the previous ones, so it took me a while to figure it out. It works great for calculating daily medians, but it does not work if I want to calculate the median actions per user over a longer time period. The HyperLogLog examples demonstrate how to calculate approx_distinct users over a time period, but the qdigest docs do not give such an example.
When I try something similar to the HLL example for date ranges with qdigest, I get results similar to the 1-day results.
Because you need medians that are aggregated (summed) across multiple days on a per-user basis, you'll need to perform that aggregation prior to insertion into the qdigest in order for the 7- and 28-day per-user counts to work. In other words, the units of the data need to be consistent: if daily values are being inserted into the qdigest, you can't use that qdigest for 7- or 28-day per-user counts of the events.
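A small plain-Python illustration of why the units matter (standing in for the SQL, since the exact qdigest query depends on your schema): merging daily sketches can only ever give a median of daily values; the per-user 7- or 28-day sums have to exist before anything goes into the digest.

```python
# Toy data: events[(user, day)] -> count of the action on that day.
from collections import defaultdict
from statistics import median

events = {("alice", d): 2 for d in range(28)}
events[("bob", 0)] = 10

# Wrong unit: feeding daily per-user counts into the digest yields the
# median *daily* count, no matter how many days of sketches are merged.
print(median(events.values()))            # 2  (a per-day number)

# Right unit for a 28-day report: sum per user first, then digest/median.
per_user_28d = defaultdict(int)
for (user, _day), count in events.items():
    per_user_28d[user] += count
print(median(per_user_28d.values()))      # 33 (median of 28-day totals: 56 and 10)
```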
Consider an analytics requirement where you need to find repeat customers for a date range. Repeat customers for a date range are defined as customers who used the service within 3 * (the date range interval) before the start of the range and also used the service within the given date range.
For example, the repeat customers for this week are all customers who used the service in the 3 weeks before the start of this week and also used it this week.
I am using InfluxDB. I haven't decided on the series yet; I am looking for input on how I can define a series such that I can use the operations available in InfluxDB to obtain this analytics result.
The data available to me is the timestamp at which the user used the service, user_id, service_category, service_instance_id, and a JSON dump of further details about the service.
Maybe my thought process is limited; I need some guidance on how to approach this, and any input is welcome.
So I thought about this and came to a decent solution: I save the last time a user visited along with each new entry. That way, at least one reference exists for any time period in which the user is a repeat.
This is similar to a linked list, except that we have direct access to time-based filtering of the nodes.
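A sketch of what writing such a point might look like with the influxdb (1.x) Python client; the measurement and field names are my own placeholders, and the key part is carrying the previous visit's timestamp on every new entry:

```python
# Sketch: record each usage together with the user's previous visit time,
# so a single point is enough to decide whether the user is a repeat for a range.
from datetime import datetime, timezone

from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="analytics")

def record_usage(user_id, service_category, service_instance_id,
                 last_visit, details_json):
    point = {
        "measurement": "service_usage",          # placeholder name
        "tags": {
            "user_id": user_id,
            "service_category": service_category,
        },
        "time": datetime.now(timezone.utc).isoformat(),
        "fields": {
            "service_instance_id": service_instance_id,
            # Back-pointer to the previous visit, like the "next" link of a list node.
            "last_visit": last_visit.isoformat() if last_visit else "",
            "details": details_json,
        },
    }
    client.write_points([point])
```

A repeat-customer query for a range then only needs points whose time falls in the range and whose last_visit falls within the preceding 3x interval.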
I have a table that tracks the dispatching of personnel. The table has the employee name, the date the person went out, and the date they returned.
The table has hundreds of entries from 1988 to current.
In Excel I track the cumulative count per day (of the year) of how many people have been sent out, and I also track the number of people out on any given day. The table lists the Month & Day in the first column (every day of the year, including leap days) and the years on the first row. There is data for every date: a zero is entered until the first person is sent out that year, then the count rises as there are more dispatches; for the number of people out each day, it shows zero if no one is out that day or, if there were say 5 people out, it shows "5" for that day. I then use the data in Excel to construct a graph that shows the number of dispatches on the y axis and the day of the year on the x axis (along with the current year's number, the average, and the max over the 27-year history). Currently I just track this manually (I keep a running count of each and enter it into Excel by hand).
I would like to build a query of my Access data that returns the same information so I can import it into my Excel spreadsheet: one query that shows the day & month in the first column and the years along the top row, and for each day a cumulative count for that year of how many people have been sent out; another query with the day & month in the first column and the years along the top, and a count of how many people were out on that particular day of that particular year. There shouldn't be any gaps (every day has data, even if it is "0"). I would then import those queries into Excel to replace the manual tracking I am doing now.
I know how to construct the Excel side (I have that running already) and how to import data from Access into Excel; what I need to know is how to construct these 2 Access queries.
Any help/ideas on how to construct those 2 queries would be greatly appreciated!
I'd recommend that you migrate this app to a web based solution that uses a real database - SQL Server or MySQL, not Access.
"Desk drawer software" is what I call homegrown apps that someone creates for themselves to perform some small task that eventually become integral to running a business and grow out of hand. Your truck factor is 1: if anything happened to you, no one would know how to do this function. The software may not be backed up or checked into a source code management system. There's no QA. There's no way to migrate new features to production: if you alter the app, then that is what you have.
I'd recommend a web app to mitigate all the risks I've described:
You have to deploy a web app to a server, which takes it off your desktop and puts it in a central place where anyone who's authorized can access it.
Separates database from display issues.
Makes you think about how to archive historical data. Partitioning by year makes sense.
Likely you'll put this in a source code management system like Subversion or Git.
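Whichever database the data ends up in, the two result sets the question describes are a straightforward computation. Not the Access queries themselves, but as a rough sketch of the shape of that computation (pandas here, with placeholder column names Employee, DateOut, DateReturned):

```python
# Sketch of the two tables described in the question, computed with pandas.
# Column names (Employee, DateOut, DateReturned) are placeholders for the real table.
import pandas as pd

dispatch = pd.read_csv("dispatch.csv", parse_dates=["DateOut", "DateReturned"])

# Every month-day of the year, including Feb 29 (2000 is a leap year), so there are no gaps.
all_days = pd.date_range("2000-01-01", "2000-12-31").strftime("%m-%d")

# Table 1: cumulative number of dispatches per day of the year, one column per year.
starts = dispatch.assign(Year=dispatch.DateOut.dt.year,
                         MonthDay=dispatch.DateOut.dt.strftime("%m-%d"))
daily_starts = starts.pivot_table(index="MonthDay", columns="Year",
                                  values="Employee", aggfunc="count", fill_value=0)
cumulative = daily_starts.reindex(all_days, fill_value=0).cumsum()

# Table 2: number of people out on each day of each year.
rows = [{"Year": d.year, "MonthDay": d.strftime("%m-%d")}
        for _, r in dispatch.iterrows()
        for d in pd.date_range(r.DateOut, r.DateReturned)]
out_per_day = (pd.DataFrame(rows)
               .pivot_table(index="MonthDay", columns="Year", aggfunc="size", fill_value=0)
               .reindex(all_days, fill_value=0))

# Both frames can be exported and fed to the existing Excel charts.
cumulative.to_csv("cumulative_dispatches.csv")
out_per_day.to_csv("people_out_per_day.csv")
```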