How to calculate the effort a customer type most likely requires of an FTE during a year - Excel

I know how many customers of each customer-size group each sales representative handles on an annual basis. Is it possible to calculate the likely time/effort required by each customer size based on this data set? Said differently, I'm trying to find out whether larger customers require more or less effort than smaller customers.
Is there a function or formula in Excel that will allow me to answer the above based on the data set below?
To add some context, in case it is helpful: there are 2,080 work hours in a year, and I'm assuming the reps spend all their time with the customers under their responsibility. I also expect that the largest customers require more time than the small ones, but I don't know how much more; that is what I'm trying to figure out. Some employees handle a lot more customers than others, so it's probably best to look at the relative difference between the customer sizes for each employee.
Customer size is rated from 0 (very small) to 7 (the largest).
Below is a small data extract of a larger data table.
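One way to set this up: each rep contributes one equation, (customers of size 0) × h0 + … + (customers of size 7) × h7 ≈ 2080, and solving the stacked equations by least squares yields the implied hours per customer of each size. In Excel that is what LINEST does when the intercept is forced to zero (third argument FALSE). Below is a minimal Python equivalent of the same least-squares fit; the counts are made up, since the actual extract isn't reproduced here.

import numpy as np

# One row per sales rep: how many customers of each size class 0..7 they
# handle. Made-up numbers; in practice you want far more reps than size
# classes for a stable fit.
counts = np.array([
    [12, 30, 18, 9, 4, 2, 1, 0],
    [20, 25, 10, 8, 6, 1, 0, 1],
    [ 8, 14, 22, 11, 3, 3, 2, 0],
    [15, 40,  9, 5, 2, 1, 1, 1],
])
hours = np.full(len(counts), 2080.0)  # every rep works 2080 hours/year

# Solve counts @ effort ~= hours in the least-squares sense (no intercept,
# matching LINEST(..., ..., FALSE) in Excel).
effort, *_ = np.linalg.lstsq(counts, hours, rcond=None)
print({size: round(h, 1) for size, h in enumerate(effort)})

The fitted coefficients are the estimated hours per year that one customer of each size consumes; comparing them answers whether size-7 customers cost more rep time than size-0 ones.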

Related

How to handle new investors and remove exiting investors from a fund

Here's the situation:
I've begun managing money for a handful of people in crypto markets. Each has a different-sized account. Right now I am managing all of them individually; it's a problem because it's not scalable and is time-intensive.
So my question is: how can I create something in Google Sheets to keep track of investor holdings in an overall fund? Say there are 10 investors, each with $10,000. Each has 10%. But then a new investor comes in with 5%. How can I program this into a spreadsheet?
Totally stuck. I need to be able to add/remove people and adjust the total assets and the holdings per person.
Running into a wall, and crypto forums are just exploding with activity. Help.
Add a time variable (e.g. NumberOfDaysInvested) and multiply it by the percentage each investor held originally, up until new investors join. That's a fairer way to divide the profits.
However, it remains a problem that new investors can partake in profits obtained earlier; i.e., after being an investor for just a day or more, you gain access to profits made by earlier investors.
I would suggest looking into dividing investors into funds and pools to separate them, while still introducing the time variable. It will reduce the overall work on your side and be fairer.
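For what it's worth, the standard way pooled funds solve exactly this "late joiners share earlier profits" problem is unit (NAV) accounting: track units per investor plus a single price per unit, so any gains made before a deposit are already baked into the price the newcomer pays. A minimal Python sketch of the bookkeeping (hypothetical names; the same logic ports to one sheet column of units and one cell holding total assets):

class Fund:
    def __init__(self):
        self.units = {}      # investor -> units held
        self.assets = 0.0    # current total fund value

    def unit_price(self):
        total = sum(self.units.values())
        return self.assets / total if total else 1.0

    def deposit(self, who, amount):
        # New money buys units at the *current* price, so it has no
        # claim on profits earned before the deposit.
        self.units[who] = self.units.get(who, 0.0) + amount / self.unit_price()
        self.assets += amount

    def withdraw_all(self, who):
        amount = self.units.pop(who) * self.unit_price()
        self.assets -= amount
        return amount

f = Fund()
for i in range(10):
    f.deposit(f"inv{i}", 10_000)      # 10 investors, 10% each
f.assets *= 1.5                       # the portfolio gains 50%
f.deposit("new", 25_000)              # newcomer buys in at the higher price
print(round(f.withdraw_all("inv0")))  # 15000: the old investor keeps the gain

Adding and removing people is then just issuing or redeeming units, and each investor's share of the fund at any moment is their units divided by total units.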

Utilising varying numbers of cells for a series of calculations

I am trying to account for the value of a certain amount of grain coming into and out of storage, based on the storage fees. The grain is stored as a lump sum. I am trying to calculate the value per tonne of outgoing grain using a first-in-first-out (FIFO) approach in Excel. I have attempted learning Python for this task, but I feel it will be a while before my coding ability (something pretty foreign to me) is at the level where I could perform this task.
For example, 400 tonnes might come in at a certain value and start accruing storage fees in May. Then in June, 500 tonnes might come in and start accruing fees from then. In July I might decide to take 600 tonnes out of storage (meaning the 400 tonnes stored since May come out with their accrued fees, plus 200 of the tonnes stored since June). Doing this leaves 300 tonnes of grain still in storage accruing fees; that spillover is then what gets drawn down first in the next calculation. The size of outtakes varies between being larger and smaller than the size of intakes.
I have tried using a sort of mini-grid that implements a series of IF checks to solve the issue, but it's difficult to automate when an outtake has to draw on multiple different intakes of grain (multiple rows in the column) and then move on to the next untouched cell in that column once the "spillover" from the previous outtake has been taken into account.
Is there a solution here that I'm missing, mainly around handling the differing number of cells required for a series of calculations?
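For what it's worth, this is exactly the job a first-in-first-out queue does, and it is only a few lines of Python; a minimal sketch, with a made-up fee rate and months numbered 5 = May, 6 = June, 7 = July:

from collections import deque

lots = deque()  # each entry: [tonnes remaining, month received]

def intake(tonnes, month):
    lots.append([tonnes, month])

def outtake(tonnes, month, fee_per_tonne_per_month=1.50):
    """Draw `tonnes` out FIFO and return the storage fees owed on them."""
    fees = 0.0
    while tonnes > 0:
        lot = lots[0]                 # oldest grain first
        take = min(tonnes, lot[0])
        fees += take * fee_per_tonne_per_month * (month - lot[1])
        lot[0] -= take
        tonnes -= take
        if lot[0] == 0:
            lots.popleft()            # lot exhausted; move to the next
    return fees

intake(400, 5)           # 400 t arrives in May
intake(500, 6)           # 500 t arrives in June
print(outtake(600, 7))   # July outtake: all of May's lot + 200 t of June's
print(lots)              # [[300, 6]]: the spillover still accruing fees

The queue makes the spillover bookkeeping explicit: a partially drawn lot simply stays at the front with its remaining tonnes, which is the part that is hard to express in a fixed grid of cells.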

"IF" function for analysis of hospital lab frequency

I work for a hospital that is part of a larger network. We were recently asked by our corporate overlords to address the use of a specific laboratory test. In general, this test should only be performed daily, which should be considered to correspond to a 24-hour period from the last draw. Sometimes, however, based on when people arrive at the hospital (e.g. 7pm), and in the interest of bundling labs into a single draw, it may be drawn sooner to coincide with routine testing (i.e. 5am). It would never otherwise be necessary to repeat it within a short (8-hour) window, particularly on the same day.
We have been asked to validate whether we are adhering to this general practice, as testing any more frequently than that, say within 12 hours of a previous test, has no real clinical value and thus adds unnecessary cost.
To address this issue I was given a dataset that, among other items, includes all instances the lab was performed, including collection date and time.
Please see the HIPAA-safe example below (to be clear: no real data, and the identifiers are not real); the actual dataset has over 4,174 entries corresponding to 1,328 unique persons. Everyone had at least one test performed; not everyone had more than one.
I THINK what I want to do is an IF formula that reads the antecedent cell to 1) check if it is the same person and 2) if so, subtract the time stamps to display the relevant difference in time, which I can then filter, build a histogram from, etc. Does this seem like a reasonable approach? Is there a preferable method to facilitate the analysis? Do any other forms of analysis come to mind?
=IF(B2=B1, D2-D1, "n/a")
Example data set with formula: [screenshot not reproduced]
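The formula approach is reasonable, provided the sheet is sorted by person and then by collection time. If the extract gets unwieldy in Excel, the same antecedent-row comparison is a few lines in pandas; a minimal sketch, assuming hypothetical column names MRN (the person, column B in the sheet) and CollectDT (the timestamp, column D):

import pandas as pd

df = pd.read_excel("lab_draws.xlsx")        # hypothetical file name
df = df.sort_values(["MRN", "CollectDT"])   # the IF formula assumes this order too
df["HoursSincePrev"] = (
    df.groupby("MRN")["CollectDT"].diff().dt.total_seconds() / 3600
)
print((df["HoursSincePrev"] < 12).sum(), "draws within 12 h of the previous one")

Note that D2-D1 in Excel yields a difference in days, so multiply by 24 (or format the cell as [h]:mm) to read it in hours.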
By the looks of it you should consider taking the values under "Results" into account, assuming there is a band of readings that might be considered 'normal'. The "one in 24 hours is sufficient" rule of thumb may well be appropriate for a series of values within the 'normal' band, but not so much so if readings are close to a 'danger level'.
That is, in some cases a higher-than-'standard' frequency of monitoring may be in the patient's interest, even if not hospital policy. So it may be worth separating the "less than 24 hours interval" readings into those where the higher frequency provided information of little value (e.g. readings remaining within a 'normal' band) and those that crossed into or out of the band and/or showed large changes in value. This, though, may be more a matter of statistical analysis than programming, and depends on whether any action might be taken as a result of such "extra" readings.
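To make that concrete, here is a standalone pandas sketch of the split suggested above, reusing the hypothetical columns from the earlier sketch plus a numeric Result column and a made-up 'normal' band:

import pandas as pd

LOW, HIGH = 0.5, 1.2   # hypothetical 'normal' band for this test

df = pd.read_excel("lab_draws.xlsx").sort_values(["MRN", "CollectDT"])
df["HoursSincePrev"] = df.groupby("MRN")["CollectDT"].diff().dt.total_seconds() / 3600
df["InBand"] = df["Result"].between(LOW, HIGH)

# Short-interval draws that still carried information: the reading moved
# into or out of the normal band relative to the previous draw.
prev_in_band = df.groupby("MRN")["InBand"].shift()
short = df["HoursSincePrev"] < 24
informative = df[short & prev_in_band.notna() & (df["InBand"] != prev_in_band)]
print(len(informative), "of", int(short.sum()), "short-interval draws crossed the band")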

How Can I Model Many Short Time Series Samples?

For example, let's say I have a new subject each month, and I measure each subject every day for the entire month. I then want to model these multiple strings of independent time series together, because I assume there is an underlying pattern that applies to all 12 subjects. However, a time series with an n of 30 is too short to model on its own, so is there some way to group these 12 time series for a parallel analysis?
I imagine the way to handle this is similar to how one might handle a time series with multiple breaks of unknown length. Unfortunately, I am unaware of how to deal with this type of data structure.
Any thoughts on where to even begin? What terms should I research?
Well, that depends on what you're interested in. It makes it a lot easier if we know what kind of data you have and what you're trying to analyse.
Trying to answer your question: if you assume that there is some underlying structure which is homogeneous for, say, 6 of the subjects, and different for the other half, you can just pool the two data sets and do some kind of group-mean analysis. If you're interested in a temporal change over the 12 months, then you need to assume that each subject is homogeneous across whatever variable you're measuring.
Normally, e.g. for time series in economics, what you're describing is called "censored" or "truncated" data.
If we want to measure the income of everyone in a country, we do this by checking electronic paychecks or something similar. But some people at the end of each tail may not have a visible income: poor people may be earning income in other ways, and rich people may want to hide some of theirs. This is censored data, and any advanced time-series statistics book will have something on it.
Truncated data is similar. Imagine income again: if we truncate everyone who makes less than $10,000 a year, this will "cut off the end" of your distribution. There are also remedies for this; again, check an advanced time-series book.
Hope this helped a bit.
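For what it's worth, the usual search terms for this data structure are "longitudinal data", "panel data", or "repeated measures", and a mixed-effects (random-effects) model is one standard way to pool many short series while letting each subject vary around a shared pattern. A minimal sketch with statsmodels, assuming a hypothetical long-format file with columns subject, day, and y:

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")   # one row per subject-day: subject, day, y

# Fixed effect of `day` = the pattern assumed shared by all 12 subjects;
# a random intercept and slope per subject absorb individual differences.
model = smf.mixedlm("y ~ day", df, groups=df["subject"], re_formula="~day")
print(model.fit().summary())

Each subject's 30 observations then contribute to a single fit, which is exactly the "group them together for a parallel analysis" the question asks about.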

MS Access 2003 - Calculating an average based on qty sold per site with supply %

Here is another question I have, about whether this scenario can be calculated in Access, or even at all for that matter:
I have a query that finds the TOP 5 items sold in a given timeframe, grouped by site. I use this to create a comparative chart between the sites for PPT presentations. I do a lot of these, but I foresee a problem with this presentation that they will take issue with, and it makes for bad metrics:
Some stores are bigger than others and get much more supply. So a straight aggregate total of the quantity of top-selling items, comparing the locations, is stacking the deck a little.
So if Site A gets 80% of the supply and sells 500 items, Site B gets 15% of the supply and sells 75, and Site C gets 5% of the supply and sells 50 items, then Site C actually has the best sales for its size. I have exactly what I need in the first chart (from my queries and such) to show the aggregate total, but what do I need to represent the idea mentioned above?
The factors that I have that go into this are:
ItemID - group by
Item - group by
qty sold - sum/descending (which is the variable that determines the Top 5)
Store/Location - Group By
and then I run a separate query to get the total deliveries (supply) to each site
I realize that this may just be a lack of mathematical understanding on my part, but can anyone help with this?
thanks
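On the math itself: one simple normalization is quantity sold divided by supply share. A quick sketch with the figures from the question, which confirms the intuition that Site C leads:

sites = {"A": (500, 0.80), "B": (75, 0.15), "C": (50, 0.05)}
for site, (qty, share) in sites.items():
    print(site, qty / share)   # A: 625.0, B: 500.0, C: 1000.0

In Access, the same ratio could be a calculated field in a query that joins the Top 5 query to the per-site supply totals, and the chart can then rank sites on that field instead of the raw quantity.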
The first issue that I see isn't about SQL savvy; it's how to serve your data customer. What does he or she want to see? Metrics is a term with a holy ring, and for good reason: it's supposed to be what is used for the big business decisions, and it's scary easy to measure the wrong thing.
So I'd make sure I know what my customer wants. If you can't model it on a spreadsheet, you won't be able to develop your reporting effectively.
Every deck of cards is loaded. You have to know how they want it loaded.
