Question regarding RANDBETWEEN in Excel and revenue

The Doobie Brothers garage band is planning a concert. Tickets are set at $20. Based on what other bands have done, they figure they should sell 350 tickets, but that could fluctuate; they put the standard deviation of sales at 50 tickets. No-shows are uniformly distributed between 1 and 10. Fixed costs are $5,000.
How profitable is the concert likely to be?
So I am able to enter the Excel formula for revenue, 350*20, and subtract 5000 for fixed costs, but I am having trouble deciphering how to account for the no-shows. I know that I have to use the RANDBETWEEN(1,10) formula, but I am not sure whether it gets multiplied or divided by something. Again, I am looking for what to do with the formula in the context of a profit equation.
If it helps, the mean for the number of tickets sold is 350 and the stdev is 50, so I used that to simulate the number of attendees: NORM.INV(RAND(),350,50).
Of course, this problem may not be realistic in real life because promoters keep the money, but for the purposes of the problem, just assume that no promoters exist here.
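A minimal sketch of one way to put the pieces together, on the reading that each no-show costs the band a $20 ticket (this interpretation of the model is an assumption, as is the rounding):
=(ROUND(NORM.INV(RAND(),350,50),0)-RANDBETWEEN(1,10))*20-5000
Each recalculation is one simulated concert; copying the formula down many rows and taking AVERAGE of the column estimates the expected profit, and STDEV.S of the column shows its variability.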

Related

Number of days for delivery and number of orders delivered in two separate columns. Is there a way to get summary statistics about orders?

I've had a bit of trouble explaining this, so please bear with me. I'm also very new to using Excel, so if there's a simple fix, I apologize in advance!
I have two columns: one lists the number of days, starting from 0 and increasing consecutively, and the other has the number of orders delivered. The two correspond to each other. For example, it would mean that there were 100 orders delivered in 1 day, 150 orders delivered in 2 days, 800 orders delivered in 3 days, etc.
Is there a way to get summary statistics (mean, median, mode, upper and lower quartiles) for the number of days it took for the average order to get delivered? The only way I can think of solving this is to manually punch in "1" 100 times, "2" 150 times, etc. into a new column and take median, mean, and upper & lower quartile from that, but that seems extremely inefficient. Would I use a pivot table for this? Thank you in advance!
I tried using the Data Analysis add-in and doing summary statistics that way, but it didn't work. It just gave me the mean, median, mode, and quartiles of each individual column: 3 for the median number of days for delivery and 300 for the median number of orders.
Method 1
The mean is just
=SUMPRODUCT(A2:A6,B2:B6)/SUM(B2:B6)
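For instance, using just the three example rows (100, 150, and 800 orders taking 1, 2, and 3 days), this gives (1*100 + 2*150 + 3*800)/1050 ≈ 2.67 days.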
The mode is the value with the highest frequency:
=INDEX(A2:A6,MATCH(MAX(B2:B6),B2:B6,0))
The quartiles and the median (or any other quantile, by varying the value of p) can be computed from first principles following this reference:
=LET(p,0.25,
   values,A2:A6,
   freq,B2:B6,
   N,SUM(freq),
   h,(N+1)*p,
   floorh,FLOOR(h,1),
   ceilh,CEILING(h,1),
   frac,h-floorh,
   cusum,SCAN(0,SEQUENCE(ROWS(values)),LAMBDA(a,c,IF(c=1,0,a+INDEX(freq,c-1)))),
   xlower,XLOOKUP(floorh-1,cusum,values,,-1),
   xupper,XLOOKUP(ceilh-1,cusum,values,,-1),
   xlower+(xupper-xlower)*frac)
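With p = 0.5 this reproduces the median, and with p = 0.25 or 0.75 it should match QUARTILE.EXC on the expanded data, since both use the exclusive (N+1)p quantile method.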
Method 2
If you don't like doing it this way, you can always expand the data like this:
=AVERAGE(XLOOKUP(SEQUENCE(SUM(B2:B6),1,0),SCAN(0,SEQUENCE(ROWS(A2:A6)),LAMBDA(a,c,IF(c=1,0,INDEX(B2:B6,c-1)+a))),A2:A6,,-1))
=MODE(XLOOKUP(SEQUENCE(SUM(B2:B6),1,0),SCAN(0,SEQUENCE(ROWS(A2:A6)),LAMBDA(a,c,IF(c=1,0,INDEX(B2:B6,c-1)+a))),A2:A6,,-1))
=QUARTILE.EXC(XLOOKUP(SEQUENCE(SUM(B2:B6),1,0),SCAN(0,SEQUENCE(ROWS(A2:A6)),LAMBDA(a,c,IF(c=1,0,INDEX(B2:B6,c-1)+a))),A2:A6,,-1),1)
=MEDIAN(XLOOKUP(SEQUENCE(SUM(B2:B6),1,0),SCAN(0,SEQUENCE(ROWS(A2:A6)),LAMBDA(a,c,IF(c=1,0,INDEX(B2:B6,c-1)+a))),A2:A6,,-1))
and
=QUARTILE.EXC(XLOOKUP(SEQUENCE(SUM(B2:B6),1,0),SCAN(0,SEQUENCE(ROWS(A2:A6)),LAMBDA(a,c,IF(c=1,0,INDEX(B2:B6,c-1)+a))),A2:A6,,-1),3)
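To avoid repeating the expansion in every formula, the shared array can be factored out with LET, for example (same A2:B6 layout assumed):
=LET(expanded,XLOOKUP(SEQUENCE(SUM(B2:B6),1,0),SCAN(0,SEQUENCE(ROWS(A2:A6)),LAMBDA(a,c,IF(c=1,0,INDEX(B2:B6,c-1)+a))),A2:A6,,-1),MEDIAN(expanded))
Swap MEDIAN for AVERAGE, MODE, or QUARTILE.EXC as needed.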

How to remove price outliers from dataset so they don't impact average price

In the bank we are using the average price from six vendors. But now and then we encounter wrong prices, because one or more vendors sometimes publish an incorrect price, and this affects the average price. I am looking for inspiration/suggestions on the most efficient/correct way, and which formula/logic to use, to exclude these price outliers. Below is an example of the problem; the prices in red text are considered outliers.
A remaining challenge is when only three vendors publish a price and one of these is an outlier. How do I remove/exclude this outlier?
Note: SAND = TRUE and FALSK = FALSE (Danish Excel).
Hope someone can help us out. Many thanks in advance ;-)
Kind regards
Soren Sig Mikkelsen
Solution 1:
The easiest way to remove an outlier is by defining the interquartile range.
The logic is very simple: calculate the interquartile range of your data and check whether any of the six values lies outside the upper/lower limit. If so, it's an outlier.
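As a sketch, assuming the six vendor prices sit in A2:F2 and using the conventional 1.5×IQR fences (both assumptions), an outlier-excluding average could look like:
=LET(prices,A2:F2,q1,QUARTILE.INC(prices,1),q3,QUARTILE.INC(prices,3),iqr,q3-q1,AVERAGEIFS(prices,prices,">="&(q1-1.5*iqr),prices,"<="&(q3+1.5*iqr)))
AVERAGEIFS then averages only the prices inside the fences.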
Solution 2:
Using the average and standard deviation.
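A comparable sketch, again assuming prices in A2:F2 and a two-standard-deviation cutoff (both assumptions):
=LET(prices,A2:F2,m,AVERAGE(prices),s,STDEV.S(prices),AVERAGEIFS(prices,prices,">="&(m-2*s),prices,"<="&(m+2*s)))
With only six prices, though, a single bad quote inflates the standard deviation itself, so the interquartile-range approach in Solution 1 tends to be more robust.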

Utilising varying numbers of cells for a series of calculations

I am trying to account for the value of a certain amount of grain coming in and out of storage, based on the storage fees accrued. The grain is stored in a lump sum. I am trying to calculate the value per tonne of outgoing grain using a first-in-first-out (FIFO) approach in Excel. I have attempted to learn Python for this task, but I feel it will be a while before my coding ability (coding is pretty foreign to me) is at the level where I could perform this task.
For example, 400 tonnes might come in at a certain value and start accruing storage fees in May. Then in June another 500 tonnes might come in and start accruing fees from there. In July I might decide to take 600 tonnes out of storage (meaning 400 tonnes drawn from the May intake and 200 tonnes from the June intake, each with its own accrued fees). Doing this leaves 300 tonnes of grain still in storage accruing fees, spillover which is then accounted for first in the next calculation. Outtakes vary between being larger and smaller than individual intakes.
I have tried utilising a sort of mini-grid that implements a series of IF checks to solve the issue, but it's difficult to automate when an outtake requires multiple different intakes of grain (multiple rows in the column) and then to move on to the next untouched cell in that column after taking into account the "spillover" from the previous outtake.
Is there a solution here that I'm missing, mainly around taking into account the differing numbers of cells required for a series of calculations?
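One possible direction (a sketch, not a tested solution): FIFO allocations can be computed without chains of IF checks by comparing cumulative totals. With intake tonnes in column A and a running total in column B (B2: =SUM($A$2:A2), filled down), and outtake tonnes and their running total set up the same way, the tonnes that a given outtake draws from a given intake are the overlap of the two cumulative intervals:
tonnes drawn = MAX(0, MIN(cumOut, cumIn) - MAX(prevCumOut, prevCumIn))
where cumOut/cumIn are the running totals on the current rows and prevCumOut/prevCumIn are the totals one row earlier (0 for the first row); all four names are placeholders for cell references. Checking against the example: the 600-tonne July outtake covers cumulative units 0-600, the May intake covers 0-400 and the June intake 400-900, so the overlaps are 400 and 200 tonnes, and each slice carries its own intake month's fees. Laid out as a grid (intakes across, outtakes down), one such formula per cell handles any mix of sizes without tracking spillover explicitly.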

Tests to Compare Sales Mix Percent between Periods

Background
I wish to compare menu sales mix ratios for two periods.
A menu is defined as a collection of products (e.g., a hamburger, a club sandwich, etc.).
A sales mix ratio is defined as a product's sales volume in units (e.g., 20 hamburgers) relative to the total number of menu units sold (e.g., 100 menu items sold). In the hamburger example, the sales mix ratio for hamburgers is 20% (20 burgers / 100 menu items). This represents the share of total menu unit sales.
A period is defined as a time range used for comparative purposes (e.g., lunch versus dinner, Mondays versus Fridays, etc.).
I am not interested in overall changes in the volume (I don't care whether I sold 20 hamburgers in one period and 25 in another). I am only interested in changes in the distribution of the ratios (20% of my units sold were hamburgers in one period and 25% were hamburgers in another period).
Because the sales mix represents a share of the whole, the mean for each period will be the same, the mean difference between the periods will always be 0%, and the sum total for each set of data will always be 100%.
Objective:
Test whether the sales distribution (sales mix percentage of each menu item relative to other menu items) changed significantly from one period to another.
Null Hypothesis: the purchase patterns and preferences of customers in period A are the same as those for customers in period B.
Example of potential data input:
[Menu Item] [Period A] [Period B]
Hamburger 25% 28%
Cheeseburger 25% 20%
Salad 20% 25%
Club Sandwich 30% 27%
Question:
Do common methods exist to test whether the distribution of share-of-total is significantly different between two sets of data?
A paired t-test would have worked if I were measuring a change in the number of actual units sold, but not (I believe) for a change in share of total units.
I've been searching online and a few text books for a while with no luck. I may be looking for the wrong terminology.
Any direction, be it search terms or (preferably) the actual names of appropriate tests, is appreciated.
Thanks,
Andrew
EDIT: I am considering a Pearson correlation test as a possible solution - setting aside that each row of data is a distinct menu item, the math shouldn't care. A perfect match (identical sales mix) would receive a coefficient of 1, and the greater the change, the lower the coefficient would be. One potential issue is that, unlike in a regular correlation test, the changes may be amplified because any change to one number automatically impacts the others. Is this a viable solution? If so, is there a way to temper the amplification issue?
Consider using a chi-squared goodness-of-fit test as a simple solution to this problem:
H0: the proportions of menu items for month B are the same as for month A
Ha: at least one of the proportions of menu items for month B is different from month A
There is a nice tutorial here.
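One practical note: a chi-squared test works on counts, not percentages, so convert the shares back into unit counts first. A minimal Excel sketch, with a hypothetical layout of period-A counts in B2:B5 and period-B counts in C2:C5: the expected period-B counts under H0 (period-A proportions scaled to period B's total) go in D2:D5 via
=SUM($C$2:$C$5)*B2/SUM($B$2:$B$5)
filled down, and the p-value is then
=CHISQ.TEST(C2:C5,D2:D5)
where a small p-value rejects H0.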

Monte Carlo Simulation using Excel Solver

I am trying to figure out the optimal number of products to make per day, by displaying the simulated values in a chart and then using the chart to find the optimum.
Cost of production: $4
Sold for: $12
Leftovers sold for: $1
So the ideal profit for a product is $8, but it could be -$3 if it's left over at the end of the day.
The daily demand of sales has a mean of 150 and a standard deviation of 30.
I have been able to generate a list of random demand values using NORMINV(RAND(),mean,std_dev), but I don't know where to go from here to figure out the amount sold from the number of products made that day.
The number sold on a given day is min(# produced, daily demand).
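In Excel terms (a sketch, with an assumed layout: production quantity in $D$1 and one simulated demand per row in column A, e.g. A2: =ROUND(NORMINV(RAND(),150,30),0)), one day's profit would be:
=12*MIN($D$1,A2)+1*MAX($D$1-A2,0)-4*$D$1
That is, $12 on each unit sold, $1 salvage on each leftover, minus $4 production cost on everything made.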
ADDENDUM
The decision variable is a choice you make: "I will produce 150 each day", or "I will produce 145 each day". You told us in the problem statement that daily demand is a random outcome with a mean of 150 and a SD of 30. Let's say you go with producing 150, the mean of demand. Since it's the mean of a symmetric distribution, half the time you will sell everything you made and have no losses, but in most of those cases you actually could have sold more and made more money. You can't sell products you didn't make, so your profit is capped at selling 150 on those days. The other half of the time, you won't sell all 150 and will take a loss on the unsold items, reducing your profit a bit. The actual profit on any given day is a random variable, because it is determined by random demand.
Since profit is random, you can calculate your average earnings across many days based on the assumption that you produce 150. You can also average earnings based on the assumption that you produce 140 per day, or 160 per day, or any other number. It sounds like you've been asked to plot those average earnings versus how many you decided to produce, and choose a production level that results in the highest long-term average earnings.
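A sketch of that last step, assuming the per-day profit formula above is filled down B2:B1001 for the production level in $D$1:
=AVERAGE(B2:B1001)
Recording this average for each candidate production level (say 100, 110, ..., 200) in a small table and charting average profit against quantity produced shows where the long-run average peaks.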
