What is the probability that more than 100 people arrive at the station, if interarrival times are exponentially distributed with mean 2 minutes?

So, I got this problem:
"You have people arriving at the bus station with exponentially distributed interarrival times.
You know that the mean of the distribution is 2 minutes.
What is the probability that more than 100 people will arrive within 3 hours?"
I figured out that, since more than 100 arrivals in 180 minutes means the observed mean interarrival time is under 180/100 = 1.8 minutes, we have to calculate the probability of the actual mean being under 1.8 minutes.
But I don't really know how to solve this.
Is it something to do with confidence intervals?

Arrivals with independent exponential interarrival times form a Poisson process. A mean interarrival time of 2 minutes corresponds to a rate of λ = 0.5 arrivals per minute, so the number of arrivals N in 3 hours (180 minutes) is Poisson-distributed with mean λt = 0.5 * 180 = 90. The probability you want is
P(N > 100) = 1 - P(N <= 100) = 1 - Σ_{k=0}^{100} e^(-90) * 90^k / k! ≈ 0.135.
Equivalently, "more than 100 arrivals in 180 minutes" is the same event as "the sum of the first 101 interarrival times is at most 180 minutes", and that sum has a gamma (Erlang) distribution with shape 101 and scale 2 minutes, so its CDF at 180 gives the same number. Note that 1 - e^(-0.5*1.8) ≈ 0.5934 is the probability that a single interarrival time is under 1.8 minutes, which is a different event.
You can refer to this link for the theory and a few examples.
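For a numerical check, here is a short Python sketch that computes the tail probability both ways (scipy is an assumed dependency; any Poisson/gamma CDF implementation would do):

```python
from scipy.stats import poisson, gamma

# Number of arrivals in 180 minutes is Poisson with mean
# lambda * t = 0.5 arrivals/min * 180 min = 90.
p_poisson = 1 - poisson.cdf(100, mu=90)    # P(N > 100)

# Same event, other view: more than 100 arrivals in 180 minutes means the
# sum of the first 101 exponential interarrival times (gamma/Erlang with
# shape 101 and scale 2 minutes) is at most 180 minutes.
p_gamma = gamma.cdf(180, a=101, scale=2)   # P(S_101 <= 180)

print(p_poisson, p_gamma)                  # both ~ 0.135
```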

Related

Calculating "Reliability" of Cricket Stats

I'm not a statistician so please forgive the (mis)use of terminology.
I am calculating strike rates for batsmen in cricket. For non-cricket fans, this is the number of runs scored (broadly the same as points in other sports) per 100 balls faced.
So if a batsman has faced 100 balls in his career and scored 150 runs, his strike rate would be 150 (runs/balls * 100).
I now want to calculate how likely it is that the stat is an accurate representation of the batsman's ability.
The more balls a batsman has faced, the more likely it is that the resulting stat is accurate, but how do I calculate how reliable it is?
Any help would be appreciated.
Thanks
You have a point estimate, and a confidence interval can help you quantify your uncertainty. For your example of 150 runs off 100 balls, do you have the runs scored off each individual ball? If so, you can construct a confidence interval using the standard formula at your chosen level of confidence.
E.g. X bar +/- t_{99, 1-alpha/2} * s/sqrt(n) is a (1-alpha)-level confidence interval for the average runs per ball (here n = 100, so the t quantile has n-1 = 99 degrees of freedom).
Multiplying both endpoints by 100 gives a CI for the average runs per 100 balls, i.e. the strike rate.
Unfortunately, if you have no information beyond the aggregate 150 runs off 100 balls, there is not much you can do.
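A minimal sketch of that calculation in Python (the per-ball runs here are made-up illustrative data, not real stats, and numpy/scipy are assumed dependencies):

```python
import numpy as np
from scipy import stats

# Hypothetical runs scored off each ball faced (0, 1, 2, 4 or 6 in cricket).
runs_per_ball = np.random.default_rng(0).choice(
    [0, 1, 2, 4, 6], size=100, p=[0.45, 0.30, 0.10, 0.10, 0.05]
)

n = len(runs_per_ball)
mean = runs_per_ball.mean()
se = runs_per_ball.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)   # two-sided 95% t quantile, 99 df

# CI for mean runs per ball, scaled by 100 to give a strike-rate CI.
lo, hi = 100 * (mean - t_crit * se), 100 * (mean + t_crit * se)
print(f"strike rate: {100 * mean:.1f}, 95% CI: ({lo:.1f}, {hi:.1f})")
```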

Get minimum period of non-equidistant signal

I have non-equidistant timestamps and corresponding values, like:
sample_timestamp powerdemand_in_kw_avg_sum
0 1.539009e+09 2.164672e+01
1 1.539009e+09 3.483988e+01
2 1.539010e+09 1.319316e+01
3 1.539014e+09 1.818989e-15
4 1.539021e+09 2.061695e+00
[...]
I would like to transform it into an equidistant signal. According to the Nyquist–Shannon sampling theorem, I should choose the sampling period to be smaller than half the minimum period present in the signal (i.e., a sampling frequency of more than twice the highest frequency). How can I get the minimum period (using Python)?
Sorry if there is some technical incorrectness; I am new to telecommunications.
To get the minimum difference between two consecutive timestamps, you can use the .shift method:
(df['sample_timestamp'] - df['sample_timestamp'].shift(1)).min()
I'm not an expert in telecommunications, so the rest is up to you.
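A self-contained version of that idea (a sketch; the data values and the choice of grid step are illustrative assumptions, and .diff() is shorthand for subtracting the shifted column):

```python
import pandas as pd

# Illustrative data in the same shape as the question's DataFrame.
df = pd.DataFrame({
    "sample_timestamp": [1.539009e9, 1.539009e9 + 120, 1.539010e9, 1.539014e9],
    "powerdemand_in_kw_avg_sum": [21.6, 34.8, 13.2, 0.0],
})

# Minimum gap between consecutive samples, in seconds.
min_period = df["sample_timestamp"].diff().min()
print(min_period)

# One way to build an equidistant signal: index by time, then resample onto
# a uniform grid (here half the minimum period) and interpolate the gaps.
ts = pd.to_datetime(df["sample_timestamp"], unit="s")
s = pd.Series(df["powerdemand_in_kw_avg_sum"].values, index=ts)
step = pd.to_timedelta(min_period / 2, unit="s")
uniform = s.resample(step).mean().interpolate()
```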

How to find the maximum and minimum value of a random normal or log-normal distribution?

This is my first question on Stack Overflow, so forgive me if I don't conform to some of the norms. That being said, this is my problem:
Edited:
I have a continuous variable of which I can only measure some data points, and I need to estimate the probability distribution of the maximum and minimum values between each pair of data points. I have the standard deviation, and the variable follows a lognormal distribution; this means the average is a log-mean and the standard deviation is multiplicative.
Example:
Assuming a car's speed is normally distributed and there are no traffic laws: at 10 AM the car is travelling at 40 MPH, at 11 AM it is travelling at 60 MPH, and the standard deviation corresponds to a 10% change of its speed every hour. There is a one-hour blackout in between where you have no information, but you should be able to estimate: the most probable highest speed the car achieved in this time, the most probable lowest speed, and somehow a probability distribution of everything in between. The single least unlikely scenario is that its speed at 10 AM was its lowest and its speed at 11 AM was its highest in the period, although if the speed is truly random at every scale, excursions outside that range cannot be ruled out. The outcome is a lognormal distribution which could be used to simulate scenarios regarding that car.
I'm not an expert in statistics and I understand only the basics and some theory; how should I address this problem?
I'm using Python 3.x, in case you know a way to address the problem there.
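One way to explore this numerically is Monte Carlo: model the logarithm of the speed between the two observations as a Brownian bridge (one modelling assumption consistent with the lognormal/multiplicative description above; all parameters below are illustrative) and read the distributions of the running maximum and minimum off the simulated paths:

```python
import numpy as np

rng = np.random.default_rng(42)
v0, v1 = 40.0, 60.0      # observed speeds at 10 AM and 11 AM (MPH)
sigma = 0.10             # assumed 10% multiplicative std dev per hour
n_steps, n_sims = 360, 10_000

# Brownian bridge in log-space, pinned at log(v0) and log(v1).
t = np.linspace(0.0, 1.0, n_steps + 1)
dW = rng.normal(0.0, sigma * np.sqrt(1 / n_steps), size=(n_sims, n_steps))
W = np.concatenate([np.zeros((n_sims, 1)), dW.cumsum(axis=1)], axis=1)
bridge = W - t * W[:, [-1]]                      # forces both endpoints to match
log_path = np.log(v0) + t * (np.log(v1) - np.log(v0)) + bridge
paths = np.exp(log_path)                         # lognormal speed paths

run_max = paths.max(axis=1)
run_min = paths.min(axis=1)
print("median max speed:", np.median(run_max))
print("median min speed:", np.median(run_min))
# Histograms of run_max and run_min approximate the distributions asked about.
```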

Descriptive statistics, percentiles

I am stuck on a statistics assignment and would really appreciate some qualified help.
We have been given a data set and are asked to find the 10% of firms with the lowest rate of profit, in order to decide what profit rate is the maximum in order to be considered for a program.
The data has:
Mean = 3.61
St. dev. = 8.38
I am thinking that I need to find the 10th percentile, and if I run the PERCENTILE function in Excel it returns -4.71.
However, I tried to run the numbers by hand using the z-score,
where z = -1.28:
z = (x - μ)/σ
Solving for x:
x = μ + zσ
x = 3.61 + (-1.28 * 8.38) ≈ -7.12
My question is: which of the two methods is the right one, if either?
I am thoroughly confused at this point; I hope someone has the time to help.
Thank you
This is the assignment, btw:
"The Danish government introduces a program for economic growth and will help the 10 percent of the firms with the lowest rate of profit. What rate of profit is the maximum in order to be considered for the program, given the mean and standard deviation found above and assuming that the data is normally distributed?"
The Excel formula gives the actual, empirical 10th percentile of your sample.
If the data you have includes all possible instances of whatever you're trying to measure, then go ahead and use that.
If you're sampling from a population and your sample size is small, use a t distribution or increase your sample size. If your sample size is healthy and your data are normally distributed, use z-scores.
The short story is that the two different outcomes suggest the data you've supplied are not normally distributed.
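To see the two methods side by side, here is a short Python sketch (the sample is synthetic and deliberately skewed, since the original data set isn't available; numpy/scipy are assumed dependencies):

```python
import numpy as np
from scipy import stats

# Synthetic, right-skewed "profit rate" data, rescaled to roughly match
# the summary statistics from the question (mean 3.61, sd 8.38).
rng = np.random.default_rng(1)
profits = rng.lognormal(mean=1.0, sigma=1.1, size=500)
profits = (profits - profits.mean()) / profits.std(ddof=1) * 8.38 + 3.61

# Method 1: empirical 10th percentile (what Excel's PERCENTILE returns).
print(np.percentile(profits, 10))

# Method 2: normal-theory percentile, x = mu + z*sigma with z = Phi^-1(0.10).
print(stats.norm.ppf(0.10, loc=3.61, scale=8.38))   # about -7.13

# When the data are skewed, the two disagree, as in the question.
```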

Verify that transmit power is within certain limits of its expected value over 95% of test measurements

I have a requirement where I have to verify that the transmit power out of a device, as measured at its connector, is within 2 dB of its expected value over 95% of test measurements.
I am using a signal analyzer to analyze the transmitted power. I only get the average power value, min, max, and stdDev of the measurements, not the individual power measurements.
Now, the question is how I would verify the "95% thing" using the average power, min, max, and stdDev. It seems that I can use a normal distribution to find the 95% confidence level.
I would appreciate if someone can help me on this.
Thanks in anticipation
The way I'm reading this, it seems you are a statistical beginner, so if I'm wrong there, the rest of this answer will probably be insultingly basic, and I'm sorry.
Anyway, the idea is that if a dataset is normally distributed, and all the observations are independent of one another, then 95% of the data points will fall within 1.96 standard deviations of the mean.
Do you get identical estimates of average power every time you measure, or are there some slight random differences from reading to reading? My guess is that it's the second. If you were to measure the power a whole bunch of times, and each time you plotted your average power value on a histogram, then that histogram of sample means would have the shape of a bell curve. This bell curve of sample means would have its own mean and standard deviation, and if you have thousands or millions of data points going into the calculation of each average power reading, it's not horrible to assume that it is a normal distribution. The explanation for this phenomenon is known as the 'central limit theorem', and I recommend both the Khan Academy's presentation of it and the Wikipedia page on it.
On the other hand, if your average power is the mean of some small number of data points, for instance n = 5 or n = 30, then the assumption of a normal distribution of sample means can be pretty bad. In this case, your 95% confidence interval around the average power goes from qt(0.975, n-1)*SD/sqrt(n) below the average to qt(0.975, n-1)*SD/sqrt(n) above the average, where qt(0.975, n-1) is the 97.5th percentile of the t distribution with n-1 degrees of freedom and SD is your measured standard deviation.
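As a numerical sketch of the normal-theory idea above (what fraction of individual measurements fall within +/-2 dB of the expected value), assuming the individual measurements are approximately normal; the numbers are placeholders and scipy is an assumed dependency:

```python
from scipy import stats

expected_dbm = 20.0   # hypothetical expected transmit power (dBm)
mean_dbm = 20.3       # average reported by the signal analyzer
std_dbm = 0.8         # stdDev reported by the signal analyzer
tolerance_db = 2.0

# Under a normal model, the fraction of individual measurements falling
# within expected +/- 2 dB is the probability mass between those limits.
lo = (expected_dbm - tolerance_db - mean_dbm) / std_dbm
hi = (expected_dbm + tolerance_db - mean_dbm) / std_dbm
coverage = stats.norm.cdf(hi) - stats.norm.cdf(lo)

print(f"estimated fraction within +/-2 dB: {coverage:.3f}")
print("requirement met" if coverage >= 0.95 else "requirement not met")
```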
