I have calculated a set of binomial distributions giving the probability of finding n objects in a sample of N objects.
I calculated this using the percent point function (PPF), a.k.a. the inverse cumulative distribution function, from the scipy.stats.distributions package.
Now that I want to plot the probability distribution, a question emerges: which Python package and function should I use for this? I've found a few useful resources, such as http://goo.gl/Q2UjxX, but I am still no closer (most likely I am missing something).
Say n = 2 and N = 10. How would I go about creating the plot below?
The x-axis shows the range of values from 0 to n/N (where N > n always).
The y-axis shows the probability of n/N.
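Not knowing exactly what the PPF-based values look like, here is a minimal sketch of one way such a plot could be drawn with scipy.stats.binom and matplotlib; the success probability p is an assumption, since it is not stated in the question:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import binom

    N = 10     # sample size
    p = 0.2    # assumed probability of success per draw (not given in the question)

    k = np.arange(0, N + 1)           # possible counts n = 0..N
    pmf = binom.pmf(k, N, p)          # P(exactly n objects in a sample of N)

    plt.bar(k / N, pmf, width=0.05)   # x-axis expressed as the ratio n/N
    plt.xlabel("n / N")
    plt.ylabel("probability")
    plt.show()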
Thank you all for your time.
I have an experimental dataset of values (y, x1, x2, w), where y is the measured quantity, x1 and x2 are the two independent variables, and w is the error of each measurement.
The function I've chosen to describe my data is
These are my tasks:
1) Estimate values of bi
2) Estimate their standard errors
3) Calculate predicted values of f(x1, x2) on a mesh grid and estimate their confidence intervals
4) Calculate predicted values of … and the definite integral …, and their confidence intervals, on a mesh grid
I have several questions:
1) Can all of my tasks be solved by weighted least squares? I've solved tasks 1-3 using WLS in matrix form by linearisation of the chosen function (see the sketch below this post), but I have no idea how to solve step 4.
2) I've performed Monte Carlo simulations to estimate bi and their standard errors. I generated perturbed values y'i from a normal distribution with mean yi and standard deviation wi. I repeated this operation N = 5000 times. For each perturbed dataset I estimated b'i, and from the 5000 values of b'i I calculated mean values and their standard deviations. In the end, the bi estimated from the Monte Carlo simulation coincide with those found by WLS. Am I correct that the standard deviations of b'i must be divided by the number of degrees of freedom to obtain the standard errors?
3) How do I estimate confidence bands for predicted values of y using the Monte Carlo approach? I generated a bunch of perturbed bi values from a normal distribution, using their BLUE as the mean and their standard deviations. Then I calculated lots of predicted values of f(x1, x2) and found their means and standard deviations. The values of f(x1, x2) found by WLS and MC coincide, but the standard deviations found from MC are 5-45 times higher than those from WLS. What is the scaling factor that I'm missing here?
4) It seems that some of the parameters b are not independent of each other, since there are only 2 independent variables. Should I take this into account in question 3 when I generate the bi values? If yes, how can this be done? Should I use a chi-squared test to decide whether generated values of bi are suitable for further calculations, or whether they should be rejected?
In fact, I not only want to solve the tasks mentioned earlier, but also to compare the two methods of regression analysis. I would appreciate any help and suggestions!
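The model and data below are hypothetical placeholders (the actual function from the post is not reproduced); this is only a minimal sketch of the WLS-in-matrix-form setup and the Monte Carlo perturbation described above, assuming the weights come from known measurement errors wi:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical data and model: f(x1, x2) = b0 + b1*x1 + b2*x2 + b3*x1*x2
    # (the actual function from the post is not reproduced here).
    x1 = rng.uniform(0.0, 1.0, 50)
    x2 = rng.uniform(0.0, 1.0, 50)
    w = np.full(50, 0.1)                                       # measurement errors (std. dev.)
    y = 1.0 + 2.0 * x1 - 0.5 * x2 + 0.3 * x1 * x2 + rng.normal(0.0, w)

    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])   # design matrix
    W = np.diag(1.0 / w**2)                                    # weights = 1 / w_i^2

    # Weighted least squares in matrix form: b = (X^T W X)^-1 X^T W y
    cov_b = np.linalg.inv(X.T @ W @ X)                         # parameter covariance matrix
    b = cov_b @ X.T @ W @ y
    se_b = np.sqrt(np.diag(cov_b))                             # standard errors of the b_i

    # Monte Carlo: perturb y within its errors, refit, and look at the spread of b'_i.
    B = np.array([cov_b @ X.T @ W @ (y + rng.normal(0.0, w)) for _ in range(5000)])
    print(b, se_b)
    print(B.mean(axis=0), B.std(axis=0))                       # should roughly reproduce b and se_b

In this particular setup the spread of the refitted parameters is already on the scale of a standard error, with no extra division by degrees of freedom, because the wi are treated as known measurement errors.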
I calculated the correlation between two dependent variables (size of plot/house vs. cost); the correlation stands at 0.87. I want to use this index to measure the increase or decrease in cost if size is increased or decreased. Is it possible using correlation? How?
Correlation only tells us how strongly two variables are linearly related based on the data we have, but it does not provide a method to calculate the value of one variable given the value of another.
If the variables are linearly related, we can predict the actual values that a variable Y will assume when a variable X has some value, using linear regression:
The idea is to try and fit the data to a linear function, and use it to predict the values:
Y = bX + a
Usually we first discover whether two variables are related using a correlation coefficient (e.g., the Pearson coefficient), and then we use a regression method (e.g., linear regression) to predict values of the variable of interest given the other.
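For example, here is a minimal sketch using scipy.stats.linregress; the size/cost numbers are made up purely for illustration:

    import numpy as np
    from scipy import stats

    # Made-up data: plot size vs. cost
    size = np.array([50, 80, 120, 150, 200, 260])
    cost = np.array([110, 160, 230, 270, 380, 490])

    # Fit Y = bX + a and report the correlation alongside the slope
    result = stats.linregress(size, cost)
    print("b (slope):", result.slope)          # expected change in cost per unit of size
    print("a (intercept):", result.intercept)
    print("r (correlation):", result.rvalue)

    # Predict the cost for a new size
    new_size = 100
    print("predicted cost:", result.slope * new_size + result.intercept)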
Here is an easy-to-follow tutorial on linear regression in Python, with some theory:
https://realpython.com/linear-regression-in-python/#what-is-regression
Here is a tutorial on the typical problem of house price prediction:
https://blog.akquinet.de/2017/09/19/predicting-house-prices-on-kaggle-part-i/
I am trying to implement aggregation-pheromone-density-based classification for a land-use mapping problem. In the paper, the pheromone intensity deposited at x by ant aj (located at xj) is given as:
T(aj, x) = exp( -d(xj, x)^2 / (2 * del^2) )
where d(xj, x) is the Euclidean distance between the two points, and del denotes the spread of the Gaussian function.
I want to know two things: first, what this function is, and second, how to calculate it.
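It is a Gaussian kernel of the distance between the ant and the point. As a minimal sketch of how it could be computed with NumPy (the function name and example coordinates are illustrative, not from the paper):

    import numpy as np

    def pheromone_intensity(x_j, x, delta):
        # T(aj, x) = exp(-d(xj, x)^2 / (2 * delta^2)),
        # where d is the Euclidean distance and delta is the spread of the Gaussian.
        d = np.linalg.norm(np.asarray(x, dtype=float) - np.asarray(x_j, dtype=float))
        return np.exp(-d**2 / (2.0 * delta**2))

    # Example: an ant located at (1.0, 2.0), intensity evaluated at (1.5, 2.5)
    print(pheromone_intensity([1.0, 2.0], [1.5, 2.5], delta=1.0))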
From a data set I'm working on I have produced the following graph:
The graph is constructed as follows: each element of the data set is associated with the ratio of two natural numbers (sometimes large numbers), where the numerator is smaller than the denominator. Call this number k. Then, for each value n in [0, 1], I count how many elements have k > n.
So, while the exponential decay is expected, the jumps come out of the blue. To calculate the ratio between a and b I have simply done: c = a/b
I'm asking whether there is a way to check if the jumps are due to numerical approximation in the division, or whether they are an actual property of my dataset.
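One way to check could be to redo the counting with exact rational arithmetic and see whether any count changes; a minimal sketch, with the (numerator, denominator) pairs as hypothetical placeholders for the real data:

    from fractions import Fraction

    # Hypothetical (numerator, denominator) pairs; replace with the real data
    pairs = [(3, 7), (123456789, 987654321), (1, 3)]

    for i in range(101):
        n_float = i / 100
        n_exact = Fraction(i, 100)
        count_float = sum(1 for a, b in pairs if a / b > n_float)            # float division
        count_exact = sum(1 for a, b in pairs if Fraction(a, b) > n_exact)   # exact rationals
        if count_float != count_exact:
            print("n =", n_float, ": float count", count_float, "!= exact count", count_exact)

If the two counts never disagree, the jumps are a property of the data rather than of the floating-point division.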
I have a sorted array of real values, say X, drawn from some unknown distribution. I would like to draw a box plot for this data.
In the simplest case, I need to know five values: min, Q1, median, Q3, and max.
Trivially, min = X[0], max = X[length(X)-1], and possibly median = X[ceil(length(X)/2)]. But I'm wondering how to determine the lower quartile Q1 and the upper quartile Q3.
When I plot X = [1, 2, 4] using MATLAB, I obtain the following result:
It seems to me like there is some magic in how the values Q1 = 1.25 and Q3 = 3.5 are obtained, but I don't know what that magic is. Does anybody have experience with this?
If you go to the original definition of box plots (look up John Tukey), you use the median for the midpoint (i.e., 2 in your data set of 1, 2, 4). The endpoints are the min and max.
The top and bottom of the box are not exactly defined by quartiles, instead they are called "hinges". Hinges are the medians of the top and bottom halves of the data. If there is an odd number of observations, the median of the entire set is used in determining both hinges. The lower hinge is the median of (1,2), or 1.5. The top hinge is the median of (2,4), or 3.
There are actually dozens of definitions of a box plot's quartiles (Wikipedia: "There is no universal agreement on choosing the quartile values"). If you want to rationalize MATLAB's box plot, you'll have to check its documentation. Otherwise, you could Google your brains out trying to find a method that matches the results.
Minitab gives 1 and 4 for the hinges in your data set. Excel's PERCENTILE function gives 1.5 and 3, which incidentally matches Tukey's algorithm at least in this case.
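For what it's worth, NumPy exposes several of these competing definitions; a minimal sketch (assuming NumPy >= 1.22, where np.percentile accepts a method keyword) that appears to reproduce both sets of numbers for this data:

    import numpy as np

    x = [1, 2, 4]

    # 'hazen' places the k-th sorted value at probability (k - 0.5)/n; for this
    # data it reproduces the numbers quoted from MATLAB, 1.25 and 3.5.
    print(np.percentile(x, [25, 75], method="hazen"))    # -> 1.25 and 3.5

    # 'linear' is NumPy's default and here matches Excel's PERCENTILE, 1.5 and 3.0.
    print(np.percentile(x, [25, 75], method="linear"))   # -> 1.5 and 3.0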
The median divides the data into two halves. The median of the first half = Q1, and the median of the second half = Q3.
More info: http://www.purplemath.com/modules/boxwhisk.htm
Note on the MATLAB boxplot: Q1 and Q3 may be calculated differently in MATLAB; I'd try with a larger amount of test data. With my method, Q1 should be 1 and Q3 should be 4.
EDIT:
One possible calculation that MATLAB does: take the difference between the median and the first number of the first half, take a quarter of that, and add it to the first number to get Q1.
The same (roughly) applies to Q3: take the difference between the median and the highest number, and subtract a quarter of that from the highest number. That is Q3.
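A quick check of that guess on the example data (plain arithmetic, not a claim about MATLAB's documented algorithm):

    x = [1, 2, 4]
    median = x[1]

    q1 = x[0] + (median - x[0]) / 4     # 1 + 1/4 = 1.25
    q3 = x[-1] - (x[-1] - median) / 4   # 4 - 2/4 = 3.5
    print(q1, q3)                       # 1.25 3.5

At least for this data set, the result matches the Q1 = 1.25 and Q3 = 3.5 from the question.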