Heavy tail distribution - Weibull [closed] - statistics

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I know that the Weibull distribution exhibits subexponential, heavy-tailed behavior when the shape parameter k is < 1. I need to demonstrate this using the limit definition of a heavy-tailed distribution:
lim_{x -> infinity} exp(t*x) * P(X > x) = infinity   for all t > 0.
How do I incorporate the cumulative distribution function (CDF), or any other equation characteristic of the Weibull distribution, to prove that this limit holds?

The CDF of the Weibull distribution is P(X <= x) = 1 - exp(-(x/lambda)^k) for x >= 0, where lambda > 0 is the scale parameter and k > 0 the shape parameter.
So the tail probability is
P(X > x) = 1 - CDF = exp(-(x/lambda)^k),
and, for any fixed t > 0,
lim exp(t*x) * P(X > x) = lim exp(t*x) * exp(-(x/lambda)^k)
= lim exp(t*x - (x/lambda)^k).
Since k < 1 and lambda > 0, the linear term t*x grows faster than (x/lambda)^k for large x (the monomial with the greater exponent wins). In other words, the t*x term dominates the (x/lambda)^k term, so t*x - (x/lambda)^k becomes large and positive.
Thus the limit is infinity for every t > 0, which is exactly the heavy-tail condition.
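As a quick numerical sanity check, here is a short sketch using NumPy and SciPy (the particular values of k, lambda and t below are arbitrary choices for illustration):

import numpy as np
from scipy import stats

k, lam, t = 0.5, 2.0, 1.0   # shape k < 1 (heavy-tailed regime), arbitrary scale and rate

x = np.array([1.0, 10.0, 50.0, 100.0, 200.0])

# log( exp(t*x) * P(X > x) ) = t*x + log P(X > x); working on the log scale avoids overflow
log_product = t * x + stats.weibull_min.logsf(x, c=k, scale=lam)

print(log_product)  # increasing without bound, so exp(t*x) * P(X > x) -> infinity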

Related

Generic Computation of Distance Matrices in Pytorch [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I have two tensors a & b of shape (m,n), and I would like to compute a distance matrix m using some distance metric d. That is, I want m[i][j] = d(a[i], b[j]). This is somewhat like cdist(a,b) but assuming a generic distance function d which is not necessarily a p-norm distance. Is there a generic way to implement this in PyTorch?
And a more specific side question: is there an efficient way to do this with the following metric?
d(x, y) = 1 - cos(x, y)
edit
I've solved the specific case above using this answer:
import torch

def metric(a, b, eps=1e-8):
    # normalise the rows of a and b to unit length (eps guards against division by zero)
    a_norm, b_norm = a.norm(dim=1)[:, None], b.norm(dim=1)[:, None]
    a_norm = a / torch.max(a_norm, eps * torch.ones_like(a_norm))
    b_norm = b / torch.max(b_norm, eps * torch.ones_like(b_norm))
    # pairwise cosine similarities; 1 - similarity is the cosine distance
    similarity_matrix = torch.mm(a_norm, b_norm.transpose(0, 1))
    return 1 - similarity_matrix
I'd suggest using broadcasting: since a and b both have shape (m, n) you can compute
m = d(a[:, None, :], b[None, :, :])
where d needs to operate on the last dimension. With this indexing the broadcast shapes are (m, 1, n) and (1, m, n), so the result has shape (m, m) and m[i][j] = d(a[i], b[j]) as requested. For instance
def d(a, b):
    return 1 - (a * b).sum(dim=2) / (a.pow(2).sum(dim=2).sqrt() * b.pow(2).sum(dim=2).sqrt())
(here I assume that cos(x, y) represents the normalized inner product between x and y)
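For completeness, here is a self-contained sketch of the broadcasting approach (the function name cosine_distance and the tensor sizes are my own choices, for illustration); it should agree with the metric function above up to floating-point error:

import torch

def cosine_distance(x, y, eps=1e-8):
    # x and y broadcast against each other; reduce over the last (feature) dimension
    dot = (x * y).sum(dim=-1)
    nx = x.pow(2).sum(dim=-1).sqrt().clamp(min=eps)
    ny = y.pow(2).sum(dim=-1).sqrt().clamp(min=eps)
    return 1 - dot / (nx * ny)

a = torch.randn(5, 3)
b = torch.randn(5, 3)

# broadcast to shape (5, 5, 3), then reduce: dist[i][j] = d(a[i], b[j])
dist = cosine_distance(a[:, None, :], b[None, :, :])
print(dist.shape)  # torch.Size([5, 5])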

Haskell Program for Levenshtein distance [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
I need a program in Haskell that computes the Levenshtein distance.
You need to calculate the Levenshtein distance (also called the edit distance), which is defined as follows for strings a and b (taken from Wikipedia): lev(i, 0) = i, lev(0, j) = j, and for i, j > 0
lev(i, j) = lev(i-1, j-1)                                      if a[i] == b[j]
lev(i, j) = 1 + min(lev(i-1, j), lev(i, j-1), lev(i-1, j-1))   otherwise
Because the values of lev(i, j) only depend on previous values, we can take advantage of Haskell's laziness to initialize an array where the value of the element at position (i, j) is a function of previous values of the same array! See the Dynamic programming example to see how this can be done.
Here is a basic implementation of lev:
import Data.Array

lev :: String -> String -> Int
lev x y = c ! (m, n)
  where
    c = listArray ((0, 0), (m, n)) [compute i j | i <- [0 .. m], j <- [0 .. n]]
    compute 0 j = j
    compute i 0 = i
    compute i j
      | x !! (i - 1) == y !! (j - 1) = c ! (i - 1, j - 1)
      | otherwise = 1 + (minimum $ map (c !) [ (i,     j - 1)
                                             , (i - 1, j)
                                             , (i - 1, j - 1) ])
    m = length x
    n = length y
This code can be optimized further, but it should give you a good starting point.
After computing the Levenshtein distance, you just need to compare it with the edit-cost bound k.

The best way to map correlation matrix from [-1, 1] space to [0, 1] space [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
SO is warning me my question is likely to be closed; I hope they're wrong :)
My question: suppose you have a correlation matrix; you would like correlations that are close to 1 or -1 to move towards 1, while those close to 0 stay there.
The simplest way is to use absolute values, e.g. if Rho is your correlation matrix then you would use abs(Rho).
Is there any way that is theoretically more sound than the one above?
As an example: what if I used the Normal p.d.f. instead of the absolute value?
Adjusted Rho = N(Rho, mu = 0, sigma = stdev(Rho))
where N is the Normal p.d.f.
Do you have any better way?
What are the strengths and weaknesses of each method?
Thanks
Try this.
x <- runif(min = -1, max = 1, n = 100)
tr <- (x - min(x)) / diff(range(x))  # min-max rescaling of x to [0, 1]
plot(x)
points(tr, col = "red")
You could also use a logistic (inverse logit) link function, which guarantees the values are between 0 and 1. But given that you're limited to inputs between -1 and 1, you would only get values in roughly the range [0.27, 0.73].
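To make the trade-offs concrete, here is a small sketch (in Python rather than R, purely for illustration; the variable names are mine) applying the three mappings discussed above to a few sample correlations:

import numpy as np

rho = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

abs_map  = np.abs(rho)                                   # -1 and 1 both map to 1, 0 stays at 0
minmax   = (rho - rho.min()) / (rho.max() - rho.min())   # -1 -> 0 and 1 -> 1, so the sign still matters
logistic = 1 / (1 + np.exp(-rho))                        # squashes [-1, 1] into roughly [0.27, 0.73]

print(abs_map)    # approx. [1.0, 0.5, 0.0, 0.5, 1.0]
print(minmax)     # approx. [0.0, 0.25, 0.5, 0.75, 1.0]
print(logistic)   # approx. [0.27, 0.38, 0.5, 0.62, 0.73]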

Get the position of the point that lies at 25% of a line? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have a line with 2 points. I know the distance between the 2 points. I also calculated the angle of the line.
My target is to get the point that lies at 25% of the line.
I calculate the y of this point with (dist / 100) * 25.
My only problem is calculating the x of the point. I suspect I have all the variables needed, I just can't seem to find how to calculate the x. Does anybody know this?
You have a segment (not a line) with endpoints P0 (coordinates x0, y0) and P1 (coordinates x1, y1). A new point P lies on this segment with distance |P0P| = 0.25 * |P0P1| when its coordinates are:
x = x0 + 0.25 * (x1-x0)
y = y0 + 0.25 * (y1-y0)
It's just simple vector maths, no need for any angles or trig here.
start_pos = (0, 0)
end_pos = (10, 10)
fraction = 0.25

dist_x = end_pos[0] - start_pos[0]
dist_y = end_pos[1] - start_pos[1]

# the point 25% of the way from start_pos to end_pos
pos = (start_pos[0] + fraction * dist_x,
       start_pos[1] + fraction * dist_y)

Find the coordinates in an isosceles triangle [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 12 years ago.
Given:
(x1,y1) = (0,0)
(x2,y2) = (0,-10)
Using the angle to C, how are the coordinates at C calculated?
Let A be the point (x1, y1) and B be the point (x2, y2).
AC must have length 10, since the triangle is isosceles with AC = AB.
Let X be the foot of the perpendicular from C onto the line AB, so AXC is a right-angled triangle with hypotenuse AC, and theta is the angle at A between AB and AC. Because AB points down the negative y-axis, X = (0, -length(AX)) and C = (±length(XC), -length(AX)), where
length(AX) = length(AC) * cos(theta) = 10 * cos(theta)
length(XC) = length(AC) * sin(theta) = 10 * sin(theta)
Therefore C has coordinates (±10 * sin(theta), -10 * cos(theta)); the choice of sign selects which side of AB the apex lies on.
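A minimal numeric sketch of that formula (plain Python; the function name apex and the sample angles are my own, for illustration):

import math

def apex(theta_deg, length=10.0, side=-1):
    # A = (0, 0), B = (0, -length); theta is the angle at A between AB and AC.
    # side = -1 puts C on the negative-x side of AB, side = +1 on the positive-x side.
    theta = math.radians(theta_deg)
    return (side * length * math.sin(theta), -length * math.cos(theta))

print(apex(90))     # roughly (-10.0, 0.0), matching (-10, 0) in the list below
print(apex(36.87))  # roughly (-6.0, -8.0), matching (-6, -8)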
There are multiple valid answers to this question. The following coordinates all produce isosceles triangles:
(-10, 0)
(10, 0)
(-10, -10)
(10, -10)
(6, -8)
(-6, -8)
(8, -6)
(-8, -6)
(x, -5) for any x != 0 (any point on the perpendicular bisector of AB, so that CA = CB)
And, as a matter of fact, even this is not a complete list of solutions.
Without some hint as to what programming platform you intend to implement a solution in, we cannot help much more.
