This is how I find the sum of an array. p is an array of 1000 random integers:
int sum = 0;
for (int j = 0; j < p.length; j++) {
    sum = sum + p[j];
}
My question is: how can I use multiple threads to compute it faster?
Simply:
int sum = Arrays.stream(p).parallel().sum();
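Under the hood, the parallel stream splits the array across the common fork-join pool and combines the partial sums for you. If you want to see the mechanics with plain threads, a minimal sketch could look like this (the class and method names here are mine, just for illustration):

```java
import java.util.Arrays;

public class ParallelSum {
    // Split the array into contiguous chunks, sum each chunk on its own
    // thread, then combine the per-thread partial sums.
    static int sum(int[] p, int nThreads) throws InterruptedException {
        int[] partial = new int[nThreads];      // one slot per thread: no locking needed
        Thread[] workers = new Thread[nThreads];
        int chunk = (p.length + nThreads - 1) / nThreads;
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                int from = id * chunk;
                int to = Math.min(from + chunk, p.length);
                int s = 0;
                for (int j = from; j < to; j++) s += p[j];
                partial[id] = s;
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();      // wait for all partial sums
        return Arrays.stream(partial).sum();    // combine sequentially
    }

    public static void main(String[] args) throws InterruptedException {
        int[] p = new int[1000];
        Arrays.fill(p, 1);
        System.out.println(sum(p, 4)); // prints 1000
    }
}
```

For an array of only 1000 ints the thread-management overhead will usually outweigh any speedup; the one-liner above is both simpler and well optimized.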
Create a dictionary containing four lambda functions: square, cube, square root, and multiply by 2.
E.g. dict = {'Square': function for squaring, 'Cube': function for cube, 'Squareroot': function for squareroot, 'Double': function for double}
Pass a value (input from the user) to each function in the dictionary, then add the outputs of all the functions and print the result.
def addi(n):
    # Apply each of the four lambdas to n and accumulate the results.
    d = {'Square': lambda a: a**2,
         'Cube': lambda a: a**3,
         'Squareroot': lambda a: a**(1/2),
         'Double': lambda a: a*2}
    total = 0                  # avoid shadowing the built-in sum()
    for key in d:
        total += d[key](n)
    return total

print(addi(5))
I am trying to solve problem UVa 10128 (Queue) on the UVa Online Judge. I am not able to find a way to approach this problem. I searched the internet and found that most people solve it by precalculating the answers with DP.
DP[1][1][1] = 1;
for (N = 2; N <= 13; N++)
    for (P = 1; P <= N; P++)
        for (R = 1; R <= N; R++)
            DP[N][P][R] = DP[N-1][P][R]*(N-2) + DP[N-1][P-1][R] + DP[N-1][P][R-1];
The above snippet is taken from https://github.com/morris821028/UVa/blob/master/volume101/10128%20-%20Queue.cpp.
Can someone please explain the formula used in this code?
Thanks
When you calculate DP[N][P][R] you look at the position of the smallest person in the queue. Because he is the smallest, he can't block anybody. But he will get blocked if he doesn't stand at either end of the queue.
If he is the first person in the queue he is seen from the beginning of the line. So if we remove him, the queue contains N-1 people and you can only see P-1 people from the beginning, but still R people from the end. Therefore there are DP[N-1][P-1][R] combinations.
If he is in the middle, then by removing him we still can see P and R people. And since there are N-2 positions in the middle, there are DP[N-1][P][R] * (N-2) combinations.
And if he is the last person in the queue we get DP[N-1][P][R-1] combinations; the reasoning is identical to the first case.
So the total number of combinations for DP[N][P][R] is the sum of all three cases.
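The recurrence can be dropped into a small self-contained program (sketched in Java here for illustration; the linked solution is C++). A useful sanity check is that summing DP[N][P][R] over all (P, R) must give N!, since every permutation of N people has exactly one pair of visible counts:

```java
public class QueueDp {
    static final int MAX = 14;                      // N up to 13, as in the snippet
    static long[][][] dp = new long[MAX][MAX][MAX];

    static void build() {
        dp[1][1][1] = 1;                            // one person: visible from both ends
        for (int n = 2; n < MAX; n++)
            for (int p = 1; p <= n; p++)
                for (int r = 1; r <= n; r++)
                    dp[n][p][r] = dp[n-1][p][r] * (n - 2)  // shortest person in one of the n-2 middle spots
                                + dp[n-1][p-1][r]          // shortest person at the front
                                + dp[n-1][p][r-1];         // shortest person at the back
    }

    public static void main(String[] args) {
        build();
        long total = 0;
        for (int p = 1; p <= 4; p++)
            for (int r = 1; r <= 4; r++)
                total += dp[4][p][r];
        System.out.println(total); // prints 24, i.e. 4!
    }
}
```

With the table precomputed, each query (N, P, R) is then answered by a single lookup dp[N][P][R].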
I just learned Python two days ago, so this is probably very bad. Since I want to get better at optimization and organization, is there anything specifically bad here or that I could improve? Wasted lines of code, things done in a more memory-intensive way than they could be, and so on. Thanks a lot for any input; I'm looking forward to learning much more.
from random import *

b = 10
a = randint(1, b)
point = 1
x = 1
while x < 2:
    print("Guess a number between 1 and ", b)
    svar = int(input())
    if svar == a:
        b += 5
        point = point + point
        a = randint(1, b)
        print("You have ", point, "points!")
    elif svar < a:
        print("Higher")
    else:
        print("Lower")
SO is warning me my question is likely to be closed; I hope they're wrong :)
My question: suppose you have a correlation matrix; you would like correlations close to 1 or -1 to be pushed towards 1, while those close to 0 stay near 0.
The simplest way is to use absolute values, e.g. if Rho is your correlation matrix then you would use abs(Rho).
Is there any way which is theoretically more correct than the one above?
As an example: what if I use Normal p.d.f. instead of absolute value?
Adjusted Rho = N(Rho, mu = 0, sigma = stdev(Rho))
where N is the Normal p.d.f. function.
Do you have any better way?
What are strengths and weaknesses of each method?
Thanks,
Try this.
x <- runif(min = -1, max = 1, n = 100)
tr <- (x - min(x))/diff(range(x))
plot(x)
points(tr, col = "red")
You could also use a logistic (inverse-logit) function, which guarantees that the output lies between 0 and 1. But given that your inputs are limited to values between -1 and 1, you would only get values in the range of roughly [0.27, 0.73].
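To check that range claim numerically, here is a quick sketch (in Java for illustration; the standard logistic function 1/(1 + e^(-x)) is what a logit link inverts):

```java
public class LogisticRange {
    // Standard logistic function: maps the whole real line into (0, 1).
    static double logistic(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    public static void main(String[] args) {
        // Correlations live in [-1, 1], so their image under the logistic is
        // [logistic(-1), logistic(1)], a narrow band around 0.5 rather than
        // the full (0, 1) interval.
        System.out.printf("%.4f .. %.4f%n", logistic(-1.0), logistic(1.0)); // prints 0.2689 .. 0.7311
    }
}
```

Note also that the logistic sends 0 to 0.5, not to 0, so it does not satisfy the "correlations near 0 stay near 0" requirement without an extra rescaling step.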
I know that the Weibull distribution exhibits subexponential, heavy-tailed behavior when the shape parameter k is < 1. I need to demonstrate this using the limit definition of a heavy-tailed distribution:

lim_{x -> infinity} exp(t*x) * P(X > x) = infinity   for all t > 0.
How do I incorporate the cumulative distribution function (CDF) or any other equation characteristic of the Weibull distribution to prove that this limit holds?
The CDF of the Weibull distribution is 1 - exp(-(x/lambda)^k) = P(X <= x).
So

P(X > x) = 1 - CDF = exp(-(x/lambda)^k),

and, for any rate t > 0 in the definition,

lim exp(t*x) * P(X > x) = lim exp(t*x) * exp(-(x/lambda)^k)
                        = lim exp(t*x - x^k/lambda^k).

(Note that t is kept distinct from the Weibull scale parameter lambda, since the definition must hold for every positive rate.) Since k < 1, lambda > 0, and x is large, t*x grows faster than x^k/lambda^k (the power with the greater exponent wins). In other words, the t*x term dominates the x^k/lambda^k term, so t*x - x^k/lambda^k is large and positive.
Thus, the limit goes to infinity for every t > 0, which is exactly what the definition requires.
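The same argument, restated compactly in standard notation (with t the exponential rate and lambda the Weibull scale):

```latex
\bar F(x) = \Pr(X > x) = e^{-(x/\lambda)^k},
\qquad
e^{t x}\,\bar F(x) = \exp\!\Bigl(t x - \frac{x^{k}}{\lambda^{k}}\Bigr).
```

For 0 < k < 1 the linear term t x dominates x^k / lambda^k as x -> infinity, so the exponent, and hence the product, tends to +infinity for every t > 0.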