As stated in the title, what is the difference between the avg from rollup_rate() and rate() in MetricsQL?
It is not clear to me from the official documentation.
Let's suppose we have a time series with the following samples over the duration d:
(v1, t1), (v2, t2), ..., (vN, tN)
Then rate(m[d]) at tN is calculated as (vN - v1) / (tN - t1), while the avg returned from rollup_rate(m[d]) is the average of the per-sample rates (v2 - v1)/(t2 - t1), ..., (vN - v(N-1))/(tN - t(N-1)).
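A small Python sketch can make the difference concrete (the sample data and function names are illustrative, not part of MetricsQL or VictoriaMetrics internals):

```python
# Illustrative (value, timestamp) samples over one lookbehind window d.
samples = [(0, 0.0), (1, 1.0), (10, 4.0)]

def window_rate(samples):
    """Mimics rate(m[d]): (vN - v1) / (tN - t1) over the whole window."""
    (v1, t1), (vn, tn) = samples[0], samples[-1]
    return (vn - v1) / (tn - t1)

def avg_per_sample_rate(samples):
    """Mimics avg over rollup_rate(m[d]): the mean of per-sample rates."""
    rates = [(v2 - v1) / (t2 - t1)
             for (v1, t1), (v2, t2) in zip(samples, samples[1:])]
    return sum(rates) / len(rates)

print(window_rate(samples))          # (10 - 0) / (4 - 0) = 2.5
print(avg_per_sample_rate(samples))  # mean of 1.0 and 3.0 = 2.0
```

With evenly spaced samples the two coincide; with uneven spacing, as here, they differ because the per-sample average weights each interval equally regardless of its length.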
This is my expression:
from sympy import symbols, integrate, log

x = symbols('x')
expr = (8.21067284717243e+22*((1/(16.3934426229508*x - 0.19672131147541))**1.2)**0.5*(1/(16.3934426229508*x - 0.19672131147541))**0.6*log(1531.16571479152*(1/(16.3934426229508*x - 0.19672131147541))**1.2)**0.5)
integrate(expr, (x, 0, 5))
I am trying to integrate my mass loss expression (g/Gyr) to find the total mass loss over 5 Gyr.
I am working on a conditional probability question.
A = probability of being legit review
B = probability of guessing correctly
P(A) = 0.98 → P(A’) = 0.02
P(B|A’) = 0.95
P(B|A) = 0.90
The question should be this: P(A’|B) =?
P(A’|B) = P(B|A’) × P(A’) / P(B)
P(B) = P(B and A’) + P(B and A)
= P(B|A’) × P(A’) + P(B|A) × P(A)
= 0.95 × 0.02 + 0.90 × 0.98
= 0.901
P(A’|B) = P(B|A’) × P(A’) / P(B)
= 0.95 × 0.02 / 0.901
= 0.021
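For reference, the arithmetic above checks out numerically (plain Python, values taken from the question):

```python
p_a = 0.98              # P(A): legit review
p_a_not = 0.02          # P(A')
p_b_given_a_not = 0.95  # P(B|A')
p_b_given_a = 0.90      # P(B|A)

# Law of total probability
p_b = p_b_given_a_not * p_a_not + p_b_given_a * p_a
# Bayes' rule
p_a_not_given_b = p_b_given_a_not * p_a_not / p_b

print(round(p_b, 3))              # 0.901
print(round(p_a_not_given_b, 3))  # 0.021
```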
However, my result is not among the answer choices. Can you please tell me if I am missing something, or if my logic is incorrect?
Example with numbers
This example with numbers is meant as an intuitive way to understand how Bayes' formula works:
Let's assume we have 10,000 typical reviews. We calculate what we would expect to happen with these 10,000 reviews:
9,800 are real
200 are fake
To predict how many reviews are classified as fake:
Of the 9800 real ones, 10% are classified as fake → 9800 * 0.10 = 980
Of the 200 fake ones, 95% are classified as fake → 200 * 0.95 = 190
980 + 190 = 1,170 are classified as fake.
Now we have all the pieces we need to calculate the probability that a review is fake, given that it is classified as such:
All reviews that are classified as fake → 1,170
Of those, the ones that are actually fake → 190
190 / 1170 ≈ 0.1624, or 16.24%
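The counting above maps directly to a few lines of Python (illustrative only):

```python
total = 10_000
real = round(total * 0.98)   # 9800 real reviews
fake = total - real          # 200 fake reviews

real_flagged = round(real * 0.10)  # 10% of real classified as fake -> 980
fake_flagged = round(fake * 0.95)  # 95% of fake classified as fake -> 190
flagged = real_flagged + fake_flagged  # 1170 total classified as fake

p_fake_given_flagged = fake_flagged / flagged
print(round(p_fake_given_flagged, 4))  # ≈ 0.1624
```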
Using general Bayes' theorem
Let's set up the events. Note that my version of event B is slightly different from yours.
P(A): Real review
P(A'): Fake review
P(B): Predicted real
P(B'): Predicted fake
P(A'|B'): Probability that a review is actually fake, given that it is predicted to be fake
Now that we have our events defined, we can go ahead with Bayes:
P(A'|B') = P(A' and B') / P(B') # Bayes' formula
= P(A' and B') / (P(A and B') + P(A' and B')) # Law of total probability
We also know the following, by the multiplication rule for conditional probabilities:
P(A and B') = P(A) * P(B'|A)
= 0.98 * 0.10
= 0.098
P(A' and B') = P(A') * P(B'|A')
= 0.02 * 0.95
= 0.019
Putting the pieces together yields:
P(A'|B') = 0.019 / (0.098 + 0.019) ≈ 0.1624
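The same computation can be done with the probabilities directly, as a quick Python check:

```python
p_a = 0.98               # P(A): real review
p_a_not = 0.02           # P(A'): fake review
p_bp_given_a = 0.10      # P(B'|A): real review classified as fake
p_bp_given_a_not = 0.95  # P(B'|A'): fake review classified as fake

# Multiplication rule for the joint probabilities
p_a_and_bp = p_a * p_bp_given_a              # ≈ 0.098
p_a_not_and_bp = p_a_not * p_bp_given_a_not  # ≈ 0.019

# Bayes with the law of total probability in the denominator
p_a_not_given_bp = p_a_not_and_bp / (p_a_and_bp + p_a_not_and_bp)
print(round(p_a_not_given_bp, 4))  # ≈ 0.1624
```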
I want to create the following rule:
The patch should become submittable only if there are 3 or more +1 votes, and there must NOT be any +2 vote; only +1 votes are considered for this criterion.
The rule that I have is:
% rule : 1+1+1=2 Code-Review
% rationale : introduce accumulative voting to determine if a change
% is submittable or not and make the change submittable
% if the total score is 3 or higher.
sum_list([], 0).
sum_list([H | Rest], Sum) :- sum_list(Rest, Tmp), Sum is H + Tmp.

add_category_min_score(In, Category, Min, P) :-
    findall(X, gerrit:commit_label(label(Category, X), R), Z),
    sum_list(Z, Sum),
    Sum >= Min, !,
    gerrit:commit_label(label(Category, V), U),
    V >= 1,
    !,
    P = [label(Category, ok(U)) | In].
add_category_min_score(In, Category, Min, P) :-
    P = [label(Category, need(Min)) | In].

submit_rule(S) :-
    gerrit:default_submit(X),
    X =.. [submit | Ls],
    gerrit:remove_label(Ls, label('Code-Review', _), NoCR),
    add_category_min_score(NoCR, 'Code-Review', 3, Labels),
    S =.. [submit | Labels].
This rule does not work at all; the problem is with the +2 vote.
How can I rework this rule so that it works the way I want?
So you want to require at least three reviewers who each vote +1, with +2 not allowed.
What if you remove developers' rights to give +2 in the project config and use Prolog cookbook example 13 with small modifications?
submit_rule(submit(CR)) :-
    sum(3, 'Code-Review', CR).
    % gerrit:max_with_block(-1, 1, 'Verified', V).
% Sum the votes in a category. Uses a helper predicate score/2
% to select out only the score values for the given category.
sum(VotesNeeded, Category, label(Category, ok(_))) :-
    findall(Score, score(Category, Score), All),
    sum_list(All, Sum),
    Sum >= VotesNeeded,
    !.
sum(VotesNeeded, Category, label(Category, need(VotesNeeded))).

score(Category, Score) :-
    gerrit:commit_label(label(Category, Score), User).

% Simple Prolog routine to sum a list of integers.
sum_list(List, Sum) :- sum_list(List, 0, Sum).
sum_list([X|T], Y, S) :- Z is X + Y, sum_list(T, Z, S).
sum_list([], S, S).
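Outside of Gerrit, the intended criterion can be sketched in plain Python to make the logic explicit (the function name and the treatment of +2 and negative votes are assumptions for illustration, not part of any Gerrit API):

```python
def submittable(code_review_votes):
    """One reading of the requirement: a +2 vote blocks submission,
    only +1 votes are summed (negative votes are simply not counted),
    and the +1 total must reach 3."""
    if any(v >= 2 for v in code_review_votes):
        return False  # assumed: any +2 vote makes the change non-submittable
    plus_ones = [v for v in code_review_votes if v == 1]
    return sum(plus_ones) >= 3

print(submittable([1, 1, 1]))   # True: three +1 votes
print(submittable([1, 1, 2]))   # False: a +2 vote is present
print(submittable([1, 1]))      # False: only two +1 votes
```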
I am trying to understand the following piece of Theano code.
self.sgd_step = theano.function(
    [x, y, learning_rate, theano.Param(decay, default=0.9)],
    [],
    updates=[(E, E - learning_rate * dE / T.sqrt(mE + 1e-6)),
             (U, U - learning_rate * dU / T.sqrt(mU + 1e-6)),
             (W, W - learning_rate * dW / T.sqrt(mW + 1e-6)),
             (V, V - learning_rate * dV / T.sqrt(mV + 1e-6)),
             (b, b - learning_rate * db / T.sqrt(mb + 1e-6)),
             (c, c - learning_rate * dc / T.sqrt(mc + 1e-6)),
             (self.mE, mE),
             (self.mU, mU),
             (self.mW, mW),
             (self.mV, mV),
             (self.mb, mb),
             (self.mc, mc)])
Can someone please tell me what the author of the above code is trying to do there? Is [x, y, learning_rate, theano.Param(decay, default=0.9)] a value that is being updated, and is it updated by []? And what is the function of updates here?
I would be grateful for any idea of what is going on in the above code.
The documentation of the updates is as follows (taken from here).
updates must be supplied with a list of pairs of the form (shared-variable, new expression). It can also be a dictionary whose keys are shared variables and values are the new expressions. Either way, it means "whenever this function runs, it will replace the .value of each shared variable with the result of the corresponding expression". Above, our accumulator replaces the state's value with the sum of the state and the increment amount.
So when you call the above Theano function with the required inputs, it will update the values of the shared variables E, U, W, V, b, c, and self.mE through self.mc. The new value for each shared variable is given by the second element of its tuple; basically, E becomes E - learning_rate * dE / T.sqrt(mE + 1e-6), and so on.
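The updates semantics can be mimicked in plain Python to make this concrete (a toy sketch, not Theano itself; how mE is computed is not shown in the snippet, so the decay-based cache below is an assumption based on the standard RMSProp form):

```python
import math

# Toy "shared variables": one parameter E and its cache mE.
state = {"E": 1.0, "mE": 0.0}

def sgd_step(dE, learning_rate, decay=0.9):
    """Mimics theano.function(updates=...): compute all new values
    from the old state, then replace the stored values in one step."""
    mE = decay * state["mE"] + (1 - decay) * dE ** 2   # assumed RMSProp cache
    E = state["E"] - learning_rate * dE / math.sqrt(mE + 1e-6)
    state["E"], state["mE"] = E, mE  # the actual "update" of shared values

sgd_step(dE=0.5, learning_rate=0.01)
print(state["E"], state["mE"])
```

Each call mutates the stored values in place, which is exactly why the Theano function can return [] yet still do useful work: the training step happens entirely through updates.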
I am comparing two alternatives for calculating p-values with R's pnorm() function.
xbar <- 2.1
mu <- 2
sigma <- 0.25
n <- 35
# z-transformation
z <- (xbar - mu) / (sigma / sqrt(n))
# Alternative I using transformed values
pval1 <- pnorm(q = z)
# Alternative II using untransformed values
pval2 <- pnorm(q = xbar, mean = mu, sd = sigma)
How come the two calculated p-values are not the same? Shouldn't they be?
They are different because you use two different standard deviations.
In the z-transformation you use the standard error sigma / sqrt(n) as the standard deviation, but in the untransformed calculation you pass sd = sigma, ignoring n.
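The same comparison can be reproduced in Python with the standard library's statistics.NormalDist, whose cdf plays the role of pnorm here:

```python
from math import sqrt
from statistics import NormalDist

xbar, mu, sigma, n = 2.1, 2.0, 0.25, 35

# Alternative I: standardize with the standard error sigma / sqrt(n)
z = (xbar - mu) / (sigma / sqrt(n))
pval1 = NormalDist().cdf(z)

# Alternative II: uses sd = sigma, ignoring n
pval2 = NormalDist(mu=mu, sigma=sigma).cdf(xbar)

print(round(pval1, 4))  # ≈ 0.991
print(round(pval2, 4))  # ≈ 0.6554

# They agree once the untransformed call also uses the standard error:
pval3 = NormalDist(mu=mu, sigma=sigma / sqrt(n)).cdf(xbar)
print(abs(pval3 - pval1) < 1e-9)  # True
```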