I'm using 64-bit Python 3.3.1, pylab and 32GB system RAM. This function:
def sqrt2Expansion(limit):
    x = Symbol('x')
    term = 1 + 1/x
    for _ in range(limit):
        term = term.subs({x: 2 + 1/x})
    return term.subs({x: 2})
Produces expressions of this kind: 1 + 1/(2 + 1/(2 + 1/(2 + 1/(2 + 1/(2 + 1/(...)))))).
When called as sqrt2Expansion(100) it returns a valid result, but sqrt2Expansion(200) produces a RuntimeError with many pages of traceback and hangs the pylab/IPython interpreter, with plenty of system memory left unused. Any ideas how to implement it more efficiently? I would like to call sqrt2Expansion(1000) and still get a result.
I will try to elaborate on the comment that I have posted above.
Sympy expressions are trees. Each operation is a node which has as branches its operands. For instance x+y looks like Add(x, y) and x*(y+z) like Mul(x, Add(y, z)).
Usually these expressions get automatically flattened like in Add(x, Add(y, z)) becoming Add(x, y, z), but for more complicated cases one can get very deep trees.
And deep trees can pose problems, especially when either the interpreter or the library itself limits the depth of permitted recursion (as a protection against infinite recursion and exploding memory usage). Most probably this is the cause of your RuntimeError: each subs makes the tree deeper, and as the tree gets deeper the recursive subs must call itself more times until it reaches the deepest node.
You can simplify the tree to something of the form polynomial/polynomial which has constant depth by using the factor method. Just change term = term.subs({x: (2+1/x)}) to term = term.subs({x: (2+1/x)}).factor().
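A minimal sketch of that fix applied to the question's function (same logic as above, with factor() added after each substitution):

```python
from sympy import Symbol

def sqrt2Expansion(limit):
    # Same iteration as in the question, but factor() rewrites the tree
    # as polynomial/polynomial at each step, keeping its depth constant.
    x = Symbol('x')
    term = 1 + 1/x
    for _ in range(limit):
        term = term.subs({x: 2 + 1/x}).factor()
    return term.subs({x: 2})
```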
Is there a reason that you need to do it symbolically? Just start with 2 and work from there. You will never hit recursion errors because the fraction is flattened at each stage.
In [10]: def sqrt2(limit):
   ....:     expr = Rational(1, 2)
   ....:     for i in range(limit):
   ....:         expr = 1/(2 + expr)
   ....:     return 1 + expr
   ....:
In [11]: sqrt2(100)
Out[11]:
552191743651117350907374866615429308899
───────────────────────────────────────
390458526450928779826062879981346977190
In [12]: sqrt2(100).evalf()
Out[12]: 1.41421356237310
In [13]: sqrt(2).evalf()
Out[13]: 1.41421356237310
In [15]: print(sqrt2(1000))
173862817361510048113392732063287518809190824104684245763570072944177841306522186007881248757647526155598965224342185265607829530599877063992267115274300302346065892232737657351612082318884085720085755135975481584205200521472790368849847501114423133808690827279667023048950325351004049478273731369644053281603356987998498457434883570613383878936628838144874794543267245536801570068899/122939577152521961584762100253767379068957010866562498780385985503882964809611193975682098617378531179669585936977443997763999765977165585582873799618452910919591841027248057559735534272951945685362378851460989224784933532517808336113862600995844634542449976278852113745996406252046638163909206307472156724372191132577490597501908825040117098606797865940229949194369495682751575387690
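For completeness, the same constant-depth idea works without sympy at all, using the standard library's fractions.Fraction (a variant I'm adding for illustration, not part of the original answer):

```python
from fractions import Fraction

def sqrt2_frac(limit):
    # Same iteration as the sympy version above; Fraction keeps a flat
    # numerator/denominator at every step, so there are no deep trees.
    expr = Fraction(1, 2)
    for _ in range(limit):
        expr = 1 / (2 + expr)
    return 1 + expr

print(float(sqrt2_frac(1000)))  # 1.4142135623730951
```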
New to Julia and just trying to implement a basic Bayesian model. I would like to evaluate the log-likelihood of each data point, where each data point has a different mean parameter depending on their corresponding covariate, without having to implement a for loop over all data points.
using Distributions
y = -50:1:49
a = 1
b = 1
N = 100
x = rand(Normal(0, 1), N)
mu = a .+ b.*x
sigma = 5
# Can we evaluate the logpdf of every point in one call to logpdf without doing a for loop
loglikelihood = logpdf(Normal(mu, sigma), y)
MethodError: no method matching Normal(::Vector{Float64}, ::Int64)
Edit: I would like to clarify that the mu specified above is a vector of the same dimensions as y, and that instead of evaluating the logpdf of each observation using the function Normal(::Real, ::Real) in an iterative procedure, I would like something that handles something to the effect of
logpdf(Normal(::Array, ::Real), ::Array). The code I provide in the following chunk does what I want by taking the sum of the log-likelihood across observations, but I would prefer to not have to transform to a multivariate distribution.
using LinearAlgebra
logpdf(MvNormal(mu, diagm(repeat([sigma], outer=N))), y)
Thanks for your help.
Your code doesn't actually run, as there are undefined variables (a, b, y). But in general what you're asking works out of the box:
julia> using Distributions
julia> μ = 2.0; σ = 3.0;
julia> logpdf(Normal(μ, σ), 0:0.5:4)
9-element Vector{Float64}:
-2.2397730440950046
-2.1425508218727822
-2.073106377428338
-2.0314397107616715
-2.0175508218727827
-2.0314397107616715
-2.073106377428338
-2.1425508218727822
-2.2397730440950046
Here I'm getting the log pdf at values 0, 0.5, 1, ..., 3.5, 4. This works because there's a method for logpdf which takes an AbstractArray as second argument:
julia> @which logpdf(Normal(μ, σ), 0:0.5:4)
logpdf(d::UnivariateDistribution{S} where S<:ValueSupport, X::AbstractArray) in Distributions at deprecated.jl:70
julia> @which logpdf(Normal(μ, σ), 0.5)
logpdf(d::Normal, x::Real) in Distributions at ...\Distributions\bawf4\src\univariate\continuous\normal.jl:105
As you see there though, that method signature is actually deprecated. Let's start Julia with depwarn=yes to see the deprecation notice:
$> julia --depwarn=yes
julia> using Distributions
julia> logpdf(Normal(), 1:10)
┌ Warning: `logpdf(d::UnivariateDistribution, X::AbstractArray)` is deprecated, use `logpdf.(d, X)` instead.
│ caller = top-level scope at REPL[4]:1
└ # Core REPL[4]:1
What this tells you is that actually you don't need a method signature which accepts an array, as Julia's built-in broadcasting syntax - appending a dot to a function call - gives you this for free. Returning to the first example:
julia> logpdf.(Normal(μ, σ), 0:0.5:4)
9-element Vector{Float64}:
-2.2397730440950046
-2.1425508218727822
-2.073106377428338
-2.0314397107616715
-2.0175508218727827
-2.0314397107616715
-2.073106377428338
-2.1425508218727822
-2.2397730440950046
Here, I'm actually calling the logpdf(d::Normal, x::Real) method, but the . after logpdf applies the function elementwise to the range 0:0.5:4.
The broadcast syntax also extends to constructors, so you can use it to construct multiple normal distributions with different mean:
julia> μ = rand(3)
3-element Vector{Float64}:
0.5341692431981215
0.5696647074299088
0.3021675356902611
julia> Normal.(μ, 5)
3-element Vector{Normal{Float64}}:
Normal{Float64}(μ=0.5341692431981215, σ=5.0)
Normal{Float64}(μ=0.5696647074299088, σ=5.0)
Normal{Float64}(μ=0.3021675356902611, σ=5.0)
That's what the error above is telling you: the Normal constructor does not accept a vector as its first argument, but a single value. If you want to apply it to multiple values, just broadcast!
So I'm struggling with Question 3. I think the representation of L would be a function that goes something like this:
import numpy as np

def L(a, b):
    # L is the 2x2 matrix [[0,1],[1,1]]; applied to (a, b) it gives (b, a+b)
    return np.dot([[0, 1], [1, 1]], [a, b])

def fibPow(n):
    if n == 1:
        return L(0, 1)
    if n % 2 == 0:
        return np.dot(fibPow(n / 2), fibPow(n / 2))
    else:
        return np.dot(L(0, 1), np.dot(fibPow(n // 2), fibPow(n // 2)))
Given b I'm pretty sure I'm wrong. What should I be doing? Any help would be appreciated. I don't think I'm supposed to use the golden ratio property of the Fibonacci series. What should my a and b be?
EDIT: I've updated my code. For some reason it doesn't work: L will give me the right answer, but my exponentiation seems to be wrong. Can someone tell me what I'm doing wrong?
With the edited code, you are almost there. Just don't cram everything into one function; that leads to subtle mistakes, which I think you may enjoy finding.
Now, L is not a function. As I said before, it is a matrix, and the core of the problem is to compute its nth power. Consider:
L = [[0, 1], [1, 1]]

def nth_power(matrix, n):
    if n == 1:
        return matrix
    if n % 2 == 0:
        temp = nth_power(matrix, n // 2)
        return np.dot(temp, temp)
    else:
        temp = nth_power(matrix, n // 2)
        return np.dot(matrix, np.dot(temp, temp))

def fibPow(n):
    Ln = nth_power(L, n)
    return np.dot(Ln, [0, 1])[1]
The nth_power is almost identical to your approach, with one important optimization: the half power is computed once and reused, rather than recursing twice. You may optimize it further by eliminating recursion.
First things first: there is no L(n, a, b). There is just L(a, b), a well-defined linear operator which transforms a vector (a, b) into a vector (b, a+b).
Now a huge hint: a linear operator is a matrix (in this case, 2x2, and very simple). Can you spell it out?
Now, applying this matrix n times in a row to an initial vector (in this case, 0, 1), by matrix magic is equivalent to applying nth power of L once to the initial vector. This is what Question 2 is about.
Once you determine what this matrix looks like, fibPow reduces to computing its nth power and multiplying the result by (0, 1). To get O(log n) complexity, check out exponentiation by squaring.
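Putting the hints together, a pure-Python sketch of the whole scheme (helper names mat_mul, mat_pow, fib are mine; plain lists keep the integers exact for large n, unlike numpy's fixed-width ints):

```python
def mat_mul(A, B):
    # 2x2 integer matrix product using plain lists (exact, no overflow)
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_pow(M, n):
    # Exponentiation by squaring: O(log n) matrix multiplications
    if n == 1:
        return M
    half = mat_pow(M, n // 2)
    sq = mat_mul(half, half)
    return sq if n % 2 == 0 else mat_mul(M, sq)

def fib(n):
    # L^n maps (F0, F1) = (0, 1) to (Fn, Fn+1), so Fn = (L^n)[0][1]
    if n == 0:
        return 0
    return mat_pow([[0, 1], [1, 1]], n)[0][1]

print(fib(10))  # 55
```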
In the documentation it is shown how one can use srepr(expr) to create a string containing all elements of the formula tree.
I would like to have all elements of a certain level of such a tree in a list, not as a string but as sympy objects.
For the example
from sympy import *
x,y,z = symbols('x y z')
expr = sin(x*y)/2 - x**2 + 1/y
srepr(expr)
"""
gives out
=> "Add(Mul(Integer(-1), Pow(Symbol('x'), Integer(2))), Mul(Rational(1, 2),
sin(Mul(Symbol('x'), Symbol('y')))), Pow(Symbol('y'), Integer(-1)))"
"""
This should lead to
first_level = [sin(x*y)/2,-x**2,1/y]
second_level[0] = [sin(x*y),2]
I assume the fundamental question is how to index a sympy expression according to the expression tree, such that all higher levels starting from an arbitrary point are summarized in a list element.
For the first level, it is possible to solve the task by getting the type of the first level (Add in this case) and then running
print(list(Add.make_args(expr)))
but how does it work for expressions deeper in the tree?
I think the thing you are looking for is .args. For example, with your example we have:
from sympy import *
x, y, z = symbols('x y z')
expr = sin(x * y) / 2 - x ** 2 + 1 / y
first_level = expr.args
second_level = [e.args for e in first_level]
print(first_level)
print(second_level[0])
"""
(1/y, sin(x*y)/2, -x**2)
(y, -1)
"""
These give tuples, but I trust that you can convert them to lists no problem.
If you want to get directly to the bottom of the tree, use .atoms() which produces a set:
print(expr.atoms())
"""
{x, 2, y, -1, 1/2}
"""
Another thing that seems interesting is preorder_traversal(), which seems to give a tree kind of object, and if you iterate through it you get something like a power set:
print([e for e in preorder_traversal(expr)])
"""
[-x**2 + sin(x*y)/2 + 1/y, 1/y, y, -1, sin(x*y)/2, 1/2, sin(x*y), x*y, x, y, -x**2, -1, x**2, x, 2]
"""
Here is the beginning of the docstring of the function:
Do a pre-order traversal of a tree.
This iterator recursively yields nodes that it has visited in a pre-order fashion. That is, it yields the current node then descends through the tree breadth-first to yield all of a node's children's pre-order traversal.
For an expression, the order of the traversal depends on the order of .args, which in many cases can be arbitrary.
If any of these are not what you want then I probably don't understand your question correctly. Apologies for that.
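For an arbitrary depth, the .args idea from the answer can be applied recursively; a small sketch (the helper name level is mine):

```python
from sympy import symbols, sin

x, y, z = symbols('x y z')
expr = sin(x*y)/2 - x**2 + 1/y

def level(e, n):
    # Collect the subexpressions sitting exactly n levels below e by
    # walking .args recursively; atoms (empty .args) simply drop out.
    if n == 0:
        return [e]
    return [sub for arg in e.args for sub in level(arg, n - 1)]

print(level(expr, 1))  # same contents as expr.args
```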
I have the following line of code I want to port to Torch Matmul
rotMat = xmat @ ymat @ zmat
Can I know if this is the correct ordering:
rotMat = torch.matmul(xmat, torch.matmul(ymat, zmat))
According to the Python docs on operator precedence, the @ operator has left-to-right associativity:
https://docs.python.org/3/reference/expressions.html#operator-precedence
Operators in the same box group left to right (except for exponentiation, which groups from right to left).
Therefore the equivalent operation is
rotMat = torch.matmul(torch.matmul(xmat, ymat), zmat)
Though keep in mind that matrix multiplication is associative (mathematically) so you shouldn't see much of a difference in the result if you do it the other way. Generally you want to associate in the way that results in the fewest computational steps. For example using the naive matrix multiplication algorithm, if X is 1x10, Y is 10x100 and Z is 100x1000 then the difference between
(X @ Y) @ Z
and
X @ (Y @ Z)
is about 1*10*100 + 1*100*1000 = 101,000 multiplication/addition operations for the first versus 10*100*1000 + 1*10*1000 = 1,010,000 operations for the second. Though these have the same result (ignoring rounding errors), the second version will be about 10x slower!
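A quick numerical sanity check of that claim, using numpy in place of torch (same shapes as the example; variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((1, 10))
Y = rng.random((10, 100))
Z = rng.random((100, 1000))

left = (X @ Y) @ Z   # ~1*10*100 + 1*100*1000 = 101,000 scalar ops
right = X @ (Y @ Z)  # ~10*100*1000 + 1*10*1000 = 1,010,000 scalar ops

# Both orderings give the same matrix, up to floating-point rounding
print(np.allclose(left, right))  # True
```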
As pointed out by @Szymon Maszke, pytorch tensors also support the @ operator, so you can still use
xmat @ ymat @ zmat
in pytorch.
I am using Python at the moment and I have a function that I need to multiply against itself for different constants.
e.g. If I have f(x,y)= x^2y+a, where 'a' is some constant (possibly list of constants).
If 'a' is a list (of unknown size as it depends on the input), then if we say a = [1,3,7] the operation I want to do is
(x^2y+1)*(x^2y+3)*(x^2y+7)
but generalised to n elements in 'a'. Is there an easy way to do this in Python as I can't think of a decent way around this problem? If the size in 'a' was fixed then it would seem much easier as I could just define the functions separately and then multiply them together in a new function, but since the size isn't fixed this approach isn't suitable. Does anyone have a way around this?
You can use numpy ftw; it's fairly easy to get into.
import numpy as np
a = np.array([1,3,7])
x = 10
y = 0.2
print((x**2)*y + a)
print(np.prod((x**2)*y + a))
Output:
[21. 23. 27.]
13041.0
I haven't really got much for it to be honest; I'm still trying to figure out how to get the functions to not overlap.
from scipy import integrate

a = [1, 3, 7]
for i in range(0, len(a) - 1):
    def f(x, y):
        return (x**2)*y + a[i]
    def g(x, y):
        return (x**2)*y + a[i+1]
    def h(x, y):
        return f(x, y)*g(x, y)

f1 = lambda y, x: h(x, y)
integrate.dblquad(f1, 0, 2, lambda x: 1, lambda x: 10)
I should have clarified that the end result can't be in floats as it needs to be integrated afterwards using dblquad.
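If the goal is a single callable that generalises to any number of constants, one option is to fold the product inside one function; a sketch assuming x^2y means (x**2)*y as in your integration code (the name f_prod is mine):

```python
import math

def f_prod(x, y, consts):
    # Product of (x**2)*y + c over every constant c, for a list of any length
    return math.prod((x**2)*y + c for c in consts)

print(f_prod(10, 0.2, [1, 3, 7]))  # 21.0 * 23.0 * 27.0 = 13041.0
```

This keeps the constants list arbitrary-length and can be handed straight to dblquad, e.g. integrate.dblquad(lambda y, x: f_prod(x, y, a), 0, 2, lambda x: 1, lambda x: 10).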