I am having trouble with code that uses the scipy.optimize.bisect root finder. When calling that routine I get the following error:
Traceback (most recent call last):
  File "project1.py", line 100, in <module>
    zero1=bisect(rho_function1,x[0],crit,maxiter=1e6,full_output=True)
  File "/home/irya/anaconda3/envs/hubble/lib/python3.6/site-packages/scipy/optimize/zeros.py", line 287, in bisect
    r = _zeros._bisect(f,a,b,xtol,rtol,maxiter,args,full_output,disp)
RuntimeError: Unable to parse arguments
The code I am using is the following:
import numpy as np
from scipy.optimize import bisect

gamma_default=-0.8
gamma_used=-2.0

def rho_function(x,P_0=5.0e3,B_0=3.0e-6,gamma=gamma_default):
    """Function used to solve for rho. Here, B_0[G] (default 3.0 uG), P_0[K/cm^3] and
    gamma are constants. The variable x is meant to be rho/rho_0."""
    kb=1.3806e-16 # Boltzmann constant in erg/K
    f= B_0**2/(8.0*np.pi*kb)*(x**(4./3)-1) + P_0*(x**gamma-1)
    return f,P_0,B_0,gamma

def rho_function1(x):
    """Function used to solve for rho. Here, B_0[G] (default 3.0 uG), P_0[K/cm^3] and
    gamma are constants. The variable x is meant to be rho/rho_0."""
    P_0=5.0e3
    B_0=3.0e-6
    gamma=gamma_default
    kb=1.3806e-16 # Boltzmann constant in erg/K
    f= B_0**2/(8.0*np.pi*kb)*(x**(4./3)-1) + P_0*(x**gamma-1)
    return f

def rho_prime_function(x,P_0=5.0e3,B_0=3.0e-6,gamma=gamma_default):
    """Derivative of rho_function."""
    kb=1.3806e-16 # Boltzmann constant in erg/K
    f=B_0**2/(6.0*np.pi*kb)*x**(1./3) + P_0*gamma*x**(gamma-1)
    return f

def rho_2nd_prime_function(x,P_0=5.0e3,B_0=3.0e-6,gamma=gamma_default):
    """Second derivative of rho_function."""
    kb=1.3806e-16 # Boltzmann constant in erg/K
    f=B_0**2/(18.0*np.pi*kb)*x**(-2./3) + P_0*gamma*(gamma-1)*x**(gamma-2)
    return f

def magnetic_term(x,B_0=3.0e-6):
    """Magnetic term of the rho function."""
    kb=1.3806e-16 # Boltzmann constant in erg/K
    f=B_0**2/(8.0*np.pi*kb)*(x**(4./3)-1)
    return f

def TI_term(x,P_0=5.0e3,gamma=gamma_default):
    """Thermal instability term of the rho function."""
    f=P_0*(x**gamma-1)
    return f
x=np.arange(0.8,2,0.01)
f,P_0_out,B_0_out,gamma_out=rho_function(x,gamma=gamma_used)
f_prime=rho_prime_function(x,gamma=gamma_used)
b_term=magnetic_term(x)
ti_term=TI_term(x,gamma=gamma_used)
kb=1.3806e-16
crit=(-B_0_out**2/(6.*np.pi*kb*P_0_out*gamma_out))**(3./(3.*gamma_out-4))
print("crit =",crit)
print("Using interval: a=",x[0],", b=",crit)
#using bisect
zero1=bisect(rho_function1,x[0],crit,maxiter=1e6,full_output=True)
I would like to know what is wrong with the way I am passing the arguments to the bisect routine. Could you please help me?
Cheers,
The parameter maxiter must be an integer. 1e6 is a float. Use
maxiter=int(1e6)
or
maxiter=1000000
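For reference, a minimal sketch of the corrected call, using the question's own function and bracket:

zero1 = bisect(rho_function1, x[0], crit, maxiter=int(1e6), full_output=True)
root, results = zero1  # with full_output=True, bisect returns (root, RootResults)
print("root =", root)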
This question already has an answer here: scipy curve_fit doesn't like math module (1 answer). Closed 2 years ago.
I am trying to calculate probabilities from a Gaussian distribution by varying the standard deviation (std).
I expect that using Gaussian quadrature with 21 points, integrating from -1 to +1 with mean = 0, and setting std = 1 and std = 2, will yield p = 0.68 and p = 0.95 respectively (picture attached).
import scipy.integrate as integrate
import math as m

#mean=0
#varying the sigma
def f(sigma,x):
    return m.exp(-1*(x**2)/(2*sigma**2))/(sigma*m.sqrt(2*m.pi))

def prob_at_nsigma(sigma):
    value = 0.
    ans,anserr = integrate.quadrature(f,-1,1,args=(sigma,),maxiter=21)
    value = ans
    return value

print(prob_at_nsigma(1))
And I get the following errors; I don't see why the "divide by zero" and "only size-1 arrays can be converted to Python scalars" errors arise:
runfile('C:/pythonExe/ntu_cp/untitled0.py', wdir='C:/pythonExe/ntu_cp')
C:\pythonExe\ntu_cp\untitled0.py:14: RuntimeWarning: divide by zero encountered in true_divide
  return m.exp(-1*(x**2)/(2*sigma**2))/(sigma*m.sqrt(2*m.pi))
C:\pythonExe\ntu_cp\untitled0.py:14: RuntimeWarning: invalid value encountered in true_divide
  return m.exp(-1*(x**2)/(2*sigma**2))/(sigma*m.sqrt(2*m.pi))
Traceback (most recent call last):
  File "C:\pythonExe\ntu_cp\untitled0.py", line 23, in <module>
    print(prob_at_nsigma(1))
  File "C:\pythonExe\ntu_cp\untitled0.py", line 19, in prob_at_nsigma
    ans,anserr =integrate.quadrature(f,-1,1,args=(sigma ,),maxiter=21)
  File "C:\Users\cztee\anaconda3\lib\site-packages\scipy\integrate\_quadrature.py", line 238, in quadrature
    newval = fixed_quad(vfunc, a, b, (), n)[0]
  File "C:\Users\cztee\anaconda3\lib\site-packages\scipy\integrate\_quadrature.py", line 119, in fixed_quad
    return (b-a)/2.0 * np.sum(w*func(y, *args), axis=-1), None
  File "C:\Users\cztee\anaconda3\lib\site-packages\scipy\integrate\_quadrature.py", line 149, in vfunc
    return func(x, *args)
  File "C:\pythonExe\ntu_cp\untitled0.py", line 14, in f
    return m.exp(-1*(x**2)/(2*sigma**2))/(sigma*m.sqrt(2*m.pi))
TypeError: only size-1 arrays can be converted to Python scalars
Appreciate any help. Thanks!
[figure: Gaussian distribution]
Two problems in your code:
As mentioned, you should use numpy instead of the math module: the math functions accept only scalars, while quadrature evaluates your function on whole arrays of points at once.
Your f function's argument order is wrong. Scipy assumes the first argument is the integration variable x, followed by the other parameters, so it should be f(x, sigma), not f(sigma, x). The "divide by zero" comes from the swapped arguments too: quadrature passes the integration points as the first argument, so your sigma takes the value 0 at the middle Gauss node.
Solution:
import numpy as np
import scipy.integrate as integrate

def f(x, sigma):
    return np.exp(-1*(x**2)/(2*sigma**2))/(sigma*np.sqrt(2*np.pi))

def prob_at_nsigma(sigma):
    ans, anserr = integrate.quadrature(f, -1, 1, args=(sigma,), maxiter=21)
    return ans

print(prob_at_nsigma(1))
# 0.6826894922280757
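As a quick cross-check (an addition, assuming scipy.stats is available), the same probability follows directly from the normal CDF:

from scipy.stats import norm

# P(-1 < X < 1) for X ~ N(0, 1); matches the quadrature result above
print(norm.cdf(1) - norm.cdf(-1))  # ≈ 0.6827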
I am generating some (noisy) data-points (y) with some known parameters (m,c) that represent the equation of a straight line. Using sampling-based Bayesian methods, I now want to know the true values of parameters (m,c) from the data. Therefore, I am using DE Metropolis (PyMC3) to estimate the true parameters.
I am getting the theano error: theano.gof.fg.MissingInputError: Input 0 of the graph (indices start from 0), used to compute sigmoid(c_interval__), was not provided and not given a value.
Theano version: 1.0.4
PyMC3 version: 3.9.1
import matplotlib.pyplot as plt
import numpy as np
import arviz as az
import pymc3
import theano.tensor as tt
from theano.compile.ops import as_op

plt.style.use("ggplot")

# define a theano Op for our likelihood function
class LogLike(tt.Op):
    itypes = [tt.dvector]  # expects a vector of parameter values when called
    otypes = [tt.dscalar]  # outputs a single scalar value (the log-likelihood)

    def __init__(self, loglike, data, x, sigma):
        # add inputs as class attributes
        self.likelihood = loglike
        self.data = data
        self.x = x
        self.sigma = sigma

    def perform(self, node, inputs, outputs):
        # the method that is used when calling the Op
        theta, = inputs  # this will contain my variables
        # call the log-likelihood function
        logl = self.likelihood(theta, self.x, self.data, self.sigma)
        outputs[0][0] = np.array(logl)  # output the log-likelihood

def my_model(theta, x):
    y = theta[0]*x + theta[1]
    return y

def my_loglike(theta, x, data, sigma):
    model = my_model(theta, x)
    ll = -(0.5/sigma**2)*np.sum((data - model)**2)
    return ll

# set up our data
N = 10  # number of data points
sigma = 1.  # standard deviation of noise
x = np.linspace(0., 9., N)
mtrue = 0.4  # true gradient
ctrue = 3.  # true y-intercept
truemodel = my_model([mtrue, ctrue], x)

# make data
np.random.seed(716742)  # set random seed, so the data is reproducible each time
data = sigma*np.random.randn(N) + truemodel
print(data)

ndraws = 3000  # number of draws from the distribution

# create our Op
logl = LogLike(my_loglike, data, x, sigma)

# use PyMC3 to sample from the log-likelihood
with pymc3.Model():
    # uniform priors on m and c
    m = pymc3.Uniform('m', lower=-10., upper=10.)
    c = pymc3.Uniform('c', lower=-10., upper=10.)
    # convert m and c to a tensor vector
    theta = tt.as_tensor_variable([m, c])
    # use a DensityDist (use a lambda function to "call" the Op)
    pymc3.DensityDist('likelihood', lambda v: logl(v), observed={'v': theta})
    step = pymc3.DEMetropolis()
    trace = pymc3.sample(ndraws, step)

# plot the traces
axes = az.plot_trace(trace)
fig = axes.ravel()[0].figure
fig.savefig('./trace_plots.png')
Find the full trace here:
Population sampling (4 chains)
DEMetropolis: [c, m]
Attempting to parallelize chains to all cores. You can turn this off with `pm.sample(cores=1)`.
Population parallelization failed. Falling back to sequential stepping of chains.
Sampling 4 chains for 0 tune and 4_000 draw iterations (0 + 16_000 draws total) took 5 seconds.
Traceback (most recent call last):
  File "test.py", line 75, in <module>
    trace = pymc3.sample(ndraws, step)
  File "/home/csl_user/.local/lib/python3.7/site-packages/pymc3/sampling.py", line 599, in sample
    idata = arviz.from_pymc3(trace, **ikwargs)
  File "/home/csl_user/.local/lib/python3.7/site-packages/arviz/data/io_pymc3.py", line 531, in from_pymc3
    save_warmup=save_warmup,
  File "/home/csl_user/.local/lib/python3.7/site-packages/arviz/data/io_pymc3.py", line 159, in __init__
    self.observations, self.multi_observations = self.find_observations()
  File "/home/csl_user/.local/lib/python3.7/site-packages/arviz/data/io_pymc3.py", line 172, in find_observations
    multi_observations[key] = val.eval() if hasattr(val, "eval") else val
  File "/home/csl_user/.local/lib/python3.7/site-packages/theano/gof/graph.py", line 522, in eval
    self._fn_cache[inputs] = theano.function(inputs, self)
  File "/home/csl_user/.local/lib/python3.7/site-packages/theano/compile/function.py", line 317, in function
    output_keys=output_keys)
  File "/home/csl_user/.local/lib/python3.7/site-packages/theano/compile/pfunc.py", line 486, in pfunc
    output_keys=output_keys)
  File "/home/csl_user/.local/lib/python3.7/site-packages/theano/compile/function_module.py", line 1839, in orig_function
    name=name)
  File "/home/csl_user/.local/lib/python3.7/site-packages/theano/compile/function_module.py", line 1487, in __init__
    accept_inplace)
  File "/home/csl_user/.local/lib/python3.7/site-packages/theano/compile/function_module.py", line 181, in std_fgraph
    update_mapping=update_mapping)
  File "/home/csl_user/.local/lib/python3.7/site-packages/theano/gof/fg.py", line 175, in __init__
    self.__import_r__(output, reason="init")
  File "/home/csl_user/.local/lib/python3.7/site-packages/theano/gof/fg.py", line 346, in __import_r__
    self.__import__(variable.owner, reason=reason)
  File "/home/csl_user/.local/lib/python3.7/site-packages/theano/gof/fg.py", line 391, in __import__
    raise MissingInputError(error_msg, variable=r)
theano.gof.fg.MissingInputError: Input 0 of the graph (indices start from 0), used to compute sigmoid(c_interval__), was not provided and not given a value. Use the Theano flag exception_verbosity='high', for more information on this error.
I've run into the same problem when following the example of how to sample from a black-box likelihood found here:
https://docs.pymc.io/notebooks/blackbox_external_likelihood.html
This seems to be a version problem. I'm on Manjaro Linux and also ran theano 1.0.4 and pymc3 3.9 using python 3.8. I could solve the issue and make the code work by downgrading to python 3.7 and pymc3 3.8. This seems to be an issue with python 3.8, as simply downgrading pymc3 did not solve it for me. I am far from an expert in pymc3, so I don't have a fix for the newest versions, but for now downgrading makes my simulations run.
Hope this helps.
Edit: The devs seem to be aware of this; there is an open issue on their github page:
https://github.com/pymc-devs/pymc3/issues/4002
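For anyone wanting to reproduce the downgrade, a sketch of one way to set it up with conda and pip (the environment name is arbitrary):

conda create -n pymc3_py37 python=3.7
conda activate pymc3_py37
pip install pymc3==3.8 arviz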
I have some code that was working fine in python2. I need to translate it to python3.
There is one piece of it that I can't understand how to adapt.
Here is some code.
Function with error
def gauss((x, y), x0, y0, intens, sigma):
    return intens*numpy.exp(-(numpy.power(x-x0, 2)+numpy.power(y-y0, 2))/(2.*sigma**2)).ravel()
Caller function
def dofwhm(psfdata):
    x = numpy.arange(psfdata.shape[1])
    y = numpy.arange(psfdata.shape[0])
    x, y = numpy.meshgrid(x, y)
    popt, pcov = opt.curve_fit(gauss, (x, y), psfdata.ravel(), p0=[psfdata.shape[1]/2, psfdata.shape[0]/2, psfdata[psfdata.shape[1]/2, psfdata.shape[0]/2], 5.0])
    return 2.355*abs(popt[3])
The error that I get is
Traceback (most recent call last):
  File "catalog.py", line 8, in <module>
    import cutPsf
  File "/Users/igor/GALPHAT/pypygalphat/preprocessingNew/cutPsf.py", line 9
    def gauss((x, y), x0, y0, intens, sigma):
              ^
SyntaxError: invalid syntax
Can somebody help me adapt it for Python 3?
UPDATE:
Well, @hpaulj's answer seems to be right. I found that there is a routine to convert Python 2 code to Python 3 code. After running 2to3 -w cutPsf.py on the target file, I get the solution suggested by hpaulj. Unfortunately it results in the following error:
Traceback (most recent call last):
  File "catalog.py", line 323, in <module>
    cutPsf.run(tempDir+galaxy.psffile, outDirFits+galaxy.psffile)
  File "/Users/igor/GALPHAT/pypygalphat_p3/preprocessingNew/cutPsf.py", line 63, in run
    coeffwhm = dofwhm(newPsf)
  File "/Users/igor/GALPHAT/pypygalphat_p3/preprocessingNew/cutPsf.py", line 20, in dofwhm
    psfdata.shape[1]/2, psfdata.shape[0]/2, psfdata[psfdata.shape[1]/2, psfdata.shape[0]/2], 5.0])
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
As said before, everything runs perfectly with Python 2...
Do the unpacking later:
def gauss(xy, x0, y0, intens, sigma):
    x, y = xy
    return intens*numpy.exp(-(numpy.power(x-x0, 2)+numpy.power(y-y0, 2))/(2.*sigma**2)).ravel()
I suggested this based on the typical scipy.optimize requirements, where the user-defined function is called as f(x, *args), with x the variable (possibly an array) that is optimized. But curve_fit is different.
scipy.optimize.curve_fit(f, xdata, ydata, p0=None,...)
Where the f (your gauss?) satisfies:
ydata = f(xdata, *params) + eps
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html
I guess my suggestion is still valid if xdata is the (x, y) tuple, or an array made from it, and ydata is psfdata.ravel().
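Putting that together for Python 3, a sketch of the adapted functions; the IndexError in the update comes from /, which returns a float under Python 3, so the p0 indices use // instead:

import numpy
import scipy.optimize as opt

def gauss(xy, x0, y0, intens, sigma):
    x, y = xy
    return intens*numpy.exp(-(numpy.power(x-x0, 2)+numpy.power(y-y0, 2))/(2.*sigma**2)).ravel()

def dofwhm(psfdata):
    x = numpy.arange(psfdata.shape[1])
    y = numpy.arange(psfdata.shape[0])
    x, y = numpy.meshgrid(x, y)
    # integer division keeps the indices valid under Python 3
    cx = psfdata.shape[1] // 2
    cy = psfdata.shape[0] // 2
    popt, pcov = opt.curve_fit(gauss, (x, y), psfdata.ravel(),
                               p0=[cx, cy, psfdata[cx, cy], 5.0])
    return 2.355*abs(popt[3])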
You can also keep x and y as separate arguments and do the unpacking with the * operator at call time; curve_fit still passes a single xdata object, so a small lambda splits it before calling gauss (again with // so the p0 indices stay integers in Python 3):

def gauss(x, y, x0, y0, intens, sigma):
    return intens*numpy.exp(-(numpy.power(x-x0, 2)+numpy.power(y-y0, 2))/(2.*sigma**2)).ravel()

def dofwhm(psfdata):
    x = numpy.arange(psfdata.shape[1])
    y = numpy.arange(psfdata.shape[0])
    x, y = numpy.meshgrid(x, y)
    popt, pcov = opt.curve_fit(lambda xy, *p: gauss(*xy, *p), (x, y), psfdata.ravel(),
                               p0=[psfdata.shape[1]//2, psfdata.shape[0]//2,
                                   psfdata[psfdata.shape[1]//2, psfdata.shape[0]//2], 5.0])
    return 2.355*abs(popt[3])
I'm trying to speed up a comparison between two point clouds. I have some code which took up to an hour to complete; I've butchered it to this and tried to implement numba. The code works except for the scipy cdist function. It's my first test of using numba; where am I going wrong?
from numba import jit
from scipy.spatial.distance import cdist

@jit(nopython=True)
def near_dist_top(T, B):
    xi = [i[0] for i in T]
    yi = [i[1] for i in T]
    zi = [i[2] for i in T]
    XB = B
    insert_params = []
    for i in range(len(T)):
        XA = [T[i]]
        disti = cdist(XA, XB, metric='euclidean').min()
        insert_params.append((xi[i], yi[i], zi[i], disti))
        # print("Top: " + str(i) + " of " + str(len(T)))
        print(i)
    return insert_params
### Edits ###
Both T and B are lists of coordinate tuples, e.g.:
(580992.507, 4275268.8321, 192.4599), (580992.507, 4275268.8391, 192.4209), (580992.507, 4275268.8391, 192.4209)
Hmmm: does numba handle lists, does it need to be a numpy array, and would cdist handle a numpy array...?
The error
numba.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Untyped global name 'cdist': cannot determine Numba type of <class 'function'>

File "scratch_28.py", line 132:
def near_dist_top(T, B):
    <source elided>
        XA = [T[i]]
        disti = cdist(XA, XB, metric='euclidean').min()
        ^
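Since numba's nopython mode cannot compile calls into scipy (hence the untyped-global error for cdist), one option is to write the nearest-neighbour search as explicit loops, which numba compiles well. A minimal sketch, assuming T and B are first converted to (n, 3) float64 numpy arrays:

import numpy as np
from numba import jit

@jit(nopython=True)
def near_dist_top(T, B):
    # for each point in T, record its coordinates and the distance to its nearest point in B
    out = np.empty((T.shape[0], 4))
    for i in range(T.shape[0]):
        best = np.inf
        for j in range(B.shape[0]):
            d = 0.0
            for k in range(3):
                diff = T[i, k] - B[j, k]
                d += diff * diff
            if d < best:
                best = d
        out[i, 0] = T[i, 0]
        out[i, 1] = T[i, 1]
        out[i, 2] = T[i, 2]
        out[i, 3] = np.sqrt(best)
    return out

# usage sketch: convert the lists of tuples to arrays first
# T = np.asarray(T_list, dtype=np.float64)
# B = np.asarray(B_list, dtype=np.float64)
# params = near_dist_top(T, B)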
When I run the following script, I notice the following couple of errors:
import tensorflow as tf
import numpy as np
import seaborn as sns
import random

# set random seed:
random.seed(42)

def potential(N):
    points = np.random.rand(N,2)*10
    values = np.array([np.exp((points[i][0]-5.0)**2 + (points[i][1]-5.0)**2) for i in range(N)])
    return points, values

def init_weights(shape, var_name):
    """
    Xavier initialisation of neural networks
    """
    init = tf.contrib.layers.xavier_initializer()
    return tf.get_variable(initializer=init, name=var_name, shape=shape)

def neural_net(X):
    with tf.variable_scope("model", reuse=tf.AUTO_REUSE):
        w_h = init_weights([2,10], "w_h")
        w_h2 = init_weights([10,10], "w_h2")
        w_o = init_weights([10,1], "w_o")
        ### bias terms:
        bias_1 = init_weights([10], "bias_1")
        bias_2 = init_weights([10], "bias_2")
        bias_3 = init_weights([1], "bias_3")
        h = tf.nn.relu(tf.add(tf.matmul(X, w_h), bias_1))
        h2 = tf.nn.relu(tf.add(tf.matmul(h, w_h2), bias_2))
        return tf.nn.relu(tf.add(tf.matmul(h2, w_o), bias_3))

X = tf.placeholder(tf.float32, [None, 2])

with tf.Session() as sess:
    model = neural_net(X)
    ## define optimizer:
    opt = tf.train.AdagradOptimizer(0.0001)
    values = tf.placeholder(tf.float32, [None, 1])
    squared_loss = tf.reduce_mean(tf.square(model-values))
    ## define model variables:
    model_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, "model")
    train_model = opt.minimize(squared_loss, var_list=model_vars)
    sess.run(tf.global_variables_initializer())
    for i in range(10):
        points, val = potential(100)
        train_feed = {X: points, values: val.reshape((100,1))}
        sess.run(train_model, feed_dict=train_feed)
        print(sess.run(model, feed_dict={X: points}))
    ### plot the approximating model:
    res = 0.1
    xy = np.mgrid[0:10:res, 0:10:res].reshape(2,-1).T
    values = sess.run(model, feed_dict={X: xy})
    sns.heatmap(values.reshape((int(10/res), int(10/res))), xticklabels=False, yticklabels=False)
On the first run I get:
[nan] [nan] [nan] [nan] [nan] [nan] [nan]]
Traceback (most recent call last):
  ...
  File "/Users/aidanrockea/anaconda/lib/python3.6/site-packages/seaborn/matrix.py", line 485, in heatmap
    yticklabels, mask)
  File "/Users/aidanrockea/anaconda/lib/python3.6/site-packages/seaborn/matrix.py", line 167, in __init__
    cmap, center, robust)
  File "/Users/aidanrockea/anaconda/lib/python3.6/site-packages/seaborn/matrix.py", line 206, in _determine_cmap_params
    vmin = np.percentile(calc_data, 2) if robust else calc_data.min()
  File "/Users/aidanrockea/anaconda/lib/python3.6/site-packages/numpy/core/_methods.py", line 29, in _amin
    return umr_minimum(a, axis, None, out, keepdims)
ValueError: zero-size array to reduction operation minimum which has no identity
On the second run I have:
ValueError: Variable model/w_h/Adagrad/ already exists, disallowed.
Did you mean to set reuse=True or reuse=tf.AUTO_REUSE in VarScope?
It's not clear to me why I get either of these errors. Furthermore, when I use:
for i in range(10):
    points, val = potential(10)
    train_feed = {X: points, values: val.reshape((10,1))}
    sess.run(train_model, feed_dict=train_feed)
    print(sess.run(model, feed_dict={X: points}))
I find that on the first run, I sometimes get a network that has collapsed to the constant function with output 0. Right now my hunch is that this might simply be a numerics problem, but I might be wrong.
If so, it's a serious problem, as the model used here is very simple.
Right now my hunch is that this might simply be a numerics problem
Indeed: when running potential(100) I sometimes get values as large as 1e21. The largest points will dominate your loss function and will drive the network parameters.
Even when normalizing your target values e.g. to unit variance, the problem of the largest values dominating the loss would still remain (look e.g. at plt.hist(np.log(potential(100)[1]), bins = 100)).
If you can, try learning the log of val instead of val itself. Note however that then you are changing the assumption of the loss function from 'predictions follow a normal distribution around the target values' to 'log predictions follow a normal distribution around log of the target values'.
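A minimal sketch of that change in the training loop (my rewording of the original loop, not the poster's code; since val = exp(...) is always >= 1 here, the log is well defined):

# train on log(val) instead of val to tame the dynamic range
points, val = potential(100)
log_val = np.log(val)
train_feed = {X: points, values: log_val.reshape((100, 1))}
sess.run(train_model, feed_dict=train_feed)

# predictions are then in log space; exponentiate to return to the original scale
pred = np.exp(sess.run(model, feed_dict={X: points}))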