I'm trying to determine the p and q values for an ARMA model. The time series is already stationary, and I was looking at ACF and PACF plots, but I need to get the p and q values "on the go" (as when running a simulation).
I noticed that statsmodels actually has two functions, acf and pacf, but I'm not sure how to use them properly.
This is what the code looks like:
from statsmodels.tsa.stattools import acf, pacf
>>> acf(data, qstat=True)
(array([1. , 0.98707179, 0.9809318 , 0.9774078 , 0.97436479,
0.97102392, 0.96852746, 0.96620799, 0.9642253 , 0.96288455,
0.96128443, 0.96026672, 0.95912503, 0.95806287, 0.95739194,
0.95622575, 0.9545498 , 0.95381055, 0.95318588, 0.95203675,
0.95096276, 0.94996035, 0.94892427, 0.94740811, 0.94582933,
0.94420572, 0.9420396 , 0.9408416 , 0.93969163, 0.93789606,
0.93608273, 0.93413445, 0.93343312, 0.93233588, 0.93093149,
0.93033546, 0.92983324, 0.92910616, 0.92830326, 0.92799811,
0.92642784]),
array([ 2916.11296684, 5797.02377904, 8658.22999328, 11502.6002944 ,
14328.44503612, 17140.72034976, 19940.48013538, 22729.69637912,
25512.09429552, 28286.18290207, 31055.33003897, 33818.82409725,
36577.1270353 , 39332.49361223, 42082.0755955 , 44822.94911057,
47560.49941212, 50295.38504714, 53024.59880222, 55748.57526173,
58467.72758802, 61181.8659989 , 63888.25003765, 66586.53110019,
69276.46332225, 71954.97102175, 74627.57217707, 77294.54406888,
79952.23080669, 82600.54514273, 85238.73829645, 87873.86209917,
90503.68343426, 93126.47509834, 95746.79574474, 98365.17422285,
100980.34471949, 103591.88164688, 106202.58634768, 108805.3453693 ]),
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0.]))
>>> pacf(data)
array([ 1. , 0.98740203, 0.26463067, 0.18709112, 0.11351714,
0.0540612 , 0.06996315, 0.05159168, 0.05358487, 0.06867607,
0.03915513, 0.06099868, 0.04020074, 0.0390229 , 0.05198753,
0.01873783, -0.00169158, 0.04387457, 0.03770717, 0.01360295,
0.01740693, 0.01566421, 0.01409722, -0.00988412, -0.00860644,
-0.00905181, -0.0344616 , 0.0199406 , 0.01123293, -0.02002155,
-0.01415968, -0.0266674 , 0.03583483, 0.0065682 , -0.00483241,
0.0342638 , 0.02353691, 0.01704061, 0.01292073, 0.03163407,
-0.02838961])
How can I get p and q from these functions? The acf function returns only one array if qstat is set to False.
Selecting the order of an ARMA(p, q) model using estimated ACFs/PACFs is usually not the best approach, simply because for an ARMA process both the ACF and the PACF decay slowly (in absolute terms) with increasing lag, so you cannot really infer the lag orders from them. They are mostly used for pure AR or MA models, where you observe a clear cutoff in one of the two series (and even then it is more of a graphical approach).
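If you still want to automate that cutoff idea for a pure AR or MA model, a rough sketch is to compare each estimated coefficient against its confidence band. This is only a heuristic, and the helper last_significant_lag below is my own, not a statsmodels function:
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

# with alpha set, acf/pacf also return confidence intervals
acf_vals, acf_conf = acf(data, nlags=40, alpha=0.05)
pacf_vals, pacf_conf = pacf(data, nlags=40, alpha=0.05)

def last_significant_lag(vals, conf):
    # statsmodels centers the interval on the estimate, so recover the
    # half-width and test each coefficient against a band around zero
    half_width = conf[:, 1] - vals
    significant = np.abs(vals) > half_width
    lags = np.nonzero(significant[1:])[0] + 1  # lag 0 is always 1, skip it
    return lags.max() if lags.size else 0

q_guess = last_significant_lag(acf_vals, acf_conf)    # MA order from the ACF
p_guess = last_significant_lag(pacf_vals, pacf_conf)  # AR order from the PACF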
If you want to determine p and q "on the fly" for an ARMA model, it seems more reasonable to use information criteria (e.g. AIC or BIC). statsmodels provides the function arma_order_select_ic() for this very purpose. So what you want is something like this:
from statsmodels.tsa.stattools import arma_order_select_ic
arma_order_select_ic(data, max_ar=4, max_ma=4, ic='bic')
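The call returns a results object; with ic='bic', the order minimizing the BIC over the grid can be read off directly (a minimal sketch):
res = arma_order_select_ic(data, max_ar=4, max_ma=4, ic='bic')
p, q = res.bic_min_order  # the (p, q) pair minimizing the BIC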
I need to solve a cascade of sparse linear systems Ax = b. The solution x of the first system is an input to the second system, which is an input to the third, and so on. Because numerical errors compound, and for other reasons, I have to use scipy.sparse.linalg.bicgstab as my linear solver. However, for a system that is not even ill-conditioned and definitely has an inverse, the solver gives me a flag for "illegal input or breakdown".
import numpy as np
from scipy.sparse.linalg import bicgstab, inv
from scipy import sparse
A = np.array(
[[ -1., 0., 0., 0., 0., 0., 0., 0.],
[ 0., -1., 0., 0., 0., 0., 0., 0.],
[ 0., 0., -10., 0., 0., 0., 0., 0.],
[ 0., 0., 0., -10., 0., 0., 0., 0.],
[ 0., 0., 3., 0., -3., 0., 0., 0.],
[ 0., 0., 0., 3., 0., -3., 0., 0.],
[ 0., 0., 0., 7., 3., 0., -10., 0.],
[ 0., 0., 7., 0., 0., 3., 0., -10.]]
)
A = -sparse.csc_matrix(A)
b = np.array([ 1., 0., 10., 0., 0., 0., 0., 0.])
x, flag = bicgstab(A=A, b=b, maxiter=40, tol=1e-6)
x, flag
>>> (array([1. , 0. , 1. , 0. , 1.00118012,
0. , 0.3004875 , 0.70009946]), -10)
Just to prove the point
inv(A).dot(b)
>>> array([1. , 0. , 1. , 0. , 1. , 0. , 0.3, 0.7])
The output above is exactly what I expect. Does anyone know why bicgstab is not giving me the desired output? I could not find documentation on "illegal input or breakdown" for bicgstab, which is why I am asking my question on SO.
The -10 error code does not necessarily mean that you have wrong input; in your case, the breakdown most likely occurred during the iterative solve.
By slightly changing your RHS:
b = np.array([ 1., 0., 0., 0., 10., 0., 0., 0.])
scipy's bicgstab has no trouble converging, even without a preconditioner:
x, flag = bicgstab(A=A, b=b, maxiter=40, tol=1e-6)
print (x, flag)
(array([1. , 0. , 0. , 0. , 3.33333333,
0. , 1. , 0. ]), 0)
The fact that the matrix has an inverse and a decent condition number
print(np.linalg.cond(A.toarray()))  # A is sparse here, so densify before computing the condition number
14.616397823169317
does not guarantee that it is easy to obtain a solution for a particular RHS, especially using an iterative solver or one particular iterative solver. It seems to me (without an elaborate analysis of the matrix spectrum and its kernel space) that your RHS lies exactly in such a "bad region".
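If you want to see where the iteration breaks down, scipy's iterative solvers accept a callback that is invoked once per iteration with the current iterate; a rough sketch for monitoring the residual (the monitor helper is my own):
residuals = []

def monitor(xk):
    # record the residual norm at each iteration
    residuals.append(np.linalg.norm(b - A @ xk))

x, flag = bicgstab(A=A, b=b, maxiter=40, tol=1e-6, callback=monitor)
print(flag, residuals[-5:])  # how the residual behaved just before the failure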
If you are simply interested in a solution, I would suggest switching to GMRES:
from scipy.sparse.linalg import gmres
x, flag = gmres(A=A, b=b, maxiter=40, tol=1e-6)
(array([1. , 0. , 0.1 , 0. , 0.1 , 0. , 0.03, 0.07]), 0)
If you are interested in investigating why BiCGStab failed while GMRES succeeded on this system, I would invite you to post a narrowed-down question on Computational Science SE.
I am trying to maximize cx - xAx with A being positive definite, but the solution is not what it should be. Please help.
I tried the problem using this data
import numpy as np
import cvxpy as cp

A = np.array([[1595., 1098., 1133., 0., 0., 0., 0.],
[1191., 1497., 1133., 0., 0., 0., 0.],
[1191., 1098., 1396., 0., 0., 0., 0.],
[ 0., 0., 0., 655., 0., 0., 0.],
[ 0., 0., 0., 0., 1313., 0., 0.],
[ 0., 0., 0., 0., 0., 581., 0.],
[ 0., 0., 0., 0., 0., 0., 536.]])
c = np.array([4673.36981266, 4727.12719741, 5939.49046907, 3867.69830799,
6099.15146109, 5358.10885615, 4885.96523884])
x = cp.Variable(7)  # not shown in the original; A is 7x7
prob = cp.Problem(cp.Maximize(cp.quad_form(x, A) + c.T @ x), [x >= 0])
prob.solve()
I get a DCP error with the code above.
I then tried the Minimize version, but then I get -inf as the answer:
prob = cp.Problem(cp.Minimize(cp.quad_form(x, A) - c.T @ x), [x >= 0])
prob.solve()
The actual optimal solution to Max (cx - xAx) is
np.array([0,0,2.134,2.903,2.359,4.6266,4.508])
with an optimal value of 42586.
I think you forgot to change your constraint after you changed the problem from a maximization to a minimization. For the minimization, you could try x <= 0.
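A minimal sketch of that suggestion, reusing the variables defined above (whether it recovers the optimum you expect is worth verifying against your data):
# Note: cvxpy's quad_form expects a symmetric matrix; if A is not exactly
# symmetric, you may need to pass (A + A.T) / 2 instead.
prob = cp.Problem(cp.Minimize(cp.quad_form(x, A) - c.T @ x), [x <= 0])
prob.solve()
print(prob.value, x.value)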
My Sequential CNN model is trained on 39 classes as a multi-class classifier. For predictions it returns a one-hot encoded array like [0,0,...,1,0,...], whereas I want something like [0.012, 0.022, 0.067, ..., 0.997, 0.0004, ...].
Is there a way to get this? If not, what exactly should I change to get it?
The reason I want it this way is to verify how close the other classes are; if one says 0.98 and another says 0.96, then I am doing something wrong, the data isn't enough, etc.
Thank you :)
My model is basically a Keras ResNet50 with the following config:
import keras
from keras.layers import Dense, Dropout
from keras.models import Model

model = keras.applications.resnet.ResNet50(include_top=False, weights=None, input_tensor=None, input_shape=(64,64,1), pooling='avg', classes=39)
x = model.output
x = Dropout(0.7)(x)
num_classes = 39
predictions = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=model.input, outputs=predictions)
optimizer = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, amsgrad=False)
model.compile(optimizer, loss='categorical_crossentropy', metrics=['categorical_accuracy'], loss_weights=None, sample_weight_mode=None, weighted_metrics=None, target_tensors=None)
Sample input:
import cv2
import numpy as np

img = cv2.imread(IMAGE_PATH, 0)
img = cv2.resize(img, (64,64))
img = np.reshape(img, (1,64,64,1))
predicted_class_indices = np.argmax(model.predict(img, verbose=1))
Sample output:
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
Desired output (numbers are hypothetical):
array([[0.022, 0.353, 0.0535, 0.52, 0.212, 0.822, 0.532, 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0.]], dtype=float32)
One way to do so is to remove the last activation layer (related issue).
You can do so by using model.layers[-1].activation = None.
However, softmax should already output a probability distribution, not a one-hot vector, so you might want to check how your training is going.
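Note that model.predict already returns the full softmax vector; if it looks one-hot, the model is simply very confident. A quick sketch for inspecting how close the runner-up classes are:
probs = model.predict(img)              # shape (1, 39): softmax probabilities
top5 = probs[0].argsort()[::-1][:5]     # indices of the five most likely classes
print(list(zip(top5, probs[0][top5])))  # class index alongside its probability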
I'm pretty certain this is trivial, but I haven't yet managed to quite get my head around scan. I want to iteratively build a matrix of values, m, where
m[i,j] = f(m[k,l]) for k < i, l < j
so you could think of it as a dynamic programming problem. However, I can't even generate the list [1..100] by iterating over the list [1..100] and updating the shared value as I go.
import numpy as np
import theano as T
import theano.tensor as TT

def test():
    arr = T.shared(np.zeros(100))

    def grid(idx, arr):
        return {arr: TT.set_subtensor(arr[idx], idx)}

    T.scan(
        grid,
        sequences=TT.arange(100),
        non_sequences=[arr])
    return arr

run = T.function([], outputs=test())
run()
which returns
array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0.])
There are a few things here that point towards some misunderstandings. scan really can be a hard bit of Theano to wrap your head around!
Here's some updated code that does what I think you're trying to do, but I wouldn't recommend using this code at all. The basic issue is that you seem to be using a shared variable inappropriately.
import numpy as np
import theano as T
import theano.tensor as TT

def test():
    arr = T.shared(np.zeros(100))

    def grid(idx, arr):
        return {arr: TT.set_subtensor(arr[idx], idx)}

    _, updates = T.scan(
        grid,
        sequences=TT.arange(100),
        non_sequences=[arr])
    return arr, updates

outputs, updates = test()
run = T.function([], outputs=outputs, updates=updates)
print run()
print outputs.get_value()
This code is changed from the original in two ways:
1. The updates from the scan have to be captured (originally they were discarded) and passed to the theano.function's updates parameter. Without this, the shared variable is not updated at all.
2. The contents of the shared variable need to be examined after the function has executed (see below).
This code prints two sets of values. The first is the output of the Theano function from when it's executed. The second is the contents of the shared variable after the Theano function has executed. The Theano function returns the shared variable so you might think that these two sets of values should be the same, but you'd be wrong! No shared variables are updated until after all of the function's output values have been computed. So it's only after the function has been executed and we look at the contents of the shared variable that we see the values we expected to see originally.
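A minimal standalone illustration of that update semantics (a toy counter, not from the original post):
import numpy as np
import theano as T

s = T.shared(np.float64(0.0))
# outputs are computed from the pre-update value; the update applies afterwards
f = T.function([], outputs=s, updates=[(s, s + 1)])
print f()            # 0.0 -- the value before the update
print s.get_value()  # 1.0 -- the value after the function has run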
Here's an example of implementing a dynamic programming algorithm in Theano. The algorithm is a simplified version of dynamic time warping which has a lot of similarities to edit distance.
import numpy
import theano
import theano.tensor as tt

def inner_step(j, c_ijm1, i, c_im1, x, y):
    insert_cost = tt.switch(tt.eq(j, 0), numpy.inf, c_ijm1)
    delete_cost = tt.switch(tt.eq(i, 0), numpy.inf, c_im1[j])
    match_cost = tt.switch(tt.eq(i, 0), numpy.inf, c_im1[j - 1])
    in_top_left = tt.and_(tt.eq(i, 0), tt.eq(j, 0))
    min_c = tt.min(tt.stack([insert_cost, delete_cost, match_cost]))
    c_ij = tt.abs_(x[i] - y[j]) + tt.switch(in_top_left, 0., min_c)
    return c_ij

def outer_step(i, c_im1, x, y):
    outputs, _ = theano.scan(inner_step, sequences=[tt.arange(y.shape[0])],
                             outputs_info=[tt.constant(0, dtype=theano.config.floatX)],
                             non_sequences=[i, c_im1, x, y], strict=True)
    return outputs

def main():
    x = tt.vector()
    y = tt.vector()
    outputs, _ = theano.scan(outer_step, sequences=[tt.arange(x.shape[0])],
                             outputs_info=[tt.zeros_like(y)],
                             non_sequences=[x, y], strict=True)
    f = theano.function([x, y], outputs=outputs)
    a = numpy.array([1, 2, 4, 8], dtype=theano.config.floatX)
    b = numpy.array([2, 3, 4, 7, 8, 9], dtype=theano.config.floatX)
    print a
    print b
    print f(a, b)

main()
This is highly simplified and I wouldn't recommend using it for real. In general, Theano is very bad at doing dynamic programming because theano.scan is so slow in comparison to native looping. If you need to propagate gradients through a dynamic program, you may not have any choice, but if you don't need gradients you should probably avoid using Theano for dynamic programming.
If you want a much more thorough implementation of DTW which gets over some of the performance hits Theano imposes by computing many comparisons in parallel (i.e. batching) then take a look here: https://github.com/danielrenshaw/TheanoBatchDTW.
I can apply the following code to an array.
from numpy import *
A = eye(4)
A[A[:,1] > 0.5,:]
But how can I apply a similar method to a mat?
A = mat(eye(4))
A[A[:,1] > 0.5,:]
I know the above code is wrong, but what should I do?
The problem is that, when A is a numpy.matrix, A[:,1] returns a 2-d matrix, and therefore A[:,1] > 0.5 is also 2-d. Anything that makes this expression look like the one created when A is an ndarray will work. For example, you can write A.A[:,1] > 0.5 (the .A attribute returns an ndarray view of the matrix), or (A[:,1] > 0.5).A1 (the .A1 attribute returns a flattened ndarray).
For example,
In [119]: A
Out[119]:
matrix([[ 1., 0., 0., 0.],
[ 0., 1., 0., 0.],
[ 0., 0., 1., 0.],
[ 0., 0., 0., 1.]])
In [120]: A[(A[:, 1] > 0.5).A1,:]
Out[120]: matrix([[ 0., 1., 0., 0.]])
In [121]: A[A.A[:, 1] > 0.5,:]
Out[121]: matrix([[ 0., 1., 0., 0.]])
Because of quirks like these, I (and many others) recommend avoiding the numpy.matrix class. Most code can be written just as easily by using ndarrays throughout.
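For instance, a quick sketch of the same operation after converting to a plain ndarray:
import numpy as np

A = np.asarray(np.matrix(np.eye(4)))  # np.asarray turns a matrix into an ndarray
A[A[:, 1] > 0.5, :]                   # boolean indexing now works directly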