R: How to calculate IRF knowing VAR coefficients - var

I used a different approach to estimate the VAR coefficients, and ended up with a K=2, p=3 VAR model. Since the model is not an object returned by the VAR() function, I can't use it as an input for irf().
My questions are:
How can I compute the IRF using the VAR coefficients?
How can I plot the IRF with a 95% confidence interval?
Thank you!
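For the first question: the IRFs are the coefficient matrices Phi_s of the VAR's MA(infinity) representation y_t = \sum_s \Phi_s \varepsilon_{t-s}, and they follow from the estimated coefficient matrices A_1, ..., A_p by the recursion \Phi_0 = I_K, \Phi_s = \sum_{j=1}^{\min(s,p)} A_j \Phi_{s-j}. A minimal sketch of that recursion in Python/numpy (the A matrices below are placeholders for your estimates):
import numpy as np

K, p, horizon = 2, 3, 10
A = [0.3 * np.eye(K), 0.2 * np.eye(K), 0.1 * np.eye(K)]  # placeholders for A_1..A_p

Phi = [np.eye(K)]  # Phi_0 = I_K
for s in range(1, horizon + 1):
    # Phi_s = sum_{j=1}^{min(s,p)} A_j Phi_{s-j}
    Phi.append(sum(A[j] @ Phi[s - 1 - j] for j in range(min(s, p))))

# Phi[s][i, j] is the response of variable i, s periods after a unit shock
# to variable j. For orthogonalized IRFs, post-multiply each Phi_s by the
# Cholesky factor of the residual covariance matrix.
For the second question, a common approach is a residual bootstrap: resample the residuals, rebuild the series, re-estimate the VAR and recompute the IRFs many times, then take the 2.5% and 97.5% quantiles at each horizon as the 95% band.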

Related

TensorFlow dataset collapses with too much data

I'm using the simplest model available to make this test case in Node.js:
const model = tf.sequential();
model.add(tf.layers.dense({units: 1, inputShape: [1]}));
And training it with a formula X+X=Y for testing purposes:
let xsData = [];
let ysData = [];
for (let i = 1; i < 17; i++) { // Please note the 16 iterations here!
  xsData.push(i);
  ysData.push(i + i);
}
const xs = tf.tensor2d(xsData, [xsData.length, 1]);
const ys = tf.tensor2d(ysData, [ysData.length, 1]);
model.compile({optimizer: 'sgd', loss: 'meanSquaredError'}); // fit() needs a compiled model
await model.fit(xs, ys, {epochs: 500});
After that completes, I'm testing the model using the number ten:
model.predict(tf.tensor2d([10], [1, 1])).dataSync();
This gives me a value of around 20, which is correct (10+10=20).
Now for the problem: whenever I increase the iterations to 17 or more, the model collapses. Testing the same number (10) outputs results like -1.9837386463351284e+25. Even larger datasets result in Infinity and NaN.
Does anyone have a clue what is going on here? Would be great if anyone could point me in the right direction. Thank you in advance.
Using SGD for regression can be tricky: the outputs have no upper bound, which can lead to NaN values in the loss, in other words exploding gradients. Changing the optimizer to Adam or RMSProp works most of the time.
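In TensorFlow.js that is a one-line change when compiling the model, for example (the learning rate here is just an assumption to tune):
model.compile({optimizer: tf.train.adam(0.01), loss: 'meanSquaredError'}); // Adam instead of SGD
await model.fit(xs, ys, {epochs: 500});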

Parallel Computation of Elements of an Array in Node.JS

I have an array with at least 360 numbers and a function func which I want to call on each of them. Because the function takes some time to compute, I'm looking for a way to parallelize these tasks. Since I have almost no experience with Node.js, I'm looking for some help on how to achieve this.
This is what's given:
var geometrical_form = new paper.CompoundPath('...');
var angles = [0, ..., 360];
What I want is for the following functions to be called for each angle:
geometrical_form.rotate(angles[i]);
func(geometrical_form);
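One possible approach is Node's worker_threads module; plain Promise.all over a CPU-bound function would not help, since JavaScript itself runs on a single thread. A minimal sketch, assuming the rotation and func can run inside a worker (worker.js is a hypothetical script that rebuilds the CompoundPath, rotates it by workerData.angle, and posts back func's result):
const { Worker } = require('worker_threads');

function runForAngle(angle) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./worker.js', { workerData: { angle } });
    worker.on('message', resolve); // func's result for this angle
    worker.on('error', reject);
  });
}

const angles = Array.from({ length: 361 }, (_, i) => i); // 0..360
Promise.all(angles.map(runForAngle)).then((results) => {
  // results[i] is func's output for angles[i]
});
In practice you would cap the number of concurrent workers (e.g. one per CPU core) rather than spawning 361 at once.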

How to use precision_recall_curve to calculate precision from recall value

I'm trying to calculate precision from a recall value (e.g. 0.9) using the precision-recall curve. The way I do it is to find the index (idx) that minimizes abs(recall - 0.9), and then take precision[idx]; I can interpolate between the two sides of the minimum to improve accuracy. However, I think there must be a better way. Is there a function to look up or interpolate precision from recall, or vice versa, on the precision-recall curve?
Below is my code; I'm trying to find a better way of doing it.
from sklearn.metrics import precision_recall_curve
y_scores_lr = m.decision_function(X_test)  # m is a fitted classifier
precision, recall, thresholds = precision_recall_curve(y_test, y_scores_lr)
idx = abs(recall - 0.9).argmin()
prec = precision[idx] # use interpolation to get a better result
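As far as I know scikit-learn has no dedicated lookup function for this, but numpy's interp does the interpolation in one line. A minimal sketch using the recall and precision arrays computed above (recall from precision_recall_curve is decreasing, while np.interp expects increasing x, hence the reversal):
import numpy as np

# Linearly interpolate precision at recall = 0.9.
prec_at_09 = np.interp(0.9, recall[::-1], precision[::-1])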

SARIMAX - Summary table coefficient signs are reversed when calling them

I've fit a SARIMAX model using statsmodels as follows:
mod = sm.tsa.statespace.SARIMAX(ratingCountsRSint, order=(2,0,0), seasonal_order=(1,0,0,52), enforce_stationarity=False, enforce_invertibility=False, freq='W')
results = mod.fit()
print(results.summary().tables[1])
In the results table I have a coefficient ar.S.L52 that shows as 0.0163. When I try to retrieve the coefficient using
seasonalAR=results.polynomial_seasonal_ar[52]
I get -0.0163. I'm wondering why the sign has turned around. The same thing happens with polynomial_ar. In the documentation it says that polynomial_seasonal_ar gives the "array containing seasonal autoregressive lag polynomial coefficients". I would have guessed that I should get exactly the same as in the summary table. Could someone clarify how that comes about and whether the actual coefficient of the lag is positive or negative?
I'll use an AR(1) model as an example, but the same principle applies to a seasonal model.
We typically write the AR(1) model as:
y_t = \phi_1 y_{t-1} + \varepsilon_t
The parameter estimated by Statsmodels is \phi_1, and that is what is presented in the summary table.
When writing the AR(1) model in lag-polynomial form, we usually write it like:
\phi(L) y_t = \varepsilon_t
where \phi(L) = 1 - \phi_1 L, and L is the lag operator. The coefficients of this lag polynomial are (1, -\phi_1). These are the coefficients presented in the polynomial attributes of the results object.
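A minimal sketch on synthetic AR(1) data illustrates the convention (the same logic applies to the seasonal polynomial, so your actual lag-52 coefficient is the positive 0.0163 from the summary table):
import numpy as np
import statsmodels.api as sm

# Simulate y_t = 0.6 y_{t-1} + eps_t
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()

res = sm.tsa.statespace.SARIMAX(y, order=(1, 0, 0)).fit(disp=False)
print(res.params[0])      # ar.L1: roughly +0.6, as in the summary table
print(res.polynomial_ar)  # [1., -0.6...]: coefficients of phi(L) = 1 - phi_1 L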

Lomb Scargle phase

Is there any way I can extract the phase from the Lomb Scargle periodogram? I'm using the LombScargle implementation from gatspy.
import matplotlib.pyplot as plt
from gatspy.periodic import LombScargleFast
model = LombScargleFast().fit(t, y)
periods, power = model.periodogram_auto()
frequencies = 1 / periods
fig, ax = plt.subplots()
ax.plot(frequencies, power)
plt.show()
Power gives me an absolute value. Is there any way I can extract the phase for each frequency, as I can for a discrete Fourier transform?
The Lomb-Scargle method produces a periodogram, i.e., powers at each frequency. This is what makes it performant compared to directly least-squares fitting a sinusoidal model. I don't know about gatspy, but astropy does allow you to compute the best phase for a specific frequency of interest; see http://docs.astropy.org/en/stable/stats/lombscargle.html#the-lomb-scargle-model. I imagine doing this for many frequencies would be many times slower than computing the periodogram.
-EDIT-
The docs have moved to:
https://docs.astropy.org/en/stable/timeseries/lombscargle.html
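A minimal sketch of that approach with astropy (t and y as above; f0 is a hypothetical frequency of interest): model_parameters(f0) returns the parameters (theta_0, theta_1, theta_2) of the best fit theta_0 + theta_1 sin(2 pi f0 t) + theta_2 cos(2 pi f0 t), from which phase and amplitude follow.
import numpy as np
from astropy.timeseries import LombScargle

ls = LombScargle(t, y)
f0 = 1.0  # hypothetical frequency of interest
theta0, a, b = ls.model_parameters(f0)  # offset, sin and cos coefficients
phase = np.arctan2(b, a)   # a*sin(x) + b*cos(x) = A*sin(x + phase)
amplitude = np.hypot(a, b)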
Let's say you're looking for a specific frequency fo. Then the corresponding period is given by P = 1/fo.
We can define a function as below:
def phase_plot(t, period):
    # t is the array of timesteps
    phases = (t / period) % 1.0
    return phases
This will give you all the phases for that particular frequency of interest.
