I am training a GNN model using PyTorch. I set the seed to a constant, but I still cannot get the same training results. I am using torch_scatter. Is the scatter function not reproducible? If it is not, what can I do? Please help me. Thanks very much.
from torch_scatter import scatter
out = scatter(agg_embed, index, dim=0, out=None, dim_size=t_embed.size(0), reduce='mean')
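For reference, a typical full seeding setup looks roughly like this (a sketch; the cuDNN flags only matter when training on GPU):
import random
import numpy as np
import torch

seed = 42
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)           # no-op on CPU-only machines
torch.backends.cudnn.deterministic = True  # ask cuDNN for deterministic kernels
torch.backends.cudnn.benchmark = False
# torch.use_deterministic_algorithms(True)  # newer PyTorch: raise an error on non-deterministic ops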
I have been training ARIMA models using the statsmodels (v0.12.2) package, and would like to check how a model fits on the training data.
Current Code:
from statsmodels.tsa.arima.model import ARIMA
#for some p,d,q
model = ARIMA(train, order=(p, d, q))
model_fit = model.fit()
Attempting to do:
I would like to plot the predictions of the training set against the actual training values.
I have been trying to use the following after reading this documentation:
model_fit.get_prediction()
However, this returns:
<statsmodels.tsa.statespace.mlemodel.PredictionResultsWrapper at 0x7f804bf5bbe0>
Question:
How can I return these predicted values for the training set?
Advice appreciated!
I think you are looking for the fitted values of the model. If so, use:
model_fit.fittedvalues
You can find a complete example here.
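For example, to compare them against the actual training data (a sketch, assuming train is the series passed to ARIMA and Matplotlib is available):
import matplotlib.pyplot as plt

plt.plot(train, label='actual')                    # training data
plt.plot(model_fit.fittedvalues, label='fitted')   # in-sample fitted values
plt.legend()
plt.show()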
I've found that changing:
model_fit.get_prediction()
to
model_fit.get_prediction().predicted_mean
returns an array which isn't perfect, but is suitable for my analysis.
Please post an answer if you have an alternative/better method!
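For reference, the same prediction results object also exposes confidence intervals, so a slightly fuller sketch would be (pred is just a hypothetical variable name):
pred = model_fit.get_prediction()
in_sample = pred.predicted_mean   # point predictions for the training set
conf = pred.conf_int()            # lower/upper confidence bounds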
I had a neural net in Keras that performed well. With the deprecations that came with TensorFlow 2, I had to rewrite the model, and now it is giving me worse accuracy metrics.
My suspicion is that TF2 wants you to use its own data structure to train models, and there is an example of how to go from NumPy to tf.data.Dataset here.
So I did:
train_dataset = tf.data.Dataset.from_tensor_slices((X_train_deleted_nans, y_train_no_nans))
train_dataset = train_dataset.shuffle(SHUFFLE_CONST).batch(BATCH_SIZE)
Once the training starts I get this warning:
2019-10-04 23:47:56.691434: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence
[[{{node IteratorGetNext}}]]
Appending .repeat() to the creation of my tf.data.Dataset solved my error, as suggested by duysqubix in his eloquent solution posted here:
https://github.com/tensorflow/tensorflow/issues/32817#issuecomment-539200561
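Applied to the dataset from the question, the fix is a one-line change (a sketch; names are taken from the question above):
train_dataset = tf.data.Dataset.from_tensor_slices((X_train_deleted_nans, y_train_no_nans))
train_dataset = train_dataset.shuffle(SHUFFLE_CONST).batch(BATCH_SIZE).repeat()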
I'm currently trying to implement YOLOv3 in TensorFlow, using the Estimator API. However, I'm stuck at the loss function. YOLOv3 makes predictions at three scales and I can't figure out how to calculate the loss for all of them. I've already looked at the paper and also tried to find the loss function in the darknet source code, but can't figure it out. I've also looked at the loss function code of another YOLOv3 TensorFlow implementation, but that hasn't really helped me understand the calculation of the loss either.
Can someone explain how exactly the loss for training is calculated while taking into account the predictions of all three scales?
Hey, I am new to TensorFlow. I used a DNN to train the model and I would like to plot the loss curve. However, I do not want to use TensorBoard since I am really not familiar with it. I wonder whether it is possible to extract the loss info at each step and plot it using another plotting package or scikit-learn?
Really appreciated!
Change your sess.run(training_function, feed_dict) statement so it includes your loss function as well. Then use something like Matplotlib to plot the data.
import matplotlib.pyplot as plt

loss_list = []
# Inside the training loop, run the loss op alongside the training op:
_, loss = sess.run((training_function, loss_function), feed_dict)
loss_list.append(loss)
# After training, plot the collected losses:
plt.plot(loss_list)
I'm new to Keras and I have one question.
To get reproducible results, I fixed the seed. If the fit function's shuffle parameter is true, is the training data order always the same for all epochs or not?
Thanks in advance.
Yes, if you set the seed correctly to a certain value, the training order should always be the same with that seed. However, there were some problems regarding reproducibility when using TF with multiprocessing; I'm not sure if that has been solved by now.
You can also check out this page in the Keras documentation.
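For reference, the usual seeding boilerplate looks roughly like this for TF 2.x (a sketch; older TF 1.x code uses tf.set_random_seed instead):
import os
import random
import numpy as np
import tensorflow as tf

os.environ['PYTHONHASHSEED'] = '0'  # ideally set before the interpreter starts
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)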