I'm trying to use "statsmodels.api" to work with time series data and trying to fit a simple ARIMA model using
sm.tsa.arima_model.ARIMA(dta,(4,1,1)).fit()
but I got the following error:
module 'statsmodels.tsa.api' has no attribute 'arima_model'
I'm using 'statsmodels' version 0.9.0 with 'spyder' version 3.2.8. I'd be pleased to get your help. Thanks!
The correct path is:
import statsmodels.api as sm
sm.tsa.ARIMA()
You can discover it using a shell that allows autocompletion, such as IPython.
It is also shown in the examples provided by statsmodels, such as this one.
More information about the package structure can be found here.
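For example, a minimal sketch with statsmodels 0.9.0 (assuming dta is a pandas Series with a datetime index, as in the question) could look like this:
import statsmodels.api as sm

# dta is assumed to be a pandas Series with a datetime index
model = sm.tsa.ARIMA(dta, order=(4, 1, 1))
results = model.fit()
print(results.summary())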
I tried following along with the PyTorch transfer learning tutorial and found it raised an error when I called make_grid.
It's because the tutorial is out of date. As of 1.11.0, it has been moved to torchvision.utils.make_grid instead of torch.utils.make_grid:
https://pytorch.org/vision/main/generated/torchvision.utils.make_grid.html?highlight=make_grid#torchvision.utils.make_grid
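For reference, here's a minimal sketch of the current call, using a random batch of images as a stand-in for the tutorial's data:
import torch
import torchvision

# a random batch of 4 RGB images (64x64) as a stand-in for the tutorial's batch
images = torch.rand(4, 3, 64, 64)
# make_grid lives in torchvision.utils
grid = torchvision.utils.make_grid(images, nrow=2)
print(grid.shape)  # torch.Size([3, 134, 134]) with the default padding of 2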
I am using someone else's code (Code Link), which is implemented in TensorFlow 1. I want to run this code under TensorFlow 2, but I am getting this error:
mnist = tf.contrib.learn.datasets.load_dataset("mnist")
AttributeError: module 'tensorflow' has no attribute 'contrib'
I upgraded this code using this command:
!tf_upgrade_v2 \
--infile /research/dept8/gds/anafees/MyTest.py \
--outfile /research/dept8/gds/anafees/MyTest2.py
Most things were updated; however, the generated report showed:
168:21: ERROR: Using member tf.contrib.distribute.MirroredStrategy in deprecated module tf.contrib. tf.contrib.distribute.MirroredStrategy cannot be converted automatically. tf.contrib will not be distributed with TensorFlow 2.0, please consider an alternative in non-contrib TensorFlow, a community-maintained repository such as tensorflow/addons, or fork the required code.
I searched Google; however, I could not find any suitable solution. I do not want to move back to TensorFlow 1. Is there any alternative solution? Can anyone help?
A few modules were removed in TensorFlow 2.x, such as tf.contrib. To make the code compatible with TF 2, some library calls need to be changed.
Replace
tf.contrib.learn.datasets.load_dataset("mnist")
with
tf.keras.datasets.mnist.load_data()
And Replace
tf.contrib.distribute.MirroredStrategy
with
tf.distribute.MirroredStrategy(devices=None, cross_device_ops=None)
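Putting the two replacements together, a minimal sketch under TF 2 could look like this (the small Keras model is only illustrative, not the model from the linked code):
import tensorflow as tf

# replaces tf.contrib.learn.datasets.load_dataset("mnist")
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# replaces tf.contrib.distribute.MirroredStrategy
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    # build and compile the model inside the strategy scope
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

model.fit(x_train, y_train, epochs=1, batch_size=64)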
I'm trying to do the Udacity mini project and I've got the latest version of the SKLearn library installed (0.20.2).
When I run:
from sklearn.decomposition import RandomizedPCA
I get the error:
ImportError: cannot import name 'RandomizedPCA' from 'sklearn.decomposition' (/Users/kintesh/Documents/udacity_ml/python3/venv/lib/python3.7/site-packages/sklearn/decomposition/__init__.py)
I actually even upgraded the version using:
pip3 install -U scikit-learn
which upgraded from 0.20.0 to 0.20.2 (and also uninstalled and reinstalled it)... so I'm not sure why the import from sklearn.decomposition fails.
Are there any solutions here that might not result in completely uninstalling python3 from my machine?! Would ideally like to avoid that.
Any help would be thoroughly appreciated!
Edit:
I'm doing some digging and trying to fix this, and it appears as though the __init__.py file in the decomposition library on the SKLearn GitHub doesn't reference RandomizedPCA... has it been removed or something?
Link to the GitHub page
As it turns out, RandomizedPCA() was deprecated in an older version of SKLearn, and its functionality is now simply a parameter of PCA().
You can fix this by changing the import statement to:
from sklearn.decomposition import PCA as RandomizedPCA
... and then your classifier looks like this:
pca = RandomizedPCA(n_components=n_components, svd_solver='randomized', whiten=True).fit(X_train)
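Put together, a minimal self-contained sketch (with random data standing in for the course's face data, and an arbitrary n_components) would be:
import numpy as np
from sklearn.decomposition import PCA as RandomizedPCA

# random data as a stand-in for the course's face data
X_train = np.random.rand(150, 1850)
n_components = 50

pca = RandomizedPCA(n_components=n_components, svd_solver='randomized',
                    whiten=True).fit(X_train)
X_train_pca = pca.transform(X_train)
print(X_train_pca.shape)  # (150, 50)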
However, if you're here because you're doing the Udacity Machine Learning course on Eigenfaces.py, you'll notice that the PIL library is also deprecated.
Unfortunately I don't have a solution for that one, but here's the GitHub issue page, and here's a kind-hearted soul who used a Jupyter Notebook to solve their mini-project back when these repositories worked.
I hope this helps, and gives enough information for the next person to get into Machine Learning. If I get some time I might take a crack at recoding eigenfaces.py for SKLearn 0.20.2, but for now I'm just going to crack on with the rest of this course.
In addition to what @Aaraeus said, the PIL library has been forked to Pillow.
You can fix the PIL import error with:
pip3 install pillow
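Pillow installs under the same PIL namespace, so existing imports keep working; a quick (purely illustrative) check:
from PIL import Image

img = Image.new("RGB", (64, 64))
print(img.size)  # (64, 64)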
I want to use LDA in gensim for topic modeling over a few thousand documents.
Therefore I'm using a CSV file as input, in the format of a term-document matrix.
Currently, an error occurs when running the following code:
from gensim import corpora
import_path ="TDM.csv"
dictionary = corpora.csvcorpus(import_path, labels='true')
The error is the following:
dictionary = corpora.csvcorpus(import_path, labels='true')
AttributeError: module 'gensim.corpora' has no attribute 'csvcorpus'
Am I using the module correctly, and if not, where is my mistake?
Thanks in advance.
This also bugged me for quite a while.
It looks like csvcorpus is actually still in the experimental stage, as you can see in this GitHub issue: https://github.com/RaRe-Technologies/gensim/issues/1583
I would recommend going the old-fashioned way and using the csv package to read your CSV file instead.
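As a minimal sketch (assuming TDM.csv has one header row of document labels and then one row per term, with the term name followed by its counts per document; adjust the indexing to your actual layout), you could build the bag-of-words corpus yourself:
import csv
from gensim import models

import_path = "TDM.csv"

# read the term-document matrix: header = document labels, rows = terms
with open(import_path, newline="") as f:
    reader = csv.reader(f)
    doc_labels = next(reader)[1:]
    terms, counts = [], []
    for row in reader:
        terms.append(row[0])
        counts.append([float(v) for v in row[1:]])

# transpose into gensim's bag-of-words format:
# one list of (term_id, count) pairs per document, keeping nonzero entries
corpus = [
    [(term_id, counts[term_id][doc_id])
     for term_id in range(len(terms)) if counts[term_id][doc_id] > 0]
    for doc_id in range(len(doc_labels))
]
id2word = dict(enumerate(terms))

lda = models.LdaModel(corpus, id2word=id2word, num_topics=10)
print(lda.print_topics())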
Cheers.
For TensorFlow, this post explains very well how to use Keras with TensorFlow:
https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
But I can't find how to use Keras with Theano directly.
Is it possible to use it in the same way as with TensorFlow?
The official documentation is here: https://keras.io/backend/
Basically, edit your $HOME/.keras/keras.json (Linux) or %USERPROFILE%\.keras\keras.json (Windows) configuration file.
Use:
{
"image_data_format": "channels_last",
"epsilon": 1e-07,
"floatx": "float32",
"backend": "theano"
}
(Note the "backend" is set to "theano".)
This will of course change all your Keras projects to use Theano.
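To confirm which backend is active after the change, you can check from Python (keras.backend.backend() reports the backend name):
from keras import backend as K
print(K.backend())  # should print "theano" after editing keras.json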
If you only want to change 1 project, you can set the KERAS_BACKEND environment variable, either from the command line or in code before you import keras:
import os
os.environ["KERAS_BACKEND"] = "theano"
import keras
(I tested this on Windows 10 with Python 3.5, with both Theano and TensorFlow installed: remove these lines and it uses TensorFlow; include them and it will use Theano.)
It's nice to include this in your Python source because the dependency is then recorded explicitly in source control. Since the underlying ML library Keras uses is not 100% abstracted away (there are lots of little differences that seep through), having the code indicate which backend it needs is probably a good idea.
I hope that helps,
Robert