I want to initialize a 4×11 matrix using the Glorot uniform initializer in Keras, using the following code:
import keras
keras.initializers.glorot_uniform((4,11))
I get this output:
<keras.initializers.VarianceScaling at 0x7f9666fc48d0>
How can I visualize the output? I tried c[1] and got the error 'VarianceScaling' object does not support indexing.
glorot_uniform() creates an initializer object; the shape is only supplied later, when the initializer is called. So you need:
# from keras.initializers import *  # (tf 1.x)
from tensorflow.keras.initializers import *
from tensorflow.keras import backend as K

unif = glorot_uniform()  # this returns an initializer, callable as unif(shape)
mat_as_tensor = unif((4, 11))  # this returns a tensor - use this in keras models if needed
mat_as_numpy = K.eval(mat_as_tensor)  # this returns a numpy array (don't use in models)
print(mat_as_numpy)
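If you are on TensorFlow 2.x with eager execution (an assumption about your setup), you can skip K.eval and read the values off the tensor directly:

import tensorflow as tf

unif = tf.keras.initializers.GlorotUniform()
mat = unif((4, 11))  # an eager tf.Tensor
print(mat.numpy())   # view the initialized values as a numpy array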
I intend to randomly sample from a VARMA model, but I cannot find a function in statsmodels for this. I studied the ARMA example and can replicate it successfully for a single variable.
# for the ARMA
import numpy as np
import statsmodels.api as sm

arparams = np.array([.9, -.7])
maparams = np.array([.5, .8])
ar = np.r_[1, -arparams]  # prepend zero-lag coefficient and negate AR params
ma = np.r_[1, maparams]   # prepend zero-lag coefficient
obs = 10000
sigma = 1
y = sm.tsa.arma_generate_sample(ar, ma, obs, scale=sigma)  # draw a random sample
# for the VARMA
import numpy as np
from statsmodels.tsa.statespace.varmax import VARMAX

# generate a 2-D correlated normal series
mean = [0, 0]
cov = [[1, 0.9], [0.9, 1]]
data = np.random.multivariate_normal(mean, cov, 100)

# fit the data with a VARMA model
model = VARMAX(data, order=(1, 1)).fit()
# I can't seem to find a way to randomly sample from the VARMA
Results objects from fitting a VARMAX model have a simulate method which can be used to generate a random sample. For example:
mod = VARMAX(data, order=(1,1))
res = mod.fit()
# to generate a time series of length 100 following the VARMAX process described by `res`:
sample = res.simulate(100)
This is true of any state space model, including SARIMAX, UnobservedComponents, VARMAX, and DynamicFactor.
(Also, the model class has a simulate method. The main difference is that since model objects don't have associated parameter values, you need to pass a particular parameter vector in that case).
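For instance, a minimal sketch of sampling from the model object directly; here the estimated parameters from res are reused, but any valid parameter vector could be passed instead:

# the model-level simulate requires an explicit parameter vector
sample_from_model = mod.simulate(res.params, nsimulations=100)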
I am new to machine learning and am facing some issues converting a scalar to a 2-D array.
I am trying to implement polynomial regression in Spyder. Here is my code; please help!
# Polynomial Regression
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('Position_Salaries.csv')
X = dataset.iloc[:, 1:2].values
y = dataset.iloc[:, 2].values
# Fitting Linear Regression to the dataset
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
# Fitting Polynomial Regression to the dataset
from sklearn.preprocessing import PolynomialFeatures
poly_reg = PolynomialFeatures(degree = 4)
X_poly = poly_reg.fit_transform(X)
poly_reg.fit(X_poly, y)
lin_reg_2 = LinearRegression()
lin_reg_2.fit(X_poly, y)
# Predicting a new result with Linear Regression
lin_reg.predict(6.5)
# Predicting a new result with Polynomial Regression
lin_reg_2.predict(poly_reg.fit_transform(6.5))
ValueError: Expected 2D array, got scalar array instead: array=6.5.
Reshape your data either using array.reshape(-1, 1) if your data has a
single feature or array.reshape(1, -1) if it contains a single sample.
This error is raised by scikit-learn itself: newer versions require a 2-D array as input to predict, regardless of whether you run the code in Jupyter or Spyder.
To resolve it, wrap the value in a NumPy array of the right shape:
lin_reg.predict(np.array(6.5).reshape(1, -1))
lin_reg_2.predict(poly_reg.fit_transform(np.array(6.5).reshape(1, -1)))
Older scikit-learn versions silently accepted a scalar, which is why lin_reg.predict(6.5) may still work in some environments.
The issue with your code is lin_reg.predict(6.5).
If you read the error message, it says that the model requires a 2-D array, but 6.5 is a scalar.
Why? Your X training data is 2-D, so anything you want to predict with the model must also have a 2-D shape.
This can be achieved either with .reshape(-1, 1), which creates a column vector (one feature), or with .reshape(1, -1) if you have a single sample.
The thing to remember is that data passed to predict must be prepared in the same way as the original training data; a quick shape sketch follows below.
If you need any more info, let me know.
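To make the two reshape variants concrete, here is a pure-NumPy sketch, independent of the regression code above:

import numpy as np

values = np.array([6.5, 7.0, 8.5])       # three scalar values
as_one_sample = values.reshape(1, -1)    # shape (1, 3): one sample with three features
as_one_feature = values.reshape(-1, 1)   # shape (3, 1): three samples with one feature
print(as_one_sample.shape, as_one_feature.shape)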
You have to give the input as a 2-D array, hence try this (note the nested brackets, which give the list a (1, 1) shape):
lin_reg.predict([[6.5]])
lin_reg_2.predict(poly_reg.fit_transform([[6.5]]))
Is it possible to use Keras model objects with CalibratedClassifierCV from sklearn.calibration? Or is there another way to perform isotonic regression in sklearn/other Python packages without having to pass it a model object?
I tried using the sklearn wrapper for Keras, but it didn't work. Here is the doc for the CalibratedClassifierCV class.
You can train an isotonic regression a posteriori, after prediction. Let 'file1' be a csv containing your predictions pred and real observed events obs on a subset of data. Ideally, this subset has never been used before (not even in Keras training). Let file2 contain the predictions you want to calibrate (Keras predictions for the test set).
import pandas as pd
from sklearn.isotonic import IsotonicRegression

never_seen = pd.read_csv('file1')    # held-out predictions and observed events
uncalibrated = pd.read_csv('file2')  # predictions to calibrate

ir = IsotonicRegression(out_of_bounds='clip')
ir.fit(never_seen.pred, never_seen.obs)         # learn the monotone mapping
p_calibrated = ir.transform(uncalibrated.pred)  # calibrated probabilities
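If you want to check the mechanics without files, a self-contained sketch with synthetic data (all names here are illustrative, not from the question) shows the same flow:

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
raw = rng.uniform(0, 1, 500)                             # stand-in for uncalibrated scores
obs = (rng.uniform(0, 1, 500) < raw ** 2).astype(float)  # deliberately miscalibrated events

ir = IsotonicRegression(out_of_bounds='clip')
ir.fit(raw, obs)                # learn the score -> probability mapping
calibrated = ir.transform(raw)  # calibrated probabilities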
I am trying to convert a RandomForestClassifier model of the MNIST dataset into a CoreML model using coremltools.
I use fetch_mldata from sklearn.datasets to import the data; my samples (data) are stored in a Numpy array X, and the labels (target) are stored in a Numpy array Y.
My model is generated like this:
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier()
rfc.fit(X, Y)
I then try to convert the model:
import coremltools
model = coremltools.converters.sklearn.convert(rfc)
The CoreML model input then looks like this:
input {
  name: "input"
  type {
    multiArrayType {
      shape: 784
      dataType: DOUBLE
    }
  }
}
The problem is the multiArrayType. It is much easier to deal with a pixel-buffer in iOS, so various sources point to this syntax (however, they use Caffe rather than sklearn):
model = coremltools.converters.sklearn.convert(rfc, image_input_names='data')
However, this gives me an error message:
TypeError: convert() got an unexpected keyword argument 'image_input_names'
I have tried to find the documentation for these parameters, but I have only found a few examples using Caffe, and they do not seem to get this error.
I would like to export decision tree using sklearn.
First I trained a decision tree classifier:
self._selected_classifier = tree.DecisionTreeClassifier()
self._selected_classifier.fit(train_dataframe, train_class)
self._column_names = list(train_dataframe.columns.values)
After that I used the following method in order to export the decision tree:
def _create_graph_visualization(self):
    decision_tree_classifier = self._selected_classifier
    from sklearn.externals.six import StringIO
    dot_data = StringIO()
    tree.export_graphviz(decision_tree_classifier,
                         out_file=dot_data,
                         feature_names=self._column_names)
    import pydotplus
    graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
    graph.write_pdf("decision_tree_output.pdf")
After many errors regarding missing executables, the program now finishes successfully.
The file is created, but it is empty.
What am I doing wrong?
Here is an example with output which works for me, using pydotplus:
from sklearn import tree
from io import StringIO  # on Python 2, use `import StringIO` instead
import pydotplus

# Define training and target set for the classifier
train = [[1, 2, 3], [2, 5, 1], [2, 1, 7]]
target = [10, 20, 30]

# Initialize classifier with a fixed random seed of 0 (allows reproducible results)
dectree = tree.DecisionTreeClassifier(random_state=0)
dectree.fit(train, target)

# Test classifier with another, unknown feature vector (note the 2-D shape)
test = [[2, 2, 3]]
predicted = dectree.predict(test)

dotfile = StringIO()
tree.export_graphviz(dectree, out_file=dotfile)
graph = pydotplus.graph_from_dot_data(dotfile.getvalue())
graph.write_png("dtree.png")
graph.write_pdf("dtree.pdf")