AttributeError in python: object has no attribute - python-3.x

I started learning machine learning and came across neural networks. While implementing a program I got this error. I have tried every solution I could find, but no luck. Here's my code:
from numpy import exp, array, random, dot

class neural_network:
    def _init_(self):
        random.seed(1)
        self.weights = 2 * random.random((2, 1)) - 1

    def train(self, inputs, outputs, num):
        for iteration in range(num):
            output = self.think(inputs)
            error = outputs - output
            adjustment = 0.01 * dot(inputs.T, error)
            self.weights += adjustment

    def think(self, inputs):
        return dot(inputs, self.weights)
neural = neural_network()
# The training set
inputs = array([[2, 3], [1, 1], [5, 2], [12, 3]])
outputs = array([[10, 4, 14, 30]]).T
# Training the neural network using the training set.
neural.train(inputs, outputs, 10000)
# Ask the neural network the output
print(neural.think(array([15, 2])))
This is the error I'm getting when running neural.train:
Traceback (most recent call last):
File "neural.py", line 27, in <module>
neural.train(inputs, outputs, 10000)
File "neural.py", line 10, in train
output = self.think(inputs)
File "neural.py", line 16, in think
return (dot(inputs, self.weights))
AttributeError: 'neural_network' object has no attribute 'weights'
Even though the class sets self.weights, it still says there is no such attribute.

Well, it turns out that your initialization method should be named __init__ (two underscores), not _init_...
So, changing the method to
def __init__(self):
    random.seed(1)
    self.weights = 2 * random.random((2, 1)) - 1
your code works OK:
neural.train(inputs, outputs, 10000)
print(neural.think(array([15, 2])))
# [ 34.]

Your initialization method is written incorrectly: it takes two underscores on each side, __init__(self), not one underscore, _init_(self).
Otherwise, nice code!
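To see why the single-underscore version fails (a minimal sketch, not from either answer): _init_ is just an ordinary method that Python never calls automatically on instantiation, so the attribute is never created.
class Broken:
    def _init_(self):            # ordinary method; Python never calls it automatically
        self.weights = [1, 2]

class Fixed:
    def __init__(self):          # the constructor Python calls on instantiation
        self.weights = [1, 2]

print(hasattr(Broken(), "weights"))  # False -> AttributeError when accessed
print(hasattr(Fixed(), "weights"))   # True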

Related

How can I utilize the JAX library on my code with a numpy take related error: "NotImplementedError: The 'raise' mode to jnp.take is not supported."

Because I need to speed up my code, I have rewritten it as pure NumPy, to evaluate the runtime both that way and with the JAX accelerator in Python. I don't know if my code is suitable for acceleration by JAX, but my limited previous study and experience with JAX encourage me to try vectorizing or parallelizing the prepared NumPy code with JAX. As an initial test, I put the jax.jit decorator on the function, but it got stuck at the first line of my code. It raised the following error in Colab:
<__array_function__ internals> in take(*args, **kwargs)
UnfilteredStackTrace: NotImplementedError: The 'raise' mode to jnp.take is not supported.
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
NotImplementedError Traceback (most recent call last)
<__array_function__ internals> in take(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/jax/_src/numpy/lax_numpy.py in _take(a, indices, axis, out, mode)
5437 elif mode == "raise":
5438 # TODO(phawkins): we have no way to report out of bounds errors yet.
-> 5439 raise NotImplementedError("The 'raise' mode to jnp.take is not supported.")
5440 elif mode == "wrap":
5441 indices = mod(indices, _constant_like(indices, a.shape[axis_idx]))
NotImplementedError: The 'raise' mode to jnp.take is not supported.
I don't know how to handle this code with JAX. This error is related to np.take, although I guess it will get stuck again at some other lines, e.g. the ones that contain reduce.
The sample code is:
import numpy as np
import jax

pp_ = np.array([[0.75, 0.5, 0.5], [15, 10, 15], [0.5, 3., 0.35], [15, 17, 15]])
rr_ = np.array([1, 3, 2, 5], dtype=np.float64)
gg_ = np.array([-0.48305741, -1])
ee_ = np.array([[0, 2], [1, 3]], dtype=np.int64)

@jax.jit
def JAX_acc(pp_, rr_, gg_, ee_):
    rr_act = np.take(rr_, ee_)
    r_add = np.add.reduce(rr_act, axis=1)
    pc_dis = np.sum((r_add, gg_), axis=0)
    ang_ = np.arccos((rr_act ** 5 + pc_dis[:, None] ** 2) / 1e5)
    pl_rad = rr_act * np.cos(ang_)
    pp_act = np.take(pp_, ee_, axis=0)
    pc_vec = -np.subtract.reduce(pp_act, axis=1)
    pc_ = pp_act[:, 0, :] + pc_vec / np.linalg.norm(pc_vec, axis=1)[:, None] * np.abs(pl_rad[:, 0][:, None])
    return print(pc_dis, pc_, pl_rad)

JAX_acc(pp_, rr_, gg_, ee_)
Main question: Could the JAX library be utilized for this example? How?
Should I use other modules instead of np.take?
I would appreciate any help adapting this code to JAX.
---------------- solved by the update ----------------
I would be grateful for any other explanations on the following extra questions (not strictly needed):
Which math operations (-, +, *, ...) or their NumPy equivalents (np.power, np.sum, ...) will be faster with JAX? Will the NumPy ones be handled by JAX better (in terms of speed) than the plain math operators?
Does JAX CPU mode need a different writing style than TPU mode? I haven't used TPU mode so far.
Updates:
I have changed the code to use the jnp equivalents based on @jakedvp's comment, and the np.take problem is gone:
import jax.numpy as jnp

def JAX_acc_jnp(pp_, rr_, gg_, ee_):
    rr_act = jnp.take(rr_, ee_)
    r_add = jnp.sum(rr_act, axis=1)  # .squeeze()
    pc_dis = jnp.add(r_add, gg_)
    ang_ = jnp.arccos((rr_act ** 5 + pc_dis[:, None] ** 2) / 1e5)
    pl_rad = rr_act * jnp.cos(ang_)
    pp_act = jnp.take(pp_, ee_, axis=0)
    pc_vec = jnp.diff(pp_act, axis=1).squeeze()
    pc_ = pp_act[:, 0, :] + pc_vec / jnp.linalg.norm(pc_vec, axis=1)[:, None] * jnp.abs(pl_rad[:, 0][:, None])
    return pc_dis, pc_, pl_rad
For pc_dis and pc_ the results are correct, but pl_rad is different because ang_ gets different values, all of which come out as -1.0927847e-10; perhaps this is because the true values need around 13 decimal places and JAX changed the dtype to float32, I don't know. If so, how can I specify which dtype JAX uses?
larger data sizes: pp_, rr_, gg_, ee_
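Regarding the dtype question above, a minimal sketch (my own assumption, not part of the original post): JAX defaults to 32-bit floats, and 64-bit support has to be enabled explicitly before any arrays are created.
import jax
jax.config.update("jax_enable_x64", True)  # must run before creating JAX arrays

import jax.numpy as jnp
print(jnp.array([1.0]).dtype)  # float64 once x64 is enabled; float32 otherwise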

AllenNLP 2.0: Can't get FBetaMultiLabelMeasure to run

I would like to compute the F1 score for a classifier trained with AllenNLP. I used working code from an AllenNLP guide, which computed accuracy, not F1, so I tried to adjust the metric in the code.
According to the documentation, CategoricalAccuracy and FBetaMultiLabelMeasure take the same inputs. (predictions: torch.Tensor of shape [batch_size, ..., num_classes], gold_labels: torch.Tensor of shape [batch_size, ...])
But for some reason the input that worked perfectly well for the accuracy results in a RuntimeError when given to the f1-multi-label metric.
I condensed the problem to the following code snippet:
>>> from allennlp.training.metrics import CategoricalAccuracy, FBetaMultiLabelMeasure
>>> import torch
>>> labels = torch.LongTensor([0, 0, 2, 1, 0])
>>> logits = torch.FloatTensor([[ 0.0063, -0.0118, 0.1857], [ 0.0013, -0.0217, 0.0356], [-0.0028, -0.0512, 0.0253], [-0.0460, -0.0347, 0.0400], [-0.0418, 0.0254, 0.1001]])
>>> labels.shape
torch.Size([5])
>>> logits.shape
torch.Size([5, 3])
>>> ca = CategoricalAccuracy()
>>> f1 = FBetaMultiLabelMeasure()
>>> ca(logits, labels)
>>> f1(logits, labels)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".../lib/python3.8/site-packages/allennlp/training/metrics/fbeta_multi_label_measure.py", line 130, in __call__
true_positives = (gold_labels * threshold_predictions).bool() & mask & pred_mask
RuntimeError: The size of tensor a (5) must match the size of tensor b (3) at non-singleton dimension 1
Why is this error happening? What am I missing here?
You want to use FBetaMeasure, not FBetaMultiLabelMeasure. "Multilabel" means you can specify more than one correct answer, while "categorical accuracy" allows only one correct answer. If you did want the multi-label metric, you would have to add another dimension to your labels.
I suspect the documentation of FBetaMultiLabelMeasure is misleading. I'll look into fixing it.
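A minimal sketch of that suggestion, reusing the tensors from the question (the average argument is my own choice, not part of the answer):
from allennlp.training.metrics import FBetaMeasure
import torch

labels = torch.LongTensor([0, 0, 2, 1, 0])
logits = torch.FloatTensor([[ 0.0063, -0.0118, 0.1857],
                            [ 0.0013, -0.0217, 0.0356],
                            [-0.0028, -0.0512, 0.0253],
                            [-0.0460, -0.0347, 0.0400],
                            [-0.0418, 0.0254, 0.1001]])

f1 = FBetaMeasure(average="macro")   # same (predictions, gold_labels) call as CategoricalAccuracy
f1(logits, labels)
print(f1.get_metric(reset=True))     # dict with precision, recall and fscore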

Tensorflow to CoreML with tf-coreml

I have a multi-input network that uses a tf.bool tf.placeholder to manage how batch normalization is executed in training and validation / testing.
I’ve been trying to convert this trained model to CoreML via the tf-coreml library with no success; I get the error below:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Retval[26] does not have value
I understand this error to mean that a certain node is missing a value that the converter needs in order to execute the model. I also understand this error is connected to control flow operations (linked to the batch normalization method creating operations like Switch and Merge). The TensorFlow source shows this:
def testSwitchDeadBranch(self):
    with self.cached_session():
        data = constant_op.constant([1, 2, 3, 4, 5, 6], name="data")
        ports = ops.convert_to_tensor(True, name="ports")
        switch_op = control_flow_ops.switch(data, ports)
        dead_branch = array_ops.identity(switch_op[0])
        with self.assertRaisesWithPredicateMatch(
            errors_impl.InvalidArgumentError,
            lambda e: "Retval[0] does not have value" in str(e)):
            self.evaluate(dead_branch)
Note that my error is Retval[26] (I’ve gotten [24], etc.), not Retval[0]. I’m assuming it tests the Switch “dead branch”, which should be the non-used branch for inference. The code also does the same with Merge “dead branch”.
Is there any detail I’m missing that may be causing this error (not the first error I’ve faced during conversion, of course)? The way the inference is done? The way batch normalization is implemented? The way the model is saved?
What I’ve done so far:
I’m using Tensorflow 1.14.0
I know tf.layers.batch_normalization creates operations Switch and Merge, which are not CoreML compatible
I’ve tried converting to Tensorflow Lite with similar issues
I’ve followed the Facenet conversion process (that model uses the same tf.bool logic for training, validation, and testing) with no success
I’ve tried the GraphTransforms library
I’ve tried scripts to remove / modify the control flow
I’ve created separate graphs to avoid the extra ops with no success
Note: I’ve abstracted a big part of the code to post this question.
This is how batch normalization is implemented (within a convolution block).
training = tf.placeholder(tf.bool, shape = (), name = 'training')

def conv_layer(input, kernelSize, nFilters, poolSize, stride, input_channels = 1, name = 'conv'):
    with tf.name_scope(name):
        shape = [kernelSize, kernelSize, input_channels, nFilters]
        weights = new_weights(shape = shape)
        biases = new_biases(length = nFilters)
        conv = tf.nn.conv2d(input, weights, strides = [1, 2, 2, 1], padding = 'SAME', name = 'convL')
        conv += biases
        pool = tf.reduce_max(conv, reduction_indices=[3], keep_dims=True, name = 'pool')
        pool = tf.nn.max_pool(conv, ksize = [1, poolSize, poolSize, 1], strides = shape, padding = 'SAME')
        bnorm = tf.layers.batch_normalization(pool, training = training, center = True, scale = True, fused = False, reuse = False)
        act = tf.nn.relu(bnorm)
        return act
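As a side note (my own sketch, not part of the original post): one way to confirm that the Switch and Merge ops come from the tf.bool placeholder is to rebuild the graph with a plain Python constant for training; tf.layers.batch_normalization can then resolve the branch at graph-construction time (assuming TF 1.14 behaviour), so no conditional ops should appear in the graph.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1, 80, 160, 1], name='x')
# training is a Python bool here, not a tf.bool placeholder
bn = tf.layers.batch_normalization(x, training=False, fused=False)

print(any(op.type in ('Switch', 'Merge')
          for op in tf.get_default_graph().get_operations()))  # expected: False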
Below is the code to train and save the model.
saver = tf.train.Saver()
with tf.Session(config = config) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())
    sess.run(init_train_op)
    for epoch in range(MAX_EPOCHS):
        for step in range(10):
            l, _, se = sess.run(
                [loss, train_op, mean_squared_error],
                feed_dict = {training: True})
        print('\nRunning validation operation...')
        sess.run(init_val_op)
        for _ in range(10):
            val_out, val_l, val_se = sess.run(
                [out, val_loss, val_mean_squared_error],
                feed_dict = {training: False})
        sess.run(init_train_op)  # switch back to training set
    # Save model
    print('Saving Model...\n')
    saver.save(sess, join(saveDir, './model_saver_validation'.format(modelIndex)), write_meta_graph = True)
Below is the code to load, update inputs, perform inference, and freeze the model.
# Dummy data for inference
b = np.zeros((1, 80, 160, 1), np.float32)
ill = np.ones((1, 3), np.float32)
is_train = False

def freeze():
    with tf.Graph().as_default():
        with tf.Session() as sess:
            bIn = tf.placeholder(dtype=tf.float32, shape=[1, 80, 160, 1], name='bIn')
            illumIn = tf.placeholder(dtype=tf.float32, shape=[1, 3], name='illumIn')
            training = tf.placeholder(tf.bool, shape=(), name = 'training')
            # Load the model metagraph and checkpoint
            meta_file = meta_graph  # .meta file from saver.save()
            ckpt_file = checkpoint_file  # checkpoint file
            # Load graph to redirect inputs from iterator to expected inputs
            saver = tf.train.import_meta_graph(meta_file, input_map={
                'IteratorGetNext:0': bIn,
                'IteratorGetNext:3': illumIn,
                'training:0': training}, clear_devices = True)
            tf.get_default_session().run(tf.global_variables_initializer())
            tf.get_default_session().run(tf.local_variables_initializer())
            saver.restore(tf.get_default_session(), ckpt_file)
            pred = tf.get_default_graph().get_tensor_by_name('Out:0')
            tf.get_default_session().run(pred, feed_dict={'bIn:0': b, 'poseIn:0': po, 'training:0': is_train})
            # Retrieve the protobuf graph definition and fix the batch norm nodes
            input_graph_def = sess.graph.as_graph_def()
            # Freeze the graph def
            output_graph_def = freeze_graph_def(
                sess, input_graph_def, output_node_names)
            # Serialize and dump the output graph to the filesystem
            with tf.gfile.GFile(frozen_graph, 'wb') as f:
                f.write(output_graph_def.SerializeToString())

freeze()
Below is the code to convert to CoreML.
tfcoreml.convert(
    tf_model_path=frozen_graph,
    mlmodel_path='./coreml_model.mlmodel',
    output_feature_names=['Out:0'],
    input_name_shape_dict={
        'bIn:0': [1, 80, 160, 1],
        'illumIn:0': [1, 3],
        'training:0': []})
Below is the error thrown by tf-coreml.
Loading the TF graph...
Graph Loaded.
Collecting all the 'Const' ops from the graph, by running it....
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
return fn(*args)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Retval[26] does not have value
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "tf2opencv.py", line 392, in <module>
'illumIn:0': [1, 3], 'poseIn:0': [1, 16], 'training:0': []})
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tfcoreml/_tf_coreml_converter.py", line 586, in convert
custom_conversion_functions=custom_conversion_functions)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tfcoreml/_tf_coreml_converter.py", line 243, in _convert_pb_to_mlmodel
tensors_evaluated = sess.run(tensors, feed_dict=input_feed_dict)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 950, in run
run_metadata_ptr)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1173, in _run
feed_dict_tensor, options, run_metadata)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
run_metadata)
File "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Retval[26] does not have value

Sklearn method in class

I would like to create a class that uses sklearn transformation methods. I found this article and I am using it as an example.
import numpy as np
import pandas as pd
from sklearn import preprocessing
from sklearn.base import TransformerMixin

def minmax(dataframe):
    minmax_transformer = preprocessing.MinMaxScaler()
    return minmax_transformer

class FunctionFeaturizer(TransformerMixin):
    def __init__(self, scaler):
        self.scaler = scaler

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        fv = self.scaler(X)
        return fv

if __name__ == "__main__":
    scaling = FunctionFeaturizer(minmax)
    df = pd.DataFrame({'feature': np.arange(10)})
    df_scaled = scaling.fit(df).transform(df)
    print(df_scaled)
The output is StandardScaler(copy=True, with_mean=True, with_std=True) which is actually the result of the preprocessing.StandardScaler().fit(df) if I use it out of the class.
What I am expecting is:
array([[0. ],
[0.11111111],
[0.22222222],
[0.33333333],
[0.44444444],
[0.55555556],
[0.66666667],
[0.77777778],
[0.88888889],
[1. ]])
I feel like I am mixing up a few things here, but I do not know what.
Update
I did some modifications:
def minmax():
    return preprocessing.MinMaxScaler()

class FunctionFeaturizer(TransformerMixin):
    def __init__(self, scaler):
        self.scaler = scaler

    def fit(self, X, y=None):
        return self

    def fit_transform(self, X):
        self.scaler.fit(X)
        return self.scaler.transform(X)

if __name__ == "__main__":
    scaling = FunctionFeaturizer(minmax)
    df = pd.DataFrame({'feature': np.arange(10)})
    df_scaled = scaling.fit_transform(df)
    print(df_scaled)
But now I am receiving the following error:
Traceback (most recent call last):
File "C:/my_file.py", line 33, in <module>
test_scale = scaling.fit_transform(df)
File "C:/my_file.py", line 26, in fit_transform
self.scaler.fit(X)
AttributeError: 'function' object has no attribute 'fit'
Solving your error
In your code you have:
if __name__ == "__main__":
    scaling = FunctionFeaturizer(minmax)
    df = pd.DataFrame({'feature': np.arange(10)})
    df_scaled = scaling.fit_transform(df)
    print(df_scaled)
change the line
scaling = FunctionFeaturizer(minmax)
to
scaling = FunctionFeaturizer(minmax())
You need to call the function so that it returns the MinMaxScaler instance to you.
Suggestion
Instead of implementing fit and fit_transform, implement fit and transform, unless you can optimize both processes into fit_transform. This way, it is clearer what you are doing.
If you implement only fit and transform, you can still call fit_transform because you extend the TransformerMixin class. It will just call both functions in a row.
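A minimal sketch of that suggestion, keeping the names from the question (assumed, not taken verbatim from the post):
from sklearn import preprocessing
from sklearn.base import TransformerMixin

class FunctionFeaturizer(TransformerMixin):
    def __init__(self, scaler):
        self.scaler = scaler              # e.g. preprocessing.MinMaxScaler()

    def fit(self, X, y=None):
        self.scaler.fit(X)                # learn min/max from the data
        return self

    def transform(self, X):
        return self.scaler.transform(X)

# fit_transform comes for free from TransformerMixin: it calls fit, then transform.
scaling = FunctionFeaturizer(preprocessing.MinMaxScaler())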
Getting your expected results
Your transformer is looking at every column of your dataset and distributing the values linearly between 0 and 1.
So, whether you get your expected results really depends on what your df looks like. However, you did not share that with us, so it is difficult to tell.
That said, if you have df = [[0],[1],[2],[3],[4],[5],[6],[7],[8],[9]], you will see your expected result.
if __name__ == "__main__":
    scaling = FunctionFeaturizer(minmax())
    df = [[0], [1], [2], [3], [4], [5], [6], [7], [8], [9]]
    df_scaled = scaling.fit_transform(df)
    print(df_scaled)
> [[0. ]
> [0.11111111]
> [0.22222222]
> [0.33333333]
> [0.44444444]
> [0.55555556]
> [0.66666667]
> [0.77777778]
> [0.88888889]
> [1. ]]

Building my own tf.Estimator, how did model_params overwrite model_dir? RuntimeWarning?

Recently I built a customized deep neural net model using TFLearn, which claims to bring deep learning to the scikit-learn estimator API. I could train models and make predictions, but I couldn't get the scoring (evaluate) function to work, so I couldn't do cross-validation. I tried to ask questions about TFLearn in various places, but I got no responses.
It appears that TensorFlow itself has an estimator class. So I am putting TFLearn aside, and I'm trying to follow the guide at https://www.tensorflow.org/extend/estimators. Somehow I'm managing to get variables where they don't belong. Can anyone spot my problem? I will post code and the output.
Note: Of course, I can see the RuntimeWarning at the top of the output. I have found references to this warning online, but so far everyone claims it's harmless. Maybe it is not...
CODE:
import tensorflow as tf
from my_library import Database, l2_angle_distance

def my_model_function(topology, params):
    # This function will eventually be a function factory. This should
    # allow easy exploration of hyperparameters. For now, this just
    # returns a single, fixed model_fn.
    def model_fn(features, labels, mode):
        # Input layer
        net = tf.layers.conv1d(features["x"], topology[0], 3, activation=tf.nn.relu)
        net = tf.layers.dropout(net, 0.25)
        # The core of the network is here (convolutional layers only for now).
        for nodes in topology[1:]:
            net = tf.layers.conv1d(net, nodes, 3, activation=tf.nn.relu)
            net = tf.layers.dropout(net, 0.25)
        sh = tf.shape(features["x"])
        net = tf.reshape(net, [sh[0], sh[1], 3, 2])
        predictions = tf.nn.l2_normalize(net, dim=3)
        # PREDICT EstimatorSpec
        if mode == tf.estimator.ModeKeys.PREDICT:
            return tf.estimator.EstimatorSpec(mode=mode,
                                              predictions={"vectors": predictions})
        # TRAIN or EVAL EstimatorSpec
        loss = l2_angle_distance(labels, predictions)
        optimizer = tf.train.GradientDescentOptimizer(learning_rate=params["learning_rate"])
        train_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, predictions, loss, train_op)
    return model_fn

##===================================================================
window = "whole"
encoding = "one_hot"
db = Database("/home/bwllc/Documents/Files for ML/compact")
traindb, testdb = db.train_test_split()
train_features, train_labels = traindb.values(window, encoding)
test_features, test_labels = testdb.values(window, encoding)

# Create the model.
tf.logging.set_verbosity(tf.logging.INFO)
LEARNING_RATE = 0.01
topology = (60, 40, 20)
model_params = {"learning_rate": LEARNING_RATE}
model_fn = my_model_function(topology, model_params)
model = tf.estimator.Estimator(model_fn, model_params)
print("\nmodel_dir? No? Why not? ", model.model_dir, "\n")  # This documents the error

# Input function.
my_input_fn = tf.estimator.inputs.numpy_input_fn({"x": train_features}, train_labels, shuffle=True)

# Train the model.
model.train(input_fn=my_input_fn, steps=20)
OUTPUT
/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_model_dir': {'learning_rate': 0.01}, '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f0b55279048>, '_task_type': 'worker', '_task_id': 0, '_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
model_dir? No? Why not? {'learning_rate': 0.01}
INFO:tensorflow:Create CheckpointSaverHook.
Traceback (most recent call last):
File "minimal_estimator_bug_example.py", line 81, in <module>
model.train(input_fn=my_input_fn, steps=20)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/estimator/estimator.py", line 302, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/estimator/estimator.py", line 756, in _train_model
scaffold=estimator_spec.scaffold)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/basic_session_run_hooks.py", line 411, in __init__
self._save_path = os.path.join(checkpoint_dir, checkpoint_basename)
File "/usr/lib/python3.6/posixpath.py", line 78, in join
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not dict
------------------
(program exited with code: 1)
Press return to continue
I can see exactly what went wrong: model_dir (which I left as the default) somehow got bound to the value I intended for model_params. How did this happen in my code? I can't see it.
If anyone has advice or suggestions, I would greatly appreciate them. Thanks!
Simply because you're passing your model_params as the model_dir when you construct your Estimator.
From the TensorFlow documentation:
Estimator __init__ function:
__init__(
    model_fn,
    model_dir=None,
    config=None,
    params=None
)
Notice how the second positional argument is model_dir. If you want to specify only params, you need to pass it as a keyword argument.
model = tf.estimator.Estimator(model_fn, params=model_params)
Or specify all the previous positional arguments:
model = tf.estimator.Estimator(model_fn, None, None, model_params)
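As a quick check (my own sketch, not part of the answer): with params passed by keyword, model_dir falls back to an auto-generated temporary directory and the params dict ends up where it belongs.
model = tf.estimator.Estimator(model_fn, params=model_params)
print(model.model_dir)   # an auto-generated temp directory, not {'learning_rate': 0.01}
print(model.params)      # {'learning_rate': 0.01}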
