I just read an interesting paper: A continuum among logarithmic, linear, and exponential functions, and its potential to improve generalization in neural networks.
I'd like to try to implement this activation function in Keras. I've implemented custom activations before, e.g. a sinusoidal activation:
from keras import backend as K
from keras.layers import Activation
from keras.utils.generic_utils import get_custom_objects

def sin(x):
    return K.sin(x)

get_custom_objects().update({'sin': Activation(sin)})
However, the activation function in this paper has 3 unique properties:
It doubles the size of the input (the output is 2x the input)
It's parameterized
Its parameters should be regularized
I think once I have a skeleton for dealing with the above 3 issues, I can work out the math myself, but I'll take any help I can get!
Here, we will need one of these two:
A Lambda layer - If your parameters are not trainable (you don't want them to change with backpropagation)
A custom layer - If you need custom trainable parameters.
The Lambda layer:
If your parameters are not trainable, you can define your function for a lambda layer. The function takes one input tensor, and it can return anything you want:
import keras.backend as K

def customFunction(x):
    #x can be either a single tensor or a list of tensors
    #if it's a list, use the elements x[0], x[1], etc.

    #Perform your calculations here using the Keras backend.
    #If you could share which formula exactly you're trying to implement,
    #it's possible to make this answer better and more to the point.

    #dummy example
    alphaReal = K.variable([someValue])
    alphaImag = K.variable([anotherValue]) #or even an array of values

    realPart = alphaReal * K.someFunction(x) + ...
    imagPart = alphaImag * K.someFunction(x) + ...

    #You can return them as two outputs in a list (requires the functional API model),
    #or you can find backend functions that join them together, such as K.stack
    return [realPart, imagPart]
#I think the separate approach will give you better control of what to do next.
For what you can do, explore the backend functions.
For the parameters, you can define them as Keras constants or variables (K.constant or K.variable), either inside or outside the function above, or even turn them into model inputs. See details in this answer.
In your model, you just add a lambda layer that uses that function.
In a Sequential model: model.add(Lambda(customFunction, output_shape=someShape))
In a functional API model: output = Lambda(customFunction, ...)(inputOrListOfInputs)
If you're going to pass more inputs to the function, you'll need the functional model API.
If you're using TensorFlow, the output_shape will be computed automatically; I believe only Theano requires it (not sure about CNTK).
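To make the output-doubling requirement concrete, here is a minimal hedged sketch of the Lambda approach. The two branch formulas below are placeholders standing in for the paper's actual activation, and the input shape is arbitrary:

import numpy as np
import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model

def doubling_activation(x):
    # placeholder transforms, NOT the paper's formula:
    branch_a = K.log(1.0 + K.exp(x))   # a log-flavored branch
    branch_b = K.exp(x) - 1.0          # an exp-flavored branch
    # concatenating the two branches doubles the last dimension
    return K.concatenate([branch_a, branch_b], axis=-1)

inp = Input(shape=(10,))
out = Lambda(doubling_activation)(inp)
model = Model(inp, out)
print(model.output_shape)                      # (None, 20)
print(model.predict(np.zeros((1, 10))).shape)  # (1, 20)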
The custom layer:
A custom layer is a new class you create. This approach will only be necessary if you're going to have trainable parameters in your function. (Such as: optimize alpha with backpropagation)
Keras teaches it here.
Basically, you have an __init__ method where you pass the constant parameters, a build method where you create the trainable parameters (weights), a call method that will do the calculations (exactly what would go in the lambda layer if you didn't have trainable parameters), and a compute_output_shape method so you can tell the model what the output shape is.
class CustomLayer(Layer):

    def __init__(self, alphaReal, alphaImag, **kwargs):
        self.alphaReal = alphaReal
        self.alphaImag = alphaImag
        super(CustomLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        #weights may or may not depend on the input shape
        #you may use it or not...

        #suppose we want just two trainable values:
        weightShape = (2,)

        #create the weights:
        self.kernel = self.add_weight(name='kernel',
                                      shape=weightShape,
                                      initializer='uniform',
                                      trainable=True)
        super(CustomLayer, self).build(input_shape)  # Be sure to call this somewhere!

    def call(self, x):
        #all the calculations go here:

        #dummy example using the constant inputs
        realPart = self.alphaReal * K.someFunction(x) + ...
        imagPart = self.alphaImag * K.someFunction(x) + ...

        #dummy example taking elements of the trainable weights
        realPart = self.kernel[0] * realPart
        imagPart = self.kernel[1] * imagPart

        #all the comments for the lambda layer above are valid here
        #example returning a list
        return [realPart, imagPart]

    def compute_output_shape(self, input_shape):
        #if you decide to return a list of tensors in the call method,
        #return a list of shapes here, twice the input shape:
        return [input_shape, input_shape]

        #if you stacked your results into a single tensor instead, return a single
        #tuple, maybe with an additional dimension equal to 2:
        #return input_shape + (2,)
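The custom-layer route also covers the third requirement (regularized parameters), since add_weight accepts a regularizer. Below is a hedged, self-contained sketch under the same assumptions as above: the branch formulas are placeholders, not the paper's activation, and depending on your Keras version the Layer import may live in keras.engine.topology instead:

import keras.backend as K
from keras import regularizers
from keras.layers import Layer

class DoublingActivation(Layer):
    #toy parameterized activation that doubles the last dimension

    def __init__(self, l2_strength=1e-4, **kwargs):
        self.l2_strength = l2_strength
        super(DoublingActivation, self).__init__(**kwargs)

    def build(self, input_shape):
        #one trainable, L2-regularized parameter per branch
        self.alpha = self.add_weight(name='alpha',
                                     shape=(2,),
                                     initializer='ones',
                                     regularizer=regularizers.l2(self.l2_strength),
                                     trainable=True)
        super(DoublingActivation, self).build(input_shape)

    def call(self, x):
        #placeholder transforms, NOT the paper's formula:
        branch_a = self.alpha[0] * K.log(1.0 + K.exp(x))
        branch_b = self.alpha[1] * (K.exp(x) - 1.0)
        return K.concatenate([branch_a, branch_b], axis=-1)

    def compute_output_shape(self, input_shape):
        return input_shape[:-1] + (input_shape[-1] * 2,)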
You need to implement a "Layer", not a common activation function.
I think the implementation of PReLU in Keras would be a good example for your task. See PReLU.
A lambda function in the activation worked for me. Maybe not what you want but it's one step more complicated than the simple use of a built-in activation function.
encoder_outputs = Dense(units=latent_vector_len, activation=k.layers.Lambda(lambda z: k.backend.round(k.layers.activations.sigmoid(x=z))), kernel_initializer="lecun_normal")(x)
This code changes the output of a Dense from Reals to 0,1 (ie, binary).
Keras throws a warning, but the code does work.
In scikit-learn , there is the notion of a scoring function. If we have some predicted labels and the true labels, we can get to the score by calling scoring(y_true, y_predict). An example of such scoring function is sklearn.metrics.accuracy_score.
A scoring function is not to be confused with a scorer, which is an object that can be called as scorer(estimator, X, y_true).
There are many built-in scorers in scikit-learn, and it is possible to get these scorers by their string names. For example, we can get the scorer corresponding to the name 'accuracy' by calling sklearn.metrics.get_scorer("accuracy").
But it turns out that there is no obvious mechanism to access the built-in scoring functions by their names at run time by passing in the name as a string. For example, there is no way to access sklearn.metrics.accuracy_score by its name 'accuracy'.
In other words, if at run time the program has the name of the scoring function in a variable name, I am looking for a mechanism get_scoring_function() such that get_scoring_function(name) returns the scoring function handle. Note that this name is not known at scripting time.
Is there any way to access the built-in scoring functions by their names at run time through passing in the names as strings?
You can use the get_scorer() function, which accepts a string as an argument, and then get the _score_func attribute of the returned object.
So for example
from sklearn.metrics import get_scorer
get_scorer('accuracy')._score_func(y_true, y_pred)
is equivalent to
from sklearn.metrics import accuracy_score
accuracy_score(y_true, y_pred)
I myself faced this task, and I haven't found a better way to access metrics by name than the sklearn.metrics.get_scorer function, but its drawback is that you have to pass an estimator to it, not predictions. I tried to use @collinb9's recommendation, but you see, you have to access a protected method there, and in my case it led to unpleasant consequences, namely incorrectly calculated metrics.
This is a short example showing this problem.
from sklearn import datasets, model_selection, linear_model, metrics
features, labels = datasets.make_regression(1000, random_state=123)
train_features, test_features, train_labels, test_labels = model_selection.train_test_split(features, labels, test_size=0.1, random_state=567)
model = linear_model.LinearRegression()
model.fit(train_features, train_labels)
print(f'variant 1 neg_mse = {metrics.get_scorer("neg_mean_squared_error")(model, test_features, test_labels)}')
print(f'variant 1 neg_rmse = {metrics.get_scorer("neg_root_mean_squared_error")(model, test_features, test_labels)}\n')
preds = model.predict(test_features)
print(f'variant 2 mse = {metrics.mean_squared_error(test_labels, preds)}')
print(f'variant 2 rmse = {metrics.mean_squared_error(test_labels, preds, squared=False)}\n')
print(f'protected neg_mse = {metrics.get_scorer("neg_mean_squared_error")._score_func(test_labels, preds)}')
print(f'protected neg_rmse = {metrics.get_scorer("neg_root_mean_squared_error")._score_func(test_labels, preds)}')
The output of this program will be:
variant 1 neg_mse = -2.142587870436064e-25
variant 1 neg_rmse = -4.628809642268803e-13
variant 2 mse = 2.142587870436064e-25
variant 2 rmse = 4.628809642268803e-13
protected neg_mse = 2.142587870436064e-25
protected neg_rmse = 2.142587870436064e-25
You see, the metrics calculated via the protected method differ. First, we asked for negative values but got positive ones (granted, the variant 2 metrics were never meant to be negative). Second, the neg_mse and neg_rmse values are equal, but they should be different.
If we go to the source code of sklearn metrics, we will see:
This is how _score_func is called: it is multiplied by sign, so that's where we lose our negative values.
This is how scorers are made: you see, neg_root_mean_squared_error_scorer carries the extra parameter squared=False. This parameter is an explicitly documented optional argument of metrics.mean_squared_error, so passing it yourself is safe. We can pass it as a keyword argument to _score_func, and at least we will get the correct absolute value then:
print(f'protected neg_rmse = {metrics.get_scorer("neg_root_mean_squared_error")._score_func(test_labels, preds, squared=False)}')
protected neg_rmse = 4.628809642268803e-13
To make things short, I've shown, to my knowledge, the only way to get sklearn metrics by name (by the way, you can find the full list of names here), and that it's not safe to use protected methods that you're not supposed to use. For reference, I was using sklearn version 0.24.2.
Since the documentation is incomplete, you'll have to go directly to the source code here for the complete list of metric names:
Metric Names
Search for __all__.
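If you only need the list of valid scorer names programmatically rather than from the source, a hedged alternative (the SCORERS dict exists in the 0.24-era releases discussed above; get_scorer_names() was added in later versions):

import sklearn.metrics

# scikit-learn 0.24.x exposes the scorer registry as a dict:
print(sorted(sklearn.metrics.SCORERS.keys()))

# newer releases (1.0+) provide a dedicated helper instead:
# print(sklearn.metrics.get_scorer_names())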
The answer by @collinb9 should not be accepted as-is, because it can lead to incorrect calculations.
You need the other arguments (such as squared=False for RMSE) to compute the correct value. They can be accessed via the _kwargs attribute of the _BaseScorer class. If you combine _score_func, _kwargs, and _sign, you can reproduce the corresponding scoring function.
The full answer to the question should be:
import sklearn.metrics

def score(scoring_name, y_true, y_pred):
    sklearn_scorer = sklearn.metrics.get_scorer(scoring_name)
    return sklearn_scorer._sign * sklearn_scorer._score_func(
        y_true=y_true, y_pred=y_pred, **sklearn_scorer._kwargs
    )
score("neg_root_mean_squared_error", y_true, y_pred)
I would like to know if Keras can be used as an interface to TensorFlow for doing only computation on my GPU.
I tested TF directly on my GPU. But for ML purposes, I started using Keras, including the backend. I would find it 'comfortable' to do all my stuff in Keras instead of using two tools.
This is also a matter of curiosity.
I found some examples like this one:
http://christopher5106.github.io/deep/learning/2018/10/28/understand-batch-matrix-multiplication.html
However this example does not actually do the calculation.
It also does not get input data.
I duplicate the snippet here:
from keras import backend as K

a = K.ones((3,4))
b = K.ones((4,5))
c = K.dot(a, b)
print(c.shape)
I would simply like to know if I can get the result numbers from this snippet above, and how?
Thanks,
Michel
Keras doesn't have an eager mode like TensorFlow; it depends on models or functions with "placeholders" to receive and output data.
So it's a little more complicated than TensorFlow for basic calculations like this.
The most user-friendly solution is to create a dummy model with a single Lambda layer. (And be careful with the first dimension, which Keras will insist on treating as a batch dimension and will require the input and output to share.)
def your_function_here(inputs):
    #if you have more than one tensor for the inputs, it's a list:
    input1, input2, input3 = inputs

    #if you don't have a batch, you should probably keep a first dimension of 1
    #and slice it off here:
    input1 = input1[0]

    #do your calculations here
    #...

    #if you used the batch_size=1 workaround as above, add this dimension back:
    output = K.expand_dims(output, 0)
    return output
Create your model:
inputs = Input(input_shape)
#maybe inputs2 ....
outputs = Lambda(your_function_here)(list_of_inputs)
#maybe outputs2
model = Model(inputs, outputs)
And use it to predict the result:
print(model.predict(input_data))
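Putting this together for the exact snippet in the question, a hedged sketch (the batch-of-1 trick is just one way to satisfy Keras's batch dimension; the shapes match the 3x4 by 4x5 product):

import numpy as np
from keras import backend as K
from keras.layers import Input, Lambda
from keras.models import Model

def matmul(inputs):
    a, b = inputs               # each arrives with shape (1, rows, cols): batch of 1
    c = K.dot(a[0], b[0])       # (3,4) . (4,5) -> (3,5)
    return K.expand_dims(c, 0)  # add the batch dimension back

inA = Input(shape=(3, 4))
inB = Input(shape=(4, 5))
out = Lambda(matmul)([inA, inB])
model = Model([inA, inB], out)

# feed actual numbers (ones, like the K.ones example) and read the result:
result = model.predict([np.ones((1, 3, 4)), np.ones((1, 4, 5))])
print(result.shape)  # (1, 3, 5)
print(result[0])     # every entry is 4.0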
I am a Keras rookie and I need some help after many days struggling with this problem. Please ask for further information if there is any ambiguity.
Currently, I am trying to modify the code from a link. According to their network model, there are 2 input tensors expected. Now I have trouble including 2 input tensors in the source code provided by them.
Function Boneage_prediction_model() initiates a model of 2 input tensors.
def Boneage_prediction_model():
    i1 = Input(shape=(500, 500, 1), name='input_img')  # the 1st input tensor
    i2 = Input(shape=(1,), name='input_gender')        # the 2nd input tensor
    ... ...
    model = Model(inputs=(i1, i2), outputs=o)  # define model with both i1 and i2 as inputs
    ... ...

# using model.fit_generator to train the model
# datagen is created by keras.preprocessing.image.ImageDataGenerator
# img_train is the 1st network input, and boneage_train is the training label
# gender_train is the 2nd network input
model.fit_generator(
    (datagen.flow(img_train, boneage_train, batch_size=10),
     gender_train),
    ... ...
)
I tried many ways to combine the two (datagen.flow(img_train, boneage_train, batch_size=10) and gender_train) as stated above, but it failed and kept reporting errors such as the following:
ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[[[-0.26078433],
[-0.26078433],
[-0.26078433],
...,
[-0.26078433],
[-0.26078433],
[-0.26078433]],
[[-0.26078433],
[-0.26...
If I understand you correctly, you want to have two inputs for one network and have one label for the combined output. In the official documentation for the fit_generator there is an example with multiple inputs.
Using a dictionary to map the multiple inputs would result in:
model.fit_generator(
    datagen.flow({'input_img': img_train, 'input_gender': gender_train}, boneage_train, batch_size=10),
    ...
)
After failing both with blindly combining the 2 inputs and with the dictionary mapping suggested by another contributor, I realized the problem seems to be datagen.flow, which keeps me from combining an image tensor input and a categorical tensor input. datagen.flow is created by keras.preprocessing.image.ImageDataGenerator with the goal of preprocessing the input images, so it is probably inappropriate to combine the 2 inputs inside datagen.flow. Additionally, fit_generator expects an input of generator type, so what I proposed in my question is wrong, though I do not fully understand the mechanism of this function.
As I looked carefully through other code written by the team, I learned that I needed to write a generator to combine the two. The solution is as follows:
from itertools import cycle

def combined_generators(image_generator, gender_data, batch_size):
    gender_generator = cycle(batch(gender_data, batch_size))
    while True:
        nextImage = next(image_generator)
        nextGender = next(gender_generator)
        assert len(nextImage[0]) == len(nextGender)
        yield [nextImage[0], nextGender], nextImage[1]

def batch(iterable, n=1):
    l = len(iterable)
    for ndx in range(0, l, n):
        yield iterable[ndx:min(ndx + n, l)]

train_gen_wrapper = combined_generators(train_gen_boneage, train_df_boneage['male'], BATCH_SIZE_TRAIN)
model.fit_generator(train_gen_wrapper, ... )
I cannot find a way to set the initial weights of the neural network, could someone tell me how please?
I am using python package sklearn.neural_network.MLPClassifier.
Here is the code for reference:
from sklearn.neural_network import MLPClassifier
classifier = MLPClassifier(solver="sgd")
classifier.fit(X_train, y_train)
Solution:
A working solution is to inherit from MLPClassifier and override the _init_coef method. In the _init_coef write the code to set the initial weights.
Then use the new class "MLPClassifierOverride" as in the example below instead of "MLPClassifier"
# new class
class MLPClassifierOverride(MLPClassifier):

    # Overriding the _init_coef method
    def _init_coef(self, fan_in, fan_out):
        if self.activation == 'logistic':
            init_bound = np.sqrt(2. / (fan_in + fan_out))
        elif self.activation in ('identity', 'tanh', 'relu'):
            init_bound = np.sqrt(6. / (fan_in + fan_out))
        else:
            raise ValueError("Unknown activation function %s" % self.activation)

        coef_init = ...       # place your initial values for coef_init here
        intercept_init = ...  # place your initial values for intercept_init here

        return coef_init, intercept_init
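A hedged usage sketch, filling the placeholders with uniform random draws just to show the expected shapes (fan_in x fan_out for the weights, fan_out for the biases); note that newer scikit-learn versions pass an extra dtype argument to _init_coef, so check the signature in your installed version:

import numpy as np
from sklearn.neural_network import MLPClassifier

class MLPClassifierOverride(MLPClassifier):

    def _init_coef(self, fan_in, fan_out):
        # example values only: replace with whatever initial weights you want
        coef_init = np.random.uniform(-0.5, 0.5, (fan_in, fan_out))
        intercept_init = np.zeros(fan_out)
        return coef_init, intercept_init

classifier = MLPClassifierOverride(solver="sgd")
classifier.fit(X_train, y_train)  # X_train / y_train as in the original question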
The docs show you the attributes in use.
Attributes:
...
coefs_ : list, length n_layers - 1
    The ith element in the list represents the weight matrix corresponding to layer i.
intercepts_ : list, length n_layers - 1
    The ith element in the list represents the bias vector corresponding to layer i + 1.
Just build your classifier clf=MLPClassifier(solver="sgd") and set coefs_ and intercepts_ before calling clf.fit().
The only remaining question is: does sklearn overwrite your inits?
The code looks like:
if not hasattr(self, 'coefs_') or (not self.warm_start and not
                                   incremental):
    # First time training the model
    self._initialize(y, layer_units)
This looks to me like it won't replace your given coefs_, provided warm_start=True so the second condition is also False (you might check the biases too).
The packing and unpacking functions further indicate that this should be possible. These are probably used for serialization through pickle internally.
multilayer_perceptron.py initializes the weights based on the nonlinear function used for hidden layers. If you want to try a different initialization, you can take a look at the function _init_coef here and modify as you desire.
Assume I have a model like this. M1 and M2 are two layers linking left and right sides of the model.
The example model: Red lines indicate backprop directions
During training, I hope M1 can learn a mapping from L2_left activation to L2_right activation. Similarly, M2 can learn a mapping from L3_right activation to L3_left activation.
The model also needs to learn the relationship between two inputs and the output.
Therefore, I should have three loss functions for M1, M2, and L3_left respectively.
I probably can use:
model.compile(optimizer='rmsprop',
              loss={'M1': 'mean_squared_error',
                    'M2': 'mean_squared_error',
                    'L3_left': 'mean_squared_error'})
But during training, we need to provide y_true, for example:
model.fit([input_1,input_2], y_true)
In this case, the y_true is the hidden layer activations and not from a dataset.
Is it possible to build this model and train it using its hidden layer activations?
If you have only one output, you must have only one loss function.
If you want three loss functions, you must have three outputs, and, of course, three Y vectors for training.
If you want loss functions in the middle of the model, you must take outputs from those layers.
Creating the graph of your model: (if the model is already defined, see the end of this answer)
#Here, all "SomeLayer(blabla)" could be replaced by a "SomeModel" if necessary
#Example of using a layer or a model:
#M1 = SomeLayer(blablabla)(L12)
#M1 = SomeModel(L12)
from keras.models import Model
from keras.layers import *
inLef = Input((shape1))
inRig = Input((shape2))
L1Lef = SomeLayer(blabla)(inLef)
L2Lef = SomeLayer(blabla)(L1Lef)
M1 = SomeLayer(blablaa)(L2Lef) #this is an output
L1Rig = SomeLayer(balbla)(inRig)
conc2Rig = Concatenate(axis=?)([L1Rig,M1]) #Or Add, or Multiply, however you're joining the models
L2Rig = SomeLayer(nlanlab)(conc2Rig)
L3Rig = SomeLayer(najaljd)(L2Rig)
M2 = SomeLayer(babkaa)(L3Rig) #this is an output
conc3Lef = Concatenate(axis=?)([L2Lef,M2])
L3Lef = SomeLayer(blabla)(conc3Lef) #this is an output
Creating your model with three outputs:
Now you've got your graph ready and you know what the outputs are, you create the model:
model = Model([inLef,inRig], [M1,M2,L3Lef])
model.compile(loss='mse', optimizer='rmsprop')
If you want different losses for each output, then you create a list:
#example of custom loss function, if necessary
def lossM1(yTrue, yPred):
    return keras.backend.sum(keras.backend.abs(yTrue - yPred))
#compiling with three different loss functions
model.compile(loss = [lossM1, 'mse','binary_crossentropy'], optimizer =??)
But you've got to have three different yTraining too, for training with:
model.fit([input_1,input_2], [yTrainM1,yTrainM2,y_true], ....)
If your model is already defined and you didn't create its graph like I did:
Then, you have to find in yourModel.layers[i] which ones are M1 and M2, so you create a new model like this:
M1 = yourModel.layers[indexForM1].output
M2 = yourModel.layers[indexForM2].output
newModel = Model([inLef,inRig], [M1,M2,yourModel.output])
If you want that two outputs be equal:
In this case, just subtract the two outputs in a lambda layer, and make that lambda layer be an output of your model, with expected values = 0.
Using the exact same vars as before, we'll just create two additional layers to subtract the outputs:
diffM1L1Rig = Lambda(lambda x: x[0] - x[1])([L1Rig,M1])
diffM2L2Lef = Lambda(lambda x: x[0] - x[1])([L2Lef,M2])
Now your model should be:
newModel = Model([inLef,inRig],[diffM1L1Rig,diffM2L2Lef,L3Lef])
And training will expect those two differences to be zero:
yM1 = np.zeros((shapeOfM1Output))
yM2 = np.zeros((shapeOfM2Output))
newModel.fit([input_1,input_2], [yM1,yM2,y_true], ...)
Trying to answer to the last part: how to make gradients only affect one side of the model.
...well.... at first that sounds unfeasible to me. But, if that is similar to "train only a part of the model", then it's totally ok by defining models that only go to a certain point and making part of the layers untrainable.
By doing that, nothing will affect those layers. If that's what you want, then you can do it:
#using the previous vars to define other models
modelM1 = Model([inLef,inRig],diffM1L1Rig)
This model above ends in diffM1L1Rig. Before compiling, you must set L2Right untrainable:
modelM1.layers[??].trainable = False
#to find which layer is the right one, you may name your layers with the "name" parameter, or check modelM1.summary() for the shapes, types, etc.
modelM1.compile(.....)
modelM1.fit([input_1, input_2], yM1)
This suggestion makes you train only a single part of the model. You can repeat the procedure for M2, locking the layers you need before compiling.
You can also define a full model taking all layers, and lock only the ones you want. But you won't be able (I think) to make half the gradients pass through one side and half through the other.
So I suggest you keep three models, the fullModel, the modelM1, and the modelM2, and you cycle them in training. One epoch each, maybe....
That should be tested....
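A hedged sketch of that alternating schedule, assuming the three models are built from the same layer objects so weight updates carry over between them; the epoch counts and data names are placeholders:

# alternate training between the partial models and the full model;
# since modelM1, modelM2 and fullModel share layers, whatever one of them
# learns is immediately visible to the others
for cycle_index in range(n_cycles):   # n_cycles is a placeholder
    modelM1.fit([input_1, input_2], yM1, epochs=1)
    modelM2.fit([input_1, input_2], yM2, epochs=1)
    fullModel.fit([input_1, input_2], [yM1, yM2, y_true], epochs=1)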