AttributeError: 'Functional' object has no attribute '_jit_compile' - keras

I'm trying to run BERT using Keras, but I'm facing this problem. Can anyone help me? Many thanks.
AttributeError: in user code:
File "C:\Users\user.conda\envs\bertdio\lib\site-packages\keras\engine\training.py", line 1021, in train_function *
return step_function(self, iterator)
File "C:\Users\user.conda\envs\bertdio\lib\site-packages\keras\engine\training.py", line 1006, in step_function **
if self._jit_compile:
AttributeError: 'Functional' object has no attribute '_jit_compile'
Here is my code. The problem starts when it runs "outputs = old_train_function(inputs)".
def train_function(inputs):  # redefine the training function
    grads = embedding_gradients(inputs)[0]  # embedding gradients
    delta = epsilon * grads / (np.sqrt((grads**2).sum()) + 1e-8)  # compute the perturbation
    K.set_value(embeddings, K.eval(embeddings) + delta)  # inject the perturbation
    outputs = old_train_function(inputs)  # gradient descent
    K.set_value(embeddings, K.eval(embeddings) - delta)  # remove the perturbation
    return outputs

model.train_function = train_function  # override the original training function
I can't find any solution for the same problem.
I'm running Keras 2.8.0, but the original environment was Keras 2.3.0, so I think some of the code has changed.
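For reference, in Keras 2.8 the per-batch training logic is usually customized by overriding Model.train_step rather than by replacing model.train_function. Below is a minimal sketch of the same embedding-perturbation idea under that API; the layer name 'embedding', the epsilon value 0.5, and batches of plain (x, y) pairs are assumptions, not taken from the original code.

import tensorflow as tf

class AdversarialModel(tf.keras.Model):
    def train_step(self, data):
        x, y = data  # assumes (inputs, labels) batches
        emb_var = self.get_layer('embedding').embeddings  # embedding matrix variable (assumed layer name)
        # gradient of the loss w.r.t. the embedding matrix
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred)
        grads = tf.convert_to_tensor(tape.gradient(loss, emb_var))
        delta = 0.5 * grads / (tf.norm(grads) + 1e-8)  # perturbation, epsilon = 0.5 assumed
        emb_var.assign_add(delta)                      # inject the perturbation
        metrics = super().train_step(data)             # ordinary gradient-descent step
        emb_var.assign_sub(delta)                      # remove the perturbation
        return metrics

The model can then be built with the functional API as AdversarialModel(inputs, outputs) and compiled and fit as usual.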

Related

TypeError: 'str' object is not callable | FastAi

Goal: instantiate unet_learner() using weights.
weights is a str that I bring in from a user-defined .yaml file; hence eval().
file_path and training are classes that hold parameters.
Code:
import numpy as np
from fastai.vision.all import *

def train(dls, file_path, training):
    labels = np.loadtxt(file_path.labels, dtype=str)
    weights = torch.tensor(eval(training.weights))
    print('#################')
    print(weights)
    print(type(weights))
    print('#################')
    learner = unet_learner(dls, training.architecture,
                           loss_func=CrossEntropyLossFlat(axis=1, weight=weights))
    return learner.load(file_path.weights)
Placing torch.tensor() around weights again in the parameter line doesn't help. Same error.
Traceback:
(venv) me#ubuntu-pcs:~/PycharmProjects/project$ python pdl1_lung_train/main.py
/home/me/miniconda3/envs/venv/lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /opt/conda/conda-bld/pytorch_1607370156314/work/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
#################
tensor([0.4000, 0.9000])
<class 'torch.Tensor'>
#################
Traceback (most recent call last):
File "pdl1_lung_train/main.py", line 27, in <module>
main(ROOT)
File "pdl1_lung_train/main.py", line 19, in main
learner = train(dls, file_path, training)
File "/home/me/PycharmProjects/project/pdl1_lung_train/train.py", line 16, in train
weight=weights))
File "/home/me/miniconda3/envs/venv/lib/python3.7/site-packages/fastai/vision/learner.py", line 267, in unet_learner
model = create_unet_model(arch, n_out, img_size, pretrained=pretrained, **kwargs)
File "/home/me/miniconda3/envs/venv/lib/python3.7/site-packages/fastai/vision/learner.py", line 243, in create_unet_model
model = arch(pretrained)
TypeError: 'str' object is not callable
Please let me know if I need to add other info to the post.
I might be wrong, but I think your training.architecture is a string, whereas according to the unet_learner documentation it has to be a callable.
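For example, a hedged sketch (assuming training.architecture holds a torchvision model name such as "resnet34") that resolves the string to the callable before building the learner:

import torchvision.models as tvm
from fastai.vision.all import unet_learner, CrossEntropyLossFlat

arch = getattr(tvm, training.architecture)   # e.g. "resnet34" -> the resnet34 function
learner = unet_learner(dls, arch,
                       loss_func=CrossEntropyLossFlat(axis=1, weight=weights))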

ValueError: Dimensions must be equal, but are 100 and 19 with input shapes: [?,100], [?,100,19]

I have an error in my code. I've read the documentation, but it still fails. How can this error be fixed?
Code:
import tensorflow.keras.backend as K
import tensorflow_addons as tfa
from tensorflow_addons.layers import CRF
from keras_crf import CRFModel

def create_model():
    max_words = length_long_sentence
    MAX_SENTENCE_NUM = 100
    embedding_size = 100
    lstm_size = 128
    learn_rate = 0.01
    output_size = len(unique_tag_set)
    current_input = Input(shape=(MAX_SENTENCE_NUM, max_words,))
    emb_current = Embedding(vocab_size, embedding_size, weights=[embedding_matrix],
                            input_length=max_words, name='current_embed', trainable=False)(current_input)
    hidden_vectors = TimeDistributed(Bidirectional(LSTM(units=lstm_size, return_sequences=False)))(emb_current)
    hidden_vectors = Bidirectional(LSTM(units=lstm_size, return_sequences=True))(hidden_vectors)
    base = tf.keras.Model(inputs=current_input, outputs=hidden_vectors)
    model = CRFModel(base, 19)
    opt = tf.keras.optimizers.Adam(learning_rate=learn_rate)
    model.compile(optimizer=opt, metrics=['acc'])
    print(model.summary())
    return model

model_2 = create_model()
and here is the model summary:
Here is the code to fit the training data:
history_2 = model_2.fit(x_train_split, y_train_split,
                        epochs=1, batch_size=16,
                        shuffle=False, verbose=1,
                        validation_split=0.2,
                        sample_weight=sample_weights)
And I got this error:
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 878, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 867, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 860, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras_crf/crf_model.py", line 49, in train_step
crf_loss = -tfa.text.crf_log_likelihood(potentials, y, sequence_length, kernel)[0]
File "/usr/local/lib/python3.7/dist-packages/tensorflow_addons/text/crf.py", line 242, in crf_log_likelihood
inputs, tag_indices, sequence_lengths, transition_params
File "/usr/local/lib/python3.7/dist-packages/tensorflow_addons/text/crf.py", line 104, in crf_sequence_score
return tf.cond(tf.equal(tf.shape(inputs)[1], 1), _single_seq_fn, _multi_seq_fn)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_addons/text/crf.py", line 97, in _multi_seq_fn
unary_scores = crf_unary_score(tag_indices, sequence_lengths, inputs)
File "/usr/local/lib/python3.7/dist-packages/tensorflow_addons/text/crf.py", line 277, in crf_unary_score
flattened_tag_indices = tf.reshape(offsets + tag_indices, [-1])
ValueError: Dimensions must be equal, but are 100 and 19 for '{{node cond/add_1}} = AddV2[T=DT_INT32](cond/add, cond/add_1/Cast)' with input shapes: [?,100], [?,100,19].
Looking at the traceback, the failure is inside crf_log_likelihood: it expects y to be a matrix of integer tag indices with shape [batch, 100], but your y has shape [batch, 100, 19], i.e. the labels are already one-hot encoded over your 19 classes. Convert them back to integer class indices (argmax over the last axis) before fitting; see the sketch below.
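A hedged sketch of that conversion (it assumes y_train_split is a one-hot-encoded NumPy array):

import numpy as np

# crf_log_likelihood expects integer tag indices of shape (batch, 100)
if y_train_split.ndim == 3:                            # currently one-hot: (batch, 100, 19)
    y_train_split = np.argmax(y_train_split, axis=-1)  # -> (batch, 100)

history_2 = model_2.fit(x_train_split, y_train_split,
                        epochs=1, batch_size=16,
                        shuffle=False, verbose=1,
                        validation_split=0.2,
                        sample_weight=sample_weights)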

Invalid placeholder in tensorflow

I am trying to write a custom loss function as follows.
def vgg16_feature_model(flayers, weights='imagenet'):
    """
    Feature extraction VGG16 model.
    # Arguments
        flayers: list of strings with names of layers to get the features for.
            The length of `flayers` should be > 1, otherwise the output shape
            is one axis less.
        weights: either "imagenet" or path to the file with weights.
    # Returns
        features_model: keras.models.Model instance to extract the features.
    # Raises
        AssertionError: in case `flayers` is not a list.
        AssertionError: in case length of `flayers` < 2.
    """
    assert isinstance(flayers, list), "First argument 'flayers' must be a list"
    assert len(flayers) > 1, "Length of 'flayers' must be > 1."
    base_model = VGG16(include_top=False, weights=weights)
    vgg16_outputs = [base_model.get_layer(flayers[i]).output for i in range(len(flayers))]
    features_model = Model(inputs=[base_model.input], outputs=vgg16_outputs, name='vgg16_features')
    features_model.trainable = False
    features_model.compile(loss='mse', optimizer='adam')
    return features_model
# Losses:
# -------
def total_loss(mask, vgg16_weights='imagenet'):
    """
    Total loss defined in Eq 7 of Liu et al 2018 with:
        y_true = I_gt,
        y_pred = I_out,
        y_comp = I_comp.
    """
    vgg16_lnames = ['block1_pool', 'block2_pool', 'block3_pool']
    vgg_model = vgg16_feature_model(vgg16_lnames, weights=vgg16_weights)

    def loss(y_true, y_pred):
        mask_inv = 1 - mask
        y_comp = mask * y_true + mask_inv * y_pred
        print("y_pred", y_pred)
        print(y_comp)
        input()
        vgg_out = vgg_model(y_pred)
        vgg_gt = vgg_model(y_true)
        print("abc-----------------------------------")
        vgg_comp = vgg_model(y_comp)
        print("abc")
        l_valid = loss_per_pixel(y_true, y_pred, mask)
        l_hole = loss_per_pixel(y_true, y_pred, mask_inv)
        l_perc = loss_perc(vgg_out, vgg_gt, vgg_comp)
        l_style = loss_style(vgg_out, vgg_gt, vgg_comp)
        l_tv = loss_tv(y_comp, mask_inv)
        return l_valid + 6.*l_hole + 0.05*l_perc + 120.*l_style + 0.1*l_tv

    return loss
I am getting the following error:
Traceback (most recent call last):
File "inpainter_main.py", line 46, in <module>
model = pconv_model(lr=LR_STAGE1, image_size=IMAGE_SIZE, vgg16_weights=VGG16_WEIGHTS)
File "/home/bitsy-chuck/Downloads/PConv2D-2ndimp/inpainter_utils/pconv2d_model.py", line 118, in pconv_model
model.compile(Adam(lr=lr), loss=total_loss(mask_input, vgg16_weights=vgg16_weights))
File "/home/bitsy-chuck/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py", line 456, in _method_wrapper
result = method(self, *args, **kwargs)
File "/home/bitsy-chuck/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_v1.py", line 446, in compile
self._compile_weights_loss_and_weighted_metrics()
File "/home/bitsy-chuck/anaconda3/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py", line 456, in _method_wrapper
result = method(self, *args, **kwargs)
File "/home/bitsy-chuck/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_v1.py", line 1515, in _compile_weights_loss_and_weighted_metrics
self.total_loss = self._prepare_total_loss(masks)
File "/home/bitsy-chuck/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_v1.py", line 1575, in _prepare_total_loss
per_sample_losses = loss_fn.call(y_true, y_pred)
File "/home/bitsy-chuck/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/losses.py", line 246, in call
return self.fn(y_true, y_pred, **self._fn_kwargs)
File "/home/bitsy-chuck/Downloads/PConv2D-2ndimp/inpainter_utils/pconv2d_loss.py", line 58, in loss
vgg_comp = vgg_model(y_comp)
File "/home/bitsy-chuck/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 737, in __call__
base_layer_utils.create_keras_history(inputs)
File "/home/bitsy-chuck/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 186, in create_keras_history
_, created_layers = _create_keras_history_helper(tensors, set(), [])
File "/home/bitsy-chuck/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 249, in _create_keras_history_helper
layer_inputs, processed_ops, created_layers)
File "/home/bitsy-chuck/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer_utils.py", line 246, in _create_keras_history_helper
constants[i] = backend.function([], op_input)([])
File "/home/bitsy-chuck/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/backend.py", line 3632, in __call__
run_metadata=self.run_metadata)
File "/home/bitsy-chuck/anaconda3/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1472, in __call__
run_metadata_ptr)
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'pconv2d_dec_16_target' with dtype float and shape [?,?,?,?]
[[{{node pconv2d_dec_16_target}}]]
I first thought that y_comp is not correct, but
y_pred ---> Tensor("pconv2d_dec_16/BiasAdd:0", shape=(None, 512, 512, 3), dtype=float32)
y_comp ---> Tensor("loss_1/pconv2d_dec_16_loss/add:0", shape=(None, 512, 512, 3), dtype=float32)
They both appear the same to me, so as far as I can tell it should work.
The error is at the line vgg_comp = vgg_model(y_comp).
Can anyone also explain why I am getting a placeholder error?
Tf version 1.3
keras 2.2.4
Placeholder errors are usually due to TensorFlow version issues. I had the exact same error, and it was fixed when I installed Keras first and then TensorFlow. Using Anaconda might help, as it caches all the files when you uninstall, so it is easy to reinstall without having to download everything again.
There might be some other fix, I believe, but this fixed mine.

Variable_scope runtime error when creating keras custom layer using tensorflow hub models and tensorflow 2.0 as backend

I'm trying to use the pretrained tf-hub elmo model by integrating it into a keras layer.
Keras Layer:
class ElmoEmbeddingLayer(tf.keras.layers.Layer):
    def __init__(self, **kwargs):
        super(ElmoEmbeddingLayer, self).__init__(**kwargs)
        self.dimensions = 1024
        self.trainable = True
        self.elmo = None

    def build(self, input_shape):
        url = 'https://tfhub.dev/google/elmo/2'
        self.elmo = hub.Module(url)
        self._trainable_weights += trainable_variables(
            scope="^{}_module/.*".format(self.name))
        super(ElmoEmbeddingLayer, self).build(input_shape)

    def call(self, x, mask=None):
        result = self.elmo(
            x,
            signature="default",
            as_dict=True)["elmo"]
        return result

    def compute_output_shape(self, input_shape):
        return input_shape[0], self.dimensions
When I run the code I get the following error:
Traceback (most recent call last):
File "D:/Google Drive/Licenta/Gemini/Emotion Analysis/nn/trainer/model.py", line 170, in <module>
validation_steps=validation_dataset.size())
File "D:/Google Drive/Licenta/Gemini/Emotion Analysis/nn/trainer/model.py", line 79, in train_gpu
model = build_model(self.config, self.embeddings, self.sequence_len, self.out_classes, summary=True)
File "D:\Google Drive\Licenta\Gemini\Emotion Analysis\nn\architectures\models.py", line 8, in build_model
return my_model(embeddings, config, sequence_length, out_classes, summary)
File "D:\Google Drive\Licenta\Gemini\Emotion Analysis\nn\architectures\models.py", line 66, in my_model
inputs, embedding = resolve_inputs(embeddings, sequence_length, model_config, input_type)
File "D:\Google Drive\Licenta\Gemini\Emotion Analysis\nn\architectures\models.py", line 19, in resolve_inputs
return elmo_input(model_conf)
File "D:\Google Drive\Licenta\Gemini\Emotion Analysis\nn\architectures\models.py", line 58, in elmo_input
embedding = ElmoEmbeddingLayer()(input_text)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 616, in __call__
self._maybe_build(inputs)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1966, in _maybe_build
self.build(input_shapes)
File "D:\Google Drive\Licenta\Gemini\Emotion Analysis\nn\architectures\custom_layers.py", line 21, in build
self.elmo = hub.Module(url)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow_hub\module.py", line 156, in __init__
abs_state_scope = _try_get_state_scope(name, mark_name_scope_used=False)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow_hub\module.py", line 389, in _try_get_state_scope
"name_scope was already taken." % abs_state_scope)
RuntimeError: variable_scope module/ was unused but the corresponding name_scope was already taken.
It seems to be due to the eager execution behaviour. If I disable eager execution, I have to surround the model.fit function with a TensorFlow session and initialize the variables using sess.run(global_variables_initializer()) to avoid the following error:
Traceback (most recent call last):
File "D:/Google Drive/Licenta/Gemini/Emotion Analysis/nn/trainer/model.py", line 168, in <module>
validation_steps=validation_dataset.size().eval(session=Session()))
File "D:/Google Drive/Licenta/Gemini/Emotion Analysis/nn/trainer/model.py", line 90, in train_gpu
class_weight=weighted)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\keras\engine\training.py", line 643, in fit
use_multiprocessing=use_multiprocessing)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 664, in fit
steps_name='steps_per_epoch')
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\keras\engine\training_arrays.py", line 294, in model_iteration
batch_outs = f(actual_inputs)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\keras\backend.py", line 3353, in __call__
run_metadata=self.run_metadata)
File "D:\Apps\Anaconda\envs\tf2.0\lib\site-packages\tensorflow\python\client\session.py", line 1458, in __call__
run_metadata_ptr)
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Error while reading resource variable module/bilm/RNN_0/RNN/MultiRNNCell/Cell1/rnn/lstm_cell/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/module/bilm/RNN_0/RNN/MultiRNNCell/Cell1/rnn/lstm_cell/bias/class tensorflow::Var does not exist.
[[{{node elmo_embedding_layer/module_apply_default/bilm/RNN_0/RNN/MultiRNNCell/Cell1/rnn/lstm_cell/bias/Read/ReadVariableOp}}]]
(1) Failed precondition: Error while reading resource variable module/bilm/RNN_0/RNN/MultiRNNCell/Cell1/rnn/lstm_cell/bias from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/module/bilm/RNN_0/RNN/MultiRNNCell/Cell1/rnn/lstm_cell/bias/class tensorflow::Var does not exist.
[[{{node elmo_embedding_layer/module_apply_default/bilm/RNN_0/RNN/MultiRNNCell/Cell1/rnn/lstm_cell/bias/Read/ReadVariableOp}}]]
[[metrics/f1_micro/Identity/_223]]
0 successful operations.
0 derived errors ignored.
My solution:
with Session() as sess:
    sess.run(global_variables_initializer())
    history = model.fit(self.train_data.repeat(),
                        epochs=self.config['epochs'],
                        validation_data=self.validation_data.repeat(),
                        steps_per_epoch=steps_per_epoch,
                        validation_steps=validation_steps,
                        callbacks=self.__callbacks(monitor_metric),
                        class_weight=weighted)
The main question is whether there is another way to use the elmo tf-hub module in a Keras custom layer and train my model. Another question is whether my current solution affects training performance or causes the GPU OOM error (I get the OOM error after a few epochs with a higher batch size, which I've found to be related to sessions not being closed or memory leaks).
If you wrap your model in a Session() block, you will also have to wrap all other code that uses the model in a Session() block, which takes a lot of time and effort. Here is another way to deal with it.
First, create the elmo module and attach a session to Keras:
elmo_model = hub.Module("https://tfhub.dev/google/elmo/3", trainable=True,
                        name='elmo_module')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(tf.tables_initializer())
K.set_session(sess)
Then, instead of creating the elmo module directly in your ElmoEmbeddingLayer:
self.elmo = hub.Module(url)
self._trainable_weights += trainable_variables(
    scope="^{}_module/.*".format(self.name))
you can do the following, which I think works normally:
self.elmo = elmo_model
self._trainable_weights += trainable_variables(
    scope="^elmo_module/.*")
Here is a simple solution that I used in my case:
This happened to me while I was using a separate Python script to create the module.
To solve it, I passed the tf.Session() from the main script to tf.keras.backend in the other script by creating an entry point that sets it before the layer's __init__ is called.
Example:
Main file:
import tensorflow.compat.v1 as tf
from ModuleFile import ModuleLayer

def __main__():
    init_args = [...]
    input = ...
    sess = tf.keras.backend.get_session()
    ModuleLayer.__init_session__(sess)
    module_layer = ModuleLayer(init_args)(input)
Module file:
import tensorflow.compat.v1 as tf

class ModuleLayer(tf.keras.layers.Layer):
    @staticmethod
    def __init_session__(session):
        tf.keras.backend.set_session(session)

    def __init__(*args):
        ...
Hope that helps :)

Error in Keras when I want to calculate the Sensitivity and Specificity

I am writing code for classification between two types of images based on a CNN.
I want to measure the accuracy, sensitivity, and specificity for my work, but unfortunately I get the following error.
Could you please let me know what my problem is?
m = tf.keras.metrics.SensitivityAtSpecificity(0.5)
model.compile(optimizer='adam', loss=keras.losses.binary_crossentropy, metrics=['accuracy',m])
error:
Traceback (most recent call last):
File "C:/Users/Hamed/PycharmProjects/Deep Learning/CNN.py", line 77, in <module>
validation_steps = 1600//batch_size)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\keras\engine\training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\keras\engine\training_generator.py", line 217, in fit_generator
class_weight=class_weight)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\keras\engine\training.py", line 1217, in train_on_batch
outputs = self.train_function(ins)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\keras\backend\tensorflow_backend.py", line 2715, in __call__
return self._call(inputs)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\keras\backend\tensorflow_backend.py", line 2675, in _call
fetched = self._callable_fn(*array_vals)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\tensorflow\python\client\session.py", line 1439, in __call__
run_metadata_ptr)
File "C:\Users\Hamed\Anaconda3\envs\tensorflowGPU\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 528, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: Resource localhost/false_negatives/class tensorflow::Var does not exist.
[[{{node metrics/sensitivity_at_specificity/AssignAddVariableOp_1}}]]
[[{{node metrics/sensitivity_at_specificity/Mean}}]]
The metric tf.keras.metrics.SensitivityAtSpecificity computes sensitivity at a given specificity (see the TensorFlow documentation).
Unfortunately, sensitivity and specificity are not yet included in Keras as standalone metrics, so you have to write your own custom metric.
The following is one simple way to calculate specificity, found in this answer.
def specificity(y_true, y_pred):
    """
    param:
        y_pred - Predicted labels
        y_true - True labels
    Returns:
        Specificity score
    """
    neg_y_true = 1 - y_true
    neg_y_pred = 1 - y_pred
    fp = K.sum(neg_y_true * y_pred)
    tn = K.sum(neg_y_true * neg_y_pred)
    specificity = tn / (tn + fp + K.epsilon())
    return specificity
You can get Keras implementations for specificity and sensitivity on this link.
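For completeness, here is a hedged companion sketch of sensitivity (equivalently, recall) written in the same Keras-backend style as the specificity function above; it assumes binary 0/1 labels and predictions, and K is keras.backend as before.

def sensitivity(y_true, y_pred):
    """Sensitivity (recall) = TP / (TP + FN)."""
    tp = K.sum(y_true * y_pred)        # true positives
    fn = K.sum(y_true * (1 - y_pred))  # false negatives
    return tp / (tp + fn + K.epsilon())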
You can try this, if it helps:
import keras

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=[keras.metrics.Precision(), keras.metrics.Recall(),
                       keras.metrics.SpecificityAtSensitivity(0.5),
                       keras.metrics.SensitivityAtSpecificity(0.5),
                       'accuracy'])
