sklearn, Keras, DeepStack - ValueError: multi_class must be in ('ovo', 'ovr')

I trained a set of DNNs and I want to use them in a deep ensemble. The code is implemented in TF2, but the package deepstack works with Keras as well. The code looks something like this:
from deepstack.base import KerasMember
from deepstack.ensemble import DirichletEnsemble

dirichletEnsemble = DirichletEnsemble(N=2000 * ensemble_size)
for net_idx in range(ensemble_size):
    member = KerasMember(name=model_name, keras_model=model,
                         train_batches=(train_images, train_labels),
                         val_batches=(valid_images, valid_labels))
    dirichletEnsemble.add_member(member)
dirichletEnsemble.fit()
where 'model' is essentially a Keras model, so one model is loaded at each iteration of the loop (I am using my own implementation), and 'ensemble_size' is the number of DNNs used in the ensemble.
As a result, I get the following error
ValueError: multi_class must be in ('ovo', 'ovr')
which is generated by the sklearn package.
FURTHER DETAILS: deepstack creates a metric
metric = metrics.roc_auc_score
and then evaluates it as
return metric(y_t, y_p)
which in turn calls this sklearn code:
if multi_class == 'raise':
    raise ValueError("multi_class must be in ('ovo', 'ovr')")
In my specific case, the labels are, respectively, y_t
[ 7 10 18 52 10 13 10 4 7 7 24 26 7 26 13 13]
and y_p
[ 73 250 250 250 281 281 250 281 281 174 281 250 281 250 250 250]
How do I set multi_class to 'ovo' or 'ovr'?

The documentation for roc_auc_score indicates the following:
roc_auc_score(
    y_true,
    y_score,
    *,
    average='macro',
    sample_weight=None,
    max_fpr=None,
    multi_class='raise',
    labels=None
)
The second-to-last parameter there is multi_class, which has the following explanation:
Multiclass only. Determines the type of configuration to use. The default value raises an error, so either 'ovr' or 'ovo' must be passed explicitly.
So there are several ways to compute ROC AUC for a multiclass problem, and scikit-learn forces you to choose one explicitly. If you don't make the choice, the default value 'raise' results in an exception, and that exception is exactly the error you are reporting in your question title.
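Since deepstack calls roc_auc_score internally, one way out is to hand the ensemble a metric with multi_class already bound. The sketch below assumes your installed deepstack version exposes a metric argument on DirichletEnsemble (check its signature first); note also that multiclass roc_auc_score expects per-class probabilities as y_p, not hard label predictions like the ones shown in the question:
from functools import partial
from sklearn import metrics
from deepstack.ensemble import DirichletEnsemble

ensemble_size = 5  # as in the question's loop

# bind multi_class up front so deepstack's internal metric(y_t, y_p)
# call no longer hits the 'raise' default
auc_ovr = partial(metrics.roc_auc_score, multi_class='ovr')

# assumption: this deepstack version accepts a custom metric callable
dirichletEnsemble = DirichletEnsemble(N=2000 * ensemble_size, metric=auc_ovr)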

If you are getting this error while using sklearn's roc_auc_score, try roc_auc_score(YTEST, YPRED, multi_class='ovr'). 'ovr' stands for one-vs-rest, which scores the multiclass problem as a set of binary problems, each class against the rest.
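For illustration, a self-contained sketch with made-up data; note that for multiclass AUC the second argument must be probability estimates of shape (n_samples, n_classes), not predicted labels:
import numpy as np
from sklearn.metrics import roc_auc_score

# made-up data: four samples, three classes
y_true = np.array([0, 1, 2, 2])
y_score = np.array([[0.8, 0.1, 0.1],   # per-class probabilities;
                    [0.2, 0.6, 0.2],   # each row sums to 1
                    [0.1, 0.2, 0.7],
                    [0.2, 0.2, 0.6]])

print(roc_auc_score(y_true, y_score, multi_class='ovr'))  # 1.0 for this toy data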

Related

Clarifications on training job parameters with Tensorflow

I'm using the new TensorFlow object detection API.
I need to replicate the training parameters used in a paper, but I'm a bit confused.
The paper states:
When training neural network models, their base configuration is similar to that used to train on the COCO 2017 dataset. For the unambiguous comparison of the selected models, the total number of training steps was set to 100 equal to 100′000 iterations of learning.
Inside model_main_tf2.py, which is the script used to start the training, I can read the following:
"""Creates and runs TF2 object detection models.
For local training/evaluation run:
PIPELINE_CONFIG_PATH=path/to/pipeline.config
MODEL_DIR=/tmp/model_outputs
NUM_TRAIN_STEPS=10000
SAMPLE_1_OF_N_EVAL_EXAMPLES=1
python model_main_tf2.py -- \
--model_dir=$MODEL_DIR --num_train_steps=$NUM_TRAIN_STEPS \
--sample_1_of_n_eval_examples=$SAMPLE_1_OF_N_EVAL_EXAMPLES \
--pipeline_config_path=$PIPELINE_CONFIG_PATH \
--alsologtostderr
"""
Also, you can specify the num_steps and total_steps parameters in the pipeline.config file (used by the training script):
train_config: {
  batch_size: 1
  sync_replicas: true
  startup_delay_steps: 0
  replicas_to_aggregate: 8
  num_steps: 50000
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        cosine_decay_learning_rate {
          learning_rate_base: .16
          total_steps: 50000
          warmup_learning_rate: 0
          warmup_steps: 2500
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
}
So, what I'm not understanding is how I should map what is written in the paper to the TensorFlow parameters.
What are num_steps and total_steps inside the pipeline.config file?
And what is the NUM_TRAIN_STEPS argument?
Does it overwrite the config file steps, or is it a completely different thing?
If more details are needed feel free to ask.
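For reference, a minimal sketch of what model_main_tf2.py does internally, under the assumption (worth verifying against your checkout of the Object Detection API) that a supplied NUM_TRAIN_STEPS takes precedence over num_steps in pipeline.config, while leaving it unset falls back to the config value:
from object_detection import model_lib_v2

# sketch of the call model_main_tf2.py issues; paths are placeholders
model_lib_v2.train_loop(
    pipeline_config_path='path/to/pipeline.config',
    model_dir='/tmp/model_outputs',
    train_steps=100000,  # the paper's 100'000 iterations;
)                        # train_steps=None would defer to num_steps in the config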

PyTorch error isDifferentiableType(variable.scalar_type()) for calculating det of a complex matrix

Following up this post
When I want to use the complex_det function to calculate the det of a complex matrix, I face this error:
RuntimeError: isDifferentiableType(variable.scalar_type()) INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/autograd/functions/utils.h":59, please report a bug to PyTorch.
Any idea how I can fix it? The relevant traceback:
<ipython-input-76-246d142f8871> in complex_det(A)
3 return torch.view_as_complex(torch.stack((A.real.diag(), A.imag.diag()),dim=1))
4 #Perform LU decomposition to matrix A:
----> 5 A_LU, pivots = A.lu()
6 P, A_L, A_U = torch.lu_unpack(A_LU, pivots)
7 #Det. of multiplied matrices is multiplcation of det.:
/usr/local/lib/python3.6/dist-packages/torch/tensor.py in lu(self, pivot, get_infos)
332 r"""See :func:`torch.lu`"""
333 # If get_infos is True, then we don't need to check for errors and vice versa
--> 334 LU, pivots, infos = torch._lu_with_info(self, pivot=pivot, check_errors=(not get_infos))
335 if get_infos:
336 return LU, pivots, infos
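The assert comes from autograd being asked to differentiate through a dtype that this PyTorch build does not support. A minimal sketch, assuming an upgrade to PyTorch 1.8 or later, where complex autograd and a complex-capable determinant are available so the manual LU route is no longer needed:
import torch  # assumes PyTorch >= 1.8 for complex autograd support

# a random complex matrix that participates in autograd
A = torch.randn(3, 3, dtype=torch.cfloat, requires_grad=True)

# torch.linalg.det handles complex inputs directly in newer releases,
# replacing the hand-rolled LU-based complex_det
d = torch.linalg.det(A)

# backward() needs a real scalar, so reduce the complex det first
d.abs().backward()
print(d, A.grad)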

Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers

Update #1 (original question and details below):
As per the suggestion of @MatthijsHollemans below, I've tried to run this by removing dynamic_axes from the initial create_onnx step below. This removed both:
Description of image feature 'input_image' has missing or non-positive width 0.
and
Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
Unfortunately this opens up two sub-questions:
I still want to have a functional ONNX model. Is there a more appropriate way to make H and W dynamic? Or should I be saving two versions of the ONNX model, one without dynamic_axes for the CoreML conversion, and one with for use as a valid ONNX model?
Although this solves the compilation error in Xcode (specified below), it introduces the following runtime issues:
Finalizing CVPixelBuffer 0x282f4c5a0 while lock count is 1.
[espresso] [Espresso::handle_ex_plan] exception=Invalid X-dimension 1/480 status=-7
[coreml] Error binding image input buffer input_image: -7
[coreml] Failure in bindInputsAndOutputs.
I am calling this the same way I was calling the fixed-size model, which does still work fine. The image dimensions are 640 x 480.
As specified below, the model should accept any image sized 64x64 or larger.
For flexible-shape models, do I need to provide the input differently in Xcode?
Original Question (parts still relevant)
I have been slowly working on converting a style transfer model from pytorch > onnx > coreml. One of the issues that has been a struggle is flexible/dynamic input + output shape.
This method (besides i/o renaming) has worked well on iOS 12 & 13 when using a static input shape.
I am using the following code to do the onnx > coreml conversion:
def create_coreml(name):
    mlmodel = convert(
        model="onnx/" + name + ".onnx",
        preprocessing_args={'is_bgr': True},
        deprocessing_args={'is_bgr': True},
        image_input_names=['input_image'],
        image_output_names=['stylized_image'],
        minimum_ios_deployment_target='13'
    )

    spec = mlmodel.get_spec()

    img_size_ranges = flexible_shape_utils.NeuralNetworkImageSizeRange()
    img_size_ranges.add_height_range((64, -1))
    img_size_ranges.add_width_range((64, -1))

    flexible_shape_utils.update_image_size_range(
        spec,
        feature_name='input_image',
        size_range=img_size_ranges)
    flexible_shape_utils.update_image_size_range(
        spec,
        feature_name='stylized_image',
        size_range=img_size_ranges)

    mlmodel = coremltools.models.MLModel(spec)
    mlmodel.save("mlmodel/" + name + ".mlmodel")
Although the conversion 'succeeds', there are a couple of warnings (spaces added for readability):
Translation to CoreML spec completed. Now compiling the CoreML model.
/usr/local/lib/python3.7/site-packages/coremltools/models/model.py:111:
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was:
Error compiling model:
"Error reading protobuf spec. validator error: Description of image feature 'input_image' has missing or non-positive width 0.".
RuntimeWarning)
Model Compilation done.
/usr/local/lib/python3.7/site-packages/coremltools/models/model.py:111:
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was:
Error compiling model:
"compiler error: Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
".
RuntimeWarning)
If I ignore these warnings and try to compile the model for the latest target (13.0), I get the following error in Xcode:
coremlc: Error: compiler error: Input 'input_image' of layer '63' not found in any of the outputs of the preceeding layers.
Here is what the problematic area appears to look like in netron:
My main question is how can I get these two warnings out of the way?
Happy to provide any other details.
Thanks for any advice!
Below is my pytorch > onnx conversion:
def create_onnx(name):
    prior = torch.load("pth/" + name + ".pth")
    model = transformer.TransformerNetwork()
    model.load_state_dict(prior)

    dummy_input = torch.zeros(1, 3, 64, 64)  # I wasn't sure what I would set the H and W to here?

    # torch.onnx.export writes the .onnx file itself, so no separate save is needed
    torch.onnx.export(model, dummy_input, "onnx/" + name + ".onnx",
                      verbose=True,
                      opset_version=10,
                      input_names=["input_image"],      # These are being renamed from garbled originals.
                      output_names=["stylized_image"],  # ^
                      dynamic_axes={'input_image': {2: 'height', 3: 'width'},
                                    'stylized_image': {2: 'height', 3: 'width'}})
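On the first sub-question above, a minimal sketch of the two-export idea (the helper name and file suffixes are made up for illustration): keep a static-shape export just for the CoreML conversion, and a dynamic_axes export as the standalone ONNX model:
def create_onnx_two_versions(name, model, dummy_input):
    # hypothetical helper illustrating the "two versions" approach
    common = dict(verbose=True, opset_version=10,
                  input_names=["input_image"],
                  output_names=["stylized_image"])

    # static shapes only: feed this one to the ONNX -> CoreML converter
    torch.onnx.export(model, dummy_input,
                      "onnx/" + name + "_static.onnx", **common)

    # dynamic H/W: keep this one as the valid standalone ONNX model
    torch.onnx.export(model, dummy_input,
                      "onnx/" + name + "_dynamic.onnx",
                      dynamic_axes={'input_image': {2: 'height', 3: 'width'},
                                    'stylized_image': {2: 'height', 3: 'width'}},
                      **common)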

Parsing error when reading a specific Pajek (NET) file with Networkx into Jupyter

I am trying to read this Pajek file in Google Colab's version of Jupyter, and I get an error when executing the following very simple code:
J = nx.MultiDiGraph()
J=nx.read_pajek("/content/data/graphdatasets/jazz.net")
print(nx.info(J))
The error is the following:
/usr/local/lib/python3.6/dist-packages/networkx/readwrite/pajek.py in parse_pajek(lines)
211 except AttributeError:
212 splitline = shlex.split(str(l))
--> 213 id, label = splitline[0:2]
214 labels.append(label)
215 G.add_node(label)
ValueError: not enough values to unpack (expected 2, got 1)
With pip show networkx, I see that I'm running NetworkX version 2.3. Am I doing something wrong in the code?
Update: Pasting below the file's first few lines:
*Vertices 198
*Arcs
*Edges
1 8 1
1 24 1
1 35 1
1 42 1
1 46 1
1 60 1
1 74 1
1 78 1
According to the Pajek definition, the first two lines of your file do not follow the standard: after *Vertices n, n lines with details about the vertices are expected. In addition, having both *Arcs and *Edges is a duplication. NetworkX uses the marker that introduces the edge list to pick the graph type: *Arcs yields a MultiDiGraph and *Edges a MultiGraph (see the current code). To resolve your problem, you only need to delete the first two lines of your .net file.
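If you'd rather not edit the file on disk, a minimal sketch of the same fix done in memory (assuming the layout shown above, so the first two lines are the offending ones):
import networkx as nx

# read the raw lines and drop the malformed "*Vertices 198" header and
# the duplicate "*Arcs" marker before handing them to the parser
with open("/content/data/graphdatasets/jazz.net") as f:
    lines = f.read().splitlines()

J = nx.parse_pajek(lines[2:])  # parsing starts at the "*Edges" line
print(nx.info(J))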

JAGS Beginner - Receiving and Understanding Output

When using JAGS, how does one receive output from a model in the format:
Inference for Bugs model at "model.txt", fit using jags,
3 chains, each with 10000 iterations (first 5000 discarded)
n.sims = 15000 iterations saved
mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff
mu 9.950 0.288 9.390 9.755 9.951 10.146 10.505 1.001 11000
sd.obs 3.545 0.228 3.170 3.401 3.534 3.675 3.978 1.001 13000
deviance 820.611 3.460 818.595 819.132 819.961 821.366 825.871 1.001 15000
I assumed, as with BUGS, it would appear when the model completes; however, I only get something in the format:
Compiling model graph
Resolving undeclared variables
Allocating nodes
Graph information:
Observed stochastic nodes: 1785
Unobserved stochastic nodes: 1843
Total graph size: 61542
Initializing model
|++++++++++++++++++++++++++++++++++++++++++++++++++| 100%
Apologies for the basic question. If anyone can provide useful JAGS introductory material that would also be useful.
Kind regards.
If you only get the 'plus' signs, it means you have only initialized the model. When JAGS really runs, it typically prints '***' signs afterwards. So you are missing a line here (it would have been nice to see your code). For instance, if you use R2jags, you would write:
out <- jags(data = data, parameters.to.save = params, n.chains = 3,
            n.iter = 90000, n.burnin = 5000, model.file = modFile)
out.upd <- update(out, n.iter = 10000)
