Can I see the DNN architecture of the original parent network from which the Intel OpenVINO pre-trained models were optimized, and if yes, how? - openvino

I have used pre-trained models from OpenVINO for inference. I would like to know how to view the network structure of these models. And if I want to re-train these networks from scratch, can I find out from which parent models these pre-trained models were originally derived?

Information about Intel pre-trained models is available at the following page, “Overview of OpenVINO™ Toolkit Intel's Pre-Trained Models”.
https://docs.openvinotoolkit.org/2020.4/omz_models_intel_index.html
Information about public pre-trained models is available at the following page, “Overview of OpenVINO™ Toolkit Public Models”.
https://docs.openvinotoolkit.org/2020.4/omz_models_public_index.html
DL Workbench can be used to visualize the network structure. DL Workbench is a web-based graphical environment that enables users to visualize, fine-tune, and compare the performance of deep learning models. More information about DL Workbench is available at the following page, “Introduction to Deep Learning Workbench”.
https://docs.openvinotoolkit.org/2020.4/workbench_docs_Workbench_DG_Introduction.html
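If you just want a quick look at the topology without installing DL Workbench, note that the IR format these models ship in is a plain .xml topology file plus a .bin weights file, so the layer list can be read directly. A minimal sketch in Python; the model file name (face-detection-adas-0001.xml) is only an example:

# Minimal sketch: list the layers of an OpenVINO IR model by parsing its
# .xml topology file. The file name below is a hypothetical example.
import xml.etree.ElementTree as ET

root = ET.parse("face-detection-adas-0001.xml").getroot()
for layer in root.iter("layer"):
    # Each <layer> element carries an id, an operation type, and a name.
    print(layer.get("id"), layer.get("type"), layer.get("name"))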

Related

VM or Azure ML for training Deep Learning Algorithms

I'm trying to train a deep-learning model on 512x512 inputs with TensorFlow. Normally, I would do it with Google Colab or another cloud GPU provider. However, for security reasons, I have to train the model in Azure, where GPU instances are restricted. My current options are the following:
-Request a Standard_NC4as_T4_v3 as a computing instance for Azure Machine Learning Studio and train everything in Azure Notebooks. I currently have the dataset there.
-Request an NC4as_T4_v3 for a VM and get the NVIDIA image to train the model in a VM. Getting the data from Azure Machine Learning Studio is not a problem.
Both options have the T4 GPU (16 GB vRAM) because I did similar experiments in the past and it was good for the job. Before requesting access to an instance, I would like to know which option is better and more likely to be accepted.
I've tried to train a model on the currently available computing instances (Tesla K80 and M60), but they don't have enough power and are out of date with the latest libraries. I also tried the only GPU instance available at the moment (NV8as_v4), but it has an AMD GPU and is not intended for deep-learning training.
A VM and ML Studio will not differ much in performance, but Azure ML Studio is more feasible for validating the images and then applying the deep learning models. Computational power in Azure is scalable in the form of clusters and instances, and can be increased through the node count; a sketch of provisioning such a cluster follows below.
In ML Studio you need to use attached computes to increase the computation capacity.
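As a minimal sketch of that scaling, here is how a GPU compute cluster could be provisioned with the azureml-core Python SDK; the workspace config, cluster name, and node counts are assumptions for illustration:

# Minimal sketch (azureml-core SDK, v1): provision a scalable T4 cluster.
# Workspace config, cluster name, and node counts are hypothetical.
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()  # reads a config.json downloaded from the portal

config = AmlCompute.provisioning_configuration(
    vm_size="Standard_NC4as_T4_v3",  # the T4 instance discussed above
    min_nodes=0,                     # scale down to zero when idle
    max_nodes=4,                     # raise node count to scale out training
)
cluster = ComputeTarget.create(ws, "t4-cluster", config)
cluster.wait_for_completion(show_output=True)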

BERT fine-tuning for Conversational AI

I am trying to build a conversational AI chatbot. Since BERT is quite a popular model, I am thinking about using it. I can see that BERT has a pre-trained model for the Question Answering task. Can anyone tell me which version of the BERT model I should use to build a conversational AI? Or can anyone direct me to useful resources?
Thanks in advance!
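As a starting point, one commonly used resource is the Hugging Face transformers library, which exposes BERT checkpoints fine-tuned for question answering. A minimal sketch (the checkpoint named here is a published SQuAD model; whether it suits a full conversational agent is a separate question):

# Minimal sketch: load a BERT checkpoint fine-tuned for question answering
# via the Hugging Face transformers pipeline API.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)
result = qa(
    question="What task was the model fine-tuned for?",
    context="This BERT checkpoint was fine-tuned on SQuAD for question answering.",
)
print(result["answer"], result["score"])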

Deploying a Tensorflow/Keras model in Spark Pipeline

I have trained several RNN+biLSTM models that I want to deploy in a pipeline consisting of PySpark pipeline steps. spark-deep-learning seems to be a stale project that only accommodates work with image data. Are there any best practices today for loading TensorFlow/Keras models (and their associated vector embeddings) into PySpark pipelines?
If you want to deploy a TensorFlow model into Spark, you should take a look at Deeplearning4j. It comes with importers that can read Keras and TensorFlow models.
Be aware that not every layer is supported.
Besides spark-deep-learning there is TensorFrames; I have never used it, so I don't know how good it is.
In general I would suggest using TensorFlow directly via Distributed TensorFlow rather than all these wrappers. Another common pattern is to run the Keras model inside a pandas UDF, as sketched below.
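A minimal sketch of that pandas-UDF pattern, assuming Spark 3.x with pyarrow installed; the model path and column names are hypothetical:

# Minimal sketch (Spark 3.x): score rows with a Keras model inside a
# Series-to-Series pandas UDF. Model path and columns are hypothetical.
import numpy as np
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import FloatType

MODEL_PATH = "/models/rnn_bilstm.h5"  # must be readable on every executor

@pandas_udf(FloatType())
def score(features: pd.Series) -> pd.Series:
    # Load the model lazily, once per executor Python process.
    import tensorflow as tf
    global _model
    if "_model" not in globals():
        _model = tf.keras.models.load_model(MODEL_PATH)
    batch = np.stack(features.to_numpy())       # stack rows into one batch
    preds = _model.predict(batch, verbose=0)
    return pd.Series(preds.ravel())

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("features.parquet")     # hypothetical input table
df = df.withColumn("score", score("features"))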

Activation function of Regression Neural Net in Azure ML Studio?

I am not able to find the activation function for the Regression Neural Network module in Azure Machine Learning Studio, so I cannot tell which activation function my NN uses. I also followed this document:
https://learn.microsoft.com/en-us/azure/machine-learning/studio-module-reference/neural-network-regression
Can someone suggest where to specify it, or what the default activation function is?
Microsoft Azure ML Studio basically has two configurations to work with:
Use the default architecture provided to create a neural network model (preferable for beginners).
-With this you can only change the number of nodes in the hidden layer, the learning rate, and normalization.
Define your own custom architecture to create a neural network model (preferable if you know your way around neural networks).
-With this you can customize the architecture completely and modify its connections, along with the activation function of your choice.
Take a look at the following link for a more detailed description (official documentation):
https://learn.microsoft.com/en-us/azure/machine-learning/studio-module-reference/neural-network-regression
The following example uses softmax activation:
https://gallery.azure.ai/Experiment/7d3f74981b5b42cd9687370671c86696
The default activation function is sigmoid for classification models and linear for regression models.

Manage scikit-learn model in Google Cloud Platform

We are trying to figure out how to host and run many of our existing scikit-learn and R models (as is) in GCP. It seems ML Engine is pretty specific to TensorFlow. How can I train a scikit-learn model on Google Cloud Platform and manage my model if the dataset is too large to pull into Datalab? Can I still use ML Engine, or is there a different approach most people take?
As an update, I was able to get the Python script that trains the scikit-learn model to run by submitting it as a training job to ML Engine, but I haven't found a way to host the pickled model or use it for prediction.
Cloud ML Engine only supports models written in TensorFlow.
If you're using scikit-learn you might want to look at some of the higher level TensorFlow libraries like TF Learn or Keras. They might help migrate your model to TensorFlow in which case you could then use Cloud ML Engine.
It's possible; Cloud ML has had this feature since Dec 2017. As of today it is provided as early access: the Cloud ML team is essentially testing this feature, but you can also be part of it. More on that here.
Use the following command to deploy your scikit-learn model to Cloud ML. Please note these parameters may change in the future.
gcloud ml-engine versions create ${MODEL_VERSION} --model=${MODEL} --origin="gs://${MODEL_PATH_IN_BUCKET}" --runtime-version="1.2" --framework="SCIKIT_LEARN"
sklearn is now supported on ML Engine.
Here is a fully worked example of fully-managed scikit-learn training, online prediction, and hyperparameter tuning:
https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/blogs/sklearn/babyweight_skl.ipynb
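Once a scikit-learn version is deployed with the gcloud command above, the pickled model can also be called for online prediction. A minimal sketch using the google-api-python-client library; the project, model, and version names are hypothetical:

# Minimal sketch: online prediction against a deployed scikit-learn version
# on ML Engine. Project, model, and version names are hypothetical.
from googleapiclient import discovery

service = discovery.build("ml", "v1")
name = "projects/my-project/models/my_sklearn_model/versions/v1"

# Each instance is one feature row in the shape the pickled model expects.
body = {"instances": [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]}
response = service.projects().predict(name=name, body=body).execute()

if "error" in response:
    raise RuntimeError(response["error"])
print(response["predictions"])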
