keras vs. tensorflow.keras - python-3.x

Inspired by this post.
Why is there a difference between the 2 modules?
When would I use one over the other?
Anything else I should know?

Keras is a standalone high-level API that supports TensorFlow, Theano and CNTK backends. Now, Theano and CNTK are out of development.
tf.keras is the Keras API integrated into TensorFlow 2.
So, if you aim to use TensorFlow as your deep learning framework, I recommend using tensorflow.keras to save yourself some headaches.
This is also backed by a tweet from François Chollet, the creator of Keras:
We recommend you switch your Keras code to tf.keras.
Both Theano and CNTK are out of development. Meanwhile, as Keras
backends, they represent less than 4% of Keras usage. The other 96% of
users (of which more than half are already on tf.keras) are better
served with tf.keras.
Keras development will focus on tf.keras going forward.
Importantly, we will seek to start developing tf.keras in its own
standalone GitHub repository at keras-team/keras in order to make it
much easier for 3rd party folks to contribute.
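To make the difference concrete, here is a minimal sketch (my own illustration, not from the tweet) of the two import styles; the tf.keras form is the recommended one:

# Standalone Keras, the legacy multi-backend package:
import keras
model = keras.Sequential([keras.layers.Dense(10, activation="softmax")])

# tf.keras, the Keras API shipped inside TensorFlow 2 (recommended):
from tensorflow import keras
model = keras.Sequential([keras.layers.Dense(10, activation="softmax")])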

Related

Is it possible to know the DNN architecture of the original parent network from which the Intel OpenVINO pre-trained models were optimized, and if yes, how?

I have used pre-trained models from OpenVINO for inference. I would like to know how I can see the network structure of these models. And if I want to re-train these networks from scratch, can I find out from which parent models these pre-trained models were originally derived?
Information about Intel pre-trained models is available at the following page, “Overview of OpenVINO™ Toolkit Intel's Pre-Trained Models”.
https://docs.openvinotoolkit.org/2020.4/omz_models_intel_index.html
Information about public pre-trained models is available at the following page, “Overview of OpenVINO™ Toolkit Public Models”.
https://docs.openvinotoolkit.org/2020.4/omz_models_public_index.html
DL Workbench can be used to visualize the network structure. DL Workbench is a web-based graphical environment that enables users to visualize, fine-tune, and compare performance of deep learning models. More information about DL Workbench is available at the following page, “Introduction to Deep Learning Workbench”.
https://docs.openvinotoolkit.org/2020.4/workbench_docs_Workbench_DG_Introduction.html
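Beyond DL Workbench, a minimal sketch of inspecting a model's input and output shapes with the OpenVINO 2020.4 Python API (the IR file names here are hypothetical):

# Read an OpenVINO IR model and print its inputs and outputs (file names are hypothetical).
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
print("Inputs:", {name: info.input_data.shape for name, info in net.input_info.items()})
print("Outputs:", {name: data.shape for name, data in net.outputs.items()})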

Custom Vision save current model

I'm using the Custom Vision service from Microsoft to classify images. Since the model will have to be retrained a few times a year, I would like to know if I can save the current version of my Azure Custom Vision model so I can retrain my new model on the same version. I guess Microsoft will try to improve the performance of its service over time, so the model used by this tool will probably change...
You can export the model after each run, but you cannot use an existing model as a starting point for another training run.
So yes, as it is a managed service, Microsoft might optimize or otherwise change the training algorithms in the background. It is up to you to decide whether that works for you. If not, a managed service like this is probably not something you should use; instead, train your own models entirely.
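A hedged sketch of that export step using the azure-cognitiveservices-vision-customvision Python SDK (the endpoint, training key, project ID, and iteration ID are placeholders):

# Export a trained Custom Vision iteration, e.g. as a TensorFlow model (IDs are placeholders).
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient("<endpoint>", credentials)

project_id = "<project-id>"
iteration_id = "<iteration-id>"

# Request the export, then poll get_exports until the download URI is ready.
trainer.export_iteration(project_id, iteration_id, platform="TensorFlow")
for export in trainer.get_exports(project_id, iteration_id):
    print(export.platform, export.status, export.download_uri)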

Deployment of a TensorFlow object detection model and serving predictions

I have a TensorFlow object detection model deployed on Google Cloud Platform's ML Engine. I have come across posts suggesting TensorFlow Serving + Docker for better performance. I am new to TensorFlow and want to know what the best way to serve predictions is. Currently, the ML Engine online predictions have a latency of >50 seconds. My use case is a user uploading pictures with a mobile app and then getting a suitable response based on the prediction result, so I am expecting the prediction latency to come down to 2-3 seconds. What else can I do to make the predictions faster?
Google Cloud ML Engine has recently released GPUs support for Online Prediction (Alpha). I believe that our offering may provide the performance improvements you're looking for. Feel free to sign up here: https://docs.google.com/forms/d/e/1FAIpQLSexO16ULcQP7tiCM3Fqq9i6RRIOtDl1WUgM4O9tERs-QXu4RQ/viewform?usp=sf_link
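As for the TensorFlow Serving + Docker route mentioned in the question, it is typically started with a command like the following (the host path and model name are hypothetical):

docker run -p 8501:8501 --mount type=bind,source=/path/to/saved_model,target=/models/detector -e MODEL_NAME=detector -t tensorflow/serving

This exposes a REST endpoint at http://localhost:8501/v1/models/detector:predict.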

Deploying a Tensorflow/Keras model in Spark Pipeline

I have trained several RNN+biLSTM models that I want to deploy in a pipeline consisting of pyspark pipeline steps. spark-deep-learning seems to be a stale project that only accommodates work with image data. Are there any best practices today for loading tensorflow/keras models (and their associated vector embeddings) into pyspark pipelines?
If you want to deploy a TensorFlow model into Spark, you should take a look at Deeplearning4j. It comes with importers that can read Keras and TensorFlow models.
Be aware that not every layer is supported.
Besides spark-deep-learning there is TensorFrames; I never used it, so I don't know how good it is.
In general I would suggest using TensorFlow directly via Distributed TensorFlow rather than all these wrappers.
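To the original question of loading a Keras model into a PySpark pipeline, here is a minimal sketch without any wrapper library (the model path, column name, and input file are hypothetical; assumes Spark 3.x with pyarrow and TensorFlow installed on the executors):

# Score a saved Keras model inside a PySpark pipeline via a pandas UDF.
import numpy as np
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import FloatType

spark = SparkSession.builder.appName("keras-scoring").getOrCreate()
MODEL_PATH = "/models/rnn_bilstm.h5"  # hypothetical path, must be visible to every executor

@pandas_udf(FloatType())
def score(features: pd.Series) -> pd.Series:
    # Import and load inside the UDF so each executor process gets its own copy.
    from tensorflow import keras
    model = keras.models.load_model(MODEL_PATH)
    batch = np.stack(features.to_numpy())
    return pd.Series(model.predict(batch).ravel())

df = spark.read.parquet("features.parquet")  # hypothetical input with an array column "features"
df.withColumn("prediction", score("features")).show()

In practice you would cache the loaded model per executor instead of reloading it for every batch.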

Manage scikit-learn model in Google Cloud Platform

We are trying to figure out how to host and run many of our existing scikit-learn and R models (as is) in GCP. It seems ML Engine is pretty specific to Tensorflow. How can I train a scikit-learn model on Google cloud platform and manage my model if the dataset is too large to pull into datalab? Can I still use ML Engine or is there a different approach most people take?
As an update, I was able to get the Python script that trains the scikit-learn model to run by submitting it as a training job to ML Engine, but I haven't found a way to host the pickled model or use it for prediction.
Cloud ML Engine only supports models written in TensorFlow.
If you're using scikit-learn you might want to look at some of the higher level TensorFlow libraries like TF Learn or Keras. They might help migrate your model to TensorFlow in which case you could then use Cloud ML Engine.
It's possible; Cloud ML has had this feature since Dec 2017. As of today it is provided as early access: the Cloud ML team is testing this feature, but you can also be part of it. More on it here.
Use the following command to deploy your scikit-learn models to Cloud ML. Please note that these parameters may change in the future.
gcloud ml-engine versions create ${MODEL_VERSION} --model=${MODEL} --origin="gs://${MODEL_PATH_IN_BUCKET}" --runtime-version="1.2" --framework="SCIKIT_LEARN"
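Once the version is created, online prediction can be requested in the same style (reusing the placeholders above; instances.json is a hypothetical file of newline-delimited JSON instances):

gcloud ml-engine predict --model=${MODEL} --version=${MODEL_VERSION} --json-instances=instances.json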
sklearn is now supported on ML Engine.
Here is a fully worked out example of using fully-managed scikit-learn training, online prediction and hyperparameter tuning:
https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/blogs/sklearn/babyweight_skl.ipynb
