I am trying to train my model with distributed data parallelism. I already trained it using torch.nn.DataParallel, and now I want to see how much training speed I can gain with torch.nn.parallel.DistributedDataParallel, since I have read on numerous pages that it is better to use DistributedDataParallel. I followed one of the examples, but I am not sure how to set the environment variables os.environ['MASTER_ADDR'] and os.environ['MASTER_PORT'], because I am using a cloud service and do not know in advance which node my training job gets allocated to. Can anyone help me set these variables?
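For reference, a minimal sketch of the usual setup (the 'localhost' address and the port number are placeholder values that only cover the single-node case, which is exactly the part I am unsure about on the cloud service):

import os
import torch.distributed as dist

def setup(rank, world_size):
    # MASTER_ADDR must point at the node hosting rank 0; 'localhost' only
    # works when every process runs on the same machine. On a cluster the
    # value would have to come from the scheduler / cloud environment.
    os.environ['MASTER_ADDR'] = 'localhost'
    # Any free port on that node; every rank must use the same value.
    os.environ['MASTER_PORT'] = '12355'
    # The default env:// init method reads the two variables set above.
    dist.init_process_group(backend='nccl', rank=rank, world_size=world_size)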
Related
I want to hand-write a framework to perform inference for a given neural network. The network is quite complicated, so to make sure my implementation is correct, I need to know exactly how the inference process is done on the device.
I tried to use torchviz to visualize the network, but what I got seems to be the back propagation compute graph, which is really hard to understand.
Then I tried to convert the PyTorch model to the ONNX format, following the instructions (enter link description here), but when I tried to visualize it, it seems that the original layers of the model had been separated into very small operators.
I just want to get a result like this:
How can I get this? Thanks!
Have you tried saving the model with torch.save (https://pytorch.org/tutorials/beginner/saving_loading_models.html) and opening it with Netron? The last view you showed is a view of the Netron app.
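A minimal sketch of that suggestion (saving the whole model object, not just the state_dict, so that Netron has the layer structure to display; the file name is arbitrary and model stands for your trained module):

import torch
# model is your trained nn.Module instance
torch.save(model, "model.pt")
# then open model.pt with the Netron app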
You can also try the package torchview, which provides several features (especially useful for large models). For instance you can set the display depth (the depth in the nested hierarchy of modules).
It is also based on forward prop
github repo
Disclaimer: I am the author of the package
Note: the accepted input format for the tool is a PyTorch model
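A small sketch of how the package might be used (the input_size and depth values are only illustrative):

from torchview import draw_graph

# model is your nn.Module; depth limits how deep the nested modules are expanded
model_graph = draw_graph(model, input_size=(1, 3, 224, 224), depth=3)
model_graph.visual_graph.render("model_graph")  # writes the rendered graph to disk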
I am working on a product in which we want to protect our learned models. The Keras models are saved using the built-in model.save(), and they are stored on the edge device. The Python script, however, is downloaded at system launch and is only kept on a RAM disk, and is hence secured.
Therefore, I am looking for advice on how to protect our models. Currently I am considering doing some strange pre-processing and/or scaling the input in an unusual way, so that any person getting hold of the models would have a low chance of getting any reasonable output, as they would not know how to define the inputs (besides the input size).
Would this be a good way to approach the issue? Any advice is highly appreciated, thanks.
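To make the idea concrete, a rough sketch of the kind of input obfuscation described above (the scale and offset constants are purely hypothetical; the model would have to be trained on inputs transformed the same way):

import numpy as np

# Hypothetical secret transform applied before the model. Without knowing these
# constants, anyone who copies the saved model gets meaningless outputs from raw inputs.
SECRET_SCALE = np.array([3.7, -0.4, 12.1])
SECRET_OFFSET = 42.0

def preprocess(x):
    return x * SECRET_SCALE + SECRET_OFFSET

# prediction = model.predict(preprocess(raw_input))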
I have built and trained a neural network in PyTorch and it is ready for production on a website, but how do I deploy it?
There are multiple ways to do it.
But first, a PS: I noticed that you were asking specifically about reinforcement learning only after having posted my answer. Know that even though I have written this answer with a static neural network model in mind, I offer at the end of the post a way to apply the ideas of this answer to reinforcement learning.
The different options :
From what I know, using PyTorch directly is not especially recommended for large-scale production.
It is more common to convert the PyTorch model to the ONNX format (a format that makes AI models interchangeable between frameworks). Here is a tutorial if you want to go this way: https://github.com/onnx/tutorials/blob/master/tutorials/PytorchOnnxExport.ipynb .
Then run it using the ONNX Runtime, Caffe2 (by Facebook) or TensorFlow (by Google).
My answer is not going to explore those solutions (and I did not include tutorials for those options), because I recently did the same thing you are trying to do (building a neural network architecture, wanting to deploy it, and also allowing users to train their own networks with that architecture), yet I did not convert my neural network, for the following reasons:
ONNX is evolving quickly, yet is currently not supporting all the operations you can possibly do in a PyTorch model. So if you have a highly custom or specific neural network (like in my case), you might not be able to convert it to ONNX with ease. You might need to change your architecture, or maybe have to re-write a big part of it so that it can be converted to ONNX.
You will need to use one or two additional tools, and most tutorials do not go very deep or do not explain the logic behind what they are doing.
Note that you might want to convert your neural network if you call it billions or trillions of times a day; otherwise I think you can stick with PyTorch without issues, even for production, and avoid the overhead of converting to ONNX.
First, let's see how we can save a trained neural network, load it back through the architecture of the network, and re-run the trained network.
Second, how we can deploy a network to a website, and also how you can allow users to train their networks. It is likely not the best or most efficient way, yet it sure works.
Saving the network :
First, you clearly need to have imported PyTorch with "import torch". Inside your neural network file you should save the stateDict (basically a dictionary of the learnable parameters, i.e. the weights and biases, of your network) of the network you want to re-use. You could for example only save the stateDict of the model with the smallest loss of your epoch.
# network is the variable containing your neural network instance
network_stateDict = network.state_dict()
# Saving network stateDict to a variable
Then when you want to save the stateDict to a file that you can re-use later, use :
torch.save(network_stateDict, "folderPath/myStateDict.pt")
# Saving the stateDict variable to a file
# The .pt extension is just a convention in the PyTorch community, .pth is also used a lot
Finally when you will want to re-use your trained network later on, you will need to :
network = myNetwork(1, 2, 3)
# Load the architecture of the network in a variable (use the same architecture
# and the same network parameters as the ones used to create the stateDict)
network.load_state_dict(torch.load("folderPath/myStateDict.pt"))
# Loading the file containing the stateDict of the trained network into a format
# pyTorch can read with the torch.load function. Then load the stateDict inside the
# network architecture with the load_state_dict function, applied to your network
# object with network.load_state_dict .
network.eval()
# Put the network in evaluation mode (e.g. disabling dropout); do this after
# loading the stateDict and before running inference.
output = network(input_data)
# You should now be able to get output data from your
# trained network, by feeding it a single set of input data.
For more infos on saving models and the stateDicts : https://pytorch.org/tutorials/beginner/saving_loading_models.html
Deploying the network:
Now that we know how to save, restore and feed input data to a network, all that is left to do is to deploy it so that this process is done through the website.
You first need to get (likely from your user) the inputs that your neural network will use. I am not going to include any link, since there are so many different web frameworks.
You would then need to either use a framework (like Django) that allows you to run in Python the logic of:
import torch
network = myNetwork(1, 2, 3)
network.load_state_dict(torch.load("folderPath/myStateDict.pt"))
network.eval()
input_data = data_fromMyUser
output = network(input_data)
Then you would collect the output to display it, or do whatever you want with it.
If your framework does not give you the ability to use Python, I think it would be a good idea to have a tiny Python script to which you would give the input data, and which would return the output.
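A minimal sketch of such a script (assuming the input arrives as a JSON list on stdin; myNetwork and the stateDict path are the same placeholders as in the code above):

import sys, json
import torch

network = myNetwork(1, 2, 3)
network.load_state_dict(torch.load("folderPath/myStateDict.pt"))
network.eval()

# Read the input data from stdin, run the network, print the output as JSON
input_data = torch.tensor(json.loads(sys.stdin.read()))
with torch.no_grad():
    output = network(input_data)
print(json.dumps(output.tolist()))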
If you would like to give users the possibility to train networks, you should just let them start the training of one, and then use torch.save on a stateDict object to save the stateDict to a file.
You or they could later use the trained networks (you will also need to create a little function to make sure that you do not overwrite previous stateDict files).
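As a small illustration of that last point, a sketch of a save helper that avoids overwriting earlier files (the folder and naming scheme are just examples):

import os
import time
import torch

def save_state_dict(state_dict, folder="savedNetworks"):
    # Name each file with a timestamp so previous stateDict files are never overwritten
    os.makedirs(folder, exist_ok=True)
    path = os.path.join(folder, f"stateDict_{int(time.time())}.pt")
    torch.save(state_dict, path)
    return path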
How to apply it to reinforcement learning :
I did not deploy a reinforcement learning model, yet I can offer you some ideas and leads to explore to deploy one.
You could store and add the inputs that you get from your users to a file or a database, and write a little program that, say every 24 hours or every hour, re-runs the training of the neural network on the now bigger dataset.
You could then totally apply the suggestions in this answer, of running the network, saving the stateDict of the model and then changing the stateDict that your network is using in production.
This is a bit hacky, yet would allow you to save in a "static way" your trained networks, and still have them evolving and changing their stateDicts.
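A rough sketch of that periodic retraining loop (everything here is illustrative: load_dataset, train_network, the data file and the schedule are placeholders for your own setup):

import time
import torch

while True:
    # the dataset grows as new user inputs get appended to this file/database
    dataset = load_dataset("collected_inputs.csv")   # placeholder loader
    network = train_network(dataset)                 # placeholder training routine
    # the serving code reloads this stateDict, so production picks up the update
    torch.save(network.state_dict(), "folderPath/myStateDict.pt")
    time.sleep(24 * 60 * 60)                         # wait 24 hours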
Conclusion
This is clearly not the most mass-scale production approach that you could employ, yet it is in my opinion the easiest to put in place.
You also know that the output that you will get will be the actual output of your neural network, without any distortions or errors in the values.
Have a great day !
save the trained model however you want (HDF5 or with pickle)
write the program to handle in production by loading the trained model
deploy the program on a distributed system for real-time computation, like Apache Storm, Flink, Alink, Apache SAMOA, etc.
if you feel you need to retrain the model based on feedback, then retrain it on a different cluster or parallel environment and observe the model accuracy; if it looks good, move the model to production (in the initial days you will need to retrain multiple times, and this will decrease as time goes on if your model is designed in a good way)
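A minimal sketch of the first two steps (using pickle; the file name is arbitrary and model stands for your trained model object; for a Keras model, model.save("model.h5") would be the HDF5 route instead):

import pickle

# save the trained model once training is done
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# load it back inside the serving program
with open("model.pkl", "rb") as f:
    model = pickle.load(f)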
I am trying to replicate the hyperparameter tuning example reported at this link, but I want to use XGBoost (via its scikit-learn API) instead of TensorFlow in my training application.
I am able to run multiple trials in a single job, one for each combination of hyperparameters. However, the Training output object returned by ML Engine does not include the finalMetric field reporting the metric information (see the differences in the pictures below).
What I get with the example of the link above:
Training output object with Tensorflow training app
What I get running my Training application with XGBoost:
Training output object with XGBoost training app
Is there a way for XGBoost to return training metrics to ML-Engine?
It seems that this process is automated for tensorflow, as specified in the documentation:
How Cloud ML Engine gets your metric
You may notice that there are no instructions in this documentation
for passing your hyperparameter metric to the Cloud ML Engine training
service. That's because the service monitors TensorFlow summary events
generated by your training application and retrieves the metric.
Is there a similar mechanism for XGBoost?
Now, I can always dump each metric result to a file at the end of each trial and then analyze them manually to select the best parameters. But, by doing so, am I losing the automated mechanism offered by Cloud ML Engine, especially concerning the "ALGORITHM_UNSPECIFIED" hyperparameter search algorithm?
i.e.,
ALGORITHM_UNSPECIFIED: [...] applies Bayesian optimization to search
the space of possible hyperparameter values, resulting in the most
effective technique for your set of hyperparameters.
Hyperparameter tuning support for XGBoost was implemented in a different way. We created the cloudml-hypertune Python package to help do it. We're still working on the public doc for it. In the meantime, you can refer to this staging sample to learn how to use it.
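For reference, a small sketch of how the package is typically used inside the training code to report the tuning metric back to the service (the metric tag is illustrative and validation_rmse is a placeholder for your computed metric):

import hypertune

# after evaluating the XGBoost model on the validation set
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag='rmse',   # must match the metric name in your tuning config
    metric_value=validation_rmse,
    global_step=1)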
Sara Robinson over at Google put together a good post on how to do this. Rather than regurgitate it and claim it as my own, I'll post it here for anyone else who comes across this question:
https://sararobinson.dev/2019/09/12/hyperparameter-tuning-xgboost.html
I am trying to make an app which will detect traffic signs in video frames. I am using YOLO on TensorFlow by following the steps from https://github.com/thtrieu/darkflow .
I need to know how I can train this model with my dataset of images of traffic signs.
If you're using Darkflow on Windows then you need to make some small adjustments to how you use Darkflow. If you clone the code and use it straight from the repository, then you need to place python in front of the given commands, as flow is a Python file.
e.g. python flow --imgdir sample_img/ --model cfg/yolo-tiny.cfg --load bin/yolo-tiny.weights --json
If you are installing using pip globally (not a bad idea) and you still want to use the flow utility from any directory just make sure you take the flow file with you.
To train, use the commands listed on the github page here: https://github.com/thtrieu/darkflow
If training on your own data you will need to take some extra steps as outlined here: https://github.com/thtrieu/darkflow#training-on-your-own-dataset
Your annotations need to be in the popular PASCAL VOC format which are a set of xml files including file information and the bounding box data.
Point your flow command at your new dataset and annotations to train.
The best data for you to practice with is the PASCAL VOC dataset. There are 2 folders you need to prepare for training: 1 folder with images and 1 folder with XML files (the annotation folder). Each image needs 1 XML file with the same name, containing all the basic information (object name, object position, ...). After that you only need to choose a predefined .cfg file in the cfg folder and run the following command:
flow --model cfg/yolo-new.cfg --train --dataset "path/to/images/folder" --annotation "path/to/annotation/folder"
Read more about the options supported by darkflow to further optimize the training process.
After spending too much time figuring out how to train a custom dataset for object detection, here is what worked for me.
Prerequisite :
1: Training environment: a system with at least a 4 GB GPU, or a pre-configured AWS / GCP cloud machine with CUDA 9 installed
2: Ubuntu 16.04 OS
3: Images of the object you want to detect. The images should not be too large, otherwise they will cause out-of-memory issues during training
4: A labelling tool; many are available, such as LabelImg or BBox-Label-Tool (the one I used), which is also a good one
I also tried the Python project dataset-generator, but the labelling results from the dataset generator were not effective in real-time scenarios
My suggestion for the training environment is to use an AWS machine rather than spending time on a local installation of CUDA and cuDNN; even if you are able to install CUDA locally, if you do not have a GPU with >= 4 GB of memory you will often not be able to train, as training will break due to out-of-memory issues
Solutions to train the dataset:
1: Train on your dataset with ssd_mobilenet_v2 using the TensorFlow Object Detection API
this training output can be used on both the Android and iOS platforms
2: Use Darknet to train on your dataset, which requires the PASCAL VOC labelling format; for that, LabelImg can do the labelling job very well
3: Retrain the weights that come out of Darknet with darkflow