Download and use torchvision dataset without internet connection - pytorch

I am new to PyTorch and would like to run some examples on a computer without an internet connection.
The tutorial page gives the following code, which assumes an internet connection.
I would appreciate advice on how to do the same on a machine with no internet connection.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor()
)
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor()
)

First you have to download the dataset files on a computer that has an internet connection, then copy them to the offline machine that has torch installed. The steps:
(A) Download the following files in the links below :
http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz
http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz
http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz
(B) Copy these files to the offline PC under torch/datasets/FashionMNIST/raw/ (torchvision looks for them under <root>/FashionMNIST/raw/)
(C) Extract them using: gzip -d *.gz
(D) Change your code to point at the dataset location (a scripted alternative to the manual download in step (A) is sketched after the code below):
training_data = datasets.FashionMNIST(
    root="torch/datasets/",
    train=True,
    download=True,  # the download is skipped when the raw files already exist
    transform=ToTensor()
)
test_data = datasets.FashionMNIST(
    root="torch/datasets/",
    train=False,
    download=True,
    transform=ToTensor()
)

Related

GPU not used on d3rlpy

I am new to using d3rlpy for offline RL training; it makes use of PyTorch. So I installed the CUDA 11.6 build as recommended in the PyTorch docs: pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116. I installed d3rlpy afterwards and ran the following sample code:
from d3rlpy.algos import BC,DDPG,CRR,PLAS,PLASWithPerturbation,TD3PlusBC,IQL
import d3rlpy
import numpy as np
import glob
import time
#models
continuous_models = {
    "BehaviorCloning": BC,
    "DeepDeterministicPolicyGradients": DDPG,
    "CriticRegularizedRegression": CRR,
    "PolicyLatentActionSpace": PLAS,
    "PolicyLatentActionSpacePerturbation": PLASWithPerturbation,
    "TwinDelayedPlusBehaviorCloning": TD3PlusBC,
    "ImplicitQLearning": IQL,
}
# load dataset; data_batch is the path to a *.h5 file created with d3rlpy
dataset = d3rlpy.dataset.MDPDataset.load(data_batch)
# preprocess
mean = np.mean(dataset.observations, axis=0, keepdims=True)
std = np.std(dataset.observations, axis=0, keepdims=True)
scaler = d3rlpy.preprocessing.StandardScaler(mean=mean, std=std)
# test models
for _model in continuous_models:
    the_model = continuous_models[_model](scaler=scaler)
    the_model.use_gpu = True
    the_model.build_with_dataset(dataset)
    the_model.fit(dataset=dataset.episodes,
                  n_steps_per_epoch=10800,
                  n_steps=54000,
                  logdir='./logs',
                  experiment_name=f"{_model}",
                  tensorboard_dir='logs',
                  save_interval=900,  # we don't want to save intermediate parameters
                  )
    # save model
    the_timestamp = int(time.time())
    the_model.save_model(f"./models/{_model}/{_model}_{the_timestamp}.pt")
The issue is that none of the models, despite being set with use_gpu = True, actually use the GPU. With a small PyTorch sample and torch.cuda.current_device() I can see that PyTorch is set up properly and detects the GPU. Any idea where to look to solve this issue? I am not sure this is a bug in d3rlpy, so I wouldn't create an issue on GitHub yet :)
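One thing worth checking (an assumption based on the d3rlpy 1.x API, where use_gpu is a constructor argument rather than a plain attribute): assigning the_model.use_gpu = True after construction may be ignored, so try passing it at construction time instead:

# hypothetical fix: pass use_gpu when the algorithm object is created,
# instead of assigning the attribute afterwards
the_model = continuous_models[_model](scaler=scaler, use_gpu=True)
the_model.build_with_dataset(dataset)  # the model should now be built on the GPU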

Keras Model Training with Azure Machine Learning

I have trained a multiclass-classification model locally using Keras. I am attempting to migrate this so that it can be trained and run in Azure Machine Learning Studio (AML).
I have provided the sections of code below which are used in AML - the Main AML Code and the script to train the model (EnsemblingModel.py). From the Main AML Code, the training script is invoked via src = ScriptRunConfig(...).
Please note that I have also uploaded the dataset which the model should be trained upon to AML directly and is titled 'test_data'.
However an error is returned when executing the line RunDetails(run).show() from the Main AML code section. The error is:
Error occurred: User program failed with FileNotFoundError: [Errno 2] No such file or directory: 'test_data'
This error message refers to the following line from the EnsemblingModel.py script:
dataframe = pd.read_csv("test_data", header=None)
I understand that the script is unable to load the data and I have therefore tried changing the code, for example:
dataframe = dataset.get_by_name(ws, name='test_data')
Which returned the following error:
Error occurred: User program failed with NameError: name 'dataset' is not defined
How do I change this so that the script is able to read and load the data so that training can commence? Maybe I am going about this completely the wrong way, so any advice is welcomed.
I have consulted the various Microsoft documentation as well as the GitHub Azure guides here, but there seem to be limited examples.
I am new to AML, so if anyone has any resources for using it alongside Keras, then that would also be appreciated.
Main AML Code:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import os
import azureml
from azureml.core import Experiment
from azureml.core import Environment
from azureml.core import Dataset
from azureml.core import Workspace, Run
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
      'Azure region: ' + ws.location,
      'Subscription id: ' + ws.subscription_id,
      'Resource group: ' + ws.resource_group, sep='\n')
from azureml.core import Experiment
script_folder = './TestingModel1'
os.makedirs(script_folder, exist_ok=True)
exp = Experiment(workspace=ws, name='TestingModel1')
dataset = Dataset.get_by_name(ws, name='test_data')
dataframe = dataset.to_pandas_dataframe()
df = dataframe.values
cluster_name = "cpu-cluster"
try:
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
    print('Found existing compute target')
except ComputeTargetException:
    print('Creating a new compute target...')
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
                                                           max_nodes=4)
    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
compute_targets = ws.compute_targets
for name, ct in compute_targets.items():
    print(name, ct.type, ct.provisioning_state)
from azureml.core import Environment
keras_env = Environment.from_conda_specification(name = 'keras-2.3.1', file_path = './conda_dependencies.yml')
# Specify a GPU base image
#keras_env.docker.enabled = True
keras_env.docker.base_image = 'mcr.microsoft.com/azureml/openmpi3.1.2-cuda10.0-cudnn7-ubuntu18.04'
from azureml.core import ScriptRunConfig
src = ScriptRunConfig(source_directory=script_folder,
                      script='EnsemblingModel.py',
                      compute_target=compute_target,
                      environment=keras_env)
run = exp.submit(src)
from azureml.widgets import RunDetails
RunDetails(run).show()
Ensembling Model Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#KerasLibraries
from keras import callbacks
from keras.layers.normalization import BatchNormalization
from keras.layers import Activation
from keras.layers import Dropout
from keras.optimizers import SGD
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
#tensorFlow
import tensorflow as tf
#SKLearnLibraries
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
from sklearn.pipeline import Pipeline
from azureml.core import Run
dataframe = pd.read_csv("test_data", header=None)       # original line that fails
dataframe = dataset.get_by_name(ws, name='test_data')   # attempted fix (raises NameError)
dataset = dataframe.values
X = dataset[:,0:22].astype(float)
y = dataset[:,22]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(y)
encoded_y = encoder.transform(y)
# convert integers to dummy variables (i.e. one hot encoded)
dummy_y = np_utils.to_categorical(encoded_y)
print(dummy_y.shape)
#print(X.shape)
#print(X)
import sys
np.set_printoptions(threshold=sys.maxsize)
dummy_y_new = dummy_y[0:42,:]
print(dummy_y_new)
#dataset
earlystopping = callbacks.EarlyStopping(monitor="val_loss",
                                        mode="min", patience=125,
                                        restore_best_weights=True)
#define Keras
model1 = Sequential()
model1.add(Dense(50, input_dim=22))
model1.add(BatchNormalization())
model1.add(Activation('relu'))
model1.add(Dropout(0.5,input_shape=(50,)))
model1.add(Dense(50))
model1.add(BatchNormalization())
model1.add(Activation('relu'))
model1.add(Dropout(0.5,input_shape=(50,)))
model1.add(Dense(8, activation='softmax'))
#compile the keras model
model1.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
# fit the keras model on the dataset
model1.fit(X, dummy_y, validation_split=0.25, epochs=10000, batch_size=100, verbose=1, callbacks=[earlystopping])
_, accuracy3 = model1.evaluate(X, dummy_y, verbose=0)
print('Accuracy: %.2f' % (accuracy3*100))
predict_dataset = tf.convert_to_tensor([
    [1,5,1,0.459,0.322,0.041,0.002,0.103,0.032,0.041,14,0.404,0.284,0.052,0.008,0.128,0.044,0.037,0.043,54,0,155],
])
predictions = model1(predict_dataset, training=False)
predictions2 = predictions.numpy()
print(predictions2)
print(type(predictions2))
I have resolved the above issue by adding an argument to the ScriptRunConfig code:
test_data_ds = Dataset.get_by_name(ws, name='test_data')
src = ScriptRunConfig(source_directory=script_folder,
                      script='EnsemblingModel.py',
                      # pass the dataset as an input with friendly name 'test_data'
                      arguments=['--input-data', test_data_ds.as_named_input('test_data')],
                      compute_target=compute_target,
                      environment=keras_env)
As well as the following to the modelling script itself:
import argparse
from azureml.core import Dataset, Run
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str)
args = parser.parse_args()
run = Run.get_context()
ws = run.experiment.workspace
# get the input dataset by ID
dataset = Dataset.get_by_id(ws, id=args.input_data)
# load the TabularDataset to pandas DataFrame
df = dataset.to_pandas_dataframe()
dataset = df.values
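As a side note (an assumption based on the AML SDK v1 named-input mechanism, not something from the original post): when a dataset is passed with as_named_input, it should also be reachable inside the script without parsing the argument yourself:

from azureml.core import Run

run = Run.get_context()
# named inputs registered via as_named_input('test_data') are exposed here
dataset = run.input_datasets['test_data']
df = dataset.to_pandas_dataframe()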
For anyone curious, more information can be found here:

Loading trained model to make prediction of single image

I have trained a ResNet50 model on the Intel Image multiclass classification task. The task is to predict whether an image is a building, a street, a glacier, etc. The model is successfully trained and able to make predictions. I have saved the model and am trying to use the saved model on a new image.
Here is the code on training
import os
import torch
import tarfile
import torchvision
import torch.nn as nn
from PIL import Image
import matplotlib.pyplot as plt
import torch.nn.functional as F
from torchvision import transforms
from torchvision.utils import make_grid
from torch.utils.data import random_split
from torchvision.transforms import ToTensor
from torchvision.datasets import ImageFolder
from torch.utils.data import Dataset, DataLoader
from torchvision.datasets.utils import download_url
import PIL
import PIL.Image
import numpy as np
transform_train = transforms.Compose([
    transforms.Resize((150,150)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((.5,.5,.5),(.5,.5,.5))
])
transform_test = transforms.Compose([
    transforms.Resize((150,150)),
    transforms.ToTensor(),
    transforms.Normalize((.5,.5,.5),(.5,.5,.5))
])
...
torch.save(model2.state_dict(),'/content/drive/MyDrive/saved_model/model_resnet.pth')
When I call the model in another file I use a similar image transformation; however, it gives me an error. Here is the code and the error:
model = torch.load('/content/drive/MyDrive/saved_model/model_resnet.pth')
image=Image.open(Path('/content/drive/MyDrive/images/seg_pred/seg_pred/10004.jpg'))
transform_train = transforms.Compose([
    transforms.Resize((150,150)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((.5,.5,.5),(.5,.5,.5))
])
input = transform_train(image)
#input = input.view(1, 3, 150,150)
output = model(input)
prediction = int(torch.max(output.data, 1)[1].numpy())
print(prediction)
The error that gives me is
TypeError: 'collections.OrderedDict' object is not callable
My PyTorch version is:
1.9.0+cu102
You need to create the model structure first; it should match how model2 was created in your training code, something like:
model = resnet()  # placeholder: build the same architecture as model2
Then load the saved state dict:
model.load_state_dict(torch.load('/content/drive/MyDrive/saved_model/model_resnet.pth'))
model.eval()
Ref:
https://pytorch.org/tutorials/beginner/saving_loading_models.html
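Putting it together, a minimal sketch for single-image prediction — assuming (hypothetically, since the training code for model2 isn't shown) that it was a torchvision resnet50 with its final layer replaced for the 6 Intel Image classes:

import torch
import torchvision
import torch.nn as nn
from PIL import Image
from torchvision import transforms

# rebuild the same architecture used in training (an assumption here)
model = torchvision.models.resnet50()
model.fc = nn.Linear(model.fc.in_features, 6)  # 6 classes in the Intel Image dataset
model.load_state_dict(torch.load('/content/drive/MyDrive/saved_model/model_resnet.pth'))
model.eval()

# use the test-time transform (no random flips) for prediction
transform_test = transforms.Compose([
    transforms.Resize((150, 150)),
    transforms.ToTensor(),
    transforms.Normalize((.5, .5, .5), (.5, .5, .5)),
])
image = Image.open('/content/drive/MyDrive/images/seg_pred/seg_pred/10004.jpg')
input = transform_test(image).unsqueeze(0)  # add the batch dimension
with torch.no_grad():
    output = model(input)
prediction = int(output.argmax(dim=1))
print(prediction)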
Based on your question it is clear that you want to predict on a new image. But you are applying augmentation transforms (random flips) to the image, which is not the proper way to preprocess it for prediction.
As the code link you provided contains plenty of code, you can reuse it in your own code.
I am sharing fast.ai and simple TensorFlow code with which you can predict on a new image and see the result.
img = open_image('any_image.jpg')
print(learn.predict(img)[0])
OR you can try this function:
import matplotlib.pyplot as plt  # visualization
import matplotlib.image as mpimg
import tensorflow as tf  # deep learning framework
import pathlib

def pred_plot(file, model, class_names=class_names, image_size=(150, 150)):
    img = tf.io.read_file(file)
    img = tf.io.decode_image(img, channels=3)
    img = tf.image.resize(img, size=image_size)
    pred_probs = model.predict(tf.expand_dims(img, axis=0))
    pred_class = class_names[pred_probs.argmax()]
    plt.imshow(img / 255.)  # scale pixel values to [0, 1] for display
    plt.title(f'Pred: {pred_class}')
    plt.axis(False)
Pass any image and you will get the prediction with visualization.
url = 'dummy.jpg'
pred_plot(url, model=model_2, class_names=class_names)

Google Colab stuck on downloading Fashion-MNIST dataset

I have tried running my code in Google Colab for Fashion-MNIST, but it gets stuck on downloading the dataset. I also switched between the hardware accelerators, but still nothing. Is there any workaround for this problem?
For Google Colab
At the top, run !pip install mnist, then import mnist.
Then simply store the images and labels:
train_images = mnist.train_images()
train_labels = mnist.train_labels()
test_images = mnist.test_images()
test_labels = mnist.test_labels()
That's it!!!
You can download it from the GitHub repository.
Put the downloaded files (from the README links) in a directory under your current path called data/fashion/, then you can use their loader.
def load_mnist(path, kind='train'):
    """Load MNIST data from `path`"""
    import os
    import gzip
    import numpy as np
    labels_path = os.path.join(path, '%s-labels-idx1-ubyte.gz' % kind)
    images_path = os.path.join(path, '%s-images-idx3-ubyte.gz' % kind)
    with gzip.open(labels_path, 'rb') as lbpath:
        labels = np.frombuffer(lbpath.read(), dtype=np.uint8,
                               offset=8)
    with gzip.open(images_path, 'rb') as imgpath:
        images = np.frombuffer(imgpath.read(), dtype=np.uint8,
                               offset=16).reshape(len(labels), 784)
    return images, labels

X_train, y_train = load_mnist('data/fashion', kind='train')
X_test, y_test = load_mnist('data/fashion', kind='t10k')
The other option would be to use the torchvision FMNIST dataset.
Edit
You can also use:
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/fashion', source_url='http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/')
Edit 2
Here is the code for downloading the files (it could be improved with some error handling):
import os
import requests

path = 'data/fashion'

def download_fmnist(path):
    DEFAULT_SOURCE_URL = 'http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/'
    files = dict(
        TRAIN_IMAGES='train-images-idx3-ubyte.gz',
        TRAIN_LABELS='train-labels-idx1-ubyte.gz',
        TEST_IMAGES='t10k-images-idx3-ubyte.gz',
        TEST_LABELS='t10k-labels-idx1-ubyte.gz')
    if not os.path.exists(path):
        os.makedirs(path)  # makedirs, so the nested 'data/fashion' path is created
    for f in files:
        filepath = os.path.join(path, files[f])
        if not os.path.exists(filepath):
            url = DEFAULT_SOURCE_URL + files[f]
            r = requests.get(url, allow_redirects=True)
            open(filepath, 'wb').write(r.content)
            print('Successfully downloaded', f)

download_fmnist(path)
The command keras.datasets.fashion_mnist.load_data() returns a tuple of NumPy arrays: (xtrain, ytrain) and (xtest, ytest).
The dataset won't be downloaded to your local storage this way, which is why the command cd fashion-mnist/ raises an error: no directory was created. The fashion-mnist dataset was loaded correctly into (xtrain, ytrain) and (xtest, ytest) in your code.
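For completeness, a minimal sketch of that Keras route (assuming a standard TensorFlow install, as in Colab):

import tensorflow as tf

# downloads to the Keras cache (~/.keras/datasets), not the working directory
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)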

Running Python code consumes GPU. Why?

This is my Python code for a model prediction.
import csv
import numpy as np
np.random.seed(1)
from keras.models import load_model
import tensorflow as tf
import pandas as pd
import time
from timeit import default_timer as timer  # missing import needed for timer() below

output_location = 'Desktop/result/'
# load model
global graph
graph = tf.get_default_graph()
model = load_model("newmodel.h5")

def Myfun(i):  # take i as a parameter instead of relying on the loop's global
    ecg = pd.read_csv('/Downloads/model.csv')
    X = ecg.iloc[:, 1:42].values
    y = ecg.iloc[:, 42].values
    from sklearn.preprocessing import LabelEncoder
    encoder = LabelEncoder()
    y1 = encoder.fit_transform(y)
    Y = pd.get_dummies(y1).values
    from sklearn.model_selection import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
    t1 = timer()
    with graph.as_default():
        prediction = model.predict(X_test[0:1])
    diff = timer() - t1
    class_labels_predicted = np.argmax(prediction)
    filename1 = str(i) + "output.txt"
    newfile = output_location + filename1
    with open(str(newfile), 'w', encoding='utf-8') as file:
        file.write(" takes %f seconds time. predictedclass is %s \n" % (diff, class_labels_predicted))
    return class_labels_predicted

for i in range(1, 100):
    Myfun(i)
My system GPU has 2 GB of memory. While running this code, nvidia-smi -l 2 shows it consumes 1.8 GB of GPU memory, and 100 output files are produced. Soon after the task completes, GPU utilisation drops back to 500 MB. I have the GPU versions of TensorFlow and Keras installed on my system. My question is:
Why does this code run on the GPU? Does the complete code use the GPU, or is it only the imported libraries such as keras-gpu and tensorflow-gpu?
As I can see from your code, you are using Keras and TensorFlow. From the Keras F.A.Q.:
If you are running on the TensorFlow or CNTK backends, your code will automatically run on GPU if any available GPU is detected.
You can force Keras to run on CPU only by hiding the GPU from CUDA:
import os
# must run before TensorFlow/Keras is imported
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = ""