Multiprocessing ResNet Feature Extraction with Image URLs - keras

I have a simple function that takes an image URL, extracts features from it with ResNet in Keras, and then hands those features off to an XGBoost model loaded from a .pkl file.
# imports inferred from the calls below
import xgboost as xgb
from numpy import array, expand_dims
from pandas import DataFrame
from PIL import Image
from skimage import io
from keras.applications.resnet50 import preprocess_input
from keras.preprocessing import image

def classify(img, resnet_model, loaded_model):
    try:
        images = io.imread(img.strip())
        images = Image.fromarray(images)
        test_image = images.resize((224, 224))  # resize the PIL image, not the URL string
        test_image = image.img_to_array(test_image)
        test_image = expand_dims(test_image, axis=0)
        img_data = preprocess_input(test_image)
        image_features = resnet_model.predict(img_data)
        image_features_array = array(image_features)
        predicted_image = loaded_model.predict(xgb.DMatrix(DataFrame(image_features_array)))
    except Exception:
        predicted_image = 'Broken URL'
    return predicted_image
Currently I just loop through a list of image URLs and it works fine, but I will need it to perform much faster. The code itself may not be the most efficient yet, but I am mostly concerned with multiprocessing. My attempts either hang or immediately return an empty list.
There is a similar question posted years ago, but the answers were not very satisfying and involved holding a batch of image files locally. I would prefer to just have one worker making the request for the image and then predicting that image.
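One pattern worth noting (a sketch, not a drop-in solution): Keras/TensorFlow models generally don't pickle cleanly, so passing resnet_model to worker processes is a common cause of the hangs described above. Instead, each worker can load its own copies once in a pool initializer and then consume URLs from the pool. The ResNet50 constructor arguments and the model.pkl path below are assumptions; adjust them to your actual setup.
import multiprocessing as mp
import pickle

# per-process globals, populated once per worker by the initializer
_resnet_model = None
_loaded_model = None

def _init_worker(pkl_path):
    # build/load the models inside the worker instead of pickling
    # them from the parent process
    global _resnet_model, _loaded_model
    from keras.applications.resnet50 import ResNet50
    _resnet_model = ResNet50(weights='imagenet')  # assumed; use your real feature extractor
    with open(pkl_path, 'rb') as f:
        _loaded_model = pickle.load(f)

def _classify_url(url):
    return classify(url, _resnet_model, _loaded_model)

if __name__ == '__main__':
    urls = [...]  # your list of image URLs
    with mp.Pool(processes=4, initializer=_init_worker,
                 initargs=('model.pkl',)) as pool:
        results = pool.map(_classify_url, urls)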

Related

Can I disable CUDA temporarily in PyTorch?

I want to do some timing comparisons between CPU and GPU, as well as some profiling, and would like to know if there's a way to tell PyTorch not to use the GPU and instead use only the CPU. I realize I could install a separate CPU-only PyTorch, but I'm hoping there's an easier way.
Before running your code, run this shell command to tell torch that there are no GPUs:
export CUDA_VISIBLE_DEVICES=""
Conversely, this will tell it to use only one GPU (the one with id 0), and so on:
export CUDA_VISIBLE_DEVICES="0"
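With the variable exported, a quick check from Python confirms whether torch still sees a GPU:
import torch
print(torch.cuda.is_available())  # False when CUDA_VISIBLE_DEVICES=""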
I just wanted to add that it is also possible to do so within the PyTorch code:
Here is a small example taken from the PyTorch Migration Guide for 0.4.0:
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
...
# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
I think the example is pretty self-explanatory, but if there are any questions just ask! One big advantage of this syntax is that you can write code which runs on the CPU if no GPU is available, and on the GPU otherwise, without changing a single line.
Instead of using the if-statement with torch.cuda.is_available() you can also just set the device to CPU like this:
device = torch.device("cpu")
Furthermore, you can create tensors on the desired device using the device argument:
mytensor = torch.rand(5, 5, device=device)
This will create a tensor directly on the device you specified previously.
I want to point out that with this syntax you can switch not only between CPU and GPU, but also between different GPUs (e.g. torch.device("cuda:1") for the second GPU).
I hope this is helpful!
The simplest way using Python is:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
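Note that this only takes effect if it runs before CUDA is initialized; setting it at the very top of the script, before importing torch, is the safe order. A minimal sketch:
# set before torch initializes CUDA
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch
print(torch.cuda.is_available())  # False: torch now sees no GPUs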
There are multiple ways to force CPU use:
Set default tensor type:
torch.set_default_tensor_type(torch.FloatTensor)
Set device and consistently reference when creating tensors:
(with this you can easily switch between GPU and CPU)
device = 'cpu'
# ...
x = torch.rand(2, 10, device=device)
Hide GPU from view:
import os
os.environ["CUDA_VISIBLE_DEVICES"]=""
General
As previous answers showed, you can make PyTorch run on the CPU using:
device = torch.device("cpu")
Comparing Trained Models
I would like to add how you can load a previously trained model on the CPU (examples taken from the PyTorch docs).
Note: make sure that all the data fed into the model is also on the CPU.
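For example, moving the inputs right before the forward pass (inputs here stands in for whatever tensor you feed the model):
inputs = inputs.to(torch.device("cpu"))
output = model(inputs)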
Recommended loading
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location=torch.device("cpu")))
Loading entire model
model = torch.load(PATH, map_location=torch.device("cpu"))
This is a real-world example: the original function with GPU, versus the new function with CPU.
Source: https://github.com/zllrunning/face-parsing.PyTorch/blob/master/test.py
In my case I have edited these 4 lines of code:
# totally new line of code
device = torch.device("cpu")

# net.cuda()
net.to(device)

# net.load_state_dict(torch.load(cp))
net.load_state_dict(torch.load(cp, map_location=torch.device('cpu')))

# img = img.cuda()
img = img.to(device)
# new function with CPU
def evaluate(image_path='./imgs/116.jpg', cp='cp/79999_iter.pth'):
    device = torch.device("cpu")
    n_classes = 19
    net = BiSeNet(n_classes=n_classes)
    # net.cuda()
    net.to(device)
    # net.load_state_dict(torch.load(cp))
    net.load_state_dict(torch.load(cp, map_location=torch.device('cpu')))
    net.eval()
    to_tensor = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ])
    with torch.no_grad():
        img = Image.open(image_path)
        image = img.resize((512, 512), Image.BILINEAR)
        img = to_tensor(image)
        img = torch.unsqueeze(img, 0)
        # img = img.cuda()
        img = img.to(device)
        out = net(img)[0]
        parsing = out.squeeze(0).cpu().numpy().argmax(0)
        return parsing
# original function with GPU
def evaluate(image_path='./imgs/116.jpg', cp='cp/79999_iter.pth'):
    n_classes = 19
    net = BiSeNet(n_classes=n_classes)
    net.cuda()
    net.load_state_dict(torch.load(cp))
    net.eval()
    to_tensor = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ])
    with torch.no_grad():
        img = Image.open(image_path)
        image = img.resize((512, 512), Image.BILINEAR)
        img = to_tensor(image)
        img = torch.unsqueeze(img, 0)
        img = img.cuda()
        out = net(img)[0]
        parsing = out.squeeze(0).cpu().numpy().argmax(0)
        return parsing

GAN Model Summary Pytorch using TensorBoard?

Is there a way I can visualize the complete training loop for a GAN architecture in TensorBoard using PyTorch? I think it's possible using TF, but I am having a hard time figuring it out with PyTorch.
You can use TensorboardX for this.
You can make use of SummaryWriter from TensorboardX to create an event file in a given directory and add summaries and events to it.
The code below is an example that you can use, but you have to add in the loss values, the ground-truth images and the generated images yourself. I commented where they would have to go.
import logging

import numpy as np
import torchvision.utils as vutils
from tensorboardX import SummaryWriter

log = logging.getLogger(__name__)

REPORT_EVERY_ITER = 100
SAVE_IMAGE_EVERY_ITER = 1000

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)  # so log.info output is visible
    writer = SummaryWriter()
    gen_losses = []
    dis_losses = []
    iter_no = 0
    # looping over the batches in the environment
    for batch_v in iterate_batches(envs):
        # getting the outputs
        # getting the generator's loss
        # getting the discriminator's loss
        iter_no += 1
        # save the loss values for both the generator and the discriminator every 100 steps
        if iter_no % REPORT_EVERY_ITER == 0:
            log.info(
                "Iter %d: gen_loss=%.3e, dis_loss=%.3e",
                iter_no,
                np.mean(gen_losses),
                np.mean(dis_losses),
            )
            writer.add_scalar("gen_loss", np.mean(gen_losses), iter_no)
            writer.add_scalar("dis_loss", np.mean(dis_losses), iter_no)
            gen_losses = []
            dis_losses = []
        # save the images produced by the generator and the ground truth,
        # every 1000 iterations
        if iter_no % SAVE_IMAGE_EVERY_ITER == 0:
            # save the generated images from the generator
            writer.add_image(
                "fake",
                vutils.make_grid(gen_output_v.data[:64], normalize=True),
                iter_no,
            )
            # add the ground-truth images here
            # these will be the same throughout the cycle
            writer.add_image(
                "real",
                vutils.make_grid(batch_v.data[:64], normalize=True),
                iter_no,
            )
To view the results, just run tensorboard --logdir runs in the same directory where you ran the model training (runs contains the results from the training). A link will be shown which you can open to view plots such as the ones below. If you want to run TensorBoard on a remote server, add the --bind_all flag to the command line so you can access it from outside.
[Screenshot: viewing the generated images]
[Screenshot: viewing the loss values]

Load several Images without label in keras cnn

I have several .jpeg images with different names that I want to load into a CNN in a Jupyter notebook to have them classified. The only way I found was:
import numpy as np
from tensorflow.keras.preprocessing import image

test_image = image.load_img("name_of_picture.jpeg", target_size=(64, 64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)
result = cnn.predict(test_image)
All the other things I found in the Keras API, like tf.keras.preprocessing.image_dataset_from_directory(), seem to only work on labeled data. Sadly I can't "simply" iterate over the names of the pictures, as they are named differently. Is there a way to predict all of them at once without naming every single picture?
Thanks for your help,
Nick
The solution: tf.keras.preprocessing.image_dataset_from_directory can be updated to return both the dataset and the image paths, as explained here -> https://stackoverflow.com/a/63725072/4994352
There are multiple ways. For larger data it is useful to use a tf.data.Dataset, as it can be tweaked for performance quite easily. I will give you the non-performance-optimized code. Replace <YOUR PATH INCL. REGEX> with a path glob like ../input/pokemon-images-and-types/images/*/*.
import tensorflow as tf
from tensorflow.data.experimental import AUTOTUNE

def load(file_path):
    img = tf.io.read_file(file_path)
    img = tf.image.decode_jpeg(img, channels=3)
    ...  # do some preprocessing like resizing if necessary
    return img

list_ds = tf.data.Dataset.list_files(str('<YOUR PATH INCL. REGEX>'), shuffle=True)  # get all images from subfolders
train_dataset = list_ds.take(-1)
# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
train_dataset = train_dataset.map(load, num_parallel_calls=AUTOTUNE)
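From there, assuming your trained model is the cnn from the question and that load resizes each image to the input shape the model expects, you can batch the dataset and predict on all images at once (a sketch, untested against your setup):
train_dataset = train_dataset.batch(32).prefetch(AUTOTUNE)
predictions = cnn.predict(train_dataset)  # one prediction per image, in dataset order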

How to save a encoded image using Pickle

What I am doing here is encoding an image and then adding it, together with the path of the original image, into the database variable, like this:
database.append([path, encoding])
I then want to save this database variable into a pickle file for use in other programs. How would I go about doing that? I have had no luck with saving the files correctly yet.
Any help would be appreciated.
Here is the method that I am using to generate the variables I want to save
def embedDatabase(imagePath, model, metadata):
    '''
    Go through the database and get the embedding for each image.
    Embeddings are calculated by feeding the aligned and scaled
    images into the pre-trained network.
    '''
    database = []
    embedded = np.zeros((metadata.shape[0], 128))
    print("Embedding")
    for i, m in enumerate(metadata):
        img = imgUtil.loadImage(m.image_path())
        _, img = imgUtil.alignImage(img)
        if img is not None:
            # scale RGB values to the interval [0, 1]
            img = (img / 255.).astype(np.float32)
            # get the embedding vector for the image
            embedded[i] = model.predict(np.expand_dims(img, axis=0))[0]
            database.append([m.image_path(), embedded[i]])
    # return the embedding array and the [path, embedding] database
    return embedded, database
And this is the load image method
def loadImage(path):
    img = cv2.imread(path, 1)
    if img is not None:
        # OpenCV loads images with color channels
        # in BGR order, so we need to reverse them
        return img[..., ::-1]
    else:
        print("There is no image available")
        return None
Figured it out.
import pickle

with open("database.pickle", "wb") as f:
    pickle.dump(database, f, pickle.HIGHEST_PROTOCOL)
For some reason I needed the pickle.HIGHEST_PROTOCOL argument.
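For completeness, loading it back in another program is the mirror image (assuming the same database.pickle filename):
import pickle

with open("database.pickle", "rb") as f:
    database = pickle.load(f)  # list of [path, encoding] pairs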

Train your own image with tensorflow?

I have one image (I don't have a dataset). I want to train a model in TensorFlow such that I can use that model to recognize the image fast.
I have implemented one such thing, but it doesn't work:
import tensorflow as tf
filenames = ['pic.jpg']
# step 2
filename_queue = tf.train.string_input_producer(filenames)
# step 3: read, decode and resize images
reader = tf.WholeFileReader()
filename, content = reader.read(filename_queue)
image = tf.image.decode_jpeg(content, channels=3)
image = tf.cast(image, tf.float32)
resized_image = tf.image.resize_images(image, [224, 224])
# step 4: Batching
image_batch = tf.train.batch([resized_image], batch_size=8)
Also, how is Vuforia able to recognize an image so fast with only one reference image? I want a similar implementation in TensorFlow.
This is not how machine learning and deep learning work. You can't just grab one element and build a model which explains this one element. If you check a few NN tutorials, you will see that in order to train a reasonable model, people use thousands or even millions of data points.
