I am trying to implement model parallelism in Keras.
I am using Keras 2.2.4 and TensorFlow 1.13.1.
The rough structure of my code is:
import tensorflow as tf
import keras
from keras.layers import Input, concatenate
from keras.models import Model

def model_definition():
    input0 = Input(shape=(None, None))
    input1 = Input(shape=(None, None))

    with tf.Session(config=tf.ConfigProto(allow_soft_placement=False, log_device_placement=True)):
        model = get_some_CNN_model()

        with tf.device(tf.DeviceSpec(device_type="GPU", device_index=0)):
            op0 = model(input0)

        with tf.device(tf.DeviceSpec(device_type="GPU", device_index=1)):
            op1 = model(input1)

        with tf.device(tf.DeviceSpec(device_type="CPU", device_index=0)):
            concatenated_ops = concatenate([op0, op1], axis=-1, name='check_conc1')

    mixmodel = Model(inputs=[input0, input1], outputs=concatenated_ops)
    return mixmodel

mymodel = model_definition()
mymodel.fit_generator()
Expected result: While training, computations for op0 and op1 should be done on gpu0 and gpu1, respectively.
Problem 1: When 2 GPUs are available, training works fine, and nvidia-smi shows that both GPUs are being used. However, I am not sure whether each GPU is doing its intended work. How can I confirm that?
Even though I set log_device_placement to True, I don't see any task allocated to GPU 1.
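For reference, a rough TF 1.x sketch for checking placement is to list the device assigned to each op in the default graph (an empty device string means no explicit placement):

import tensorflow as tf

# Print each op in the default graph together with its assigned device.
for op in tf.get_default_graph().get_operations():
    print(op.name, '->', op.device)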
Problem 2: When I run this code on a machine with only 1 GPU available, it still runs fine. I expected it to raise an error, because GPU 1 is not available.
The example shown here behaves as expected: it does not have problem 2, i.e. on a single GPU it raises an error.
So I think some manipulation is happening inside Keras.
I have also tried using import tensorflow.python.keras instead of import keras, in case it was causing any conflict. However, both problems persist.
I would appreciate any clue about this issue. Thank you.
Related
I want to do some timing comparisons between CPU & GPU, as well as some profiling, and would like to know if there's a way to tell PyTorch not to use the GPU and instead use only the CPU. I realize I could install another CPU-only PyTorch, but I'm hoping there's an easier way.
Before running your code, run this shell command to tell torch that there are no GPUs:
export CUDA_VISIBLE_DEVICES=""
This will tell it to use only one GPU (the one with id 0) and so on:
export CUDA_VISIBLE_DEVICES="0"
I just wanted to add that it is also possible to do this within the PyTorch code:
Here is a small example taken from the PyTorch Migration Guide for 0.4.0:
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
...
# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
I think the example is pretty self-explanatory, but if there are any questions, just ask! One big advantage of using this syntax, as in the example above, is that you can write code that runs on the CPU if no GPU is available, but also on the GPU, without changing a single line.
Instead of using the if-statement with torch.cuda.is_available() you can also just set the device to CPU like this:
device = torch.device("cpu")
Further you can create tensors on the desired device using the device flag:
mytensor = torch.rand(5, 5, device=device)
This will create a tensor directly on the device you specified previously.
I want to point out that with this syntax you can not only switch between CPU and GPU, but also between different GPUs.
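For example, a minimal sketch (assuming a machine with at least two GPUs):

import torch

# Pick the second GPU if one exists, otherwise fall back to the CPU.
device = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cpu")
mytensor = torch.rand(5, 5, device=device)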
I hope this is helpful!
The simplest way using Python is:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
There are multiple ways to force CPU use:
Set default tensor type:
torch.set_default_tensor_type(torch.FloatTensor)
Set the device and reference it consistently when creating tensors (with this you can easily switch between GPU and CPU):
device = 'cpu'
# ...
x = torch.rand(2, 10, device=device)
Hide GPU from view:
import os
os.environ["CUDA_VISIBLE_DEVICES"]=""
General
As previous answers showed, you can make your PyTorch code run on the CPU using:
device = torch.device("cpu")
Comparing Trained Models
I would like to add how you can load a previously trained model on the CPU (examples taken from the PyTorch docs).
Note: make sure that all the data fed into the model is also on the CPU.
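For example, a minimal sketch (data and model stand in for your own objects):

device = torch.device("cpu")
data = data.to(device)    # move the input batch to the CPU
output = model(data)      # the model itself must also be on the CPU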
Recommended loading
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location=torch.device("cpu")))
Loading entire model
model = torch.load(PATH, map_location=torch.device("cpu"))
This is a real-world example: the original function using the GPU versus the new function using the CPU.
Source: https://github.com/zllrunning/face-parsing.PyTorch/blob/master/test.py
In my case, I edited these 4 lines of code:
#totally new line of code
device=torch.device("cpu")
#net.cuda()
net.to(device)
#net.load_state_dict(torch.load(cp))
net.load_state_dict(torch.load(cp, map_location=torch.device('cpu')))
#img = img.cuda()
img = img.to(device)
# new function with CPU
def evaluate(image_path='./imgs/116.jpg', cp='cp/79999_iter.pth'):
    device = torch.device("cpu")
    n_classes = 19
    net = BiSeNet(n_classes=n_classes)
    # net.cuda()
    net.to(device)
    # net.load_state_dict(torch.load(cp))
    net.load_state_dict(torch.load(cp, map_location=torch.device('cpu')))
    net.eval()
    to_tensor = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ])
    with torch.no_grad():
        img = Image.open(image_path)
        image = img.resize((512, 512), Image.BILINEAR)
        img = to_tensor(image)
        img = torch.unsqueeze(img, 0)
        # img = img.cuda()
        img = img.to(device)
        out = net(img)[0]
        parsing = out.squeeze(0).cpu().numpy().argmax(0)
        return parsing
# original function with GPU
def evaluate(image_path='./imgs/116.jpg', cp='cp/79999_iter.pth'):
    n_classes = 19
    net = BiSeNet(n_classes=n_classes)
    net.cuda()
    net.load_state_dict(torch.load(cp))
    net.eval()
    to_tensor = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
    ])
    with torch.no_grad():
        img = Image.open(image_path)
        image = img.resize((512, 512), Image.BILINEAR)
        img = to_tensor(image)
        img = torch.unsqueeze(img, 0)
        img = img.cuda()
        out = net(img)[0]
        parsing = out.squeeze(0).cpu().numpy().argmax(0)
        return parsing
I am trying to convert a pre-trained torch model to ONNX, but I receive the following error:
RuntimeError: step!=1 is currently not supported
I'm trying this on a pre-trained colorization model: https://github.com/richzhang/colorization
Here is the code I ran in Google Colab:
!git clone https://github.com/richzhang/colorization.git
cd colorization/

import torch
import colorizers

model = colorizer_siggraph17 = colorizers.siggraph17(pretrained=True).eval()

input_names = ["input"]
output_names = ["output"]

dummy_input = torch.randn(1, 1, 256, 256, device='cpu')

torch.onnx.export(model, dummy_input, "test_converted_model.onnx", verbose=True,
                  input_names=input_names, output_names=output_names)
I appreciate any help :)
UPDATE 1: Proko's suggestion solved the ONNX export issue. Now I have a new, possibly related problem when I try to convert the ONNX model to TensorRT. I get the following error:
[TensorRT] ERROR: Network must have at least one output
Here is the code I used:
import torch
import pycuda.driver as cuda
import pycuda.autoinit
import tensorrt as trt
import onnx

TRT_LOGGER = trt.Logger()


def build_engine(onnx_file_path):
    # initialize TensorRT engine and parse ONNX model
    builder = trt.Builder(TRT_LOGGER)
    builder.max_workspace_size = 1 << 25
    builder.max_batch_size = 1
    if builder.platform_has_fast_fp16:
        builder.fp16_mode = True
    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)

    # parse ONNX
    with open(onnx_file_path, 'rb') as model:
        print('Beginning ONNX file parsing')
        parser.parse(model.read())
    print('Completed parsing of ONNX file')

    # generate TensorRT engine optimized for the target platform
    print('Building an engine...')
    engine = builder.build_cuda_engine(network)
    context = engine.create_execution_context()
    print("Completed creating Engine")

    return engine, context


ONNX_FILE_PATH = 'siggraph17.onnx'  # Exported using the code above
engine, _ = build_engine(ONNX_FILE_PATH)
I tried to force the build_engine function to use the output of the network by:
network.mark_output(network.get_layer(network.num_layers-1).get_output(0))
but it did not work.
I appreciate any help!
As I mentioned in a comment, this is because slicing in torch.onnx supports only step = 1, but there is 2-step slicing in the model:
self.model2(conv1_2[:,:,::2,::2])
Your only option for now is to rewrite the slicing as some other ops. You can do this by using range and reshape to obtain the proper indices. Consider the following "step-less arange" function (I hope it is generic enough for anyone with a similar problem):
def sla(x, step):
    diff = x % step
    x += (diff > 0) * (step - diff)  # add length to be able to reshape properly
    return torch.arange(x).reshape((-1, step))[:, 0]
usage:
>>> sla(11, 3)
tensor([0, 3, 6, 9])
Now you can replace every slice like this:
conv2_2 = self.model2(conv1_2[:,:,self.sla(conv1_2.shape[2], 2),:][:,:,:, self.sla(conv1_2.shape[3], 2)])
NOTE: you should optimize this. The indices are calculated on every call, so it might be wise to pre-compute them.
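For example, a minimal sketch of pre-computing the indices once (assuming a fixed 256x256 input, as in the dummy input above):

# Pre-compute the step-2 indices once and reuse them, instead of
# recomputing them inside every forward pass.
rows = sla(256, 2)  # indices along the height dimension
cols = sla(256, 2)  # indices along the width dimension

def downsample_by_2(t):
    # equivalent to t[:, :, ::2, ::2], but expressible in ONNX
    return t[:, :, rows, :][:, :, :, cols]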
I have tested it with my fork of the repo and I was able to save the model:
https://github.com/prokotg/colorization
What worked for me was to add opset_version=11 to torch.onnx.export.
I first tried opset_version=10, but the API suggested 11, and that works.
So your call should be:
torch.onnx.export(model, dummy_input, "test_converted_model.onnx", verbose=True, opset_version=11,
                  input_names=input_names, output_names=output_names)
I'm a beginner in deep learning and Python.
I tried to run Keras R-FCN in Google Colab and I got an error like this:
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Error while reading resource variable _AnonymousVar404 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar404/N10tensorflow3VarE does not exist.
[[node regr_vote/crop_to_bounding_box_7/stack/ReadVariableOp_1 (defined at usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3009) ]]
(1) Cancelled: Function was cancelled before it was started
0 successful operations.
0 derived errors ignored. [Op:__inference_keras_scratch_graph_16906]
I think the crop_to_bounding_box function has something that is uninitialized.
This is the code that calls crop_to_bounding_box:
# position-sensitive ROI pooling + classify
score_map_bins = []
for channel_step in range(self.k * self.k):
    bin_x = K.variable(int(channel_step % self.k) * self.pool_shape, dtype='int32')
    print(bin_x)
    bin_y = K.variable(int(channel_step / self.k) * self.pool_shape, dtype='int32')
    channel_indices = K.variable(
        list(range(channel_step * self.channel_num, (channel_step + 1) * self.channel_num)),
        dtype='int32')
    croped = tf.image.crop_to_bounding_box(
        tf.gather(pooled, indices=channel_indices, axis=-1),
        bin_y, bin_x, self.pool_shape, self.pool_shape)
    # [pool_shape, pool_shape, channel_num] ==> [1,1,channel_num] ==> [1, channel_num]
    croped_mean = K.pool2d(croped, (self.pool_shape, self.pool_shape), strides=(1, 1),
                           padding='valid', data_format="channels_last", pool_mode='avg')
    # [batch * num_rois, 1,1,channel_num] ==> [batch * num_rois, 1, channel_num]
    croped_mean = K.squeeze(croped_mean, axis=1)
    score_map_bins.append(croped_mean)
I'm using tensorflow-GPU v2.1.0 and Keras 2.3.1
UPDATE:
This is probably because I create K.variable inside the loop; if I comment out the K.variable calls, the code works perfectly, but I need the values to change based on channel_step.
How can I define K.variable outside the loop and update its value inside the loop based on channel_step?
I found the solution to this problem: I just needed to downgrade the Keras and TensorFlow versions. Now I'm using tensorflow-gpu 1.15.0 and Keras 2.2.4. Apparently, this code does not support TensorFlow 2 and later.
I fixed this by changing the load function. If you are loading your model from a .h5 or .pb file,
change it from this:
model = tf.saved_model.load(self.filename, [tf.saved_model.SERVING])
to this:
from tensorflow.python.keras.models import load_model
model = load_model(self.filename)
I have a strange problem when trying to use OpenVINO.
I exported my PyTorch model to ONNX and then imported it into OpenVINO using the following command:
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model ~/Downloads/unet2d.onnx --disable_resnet_optimization --disable_fusing --disable_gfusing --data_type=FP32
So for the test case, I have disabled the optimizations.
Now, using the sample Python applications, I run inference with the model as follows:
from openvino.inference_engine import IENetwork, IECore
import numpy as np
from os import path
model_xml = path.expanduser('model.xml')
model_bin = path.expanduser('model.bin')
ie = IECore()
net = IENetwork(model=model_xml, weights=model_bin)
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
net.batch_size = 1
exec_net = ie.load_network(network=net, device_name='CPU')
np.random.seed(0)
x = np.random.randn(1, 2, 256, 256) # expected input shape
res = exec_net.infer(inputs={input_blob: x})
res = res[out_blob]
The problem is that this seems to output something completely different from my ONNX or PyTorch model.
Additionally, I realized that I do not even have to pass an input, so if I do something like:
x = None
res = exec_net.infer(inputs={input_blob: x})
This still returns the same output! So it seems to suggest that my input is somehow being ignored?
Could you try it without --disable_resnet_optimization --disable_fusing --disable_gfusing, i.e. leaving the optimizations in?
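That is, the same command from the question with the disable flags removed:

python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model ~/Downloads/unet2d.onnx --data_type=FP32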
I am currently trying to visualize the output of an intermediate layer in Keras 1.0 (which I could do with Keras 0.3) but it does not work anymore.
x = model.input
y = model.layers[3].output
f = theano.function([x], y)
But I get the following error:
MissingInputError: ("An input of the graph, used to compute DimShuffle{x,x,x,x}(keras_learning_phase), was not provided and not given a value.Use the Theano flag exception_verbosity='high',for more information on this error.", keras_learning_phase)
Prior to Keras 1.0, with my graph model, I could just do:
x = graph.inputs['input'].input
y = graph.nodes[layer].get_output(train=False)
f = theano.function([x], y, allow_input_downcast=True)
So I suspect the problem comes from the "train=False" parameter, which I don't know how to set in the new version.
Thank you for your help
Try:
In the import statements, first add
from keras import backend as K
from theano import function
then
get_3rd_layer_output = K.function([model.layers[0].input, K.learning_phase()],
                                   [model.layers[3].output])
# output in test mode = 0
layer_output = get_3rd_layer_output([X_test, 0])[0]
# output in train mode = 1
layer_output = get_3rd_layer_output([X_train, 1])[0]
This was just answered by François Chollet on GitHub:
Your model apparently has a different behavior in training and test mode, and so needs to know what mode it should be using.
Use
iterate = K.function([input_img, K.learning_phase()], [loss, grads])
and pass 1 or 0 as value for the learning phase, based on whether you want the model in training mode or test mode.
https://github.com/fchollet/keras/issues/2417