Using the TenSEAL library for homomorphic encryption, how can we use encrypted tensors (CKKSTensor objects) in PyTorch tensor operations such as tensor.grad or backward()?
Thanks in advance ^
I have a tensor and want to convert it into a quantized binary form.
x = torch.Tensor([-9.0387e-01, 1.4811e-01, 2.8242e-01, 3.6679e-01, 3.2012e-01])
PyTorch does not have a 1-bit binary dtype; the smallest quantized type is qint8.
You can convert the tensor to a quantized version with torch.quantize_per_tensor; you can check the wiki here.
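A minimal sketch of torch.quantize_per_tensor on the tensor from the question; note that the scale and zero_point below are arbitrary illustrative choices, not values derived from any calibration:

```python
import torch

x = torch.tensor([-9.0387e-01, 1.4811e-01, 2.8242e-01, 3.6679e-01, 3.2012e-01])

# Quantize with a hand-picked scale/zero_point; in practice these would
# come from observing the range of your data.
xq = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)

print(xq.int_repr())    # the underlying int8 values
print(xq.dequantize())  # back to float32, with quantization error
```

int_repr() exposes the raw integer representation, which is the closest you get to a "binary" view of the quantized data.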
I'm trying to convert a PyTorch model containing NonZero operations to TensorRT.
I've successfully exported it to ONNX, but the operation is supported neither natively by TensorRT nor by existing converters like https://github.com/onnx/onnx-tensorrt
Could you please advise how I can deal with it, maybe by rewriting the operation as Where or Equal + LogicalNot?
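One common workaround along the lines suggested in the question is to avoid the data-dependent output shape of NonZero entirely: keep the tensor at its full size and express the selection with elementwise ops, which export to Equal/Not/Where ONNX nodes that TensorRT handles. A hedged sketch, assuming the NonZero result only feeds a masked reduction (whether this applies depends on how the original model uses the indices):

```python
import torch

x = torch.tensor([0.0, 1.5, 0.0, 2.0])

# Dynamic-shape version: exports a NonZero node, which TensorRT rejects.
# idx = torch.nonzero(x)

# Fixed-shape alternative: a boolean mask (Equal + Not in ONNX) plus Where.
mask = torch.logical_not(torch.eq(x, 0.0))
masked = torch.where(mask, x, torch.zeros_like(x))

# Example downstream use: mean over the non-zero entries only.
mean_nonzero = masked.sum() / mask.sum().clamp(min=1)
```

The key point is that every tensor keeps a static shape, so the exported graph contains no data-dependent allocation.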
I have a problem making a prediction using a pre-trained model that contains an encoder and decoder for handwritten text recognition.
What I did is the following:
checkpoint = torch.load("Model/SPAN/SPAN-PT-RA_rimes.pt",map_location=torch.device('cpu'))
encoder_state_dict = checkpoint['encoder_state_dict']
decoder_state_dict = checkpoint['decoder_state_dict']
img = torch.LongTensor(img).unsqueeze(1).to(torch.device('cpu'))
global_pred = decoder_state_dict(encoder_state_dict(img))
This generates this error:
TypeError: 'collections.OrderedDict' object is not callable
I would highly appreciate your help! ^_^
encoder_state_dict and decoder_state_dict are not torch models but collections (ordered dictionaries) of tensors holding the pre-trained parameters from the checkpoint you loaded.
Feeding inputs (such as the transformed input image) to such a collection of tensors does not make sense. Instead, you should load these state_dicts (i.e., collections of pre-trained tensors) into the parameters of model objects that define the network architecture, via load_state_dict. See the torch.nn.Module class.
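A sketch of the intended pattern. The Encoder/Decoder classes below are hypothetical stand-ins; the real SPAN module definitions must come from the model's own repository, and the load_state_dict calls are commented out because they require the actual checkpoint file:

```python
import torch
import torch.nn as nn

# Hypothetical placeholder modules; replace with the real SPAN
# encoder/decoder classes from the model's codebase.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, 3, padding=1)
    def forward(self, x):
        return self.conv(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Conv2d(8, 10, 1)
    def forward(self, x):
        return self.proj(x)

encoder, decoder = Encoder(), Decoder()

# Load the pre-trained parameters into the modules (requires the real
# classes so the parameter names and shapes match):
# checkpoint = torch.load("Model/SPAN/SPAN-PT-RA_rimes.pt", map_location="cpu")
# encoder.load_state_dict(checkpoint['encoder_state_dict'])
# decoder.load_state_dict(checkpoint['decoder_state_dict'])

encoder.eval()
decoder.eval()

# Note: the image batch should be a float tensor, not a LongTensor.
img = torch.rand(1, 1, 32, 32)
with torch.no_grad():
    global_pred = decoder(encoder(img))
```

Calling the module objects (not the state_dicts) runs the forward pass, which is what the original code was trying to do.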
I am trying out some examples using Keras models that are already available. Most of the examples use Keras with TensorFlow (or PyTorch or Theano).
Due to limited available resources and cost cutting, I am using PlaidML to work with an AMD GPU. As Keras supports pluggable backends, I think this may not be an issue.
Please share your thoughts about using the Keras API and later plugging in the desired backend.
I have this concern because the samples (and this) use Keras from TensorFlow (import tensorflow.keras), while I am using plain Keras (import keras) with a pluggable backend.
What is the equivalent statement for the following?
img = tf.io.decode_png(img, channels=1)
# 3. Convert to float32 in [0, 1] range
img = tf.image.convert_image_dtype(img, tf.float32)
Are there any limitations to going with the plain Keras API?
I just used PIL Image to read and convert the image. It works the same without using the TensorFlow API. Most of the Keras API can be used irrespective of the backend. There are some caveats with PlaidML, though: some functions, such as the CTC loss ctc_batch_cost, cannot be found. I got an error like:
The Keras backend function 'ctc_batch_cost' is not yet implemented in
Plaid. You can help us prioritize by letting us know if this function
is important to you, and as always, contributions are welcome!
There are some posts that provide sample implementations, but it is not straightforward. From PlaidML, the response was that it may not be available soon.
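The PIL-based replacement for the tf.io.decode_png / tf.image.convert_image_dtype pair mentioned above can be sketched as follows (the in-memory PNG is only there to make the example self-contained; in practice you would pass a file path to Image.open):

```python
import io
import numpy as np
from PIL import Image

# Create a small PNG in memory so the example is self-contained.
buf = io.BytesIO()
Image.fromarray(np.arange(16, dtype=np.uint8).reshape(4, 4)).save(buf, format="PNG")
buf.seek(0)

# PIL/NumPy equivalent of tf.io.decode_png(img, channels=1) followed by
# tf.image.convert_image_dtype(img, tf.float32):
img = Image.open(buf).convert("L")               # force a single channel
img = np.asarray(img, dtype=np.float32) / 255.0  # uint8 [0, 255] -> float32 [0, 1]
img = img[..., np.newaxis]                       # HxW -> HxWx1
```

This keeps the same grayscale, float32-in-[0, 1], channel-last layout the TensorFlow pipeline produced, without depending on the backend at all.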
The program I am inspecting uses PyTorch to load weights and CUDA code to do the computations with them. My understanding is that the THC library is how tensors are implemented in the backend of PyTorch (and Torch, maybe?).
How is THC implemented? (I would really appreciate some details if possible.)
What does THCudaTensor_data(THCState*, THCudaTensor*) do? (From the way it is used in the code, it seems to convert PyTorch's tensor to an array in CUDA. If this is the case, would the function preserve all elements and the length of the array?)
I am still not exactly sure of the inner workings of THCudaTensor_data, but the behaviour that was tripping me up is: for an n-dimensional tensor, THCudaTensor_data returns a pointer to the tensor's data as a flattened 1-D array.
Hope this helps
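The flattening behaviour described above can be seen from the Python side too; a sketch using the modern data_ptr() API (THC itself has since been removed from PyTorch):

```python
import torch

# A contiguous n-D tensor is backed by one flat, row-major 1-D buffer;
# THCudaTensor_data returned a raw pointer into exactly that buffer, so
# all elements and the total length are preserved. Only the shape
# metadata (sizes/strides) is absent from the raw pointer view.
t = torch.arange(6, dtype=torch.float32).reshape(2, 3)
flat = t.view(-1)  # a view over the same storage, no copy

print(t.data_ptr() == flat.data_ptr())  # True: same underlying buffer
print(flat.tolist())                    # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
```

So the "conversion" to a CUDA array is really just exposing the storage the tensor already uses, in row-major order.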