I am currently trying to implement Mask R-CNN by following the Matterport repo. I have a question regarding the TensorBoard implementation.
The dataset is similar to the COCO dataset. Inside model.py, under def train, the TensorBoard callback is set up as
callbacks = [keras.callbacks.TensorBoard(log_dir=self.log_dir, histogram_freq=0, write_graph=True, write_images=False)]
But what else should I specify to use TensorBoard? When I try to run TensorBoard, it says the log file was not found. I know I am missing something somewhere! Please help me out!
In your model.train() call, make sure you pass the callbacks through the custom_callbacks parameter (custom_callbacks=callbacks).
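For example, a minimal sketch (model, dataset_train, dataset_val, config, and the epochs/layers values are placeholders from a typical Matterport training setup, not taken from your code):

import keras

# Assumed placeholders: model is a MaskRCNN instance in training mode,
# dataset_train / dataset_val are your prepared Dataset objects.
callbacks = [
    keras.callbacks.TensorBoard(log_dir=model.log_dir, histogram_freq=0,
                                write_graph=True, write_images=False)
]

model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=30,
            layers="heads",
            custom_callbacks=callbacks)  # extra callbacks are passed here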
If you have specified these parameters exactly like this, then the issue is likely that you are not pointing TensorBoard at the logs directory properly.
Open a terminal (inside Anaconda/PyCharm, or a separate one) and pass the absolute path (to make sure it works):
tensorboard --logdir=my_absolute_path/logs/
I am having this problem trying to use LayoutParser, but I believe it is actually an issue with a PyTorch checkpoint. I get the following assert.
The directory contains model.pth?dl=1 and model.pth.lock.
It looks like the download creates a file called model.pth?dl=1 and a .lock file, but it fails to finish by renaming it to model.pth. It seems to happen with all LayoutParser pretrained models.
Anyone seen this?
Thank you
Peter
The assert is thrown from this call with any of the pretrained models
The full code to reproduce this is:
import layoutparser as lp

if __name__ == "__main__":
    model = lp.Detectron2LayoutModel(
        config_path="lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config",
        label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
        extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.1],
    )
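In the meantime, the manual stopgap I'm considering is to finish the rename myself; a rough sketch (cache_dir is a placeholder for wherever LayoutParser saved the partial download on your machine):

import os

cache_dir = "/path/to/layoutparser/model/cache"  # placeholder: the folder containing model.pth?dl=1

src = os.path.join(cache_dir, "model.pth?dl=1")
dst = os.path.join(cache_dir, "model.pth")
lock = dst + ".lock"

if os.path.exists(src) and not os.path.exists(dst):
    os.rename(src, dst)   # complete the rename the downloader never finished
if os.path.exists(lock):
    os.remove(lock)       # remove the stale lock file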
I have an image dataset (a folder of JPG images). I would like to split it randomly: 70% for training and 30% for testing.
So I wrote this simple script:
from sklearn.model_selection import train_test_split
path = ".\dataset"
output_split=train_test_split(path,path,test_size=0.2)
But I can't find anything in a folder called "output_split".
So where is the output of the split (train and test) stored?
I recommend using a tf.data dataset in your project. You can refer to this link for the documentation:
https://www.tensorflow.org/api_docs/python/tf/keras/utils/image_dataset_from_directory
And this link for an example of how the function is used:
https://keras.io/api/preprocessing/image/
Please remember to use the same seed for both the training and the validation/test subsets.
Please note that no changes will be made to the files on your local disk.
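For a 70/30 split, a minimal sketch (assuming your images are organized in class subfolders under ./dataset; the image size and batch size are arbitrary placeholders):

import tensorflow as tf

# Both calls must use the same validation_split and seed so the subsets don't overlap.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "./dataset",
    validation_split=0.3,
    subset="training",
    seed=123,
    image_size=(224, 224),
    batch_size=32,
)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "./dataset",
    validation_split=0.3,
    subset="validation",
    seed=123,
    image_size=(224, 224),
    batch_size=32,
)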
I am using fastai with PyTorch to fine-tune XLMRoberta from Hugging Face.
I've trained the model and everything is fine on the machine where I trained it.
But when I try to load the model on another machine, I get an OSError (Not Found - No such file or directory) pointing to .cache/torch/transformers/. The issue is the path of a vocab_file.
I've used fastai's Learner.export to export the model to a .pkl file, but I don't believe the issue is related to fastai, since I found the same issue appearing in flairNLP.
It appears that the path to the cache folder, where the vocab_file is stored during the training, is embedded in the .pkl file:
The error comes from transformers' XLMRobertaTokenizer.__setstate__:
def __setstate__(self, d):
    self.__dict__ = d
    self.sp_model = spm.SentencePieceProcessor()
    self.sp_model.Load(self.vocab_file)
which tries to load the vocab_file using the path from the file.
I've tried patching this method using:
from types import MethodType
import sentencepiece as spm
from transformers import XLMRobertaTokenizer

pretrained_model_name = "xlm-roberta-base"
vocab_file = XLMRobertaTokenizer.from_pretrained(pretrained_model_name).vocab_file

def _setstate(self, d):
    self.__dict__ = d
    self.sp_model = spm.SentencePieceProcessor()
    self.sp_model.Load(vocab_file)

XLMRobertaTokenizer.__setstate__ = MethodType(_setstate, XLMRobertaTokenizer(vocab_file))
That successfully loaded the model, but it caused other problems, such as missing model attributes.
Can someone please explain why the path is embedded inside the file? Is there a way to configure it without re-exporting the model, or, if it has to be re-exported, how can it be configured dynamically using fastai, torch and huggingface?
I faced the same error. I had fine-tuned XLMRoberta on a downstream classification task with fastai version 1.0.61. I'm loading the model inside Docker.
I'm not sure why the path is embedded, but I found a workaround. Posting it for future readers who might be looking for one, as retraining is usually not possible.
I created /home/<username>/.cache/torch/transformers/ inside the Docker image:
RUN mkdir -p /home/<username>/.cache/torch/transformers
Then I copied the files (which were not found in Docker) from my local .cache/torch/transformers/ to the same location in the Docker image:
COPY filename /home/<username>/.cache/torch/transformers/filename
I am writing code for the well-known MNIST database of handwritten digits in PyTorch. I downloaded the training and test sets (from the main website), including the label files. The dataset format is t10k-images-idx3-ubyte.gz, and after extraction, t10k-images-idx3-ubyte. My dataset folder looks like this:
MNIST
    Data
        train-images-idx3-ubyte.gz
        train-labels-idx1-ubyte.gz
        t10k-images-idx3-ubyte.gz
        t10k-labels-idx1-ubyte.gz
Now, I wrote code to load the data like below:
import torch
import torchvision

def load_dataset():
    data_path = "/home/MNIST/Data/"
    xy_trainPT = torchvision.datasets.ImageFolder(
        root=data_path, transform=torchvision.transforms.ToTensor()
    )
    train_loader = torch.utils.data.DataLoader(
        xy_trainPT, batch_size=64, num_workers=0, shuffle=True
    )
    return train_loader
My code throws an error saying: Supported extensions are: .jpg,.jpeg,.png,.ppm,.bmp,.pgm,.tif,.tiff,.webp
How can I solve this problem? I also want to check that my images are loaded correctly (just a figure containing the first 5 images from the dataset).
Read this Extract images from .idx3-ubyte file or GZIP via Python
Update
You can import the data like this:
xy_trainPT = torchvision.datasets.MNIST(
    root="~/Handwritten_Deep_L/",
    train=True,
    download=True,
    transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()]),
)
Now, here is what happens with download=True: first, your code checks whether the root directory (the path you gave) already contains the dataset.
If it does not, the dataset is downloaded from the web.
If the path already contains the dataset, your code uses the existing data and does not download it from the internet.
You can verify this: first give a path without any dataset (the data will be downloaded from the internet), then give another path which already contains the dataset (the data will not be downloaded).
Welcome to Stack Overflow!
The MNIST dataset is not stored as images, but in a binary format (as indicated by the ubyte extension). Therefore, ImageFolder is not the type of dataset you want. Instead, you will need to use the dedicated MNIST dataset class. It can even download the data for you if you haven't done so already :)
This is a dataset class, so just instantiate it with the proper root path, pass it to your DataLoader, and everything should work just fine.
If you want to check the images, just index the dataset (its __getitem__ method) and save the result as a PNG file (you may need to convert the tensor to a NumPy array first).
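For example, a minimal sketch of that check (the root path and output filename are placeholders):

import matplotlib.pyplot as plt
import torchvision

# Load MNIST through the dedicated dataset class and plot the first 5 images.
dataset = torchvision.datasets.MNIST(
    root="/home/MNIST/Data/",
    train=True,
    download=True,
    transform=torchvision.transforms.ToTensor(),
)

fig, axes = plt.subplots(1, 5, figsize=(10, 2))
for i, ax in enumerate(axes):
    image, label = dataset[i]                        # __getitem__ returns (tensor, label)
    ax.imshow(image.squeeze().numpy(), cmap="gray")  # 1x28x28 tensor -> 28x28 array
    ax.set_title(str(label))
    ax.axis("off")
fig.savefig("first_five_mnist.png")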
While trying to run my Keras code on the GPU (CUDA is installed), I am not able to execute the following statement, as suggested in many online references.
set THEANO_FLAGS="mode=FAST_RUN,device=gpu,floatX=float32" & python theanogpu_example.py
I am getting the following error.
ValueError: Invalid value ("FAST_RUN,device=gpu,floatX=float32") for configuration
variable "mode". Valid options are ('Mode', 'DebugMode', 'FAST_RUN', 'NanGuardMode',
'FAST_COMPILE', 'DEBUG_MODE')
I have also tried the other approach suggested, from inside the code:
import theano
theano.config.device = 'gpu'
theano.config.floatX = 'float32'
I get the following error.
Exception: Can't change the value of this config parameter after initialization!
Apart from knowing how to make it run, I would also take this opportunity to ask a simpler question: how do I know, on Windows, what my device is, i.e. whether it is 'gpu', 'gpu0' or 'gpu1'? I have tried all three in my case, but it hasn't yielded a result.
Any suggestions will be appreciated.
The best way is to set THEANO_FLAGS before running the code, because the config variables cannot be changed after importing Theano. Try this:
import os
os.environ['THEANO_FLAGS'] = "device=cuda,force_device=True,floatX=float32"
import theano
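As a quick sanity check (not part of the original answer), you can print which device Theano actually picked up after the import:

print(theano.config.device)   # the device Theano ended up using
print(theano.config.floatX)   # should be float32 if the flag took effect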