Even though I have mounted my Google Drive (and the dataset in it) in Google Colab, when I run my code I get this error: FileNotFoundError: [Errno 2] No such file or directory: 'content/drive/My Drive/....
I have already mounted Google Drive in Google Colab and I can access it from the notebook, but when I run my code I still get this error.
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
model=Sequential()
model.add(Convolution2D(32,3,3,input_shape=(64,64,3),activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Convolution2D(32,3,3,activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(output_dim=128,activation='relu'))
model.add(Dense(output_dim=1,activation='sigmoid'))
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
training_set = train_datagen.flow_from_directory(
    directory='content/drive/My Drive/Convolutional_Neural_Networks/dataset/training_set',
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')
test_set = test_datagen.flow_from_directory(
    directory='content/drive/My Drive/Convolutional_Neural_Networks/dataset/test_set',
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')
#train
model.fit_generator(
    training_set,
    samples_per_epoch=8000,
    nb_epoch=2,
    validation_data=test_set,
    nb_val_samples=1000)
import numpy as np
from keras.preprocessing import image
test_image=image.load_img('sunn.jpg',target_size=(64,64))
test_image=image.img_to_array(test_image)
test_image=np.expand_dims(test_image,axis=0)
result=model.predict(test_image)
training_set.class_indices
if result[0][0] >= 0.5:
    prediction = 'dog'
else:
    prediction = 'cat'
print(prediction)
After mounting, move into the dataset folder:
cd /content/drive/My\ Drive/Convolutional_Neural_Networks/dataset/
Don't use the !.
Then set your directory as ./training_set.
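Under that approach, the generator call from the question would look something like this (a minimal sketch reusing the question's own settings):
training_set = train_datagen.flow_from_directory(
    directory='./training_set',   # relative to the dataset folder you cd'd into
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')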
I think you are missing a leading / in your /content/drive... path.
It's typical to mount your Drive files via
from google.colab import drive
drive.mount('/content/drive')
https://colab.research.google.com/notebooks/io.ipynb#scrollTo=u22w3BFiOveA
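Once the drive is mounted, the call from the question should use the absolute path starting with /content (a sketch using the question's own folder names):
training_set = train_datagen.flow_from_directory(
    directory='/content/drive/My Drive/Convolutional_Neural_Networks/dataset/training_set',
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')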
I have been trying, and for those curious, it has not been possible for me to use flow_from_directory with a folder inside Google Drive. The Colab file environment does not read the path and gives a "Folder does not exist" error. I have been trying to solve the problem and searching Stack Overflow; similar questions have been posted here (Google collaborative) and here (Deep learnin on Google Colab: loading large image dataset is very long, how to accelerate the process?), with no effective solution and, for some reason, many downvotes for those who ask.
The only solution I found for reading 20k images in Google Colab was uploading them and then processing them, wasting two sad hours in the process. It makes sense: Google identifies things inside the Drive with ids, while flow_from_directory requires both the dataset and the classes to be identified with absolute folder paths, which is not compatible with Google Drive's identification method. An alternative might be using a Google Cloud environment instead, I suppose, and paying. We are getting quite a lot for free as it is. This is my novice understanding of the situation; please correct me if I am wrong.
edit1: I was able to use flow_from_directory on Google Colab; Google does identify things with paths as well. The thing is that os.getcwd() does not work properly: it will tell you that the current working directory is "/content", when in truth it is "/content/drive/My Drive/foldersinsideyourdrive/...../folderthathasyourcolabnotebook/". If you change the path in the train generator so that it includes this, and ignore os.getcwd(), it works. I did, however, have problems with RAM even when using flow_from_directory and was not able to train my CNN anyway; that might be something that just happens to me, though.
from google.colab import drive
drive.mount('/content/drive')
Using the code above you can mount your Drive in Colab.
When loading images, use:
directory='drive/My Drive/Convolutional_Neural_Networks/dataset/test_set',
not this:
directory='content/drive/My Drive/Convolutional_Neural_Networks/dataset/test_set',
For the Keras ImageDataGenerator, the dataset needs the standard directory structure, with one subfolder per class.
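For the cats/dogs task in the question, that layout would look roughly like this (folder names are illustrative):
dataset/
    training_set/
        cats/
        dogs/
    test_set/
        cats/
        dogs/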
So, I started with the default Colab commands:
from google.colab import drive
drive.mount('/gdrive', force_remount=True)
And the main changes I made were here:
img_width, img_height = 64, 64
train_data_dir = '/gdrive/My Drive/Colab Notebooks/dataset/training_set'
validation_data_dir = '/gdrive/My Drive/Colab Notebooks/dataset/test_set'
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(64, 64),
    batch_size=32,
    class_mode='binary')
classifier.fit_generator(
    train_generator,
    steps_per_epoch=8000,  # Number of images in train set
    epochs=25,
    validation_data=validation_generator,
    validation_steps=2000)
This worked for me and I hope this helps someone.
For some reason you have to %cd into your Google Drive folder and then execute your code in order to access files from your Drive or write files there.
First mount your Google Drive:
from google.colab import drive
drive.mount('/gdrive', force_remount=True)
Then cd into your Google Drive and run your code:
%cd /gdrive/My\ Drive/
directory='./Convolutional_Neural_Networks/dataset/training_set'
Try removing "content"; it worked for me after an hour of troubleshooting here.
After the drive is mounted at /content/drive:
cd drive/My Drive/dissertation
from google.colab import drive
drive.mount('/content/drive')
Change the working directory to the folder created previously:
cd '/content/drive/My Drive/PLANT DISEASE RECOGNITION'
This gave me an error saying that the directory could not be changed.
To solve this error we can use:
%cd /content/drive/My\ Drive/PLANT\ DISEASE\ RECOGNITION
After following the mount drive advice:
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
I realised that referencing the dataset file directly, by name, didn't work. Loading its parent path did work.
This didn't work:
dataset = load_dataset("/content/drive/MyDrive/my_filename.json")
This did work:
dataset = load_dataset("/content/drive/MyDrive")
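If this is the Hugging Face datasets library (an assumption on my part), another pattern that usually works is to name the format explicitly and pass the file through data_files:
from datasets import load_dataset

# the path is the hypothetical file from above
dataset = load_dataset("json", data_files="/content/drive/MyDrive/my_filename.json")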
I am trying to save and load a model with the following code, but it is not working. It shows me an error saying that it cannot find the model. Am I missing something? I'm using Google Colab. Thank you.
import keras
from keras.losses import MeanAbsoluteError
from keras.metrics import RootMeanSquaredError

callbacks_list = [keras.callbacks.EarlyStopping(monitor='val_loss', patience=6),
                  keras.callbacks.ModelCheckpoint(filepath='my_model.h5', monitor='val_loss',
                                                  mode='min', save_freq='epoch', save_best_only=True)]
model.compile(loss=MeanAbsoluteError(), optimizer='Adam', metrics=[RootMeanSquaredError()])
history = model.fit(X_train, y_train, batch_size=512, epochs=100,
                    callbacks=callbacks_list, validation_data=(X_val, y_val))
from tensorflow.keras.models import load_model
#save model to single file
model.save('my_model.h5')
#To load model
model = load_model('my_model.h5')
Since you are using Google Colab, you must mount your drive to access the data on Colab. Assuming that the notebook you are executing is in the directory my_dir (update the path according to YOUR particular path) you can add the following code to a cell before your save and load code:
from google.colab import drive
drive.mount('/content/drive') # mounts the drive
%cd /content/drive/MyDrive/my_dir/ # moves your position inside the directory where you are executing the code
# ... your code to save and your code to load
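Alternatively, skip the %cd and point the save/load calls at the full Drive path directly (a sketch; my_dir is the placeholder directory from above):
model.save('/content/drive/MyDrive/my_dir/my_model.h5')
model = load_model('/content/drive/MyDrive/my_dir/my_model.h5')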
I am writing code for the well-known MNIST database of handwritten digits in PyTorch. I downloaded the training and test datasets (from the main website), including the labelled data. The files come in the format t10k-images-idx3-ubyte.gz and, after extraction, t10k-images-idx3-ubyte. My dataset folder looks like:
MNIST
    Data
        train-images-idx3-ubyte.gz
        train-labels-idx1-ubyte.gz
        t10k-images-idx3-ubyte.gz
        t10k-labels-idx1-ubyte.gz
Now, I wrote code to load the data like below:
import torch
import torchvision

def load_dataset():
    data_path = "/home/MNIST/Data/"
    xy_trainPT = torchvision.datasets.ImageFolder(
        root=data_path, transform=torchvision.transforms.ToTensor()
    )
    train_loader = torch.utils.data.DataLoader(
        xy_trainPT, batch_size=64, num_workers=0, shuffle=True
    )
    return train_loader
My code throws the error: Supported extensions are: .jpg,.jpeg,.png,.ppm,.bmp,.pgm,.tif,.tiff,.webp
How can I solve this problem? I would also like to check that my images are loaded, e.g. a figure showing the first 5 images from the dataset.
Read this Extract images from .idx3-ubyte file or GZIP via Python
Update
You can import data using this format
xy_trainPT = torchvision.datasets.MNIST(
    root="~/Handwritten_Deep_L/",
    train=True,
    download=True,
    transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()]),
)
Now, what happens with download=True: first, your code checks whether the root directory (the path you gave) already contains the dataset.
If it does not, the dataset will be downloaded from the web.
If it does, your code will use the existing dataset and will not download it from the internet.
You can check this: first give a path without any dataset (the data will be downloaded from the internet), then give another path that already contains the dataset (the data will not be downloaded).
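To mirror the load_dataset() function from the question, the MNIST dataset object can then be wrapped in a DataLoader (a minimal sketch, reusing the root path from above):
import torch
import torchvision

def load_dataset():
    xy_trainPT = torchvision.datasets.MNIST(
        root="~/Handwritten_Deep_L/",  # downloaded here if missing
        train=True,
        download=True,
        transform=torchvision.transforms.ToTensor(),
    )
    return torch.utils.data.DataLoader(xy_trainPT, batch_size=64, shuffle=True)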
Welcome to Stack Overflow!
The MNIST dataset is not stored as images, but in a binary format (as indicated by the ubyte extension). Therefore, ImageFolder is not the type of dataset you want. Instead, you will need to use the MNIST dataset class. It can even download the data for you if you have not done so already :)
This is a dataset class, so just instantiate it with the proper root path, then pass it to your dataloader and everything should work just fine.
If you want to check the images, just pull a batch from the dataloader and save the result as a png file (you may need to convert the tensor to a numpy array first).
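A minimal sketch of that check, assuming matplotlib is available (the root path is just a placeholder):
import matplotlib.pyplot as plt
import torch
import torchvision

dataset = torchvision.datasets.MNIST(
    root="./data", train=True, download=True,
    transform=torchvision.transforms.ToTensor())
loader = torch.utils.data.DataLoader(dataset, batch_size=5, shuffle=True)
images, labels = next(iter(loader))  # one batch of 5 images

fig, axes = plt.subplots(1, 5, figsize=(10, 2))
for ax, img, label in zip(axes, images, labels):
    ax.imshow(img.squeeze(0).numpy(), cmap="gray")  # drop the channel dimension
    ax.set_title(int(label))
    ax.axis("off")
plt.savefig("first_five.png")  # or plt.show() in a notebook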
I saved a Keras-trained model in the free version of Google Colab:
model.save("my_model.h5")
I tried to retrieve the model using the method below:
from keras.models import load_model
model = load_model('my_model.h5')
But it is throwing this error:
OSError: Unable to open file (unable to open file: name = 'my_model.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
Will I be able to retrieve the saved model from the free Google Colab version? Can anyone help with this?
I checked similar questions on Stack Overflow; I think those answers apply to the Colab Pro version.
Otherwise, do I have to save the model to a specific path on a local drive while training?
What is the problem?
You are storing your model in the runtime's storage, not in your Google Drive. The runtime is automatically deleted after 12 hours, together with its data, so we have to save the model to Google Drive.
How to store it in Google Drive
First connect to Google Drive:
from google.colab import drive
drive.mount('/content/drive')
Now you will find a file explorer on the left side, which has a drive directory. When you go inside that directory, it will take you to your Google Drive.
Suppose I want to put my data in the root of my Drive (My Drive); then:
from keras.models import load_model
MODEL_PATH = './drive/My Drive/model.h5'
# Now save model in drive
model.save(MODEL_PATH)
# Load Model
model = load_model(MODEL_PATH)
When you open your Drive, you will find the file model.h5 there.
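One note: in a fresh session the runtime starts empty, so mount the drive again before loading (a sketch using the same path as above):
from google.colab import drive
from keras.models import load_model

drive.mount('/content/drive')
model = load_model('/content/drive/My Drive/model.h5')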
I am trying to load a densenet121 model in Kaggle kernel without switching on the internet.
I have done the required steps, such as adding the pre-trained weights to my input directory and moving them to '.cache/torch/checkpoints/'. It still does not work and throws a gaierror.
The following is the code snippet:
!mkdir -p /tmp/.cache/torch/checkpoints
!cp ../input/fastai-pretrained-models/densenet121-a639ec97.pth /tmp/.cache/torch/checkpoints/densenet121-a639ec97.pth
learn_cd = create_cnn(data_cd, models.densenet121,
                      metrics=[error_rate, accuracy],
                      model_dir=Path('../kaggle/working/models'),
                      path=Path('.')).to_fp16()
I have been struggling with this for a long time. Any help would be immensely appreciated.
So, the input path "../input/" in a Kaggle kernel is read-only. Instead, create a folder in "/kaggle/working" and copy the model weights there. Example below:
import os

if not os.path.exists('/root/.cache/torch/hub/checkpoints/'):
    os.makedirs('/root/.cache/torch/hub/checkpoints/')
!mkdir '/kaggle/working/resnet34'
!cp '/root/.cache/torch/hub/checkpoints/resnet34-333f7ec4.pth' '/kaggle/working/resnet34/resnet34.pth'
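Applied to the densenet121 weights from the question, the copy step described above would look roughly like this (a sketch; the target folder name is just an example):
!mkdir -p /kaggle/working/models
!cp ../input/fastai-pretrained-models/densenet121-a639ec97.pth /kaggle/working/models/densenet121-a639ec97.pth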
After I call this function (bag_of_words_model) like this:
from tensorflow.contrib import learn
#----------------------------------------
#Do some process here
#----------------------------------------
classifier = learn.Estimator(model_fn=bag_of_words_model,model_dir='F:/data')
classifier.fit(feature_train, target_train, steps=1000)
I end up with some files in my folder "F:/data" like this:
And I wonder: is there any way to reuse this model? For example, can I move it to a new computer and use it to predict on new data? Sorry for my bad English. Thanks for all the answers! Hope you all have a nice day.
In a script where you want to reuse your model, redefine/import bag_of_words_model again and define
classifier_loaded = learn.Estimator(model_fn=bag_of_words_model, model_dir='F:/data')
TensorFlow will reload the graph, import the weights into it, and then you can just use it as
classifier_loaded.predict(input_fn=input_fn)
or continue the training with
classifier_loaded.fit(feature_train, target_train)