Import Image from Google Drive to Google Colab

I have mounted my Google Drive in my Colab notebook:
from google.colab import drive
drive.mount("/content/gdrive")
I can read CSV files through the mount; in my case:
df = pd.read_csv("/content/gdrive/MyDrive/colab/heart-disease.csv")
But when I try to embed an image in a markdown/text cell in Colab, nothing happens:
![](/content/gdrive/MyDrive/colab/6-step-ml-framework.png)
Here's my directory on Google Drive: (screenshot not preserved)

You can use OpenCV in Colab: import it with import cv2 as cv, read the image with img = cv.imread('/content/gdrive/MyDrive/colab/6-step-ml-framework.png'), and convert it to a float array with img = np.float32(img).

@Mr. For Example: you shouldn't need the np.float32(img) conversion, because OpenCV's imread already returns a NumPy array.
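As for the original problem (the image not rendering in a text cell): a Colab markdown cell is rendered in the browser and cannot read files from the runtime's filesystem, so paths under /content won't load there. A minimal sketch of a workaround in a code cell, using the mount and path from the question:

# Sketch (not from the thread): display the Drive image from a code cell,
# since a text cell cannot reach files under /content
from IPython.display import Image, display
display(Image(filename='/content/gdrive/MyDrive/colab/6-step-ml-framework.png'))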

Related

How would I import Pillow for Python?

I've tried everything from other questions, but nothing seems to work. It's installed on this computer; it's just that I can't import it!
Here’s what I wrote:
from PIL import Image
#Open image using Image module
im = Image.open("/home/****/Pictures/Screenshot from 2021-08-03 18-21-59.png")
#Show actual Image
im.show()
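No answer to this one is preserved here; one common cause (an assumption, not something stated in the thread) is that Pillow is installed for a different interpreter than the one running the script. A quick diagnostic sketch:

# Hypothetical check (not from the thread): confirm which interpreter is running,
# then install Pillow for exactly that interpreter
import sys
print(sys.executable)  # e.g. /usr/bin/python3
# In a shell, install against that same interpreter:
#   /usr/bin/python3 -m pip install Pillow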

How to read an image from Google Drive using Python?

I want to read an image from Drive and convert it to binary. How can I do that? I used this code, but it does not return the actual image.
link = urllib.request.urlopen("https://drive.google.com/file/d/1CT12YIeF0xcc8cwhBpvR-Oq0AFOABwsw/view?usp=sharing").read()
image_base64 = base64.encodestring(link)
1. Download the image to your computer first: a .../view?usp=sharing link returns Drive's HTML viewer page, not the raw image bytes, which is why the urlopen call above doesn't yield the actual image.
2. You can then use cv2 to convert the image to binary like so:
import cv2
# Flag 2 is cv2.IMREAD_ANYDEPTH; imread returns the image as a NumPy array
img = cv2.imread('imgs/mypic.jpg', 2)
# Threshold at 127: pixels above become 255 (white), the rest 0 (black)
ret, bw_img = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
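If you'd rather fetch the file programmatically than by hand, here is a hedged sketch using Drive's direct-download endpoint (assumptions: the file is shared publicly and is small enough that Drive skips its virus-scan confirmation page; the file ID is taken from the question's link):

# Sketch: download a publicly shared Drive file via the direct-download endpoint,
# then threshold it exactly as above
from urllib.request import urlretrieve
file_id = '1CT12YIeF0xcc8cwhBpvR-Oq0AFOABwsw'  # ID from the question's URL
urlretrieve('https://drive.google.com/uc?export=download&id=' + file_id, 'mypic.jpg')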

How to save images using matplotlib without displaying them?

I have many (millions of) NumPy 2-D arrays that need to be saved as images. One can save an individual image like this:
import numpy as np
import matplotlib.pyplot as plt
surface_profile = np.empty((50,50)) #numpy array to be saved
plt.figure()
plt.imshow(surface_profile)
save_filename='filename.png'
plt.savefig(save_filename)
However, this process also displays the image, which I don't need. If I'm saving a million images this way, I somehow need to avoid matplotlib's imshow() call.
Any help?
PS: I forgot to mention that I am using Spyder.
Your problem is that plt.imshow(surface_profile) creates the figure, and with an interactive backend (as in Spyder) it will display the image as well.
You can instead write the array directly with PIL; try the following:
from PIL import Image
import numpy as np

surface_profile = np.empty((50, 50))  # NumPy array to be saved
# Image.fromarray cannot handle float64 data directly, so clip to the
# displayable 0-255 range and cast to uint8 first (this assumes the data
# is already on that scale; rescale beforehand if it is not)
im = Image.fromarray(np.uint8(np.clip(surface_profile, 0, 255)))
im = im.convert('RGB')
save_filename = 'filename.png'
im.save(save_filename, 'PNG')
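Alternatively (a suggestion beyond the original answer), matplotlib itself can write an array straight to disk without ever creating a figure, via plt.imsave:

# Sketch: save the array as an image; no figure is opened, nothing is displayed
import numpy as np
import matplotlib.pyplot as plt
surface_profile = np.empty((50, 50))  # array from the question
plt.imsave('filename.png', surface_profile)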

Is there any way to call the PubChem API in Python?

I have been using the PubChem API to convert chemical SMILES to a structure image, but I still get an error.
Here is my Google Colab notebook; I tried PIL's Image plus Tkinter:
https://colab.research.google.com/drive/1TE9WxXwaWKSLQzKRQoNlWFqztVSoIxB7
My desired output should be a structure image like this:
https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/smiles/O=C(N1C=CN=C1)N2C=CN=C2/PNG?record_type=2d&image_size=large
Download and display in a Jupyter Notebook
from urllib.request import urlretrieve
from urllib.parse import quote
from IPython.display import Image

smiles = 'NC1=NC(C)=C(C2=CC=C(S(=O)(C)=O)C(F)=C2)S1'
# Percent-encode the SMILES; characters such as '/' or '#' would otherwise break the URL
urlretrieve('https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/smiles/' + quote(smiles, safe='') + '/PNG', 'smi_pic.png')
p = Image(filename='smi_pic.png')
p
Output: (the rendered 2-D structure PNG, not preserved here)

Loading a 8.9 GB dataset from Google Drive to Google Colab?

I am working with a huge laboratory dataset and want to know how to load an 8.9 GB dataset from my Google Drive into my Google Colab notebook. The error it shows is: runtime stopped, restarting.
I've already tried chunksize, nrows, na_filter, and dask, though there might be a problem with how I implemented them; if you could, please explain how to use them. I am attaching my original code below.
import pandas as pd
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials

# Authenticate and build a PyDrive client
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)

# Download the CSV from Drive by file ID, then load it with pandas
id = '1M4tregypJ_HpXaQCIykyG2lQtAMR9nPe'
downloaded = drive.CreateFile({'id': id})
downloaded.GetContentFile('Filename.csv')
df = pd.read_csv('Filename.csv')
df.head()
If you suggest any of the methods I've already tried, please do so with appropriate, working code.
The problem is most likely pd.read_csv('Filename.csv').
An 8.9 GB CSV file can take well over 13 GB of RAM once parsed, which is more than Colab provides. You should not load the whole file into memory at once; work through it incrementally instead.
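Here is a minimal sketch of incremental processing with pandas' chunksize (the per-chunk sum is just a stand-in for whatever reduction you need; the filename is the one from the question):

# Sketch: stream the CSV in 100k-row chunks and combine a per-chunk reduction
import pandas as pd
totals = None
for chunk in pd.read_csv('Filename.csv', chunksize=100_000):
    part = chunk.sum(numeric_only=True)  # example reduction per chunk
    totals = part if totals is None else totals.add(part, fill_value=0)
print(totals)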
