Access Google Sheets on Google Colaboratory - python-3.x

Hi, I am using Google Colaboratory (similar to Jupyter Notebook). Does anyone know how to access data from Google Sheets from a Google Colaboratory notebook?

Loading data from Google Sheets is covered in the I/O example notebook:
https://colab.research.google.com/notebook#fileId=/v2/external/notebooks/io.ipynb&scrollTo=sOm9PFrT8mGG

!pip install --upgrade -q gspread
import gspread
import pandas as pd

# Authenticate the Colab user and authorize gspread with those credentials
from google.colab import auth
auth.authenticate_user()
from google.auth import default
creds, _ = default()
gc = gspread.authorize(creds)

# Open the spreadsheet by name and pull every row of its first worksheet
worksheet = gc.open('data_set.csv').sheet1
rows = worksheet.get_all_values()
pd.DataFrame.from_records(rows)
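
The last line treats every row, including the header, as data. If the sheet's first row holds column names (an assumption about this particular sheet), a common follow-up is:

# Assuming the first row of the sheet contains the column names
df = pd.DataFrame.from_records(rows[1:], columns=rows[0])
df.head()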

Related

PyTrends Api Not giving Same Results On google Colab after each Runtime Start

I am using the pytrends library in Google Colab; the problem is that whenever I restart my runtime the results are different. Here is my code:
!pip install pytrends
from pytrends.request import TrendReq
wom = 'today 1-m'
geo = 'GB'
key_word = '401k'
pytrend = TrendReq(hl='en-US',tz=-360)
pytrend.build_payload([key_word], timeframe=wom, geo=geo)
wtrends = pytrend.interest_over_time()
print(wtrends)
This code always gives me the same results when I run it on my local machine using Anaconda. You can verify the results by going to the Google Trends website and setting the region to United Kingdom.

Import Image from Google Drive to Google Colab

I have mounted my Google Drive to my Colab notebook:
from google.colab import drive
drive.mount("/content/gdrive")
I can import CSV files through it, in my case:
df = pd.read_csv("/content/gdrive/MyDrive/colab/heart-disease.csv")
But when I try to embed an image in a markdown/text cell in Colab, nothing happens:
![](/content/gdrive/MyDrive/colab/6-step-ml-framework.png)
Here's my directory on Google drive:
You can use OpenCV in Colab: import cv2 as cv, read the image with img = cv.imread('/content/gdrive/MyDrive/colab/6-step-ml-framework.png'), and convert the image to a float array with img = np.float32(img).
@Mr. For Example
You should not use np.float(img), because OpenCV's imread has already converted the image to a NumPy array.
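
Putting that together, a minimal sketch for reading and showing the image from a Colab code cell (the path is the one from the question; cv2_imshow is Colab's drop-in replacement for cv.imshow):

import cv2 as cv
from google.colab.patches import cv2_imshow  # Colab-safe replacement for cv.imshow

# Read the image from the mounted Drive path used in the question
img = cv.imread('/content/gdrive/MyDrive/colab/6-step-ml-framework.png')
cv2_imshow(img)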

folium map not showing databricks python

I am working on Databricks and have a folium map:
import geopandas as gpd
import matplotlib.pyplot as plt
import os
import folium
from IPython.display import display
map_osm = folium.Map(location=[45.5236, -122.6750])
map_osm
I get the following:
<folium.folium.Map at 0x7f9978eec748>
I tried the suggestions from "Folium map not displaying" to no avail.
Any suggestions?
Try this
import folium
import webbrowser
map_osm = folium.Map(location=[45.5236, -122.6750])
map_osm.save('map.html')
webbrowser.open('map.html')
folium writes the map out as an HTML document, and Python IDLE will not render HTML unless it is opened explicitly. You can also run the same code in a Jupyter notebook, which runs in a browser and can render the HTML map with ease.
Turning the map into HTML and then displaying it worked for me in Databricks using Python 3.5:
world_map = folium.Map()
html_map = world_map._repr_html_()
displayHTML(html_map)
The original answer came from Databricks forums by ShumZZ: https://forums.databricks.com/questions/444/how-to-create-maps-in-databricks.html

Loading a 8.9 GB dataset from Google Drive to Google Colab?

I am working on a huge laboratory dataset and want to know how to load an 8.9 GB dataset from my Google Drive into my Google Colab notebook. The error it shows is that the runtime stopped and is restarting.
I've already tried chunksize, nrows, na_filter, and dask, though there might be a problem with how I implemented them; if so, please explain how to use them. I am attaching my original code below.
import pandas as pd
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
id = '1M4tregypJ_HpXaQCIykyG2lQtAMR9nPe'
downloaded = drive.CreateFile({'id':id})
downloaded.GetContentFile('Filename.csv')
df = pd.read_csv('Filename.csv')
df.head()
If you suggest any of the methods I've already tried please do so with appropriate and working code.
The problem probably comes from pd.read_csv('Filename.csv').
An 8.9 GB CSV file will take more than 13 GB of RAM once parsed. You should not load the whole file into memory; work through it incrementally instead.
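
For example, a minimal sketch of incremental processing with pandas' chunksize (the filename and the per-chunk work are placeholders; adapt them to your own columns):

import pandas as pd

total_rows = 0
# Read the CSV one million rows at a time so the whole file never sits in memory
for chunk in pd.read_csv('Filename.csv', chunksize=1_000_000):
    total_rows += len(chunk)  # replace with your per-chunk processing
print(total_rows)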

Pandas DataReader

This may be a really simple question, but I am truly stuck.
I am trying to import pandas' DataReader like this:
from pandas.io.data import DataReader
but it cannot find DataReader. I do not know what I am doing wrong, especially for such a simple thing. All I am trying to do is acquire data from Yahoo Finance.
Thanks a lot for the help.
Pandas' data reader was removed from pandas; it is now a separate repo and a separate install:
https://github.com/pydata/pandas-datareader
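In a notebook such as Colab or Jupyter, the separate package can be installed in a cell (a minimal sketch):

!pip install -q pandas-datareader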
From the readme:
Starting in 0.19.0, pandas no longer supports pandas.io.data or pandas.io.wb, so you must replace your imports from pandas.io with those from pandas_datareader:
from pandas.io import data, wb # becomes
from pandas_datareader import data, wb
Many functions from the data module have been included in the top level API.
import pandas_datareader as pdr
pdr.get_data_yahoo('AAPL')