"No such table" error while loading the .db file in python - python-3.x

I'm trying to read a .db file in Python code, but I'm getting a "no such table" error. However, I can see the table when I import the file into a MySQL DB.
import sqlite3
import pandas as pd

con = None

def getConnection():
    databaseFile = "test.db"
    global con
    if con is None:
        con = sqlite3.connect(databaseFile)
    return con

def queryExec():
    con = getConnection()
    result = pd.read_sql_query("select * from Movie;", con)
    return result

queryExec()
I even tried using the absolute path of the .db file, but no luck.

Assuming you're trying to read data from a SQLite database file, here is a simpler way to do it.
import sqlite3
import pandas as pd

con = sqlite3.connect("test.db")
with con:
    df = pd.read_sql("select * from Movie", con)
    print(df)
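If the table still can't be found from Python, the usual cause is that sqlite3.connect opened a different file than intended: with a relative path it resolves against the current working directory and silently creates a new, empty database if the file doesn't exist. A quick diagnostic sketch (assuming the same test.db) to confirm which file is opened and which tables it actually contains:

import os
import sqlite3

db_path = "test.db"
print(os.path.abspath(db_path))   # the file that will actually be opened
con = sqlite3.connect(db_path)
tables = con.execute("select name from sqlite_master where type='table';").fetchall()
print(tables)                     # should include ('Movie',) if the table exists
con.close()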

Related

Unable to save file in DBFS

I took the Azure open datasets that are available for practice. I got 10 days of data from that dataset and now I want to save this data into DBFS in CSV format. I am facing an error:
"No such file or directory: '/dbfs/temp/hive/mytest.csv'"
but on the other hand I am able to access the path directly from DBFS; the path is correct.
My code:
from azureml.opendatasets import NoaaIsdWeather
from datetime import datetime
from dateutil import parser
from dateutil.relativedelta import relativedelta

spark.sql('DROP Table if exists mytest')
dbutils.fs.rm("dbfs:/tmp/hive", recurse=True)

basepath = "dbfs:/tmp/hive"
try:
    dbutils.fs.ls(basepath)
except:
    dbutils.fs.mkdirs(basepath)
else:
    raise Exception("The Folder " + basepath + " already exist, this notebook will remove in the end")
dbutils.fs.mkdirs("dbfs:/tmp/hive")

start_date = parser.parse('2020-5-1')
end_date = parser.parse('2020-5-10')
isd = NoaaIsdWeather(start_date, end_date)
pdf = isd.to_spark_dataframe().toPandas().to_csv("/dbfs/temp/hive/mytest.csv")
What should I do?
Thanks
I tried reproducing the same issue. First, I used the following code and made sure that the directory exists using os.listdir().
from azureml.opendatasets import NoaaIsdWeather
from datetime import datetime
from dateutil import parser
from dateutil.relativedelta import relativedelta

spark.sql('DROP Table if exists mytest')
dbutils.fs.rm("dbfs:/tmp/hive", recurse=True)

basepath = "dbfs:/tmp/hive"
try:
    dbutils.fs.ls(basepath)
except:
    dbutils.fs.mkdirs(basepath)
else:
    raise Exception("The Folder " + basepath + " already exist, this notebook will remove in the end")
dbutils.fs.mkdirs("dbfs:/tmp/hive")

import os
os.listdir("/dbfs/tmp/hive/")
Then I used the following to write the CSV using to_pandas_dataframe(). This successfully wrote the required dataframe to a CSV file in the required path.
mydf = isd.to_pandas_dataframe()
mydf.to_csv("/dbfs/tmp/hive/mytest.csv")
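Note that the error in the original code may simply come from a path mismatch: the notebook creates dbfs:/tmp/hive but writes to /dbfs/temp/hive/mytest.csv. If you prefer to keep the Spark-then-pandas route, pointing the write at the directory that was actually created should also work; a minimal sketch, assuming /dbfs/tmp/hive is the FUSE mount of dbfs:/tmp/hive:

# write to /dbfs/tmp/hive (the directory created above), not /dbfs/temp/hive
pdf = isd.to_spark_dataframe().toPandas()
pdf.to_csv("/dbfs/tmp/hive/mytest.csv")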

Python pandas into azure SQL, bulk insert

How can I arrange a bulk insert of a Python dataframe into the corresponding Azure SQL table?
I see that INSERT works with individual records:
INSERT INTO XX ([Field1]) VALUES (value1);
How can I insert the entire content of the dataframe into the Azure table?
Thanks
According to my test, we can also use to_sql to insert data into Azure SQL.
For example:
from urllib.parse import quote_plus
import numpy as np
import pandas as pd
from sqlalchemy import create_engine, event
import pyodbc

# Azure SQL connection string
conn = 'Driver={ODBC Driver 17 for SQL Server};Server=tcp:<server name>.database.windows.net,1433;Database=<db name>;Uid=<user name>;Pwd=<password>;Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;'
quoted = quote_plus(conn)
engine = create_engine('mssql+pyodbc:///?odbc_connect={}'.format(quoted))

@event.listens_for(engine, 'before_cursor_execute')
def receive_before_cursor_execute(conn, cursor, statement, params, context, executemany):
    print("FUNC call")
    if executemany:
        cursor.fast_executemany = True

# insert
table_name = 'Sales'
# For the test, I use a csv file to create the dataframe
df = pd.read_csv(r'D:\data.csv')
df.to_sql(table_name, engine, index=False, if_exists='replace', schema='dbo')

# test after inserting
query = 'SELECT * FROM {table}'.format(table=table_name)
dfsql = pd.read_sql(query, engine)
print(dfsql)
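For larger dataframes it can also help to write in batches; pandas' to_sql accepts a chunksize parameter. A small sketch, assuming the same engine and table as above:

# insert 10,000 rows per batch to keep each round trip manageable
df.to_sql(table_name, engine, index=False, if_exists='replace', schema='dbo', chunksize=10000)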

Getting Error while trying to retrieve text for error ORA-01804 while executing aws python lambda linux

I am trying to execute the Lambda function below on AWS Lambda, using Python 3.7 as the runtime environment.
import cx_Oracle
import os
import logging
import boto3
from botocore.exceptions import ClientError
from base64 import b64decode

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info('begin lambda_handler')
    os.environ['LD_LIBRARY_PATH'] = os.getcwd()
    dsn = cx_Oracle.makedsn("hostname", 1521, service_name="servicename")
    con = cx_Oracle.connect("userid", "passwod", dsn)
    cur = con.cursor()
    #logger.info('username: ' + username)
    #logger.info('host: ' + host)
    sql = """SELECT COUNT(*) AS TEST_COUNT FROM DUAL"""
    cur.execute(sql)
    columns = [i[0] for i in cur.description]
    rows = [dict(zip(columns, row)) for row in cur]
    logger.info(rows)
    con.close()
    logger.info('end lambda_handler')
    return "Successfully connected to oracle."
But when I execute the above Lambda, I get the error below.
Error while trying to retrieve text for error ORA-01804
Any help on this?
Check whether your Oracle Instant Client version matches your database version; a mismatch can also lead to this error.
I tried using the latest Oracle Instant Client v21.1 and it produced the same error.
It turned out the server hosting the database was running v11.2, so I had to download the v11.2 client to match it.
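If you want to confirm the versions in play before swapping client libraries, cx_Oracle can report both the Instant Client version and the database version. A quick diagnostic sketch, reusing the placeholder dsn and credentials from the question:

import cx_Oracle

print(cx_Oracle.clientversion())   # version of the Instant Client that was loaded
con = cx_Oracle.connect("userid", "passwod", dsn)
print(con.version)                 # version of the database server
con.close()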

How to run python function by clicking html button?

I am trying to make this web app work but I am getting an error. These are the steps the web app is supposed to handle:
import a file
run the python script
export the results
When I run the Python script independently (without Flask), it works fine (I use a Jupyter notebook). On the other hand, when I run it with Flask (from the prompt) I get an error:
File "app.py", line 88, in <module>
for name, df in transformed_dict.items():
NameError: name 'transformed_dict' is not defined
Any idea how I can make this web app work?
This is my first time using Flask and I would appreciate any suggestions or guidance.
Python file & HTML file:
from flask import Flask, render_template, request, send_file
from flask_sqlalchemy import SQLAlchemy
import os
import pandas as pd
from openpyxl import load_workbook
import sqlalchemy as db

def transform(df):
    # Some data processing here
    return df

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('firstpage.html')

@app.route('/upload', methods=['GET', 'POST'])
def upload():
    file = request.files['inputfile']
    xls = pd.ExcelFile(file)
    name_dict = {}
    snames = xls.sheet_names
    for sn in snames:
        name_dict[sn] = xls.parse(sn)
    for key, value in name_dict.items():
        transform(value)
    transformed_dict = {}
    for key, value in name_dict.items():
        transformed_dict[key] = transform(value)

#### write to excel example:
writer = pd.ExcelWriter("MyData.xlsx", engine='xlsxwriter')
for name, df in transformed_dict.items():
    df.to_excel(writer, sheet_name=name)
writer.save()

if __name__ == '__main__':
    app.run(port=5000)
Your block:
#### write to excel example:
writer = pd.ExcelWriter("MyData.xlsx", engine='xlsxwriter')
for name, df in transformed_dict.items():
    df.to_excel(writer, sheet_name=name)
writer.save()
should be part of your upload() function since that's where you define and fill transformed_dict. You just need to match the indentation there to the block above it.
The current error is coming up because it's trying to run that code as soon as you start your script, and transformed_dict doesn't exist at that point.
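For illustration, the end of upload() could look roughly like this once the block is moved inside it; returning the workbook with send_file is an assumption here, since the original function never returns a response:

    transformed_dict = {}
    for key, value in name_dict.items():
        transformed_dict[key] = transform(value)
    # write to excel inside upload(), where transformed_dict exists
    writer = pd.ExcelWriter("MyData.xlsx", engine='xlsxwriter')
    for name, df in transformed_dict.items():
        df.to_excel(writer, sheet_name=name)
    writer.save()
    # send the generated workbook back to the browser (assumed behaviour)
    return send_file("MyData.xlsx", as_attachment=True)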

how to create table into SQLite3 from importing excel data in python?

In my code, I am importing data from an Excel file into an SQLite database using Python.
It doesn't give any error, but it turns every Excel column name into a table.
I have multiple Excel files with the same data structure, each containing 40K rows and 52 columns.
When I import the data from these files into the SQLite database using Python code, it creates a table for each column header name.
import sqlite3
import pandas as pd

filename = gui_fname()
con = sqlite3.connect("cps.db")
wb = pd.read_excel(filename, sheet_name='Sheet2')
for sheet in wb:
    wb[sheet].to_sql(sheet, con, index=False, if_exists='append')
con.commit()
con.close()
It should create a table with the name of the sheet which I am importing.
I did some hit and trial and found a solution:
I just put con.commit() within the for loop and it works as required, but I don't understand the logic.
I would appreciate it if anyone could explain this to me.
import sqlite3
import pandas as pd

filename = gui_fname()
con = sqlite3.connect("cps.db")
wb = pd.read_excel(filename, sheet_name='Sheet2')
for sheet in wb:
    wb[sheet].to_sql(sheet, con, index=False, if_exists='append')
    con.commit()
con.close()
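As for why each column became its own table: pd.read_excel with a single sheet_name returns one DataFrame, so for sheet in wb iterates over column names and wb[sheet] is a single column, which to_sql then writes as its own table. If the intent is to loop over sheets, passing sheet_name=None returns a dict of DataFrames keyed by sheet name. A small sketch, assuming the same gui_fname() helper and cps.db as above:

import sqlite3
import pandas as pd

filename = gui_fname()
con = sqlite3.connect("cps.db")
# sheet_name=None reads every sheet into a dict: {sheet name: DataFrame}
sheets = pd.read_excel(filename, sheet_name=None)
for sheet_name, frame in sheets.items():
    frame.to_sql(sheet_name, con, index=False, if_exists='append')
con.commit()
con.close()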
import sqlite3
import pandas as pd

def import_excel_to_sqlite_db(excelFile):
    # Read the Excel file into a single dataframe
    df = pd.read_excel(excelFile)
    con = sqlite3.connect("SQLite.db")
    # Append the dataframe to the target table
    df.to_sql("TableName", con, if_exists="append", index=False)
    con.commit()
    # Read the rows back to verify the insert
    check = pd.read_sql("Select * from TableName", con)
    print(check)
    con.close()
