Using "UPDATE" and "SET" in Python to Update Snowflake Table - python-3.x

I have been using Python to read and write data to Snowflake for some time now, writing to a table I have full update rights on, using a Snowflake helper class my colleague found on the internet. Please see below for the class I have been using, with my personal Snowflake connection information abstracted, and a simple read query that works provided you have a 'TEST' table in your schema.
from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine
import keyring
import pandas as pd
from sqlalchemy import text
# Pull the username and password to be used to connect to snowflake
stored_username = keyring.get_password('my_username', 'username')
stored_password = keyring.get_password('my_password', 'password')
class SNOWDBHelper:
    def __init__(self):
        self.user = stored_username
        self.password = stored_password
        self.account = 'account'
        self.authenticator = 'authenticator'
        self.role = stored_username + '_DEV_ROLE'
        self.warehouse = 'warehouse'
        self.database = 'database'
        self.schema = 'schema'

    def __connect__(self):
        self.url = URL(
            user=self.user,
            password=self.password,
            account=self.account,
            authenticator=self.authenticator,
            role=self.role,
            warehouse=self.warehouse,
            database=self.database,
            schema=self.schema
        )
        self.engine = create_engine(self.url)
        self.connection = self.engine.connect()

    def __disconnect__(self):
        self.connection.close()

    def read(self, sql):
        self.__connect__()
        result = pd.read_sql_query(sql, self.engine)
        self.__disconnect__()
        return result

    def write(self, wdf, tablename):
        self.__connect__()
        wdf.to_sql(tablename.lower(), con=self.engine, if_exists='append', index=False)
        self.__disconnect__()
# Instantiate the SNOWDBHelper
SNOWDB = SNOWDBHelper()
query = """SELECT * FROM """ + 'TEST'
snow_table = SNOWDB.read(query)
I now need to update an existing Snowflake table, and my colleague suggested I could use the read function to send a query containing the update SQL to my Snowflake table. So I adapted an update query that I use successfully in the Snowflake UI and sent it through the read function. Snowflake reports that the relevant rows in the table have been updated, but they have not. Please see below for the update query I use to attempt to change a field "field" in the "test" table to "X", and the success message I get back. I'm not thrilled with this hacky update method overall (where the table update is a side effect of the read), but could someone please help with a method to update within this framework?
# Query I actually store in file: '0-Query-Update-Effective-Dating.sql'
UPDATE "Database"."Schema"."Test" AS UP
SET UP.FIELD = 'X'
# Read the query in from file and utilize it
update_test = open('0-Query-Update-Effective-Dating.sql')
update_query = text(update_test.read())
SNOWDB.read(update_query)
# Returns message of updated rows, but no rows updated
   number of rows updated  number of multi-joined rows updated
0                     316                                    0
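
One way to avoid routing the UPDATE through read() is to give the helper a dedicated statement runner that executes the SQL inside an explicit transaction and commits it. The sketch below is a minimal, untested adaptation of the same SNOWDBHelper class; the execute() method and the subclass name are my own additions, not part of the original helper.

class SNOWDBHelperWithExecute(SNOWDBHelper):
    # Hypothetical subclass: adds a runner for UPDATE/DELETE/INSERT statements.
    def execute(self, sql):
        self.__connect__()
        try:
            # Run inside an explicit transaction so the change is committed
            # rather than discarded when the connection closes.
            with self.connection.begin():
                result = self.connection.execute(text(sql))
                rowcount = result.rowcount  # rows touched by the statement
        finally:
            self.__disconnect__()
        return rowcount

# Hypothetical usage:
# SNOWDB = SNOWDBHelperWithExecute()
# with open('0-Query-Update-Effective-Dating.sql') as f:
#     print(SNOWDB.execute(f.read()))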


Related

how to avoid duplication in BigQuery by streaming insert

I made a function that inserts .CSV data into BigQuery every 5~6 seconds. I've been looking for a way to avoid duplicating the data in BigQuery after inserting. I want to remove rows that share the same luid, but I have no idea how to remove them, so is it possible to check whether each row of the .CSV already exists in the BigQuery table before inserting?
I passed the row_ids parameter to avoid duplicate luids, but it doesn't seem to work well.
Could you give me any ideas? Thanks.
def stream_upload():
    # BigQuery
    client = bigquery.Client()
    project_id = 'test'
    dataset_name = 'test'
    table_name = "test"
    full_table_name = dataset_name + '.' + table_name
    json_rows = []
    with open('./test.csv', 'r') as f:
        for line in csv.DictReader(f):
            del line[None]
            line_json = dict(line)
            json_rows.append(line_json)
    errors = client.insert_rows_json(
        full_table_name, json_rows, row_ids=[row['luid'] for row in json_rows]
    )
    if errors == []:
        print("New rows have been added.")
    else:
        print("Encountered errors while inserting rows: {}".format(errors))
    print("end")

schedule.every(0.5).seconds.do(stream_upload)
while True:
    schedule.run_pending()
    time.sleep(0.1)
BigQuery doesn't have a native way to deal with this. You could either create a view off of this table that performs deduping, or keep an external cache of luids, look up whether each luid has already been written to BigQuery before writing, and update the cache after writing new data. This could be as simple as a file cache, or you could use an additional database.
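
As a rough sketch of the second option (the cache file name and the filtering step are assumptions, not part of the original code), a simple file-based luid cache could look like this:

import os

SEEN_LUIDS_FILE = './seen_luids.txt'  # hypothetical local cache file

def load_seen_luids():
    # Load luids that have already been uploaded; empty set if no cache yet.
    if not os.path.exists(SEEN_LUIDS_FILE):
        return set()
    with open(SEEN_LUIDS_FILE) as f:
        return {line.strip() for line in f if line.strip()}

def remember_luids(luids):
    # Append newly uploaded luids to the cache.
    with open(SEEN_LUIDS_FILE, 'a') as f:
        for luid in luids:
            f.write(luid + '\n')

# Inside stream_upload(), filter rows before calling insert_rows_json:
# seen = load_seen_luids()
# new_rows = [row for row in json_rows if row['luid'] not in seen]
# if new_rows:
#     errors = client.insert_rows_json(
#         full_table_name, new_rows,
#         row_ids=[row['luid'] for row in new_rows])
#     if errors == []:
#         remember_luids(row['luid'] for row in new_rows)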

python code to execute multiple queries and create csv

Hi, I am pretty new to Python. I have code that reads a MySQL query through pandas and, if there is data, converts it into a CSV. Now I need to go a step further and add another query that creates another CSV, and I am not sure of the best way to do it. Any help is appreciated, thanks.
My code is something like this
def data_to_df(connection):
    query = """
    select * from abs
    """
    data = pd.read_sql(sql=query, con=connection)
    return data

def main():
    # DB connection and data retrieval
    cnx = database_connection(db_credentials)
    print(cnx)
    exit()
    df = data_to_df(cnx)
    # Convert dataframe to text file
    df.to_csv(file_location, sep='|', na_rep='NULL', index=False, quoting=csv.QUOTE_NONE)
    print('File created successfully! \n')
How can I add another query that will be executed and create a different file altogether?
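
One approach, sketched below rather than taken from the question, is to parameterize the query and the output path; the second query and the file names are placeholders.

def query_to_csv(connection, query, file_location):
    # Run one query and write the result to a pipe-delimited file.
    data = pd.read_sql(sql=query, con=connection)
    data.to_csv(file_location, sep='|', na_rep='NULL', index=False,
                quoting=csv.QUOTE_NONE)
    print('File created successfully: {}'.format(file_location))

def main():
    cnx = database_connection(db_credentials)
    # Call once per query/output pair (queries and paths are placeholders).
    query_to_csv(cnx, "select * from abs", "abs.csv")
    query_to_csv(cnx, "select * from another_table", "another_table.csv")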

Need better approach to load oracle blob data into Mongodb collection using Gridfs

Recently, I started working on a new project where I need to transfer Oracle table data into MongoDB collections.
The Oracle table has one BLOB column.
I wanted to transfer the Oracle BLOB data into MongoDB using GridFS, and I even succeeded, but I am unable to scale it up.
If I use the same script for 10k or 50k records, it takes a very long time.
Please suggest where I can improve, or whether there is a better way to achieve my goal.
Thank you in advance.
Please find below the sample code I am using to load a small amount of data:
from pymongo import MongoClient
import cx_Oracle
from gridfs import GridFS
import pickle
import sys

client = MongoClient('localhost:27017/sample')
dbm = client.sample
db = <--oracle connection----->
cursor = db.cursor()

def get_notes_file_sys():
    return GridFS(dbm, 'notes')

def save_data_in_file(fs, note, file_name):
    gridin = None
    file_ids = {}
    data_blob = pickle.dumps(note['file_content_blob'])
    del note['file_content_blob']
    gridin = fs.open_upload_stream(file_name, chunk_size_bytes=261120, metadata=note)
    gridin.write(data_blob)
    gridin.close()
    file_ids['note_id'] = gridin._id
    return file_ids

# ---------------------------Uploading files start---------------------------------------
fs = get_notes_file_sys()
query = ("""SELECT id, file_name, file_content_blob, author, created_at FROM notes fetch next 10 rows only""")
cursor.execute(query)
rows = cursor.fetchall()
col = [co[0] for co in cursor.description]
final_arr = []
for row in rows:
    data = dict(zip(col, row))
    file_name = data['file_name']
    if data["file_content_blob"] is None:
        data["file_content_blob"] = None
    else:
        # This below line is taking more time
        data["file_content_blob"] = data["file_content_blob"].read()
    note_id = save_data_in_file(fs, data, file_name)
    data['note_id'] = note_id
    final_arr.append(data)
dbm['notes'].insert_many(final_arr)
Two things come to mind:
1. Don't move to Mongo. Just use Oracle's SODA document storage model: https://cx-oracle.readthedocs.io/en/latest/user_guide/soda.html Also take a look at Oracle's JSON DB service: https://blogs.oracle.com/jsondb/autonomous-json-database
2. Fetch BLOBs as bytes, which is much faster than the method you are using: https://cx-oracle.readthedocs.io/en/latest/user_guide/lob_data.html#fetching-lobs-as-strings-and-bytes There is an example at https://github.com/oracle/python-cx_Oracle/blob/master/samples/ReturnLobsAsStrings.py
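
For the second suggestion, a rough sketch of how an output type handler could be wired into the script above (the handler name is mine; db is the existing Oracle connection placeholder from the question):

def output_type_handler(cursor, name, default_type, size, precision, scale):
    # Fetch BLOBs as raw bytes and CLOBs as strings, avoiding a separate
    # LOB.read() round-trip for every row.
    if default_type == cx_Oracle.BLOB:
        return cursor.var(cx_Oracle.LONG_BINARY, arraysize=cursor.arraysize)
    if default_type == cx_Oracle.CLOB:
        return cursor.var(cx_Oracle.LONG_STRING, arraysize=cursor.arraysize)

# db is the Oracle connection created earlier in the script
db.outputtypehandler = output_type_handler
# With the handler in place, data["file_content_blob"] arrives as bytes,
# so the slow data["file_content_blob"].read() call is no longer needed.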

Python 3 script not writing to Postgres table

The first part of the script returns all of my AD users with values converted to Python str: draft = [('Display Name', 'username'),]
I want to write this to my main_associate table (Postgres 9.5), avoiding duplicates. I know there are records in the list that are not duplicates and should be written. This returns no errors but doesn't write my records:
try:
    new_conn = psycopg2.connect("dbname='test' user='usr' host='localhost' password='pswd'")
except:
    print("Unable to connect to the associates database.")

sql = """INSERT INTO main_associate(displayname,username) VALUES(%s,%s)
         ON CONFLICT (username) DO NOTHING"""

one_cur = new_conn.cursor()
for grp in draft:
    #print(grp)
    one_cur.execute(sql, (grp[0], grp[1],))

new_conn.commit
one_cur.close()
new_conn.close()
If you install SQLAlchemy, something along these lines works (note the PostgreSQL-dialect insert(), which provides on_conflict_do_nothing()):
from sqlalchemy import create_engine, MetaData
from sqlalchemy.dialects.postgresql import insert

engine = create_engine('postgresql://postgres:pswd@localhost/test')
meta = MetaData()
meta.reflect(bind=engine)
table = meta.tables['main_associate']

for grp in draft:
    ins = (insert(table)
           .values(displayname=grp[0], username=grp[1])
           .on_conflict_do_nothing(index_elements=['username']))
    engine.execute(ins)
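
If you are on SQLAlchemy 1.4+ (where Engine.execute() is deprecated and later removed), the same statement can be run through an explicit connection; a minimal sketch:

# SQLAlchemy 1.4+/2.0 style: run the upserts inside one transaction block.
with engine.begin() as conn:
    for grp in draft:
        ins = (insert(table)
               .values(displayname=grp[0], username=grp[1])
               .on_conflict_do_nothing(index_elements=['username']))
        conn.execute(ins)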

How to use passed parameter as table Name in Select query python?

I have the following function which extracts data from a table, but I want to pass the table name to the function as a parameter...
def extract_data(table):
    try:
        tableName = table
        conn_string = "host='localhost' dbname='Aspentiment' user='postgres' password='pwd'"
        conn = psycopg2.connect(conn_string)
        cursor = conn.cursor()
        cursor.execute("SELECT aspects_name, sentiments FROM ('%s') " % (tableName))
        rows = cursor.fetchall()
        return rows
    finally:
        if conn:
            conn.close()
When I call the function as extract_data(Harpar), where Harpar is the table name, it gives an error that 'Harpar' is not defined. Any help?
Update: As of psycopg2 version 2.7:
You can now use the sql module of psycopg2 to compose dynamic queries of this type:
from psycopg2 import sql
query = sql.SQL("SELECT aspects_name, sentiments FROM {}").format(sql.Identifier(tableName))
cursor.execute(query)
Before version 2.7:
Use the AsIs adapter along these lines:
from psycopg2.extensions import AsIs
cursor.execute("SELECT aspects_name, sentiments FROM %s;",(AsIs(tableName),))
Without the AsIs adapter, psycopg2 would quote the table name as a string literal rather than an identifier, which is not valid SQL in that position.
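
Putting that together with the original function, a sketch of the parameterized version might look like this (the connection details and column names are copied from the question; the table name is passed as a string, e.g. extract_data('Harpar')):

import psycopg2
from psycopg2 import sql

def extract_data(table_name):
    conn = None
    try:
        conn = psycopg2.connect("host='localhost' dbname='Aspentiment' "
                                "user='postgres' password='pwd'")
        cursor = conn.cursor()
        # Compose the table name safely as an identifier, not a string literal.
        query = sql.SQL("SELECT aspects_name, sentiments FROM {}").format(
            sql.Identifier(table_name))
        cursor.execute(query)
        return cursor.fetchall()
    finally:
        if conn:
            conn.close()

# Usage: pass the table name as a string
rows = extract_data('Harpar')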
