I am trying to write a simple Python script that fetches data from an endpoint, converts it into a dataframe, and writes it to an Amazon Redshift cluster, with the plan of automating the script using an AWS cron job. I am using psycopg2 to connect to the Redshift cluster, and the script executes the commands without errors (it creates the table in Redshift and writes the data as well). But when I try to view the table from a SQL client, the table doesn't show up.
from pandas.io.json import json_normalize
import json
import pandas as pd
import requests
import psycopg2
mm_get = requests.get('endpoint link')
mm_json=mm_get.json()
data_1 = json_normalize(data=mm_json['data'],
                        record_path=['courses', 'modules'],
                        record_prefix='courses.modules.',
                        meta=[['courses', 'id'],
                              ['courses', 'title'],
                              'activated',
                              'createdAt',
                              'email',
                              'employeeId',
                              'firstName',
                              'group',
                              'id',
                              'lastName',
                              'phone',
                              'teams'])
data_2 = json_normalize(data=mm_json['data'],
                        record_path='lessons',
                        record_prefix='lessons.',
                        meta='id',
                        meta_prefix='user.')
data_3 = data_1.merge(data_2,
                      how='outer',
                      left_on=['courses.modules.id', 'id'],
                      right_on=['lessons.moduleId', 'user.id'])
cols = data_3.columns
cols = cols.tolist()
cols = pd.DataFrame(cols)
re_cols = pd.DataFrame(cols.loc[:,0].str.replace('.','_').tolist(),index=cols.index)
data_3.teams = data_3.teams.astype(str)
data_3.teams = data_3.teams.str.replace('[','')
data_3.teams = data_3.teams.str.replace(']','')
data_3.teams = data_3.teams.str.replace("'","")
con = psycopg2.connect(dbname='name',
                       host='hostname',
                       port='xxxx',
                       user='username',
                       password='password')
cur = con.cursor()
cur.execute('create table testing_learn.test (courses_modules_completionDate DATE, courses_modules_id int, courses_modules_status TEXT,courses_modules_title TEXT, courses_id int,courses_title TEXT, activated bool, createdAt TIMESTAMP, email TEXT, employeeId TEXT, firstName TEXT, group_name TEXT, id TEXT, lastname TEXT, phone int8, teams TEXT, lessons_courseId int, lessons_date DATE, lessons_id int, lessons_lessonNumber int, lessons_moduleId int,lessons_score TEXT, lessons_title TEXT,user_id int);')
data_mat = data_3.as_matrix()
str_mat = b','.join(cur.mogrify('(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)', x) for x in tuple(map(tuple, data_mat)))
cur.execute('insert into testing_learn.test VALUES '+str_mat.decode('utf-8'))
I am able to see the data when I query the same table from Python using psycopg2, but the same table doesn't show up in the SQL client. It would be of great help if anyone could point out what I am doing wrong here. Thanks in advance.
According to the Psycopg2 2.7.5 official documentation, the main entry points of Psycopg2 include:
The class connection encapsulates a database session. It allows you to:
create new cursor instances using the cursor() method to execute database commands and queries,
terminate transactions using the methods commit() or rollback().
Therefore, you need to call con.commit() after each cur.execute() that modifies data in order to make the changes persistent. Until you commit, the new table and its rows exist only inside your session's open transaction, which is why the table doesn't show up in your SQL client.
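For instance, a minimal sketch of the end of your script with the commits added (same con, cur and str_mat as above):
cur.execute('create table testing_learn.test (...);')  # same column list as above
con.commit()  # make the new table visible to other sessions
cur.execute('insert into testing_learn.test VALUES ' + str_mat.decode('utf-8'))
con.commit()  # persist the inserted rows
cur.close()
con.close()
Alternatively, set con.autocommit = True right after connecting, and every statement is committed as soon as it runs.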
I've got my data in an SQLite3 database, and now I'm working on a little script to access the data I want for given dates. I got the SELECT statement to work with the date ranges, but I can't seem to add another condition to fine-tune the search.
DB columns: id, date, driverid, drivername, pickupStop, pickupPkg, delStop, delPkg
What I've got so far:
import pandas as pd
import sqlite3
sql_data = 'driverperformance.sqlite'
conn = sqlite3.connect(sql_data)
cur = conn.cursor()
date_start = "2021-12-04"
date_end = "2021-12-10"
df = pd.read_sql_query("SELECT DISTINCT drivername FROM DriverPerf WHERE date BETWEEN :dstart and :dend", params={"dstart": date_start, "dend": date_end}, con=conn)
drivers = df.values.tolist()
for d in drivers:
    driverDF = pd.read_sql_query("SELECT * FROM DriverPerf WHERE drivername = :driver AND date BETWEEN :dstart and :dend", params={"driver": d, "dstart": date_start, "dend": date_end}, con=conn)
I've tried a few different versions of the "WHERE drivername" part but it always seems to fail.
Thanks!
If I'm not mistaken, drivers will be a list of lists. Have you tried
.... params={"driver": d[0] ....
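So, as a minimal sketch (keeping the rest of your script the same), index into the inner list when building the params:
for d in drivers:
    driverDF = pd.read_sql_query("SELECT * FROM DriverPerf WHERE drivername = :driver AND date BETWEEN :dstart AND :dend", params={"driver": d[0], "dstart": date_start, "dend": date_end}, con=conn)
Or flatten the list first with drivers = [row[0] for row in df.values.tolist()], so each d is already a plain string.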
I'm using psycopg2 to connect to a Postgres DB and to export the data into a CSV file.
This is how I do the export from the DB to CSV:
def export_table_to_csv(self, table, csv_path):
    sql = "COPY (SELECT * FROM %s) TO STDOUT WITH CSV DELIMITER ','" % table
    with open(csv_path, "w") as file:
        self.cur.copy_expert(sql, file)
But the data is just the rows - without the column names.
How can I export the data with the column names?
P.S. I am able to print the column names:
sql = '''SELECT * FROM test'''
self.cur.execute(sql)
column_names = [desc[0] for desc in self.cur.description]
for i in column_names:
    print(i)
I want the cleanest way to export the DB with the column names (i.e. I prefer to do this in one method, rather than renaming the columns after the fact).
As I said in my comment, you can add HEADER to the WITH clause of your SQL:
sql = "COPY (SELECT * FROM export_test) TO STDOUT WITH CSV HEADER"
By default, comma delimiters are used with the CSV option, so you don't need to specify the delimiter.
For future questions, you should submit a minimal reproducible example: code we can directly copy, paste and run. I was curious whether this would work, so I made one and tried it:
import psycopg2
conn = psycopg2.connect('host=<host> dbname=<dbname> user=<user>')
cur = conn.cursor()
# create test table
cur.execute('DROP TABLE IF EXISTS export_test')
sql = '''CREATE TABLE export_test
(
id integer,
uname text,
fruit1 text,
fruit2 text,
fruit3 text
)'''
cur.execute(sql)
# insert data into table
sql = '''BEGIN;
insert into export_test
(id, uname, fruit1, fruit2, fruit3)
values(1, 'tom jones', 'apple', 'banana', 'pear');
insert into export_test
(id, uname, fruit1, fruit2, fruit3)
values(2, 'billy idol', 'orange', 'cherry', 'strawberry');
COMMIT;'''
cur.execute(sql)
# export to csv
fid = open('export_test.csv', 'w')
sql = "COPY (SELECT * FROM export_test) TO STDOUT WITH CSV HEADER"
cur.copy_expert(sql, fid)
fid.close()
And the resultant file is:
id,uname,fruit1,fruit2,fruit3
1,tom jones,apple,banana,pear
2,billy idol,orange,cherry,strawberry
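Applied to your export_table_to_csv method, the only change is adding HEADER to the COPY statement; a sketch using the same attributes as in your class:
def export_table_to_csv(self, table, csv_path):
    sql = "COPY (SELECT * FROM %s) TO STDOUT WITH CSV HEADER" % table
    with open(csv_path, "w") as file:
        self.cur.copy_expert(sql, file)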
I am trying to convert my SQL query output into a list that looks a certain way.
Here is my code:
def get_sf_metadata():
    import sqlite3
    # Tables I want to be dynamically created
    table_names = ['AcceptedEventRelation', 'Asset', 'Book']
    # SQLite connection
    conn = sqlite3.connect('aaa_test.db')
    c = conn.cursor()
    # select the metadata table records
    c.execute("select name, type from sf_field_metadata1 limit 10 ")
    print(list(c))

get_sf_metadata()
Here is my output:
[('Id', 'id'), ('RelationId', 'reference'), ('EventId', 'reference')]
Is there any way to make the output look like this:
[Id id, RelationId reference, EventId reference]
You can try
print(["{} {}".format(i[0], i[1]) for i in list(c)])
That will print:
['Id id', 'RelationId reference', 'EventId reference']
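As a sketch of how that fits into your function (using fetchall() so the cursor is only read once):
c.execute("select name, type from sf_field_metadata1 limit 10")
rows = c.fetchall()
print(["{} {}".format(name, ftype) for name, ftype in rows])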
The first part of the script returns all of my AD users with values converted to Python str: draft = [('Display Name', 'username'),]
I want to write this to my main_associate table (Postgres 9.5) avoiding duplicates. I know I have records in the list that are not duplicates and should be written. This returns no errors but doesn't write my records:
try:
    new_conn = psycopg2.connect("dbname='test' user='usr' host='localhost' password='pswd'")
except:
    print("Unable to connect to the associates database.")

sql = """INSERT INTO main_associate(displayname,username) VALUES(%s,%s)
ON CONFLICT (username) DO NOTHING"""

one_cur = new_conn.cursor()
for grp in draft:
    #print(grp)
    one_cur.execute(sql, (grp[0],grp[1],))

new_conn.commit
one_cur.close()
new_conn.close()
If you install sqlalchemy...
from sqlalchemy import create_engine, MetaData
from sqlalchemy.dialects.postgresql import insert

engine = create_engine('postgresql://postgres:pswd@localhost/test')
meta = MetaData()
meta.reflect(bind=engine)
table = meta.tables['main_associate']

with engine.begin() as conn:  # begin() commits the transaction on success
    for grp in draft:
        stmt = (insert(table)
                .values(displayname=grp[0], username=grp[1])
                .on_conflict_do_nothing(index_elements=['username']))
        conn.execute(stmt)
I can, for example, get BigQuery data into local Python with:
import os
from google.cloud import bigquery
project_id = "example-project"
dataset_id = "example_dataset"
table_id = "table_id"
os.environ["GOOGLE_CLOUD_PROJECT"] = project_id
bq = bigquery.Client()
query = "SELECT * FROM {}.{} LIMIT 5".format(dataset_id, table_id)
resp = bq.run_sync_query(query)
resp.run()
data_list = resp.rows
The result:
print(data_list)
>>> [('BEDD', '1',), ('A75', '1',), ('CE3F', '1',), ('0D8C', '1',), ('3E9C', '1',)]
How do I then go and get the schema for this table? Such that, for example
headings = ('heading1', 'heading2')
# or
schema_dict = {'fields': [{'name': 'heading1', 'type': 'STRING'}, {'name': 'heading2', 'type': 'STRING'}]}
You can use the schema attribute of your resp variable.
After running the query you can retrieve it:
schema = resp.schema
schema will be a list containing the definition for each column in your query.
As an example, let's say this is your query:
query = "select '1' as fv, STRUCT<i INT64, j INT64> (1, 2) t from `dataset.table` limit 1"
The schema will be a list containing 2 entries:
[<google.cloud.bigquery.schema.SchemaField at 0x7ffa64fe6e50>,
<google.cloud.bigquery.schema.SchemaField at 0x7ffa64fe6b10>]
For each object in schema, you have the attributes field_type, fields, mode and name, so if you run:
schema[0].field_type, schema[0].mode, schema[0].name
The result is "STRING", "NULLABLE", "fv".
As the second column is a record, if you run:
schema[1].field_type, schema[1].mode, schema[1].name, schema[1].fields
The result is:
"RECORD", "NULLABLE", "t", [google schema 1, google schema 2]
Where google schema 1 contains the definition for the inner fields within the record.
As far as I know, there's no way of getting a dictionary exactly as you showed in your question, which means you'll have to loop over the entries in schema and build it yourself. It should be simple though. I'm not sure this works as I haven't fully tested it, but it might give you an idea of how to do it:
def extract_schema(schema_resp):
    l = []
    for schema_obj in schema_resp:
        r = {}
        r['name'] = schema_obj.name
        r['type'] = schema_obj.field_type
        r['mode'] = schema_obj.mode
        if schema_obj.fields:
            r['fields'] = extract_schema(schema_obj.fields)
        l.append(r)
    return l
So you'd just have to run schema = extract_schema(resp.schema) and (hopefully) you'll be good to go.
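And if all you actually need are the column headings from your first example, a one-liner over the schema should do (a sketch, assuming the same resp object as above):
headings = tuple(field.name for field in resp.schema)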