Executing an insert query on each row in results: psycopg2.ProgrammingError: no results to fetch - psycopg2

What dumb thing am I missing here:
>>> cur.execute("select id from tracks")
>>> for row in cur:
...     story = random.choice(fortunes) + random.choice(fortunes)
...     cur.execute("update tracks set story=%s where id=%s", (story, row[0]))
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
psycopg2.ProgrammingError: no results to fetch
But there seem to be results:
>>> cur.execute("select id from tracks")
>>> for row in cur:
...     print(row)
...
(8,)
(45,)
(12,)
(64,)
(1,)
(6,)

Looks like psycopg2 doesn't allow interleaved queries on the same cursor (although PostgreSQL itself can handle them on the back end). If the initial query isn't huge, the simplest solution is to coalesce the results into a list: just change for row in cur: to for row in cur.fetchall(): and you should be right.
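For illustration, a minimal sketch of the fixed loop, assuming the "tracks" table and "fortunes" list from the question (the connection string and stand-in data are hypothetical):

import random
import psycopg2

fortunes = ["You will travel far. ", "Good news is coming. "]  # stand-in data

conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string
cur = conn.cursor()

cur.execute("select id from tracks")
for row in cur.fetchall():  # materialize all rows before reusing the cursor
    story = random.choice(fortunes) + random.choice(fortunes)
    cur.execute("update tracks set story=%s where id=%s", (story, row[0]))

conn.commit()

If the result set is too large to hold in memory, an alternative is to open a second cursor and read from one while writing with the other, since the problem is only the reuse of a single cursor.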

Related

Decode an encoded value from DB using Python

I encoded a value from an input file and inserted it into an SQLite DB:
cur.execute('''INSERT INTO Locations (address, geodata)
        VALUES ( ?, ? )''', (memoryview(address.encode()), memoryview(data.encode())))
Now I'm trying to decode it, but I'm getting
Traceback (most recent call last):
  File "return.py", line 9, in <module>
    print(c.decode('utf-8'))
AttributeError: 'tuple' object has no attribute 'decode'
My code looks like this:
import sqlite3

conn = sqlite3.connect('geodata.sqlite')
cur = conn.cursor()
cur.execute('SELECT address FROM Locations')
for c in cur:
    print(c.decode('utf-8'))
Regardless of how many columns are selected, rows are returned as tuples. You would get the first element of the tuple the usual way.
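A minimal sketch of the fix, assuming the schema from the question; the value comes back as bytes because it was stored from a memoryview (a BLOB), so the decode belongs on the first element of the tuple:

import sqlite3

conn = sqlite3.connect('geodata.sqlite')
cur = conn.cursor()
cur.execute('SELECT address FROM Locations')
for c in cur:
    # c is a 1-tuple like (b'...',): index it first, then decode the bytes
    print(c[0].decode('utf-8'))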

TypeError: not all arguments converted during string formatting in python connecting with postgresql

It seems like there is no error in the code, but I have no idea why I'm getting this.
I was creating a simple GUI app which stores user details in a database (PostgreSQL) and also lets users search for entries in the database. This particular error occurs in the search() function, so I haven't added the rest of the code. If necessary I can add it.
Hope I will get some solutions from this community.
def search(id):
    conn = psycopg2.connect(dbname="postgres", user="postgres", password="1018", host="localhost", port="5432")
    mycursor = conn.cursor()
    query = '''select * from demotab where id=%s '''
    mycursor.execute(query, (id))
    row = mycursor.fetchone()
    print(row)
    conn.commit()
    conn.close()
Getting this error below
Exception in Tkinter callback
Traceback (most recent call last):
  File "c:\programdata\anaconda3\lib\tkinter\__init__.py", line 1702, in __call__
    return self.func(*args)
  File "appwithDB.py", line 51, in <lambda>
    search_button = Button(newframe, text="Search", command=lambda : search(entry_search.get()))
  File "appwithDB.py", line 71, in search
    mycursor.execute(query, (id))
TypeError: not all arguments converted during string formatting
The second argument to mycursor.execute must be an iterable containing the values to insert in the query.
You can use a list: mycursor.execute(query, [id])
or a one-element tuple: mycursor.execute(query, (id,))
Notice the comma: (id) is the same as id. In Python, the comma makes the tuple, not the parentheses.
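A quick demonstration of the difference, and of what likely happened here given that entry_search.get() returns a string:

id = "42"           # what entry_search.get() returns

print(type((id)))   # <class 'str'>   -- parentheses alone don't make a tuple
print(type((id,)))  # <class 'tuple'> -- the trailing comma does
print(type([id]))   # <class 'list'>  -- a list works too

With (id), the string itself is passed as the parameter sequence; since a string is iterable, each character counts as one parameter, so a two-character id supplies two parameters to a query with a single %s placeholder, which raises "not all arguments converted during string formatting".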

I am trying to read a CSV file with pandas and then search for a string in the first column, to use the whole row for calculations

I am reading a CSV file with pandas, and then I try to find a word like "Net income" in the first column. Then I want to use the whole row which has this structure: string/number/number/number/... to do some calculations with the numbers.
The problem is that find is not working.
data = pd.read_csv(name)
data.str.find('Net income')
I am using CSV files from here: Income Statement for Deutsche Lufthansa AG (DLAKF) from Morningstar.com
I found this: Python | Pandas Series.str.find() - GeeksforGeeks
Traceback (most recent call last):
  File "C:\Users\thoma\Desktop\python programme\manage.py", line 16, in <module>
    data.str.find('Net income')
  File "C:\Users\thoma\AppData\Roaming\Python\Python37\site-packages\pandas\core\generic.py", line 5067, in __getattr__
    return object.__getattribute__(self, name)
AttributeError: 'DataFrame' object has no attribute 'str'
So, it works now. But I still have a question. After using the describe function with pandas I get this:
<bound method NDFrame.describe of 2014-12 615
2015-12 612
2016-12 636
2017-12 713
2018-12 736
Name: Goodwill, dtype: object>
I have problems using the data. How can I, for example, use the second column here? I tried to make a new table:
new_Table['Goodwill'] = data1['Goodwill'].describe
but this does not work.
I would also like to add more "second" columns to new_Table.
Hi, you should filter the column first, like df['col name'].str.find(x); .str requires a Series, not a DataFrame.
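A minimal sketch of that idea; the filename is hypothetical, and str.contains is used here instead of str.find to get a boolean mask for selecting the row:

import pandas as pd

name = 'income_statement.csv'   # hypothetical filename
data = pd.read_csv(name)
first_col = data.iloc[:, 0]     # the first column, as a Series
mask = first_col.str.contains('Net income', na=False)
row = data[mask]                # the whole matching row, ready for calculations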
I recommend setting your header row if pandas isn't recognizing named rows in your CSV file.
Something like:
new_header = data.iloc[0]  # grab the first row for the header
data = data[1:]            # take the data less the header row
data.columns = new_header
From there you can summarize each column by name:
data['Net Income'].describe()
Edit: I looked at the CSV file; I recommend reshaping the data first before analyzing the columns. Something like...
data = data.transpose()
So in summation:
data = pd.read_csv(name)
data = data.transpose()        # flip the columns/rows
new_header = data.iloc[0]      # grab the first row for the header
data = data[1:]                # take the data less the header row
data.columns = new_header
data['Net Income'].describe()  # analyze
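The follow-up output ("dtype: object" and the bound-method repr) points at two remaining issues: describe and transpose are methods and need parentheses, and the values are read in as strings, so they must be converted before any arithmetic. A sketch of the conversion step, assuming the reshaped data from above ('Revenue' is a hypothetical column name):

import pandas as pd

name = 'income_statement.csv'            # hypothetical filename
data = pd.read_csv(name)
data = data.transpose()                  # note the ()
new_header = data.iloc[0]
data = data[1:]
data.columns = new_header

goodwill = pd.to_numeric(data['Goodwill'], errors='coerce')  # strings -> numbers
print(goodwill.describe())               # () again, otherwise you get the method object

new_table = pd.DataFrame({'Goodwill': goodwill})
new_table['Revenue'] = pd.to_numeric(data['Revenue'], errors='coerce')  # add more columns the same way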

How to iterate over column names with PyTable?

I have a large matrix (15000 rows x 2500 columns) stored using PyTables and am trying to work out how to iterate over the columns of a row. In the documentation I only see how to access each column by name manually.
I have columns like:
ID
X20160730_Day10_123a_2
X20160730_Day10_123b_1
X20160730_Day10_123b_2
The ID column value is a string like '10692.RFX7' but all other cell values are floats. This selection works and I can iterate the rows of results but I cannot see how to iterate over the columns and check their values:
from tables import *
import numpy

def main():
    h5file = open_file('carlo_seth.h5', mode='r', title='Three-file test')
    table = h5file.root.expression.readout
    condition = '(ID == b"10692.RFX7")'
    for row in table.where(condition):
        print(row['ID'].decode())
        for col in row.fetch_all_fields():
            print("{0}\t{1}".format(col, row[col]))
    h5file.close()

if __name__ == '__main__':
    main()
If I just iterate with "for col in row", nothing happens. With the code as above, I get a stack trace:
10692.RFX7
Traceback (most recent call last):
  File "tables/tableextension.pyx", line 1497, in tables.tableextension.Row.__getitem__ (tables/tableextension.c:17226)
KeyError: b'10692.RFX7'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "tables/tableextension.pyx", line 126, in tables.tableextension.get_nested_field_cache (tables/tableextension.c:2532)
KeyError: b'10692.RFX7'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "./read_carlo_pytable.py", line 31, in <module>
    main()
  File "./read_carlo_pytable.py", line 25, in main
    print("{0}\t{1}".format(col, row[col]))
  File "tables/tableextension.pyx", line 1501, in tables.tableextension.Row.__getitem__ (tables/tableextension.c:17286)
  File "tables/tableextension.pyx", line 133, in tables.tableextension.get_nested_field_cache (tables/tableextension.c:2651)
  File "tables/utilsextension.pyx", line 927, in tables.utilsextension.get_nested_field (tables/utilsextension.c:8707)
AttributeError: 'numpy.bytes_' object has no attribute 'encode'
Closing remaining open files:carlo_seth.h5...done
The KeyError happens because fetch_all_fields() yields the column values, not the column names, so row[col] looks up a value like b'10692.RFX7' as if it were a column. You can access a column value by name in each row, e.g.:
for row in table:
    print(row["X20160730_Day10_123a_2"])
To iterate over all columns, get the names first:
names = table.coldescrs.keys()
for row in table:
    for name in names:
        print(name, row[name])
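Putting this together with the where() query from the question, a minimal sketch (assuming the file and table layout shown above):

from tables import open_file

h5file = open_file('carlo_seth.h5', mode='r')
table = h5file.root.expression.readout
names = table.coldescrs.keys()   # column names

for row in table.where('(ID == b"10692.RFX7")'):
    for name in names:
        if name == 'ID':
            print(row[name].decode())   # the ID column holds bytes
        else:
            print("{0}\t{1}".format(name, row[name]))

h5file.close()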

Cassandra ODBC parameter binding

I've installed DataStax Community Edition, and added DataStax ODBC connector. Now I try to access the database via pyodbc:
import pyodbc

connection = pyodbc.connect('Driver=DataStax Cassandra ODBC Driver;Host=127.0.0.1',
                            autocommit=True)
cursor = connection.cursor()
cursor.execute('CREATE TABLE Test (id INT PRIMARY KEY)')
cursor.execute('INSERT INTO Test (id) VALUES (1)')
for row in cursor.execute('SELECT * FROM Test'):
    print row
It works fine and returns
>>> (1, )
However when I try
cursor.execute('INSERT INTO Test (id) VALUES (:id)', {'id': 2})
I get
>>> Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "test.py", line 11, in <module>
    cursor.execute('INSERT INTO Test (id) VALUES (:id)', {'id': 2})
pyodbc.ProgrammingError: ('The SQL contains 0 parameter markers, but 1 parameters were supplied', 'HY000')
Alternative forms don't work either:
cursor.execute('INSERT INTO Test (id) VALUES (:1)', (2))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "test.py", line 11, in <module>
    cursor.execute('INSERT INTO Test (id) VALUES (?)', (2))
pyodbc.ProgrammingError: ('The SQL contains 0 parameter markers, but 1 parameters were supplied', 'HY000')
and
cursor.execute('INSERT INTO Test (id) VALUES (?)', (2))
>>> Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
pyodbc.Error: ('HY000', "[HY000] [DataStax][CassandraODBC] (15) Error while preparing a query in Cassandra: [33562624] : line 1:31 no viable alternative at input '1' (...Test (id) VALUES (:[1]...) (15) (SQLPrepare)")
My Cassandra version is 2.2.3, ODBC driver is from https://downloads.datastax.com/odbc-cql/1.0.1.1002/
According to the pyodbc documentation, your query should be
cursor.execute('INSERT INTO Test (id) VALUES (?)', 2)
More details on pyodbc Insert.
As per the comments, there is a thread which says it is an open bug in pyodbc:
BUG
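For reference, a minimal sketch of the qmark paramstyle pyodbc expects; whether binding actually succeeds here depends on the DataStax driver (see the bug above), so the last lines show a fallback that interpolates a validated integer:

import pyodbc

connection = pyodbc.connect('Driver=DataStax Cassandra ODBC Driver;Host=127.0.0.1',
                            autocommit=True)
cursor = connection.cursor()

cursor.execute('INSERT INTO Test (id) VALUES (?)', 2)      # a bare value is accepted
cursor.execute('INSERT INTO Test (id) VALUES (?)', (3,))   # so is a one-element tuple

safe_id = int(4)  # validate before interpolating, to avoid injection
cursor.execute('INSERT INTO Test (id) VALUES ({0})'.format(safe_id))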
