How do I sort this one out?
code:
c.execute("INSERT INTO INPUT33 (NAME) VALUES (?);", (name3,))
c.execute("select MAX(rowid) from [input33];")
conn.commit()
for rowid in cursor:break
for elem in rowid:
m = elem
print(m)
c.execute("select MAX(rowid) from [input];")
for rowid in c:break
for elem in rowid:
m = elem
c.execute("DELETE FROM input WHERE rowid = ?", (m,))
conn.commit()
After running this, I get this:
sqlite3.OperationalError: database is locked
Taken from the Python docs:
When a database is accessed by multiple connections, and one of the processes modifies the database, the SQLite database is locked until that transaction is committed. The timeout parameter specifies how long the connection should wait for the lock to go away until raising an exception. The default for the timeout parameter is 5.0 (five seconds).
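As a hedged illustration of that timeout parameter (the database file name below is a placeholder, not taken from the question), opening the connection with a longer timeout and committing promptly looks like this:

import sqlite3

# Placeholder file name; the key point is the timeout argument, which makes
# this connection wait up to 30 s for another connection's write lock to be
# released instead of raising "database is locked" after the default 5 s.
conn = sqlite3.connect("example.db", timeout=30.0)
c = conn.cursor()

c.execute("INSERT INTO input33 (NAME) VALUES (?);", ("example",))
conn.commit()  # commit promptly so other connections are not kept waiting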
Is it possible to pass more than one query inside a double with statement with psycopg2 (first opening the connection, then the cursor)?
E.g. to replace:
import psycopg2

def connector():
    return psycopg2.connect(**DB_DICT_PARAMS)

########

sql_update1 = ("UPDATE table SET array = %s::varchar[], "
               "array_created = true, timestamp = now() AT TIME ZONE 'UTC' "
               "WHERE id = %s")

sql_update2 = ("UPDATE table SET json_field = %s "
               "WHERE id = %s")

with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update1, [stringArray, ID])

with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update2, [jsonString, ID])
by:
#(...)

sql_update1 = ("UPDATE table SET array = %s::varchar[], "
               "array_created = true, timestamp = now() AT TIME ZONE 'UTC' "
               "WHERE id = %s")

sql_update2 = ("UPDATE table SET json_field = %s "
               "WHERE id = %s")

with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update1, [stringArray, ID])
        curs.execute(sql_update2, [jsonString, ID])
What if the second query needs the first one to be completed beforehand, and what if not?
In the shown case, they will definitely update the same record (i.e. row) in the database but not the same fields (i.e. attributes or columns).
Is this precisely authorized because the two SQL statements are committed sequentially, i.e. the first finishes first, and only then is the second executed?
Or is it actually forbidden because they can be executed in parallel, each query without knowing the state of the other at any instant t?
There are no fancy triggers or procedures in the DB. Let's keep it simple for now.
(Please note that I have purposefully written two queries here, where a single one would have fit perfectly, but that's not always the case, as some computations sometimes happen between saving one result and another to the same record in the DB.)
If you want them to execute at the same time, simply put them in the same string separated by a semicolon. I'm a little rusty, but I think the following should work:
sql_updates = ("UPDATE table SET array = %s::varchar[], "
"array_created = true, timestamp = now() AT TIME ZONE 'UTC' "
"WHERE id = %s;"
"UPDATE table SET json_field = %s "
"WHERE id = %s;")
with connector() as conn:
with conn.cursor() as curs:
curs.execute(sql_updates, [stringArray, ID, jsonString, ID])
Better avoid this:
with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update1, [stringArray, ID])

with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update2, [jsonString, ID])
Opening a database connection is pretty slow compared to doing a query, so it is much better to reuse it rather than opening a new one for each query. If your program is a script, typically you'd just open the connection at startup and close it at exit.
However, if your program spends a long time waiting between queries, and there will be many instances running, then it would be better to close the connection to not consume valuable RAM on the postgres server for doing nothing. This is common in client/server applications where the client mostly waits for user input. If there are many clients you can also use connection pooling, which offers the best of both worlds at the cost of a bit extra complexity. But if it's just a script, no need to bother with that.
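If connection pooling ever becomes worthwhile, a minimal sketch using psycopg2's built-in pool could look like the following; DB_DICT_PARAMS is assumed to be the same dict of connection parameters used by connector() in the question:

from psycopg2.pool import SimpleConnectionPool

# DB_DICT_PARAMS is assumed from the question: the usual connect() kwargs.
pool = SimpleConnectionPool(1, 5, **DB_DICT_PARAMS)  # min 1, max 5 connections

conn = pool.getconn()          # borrow a connection from the pool
try:
    with conn.cursor() as curs:
        curs.execute("SELECT 1")
    conn.commit()
finally:
    pool.putconn(conn)         # return it to the pool instead of closing it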
with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update1, [stringArray, ID])
        curs.execute(sql_update2, [jsonString, ID])
This would be faster. You don't need to build a new cursor; you can reuse the same one. Note that if you don't fetch the results of the first query before reusing the cursor, you won't be able to do so after executing the second query, because a cursor only stores the results of the last query. Since these are updates, there are no results, unless you want to check the rowcount to see if it did update a row.
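As a small illustration of that rowcount check (sql_update1, sql_update2 and the parameters are the ones from the question):

with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update1, [stringArray, ID])
        if curs.rowcount == 0:
            # No row matched WHERE id = %s; decide how you want to handle that.
            print("first update matched no rows")
        curs.execute(sql_update2, [jsonString, ID])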
What if the second query needs the first one to be completed beforehand, and what if not?
Don't care. execute() processes the whole query before returning, so by the time Python gets to the next bit of code, the query is done.
Is this precisely authorized because the two SQL statements are committed sequentially, i.e. the first finishes first, and only then is the second executed?
Yes
Or is it actually forbidden because they can be executed in parallel, each query without knowing the state of the other at any instant t?
If you want to execute several queries in parallel, for example because a query takes a while and you want to execute it while still running other queries, then you need several DB connections and of course one python thread for each because execute() is blocking. It's not used often.
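Purely for illustration, a rough sketch of that parallel pattern, reusing connector() and the two statements from the question (all those names are assumed to exist as defined above):

import threading

def run_update(sql, params):
    # Each thread opens its own connection, so the two statements can
    # genuinely run in parallel on the server.
    with connector() as conn:
        with conn.cursor() as curs:
            curs.execute(sql, params)

t1 = threading.Thread(target=run_update, args=(sql_update1, [stringArray, ID]))
t2 = threading.Thread(target=run_update, args=(sql_update2, [jsonString, ID]))
t1.start(); t2.start()
t1.join(); t2.join()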
I'm trying to create a pop function getting a row of data from a sqlite database and deleting that same row. I would like to not have to create an ID column so I am using ROWID. I want to always get the first row and return it. This is the code I have:
import sqlite3

db = sqlite3.connect("Test.db")
c = db.cursor()

def sqlpop():
    c.execute("SELECT * from DATA WHERE ROWID=1")
    data = c.fetchall()
    c.execute("DELETE from DATA WHERE ROWID=1")
    db.commit()
    return data
When I call the function, it gets the first item correctly, but after the first call the function returns nothing, like this:
>>> sqlpop()
[(1603216325, 'placeholder IP line 124', 'placeholder Device line 124', '1,2,0', 1528, 1564)]
>>> sqlpop()
[]
>>> sqlpop()
[]
>>> sqlpop()
[]
what do I need to change for this function to work correctly?
Update:
Using what Schwern said, I got the function to work:
def sqlpop():
    c.execute("SELECT * from DATA ORDER BY ROWID LIMIT 1")
    data = c.fetchone()
    c.execute("DELETE from DATA ORDER BY ROWID LIMIT 1")
    db.commit()
    return data
rowid is not the row order; it is a unique identifier for the row, created by SQLite unless you say otherwise.
SQL rows have no inherent order. You could grab just one row...
select * from table limit 1;
But you'll get them in no guaranteed order. And without a rowid you have no way to identify it again to delete it.
If you want to get the "first" row you must define what "first" means. To do that you need something to order by, for example a timestamp, or perhaps an auto-incrementing integer. You cannot use rowid; rowids are not guaranteed to be assigned in any particular order.
select *
from table
order by created_at
limit 1;
So long as created_at is indexed, that should work fine. Then delete by its rowid.
You also don't need to use fetchall to fetch one row, use fetchone. In general, fetchall should be avoided as it risks consuming all your memory by slurping all the data in at once. Instead, use iterators.
for row in c.execute(...):
    ...
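Putting those pieces together, a hedged sketch of such a pop (this assumes you add a created_at column, which the question's table does not yet have):

def sqlpop():
    # "First" here means the oldest created_at, with rowid as a tie-breaker.
    c.execute("SELECT rowid, * FROM DATA ORDER BY created_at, rowid LIMIT 1")
    row = c.fetchone()
    if row is None:
        return None                    # table is empty
    c.execute("DELETE FROM DATA WHERE rowid = ?", (row[0],))
    db.commit()
    return row[1:]                     # drop the rowid we only fetched to delete by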
We are using a psycopg2 cursor to fetch jsonb data and process it, but whenever a new thread or process comes along, it should not fetch and process the same records that the first process or thread is already working on.
For that we have tried to use FOR UPDATE, but we just want to know whether we are using the correct syntax or not.
con = self.dbPool.getconn()
cur = con.cursor()
sql = """SELECT jsondoc FROM %s WHERE jsondoc #> %s"""
if 'sql' in queryFilter:
    sql += queryFilter['sql']
When we print this query, it will be shown as below:
Query: "SELECT jsondoc FROM %s WHERE jsondoc #> %s AND (jsondoc ->> ‘claimDate')::float <= 1536613219.0 AND ( jsondoc ->> ‘claimstatus' = ‘done' OR jsondoc ->> 'claimstatus' = 'failed' ) limit 2 FOR UPDATE"
cur.execute(sql, (AsIs(self.tablename), Json(queryFilter),))
cur.execute()
dbResult = cur.fetchall()
Please help us clarify the syntax, and if it is correct, explain how this query locks the records fetched by the first thread.
Thanks,
Sanjay.
If this example query is executed
select *
from my_table
order by id
limit 2
for update; -- wrong
then the two resulting rows are locked until the end of the transaction (i.e. the next connection.rollback() or connection.commit(), or until the connection is closed). If another transaction tries to run the same query during this time, it will be stopped until the two rows are unlocked. So it is not the behaviour you expect. You should add the skip locked clause:
select *
from my_table
order by id
limit 2
for update skip locked; -- correct
With this clause the second transaction will skip the locked rows and return the next two ones without waiting.
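As a hedged psycopg2 sketch of that worker pattern (the table and column names are illustrative stand-ins, and DB_PARAMS is a placeholder for your own connection parameters):

import psycopg2

conn = psycopg2.connect(**DB_PARAMS)    # DB_PARAMS: your own connection kwargs
with conn:                              # commits (or rolls back) on block exit
    with conn.cursor() as cur:
        cur.execute("""
            SELECT id, jsondoc
            FROM my_table
            ORDER BY id
            LIMIT 2
            FOR UPDATE SKIP LOCKED
        """)
        rows = cur.fetchall()           # rows no other worker currently holds
        # ...process rows and mark them done while still inside this
        # transaction, so another worker cannot grab them in the meantime.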
Read about it in the documentation.
http://initd.org/psycopg/docs/extras.html
psycopg2.extras.execute_values has a page_size parameter.
I'm doing an INSERT INTO... ON CONFLICT... with RETURNING ID.
The problem is that cursor.fetchall() gives me back only the last "page", that is, 100 ids (the default page_size).
Without modifying the page_size parameter, is it possible to iterate over the results, to get the total number of rows updated?
The best and shortest answer would be to use fetch=True as a parameter, as stated here:
all_ids = psycopg2.extras.execute_values(cur, query, data, template=None, page_size=10000, fetch=True)
# all_ids will return all affected rows with array like this [ [1], [2], [3] .... ]
I ran into the same issue. I work around it by batching my calls to execute_values(). I'll set my_page_size=1000, then iterate over my values, filling argslist until I have my_page_size items. Then I'll call execute_values(cur, sql, argslist, page_size=my_page_size) and iterate over cur to get those ids.
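A rough sketch of that batching approach; my_page_size, sql, cur and values are placeholders for the poster's own names, not anything defined in the question:

import psycopg2.extras

my_page_size = 1000
ids = []
argslist = []

for value in values:                       # `values` is whatever you are inserting
    argslist.append(value)
    if len(argslist) >= my_page_size:
        psycopg2.extras.execute_values(cur, sql, argslist, page_size=my_page_size)
        ids.extend(row[0] for row in cur)  # RETURNING id rows for this batch
        argslist = []

if argslist:                               # flush the final partial batch
    psycopg2.extras.execute_values(cur, sql, argslist, page_size=my_page_size)
    ids.extend(row[0] for row in cur)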
Without modifying the page_size parameter, is it possible to iterate over the results, to get the total number of rows updated?
Yes.
import psycopg2
import psycopg2.extras

try:
    conn = psycopg2.connect(...)
    cur = conn.cursor()
    query = """
        WITH
        items (eggs) AS (VALUES %s),
        inserted AS (
            INSERT INTO spam (eggs)
            SELECT eggs FROM items
            ON CONFLICT (eggs) DO NOTHING
            RETURNING id
        )
        SELECT id FROM spam
        WHERE eggs IN (SELECT eggs FROM items)
        UNION
        SELECT id FROM inserted
        """
    eggs = (('egg_{}'.format(i % 666),) for i in range(10_000))
    ids = psycopg2.extras.execute_values(cur, query, argslist=eggs, fetch=True)
    # Do whatever with `ids`. `len(ids)` I suppose?
finally:
    if conn:
        cur.close()
        conn.close()
I overkilled the query on purpose to address some gotchas:
WITH items (eggs) AS (VALUES %s) is done to be able to use argslist in two places at once;
RETURNING with ON CONFLICT will return only the ids which were actually inserted; conflicting ones are omitted from INSERT's direct results. To solve that, all this SELECT ... WHERE ... UNION SELECT mumbo jumbo is done;
to get all the values which you asked for: ids = psycopg2.extras.execute_values(..., fetch=True).
A horrible interface oddity considering that all other cases are done like
cur.execute(...) # or other kind of `execute`
rows = cur.fetchall() # or other kind of `fetch`
So if you want only the number of inserted rows then do
try:
    conn = psycopg2.connect(...)
    cur = conn.cursor()
    query = """
        INSERT INTO spam (eggs)
        VALUES %s
        ON CONFLICT (eggs) DO NOTHING
        RETURNING id
        """
    eggs = (('egg_{}'.format(i % 666),) for i in range(10_000))
    ids = psycopg2.extras.execute_values(cur, query, argslist=eggs, fetch=True)
    print(len(ids))
finally:
    if conn:
        cur.close()
        conn.close()
How do I avoid creating the table again and again in Python with an Oracle database?
Every time I call the function, the CREATE TABLE query is executed, and the data is not inserted because the table already exists.
import cx_Oracle
import time

def Database(name, idd, contact):
    try:
        con = cx_Oracle.connect('arslanhaider/12345@AHS:1521/XE')
        cur = con.cursor()
        cur.execute("CREATE TABLE Mazdoor(Name varchar(255), EmpID INT, ContactNo INT)")
        cur.execute("INSERT INTO Mazdoor VALUES(:1, :2, :3)", (name, idd, contact))
        con.commit()
        cur.execute("SELECT * FROM Mazdoor")
        data = cur.fetchall()
        for row in data:
            print(row)
    except cx_Oracle.Error:
        if con:
            con.rollback()
    finally:
        if con:
            con.close()

if __name__ == "__main__":
    while True:
        n = input("Enter Name::")
        i = input("Enter Idd::")
        c = input("Enter Contact No::")
        Database(n, i, c)
        time.sleep(3)
        print("Record Successfully Stored......\n\n")
"Obviously, (koff, koff ...) you must know what you are doing!"
If you ask Oracle to CREATE TABLE, knowing in advance that the table might already exist, then your logic should at least be prepared ... through the use of multiple try..except..finally blocks as appropriate, to handle this situation.
If the CREATE TABLE statement fails because the table already exists, then you can be quite sure that an exception will be thrown, and that you, in the relevant except clause, can determine that "this, indeed, is the reason." You might reasonably then choose to ignore this possibility, and to "soldier on."
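For instance, a hedged sketch of that idea with cx_Oracle; ensure_mazdoor_table is a hypothetical helper, and the check relies on ORA-00955 ("name is already used by an existing object") being the error raised when the table already exists:

import cx_Oracle

def ensure_mazdoor_table(con):
    # Hypothetical helper: try the CREATE TABLE once, tolerate "already exists".
    cur = con.cursor()
    try:
        cur.execute("CREATE TABLE Mazdoor(Name varchar(255), EmpID INT, ContactNo INT)")
    except cx_Oracle.DatabaseError as exc:
        error, = exc.args
        if error.code != 955:   # ORA-00955: name is already used by an existing object
            raise               # any other failure is a real problem, so re-raise
        # Otherwise the table already exists, and we simply soldier on.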