I am trying to use sqlite3's executemany() to insert multiple values with Python3.
Code:
import sqlite3
conn = sqlite3.connect('rssnewsdata.db')
c = conn.cursor()
entries = [
    ('url1', 1234, 'title1', 'summary1', 'feedurl1'),
    ('url2', 1235, 'title2', 'summary2', 'feedurl2'),
    ('url3', 1236, 'title3', 'summary3', 'feedurl3'),
    ('url4', 1237, 'title4', 'summary4', 'feedurl4')
]
c.executemany('INSERT INTO entries VALUES (?, ?, ?, ?, ?)', entries)
The db file exists, the table exists, I can use Python3 to SELECT from it, so connecting to it is not a problem. The columns are of TEXT, INTEGER, TEXT, TEXT, TEXT type.
Python reports no errors. What is missing?
You need to call
conn.commit()
after the insert. sqlite3 runs your statements inside a transaction, and uncommitted changes are discarded when the connection closes.
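For completeness, a minimal end-to-end sketch (using an in-memory database so it runs anywhere; the column names in the CREATE TABLE are guesses based on the tuples above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory DB, illustration only
c = conn.cursor()
c.execute("CREATE TABLE entries (url TEXT, ts INTEGER, title TEXT, summary TEXT, feedurl TEXT)")

entries = [
    ('url1', 1234, 'title1', 'summary1', 'feedurl1'),
    ('url2', 1235, 'title2', 'summary2', 'feedurl2'),
]
c.executemany('INSERT INTO entries VALUES (?, ?, ?, ?, ?)', entries)
conn.commit()  # without this, the inserts are rolled back when the connection closes

row_count = c.execute('SELECT COUNT(*) FROM entries').fetchone()[0]
```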
I have a sqlite3 database where the first column is the id and set as primary key with auto increment. I'm trying to insert the values from my python dictionary as such:
value = {'host': [], 'drive': [], 'percent': []}
soup = bs(contents, 'html.parser')
for name in soup.find_all("td", class_="qqp0_c0"):
    hostname = name.parent.find('td').get_text()
    drive = name.parent.find('td', class_="qqp0_c1").get_text()
    used_percent = name.parent.find('td', class_="qqp0_c5").get_text()
    value['host'].append(hostname)
    value['drive'].append(drive)
    value['percent'].append(used_percent)

# cur.executemany("INSERT INTO scrap VALUES (?, ?, ?)", hostname, drive, used_percent)
cur.execute("INSERT INTO scrap VALUES (?, ?, ?);", value)
I keep getting errors, my latest error seems to imply it needs an id value:
cur.execute("INSERT INTO scrap VALUES (?, ?, ?);", value)
sqlite3.OperationalError: table scrap has 4 columns but 3 values were supplied
Do I need to supply an id number?
This is the db schema:
CREATE TABLE scrap (
id INTEGER PRIMARY KEY AUTOINCREMENT,
hostname VARCHAR(255),
drive VARCHAR(255),
perc VARCHAR(255)
);
If the id column is auto-incrementing you don't need to supply a value for it, but you do need to "tell" the database that you aren't inserting it. Note that in order to bind a dictionary, you need to specify the placeholders by name:
cur.execute("INSERT INTO scrap (hostname, drive, perc) VALUES (:host, :drive, :percent);", value)
EDIT:
Following up on the discussion in the comments: the value dictionary should map each placeholder name to its intended value, not to a list of values:
soup = bs(contents, 'html.parser')
for name in soup.find_all("td", class_="qqp0_c0"):
    hostname = name.parent.find('td').get_text()
    drive = name.parent.find('td', class_="qqp0_c1").get_text()
    used_percent = name.parent.find('td', class_="qqp0_c5").get_text()
    value = {'host': hostname, 'drive': drive, 'percent': used_percent}
    cur.execute("INSERT INTO scrap (hostname, drive, perc) VALUES (:host, :drive, :percent);", value)
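If you build one dict per row like this, executemany() also accepts a sequence of such dicts with the same named placeholders, which avoids a round trip per row. A self-contained sketch (in-memory database, made-up host data; the scraping part is omitted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory DB, illustration only
cur = conn.cursor()
cur.execute(
    "CREATE TABLE scrap ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, "
    "hostname VARCHAR(255), drive VARCHAR(255), perc VARCHAR(255))"
)

# One dict per row, keyed by the named placeholders (sample values)
rows = [
    {"host": "web01", "drive": "C:", "percent": "73%"},
    {"host": "web02", "drive": "D:", "percent": "41%"},
]
cur.executemany(
    "INSERT INTO scrap (hostname, drive, perc) VALUES (:host, :drive, :percent)",
    rows,
)
conn.commit()

count = cur.execute("SELECT COUNT(*) FROM scrap").fetchone()[0]
```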
I'm trying to write a piece of code that inserts an object I've created (it stores data in a very specific way) into an SQL table as a BLOB, and it keeps giving me an 'sqlite3.InterfaceError: Error binding parameter 1 - probably unsupported type.' error.
Have any of you encountered something similar before? Do you have any ideas how to deal with it?
conn = sqlite3.connect('my_database.db')
c = conn.cursor()
params = (self.question_id, i)  # i is the object in question
c.execute('INSERT INTO ' + self.current_test_name + ' VALUES (?, ?)', params)
conn.commit()
conn.close()
For starters, this would be a cleaner way to write the execute statement:
c.execute("INSERT INTO " + self.current_test_name + " VALUES (?, ?)", (self.question_id, i))
You are also not naming the columns you are inserting into (assuming self.current_test_name is the table name), which breaks as soon as the table definition changes.
Also, is the column in the database set up to handle the data types of the values you provide for self.question_id and i? (Not expecting TEXT when you provided INT?)
Example of a working script to insert into a table that has 2 columns named test and test2:
import sqlite3
conn = sqlite3.connect('my_database.db')
c = conn.cursor()
c.execute("CREATE TABLE IF NOT EXISTS test(test INT, test2 INT)")
conn.commit()
for i in range(10):
    params = (i, i)  # i is the value to insert
    c.execute("INSERT INTO test (test, test2) VALUES (?, ?)", params)
conn.commit()
conn.close()
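The InterfaceError itself means sqlite3 cannot bind an arbitrary Python object directly; only None, int, float, str, and bytes are supported. One common workaround (a sketch, not necessarily what the original code intended; the Answer class and table layout are hypothetical) is to serialize the object first, e.g. with pickle, and store the resulting bytes in a BLOB column:

```python
import pickle
import sqlite3

class Answer:
    # Hypothetical stand-in for the custom data object
    def __init__(self, text):
        self.text = text

conn = sqlite3.connect(":memory:")  # in-memory DB, illustration only
c = conn.cursor()
c.execute("CREATE TABLE results (question_id INT, payload BLOB)")

obj = Answer("42")
# pickle.dumps() turns the object into bytes, which sqlite3 can bind as a BLOB
c.execute("INSERT INTO results VALUES (?, ?)", (1, pickle.dumps(obj)))
conn.commit()

blob = c.execute("SELECT payload FROM results").fetchone()[0]
restored = pickle.loads(blob)  # round-trips back to an Answer instance
```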
I am taking the INS_QUERY value from an audit table.
INS_QUERY = ("INSERT INTO sales.dbo.Customer_temp (ID,FIRST_NM,LAST_NM,CITY,COUNTRY,PHONE) VALUES ('%s','%s','%s','%s','%s','%s')" % (d[0],d[1],d[2],d[3],d[4],d[5]))
cursor = cs.cursor()
cursor.execute(INS_QUERY)
cs.commit();
If I hard-code the INS_QUERY value in the script it works fine, but if I take the same value from the table it gives the error message below.
Error Message:
pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC SQL
Server Driver][SQL Server]Incorrect syntax near 'INSERT INTO
sales.dbo.Customer_temp (ID,FIRST_NM,LAST_NM,CITY,COUNTRY,PHONE)
VALUES ('%s','%s','%s','%s','%s','%s')'. (102) (SQLExecDirectW)")
Audit Table Insert query:
INSERT INTO DBO.AUDIT_TABLE(INST_QUERY) VALUES ('("INSERT INTO sales.dbo.Customer_temp (ID,FIRST_NM,LAST_NM,CITY,COUNTRY,PHONE) VALUES ("%s","%s","%s","%s","%s","%s")" % (d[0],d[1],d[2],d[3],d[4],d[5]))')
You are using the pyodbc connector, and its parameter placeholder syntax is ?, not %s.
A second detail: when a parameter is of type str, there is no need to wrap the placeholder in single quotes ('?'); quoting is handled for you automatically.
Can you try this approach and tell me how it works for you?
INS_QUERY = "INSERT INTO sales.dbo.Customer_temp (ID, FIRST_NM, LAST_NM, CITY, COUNTRY, PHONE) VALUES (?, ?, ?, ?, ?, ?)"
cursor = cs.cursor()
cursor.execute(INS_QUERY, (d[0], d[1], d[2], d[3], d[4], d[5]))
cs.commit()
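It is worth spelling out why the hard-coded version worked: when the query text lives in the script, Python evaluates the % formatting before the string ever reaches the driver; when the same text is read back from the audit table, it arrives as a literal string, %s placeholders and all, and SQL Server cannot parse it. A tiny illustration in plain Python (sample data is made up):

```python
# The values the audit-table row refers to (made-up sample data)
d = ["1", "Ann", "Lee", "Oslo", "NO", "555"]

# Hard-coded in the script: Python substitutes the values before execution
hardcoded = ("INSERT INTO sales.dbo.Customer_temp (ID,FIRST_NM,LAST_NM,CITY,COUNTRY,PHONE) "
             "VALUES ('%s','%s','%s','%s','%s','%s')" % (d[0], d[1], d[2], d[3], d[4], d[5]))

# Read back from the audit table: the % expression stored as text is never
# evaluated, so the driver receives the raw placeholders
from_audit = ("INSERT INTO sales.dbo.Customer_temp (ID,FIRST_NM,LAST_NM,CITY,COUNTRY,PHONE) "
              "VALUES ('%s','%s','%s','%s','%s','%s')")

formatted = "%s" not in hardcoded            # True: values were substituted
still_has_placeholders = "%s" in from_audit  # True: this is what SQL Server rejects
```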
I am trying to compile a query using the DB2 dialect ibm_db_sa. After compiling, it binds ? instead of a named parameter.
I have tried the same with the MSSQL and Oracle dialects, and they give the expected results.
import ibm_db_sa
from sqlalchemy import Table, MetaData, Column, Integer
from sqlalchemy import bindparam, literal_column, select
from sqlalchemy.dialects import mssql, oracle

tab = Table('customers', MetaData(), Column('cust_id', Integer, primary_key=True))
stmt = select([tab]).where(literal_column('cust_id') == bindparam('cust_id'))

ms_sql = stmt.compile(dialect=mssql.dialect())
oracle_q = stmt.compile(dialect=oracle.dialect())
db2 = stmt.compile(dialect=ibm_db_sa.dialect())
If I print all 3 queries, the output is:
MSSQL => SELECT customers.cust_id FROM customers WHERE cust_id = :cust_id
Oracle => SELECT customers.cust_id FROM customers WHERE cust_id = :cust_id
DB2 => SELECT customers.cust_id FROM customers WHERE cust_id = ?
Is there any way to get DB2 query same as others ?
The docs that you reference contain the solution:
"In the case that a plain SQL string is passed, and the underlying DBAPI accepts positional bind parameters, a collection of tuples or individual values in *multiparams may be passed:"
conn.execute(
    "INSERT INTO table (id, value) VALUES (?, ?)",
    (1, "v1"), (2, "v2")
)
conn.execute(
    "INSERT INTO table (id, value) VALUES (?, ?)",
    1, "v1"
)
For Db2, you just pass a comma-separated list of values, as documented in the second example:
conn.execute(stmt, 1, "2nd value", storeID, whatever)
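As background: there is generally no dialect option to change this, because the placeholder style follows the underlying DBAPI's paramstyle, and ibm_db uses qmark. If the goal is just to know which value belongs to which ?, the compiled statement exposes the parameter names in placeholder order. A sketch using the built-in sqlite dialect, which is also positional/qmark and so behaves like ibm_db_sa here (table and column names are illustrative):

```python
from sqlalchemy import Table, MetaData, Column, Integer, bindparam, select
from sqlalchemy.dialects import sqlite

tab = Table('customers', MetaData(), Column('cust_id', Integer, primary_key=True))
stmt = select(tab).where(tab.c.cust_id == bindparam('cust_id'))

# The pysqlite dialect is positional (qmark), like ibm_db_sa
compiled = stmt.compile(dialect=sqlite.dialect())

sql_text = str(compiled)                    # '... WHERE customers.cust_id = ?'
ordered_names = list(compiled.positiontup)  # bind parameter names in ? order
```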
When I try to write a dataframe to ms sql server, like this:
cnxn = sqlalchemy.create_engine("mssql+pyodbc://@HOST:PORT/DATABASE?driver=SQL+Server")
df.to_sql('DATABASE.dbo.TABLENAME', cnxn, if_exists='append', index=False)
I get the following error:
ProgrammingError: (pyodbc.ProgrammingError) ('42S22', "[42S22] [Microsoft][ODBC SQL Server Driver][SQL Server]Invalid column name 'DateDay'. (207) (SQLExecDirectW)") [SQL: 'INSERT INTO [DATABASE.dbo.TABLENAME] ([DateDay], [ID], [Code], [Forecasted], [Lower95CI], [Upper95CI], [ForecastMethod], [ForecastDate]) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)'] [parameters: ((datetime.datetime(2017, 12, 10, 0, 0), '8496', "'IO'", 197, 138, 138, 'ARIMAX',...
It seems that the column name is producing the error? It is looking for [DateDay] but finds 'DateDay', with the quotes. How can I fix this?
I am using python 3.6 on a windows machine, pandas 0.22, sqlalchemy 1.1.13 and pyodbc 4.0.17
UPDATE -- SOLUTION FOUND:
I realized that my mistake was in the table name, which included the database prefix: 'DATABASE.dbo.TABLENAME'. When I removed the DATABASE.dbo part, it worked:
df.to_sql('TABLENAME', cnxn, if_exists='append', index=False)
The problem was that I added the database name when executing the df.to_sql command, which was not needed since I had already established a connection to that database. This worked:
df.to_sql('TABLENAME', cnxn, if_exists='append', index=False)
The problem can also happen if the column name defined in the database table differs from the one defined in the dataframe. For example, 'items' and 'itens' will not match, and the mismatch will cause an error when your script tries to write the dataframe to the database.
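One way to catch such a mismatch up front is to compare the dataframe's columns against the table's before calling to_sql. A sketch using an in-memory SQLite engine (table and column names are made up for illustration):

```python
import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine("sqlite://")  # in-memory engine, illustration only

# Create a target table whose text column is spelled 'items'
pd.DataFrame({"items": ["a"], "qty": [1]}).to_sql("inventory", engine, index=False)

# A dataframe with the column misspelled as 'itens'
df = pd.DataFrame({"itens": ["b"], "qty": [2]})

# Columns present in the frame but absent from the table would break the append
table_cols = {c["name"] for c in sqlalchemy.inspect(engine).get_columns("inventory")}
missing = set(df.columns) - table_cols
```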