SQLite3Cursor object: how to access row values - Pharo

I am fairly new to Pharo and trying hard to get a grip on it.
I installed the SQLite3 package and am now trying to connect to a local file-based database.
I followed the getting-started tutorial of the community-owned SQLite3 database client. Unfortunately, only brief documentation is provided.
Can someone give me an example of how to iterate through an SQLite3Cursor object and print the rows, e.g. to the Transcript?
Secondly, I would like to know how I can access certain row values.
I appreciate any help for a newbie.
Thank you.
If I evaluate
cursor := connection execute: 'SELECT * FROM person;'
all persons are put into that cursor object; basically I get n SQLite3Row objects within that cursor. If I inspect cursor next, I see the columns and values of that row, but how can I display it in the Transcript?
My second question is: how can I iterate through the entire cursor object and send the output to the Transcript?

Basically you can do it like this:
| conn rs rows |
conn := SQLite3Connection memory.
conn open.
conn execute: 'CREATE TABLE person(
	id INTEGER PRIMARY KEY AUTOINCREMENT,
	name TEXT NOT NULL,
	age INTEGER NOT NULL
);'.
conn execute: 'INSERT INTO person(name,age) VALUES (?2, ?1);' value: 25 value: 'Cyril'.
conn execute: 'INSERT INTO person(name,age) VALUES (?2, ?1);' value: 24 value: 'Marc'.
rs := conn execute: 'SELECT * FROM person;'.
rows := rs rows. "an OrderedCollection of SQLite3Row objects"
rows do: [ :row |
	"access a column value by name, e.g. (row at: 'name') or (row at: 'age')"
	Transcript show: (row at: 'name'); cr ]
The best place to look for more examples is the SQLite3-Core-Tests package.

Related

Update multiple columns with bind variables using cx_Oracle.executemany()

I have been trying to update some columns of a database table using cx_Oracle in Python. I created a list named check_to_process, which is the result of another SQL query. I build log_msg in the program based on success or failure and want to update it in the table, but only for the records in the check_to_process list. When I update the table without using a bind variable (MESSAGE = %s), it works fine. But when I try to use a bind variable to update the columns, it gives me this error:
cursor.executemany("UPDATE apps.SLCAP_CITI_PAYMENT_BATCH SET MESSAGE = %s, "
TypeError: an integer is required (got type str)
Below is the code, I am using:
import os
import cx_Oracle
connection = cx_Oracle.connect(user=os.environ['ORA_DB_USR'], password=os.environ['ORA_DB_PWD'], dsn=os.environ['ORA_DSN'])
cursor = connection.cursor()
check_to_process = ['ACHRMUS-20-OCT-2021 00:12:57', 'ACHRMUS-12-OCT-2021 16:12:01']
placeholders = ','.join(":x%d" % i for i,_ in enumerate(check_to_process))
log_msg = 'Success'
cursor.executemany("UPDATE apps.SLCAP_CITI_PAYMENT_BATCH SET MESSAGE = %s, "
"PAYMENT_FILE_CREATED_FLAG='N' "
"WHERE PAYMENT_BATCH_NAME = :1",
[(i,) for i in check_to_process], log_msg, arraydmlrowcounts=True)
Many thanks for suggestions and insights!
Your code has an odd mix of string substitution (the %s) and bind variable placeholders (the :1), plus code that creates bind variable placeholders that are never used. Passing log_msg the way you do isn't going to work, since the executemany() syntax doesn't support string substitution.
You probably want to use some kind of IN list, as shown in the cx_Oracle documentation Binding Multiple Values to a SQL WHERE IN Clause. Various solutions are shown there, depending on the number of values and frequency that the statement will be re-executed.
Use only bind variables. You should be able to use execute() instead of executemany(). Effectively you would do:
cursor.execute("""UPDATE apps.SLCAP_CITI_PAYMENT_BATCH
SET MESSAGE = :1
WHERE PAYMENT_BATCH_NAME IN (something goes here - see the doc)""",
bind_values)
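Putting that together, a minimal sketch with execute() and only bind variables might look like the following. It reuses the table, data, and :x0-style placeholder naming from the question; the exact way of building the IN list is just one of the options shown in the documentation linked above:
import os
import cx_Oracle

connection = cx_Oracle.connect(user=os.environ['ORA_DB_USR'],
                               password=os.environ['ORA_DB_PWD'],
                               dsn=os.environ['ORA_DSN'])
cursor = connection.cursor()

check_to_process = ['ACHRMUS-20-OCT-2021 00:12:57', 'ACHRMUS-12-OCT-2021 16:12:01']
log_msg = 'Success'

# One named bind placeholder per IN-list element: :x0, :x1, ...
in_list = ','.join(':x%d' % i for i, _ in enumerate(check_to_process))
sql = ("UPDATE apps.SLCAP_CITI_PAYMENT_BATCH "
       "SET MESSAGE = :msg, PAYMENT_FILE_CREATED_FLAG = 'N' "
       "WHERE PAYMENT_BATCH_NAME IN (%s)" % in_list)

# All values travel as binds; only the placeholder *names* are interpolated.
binds = {'msg': log_msg}
binds.update({'x%d' % i: v for i, v in enumerate(check_to_process)})
cursor.execute(sql, binds)
connection.commit()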
The bottom line is: read the documentation and review examples like batch_errors.py. If you still have problems, refine your question, correct it, and add more detail.

SELECT statement returning the column name instead of the VALUE (for that column)

I'm trying to pass information into a SELECT statement using the two column names 'id' and 'easy_high_score', so I can manipulate the values of those two columns in my program. But when trying to get the value of the column 'easy_high_score', which should be an integer like 46 or 20, it instead returns the string ('easy_high_score',).
Even though there is no mention of [('easy_high_score',)] in the table, it still prints this out. In the table, id 1 has the proper values and information I'm trying to get, but to no avail. I am fairly new to SQLite3.
if mode == "Easy":
mode = 'easy_high_score'
if mode == "Normal":
mode = "normal_high_score"
if mode == 'Hard':
mode == "hard_high_score"
incrementor = 1 ##This is used in a for loop but not necessary for this post
c.execute("SELECT ? FROM players WHERE id=?", (mode, incrementor))
allPlayers = c.fetchall()
print(allPlayers) #This is printing [('easy_high_score',)], when it should be printing an integer.
Expected Result: 20 (or an integer which represents the high score for easy mode)
Actual Result: [('easy_high_score',)]
A column name cannot be specified using a parameter; it must appear verbatim in the query. Modify the line that executes the query like this:
c.execute("SELECT %s FROM players WHERE id=?" % mode, (incrementor,))
A possible cause of this is double quotes vs single quotes.
'SELECT "COLUMN_NAME" FROM TABLE_NAME' # will give values as desired
"SELECT 'COLUMN_NAME' FROM TABLE_NAME" # will give column name like what you got

Importing data from Excel into Access using DAO and WHERE clause

I need to import certain information from an Excel file into an Access DB, and in order to do this I am using DAO.
The user gets the Excel source file from a system; he does not need to interact with it directly. This source file has 10 columns, and I need to retrieve only certain records from it.
I am using this to retrieve all the records:
Set destinationFile = CurrentDb
Set dbtmp = OpenDatabase(sourceFile, False, True, "Excel 8.0;")
DoEvents
Set rs = dbtmp.OpenRecordset("SELECT * FROM [EEX_Avail_Cap_ALL_DEU_D1_S_Y1$A1:J65536]")
My problem comes when I want to retrieve only certain records using a WHERE clause. The name of the field I want to apply the clause to is 'Date (UCT)' (remember that the user gets this source file from another system), and I cannot get the WHERE clause to work on it. If I apply the WHERE clause to another field whose name has no parentheses or spaces, then it works. Example:
Set rs = dbtmp.OpenRecordset("SELECT * FROM [EEX_Avail_Cap_ALL_DEU_D1_S_Y1$A1:J65536] WHERE Other = 12925")
The previous instruction will retrieve only the number of records where the field Other has the value 12925.
Could anyone please tell me how I can achieve the same result but with a field name that has spaces and parentheses, i.e. 'Date (UCT)'?
Thank you very much.
Octavio
Try enclosing the field name in square brackets:
SELECT * FROM [EEX_Avail_Cap_ALL_DEU_D1_S_Y1$A1:J65536] WHERE [Date (UCT)] = 12925
or if it's a date we are looking for:
SELECT * FROM [EEX_Avail_Cap_ALL_DEU_D1_S_Y1$A1:J65536] WHERE [Date (UCT)] = #02/14/13#;
To use a date literal, you must enclose it in # characters and write the date in MM/DD/YY format, regardless of any regional settings on your machine.

Replace empty strings with null values

I am rolling up a huge table by counts into a new table, where I want to change all the empty strings to NULL and typecast some columns as well. I read through some of the posts and could not find a query that would let me do it across all the columns in a single statement.
Let me know if it is possible to iterate across all columns and replace empty strings with NULL.
Ref: How to convert empty spaces into null values, using SQL Server?
To my knowledge there is no built-in function to replace empty strings across all columns of a table. You can write a plpgsql function to take care of that.
The following function replaces empty strings in all basic character-type columns of a given table with NULL. You can then cast to integer if the remaining strings are valid number literals.
CREATE OR REPLACE FUNCTION f_empty_text_to_null(_tbl regclass, OUT updated_rows int)
  LANGUAGE plpgsql AS
$func$
DECLARE
   _typ CONSTANT regtype[] := '{text, bpchar, varchar}';  -- ARRAY of all basic character types
   _sql text;
BEGIN
   SELECT INTO _sql                      -- build SQL command
          'UPDATE ' || _tbl
          || E'\nSET    ' || string_agg(format('%1$s = NULLIF(%1$s, '''')', col), E'\n     , ')
          || E'\nWHERE  ' || string_agg(col || ' = ''''', ' OR ')
   FROM  (
      SELECT quote_ident(attname) AS col
      FROM   pg_attribute
      WHERE  attrelid = _tbl             -- valid, visible, legal table name
      AND    attnum >= 1                 -- exclude tableoid & friends
      AND    NOT attisdropped            -- exclude dropped columns
      AND    NOT attnotnull              -- exclude columns defined NOT NULL!
      AND    atttypid = ANY(_typ)        -- only character types
      ORDER  BY attnum
      ) sub;

   -- RAISE NOTICE '%', _sql;  -- test?

   -- Execute
   IF _sql IS NULL THEN
      updated_rows := 0;                 -- nothing to update
   ELSE
      EXECUTE _sql;
      GET DIAGNOSTICS updated_rows = ROW_COUNT;  -- report number of affected rows
   END IF;
END
$func$;
Call:
SELECT f_empty_text_to_null('mytable');
SELECT f_empty_text_to_null('myschema.mytable');
To also see the column name updated_rows in the result:
SELECT * FROM f_empty_text_to_null('mytable');
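If you are driving this from Python like the other examples in this thread, a hypothetical psycopg2 call might look like this (the connection parameters and the table name 'mytable' are assumptions):
import psycopg2

conn = psycopg2.connect(dbname='mydb', user='me')  # hypothetical credentials
with conn, conn.cursor() as cur:
    # The regclass parameter is passed as text; Postgres resolves it.
    cur.execute("SELECT f_empty_text_to_null(%s)", ('mytable',))
    print(cur.fetchone()[0])  # number of updated rows
conn.close()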
Major points
The table name has to be valid and visible, and the calling user must have all necessary privileges. If any of these conditions is not met, the function does nothing, i.e. nothing can be destroyed either. I cast to the object identifier type regclass to make sure of that.
The table name can be supplied as is ('mytable'), then the search_path decides. Or schema-qualified to pick a certain schema ('myschema.mytable').
Query the system catalog to get all (character-type) columns of the table. The provided function covers the basic character types text, bpchar, and varchar. Only relevant columns are processed.
Use quote_ident() or format() to sanitize column names and safeguard against SQLi.
The updated version uses the basic SQL aggregate function string_agg() to build the command string without looping, which is simpler and faster. And more elegant. :)
Has to use dynamic SQL with EXECUTE.
The updated version excludes columns defined NOT NULL and only updates each row once in a single statement, which is much faster for tables with multiple character-type columns.
Should work with any modern version of PostgreSQL. Tested with Postgres 9.1, 9.3, 9.5 and 13.

Inserting a number as a String into a TEXT column and SQLite still removes the leading zero

I got the following number as a string: String numberString = "079674839";
When I insert this number into a SQLite DB, SQLite automatically removes the leading zero and stores the string as 79674839. Considering affinity and that the column stores TEXT, shouldn't SQLite store the whole string and keep the leading zero?
Thanks
Double-check your database schema. As documented on Datatypes in SQLite Version 3, the column type name affects how values are processed before being stored.
Here's a Python program to demonstrate, using an in-memory database:
import sqlite3

db = sqlite3.connect(':memory:')   # in-memory database
val = "0796"
db.execute('CREATE TABLE test (i INTEGER, r REAL, t TEXT, b BLOB);')
db.execute('INSERT INTO test VALUES (?, ?, ?, ?);', (val, val, val, val))
res = db.execute('SELECT * FROM test')
print('\t'.join(x[0] for x in res.description))   # column names
for row in res.fetchall():
    print('\t'.join(repr(x) for x in row))
The output is:
i       r       t       b
796     796.0   '0796'  '0796'
So, it looks like your column actually has a numeric affinity. Take a look at the schema definition (sqlite3 database.db .schema works from the command line), look at the documentation again, and make sure you are using one of the type names that map to TEXT affinity. Type names that match none of the affinity rules get NUMERIC affinity, which also strips leading zeros from well-formed numbers.
In my own case, I was using 'STR', which falls through to NUMERIC affinity. I changed it to 'TEXT', and SQLite started respecting my leading zeros.
Use single quotes around the number (i.e., '079674839') if it appears anywhere in inline SQL code. Also, if you're doing this programmatically, make sure that you are not going through a numeric conversion.
