HDBC-ODBC SQL Server need commit after quickQuery - haskell

I'm taking my first steps with HDBC, using ODBC to connect to a local SQL Server.
After a quickQuery on the connection, I can't close it. I need to perform a commit first.
Is this the way it is supposed to be? Why is the commit necessary when I'm only performing a query?
On GHCi:
:m + Database.HDBC Database.HDBC.ODBC
conn <- connectODBC "Driver={SQL Server};Server=thiagon\\sqlserver2012;Database=senior;UID=framework;PWD=framework;"
vals <- quickQuery conn "SELECT TOP 5 * FROM whatever;" []
print vals
commit conn
disconnect conn
If I remove the commit conn line, I get an exception:
*** Exception: SqlError {seState = "[\"25000\"]", seNativeError = -1, seErrorMsg = "disconnect: [\"0: [Microsoft][ODBC SQL Server Driver]Estado de transa\\65533\\65533o inv\\65533lido\"]"}
The message is in Portuguese; it means "invalid transaction state".

A quickQuery could modify the table. I don't think the API analyses the string itself, or checks the database, to see whether or not the table was modified. And HDBC doesn't support autocommit.
You could use withTransaction, which will automatically handle this detail for you.
EDIT: Try using quickQuery', which is the strict version of quickQuery. In an example on http://book.realworldhaskell.org/read/using-databases.html (scroll down to ch21/query.hs), they didn't need a commit after a plain SELECT statement, but they were using quickQuery'.

Related

Node and SQL Injection by using ${variable} on query string

I was told on a question that I'm having a SQL injection problem.
Here is the question
Node with SQL Server - response with for json path query not responding as expected
and here is my code
let sqlString = `
SELECT codeid, code, validFrom, validTo,
(SELECT dbo.PLprospectAgentCodesComp.productIdentifier, dbo.masterGroupsProducts.productName, dbo.PLprospectAgentCodesComp.compensation
FROM dbo.PLprospectAgentCodesComp INNER JOIN
dbo.masterGroupsProducts ON dbo.PLprospectAgentCodesComp.productIdentifier = dbo.masterGroupsProducts.productIdentifier
WHERE (dbo.PLprospectAgentCodesComp.codeid = dbo.PLprospectAgentCodes.codeid) for json path ) as products
FROM dbo.PLprospectAgentCodes
WHERE (plid = ${userData.plid}) for json path`
let conn = await sql.connect(process.env.DB_CONNSTRING)
let recordset = await conn.query(sqlString)
But I've read in Microsoft's documentation, and even in a question on this site, that this format prevents SQL injection.
From MS:
"All values are automatically sanitized against sql injection. This is
because it is rendered as prepared statement, and thus all limitations
imposed in MS SQL on parameters apply. e.g. Column names cannot be
passed/set in statements using variables."
I was trying to use the declare #parameter approach for the above code, but since my code has several queries that depend on one another, I'm using await for each query... and #parameter is not working. After I process the recordset, the other queries execute.
If my code actually is vulnerable to SQL injection, is it possible to sanitize sqlString before the following two lines? I ask because I'd rather not change the method in about 50 routes.
let sqlString = `select * from table where userid=${userId}`
Sanitizing code here
let conn = await sql.connect(process.env.DB_CONNSTRING)
let recordset = await conn.query(sqlString)
Thanks.
According to https://tediousjs.github.io/node-mssql/ , "All values are automatically sanitized against sql injection." applies only when you use ES6 tagged template literals. You should add the sql.query tag before the template string.
let sqlString = sql.query`select * from mytable where id = ${value}`
For more information on tagged template literals: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals#tagged_templates

Executing more than one SQL query using psycopg2 in a double with statement

Is it possible to execute more than one query inside a nested with statement with psycopg2 (the outer with opening the connection, the inner one the cursor)?
E.g. to replace:
import psycopg2

def connector():
    return psycopg2.connect(**DB_DICT_PARAMS)

########

sql_update1 = ("UPDATE table SET array = %s::varchar[], "
               "array_created = true, timestamp = now() AT TIME ZONE 'UTC' "
               "WHERE id = %s")
sql_update2 = ("UPDATE table SET json_field = %s "
               "WHERE id = %s")

with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update1, [stringArray, ID])

with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update2, [jsonString, ID])
by:
#(...)
sql_update1 = ("UPDATE table SET array = %s::varchar[], "
               "array_created = true, timestamp = now() AT TIME ZONE 'UTC' "
               "WHERE id = %s")
sql_update2 = ("UPDATE table SET json_field = %s "
               "WHERE id = %s")

with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update1, [stringArray, ID])
        curs.execute(sql_update2, [jsonString, ID])
What if the second query needs the first one to be completed before, and what if not?
In the shown case, they will definitely update the same record (i.e. row) in the database but not the same fields (i.e. attributes or columns).
Is this allowed precisely because the two SQL statements are committed sequentially, i.e. the first finishes first, and only then is the second executed?
Or is it actually forbidden because they can be executed in parallel, each query without knowing the state of the other at any instant t?
There are no fancy triggers or procedures in the DB. Let's keep it simple for now.
(Please note that I have purposely written two queries here where a single one would have fit perfectly, but that's not always the case: sometimes other computations happen in between before further results are saved to the same record in the DB.)
If you want them to execute in one go, simply put them in the same string, separated by semicolons. I'm a little rusty, but I think the following should work:
sql_updates = ("UPDATE table SET array = %s::varchar[], "
"array_created = true, timestamp = now() AT TIME ZONE 'UTC' "
"WHERE id = %s;"
"UPDATE table SET json_field = %s "
"WHERE id = %s;")
with connector() as conn:
with conn.cursor() as curs:
curs.execute(sql_updates, [stringArray, ID, jsonString, ID])
Better avoid this:
with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update1, [stringArray, ID])

with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update2, [jsonString, ID])
Opening a database connection is pretty slow compared to doing a query, so it is much better to reuse it rather than opening a new one for each query. If your program is a script, typically you'd just open the connection at startup and close it at exit.
However, if your program spends a long time waiting between queries, and there will be many instances running, then it would be better to close the connection to not consume valuable RAM on the postgres server for doing nothing. This is common in client/server applications where the client mostly waits for user input. If there are many clients you can also use connection pooling, which offers the best of both worlds at the cost of a bit extra complexity. But if it's just a script, no need to bother with that.
with connector() as conn:
    with conn.cursor() as curs:
        curs.execute(sql_update1, [stringArray, ID])
        curs.execute(sql_update2, [jsonString, ID])
This would be faster. You don't need to build a new cursor; you can reuse the same one. Note that if you don't fetch the results of the first query before reusing the cursor, you won't be able to do so after executing the second query, because a cursor only stores the results of the last query. Since these are updates there are no results anyway, unless you want to check the rowcount to see whether a row was actually updated.
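If you later go the connection-pooling route mentioned above, a rough sketch with psycopg2's built-in pool module could look like this; the pool sizes and the reuse of DB_DICT_PARAMS are assumptions, not part of the question.
import psycopg2.pool

# one small pool shared by the whole application
pool = psycopg2.pool.ThreadedConnectionPool(minconn=1, maxconn=5, **DB_DICT_PARAMS)

conn = pool.getconn()            # borrow a connection from the pool
try:
    with conn:                   # commits on success, rolls back on error
        with conn.cursor() as curs:
            curs.execute(sql_update1, [stringArray, ID])
            curs.execute(sql_update2, [jsonString, ID])
finally:
    pool.putconn(conn)           # return it to the pool instead of closing it

pool.closeall()                  # at application shutdown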
What if the second query needs the first one to be completed before, and what if not?
You don't need to care: execute() processes the whole query before returning, so by the time Python gets to the next bit of code, the query is done.
Is this allowed precisely because the two SQL statements are committed sequentially, i.e. the first finishes first, and only then is the second executed?
Yes
Or is it actually forbidden because they can be executed in parallel, each query without knowing the state of the other at any instant t?
If you want to execute several queries in parallel, for example because one query takes a while and you want to run other queries while it is still executing, then you need several DB connections, and of course one Python thread for each, because execute() is blocking. It's not used often.
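A minimal sketch of that pattern, assuming the connector() helper from above; slow_query_1 and slow_query_2 are purely illustrative placeholders, not queries from the question.
import threading

def run(query):
    # each thread gets its own connection, so the two statements can really
    # run concurrently (statements on a single connection are serialized)
    with connector() as conn:
        with conn.cursor() as curs:
            curs.execute(query)

t1 = threading.Thread(target=run, args=(slow_query_1,))
t2 = threading.Thread(target=run, args=(slow_query_2,))
t1.start(); t2.start()
t1.join(); t2.join()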

Error while connecting DB2/IDAA using ADFV2

I am trying to connect to DB2/IDAA using ADFV2. While executing a simple query, "select * from table", I am getting the error below:
Operation on target Copy data from IDAA failed: An error has occurred on the Source side. 'Type = Microsoft.HostIntegration.DrdaClient.DrdaException, Message = Exception of type 'Microsoft.HostIntegration.Drda.Common.DrdaException' was thrown. SQLSTATE = HY000 SQLCODE = -343, Source = Microsoft.HostIntegration.Drda.Requester, '
I have searched a lot and tried various options, but it is still an issue.
I tried the query "select * from table with ur" to make the call read-only, but I still get the result above.
If I use a query like "select * from table; commit;", then the activity succeeds but no records are fetched.
Does anyone have a solution?
I have my linked service set up like this. The additional connection properties value is: SET CURRENT QUERY ACCELERATION = ALL

OperationalError: near "u": syntax error <- while trying to delete rows from two inner-joined tables

I am trying to delete rows from two tables with inner join. I don't really understand why this error pops up.
import sqlite3
login = 'uzytkownik6'
conn = sqlite3.connect('fiszki.db')
c = conn.cursor()
c.execute("DELETE u.*, t.* FROM users u INNER JOIN translations t ON
u.user_id=t.user_id WHERE u.user_name='{}'".format(login))
conn.commit()
But I get error:
OperationalError: near "u": syntax error
You should never use normal Python string formatting when executing SQL commands; use parameter substitution instead. Example: db.execute("DELETE FROM users WHERE userId = (?)", [userId]). Also, you don't really need to call db.cursor() after connecting. See the sqlite3 API documentation for Python 3.
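As a minimal sketch of what the parameterized version could look like for the code above (with one assumption: SQLite does not support DELETE with a JOIN over two tables, so the rows are removed from each table separately):
import sqlite3

login = 'uzytkownik6'
conn = sqlite3.connect('fiszki.db')

# delete the user's translations first, then the user row itself;
# the login value is passed as a parameter, never formatted into the SQL
conn.execute("DELETE FROM translations WHERE user_id IN "
             "(SELECT user_id FROM users WHERE user_name = ?)", (login,))
conn.execute("DELETE FROM users WHERE user_name = ?", (login,))

conn.commit()
conn.close()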

For Update - for psycopg2 cursor for postgres

We are using a psycopg2 cursor to fetch jsonb documents and process them, but whenever a new thread or process comes along, it must not fetch and process the same records as the first process or thread.
For that we have tried to use FOR UPDATE, but we just want to know whether or not we are using the correct syntax.
conn = self.dbPool.getconn()
cur = conn.cursor()
sql = """SELECT jsondoc FROM %s WHERE jsondoc #> %s"""
if 'sql' in queryFilter:
    sql += queryFilter['sql']
When we print this query, it will be shown as below:
Query: "SELECT jsondoc FROM %s WHERE jsondoc #> %s AND (jsondoc ->> ‘claimDate')::float <= 1536613219.0 AND ( jsondoc ->> ‘claimstatus' = ‘done' OR jsondoc ->> 'claimstatus' = 'failed' ) limit 2 FOR UPDATE"
cur.execute(sql, (AsIs(self.tablename), Json(queryFilter),))
dbResult = cur.fetchall()
Please help us verify the syntax, and if it is correct, explain how this query locks the records fetched by the first thread.
Thanks,
Sanjay.
If this example query is executed
select *
from my_table
order by id
limit 2
for update; -- wrong
then the two resulting rows are locked until the end of the transaction (i.e. until the next connection.rollback() or connection.commit(), or until the connection is closed). If another transaction tries to run the same query during this time, it will be blocked until the two rows are unlocked. So this is not the behaviour you expect. You should add the SKIP LOCKED clause:
select *
from my_table
order by id
limit 2
for update skip locked; -- correct
With this clause the second transaction will skip the locked rows and return the next two without waiting.
Read about it in the documentation.
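As a rough sketch of how a worker could use this from psycopg2 (the table name, columns, and DSN below are placeholders, not taken from the question):
import psycopg2

conn = psycopg2.connect("dbname=mydb")        # placeholder DSN
with conn:                                    # commit/rollback at block exit
    with conn.cursor() as cur:
        # each worker claims two rows; concurrent workers skip rows that
        # are already locked instead of waiting for them
        cur.execute("SELECT id, jsondoc FROM my_table "
                    "ORDER BY id LIMIT 2 FOR UPDATE SKIP LOCKED")
        for row_id, doc in cur.fetchall():
            # ... process doc ...
            cur.execute("UPDATE my_table SET processed = true WHERE id = %s",
                        (row_id,))
# the claimed rows stay locked only until the transaction commits above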

Resources