How to execute multiple DML statements in a variable sequentially using cx_Oracle - python-3.x

I have a variable SCRIPT which holds two to three DML statements. I want to run them sequentially after connecting to my Oracle DB. I tried the following, but it fails with the error below:
c.execute(SCRIPT)
cx_Oracle.DatabaseError: ORA-00933: SQL command not properly ended
Here is the code I tried:
SCRIPT="""UPDATE IND_AFRO.DRIVER
SET Emp_Id = 1000, update_user_id = 'RIBST-4059'
WHERE Emp_Id IN (SELECT Emp_Id
FROM IND_AFRO.DRIVER Ddq
WHERE NOT EXISTS
(SELECT 1
FROM IND_AFRO_AF.EMPLOYEE
WHERE Emp_Id = Ddq.Emp_Id)
AND Functional_Area_Cd = 'DC');
UPDATE IND_AFRO.APPOINTMENTS
SET Emp_Id = 1000, update_user_id = 'RIBST-4059'
WHERE Emp_Id IN (SELECT Emp_Id
FROM IND_AFRO.APPOINTMENTS Ddq
WHERE NOT EXISTS
(SELECT 1
FROM IND_AFRO_AF.EMP
WHERE Emp_Id = Ddq.Emp_Id));
UPDATE IND_AFRO.ar_application_for_aid a
SET a.EMP_ID = 1000
WHERE NOT EXISTS
(SELECT 1
FROM IND_AFRO_AF.EMP
WHERE emp_id = a.emp_id);"""
conn = cx_Oracle.connect(user=r'SYSTEM', password='ssadmin', dsn=CONNECTION)
c = conn.cursor()
c.execute(SCRIPT)
c.close()

The execute() and executemany() functions only work on one SQL or PL/SQL statement.
You can wrap the three statements in a PL/SQL BEGIN/END block like:
SQL> begin
  2    insert into test values(1);
  3    update test set a = 2;
  4  end;
  5  /

PL/SQL procedure successfully completed.
Alternatively, you can split your string into individual statements. If the statements originate from a file, you can write a wrapper that reads the file and executes each statement. This is a lot easier if you restrict the SQL syntax (particularly regarding line terminators). For an example, see https://github.com/oracle/python-cx_Oracle/blob/master/samples/SampleEnv.py#L116
However, this means calling execute() multiple times, which isn't as efficient as the first solution.
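Here is a minimal sketch of both approaches, reusing the SCRIPT variable and CONNECTION from the question (pick one option; running both would apply the updates twice):

import cx_Oracle

conn = cx_Oracle.connect(user="SYSTEM", password="ssadmin", dsn=CONNECTION)
cur = conn.cursor()

# Option 1: single round-trip -- wrap the whole script in an anonymous
# PL/SQL block. Inside PL/SQL, each UPDATE keeps its trailing semicolon.
cur.execute("BEGIN\n" + SCRIPT + "\nEND;")

# Option 2: split on the terminator and execute each statement separately.
# This naive split assumes ';' never occurs inside a string literal.
for statement in SCRIPT.split(";"):
    if statement.strip():
        cur.execute(statement)  # statements are sent without a trailing ';'

conn.commit()
cur.close()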

Related

How to Update oracle table using Python passing arguments

I need to pass 2 arguments to my update statement.
I first get the distinct IDs from TBL1 into a list as below. This works fine:
unit_no = []
sql = """SELECT DISTINCT(ID) FROM TBL1"""
tmp_cursor = self.DB_conn.cursor()
for rec in tmp_cursor.execute(sql):
    unit_no.append(rec)
Then I get a value from another table using the above result. This also works fine:
while i < len(unit_no):
    sql = 'SELECT COL1 FROM TBL2 WHERE ID = :1'
    tmp_cursor1 = self.DB_conn.cursor()
    tmp_cursor1.execute(sql, unit_no[i])
    for result in tmp_cursor1:
        input1 = result
    sql = """update tbl1 set col1 = :1 where id = :2"""
    tmp_cursor2 = self.DB_conn.cursor()
    tmp_cursor2.execute(sql, input1, unit_no[i])
For the last line above I am getting the error: function takes at most 2 arguments (3 given).
How do I pass 2 values as input to the update statement?
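The error happens because cx_Oracle's cursor.execute() takes the statement plus a single collection of bind values, not one positional argument per bind variable. A minimal sketch of the fix, reusing the names from the question:

sql = """update tbl1 set col1 = :1 where id = :2"""
tmp_cursor2 = self.DB_conn.cursor()
# Pass positional binds as one sequence (tuple or list), not as
# separate arguments to execute().
tmp_cursor2.execute(sql, (input1, unit_no[i]))
self.DB_conn.commit()

Note also that each rec from the first query is a row tuple, so unit_no holds 1-tuples; append rec[0] instead if you want plain values.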

force replication of replicated tables

Some of my tables are of type REPLICATE. I would like these tables to actually be replicated (not pending) before I start querying my data. This will help me avoid data movement.
I have a script, which I found online, that runs in a loop and does a SELECT TOP 1 on all the tables which are set for replication, but sometimes the script runs for hours. It seems the server sometimes won't trigger replication even if you do a SELECT TOP 1 from foo.
How can you force SQL Data Warehouse to complete replication?
The script looks something like this:
begin
    CREATE TABLE #tbl
    WITH (DISTRIBUTION = ROUND_ROBIN)
    AS
    SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Sequence,
           CONCAT('SELECT TOP(1) * FROM ', s.name, '.', t.[name]) AS sql_code
    FROM sys.pdw_replicated_table_cache_state AS p
    JOIN sys.tables AS t
        ON t.object_id = p.object_id
    JOIN sys.schemas AS s
        ON t.schema_id = s.schema_id
    WHERE p.[state] = 'NotReady';

    DECLARE @nbr_statements INT = (SELECT COUNT(*) FROM #tbl),
            @i INT = 1;

    WHILE @i <= @nbr_statements
    BEGIN
        DECLARE @sql_code NVARCHAR(4000) = (SELECT sql_code FROM #tbl WHERE Sequence = @i);
        EXEC sp_executesql @sql_code;
        SET @i += 1;
    END;

    DROP TABLE #tbl;

    SET @i = 0;
    WHILE (SELECT TOP (1) p.[state]
           FROM sys.pdw_replicated_table_cache_state AS p
           JOIN sys.tables AS t
               ON t.object_id = p.object_id
           JOIN sys.schemas AS s
               ON t.schema_id = s.schema_id
           WHERE p.[state] = 'NotReady') = 'NotReady'
    BEGIN
        IF @i % 100 = 0
        BEGIN
            RAISERROR('Replication in progress', 0, 0) WITH NOWAIT;
        END;
        SET @i = @i + 1;
    END;
END
Henrik, if 'select top 1' doesn't trigger a replicated table build, then that would be a defect. Please file a support ticket.
Without looking at your system, it is impossible to know exactly what is going on. Here are a couple of things that could be factoring into the extended build time:
The replicated tables are large (size, not necessarily rows) requiring long build times.
There are a lot of secondary indexes on the replicated table requiring long build times.
Replicated table builds require staticrc20 (2 concurrency slots). If the concurrency slots are not available, the build will queue behind other running queries.
The replicated tables are constantly being modified with inserts, updates and deletes. Modifications require the table to be built again.
The best way is to run a command like this as part of the job which creates/updates the table:
select top 1 * from <table>
That will force its replication at the correct time, without the slow loop through the stored procedure.
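For completeness, here is a sketch of how that per-load touch could look from Python; pyodbc, the connection string, and the table name are placeholders, not part of the original answer:

import pyodbc

# Placeholder connection string -- adjust for your environment.
conn = pyodbc.connect("DSN=AZURE_DW_DSN", autocommit=True)
cur = conn.cursor()

# After (re)loading a replicated table, touch it once so the warehouse
# can start rebuilding the replicated copy right away.
cur.execute("SELECT TOP 1 * FROM dbo.MyReplicatedTable")  # placeholder table
cur.fetchone()

# Optionally, check how many replicated tables are still pending.
cur.execute("SELECT COUNT(*) FROM sys.pdw_replicated_table_cache_state WHERE [state] = 'NotReady'")
print(cur.fetchone()[0], "replicated table(s) still building")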

FOR UPDATE with a psycopg2 cursor for Postgres

We are using a psycopg2 cursor to fetch and process jsonb records, but whenever a new thread or process comes along, it should not fetch and process the same records that the first process or thread is already working on.
For that we have tried to use FOR UPDATE, but we just want to know whether we are using the correct syntax or not.
con = self.dbPool.getconn()
cur = con.cursor()
sql = """SELECT jsondoc FROM %s WHERE jsondoc #> %s"""
if 'sql' in queryFilter:
    sql += queryFilter['sql']
When we print this query, it is shown as below:
Query: "SELECT jsondoc FROM %s WHERE jsondoc #> %s AND (jsondoc ->> 'claimDate')::float <= 1536613219.0 AND ( jsondoc ->> 'claimstatus' = 'done' OR jsondoc ->> 'claimstatus' = 'failed' ) limit 2 FOR UPDATE"
cur.execute(sql, (AsIs(self.tablename), Json(queryFilter),))
dbResult = cur.fetchall()
Please help us clarify whether this syntax is correct and, if it is, explain how this query locks the records fetched by the first thread.
Thanks,
Sanjay.
If this example query is executed:
select *
from my_table
order by id
limit 2
for update; -- wrong
then the two resulting rows are locked until the end of the transaction (i.e. until the next connection.rollback() or connection.commit(), or until the connection is closed). If another transaction tries to run the same query during this time, it will block until the two rows are unlocked. So this is not the behaviour you expect. You should add the SKIP LOCKED clause:
select *
from my_table
order by id
limit 2
for update skip locked; -- correct
With this clause the second transaction will skip the locked rows and return the next two without waiting.
Read about it in the documentation.
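Here is a minimal worker-side sketch with psycopg2; the DSN and the table and column names (my_table, id, jsondoc) are placeholders, not taken from your code:

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # placeholder DSN

# Each worker claims up to two unclaimed rows; concurrent workers skip
# rows that another transaction has already locked.
with conn:  # commits on success, rolls back on error
    with conn.cursor() as cur:
        cur.execute("""
            SELECT id, jsondoc
            FROM my_table
            ORDER BY id
            LIMIT 2
            FOR UPDATE SKIP LOCKED
        """)
        rows = cur.fetchall()
        # process rows here; the row locks are held until the
        # transaction ends, so other workers cannot grab them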

SQL Azure Schema issue error code 208 with temporary table

I have a couple of stored procedures that create different temporary tables.
At the end of each procedure I drop them (I know that is not required, but it's good practice).
The stored procedures are executed as part of an SSIS package. I have 4 different SQL jobs that execute the same SSIS package running in parallel.
When logging into the Azure portal and using the performance recommendation feature, I get a recommendation to fix the schema issues. It states SQL error code 208. According to the documentation, that means "object not found".
Temporary tables are valid within the scope of the stored procedure and should get a unique name in the database, so I do not think there are any conflicts.
I have no idea what causes this, and the stored procedures seem to work alright. Does anyone know what could be the cause here?
Simplified sample of one of the procedures:
SET NOCOUNT ON;
CREATE TABLE #tmpTransEan
(
Ean_Art_Str_id BIGINT ,
Artikler_id BIGINT
);
INSERT INTO #tmpTransEan
( Ean_Art_Str_id ,
Artikler_id
)
SELECT DISTINCT
eas.Ean_Art_Str_id ,
a.Artikler_id
FROM dbo.Artikkel_Priser ap
JOIN Ean_Art_Str eas ON eas.artikler_id = ap.Artikler_id
JOIN wsKasse_Til_Kasselogg ktk ON eas.Ean_Art_Str_id = ktk.ID_Primary
JOIN dbo.Artikler a ON a.Artikler_id = eas.artikler_id
JOIN dbo.Felles_Butikker b ON b.Butikker_id = ap.butikker_id
WHERE ktk.ID_Table = OBJECT_ID('Ean_Art_Str')
AND LEN(a.Artikkelnr) >= 8
AND ktk.Tidspunkt >= @tidspunkt
AND ( ( ap.butikker_id = @nButikker_id1
AND @Alle_artikler_til_kasse = 'N'
)
OR ( b.Databaser_id = @Databaser_id
AND @Alle_artikler_til_kasse = 'J'
)
)
AND b.Akt_kode = 'A'
AND a.Akt_kode = 'A'
AND a.Databaser_id IN ( -1, @Databaser_id )
SELECT DISTINCT
a.Artikkelnr ,
s.Storrelse ,
eas.* ,
EAN_12 = LEFT(eas.EAN_13, 12)
FROM dbo.Ean_Art_Str eas
JOIN #tmpTransEan t ON t.Artikler_id = eas.artikler_id
JOIN Artikler a ON a.Artikler_id = eas.artikler_id
JOIN dbo.Felles_Storrelser s ON s.Storrelser_id = eas.storrelser_id
DROP TABLE #tmpTransEan;
END;

How to optimize DELETE .. NOT IN .. SUBQUERY in Firebird

I have this kind of delete query:
DELETE
FROM SLAVE_TABLE
WHERE ITEM_ID NOT IN (SELECT ITEM_ID FROM MASTER_TABLE)
Is there any way to optimize this?
You can use EXECUTE BLOCK to sequentially scan the detail table and delete records where no master record is matched.
EXECUTE BLOCK
AS
    DECLARE VARIABLE C CURSOR FOR
        (SELECT d.id
         FROM detail d
         LEFT JOIN master m ON d.master_id = m.id
         WHERE m.id IS NULL);
    DECLARE VARIABLE I INTEGER;
BEGIN
    OPEN C;
    WHILE (1 = 1) DO
    BEGIN
        FETCH C INTO :I;
        IF (ROW_COUNT = 0) THEN
            LEAVE;
        DELETE FROM detail WHERE id = :I;
    END
    CLOSE C;
END
(NOT) IN can usually be optimized by using (NOT) EXISTS instead.
DELETE
FROM SLAVE_TABLE S
WHERE NOT EXISTS (SELECT 1 FROM MASTER_TABLE M WHERE M.ITEM_ID = S.ITEM_ID)
Note the alias on SLAVE_TABLE: without it, an unqualified ITEM_ID inside the subquery would resolve to MASTER_TABLE's own column and the condition would never correlate.
I am not sure what you are trying to do here, but to me this query indicates that you should be using foreign keys to enforce these kinds of constraints, not running queries to clean up the mess afterwards.
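As a sketch of that alternative, here is how the constraint could be added from Python using the fdb driver; the DSN, credentials, and constraint name are placeholders, and this assumes MASTER_TABLE.ITEM_ID is a primary or unique key:

import fdb

con = fdb.connect(dsn="localhost:/data/mydb.fdb",
                  user="SYSDBA", password="masterkey")  # placeholders

# With ON DELETE CASCADE, deleting a master row removes its slave rows
# automatically, so no cleanup query is needed afterwards.
con.execute_immediate("""
    ALTER TABLE SLAVE_TABLE
    ADD CONSTRAINT FK_SLAVE_MASTER
        FOREIGN KEY (ITEM_ID) REFERENCES MASTER_TABLE (ITEM_ID)
        ON DELETE CASCADE
""")
con.commit()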
