Multiple SQL requests in cx_Oracle?

I use cx_Oracle, a Python interface for Oracle Database.
In the same query, I have to:
change the session to the CDB container
run a SQL query
Here is my code:
Query = f"""ALTER SESSION SET CONTAINER=cdb$root;
select RESOURCE_NAME as "Parametre",
MAX_UTILIZATION as "Valeur courrante",
LIMIT_VALUE as "Valeur limite",
decode( nvl( MAX_UTILIZATION, 0),0, 0, ceil ( ( MAX_UTILIZATION / LIMIT_VALUE) * 100) ) as "Pourcentage utilise"
from v$resource_limit
where RESOURCE_NAME in ('sessions','processes')
and decode( nvl( MAX_UTILIZATION, 0),0, 0, ceil ( ( MAX_UTILIZATION / LIMIT_VALUE) * 100) ) > {seuil_max_occupation_param}"""
SQL_Query = pd.read_sql_query(Query,conn)
But this code raises an Oracle error:
ORA-00922: missing or invalid option
This set of requests works fine in SQL*Plus, for example.
I have a syntax problem, but which one?
Thanks a lot.
Best regards.
théo

All the Oracle APIs execute a single statement at a time. From the documentation:
cx_Oracle can be used to execute individual statements, one at a time. It does not read SQL*Plus “.sql” files.
You need to make multiple calls to execute(): pass the ALTER first, then the SELECT. Use the same connection for both calls, since ALTER SESSION only affects the session it runs in.
Alternatively you could wrap the SQL calls in a PL/SQL block (and return a REF CURSOR or Implicit Result Set).
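For example, here is a minimal sketch of the two-call approach (the connection string is hypothetical; also note that statements passed to execute() should not end with a semicolon, which is what Oracle rejects with ORA-00922 here):
import cx_Oracle
import pandas as pd

# Hypothetical connection details; adjust to your environment.
conn = cx_Oracle.connect("user/password@dbhost/service_name")

# Statement 1: switch the session to the root container (no trailing semicolon).
cur = conn.cursor()
cur.execute("ALTER SESSION SET CONTAINER = cdb$root")

# Statement 2: the SELECT, executed on the same connection, hence the same session.
query = """select RESOURCE_NAME, MAX_UTILIZATION, LIMIT_VALUE
           from v$resource_limit
           where RESOURCE_NAME in ('sessions', 'processes')"""
df = pd.read_sql_query(query, conn)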

Related

Issues creating a virtual HANA table

I am trying to create a virtual table in HANA based on a remote system table view.
If I run it at the command line using hdbsql
hdbsql H00=> create virtual table HanaIndexTable at "SYSRDL#CG_SOURCE"."<NULL>"."dbo"."sysiqvindex"
0 rows affected (overall time 305.661 msec; server time 215.870 msec)
I am able to select from HanaIndexTable and get results and see my index.
When I code it in python, I use the following command:
cursor.execute("""create virtual table HanaIndexTable1 at SYSRDL#CG_source.\<NULL\>.dbo.sysiqvindex""")
I think there is a problem with the <NULL>. But I see in the output that the escape character is doubled.
self = <hdbcli.dbapi.Cursor object at 0x7f02d61f43d0>
operation = 'create virtual table HanaIndexTable1 at SYSRDL#CG_source.\\<NULL\\>.dbo.sysiqvindex'
parameters = None
def __execute(self, operation, parameters = None):
    # parameters is already checked as None or Tuple type.
>   ret = self.__cursor.execute(operation, parameters=parameters, scrollable=self._scrollable)
E   hdbcli.dbapi.ProgrammingError: (257, 'sql syntax error: incorrect syntax near "\\": line 1 col 58 (at pos 58)')
/usr/local/lib/python3.7/site-packages/hdbcli/dbapi.py:69: ProgrammingError
I have tried to run the command without the <> but get the following error.
hdbcli.dbapi.ProgrammingError: (257, 'sql syntax error: incorrect syntax near "NULL": line 1 col 58 (at pos 58)')
I have tried upper case, lower case and escaping. Is what I am trying to do impossible?
There was an issue with capitalization between HANA and my remote source. I also needed more escaping rather than less.
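For reference, a sketch of what that looks like in Python, assuming the fix is to mirror the working hdbsql quoting exactly (double-quoted identifier parts, matching case, no backslash escapes around the angle brackets):
cursor.execute(
    'create virtual table HanaIndexTable1 '
    'at "SYSRDL#CG_SOURCE"."<NULL>"."dbo"."sysiqvindex"'
)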

Inserting rows: SQL syntax error with Python 3.9

I am trying to insert rows into my database. Establishing a connection to the database is successful, but when I try to insert my desired rows I get a SQL error. The error appears to come from my variable "network_number". I am running nested for loops to iterate through the network numbers from 1.1.1 to 254.254.254, adding each unique IP to the database. The network number is written as a string, so should the column for the network number be VARCHAR or TEXT to allow the full stops/periods? The desired output is to populate my database table with each network number. You can find the SQL query assigned to the variable sql_query.
def populate_ip_table(ip_ranges):
    network_numbers = ["", "", ""]
    information = "Populating the IP table..."
    total_ips = (len(ip_ranges) * 254**2)
    complete = 0
    for octet_one in ip_ranges:
        network_numbers[0] = str(octet_one)
        percentage_complete = round(100 / total_ips * complete, 2)
        information = f"{percentage_complete}% complete"
        output_information(information)
        for octet_two in range(1, 254 + 1):
            network_numbers[1] = str(octet_two)
            for octet_three in range(1, 254 + 1):
                network_numbers[2] = str(octet_three)
                network_number = ".".join(network_numbers)
                complete += 1
                sql_query = f"INSERT INTO ip_scan_record (ip, scanned_status, times_scanned) VALUES ({network_number}, false, 0)"
                execute_sql_statement(sql_query)
    information = "100% complete"
    output_information(information)
Output
[ * ] Connecting to the PostgreSQL database...
[ * ] Connection successful
[ * ] Executing SQL statement
[ ! ] syntax error at or near ".50"
LINE 1: ...rd (ip, scanned_status, times_scanned) VALUES (1.1.50, false...
^
As stated by the Docs:
There is no performance difference among these three types, apart from increased storage space when using the blank-padded type, and a few extra CPU cycles to check the length when storing into a length-constrained column. While character(n) has performance advantages in some other database systems, there is no such advantage in PostgreSQL; in fact character(n) is usually the slowest of the three because of its additional storage costs. In most situations text or character varying should be used
PostgreSQL Docs
I think you should use VARCHAR, given the small, varying length of your IP string. text is effectively a varchar with no limit, but it may have problems with indexing if you try to insert a record whose compressed size is greater than 2712.
Actually, your problem is that you need to put extra single quotes around network_number, so that the value is inserted as a string in PostgreSQL.
To see this, try inserting {network_number} like this:
network_number = "'" + ".".join(network_numbers) + "'"
sql_query = f"INSERT INTO ip_scan_record (ip, scanned_status, times_scanned) VALUES ({network_number}, false, 0)"
OR:
sql_query = f"INSERT INTO ip_scan_record (ip, scanned_status, times_scanned) VALUES ('{network_number}', false, 0)"
You could also use the inet data type, which would save you this hassle.
As stated by the docs:
PostgreSQL offers data types to store IPv4, IPv6, and MAC addresses. It is better to use these types instead of plain text types to store network addresses, because these types offer input error checking and specialized operators and functions.
PostgreSQL: Network Address Types
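As a sketch of both suggestions (assuming a psycopg2-style cursor, since the question doesn't name the driver): binding network_number as a parameter lets the driver do the quoting for you, and if the column were inet/cidr, note that a three-octet prefix such as 1.1.50 would have to be written as a network, e.g. '1.1.50.0/24'.
# Parameter binding instead of f-string interpolation; the driver quotes
# the value, so it arrives in PostgreSQL as the string '1.1.50'.
sql_query = ("INSERT INTO ip_scan_record (ip, scanned_status, times_scanned) "
             "VALUES (%s, false, 0)")
cursor.execute(sql_query, (network_number,))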

LoadData script in Local SQL Server instance and Azure SQL Server

I can run the following commands without any problems on my local SQL Server machine:
exec sp_configure 'show advanced options', 1
reconfigure
go
exec sp_configure 'Ad Hoc Distributed Queries', 1
reconfigure
go
exec LoadData 'C:\MyDataFile.urg';
go
But when I try to run the SP_CONFIGURE commands on Azure SQL, I get the following error:
Statement 'CONFIG' is not supported in this version of SQL Server.
And when I execute the Load data command I get the following error
Cannot bulk load because the file "C:\MyDataFile.urg" could not be opened. Operating system error code (null).
The above error makes sense, since I am trying to access a file on my local machine from Azure cloud. Is there an equivalent process to Load data that I can follow in Azure to dump the contents of the file?
I can place the file in Azure blob but then what command I execute that will work similar to load data?
Update 1:
Please keep in mind two things when answering:
1) I am using a third-party file that ends with .urg and is not a CSV file.
2) When I use exec LoadData 'C:\MyDataFile.urg';, note that I am not passing a table name for the file data to go to. The LoadData command processes the file and puts the data into the respective tables itself. I am assuming the .urg file gets opened and executed, and contains commands that say what data goes where.
Update 2:
So my understanding was incorrect. I found out that LoadData is a stored proc the third party uses, which takes the path to the file like this. A file on disk works great; I need to send it an Azure storage blob path.
CREATE PROCEDURE [dbo].[LoadData]
    @DataFile NVARCHAR(MAX)
AS
DECLARE @LoadSql NVARCHAR(MAX)
SET @LoadSql = '
BULK INSERT UrgLoad FROM ''' + @DataFile + '''
WITH (
    FIRSTROW = 2,
    FIELDTERMINATOR = ''~'',
    ROWTERMINATOR = ''0x0a'',
    KEEPNULLS,
    CODEPAGE = ''ACP''
)
'
EXEC sp_executesql @LoadSql
SELECT @Err = @@ERROR
Now I need to find a way to send an Azure storage blob path to this stored proc in such a way that it can open it. I will update if I run into issues.
Update 3:
Since my blob storage account is not public, I am sure I need to add an authorization piece. I added this code to the proc:
CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sp=r&st=2020-03-10T01:04:16Z&se=2020-03-10T09:04:16Z&spr=https&sv=2019-02-02&sr=b&sig=Udxa%2FvPrUBZt09GAH4YgWd9joTlyxYDC%2Bt7j7CmuhvQ%3D';
-- Create external data source with the URL of the Blob storage Account and associated credential since its not public
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://dev.blob.core.windows.net/urg',
CREDENTIAL= MyAzureBlobStorageCredential
);
When I execute the proc, it says they already exist:
Msg 15530, Level 16, State 1, Procedure LoadData, Line 14 [Batch Start Line 1]
The credential with name "MyAzureBlobStorageCredential" already exists.
Msg 46502, Level 16, State 1, Procedure LoadData, Line 27 [Batch Start Line 1]
Type with name 'MyAzureBlobStorage' already exists.
When I take that out and update the BULK INSERT piece of code like this:
DECLARE @LoadSql NVARCHAR(MAX)
SET @LoadSql = '
BULK INSERT UrjanetLoad FROM ''' + @DataFile + '''
WITH ( DATA_SOURCE = ''MyAzureBlobStorage'',
    FIRSTROW = 2,
    FIELDTERMINATOR = ''~'',
    ROWTERMINATOR = ''0x0a'',
    KEEPNULLS,
    CODEPAGE = ''ACP''
)
'
But it tells me
Cannot bulk load because the file "https://dev.blob.core.windows.net/urg/03_06_20_16_23.urg" could not be opened. Operating system error code 5(Access is denied.).
I guess the question is: what am I missing in the authorization process, and how can I make it part of the stored proc, so that whenever it runs it picks it up?
Update 4: This article helped with accessing a file from blob storage using credentials, dropping the external data source and scoped credentials, and getting a fresh SAS token in the stored proc, in case it can help someone else: https://social.technet.microsoft.com/wiki/contents/articles/52061.t-sql-bulk-insert-azure-csv-blob-into-azure-sql-database.aspx
Now I am getting this error:
Cannot bulk load because the file "03_06_20_16_23.urg" could not be opened. Operating system error code 32(The process cannot access the file because it is being used by another process.).
I tried this article, but it does not address the file-being-used-by-another-process issue.
Update 5: Here is what the proc looks like:
ALTER PROCEDURE [dbo].[TestLoad]
    @DataFile NVARCHAR(MAX), @SAS_Token VARCHAR(MAX), @Location VARCHAR(MAX)
AS
BEGIN TRAN
-- Turn on NOCOUNT to prevent message spamming
SET NOCOUNT ON;
DECLARE @CrtDSSQL NVARCHAR(MAX), @DrpDSSQL NVARCHAR(MAX), @ExtlDS SYSNAME, @DBCred SYSNAME, @BulkInsSQL NVARCHAR(MAX);
SELECT @ExtlDS = 'MyAzureBlobStorage'
SELECT @DBCred = 'MyAzureBlobStorageCredential'
SET @DrpDSSQL = N'
IF EXISTS ( SELECT 1 FROM sys.external_data_sources WHERE Name = ''' + @ExtlDS + ''' )
BEGIN
    DROP EXTERNAL DATA SOURCE ' + @ExtlDS + ' ;
END;
IF EXISTS ( SELECT 1 FROM sys.database_scoped_credentials WHERE Name = ''' + @DBCred + ''' )
BEGIN
    DROP DATABASE SCOPED CREDENTIAL ' + @DBCred + ';
END;
';
SET @CrtDSSQL = @DrpDSSQL + N'
CREATE DATABASE SCOPED CREDENTIAL ' + @DBCred + '
WITH IDENTITY = ''SHARED ACCESS SIGNATURE'',
SECRET = ''' + @SAS_Token + ''';
CREATE EXTERNAL DATA SOURCE ' + @ExtlDS + '
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = ''' + @Location + ''' ,
    CREDENTIAL = ' + @DBCred + '
);
';
-- PRINT @CrtDSSQL
EXEC (@CrtDSSQL);
-- Set up the load timestamp
DECLARE @LoadTime DATETIME, @Err varchar(60)
SELECT @LoadTime = GETDATE()
-- Set the bulk load command to a string and execute with sp_executesql.
-- This is the only way to do parameterized bulk loads
DECLARE @LoadSql NVARCHAR(MAX)
SET @LoadSql = '
BULK INSERT TestLoadTable FROM ''' + @DataFile + '''
WITH ( DATA_SOURCE = ''MyAzureBlobStorage'',
    FIRSTROW = 2,
    FIELDTERMINATOR = ''~'',
    ROWTERMINATOR = ''0x0a'',
    KEEPNULLS,
    CODEPAGE = ''ACP''
)
'
EXEC (@LoadSql);
--EXEC sp_executesql @LoadSql
SELECT @Err = @@ERROR
IF @Err <> 0 BEGIN
    PRINT 'Errors with data file ... aborting'
    ROLLBACK
    RETURN -1
END
SET NOCOUNT OFF;
COMMIT
GO
And this is how I am trying to call it.
EXEC TestLoad 'TestFile.csv',
    'sv=2019-02-02&ss=bfqt&srt=sco&sp=rwdlacup&se=2020-03-16T02:07:03Z&st=2020-03-10T18:07:03Z&spr=https&sig=TleUPwAyEVT6dzX17fH6rq1lQQRAhIRImDHdJRKIrKE%3D',
    'https://dev.blob.core.windows.net/urg';
and here is the error
Cannot bulk load because the file "TestFile.csv" could not be opened. Operating system error code 32(The process cannot access the file because it is being used by another process.).
Errors with data file ... aborting
If your file is placed on a public Azure Blob Storage account, you need to define an EXTERNAL DATA SOURCE that points to that account:
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE, LOCATION = 'https://myazureblobstorage.blob.core.windows.net');
Once you define external data source, you can use the name of that source in BULK INSERT and OPENROWSET.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'some strong password';
CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2015-12-11&ss=b&srt=sco&sp=rwac&se=2017-02-01T00:55:34Z&st=2016-12-29T16:55:34Z&spr=https&sig=copyFromAzurePortal';
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://myazureblobstorage.blob.core.windows.net',
CREDENTIAL= MyAzureBlobStorageCredential);
In my experience, and according to the Azure SQL Database documentation, the answer is:
Azure SQL Database doesn't support loading a file from an on-premises/local computer directly.
Azure SQL Database also doesn't support the .urg data file. We cannot find any way to support the .urg file; even Data Factory doesn't.
Reference:
Limitations: Only .mdf, .ldf, and .ndf files can be stored in Azure Storage by using the SQL Server Data Files in Azure feature.
Data formats for import and export
Update:
I don't know if the .urg file will load successfully, but here are some ways you could try:
You could upload your .urg file to Blob storage first, then follow this tutorial: Importing data from a file in Azure blob storage.
Here's another blog, Bulk insert using stored procedure and Bulk insert file path as stored procedure parameter, which can help you pass the bulk insert file path as a parameter to the stored procedure 'LoadData'.
Hope this helps.

How to make a connection in Python to an AS/400 and call AS/400 programs with parameters

Does anyone know how to make a connection in Python to an AS/400 (iSeries) system and call AS/400 programs with parameters?
For example, how do you create a library on the AS/400 through Python? I want to call "CRTLIB LIB(TEST)" from a Python script.
I am able to connect to the DB2 database through the pyodbc package.
Here is my code to connect to the DB2 database:
import pyodbc
connection = pyodbc.connect(
    driver='{iSeries Access ODBC Driver}',
    system='ip/hostname',
    uid='username',
    pwd='password')
c1 = connection.cursor()
c1.execute('select * from libname.filename')
for row in c1:
    print (row)
If your IBM i is set up to allow it, you can call the QCMDEXC stored procedure using CALL in your SQL. For example,
c1.execute("call qcmdexc('crtlib lib(test)')")
The QCMDEXC stored procedure lives in QSYS2 (the actual program object is QSYS2/QCMDEXC1) and does much the same as the familiar program of the same name that lives in QSYS, but the stored procedure is specifically meant to be called via SQL.
Of course, for this example to work, your connection profile has to have the proper authority to create libraries.
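As a sketch, you can also pass the CL command as a bind parameter, which avoids doubling up any quotes inside the command string (this assumes the same pyodbc connection as above):
# Pass the CL command as a parameter; pyodbc substitutes the ? marker.
cl_command = "CRTLIB LIB(TEST)"
c1.execute("call qsys2.qcmdexc(?)", (cl_command,))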
It's also possible that your IBM i isn't set up to allow this. I don't know exactly what goes into enabling this functionality, but where I work, we have one partition where the example shown above completes normally, and another partition where I get this instead:
pyodbc.Error: ('HY000', '[HY000] [IBM][System i Access ODBC Driver][DB2 for i5/OS]SQL0901 - SQL system error. (-901) (SQLExecDirectW)')
This gist shows how to connect to an AS/400 via pyodbc:
https://gist.github.com/BietteMaxime/6cfd5b2dc2624c094575
A few notes: in this example, SYSTEM is the DSN you've set up for the AS/400 in the with pyodbc.connect statement. You could also switch this to SERVER and PORT with these modifications:
import pyodbc

class CommitMode:
    NONE = 0  # Commit immediate (*NONE) --> QSQCLIPKGN
    CS = 1    # Read committed (*CS) --> QSQCLIPKGS
    CHG = 2   # Read uncommitted (*CHG) --> QSQCLIPKGC
    ALL = 3   # Repeatable read (*ALL) --> QSQCLIPKGA
    RR = 4    # Serializable (*RR) --> QSQCLIPKGL

class ConnectionType:
    READ_WRITE = 0  # Read/Write (all SQL statements allowed)
    READ_CALL = 1   # Read/Call (SELECT and CALL statements allowed)
    READ_ONLY = 2   # Read-only (SELECT statements only)

def connstr(server, port, commit_mode=None, connection_type=None):
    _connstr = 'DRIVER=iSeries Access ODBC Driver;SERVER={server};PORT={port};SIGNON=4;CCSID=1208;TRANSLATE=1;'.format(
        server=server,
        port=port,
    )
    if commit_mode is not None:
        _connstr = _connstr + 'CommitMode=' + str(commit_mode) + ';'
    if connection_type is not None:
        _connstr = _connstr + 'ConnectionType=' + str(connection_type) + ';'
    return _connstr

def main():
    with pyodbc.connect(connstr('myas400.server.com', '8471', CommitMode.CHG, ConnectionType.READ_ONLY)) as db:
        cursor = db.cursor()
        cursor.execute(
            """
            SELECT * FROM IASP.LIB.FILE
            """
        )
        for row in cursor:
            print(' '.join(map(str, row)))

if __name__ == '__main__':
    main()
I cleaned up some PEP-8 as well. Good luck!

Why doesn't psycopg2 allow us to open multiple server-side cursors in the same connection?

I am curious why psycopg2 doesn't allow opening multiple server-side cursors (http://initd.org/psycopg/docs/usage.html#server-side-cursors) in the same connection. I ran into this problem recently and had to solve it by replacing the second cursor with a client-side cursor. But I still want to know if there is a way to do that.
For example, I have these 2 tables on Amazon Redshift:
CREATE TABLE tbl_account (
acctid varchar(100),
regist_day date
);
CREATE TABLE tbl_my_artist (
user_id varchar(100),
artist_id bigint
);
INSERT INTO tbl_account
(acctid, regist_day)
VALUES
('TEST0000000001', DATE '2014-11-23'),
('TEST0000000002', DATE '2014-11-23'),
('TEST0000000003', DATE '2014-11-23'),
('TEST0000000004', DATE '2014-11-23'),
('TEST0000000005', DATE '2014-11-25'),
('TEST0000000006', DATE '2014-11-25'),
('TEST0000000007', DATE '2014-11-25'),
('TEST0000000008', DATE '2014-11-25'),
('TEST0000000009', DATE '2014-11-26'),
('TEST0000000010', DATE '2014-11-26'),
('TEST0000000011', DATE '2014-11-24'),
('TEST0000000012', DATE '2014-11-24')
;
INSERT INTO tbl_my_artist
(user_id, artist_id)
VALUES
('TEST0000000001', 2000011247),
('TEST0000000001', 2000157208),
('TEST0000000001', 2000002648),
('TEST0000000002', 2000383724),
('TEST0000000003', 2000002546),
('TEST0000000003', 2000417262),
('TEST0000000004', 2000076873),
('TEST0000000004', 2000417266),
('TEST0000000005', 2000077991),
('TEST0000000005', 2000424268),
('TEST0000000005', 2000168784),
('TEST0000000006', 2000284581),
('TEST0000000007', 2000284581),
('TEST0000000007', 2000000642),
('TEST0000000008', 2000268783),
('TEST0000000008', 2000284581),
('TEST0000000009', 2000088635),
('TEST0000000009', 2000427808),
('TEST0000000010', 2000374095),
('TEST0000000010', 2000081797),
('TEST0000000011', 2000420006),
('TEST0000000012', 2000115887)
;
I want to select from those 2 tables, then do something with the query results.
I use 2 server-side cursors because I need 2 nested loops in my query. I want to use server-side cursor because the result can be very huge.
I use fetchmany() instead of fetchall() because I'm running on a single-node cluster.
Here is my code:
import psycopg2
from psycopg2.extras import DictCursor

conn = psycopg2.connect('connection parameters')
cur1 = conn.cursor(name='cursor1', cursor_factory=DictCursor)
cur2 = conn.cursor(name='cursor2', cursor_factory=DictCursor)
cur1.execute("""SELECT acctid, regist_day FROM tbl_account
                WHERE regist_day <= '2014-11-25'
                ORDER BY 1""")
for record1 in cur1.fetchmany(50):
    cur2.execute("""SELECT user_id, artist_id FROM tbl_my_artist
                    WHERE user_id = '%s'
                    ORDER BY 1""" % (record1["acctid"]))
    for record2 in cur2.fetchmany(50):
        print '(acctid, artist_id, regist_day): (%s, %s, %s)' % (
            record1["acctid"], record2["artist_id"], record1["regist_day"])
        # do something with these values
conn.close()
When running, I got an error:
Traceback (most recent call last):
  File "C:\Users\MLD1\Desktop\demo_cursor.py", line 20, in <module>
    for record2 in cur2.fetchmany(50):
  File "C:\Python27\lib\site-packages\psycopg2\extras.py", line 72, in fetchmany
    res = super(DictCursorBase, self).fetchmany(size)
InternalError: opening multiple cursors from within the same client connection is not allowed.
That error occurred at line 20, when I tried to fetch results from the second cursor.
An answer four years later, but it is possible to have more than one cursor open from the same connection. (It may be that the library was updated to fix the problem above.)
The caveat is that you are only allowed to call execute() once on a named cursor, so if you reuse one of the cursors in the fetchmany loop you'd need to either remove the name or create another "anonymous" cursor.
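A minimal sketch of that second option, adapted from the question's code (switched to parameter binding and Python 3 while we're at it): keep the outer named (server-side) cursor, and open a fresh anonymous cursor for each inner query.
cur1 = conn.cursor(name='cursor1', cursor_factory=DictCursor)  # server-side
cur1.execute("""SELECT acctid, regist_day FROM tbl_account
                WHERE regist_day <= '2014-11-25'
                ORDER BY 1""")
for record1 in cur1.fetchmany(50):
    # Anonymous (client-side) cursor: may be executed as often as needed.
    cur2 = conn.cursor(cursor_factory=DictCursor)
    cur2.execute("""SELECT user_id, artist_id FROM tbl_my_artist
                    WHERE user_id = %s
                    ORDER BY 1""", (record1["acctid"],))
    for record2 in cur2.fetchmany(50):
        print(record1["acctid"], record2["artist_id"], record1["regist_day"])
    cur2.close()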
