schemacrawler oracle-plugin not returning SYNONYMS

I am using SchemaCrawler to crawl an Oracle database (to retrieve table/synonym metadata, including column details and foreign keys).
INFO:
-- generated by: SchemaCrawler 16.15.1
-- database: Oracle Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production
-- driver: Oracle JDBC driver 19.3.0.0.0
-- JVM system: AdoptOpenJDK OpenJDK 64-Bit Server VM 1.8.0_292-b10
In my POM I have included the Oracle plugin:
<dependency>
    <groupId>us.fatehi</groupId>
    <artifactId>schemacrawler-oracle</artifactId>
    <version>${schemacrawler.version}</version>
</dependency>
I have set the following in LimitOptionsBuilder and LoadOptionsBuilder to crawl the schema:
limitOptionsBuilder.tableTypes("TABLE,VIEW,SYNONYM");
limitOptionsBuilder.includeAllSynonyms();

final SchemaCrawlerOptions options = SchemaCrawlerOptionsBuilder.newSchemaCrawlerOptions()
    .withLimitOptions(limitOptionsBuilder.toOptions())
    .withLoadOptions(loadOptionsBuilder.toOptions());
Catalog cat = SchemaCrawlerUtility.getCatalog(conn, options);
In the Catalog output, I don't see any SYNONYMS. I did some debugging, and it seems that the query sent to the database to get the tables uses DBA_TAB_COMMENTS, which unfortunately does not contain synonym information. In Oracle, synonyms are stored in ALL_SYNONYMS:
SELECT
NULL AS TABLE_CAT,
TABLES.OWNER AS TABLE_SCHEM,
TABLES.TABLE_NAME AS TABLE_NAME,
TABLES.TABLE_TYPE AS TABLE_TYPE,
TABLES.COMMENTS AS REMARKS
FROM
DBA_TAB_COMMENTS TABLES
WHERE
TABLES.OWNER NOT IN
('ANONYMOUS', 'APEX_PUBLIC_USER', 'APPQOSSYS', 'BI', 'CTXSYS', 'DBSNMP', 'DIP',
'EXFSYS', 'FLOWS_30000', 'FLOWS_FILES', 'GSMADMIN_INTERNAL', 'IX', 'LBACSYS',
'MDDATA', 'MDSYS', 'MGMT_VIEW', 'OE', 'OLAPSYS', 'ORACLE_OCM',
'ORDPLUGINS', 'ORDSYS', 'OUTLN', 'OWBSYS', 'PM', 'SCOTT', 'SH',
'SI_INFORMTN_SCHEMA', 'SPATIAL_CSW_ADMIN_USR', 'SPATIAL_WFS_ADMIN_USR',
'SYS', 'SYSMAN', 'SYSTEM', 'TSMSYS', 'WKPROXY', 'WKSYS', 'WK_TEST',
'WMSYS', 'XDB', 'XS$NULL', 'RDSADMIN')
AND NOT REGEXP_LIKE(TABLES.OWNER, '^APEX_[0-9]{6}$')
AND NOT REGEXP_LIKE(TABLES.OWNER, '^FLOWS_[0-9]{5,6}$')
AND REGEXP_LIKE(TABLES.OWNER, '${schemas}')
AND TABLES.TABLE_NAME NOT LIKE 'BIN$%'
AND NOT REGEXP_LIKE(TABLES.TABLE_NAME, '^(SYS_IOT|MDOS|MDRS|MDRT|MDOT|MDXT)_.*$')
UNION ALL
SELECT
NULL AS TABLE_CAT,
MVIEWS.OWNER AS TABLE_SCHEM,
MVIEWS.MVIEW_NAME AS TABLE_NAME,
'MATERIALIZED VIEW' AS TABLE_TYPE,
MVIEWS.COMMENTS AS REMARKS
FROM
DBA_MVIEW_COMMENTS MVIEWS
WHERE
MVIEWS.OWNER NOT IN
('ANONYMOUS', 'APEX_PUBLIC_USER', 'APPQOSSYS', 'BI', 'CTXSYS', 'DBSNMP', 'DIP',
'EXFSYS', 'FLOWS_30000', 'FLOWS_FILES', 'GSMADMIN_INTERNAL', 'IX', 'LBACSYS',
'MDDATA', 'MDSYS', 'MGMT_VIEW', 'OE', 'OLAPSYS', 'ORACLE_OCM',
'ORDPLUGINS', 'ORDSYS', 'OUTLN', 'OWBSYS', 'PM', 'SCOTT', 'SH',
'SI_INFORMTN_SCHEMA', 'SPATIAL_CSW_ADMIN_USR', 'SPATIAL_WFS_ADMIN_USR',
'SYS', 'SYSMAN', 'SYSTEM', 'TSMSYS', 'WKPROXY', 'WKSYS', 'WK_TEST',
'WMSYS', 'XDB', 'XS$NULL', 'RDSADMIN')
AND NOT REGEXP_LIKE(MVIEWS.OWNER, '^APEX_[0-9]{6}$')
AND NOT REGEXP_LIKE(MVIEWS.OWNER, '^FLOWS_[0-9]{5,6}$')
AND REGEXP_LIKE(MVIEWS.OWNER, '${schemas}')

Related

SingleStore 1.0.1 JDBC ResultsetMetadata returns type name VARSTRING while getColumns returns VARCHAR

JDBC version 1.0.1
Server version 7.6
A table is defined as follows:
create table TVCHAR ( RNUM integer not null, CVCHAR varchar(32) null, SHARD KEY ( RNUM ) );
DatabaseMetaData.getColumns returns a type name of VARCHAR(32).
When the query select * from TVCHAR is executed, the ResultSetMetaData returned by the driver describes the column CVCHAR as VARSTRING rather than VARCHAR. I would expect a consistent type name from both result sets.
Example shown using SQuirreL SQL.
Any advice?
Try updating your 1.0.1 JDBC driver to a more stable version, or it may be that your varchar(32) data exceeds its limit, so the JDBC driver reported it as VARSTRING. The driver converts the data type in the result set metadata, usually when something is wrong.

Retrieving data from IBM DB2 using pyodbc and the related error

I confirm that I have gone through multiple Stack Overflow posts on a similar problem, but I am still stuck with the problem below, hence posting to seek guidance/pointers.
Following is the code:
import pypyodbc as pyodbc
import configparser

config = configparser.ConfigParser()
config.read('config.ini')

conn_str = 'DRIVER={' + config['db2']['driver'] + '};' \
    + 'SERVER=' + config['db2']['server'] + ';' \
    + 'DATABASE=' + config['db2']['database'] + ';' \
    + 'UID=' + config['db2']['uid'] + ';' \
    + 'PWD=' + config['db2']['password']
print(conn_str)

connection = pyodbc.connect(conn_str)
cur = connection.cursor()
cur.execute('SELECT col_1, col_2 FROM schema.table_name LIMIT 2')
for row in cur:
    print(row)
Output from code execution
[connect string output]
DRIVER={'IBM i Access ODBC Driver 64-bit'};SERVER='hostname';DATABASE='database';UID='userid';PWD='password'
[error from executing the code]
raise Error(state,err_text)
pypyodbc.Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified')
Configuration file
$ cat config.ini
[db2]
driver = 'IBM i Access ODBC Driver 64-bit'
server = 'hostname'
database = 'database'
uid = 'userid'
password = 'password'
Output of ODBC installer and uninstaller command
odbcinst -j
unixODBC 2.3.1
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /home/useradmin/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
cat /etc/odbcinst.ini
[PostgreSQL]
Description=ODBC for PostgreSQL
Driver=/usr/lib/psqlodbcw.so
Setup=/usr/lib/libodbcpsqlS.so
Driver64=/usr/lib64/psqlodbcw.so
Setup64=/usr/lib64/libodbcpsqlS.so
FileUsage=1
[MySQL]
Description=ODBC for MySQL
Driver=/usr/lib/libmyodbc5.so
Setup=/usr/lib/libodbcmyS.so
Driver64=/usr/lib64/libmyodbc5.so
Setup64=/usr/lib64/libodbcmyS.so
FileUsage=1
[IBM i Access ODBC Driver]
Description=IBM i Access for Linux ODBC Driver
Driver=/opt/ibm/iaccess/lib/libcwbodbc.so
Setup=/opt/ibm/iaccess/lib/libcwbodbcs.so
Driver64=/opt/ibm/iaccess/lib64/libcwbodbc.so
Setup64=/opt/ibm/iaccess/lib64/libcwbodbcs.so
Threading=0
DontDLClose=1
UsageCount=1
[IBM i Access ODBC Driver 64-bit]
Description=IBM i Access for Linux 64-bit ODBC Driver
Driver=/opt/ibm/iaccess/lib64/libcwbodbc.so
Setup=/opt/ibm/iaccess/lib64/libcwbodbcs.so
Threading=0
DontDLClose=1
UsageCount=1
$ cat ~/.odbc.ini
[db2]
Driver = IBM i Access ODBC Driver 64-bit
DATABASE = 'database'
SYSTEM = hostname
HOSTNAME = hostname
PORT = 446
PROTOCOL = TCPIP
$ isql db2 $username $password -v
[08001][unixODBC][IBM][System i Access ODBC Driver]The specified database can not be accessed at this time.
[ISQL]ERROR: Could not SQLConnect
I have double-checked and confirmed that there is no typo in the driver name "IBM i Access ODBC Driver 64-bit".
OS information
x86_64 GNU/Linux
Any pointers/guidance on how to debug the issue, please?
I think you are confusing the schema name with the database name.
Odds are you can omit the database name completely (or leave it as an empty string and let it default to *SYSBAS). Instead, you can specify the DefaultLibraries keyword.
See the IBM documentation for the valid connection string (and odbc.ini) keywords.
Similarly, you can omit the PORT, PROTOCOL, and HOSTNAME keywords, as they are not supported by this driver.
That will leave you with an odbc.ini that looks as simple as this:
[db2]
Driver = IBM i Access ODBC Driver 64-bit
DefaultLibraries = 'database'
SYSTEM = hostname
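With that in place, the Python side only needs the DSN; a minimal sketch, assuming the simplified odbc.ini above, with placeholder UID/PWD and the same query as in the question:
import pypyodbc as pyodbc  # the same call works with pyodbc

# Connect through the DSN defined in ~/.odbc.ini; the driver, SYSTEM and
# DefaultLibraries settings are picked up from the ini file.
connection = pyodbc.connect('DSN=db2;UID=userid;PWD=password')
cur = connection.cursor()
cur.execute('SELECT col_1, col_2 FROM schema.table_name LIMIT 2')
for row in cur:
    print(row)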

PyQGIS: UPDATE, INSERT before SELECT

I can run a SQL query against a PostGIS table and load the result in QGIS 3.16 (running on Ubuntu Desktop 20.04) like this:
uri = QgsDataSourceUri()
uri.setConnection("localhost", "5432", "dbname", "username", "password")
print("Connection Successful")
nb = 1050130
fields = '*'
sql ='''(SELECT {} FROM montebelodosul.cadastro_urbano_montebelodosul_p WHERE numero_cadastro = {})'''.format(fields,nb)
# Retrieve the query table
uri.setDataSource('', f'({sql})', 'geom', '', 'id')
# add the layer to the canvas
pg_layer = QgsVectorLayer(uri.uri(False), "queryLayer", "postgres")
QgsProject.instance().addMapLayer(pg_layer)
I am not using Psycopg2. Could anyone give me some insight or point me in the right direction on how to run an UPDATE or an INSERT on the table using PyQGIS before running a SELECT as shown above?
I think you should first load the layer in order to update, insert, or delete its features. See the PyQGIS documentation.
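A rough sketch of that approach, assuming the same connection details as above; the field name "observacao" and the attribute values are hypothetical placeholders, and the database user needs write permission on the table:
from qgis.core import QgsDataSourceUri, QgsVectorLayer, QgsFeature, QgsFeatureRequest

# Load the table itself (not the wrapped query) as an editable layer.
uri = QgsDataSourceUri()
uri.setConnection("localhost", "5432", "dbname", "username", "password")
uri.setDataSource("montebelodosul", "cadastro_urbano_montebelodosul_p", "geom", "", "id")
table_layer = QgsVectorLayer(uri.uri(False), "editLayer", "postgres")

table_layer.startEditing()

# UPDATE-like edit: change a (hypothetical) attribute on the matching feature.
idx = table_layer.fields().indexOf("observacao")
request = QgsFeatureRequest().setFilterExpression('"numero_cadastro" = 1050130')
for feature in table_layer.getFeatures(request):
    table_layer.changeAttributeValue(feature.id(), idx, "revisado")

# INSERT-like edit: a new feature with a single attribute set, for illustration.
new_feature = QgsFeature(table_layer.fields())
new_feature.setAttribute("numero_cadastro", 1050131)
table_layer.addFeature(new_feature)

if not table_layer.commitChanges():  # push the edits back to PostGIS
    print(table_layer.commitErrors())

# After committing, build the query layer with uri.setDataSource(...) as in the question.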

Can't access Oracle database using sqlalchemy

I'm aware that this one has been asked several times, particularly in this question, but I have not managed to solve my problem. Both snippets below are run with cx_Oracle and sqlalchemy installed:
import cx_Oracle
from sqlalchemy import create_engine
I'm trying to connect/write a pandas dataframe to an Oracle Database. I've managed to write to the database using the following code snippet:
ora_con = cx_Oracle.connect("{}/{}@{}".format(schema_name, password, name_of_service))
cur = ora_con.cursor()
statement = 'CREATE TABLE ' + schema_name + '.History (Name VARCHAR2(15), Entity VARCHAR2(15), Status VARCHAR2(25), Type VARCHAR2(25), Owner VARCHAR2(15), User_521 VARCHAR2(15), Manufacturer VARCHAR2(15), Model VARCHAR2(15), FusInv_Last_inventory DATE, Serial_Number VARCHAR2(25), ID VARCHAR2(15), Version_of_OS VARCHAR2(15), OS VARCHAR2(15), Date_of_Report DATE)'
cur.execute(statement)
That works.
When I try:
con_str = """oracle+cx_oracle://schema_name:password@Host_address:port/?service_name=name_of_service"""
engine = create_engine(con_str, echo=False)
pandasDataframe.to_sql('History', engine, index=False)  # write the dataframe into the 'History' table
The pandas .to_sql command fails with a cx_Oracle error of:
cx_Oracle.DatabaseError: ORA-12569: TNS:packet checksum failure
Googling the error points to a networking problem (the network is fine) or a listener problem (the port number, which is also fine).
I can connect, write and read the database in SQL Developer.
Any thoughts anyone? Thanks in advance...
Your plain cx_Oracle connect string is different from the one for sqlalchemy. Note that sqlalchemy uses cx_Oracle.makedsn(). So if you have this connect syntax with plain cx_Oracle:
cx_Oracle.connect('myuser/mypassword@myhost:myport/myservice')
you would need this syntax for sqlalchemy:
con_str = 'oracle+cx_oracle://myuser:mypassword@myhost:myport/?service_name=myservice'
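For reference, a minimal sketch of what that translation amounts to, with placeholder host, port, and credentials; connecting with the explicit DSN in plain cx_Oracle is also a quick way to test the host/port/service combination outside of sqlalchemy and pandas:
import cx_Oracle

# Build the same DSN that the oracle+cx_oracle URL above is translated into.
dsn = cx_Oracle.makedsn("myhost", 1521, service_name="myservice")
print(dsn)  # prints the full TNS descriptor

# Plain cx_Oracle connection with the explicit DSN, to isolate TNS/listener issues.
conn = cx_Oracle.connect(user="myuser", password="mypassword", dsn=dsn)
print(conn.version)
conn.close()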

Unable to connect erlang-application to cassandra using erlcassa

I am unable to connect my Erlang application to Cassandra with ErlCassa. I am getting the following error message:
11> {ok, Cl} = erlcassa_client:connect("0.0.0.0", 9160).
** exception error: no case clause matching {'EXIT',{undef,[{thrift_client_util,new,
["0.0.0.0",9160,cassandra_thrift,[{framed,true}]],
[]},
{erlcassa_client,connect,2,
[{file,"src/erlcassa_client.erl"},{line,41}]},
{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,573}]},
{erl_eval,expr,5,[{file,"erl_eval.erl"},{line,364}]},
{shell,exprs,7,[{file,"shell.erl"},{line,674}]},
{shell,eval_exprs,7,[{file,"shell.erl"},{line,629}]},
{shell,eval_loop,3,[{file,"shell.erl"},{line,614}]}]}}
in function erlcassa_client:connect/2 (src/erlcassa_client.erl, line 41)
10> {ok, Cl} = erlcassa_client:connect("localhost", 9160).
** exception error: no case clause matching {'EXIT',{undef,[{thrift_client_util,new,
["localhost",9160,cassandra_thrift,[{framed,true}]],
[]},
{erlcassa_client,connect,2,
[{file,"src/erlcassa_client.erl"},{line,41}]},
{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,573}]},
{erl_eval,expr,5,[{file,"erl_eval.erl"},{line,364}]},
{shell,exprs,7,[{file,"shell.erl"},{line,674}]},
{shell,eval_exprs,7,[{file,"shell.erl"},{line,629}]},
{shell,eval_loop,3,[{file,"shell.erl"},{line,614}]}]}}
in function erlcassa_client:connect/2 (src/erlcassa_client.erl, line 41)
Erlang version:
Erlang R16B02 (erts-5.10.3) [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
Cassandra version:
INFO 12:59:51,051 Cassandra version: 1.1.12
INFO 12:59:51,051 Thrift API version: 19.33.0
INFO 12:59:51,053 CQL supported versions: 2.0.0,3.0.0-beta1 (default: 2.0.0)
I think you need to add the https://github.com/interline/erlang-thrift dependency to your project.
The erlcassa code calls the thrift_client_util module from that dependency, and it cannot find it because the dependency has not been compiled with the project.
