Tracing ODBC calls for Informix Client for Linux

I am trying to trace ODBC function calls from my program running on Linux. The program dynamically links the ODBC driver manager, then connects to the database and fetches some data.
I can trace ODBC calls with unixODBC by adding this to odbcinst.ini:
[ODBC]
Trace=yes
TraceFile=/tmp/sql.log
This method is documented by IBM: Collecting data for an ODBC Problem
But when I change the driver manager from unixODBC to Informix's own manager (libifdmr.so), the trace file is not created. Has anybody successfully obtained an ODBC trace from the Informix driver manager (and driver) on Linux?
Client version: CSDK 3.50UC3
I hope that it is not a bug and something is wrong with my config.
As for unixODBC: I cannot use it in a multithreaded app. I use a connection pool, and my app segfaulted when a connection was closed from a different thread than the one that opened it. unixODBC is also much slower in a multithreaded app.

If you run:
strings $INFORMIXDIR/lib/cli/libifdmr.so | grep _OdbcSetTrace
do you see any references? If not, the library is missing the trace support functions. If you do see them, the documented mechanism should work; if it still doesn't, you probably have a reportable bug.
At what level are you trying to trace the issue? And since unixODBC works, why not use the driver manager that does work?
I've taken the example distsel.c from $INFORMIXDIR/demo/cli and compiled it on Solaris 10 using CSDK 3.50.FC3. I got it to the point where the connection succeeds, but the table 'item' is missing in the database I'm using, so the program stops at SQLExecDirect(). When I run it under 'truss' (the Solaris equivalent of 'strace' on Linux), I see no evidence of the code even trying to open the trace file.
I compiled using:
gcc -I$INFORMIXDIR/incl/cli distsel.c -DNO_WIN32 \
-L$INFORMIXDIR/lib/cli -lifdmr -lifcli -o distsel
I used the following .odbc.ini file:
;
; odbc.ini
;
[ODBC Data Sources]
odbc_demo = IDS 11.50.FC3 stores on black
[ODBC]
Trace = yes
TraceFile = /tmp/odbc.trace
[odbc_demo]
Driver = /usr/informix/11.50.FC1/lib/cli/libifcli.so
Description = IBM Informix CLI 3.50
Server = black_19
FetchBufferSize = 99
UserName = jleffler
Password = XXXXXXXX
Database = stores
ServerOptions =
ConnectOptions =
Options =
ReadOnly = no
And this one:
;
; odbc.ini
;
[ODBC Data Sources]
odbc_demo = IDS 11.50.FC3 stores on black
[odbc_demo]
Driver = /usr/informix/11.50.FC1/lib/cli/libifcli.so
Description = IBM Informix CLI 3.50
Server = black_19
FetchBufferSize = 99
UserName = jleffler
Password = XXXXXXXX
Database = stores
ServerOptions =
ConnectOptions =
Options =
ReadOnly = no
Trace = yes
TraceFile = /tmp/odbc.trace
Consequently, I believe you have found a bug. I'm not sure whether the bug is in the FAQ you referenced or in the product - I'm inclined to think the latter. You should report the issue to IBM Technical Support. (I've not checked the Informix CLI (ODBC) manual; it might be worth checking that before trying to file a product bug; if the manual indicates that Trace doesn't work, and perhaps if it doesn't indicate that it does work, then there is a bug in the FAQ page you listed.)
If you are looking to see the SQL data, the SQLIDEBUG part of the FAQ works:
SQLIDEBUG=2:distsel ./distsel
That generated a file distsel_6004_0_102d40 for me - it will be different for you. You can then use the 'sqliprint' utility to see the data flowing between client and server.
If you cannot find 'sqliprint', get back to me.

I got an ODBC trace with these settings in my odbc.ini:
[ODBC]
TRACE=1
TRACEFILE=/tmp/odbc_trace.txt
TRACEDLL=idmrs09a.so
I copied them from IBM Informix ODBC Driver Programmer’s Manual Version 3.50.
So the other IBM document appears to be invalid: these settings belong in odbc.ini rather than odbcinst.ini, and you must set TRACEDLL, which was not mentioned in the "Collecting data for an ODBC Problem" document.
UPDATE:
It seems IBM has updated the documentation: it now mentions TRACEDLL, but the reference to odbcinst.ini remains.

Related

How to read Arabic text from DB2 on z/OS using Node.JS

I am trying to get data from DB2 table on z/OS in my Node application, where some of the values are stored in Arabic (IBM-420). The result that I am getting in application is the following:
������� ���� ���������
I am using ibm_db version 2.7.4 to fetch data from DB2 and:
Windows 10 - 64 bit
Node version: 14.17.3
NPM version: 7.19.1
I have tried displaying the result on the console and writing it to a txt file using fs as follows:
fs.writeFile('data.txt', content, err => {
  if (err) {
    console.error(err)
    return
  }
})
Any suggestions for converting the text into proper Arabic?
Resolved by creating the system environment variable DB2CODEPAGE, setting its value to 1208, and then restarting all components to pick up the new variable.
When converting/translating between code pages, the Db2 CLI component needs to know the application code page along with the Db2 server database code page, and it converts/translates between them as necessary. This requires the relevant conversion tables to be present in the Db2 client (in this case the CLIDRIVER).
Setting DB2CODEPAGE to 1208 on Microsoft Windows forces the CLI components of Db2 to use Unicode as the application code page. When the DB2CODEPAGE variable is not present, the Db2 CLI component tries to derive the code page from the Microsoft Windows Control Panel Regional Options, which may not be what you need. Other values of this variable are possible; refer to the documentation for details.
When you set DB2CODEPAGE=1208, you must ensure that all the Microsoft Windows applications really do use Unicode or UTF-8 when inserting/updating data in the Db2 tables.
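The replacement characters in the question are the classic symptom of such a code-page mismatch: bytes produced under one code page were decoded under another. A minimal Python sketch of the effect, using cp1256 (Windows Arabic) since Python's standard library has no IBM-420 codec:

```python
# Illustrate a code-page mismatch with cp1256 (Windows Arabic).
arabic = "مرحبا"                       # "hello" in Arabic
raw = arabic.encode("cp1256")          # bytes in a non-Unicode code page

# Decoding with the wrong code page yields replacement characters,
# like the ������ shown in the question:
wrong = raw.decode("utf-8", errors="replace")
assert "\ufffd" in wrong

# Decoding with the code page that actually produced the bytes
# recovers the text:
assert raw.decode("cp1256") == arabic
```

Forcing the CLI to Unicode with DB2CODEPAGE=1208 addresses the same mismatch on the Db2 client side, so the application receives correctly decodable data.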

Pagination support available in DB2 for z/OS but not in DB2/400?

I'm aware of LIMIT and OFFSET, available in both DB2s, but for my requirements I need to use WHERE.
DB2/ZOS 12 supports ...
WHERE (WORKDEPT, EDLEVEL, JOB) > ('E11', 12, 'CLERK')
but apparently not DB2/400?
Please someone tell me I'm wrong.
References
DB2/ZOS https://www.ibm.com/support/knowledgecenter/en/SSEPEK_12.0.0/wnew/src/tpc/db2z_12_sqlpagination.html
DB2/400
https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/sqlp/rbafymultiplewhere.htm
There are three completely different DB2 platforms:
Db2 for z/OS
Db2 for IBM i
Db2 for Linux, UNIX, and Windows (LUW)
Despite sharing the DB2 name, they are completely separate products with different code bases.
IBM does try to ensure compatibility, but that doesn't mean that every platform has the same capabilities or gets new features at the same time.
So no, Db2 for i doesn't currently support inequality row-value expressions in the WHERE clause. You'll have to go old school.
WHERE
(WORKDEPT = 'E11' and EDLEVEL = 12 and JOB > 'CLERK')
or (WORKDEPT = 'E11' and EDLEVEL > 12)
or (WORKDEPT > 'E11')
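As a sanity check, the expanded OR form above is exactly lexicographic (row-value) comparison. Here is a small Python sketch of the equivalence, using the column names from the question as plain variables:

```python
# Check that the expanded OR predicate is equivalent to the row-value
# comparison (WORKDEPT, EDLEVEL, JOB) > ('E11', 12, 'CLERK').

def expanded(workdept, edlevel, job):
    return ((workdept == 'E11' and edlevel == 12 and job > 'CLERK')
            or (workdept == 'E11' and edlevel > 12)
            or (workdept > 'E11'))

def row_value(workdept, edlevel, job):
    # Python tuples compare lexicographically, like SQL row values.
    return (workdept, edlevel, job) > ('E11', 12, 'CLERK')

rows = [('E11', 12, 'CLERK'), ('E11', 12, 'OPERATOR'),
        ('E11', 13, 'ANALYST'), ('E12', 1, 'ANALYST'),
        ('E10', 99, 'ZEBRA')]
assert all(expanded(*r) == row_value(*r) for r in rows)
```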

SSIS package works from SSMS but not from agent job

I have an SSIS package that loads an Excel file from a network drive. It's designed to load the content and then move the file to an archived folder.
Everything works fine when the following SQL statement runs in an SSMS window.
However, when it's copied into a SQL Agent job and executed from there, the file is neither loaded nor moved, yet the agent log shows "successful".
The same thing also happens with an "SSIS job" step instead of a T-SQL step, even with a proxy for a Windows account (the same account as the SSMS login).
Declare @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution] @package_name=N'SG_Excel.dtsx', @execution_id=@execution_id OUTPUT, @folder_name=N'ETL', @project_name=N'Report', @use32bitruntime=True, @reference_id=Null
Select @execution_id
DECLARE @var0 smallint = 1
EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var0
EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO
P.S. At first a relative path to the network drive was used, then I switched to an absolute path (\\server\folder). That did not solve the issue.
SSIS package jobs run under the context of the SQL Server Agent. What account is set up to run the SQL Server Agent on the SQL Server? It may need to run as a domain account that has access to the network share.
Or you can copy the Excel file to local folder on the SQL Server, so the Package can access the file there.
Personally I avoid the File System Task - I have found it unreliable. I would replace that with a Script Task, and use .NET methods from the System.IO namespace e.g. File.Move. These are way more reliable and have mature error handling.
Here's a starting point for the System.IO namespace:
https://msdn.microsoft.com/en-us/library/ms404278.aspx
Be sure to select the relevant .NET version using the Other Versions link.
When I have seen things like this in the past, it's been that my package wasn't accessing the path I thought it was at run time; it was looking somewhere else, finding an empty folder, and exiting with success.
SSIS can have a nasty habit of reverting to variable defaults. Could it be looking at a different path you used in dev? Maybe hard-code all path values as a test, or put in breakpoints and double-check the run-time values of all variables and parameters.
Other long shots may be:
Name resolution, are you sure the network name is resolving correctly at runtime?
32/64 bit issues. Dev tends to run 32 bit, live may be 64 bit. May interfere with file paths? Maybe force to 32 bit at run time?
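On the bitness long shot: most runtimes can report the bitness of the current process, which helps confirm what is actually running (the SSIS-side control is the use32bitruntime flag visible in the question's script). A Python illustration of the generic check:

```python
# Report the bitness of the current process: a pointer is 4 bytes in a
# 32-bit process and 8 bytes in a 64-bit one.
import struct

bits = struct.calcsize("P") * 8
assert bits in (32, 64)
print(f"running as a {bits}-bit process")
```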
The issue is that the SQL statements are missing the statement terminator (;).
Declare @execution_id bigint;
EXEC [SSISDB].[catalog].[create_execution] @package_name=N'SG_Excel.dtsx', @execution_id=@execution_id OUTPUT, @folder_name=N'ETL', @project_name=N'Report', @use32bitruntime=True, @reference_id=Null;
Select @execution_id;
DECLARE @var0 smallint = 1;
EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var0;
EXEC [SSISDB].[catalog].[start_execution] @execution_id;
GO
I have faced a similar issue with Service Broker.

DB2 force application failed to kill load job

We want to kill a running load job. I have executed db2 "force application (<agentid>)" and db2 force application all, but the load job is still not killed.
DB2 version is 10.5 and server is Linux.
:~> db2 list utilities
ID = 5
Type = LOAD
Database Name = qts
Member Number = 0
Description = [LOADID: 106.2015-10-17-08.37.11.389985.0 (65530;32770)] [9.63.33.62.39376.151017123551] OFFLINE LOAD ASC AUTOMATIC INDEXING INSERT COPY NO TCS.ASSETS
Start Time = 10/17/2015 08:37:11.641208
State = Executing
Invocation Type = User
Adding a bit of information.
Regarding "force application": yes, that's an asynchronous operation.
What happens is that DB2 sets a force flag on the target application handle (or EDU).
Depending on what the application handle (EDU) is doing, it may be forced right away, or the force may wait until the handle reaches a point where it checks the interrupt flag.
For example, an app handle in the middle of a rollback can't be forced.
There are many such conditions.
But in general, a load job should be forceable by 'db2 force application'.
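The check-the-flag behaviour described above can be sketched generically. A minimal Python illustration (not DB2 internals) of why a force request only takes effect when the worker reaches an interrupt-check point:

```python
# Sketch of a "force flag" mechanism: the force request only takes
# effect when the worker reaches a point where it checks the flag.
import threading

force_requested = threading.Event()
checkpoints_reached = []

def worker():
    for step in range(5):
        # Uninterruptible work for this step would happen here.
        checkpoints_reached.append(step)
        if force_requested.is_set():   # the interrupt-flag check point
            return                     # honour the force and stop

force_requested.set()                  # analogous to "force application"
t = threading.Thread(target=worker)
t.start()
t.join()
assert checkpoints_reached == [0]      # stopped at the first check point
```

If the worker were stuck inside an uninterruptible step (the rollback case above), the flag would sit unexamined and the force would appear to do nothing.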
If you want to know why your job wasn't killed, you may need to work with IBM DB2 support after collecting the following information:
$ db2pd -stack all
(Stack dump will be generated in db2dump directory.)
$ db2pd -latches
$ db2pd -edus
$ db2pd -apinfo -db <dbname>
$ db2pd -util
$ db2pd -db <dbname> -locks

How to resolve the "Error occurred while loading translation library" Linux ODBC connection issue?

After installing unixODBC and the Netezza drivers on a Linux client and configuring ~/.odbcinst.ini and ~/.odbc.ini data sources according to the documentation, attempting to connect to a Netezza PureData warehouse via some tools may yield an error similar to:
(Error) ('HY000', '[HY000] [unixODBC]Error occurred while loading translation library (45) (SQLDriverConnect)')
For example, this was output by the Python SQLAlchemy library via a DBAPI connection on a RHEL7 box (though it has been reported on other distributions and with other tools).
Does anyone know the detail of what is happening and how to properly resolve it?
Extra info: I had a similar issue with the exact same error message.
It turned out I had /etc/odbcinst.ini set up with UnicodeTranslationOption=utf8 for the [NetezzaSQL] driver.
The wrong (32-bit) driver was also being used.
Fixed with /etc/odbcinst.ini:
[NetezzaSQL]
Driver = /opt/netezza/lib64/libnzodbc.so
DebugLogging = true
LogPath = /tmp
Trace = 0
TraceAutoStop = 0
TraceFile = /tmp/trace.log
UsageCount = 1
One way to work around the issue is to add the following lines to the specific data-source section of your ~/.odbc.ini file:
TranslationDLL=
TranslationName=
TranslationOption=
I don't know what other implications doing that may have (for example on non-English error messages or using unusual character encodings).
Set UnicodeTranslationOption to UTF16 instead of UTF8 in your odbc.ini. Make sure the setting is present both in the odbc.ini reported by 'odbcinst -j' and in a spare copy in your home directory called just 'odbc.ini'; the latest Netezza libraries seem to read the configuration from the non-hidden 'odbc.ini' rather than '.odbc.ini'.
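For intuition on why the translation option must match the driver's actual wire format: the same string has entirely different byte representations in UTF-8 and UTF-16, so decoding with the wrong one yields garbage. A small Python illustration:

```python
# The same text encodes to different bytes in UTF-8 and UTF-16,
# so a mismatched translation setting misreads the data.
s = "Netezza"
utf8 = s.encode("utf-8")
utf16 = s.encode("utf-16-le")
assert utf8 != utf16

# Decoding UTF-8 bytes as if they were UTF-16 produces garbage:
garbled = utf8.decode("utf-16-le", errors="replace")
assert garbled != s
```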
You can also enable the Netezza ODBC debug log to see a more detailed error. In odbcinst.ini, set:
DebugLogging = true
