DB2 force application failed to kill load job - linux

We want to kill a running load job. I have executed db2 force application (<agentid>) and db2 force application all, but the load job is still not killed.
DB2 version is 10.5 and server is Linux.
:~> db2 list utilities
ID = 5
Type = LOAD
Database Name = qts
Member Number = 0
Description = [LOADID: 106.2015-10-17-08.37.11.389985.0 (65530;32770)] [9.63.33.62.39376.151017123551] OFFLINE LOAD ASC AUTOMATIC INDEXING INSERT COPY NO TCS.ASSETS
Start Time = 10/17/2015 08:37:11.641208
State = Executing
Invocation Type = User

Adding a bit of information.
Regarding "force application": yes, that is an asynchronous operation.
What happens is that DB2 sets a force flag on the target application handle (or EDU).
Depending on what the application handle (EDU) is doing, it is either forced right away or it waits until the handle reaches a point where it checks the interrupt flag.
For example, an application handle in the middle of a rollback cannot be forced.
There are many such conditions.
In general, though, a load job should be forceable with 'db2 force application'.
If you want to know why your job was not killed, you may need to check with IBM DB2 support, collecting the following information (a collection-script sketch follows the list).
$ db2pd -stack all
(Stack dumps will be generated in the db2dump directory.)
$ db2pd -latches
$ db2pd -edus
$ db2pd -apinfo -db <dbname>
$ db2pd -util
$ db2pd -db <dbname> -locks
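If you need to gather all of that in one pass before opening a ticket, a small wrapper script can help. This is a minimal sketch, not an IBM-supplied tool; the output directory naming and the <dbname> argument handling are my own assumptions, so adjust for your environment:
#!/usr/bin/env bash
# Sketch: collect the db2pd diagnostics listed above in one pass.
# Usage: ./collect_db2pd.sh <dbname>
set -eu
dbname=${1:?usage: $0 <dbname>}
outdir=db2pd_diag_$(date +%Y%m%d_%H%M%S)
mkdir -p "$outdir"
db2pd -stack all                            # stack dumps land in the db2dump directory
db2pd -latches              > "$outdir/latches.txt"
db2pd -edus                 > "$outdir/edus.txt"
db2pd -apinfo -db "$dbname" > "$outdir/apinfo.txt"
db2pd -util                 > "$outdir/util.txt"
db2pd -db "$dbname" -locks  > "$outdir/locks.txt"
echo "Diagnostics written to $outdir (stack dumps are in the db2dump directory)"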

Related

Maintain a session across multiple instances of app when called from same shell

I'm trying to have data (generated by an application only after its launch) persisted across multiple invocations of an application, but only when they're started from the same shell session.
One possible way to do that would be to pass the data back from the application to the calling shell, but since environment variable changes are only passed from parent to child, I don't know how to implement that.
Practical example:
There is a job command that creates a subdirectory named with the current datetime and does its work inside. Sometimes the job needs to be killed and restarted, so it needs the directory where it left off, like job --resume 21Jan_1849/data. I would like to save 21Jan_1849/data so I don't have to look it up and type it each time I need to resume the job. If I created something like .last_job and then wanted to restart a job in another session, it could resume the wrong (most recent) job, so plain files are not a solution (AFAIK).
How can this be done?
Since you're only trying to target Linux, there are a fair number of tricks available here. Consider this one:
#!/usr/bin/env bash
# Identify the current boot, so session files don't survive a reboot
current_boot_id=$(</proc/sys/kernel/random/boot_id)
# Honor myprog_shell_pid if set and valid; fall back to PPID otherwise
if [[ $myprog_shell_pid ]] && [[ -e /proc/$myprog_shell_pid/stat ]]; then
  parent_pid=$myprog_shell_pid
else
  parent_pid=$PPID
fi
# Field 22 of /proc/<pid>/stat is the process start time, which disambiguates reused PIDs
parent_start_time=$(awk '{print $22}' "/proc/$parent_pid/stat")
mkdir -p "$HOME/.cache/myscript-sessions"
data=$HOME/.cache/myscript-sessions/${current_boot_id}:${parent_pid}:${parent_start_time}
Now, we have a data file name that changes:
When we're rebooted (because current_boot_id is updated).
If we're run from a different shell (because our PPID changes).
If we're run from a different shell with the same PID (because the start time for the parent PID will be different).
...and you can easily delete files with the wrong boot id (because the system rebooted), or with names that refer to PID/start-time combinations that don't exist.
One caveat is that by default, this is sensitive to being called by subshells (output=$(./yourprog) will have a different PPID than ./yourprog will), but if the parent shell runs export myprog_shell_pid=$$, that issue goes away.
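To connect this back to the question, here is one way the $data file might be used, assuming the script above has already set $data and that the job command from the question exists; the function names are illustrative:
# Sketch: persist the last job directory per shell session via $data from above.
save_last_job() {   # call after starting a job, e.g. save_last_job 21Jan_1849/data
  printf '%s\n' "$1" > "$data"
}
resume_last_job() { # re-run the job with the directory recorded for this session
  [[ -r $data ]] || { echo "no job recorded for this shell session" >&2; return 1; }
  job --resume "$(<"$data")"
}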
You're crossing over into territory where you need a simple job management engine instead of just the shell. Using 'make' and writing Makefiles is probably the simplest way to set this up. You can write a rule that says how to turn a stage-1 file into a stage-2 file based on file extension, and then make will know how far things got and how to resume the next time you run it.

SSIS package works from SSMS but not from agent job

I have an SSIS package that loads an Excel file from a network drive. It's designed to load the content and then move the file to an archive folder.
Everything works fine when the following SQL statement is run in an SSMS window.
However, when it's copied into a SQL Agent job and executed from there, the file is neither loaded nor moved, yet the agent log shows "successful".
The same thing also happens with an "SSIS job" step instead of a T-SQL step, even with a proxy for a Windows account (the same account as the SSMS login).
Declare @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution] @package_name=N'SG_Excel.dtsx', @execution_id=@execution_id OUTPUT, @folder_name=N'ETL', @project_name=N'Report', @use32bitruntime=True, @reference_id=Null
Select @execution_id
DECLARE @var0 smallint = 1
EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var0
EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO
P.S. At first a relative path to the network drive was used; I then switched to an absolute UNC path (\\server\folder). That did not solve the issue.
SSIS package jobs run under the context of the SQL Server Agent. What account is set up to run the SQL Server Agent service on the SQL Server? It may need to run as a domain account that has access to the network share.
Or you can copy the Excel file to local folder on the SQL Server, so the Package can access the file there.
Personally I avoid the File System Task - I have found it unreliable. I would replace it with a Script Task and use .NET methods from the System.IO namespace, e.g. File.Move. These are far more reliable and have mature error handling.
Here's a starting point for the System.IO namespace:
https://msdn.microsoft.com/en-us/library/ms404278.aspx
Be sure to select the relevant .NET version using the Other Versions link.
When I have seen things like this in the past, it's been that my package wasn't accessing the path I thought it was at run time; it was looking somewhere else, finding an empty folder, and exiting with success.
SSIS can have a nasty habit of reverting to variable defaults. Could it be looking at a path you used in dev? Maybe hard-code all path values as a test, or put in breakpoints and double-check the run-time values of all variables and parameters.
Other long shots may be:
Name resolution: are you sure the network name resolves correctly at runtime?
32/64-bit issues: dev tends to run 32-bit, live may be 64-bit, which can interfere with file paths. Maybe force 32-bit at run time?
The issue is that the SQL statements are missing statement terminators (;). With terminators added:
Declare @execution_id bigint ;
EXEC [SSISDB].[catalog].[create_execution] @package_name=N'SG_Excel.dtsx', @execution_id=@execution_id OUTPUT, @folder_name=N'ETL', @project_name=N'Report', @use32bitruntime=True, @reference_id=Null ;
Select @execution_id ;
DECLARE @var0 smallint = 1 ;
EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var0 ;
EXEC [SSISDB].[catalog].[start_execution] @execution_id ;
GO
I have faced a similar issue with Service Broker.

Using VSPerf.exe instead of VSPerfCmd.exe

I would like to prepare a number of Visual Studio Profiler (VSP) reports using a batch script. On Windows 7 I used VSPerfCmd.exe in the following way:
VSPerfCmd /start:sample /output:%OUTPUT_FILE% /launch:%APP% /args:"..."
VSPerfCmd /shutdown
VSPerfCmd /shutdown waits until the application has finished executing, closes data collection, and only then generates the VSP report. This is what I need.
I switched to Windows Server 2012, and now VSPerfCmd does not work; I need to use VSPerf instead. The problem is that I cannot get the same behavior as with VSPerfCmd.
Specifically, the /shutdown option is no longer available, and the available options do not wait until the application has finished; they stop or detach from the process immediately. That means I can't use them in a batch script where I run several processes one after another. Any ideas how to get the desired behavior?
You don't have to shut down vsperf manually. You can simply do:
vsperf /launch:YourApp.exe
And vsperf will stop automatically after your application finishes.
See: https://msdn.microsoft.com/en-us/library/hh977161.aspx#BKMK_Windows_8_classic_applications_and_Windows_Server_2012_applications_only

What is the XDG_SESSION_COOKIE environment variable for?

I've been fighting with crontab recently because in Intrepid the gconftool uses a dbus backend, and that means that when used from crontab it doesn't work.
To make it work I have had to export the relevant environment variables when I log in so that it finds the dbus session address when the cron comes to run.
Out of curiosity I wondered what environment the cron could see, and it turns out all I have is HOME, LOGNAME, PATH, SHELL, CWD, and this new one on me: XDG_SESSION_COOKIE. This looks curious, and several googlings have thrown up a number of bugs and feature requests involving it, but nothing that tells me what it actually does.
My instinct is that this variable could be used to find all the stuff that I've had to export to the file that I source before the cron job runs.
My questions, therefore, are: a) can I? b) if so, how? and c) what (else) does it do?
Thanks all
This is very interesting. I found out that it is the display manager setting a cookie. That cookie can be used to register processes as belonging to a "session", which is managed by a daemon called ConsoleKit. This exists to support fast user switching. My KDE 4.2.1 system apparently supports it too.
Read this fedora wiki entry.
So this environment variable is like DBUS_SESSION_BUS_ADDRESS: it gives access to some entity (in the case of XDG_SESSION_COOKIE, a login session managed by ConsoleKit). For example, with that environment variable in place, you can ask the manager for your current session:
$ dbus-send --print-reply --system --type=method_call \
--dest=org.freedesktop.ConsoleKit \
/org/freedesktop/ConsoleKit/Manager \
org.freedesktop.ConsoleKit.Manager.GetCurrentSession
method return sender=:1.1 -> dest=:1.34 reply_serial=2
object path "/org/freedesktop/ConsoleKit/Session1"
$
The Manager also supports querying for the session that some process belongs to:
$ [...].Manager.GetSessionForUnixProcess uint32:4494
method return sender=:1.1 -> dest=:1.42 reply_serial=2
object path "/org/freedesktop/ConsoleKit/Session1"
However, it does not list or otherwise contain the variables a cron job would need. The documentation of dbus-launch does say that libdbus will automatically find the right D-Bus bus address: for example, files stored in /home/js/.dbus/session-bus contain the correct current D-Bus session addresses.
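Building on that last point, a cron job can recover the session bus address from those files itself before invoking a dbus-backed tool such as gconftool. This is a minimal sketch assuming a single file in ~/.dbus/session-bus containing a DBUS_SESSION_BUS_ADDRESS= line, which may not hold on every setup:
#!/usr/bin/env bash
# Sketch: export the session bus address inside a cron job.
# Assumes exactly one file (one machine-id/display pair) in ~/.dbus/session-bus.
session_file=$(ls "$HOME/.dbus/session-bus/" | head -n 1)
# The file holds shell-style assignments, including DBUS_SESSION_BUS_ADDRESS=...
eval "$(grep '^DBUS_SESSION_BUS_ADDRESS=' "$HOME/.dbus/session-bus/$session_file")"
export DBUS_SESSION_BUS_ADDRESS
# dbus-backed tools run from cron can now find the session bus, e.g.:
# gconftool-2 --get /desktop/gnome/background/picture_filename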

Tracing ODBC calls for Informix Client for Linux

I tried to trace ODBC function calls from my program running on Linux. The program dynamically links the ODBC driver manager, then connects to the database and fetches some data.
I can trace ODBC calls with unixODBC by adding to odbcinst.ini:
[ODBC]
Trace=yes
TraceFile=/tmp/sql.log
This method is documented by IBM: Collecting data for an ODBC Problem
But when I change the manager from unixODBC to Informix's own manager (libifdmr.so), the trace file is not created. Has anybody successfully obtained an ODBC trace from the Informix manager (and driver) on Linux?
Client version: CSDK 3.50UC3
I hope it is not a bug and that something is merely wrong with my config.
As for unixODBC: I cannot use it in a multithreaded app. I use a connection pool, and my app segfaulted when the disconnect happened on a different thread from the one that connected. It is also much slower in a multithreaded app.
If you run:
strings $INFORMIXDIR/lib/cli/libifdmr.so | grep _OdbcSetTrace
do you see any references? If not, then the library was built without the support functions. If you do see them, the mechanism outlined should work; if it doesn't, you probably have a reportable bug.
At what level are you trying to trace the issue? And, since unixODBC works, why not use the driver manager that does work?
I've taken the example distsel.c from $INFORMIXDIR/demo/cli and compiled it on Solaris 10 using CSDK 3.50.FC3. I got it to the point where the connection succeeds, but the table 'item' is missing in the database I'm using, so the program stops at SQLExecDirect(). When I run it under 'truss' (the Solaris equivalent of 'strace' on Linux), I see no evidence of the code even trying to open the trace file.
I compiled using:
gcc -I$INFORMIXDIR/incl/cli distsel.c -DNO_WIN32 \
-L$INFORMIXDIR/lib/cli -lifdmr -lifcli -o distsel
I used the following .odbc.ini file:
;
; odbc.ini
;
[ODBC Data Sources]
odbc_demo = IDS 11.50.FC3 stores on black
[ODBC]
Trace = yes
TraceFile = /tmp/odbc.trace
[odbc_demo]
Driver = /usr/informix/11.50.FC1/lib/cli/libifcli.so
Description = IBM Informix CLI 3.50
Server = black_19
FetchBufferSize = 99
UserName = jleffler
Password = XXXXXXXX
Database = stores
ServerOptions =
ConnectOptions =
Options =
ReadOnly = no
And this one:
;
; odbc.ini
;
[ODBC Data Sources]
odbc_demo = IDS 11.50.FC3 stores on black
[odbc_demo]
Driver = /usr/informix/11.50.FC1/lib/cli/libifcli.so
Description = IBM Informix CLI 3.50
Server = black_19
FetchBufferSize = 99
UserName = jleffler
Password = XXXXXXXX
Database = stores
ServerOptions =
ConnectOptions =
Options =
ReadOnly = no
Trace = yes
TraceFile = /tmp/odbc.trace
Consequently, I believe you have found a bug. I'm not sure whether the bug is in the FAQ you referenced or in the product - I'm inclined to think the latter. You should report the issue to IBM Technical Support. (I've not checked the Informix CLI (ODBC) manual; it might be worth checking it before trying to file a product bug. If the manual indicates that Trace doesn't work - and perhaps if it doesn't indicate that it does work - then the bug is in the FAQ page you referenced.)
If you are looking to see the SQL data, the SQLIDEBUG part of the FAQ works:
SQLIDEBUG=2:distsel ./distsel
That generated a file distsel_6004_0_102d40 for me - it will be different for you. You can then use the 'sqliprint' utility to see the data flowing between client and server.
If you cannot find 'sqliprint', get back to me.
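For completeness, the typical flow looks roughly like this; the trace file name is the example from above, and passing the file straight to sqliprint is my assumption about its usage, so check your local documentation:
# Sketch: capture and inspect an SQLI trace (the file name differs per run).
SQLIDEBUG=2:distsel ./distsel            # writes a trace file such as distsel_6004_0_102d40
sqliprint distsel_6004_0_102d40 | less   # decode the client/server traffic (assumed usage)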
I got an ODBC trace with these settings in my odbc.ini:
[ODBC]
TRACE=1
TRACEFILE=/tmp/odbc_trace.txt
TRACEDLL=idmrs09a.so
I copied them from the IBM Informix ODBC Driver Programmer's Manual, Version 3.50.
So the other IBM document appears to be wrong: these settings belong in odbc.ini rather than odbcinst.ini, and you must set TRACEDLL, which was not mentioned in the "Collecting data for an ODBC Problem" document.
UPDATE:
It seems IBM has changed the documentation: there is now info on TRACEDLL, but odbcinst.ini remains.
