I'm using EXEC CICS SYNCPOINT and EXEC CICS SYNCPOINT ROLLBACK to commit/back out updates to VSAM files and DB2 tables when an abend happens. However, only the updates to DB2 tables are backed out, not those to VSAM. Am I missing something? The CICS parameter RLS is set to RLS=NO.
It will depend on the type of files you are using. If you are using RLS files, then you have to define the files correctly with IDCAMS using the LOG parameter; see:
https://www.ibm.com/docs/en/zos/2.2.0?topic=cics-recoverable-nonrecoverable-data-sets
If you are using non-RLS files then you need to set the attributes correctly on your FILE definition.
See the following page in the CICS documentation, which describes file recovery:
https://www.ibm.com/docs/en/cics-ts/5.6?topic=resources-recovery-files
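For illustration only (the data set, file, and group names below are made up), a recoverable RLS data set gets its LOG attribute in the ICF catalog via IDCAMS, while a recoverable non-RLS file gets the RECOVERY attribute on its CICS FILE definition:

  DEFINE CLUSTER (NAME(MY.VSAM.KSDS) INDEXED LOG(UNDO))  /* other required parameters omitted; UNDO = backout-only */
  CEDA DEFINE FILE(MYFILE) GROUP(MYGRP) DSNAME(MY.VSAM.KSDS) RECOVERY(BACKOUTONLY)

With RLS=NO you are on the non-RLS path, so the RECOVERY attribute on the FILE definitions involved is the one to check.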
I want to create a job that uses the latest available file as its input.
The file name format is as below: FILE1.TEST.TYYMMDD
Is there any way to identify the latest file, based on the date present in the file name, via JCL?
P.S. GDG versions are not created in the existing process; only a PS file is created.
Thank you
I want to create a job that uses the latest available file as its input. The file [name] format is as below: FILE1.TEST.TYYMMDD. Is there any way to identify the latest file based on the date present in the file name via JCL?
No.
You indicate that GDGs are not created in the existing process. GDGs would be the best way to accomplish your goal. Absent GDGs, you must write code.
You could accomplish your goal by writing (C, CLIST, COBOL, PL/I, REXX) code using the LMDINIT and LMDLIST ISPF services. Then you would execute your code by running ISPF in batch. Many mainframe shops have a cataloged procedure to execute ISPF in batch.
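As a rough illustration (the level pattern and variable names are only examples, and the simple string comparison assumes the TYYMMDD dates all fall in the same century), a REXX exec driving those two services could look like this:

/* REXX - list FILE1.TEST.T* data sets and keep the highest-sorting name */
ADDRESS ISPEXEC
"LMDINIT LISTID(LID) LEVEL(FILE1.TEST.T*)"
latest = ''
DO FOREVER
  "LMDLIST LISTID("lid") OPTION(LIST) DATASET(DSN)"
  IF rc <> 0 THEN LEAVE                  /* RC=8 signals the end of the list */
  IF dsn > latest THEN latest = dsn      /* TYYMMDD sorts chronologically as text */
END
"LMDLIST LISTID("lid") OPTION(FREE)"
"LMDFREE LISTID("lid")"
SAY 'Latest data set:' latest

The selected name could then be handed back to the JCL, for example by writing it to a data set that a later step reads, or by building and submitting the follow-on job from the exec.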
I agree with #cschneid that there is no built-in platform way to handle this. However, I want to point out that GDGs are the platform's way of managing PS files for access in relative form.
Your comment
GDG versions are not created in the existing process; only a PS file is created.
That statement didn't make sense to me. A GDG is not a file type like physical sequential (PS) or partitioned (PO); it's a convention that allows relative reference to files created over time, which sounds like exactly what you want. I've only seen GDGs used for PS files.
Putting the date in the file name can have its uses, but to z/OS it's only part of the file name, not metadata that the system operates on (like the GnnnnV00 generation suffixes in GDGs).
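For comparison, if the creating process could be changed, defining a GDG base is a one-time IDCAMS step (the name and limit below are illustrative), after which no date parsing is needed at all:

//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GENERATIONDATAGROUP (NAME(FILE1.TEST.DAILY) LIMIT(10) NOEMPTY SCRATCH)
/*

The writing job would then catalog each new file as DSN=FILE1.TEST.DAILY(+1), and the reading job would pick up the latest generation with DSN=FILE1.TEST.DAILY(0).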
I need to use db-migrate to add an index to a Postgres database with CREATE INDEX CONCURRENTLY. However, db-migrate wraps all migrations in a transaction by default, and trying to create a concurrent index inside a transaction results in this error:
CREATE INDEX CONCURRENTLY cannot run inside a transaction block
I can't find any way to disable transactions as part of the db-migrate options, either CLI options or (preferably) as a configuration directive on the migration itself. Any idea if this can be accomplished?
It turns out that this can be solved on the command line by using --non-transactional. Reading the source, I can see that this sets an internal flag called notransactions, but it's not clear to me whether this can be set as part of the migration configuration or must be passed on the command line.
I kept getting the errors even when running with the --non-transactional flag.
The solution for me was to run with --non-transactional AND to put each CREATE INDEX CONCURRENTLY statement in its own separate migration file. It turns out you can't have more than one of them in the same file (same transaction block).
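To illustrate that layout (the index, table, and file names here are invented; db.runSql is db-migrate's raw-SQL helper), each concurrent index gets a migration file of its own, run with --non-transactional:

// migrations/20230101000000-add-orders-customer-idx.js
'use strict';

exports.up = function (db) {
  // run with: db-migrate up --non-transactional
  return db.runSql(
    'CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);'
  );
};

exports.down = function (db) {
  return db.runSql('DROP INDEX CONCURRENTLY IF EXISTS idx_orders_customer_id;');
};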
Following the inputs from the forum thread below, system properties were specified and customized names for the DATABASECHANGELOG and DATABASECHANGELOGLOCK tables were set up and used (on liquibase update execution).
http://forum.liquibase.org/topic/configurable-databasechangelog-table-name
Liquibase version: 3.5.1, Database: Oracle 12c, OS: Red Hat Linux
But on subsequent attempts to execute further liquibase updates (against the same database schema), the execution fails because it tries to recreate the customized DATABASECHANGELOG table, failing with an "object name already in use" error. This does not happen when using the standard liquibase control table names (i.e. DATABASECHANGELOG and DATABASECHANGELOGLOCK).
Is there an option to skip recreation of the customized liquibase control tables, or another fix for this issue?
Why set these as system properties?
You can invoke liquibase like the following (or define these arguments in the properties file):
liquibase <regular arguments> --liquibaseSchemaName=YOUR_SCHEMA \
--databaseChangeLogTableName=YOUR_DBCHANGELOG \
--databaseChangeLogLockTableName=YOUR_DBCHANGELOGLOCK ....
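The same settings can also live in liquibase.properties instead of the command line, using the same parameter names (the values below are placeholders):

liquibaseSchemaName: YOUR_SCHEMA
databaseChangeLogTableName: YOUR_DBCHANGELOG
databaseChangeLogLockTableName: YOUR_DBCHANGELOGLOCK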
It works fine for us (liquibase 3.5.1, Oracle 12c).
Thanks
The issue was due to the case-sensitivity of the custom table names when executing against Oracle databases. Weird error, but it's resolved by specifying the values in upper case (via system properties/command line).
Does MemSQL support user variables in the LOAD DATA command, similar to MySQL (see MySQL load NULL values from CSV data for examples)? The MemSQL documentation (https://docs.memsql.com/docs/load-data) doesn't give a clue, and my attempts at using user variables have failed.
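For context, the MySQL pattern being referred to looks roughly like this (the table, column, and file names are only illustrative): a user variable captures the raw field, and a SET clause maps empty strings to NULL.

LOAD DATA INFILE 'data.csv'
INTO TABLE t
FIELDS TERMINATED BY ','
(col1, @v2)
SET col2 = NULLIF(@v2, '');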
No, variables in LOAD DATA are not currently supported in general (as of MemSQL 5.5). This is a feature we are tracking for a future release.
We only support the following syntax to skip the contents of a column in the file using a dummy variable (briefly mentioned in the docs https://docs.memsql.com/docs/load-data):
load data infile 'foo.tsv' into table foo (bar, #, #, baz);
I'm implementing NFS and am almost done, but the RFC, section 3.3.8, says this in its description:
mode
One of UNCHECKED, GUARDED, and EXCLUSIVE. UNCHECKED
means that the file should be created without checking
for the existence of a duplicate file in the same
directory. In this case, how.obj_attributes is a sattr3
describing the initial attributes for the file. GUARDED
specifies that the server should check for the presence
of a duplicate file before performing the create and
should fail the request with NFS3ERR_EXIST if a
duplicate file exists. If the file does not exist, the
request is performed as described for UNCHECKED.
EXCLUSIVE specifies that the server is to follow
exclusive creation semantics, using the verifier to
ensure exclusive creation of the target. No attributes
may be provided in this case, since the server may use
the target file metadata to store the createverf3
verifier.
So the question: if UNCHECKED is the mode, should I just set the length of the file to zero, or should I leave the file as it is? And if it's a directory, should I remove all of its contents?
I believe the idea of CREATE with UNCHECKED is to apply the semantics of the good old Unix system call creat -- so, truncation of a file's existing contents (if any) is implied. However, I cannot find this specified all that clearly in the docs (!).
Trying to CREATE an existing directory is an error in any case -- there's a separate MKDIR for that (in NFS 3, the same applies to special files, with MKNOD -- CREATE is now for regular, normal, plain good old files only!-)
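To make the three modes concrete, here is a simplified server-side sketch (not taken from any real implementation; attribute handling, the exclusive-create verifier, and error mapping are all omitted) of how they roughly correspond to POSIX open() flags:

#include <fcntl.h>

/* createmode3 values as defined in RFC 1813 */
enum createmode3 { UNCHECKED = 0, GUARDED = 1, EXCLUSIVE = 2 };

/* Map a CREATE mode onto open() flags -- illustration only */
int open_flags_for_create(enum createmode3 mode)
{
    switch (mode) {
    case UNCHECKED:
        /* create if missing; an existing regular file is not an error
           (the client's initial size attribute, if zero, then truncates it) */
        return O_WRONLY | O_CREAT;
    case GUARDED:
        /* an already-existing target must fail with NFS3ERR_EXIST */
        return O_WRONLY | O_CREAT | O_EXCL;
    case EXCLUSIVE:
        /* like GUARDED, plus storing and checking the createverf3 verifier */
        return O_WRONLY | O_CREAT | O_EXCL;
    }
    return -1;
}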