DB2 z/OS Mainframe - Disable Archive Logs

I'm working with DB2 for z/OS Version 10 on a data masking project. For this project I am executing over 100k DML statements (DELETE, UPDATE, INSERT).
So I need to disable the transaction logs before the whole SCRAMBLE PROCESS starts.
In DB2 for i (iSeries/AS400) I already handled the same issue by calling a procedure that disables transaction logging.
Likewise, I need to do the same in DB2 for z/OS.

You can use the NOT LOGGED attribute for all affected table spaces; it specifies that changes made to data in the specified table space are not recorded in the DB2 log.
Take the following steps for your data masking process (a sketch of the full sequence follows below):
Take an image copy so you can recover
ALTER TABLESPACE database-name.table-space-name NOT LOGGED
Execute data masking process
ALTER TABLESPACE database-name.table-space-name LOGGED
Take an image copy to establish a recovery point
You will also probably want to lock all tables with exclusive access so that, if you have to recover, no one else is affected by your changes
N.B. Make sure you're aware of the recovery implications for objects that are not logged!!!
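For reference, here is a minimal Python sketch of that sequence, assuming the ibm_db driver and DB2 Connect (DRDA) access to the z/OS subsystem. The connection string, the table space name and the run_scramble_process() helper are hypothetical placeholders, and the image copies before and after are still taken separately with the COPY utility:

    import ibm_db

    TS = "MASKDB.MASKTS01"  # hypothetical database.table-space pair

    def run_scramble_process(conn):
        # Hypothetical placeholder for the masking DML (DELETE/UPDATE/INSERT).
        ibm_db.exec_immediate(conn, "UPDATE MASKDB.CUSTOMER SET NAME = 'XXXX'")

    # Hypothetical DRDA connection string to the z/OS subsystem.
    conn = ibm_db.connect(
        "DATABASE=DB2LOC;HOSTNAME=zoshost.example.com;PORT=446;"
        "PROTOCOL=TCPIP;UID=masker;PWD=...;", "", "")

    # Image copy taken beforehand (COPY utility); then switch logging off.
    ibm_db.exec_immediate(conn, "ALTER TABLESPACE " + TS + " NOT LOGGED")
    try:
        run_scramble_process(conn)
    finally:
        # Re-enable logging even if the masking fails part-way through.
        ibm_db.exec_immediate(conn, "ALTER TABLESPACE " + TS + " LOGGED")
        ibm_db.close(conn)
    # Take the closing image copy (COPY utility) to establish the recovery point.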

Related

WAL-enabled SQLite database blocking user on read

Here is the background:
I am currently running an ETL process on a Linux server (CentOS 8) which also hosts applications that read from a local SQLite database.
At certain times when the ETL is running and writing to the SQLite database, applications are also reading from the database.
In order to avoid database locking when the SQLite database is in use by the applications, I have enabled WAL on the SQLite database so that the ETL may write to the database while applications are in use.
However, there is now an issue whereby the ETL process is unable to query the database after the connection has been established. I have logged the following information when this occurs:
The 'shinysuite' user runs the ETL process.
The 'shiny' user runs the applications.
According to the admin, these users belong to the same group.
Output from /etc/groups
First, I do not understand why the 'shiny' user owns the -wal file even though it only reads.
Second, I do not understand why the ETL process ('shinysuite') would be unable to read from the -wal file even though it does not own the file.
What could be the problem here?
First, I do not understand why the 'shiny' user owns the -wal file even though it only reads.
When reading from a WAL-mode sqlite3 database, the helper -wal and -shm files are created if they don't already exist.
They're owned by the shiny user and belong to the shiny group, but shinysuite is not a member of that group, so it doesn't have permission to use the files. If the application run by shiny creates those files with the shinysuite group instead of shiny, it should work: if it's a binary executable, one way is to change the group of the file with chgrp(1) and then make it set-gid with chmod g+s shinyapp. Alternatively, just add shinysuite to the shiny group.
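To see this in action, here is a small Python sketch (the database path is a hypothetical placeholder) that runs a read-only query against a WAL-mode database and then prints the owner, group and mode of the side files it had to create or open:

    import os
    import sqlite3

    DB = "/srv/data/app.db"  # hypothetical path to the WAL-mode database

    # Even a pure SELECT against a WAL-mode database needs the -wal and -shm
    # side files; the first process to touch the database after a clean
    # shutdown creates them, owned by whichever user that process runs as.
    conn = sqlite3.connect(DB)
    conn.execute("SELECT count(*) FROM sqlite_master").fetchone()

    for suffix in ("", "-wal", "-shm"):
        path = DB + suffix
        if os.path.exists(path):
            st = os.stat(path)
            print(path, "uid:", st.st_uid, "gid:", st.st_gid,
                  "mode:", oct(st.st_mode & 0o777))

    conn.close()

Running that once as shiny and once as shinysuite should make the ownership and permission mismatch obvious.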

How to perform backup and restore of Janusgraph database which is backed by Apache Cassandra?

I'm having trouble figuring out how to take a backup of a JanusGraph database which is backed by Apache Cassandra as its persistent storage.
I'm looking for the correct methodology for performing backup and restore tasks. I'm very new to this concept and have no idea how to do it. It would be highly appreciated if someone could explain the correct approach or point me to the right documentation so I can safely execute the tasks.
Thanks a lot for your time.
Cassandra can be backed up in a few ways. One way is called a "snapshot", which you issue via the "nodetool snapshot" command. Cassandra creates a "snapshots" sub-directory, if it doesn't already exist, under each table that is being backed up (each table has its own directory where it stores its data) and then creates a specific directory for this particular occurrence of the snapshot (you can either name it with a "nodetool snapshot" parameter or let it default). Cassandra then creates hard links to all of the SSTables that exist for each table, looping through the tables and keyspaces selected by your "nodetool snapshot" parameters. It is very fast, since creating the links takes almost no time.
You have to run this command on each node in the Cassandra cluster to back up all of the data; each node's data is backed up to the local host. I know DSE, and possibly Apache Cassandra, are adding functionality to back up to object storage as well (I don't know whether this is an OpsCenter-only capability or whether it can be done via the snapshot command too). You will also have to watch the space consumption, as there is no process that cleans old snapshots up.
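If you want to script that, a minimal sketch that has to be run on every node could look roughly like this (the keyspace name and snapshot tag are placeholders):

    import subprocess
    from datetime import datetime, timezone

    KEYSPACE = "janusgraph"  # hypothetical keyspace used by JanusGraph
    tag = "backup-" + datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")

    # Creates snapshots/<tag>/ link directories under every table in the
    # keyspace on *this* node only; repeat on each node in the cluster.
    subprocess.run(["nodetool", "snapshot", "-t", tag, KEYSPACE], check=True)

    # Snapshots are never removed automatically; clear old ones explicitly, e.g.
    # subprocess.run(["nodetool", "clearsnapshot", "-t", "old-tag", KEYSPACE], check=True)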
Like many database systems, you can also purchase/use third-party software to perform backups (e.g. Cohesity (formerly Talena), Rubrik, etc.). We use one such product in our environments and it works well (graphical interface, easy-to-use point-in-time recovery, etc.). They also offer easy-to-use "refresh" capabilities (e.g. refreshing your PT environment from, say, production backups).
Those are probably the two best options.
Good luck.

How to Update Stats and Rebuild Indexes in Azure Geo-Replicated Database

I have a Geo-Replicated Azure SQL Database which has some serious index fragmentation and outdated statistics.
An attempt to REORGANIZE or REBUILD an index, or to UPDATE STATISTICS, results in the message "Failed to update database xxx because the database is read-only."; however, a quick check against sys.databases shows that the database is in fact not in READ_ONLY mode.
Understandably, Azure manages the database as it is a geo-replicated copy, so my question is: if I request that index and statistics updates are implemented on the primary (MASTER) copy, will my replicated copy receive the same, or is there a way to update my replicated copy alone?
All statements you run on the primary database to rebuild indexes and maintain statistics will also be executed on the secondary; the geo-secondary itself is read-only, so you cannot run the maintenance there directly. See the Azure SQL active geo-replication documentation for more information.
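So the maintenance simply has to be pointed at the primary. A minimal sketch, assuming pyodbc and a hypothetical connection string and table name, looks like this; the geo-secondary picks the changes up through replication:

    import pyodbc

    # Hypothetical connection string -- it must point at the PRIMARY database;
    # the read-only geo-secondary receives the changes via replication.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=tcp:primary-server.database.windows.net,1433;"
        "Database=MyDb;Uid=maintenance_user;Pwd=...;Encrypt=yes;",
        autocommit=True,
    )
    cur = conn.cursor()
    cur.execute("ALTER INDEX ALL ON dbo.MyTable REBUILD")   # hypothetical table
    cur.execute("UPDATE STATISTICS dbo.MyTable")
    conn.close()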

deleting local modifications when replicating couchdb

I have a master couchdb, which is replicated to a local db every time the local application starts.
The user can modify the local docs, but I want these docs to be deleted when the replication starts if they have disappeared from the master db.
How can I achieve that?
This is already how replication works. When a document is modified (including deletion), that change gets replicated.
The only possible problem you may encounter is that if a local change is made at the same time a deletion occurs, then upon sync, there will be a conflict.
So you need your local app to do some sort of conflict resolution, which selects the deleted revision. I suggest reading about the CouchDB Replication and Conflict Model as a starting place.
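As an illustration, here is a rough Python sketch of that resolution step, assuming the local CouchDB is reachable over HTTP via requests (the URL, database name and credentials are placeholders):

    import requests

    BASE = "http://localhost:5984/localdb"   # hypothetical local database URL
    AUTH = ("admin", "secret")               # hypothetical credentials

    def resolve_in_favour_of_deletion(doc_id):
        """If a deletion replicated from the master conflicts with a local
        edit, delete every surviving leaf revision so the deletion wins."""
        resp = requests.get(f"{BASE}/{doc_id}",
                            params={"conflicts": "true",
                                    "deleted_conflicts": "true"},
                            auth=AUTH)
        if resp.status_code == 404:      # already deleted everywhere
            return
        doc = resp.json()
        if not doc.get("_deleted_conflicts") and not doc.get("_conflicts"):
            return                       # nothing to resolve
        # Remove the current winner plus any other live conflict revisions.
        for rev in [doc["_rev"]] + doc.get("_conflicts", []):
            requests.delete(f"{BASE}/{doc_id}", params={"rev": rev}, auth=AUTH)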

Can I check via script if my NotesView is corrupt?

Last week I worked on an incident where it was stated that a Java agent caused a server to terminate. The first diagnosis was that objects were probably not being recycled sufficiently.
After some testing (and bringing down the server multiple times :-) ) I noticed in my Notes client that the view was corrupt.
I could have avoided this if I had been able to check whether a view is OK or not.
For a database I can check if it exists.
For a view I can check if it exists.
But can I also check whether a view is in good condition or not? Or is only a client (Notes, Admin) capable of doing this?
I wish there were a programmatic way, Patrick. The fixup task (load fixup -C) is one of the surefire ways to get details of corruption, but it's not helpful to you in this scenario.
