How to Update Stats and Rebuild Indexes in Azure Geo-Replicated Database

I have a Geo-Replicated Azure SQL Database which has some serious index fragmentation and outdated statistics.
An attempt to REORGANIZE or REBUILD an index, or to UPDATE STATISTICS, results in the message "Failed to update database xxx because the database is read-only." However, a quick check against sys.databases shows that the database is in fact not in READ_ONLY mode.
Understandably, Azure manages the database because it is a geo-replicated copy. My question is: if I run index and statistics maintenance on the MASTER copy, will my replicated copy receive the same changes, or is there a way to update my replicated copy alone?
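The sys.databases check mentioned above can be run with something like the following (xxx stands in for the actual database name):

-- is_read_only = 0 means the database option is not set to READ_ONLY,
-- even though writes against a geo-replicated secondary still fail.
SELECT name, is_read_only
FROM sys.databases
WHERE name = 'xxx';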

All statements you run on the primary database to rebuild indexes and maintain statistics will also be executed on the geo-replicated secondary. For more information, see the Azure SQL Database active geo-replication documentation.
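For example, routine maintenance like the following, run against the primary, is replayed on the secondary (dbo.Orders and IX_Orders_CustomerId are placeholder names, not objects from the question):

-- Run on the PRIMARY database; the changes are propagated to the read-only secondary.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;   -- or REORGANIZE for lighter-weight maintenance
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;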

Related

How to perform backup and restore of Janusgraph database which is backed by Apache Cassandra?

I'm having trouble figuring out how to take a backup of a JanusGraph database which is backed by Apache Cassandra as persistent storage.
I'm looking for the correct methodology for performing backup and restore tasks. I'm very new to this concept and have no idea how to do it. It would be highly appreciated if someone could explain the correct approach or point me to the right documentation to safely execute these tasks.
Thanks a lot for your time.
Cassandra can be backed up a few ways. One way is called a "snapshot", which you issue via the "nodetool snapshot" command. Cassandra will create a "snapshots" sub-directory, if it doesn't already exist, under each table that's being backed up (each table has its own directory where it stores its data), and then create a specific snapshot directory for this particular occurrence of the snapshot (you can either name the directory via a "nodetool snapshot" parameter or let it default). Cassandra then creates soft links to all of the sstables that exist for that particular table, looping through each table, keyspace or database depending on your "nodetool snapshot" parameters. It's very fast, as creating soft links takes almost no time.

You will have to perform this command on each node in the Cassandra cluster to back up all of the data, and each node's data will be backed up to the local host. I know DSE, and possibly Apache, are adding functionality to back up to object storage as well (I don't know if this is an OpsCenter-only capability or if it can be done via the snapshot command). You will also have to watch the space consumption, as there are no processes to clean these snapshots up.
Like many database systems, you can also purchase/use third-party software to perform backups (e.g. Cohesity (formerly Talena), Rubrik, etc.). We use one such product in our environments and it works well (graphical interface, easy-to-use point-in-time recovery, etc.). They also offer easy-to-use "refresh" capabilities (e.g. refresh your PT environment from, say, production backups).
Those are probably the two best options.
Good luck.

Cloning Couch DB data from one server to another through file systems (without replicator)

We have two nodes with CouchDB installed. One of the nodes has data on it, and we want to copy the data from that instance to another CouchDB instance. We want to avoid the replicator due to the volume of the data.
We tried copying data from %couchdb%/data/shards and %couchdb%/data/.shards to the corresponding locations on the target node, as per one of the suggestions from CouchDB backups and cloning the database,
but we are not able to see the data in the target server's Fauxton UI. Can someone suggest what is missing?
Couchtransform lets you convert or just clone data from one db to another; it's multi-threaded, and you won't need to deal with massive files.

DB2 ZOS Mainframe - Archive Logs Disable

I'm working with DB2 for z/OS Version 10 on a data masking project. For this project I have been executing over 100k DML statements (DELETE, UPDATE, INSERT).
So I need to disable transaction logging before the whole SCRAMBLE process starts.
In DB2 for iSeries (AS400), I already handled the same issue by calling a procedure that disables transaction logging.
Likewise, I need to do the same in DB2 for z/OS.
You can use the NOT LOGGED attribute for all affected tablespaces; it specifies that changes made to data in the specified tablespace are not recorded in the DB2 log.
Take the following steps for your data masking process (a SQL sketch of these steps follows the list):
Take an image copy so you can recover
ALTER TABLESPACE database-name.table-space-name NOT LOGGED
Execute data masking process
ALTER TABLESPACE database-name.table-space-name LOGGED
Take an image copy to establish a recovery point
You will also probably want to lock all tables with exclusive access so that, if you have to recover, no one else is affected by your changes.
N.B. Make sure you're aware of the recovery implications for objects that are not logged!
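A minimal sketch of the SQL side of those steps, assuming a hypothetical tablespace MYDB.MYTS and table MYSCHEMA.MYTABLE (the image copies themselves are taken with the COPY utility, not with SQL):

-- Take a full image copy with the COPY utility before this point.
ALTER TABLESPACE MYDB.MYTS NOT LOGGED;            -- stop recording changes in the DB2 log

LOCK TABLE MYSCHEMA.MYTABLE IN EXCLUSIVE MODE;    -- optional: keep other work out while logging is off

-- ... run the data masking / scramble DML here ...

ALTER TABLESPACE MYDB.MYTS LOGGED;                -- re-enable logging
-- Take another full image copy to establish a recovery point.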

Can I delete data from my local pouchDB without the delete replicating to couchDB

Is there a way I can clear down the data in my local PouchDB without the changes being replicated to the online CouchDB?
I am currently using the db.sync function with live: true.
The context for this is that I have lots of users entering orders in an offline-first environment, and I would like to clear down the data every few days to keep the application quick, but not lose the orders from CouchDB.
Unfortunately not. There is a long-running open issue for purge (https://github.com/pouchdb/pouchdb/issues/802) which would do what you want, but it has not been implemented yet.
What is your use case: are you doing a two-way sync and seeing remote updates locally, or are you only doing push replication to send the orders? One way to work around this is to periodically create a fresh local database that only contains the orders you care about.

How to shrink Azure SQL server DB (18MB of data charged for 5GB of server space already)

I have a problem with an Azure SQL DB.
The bacpac export of the DB is only 18MB, but the billed DB size on the server already exceeds 5GB.
Is there any way to see the actual size of the data?
Is there any way to move the DB to the simple recovery model?
Or is there any other way to shrink the log files?
Or should I just drop the database and restore from backup?
The problem was caused by fragmented indexes.
You can find good scripts for fixing those here:
http://blogs.msdn.com/b/dilkushp/archive/2013/07/28/fragmentation-in-sql-azure.aspx
After running the scripts (and waiting about 24 hours), the size of the DB went back to 300MB.
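For reference, a minimal check of the kind those scripts automate (an illustrative sketch, not taken from the linked post):

-- Average fragmentation per index in the current database; ~30% is a common rebuild threshold.
SELECT
    OBJECT_NAME(ips.object_id) AS table_name,
    i.name AS index_name,
    ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;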
The bacpac file is going to be significantly smaller than the DB, as it's a compressed version of the data. I believe it also strips out things like index content and stores only the index definitions, which are re-indexed on restore, so one shouldn't be indicative of the other.
For example, I have a database on SQL Azure configured as a 10GB Premium DB that is currently using 2.7GB, and it exports to a BACPAC of about 300MB.
What kind of database have you configured?
What Edition, Size and Usage settings are you currently being shown?
Edit: the image wasn't loading, so here's the external link - http://i.snag.gy/JfsPk.jpg
The next thing to check is the size breakdown in the database by table/object.
Connect to your Azure environment with Management Studio and run the following query, which will give a per-table breakdown of the database with sizes in MB.
-- Reserved space per object, in MB (reserved pages are 8 KB each)
select
    o.name,
    sum(p.reserved_page_count) * 8.0 / 1024 as size_mb
from
    sys.dm_db_partition_stats as p
    join sys.objects as o
        on p.object_id = o.object_id
group by o.name
