Offline DB Restore on a running Edge server - Perforce

We have a situation where we want to re-seed the edge offline database from a checkpoint of the live DB. I understand there will be downtime as a dump of the online edge database will have to be performed.
Can someone help me with the commands "how to dump the database/tables" and "how to restore the offline database from the dumped file"?
FYI: we are running the SDP 2017 version.
Regards,
Manoj
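
A minimal sketch of the usual approach with the stock p4d flags, assuming the standard SDP layout for instance 1 (/p4/1/root for the live database, /p4/1/offline_db for the offline copy, /p4/1/checkpoints for checkpoints; adjust the instance number and paths to your site). p4d -jd checkpoints the live database, locking the db.* tables while it runs (this is the downtime window), and p4d -jr replays that checkpoint into the offline root:

# Checkpoint the live edge database; -z compresses the output (adds a .gz suffix).
p4d -r /p4/1/root -z -jd /p4/1/checkpoints/edge_reseed.ckp

# Clear the stale offline copy, then replay the checkpoint into it.
rm -f /p4/1/offline_db/db.*
p4d -r /p4/1/offline_db -z -jr /p4/1/checkpoints/edge_reseed.ckp.gz

The SDP also ships helper scripts that wrap this same checkpoint-and-replay cycle (live_checkpoint.sh, in the layouts I have seen); check /p4/common/bin on your server before scripting it by hand.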

Related

Cosmos DB Emulator for Linux Docker “This is an evaluation version. There are [164] days left in the evaluation period”

I’m running the Cosmos DB Emulator for Linux Docker image.
I’ve noticed the following message when the container starts running:
“This is an evaluation version. There are [164] days left in the evaluation period”
Does this mean that after 164 days Cosmos DB Emulator will stop working? What can be done to extend the period or to replace the evaluation version with an image that does not expire?
I couldn’t find any information regarding this.
And one more question: how can I migrate a complete Cosmos DB database from Azure to the Cosmos DB Emulator for Linux, running in Docker on my local macOS?
I saw that there is a Data Migration Tool for this that runs on Windows, but is there a Data Migration Tool for Linux/macOS, or is there another way to copy a Cosmos DB database from Azure to the Emulator?
Is there a service on Azure that can do that?
Can the Data Migration Tool be built on macOS? If yes, is there documentation on how to do that?
Also, is there a way to send commands to the Cosmos DB Emulator for Linux running in Docker (similar to what can be done on Windows: https://learn.microsoft.com/en-us/azure/cosmos-db/emulator-command-line-parameters)? It would be nice if I could use GetStatus to check whether the Cosmos DB Emulator actually started.
To answer your first question about the message "This is an evaluation version. There are [N] days left in the evaluation period" - I asked the Cosmos DB team at Microsoft and they said that the emulator will continue to work after the [N] days. Apparently the counter should just reset back to 180 days and then count down again.
I don't think it is possible to send commands to a running Cosmos DB Emulator for Linux. To see if it has started, you could just make a call to it; e.g., if you're using a client, you could call https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.cosmos.cosmosclient.readaccountasync and check that it doesn't throw.
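As a stand-in for GetStatus, a minimal readiness probe, assuming the default port mapping of 8081: the Linux emulator serves its self-signed certificate at /_explorer/emulator.pem without requiring an auth key, so polling that endpoint until it answers tells you the emulator is up.

# Poll until the emulator responds. -k skips TLS verification (self-signed cert),
# -s silences progress output, -f makes curl fail on HTTP errors.
until curl -ksf https://localhost:8081/_explorer/emulator.pem > /dev/null; do
  echo "waiting for Cosmos DB Emulator..."
  sleep 2
done
echo "Cosmos DB Emulator is up"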

How to save tables permanently in VoltDB Community Edition?

I am using VoltDB Community Edition and creating tables in the VoltDB database. Once I restart the server, all tables stored in the database are deleted. How can I save the tables permanently in the VoltDB database using the command line?
VoltDB Community Edition does not include any durability features. Those are found in the Enterprise and Pro editions, available at voltdb.com.
Disclaimer: I work for VoltDB.
Just shut down the server with the command:
voltadmin shutdown --save
This will create a snapshot that the system uses to restore the database on restart. I am also using the Community Edition; I have tried it and it seems to work. Hope it's still useful :)
References are here: Official Documentation
Disclaimer: I don't work for VoltDB and I'm not trying to sell my product.
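For completeness, a minimal sketch of the full save-and-restore cycle (the start command varies by release; older versions used voltdb recover instead of voltdb start, so check the documentation for yours):

# Shut down cleanly, writing a final snapshot to the configured snapshot path.
voltadmin shutdown --save

# On restart, VoltDB reloads the schema and data from that snapshot.
voltdb start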

Script to incrementally back up MySQL (Workbench) in Linux

I have a question about how to make incremental backups of MySQL (which I manage with MySQL Workbench) on Linux.
I want to back up every day, keeping incrementals as difference files.
Can anyone give me a sample script for that?
Thanks,
Veasna.
The binary log (mysql-bin.log) is essentially an incremental backup. It allows you to revert to a previously stable database state.
See "Making Incremental Backups by Enabling the Binary Log": http://dev.mysql.com/doc/mysql-backup-excerpt/5.0/en/backup-policy.html
and the general overview of backup methods: http://dev.mysql.com/doc/refman/5.6/en/backup-methods.html
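A minimal sketch of a backup script along those lines, assuming binary logging is already enabled (log_bin in my.cnf) and treating the user, password, and paths as placeholders:

#!/bin/bash
# Full dump (run e.g. weekly). --flush-logs rotates the binary log so later
# binlogs hold only changes made after this dump; --single-transaction gives
# a consistent InnoDB snapshot without locking tables.
mysqldump --user=backup --password=secret --all-databases \
  --single-transaction --flush-logs --master-data=2 \
  > /backups/full-$(date +%F).sql

# Daily incremental: rotate the binlog, then copy every closed binlog file,
# skipping the newest one (it is the freshly opened, still-active log).
mysqladmin --user=backup --password=secret flush-logs
ls /var/lib/mysql/mysql-bin.[0-9]* | head -n -1 | xargs -I{} cp {} /backups/incremental/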
May I know from where you are restarting the service: through the command prompt or from your control panel?
Share your error message here; you will get further details if anyone knows.

CouchDB delete and replication

I'm having a problem with the replication of my couchDB databases.
I have a remote database which gathers measurement data and replicates it to a central server.
On the server, I add some extra parameters to these documents.
Sometimes, a measurement goes wrong and I just want to delete it.
I do that on the central server and want to replicate it to the remote database.
Since I updated the document on the central server, there is a new revision which isn't synced to the remote.
If I want to delete that measurement, CouchDB deletes the latest revision.
Replicating this to the remote doesn't delete the documents on the remote.
(Probably because it doesn't sync the latest revision first, it just wants to delete the latest revision, which isn't on the remote yet).
Replicating the database to the remote before I delete the document fixes this issue.
But sometimes, the remote host is unreachable. I want to be able to delete the document on the central database and make sure that once the remote comes online, it also deletes the document. Is there a way to do this with default couchdb commands?
You could configure continuous replication so that your remote listens for changes on the central database. If it goes offline and comes back online, restart continuous replication.
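A minimal sketch of setting that up with a persistent _replicator document, using placeholder hostnames and database names (recent CouchDB releases want full URLs, with credentials, for both source and target):

# On the remote node: a pull replication from central that survives restarts.
# CouchDB retries it automatically whenever central becomes reachable again,
# which also propagates the deletions.
curl -X POST http://remote:5984/_replicator \
  -H "Content-Type: application/json" \
  -d '{"source": "http://central:5984/measurements", "target": "measurements", "continuous": true}'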

SharePoint disaster recovery

What are your disaster recovery plans for Windows SharePoint Services 3.0?
Currently we are backing up all databases (one content, plus the admin, search, and config databases) using SQL backup tools, and backing up the front-end server via Data Protector.
To test our backups, we use another server farm, restore the content database (following the procedure on TechNet), and create a new application that uses this database. We just have to redeploy solutions on the newly created SharePoint application.
However, we have to change database access credentials (on SQL Server): the user accounts used in production aren't the same as those used on our "test" farm.
In the end, we can restore our content database and access all our sites. Search doesn't work, but we're investigating.
Is this restore scenario reliable (as in, supported by Microsoft)?
You can't really back up / restore both the config database and the search database:
restoring the config database only works if your new farm has exactly the same server names
when you restore the search database, the full-text index is not synchronized; however, this is not a problem, as you can just reindex.
As a result, I would say that yes, this is reliable for content. But take care of the following:
You may have to redo some configuration (AAM, managed paths...).
This does not include customizations; you will want to keep a backup of your solutions.
Reliability is in the eye of the beholder. In this case, if your tests of the restore process is successful, then yes, it is reliable.
A number of my clients run SharePoint (both MOSS and WSS) in virtual environments; SQL Server is also virtualised and backed up both with SQL tools and with Volume Shadow Copy.
The advantage of a virtual environment is that downtime is only as long as it takes your virtual server host to boot the images.
If you are not using virtualisation, then remember to back up transaction logs regularly, as this will make it easier to restore to a given point in the day - it also means that your transaction logs don't grow too big!
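A minimal sketch of such a transaction log backup, run from a scheduled task, with placeholder server, database, and path (the database must be in the FULL recovery model for log backups):

REM Hourly transaction log backup of a content database via sqlcmd.
sqlcmd -S SQLSERVER -E -Q "BACKUP LOG [WSS_Content] TO DISK = 'D:\Backups\WSS_Content_log.trn'"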
I prefer to use the stsadm -o backup command ('for catastrophic backup', as it says in the help). This can be scheduled, but requires some maintenance of the backup metadata XML file when you start running out of disk space and need to archive older backups. It has the advantage of (usually) carrying over timer jobs and other configuration, because, as Nico says, restoring the config database won't work in most situations.
To restore, you can use the user interface, which is nice, and you don't have to mess around with much else. I think it restores your solutions as well, but I haven't tested that extensively.
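A minimal sketch of scheduling that catastrophic backup (the UNC path is a placeholder; stsadm lives in the 12 hive's BIN directory, so either add it to PATH or call it by full path):

REM Weekly full catastrophic farm backup to a file share, run via Task Scheduler.
stsadm -o backup -directory \\backupserver\wssbackups -backupmethod full

REM Nightly differentials against the last full backup.
stsadm -o backup -directory \\backupserver\wssbackups -backupmethod differential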
