We have recently set up Greenplum. The major concern now is a strategy for PITR. Postgres provides PITR capability, but I am a little confused about how it will work in Greenplum, since each segment has its own log directory and config file.
We recently introduced the concept of a named restore point to serve as a building block for PITR in Greenplum. To use it, call the catalog function gp_create_restore_point(), which internally creates a cluster-wide consistency point across all the segments. The function returns the restore point location for each segment and for the master. Using these restore points, you can configure recovery.conf in your PITR cluster.
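As a rough sketch (the restore point name and archive path below are hypothetical), you would first create the cluster-wide restore point on the master with `SELECT gp_create_restore_point('my_restore_point');`, and then give each segment (and the master) of the PITR cluster a recovery.conf along these lines:

```
# recovery.conf on each segment / master of the PITR cluster
restore_command = 'cp /wal_archive/seg0/%f %p'   # this segment's WAL archive path
recovery_target_name = 'my_restore_point'        # name passed to gp_create_restore_point()
```

Each segment recovers from its own WAL archive, but because the restore point was created consistently across the cluster, recovery stops at a cluster-wide consistent state.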
To demonstrate how Greenplum named restore points work, a new test directory src/test/gpdb_pitr has been added. The test showcases WAL archiving in conjunction with named restore points to perform Point-In-Time Recovery.
If you are interested in the details, please refer to the following two commits, which discuss this functionality in depth: https://github.com/greenplum-db/gpdb/commit/47896cc89b4935199aa7d97043f2b7572a71042b
https://github.com/greenplum-db/gpdb/commit/40e0fd9ce6c7da3921f0b12e55118320204f0f6d
We are setting up GitLab DR, and we have a replicated secondary (standby) of the primary Postgres database.
But we are getting read-only errors: since the standby is in read-only mode, we can't make it read-write, as it's a secondary.
What should be the standard way to proceed with DR setup for GitLab?
It depends on what you are setting up for GitLab DR: Geo has been the official solution since Nov. 2017 (GitLab 10.4).
If you are using Geo (Premium or Ultimate GitLab instances only), then the documentation would suggest:
The GitLab primary node where the write operations happen will connect to the primary database server, and the secondary nodes which are read-only will connect to the secondary database servers (which are also read-only).
Note:
In database documentation, you may see "primary" being referenced as "master"
and "secondary" as either "slave" or "standby" server (read-only).
We recommend using PostgreSQL replication slots to ensure that the primary retains all the data necessary for the secondaries to recover.
So make sure those replication slots are properly configured.
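As a sketch of the plain PostgreSQL side of this (slot and host names here are hypothetical, and GitLab's omnibus configuration normally wraps these settings for you), a physical replication slot is created on the primary and then named by the standby:

```
# On the primary, via psql:
#   SELECT pg_create_physical_replication_slot('geo_secondary');
# postgresql.conf on the primary:
max_replication_slots = 1

# recovery.conf on the secondary (PostgreSQL <= 11):
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=gitlab_replicator'
primary_slot_name = 'geo_secondary'
```

With the slot in place, the primary retains WAL until the secondary has consumed it, so a slow or temporarily disconnected secondary can always catch up.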
I'm having trouble figuring out how to take a backup of a JanusGraph database backed by Apache Cassandra persistent storage.
I'm looking for the correct methodology for performing backup and restore tasks. I'm very new to this concept and have no idea how to do it. It would be highly appreciated if someone could explain the correct approach or point me to the right documentation to safely execute these tasks.
Thanks a lot for your time.
Cassandra can be backed up a few ways. One way is called a "snapshot", which you issue via the "nodetool snapshot" command. Cassandra creates a "snapshots" sub-directory, if it doesn't already exist, under each table being backed up (each table has its own directory where it stores its data), and then creates a directory for this specific occurrence of the snapshot (you can either name the directory via a "nodetool snapshot" parameter or let it default). Cassandra then creates hard links to all of the sstables that exist for that particular table, looping through each table or keyspace depending on your "nodetool snapshot" parameters. It's very fast, as creating hard links takes almost no time.
You will have to run this command on each node in the Cassandra cluster to back up all of the data, and each node's data is backed up to the local host. I know DSE, and possibly Apache Cassandra, are adding functionality to back up to object storage as well (I don't know if this is an OpsCenter-only capability or if it can be done via the snapshot command too). Watch the space consumption on this, as there is no automatic process to clean the snapshots up.
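You can see why snapshots are nearly free with a local sketch of what "nodetool snapshot" does under the hood, using a throwaway directory in place of a real Cassandra data directory (all paths and file names here are hypothetical):

```shell
#!/bin/sh
set -e

# Stand-in for a Cassandra table data directory.
DATA_DIR="$(mktemp -d)/demo_keyspace/demo_table"
mkdir -p "$DATA_DIR"
echo "sstable contents" > "$DATA_DIR/mc-1-big-Data.db"

# A snapshot is a sub-directory of hard links to the current sstables.
SNAP_DIR="$DATA_DIR/snapshots/my_snapshot"
mkdir -p "$SNAP_DIR"
ln "$DATA_DIR/mc-1-big-Data.db" "$SNAP_DIR/mc-1-big-Data.db"

# Both names now share one inode, so the snapshot costs no extra data space
# until compaction replaces the original sstable.
stat -c '%h' "$DATA_DIR/mc-1-big-Data.db"   # prints 2 (link count)
```

This is also why snapshot space usage grows over time: once compaction rewrites sstables, the snapshot's links keep the old files alive on disk until you clear the snapshot.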
Like many database systems, you can also purchase/use third-party software to perform backups (e.g. Cohesity (formerly Talena), Rubrik, etc.). We use one such product in our environments and it works well (graphical interface, easy-to-use point-in-time recovery, etc.). They also offer easy-to-use "refresh" capabilities (e.g. refresh your PT environment from, say, production backups).
Those are probably the two best options.
Good luck.
My question is as mentioned above: I have a Cassandra database and want to use this data on another server. How can I move all the keyspaces' data?
I have snapshots, but I don't know how to open them on another server.
Thanks for your help
Unfortunately, you have limited options to move data across clouds: primarily the COPY command or sstableloader (https://docs.datastax.com/en/cassandra/2.1/cassandra/migrating.html). Or, if you plan to maintain a like-for-like setup (same number of nodes) across clouds, then simply copying the snapshots under the data directory would work.
If you are moving to IBM Softlayer, you may be able to use software-defined storage solutions that are deployed on bare metal and provide features like thin clones, which allow you to create clones of Cassandra clusters in a matter of minutes with incredible space savings. This is rather useful for creating clones for dev/test purposes. Check out Robin Systems; you may find them interesting.
The cleanest way to migrate your data from one cluster to another is using the sstableloader tool. This will allow you to stream the contents of your sstables from a local directory to a remote cluster. In this case the new cluster can also be configured differently and you also don't have to worry about assigned tokens.
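For example (host names and paths here are hypothetical), sstableloader infers the keyspace and table from the last two path components, so the sstables (e.g. copied out of a snapshot) should sit in a directory laid out as keyspace/table before you point the tool at a contact node of the new cluster:

```
# Run once per table; sstableloader streams the sstables to the target
# cluster and redistributes rows according to its token ownership.
sstableloader -d new-node1.example.com /tmp/load/my_keyspace/my_table
```

Because the rows are re-streamed rather than copied file-for-file, the new cluster can have a different node count and token layout.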
Is there a simple example of implementing a hive-based registry?
I have an application in which I need to persist the volume (up/down) setting every time I start or shut down the application. For this I came across the hive-based registry, but I don't know how to implement or use it.
Please reply.
Thanks in advance
You can start by reading this post: http://geekswithblogs.net/BruceEitman/archive/2009/08/11/windows-ce-what-is-hive-registry.aspx and then read the MSDN article on setting the Hive Based Registry.
Don't change public code though; just make the changes in your platform.reg and they will take effect over the common.reg file.
Basically, you will need to encapsulate all the registry settings that are a must at boot time between the ; HIVE BOOT SECTION and ; END HIVE BOOT SECTION markers (don't mistake these for ordinary reg-file comments - they are flags for the build system).
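A typical fragment from a platform.reg might look like this (the exact keys and values vary per BSP, and the MyApp key is a hypothetical example for the volume setting):

```
; HIVE BOOT SECTION
[HKEY_LOCAL_MACHINE\init\BootVars]
    "SystemHive"="\\Documents and Settings\\system.hv"
    "ProfileDir"="\\Documents and Settings"
    "Flags"=dword:3
; END HIVE BOOT SECTION

; Keys outside the boot section, such as your volume setting, simply live
; in the hive and persist across cold boots once the hive-based registry
; is enabled.
[HKEY_CURRENT_USER\Software\MyApp]
    "Volume"=dword:5
```

Only the settings needed before the hive is mounted belong inside the boot section; everything else stays outside it.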
Take a look at the BSP of MAINSTONEIII, it should have it implemented at WINCE600\PLATFORM\MAINSTONEIII\FILES.
Your other option for your requirement is to keep a configuration file on persistent storage - this seems to me like a simpler solution, and more adequate if you only want a persistent setting for the volume.
What are your disaster recovery plans for Windows SharePoint Services 3.0?
Currently we are backing up all databases (1 content, admin, search and config) using SQL backup tools, and backing up the front-end server via Data Protector.
To test our backups, we use another server farm, restore the content database (following the procedure on TechNet) and create a new application that uses this database. We just have to redeploy our solutions on the newly created SharePoint application.
However, we have to change the database access credentials (on SQL Server): the user accounts used in production aren't the same as those used on our "test" farm.
In the end, we can restore our content database and access all our sites. Search doesn't work, but we're investigating.
Is this restore scenario reliable (as in, supported by Microsoft)?
You can't really back up / restore both the config database and the search database:
restoring the config database only works if your new farm has exactly the same server names
when you restore the search database, the full-text index is not synchronized. However, this is not a problem, as you can simply reindex.
As a result, I would say that yes, this is reliable for content. But take care:
You may have to redo some configuration (AAM, managed paths...).
This does not include customizations; you will want to keep a backup of your solutions.
Reliability is in the eye of the beholder. In this case, if your tests of the restore process are successful, then yes, it is reliable.
A number of my clients run SharePoint (both MOSS and WSS) in virtual environments; SQL Server is also virtualised and backed up both with SQL tools and with Volume Shadow Copy.
The advantage of a virtual environment is that downtime is only as long as it takes your virtual server host to boot the images.
If you are not using virtualisation, then remember to back up transaction logs regularly, as this will make it easier to restore to a given point in the day - it also keeps your transaction logs from growing too big!
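For example (the server and database names here are hypothetical), a scheduled transaction log backup might look like:

```
# Back up the content database's transaction log; on success this lets the
# log space be reused, keeping the log file from growing unchecked.
sqlcmd -S SQLSERVER01 -Q "BACKUP LOG [WSS_Content] TO DISK = N'D:\Backups\WSS_Content_log.trn'"
```

With regular log backups plus a full backup, you can restore the database to any point covered by the log chain.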
I prefer to use the stsadm -o backup command ('for catastrophic backup', as it says in the help). This can be scheduled, but requires some maintenance of the backup metadata XML file when you start running out of disk space and need to archive older backups. It has the advantage of carrying over timer jobs (usually) and other configuration, because, as Nico says, restoring the config database won't work in most situations.
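For reference (the share path is hypothetical), the catastrophic farm backup is invoked like this:

```
stsadm -o backup -directory \\backupserver\spbackups -backupmethod full
```

The -directory target accumulates the backup sets plus the spbrtoc.xml metadata file, which is the file you have to prune when archiving older backups.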
To restore, you can use the user interface, which is nice, and you don't have to mess around with much else. I think it restores your solutions as well, but I haven't tested that extensively.