SharePoint disaster recovery

What are your disaster recovery plans for Windows SharePoint Services 3.0?
Currently we are backing up all databases (1 content, admin, search and config) using SQL backup tools, and backing up the front-end server via Data Protector.
To test our backups, we use another server farm, restore the content database (following the procedure on TechNet) and create a new application that uses this database. We just have to redeploy solutions on the newly created SharePoint application.
However, we have to change database access credentials (on SQL Server): the user accounts used in production aren't the same as those used on our "test" farm.
In the end, we can restore our content database and access all our sites. Search doesn't work, but we're investigating.
Is this restore scenario reliable (as in, supported by Microsoft)?

You can't really back up / restore either the config database or the search database:
restoring the config database only works if your new farm has exactly the same server names;
when you restore the search database, the full-text index is not synchronized. However, this is not a problem, as you can just re-index.
As a result, I would say that yes, this is reliable for content. But take care of the following:
You may have to redo some configuration (AAM, managed paths...).
This does not include customizations; you'll want to keep a backup of your solutions.

Reliability is in the eye of the beholder. In this case, if your tests of the restore process are successful, then yes, it is reliable.

A number of my clients run SharePoint (both MOSS and WSS) in virtual environments; SQL Server is also virtualised and backed up both with SQL tools and with Volume Shadow Copy.
The advantage of a virtual environment is that downtime is only as long as it takes your virtual server host to boot the images.
If you are not using virtualisation, then remember to back up transaction logs regularly, as this will make it easier to restore to a given point in the day - it also means that your transaction logs don't grow too big!
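If you script those log backups, here is a minimal sketch, assuming the pyodbc package and a content database named WSS_Content (the server, database and path names are purely illustrative; BACKUP cannot run inside a transaction, hence autocommit):

    import datetime
    import pyodbc

    # Connection details, database name and backup path are placeholders - adjust them.
    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=sqlserver;Trusted_Connection=yes;",
        autocommit=True,  # BACKUP statements cannot run inside a transaction
    )
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M")
    cursor = conn.cursor()
    cursor.execute(
        "BACKUP LOG [WSS_Content] TO DISK = N'D:\\Backups\\WSS_Content_log_"
        + stamp + ".trn'"
    )
    while cursor.nextset():  # drain any informational result sets before closing
        pass
    conn.close()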

I prefer to use the stsadm -o backup command, 'for catastrophic backup' as the help puts it. This can be scheduled, but it requires some maintenance of the backup metadata XML file once you start running out of disk space and need to archive older backups. It has the advantage of carrying over timer jobs (usually) and other configuration, because, as Nico says, restoring the config database won't work in most situations.
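For that metadata maintenance, here is a minimal pruning sketch in Python, assuming the table of contents is the spbrtoc.xml file in the backup directory; the element names (SPHistoryObject, SPBackupDirectory) are from memory, so verify them against your own file before deleting anything:

    import datetime
    import os
    import shutil
    import xml.etree.ElementTree as ET

    BACKUP_DIR = r"\\backupserver\spbackups"   # illustrative path
    KEEP_DAYS = 30

    toc_path = os.path.join(BACKUP_DIR, "spbrtoc.xml")
    tree = ET.parse(toc_path)
    root = tree.getroot()
    now = datetime.datetime.now()

    for history in list(root.findall("SPHistoryObject")):
        directory = history.findtext("SPBackupDirectory")
        if not directory or not os.path.isdir(directory):
            continue
        age = now - datetime.datetime.fromtimestamp(os.path.getmtime(directory))
        if age.days > KEEP_DAYS:
            shutil.rmtree(directory, ignore_errors=True)  # drop the spbrNNNN folder
            root.remove(history)                          # and its metadata entry

    tree.write(toc_path, xml_declaration=True, encoding="utf-8")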
To restore, you can use the user interface, which is nice, and you don't have to mess around with much else. I think it restores your solutions as well, but I haven't tested that extensively.

Related

Implementing database failover in Azure Service Fabric

My company's application experienced database connection issues this morning, resulting in me having to fail over to our secondary database. Within our Azure App Services this was an easy step of changing the connection string in the configuration; however, I could not find an easy way of changing these settings on our Service Fabric services without redeploying.
I'm considering options to allow these services to fail over to a secondary database at runtime, but I don't know what the 'best practices' would be. A couple of options I have:
I could create a DNS entry for our database server that I manage, and then just switch it to the new server name when I need to fail over.
I could have some sort of REST API to call on my app services that would return whether or not to go to the secondary database.
Any other ideas? I'd like to make failover to the secondary as seamless as possible so it can be done quickly.
Have you considered putting both your primary and secondary database connection strings into your application's config and writing some code that automatically switches between them when it detects a problem? Both of the options you presented put a human in the path, which means your users are going to experience downtime until the human fixes the problem (maybe the human is asleep, or on vacation, or on vacation and asleep).
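To make the idea concrete, here is a rough sketch in plain Python rather than any Service Fabric API; the connection strings, the failure threshold and the run_query callback are all placeholders:

    import time

    PRIMARY = "Server=sql-primary;Database=app;..."      # placeholder
    SECONDARY = "Server=sql-secondary;Database=app;..."  # placeholder

    class FailoverConnectionProvider:
        """Hands out the active connection string and flips to the other one
        after a configurable number of consecutive failures."""

        def __init__(self, primary, secondary, max_failures=3):
            self._candidates = [primary, secondary]
            self._active = 0
            self._failures = 0
            self._max_failures = max_failures

        @property
        def connection_string(self):
            return self._candidates[self._active]

        def report_success(self):
            self._failures = 0

        def report_failure(self):
            self._failures += 1
            if self._failures >= self._max_failures:
                self._active = 1 - self._active  # switch primary <-> secondary
                self._failures = 0

    def query_with_failover(provider, run_query, retries=5, delay=1.0):
        """Run a query, letting the provider switch databases if it keeps failing."""
        for _ in range(retries):
            try:
                result = run_query(provider.connection_string)
                provider.report_success()
                return result
            except Exception:
                provider.report_failure()
                time.sleep(delay)
        raise RuntimeError("database unavailable on both primary and secondary")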
In Service Fabric, Application (and system) upgrades are always rolling upgrades. Rolling upgrades have the advantage of preventing global outages. For example, suppose at some point you updated your config with the wrong connection string. A global config change might be quick and easy, but now you have a global outage and some upset customers. A rolling upgrade would have caught the error in the first upgrade domain and then rolled back, so only a fraction of your application would have been affected.
You can do a config-only rolling upgrade. This is where you make a change to your config package and then create a differential upgrade package so that only the config changes go out and your service process doesn't have to restart.
Just to post an update to my issue here: SQL Azure now has automatic failover groups. This is described here.

Sitecore - Manually copy indexes, do you need to restart app pool

The company I'm working for likes to manually rebuild the Lucene indexes using /admin/toolbox/rebuild-index.aspx in Sitecore 6.6. Once the indexes have been rebuilt, they copy the files to each Content Delivery server manually and then restart the app pool on each CD server.
At the moment, due to the way the site was built, the site has a long start-up time (this is being fixed sometime in the future), so restarting the app pools is a pain. My question is:
Does one need to restart the app pools for the new index files to be picked up?
Yes. The index files will be locked by Lucene while it's running.
I guess, theoretically, it would be possible to get Lucene running on the CD servers in read-only mode - but I've never attempted this myself, and I don't know of a way to achieve it off hand.
If you are going to be doing fixes on the site in the future, might I suggest you move the indexing off-server? Implement a centralised SOLR index. That way, once rebuilt, it will be readily available to any and all CD servers right away, with no need for copying and/or restarting app pools.
Scalability settings may be something you need for a multiple-server implementation.
Whether this covers Lucene is something you may need to find out.

Automated migrations for Azure Blob Storage. Does this exist?

We are using Azure Blob Storage in all our projects. Over the lifetime of a project the naming conventions for files in Azure change: sometimes we would like to rename containers, remove extra folders and do other clean-up operations.
But Azure does not make it easy to rename things; we have to do a copy and then a delete.
Also, we can change the naming convention locally during development, but we have to remember to do the exact same operation on production storage when we deploy new versions.
At the same time we use Entity Framework migrations: we update the database and a migration script is created. Then we run "update-database" and the DB is updated. The same thing is run automatically by deployment scripts: check if the production DB needs to be updated, and update it if needed.
What would be good is if we could have the same migration goodness for Azure storage: check whether all the migration scripts have been applied, and execute the missing ones. Somewhere in the containers, keep a reference to the latest executed script.
Does such a thing exist? Or should I have a go at it and try implementing something myself?
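Roughly the kind of thing I have in mind, as a minimal sketch using the azure-storage-blob Python SDK (the "migrations" container, the applied.txt marker blob and the example migration are made up for illustration, and both containers are assumed to be in the same storage account):

    from azure.core.exceptions import ResourceExistsError, ResourceNotFoundError
    from azure.storage.blob import BlobServiceClient

    def rename_images_container(service):
        """Example migration: 'rename' a container by copy + delete."""
        src = service.get_container_client("old-images")
        dst = service.get_container_client("images")
        try:
            dst.create_container()
        except ResourceExistsError:
            pass
        for item in src.list_blobs():
            source_blob = src.get_blob_client(item.name)
            dst.get_blob_client(item.name).start_copy_from_url(source_blob.url)
        # A real implementation would poll the copy status before deleting.
        src.delete_container()

    MIGRATIONS = [
        ("001_rename_images", rename_images_container),
    ]

    def run_pending_migrations(connection_string):
        service = BlobServiceClient.from_connection_string(connection_string)
        state = service.get_container_client("migrations")
        try:
            state.create_container()
        except ResourceExistsError:
            pass
        marker = state.get_blob_client("applied.txt")
        try:
            applied = set(marker.download_blob().readall().decode().splitlines())
        except ResourceNotFoundError:
            applied = set()

        for name, migrate in MIGRATIONS:
            if name in applied:
                continue
            migrate(service)  # run the missing migration
            applied.add(name)
            marker.upload_blob("\n".join(sorted(applied)), overwrite=True)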
No, such functionality/behavior does not exist. And do remember that EF migrations are supported by and are part of EF itself, not the database! So when you talk about Azure Blob Storage - it, as a service, does not provide such functionality, the same way SQL Server itself does not.
As to whether such a library/code exists - no, there isn't one.
You are raising a very interesting question, though!
I personally am not a big fan of "migrations". You can do them in the early stages of the development life cycle, but once you hit GA/production you have to be very careful about what you are doing. Even EF migrations might be fine with small database sizes, but are you willing to run migrations on a DB whose tables hold millions of records of production data? The same goes for blobs. If you have 100 or 1,000 blobs it might be fine. How about 2M blobs? Are you really willing to put in code that would go through 2M entities, perform some operations on them, and run as part of your build/deploy process? I would not.

Transfer Liferay files (document library) between two servers

I have built my Liferay website in the development environment and it is now ready to be published. I have also installed two Liferay nodes on two different servers where I want to put my website: server1 is active and server2 is a backup.
The problem is that when I started development, I didn't know I would one day need a two-server structure, so I stored all the documents and images on the file system and not in a database. So basically, with this setup, when I make changes on server1, I have to transfer the document library manually to server2, just like I would do for the themes.
I tried to change the document library location from the file system to the database in portal-ext.properties, but that didn't help.
So, my questions:
Is there a way to transfer these files to a database now, where they can be shared by both servers? And if not,
Is it possible to somehow transfer the document library from server1 to server2 automatically through some script?
Thanks,
Adia
If server2 is a cold standby backup server, and assuming you have a consistent backup of the Liferay data directory of server1 and of the database from the same moment in time, you can just restore the backup of the Liferay data directory to server2, restore the DB to the point in time corresponding to the data directory backup, and start server2.
In hot standby scenarios and clustered environments things get a little more complicated, as you need a common place to store documents, images, search indexes, etc. The easiest way is to store everything in the database or on a common file system so that multiple nodes are always working on the same data.
If you want to get your current set of documents, which is stored on disk, into the database, the easiest way is to use the Server > Server Administration > Data Migration tab in the Control Panel. It has an option to migrate documents from the existing repository (i.e. the disk) to another one, which would be the JCRStore in your case, as that store can be configured to use the database.
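If you prefer to stay on the file system and script the copy from your second question instead, here is a minimal one-way mirror sketch (the paths are illustrative; the default location is usually data/document_library under the Liferay home, but check your configuration):

    import os
    import shutil

    SRC = "/opt/liferay/data/document_library"          # server1, illustrative
    DST = "/mnt/server2/liferay/data/document_library"  # share exposed by server2

    def mirror(src, dst):
        """Copy new or changed files from src to dst, preserving the tree."""
        for root, _dirs, files in os.walk(src):
            rel = os.path.relpath(root, src)
            target_dir = os.path.join(dst, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in files:
                s = os.path.join(root, name)
                d = os.path.join(target_dir, name)
                # copy only files that are new or have changed (mtime/size check)
                if (not os.path.exists(d)
                        or os.path.getmtime(s) > os.path.getmtime(d)
                        or os.path.getsize(s) != os.path.getsize(d)):
                    shutil.copy2(s, d)

    if __name__ == "__main__":
        mirror(SRC, DST)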

Can a cloned VM be an application backup plan?

I am an application developer and don't know much about virtual machines (VMs).
However, our application resides on a VM, and frequent patches need to be applied to fix/update it. For disaster recovery, it was suggested that we back up everything on the server, so that once the server is restored, no application needs to be re-installed and configured.
Our network administrator thinks this can be done by cloning the VM. But if we want to back up the clone to tape, it would expose the VM to the backup drive: anyone who can access it could erase the VM and everything would be gone. That seems very risky.
I would appreciate it if you could let me know what you think about this, or any suggestions.
Cloning is perfectly acceptable.
You don't have to back up to tape... it can be done to a NAS, for example, and with the proper security and setup, backups cannot be deleted by unauthorized people.
You can use any NAS and VM replication software like Veeam, Acronis or Nakivo; it will totally solve your problems. All such software has various permission settings, so you can control who can and who cannot delete your data.
