I have built my Liferay website in the development environment and it is now ready to be published. I have also installed two Liferay nodes on two different servers where I want to deploy the site: server1 is active and server2 is the backup.
The problem is that when I started development, I didn't know I would one day need this two-server setup, so I stored all the documents and images on the file system rather than in a database. With this setting, whenever I make changes on server1, I have to transfer the document library manually to server2, just as I do for the themes.
I tried to change the document library location from the file system to the database in portal-ext.properties, but that didn't help.
So, my questions:
Is there a way to move these files into a database now, so that they can be shared by both servers? And if not,
Is it possible to transfer the document library from server1 to server2 automatically through some script?
Thanks,
Adia
If server2 is a cold standby backup server, and assuming you have a correct backup of server1's Liferay data directory and of the database taken at the same moment in time, you can simply restore the data directory backup to server2, restore the database to that same point in time, and start server2.
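As a rough illustration only (the paths, the choice of MySQL and the database name below are my assumptions, not your actual setup), the restore on server2 could look something like this:

    # Sketch: assumes the Liferay data directory lives under /opt/liferay/data
    # and the portal uses a MySQL database called lportal -- adjust to your install.

    # 1. Copy the backed-up data directory (documents, images, search indexes) onto server2.
    rsync -a /backup/liferay-data/ /opt/liferay/data/

    # 2. Restore the database dump that was taken at the same moment as the data directory backup.
    mysql -u liferay -p lportal < /backup/lportal.sql

    # 3. Start the Liferay/Tomcat instance on server2.
    /opt/liferay/tomcat/bin/startup.sh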
In hot standby scenarios and clustered environments things get a little more complicated, because you need a common place to store documents, images, search indexes, etc. The easiest way is to store everything in the database or on a shared file system so that all nodes always work on the same data.
If you want to get your current set of documents, which is stored on disk, into the database, the easiest way is to use the Server > Server Administration > Data Migration tab in the Control Panel. It has an option to migrate documents from the existing repository (i.e. the disk) to another one, which in your case would be the JCRStore, since that store can be configured to use the database.
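For reference, the store setting in portal-ext.properties might look roughly like the sketch below; the exact property and class names depend on your Liferay version, so verify them against the portal.properties that ships with your release.

    # In portal-ext.properties (sketch only -- check the names for your Liferay version).
    # Liferay 6.1+ uses dl.store.impl; older releases use
    # dl.hook.impl=com.liferay.documentlibrary.util.JCRHook instead.
    # JCRStore goes through Jackrabbit, which can in turn be pointed at the
    # database via its repository.xml.
    dl.store.impl=com.liferay.portlet.documentlibrary.store.JCRStore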
Related
I have a web application (using a MongoDB database, AngularJS on the front end and Node.js on the back end) that is deployed in two places. The first is on a static IP so that it can be accessed from anywhere, and the second is on a local machine so that users can keep working when no internet connection is available. In both places, data can be inserted by users. My requirement is to sync the two databases whenever the local machine has an internet connection, i.e. from the local database to the remote one and vice versa, without losing any data on either side.
One way I am thinking of is to provide a sync button in the application and sync the databases using insert/update queries. I am not sure whether there is a better, automated way to do this so that the databases sync on their own, like data being copied in a replica set.
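Roughly, the manual approach I have in mind is something like the sketch below (host and database names are placeholders); it only pushes data one way and does not resolve conflicting edits, which is exactly the part I am unsure about:

    # Dump the local database and replay it into the remote one.
    # mongorestore without --drop inserts documents that don't yet exist on the
    # remote side and skips duplicates, so conflicting updates are NOT handled --
    # that would still need timestamps or a proper replica set.
    mongodump --host localhost --db myappdb --out /tmp/myappdb-dump
    mongorestore --host remote.example.com --db myappdb /tmp/myappdb-dump/myappdb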
Please suggest the best solution for this task. Thanks in advance.
We have a Linux server for the database. Most of our data is under /var/. I would like to back up the entire directory to an external hard drive or to another Linux system, so that if something goes wrong I can replace the directory wholesale. Since the directory has many files, I don't want to copy and paste everything each time; instead I would like to sync them.
Is there an easy way to do that? rsync can do it, but how do I avoid logging in to the server every time? By the way, I have limited knowledge of Linux systems.
Any comments and suggestions are appreciated.
Bikesh
Rsyncing the raw database files is not a recommended way of backing them up. I believe you have MySQL running on the server; in that case, you can take a full database dump on the server using the steps described in the following link:
http://www.microhowto.info/howto/dump_a_complete_mysql_database_as_sql.html#idp134080
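For example (database name and credentials are placeholders):

    # Dump one database as SQL; use --all-databases to dump everything.
    # --single-transaction gives a consistent snapshot for InnoDB tables.
    mysqldump --single-transaction -u backupuser -p mydatabase > /backup/mydatabase.sql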
Then sync those dump files to your backup server. You can use the rsync command for this purpose:
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
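To avoid having to log in (or type a password) on every sync, you can combine rsync with key-based SSH authentication, roughly like this (user, host and paths are placeholders):

    # One-time setup: generate an SSH key pair and install the public key on the backup server.
    ssh-keygen -t rsa                    # accept the defaults; leave the passphrase empty for unattended runs
    ssh-copy-id backupuser@backupserver

    # After that, rsync runs over SSH without prompting for a password.
    # -a preserves permissions/timestamps, -z compresses, --delete mirrors deletions.
    rsync -az --delete /backup/ backupuser@backupserver:/backup/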
Make sure that MySQL is installed on the backup server too. You can also copy the MySQL configuration file /etc/my.cnf to the backup server. If you need the backup database to always be up to date, you can set up MySQL replication; you can follow the guide below to do so.
http://kbforlinux.blogspot.in/2011/09/setup-mysql-master-slave-replication.html
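In outline, replication boils down to something like the sketch below (all names, passwords and log coordinates are placeholders; the guide above covers the real procedure, including creating a replication user on the master):

    # /etc/my.cnf on the master: enable the binary log and give the server a unique id
    # (the slave gets its own my.cnf with, e.g., server-id=2).
    [mysqld]
    server-id=1
    log_bin=mysql-bin

    # Then, on the slave, point it at the master and start replicating:
    mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='master_ip', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4; START SLAVE;"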
How should data such as MySQL databases and web site sources be handled with Vagrant?
As a programmer, I like being able to set up development environments easily, so I created a Vagrant box and provisioned it with Puppet. But I keep asking myself: what about the data in the box? What happens if I need to destroy the box and recreate it? All my data will be erased!
I've already had problems with a crashed VM and I don't want to make the same mistake again; I want to keep control of my data.
How do you handle this? Do you use shared folders for your live data? Where do you keep your data, inside or outside the box?
In the current version of Vagrant (1.0.3), you have two main options:
Use shared folders. You can put your MySQL data directory into a shared folder so that the data ends up back on your host machine. The downside is that shared folders are quite slow compared to the native VM filesystem in VirtualBox, and you can run into odd permission issues as well.
Set up a task (rake, make, etc.) to copy your MySQL data to your shared folder on demand. Then, before you destroy your VM, you can run the task to export your data to the shared folder, and reimport the data when you bring the VM back up, as sketched below.
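A minimal sketch of such an export/import task, assuming a stock MySQL install inside the box and Vagrant's default /vagrant shared folder (the database name is a placeholder):

    # Export: dump the database into the shared folder before destroying the box
    # (--databases includes the CREATE DATABASE statement in the dump).
    vagrant ssh -c "mysqldump -uroot --databases myapp > /vagrant/db-backup.sql"

    # Rebuild the box, then reimport the dump.
    vagrant destroy -f && vagrant up
    vagrant ssh -c "mysql -uroot < /vagrant/db-backup.sql"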
What is the preferred method of keeping a server farm synchronized? It's currently a pain to have to upload to multiple servers. I'm looking for a balance of ease of use and cost. I read somewhere that a DFS can do it, but that requires the servers to run on a domain. Are there any performance issues with using a DFS?
We use SVN to keep the server files in dedicated repositories and have a script that pulls the latest files out of SVN onto each of the servers in the web farm (6 servers). It employs the TortoiseSVN utility, as it has an easier command-line interface for the admins, and updates all the machines from a single server, usually the one with the lowest IP address in the pool.
We make sure no server has any local modifications in its checked-out repository, to avoid conflicts, and we get a change log with file histories in SVN, with the benefit of rollback too. We also include the admin scripts, so they get the benefit of versioning and change logs as well.
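As a sketch of the shape of that script (our real version uses TortoiseSVN on Windows; this illustration uses the plain svn client over SSH, and the host names and path are placeholders):

    # Run "svn update" on every server in the farm from the controlling node.
    for host in web01 web02 web03 web04 web05 web06; do
        ssh deploy@"$host" "svn update /var/www/site"    # each checkout is kept free of local modifications
    done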
What are your disaster recovery plans for Windows SharePoint Services 3.0?
Currently we back up all databases (one content database, plus the admin, search and config databases) using SQL backup tools, and back up the front-end server via Data Protector.
To test our backups, we use another server farm, restore the content database (following the procedure on TechNet) and create a new web application that uses this database. We then just have to redeploy our solutions on the newly created SharePoint application.
However, we have to change the database access credentials (on SQL Server): the user accounts used in production aren't the same as those used on our "test" farm.
In the end, we can restore our content database and access all our sites. Search doesn't work yet, but we're investigating.
Is this restore scenario reliable (as in supported by Microsoft)?
You can't really back up / restore the config database or the search database:
restoring the config database only works if your new farm has exactly the same server names
when you restore the search database, the full-text index is not synchronized. However, this is not a problem, as you can simply reindex.
As a result, I would say that yes, this is reliable for content. But take care of the following:
You may have to redo some configuration (AAM, managed paths, ...).
This does not include customizations; you'll want to keep a backup of your solutions.
Reliability is in the eye of the beholder. In this case, if your tests of the restore process are successful, then yes, it is reliable.
A number of my clients run SharePoint (both MOSS and WSS) in virtual environments; SQL Server is also virtualised and backed up both with SQL tools and with Volume Shadow Copy.
The advantage of a virtual environment is that downtime is only as long as it takes your virtual server host to boot the images.
If you are not using virtualisation, then remember to back up the transaction logs regularly, as this makes it easier to restore to a given point in the day. It also keeps your transaction logs from growing too big!
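For example, a scheduled transaction log backup can be as simple as the line below (server, database name and path are placeholders, and the database must be in the full recovery model):

    # Back up the transaction log of a content database; run this on a schedule
    # (e.g. hourly) to enable point-in-time restores and keep the log from growing.
    sqlcmd -S SQLSERVER01 -E -Q "BACKUP LOG [WSS_Content] TO DISK = 'D:\Backups\WSS_Content.trn'"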
I prefer to use the stsadm -o backup command 'for catastrophic backup', as it says in the help. It can be scheduled, but requires some maintenance of the backup metadata XML file when you start running out of disk space and need to archive older backups. It has the advantage of carrying over timer jobs (usually) and other configuration, because, as Nico says, restoring the config database won't work in most situations.
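A sketch of how that can be wired up (the share path, the schedule and the assumption that stsadm is on the PATH are mine):

    # Catastrophic farm backup to a UNC share; -backupmethod can be full or differential.
    stsadm -o backup -directory \\backupserver\spbackups -backupmethod full

    # Schedule it nightly with the Windows task scheduler, for example:
    schtasks /create /tn "SPFarmBackup" /sc daily /st 02:00 /tr "stsadm -o backup -directory \\backupserver\spbackups -backupmethod full"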
To restore, you can use the user interface, which is nice, and you don't have to mess around with much else. I think it restores your solutions as well, but I haven't tested that extensively.