How to handle data such as MySQL databases and web site sources with Vagrant? - puppet

As a programmer, I like being able to easily set up environments for development. So I created a Vagrant box and provisioned it with Puppet, but I'm asking myself: what about the data in the box? What happens if I need to destroy the box and recreate it? All my data will be erased!
I had some problems with a crashed VM and I don't want to make the same mistake again; I want to keep control of my data.
How do you handle this? Do you use shared folders for your live data? Where do you keep your data, inside or outside the box?

In the current version of Vagrant (1.0.3), you have two main options:
Use shared folders. You can put your MySQL data directory into a shared folder so that the data lives on your host machine and survives the VM being destroyed. The con of this is that shared folders are actually quite slow compared to the native VM filesystem in VirtualBox, and you can run into weird permission issues as well.
Set up a task (rake, make, etc.) to copy your MySQL data to your shared folder on demand. Then, before you decide to destroy your VM, you can run the task to export your data to your shared folder, and reimport the data when you bring your VM back up.
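A hedged sketch of the second option, assuming Vagrant's default /vagrant shared folder and a placeholder database name (mydb) and credentials:

    # export.sh - run inside the VM before destroying it
    # Dump the database into the shared folder, which lives on the host
    mysqldump -u root -p mydb > /vagrant/mydb-backup.sql

    # import.sh - run inside the VM after recreating it
    # Reload the dump from the shared folder into the fresh MySQL instance
    mysql -u root -p mydb < /vagrant/mydb-backup.sql

You could wire these up as rake or make targets; the important part is that the dump lands in the shared folder, so it outlives the VM.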

Related

Databases and data not saving ~Linux QubesOS~

When I create a PostgreSQL database, create tables and columns, and even insert data into the columns, I can't restart my machine without losing the created databases and all the data.
I have tried changing a couple of things in the configuration file, but nothing helped.
I also have to reset the password for postgres every time I restart my machine. I mainly use MongoDB; I am just learning PostgreSQL so I can use it if I ever need to in the future. I am running a Linux machine with QubesOS, and I have a few problems like this using QubesOS. In every tutorial I watch, everybody uses Macs. A Mac seems good and all, kind of a mix between Windows and Linux, the best of both worlds: easy package installs and terminal control. But I don't want to trade my Linux machine for a Mac; I would much rather just fix these problems I am having with PostgreSQL on my Linux machine.
You ran into an important security feature of QubesOS: all data modifications are discarded on shutdown of a so-called "Qube", which is reset to its original state.
But there is an exception for data kept in certain special directories.
If you convince your database packages to put their data into these directories, it will be preserved across reboots of your database Qube:
Read this documentation for more information.
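As a hedged sketch: QubesOS provides a "bind-dirs" mechanism for this, driven by shell-fragment config files under the persistent /rw area. Assuming PostgreSQL keeps its data under /var/lib/postgresql (the path varies by template), running the following inside the app qube makes that directory persistent:

    # /rw is persistent storage in a QubesOS app qube;
    # files in qubes-bind-dirs.d are sourced as shell at boot
    sudo mkdir -p /rw/config/qubes-bind-dirs.d
    sudo tee /rw/config/qubes-bind-dirs.d/50_user.conf <<'EOF'
    # Preserve the PostgreSQL data directory across qube restarts
    binds+=( '/var/lib/postgresql' )
    EOF
    # Takes effect on the next restart of the qube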

Migrate data from one server to another

I bought a new server and I want to move all the data (directories, subdirectories, users, passwords, etc.) from my old server to it.
Is there a way to do that?
Thanks,
Do you have physical access to both servers? If so, you can use the dd command to clone the disk from the old server onto the disk that is going into the new server.
In order to do this though, both hard drives have to be installed in one of the servers.
You can also use netcat and dd to clone a disk over a network.
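A hedged sketch of the netcat approach, assuming the source disk is /dev/sda, the target disk is /dev/sdb, and port 9000 is open between the two machines (some netcat variants want nc -l -p 9000 instead):

    # On the new server: listen and write the incoming stream to the target disk
    nc -l 9000 | dd of=/dev/sdb bs=64K

    # On the old server: read the source disk and stream it over the network
    dd if=/dev/sda bs=64K | nc new-server-ip 9000

Make sure neither disk is mounted read-write while this runs, or the copy will be inconsistent.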
For the directories and files, use an FTP client from your server if it allows you to; if not, just download all the content to your computer and upload it to the new server.
For the users and passwords, I guess they are in a database. Connect to the database using SSH, telnet, phpMyAdmin or any RDBMS client, export a dump file, then log in to the new server's SQL system and import that dump file.
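For MySQL specifically, a hedged sketch (user names and hostnames are placeholders):

    # On the old server: dump all databases, which includes the user/grant tables
    mysqldump -u root -p --all-databases > all-databases.sql

    # Copy the dump to the new server, then load it there with:
    #   mysql -u root -p < all-databases.sql
    scp all-databases.sql user@new-server: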
In any case, you should give more details about both servers so we can help you. For example, are they shared hosting or dedicated machines? What kind of access do you have to them? Their operating system would also help people answer you accurately.
In principle, yes.
If the hardware is similar (= just more RAM and disk space, but the same CPU architecture and no special graphics card drivers), you might be able to copy every file and then reinstall the boot loader (the boot loader config usually changes when the hard disk size changes).
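A hedged sketch of that copy-everything approach with rsync, assuming the old server is running and the new server is booted from a rescue system with sshd running and its disk formatted and mounted at /mnt/new (all of this is an assumption; adjust to your setup):

    # On the old server: push the whole filesystem to the new machine over SSH,
    # preserving permissions, ACLs and hard links, while skipping
    # pseudo-filesystems that are recreated at boot
    rsync -aAXH --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*"} / root@new-server:/mnt/new/

    # Then, on the new server (from the rescue system, typically inside a chroot):
    # reinstall the boot loader, e.g. for GRUB 2:
    #   grub-install /dev/sda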
Or you can create a list of all services that you use, determine which config files each one uses and then just copy those. Ideally, you shouldn't copy them but compare the old and the new versions and merge them.
The most work-intensive way is to use a tool like puppet. In a nutshell, puppet allows you to create install scripts for services (along with all the configuration that you need). So if you need to install a service again (new hardware, second server), you just tell puppet to do it. On the plus side, your whole installation will be documented, too: if you ever wonder why something is the way it is, you can look into the puppet files.
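A hedged illustration of what such an install script can look like (package and service names vary by distribution):

    # Write a minimal Puppet manifest that installs and runs MySQL,
    # then apply it locally
    cat > mysql.pp <<'EOF'
    package { 'mysql-server':
      ensure => installed,
    }
    service { 'mysql':
      ensure  => running,
      enable  => true,
      require => Package['mysql-server'],
    }
    EOF
    puppet apply mysql.pp

Rerunning the same manifest on a second server gives you an identically configured service, which is exactly the property you want when migrating.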
Of course, this approach takes a lot of time and discipline, so it might not be worth it in your case. Apply common sense.

Transfer Liferay files (document library) between two servers

I have built my Liferay website in the development environment and it is now ready to be published. I have also installed two Liferay nodes on two different servers where I want to put my website: server1 is active and server2 is the backup.
The problem is that when I started development, I didn't know I would one day need a two-server structure, so I stored all the documents and images on the file system and not in a database. So with this setup, when I make changes on server1, I have to transfer the document library manually to server2, just like I would do for the themes.
I tried to change the document library location from the filesystem to the database in portal-ext.properties, but that didn't help.
So, my questions:
Is there a way to transfer these files to a database now, where they can be shared by both servers? and if not,
Is it possible to somehow transfer the document library from server1 to server2 automatically through some script?
Thanks,
Adia
If server2 is a cold-standby backup server, and assuming you have a backup of server1's Liferay data directory and of the database taken at the same moment in time, you can just restore the data directory backup to server2, restore the DB to the corresponding moment in time, and start server2.
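A hedged sketch of such a transfer, assuming a Liferay bundle with its data directory at /opt/liferay/data and a MySQL database named lportal (both are placeholders; adjust to your install):

    # On server1: snapshot the database and the document library together
    mysqldump -u root -p lportal > lportal.sql
    tar czf liferay-data.tar.gz -C /opt/liferay data

    # Copy both to server2, then restore there before starting Liferay:
    #   mysql -u root -p lportal < lportal.sql
    #   tar xzf liferay-data.tar.gz -C /opt/liferay
    scp lportal.sql liferay-data.tar.gz user@server2: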
In hot-standby scenarios and clustered environments things get a little more complicated, as you need a common place to store documents, images, search indexes, etc. The easiest way is to store everything in the database or on a common file system so that multiple nodes are always working on the same data.
If you want to get your current set of documents, stored on disk, into the database, the easiest way is to use the Server > Server Administration > Data Migration tab in the Control Panel. It has an option to migrate documents from the existing repository (the disk) to another, which in your case would be the JCRStore, as that store can be configured to use the database.

Cloning PostgreSQL database

I want to have a clone of a PostgreSQL database. If I copy the entire data directory from one machine and replace another machine's data directory with it, will there be any problems? Both of them have the same OS, btw (CentOS).
Certainly: if you stop the server and then copy it, that's fine. If you don't, the cloned server will have to do crash recovery, which isn't so good. Or just use pg_dumpall to produce a script that recreates the data on the new machine.
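A hedged sketch of the pg_dumpall route (the superuser name is an assumption):

    # On the source machine: dump the entire cluster, roles and all
    pg_dumpall -U postgres > cluster.sql

    # On the target machine: load it into a freshly initialized cluster
    psql -U postgres -f cluster.sql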
You can invoke pg_start_backup() and then copy the data directory. Changes made while you copy are captured in the write-ahead log, and the copy is made consistent by replaying that log on restore; run pg_stop_backup() once the copy is done.
http://www.postgresql.org/docs/8.1/static/backup-online.html
See Section 23.3.2, "Making a Base Backup".
You can then restore the files on another machine running an identical version of PostgreSQL on the same architecture.
Section 23.3.3, "Recovering with an On-line Backup", explains how to restore the backup you have made.
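A hedged sketch of that procedure, assuming the 8.1-era function names from the linked docs (newer PostgreSQL versions renamed them) and a data directory at /var/lib/pgsql/data:

    # Mark the start of the backup (WAL archiving must be configured)
    psql -U postgres -c "SELECT pg_start_backup('clone');"

    # Copy the data directory while the server keeps running
    tar czf basebackup.tar.gz -C /var/lib/pgsql data

    # Mark the end of the backup
    psql -U postgres -c "SELECT pg_stop_backup();"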

How to duplicate a virtual PC with SharePoint, K2 and domain controller

Is anyone aware of an easy way of duplicating and renaming a virtual PC (can be MS VPC, VMWare or Virtual Box), which is running SharePoint, K2 and acting as a domain controller? I’m looking for a method of creating an image which can be quickly and easily copied and run by multiple parties on the same network simultaneously without name conflicts. It’s either that or go through a ground-up build on each and every machine as far as I can see.
I'd advise against it: renaming an installed SharePoint machine is sure to cause you pain, indefinitely and unexpectedly. The way to go is with scripted installs:
create a copy of a VM with the OS installed
rename the machine + run sysprep
script the SQL Server install
script the MOSS install
script the MOSS configuration (replaces the config wizard + a lot of manual settings)
It can all be done unattended.
As a shortcut for short-lived development machines I have used the following. Just make sure the SharePoint configuration wizard runs after the rename and there should be no problem.
create a copy of a VM having: OS + SQL + MOSS (config wizard not run)
rename the machine
script the MOSS configuration
It has the advantage that your development machines are identically installed. It takes about 10 minutes to create a fresh one. This skips sysprep, but the machines are renamed, so you can run them all on your network. Not running sysprep has never caused me grief, but I wouldn't do it for production environments. Running the MOSS configuration scripted makes sure it works in the renamed environment (and all MOSS farms are configured exactly the same: same ports, SSP setup, etc. Yay!)
For MOSS configuration scripting see http://stsadm.blogspot.com/2008/03/sample-install-script.html
Plenty of samples for SQL out there too.
SharePoint doesn't like having the server renamed from under its feet (so to speak). Neither does SQL Server (which I assume you'd have installed on the VM). Not sure about renaming a DC; there are probably problems there as well...
Having said that, there are some instructions I've read for renaming both SharePoint machines and SQL Server machines, so you might get somewhere.
On the third hand, I've tried it a few times and always ended up rebuilding the server from the ground up for SharePoint as it can get subtly mangled in ways which aren't always apparent straight away (the admin interface and shared services seem to be especially easy to confuse). I've found that I can build a vanilla MOSS install pretty quickly these days...
SharePoint writes the name of the server into configuration tables in SQL Server, so if you change the name of the server, things stop working.
What you can do is install just the OS, then take a copy each time you need a new machine. Run sysprep to give the machine a new name, then install SQL Server and MOSS.
This is not exactly what you are after, but it should save you some time.
I've done this, and it wasn't too bad.
Rename the SharePoint-server first, then rename the Windows server.
This posting has a nice checklist.
Don't forget to remove the NIC node from the settings file of the virtual machine, otherwise you'll get name collisions due to duplicate MAC addresses. Here's a how-to.
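For VirtualBox specifically, a hedged one-liner (the VM name is a placeholder and the adapter number depends on your configuration):

    # With the VM powered off: give its first network adapter
    # a freshly generated MAC address
    VBoxManage modifyvm "SharePointDev" --macaddress1 auto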
I believe the solutions above are really good. But I would suggest an alternative ...
If this is a development virtual PC, I would suggest the following:
Do not rename the server
Change the IP address to be on a different network
Change the MAC address so that there are no address conflicts
Since you are using it as a development VPC, edit the computer's hosts/lmhosts file so the entry points to the new IP address
You might want to skip step 2 and stay on the same network, but the changed hosts file will still point back to you. For example, if your server name was "myserver" and it pointed to 192.168.1.100, the local IP (via a hosts file entry), then when you copy the server, give it the IP 192.168.1.150 and edit the hosts file to point myserver to 192.168.1.150; the system will still work flawlessly. There will be some domain name collisions in the event log of the machine, but it won't affect your development.
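A hedged illustration of that hosts-file change, using the example's own names and addresses (on Windows the file lives at C:\Windows\System32\drivers\etc\hosts):

    # Before (pointing at the original machine):
    # 192.168.1.100   myserver
    # After (pointing at the clone's new address):
    192.168.1.150   myserver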
