Cloning a PostgreSQL database - Linux

I want to have a clone of a PostgreSQL database. If I copy the entire data directory from one machine and replace another machine's data directory with it, will there be any problems? Both machines have the same OS, by the way (CentOS).

Certainly, if you stop the server and then copy the data directory, that's fine. If you don't, the cloned server will have to perform crash recovery, which isn't ideal. Or just use pg_dumpall to produce a script that recreates the data on the new machine.
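For example, a rough sketch of both approaches (host names, paths and service names are illustrative and vary by setup):

    # Option A: stop the server, copy the data directory, start it again
    sudo service postgresql stop
    rsync -a /var/lib/pgsql/data/ newhost:/var/lib/pgsql/data/
    sudo service postgresql start

    # Option B: dump everything as SQL and replay it on the new machine
    pg_dumpall -U postgres > all.sql
    psql -U postgres -h newhost -f all.sql postgres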

You can invoke pg_start_backup() and then copy the data directory. Changes made while the copy is in progress keep going to the write-ahead log and are replayed when the backup is restored after pg_stop_backup().
http://www.postgresql.org/docs/8.1/static/backup-online.html
See section 23.3.2, "Making a Base Backup".
You should then be able to restore the files on another machine running an identical version of PostgreSQL on the same architecture.
Section 23.3.3, "Recovering with an On-line Backup", explains how to restore the backup you have made.
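Very roughly, the base backup steps described above look like this (label, paths and host are placeholders; continuous WAL archiving via archive_command must already be configured for the backup to be restorable):

    psql -U postgres -c "SELECT pg_start_backup('clone');"
    rsync -a /var/lib/pgsql/data/ newhost:/var/lib/pgsql/data/
    psql -U postgres -c "SELECT pg_stop_backup();"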

Related

Database copy larger than original

I just created a backup of my database using the mysqldump command on CentOS 7, then recreated the database to make a copy of the current website for development purposes. But I noticed the size of the copy is larger than the original database, by almost 70 MB. Any idea why?

Databases and data not saving ~Linux QubesOS~

When I create a PostgreSQL database, create tables and columns, and even insert data into the columns, I can't restart my machine without losing the created databases and all the data.
I have tried changing a couple of things in the configuration file, but nothing helped.
I also have to reset the password for the postgres user every time I restart my machine. I mainly use MongoDB; I am just learning PostgreSQL so I can use it if I ever need to in the future. I am running a Linux machine with QubesOS, and I have a few problems like this using QubesOS. In every tutorial I watch, everybody uses Macs, which seem good and all, kind of a mix between Windows and Linux, the best of both worlds: easy package installs and terminal control. But I don't want to trade my Linux machine for a Mac; I would much rather just fix these problems I am having with PostgreSQL on my Linux machine.
You ran into an important security feature of QubesOS: all data modifications are discarded when a so-called "qube" is shut down, and it is reset to its original state.
But there is an exception for data kept in a few special directories.
If you configure your database packages to put their data into those directories, it will be preserved across reboots of your database qube:
Read this documentation for more information.
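As a rough sketch of that idea, assuming an AppVM where /home persists across reboots (the service name, initdb location and paths are assumptions and vary by template):

    # stop the server and create a data directory in a persistent location
    sudo systemctl stop postgresql
    sudo mkdir -p /home/user/pgdata
    sudo chown postgres:postgres /home/user/pgdata
    sudo -u postgres initdb -D /home/user/pgdata
    # point the server at the new directory, e.g. in postgresql.conf:
    #   data_directory = '/home/user/pgdata'
    sudo systemctl start postgresql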

Backing up an entire Linux server to an external hard drive or another cluster

We have a Linux server for the database. Most of our data is under /var/. I would like to back up the entire directory to an external hard drive or to another Linux system so that if something goes wrong I can replace the directory entirely. Since the directory has many files, I do not want to copy and paste every time; instead I would like to sync them.
Is there an easy way to do that? rsync can do that, but how do I avoid logging in to the server every time? By the way, I have limited knowledge of Linux systems.
Any comments and suggestions are appreciated.
Bikesh
Rsyncing database files is not a recommended way of backing them up. I believe you have MySQL running on the server. In that case, you can take a full database dump on the server using the steps described in the following link:
http://www.microhowto.info/howto/dump_a_complete_mysql_database_as_sql.html#idp134080
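For example (the user, database selection and output path are placeholders):

    mysqldump -u root -p --all-databases > /backup/all-databases.sql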
And then sync these files to your backup server. You can use the rsync command for this purpose:
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
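To avoid typing a password on every sync, you can set up SSH key authentication once and let rsync use it (host name, user and paths are placeholders):

    ssh-keygen -t rsa                     # run once, accept the defaults
    ssh-copy-id backupuser@backuphost     # install the public key on the backup server
    rsync -avz /backup/ backupuser@backuphost:/srv/db-backups/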
Make sure that you have MySQL installed on the backup server too. You can also copy the MySQL configuration file /etc/my.cnf to the database backup server. If you need your database to always be up to date, you can set up MySQL replication; you can follow the guide below to do so.
http://kbforlinux.blogspot.in/2011/09/setup-mysql-master-slave-replication.html

How to handle data such as MySQL and website sources with Vagrant?

As a programmer, I like being able to easily set up development environments, so I created a Vagrant box and provisioned it with Puppet. But I'm asking myself: what about the data in the box? What happens if I need to destroy the box and recreate it? All my data will be erased!
I had some problems with a crashed VM and I don't want to make the same mistake again; I want to have control of my data.
What do you do? Do you use shared folders to store your live data? Where do you keep your data, inside or outside the box?
In the current version of Vagrant (1.0.3), you have two main options:
Use shared folders. You can put your MySQL data directory into a shared folder so that the data lives on your host machine. The downside is that shared folders are quite slow compared to the native VM filesystem in VirtualBox, and you can run into odd permission issues as well.
Set up a task (rake, make, etc.) to copy your MySQL data to your shared folder on demand. Then, before you destroy your VM, run the task to export your data to the shared folder, and reimport the data when you bring the VM back up.
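A rough sketch of the second option, assuming the default /vagrant shared folder and a placeholder database name:

    # inside the VM, before `vagrant destroy`
    mysqldump -u root -p mydb > /vagrant/mydb.sql

    # inside the freshly provisioned VM, after `vagrant up`
    mysql -u root -p -e "CREATE DATABASE mydb;"
    mysql -u root -p mydb < /vagrant/mydb.sql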

FTP folders mounted locally used for SVN repository

I would like to create an SVN repository remotely using the FTP protocol.
Is it advisable to do the following steps:
mount the FTP directory locally with curlftpfs
create a repository as if it were local with svnadmin create
use it as in everyday life?
Do you know any issue with that approach?
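Concretely, the steps I have in mind look roughly like this (host, credentials and paths are placeholders):

    curlftpfs ftp://user:password@ftp.example.com/svn /mnt/ftp-svn
    svnadmin create /mnt/ftp-svn/myrepo
    svn checkout file:///mnt/ftp-svn/myrepo ~/work/myrepo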
RESULT AFTER MY ATTEMPT
I did try an attempt, but I get an error that looks like a timeout. The real problem is that this approach is too slow. The solution of copying the repository each time looks more feasible, or a simple script to back up the folder.
It is a dangerous approach; however, if you are working alone (as in "single user"), it would work. The biggest problems are:
You cannot provide an exclusive locking mechanism over the network
All users will have direct access to all of the repository's internal files; if somebody deletes a file in revs, your repository is damaged beyond repair
You should set up Apache with
SVNAutoversioning on
then you can mount your repository URL as a WebDAV folder. Each change to these files will result in a single commit without the need for a working copy.
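A minimal sketch of such an Apache configuration (requires mod_dav_svn; the location and repository path are placeholders):

    # LoadModule dav_svn_module modules/mod_dav_svn.so
    <Location /repo>
        DAV svn
        SVNPath /var/svn/myrepo
        SVNAutoversioning on
    </Location>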
