Database copy larger than original - linux

I just created a backup of my database using the mysqldump command on CentOS 7, then recreated the database to make a copy of the current website for development purposes. But I noticed the size of the copy is larger than the original database, by almost 70 MB. Any idea why?
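One way to see whether the data itself differs, rather than just the files on disk, is to compare the sizes MySQL reports in information_schema (a rough sketch; original_db and copy_db are placeholder schema names):
mysql -e "SELECT table_schema, ROUND(SUM(data_length + index_length)/1024/1024) AS size_mb FROM information_schema.tables WHERE table_schema IN ('original_db', 'copy_db') GROUP BY table_schema;"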

Related

Recover deleted folder from Google VPS

We have a VPS running on Google Cloud which had a very important folder in a user directory. An employee of ours deleted that folder and we can't seem to figure out how to recover it. I came across extundelete, but it seems the partition needs to be unmounted for it to work, and I don't understand how I would do that on Google Cloud. This project took more than a year, and that was the latest copy after a fire took out the last copy on our local servers.
Could anyone please help or guide me in the right direction?
Getting any files back from your VM's disk may be tricky (at best) or impossible (most probably) if the files got overwritten.
The easiest way would be to get them back from a copy or snapshot of your VM's disk. If you have a snapshot of your disk (taken either manually or automatically) from before the folder in question got deleted, then you will get your files back.
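If you do have a snapshot, a rough sketch with the gcloud CLI would be to restore it to a new disk and attach that disk to the VM (the snapshot, disk, instance, and zone names below are placeholders):
gcloud compute snapshots list
gcloud compute disks create recovered-disk --source-snapshot=MY_SNAPSHOT --zone=us-central1-a
gcloud compute instances attach-disk MY_VM --disk=recovered-disk --zone=us-central1-a
# then mount the new disk inside the VM and copy the folder back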
If you don't have any backups then you may try to recover the files - I've found many guides and tutorials, let me just link the ones I believe would help you the most:
Unix/Linux undelete/recover deleted files
Recovering accidentally deleted files
Get list of files deleted by rm -rf
------------- UPDATE -----------
Your last chance in this battle is to make two clones of the disk, then detach the original disk from the VM and attach one of the clones to keep your VM running. Use the second clone for any experiments, and keep the original untouched in case you mess up the second clone.
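With the gcloud CLI that could look roughly like this (disk, instance, and zone names are placeholders; the VM usually has to be stopped before its boot disk can be swapped):
gcloud compute instances stop MY_VM --zone=us-central1-a
gcloud compute disks create clone-1 --source-disk=ORIGINAL_DISK --zone=us-central1-a
gcloud compute disks create clone-2 --source-disk=ORIGINAL_DISK --zone=us-central1-a
gcloud compute instances detach-disk MY_VM --disk=ORIGINAL_DISK --zone=us-central1-a
gcloud compute instances attach-disk MY_VM --disk=clone-1 --boot --zone=us-central1-a
gcloud compute instances start MY_VM --zone=us-central1-a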
Now create a new Windows VM and attach your second clone as an additional disk. At this point you're ready to try various data recovery software:
UFS Explorer
Virtual Machine Data Recovery
There are plenty of others to try from too.
Another approach would be to create an image from the original disk and export it as a VMDK image (saving it to a storage bucket). Then download it to your local computer and use, for example, VMware VMDK Recovery or other specialized software for extracting data from virtual machine disk images.
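A rough sketch of that export path (image, disk, zone, and bucket names are placeholders):
gcloud compute images create recovery-image --source-disk=ORIGINAL_DISK --source-disk-zone=us-central1-a
gcloud compute images export --image=recovery-image --destination-uri=gs://MY_BUCKET/recovery.vmdk --export-format=vmdk
gsutil cp gs://MY_BUCKET/recovery.vmdk .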

How do I load a neo4j backup into my new instance of neo4j on Azure?

I have a graph.db file which contains all the data from my local Neo4j where I have built my database.
I have created a Neo4j HA Cluster on Azure.
How do I get the graph.db from my local machine to the Azure version of Neo4j?
You can transfer the files via SCP
As long as you have SSH access to your Azure instance, you can copy files to it using the command scp (or any one of the number of utilities for Windows for using SCP). Just plug in the same address / credentials as you would use for SSH, and then use the command / application to send the entire graph.db directory over; if you prefer, you can tar it beforehand so only one file is sent, but then make sure to untar it once it is uploaded.
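For example (username, host, and paths are placeholders):
tar -czf graph.db.tar.gz graph.db
scp graph.db.tar.gz azureuser@my-neo4j-vm:/tmp/
ssh azureuser@my-neo4j-vm "cd /tmp && tar -xzf graph.db.tar.gz"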
Next, make sure that the version of Neo4J that created the graph.db is the same version as the Neo4J that you are copying to. You can find the version number under the "Database" section of the first tab on the top of the control strip on the left of the web UI.
Only the version number matters for this, not the "Edition"; e.g. v3.3.3 Community is functionally equivalent to v3.3.3 Enterprise for your purposes.
Same version number
If the version of the local Neo4J and the destination Neo4J is the same, once graph.db is uploaded, it can directly replace any existing graph.db on the destination Neo4J. SSH into your box, make sure Neo4J is off, and then move the graph.db folder to the /data directory of Neo4J. Turn Neo4J back on. It should then have your locally-created database.
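A rough sketch, assuming a typical package install where the databases live under /var/lib/neo4j/data/databases (adjust the paths to your own layout):
sudo neo4j stop
sudo mv /tmp/graph.db /var/lib/neo4j/data/databases/graph.db
sudo chown -R neo4j:neo4j /var/lib/neo4j/data/databases/graph.db
sudo neo4j start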
Different version number
If the version number is not the same, that's okay as long as the local Neo4J is an older version. There will be a small amount of additional work. Once you have copied the graph.db to the destination server, SSH in to it and make sure Neo4J is not running. Next, to import the database, run:
neo4j-admin import --mode=database --database=graph.db --from=/path/to/graph.db
Then, in the config of the instance, be sure to set dbms.allow_format_migration=true and dbms.allow_upgrade=true to allow it to upgrade the database file.
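In neo4j.conf (its location depends on the install, e.g. /etc/neo4j/neo4j.conf on many packaged installs) that means:
dbms.allow_format_migration=true
dbms.allow_upgrade=true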
Turn Neo4J on. It may take a while to start, but that is only because it is upgrading your database. After the first startup, it should start much faster. Once it is started, it should then have your locally-created database.
Afterwards, be sure to edit the config file and set dbms.allow_format_migration=false and dbms.allow_upgrade=false (or else remove them entirely; they default to false) in order to disallow future unintentional upgrading.
More info is available at the official Neo4J Upgrade Guide.

backing up entire linux server on external hard drive or another cluster

We have a Linux server for the database. Most of our data is under /var/. I would like to back up the entire directory to an external hard drive or to another Linux system so that if something goes wrong I can replace the directory entirely. Since the directory has many files, I do not want to copy everything each time; instead I would like to sync them.
Is there an easy way to do that? rsync can do that, but how do I avoid logging in to the server every time? BTW I have limited knowledge of Linux systems.
Any comments and suggestions are appreciated.
Bikesh
Rsyncing database files is not a recommended way of backing them up. I believe you have MySQL running on the server. In that case, you can take a full database dump on the server, using the steps mentioned in the following link:
http://www.microhowto.info/howto/dump_a_complete_mysql_database_as_sql.html#idp134080
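In short, something like this on the database server (the user and output path are placeholders):
mysqldump -u root -p --all-databases --single-transaction > /backup/all-databases.sql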
Then sync these files to your backup server. You can use the rsync command for this purpose:
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
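A rough sketch; set up key-based SSH once so rsync does not prompt for a password on every run (user, host, and paths are placeholders):
ssh-keygen -t rsa                       # generate a key pair, accept the defaults
ssh-copy-id user@backup-server          # install the public key on the backup server
rsync -avz /backup/ user@backup-server:/backup/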
Make sure that you have MySQL installed on the backup server too. You can also copy the MySQL configuration file /etc/my.cnf to the database backup server. In case you need your database to always be up to date, you can set up MySQL replication. You can follow the guide below to do the same.
http://kbforlinux.blogspot.in/2011/09/setup-mysql-master-slave-replication.html

Relocating live mongodb data to another drive

I have a MongoDB instance running on a Windows Azure Linux VM.
The database is located on the system drive and I wish to move it to another hard drive since there is not enough space there.
I found this post:
Changing MongoDB data store directory
There seems to be a fine solution suggested there, yet another person mentioned something about copying the files.
My database is live and receiving data all the time; how can I do this while losing the least data possible?
Thanks,
First, if this is a production system you really need to be running this as a replica set. Running production databases on singleton mongodb instances is not a best practice. I would consider 2 full members plus 1 arbiter the minimum production set up.
If you want to go the replica set route, you can first convert this instance to a replica set:
http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
this should have minimal down time.
Then add 2 new instances with the correct storage set up. After they sync you will have a full 3 member set. You can then fail over to one of the new instances. Remove this bad instance from the replica set. Finally I'd add an arbiter instance to get you back up to 3 members of the replica set while keeping costs down.
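From the mongo shell, those steps look roughly like this (hostnames and ports are placeholders; the exact sequence is in the linked tutorial):
rs.initiate()                       # convert the standalone into a one-member replica set
rs.add("new-host-1:27017")          # add the first new member with the correct storage
rs.add("new-host-2:27017")          # add the second new member
rs.stepDown()                       # after the new members have synced, fail over
rs.remove("old-host:27017")         # run on the new primary to drop the old instance
rs.addArb("arbiter-host:27017")     # add an arbiter to keep an odd number of voters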
If on the other hand you do not want to run as a replica set, I'd shutdown mongod on this instance, copy the files over to the new directory structure on another appropriate volume, change the config to point to it (either changing dbpath or using a symlink) and then startup again. Downtime will be largely a factor of the size of your existing database, so the sooner you do this the better.
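A rough sketch of that standalone approach (paths, service name, and config location vary by install):
sudo service mongod stop
sudo cp -a /var/lib/mongodb /mnt/newvolume/mongodb
# edit /etc/mongod.conf and point dbpath (or storage.dbPath in YAML configs) at /mnt/newvolume/mongodb
sudo service mongod start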
However - I will stress this again - if you are looking for little to no down time with mongoDB you need to use a replica set.

Cloning PostgreSQL database

I want to have a clone of a PostgreSQL database. If I copy the entire data directory from one machine and replace another machine's data directory with it, will there be any problems? Both of them have the same OS, btw (CentOS).
Certainly if you stop the server and then copy it, that's fine. If you don't, the cloned server will have to do recovery, which isn't so good. Or just use pg_dumpall to produce a script to recreate the data on the new machine.
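The pg_dumpall route would look roughly like this (run the dump on the source machine, then replay it against a freshly initialised cluster on the target; the user name is a placeholder):
pg_dumpall -U postgres > /tmp/cluster.sql
psql -U postgres -f /tmp/cluster.sql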
You can invoke pg_start_backup() and then copy the datadir. All changes will then be written to a "log" and committed later on when you run pg_stop_backup().
http://www.postgresql.org/docs/8.1/static/backup-online.html
See section 23.3.2, Making a Base Backup.
I then think you can restore the files on another machine running an identical version of PostgreSQL under the same architecture.
section 23.3.3. Recovering with an On-line Backup will explain how to restore the backup you have made.
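As a rough sketch (the data directory path is a placeholder, and WAL archiving must be configured as described in those sections for the restore to work):
psql -U postgres -c "SELECT pg_start_backup('clone');"
rsync -a /var/lib/pgsql/data/ newhost:/var/lib/pgsql/data/
psql -U postgres -c "SELECT pg_stop_backup();"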
