Script for incremental backup of MySQL (Workbench) on Linux

I have a question about how to do incremental backups of MySQL (Workbench) on Linux.
Can anyone tell me how to script this backup?
I want to back up every day and keep incremental difference files.
Can anyone give me a sample script for that?
Thanks,
Veasna.

The binary log (mysql-bin.log) is essentially an incremental backup. It allows you to revert to a previously stable database state.
See "Making Incremental Backups by Enabling the Binary Log":
http://dev.mysql.com/doc/mysql-backup-excerpt/5.0/en/backup-policy.html
http://dev.mysql.com/doc/refman/5.6/en/backup-methods.html
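A rough sketch of a daily routine built on top of the binary log, assuming log-bin is enabled in my.cnf (the paths, schedule, and credentials here are just placeholders):

    #!/bin/bash
    # Weekly full dump; --flush-logs rotates the binary log so everything
    # written afterwards is the incremental part on top of this dump.
    # (Add -u/-p credentials as needed.)
    mysqldump --all-databases --single-transaction --flush-logs --master-data=2 \
        > /backup/full-$(date +%F).sql

    # Daily incremental step: rotate the current binary log, then copy the
    # closed binlog files (mysql-bin.000001, ...) to the backup location.
    mysqladmin flush-logs
    rsync -a /var/lib/mysql/mysql-bin.[0-9]* /backup/binlogs/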

May I know where you are restarting the service from: through the command prompt or from your control panel?
Share your error message here; you will get further details if anyone knows.

Related

ArangoDB Corruption: Bad table magic number

We're using ArangoDB 3.3.5 with RocksDB as a single instance.
We shut down the machine where arangod is running, and after the reboot the service didn't come up again, showing the following warning:
Corruption: Bad table magic number
Is there a way to repair the database, or any other way to get rid of the problem?
This is an issue with the RocksDB files. Please try to start arangod with --log.level TRACE, store the log file, open a GitHub issue, and attach the corresponding log file.
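For example, if you start arangod by hand, capturing a trace log could look roughly like this (the database directory and log path are placeholders):

    # Start arangod with trace-level logging and keep a copy of the output.
    arangod --log.level TRACE /var/lib/arangodb3 2>&1 | tee /tmp/arangod-trace.log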
Cheers.

Backing up an entire Linux server to an external hard drive or another cluster

We have a Linux server for the database. Most of our data is on /var/. I would like to back up the entire directory to an external hard drive or to another Linux system so that if something goes wrong I can replace the directory entirely. Since the directory has many files, I do not want to copy and paste every time; instead I would like to sync them.
Is there an easy way to do that? rsync can do that, but how do I avoid logging in to the server every time? BTW, I have limited knowledge of Linux systems.
Any comments and suggestions are appreciated.
Bikesh
Rsyncing live database files is not a recommended way of backing them up. I believe you have MySQL running on the server. In that case, you can take a full database dump on the server, using the steps mentioned in the following link:
http://www.microhowto.info/howto/dump_a_complete_mysql_database_as_sql.html#idp134080
Then sync the dump files to your backup server. You can use the rsync command for this purpose:
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
Make sure that you have MySQL installed on the backup server too. You can also copy the MySQL configuration file /etc/my.cnf to the backup server. If you need the backup database to always be up to date, you can set up MySQL replication. You can follow the guide below to do that:
http://kbforlinux.blogspot.in/2011/09/setup-mysql-master-slave-replication.html
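Roughly, the dump-and-sync part could look like this (hostnames and paths are placeholders; the one-time ssh-copy-id step is what lets the later rsync runs go through without a password prompt):

    # One-time setup: create an SSH key and install it on the backup server,
    # so the nightly rsync does not ask for a password.
    ssh-keygen -t rsa
    ssh-copy-id backupuser@backup-server

    # Dump the database to a file, then sync the dumps to the backup box.
    # (Add -u/-p credentials to mysqldump as needed.)
    mysqldump --all-databases --single-transaction > /var/backups/db-$(date +%F).sql
    rsync -avz /var/backups/ backupuser@backup-server:/srv/backups/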

How can I tell (in a bash script) if a Clonezilla batch-mode backup succeeded?

This is my first ever post to Stack Overflow, so be gentle please. ;>
OK, I'm using slightly customized Clonezilla Live CDs to back up the drives on four PCs. Each CD is for a specific PC, saving an image of its disk(s) to a box-specific backup folder on a Samba server. That's all pretty much working. But once in a while, Something Goes Wrong and the backup isn't completed properly. Things like: the cat bit through a Cat5e cable; I forgot to check whether the Samba server had run out of room; etc. And it is not always readily apparent that a failure happened.
I will admit right now that I am pretty much a noob as far as Linux system administration goes, even though I managed somehow to set up a CentOS 6 box (I wish I'd picked Ubuntu...) with Samba, Git, SSH, and Bitnami GitLab back in February.
I've spent days and days and days trying to figure out whether Clonezilla leaves a simple clue in a backup as to whether it succeeded completely or not, and have come up dry. Looking in the folder for a particular backup job (on the Samba server), I see that the last file written is named "clonezilla-img". It seems to be a console dump that covers the backup itself, but it does not seem to include the verification pass.
Regardless of whether the batch backup task succeeded or failed, I can automatically run a post-process bash script that I place on my Clonezilla CDs. I have this set up to run just fine, though it's not doing a whole lot right now. What I would like this post-process script to do is determine whether the backup job succeeded or not, and then rename (mv) the backup job directory to include a word like "SUCCESS" or "FAILURE". I know how to do the renaming part; it's the test for success or failure that I'm at a loss about.
Thanks for any help!
I know this is old, but I've just started looking into doing something very similar.
For your case I think you could do what you are looking for with ocs_prerun and ocs_postrun scripts.
For my setup I'm using a pen/flash drive for some test systems and also PXE with an NFS mount. PXE and NFS are much easier to test and modify quickly.
I haven't tested this yet, but I was thinking that I might be able to search the logs in /var/log/{clonezilla.log,partclone.log} via an ocs_postrun script to validate success or failure. I haven't seen anything that indicates the result is set in the environment, so I'm thinking the logs might be the quick, easy method over mounting the image or running a CRC check. Clonezilla does have an option to validate the image, the results of which might be in the local logs.
Another option might be to create a custom ocs_live_run script to do something similar. There is an example at this URL http://clonezilla.org/fine-print-live-doc.php?path=./clonezilla-live/doc/07_Customized_script_with_PXE/00_customized_script_with_PXE.doc#00_customized_script_with_PXE.doc
Maybe the exit code of ocs-sr can be checked in the script? As I said, I haven't tried any of this; these are just some thoughts.
I updated the above to reflect the log location (/var/log). The logs are in the log folder of course. :p
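Untested, but an ocs_postrun sketch along those lines might look something like this (the image directory path and the failure patterns are assumptions to adapt):

    #!/bin/bash
    # Hypothetical ocs_postrun: grep the Clonezilla/partclone logs for failure
    # markers and rename the image directory to record the result.
    IMG_DIR=/home/partimag/my-backup-job   # placeholder path on the samba share

    if grep -qiE 'fail|error' /var/log/clonezilla.log /var/log/partclone.log 2>/dev/null; then
        mv "$IMG_DIR" "${IMG_DIR}_FAILURE"
    else
        mv "$IMG_DIR" "${IMG_DIR}_SUCCESS"
    fi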
Regards

P4V - Duplicate workspace pointing to existing data

I was wondering if anyone had any advice on how to do the following task in P4V (I am not too familiar with P4V commands, so apologies if this is some basic command that I am missing).
Currently I have a workspace set up and the data synced to my root,
e.g. C:\Data\
I access this workspace from two different Windows machines (the data is on both machines at C:\Data).
Now I need to move the location where the data is stored on ONE of the machines and not the other (machine A: C:\Data, machine B: D:\Data\).
Is this possible to do without having to sync all the data again from the server (there is a lot of it, and bandwidth limitations)?
My initial thought was to create another workspace pointing to a different root, but I do not know how to get this new workspace to pick up the data files at that location.
Any help would be greatly appreciated
Thanks in advance
I don't know of a way to do this through P4V, but it can be done with the command-line client. Here's the procedure.
After you have moved your files on machine B and created a new workspace (without performing an "update all"), you can pass the -k switch to the sync command to let the server know which files you already have.
From the web page to which I linked:
Keep existing workspace files; update the have list without updating
the client workspace. Use p4 sync -k only when you need to update the
have list to match the actual state of the client workspace.
And the command line help has this to say:
The -k flag updates server metadata without syncing files. It is
intended to enable you to ensure that the server correctly reflects
the state of files in the workspace while avoiding a large data
transfer. Caution: an erroneous update can cause the server to
incorrectly reflect the state of the workspace.
FYI: p4 flush is an alias for p4 sync -k
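On machine B that would look roughly like this, once the files have already been moved to D:\Data (the workspace name is a placeholder):

    p4 set P4CLIENT=my-workspace-B   # placeholder workspace name
    p4 client                        # edit the spec so Root: points at D:\Data
    p4 flush //...                   # same as "p4 sync -k": updates the have list
                                     # without transferring any file content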
You can also look at the AltRoots field in the workspace spec. You could have one root at C:\Data and the other at D:\Data. As raven mentioned, since the data lives on two separate disks you'll need to make sure it is kept in sync on both machines, although I assume you've already figured that part out since you've been running on two machines.
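In the client spec that could look roughly like this (names are placeholders); the client uses whichever of the listed roots matches the current working directory on the machine it runs from:

    # excerpt from "p4 client -o my-workspace"
    Client: my-workspace
    Root:   c:\Data
    AltRoots:
        d:\Data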
Any reason you can't just have one workspace per machine?

How to know which process deleted my file on a Linux server?

I have a workflow pipeline which generates data files on a Linux server periodically, and also a cleanup service which removes data files that are older than a week.
However, I sometimes find that a newly generated data file is missing, even though it is definitely no older than a week. I'm not sure whether it is a logic bug in the cleanup service or whether another program deleted it. Currently I don't have any idea how to investigate this issue. Is there any method to log all file deletion activity together with the process id and process name?
Thanks in advance.
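One way to get that kind of logging, as a sketch, is the Linux audit subsystem, assuming auditd is installed and taking /data/output as a stand-in for the pipeline's output directory:

    # Record unlink/rename syscalls under the output directory, tagged "file-delete".
    auditctl -a always,exit -F arch=b64 -S unlink,unlinkat,rename,renameat \
        -F dir=/data/output -k file-delete

    # Later, show the matching records (they include the pid, comm and exe
    # of the process that removed the file).
    ausearch -k file-delete -i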
