I have the cron job command below to back up my database on my web server, but the backup file it produces only contains "Could not open input file: Mysqldump". What can I do to get the actual SQL content, including my tables and data?
php mysqldump -u'my_username' -p'my_password' 'my_database_name' > /home/cpaneluser/data/backup_folder/backup_`date +\%d\%m\%y`.sql
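That message is the PHP CLI's error output: prefixing the command with php makes PHP try to open "mysqldump" as a script, and that error text is what ends up redirected into the .sql file. A sketch of a corrected crontab line, assuming mysqldump lives at /usr/bin/mysqldump (check the actual path with "which mysqldump" on your server):
# Call the mysqldump binary directly instead of through php; the full path is
# an assumption, adjust it to your server.
/usr/bin/mysqldump -u'my_username' -p'my_password' 'my_database_name' > /home/cpaneluser/data/backup_folder/backup_`date +\%d\%m\%y`.sql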
Current code
#!/bin/bash
SFTP_SERVER="sftp.url.com:/csv/test/10"
SFTP_USER="user"
SFTP_PWD="pwd"
## not sure if this line is needed given I specify the local directory
# in the next block of code.
cd /mnt/c/Users/user/Documents/new_directory
lftp sftp://$SFTP_USER:$SFTP_PWD@$SFTP_SERVER
lftp -e mget *.csv mirror sftp.user.com:/csv/test/10 /mnt/c/Users/user/Documents/new_directory
Objective
Download all CSV files and mirror my local directory with the remote server, so that when the script is run again it won't download the same files a second time.
Error received
open: *.csv: Name or service not known
Comments
From what I understood of the lftp man page, I should be able to fetch all files matching a wildcard by using mget instead of the standard get, provided I use -e to run the command. I've run mget manually and can download the files without issue, but it doesn't seem to handle the *.csv in the script.
Appreciate any feedback you can provide as to why my code won't download the files and what I might have misunderstood from the man pages.
It should be like:
lftp sftp://$SFTP_USER:$SFTP_PWD@$SFTP_SERVER -e "mget *.csv; bye"
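For the mirror part of the objective (not re-downloading files on later runs), lftp's mirror command may be a better fit than mget. A sketch, assuming the host and paths from the question and that your lftp build supports the --only-newer and --include-glob mirror options:
#!/bin/bash
SFTP_HOST="sftp.url.com"
SFTP_USER="user"
SFTP_PWD="pwd"
# mirror copies the remote directory into the local one; --only-newer skips
# files that already exist locally and are up to date, and --include-glob
# limits the transfer to CSV files.
lftp -u "$SFTP_USER,$SFTP_PWD" -e "mirror --only-newer --include-glob '*.csv' /csv/test/10 /mnt/c/Users/user/Documents/new_directory; bye" "sftp://$SFTP_HOST"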
I have created a .cql file to create a table. It's stored locally on my C drive. When I try to execute the SOURCE command in cqlsh, I'm unable to execute the file.
How should I execute the SOURCE command, and where should I place the .cql file so it can be executed? Why do I get this kind of error?
You can use the -f option to pass a file location to cqlsh. You just need to make sure the location is readable by the user running cqlsh.
In this example I created a test.cql file in the /tmp/ folder and was able to execute it with the following command:
cqlsh -f /tmp/test.cql
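Alternatively, from inside an interactive cqlsh session you can use the SOURCE command, which takes the path as a quoted string. A sketch assuming the same /tmp/test.cql file (on Windows you would point it at the path on your C drive instead):
-- Run inside cqlsh; the file must be readable by the user who started cqlsh.
SOURCE '/tmp/test.cql';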
In my work, I use 2 Linux servers.
The first one is used for web crawling and writes its output to a text file.
The other one is used for analyzing the text file from the web crawler.
The issue is that when a text file is created on the web-crawling server,
it needs to be transferred automatically to the analysis server.
Following some shell programming guides, I've set up the crawling server so it can run the scp command without being prompted for a password (using ssh-keygen and adding the key to the authorized_keys file in /root/.ssh).
But I cannot figure out how to programmatically transfer the file when it is created.
My job is data analysis, not programming, so my lack of programming background is my big concern.
If there is a way to trigger the scp to copy the file when it is created, please let me know.
You could use inotifywait to monitor the directory and run a command every time a file is created in it; in this case, you would fire off the scp command. If you have it set up not to prompt for a password, you should be all set.
inotifywait -mrq -e create --format '%w%f' /path/to/dir | while read -r FILE; do scp "$FILE" analysis_server:/path/on/analysis/server/; done
You can find out more about inotifywait at http://techarena51.com/index.php/inotify-tools-example/
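One caveat: create fires as soon as the file appears, so scp can grab a file the crawler is still writing. A sketch of a slightly more robust variant (host name and paths are placeholders) that reacts to close_write instead:
#!/bin/bash
# Watch the crawler's output directory and push each file to the analysis
# server once it has been fully written (close_write), rather than on create.
inotifywait -mrq -e close_write --format '%w%f' /path/to/dir |
while read -r FILE; do
    scp "$FILE" analysis_server:/path/on/analysis/server/
done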
I've already done a backup of my database, using mysqldump like this:
mysqldump -h localhost -u dbUsername -p dbDatabase > backup.sql
After that the file sits in a location outside public access on my server, ready for download.
How can I do something similar for files? I've tried to google it, but I get all kinds of results, just not that.
I need to tell the server (running Ubuntu) to back up all files inside folder X and put them into a zip file.
Thanks for your help.
You can use tar for creating backups. For a full system backup:
tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz /
and for a single folder:
tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz /your/folder
The first command creates a gzipped tar file of your whole system; you might need additional excludes such as --exclude=/proc --exclude=/sys --exclude=/dev/pts.
If you are outside of the single folder you want to back up, the --exclude=/backup.tar.gz isn't needed.
More details, for example on doing it over the network or splitting the archive, can be found in the GNU tar manual.
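Since the question specifically asks for a zip file: the zip utility (install it via apt if it isn't present) can archive a folder recursively. A sketch with placeholder paths, following the date-stamped naming used for the database dump:
# Recursively zip folder X into a date-stamped archive kept outside public access.
zip -r /home/user/backups/files_$(date +%d%m%y).zip /path/to/folderX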
I am trying to find a way to create and update a tar archive of files on a remote system where we don't have write permissions (the remote file system is read-only), over ssh. I've figured out that the way to create an archive is:
ssh user@remoteServer "tar cvpjf - /" > backup.tgz
However, I would like to know if there is some way of performing only incremental backups from this point on (of only files that have actually changed?). Any help with this is much appreciated.
You can try using the --listed-incremental option of tar:
http://www.gnu.org/software/tar/manual/html_node/Incremental-Dumps.html
The main problem is that you can't pipe the snar file through stdout as well, because stdout is already carrying backup.tgz, so the best option is to create the snar file in the /tmp directory (where you should have write permission) and then download it at the end of the backup session.
For example:
ssh user@remoteServer "tar --listed-incremental=/tmp/backup-1.snar -cvpjf - /" > backup-1.tgz
scp user@remoteServer:/tmp/backup-1.snar .
And in the following session you will use that .snar file to avoid copying the same files:
scp backup-1.snar user@remoteServer:/tmp/backup-1.snar
ssh user@remoteServer "tar --listed-incremental=/tmp/backup-1.snar -cvpjf - /" > backup-2.tgz
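If you end up running this regularly, the upload/dump/download steps can be wrapped in a small script. A sketch, assuming passwordless SSH, a writable /tmp on the remote side, and placeholder host and file names:
#!/bin/bash
# Incremental remote backup over ssh using a local copy of the snapshot file.
REMOTE="user@remoteServer"
SNAR_LOCAL="backup.snar"
SNAR_REMOTE="/tmp/backup.snar"
STAMP=$(date +%Y%m%d-%H%M%S)

# Upload the previous snapshot file, if any, so tar only archives changes.
if [ -f "$SNAR_LOCAL" ]; then
    scp "$SNAR_LOCAL" "$REMOTE:$SNAR_REMOTE"
fi

# Stream the (incremental) archive to a local, date-stamped file.
ssh "$REMOTE" "tar --listed-incremental=$SNAR_REMOTE -cvpjf - /" > "backup-$STAMP.tar.bz2"

# Bring the updated snapshot file back so the next run stays incremental.
scp "$REMOTE:$SNAR_REMOTE" "$SNAR_LOCAL"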