I'm a bit confused about the -c flag when using bunzip2.
The following command works well:
ls -l
-> -rw-r--r-- 1 root root 163 Oct 25 13:06 access_logs.tar.bz2
bunzip2 -c access_logs.tar.bz2 | tar -t
When I attempt to run the same command without the -c flag:
bunzip2 access_logs.tar.bz2 | tar -t
I get the message:
tar: This does not look like a tar archive
tar: Exiting with failure status due to previous errors
But when I list the directory again with ls -l, I see:
-rw-r--r-- 1 root root 10240 Oct 25 13:06 access_logs.tar
The documentation says:
The left side of the pipeline is bunzip2 -c access_logs.tbz, which decompresses the file, but the -c option sends the output to the screen. The output is redirected to tar -t.
According to the manual:
-c --stdout
Compress or decompress to standard output.
It seems that the decompression also works without the -c flag?
I'm not sure what is confusing you: you've already observed the answer to your question, and it matches what the documentation says.
Without -c, bunzip2 decompresses xx.bz2 and saves the result as the file xx; nothing is written to standard output. With -c, it does not create a file, but sends the result to stdout instead. If there is a pipe, |, the output is not printed to the terminal (which would be a mess) but becomes the input of the program on the right side of the pipe. That is why your second command fails: bunzip2 still writes access_logs.tar to disk, but tar reads an empty stream from the pipe and reports that it does not look like a tar archive.
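To make the two behaviors concrete, here is a rough sketch using your archive (the comments are just annotations):
bunzip2 access_logs.tar.bz2              # replaces access_logs.tar.bz2 with access_logs.tar on disk; nothing goes to stdout
tar -tf access_logs.tar                  # so the archive is then listed from the file that was created
bunzip2 -c access_logs.tar.bz2 | tar -t  # keeps the .bz2 and streams the decompressed data straight into tar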
You can check the file type with:
file access_logs.tar.bz2
Check manual: link
I'm trying to download a tar.gz file from a GitHub repo with curl, but it's evidently downloading plain ASCII, so I can't unzip or untar the file (as evidenced by the file command - see the third line of my stack trace below).
One other important detail is that this is running inside an AWS CodeBuild instance. However, I can download this with curl just fine on my mac and it is a proper tar.gz file.
Here's the command I'm running:
curl -Lk0s https://github.com/gohugoio/hugo/releases/download/v0.49/hugo_0.49_Linux-64bit.tar.gz -o /tmp/hugo.tar.gz
The full stack trace is:
[Container] 2018/12/03 05:39:44 Running command curl -Lk0s https://github.com/gohugoio/hugo/releases/download/v0.49/hugo_0.49_Linux-64bit.tar.gz -o /tmp/hugo.tar.gz
[Container] 2018/12/03 05:39:45 Running command file /tmp/hugo.tar.gz
/tmp/hugo.tar.gz: ASCII text, with no line terminators ***[NB. This is the output of the file command]***
[Container] 2018/12/03 05:39:45 Running command tar xvf /tmp/hugo.tar.gz -C /tmp
tar: This does not look like a tar archive
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
[Container] 2018/12/03 05:39:45 Command did not exit successfully tar xvf /tmp/hugo.tar.gz -C /tmp exit status 2
[Container] 2018/12/03 05:39:45 Phase complete: INSTALL Success: false
[Container] 2018/12/03 05:39:45 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: tar xvf /tmp/hugo.tar.gz -C /tmp. Reason: exit status 2
What am I doing wrong here?
-L works for me:
curl -L https://github.com/gohugoio/hugo/releases/download/v0.49/hugo_0.49_Linux-64bit.tar.gz -o /tmp/hugo.tar.gz
I tried it without any flags first and it downloaded the redirection page.
Adding -L to follow redirects produced a well-formed, complete .tar.gz file that decompressed cleanly into a folder with a few files in it:
$ ls -l
total 41704
-rw-r--r-- 1 xxxxxxxxxxx staff 11357 Sep 24 05:54 LICENSE
-rw-r--r-- 1 xxxxxxxxxxx staff 6414 Sep 24 05:54 README.md
-rwxr-xr-x 1 xxxxxxxxxxx staff 21328256 Sep 24 06:03 hugo
UPDATE: I didn't initially try your exact set of flags (-Lk0s), assuming it wouldn't work for me either, but I've just tried it and it works: I get the same .tar.gz that I got with -L and it decompresses correctly. Please cat the contents of the text file that gets downloaded and show at least some of it here. It's probably an error of some sort being sent back as plain text or HTML.
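If it still misbehaves inside CodeBuild, a quick way to see what actually came back (same URL as in the question) is:
curl -L https://github.com/gohugoio/hugo/releases/download/v0.49/hugo_0.49_Linux-64bit.tar.gz -o /tmp/hugo.tar.gz
file /tmp/hugo.tar.gz        # a good download reports "gzip compressed data"
head -c 300 /tmp/hugo.tar.gz # if file says it is text, this prints the error or redirect body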
I have a links.txt file with multiple links to download; all are protected by the same username and password.
My intention is to download multiple files at the same time: if the file contains 5 links, all 5 should download simultaneously.
I've tried this, but without success.
cat links.txt | xargs -n 1 -P 5 wget --user user007 --password pass147
and
cat links.txt | xargs -n 1 -P 5 wget --user=user007 --password=pass147
Both give me this error:
Reusing existing connection to www.site.com HTTP request sent,
awaiting response... 404 Not Found
This message appears for every link I try to download, except the last link in the file, which does start downloading.
What I currently use is the command below, but it downloads only one file at a time:
wget --user=admin --password=145788s -i links.txt
Use wget's -i and -b flags.
-b
--background
Go to background immediately after startup. If no output file is specified via the -o, output is redirected to wget-log.
-i file
--input-file=file
Read URLs from a local or external file. If - is specified as file, URLs are read from the standard input. (Use ./- to read from a file literally named -.)
Your command will look like:
wget --user user007 --password "pass147*" -b -i links.txt
Note: You should always quote strings that contain special characters (e.g. *).
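For example, start the downloads in the background and watch wget's default log file (wget-log, as described in the quoted documentation):
wget --user user007 --password "pass147*" -b -i links.txt   # returns immediately
tail -f wget-log                                            # watch the combined download log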
I have to transfer a file from server A to server B and then trigger a script on server B. Server B is a load-balanced server that redirects you to either server B1 or B2, and we don't know which.
I have achieved this as below.
sftp user@Server
put file
exit
then executing the below code to trigger the target script
ssh user@Server "script.sh"
But the problem, as I said, is that it is a load-balanced server: sometimes I put the file on one server and the script gets triggered on another. How do I overcome this problem?
I am thinking of a solution like the one below:
ssh user@server "Command for sftp; sh script.sh"
i.e. if the put and the script trigger happen in the same server connection, I will not hit the problem mentioned above. How can I do sftp inside an ssh connection? Otherwise, any other suggestions?
If you're just copying a file up and then executing a script, and it can't happen as two separate commands, you can do:
gzip -c srcfile | ssh user@remote 'gunzip -c >destfile; script.sh'
This gzips srcfile, sends it through ssh to the remote end, gunzips it on that side, then executes script.sh.
If you want more than one file, you can use tar rather than gzip:
tar czf - <srcfiles> | ssh user@remote 'tar xzf -; script.sh'
If you want to get results back from the remote end and they're files, you can just replicate the tar after the script:
tar czf - <srcfiles> | ssh user@remote 'tar xzf -; script.sh; tar czf - <remotedatafiles>' | tar xzf -
i.e. create a new pipe from ssh back to the local environment. This only works if script.sh doesn't generate any output. If it generates output, you have to redirect it, for example to /dev/null in order to prevent it messing up the tar:
tar czf - <srcfiles> | ssh user@remote 'tar xzf -; script.sh >/dev/null; tar czf - <remotedatafiles>' | tar xzf -
You can use the scp command first to upload your file and then call the remote command via ssh:
$ scp filename user@machine:/path/to/file && ssh user@machine 'bash -s' < script.sh
This example is about uploading a local file, but there is no problem running it from server A.
You could create a fifo (named pipe) on the server and start a program that tries to read from it. The program will block; it won't eat any CPU.
From sftp, try to write to the pipe; the write itself will fail, but the listening program will wake up and can check for uploaded files.
# ls -l /home/inp/alertme
prw------- 1 inp system 0 Mar 27 16:05 /home/inp/alertme
# date; cat /home/inp/alertme; date
Wed Jun 24 12:07:20 CEST 2015
<waiting for 'put'>
Wed Jun 24 12:08:19 CEST 2015
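A minimal sketch of the listening side (the pipe path matches the example above; the handler script name is just a placeholder):
mkfifo /home/inp/alertme              # create the named pipe once
while true; do
    cat /home/inp/alertme > /dev/null # blocks here until something writes to the pipe
    /home/inp/handle_upload.sh        # then look for and process the uploaded files
done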
Transfer testing with tar gzip compression and ssh default compression, using pv as a pipe meter (apt-get install pv).
Testing on a site folder containing about 80k small images, total folder size about 1.9 GB.
Using non-standard SSH port 2204.
1) tar gzip, no ssh compression
tar cpfz - site.com|pv -b -a -t|ssh -p 2204 -o cipher=none root@removeip "tar xfz - -C /destination/"
The pv meter started at about 4 MB/s and degraded to 1.2 MB/s by the end. pv shows about 1.3 GB of transferred bytes (the folder is 1.9 GB in total).
2) tar without gzip, ssh compression:
tar cpf - site.com|pv -b -a -t|ssh -p 2204 root@removeip "tar xf - -C /destination/"
The pv meter started at 8-9 MB/s and degraded to about 1.8 MB/s by the end.
I am having problems with getting a crontab to work. I want to automate a MySQL database backup.
The setup:
Debian GNU/Linux 7.3 (wheezy)
MySQL Server version: 5.5.33-0+wheezy1(Debian)
the directories user, backup and backup2 have 755 permissions
the user names for the MySQL database and the Debian account are the same
From the shell this command works
mysqldump -u user -p[user_password] [database_name] | gzip > dumpfilename.sql.gz
When I place this in a crontab using crontab -e
* * * * * /usr/bin/mysqldump -u user -pupasswd mydatabase | gzip > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz >/dev/null 2>&1
A file is created every minute in /home/user/backup directory, but has 0 bytes.
However, when I redirect this output to a second directory, backup2, I note that the proper mysqldump file, duly compressed, is created in it. I am unable to figure out what mistake I am making that results in a 0-byte file in the first directory and the expected output in the second directory.
* * * * * /usr/bin/mysqldump -u user -pupasswd my-database | gzip > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz >/home/user/backup2/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz 2>&1
I would greatly appreciate an explanation.
Thanks
First the mysqldump command is executed and its output is sent through the pipe: the pipe feeds mysqldump's standard output into gzip as standard input. After that, each output redirection operator (>) reopens gzip's standard output, so only the last filename actually receives the data; the earlier files are created (truncated) but left empty.
For example, this command will dump the database, run it through gzip, and the data will finally land only in three.gz:
mysqldump -u user -pupasswd my-database | gzip > one.gz > two.gz > three.gz
$> ls -l
-rw-r--r-- 1 uname grp 0 Mar 9 00:37 one.gz
-rw-r--r-- 1 uname grp 1246 Mar 9 00:37 three.gz
-rw-r--r-- 1 uname grp 0 Mar 9 00:37 two.gz
My original answer was an example of redirecting the database dump to many compressed files (without double compressing). (I skimmed the question and seriously missed the point; sorry about that.)
This is an example of recompressing files:
mysqldump -u user -pupasswd my-database | gzip -c > one.gz; gzip -c one.gz > two.gz; gzip -c two.gz > three.gz
$> ls -l
-rw-r--r-- 1 uname grp 1246 Mar 9 00:44 one.gz
-rw-r--r-- 1 uname grp 1306 Mar 9 00:44 three.gz
-rw-r--r-- 1 uname grp 1276 Mar 9 00:44 two.gz
This is a good resource explaining I/O redirection: http://www.codecoffee.com/tipsforlinux/articles2/042.html
If you need to add a date and time to your backup file name (CentOS 7), use the following:
/usr/bin/mysqldump -u USER -pPASSWD DBNAME | gzip > ~/backups/db.$(date +%F.%H%M%S).sql.gz
this will create the file: db.2017-11-17.231537.sql.gz
Besides m79lkm's solution, my 2 cents on this topic:
Don't pipe | the result directly into gzip; first dump it as a .sql file, and then gzip it.
So go for && gzip instead of | gzip if you have the free disk space.
Depending on your system, the dump itself can easily be twice as fast, but you will need a lot more free disk space. Your tables will be locked for less time, so there is less downtime/slow responding of your application. The end result is exactly the same.
So it is very important to check for free disk space first with df -h.
Then estimate the dump size of your database and see if it fits the free space:
# edit this code to only get the size of what you would like to dump
SELECT Data_BB / POWER(1024,2) Data_MB, Data_BB / POWER(1024,3) Data_GB
FROM (SELECT SUM(data_length) Data_BB FROM
information_schema.tables WHERE table_schema NOT IN ('information_schema','performance_schema','mysql')) A;
(credits dba.stackexchange.com/a/37168)
And then execute your dump like this:
mysqldump -u user -p [database_name] > dumpfilename.sql && gzip dumpfilename.sql
Another tip is to use the option --single-transaction. It prevents the tables from being locked but still results in a solid backup! See the docs here. And since this does not lock your tables for most queries, you can actually pipe the dump | directly into gzip (in case you don't have the free disk space):
mysqldump --single-transaction -u user -p [database_name] | gzip > dumpfilename.sql.gz
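Tied back to the crontab in the question, a nightly entry using this approach could look roughly like this (the schedule, paths and credentials are placeholders; note that % has to be escaped in crontab, and gzip -f overwrites an existing archive):
0 2 * * * /usr/bin/mysqldump --single-transaction -u user -pupasswd mydatabase > /home/user/backup/mydatabase-$(date +\%F).sql && gzip -f /home/user/backup/mydatabase-$(date +\%F).sql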
You can use the tee command to duplicate the output:
/usr/bin/mysqldump -u user -pupasswd my-database | \
tee >(gzip -9 -c > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz) | \
gzip > /home/user/backup2/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz 2>&1
see documentation here
Personally, I have created a file.sh (with 755 permissions) in the root directory, a file that does this job, run from the crontab.
Crontab code:
10 2 * * * root /root/backupautomatique.sh
File.sh code:
rm -f /home/mordb-148-251-89-66.sql.gz   # to erase the old one
mysqldump mor | gzip > /home/mordb-148-251-89-66.sql.gz   # what you have done
scp -P2222 /home/mordb-148-251-89-66.sql.gz root@otherip:/home/mordbexternes/mordb-148-251-89-66.sql.gz   # to send a copy somewhere else in case the sending server crashes, because it is too old, like me ;-)
I am trying to run a bash script as user apache, and it throws up the following:
[apache@denison public]$ ll
total 32
drwxr-xr-x 2 apache apache 4096 Jul 17 08:14 css
-rw-r--r-- 1 apache apache 4820 Jul 17 10:04 h3111142_58_2012-07-17_16-03-58.php
-rwxrwxrwx 1 apache apache 95 Jul 17 10:04 h31111.bash
drwxr-xr-x 2 apache apache 4096 Jul 17 08:14 images
-rw-r--r-- 1 apache apache 754 Jul 17 08:13 index.php
drwxr-xr-x 2 apache apache 4096 Jul 17 08:14 javascript
drwxr-xr-x 5 apache apache 4096 Jul 17 08:14 jquery-ui-1.8.21.custom
[apache@denison public]$ bash h31111.bash
: command not found :
The contents of the file are:
#!/bin/bash
/usr/bin/php /opt/eposdatatransfer/public/h3111142_58_2012-07-17_16-03-58.php
The php script runs fine; below are the results:
[apache@denison public]$ /bin/bash h31111.bash
: command not found:
[apache@denison public]$ chmod +x h31111.bash
[apache@denison public]$ ./h31111.bash
./h31111.bash: Command not found.
[apache@denison public]$ php h3111142_58_2012-07-17_16-03-58.php
Creation of the file:
$batchFile = $this->session->username . "_" . $index . "_" . $date . ".sh";
$handle = fopen($batchFile, 'w');
$data = "#!/bin/bash
/usr/bin/php /opt/eposdatatransfer/public/$file
";
/*
rm -rf /opt/eposdatatransfer/public/$file
rm -rf /opt/eposdatatransfer/public/$batchFile*";*/
fwrite($handle, $data);
fclose($handle);
$batchFile is the bash script and $file is the php file. These get created automatically based on user input in the webapp. My webapp runs on Linux.
I'm guessing you uploaded the script from a Windows machine and didn't strip the carriage returns from the ends of the lines. This causes the #! mechanism (the first line of most scripts) to fail, because it searches for #!/some/interpreter^M, which rarely exists.
You can probably strip the carriage returns, if you have them, using fromdos or:
tr -d '\015' < /path/to/script > /tmp/script; chmod 755 /tmp/script; mv /tmp/script /path/to/script
What happens if you try to run your script with
$ /bin/bash h31111.bash
Try this (assuming your script file is named "h31111.bash"):
$ chmod +x h31111.bash
then to run it
$ ./h31111.bash
Also are you sure you have the right path for your php command? What does which php report?
--
As @jordanm correctly suggests, based on the output of the file command I suggested you run, you need to run the dos2unix command on your file. If you don't have that installed, tr -d '\r' will also work, i.e.:
$ tr -d '\r' < h31111.bash > tmp.bash
$ mv tmp.bash h31111.bash
and you should be all set.
Under some versions of Ubuntu these utilities (e.g., dos2unix) don't come installed by default; for information on this, see this page.
It looks to me like the problem is your $PATH. Some users on the system will have . (the current directory) in their $PATH and others will not. If typing ./h31111.bash works, then that's your problem. You can either specify the file with a relative or absolute path, or you can add . to that user's $PATH (but never do that for root).
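A quick way to check (the path shown is only an example):
echo "$PATH"    # e.g. /usr/local/bin:/usr/bin:/bin -- no "." here, so a bare h31111.bash is not found
./h31111.bash   # an explicit relative path works regardless of $PATH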
Since you're not sure where it's failing, let's try to find out.
First, can you execute the program?
./h31111.bash
That should be equivalent to invoking it with:
/bin/bash h31111.bash
If the above gives you the same error message, then it's likely a problem with the script contents. To figure out where something has gone awry in a bash script, you can use set -x and set -v.
set -x will show you expansions
set -v will show you the lines before they're read
So, you'd change your script contents to something like the following:
#!/bin/bash
set -x
set -v
/usr/bin/php /opt/eposdatatransfer/public/h3111142_58_2012-07-17_16-03-58.php
Another possibility, which you probably only learn by experience, is that the file is in MSDOS mode (i.e., has CR LF's instead of just LFs). This often happens if you're still using FTP (as opposed to SFTP) and in ASCII mode. You may be able to set your transfer mode to binary and have everything work successfully. If not, you have a few options. Any of the following lines should work:
dos2unix /path/to/your/file
sed -i 's/\r//g' /path/to/your/file
tr -d '\r' < /path/to/your/file > /path/to/temp/file && mv /path/to/temp/file /path/to/your/file
In Vim, you could do :set ff=unix and save the file to change the format as well.
Let's take a look at what you have in PHP:
$handle = fopen($batchFile, 'w');
$data = "#!/bin/bash
/usr/bin/php /opt/eposdatatransfer/public/$file
";
Since you have a multi-line string, the CR LF characters that are embedded will depend on whether your PHP file is in Windows or Unix format. You could switch that file to Windows format and you'd be fine. But that seems easy to break in the future, so I'd go with something a little less volatile:
$handle = fopen($batchFile, 'w');
fwrite($handle, "#!/bin/bash\n");
fwrite($handle, "/usr/bin/php /opt/eposdatatransfer/public/$file\n");
fclose($handle);