I'm trying to run a command that will send a POST request with data that is the result of another command. An example says it all:
wget -O- --post-data=$(ls -lah) http://192.168.30.53/api/v2/network/clients
This is obviously wrong, but I have no idea how to "escape" the output of the ls -lah command before passing it as a parameter.
Current output if needed:
wget: invalid option -- '>'
wget: --wait: Invalid time period '-r--r--'
You do not escape the value; you quote it. Check your scripts with shellcheck.
... --post-data="$(ls -lah)" ...
I have no idea how to "escape" the output of the ls -lah command before passing it as a parameter.
The wget man page describes --post-data=string and --post-file=file together; the relevant part for this case is that
--post-data sends string as data, whereas --post-file sends the contents of file. Other than that, they work in exactly the same way.
and that
argument to "--post-file" must be a regular file
Due to the above limitation, piping from stdin will not work, but that is not a problem unless you are barred from creating normal files; just create a temporary file for it, something like this:
ls -lah > lsdata
wget -O- --post-file=lsdata http://192.168.30.53/api/v2/network/clients
rm lsdata
should work (I do not have the ability to test it).
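If you would rather not hard-code the temporary file name, a minimal sketch of the same idea using mktemp (the variable name is illustrative):
tmpfile=$(mktemp)    # create a unique temporary file
ls -lah > "$tmpfile"
wget -O- --post-file="$tmpfile" http://192.168.30.53/api/v2/network/clients
rm -f "$tmpfile"     # clean up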
First of all, thank you everyone for your help. I have the following file that contains a series of URLs:
Salmonella_enterica_subsp_enterica_Typhi https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/003/717/755/GCF_003717755.1_ASM371775v1/GCF_003717755.1_ASM371775v1_translated_cds.faa.gz
Salmonella_enterica_subsp_enterica_Paratyphi_A https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/000/818/115/GCF_000818115.1_ASM81811v1/GCF_000818115.1_ASM81811v1_translated_cds.faa.gz
Salmonella_enterica_subsp_enterica_Paratyphi_B https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/000/018/705/GCF_000018705.1_ASM1870v1/GCF_000018705.1_ASM1870v1_translated_cds.faa.gz
Salmonella_enterica_subsp_enterica_Infantis https://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/011/182/555/GCA_011182555.2_ASM1118255v2/GCA_011182555.2_ASM1118255v2_translated_cds.faa.gz
Salmonella_enterica_subsp_enterica_Typhimurium_LT2 https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/000/006/945/GCF_000006945.2_ASM694v2/GCF_000006945.2_ASM694v2_translated_cds.faa.gz
Salmonella_enterica_subsp_diarizonae https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/003/324/755/GCF_003324755.1_ASM332475v1/GCF_003324755.1_ASM332475v1_translated_cds.faa.gz
Salmonella_enterica_subsp_arizonae https://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/900/635/675/GCA_900635675.1_31885_G02/GCA_900635675.1_31885_G02_translated_cds.faa.gz
Salmonella_bongori https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/006/113/225/GCF_006113225.1_ASM611322v2/GCF_006113225.1_ASM611322v2_translated_cds.faa.gz
And I have to download the URLs using wget. I have already managed to download them, but the typical output appears in the shell:
--2021-04-23 02:49:00-- https://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/900/635/675/GCA_900635675.1_31885_G02/GCA_900635675.1_31885_G02_translated_cds.faa.gz
Reusing existing connection to ftp.ncbi.nlm.nih.gov:443.
HTTP request sent, awaiting response... 200 OK
Length: 1097880 (1,0M) [application/x-gzip]
Saving to: ‘GCA_900635675.1_31885_G02_translated_cds.faa.gz’
GCA_900635675.1_31885_G0 100%[=================================>] 1,05M 2,29MB/s in 0,5s
2021-04-23 02:49:01 (2,29 MB/s) - ‘GCA_900635675.1_31885_G02_translated_cds.faa.gz’ saved [1097880/1097880]
I want to redirect that output to a log file. Also, as the files download, I want to decompress them, because they are gzip-compressed (.gz). My code is the following:
cat "$ncbi_urls_file" | while read -r line
do
    echo " Downloading fasta files from NCBI..."
    echo "$line" | awk '{print $2}' | wget -i-
done
wget
wget does have options that allow logging to a file. From man wget:
Logging and Input File Options
-o logfile
--output-file=logfile
Log all messages to logfile. The messages are normally reported to standard error.
-a logfile
--append-output=logfile
Append to logfile. This is the same as -o, only it appends to logfile instead of overwriting the old log file. If logfile does not exist, a new file is created.
-d
--debug
Turn on debug output, meaning various information important to the developers of Wget if it does not work properly. Your system administrator may have chosen to compile Wget without debug support, in which case -d will not work. Please note that compiling with debug support is always safe---Wget compiled with the debug support will not print any debug info unless requested with -d.
-q
--quiet
Turn off Wget's output.
-v
--verbose
Turn on verbose output, with all the available data. The default output is verbose.
-nv
--no-verbose
Turn off verbose without being completely quiet (use -q for that), which means that error messages and basic information still get printed.
You would need to experiment to get what you need. If you want all logs in a single file, use -a log.out, which will cause wget to append logging information to that file instead of writing to stderr.
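A minimal sketch putting -a together with the loop from the question, including the decompression that was asked about (the log file name ncbi.log is just an example):
while read -r name url
do
    echo " Downloading fasta files from NCBI..."
    wget -a ncbi.log "$url"    # append wget's messages to ncbi.log
    gunzip "${url##*/}"        # decompress the file wget just saved (the basename of the URL)
done < "$ncbi_urls_file"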
Standard output can be redirected to a file in bash using the >> operator (for appending to the file) or the > operator (for truncating / overwriting the file). e.g.
echo hello >> log.txt
will append "hello" to log.txt. If you still want to be able to see the output in your terminal and also write it to a log file, you can use tee:
echo hello | tee log.txt
However, wget sends most of its basic progress information to standard error rather than standard output, which is actually a very common practice. Displaying progress often involves special characters that overwrite lines (e.g. to update a progress bar), change terminal colors, and so on. Terminals can process these characters sensibly in real time, but it usually makes little sense to store them in a file. For this reason, incremental progress output is commonly kept on standard error, separate from the output that is worth storing in a log file, so the two are easy to redirect independently.
However, you can still redirect standard error to a log file:
wget example.com 2>> log.txt
Or using tee:
wget example.com 2>&1 | tee log.txt
(2>&1 redirects standard error through standard output, which is then piped to tee).
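If you want both streams in the same log file, redirect standard output first and then duplicate standard error onto it:
wget example.com >> log.txt 2>&1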
I need to perform an md5sum check after downloading a zip file. My shell script looks like this:
wget $1 -O "$5.zip"
md5pkg=md5sum $5.zip
#perform check and other operations with md5pkg
Now, the check is performed before download completion, resulting in an error since the .zip file hasn't been downloaded yet.
What's the best approach to solve this problem?
Thanks in advance.
If there is an unquoted ampersand in the URL, the shell parses it as the background operator: the download is put in the background and the rest of your script proceeds while wget is still running. Quote it:
wget "$1" -O "$5.zip"
md5pkg=$( md5sum "$5.zip" )
In this case, I would expect the portion after the ampersand to be an invalid shell command and cause an error, which you don't mention. There may be other problems, but you should quote your variables in any case.
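To make the parsing concrete, here is the failure mode with a hypothetical URL typed unquoted at a prompt:
wget http://example.com/pkg?id=1&rev=2 -O pkg.zip
# The shell splits this at the ampersand into two commands:
#   wget http://example.com/pkg?id=1 &    (the download is backgrounded)
#   rev=2 -O pkg.zip                      (leftover junk; normally an error)
wget 'http://example.com/pkg?id=1&rev=2' -O pkg.zip
# Quoted, the whole URL is a single argument and wget runs in the foreground.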
I am trying to run the following command:
postfix status > tmp
however the resulting file never has any content written, and instead the output is still sent to the terminal.
I have tried adding the following into the mix, and even piping to echo before redirecting the output, but nothing seems to have any effect:
postfix status 2>&1 > tmp
Other commands work no problem.
script -c 'postfix status' -q tmp
It looks like it writes to the terminal instead of to stdout. I don't understand piping to 'echo'; did you mean piping to 'cat'?
I think you can always use the 'script' command, which logs everything that you see on the terminal. You would run 'script', then your command, then exit.
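Roughly, that session would look like this (the log file name is arbitrary):
script postfix.log
postfix status
exit
Afterwards, postfix.log contains everything that was shown on the terminal.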
Thanks to another SO user, who has since deleted their answer so I can't thank them, I was put on the right track. I found the answer here:
http://irbs.net/internet/postfix/0211/2756.html
So for those who want to be able to catch the response of postfix, I used the following method.
Create a script which causes the output to go to where you wish. I did that like this:
#!/bin/sh
# Run postfix under expect so it gets a pseudo-terminal;
# output that postfix writes to the terminal then appears on stdout.
cat <<EOF | expect 2>&1
set timeout -1
spawn postfix status
expect eof
EOF
Then I ran the script (say script.sh) and could pipe/redirect from there, i.e. script.sh > file.txt.
I needed this for PHP so I could use exec and actually get a response.
I have a function in a PHP web app that needs to be called periodically by a cron job. Originally I just did a simple wget to the URL to call the function and everything worked fine, but ever since we added user auth I am having trouble getting it to work.
If I manually execute these commands I can log in, get the cookie, and then access the correct URL:
site=http://some.site/login/in/here
cookie=`wget --post-data 'username=testuser&password=testpassword' $site -q -S -O /dev/null 2>&1 | awk '/Set-Cookie/{print $2}' | awk 'NR==2{print}'`
wget -O /dev/null --header="Cookie: $cookie" http://some.site/call/this/function
But when executed as a script, either manually or through cron, it doesn't work.
I am new to shell scripting; any help would be appreciated.
This is being run on Ubuntu Server 10.04.
OK, simple things first:
I assume the file begins with #!/bin/bash or something
You have chmodded the file +x
You're using Unix line endings (0x0a), not DOS ones (0x0d 0x0a)
And you're not expecting to return any of the variables to the calling shell, I presume?
Failing this, try teeing the output of each command to a log file.
In theory, the only difference between manually executing these commands and using a script would be the timing.
Try inserting a sleep 5 or so before the last command. Maybe the HTTP server does some internal communication that takes a while. Hard to say, because you didn't post the error you get.
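A sketch of that suggestion using the question's own commands (the 5 seconds is just a guess):
#!/bin/bash
site=http://some.site/login/in/here
cookie=`wget --post-data 'username=testuser&password=testpassword' "$site" -q -S -O /dev/null 2>&1 | awk '/Set-Cookie/{print $2}' | awk 'NR==2{print}'`
sleep 5   # give the server time to finish any internal communication
wget -O /dev/null --header="Cookie: $cookie" http://some.site/call/this/function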
I quickly searched for this before posting, but could not find any similar posts. Let me know if they exist.
The commands being executed seem very simple. A directory listing is used as the input for a function.
The directory contains a bunch of files named "epi1_mcf_0###.nii.gz"
Command-line version (bash is running when this is executed):
fslmerge -t output_file `ls epi1_mcf_0*.nii.gz`
Shell script version:
#!/bin/bash
fslmerge -t output_file `ls epi1_mcf_0*.nii.gz`
The command-line version fails, but the shell script one works perfectly.
The error message is specific to the function, but it's included anyway.
** ERROR (nifti_image_read): failed to find header file for 'epi1_mcf_0000.nii.gz'
** ERROR: nifti_image_open(epi1_mcf_0000.nii.gz): bad header info
Error: failed to open file epi1_mcf_0000.nii.gz
Cannot open volume epi1_mcf_0000.nii.gz for reading!
I have been very frustrated with this problem (less so after I figured out that there was a way to get the command to work).
Any help would be appreciated.
(Or is the general consensus that the problem should be looked for in the "fslmerge" function?)
`ls epi1_mcf_0*.nii.gz` is better written as simply epi1_mcf_0*.nii.gz. As in:
fslmerge -t output_file epi1_mcf_0*.nii.gz
The `ls` doesn't add anything.
Note: Posted as an answer instead of a comment. The Markdown-lite comment parser choked on my `ls epi1_mcf_0*.nii.gz` markup.
(I mentioned this in a comment first, but I'll make an answer since it helped!)
Do you have any shell aliases defined? (Type alias) Those will affect commands typed at the command line, but not scripts.
Linux often has ls defined as ls --color. This may affect the output since the colour codes are sent as escape codes through the regular output stream. If you use ls --color=auto it will auto-detect whether its output is a terminal or not. From man ls:
By default, color is not used to distinguish types of files. That is equivalent to using --color=none. Using the --color option without the optional WHEN argument is equivalent to using --color=always. With --color=auto, color codes are output only if standard output is connected to a terminal (tty).
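A quick way to check whether an alias is the culprit (filenames taken from the question):
type ls                          # reveals an alias such as 'ls --color=always'
ls epi1_mcf_0*.nii.gz | cat -v   # makes any embedded escape codes visible (e.g. ^[[0m)
\ls epi1_mcf_0*.nii.gz           # a leading backslash bypasses the alias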