What does `!:-` do? - linux

I am very new to bash scripting and to the Ubuntu/Debian package system.
Today I am studying the contents of a preinst file, the script that is executed before a package is unpacked from its Debian archive (.deb) file.
My first doubt is about a line containing this:
!:-
It is probably a stupid question, but I can't find an answer using Google.

It inserts the previous command without its last argument (bash history expansion). For example, after running
/usr/sbin/ab2 -f TLS1 -S -n 1000 -c 100 -t 2 http://www.google.com/
then
!:- http://www.stackoverflow.com/
is the same as
/usr/sbin/ab2 -f TLS1 -S -n 1000 -c 100 -t 2 http://www.stackoverflow.com/
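A quick way to see the expansion in action is a toy interactive session (plain echo, nothing package-related; bash prints the expanded command before running it):
$ echo one two three
one two three
$ !:- four
echo one two four
one two four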

Related

How to replace backup file with timestamp in its name without producing duplicates in Linux Bash (shell script)

#!/usr/bin/env bash
# usage: wttr [location], e.g. wttr Berlin, wttr New\ York
# Standard location if no parameters were passed
location=''
language=''
time=$(date)
# Expand terminal display
if [ -z "$language" ]; then
language=${LANG%_*}
fi
curl \
-H -x "Accept-Language: ${language}" \
-x wttr.in/"${1:-${location}}" |
head -n 7 |
tee /home/of/weather.txt |
tee -a /home/of/weather.log |
tee /home/of/BACKUP/weather_"$time".txt
#cp weather.txt /home/of/BACKUP
#mv -f /home/of/BACKUP/weather.txt /home/of/BACKUP/weather_"$time".txt
I'm very new to Linux Bash and Shell scripting and can't figure out the following.
I have a problem with the shell script above.
It works fine so far (curling ASCII data from website and writing it to weather.txt and .log).
It is also set in crontab to run every 5 minutes.
Now I need to make a backup of weather.txt under /home/of/, in /home/of/BACKUP with the filename weather_<timestamp>.txt.
I tried to delete (rm weather*.txt) the old timestamped files in /home/of/BACKUP and then copy and rename the file every time the cron job runs.
I tried piping cp and mv and so on, but I either end up with many duplicates (because the timestamp makes every filename different) or with nothing at all when I try to delete the contents of the folder first.
All I need is ONE backup file of weather.txt as weather_<timestamp>.txt which gets updated every 5 minutes with the actual timestamp, but I can't figure it out.
If I understand your question at all, then simply
rm -f /home/of/BACKUP/weather_*.txt
cp /home/of/weather.txt /home/of/BACKUP/weather_"$time".txt
cp lets you rename the file you are copying to; it doesn't make sense to separately cp and then mv.
For convenience, you might want to cd /home/of so you don't have to spell out the full paths, or put them in a variable.
dir=/home/of
rm -f "$dir"/BACKUP/weather_*.txt
cp "$dir"/weather.txt "$dir"/BACKUP/weather_"$time".txt
If you are running this from the crontab of the user named of, your current working directory will be /home/of (though if you need to be able to run the script manually from anywhere, that cannot be guaranteed).
Obviously, make sure the wildcard doesn't match any files you actually want to keep.
As an aside, you can simplify the tee commands slightly. If this should only update the files and not print anything to the terminal, you could even go with
curl \
-H -x "Accept-Language: ${language}" \
-x wttr.in/"${1:-${location}}" |
head -n 7 |
tee /home/of/weather.txt \
>>/home/of/weather.log
I took out the tee to the backup file since you are deleting it immediately after anyway. You could alternatively empty the backup directory first, but then you will have no backups if the curl fails.
If you want to keep printing to the terminal, too, probably run the script with redirection to /dev/null in the cron job to avoid having your email inbox fill up with unread copies of the output.
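Putting the pieces together, here is a minimal sketch of the whole script under the suggestions above (same paths as in the question; the space-free timestamp format is my own assumption, since the bare output of date contains spaces):
#!/usr/bin/env bash
# usage: wttr [location], e.g. wttr Berlin
location=''
language=''
dir=/home/of
# assumed timestamp format without spaces, safer in filenames than plain `date`
timestamp=$(date +%Y-%m-%d_%H-%M-%S)

if [ -z "$language" ]; then
    language=${LANG%_*}
fi

curl \
    -H "Accept-Language: ${language}" \
    wttr.in/"${1:-${location}}" |
    head -n 7 |
    tee "$dir"/weather.txt \
    >>"$dir"/weather.log

# keep exactly ONE timestamped backup: drop the previous one, then copy the fresh file
rm -f "$dir"/BACKUP/weather_*.txt
cp "$dir"/weather.txt "$dir"/BACKUP/weather_"$timestamp".txt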

Executing same command for several files in same repository in linux

I'd like to execute the following command for several files in the same directory on Linux:
../../../../../openSMILE-2.1.0/SMILExtract -C ../../../../../openSMILE-2.1.0/config/IS13_ComParE.conf -I inputfilename.wav -D outputfilename.csv
there are several files (named 1.wav, 2.wav, 3.wav) in the directory and if I execute
../../../../../openSMILE-2.1.0/SMILExtract -C ../../../../../openSMILE-2.1.0/config/IS13_ComParE.conf -nologfile 1 -noconsoleoutput 1 -I 1.wav -D 1.csv
it outputs 1.csv.
How can I create 1.csv, 2.csv, 3.csv, .. by executing just one single command in linux? (or do I have to make .sh file?)
It's probably cleaner to put the following into a script, but you can type it directly at the bash command line as well:
#! /bin/bash
for file in *.wav ; do
prefix=${file%.wav} # strip the .wav suffix (shortest match from the end)
../../../../../openSMILE-2.1.0/SMILExtract \
-C ../../../../../openSMILE-2.1.0/config/IS13_ComParE.conf \
-I "$file" -D "$prefix".csv
done
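If you would rather paste it straight at the prompt instead of saving a script, the same loop fits on one line (using the same relative paths as above):
for file in *.wav; do ../../../../../openSMILE-2.1.0/SMILExtract -C ../../../../../openSMILE-2.1.0/config/IS13_ComParE.conf -I "$file" -D "${file%.wav}.csv"; done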

parallel gnu command in conjunction with piping

I am relatively new to informatics, and have just discovered the virtues of the parallel command. However, I am having trouble using this in conjunction with piping and output.
I am using this command:
parallel -j 2 echo ./hisat2 --dta -p 32 -x path/to/index -U {} | ./samtools view -b - > /path/to/storage/folder/{/.}.bam :::: fs1 > executable.sh
fs1 contains a list of all the files I want to run. executable.sh is the executable command list. I wish for each file listed in fs1 to be individually processed by a program (called hisat2) and the output sam file to be converted into bam format with samtools. However, it does not seem to like the piping - it complains with the following:
bash: /path/to/storage/folder/{/.}.bam: No such file or directory
parallel: Warning: Input is read from the terminal. Only experts do this on purpose. Press CTRL-D to exit.
How can I overcome this? Is the only way around this to first process all files to sam, and then parallel bam convert?
You need to quote the pipe and redirection:
parallel -j 2 "./hisat2 --dta -p 32 -x path/to/index -U {} | ./samtools view -b - > /path/to/storage/folder/{/.}.bam" :::: fs1
Use --dry-run to see what would be run:
parallel --dry-run -j 2 "./hisat2 --dta -p 32 -x path/to/index -U {} | ./samtools view -b - > /path/to/storage/folder/{/.}.bam" :::: fs1
(Are you sure samtools is in the current dir? Usually it is installed for a wider audience.)
May I suggest you spend an hour walking through man parallel_tutorial? Your command line will love you for it.
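To see why the quoting matters, here is a toy illustration with plain echo and wc (not the asker's tools):
# unquoted: the shell splits the line at the pipe, so wc is handed ':::' a b c as
# filenames, and parallel, left without arguments, reads from the terminal
# (hence the warning quoted in the question)
parallel echo {} | wc -l ::: a b c
# quoted: each job runs its own complete pipeline (echo a | wc -c, and so on)
parallel "echo {} | wc -c" ::: a b c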

creating bash script to automate task for analyzing multiple files

I don't have a lot of experience with scripting.
I have a directory that contains, among many other files, a set of *.phylip files I need to analyze with a program. I would like to automate this task. I think a loop bash shell script would be appropriate, although I could be wrong.
If I was to perform the analysis manually on one .phylip file, I would use the following command in terminal:
./raxmlHPC-SSE3 -m GTRCAT -y -s uce-5.phylip --print-identical-sequences -p 12345 -n uce-5_result
For the bash shell script, I think it would be close to:
#!/bin/sh
for i in $( ls ); do
./raxmlHPC-SSE3 -m GTRCAT -y -s uce-5.phylip --print-identical-sequences -p 12345 -n test_5 $i
done
The issue I'm aware of, but don't know how to fix, is the -s option, which specifies the input phylip file. Any suggestions on how to modify the script to do what I need done?
Try the below code:
#!/bin/bash
for i in *.phylip
do
./raxmlHPC-SSE3 -m GTRCAT -y -s "$i" --print-identical-sequences -p 12345 -n "${i%.phylip}_result"
done
The -s option is passed "$i", which holds the name of one .phylip file from the current directory.
${i%.phylip}_result replaces the .phylip extension with _result, which I guess is what you expect. (Ref: Parameter Substitution)
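For illustration, here is how the suffix removal behaves, using a hypothetical filename:
i=uce-5.phylip
echo "${i%.phylip}"          # prints: uce-5
echo "${i%.phylip}_result"   # prints: uce-5_result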

Parallel download using Curl command line utility

I want to download some pages from a website, and I did that successfully using curl, but I was wondering whether curl can download multiple pages at a time, just like most download managers do; that would speed things up a little. Is it possible to do this with the curl command line utility?
The current command I am using is
curl 'http://www...../?page=[1-10]' 2>&1 > 1.html
Here I am downloading pages from 1 to 10 and storing them in a file named 1.html.
Also, is it possible for curl to write the output of each URL to a separate file, say URL.html, where URL is the actual URL of the page being processed?
My answer is a bit late, but I believe all of the existing answers fall just a little short. The way I do things like this is with xargs, which is capable of running a specified number of commands in subprocesses.
The one-liner I would use is, simply:
$ seq 1 10 | xargs -n1 -P2 bash -c 'i=$0; url="http://example.com/?page${i}.html"; curl -O -s $url'
This warrants some explanation. The use of -n 1 instructs xargs to process a single input argument at a time. In this example, the numbers 1 ... 10 are each processed separately. And -P 2 tells xargs to keep 2 subprocesses running all the time, each one handling a single argument, until all of the input arguments have been processed.
You can think of this as MapReduce in the shell. Or perhaps just the Map phase. Regardless, it's an effective way to get a lot of work done while ensuring that you don't fork bomb your machine. It's possible to do something similar in a for loop in a shell, but you end up doing process management, which starts to seem pretty pointless once you realize how insanely great this use of xargs is.
Update: I suspect that my example with xargs could be improved (at least on Mac OS X and BSD with the -J flag). With GNU Parallel, the command is a bit less unwieldy as well:
parallel --jobs 2 curl -O -s http://example.com/?page{}.html ::: {1..10}
Well, curl is just a simple UNIX process. You can have as many of these curl processes running in parallel and sending their outputs to different files.
curl can use the filename part of the URL to generate the local file. Just use the -O option (man curl for details).
You could use something like the following
urls="http://example.com/?page1.html http://example.com?page2.html" # add more URLs here
for url in $urls; do
# run the curl job in the background so we can start another job
# and disable the progress bar (-s)
echo "fetching $url"
curl $url -O -s &
done
wait # wait for all background jobs to terminate
As of 7.66.0, the curl utility finally has built-in support for parallel downloads of multiple URLs within a single non-blocking process, which should be much faster and more resource-efficient compared to xargs and background spawning, in most cases:
curl -Z 'http://httpbin.org/anything/[1-9].{txt,html}' -o '#1.#2'
This will download 18 links in parallel and write them out to 18 different files, also in parallel. The official announcement of this feature from Daniel Stenberg is here: https://daniel.haxx.se/blog/2019/07/22/curl-goez-parallel/
For launching parallel commands, why not use the venerable make command line utility? It supports parallel execution and dependency tracking and whatnot.
How? In the directory where you are downloading the files, create a new file called Makefile with the following contents:
# which page numbers to fetch
numbers := $(shell seq 1 10)
# default target which depends on files 1.html .. 10.html
# (patsubst replaces % with %.html for each number)
all: $(patsubst %,%.html,$(numbers))
# the rule which tells how to generate a %.html dependency
# $@ is the target filename e.g. 1.html
%.html:
curl -C - 'http://www...../?page='$(patsubst %.html,%,$@) -o $@.tmp
mv $@.tmp $@
NOTE The last two lines should start with a TAB character (instead of 8 spaces) or make will not accept the file.
Now you just run:
make -k -j 5
The curl command I used will store the output in 1.html.tmp and only if the curl command succeeds then it will be renamed to 1.html (by the mv command on the next line). Thus if some download should fail, you can just re-run the same make command and it will resume/retry downloading the files that failed to download during the first time. Once all files have been successfully downloaded, make will report that there is nothing more to be done, so there is no harm in running it one extra time to be "safe".
(The -k switch tells make to keep downloading the rest of the files even if one single download should fail.)
Curl can also accelerate a download of a file by splitting it into parts:
$ man curl |grep -A2 '\--range'
-r/--range <range>
(HTTP/FTP/SFTP/FILE) Retrieve a byte range (i.e a partial docu-
ment) from a HTTP/1.1, FTP or SFTP server or a local FILE.
Here is a script that will automatically launch curl with the desired number of concurrent processes: https://github.com/axelabs/splitcurl
Starting from 7.68.0, curl can fetch several URLs in parallel. This example will fetch URLs from the urls.txt file with 3 parallel connections:
curl --parallel --parallel-immediate --parallel-max 3 --config urls.txt
urls.txt:
url = "example1.com"
output = "example1.html"
url = "example2.com"
output = "example2.html"
url = "example3.com"
output = "example3.html"
url = "example4.com"
output = "example4.html"
url = "example5.com"
output = "example5.html"
curl and wget cannot download a single file in parallel chunks, but there are alternatives:
aria2 (written in C++, available in Deb and Cygwin repos)
aria2c -x 5 <url>
axel (written in C, available in Deb repo)
axel -n 5 <url>
wget2 (written in C, available in Deb repo)
wget2 --max-threads=5 <url>
lftp (written in C++, available in Deb repo)
lftp -n 5 <url>
hget (written in Go)
hget -n 5 <url>
pget (written in Go)
pget -p 5 <url>
Running a limited number of processes is easy if your system has commands like pidof or pgrep which, given a process name, return the PIDs (the count of the PIDs tells you how many are running).
Something like this:
#!/bin/sh
max=4
running_curl() {
set -- $(pidof curl)
echo $#
}
while [ $# -gt 0 ]; do
while [ $(running_curl) -ge $max ] ; do
sleep 1
done
curl "$1" --create-dirs -o "${1##*://}" &
shift
done
Call it like this:
script.sh $(for i in `seq 1 10`; do printf "http://example/%s.html " "$i"; done)
The curl line of the script is untested.
I came up with a solution based on fmt and xargs. The idea is to specify multiple URLs inside braces, http://example.com/page{1,2,3}.html, and run them in parallel with xargs. The following would start downloading in 3 processes:
seq 1 50 | fmt -w40 | tr ' ' ',' \
| awk -v url="http://example.com/" '{print url "page{" $1 "}.html"}' \
| xargs -P3 -n1 curl -o
so 4 lines of downloadable URLs are generated and sent to xargs:
curl -o http://example.com/page{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16}.html
curl -o http://example.com/page{17,18,19,20,21,22,23,24,25,26,27,28,29}.html
curl -o http://example.com/page{30,31,32,33,34,35,36,37,38,39,40,41,42}.html
curl -o http://example.com/page{43,44,45,46,47,48,49,50}.html
Bash 3 or above lets you populate an array with multiple values as it expands sequence expressions:
$ urls=( "" http://example.com?page={1..4} )
$ unset urls[0]
Note the [0] value, which was provided as a placeholder so the indices line up with the page numbers, since bash arrays number from zero. This strategy obviously might not always work; anyway, you can just unset it, as in this example.
Now you have an array, and you can verify its contents with declare -p:
$ declare -p urls
declare -a urls=([1]="http://example.com?page=1" [2]="http://example.com?page=2" [3]="http://example.com?page=3" [4]="http://example.com?page=4")
Now that you have a list of URLs in an array, expand the array into a curl command line:
$ curl $(for i in ${!urls[@]}; do echo "-o $i.html ${urls[$i]}"; done)
The curl command can take multiple URLs and fetch all of them, recycling the existing connection (HTTP/1.1) to a common server, but it needs the -o option before each one in order to download and save each target. Note that characters within some URLs may need to be escaped to avoid interacting with your shell.
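For reference, after the command substitution the call above is roughly equivalent to the following (shown for the four hypothetical example.com pages):
curl -o 1.html http://example.com?page=1 \
     -o 2.html http://example.com?page=2 \
     -o 3.html http://example.com?page=3 \
     -o 4.html http://example.com?page=4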
I am not sure about curl, but you can do that using wget.
wget \
--recursive \
--no-clobber \
--page-requisites \
--html-extension \
--convert-links \
--restrict-file-names=windows \
--domains website.org \
--no-parent \
www.website.org/tutorials/html/
