How do I fetch host and pass from a file in Linux

#!/bin/bash
hosts=(sarv savana simra punit)
pass=(sarva 1save xvyw23 asdwe87)
for i in "${!hosts[@]}"; do
    sshpass -p "${pass[i]}" ssh-copy-id -f -p 22 "root@${hosts[i]}"
done
Is it possible to fetch the password and hostname from a different file which consists of all hosts and their corresponding passwords in the following format:
host pass
sarv sarva
savana 1save
simra xvyw23
punit asdwe87
I apologize for not describing it properly.
The first word of each line in the file is the hostname and the second word is its password.
I would like to read them from that file instead of writing hosts=(sarv savana simra punit) and pass=(sarva 1save xvyw23 asdwe87) in the script.

Reading them from a file which combines both host and password could be done like this:
while IFS=' ' read -r host pass
do
    # your command using them goes here:
    echo "$host=$pass"
done < hostpass-list.txt
IFS=' ' sets the input field separator to a space. The first space-delimited word is put in host and the rest of the line is put in pass.
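Applied to the loop from the question, a minimal sketch (assuming the combined file is named hostpass-list.txt and holds one "host password" pair per line):
#!/bin/bash
# Sketch: read "host password" pairs from hostpass-list.txt and
# push the SSH key to each host (file name and format are assumptions).
while IFS=' ' read -r host pass; do
    sshpass -p "$pass" ssh-copy-id -f -p 22 "root@$host"
done < hostpass-list.txt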
You may want to read all hosts/passwords first and use them later. In that case you could use an associative array.
# declare an associative array
declare -A hp
# fill the array
while IFS=' ' read -r host pass
do
    hp["$host"]="$pass"
done < hostpass-list.txt
# use the array
for host in "${!hp[@]}"
do
    pass="${hp["$host"]}"
    # your command using them goes here:
    echo "$host=$pass"
done
For reading from separate files (if that becomes necessary) I suggest using readarray:
readarray -t hosts < hosts-list.txt
readarray -t pass < passwords-list.txt
The -t means:
Remove a trailing delim (default newline) from each line read.

Of course:
#!/bin/bash
hosts=( $(cat hosts-list.txt) )
pass=( $(cat passwords-list.txt) )
for i in "${!hosts[@]}"; do
    sshpass -p "${pass[i]}" ssh-copy-id -f -p 22 "root@${hosts[i]}"
done

Related

Problem with splitting files based on numeric suffix

I have a file called files.txt and I need to split it based on lines. The command is as follows -
split -l 1 files.txt file --numeric-suffixes=1 --suffix-length=4
The numeric suffixes here start from file0001 to file9000. But I want it to be from 1 to 9000.
I can't get it to work with --suffix-length=1, because split fails with "output file suffixes exhausted". Any suggestions using the same split command?
I don't think split will do what you want it to do, though I'm on macOS, so the *nix I'm using is Darwin not Linux; however, a simple shell script would do the trick:
#!/bin/bash
N=1
while read -r line
do
    echo "$line" > "file$N"
    N=$((N + 1))
done < "$1"
Assuming you save it as mysplit (don't forget chmod +x mysplit), you then run it:
./mysplit files.txt
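If you'd rather keep split itself, one option is to run it exactly as in the question and then strip the zero padding from the suffixes afterwards (a sketch; it assumes the file0001-style names produced by that command):
#!/bin/bash
# Sketch: split as before, then rename file0001 ... file0999 to file1 ... file999.
# Files numbered 1000 and above already have no leading zeros, so they are left alone.
split -l 1 files.txt file --numeric-suffixes=1 --suffix-length=4
for f in file0*; do
    suffix=${f#file}                    # e.g. 0001
    mv -- "$f" "file$((10#$suffix))"    # 10# forces base 10, dropping leading zeros
done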

For loop in command line runs bash script reading from text file line by line

I have a bash script which asks for two arguments with a space between them. Now I would like to automate filling in these arguments by reading them from a text file. The text file contains a list of the argument combinations.
So I am thinking of something like this on the command line:
for line in 'cat text.file' ; do script.sh ; done
Can this be done? What am I missing/doing wrong?
Thanks for the help.
A while loop is probably what you need. Put the space-separated strings in the file text.file:
cat text.file
bingo yankee
bravo delta
Then write the script in question like below.
#!/bin/bash
while read -r arg1 arg2
do
    /path/to/your/script.sh "$arg1" "$arg2"
done < text.file
Don't use for to read files line by line
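A quick way to check the parsing before wiring in the real script is to substitute echo for script.sh (a sketch):
#!/bin/bash
# Sketch: print the two arguments read from each line of text.file, to verify the parsing.
while read -r arg1 arg2; do
    echo "arg1=$arg1 arg2=$arg2"
done < text.file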
Try something like this:
#!/bin/bash
ARGS=
while IFS= read -r line; do
    ARGS="${ARGS} ${line}"
done < ./text.file
script.sh "$ARGS"
This adds each line to a variable, which is then passed to your script as a single argument string.
'cat text.file' is a string literal; $(cat text.file) would expand to the output of the command. However, cat is useless here because bash can read the file using a redirection. Also, with quotes the expansion is treated as a single argument, and without them it is split at spaces, tabs and newlines.
Bash syntax to read a file line by line (this will be slow for big files):
while IFS= read -r line; do ... "$line"; done < text.file
Setting IFS to empty for the read command preserves leading and trailing spaces.
The -r option keeps backslashes literal instead of treating them as escape characters.
Another way to read a whole file is content=$(<file); note the < inside the command substitution. Building on that, here is a creative way to read a file into an array, with each element a non-empty line:
read_to_array () {
    local oldsetf=${-//[^f]} oldifs=$IFS
    set -f
    IFS=$'\n' array_content=($(<"$1")) IFS=$oldifs
    [[ $oldsetf ]] || set +f
}
read_to_array "file"
for element in "${array_content[@]}"; do ...; done
oldsetf is used to store the current set -f / set +f setting
oldifs is used to store the current IFS
IFS=$'\n' splits on newlines (multiple newlines are treated as one)
set -f avoids glob expansion, for example when a line contains a lone *
note the () around $() to store the result of the splitting in an array
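If empty lines are acceptable as (empty) array elements, bash's built-in readarray does the same job with less ceremony (a sketch):
# Sketch: one array element per line of the file, including empty lines.
readarray -t array_content < "file"
for element in "${array_content[@]}"; do
    printf '%s\n' "$element"
done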
If I were to take the question literally (using a for loop and parsing lines from a file), I would use as many iterations as there are lines in the file (provided it isn't too large).
Assuming each line has two strings separated by a single space (to be used as positional parameters in your script):
file="$1"
f_count="$(wc -l < $file)"
for line in $(seq 1 $f_count)
do
script.sh $(head -n $line $file | tail -n1) && wait
done
You may have a much better time using sjsam's solution however.

bash escape exclamation character inside variable with backtick

I have this bash script:
databases=`mysql -h$DBHOST -u$DBUSER -p$DBPASSWORD -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`
The issue is that the password may contain any possible character. How can I escape $DBPASSWORD in this case, given that the password contains '!' and the command is inside backticks? I have no experience with bash scripts, but I've tried "$DBPASSWORD" and '$DBPASSWORD' and neither works. Thank you
LATER EDIT: link to script here, line 170 -> https://github.com/Ardakilic/backmeup/blob/master/backmeup.sh
First: The answer from @bishop is spot on: Don't pass passwords on the command line.
Second: Use double quotes for all shell expansions. All of them. Always.
databases=$(mysql -h"$DBHOST" -u"$DBUSER" -p"$DBPASSWORD" -e "SHOW DATABASES;" | tr -d "| " | grep -v Database)
Don't pass the MySQL password on the command line. One, it can be tricky with passwords containing shell meta-characters (as you've discovered). Two, importantly, someone using ps can sniff the password.
Instead, either put the password into the system my.cnf, your user configuration file (eg .mylogin.cnf) or create an on-demand file to hold the password:
function mysql() {
    local tmpfile=$(mktemp)
    cat > "$tmpfile" <<EOCNF
[client]
password=$DBPASSWORD
EOCNF
    # 'command' bypasses this shell function and runs the real mysql client
    command mysql --defaults-extra-file="$tmpfile" -u"$DBUSER" -h"$DBHOST" "$@"
    rm "$tmpfile"
}
Then you can run it as:
mysql -e "SHOW DATABASES" | tr -d "| " ....
mysql -e "SELECT * FROM table" | grep -v ...
See the MySQL docs on configuration files for further examples.
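If you use the on-demand file approach outside a function, a trap makes sure the credentials file is removed even if the query fails (a sketch; the variable names are the same as above):
#!/bin/bash
# Sketch: pass the MySQL password via a throwaway defaults file,
# removed by the trap even if the command fails.
tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT
cat > "$tmpfile" <<EOCNF
[client]
password=$DBPASSWORD
EOCNF
mysql --defaults-extra-file="$tmpfile" -u"$DBUSER" -h"$DBHOST" -e "SHOW DATABASES;"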
I sometimes have the same problem when automating activities:
I have a variable containing a string (usually a password) that is set in a config file or passed on the command-line, and that string includes the '!' character.
I need to pass that variable's value to another program, as a command-line argument.
If I pass the variable unquoted, or in double-quotes ("$password"), the shell tries to interpret the '!', which fails.
If I pass the variable in single quotes ('$password'), the variable isn't expanded.
One solution is to construct the full command in a variable and then use eval, for example:
#!/bin/bash
username=myuser
password='my_pass!'
cmd="/usr/bin/someprog -user '$username' -pass '$password'"
eval "$cmd"
Another solution is to write the command to a temporary file and then source the file:
#!/bin/bash
username=myuser
password='my_pass!'
cmd_tmp=$HOME/.tmp.$$
touch $cmd_tmp
chmod 600 $cmd_tmp
cat > $cmd_tmp <<END
/usr/bin/someprog -user '$username' -pass '$password'
END
source $cmd_tmp
rm -f $cmd_tmp
Using eval is simple, but writing a file allows for multiple complex commands.
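A third option that avoids eval and the temporary file entirely is to build the argument list in an array and quote its expansion (a sketch, reusing the hypothetical someprog call from above):
#!/bin/bash
# Sketch: a quoted array expansion passes each element as exactly one argument,
# so no re-parsing of a command string is needed.
username=myuser
password='my_pass!'
args=(-user "$username" -pass "$password")
/usr/bin/someprog "${args[@]}"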
P.S. Yes, I know that passing passwords on the command-line isn't secure - there is no need for more virtue-signalling comments on that topic.

Bash Issue: AWK

I came back to work from a break to find that my Bash script wasn't working like it used to. The tidbit of code below grabs and filters what's in a file. Here are the contents of said file:
# A colon, ':', is used as the field terminator. A new line terminates
# the entry. Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
# $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively. The third field indicates
# to the dbstart utility that the database should, "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
OEM:/software/oracle/agent/agent12c/core/12.1.0.3.0:N
*:/software/oracle/agent/agent11g:N
dev068:/software/oracle/ora-10.02.00.04.11:Y
dev299:/software/oracle/ora-10.02.00.04.11:Y
xtst036:/software/oracle/ora-10.02.00.04.11:Y
xtst161:/software/oracle/ora-10.02.00.04.11:Y
dev360:/software/oracle/ora-11.02.00.04.02:Y
dev361:/software/oracle/ora-11.02.00.04.02:Y
xtst215:/software/oracle/ora-11.02.00.04.02:Y
xtst216:/software/oracle/ora-11.02.00.04.02:Y
dev298:/software/oracle/ora-11.02.00.04.03:Y
xtst160:/software/oracle/ora-11.02.00.04.03:Y
What the code used to produce and throw into an array:
dev068
dev299
xtst036
xtst161
dev360
dev361
xtst215
xtst216
dev298
xtst160
It would look at the file (oratab), find the database names (e.g. xtst160), and put them into an array. I then used this array for other tasks later in the script. Here's the relevant Bash script code:
# Collect the databases using a mixture of AWK and regex, and throw it into an array.
printf "\n2) Collecting databases on %s:\n" $HOSTNAME
declare -a arr_dbs=(`awk -F: -v key='/software/oracle/ora' '$2 ~ key{print $ddma_input}' /etc/oratab`)
# Loop through and print the array of databases.
for i in ${arr_dbs[@]}
do
    printf "%s " $i
done
It doesn't seem that anyone has modified the code or that the oratab file format has changed, so I'm not 100% sure what's going on. Instead of grabbing just the database name, it's grabbing the entire line:
dev068:/software/oracle/ora-10.02.00.04.11:Y
I'm trying to understand Bash and regex more but I'm stumped. Definitely not my forte. A broken down explanation of the awk line would be greatly appreciated.
I found the error. We changed the number of arguments being passed in and the order in which they are received.
Printing $1 instead of $ddma_input resolves the issue as well.
declare -a arr_dbs=(`awk -F ":" -v key='/software/oracle/ora' '$2 ~ key{print $1}' /etc/oratab`)
# Loop through and print the array of databases.
for i in ${arr_dbs[@]}
do
    printf "%s " $i
done
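For reference, here is the working awk invocation broken down piece by piece (the pattern and file are the ones from the question):
# -F:            use ':' as the field separator
# -v key=...     pass the search pattern into awk as a variable
# $2 ~ key       keep lines whose second field (the ORACLE_HOME path) matches the pattern
# {print $1}     print the first field (the SID), one per line
awk -F: -v key='/software/oracle/ora' '$2 ~ key {print $1}' /etc/oratab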
You could easily implement this whole thing in native bash with no external tools at all:
arr_dbs=( )
while IFS= read -r line; do
    case $line in
        "#"*) continue ;;
        *:/software/oracle/ora*:*) arr_dbs+=( "${line%%:*}" ) ;;
    esac
done </etc/oratab
printf ' %s\n' "${arr_dbs[@]}"
This actually avoids some bugs you had in your original implementation. Let's say you had a line like the following:
*:/software/oracle/ora-default:Y
If you aren't careful with how you handle that *, it'll be replaced with a list of filenames in the current directory by the shell whenever expansion occurs.
What does "whenever expansion occurs" mean in this context? Well:
# this will expand a * into a list of filenames during the assignment to the array
arr=( $(echo "*") ) # vs the correct read -a arr < <(echo "*")
# this will expand a * into a list of filenames while generating items to iterate over
for i in ${arr[@]} # vs the correct for i in "${arr[@]}"
# this will expand a * into a list of filenames while building the argument list for echo
i="*"
echo $i # vs the correct printf '%s\n' "$i"
Note the use of printf over echo -- see the APPLICATION USAGE section of the POSIX specification of echo.

Bash and Variable Substitution for file with space in their name: application for gpsbabel

I am trying to write a script to run gpsbabel. I am stuck on handling files whose names contain (white) spaces.
My problem is in the bash syntax. Any help or insight from bash programmers will be much appreciated.
gpsbabel is software which permits merging of tracks recorded by GPS devices.
The syntax for my purpose and which is working is:
gpsbabel -i gpx -f "file 1.gpx" -f "file 2.gpx" -o gpx -F output.gpx -x track,merge
The input format of the GPS data is given by -i , the output format by -o.
The input data files are listed after -f, and the resulting file after -F
(ref. gpsbabel manual, see example 4.9)
I am trying to write a batch script to run this command with a number of input files that is not known in advance. This means the sequence -f "name_of_the_input_file" has to be repeated for each input file passed in the batch parameters.
Here is a script that works for files with no spaces in their names:
#!/bin/bash
# Append multiple gpx files easily
# batch name merge_gpx.sh
# Usage:
# merge_gpx.sh track_*.gpx
gpsbabel -i gpx $(echo $* | for GPX; do echo -n " -f $GPX "; done) \
-o gpx -F appended.gpx
So I tried to modify this script to also handle filenames containing spaces.
I got lost in the bash substitutions and wrote a more step-by-step version for debugging purposes, with no success.
Here is one of my trials.
I get an error from gpsbabel, "Extra arguments on command line", suggesting that I made a mistake in the variable usage.
#!/bin/bash
# Merging all tracks in a single one
old_IFS=$IFS # Backup internal separator
IFS=$'\n' # New IFS
let i=0
echo " Merging GPX files"
for file in $(ls -1 "$@")
do
    let i++
    echo "i=" $i "," "$file"
    tGPX[$i]=$file
done
IFS=$old_IFS #
#
echo "Number of files:" ${#tGPX[#]}
echo
# List of the datafile to treat (each name protected with a ')
LISTE=$(for (( ifile=1; ifile<=${#tGPX[#]} ; ifile++)) ;do echo -ne " -f '""${tGPX[$ifile]}""'"; done)
echo "LISTE: " $(echo -n $LISTE)
echo "++Merging .."
if (( $i>=1 )); then
    gpsbabel -t \
        -i gpx $(echo -n $LISTE) \
        -x track,merge,title="TEST COMPIL" \
        -o gpx -F track_compil.gpx
else
    echo "Wrong selection of input file"
fi
#end
You are making things way more complicated for yourself than they need to be.
Any reasonably posix/gnu-compatible utility which takes an option in the form of two command-line arguments (-f STRING, or equivalently -f FILENAME) should also accept a single command-line argument -fSTRING. If the utility uses either getopt or getopt_long, this is automatic. gpsbabel appears to not use standard posix or gnu libraries for argument parsing, but I believe it still gets this right.
Apparently, your script expects its arguments to be a list of filenames; presumably, if the filenames include whitespace, you will quote the names which include whitespace:
./myscript "file 1.gpx" "file 2.gpx"
In that case, you only need to change the list of arguments by prepending -f to each one, so that the argument list becomes, in effect:
"-ffile 1.gpx" "-ffile 2.gpx"
That's extremely straightforward. We'll use the bash-specific find-and-replace syntax, described in the bash manual:
${parameter/pattern/string}
Pattern substitution. The pattern is expanded to produce a pattern just as in pathname expansion. Parameter is expanded and the longest match of pattern against its value is replaced with string. If pattern begins with /, all matches of pattern are replaced with string. Normally only the first match is replaced. If pattern begins with #, it must match at the beginning of the expanded value of parameter. If pattern begins with %, it must match at the end of the expanded value of parameter. If string is null, matches of pattern are deleted and the / following pattern may be omitted. If parameter is @ or *, the substitution operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the substitution operation is applied to each member of the array in turn, and the expansion is the resultant list.
So, "${#/#/-f}" is the list of arguments (#), with the empty pattern at the beginning (#) replaced with -f:
#!/bin/bash
# Merging all tracks in a single one
# $# is the number of arguments to the script.
if (( $# > 0 )); then
    gpsbabel -t \
        -i gpx "${@/#/-f}" \
        -x track,merge,title="TEST COMPIL" \
        -o gpx -F track_compil.gpx
else
    # I changed the error message to make it more clear, sent it to stderr
    # and caused the script to fail.
    echo "No input files specified" >> /dev/stderr
    exit 1
fi
Use an array:
files=()
for f; do
    files+=(-f "$f")
done
gpsbabel -i gpx "${files[@]}" -o gpx -F appended.gpx
for f; do is short for for f in "$@"; do; most often you want to use "$@" to access the command-line arguments instead of $*. Quoting "${files[@]}" produces a list of words, one per element, that are treated as if they were quoted, so array elements containing whitespace are treated as a single word.
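Adapted to the merging variant from the question, the same array technique looks like this (a sketch; the title string is just the placeholder used in the question):
#!/bin/bash
# Sketch: build one "-f FILE" pair per script argument, then merge the tracks.
if (( $# == 0 )); then
    echo "No input files specified" >&2
    exit 1
fi
files=()
for f; do
    files+=(-f "$f")
done
gpsbabel -t -i gpx "${files[@]}" \
    -x track,merge,title="TEST COMPIL" \
    -o gpx -F track_compil.gpx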
