bash unable to export the variable to script - linux

I am stuck with this piece of code; any help is appreciated. This is the code I am executing from Jenkins.
#!/bin/bash
aws ec2 describe-instances --query 'Reservations[*].Instances[*].[Tags[?Key==`Name`].Value|[0],PrivateIpAddress]' --output text | column | grep devtools > devtools
ip=`awk '{print $2}' devtools`
echo $ip
ssh ubuntu@$ip -n "aws s3 cp s3://bucket/$userlistlocation . --region eu-central-1"
cd builds/${BUILD_NUMBER}/
scp * ubuntu@$ip:/home/ubuntu
if [ $port_type == "normal" ]; then
  if [ $duplicate_value == "no" ]; then
    if [ $userlist == "uuid" ]; then
      ssh ubuntu@$ip -n "export thread_size='"'$thread_size'"'; filename=$(echo $userlistlocation | sed -E 's/.*\/(.*)$/\1/') ; echo $filename ; echo filename='"'$filename'"'; chmod +x uuidwithduplicate.sh; ./uuidwithduplicate.sh"
    fi
  fi
fi
userlistlocation is a user input; it can be in any format, e.g. /rahul/december/file.csv, or simply file.csv.
Through the sed command I am able to get the output and store it in the "filename" variable.
But when I try to echo $filename, it literally prints echo $filename when it should print file.csv.
This file.csv will be the source file for one more script, uuidwithduplicate.sh.
Both userlistlocation and thread_size are specified through Jenkins job parameters.
I am not facing issues while exporting thread_size; the only issue is with filename.
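The filename extraction itself works fine locally. A quick sketch, using a sample path since the real value comes from a Jenkins parameter:

```shell
#!/bin/bash
# Sample value for illustration; the real one is the Jenkins parameter.
userlistlocation="/rahul/december/file.csv"

# The sed from the question: strip everything up to the last slash.
filename=$(echo "$userlistlocation" | sed -E 's/.*\/(.*)$/\1/')

# basename does the same job, and also handles a bare "file.csv" input:
filename2=$(basename "$userlistlocation")
```

When the input contains no slash at all, the sed pattern simply fails to match and the line passes through unchanged, so both forms return file.csv either way.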

Breaking down the ssh command:
ssh ubuntu@$ip -n "export thread_size='"'$thread_size'"'; filename=$(echo $userlistlocation | sed -E 's/.*\/(.*)$/\1/') ; echo $filename ; echo filename='"'$filename'"'; chmod +x uuidwithduplicate.sh; ./uuidwithduplicate.sh"
into segments of single/double quoted items:
"export thread_size='"
'$thread_size'
"#'; filename=$(echo $userlistlocation | sed -E 's/.*\/(.*)$/\1/') ; echo $filename ; echo filename='#"
'$filename'
"'; chmod +x uuidwithduplicate.sh; ./uuidwithduplicate.sh"
Note: on the 3rd token, a '#' was added between the double quote and the single quote to make it more readable. It is not part of the command.
On the surface, a few issues:
The '$thread_size' should be "$thread_size" to enable expansion.
The echo $filename is inside double quotes, resulting in expansion on the local host, whereas the assignment filename=$(echo ...) is executed on the remote host.
There are two echos for filename; not sure why.
The proposed solution is to move the setting of filename to the local host (simplifying the command) and to put thread_size in double quotes. The complete command can then go into a single double-quoted string:
filename=$(echo $userlistlocation | sed -E 's/.*\/(.*)$/\1/')
ssh ubuntu@$ip -n "export thread_size='$thread_size'; echo '$filename' ; echo filename='$filename'; chmod +x uuidwithduplicate.sh; ./uuidwithduplicate.sh"
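The local-versus-remote expansion can be checked without a remote host by substituting bash -c for ssh. A sketch with sample values; variable names match the question:

```shell
#!/bin/bash
# Sample values for illustration; the real ones are Jenkins parameters.
thread_size=4
userlistlocation="/rahul/december/file.csv"

# Compute filename locally, as the proposed solution suggests.
filename=$(echo "$userlistlocation" | sed -E 's/.*\/(.*)$/\1/')

# The whole command string is double-quoted, so $thread_size and $filename
# expand here, on the local side, before the "remote" shell ever runs.
# bash -c stands in for ssh ubuntu@$ip -n here.
out=$(bash -c "export thread_size='$thread_size'; echo filename='$filename'")
echo "$out"
```

Because the expansion happens before the command string is handed over, the remote side receives the literal text `export thread_size='4'; echo filename='file.csv'`, which is exactly what the script on the other end needs.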

Related

I have to read a config file and, after reading it, run an scp command to fetch details from the servers listed in the config

I have a config file that has details like
#pem_file username ip destination
./test.pem ec2-user 00.00.00.11 /Desktop/new/
./test1.pem ec2-user 00.00.00.22 /Desktop/new/
Now I need to know how I can fix the below script to get all the details using scp.
while read "$(cat $conf | awk '{split($0,array,"\n")} END{print array[]}')"; do
scp -i array[1] array[2]@array[3]:/home/ubuntu/documents/xyz.xml array[4]
done
please help me.
Build your while read like this:
#!/bin/bash
while read -r file user ip destination
do
  echo "$file"
  echo "$user"
  echo "$ip"
  echo "$destination"
  echo ""
done < <(grep -Ev "^#" "$conffile")
Use these variables to build your scp command.
The grep is to remove commented out lines.
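Assembled with the question's sample config, the loop might look like the sketch below. scp is replaced by echo here as a dry run, so the sketch stays runnable without real hosts or key files; swap in the real scp once the fields look right:

```shell
#!/bin/bash
# Build a sample config file matching the question's format.
conffile=$(mktemp)
cat > "$conffile" <<'EOF'
#pem_file username ip destination
./test.pem ec2-user 00.00.00.11 /Desktop/new/
EOF

# read -r splits each non-comment line into the four fields.
cmds=$(while read -r file user ip destination
do
  # Dry run: echo the command instead of executing scp.
  echo scp -i "$file" "$user@$ip:/home/ubuntu/documents/xyz.xml" "$destination"
done < <(grep -Ev "^#" "$conffile"))
echo "$cmds"
rm -f "$conffile"
```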
If you prefer using an array, you can do this:
#!/bin/bash
while read -ra line
do
  echo "${line[0]}"
  echo "${line[1]}"
  echo "${line[2]}"
  echo "${line[3]}"
  echo ""
done < <(grep -Ev "^#" "$conffile")
See https://mywiki.wooledge.org/BashFAQ/001 for looping on files and commands output using while.

Using ssh inside a script to run another script that itself calls ssh

I'm trying to write a script that builds a list of nodes, then sshes into the first node of that list
and runs a checknodes.sh script, which itself is just a for loop that calls checknode.sh.
The first 2 lines seem to work ok (the list builds successfully), but then I either get just the echo line of checknodes.sh printed, or an error saying cat: gpcnodes.txt: No such file or directory.
MYSCRIPT.sh:
#gets the master node for the job
MASTERNODE=`qstat -t -u \* | grep $1 | awk '{print$8}' | cut -d'@' -f 2 | cut -d'.' -f 1 | sed -e 's/$/.com/' | head -n 1`
#builds list of nodes in job
ssh -qt $MASTERNODE "qstat -t -u \* | grep $1 | awk '{print$8}' | cut -d'@' -f 2 | cut -d'.' -f 1 | sed -e 's/$/.com/' > /users/issues/slow_job_starts/gpcnodes.txt"
ssh -qt $MASTERNODE cd /users/issues/slow_job_starts/
ssh -qt $MASTERNODE /users/issues/slow_job_starts/checknodes.sh
checknodes.sh
for i in `cat gpcnodes.txt `
do
echo "### $i ###"
ssh -qt $i /users/issues/slow_job_starts/checknode.sh
done
checknode.sh
str=`hostname`
cd /tmp
time perf record qhost >/dev/null 2>&1 | sed -e 's/^/${str}/'
perf report --pretty=raw | grep % | head -20 | grep -c kernel.kallsyms | sed -e "s/^/`hostname`:/"
When ssh -qt $MASTERNODE cd /users/issues/slow_job_starts/ is finished, the changed directory is lost.
With the backquotes replaced by $(..) (not an error here, but get used to it), the script would be something like
for i in $(cat /users/issues/slow_job_starts/gpcnodes.txt)
do
echo "### $i ###"
ssh -nqt $i /users/issues/slow_job_starts/checknode.sh
done
or better
while read -r i; do
echo "### $i ###"
ssh -nqt $i /users/issues/slow_job_starts/checknode.sh
done < /users/issues/slow_job_starts/gpcnodes.txt
Perhaps you would also like to change your last script (start with cd /users/issues/slow_job_starts)
You will find more problems, like sed -e 's/^/${str}/' (the ${str} inside single quotes won't be expanded to the hostname), but this should get you started.
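That single-quote problem is easy to demonstrate locally; str is a fixed stand-in for $(hostname) here so the output is predictable:

```shell
#!/bin/bash
str="nodename"   # stand-in for $(hostname)

# Inside single quotes, ${str} is passed to sed as literal text:
literal=$(echo "line" | sed -e 's/^/${str}: /')

# Inside double quotes, the shell expands ${str} before sed sees it:
expanded=$(echo "line" | sed -e "s/^/${str}: /")

echo "$literal"    # the literal text ${str}: line
echo "$expanded"   # the prefixed line nodename: line
```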
EDIT:
I added the option -n to the ssh call.
It redirects stdin from /dev/null (actually, it prevents ssh from reading stdin).
Without this option only one node is checked, because ssh consumes the rest of the loop's input.
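The lost cd can be reproduced locally with bash -c standing in for each ssh call; every invocation is a fresh shell, just like every separate ssh connection:

```shell
#!/bin/bash
workdir=$(mktemp -d)

bash -c "cd '$workdir'"        # the cd dies with this shell
after=$(bash -c "pwd")         # a new shell starts back in the original cwd

# Combining cd and the command into one invocation keeps them in the
# same shell, which is what the answer recommends for the ssh calls:
combined=$(bash -c "cd '$workdir' && pwd")
expected=$(cd "$workdir" && pwd)
rmdir "$workdir"
```

The same pattern applied to the original script would be a single call like `ssh -qt $MASTERNODE "cd /users/issues/slow_job_starts/ && ./checknodes.sh"`.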

Can't run bash file inside ZSH

I've placed a call to a bash script inside .zshrc and tried all different ways to run it every time I open a new terminal window or source .zshrc, but no luck.
FYI: it was working fine on .bashrc
here is .zshrc script:
#Check if ampps is running
bash ~/ampps_runner.sh & disown
Different approach:
#Check if ampps is running
sh ~/ampps_runner.sh & disown
Another approach:
#Check if ampps is running
% ~/ampps_runner.sh & disown
None of the above approaches worked (meaning the script is supposed to start an app named ampps, but under zsh it doesn't).
Note: it was working fine before switching from bash to zsh, so it is not a permission or syntax problem.
Update: content of ampps_runner.sh
#! /usr/bin/env
echo "########################"
echo "Checking for ampps server to be running:"
check=$(pgrep -f "/usr/local/ampps" )
#[ -z "$check" ] && echo "Empty: Yes" || echo "Empty: No"
if [ -z "$check" ]; then
echo "It's not running!"
cd /usr/local/ampps
echo password | sudo -S ./Ampps
else
echo "It's running ..."
fi
(1) I believe ~/ampps_runner.sh is a bash script, so its first line should be
#!/bin/bash
or
#!/usr/bin/bash
not
#! /usr/bin/env
(2) Then, the call in zsh script (~/.zshrc) should be:
~/ampps_runner.sh
(3) Note: ~/ampps_runner.sh should be executable. Make it executable:
$ chmod +x ~/ampps_runner.sh
The easiest way to run bash temporarily from a zsh terminal is to
exec bash
or just
bash
Then you can run commands you previously could only run in bash. An example
help exec
To exit
exit
Now you are back in your original shell
If you want to know your default shell
echo $SHELL
or
set | grep SHELL=
If you want to reliably know your current shell
ps -p $$
Or if you want just the shell name you might use
ps -p $$ | awk "NR==2" | awk '{ print $4 }' | tr -d '-'
And you might just put that last one in a function for later, just know that it is only available if it was sourced in a current shell.
whichShell(){
  local defaultShell=$(basename "$SHELL")
  echo "Default: $defaultShell"
  local currentShell=$(ps -p $$ | awk 'NR==2 { print $4 }' | tr -d '-')
  echo "Current: $currentShell"
}
Call the method to see your results
whichShell
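A slightly more direct way to get the current shell's name is ps with the -o comm= format option, which avoids the awk/tr chain entirely. A sketch; output details can vary by platform, and login shells may prefix the name with a dash:

```shell
#!/bin/bash
# -o comm= prints just the command name, with no header line.
current=$(ps -p $$ -o comm=)
current=${current#-}   # strip the leading "-" that login shells add
echo "$current"
```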

Remove all but the latest X files from sftp via bash-script

I have a working bash script to create backups and upload them as a tar archive to a remote sftp server.
After the upload, the script should remove all but the latest 20 backup files. I can't use pipes, grep, or the like inside the sftp session. Also, I didn't manage to handle the file-listing result in my bash script.
export SSHPASS=$(cat /etc/backup/pw)
SFTPCONNECTION=$(cat /etc/backup/sftp-connection)
sshpass -e sftp $SFTPCONNECTION - << SOMEDELIMITER
ls -lt backup-*.tar
quit
SOMEDELIMITER
There is this nice one-liner, but I did not figure out how to use it in my case (sftp).
This script deletes all tar files in the given directory except the latest 20. The -t flag sorts by date and time. The <<< redirection expands $RESULT and feeds it into the stdin of the while loop. I'm not entirely pleased with it, as it has to create multiple connections, but with sftp I don't believe there is another way.
RESULT=`echo "ls -t path/to/old_backups/" | sftp -i ~/.ssh/your_ssh_key user@server.com | grep tar`
i=0
max=20
while read -r line; do
(( i++ ))
if (( i > max )); then
echo "DELETE $i...$line"
echo "rm $line" | sftp -i ~/.ssh/your_ssh_key user@server.com
fi
done <<< "$RESULT"
Thanks to codelitt I went with this solution:
export SSHPASS=$(cat /etc/backup/pw)
SFTPCONNECTION="username@host"
RESULT=`echo "ls -tl backup*.tar" | sshpass -e sftp $SFTPCONNECTION | grep -oP "backup.*\.tar" `
i=0
max=24
while read -r line; do
# echo "$line "
(( i++ ))
if (( i > max )); then
echo "DELETE $i...$line"
echo "rm $line" | sshpass -e sftp $SFTPCONNECTION
fi
done <<< "$RESULT"
It's a slight modification of his version:
it counts/removes only files named backup*.tar
it uses ls -l (for line-based listings)
I had to use sshpass instead of certificate-based authentication; the sftp password is inside /etc/backup/pw.
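The keep-newest-N logic can be tested without a live sftp session by substituting a canned listing for RESULT. Sample file names below; in the real script, RESULT comes from the sftp ls and the commented line does the deletion:

```shell
#!/bin/bash
# Canned listing, newest first (as ls -t would return it).
RESULT='backup-5.tar
backup-4.tar
backup-3.tar
backup-2.tar'
max=2   # keep the newest 2; the real script uses 24

deleted=""
i=0
while read -r line; do
  i=$((i+1))
  if (( i > max )); then
    # Real script: echo "rm $line" | sshpass -e sftp $SFTPCONNECTION
    deleted="${deleted}${line} "
  fi
done <<< "$RESULT"
echo "$deleted"
```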

Shell script for remote copy and then processing the file

The below script works fine. But when I add a command to remote-copy a file and then assign the variable FILENAME to the file received from the remote copy, the while loop doesn't work. I am quite new to scripting, so I'm not able to find out what I'm missing. Please help!
#!/bin/sh
#SCRIPT: File processing
#PURPOSE: Process a file line by line with redirected while-read loop.
SSID=$1
ASID=$2
##rcp server0:/oracle/v11//dbs/${SSID}_ora_dir.lst /users/global/rahul/${ASID}_clone_dir.lst
##FILENAME=/users/global/rahul/${ASID}_clone_dir.lst
count=0
while read LINE
do
echo $LINE | sed -e "s/${SSID}/${ASID}/g"
count=`expr $count + 1`
done < $FILENAME
echo -e "\nTotal $count Lines read"
grep -v -e "pattern3" -e "pattern5" -e "pattern6" -e "pattern7" -e "pattern8" -e "pattern9" -e "pattern10" -e "pattern11" -e "pattern12" ${ASID}_.lst > test_remote.test
When you say "the while loop doesn't work": if you get an error message, you should include it in your question to give us a clue.
Are you sure the rcp command is successful? The file /users/global/rahul/${ASID}_clone_dir.lst exists after the rcp is completed?
Btw your while loop is inefficient. This should be equivalent:
sed -e "s/${SSID}/${ASID}/g" < "$FILENAME"
count=$(wc -l "$FILENAME" | awk '{print $1}')
echo -e "\nTotal $count Lines read"
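A runnable sketch of that replacement, with sample SSID/ASID values and a temporary file standing in for the rcp result:

```shell
#!/bin/bash
SSID="PROD"; ASID="CLONE"   # sample values; the real ones come from $1 and $2

# Stand-in for the file fetched by rcp.
FILENAME=$(mktemp)
printf '%s\n' "/oracle/PROD/data" "/oracle/PROD/logs" > "$FILENAME"

# One sed pass replaces the whole read/echo loop:
out=$(sed -e "s/${SSID}/${ASID}/g" "$FILENAME")
# wc -l counts the lines without reading them one by one:
count=$(wc -l < "$FILENAME")

echo "$out"
echo "Total $count lines read"
rm -f "$FILENAME"
```

Reading the file once with sed and once with wc is still far cheaper than forking `expr` per line, which is what the original loop does.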
