Executing a set of commands inside a new bash instance from a script - linux

I'm trying to execute a set of commands in a new bash session:
exec bash <<- EOF
ln -snf $JDK_REPO'/jdk'$1 $CURRENT;
JAVA_HOME=$(readlink -f $CURRENT);
echo $JAVA_HOME;
export PATH= $JAVA_HOME/bin:$PATH;
exec usejdk
EOF
I get this error:
warning: here-document at line 46 delimited by end-of-file (wanted `EOF')
I tried to debug it with whatswrongwithmyscript, and I get:
Use <<- instead of << if you want to indent the end token.
Any suggestions for executing a set of commands in a new bash instance?

Doing it this way works for me:
cmd="
ln -snf $JDK_REPO'/jdk'$1 $CURRENT;
JAVA_HOME=$(readlink -f $CURRENT);
echo $JAVA_HOME;
export PATH=$JAVA_HOME/bin:$PATH;
exec usejdk"
bash <<< "$cmd"
The bash <<< "$cmd" is equivalent to echo "$cmd" | bash or bash -c "$cmd".
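For illustration, here is a minimal sketch of all three forms (demo_cmd is just an example name); each one starts a fresh bash process and feeds it the same commands:
demo_cmd='echo "running in PID $$"; pwd'
bash <<< "$demo_cmd"      # here-string: the string becomes stdin of the new bash
echo "$demo_cmd" | bash   # pipe: same effect, the commands arrive on stdin
bash -c "$demo_cmd"       # -c: the string is passed as a command argument instead
Each invocation prints a different PID, confirming that the commands really run in a separate bash instance.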

Using escape characters inside double quotes in ssh command in bash script

I want to run some commands each time I log in to a remote system. Storing commands in .bashrc on the remote is not an option.
What is the proper way to escape the escape characters inside quotes in a bash script for ssh?
How can I write each command on a new line?
My script:
#!/bin/bash
remote_PS1=$'\[\033[01;32m\]\u@\[\033[03;80m\]\h\[\033[00m\]:\[\033[01;34m\]\!:\w\[\033[00m\]\$ '
ssh -t "$@" 'export SYSTEMD_PAGER="";' \
'export $remote_PS1;' \
'echo -e "set nocompatible" > /home/root/.vimrc;' \
'bash -l;'
It didn't work.
Escaping escape characters inside double quotes and running them on the remote server is way too complicated for me :)
Instead, I wrote a remoterc file for the remote machine and a small remotessh script.
In remotessh, I first copy remoterc to the remote machine and then run bash interactively with that remoterc file.
remoterc:
#!/bin/bash
SYSTEMD_PAGER=""
PS1="\[\033[01;32m\]\u#\[\033[03;80m\]\h\[\033[00m\]:\[\033[01;34m\]\!:\w\[\033[00m\]\$ "
echo -e "set nocompatible" > /home/root/.vimrc
remotessh:
#!/bin/bash
scp remoterc "$1":/home/root/
ssh "$1" -t "bash --rcfile remoterc -i"
It works :)
You can use Bash's printf %q.
According to help printf:
%q      quote the argument in a way that can be reused as shell input
See the following example:
$ cat foo.sh
ps1='\[\033[1;31m\]\u:\w \[\033[0m\]\$ '
ps1_quoted=$( printf %q "$ps1" )
ssh -t foo@localhost \
'export FOO=bar;' \
"export PS1=$ps1_quoted;" \
'bash --norc'
Result: the ssh session opens a remote bash with the customized prompt.
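To see what printf %q actually produces, here is a small sketch with a made-up sample string; the escaped output can be pasted back into a shell, or into an ssh command line, unchanged:
$ printf '%q\n' 'hello $USER; rm -rf *'
hello\ \$USER\;\ rm\ -rf\ \*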

Bash return code error handling when using heredoc input

Motivation
I'm in a situation where I have to run multiple bash commands with a single bash invocation, without the possibility of writing a full script file (use case: passing multiple commands to a container in Kubernetes). A common solution is to combine commands with ; or &&, for instance:
bash -c " \
echo \"Hello World\" ; \
ls -la ; \
run_some_command "
In practice, writing bash scripts like that turns out to be error-prone, because I often forget a semicolon, leading to subtle bugs.
Inspired by this question, I was experimenting with writing scripts in a more standard style by using a heredoc:
bash <<EOF
echo "Hello World"
ls -la
run_some_command
EOF
Unfortunately, I noticed that there is a difference in exit code error handling when using a heredoc. For instance:
bash -c " \
run_non_existing_command ; \
echo $? "
outputs (note that $? properly captures the exit code):
bash: run_non_existing_command: command not found
127
whereas
bash <<EOF
run_non_existing_command
echo $?
EOF
outputs (note that $? fails to capture the exit code compared to standard script execution):
bash: line 1: run_non_existing_command: command not found
0
Why is the heredoc version behaving differently? Is it possible to write the script in the heredoc style and maintaining normal exit code handling?
Why is the heredoc version behaving differently?
Because $? is expanded by the calling shell before the inner bash ever runs the command.
The following will output 1, i.e. the exit status of the false command:
false
bash <<EOF
run_non_existing_command
echo $?
EOF
It's the same in principle as the following, which will print 5:
variable=5
bash <<EOF
variable="This is ignored"
echo $variable
EOF
Is it possible to write the script in the heredoc style and maintaining normal exit code handling?
If you want $? to be expanded inside the inner bash, then:
bash <<EOF
run_non_existing_command
echo \$?
EOF
or
bash <<'EOF'
run_non_existing_command
echo $?
EOF
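With either variant, the inner bash expands $? itself, so the example prints 127 (the usual "command not found" status) instead of 0.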
Also note that:
bash -c \
run_non_existing_command ;
echo $? ;
is just equivalent to:
bash -c run_non_existing_command
echo $?
The echo $? is not executed inside bash -c.
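If the broader goal (e.g. the Kubernetes use case from the question) is simply that the whole invocation fails when any step fails, rather than inspecting $? by hand, one option is to combine the quoted delimiter with set -e; a minimal sketch, where run_some_command stands in for a real command:
bash <<'EOF'
set -euo pipefail
echo "Hello World"
ls -la
run_some_command    # if this fails, the inner bash stops here and exits non-zero
EOF
echo "inner bash exited with $?"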

bash -c variable does not get assigned

I am trying to execute the following command:
$ bash -c "var='test' && echo $var"
and only an empty line is being printed.
If I execute the same command without bash -c
$ var='test' && echo $var
test
the value assigned to $var is printed. Could someone explain why I can't assign variables in the first example?
Double quotes expand variables, so your command is expanded to
bash -c "var='test' && echo"
if $var is empty when you run it. You can verify the behaviour with
var=hey
bash -c "var='test' && echo $var"
Switch the quotes:
bash -c 'var="test" && echo $var'

How to log non-interactive bash command sent through ssh

I'm sending a command through ssh:
ssh server.org 'bash -s' << EOF
ls -al
whoami
uptime
EOF
How can I log it on the remote server? I'd like to log those commands in some file (.bash_history or /tmp/log).
I've tried to add the line below to sshd_config:
ForceCommand if [[ -z $SSH_ORIGINAL_COMMAND ]]; then bash; else echo "$SSH_ORIGINAL_COMMAND" >> .bash_history; bash -c "$SSH_ORIGINAL_COMMAND"; fi
But it logs "bash -s" only.
I'd appreciate any help.
When a login shell exits, bash reads and executes commands from the ~/.bash_logout file. You could probably run the history command at the end of the server's .bash_logout and save its output to some location.
If it suffices to work with the given command, we can put the necessary additions to enable and log command history at the beginning and end, e.g.
ssh server.org bash <<EOF
set -o history
ls -al
whoami
uptime
history|sed 's/ *[0-9]* *//' >>~/.bash_history
EOF
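The sed expression strips the leading history numbers, so the lines appended to ~/.bash_history look like ordinary commands.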
Or we could put them into the awfully long ForceCommand line:
… if [[ "$SSH_ORIGINAL_COMMAND" == bash* ]]; then echo "set -o history"; cat; echo "history|sed 's/ *[0-9]* *//' >>~/.bash_history"; else cat; fi | bash -c "$SSH_ORIGINAL_COMMAND"; fi
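The reason only "bash -s" gets logged is that the heredoc body never appears in $SSH_ORIGINAL_COMMAND at all; it is forwarded to the remote command's stdin. If editing sshd_config further is acceptable, another option is to point ForceCommand at a small wrapper script that also copies stdin to the history file. A sketch, with a made-up wrapper path:
#!/bin/bash
# Hypothetical wrapper, e.g. saved as /usr/local/bin/ssh-logger and referenced as:
#   ForceCommand /usr/local/bin/ssh-logger
if [[ -z "$SSH_ORIGINAL_COMMAND" ]]; then
    bash                                      # plain interactive login, unchanged
elif [[ "$SSH_ORIGINAL_COMMAND" == "bash -s" ]]; then
    tee -a ~/.bash_history | bash -s          # the heredoc script arrives on stdin: log it, then run it
else
    echo "$SSH_ORIGINAL_COMMAND" >> ~/.bash_history
    bash -c "$SSH_ORIGINAL_COMMAND"
fi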

bash init script - ambiguous redirect

I am trying to execute a Hadoop job from an init script and output to a log file. The commands are as follows:
log_file="/home/hadoop/log_`date +%Y%m%d-%H:%M:%S`.log"
echo $log_file
su - hadoop -c 'java -jar /home/hadoop/testing.jar > $log_file'
But I keep getting -bash: $log_file: ambiguous redirect.
What is wrong with my statement?
The problem is that you are using single quotes around your command, so the variable $log_file is not being expanded.
edit
As clarified by William Pursell's comment, $log_file is being expanded but by the wrong shell. See this example (I am using set -x to show the commands being evaluated):
$ log_file="/dir/file.log"
$ set -x
$ su - -c 'set -x; echo $log_file'
+ su - -c 'set -x; echo $log_file' # no expansion of $log_file yet
+ echo # expansion here, no variable $log_file
$ su - -c "set -x; echo $log_file"
+ su - -c 'set -x; echo /dir/file.log' # expansion here, $log_file exists
+ echo /dir/file.log
/dir/file.log
solution
Try using double quotes instead, so $log_file is expanded at the correct point:
su - hadoop -c "java -jar /home/hadoop/testing.jar > $log_file"
