How to echo line with multiple quotes/special characters into file? - linux

I am trying to echo the following line into a .profile but it keeps getting confused by either the many quotes or special characters.
bind '"e[A": history-search-backward'
I've tried all sorts of things but can't get it nailed.
This is what I currently have:
sudo su -c 'echo "bind \'\"\\e[A\": history-search-backward\'" >> /etc/profile' -
This is what it returns:
su: user '"\e[A": does not exist
Yet if I just use:
echo bind \'\"\\e[A\": history-search-backward\'" >> /home/user/testfile
It works just fine.
I have all manner of "sudo su -c "echo blah..." in the rest of my script that work just fine.
Any ideas?

Try this
sudo su -c $'echo \"bind \'\"\\e[A\": history-search-backward\'\" >> /etc/profile\' -'
From the bash man page:
A single quote may not occur between single quotes, even when preceded by a backslash.
Text quoted by $'...' may contain backslash-escaped quotes, both single and double.
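A minimal illustration (not from the original answer) of $'...' holding both quote types in one word:

```shell
# Inside $'...', \' is a literal single quote and " needs no escaping,
# so both quote characters can coexist in a single shell word.
s=$'It\'s a "mixed" string'
printf '%s\n' "$s"
# -> It's a "mixed" string
```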
Another option is to add a simpler expression to ~/.inputrc:
echo '"\e[A": history-search-backward' >> ~/.inputrc
There doesn't seem to be a system-wide equivalent of .inputrc that is read by all users. Also, this makes the key binding available to any program that uses readline. If you really do want to restrict it to bash, add a conditional expression:
cat >> ~/.inputrc <<'EOF'
$if Bash
"\e[A": history-search-backward
$endif
EOF

Every character is interpreted literally between single quotes, except ' itself. So you can put a single quote inside a literal string like this: 'single'\''quoted' is the string single'quoted.
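The concatenation can be verified directly; printf shows the three quoted pieces fusing into one word:

```shell
# 'single'  ->  literal text single
# \'        ->  an escaped quote, outside any quoting
# 'quoted'  ->  literal text quoted
# The shell joins adjacent quoted pieces into one word.
printf '%s\n' 'single'\''quoted'
# -> single'quoted
```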
Your command is complicated because there are two shells involved: the shell you're running this command from, and the shell that su runs. Note that it's weird to run sudo su since sudo already runs the specified command as root; sudo sh -c … makes more sense. So you need to quote for both. It's usually clearest to use single quotes for the outer shell, and double quotes or single quotes or backslashes for the inner shell.
There's another problem with your command: you're targeting the wrong file. /etc/profile is only read by login shells, whereas the bind command is specific to bash but should be read by all instances of bash, not just login shells. Instead of writing this line to /etc/profile, you should write it to the system-wide bashrc, if there is one (it's usually /etc/bash.bashrc).
sudo sh -c 'echo "bind '\''\"\\e[A\": history-search-backward'\''" >>/etc/bash.bashrc'
You may put this setting directly into the readline configuration file, /etc/inputrc. You'll save on a level of quoting.
sudo sh -c 'echo '\''"\e[A": history-search-backward'\'' >>/etc/inputrc'
An easier way to pass an arbitrary string to a command would be to pass it as input instead of as an argument and use a here document.
sudo sh -c 'cat >>/etc/inputrc' <<'EOF'
"\e[A": history-search-backward
EOF

Related

ssh to remote machine via proxy server to run a script with parameters

I want to ssh to a machine via a proxy server and run a script on that server, passing inputs as parameters.
I am using the following:
ssh -t tooladm@200.81.36.188 "ssh -t tooladm@apuatt01" ". ./.profile >/dev/null 2>&1; cd /astxpinfs/ast/tooladm/JHF_SYNC_Particular_HF ; ./SyncToSite.ksh $product $release "${hf_list}" ${LOG_DIR_NAME} 2>&1 > /dev/null"
For clarification:
Suppose I am on machine A and want to run the script located on machine apuatt01.
There is no direct connectivity between machine A and apuatt01,
so I am connecting to apuatt01 via 200.81.36.188.
With this command, I am not able to run the above script.
Can you please help me find where I am going wrong?
You can use Bash here document to make the script cleaner:
ssh -t tooladm@200.81.36.188 -- ssh -t tooladm@apuatt01 <<EOS
source .profile >/dev/null 2>&1
cd /astxpinfs/ast/tooladm/JHF_SYNC_Particular_HF
./SyncToSite.ksh $product $release "${hf_list}" ${LOG_DIR_NAME} 2>&1 > /dev/null
EOS
Note, the double dash separates the command from arguments passed to ssh.
This syntax works in ksh, too. From the KornShell manual (man ksh):
<<[-]word   The shell input is read up to a line that is the same as
word after any quoting has been removed, or to an end-of-file. No
parameter substitution, command substitution, arithmetic substitution
or file name generation is performed on word. The resulting document,
called a here-document, becomes the standard input. If any character
of word is quoted, then no interpretation is placed upon the
characters of the document; otherwise, parameter expansion, command
substitution, and arithmetic substitution occur, \newline is ignored,
and \ must be used to quote the characters \, $, `. If - is appended
to <<, then all leading tabs are stripped from word and from the
document. If # is appended to <<, then leading spaces and tabs will
be stripped off the first line of the document and up to an
equivalent indentation will be stripped from the remaining lines and
from word. A tab stop is assumed to occur at every 8 columns for the
purposes of determining the indentation.
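The "If any character of word is quoted" rule is easy to check: quoting the delimiter suppresses expansion inside the document.

```shell
name=world

# Unquoted delimiter: $name is expanded inside the document
cat <<EOF
hello $name
EOF
# -> hello world

# Quoted delimiter: the document is taken literally
cat <<'EOF'
hello $name
EOF
# -> hello $name
```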

coloring terminal with shell script

Can someone explain to me why, when I copy and paste the following command into the terminal, it displays the colored text correctly, but when I run it via sh myscript.sh it does not?
blue='\e[1;34m'
NC='\e[0m'
echo -e "${blue}Test${NC}"
EDIT
Sudo is not the problem. If I copy the above and paste it directly into the terminal, everything works, but running it from a file with sh myscript.sh does not.
Probably because sh isn't bash on your system.
$ file /bin/sh
/bin/sh: symbolic link to `dash'
Try
bash myscript.sh
Your interactive shell seems to be GNU Bash, while sh is a generic POSIX shell, which actually may be dash, busybox sh or something else. The problem is that neither -e option for echo nor \e are POSIX-compliant.
But you can easily use printf instead of echo -e (do not forget to explicitly specify newline character \n) and \033 instead of \e:
blue='\033[1;34m'
NC='\033[0m'
printf "${blue}%s${NC}\n" 'Test'
Or, of course, you can just use bash (as Elliott Frisch suggested) if you are sure that it will be available on the target system.
Also, I should point out that this is not the right way to run shell scripts in general. If you're writing a standalone script, it's better to use a shebang line and set the executable bit on the file.
$ cat myscript
#!/bin/sh
blue='\033[1;34m'
NC='\033[0m'
printf "${blue}%s${NC}\n" 'Test'
$ chmod +x myscript
$ ./myscript
But if you're writing a command sequence (a macro, if you will) for the interactive shell, there is the source (or simply .) command:
$ source myscript
(Then all of the above about POSIX compliance does not matter, of course.)

Escape newline character in heredoc on solaris

I am using bash and this works on Linux:
read -r -d '' VAR<<-EOF
Hello\nWorld
EOF
echo $VAR > trail
i.e. the contents of the file on Linux are
Hello\nWorld
When I run it on Solaris, the trail file has
Hello
World
The \n escape is being replaced with an actual newline. How can I avoid it?
Is it a problem with heredoc or the echo command?
[UPDATE]
Based on the explanation provided here:
echo -E $VAR > trail
worked fine on Solaris.
The problem is with echo. Its behavior is defined in POSIX, where interpreting \n is part of XSI but not of basic POSIX itself.
You can avoid this on all platforms by using printf (which is good practice anyway):
printf "%s\n" "$VAR"
This is not a problem for bash by the way. If you had used #!/usr/bin/env bash as the shebang (and also not run the script with sh script), behavior would have been consistent.
If you use #!/bin/sh, you'll get whichever shell the system uses as a default, with varying behaviors like this.
To complement @that other guy's helpful answer:
Even when it is bash executing your script, there are several ways in which the observed behavior - echo by default interpreting escape sequences such as \n - can come about:
shopt -s xpg_echo could be in effect, which makes the echo builtin interpret \ escape sequences by default.
enable -n echo could be in effect, which disables the echo builtin and runs the external executable by default - and that executable's behavior is platform-dependent.
These options are normally NOT inherited when you run a script, but there are still ways in which they could take effect:
If your interactive initialization files (e.g., ~/.bashrc) contain commands such as the above and you source (.) your script from an interactive shell.
When not sourcing your script: If your environment contains a BASH_ENV variable that points to a script, that script is sourced before your script runs; thus, if that script contains commands such as the above, they will affect your script.
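The xpg_echo effect is easy to reproduce, assuming bash is available as bash; running each echo in its own shell keeps the option from leaking:

```shell
# Default bash echo leaves backslash sequences alone
bash -c 'echo "a\nb"'
# -> a\nb

# With xpg_echo set, the builtin interprets \n as a newline
bash -c 'shopt -s xpg_echo; echo "a\nb"'
# -> a
#    b
```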

How to pass local variable to remote ssh commands?

I need to execute multiple commands on remote machine, and use ssh to do so,
ssh root@remote_server 'cd /root/dir; ./run.sh'
In the script, I want to pass a local variable $argument when executing run.sh, like
ssh root@remote_server 'cd /root/dir; ./run.sh $argument'
It does not work, since in single quote $argument is not interpreted the expected way.
Edit: I know double quote may be used, but is there any side effects on that?
You can safely use double quotes here.
ssh root@remote_server "cd /root/dir; ./run.sh $argument"
This will expand the $argument variable. There is nothing else present that poses any risk.
If you have a case where you do need to expand some variables, but not others, you can escape them with backslashes.
$ argument='-V'
$ echo "the variable \$argument is $argument"
would display
the variable $argument is -V
To discover any hidden problems that might catch you by surprise, you can always test safely with echo first.
Additionally, another way to run multiple commands is to redirect stdin to ssh. This is especially useful in scripts, or when you have more than 2 or 3 commands (esp. any control statements or loops)
$ ssh user@remoteserver << EOF
> # commands go here
> pwd
> # as many as you want
> # finish with EOF
> EOF
output, if any, of commands will display
$ # returned to your current shell prompt
If you do this on the command line, you'll get a stdin prompt to write your commands. On the command line, the SSH connection won't even be attempted until you indicate completion with EOF. So you won't see results as you go, but you can Ctrl-C to get out and start over. Whether on the command line or in a script, you wrap up the sequence of commands with EOF. You'll be returned to your normal shell at that point.
You could run xargs on the remote side:
$ echo "$argument" | ssh root@remote_server 'cd /root/dir; xargs -0 ./run.sh'
This avoids any quoting issues entirely--unless your argument has null characters in it, I suppose.
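If your xargs supports -0, feeding it a NUL-terminated string (rather than echo's trailing newline) keeps the argument byte-for-byte intact. A local sketch without ssh, using printf as a stand-in for run.sh:

```shell
argument='hello world'
# printf '%s\0' emits the value followed by a NUL byte; xargs -0 then
# treats everything up to the NUL as a single argument, preserving
# spaces and embedded newlines.
printf '%s\0' "$argument" | xargs -0 printf '[%s]\n'
# -> [hello world]
```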

How can I execute a series of commands in a bash subshell as another user using sudo?

I'm writing a bash script that needs to sudo multiple commands. I can do this:
( whoami ; whoami )
but I can't do this:
sudo ( whoami ; whoami )
How do I solve this?
You can pass the commands as standard input into sudo'ed bash with a here document:
sudo bash <<"EOF"
whoami
id
EOF
This way there is no need to fiddle with correct quoting, especially if you have multiple levels, e.g.:
sudo bash <<"EOF"
whoami
echo $USER ~
sudo -u apache bash <<"DOF"
whoami
echo $USER ~
DOF
EOF
Produces:
root
root /root
apache
apache /usr/share/httpd
(Note that you can't indent the inner terminator — it has to be alone on its line. If you want to use indentation in a here document, you can use <<- instead of <<, but then you must indent with tabs, not spaces.)
Run a shell inside sudo:
sudo bash -c 'whoami; whoami'
You can use any character except ' itself inside the single quotes. If you really want to have a single quote in that command, use '\'' (which technically is: end single-quote literal, literal ' character, start single-quoted literal; but effectively this is a way to inject a single quote in a single-quoted literal string).
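The '\'' trick can be demonstrated with a plain bash -c; the same quoting applies under sudo:

```shell
# The outer shell assembles the pieces 'echo "it' + \' + 's fine"',
# so the inner shell receives: echo "it's fine"
bash -c 'echo "it'\''s fine"'
# -> it's fine
```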
If you would like to get syntax highlighting from your editor, not use quotes around your code, and have proper indentation, you can write your commands in a function and send it to bash using the declare command:
function run_as_root() {
whoami
id
echo $USER
}
sudo bash -c "$(declare -f run_as_root); run_as_root"
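The declare -f technique can be tried without sudo; a sketch with a hypothetical function name (greet), serialized into a fresh bash:

```shell
greet() {
  printf 'hello from %s\n' "$1"
}

# declare -f prints the function's source; appending a call makes the
# child bash define and then run it.
bash -c "$(declare -f greet); greet subshell"
# -> hello from subshell
```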
For example, try this; I tested it:
sudo bash -c "cd /;ls;ls|grep o"
In this example you first change directory to / (the filesystem root), then list that directory, and finally filter the listing for entries whose names contain the letter 'o'.
But I think a better way is to write a script that does everything you need and returns an exit code for the whole complex action. Then you can sudo that script instead of a group of single commands as in the example above.
The parentheses mean that the commands are executed in a subshell, separated by semicolons. Just use the code below instead:
(sudo whoami;sudo whoami)
BTW: the space is not necessary when using '()'.
sudo only asks for your password the first time. The cached password is valid for about 5 minutes by default, and you can change this value. So you only need to worry about the password prompt at the beginning of your script; after that you can use sudo throughout.
Changing the value of Defaults:user_name timestamp_timeout to -1 may open a security hole on your system.
