# allow CTRL-Q and CTRL-S keybindings
vim() {
    (
        # No ttyctl, so we need to save and then restore terminal settings
        # osx users, use stty -g
        local STTYOPTS="$(stty --save 2> /dev/null)"
        trap "stty $STTYOPTS" EXIT
        stty stop '' start '' -ixoff
        command vim "$@"
    )
}
I'm using the above shell function to temporarily change stty options so that CTRL-Q and CTRL-S can function as keybinds in vim.
This works nicely, but as a side-effect I can no longer see which file corresponds to a background job when I pause vim with CTRL-Z. I frequently work with multiple sessions in the background and it would be really handy to be able to see which job is which again.
Current output from jobs with a background task:
root@rock64:~# jobs
[1]+ Stopped ( local STTYOPTS="$(stty --save 2> /dev/null)"; trap "stty $STTYOPTS" EXIT; stty stop '' start '' -ixoff; command vim "$@" )
root@rock64:~#
Unwrapped output like this would be ideal:
root@rock64:~# jobs
[1]+ Stopped vim .bashrc
root@rock64:~#
Is there a different way to achieve the same effect (temporarily changing STTY options with restore on job completion) without squashing the background jobs listing?
I'm running Bash 4.4.x at the moment but I could easily compile a newer version if needed.
What I am suggesting in the comment is difficult to convey without better code formatting. I am simply suggesting removing the surrounding ( and ). The RETURN trap then needs to be cleaned up in the caller, so I made a second function: when runvim returns, we are back in the original function and can remove the trap there.
runvim() {
    local STTYOPTS="$(stty --save 2> /dev/null)"
    trap "stty $STTYOPTS" RETURN  # restore terminal settings when this function returns
    stty stop '' start '' -ixoff
    command vim "$@"
}

# allow CTRL-Q and CTRL-S keybindings
vim() {
    # No ttyctl, so we need to save and then restore terminal settings
    # osx users, use stty -g
    runvim "$@"
    trap - RETURN
}
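One way to sanity-check the cleanup from an interactive bash session (a small sketch; trap -p prints whatever trap is currently set):

vim somefile    # run the wrapper function, edit, then quit
trap -p RETURN  # should print nothing: the RETURN trap was removed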
Alternative
You could put this in a script called vim in $HOME/bin:
#!/bin/bash
STTYOPTS="$(stty --save 2> /dev/null)"
trap "stty $STTYOPTS" EXIT
stty stop '' start '' -ixoff
#/usr/bin/vim "$@" # or wherever your vim is
"$(whereis vim | cut -d' ' -f2)" "$@" # Here is a more generic version.
Then add that directory to the front of your PATH variable by adding export PATH="$HOME/bin:$PATH" to your favorite dot file.
Edit:
I went ahead and improved the wrapper: if it's run as a symlink called 'vim' in the user's path, it looks for the next real vim in the path and calls that directly. This avoids needing an alias pointing 'vim' at vimwrapper.sh and transparently forwards 'vim' calls to the actual vim binary. I think this is basically complete now.
#!/usr/bin/env bash
#
# vimwrapper.sh: Wrapper script to enable use of CTRL-Q/S keybinds in Vim
#
# Using a wrapper script avoids clobbering the editor command line with an
# anonymous subshell that's hard to read when vim is suspended with ^Z. We need
# the scope of the subshell to trap our trap (aaayyyyy) and keep the stty magic
# out of our interactive environment's namespace. The wrapper script just
# makes background jobs look sane if you interrupt vim with CTRL-Z.
# set -x
case $(basename "$0") in
"vim")
# Check if we're shadowing real vim by existing earlier in the path as a
# symlink and call it directly if so. This lets us symlink vimwrapper.sh to
# "$HOME/bin/vim", munge "$HOME:/bin" onto the beginning of the path and
# transparently wrap calls to 'vim' without our script going recursive.
for _v in $(which -a "vim"); do
# I refuse to fork myself. You know what, fork you too.
[[ $(realpath "$_v") == $(realpath "$0") ]] && continue
#printf "found real vim in path at '%s'\n" "$(realpath $_v)"
cmd="$_v" && break
done
if [[ -z "$cmd" ]]; then
echo "$(basename $0): Unable to find real vim in path"
exit 1
fi
;;
*)
cmd="vim"
;;
esac
STTYOPTS="$(stty --save 2> /dev/null)"
trap "stty $STTYOPTS" EXIT
stty stop '' start '' -ixoff
command "$cmd" "$#"
Original post:
After playing with this for a while today I think I've got a decent solution. A subshell is necessary to scope/contain the stty parameter changes and the vim process that we want affected by them, but it doesn't have to be an anonymous subshell in the main shell environment.
#!/usr/bin/env bash
#
# vimwrapper.sh: Wrapper script to enable use of CTRL-Q/S keybinds in Vim
# For best results bind 'alias vim="/path/to/vimwrapper.sh"'
#
# Using a wrapper and alias avoids clobbering the editor command line with an
# anonymous subshell that's hard to read when vim is suspended with ^Z. We need
# the scope of the subshell to trap our trap (aaayyyyy) and keep the stty magic
# out of our interactive environment's namespace. The wrapper script just
# makes background jobs look sane if you interrupt vim with CTRL-Z.

# We'll be paranoid and make sure our wrapper script isn't the target of the
# 'command vim' call that comes next.
if [[ $(realpath "$0") == $(realpath "$(which vim)") ]]; then
    echo "$0: I refuse to fork myself. You know what, fork you too."
else
    # Save stty state and restore on exit.
    STTYOPTS="$(stty --save 2> /dev/null)"
    trap "stty $STTYOPTS" EXIT
    stty stop '' start '' -ixoff
    command vim "$@"
fi
exit 0
Binding a call to the wrapper script as alias vim="~/foo/vimwrapper.sh" takes care of everything nicely:
root@rock64:~/bin# vim vim.sh
[1]+ Stopped ~/bin/vim.sh vim.sh
It's also possible to symlink vimwrapper.sh as 'vim' somewhere in the path, provided its location has a lower priority than the real vim's. I added a check for that so it doesn't go recursive by accident. I'll probably expand that a little so the script can shadow the real vim in the path and figure out the right command to call by looking at which -a vim and picking the next entry that isn't itself.
Thanks @Jason for pointing me in the right direction; I really appreciate it.
Related
example.sh:
x=0
while true
do
x=$(expr $x + 1)
clear #to print no. # same location
printf "\n\n $x"|figlet -f block
sleep 1;
done
I want it to behave like htop/cmatrix/vim/nano/bashtop, etc.: after running it I have to get back to the last prompt, not like cat/find, etc.
The closest solution I had come up with is to run the script inside a tmux session.
What I meant was, I don't want to lose the command outputs I ran before. Tools like nano/vim/cmatrix clear the screen, run, and then when we exit out of them (^C/q) we are back where we left the prompt, with the history (last run commands and outputs) intact.
Is there a command which does this?
┌──(kali㉿kali)-[~]
└─$ vi
┌──(kali㉿kali)-[~]
└─$ nano
┌──(kali㉿kali)-[~]
└─$ cat copy
file contents
┌──(kali㉿kali)-[~]
└─$
Here I opened vi and nano, but you can't see what happened inside vim or nano. That's not the case with cat, which just prints to the same session. With vim/nano/htop the terminal is clean afterwards, which is not the case with sed/cat/ps.
I wanted to create a script like that (one which will not affect the terminal session, like it runs in another dimension).
I tried reading the bashtop source code, which has the same behavior, but I couldn't find the code/command which does it.
vim and less and many other tools access the terminal's alternate screen and then restore the original screen when they are done. You can use tput to access the alternate screen from your shell script:
#!/bin/sh
tput smcup # begin using the alternate screen
# ...
tput rmcup # stop using alternate screen (restore terminal to original state)
Note that you don't want to remain in the alternate screen when the script ends, so you may prefer to do:
#!/bin/sh
trap 'tput rmcup' 0
tput smcup
In bashtop, they used tput in earlier versions, then they changed it to:
#!/bin/sh
echo -en '\033[?1049h' #* Switch to alternate screen
# codes to run inside
echo -en '\033[?1049l' #* Switch to normal screen
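Putting the two together with the counter from the question gives a minimal, self-terminating sketch (capped at 10 iterations and using plain printf instead of figlet so it runs anywhere):

#!/bin/sh
trap 'tput rmcup' 0  # always restore the original screen, even on ^C
tput smcup           # switch to the alternate screen
x=0
while [ "$x" -lt 10 ]; do
    x=$((x + 1))
    clear
    printf '\n\n %s\n' "$x"
    sleep 1
done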
I'm very, very new to Linux (coming from Windows) and trying to write a script that I can hopefully execute over multiple systems. I tried to use Python for this but found that hard too. Here is what I have so far:
cd /bin
bash
source compilervars.sh intel64
cd ~
exit #exit bash
file= "~/a.out"
if[! -f "$file"]
then
icc code.c
fi
#run some commands here...
The script hangs on the second line (bash). I'm not sure how to fix that or if I'm doing it wrong. Please advise.
Also, any tips on how to run this script over multiple systems on the same network?
Thanks a lot.
What I believe you'd want to do:
#!/bin/bash
source /bin/compilervars.sh intel64
file="$HOME/a.out"
if [ ! -f "$file" ]; then
icc code.c
fi
You would put this in a file and make it executable with chmod +x myscript. Then you would run it with ./myscript. Alternatively, you could just run it with bash myscript.
Your script makes little sense. The second line will open a new bash session, but it will just sit there until you exit it. Also, changing directories back and forth is very seldom required. To execute a single command in another directory, one usually does
( cd /other/place && mycommand )
The ( ... ) tells the shell that you'd like to do this in a sub-shell. The cd happens within that sub-shell and you don't have to cd back after it's done. If the cd fails, the command will not be run.
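A quick way to see the scoping in action (directory names are just examples):

pwd                  # e.g. /home/me
( cd /tmp && pwd )   # prints /tmp
pwd                  # still /home/me: the cd died with the subshell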
For example: You might want to make sure you're in $HOME when you compile the code:
if [ ! -f "$file" ]; then
( cd $HOME && icc code.c )
fi
... or even pick out the directory name from the variable file and use that:
if [ -f "$file" ]; then
( cd $(dirname "$file") && icc code.c )
fi
Assigning to a variable needs to happen as I wrote it, without spaces around the =.
Likewise, there needs to be spaces after if and inside [ ... ] as I wrote it above.
I also tend to use $HOME rather than ~ in scripts as it's more descriptive.
A shell script isn't a record of key strokes which are typed into a terminal. If you write a script like this:
command1
bash
command2
it does not mean that the script will switch to bash, and then execute command2 in the different shell. It means that bash will be run. If there is a controlling terminal, that bash will show you a prompt and wait for a command to be typed in. You will have to type exit to quit that bash. Only then will the original script continue with command2.
There is no way to switch a script to a different shell halfway through. There are ways to simulate this. A script can re-execute itself using a different shell. In order to do that, the script has to contain logic to detect that it is being re-executed, so that it can prevent re-executing itself again, and to skip some code that shouldn't be run twice.
In this script, I implemented such a re-execution hack. It consists of these lines:
#
# The #!/bin/sh might be some legacy piece of crap,
# not even up to 1990 POSIX.2 spec. So the first step
# is to look for a better shell in some known places
# and re-execute ourselves with that interpreter.
#
if test x$txr_shell = x ; then
for shell in /bin/bash /usr/bin/bash /usr/xpg4/bin/sh ; do
if test -x $shell ; then
txr_shell=$shell
break
fi
done
if test x$txr_shell = x ; then
echo "No known POSIX shell found: falling back on /bin/sh, which may not work"
txr_shell=/bin/sh
fi
export txr_shell
exec $txr_shell $0 ${#+"$#"}
fi
The txr_shell variable (not a standard variable; my invention) is how this logic detects that it's been re-executed. If the variable doesn't exist, then this is the original execution. When we re-execute, we export txr_shell so the re-executed instance will then have this environment variable.
The variable also holds the path to the shell; that is used later in the script; it is passed through to a Makefile as the SHELL variable, so that make build recipes use that same shell. In the above logic, the contents of txr_shell don't matter; it's used as a Boolean: either it exists or it doesn't.
The programming style in the above code snippet is deliberately coded to work on very old shells. That is why test x$txr_shell = x is used instead of the modern syntax [ -z "$txr_shell" ], and why ${@+"$@"} is used instead of just "$@".
This style is no longer used after this point in the script, because the rest of the script runs in some good, reasonably modern shell thanks to the re-execution trick.
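A condensed sketch of the same trick, with an invented marker variable RE_EXECED playing the role of txr_shell:

#!/bin/sh
# If we haven't re-executed yet and bash exists, re-run this script under bash.
if test x$RE_EXECED = x && test -x /bin/bash ; then
  RE_EXECED=1
  export RE_EXECED
  exec /bin/bash "$0" ${@+"$@"}
fi
echo "now running under: ${BASH_VERSION:-not bash}"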
In Unix or Unix-like systems, some commands are of the type "shell builtin"; commands like cd and echo are shell builtins. Is there any way to list all the shell builtin commands, or is there any command available to list all the builtins of the shell?
Look up the man page for your shell.
There should be a section "SHELL BUILTIN COMMANDS"
$ man bash
It depends upon the shell. So read the documentation of your particular shell.
For bash, see here.
For zsh, see here.
For fish, see this.
For tcsh (which I don't recommend; STFW for "csh considered harmful"), see here.
For the POSIX shell specification, read that. If you want to code somehow "portable" shell scripts, you should restrict yourself to that specification.
Some rescue shells have many builtins, to be useful on a severely corrupted or broken system which e.g. has no more any /bin/mv or /bin/cp executables. For example sash.
Some shells are able to load plugins (somehow possibly defining new builtins), or to define functions.
Some shells have a restricted form, which removes some builtins (notably cd). For restricted bash see here.
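For bash specifically, there is also a one-liner from the command line (compgen is a bash builtin, so this is bash-only):

compgen -b | sort   # list the names of all bash builtins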
TAB shows the available commands, from PATH and builtins.
Reset your PATH and enter [TAB] twice and answer y
$ PATH=
$ [TAB][TAB]
Display all 123 possibilities? (y or n)
- compgen fg _ooexp_ spwd
: complete fi path startx
! compopt for _pkcon _strip
. _compreply_ function __pkconcomp suspend
.. continue _gdb_ _pkcon_search test
... coproc getopts popd then
[ __dbus_send hash ppwd time
[[ declare help printf times
]] dir history pushd trap
{ dirs if pwd true
} disown in rd type
+ do jobs read typeset
alias done kill readarray ulimit
beep echo l readonly umask
bg elif la rehash unalias
bind else let remount unmount
break enable ll return unset
builtin esac local _scout until
caller eval logout _scpm wait
case exec ls select while
cd exit ls-l set _yast2
_cd_ _exp_ _man_ shift you
cd.. export mapfile shopt _zypper
command false md skipthis
command_not_found_handle fc o source
Just type help in the shell, and a list will appear.
I've written a bash script which installs 3 packages from source. The script is fairly simple, with the ./configure, make, make install statements written thrice (after cding into each source folder). To make it look a little cleaner, I have redirected the output to another file like so: ./configure >> /usr/local/the_packages/install.log 2>&1.
The question is: if any one package fails to compile for some reason (I'm not even sure what reason, because it has always run successfully till now; this is just something I want to add), I'd like to terminate the script and roll back.
I figure rolling back would simply be deleting the destination folders specified in prefix=/install/path, but how do I terminate the script itself?
Perhaps something like this could work:
./configure && make && make install || rm -rf /install/path
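Note how the operators associate: the rm runs if any of the three steps fails, because && and || are evaluated left to right. Written out as an explicit if, the same thing looks like this (path as in the question):

if ./configure && make && make install; then
    : # success, nothing to do
else
    rm -rf /install/path   # roll back the partial install
fi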
Option 1
You can check the return code of something run from a script with the $? bash variable.
moo@cow:~$ false
moo@cow:~$ echo $?
1
moo@cow:~$ true
moo@cow:~$ echo $?
0
Option 2
You can also check the return code by putting the command directly into an if statement, like so.
moo@cow:~$ if echo a < bad_command; then echo "success"; else echo "fail"; fi
fail
Invert the return code
The return code of a command can be inverted with the ! character.
moo@cow:~$ if ! echo a < bad_command; then echo "success"; else echo "fail"; fi
success
Example Script
Just for fun, I decided to write this script based on your question.
#!/bin/bash

_installed=()

do_rollback() {
    echo "rolling back..."
    for i in "${_installed[@]}"; do
        echo "removing $i"
        rm -rf "$i"
    done
}

install_pkg() {
    local _src_dir="$1"
    local _install_dir="$2"
    local _prev_dir="$PWD"

    # Switch to source directory
    if ! cd "$_src_dir"; then
        echo "error: could not enter $_src_dir"
        do_rollback
        exit 1
    fi

    # Try configuring
    if ! ./configure --prefix "$_install_dir"; then
        echo "error: could not configure pkg in $_src_dir"
        do_rollback
        exit 1
    fi

    # Try making
    if ! make; then
        echo "error: could not make pkg in $_src_dir"
        do_rollback
        exit 1
    fi

    # Try installing
    if ! make install; then
        echo "error: could not install pkg from $_src_dir"
        do_rollback
        exit 1
    fi

    # Update installed array
    echo "installed pkg from $_src_dir"
    _installed=("${_installed[@]}" "$_install_dir")

    # Restore previous directory
    cd "$_prev_dir"
}

install_pkg /my/source/directory1 /opt/install/dir1
install_pkg /my/source/directory2 /opt/install/dir2
install_pkg /my/source/directory3 /opt/install/dir3
In two parts:
To make the script abort as soon as any command returns an error, you want to use set -e. From the man page (BUILTINS section; description of the set builtin):
-e
Exit immediately if a pipeline (which may consist of a single simple command),
a subshell command enclosed in parentheses, or one of the commands executed as
part of a command list enclosed by braces (see SHELL GRAMMAR above) exits with
a non-zero status. The shell does not exit if the command that fails is part of
the command list immediately following a while or until keyword, part of the
test following the if or elif reserved words, part of any command executed in a
&& or ││ list except the command following the final && or ││, any command in a
pipeline but the last, or if the command's return value is being inverted with
!. A trap on ERR, if set, is executed before the shell exits. This option
applies to the shell environment and each subshell environment separately (see
COMMAND EXECUTION ENVIRONMENT above), and may cause subshells to exit before
executing all the commands in the subshell.
You can set this in three ways: change your shebang line to #!/bin/bash -e; call the script as bash -e scriptname; or simply use set -e near the top of your script.
The second part of the question is (to paraphrase) how to catch the exit and clean up before exiting. The answer is referenced above: you want to set a trap on ERR.
To show you how these work together, here's a simple script being run. Note that as soon as we have a non-zero exit code, execution transfers to the signal handler which takes care of doing the cleanup:
james@bodacious:tmp$ cat test.sh
#!/bin/bash -e
cleanup() {
echo I\'m cleaning up!
}
trap cleanup ERR
echo Hello
false
echo World
james@bodacious:tmp$ ./test.sh
Hello
I'm cleaning up!
james@bodacious:tmp$
I use filter commands in vim to call bash scripts that I sometimes need to interrupt, which I do by hitting Ctrl+C in the vim window. The bash script terminates then (at least vim stops the filter command), but the text I passed to the filter command (usually a visual selection) will be missing. I would like vim to return to the state before the filter command if I interrupt execution with Ctrl+C or the bash script finishes with an exit state other than 0. Note that I know I can press u to undo, but I would like to modify the filter command to do this, since I could forget to press u and lose the text without noticing.
You can set signal and/or exit handlers in bash. man bash, /trap.*sigspec
Something like:
trap "your_commands" SIGINT
my_program
To make it "preserve" the text, you probably need something like that:
TIN=$(mktemp)   # holds the text vim pipes in
TOUT=$(mktemp)  # holds the filter's output
cat > "$TIN"    # save the original selection
# On exit (including ^C), send the current contents of TIN back to vim
# and clean up the temp files.
trap "cat $TIN; rm -f $TIN $TOUT" EXIT
do_something < "$TIN" > "$TOUT" || cp "$TIN" "$TOUT"
mv "$TOUT" "$TIN"
Checked in my Vim, seems to be working. (tested with sed and sleep as do_something)
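Assuming the snippet is saved as a script (say, myfilter.sh, an invented name) with a real command in place of do_something, a quick test outside vim looks like:

chmod +x myfilter.sh
printf 'hello\nworld\n' | ./myfilter.sh   # press ^C during the filter to
                                          # verify the input is echoed back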