I am wondering how I can put a Linux command's options at the end of the command line without getting an error.
For example
rm file1/ -r
cp file1/ file2/ -r
I have found that some Linux cluster systems can do this, but others cannot.
After searching for a while, it seems the getopts command may help, but I am not sure whether getopts is the best choice for this (and I am also not sure how to implement it for my purpose).
Do I need to customize this command by command, or is there a more general solution that can be applied to any command?
Thank you for your help.
Consider this command:
rm -f myfile -r -- -i
The GNU flag convention is to allow options anywhere, up until an optional -- to indicate "end of options". Programs following it will see the options -r and -f, plus the arguments myfile and -i.
The BSD flag convention is that flags are only allowed up until the first non-flag argument, or until an optional --. Programs following it will see the option -f, plus the arguments myfile, -r, -- and -i.
POSIX only requires utilities to support the BSD style.
It's up to the individual program to decide how to interpret flags. If you're on a BSD style system like FreeBSD or macOS, you can install GNU tools and use those. If you can't, you're mostly stuck with the system's flag convention.
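One way to see the two conventions side by side on a GNU system (a sketch; it relies on GNU option parsing honouring the POSIXLY_CORRECT environment variable, which disables argument permutation):

$ touch myfile
$ rm myfile -f                      # GNU rm permutes the arguments: -f is an option, myfile is removed
$ touch myfile
$ POSIXLY_CORRECT=1 rm myfile -f    # parsing stops at myfile, BSD/POSIX style: rm also tries to
                                    # remove a file literally named "-f" and reports an error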
Related
I am using iTerm2 on macOS Catalina; however, I prefer all the GNU utils over the FreeBSD ones. Specifically, the cp command in FreeBSD lacks the -t option to specify the target, which I like to use when piping find | xargs cp -t <dest>.
So I used brew to install the GNU core utils as described in this post: https://apple.stackexchange.com/questions/69223/how-to-replace-mac-os-x-utilities-with-gnu-core-utilities
And so, I now have the GNU versions of the common shell tools; my ls now resolves to /usr/local/opt/coreutils/libexec/gnubin/ls. The downside is that my ls colors are now gone. See below:
env and command outputs
I can obviously get around this by aliasing ls back to /bin/ls, but I am wondering if there is a better way. How can I get GNU ls to recognize my environment settings for colors?
You need to use dircolors to change the coreutils ls output. This link has details on how to use it: https://www.gnu.org/software/coreutils/manual/html_node/dircolors-invocation.html
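In practice that usually means something like this in your ~/.zshrc or ~/.bashrc (a sketch; it assumes the GNU dircolors from the gnubin directory mentioned above is first in your PATH):

eval "$(dircolors -b)"        # emits and evaluates an LS_COLORS assignment for GNU ls
alias ls='ls --color=auto'    # make GNU ls honour LS_COLORS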
I had the same issue.
Looks like all GNU ls needed was --color=auto
So this alias in my .zshrc worked -
alias ls='ls -G --color=auto'
I assume the same should work in .bashrc (not tested).
On Linux systems, when you type a command in a shell like rm * -rf, the order of the * and the -rf doesn't matter; my shell interprets it the same way. Now, on my Mac, when I type rm -rf * everything works fine, but if I do rm * -rf an error shows up: rm: -rf: No such file or directory
I tried that on macOS and on Linux, both with the fish and bash shells. Same problem.
Does anyone have any idea why, on macOS, the -rf at the end of the command is not interpreted as options to the command?
It's not about the shell, it's about the commands.
The parsing of command line arguments is not a feature and responsibility of the shell, but of the actual commands.
In both systems the shell faithfully passes the command line arguments in whatever order they were specified, and then it's up to the implementation of the commands to parse them as they see fit.
On Linux, the core utilities are typically the GNU implementation,
while on macOS, the core utilities are typically the BSD implementation.
The man page of the commands should tell you which implementation it is.
For example the last line of man rm in Linux is something like this:
GNU coreutils 8.21 March 2016 RM(1)
On macOS:
BSD January 28, 1999 BSD
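Regardless of which implementation you have, a quick way to convince yourself that the shell itself just hands the arguments over unchanged (a sketch; showargs is a throwaway script name):

$ cat > showargs <<'EOF'
#!/bin/sh
printf 'arg: %s\n' "$@"
EOF
$ chmod +x showargs
$ ./showargs * -rf    # the shell expands *, appends -rf, and passes everything on;
                      # whether -rf counts as an option is decided by the program, not the shell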
The order of the arguments in any shell has historically been relevant in Unix.
rm, incidentally, even accepts -- to stop option parsing (so that you can remove files whose names start with "-", for example).
See the rm(1) and getopt(3) man pages.
If the shell didn't respect the order of the arguments it is given, what would the result be of this sequence:
$ touch a b
$ mv a b
Which file would remain?
I have been working on a shell script to automate some tasks. What is the best way to make sure the shell script will run without any issues on most platforms? For example, I have been using the echo -n command to print messages to the screen without a trailing newline, and the -n switch doesn't work in some ksh shells. I was told the script must be POSIX compliant. How do I make sure that the script is POSIX compliant? Is there a tool? Or is there a shell that supports only the bare minimum POSIX requirements?
POSIX
One first step, which gives you indications of what works or not and why, is to set the shebang to /bin/sh and use the shellcheck site to analyze your script.
For example, paste this script in the shellcheck editor window:
#!/bin/sh
read -r a b <<<"$1"
echo $((a+b))
to get an indication that: "In POSIX sh, here-strings are undefined".
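A POSIX-compliant rewrite of that snippet could use a here-document instead (a sketch; it still assumes $1 holds two whitespace-separated integers):

#!/bin/sh
read -r a b <<EOF
$1
EOF
echo $((a+b))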
As a second step, you can use a shell that is as compatible with POSIX as possible.
One shell that is compatible with most other simple shells is dash, Debian's default system shell, which is a derivative of the older BSD ash.
Another shell compatible with POSIX is posh.
However, dash and/or posh may not be available for some systems.
There is also lksh (with a ksh flavor), whose goal is to be compatible with legacy (old) shell scripts. From its manual:
lksh is a command interpreter intended exclusively for running legacy shell scripts.
But you need to use options when calling lksh, such as -o posix and -o sh:
Note that it's strongly recommended to invoke lksh with at least the -o posix option, if not both that and -o sh, to fully enjoy better compatibility to the POSIX standard (which is probably why you use lksh over mksh in the first place) or legacy scripts, respectively.
You would call lksh -o posix -o sh instead of the simple lksh.
Using options is also a way to make other shells POSIX compatible: as with lksh, the -o posix option works, for example bash -o posix.
In bash, it is even possible to turn on the POSIX option inside a script, with:
shopt -o posix # also with: set -o posix
It is also possible to make a local link to bash or zsh named sh, so that the shell acts like an old sh shell when invoked through it. Like this:
$ ln -s /bin/bash ./sh
$ ./sh
There are plenty of alternatives (dash, posh, lksh, bash, zsh, etc.) to get a shell that will work as a POSIX shell.
Portable
However, even so, all the above does not ensure "portability".
Unfortunately, making a shell script 'POSIX-compliant' is usually easier than making it run on any real-world shell.
The only real-world sensible recommendation is to test your script in several shells,
like the list above: dash, posh, lksh, and bash --posix.
Solaris is a world of its own; you will probably need to test against /bin/sh and xpg4/sh there.
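If you want to automate that, one low-tech approach is a loop (a sketch; myscript.sh is a placeholder for your script, and the list should be adjusted to the shells actually installed):

for sh in dash posh 'lksh -o posix -o sh' 'bash --posix'; do
    printf '== %s ==\n' "$sh"
    $sh ./myscript.sh    # word splitting on $sh is intentional here
done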
Followup:
How can I test for POSIX compliance for shell scripts?
Starting Bash with the --posix command-line option or executing ‘set -o posix’ while Bash is running will cause Bash to conform more closely to the POSIX standard by changing the behavior to match that specified by POSIX in areas where the Bash default differs.
Reference
Note:
This answer complements user8017719's great answer.
As requested in the question, a tool is discussed below: while it does not directly check for POSIX compliance, it runs a given script in multiple shells, notably including /bin/sh.
/bin/sh, the system default shell, should not be assumed to support any features other than POSIX-prescribed ones, though in practice it does, to varying degrees, depending on the specific implementation. Therefore, successfully running via /bin/sh on one platform does not guarantee that the script will work on another. Among widely used shells, dash comes closest to being a POSIX-features-only shell.
Running successfully in multiple shells is important:
if you're authoring a script that needs to be sourced in various shells.
if you know that your script will encounter only a limited set of known-in-advance shells.
For a proof-of-the-pudding-is-in-the-eating approach, consider using shall (a utility I wrote), which allows you to invoke a given script or command with multiple shells at once, with feedback about which of the targeted shells the script/command executed successfully with.
If you have Node.js installed, you can easily install it with npm install -g shall (if not, follow the above link to the GitHub repo for manual installation instructions) and then use it as follows:
shall scriptFile
or, with an ad-hoc command:
shall -c '<shell-commands>'
By default, it invokes sh, and, if installed, dash, bash, zsh, and ksh, but you can target any set of shells that you have installed by using the SHELLS environment variable.
Using the example of the echo -n command on macOS to only target shells sh and bash:
$ SHELLS=sh,bash shall -c 'echo -n hi'
✓ sh (bash variant) [0.00s]
-n hi
✓ bash [0.00s]
hi
OK - All 2 shells (sh, bash) report success.
On macOS, bash (effectively) acts as sh, and while echo -n didn't fail outright when used with sh, you can see that -n wasn't recognized as an option there: it was printed literally.
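As an aside, the portable way to print without a trailing newline in any of these shells is printf, whose basic %s behaviour is specified by POSIX:

printf '%s' hi    # prints "hi" with no trailing newline, in sh, bash, dash, ksh, ...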
Another macOS example that shows that bash permits certain Bash-specific extensions even when running as sh, such as using nonstandard [[ ... ]] conditionals (assumes that dash - which acts as sh on Ubuntu systems - was installed via Homebrew):
$ SHELLS=sh,bash,dash shall -c '[[ -n nonempty ]] && echo nonempty'
✓ sh (bash variant) [0.00s]
nonempty
✓ bash [0.00s]
nonempty
✗ dash [0.01s]
dash: 1: [[: not found
FAILED - 1 shell (dash) reports failure, 2 (sh, bash) report success.
As you can see, Bash running as sh still accepted [[ ... ]], whereas dash, which is a (mostly) POSIX-features-only shell, failed, because POSIX only mandates [ ... ] conditionals (as an alias of test ... commands).
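For completeness, the portable way to write that test uses single brackets, which every shell above accepts:

[ -n nonempty ] && echo nonempty    # POSIX test; quote the operand ("$var") when it comes from a variable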
I've got a problem with ANSI escape codes in my terminal on OpenSuse 13.2.
My Makefile used to display pretty colors on OS X at work, but at home when I use it I get the literal escape codes, such as \033[1;30m ... \033[0m.
I know close to nothing about termcaps; I just found these escape characters, which seemed to be working fine! The strangest part is that both my OS X and Linux terminals are configured with TERM=xterm-256color, so I really don't know where to look for the correct setting I'm currently missing on Linux.
TL;DR: How do I get escape codes such as \033[1;30m working in Konsole with xterm-256color?
Edit: Here's a snippet of the Makefile I am talking about:
# Display settings
RED_L   = \033[1;31m
GREEN_L = \033[1;32m
GREEN   = \033[0;32m
BLUE    = \033[0;34m
RED     = \033[0;31m

all: $(OBJ_DIR) $(NAME)

$(OBJ_DIR):
	@mkdir -p $(OBJ_DIR)

$(NAME): $(OBJ)
	@echo "$(BLUE)Linking binary $(RED)$(NAME)$(BLUE).\n"
	@$(CC) -o $@ $^ $(LFLAGS)
	@echo "\t✻ $(GRAY)$(CC) -o $(RED)$(NAME)$(GRAY) object files:$(GREEN) OK! √\n$(NC)"
The example which you gave does not rely upon the setting of TERM (unless it is going someplace other than the terminal, e.g., via some program which interprets it, such as the ls program, which has its own notion about colors). It would help if you quoted the section of the makefile which uses the escape sequences. Without that, we can offer only generic advice, e.g., by assuming you have an echo command in the makefile.
The place to start looking is at the shell which your makefile uses. One would expect bash to be the default shell on OpenSUSE. But suppose you are actually using some other shell which happens to not recognize the syntax you are using, and trying to do something like
echo '\033[1;34mhello\033[m'
To help ensure that you are using the expected shell, you can put an assignment in your makefile, e.g.,
SHELL = /bin/sh
This assumes that /bin/sh itself is going to work as intended. However, that is commonly a symbolic link (on Linux) to the real shell. If so, one possible solution would be to use OpenSUSE's update-alternatives feature to change the real shell to bash (or zsh).
For additional information, see the discussion of SHELL in the GNU make manual.
Reflecting comments on the version of make -- GNU make 4.0 is known to have incompatible changes versus 3.81, as noted in the thread GNU Make 4.0 released on LWN.net. In particular, there are several comments relating to your problem, starting here.
However, checking a recent Fedora, it seems that the problem really is that the default behavior for echo has changed. As noted in other discussions (such as Why doesn't echo support “\e” (escape) when using the -e argument in MacOSX), this was done to improve POSIX compatibility. You can get your colors back by adding a -e option to the echo commands.
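In the Makefile above, the fixed recipe line would look like the first line below; the second line is a more portable alternative (not from the original answer) using printf '%b', which interprets backslash escapes regardless of which echo the shell provides:

	@echo -e "$(BLUE)Linking binary $(RED)$(NAME)$(BLUE).\n"
	@printf '%b\n' "$(BLUE)Linking binary $(RED)$(NAME)$(BLUE)."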
I finally found the solution:
The problem was that I used plain echo instead of echo -e; interpreting the escapes seems to be the default behaviour on Mac OS X.
Thanks for your help though, it led me to some good reading :)
You can use a semicolon in bash shell to specify multiple commands.
Sometimes, one of those commands asks a question, requiring user input (typically 'y'/'n', or whatever it may be).
If I know what I want to answer in advance, is there a way to pass it to the commands somehow, like an argument, or some weird magical pipe stuff?
You don't need any "weird magical pipe stuff", just a pipe.
./foo ; echo "y" | ./bar ; ./baz
Or magical herestring syntax if you prefer:
./foo ; ./bar <<<"y" ; ./baz
You can use the yes command to put a lot of 'y' 's to a pipe.
For example, if you want to remove all your text files, you can use
yes | rm -r *.txt
causing every question asked by rm to be answered with a y.
If you want another default answer, you can give it as an argument to yes:
yes n | rm -r *.txt
This will answer 'n' to every question instead.
For more information, see http://en.wikipedia.org/wiki/Yes_(Unix)
For the simple "yes" answer there is a command yes, available on most Unix and Linux platforms:
$ yes | /bin/rm -i *
For an advanced protocol you may want to check the famous Expect, also widely available. It needs basic knowledge of Tcl.
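A minimal Expect sketch (the command name and the prompt text are hypothetical; adapt them to the actual program and question):

#!/usr/bin/expect -f
spawn ./some-interactive-command    # hypothetical program that asks a question
expect "Are you sure? (y/n)"        # wait for the (hypothetical) prompt
send "y\r"                          # answer it
expect eof                          # let the program run to completion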
First, it's not bash popping these questions; it is the particular program being called (for instance, cp -i asks before overwriting files). Frequently those commands also have switches to pre-answer the questions, like -y for fsck, or the way -f overrides -i in rm. Many programs can be answered through a pipe, which is why we have the command "yes", but some go to extra lengths to ensure they cannot, for instance ssh when asking for passwords. Also, there's nothing magical about pipes. If the program only sometimes asks a question, and it matters which question, there are tools designed for that, such as "expect".
In a typical shell script, when you do know exactly what you want to feed in and the program accepts input on stdin, you could handle it using a plain pipe:
echo -e '2+2\n5*3' | bc
If it's a longer piece of input, then a here document might be helpful:
bc <<EOF
2+2
3*5
EOF
Sometimes a command provides an option to set the default answer to a question. One notable example is apt-get, the package manager for Debian/Ubuntu/Mint. It provides the options -y, --yes, --assume-yes to be used in non-interactive scripts.
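For example (the package name is hypothetical):

sudo apt-get -y install some-package    # -y answers the confirmation prompt automatically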