Is it possible, and if so how, to convert the following expression into a one-liner?
DEV=$(lsblk -no KNAME,MODEL | grep 'ModelNAME')
DEV=${DEV%%'ModelNAME'}
Simply writing DEV=${$(lsblk -no KNAME,MODEL | grep 'ModelNAME')%%'ModelNAME'} doesn't work.
zsh allows you to combine parameter expansions. Bash does not.
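For illustration, that nested form does work in zsh (a sketch, untested):
DEV=${$(lsblk -no KNAME,MODEL | grep ModelNAME)%%ModelNAME}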
For either bash or POSIX sh (both of which support this particular parameter expansion), you'll need to do this as two separate commands.
That said, there are other options available. For instance:
# tell awk to print first field and exit on a match
dev=$(lsblk -no KNAME,MODEL | awk '/ModelNAME/ { print $1; exit }')
...or, even easier (but requiring bash or another modern ksh derivative):
# read first field of first line returned by grep; _ is a placeholder for other fields
read -r dev _ < <(lsblk -no KNAME,MODEL | grep -e ModelNAME)
I'm learning about grep commands.
I want to make a program that, when a user enters more than one word, outputs the lines in a data file containing those words.
So I connected the words that the user typed with '|' and put them in the grep command to create the program I intended.
But this is an OR operation. I want an AND operation.
So I learned how to do an AND operation with grep as follows.
cat <file> | grep 'pattern1' | grep 'pattern2' | grep 'pattern3'
But I don't know how to put the user input in the 'pattern1', 'pattern2', 'pattern3' positions, because the number of words the user inputs is not fixed.
As user input increases, grep must be executed using more and more pipes, but I don't know how to build this part.
The user input is as follows:
$ [the name of my program] 'pattern1' 'pattern2' 'pattern3' ...
I'd really appreciate your help.
With grep -f you can grep for multiple patterns, each given on its own line in a file.
With <(command) you can let Bash think that the result of command is a file.
With printf "%s\n" and a list of arguments, each argument is printed on a new line.
Together:
grep -f <(printf "%s\n" "$@") datafile
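If that line is saved in a script, say matchwords (a hypothetical name), it would be invoked as:
./matchwords pattern1 pattern2 pattern3
Note, though, that grep -f matches lines containing any one of the patterns (an OR); for the AND behaviour asked about here, see the awk-based answers below.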
I suggest using awk's pattern logic:
awk '/RegExp-pattern-1/ && /RegExp-pattern-2/ && /RegExp-pattern-3/' input.txt
The advantages: you can combine the logical operators && and || on regexp patterns, and you scan the whole file only once.
The disadvantages: you must provide the file list (it can't traverse subdirectories), and awk's regexp syntax is limited compared to grep -P (it is essentially the same ERE as grep -E).
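To tie this to the variable user input from the question, the awk condition string can be built from the arguments at run time. A minimal sketch, assuming the patterns contain no slashes or quote characters and at least one pattern is given:
#!/bin/bash
# Build an awk condition like /p1/ && /p2/ && ... from the arguments
cond=""
for pat in "$@"; do
  cond="${cond:+$cond && }/$pat/"
done
awk "$cond" input.txt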
In principle, what you are asking could be done with a loop with output to a temporary file.
file=inputfile
temp=$(mktemp -d -t multigrep.XXXXXXXXX) || exit
trap 'rm -rf "$temp"' ERR EXIT
for regex in "$@"; do
    grep "$regex" "$file" >"$temp"/output
    mv "$temp"/output "$temp"/input
    file="$temp"/input
done
cat "$temp"/input
However, a better solution is probably to arrange for Awk to check for all the patterns in one go, and avoid reading the same lines over and over again.
Passing the arguments to Awk with quoting intact is not entirely trivial. Here, we simply pass them as command-line arguments and process those into an array within the Awk script itself.
awk 'BEGIN { for(i=1; i<ARGC; ++i) a[i]=ARGV[i];
    ARGV[1]="-"; ARGC=2 }
  { for(n=1; n<i; ++n) if ($0 !~ a[n]) next; } 1' "$@" <file
In brief, in the BEGIN block, we copy the command-line arguments from ARGV to a, then replace ARGV and ARGC to pass Awk a new array of (apparent) command-line arguments which consists of just - which means to read standard input. Then, we simply iterate over a and skip to the next line if the current input line from standard input does not match. Any remaining lines have matched all the patterns we passed in, and are thus printed.
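Wrapped in a small script around that command, with file pointing at the data file, say multigrep.sh (a hypothetical name), this would be invoked like the program described in the question:
./multigrep.sh pattern1 pattern2 pattern3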
I have a shell script of more than 1000 lines, and I would like to check whether all the commands used in the script are installed on my Linux operating system.
Is there any tool to get the list of Linux commands used in the shell script?
Or how can I write a small script which can do this for me?
The script runs successfully on an Ubuntu machine, where it is invoked as part of a C++ application. We need to run the same script on a device running a Linux with limited capabilities. I have manually identified a few commands which the script runs that are not present on the device OS. Before we try installing these commands, I would like to check all the other commands and install them all at once.
Thanks in advance
I already tried this in the past and came to the conclusion that it is very difficult to provide a solution which would work for all scripts. The reason is that each script with complex commands has a different approach to using the shell's features.
In case of a simple linear script, it might be as easy as using debug mode.
For example: bash -x script.sh 2>&1 | grep ^+ | awk '{print $2}' | sort -u
In case the script has some decisions, you might use the same approach and consider that for the "else" cases the commands would still be the same, just with different arguments, or would be something trivial (echo + exit).
In case of a complex script, I attempted to write a script that would look for commands in the same places I would look myself. The challenge is to create expressions that identify all the used possibilities; I would say this is doable for about 80-90% of the script, and the output should only be used as a reference, since it will contain invalid data (~20%).
Here is an example script that parses itself using a very simple approach (separate commands on different lines; the 1st word is taken as the command):
# 1. Eliminate all quoted text
# 2. Eliminate all comments
# 3. Replace all delimiters between commands with new lines ( ; | && || )
# 4. extract the command from 1st column and print it once
cat $0 \
| sed -e 's/\"/./g' -e "s/'[^']*'//g" -e 's/"[^"]*"//g' \
| sed -e "s/^[[:space:]]*#.*$//" -e "s/\([^\\]\)#[^\"']*$/\1/" \
| sed -e "s/&&/;/g" -e "s/||/;/g" | tr ";|" "\n\n" \
| awk '{print $1}' | sort -u
The output is:
.
/
/g.
awk
cat
sed
sort
tr
There are many more cases to consider (command substitutions, aliases etc.); steps 1, 2 and 3 are just the beginning, but they would still cover 80% of most complex scripts.
The regular expressions used would need to be adjusted or extended to increase precision and cover special cases.
In conclusion, if you really need something like this, you can write a script as above, but don't trust the output until you verify it yourself.
Add export PATH='' to the second line of your script. With an empty PATH, every external command fails, and the shell's error message names the missing command.
Then execute your_script.sh 2>&1 > /dev/null | grep 'No such file or directory' | awk '{print $4;}' | grep -v '/' | sort | uniq | sed 's/.$//'
If you have a Fedora/Red Hat based system, bash has been patched with the --rpm-requires flag:
--rpm-requires: Produce the list of files that are required for the shell script to run. This implies -n and is subject to the same limitations as compile-time error checking; command substitutions, conditional expressions and the eval builtin are not parsed, so some dependencies may be missed.
So when you run the following:
$ bash --rpm-requires script.sh
executable(command1)
function(function1)
function(function2)
executable(command2)
function(function3)
There are some limitations here:
command and process substitutions and conditional expressions are not picked up. So the following are ignored:
$(command)
<(command)
>(command)
command1 && command2 || command3
commands as strings are not picked up. So the following line will be ignored
"/path/to/my/command"
commands that contain shell variables are not listed. This generally makes sense since
some might be the result of some script logic, but even the following is ignored
$HOME/bin/command
This point can however be bypassed by using envsubst and running it as
$ bash --rpm-requires <(<script envsubst)
However, if you use shellcheck, you most likely quoted this, and it will still be ignored due to point 2.
So if you want to check whether the commands your script uses are all there, you can do something like:
while IFS='' read -r app; do
    [ "${app%%(*}" == "executable" ] || continue
    app="${app#*(}"; app="${app%)}"
    if [ "$(type -t "${app}")" != "builtin" ] && \
       ! [ -x "$(command -v "${app}")" ]
    then
        echo "${app}: missing application"
    fi
done < <(bash --rpm-requires <(<"$0" envsubst) )
If your script sources files that might contain various functions and other important definitions, you might want to do something like:
bash --rpm-requires <(cat source1 source2 ... <(<script.sh envsubst))
Based on @czvtools’ answer, I added some extra checks to filter out bad values:
#!/usr/bin/fish
if test "$argv[1]" = ""
    echo "Give path to command to be tested"
    exit 1
end
set commands (cat $argv \
    | sed -e 's/\"/./g' -e "s/'[^']*'//g" -e 's/"[^"]*"//g' \
    | sed -e "s/^[[:space:]]*#.*\$//" -e "s/\([^\\]\)#[^\"']*\$/\1/" \
    | sed -e "s/&&/;/g" -e "s/||/;/g" | tr ";|" "\n\n" \
    | awk '{print $1}' | sort -u)
for command in $commands
    if command -q -- $command
        set -a resolved (realpath (which $command))
    end
end
set resolved (string join0 $resolved | sort -z -u | string split0)
for command in $resolved
    echo $command
end
I am trying to just echo a command within my bash script code.
OVERRUN_ERRORS="$ifconfig | egrep -i "RX errors" | awk '{print $7}'"
echo ${OVERRUN_ERRORS}
However, it gives me an error and the $7 does not show up in the command. I have to store it in a variable, because I will process the output (OVERRUN_ERRORS) at a later point in time. What's the right syntax for doing this? Thanks.
On Bash Syntax
foo="bar | baz"
...is assigning the string "bar | baz" to the variable named foo; it doesn't run bar | baz as a pipeline. To do that, you want to use command substitution, in either its modern $() syntax or antiquated backtick-based form:
foo="$(bar | baz)"
On Storing Code For Later Execution
Since your intent isn't clear in the question --
The correct way to store code is with a function, whereas the correct way to store output is in a string:
# store code in a function; this also works with pipelines
get_rx_errors() { cat /sys/class/net/"$1"/statistics/rx_errors; }
# store result of calling that function in a string
eth0_errors="$(get_rx_errors eth0)"
sleep 1 # wait a second for demonstration purposes, then...
# compare: echoing the stored value, vs calculating a new value
echo "One second ago, the number of rx errors was ${eth0_errors}"
echo "Right now, it is $(get_rx_errors eth0)"
See BashFAQ #50 for an extended discussion of the pitfalls of storing code in a string, and alternatives to same. Also relevant is BashFAQ #48, which describes in detail the security risks associated with eval, which is often suggested as a workaround.
On Collecting Interface Error Counts
Don't use ifconfig, or grep, or awk for this at all -- just ask your kernel for the number you want:
#!/bin/bash
for device in /sys/class/net/*; do
[[ -e $device/statistics/rx_errors ]] || continue
rx_errors=$(<"${device}/statistics/rx_errors")
echo "Number of rx_errors for ${device##*/} is $rx_errors"
done
Use $(...) to capture the output of a command, not double quotes.
overrun_errors=$(ifconfig | egrep -i "RX errors" | awk '{print $7}')
Your double quotes around RX errors are a problem, and you need command substitution to actually run the pipeline. Try:
OVERRUN_ERRORS="$(ifconfig | egrep -i 'RX errors' | awk '{print $7}')"
To see the commands as they are executing, you can use
set -v
or
set -x
For example:
set -x
OVERRUN_ERRORS="$(ifconfig | egrep -i 'RX errors' | awk '{print $7}')"
set +x
I have the following bash script, which takes tabular data as input, gets the first line, and prints its fields vertically:
#!/bin/bash
# my_script.sh
export LC_ALL=C
file=$1
head -n1 $file |
tr "\t" "\n" |
awk '{print $1 " " NR-1}'
The problem is that I can only execute it this way:
$ myscript.sh some_tab_file.txt
What I want, on top of the above capability, is to also allow this:
$ cat some_tab_file.txt | myscript.sh
Namely, to take the input from a pipe. How can I achieve that?
I'd normally write:
export LC_ALL=C
head -n1 "$@" |
tr "\t" "\n" |
awk '{print $1 " " NR-1}'
This works with any number of file arguments, or with none. Using "$@" is important in this and many other contexts; see the Bash manual on special parameters and shell parameter expansion for more information on the many and varied notations available for controlling how shell parameters are handled. Generally, double quotes are a good idea, especially if the file names may contain spaces.
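A quick demonstration of the difference quoting makes (the file names are made up):
set -- "file with spaces.txt" other.txt   # simulate two script arguments
for f in "$@"; do echo "<$f>"; done       # 2 items: spaces preserved
for f in $@; do echo "<$f>"; done         # 4 items: split on whitespace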
A common idiom is to fall back to standard input (the file name -) if there are no parameters. There is a convenient shorthand for that:
file=${1--}
The substitution ${variable-fallback} evaluates to the variable's value, or fallback if it's unset.
I believe your script should work as-is, though; head will read standard input if the (unquoted!) file name you pass in evaluates to the empty string.
Take care to properly double-quote all interpolations of "$file", by the way; otherwise, your script won't work on file names containing spaces or shell metacharacters. (Quoting does, however, break the fortunate side effect of not passing a file name to head when your script receives none.)
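Putting the idiom and the quoting together, the script might look like this (a sketch; the fallback relies on head treating - as standard input, as GNU head does):
#!/bin/bash
# my_script.sh: read the named file, or standard input when no argument is given
export LC_ALL=C
file=${1--}
head -n1 -- "$file" |
tr "\t" "\n" |
awk '{print $1 " " NR-1}'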
I want to use echo to display (not their contents) the directory entries whose names have at least 2 characters but don't begin with "an".
For example if had the following in the directory:
a as an23 an23 blue
I would only get
as blue back
I tried echo ^an* but that returns the names with 1 character too.
Is there any way I can do this in the form echo globpattern?
You can use the shell's extended globbing feature; in bash:
bash$ shopt -s extglob
bash$ echo !(?|an*)
The !() construct matches anything except its internal pattern; see the bash manual on pattern matching for more.
In zsh:
zsh$ setopt extendedglob
zsh$ print *~(?|an*)
In this case the ~ excludes matches of the pattern after the tilde. See the zsh manual for more.
Since you want at least two characters in the names, you can use printf '%s\n' ??* to echo each such name on a separate line. You can then eliminate those names that start with an with grep -v '^an', leading to:
printf '%s\n' ??* | grep -v '^an'
The quotes aren't strictly necessary in the grep command with modern shells. Once upon a quarter of a century or so ago, the Bourne shell had ^ as a synonym for | so I still use quotes around carets.
If you absolutely must use echo instead of printf, then you'll have to map white space to newlines (assuming you don't have any names that contain white space).
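For instance (assuming none of the names contain whitespace):
echo ??* | tr ' ' '\n' | grep -v '^an'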
I'm trying to do this with just the echo command, no grep either?
What about:
echo [!a]?* a[!n]*
The first term lists all the two-plus character names not beginning with a; the second lists all the two-plus character names where the first is a and the second is not n.
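Assuming the directory contains just a, as, an23 and blue, this prints (in pattern order):
$ echo [!a]?* a[!n]*
blue as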
This should do it, but you'd likely be better off with ls or even find:
echo * | tr ' ' '\012' | egrep '..' | egrep -v '^an'
Shell globbing is a pattern language related to regexes, but it's not as powerful as egrep's regexes.