How to get the command line args passed to a running process on unix/linux systems? - linux

On SunOS there is pargs command that prints the command line arguments passed to the running process.
Is there any similar command on other Unix environments?

There are several options:
ps -fp <pid>
cat /proc/<pid>/cmdline | sed -e "s/\x00/ /g"; echo
There is more info in /proc/<pid> on Linux, just have a look.
On other Unixes things might be different. The ps command will work everywhere, the /proc stuff is OS specific. For example on AIX there is no cmdline in /proc.
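A minimal sketch of a portable helper along those lines (the function name and the fallback order are my own, not from the thread):
# cmdline_of <pid>: print the full command line of a process.
# Prefer /proc where it exists, fall back to POSIX ps elsewhere.
cmdline_of() {
    pid=$1
    if [ -r "/proc/$pid/cmdline" ]; then
        # arguments are NUL-separated; turn the separators into spaces
        tr '\0' ' ' < "/proc/$pid/cmdline"
        echo
    else
        ps -p "$pid" -o args=
    fi
}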

This will do the trick:
xargs -0 < /proc/<pid>/cmdline
Without the xargs there would be no spaces between the arguments, because in cmdline they are separated by NUL bytes rather than spaces.
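For example (illustrative run):
$ sleep 300 &
$ cat /proc/$!/cmdline; echo    # the NUL separators are invisible, so it prints "sleep300"
sleep300
$ xargs -0 < /proc/$!/cmdline   # NULs become argument boundaries again
sleep 300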

Full commandline
On Linux and other Unix systems you can use ps -ef | grep process_name to get the full command line.
On SunOS systems, if you want the full command line, you can use
/usr/ucb/ps -auxww | grep -i process_name
To get the full command line you need to become super user.
List of arguments
pargs -a PROCESS_ID
will give a detailed list of arguments passed to a process. It will output the array of arguments like this:
argv[0]: first argument
argv[1]: second..
argv[*]: and so on..
I didn't find an exactly equivalent command for Linux, but I would use the following to get similar output:
tr '\0' '\n' < /proc/<pid>/cmdline
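A small sketch to mimic the numbered pargs output on Linux (1234 is a placeholder PID):
pid=1234
tr '\0' '\n' < /proc/$pid/cmdline | awk '{ printf "argv[%d]: %s\n", NR-1, $0 }'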

You can use pgrep with -f (full command line) and -l (long description):
pgrep -l -f PatternOfProcess
This method has a crucial difference from the other answers: it works on Cygwin, so you can use it to obtain the full command line of any process running under Windows (execute as elevated if you want data about an elevated/admin process); any other method for doing this on Windows is more awkward.
Furthermore: in my tests, the pgrep way has been the only one that worked to obtain the full path for scripts running inside Cygwin's Python.
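For instance (hypothetical process and output):
$ pgrep -l -f 'python.*script'
4242 /usr/bin/python3 /home/me/script.py --port 8080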

On Linux
cat /proc/<pid>/cmdline
outputs the commandline of the process <pid> (command including args) each record terminated by a NUL character.
A Bash Shell Example:
$ mapfile -d '' args < /proc/$$/cmdline
$ echo "#${#args[#]}:" "${args[#]}"
#1: /bin/bash
$ echo $BASH_VERSION
5.0.17(1)-release

Another variant of printing /proc/PID/cmdline with spaces in Linux is:
cat -v /proc/PID/cmdline | sed 's/\^@/ /g' && echo
In this way cat prints NUL characters as ^@ and then you replace them with a space using sed; echo prints a newline.

Rather than using multiple commands to edit the stream, just use one - tr translates one character to another:
tr '\0' ' ' </proc/<pid>/cmdline

ps -eo pid,args prints the PID and the full command line.

You can simply use:
ps -o args= -f -p ProcessPid

In addition to all the above ways to convert the text, if you simply use strings, it will put the output on separate lines by default, with the added benefit that it may also prevent characters that could scramble your terminal from appearing.
Both output in one command:
strings /proc/<pid>/cmdline /proc/<pid>/environ
The real question is... is there a way to see the real command line of a process in Linux that has been altered so that the cmdline contains the altered text instead of the actual command that was run.

On Solaris
ps -eo pid,comm
The same can be used on other Unix-like systems (use args instead of comm if you want the full arguments rather than just the command name).

On Linux, with bash, to output as quoted args so you can edit the command and rerun it
</proc/"${pid}"/cmdline xargs --no-run-if-empty -0 -n1 \
bash -c 'printf "%q " "${1}"' /dev/null; echo
On Solaris, with bash (tested with 3.2.51(1)-release) and without gnu userland:
IFS=$'\002' tmpargs=( $( pargs "${pid}" \
| /usr/bin/sed -n 's/^argv\[[0-9]\{1,\}\]: //gp' \
| tr '\n' '\002' ) )
for tmparg in "${tmpargs[@]}"; do
printf "%q " "$( echo -e "${tmparg}" )"
done; echo
Linux bash Example (paste in terminal):
{
## setup initial args
argv=( /bin/bash -c '{ /usr/bin/sleep 10; echo; }' /dev/null 'BEGIN {system("sleep 2")}' "this is" \
"some" "args "$'\n'" that" $'\000' $'\002' "need" "quot"$'\t'"ing" )
## run in background
"${argv[#]}" &
## recover into eval string that assigns it to argv_recovered
eval_me=$(
printf "argv_recovered=( "
</proc/"${!}"/cmdline xargs --no-run-if-empty -0 -n1 \
bash -c 'printf "%q " "${1}"' /dev/null
printf " )\n"
)
## do eval
eval "${eval_me}"
## verify match
if [ "$( declare -p argv )" == "$( declare -p argv_recovered | sed 's/argv_recovered/argv/' )" ];
then
echo MATCH
else
echo NO MATCH
fi
}
Output:
MATCH
Solaris Bash Example:
{
## setup initial args
argv=( /bin/bash -c '{ /usr/bin/sleep 10; echo; }' /dev/null 'BEGIN {system("sleep 2")}' "this is" \
"some" "args "$'\n'" that" $'\000' $'\002' "need" "quot"$'\t'"ing" )
## run in background
"${argv[#]}" &
pargs "${!}"
ps -fp "${!}"
declare -p tmpargs
eval_me=$(
printf "argv_recovered=( "
IFS=$'\002' tmpargs=( $( pargs "${!}" \
| /usr/bin/sed -n 's/^argv\[[0-9]\{1,\}\]: //gp' \
| tr '\n' '\002' ) )
for tmparg in "${tmpargs[@]}"; do
printf "%q " "$( echo -e "${tmparg}" )"
done; echo
printf " )\n"
)
## do eval
eval "${eval_me}"
## verify match
if [ "$( declare -p argv )" == "$( declare -p argv_recovered | sed 's/argv_recovered/argv/' )" ];
then
echo MATCH
else
echo NO MATCH
fi
}
Output:
MATCH

If you want the command line as long as possible (not sure what limits there are), similar to Solaris' pargs, you can use this on Linux & OS X:
ps -ww -o pid,command [-p <pid> ... ]

Try ps -n in a Linux terminal. This will show:
1. All running processes, their command lines and their PIDs
2. The program that initiated the processes
Afterwards you will know which process to kill.

Related

Find the current shell of the user using a shell script [duplicate]

How can I determine the current shell I am working on?
Would the output of the ps command alone be sufficient?
How can this be done in different flavors of Unix?
There are three approaches to finding the name of the current shell's executable:
Please note that all three approaches can be fooled if the executable of the shell is /bin/sh, but it's really a renamed bash, for example (which frequently happens).
Thus your second question of whether ps output will do is answered with "not always".
echo $0 - will print the program name... which in the case of the shell is the actual shell.
ps -ef | grep $$ | grep -v grep - this will look for the current process ID in the list of running processes. Since the current process is the shell, it will be included.
This is not 100% reliable, as you might have other processes whose ps listing includes the same number as shell's process ID, especially if that ID is a small number (for example, if the shell's PID is "5", you may find processes called "java5" or "perl5" in the same grep output!). This is the second problem with the "ps" approach, on top of not being able to rely on the shell name.
echo $SHELL - The path to the current shell is stored as the SHELL variable for any shell. The caveat for this one is that if you launch a shell explicitly as a subprocess (for example, it's not your login shell), you will get your login shell's value instead. If that's a possibility, use the ps or $0 approach.
If, however, the executable doesn't match your actual shell (e.g. /bin/sh is actually bash or ksh), you need heuristics. Here are some environmental variables specific to various shells:
$version is set on tcsh
$BASH is set on bash
$shell (lowercase) is set to actual shell name in csh or tcsh
$ZSH_NAME is set on zsh
ksh has $PS3 and $PS4 set, whereas the normal Bourne shell (sh) only has $PS1 and $PS2 set. This generally seems like the hardest to distinguish - the only difference in the entire set of environment variables between sh and ksh we have installed on Solaris boxen is $ERRNO, $FCEDIT, $LINENO, $PPID, $PS3, $PS4, $RANDOM, $SECONDS, and $TMOUT.
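A rough sketch that strings those checks together for Bourne-family shells (my own illustration; csh/tcsh need their own syntax, and the ksh test is only the weak PS3 heuristic described above):
# Heuristic only: relies on shell-specific variables, so it can be fooled.
if   [ -n "${BASH:-}" ];     then echo "bash ($BASH_VERSION)"
elif [ -n "${ZSH_NAME:-}" ]; then echo "zsh"
elif [ -n "${PS3:-}" ];      then echo "probably ksh (weak PS3 heuristic)"
else                              echo "probably a plain Bourne sh"
fi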
ps -p $$
should work anywhere that the solutions involving ps -ef and grep do (on any Unix variant which supports POSIX options for ps) and will not suffer from the false positives introduced by grepping for a sequence of digits which may appear elsewhere.
Try
ps -p $$ -oargs=
or
ps -p $$ -ocomm=
If you just want to ensure the user is invoking a script with Bash:
if [ -z "$BASH" ]; then echo "Please run this script $0 with bash"; exit; fi
or ref
if [ -z "$BASH" ]; then exec bash $0 ; exit; fi
You can try:
ps | grep `echo $$` | awk '{ print $4 }'
Or:
echo $SHELL
$SHELL need not always show the current shell. It only reflects the default shell to be invoked.
To test the above, say bash is the default shell, try echo $SHELL, and then in the same terminal, get into some other shell (KornShell (ksh) for example) and try $SHELL. You will see the result as bash in both cases.
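A hypothetical transcript of that test:
$ echo $SHELL        # in the login bash
/bin/bash
$ ksh                # start a Korn shell in the same terminal
$ echo $SHELL        # still reports the login shell, not ksh
/bin/bash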
To get the name of the current shell, use cat /proc/$$/cmdline, and get the path to the shell executable with readlink /proc/$$/exe.
There are many ways to find out the shell and its corresponding version. Here are few which worked for me.
Straightforward
$> echo $0 (Gives you the program name. In my case the output was -bash.)
$> $SHELL (This takes you into the shell and in the prompt you get the shell name and version. In my case bash3.2$.)
$> echo $SHELL (This will give you executable path. In my case /bin/bash.)
$> $SHELL --version (This will give complete info about the shell software with license type)
Hackish approach
$> ******* (Type a set of random characters and in the output you will get the shell name. In my case -bash: chapter2-a-sample-isomorphic-app: command not found)
ps is the most reliable method. The SHELL environment variable is not guaranteed to be set and even if it is, it can be easily spoofed.
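A quick illustration of the spoofing point (transcript is illustrative):
$ SHELL=/bin/zsh bash -c 'echo "$SHELL"; ps -p $$ -o comm='
/bin/zsh
bash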
I have a simple trick to find the current shell. Just type a random string (which is not a command). It will fail and return a "not found" error, but at start of the line it will say which shell it is:
ksh: aaaaa: not found [No such file or directory]
bash: aaaaa: command not found
I have tried many different approaches and the best one for me is:
ps -p $$
It also works under Cygwin and cannot produce false positives as PID grepping. With some cleaning, it outputs just an executable name (under Cygwin with path):
ps -p $$ | tail -1 | awk '{print $NF}'
You can create a function so you don't have to memorize it:
# Print currently active shell
shell () {
ps -p $$ | tail -1 | awk '{print $NF}'
}
...and then just execute shell.
It was tested under Debian and Cygwin.
The following will always give the actual shell used - it gets the name of the actual executable and not the shell name (i.e. ksh93 instead of ksh, etc.). For /bin/sh, it will show the actual shell used, i.e. dash.
ls -l /proc/$$/exe | sed 's%.*/%%'
I know that there are many who say the ls output should never be processed, but what is the probability you'll have a shell you are using that is named with special characters or placed in a directory named with special characters? If this is still the case, there are plenty of other examples of doing it differently.
As pointed out by Toby Speight, this would be a more proper and cleaner way of achieving the same:
basename $(readlink /proc/$$/exe)
My variant on printing the parent process:
ps -p $$ | awk '$1 == PP {print $4}' PP=$$
Don't run unnecessary applications when AWK can do it for you.
Provided that your /bin/sh supports the POSIX standard and your system has the lsof command installed - a possible alternative to lsof could in this case be pid2path - you can also use (or adapt) the following script that prints full paths:
#!/bin/sh
# cat /usr/local/bin/cursh
set -eu
pid="$$"
set -- sh bash zsh ksh ash dash csh tcsh pdksh mksh fish psh rc scsh bournesh wish Wish login
unset echo env sed ps lsof awk getconf
# getconf _POSIX_VERSION # reliable test for availability of POSIX system?
PATH="`PATH=/usr/bin:/bin:/usr/sbin:/sbin getconf PATH`"
[ $? -ne 0 ] && { echo "'getconf PATH' failed"; exit 1; }
export PATH
cmd="lsof"
env -i PATH="${PATH}" type "$cmd" 1>/dev/null 2>&1 || { echo "$cmd not found"; exit 1; }
awkstr="`echo "$#" | sed 's/\([^ ]\{1,\}\)/|\/\1/g; s/ /$/g' | sed 's/^|//; s/$/$/'`"
ppid="`env -i PATH="${PATH}" ps -p $pid -o ppid=`"
[ "${ppid}"X = ""X ] && { echo "no ppid found"; exit 1; }
lsofstr="`lsof -p $ppid`" ||
{ printf "%s\n" "lsof failed" "try: sudo lsof -p \`ps -p \$\$ -o ppid=\`"; exit 1; }
printf "%s\n" "${lsofstr}" |
LC_ALL=C awk -v var="${awkstr}" '$NF ~ var {print $NF}'
My solution:
ps -o command | grep -v -e "\<ps\>" -e grep -e tail | tail -1
This should be portable across different platforms and shells. It uses ps like other solutions, but it doesn't rely on sed or awk and filters out junk from piping and ps itself so that the shell should always be the last entry. This way we don't need to rely on non-portable PID variables or picking out the right lines and columns.
I've tested on Debian and macOS with Bash, Z shell (zsh), and fish (which doesn't work with most of these solutions without changing the expression specifically for fish, because it uses a different PID variable).
If you just want to check that you are running (a particular version of) Bash, the best way to do so is to use the $BASH_VERSINFO array variable. As a (read-only) array variable it cannot be set in the environment,
so you can be sure it is coming (if at all) from the current shell.
However, since Bash has a different behavior when invoked as sh, you do also need to check the $BASH environment variable ends with /bash.
In a script I wrote that uses function names with - (not underscore), and depends on associative arrays (added in Bash 4), I have the following sanity check (with helpful user error message):
case `eval 'echo $BASH#${BASH_VERSINFO[0]}' 2>/dev/null` in
*/bash#[456789])
# Claims bash version 4+, check for func-names and associative arrays
if ! eval "declare -A _ARRAY && func-name() { :; }" 2>/dev/null; then
echo >&2 "bash $BASH_VERSION is not supported (not really bash?)"
exit 1
fi
;;
*/bash#[123])
echo >&2 "bash $BASH_VERSION is not supported (version 4+ required)"
exit 1
;;
*)
echo >&2 "This script requires BASH (version 4+) - not regular sh"
echo >&2 "Re-run as \"bash $CMD\" for proper operation"
exit 1
;;
esac
You could omit the somewhat paranoid functional check for features in the first case, and just assume that future Bash versions would be compatible.
None of the answers worked with fish shell (it doesn't have the variables $$ or $0).
This works for me (tested on sh, bash, fish, ksh, csh, true, tcsh, and zsh; openSUSE 13.2):
ps | tail -n 4 | sed -E '2,$d;s/.* (.*)/\1/'
This command outputs a string like bash. Here I'm only using ps, tail, and sed (without GNU extensions; try adding --posix to check it). They are all standard POSIX commands. I'm sure tail can be removed, but my sed fu is not strong enough to do this.
It seems to me, that this solution is not very portable as it doesn't work on OS X. :(
echo $$ # Gives the current shell's process ID (the parent of the commands you run)
ps -ef | grep $$ | awk '{print $8}' # Use the PID to see what the process is.
From How do you know what your current shell is?.
This is not a very clean solution, but it does what you want.
# MUST BE SOURCED..
getshell() {
local shell="`ps -p $$ | tail -1 | awk '{print $4}'`"
shells_array=(
# It is important that the shells are listed in descending order of their name length.
pdksh
bash dash mksh
zsh ksh
sh
)
local suited=false
for i in ${shells_array[*]}; do
if ! [ -z `printf $shell | grep $i` ] && ! $suited; then
shell=$i
suited=true
fi
done
echo $shell
}
getshell
Now you can use $(getshell) --version.
This works, though, only on KornShell-like shells (ksh).
Do the following to know whether your shell is using Dash/Bash.
ls -la /bin/sh:
if the result is /bin/sh -> /bin/bash ==> Then your shell is using Bash.
if the result is /bin/sh ->/bin/dash ==> Then your shell is using Dash.
If you want to change from Bash to Dash or vice-versa, use the below code:
ln -s /bin/bash /bin/sh (change shell to Bash)
Note: If the above command results in an error saying /bin/sh already exists, remove /bin/sh and try again.
I like Nahuel Fouilleul's solution particularly, but I had to run the following variant of it on Ubuntu 18.04 (Bionic Beaver) with the built-in Bash shell:
bash -c 'shellPID=$$; ps -ocomm= -q $shellPID'
Without the temporary variable shellPID, e.g. the following:
bash -c 'ps -ocomm= -q $$'
Would just output ps for me. Maybe you aren't all using non-interactive mode, and that makes a difference.
Get it with the $SHELL environment variable. A simple sed could remove the path:
echo $SHELL | sed -E 's/^.*\/([aA-zZ]+$)/\1/g'
Output:
bash
It was tested on macOS, Ubuntu, and CentOS.
On Mac OS X (and FreeBSD):
ps -p $$ -axco command | sed -n '$p'
Grepping PID from the output of "ps" is not needed, because you can read the respective command line for any PID from the /proc directory structure:
echo $(cat /proc/$$/cmdline)
However, that might not be any better than just simply:
echo $0
About running an actually different shell than the name indicates, one idea is to request the version from the shell using the name you got previously:
<some_shell> --version
sh seems to fail with exit code 2 while others give something useful (but I am not able to verify all since I don't have them):
$ sh --version
sh: 0: Illegal option --
echo $?
2
One way is:
ps -p $$ -o exe=
which is IMO better than using -o args or -o comm as suggested in another answer (those may report a symbolic link name, e.g. /bin/sh, even when it actually points to a specific shell such as Dash or Bash).
The above returns the path of the executable, but beware that due to /usr-merge, one might need to check for multiple paths (e.g., /bin/bash and /usr/bin/bash).
Also note that the above is not fully POSIX-compatible (POSIX ps doesn't have exe).
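One way to cope with the multiple-path issue (my own sketch; readlink -f is GNU/BusyBox, not POSIX, and -o exe needs a recent procps):
# resolve both sides so /bin/bash and /usr/bin/bash compare equal on usr-merged systems
exe=$(ps -p $$ -o exe=)
if [ "$(readlink -f "$exe")" = "$(readlink -f /bin/bash)" ]; then
    echo "this shell is bash"
fi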
Kindly use the below command:
ps -p $$ | tail -1 | awk '{print $4}'
This one works well on Red Hat Linux (RHEL), macOS, BSD and some AIXes:
ps -T $$ | awk 'NR==2{print $NF}'
alternatively, the following one should also work if pstree is available,
pstree | egrep $$ | awk 'NR==2{print $NF}'
You can use echo $SHELL|sed "s/\/bin\///g"
And I came up with this:
sed 's/.*SHELL=//; s/[[:upper:]].*//' /proc/$$/environ

How can I get the command that was executed at the command line?

If I call a script this way:
myScript.sh -a something -b anotherSomething
Within my script is there a way to get the command that called the script?
In my script on the first line I'm trying to use:
lastCommand=!!
echo $lastCommand
But the result is always null.
If I do echo !! the only thing that prints to the console is !!, but from the command line if I do echo !! I get the last command printed.
I've also tried:
echo $BASH_COMMAND
but I'm getting null here as well. Is it because the script is called in a subshell and thus there is no previous command stored in memory for the subshell?
The full command which called the script would be "$0" "$@", that is, the command itself followed by all the arguments quoted. This may not be the exact command which was run, but if the script is idempotent it can be run to get the same result:
$ cat myScript.sh
#!/usr/bin/env bash
printf '%q ' "$0" "$#"
printf '\n'
$ ./myScript.sh -a "foo bar" -b bar
./myScript.sh -a foo\ bar -b bar
Here's my script myScript.sh
#!/bin/bash
temp=`mktemp`
ps --pid $BASHPID -f > $temp
lastCommand=`tail -n 1 $temp | xargs | cut -d ' ' -f 8-`
rm $temp
echo $lastCommand
or
#!/bin/sh
last=`cat /proc/$$/cmdline | xargs -0`
echo $last

Find and highlight text in linux command line

I am looking for a linux command that searches a string in a text file,
and highlights (colors) it on every occurence in the file, WITHOUT omitting text lines (like grep does).
I wrote this handy little script. It could probably be expanded to handle args better
#!/bin/bash
if [ "$1" == "" ]; then
echo "Usage: hl PATTERN [FILE]..."
elif [ "$2" == "" ]; then
grep -E --color "$1|$" /dev/stdin
else
grep -E --color "$1|$" $2
fi
it's useful for stuff like highlighting users running processes:
ps -ef | hl "alice|bob"
Try
tail -f yourfile.log | egrep --color 'DEBUG|'
where DEBUG is the text you want to highlight.
command | grep -iz -e "keyword1" -e "keyword2" (drop the second -e expression if you are only searching for a single word; -i ignores case, -z treats the whole input as a single line so no lines are omitted)
Alternatively, while reading files:
grep -iz -e "keyword1" -e "keyword2" 'filename'
OR
command | grep -A 99999 -B 99999 -i -e "keyword1" -e "keyword2" (drop the second -e expression if you are only searching for a single word; -i ignores case, -A and -B set the number of lines before/after the keyword to be displayed)
Alternatively, while reading files:
grep -A 99999 -B 99999 -i -e "keyword1" -e "keyword2" 'filename'
command ack with --passthru switch:
ack --passthru pattern path/to/file
I take it you meant "without omitting text lines" (instead of emitting)...
I know of no such command, but you can use a script such as this (this one is a simple solution that takes the filename (without spaces) as the first argument and the search string (also without spaces) as the second):
#!/usr/bin/env bash
ifs_store=$IFS;
IFS=$'\n';
for line in $(cat $1);
do if [ $(echo $line | grep -c $2) -eq 0 ]; then
echo $line;
else
echo $line | grep --color=always $2;
fi
done
IFS=$ifs_store
save as, for instance colorcat.sh, set permissions appropriately (to be able to execute it) and call it as
colorcat.sh filename searchstring
I had a requirement like this recently and hacked up a small program to do exactly this. Link
Usage: ./highlight test.txt '^foo' 'bar$'
Note that this is very rough, but could be made into a general tool with some polishing.
Using dwdiff, output differences with colors and line numbers.
echo "Hello world # $(date)" > file1.txt
echo "Hello world # $(date)" > file2.txt
dwdiff -c -C 0 -L file1.txt file2.txt

bash - errors trying to pipe commands to run to separate function

I'm trying to get this function for parallelizing my bash scripts to work. The idea is simple: instead of running each command sequentially, I pipe the commands I want to run to this function, and it does a while read line loop, runs the jobs in the background for me, and takes care of the logistics... it doesn't work though. I added set -x around where stuff's executed, and it looks like I'm getting weird quotes around the stuff I want executed... what should I do?
runParallel () {
while read line
do
while [ "`jobs | wc -l`" -eq 8 ]
do
sleep 2
done
{
set -x
${line}
set +x
} &
done
while [ "`jobs | wc -l`" -gt 0 ]
do
sleep 1
jobs >/dev/null 2>/dev/null
echo sleeping
done
}
for H in `ypcat hosts | grep fmez | grep -v mgmt | cut -d\ -f2 | sort -u`
do
echo 'ping -q -c3 $H 2>/dev/null 1>/dev/null && echo $H - UP || echo $H - DOWN'
done | runParallel
When I run it, I get output like the following:
> ./myscript.sh
+ ping -q -c3 '$H' '2>/dev/null' '1>/dev/null' '&&' echo '$H' - UP '||' echo '$H' - DOWN
Usage: ping [-LRUbdfnqrvVaA] [-c count] [-i interval] [-w deadline]
[-p pattern] [-s packetsize] [-t ttl] [-I interface or address]
[-M mtu discovery hint] [-S sndbuf]
[ -T timestamp option ] [ -Q tos ] [hop1 ...] destination
+ set +x
sleeping
>
The quotes in the set -x output are not the problem, at most they are another result of the problem. The main problem is that ${line} is not the same as eval ${line}.
When a variable is expanded, the resulting words are not treated as shell reserved constructs. And this is expected, it means that eg.
A="some text containing > ; && and other weird stuff"
echo $A
does not shout about invalid syntax but prints the variable value.
But in your function it means that all the words in ${line}, including 2>/dev/null and the like, are passed as arguments to ping, which set -x output nicely shows, and so ping complains.
If you want to execute from variables complicated commandlines with redirections and conditionals, you will have to use eval.
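A tiny illustration of the difference (a sketch; remember that eval executes whatever is in the variable, with all the usual injection caveats):
line='echo hello > /tmp/out && cat /tmp/out'
$line         # echo receives the literal words: hello > /tmp/out && cat /tmp/out
eval "$line"  # the redirection and && are interpreted; prints hello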
If I'm understanding this correctly, you probably don't want single quotes in your echo command. Single quotes are literal strings, and don't interpret your bash variable $H.
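For example (hypothetical value for $H):
H=somehost
echo 'ping -c3 $H'   # prints: ping -c3 $H        (single quotes keep $H literal)
echo "ping -c3 $H"   # prints: ping -c3 somehost  (double quotes expand $H)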
Like many users of GNU Parallel you seem to have written your own parallelizer.
If you have GNU Parallel http://www.gnu.org/software/parallel/ installed you can do this:
cat hosts | parallel -j8 'ping -q -c3 {} 2>/dev/null 1>/dev/null && echo {} - UP || echo {} - DOWN'
You can install GNU Parallel simply by:
wget http://git.savannah.gnu.org/cgit/parallel.git/plain/src/parallel
chmod 755 parallel
cp parallel sem
Watch the intro videos for GNU Parallel to learn more:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Put your command in an array.
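A short sketch of that idea, reusing the question's ping check ($H as in the question):
cmd=(ping -q -c3 "$H")
# the array expands back into separate words, so no eval is needed
"${cmd[@]}" >/dev/null 2>&1 && echo "$H - UP" || echo "$H - DOWN"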

Saving a command into a variable instead of running it

I'm trying to get the output of the ps command to output to a file, then to use that file to populate a radiolist. So far I'm having problems.
eval "ps -o pid,command">/tmp/process$$
more /tmp/process$$
sed -e '1d' /tmp/process$$ > /tmp/process2$$
while IFS= read -r pid command
do
msgboxlist="$msgboxlist" $($pid) $($command) "off"
done</tmp/process2$$
height=`wc -l "/tmp/process$$" | awk '{print $1}'`
width=`wc --max-line-length "/tmp/process$$" | awk '{print $1}'`
echo $height $width
dialog \
--title "Directory Listing" \
--radiolist "Select process to terminate" "$msgboxlist" $(($height+7)) $(($width+4))
So far not only does the while read not split the columns into 2 variables ($pid is the whole line and $command is blank) but when I try to run this the script is trying to run the line as a command. For example:
+ read -r pid command
++ 7934 bash -x assessment.ba
assessment.ba: line 322: 7934: command not found
+ msgboxlist=
+ off
assessment.ba: line 322: off: command not found
Basically I have no idea where I'm supposed to be putting quotes, double quotes and backslashes. It's driving me wild.
tl;dr Saving a command into a variable without running it, how?
You're trying to execute $pid and $command as commands:
msgboxlist="$msgboxlist" $($pid) $($command) "off"
Try:
msgboxlist="$msgboxlist $pid $command off"
Or use an array:
msgboxlist=() # do this before the while loop
msgboxlist+=($pid $command "off")
# when you need to use the whole list:
echo "${msgboxlist[#]}"
Your script can be refactored by removing some unnecessary calls like this:
ps -o pid=,command= > /tmp/process$$
msgboxlist=""
while read -r pid command
do
msgboxlist="$msgboxlist $pid $command off"
done < /tmp/process$$
height=$(awk 'END {print NR}' "/tmp/process$$")
width=$(awk '{if (l<length($0)) l=length($0)} END{print l}' "/tmp/process$$")
dialog --title "Directory Listing" \
--radiolist "Select process to terminate" "$msgboxlist" $(($height+7)) $(($width+4))
I have to admit, I'm not 100% clear on what you're doing; but I think you want to change this:
msgboxlist="$msgboxlist" $($pid) $($command) "off"
to this:
msgboxlist+=("$pid" "$command" off)
which will add the PID, the command, and "off" as three new elements to the array named msgboxlist. You'd then change "$msgboxlist" to "${msgboxlist[@]}" in the dialog command, to include all of those elements as arguments to the command.
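A minimal sketch of how that might look end to end (the dialog geometry numbers are arbitrary placeholders):
msgboxlist=()
while read -r pid command; do
    msgboxlist+=("$pid" "$command" off)
done < <(ps -o pid=,args=)
dialog --title "Directory Listing" \
       --radiolist "Select process to terminate" 20 70 15 \
       "${msgboxlist[@]}"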
Use double quotes when you want variables to be expanded. Use single quotes to disable variable expansion.
Here's an example of a command saved for later execution.
file="readme.txt"
cmd="ls $file" # $file is expanded to readme.txt
echo "$cmd" # ls readme.txt
$cmd # lists readme.txt
Edit addressing the read:
Using read generally reads an entire line. Consider this instead (tested):
ps o pid=,command= | while read line ; do
set $line
pid=$1
command=$2
echo $pid $command
done
Also note the different usage of 'ps o pid=,command=' to skip displaying headers.
