Linux 'cut' command line and replace

I need to create some text using the cut command, replacing characters with spaces, on the Linux terminal.
Examples:
Linux
 inux
  nux
   ux
    x
This is my bash script.
#!/bin/bash
INPUT=$@
SIZE=$(echo $INPUT|wc -c)
let $((SIZE--))
for i in $(seq 1 $SIZE);
do echo $INPUT | cut -c ${i}-${SIZE} ;
done
and I have failed to create text like:
Linux
 inux
  nux
   ux
    x

This should do the trick:
#!/bin/bash
INPUT="$#"
SIZE=${#INPUT}
for ((i=0; i < ${SIZE}; i++)); do
    echo "${INPUT}"
    INPUT="${INPUT:0:${i}} ${INPUT:$((i+1)):${SIZE}}"
    #INPUT="$(echo "$INPUT" | sed "s/^\(.\{${i}\}\)./\1 /")"
done
I added a sed option in the comment, although it creates a sub-process when you don't really have to.
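For reference, a quick run of the script above (assuming it is saved as pad.sh, made executable, and given a single word; the script name is made up) should look roughly like this:
$ ./pad.sh Linux
Linux
 inux
  nux
   ux
    x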

Related

Calling a shell script that is stored in another shell script variable

I searched SO but could not find any relevant post with this specific problem. I would like to know how to call a shell script which is stored in a variable of another shell script.
In the script below I am trying to read the service name and its corresponding shell script, check whether the service is running, and if not, start it using the shell script associated with that service name. I have tried multiple options shared in various forums (like 'eval', etc.) with no luck. Please share your suggestions on this.
checker.sh
#!/bin/sh
while read service
do
    servicename=`echo $service | cut -d: -f1`
    servicestartcommand=`echo $service | rev | cut -d: -f1 | rev`
    if (( $(ps -ef | grep -v grep | grep $servicename | wc -l) > 0 ))
    then
        echo "$servicename Running"
    else
        echo "!!$servicename!! Not Running, calling $servicestartcommand"
        eval "$servicestartcommand"
    fi
done < names.txt
Names.txt
WebSphere:\opt\software\WebSphere\startServer.sh
WebLogic:\opt\software\WebLogic\startWeblogic.sh
Your script can be refactored into this:
#!/bin/bash
while IFS=: read -r servicename servicestartcommand; do
    if ps cax | grep -q "$servicename"; then
        echo "$servicename Running"
    else
        echo "!!$servicename!! Not Running, calling $servicestartcommand"
        $servicestartcommand
    fi
done < names.txt
No need to use wc -l on grep's output, since you can use grep -q.
No need to read the full line and then run cut, rev, etc. on it afterwards. You can set IFS=: and read the line into two separate variables, as in the sketch below.
No need to use eval at the end.
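A minimal sketch of that IFS=: splitting, assuming Unix-style forward-slash paths (the question's names.txt shows backslashes) and a made-up line:
line='WebSphere:/opt/software/WebSphere/startServer.sh'
IFS=: read -r servicename servicestartcommand <<< "$line"
echo "$servicename"          # WebSphere
echo "$servicestartcommand"  # /opt/software/WebSphere/startServer.sh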
It is much simpler than you expect. Instead of:
eval "$servicestartcommand"
eval should only be used in extreme circumstances. All you need is
$servicestartcommand
Note: no quotes.
As an example, try this on the command-line:
cmd='ls -l'
$cmd
That should work. But:
"$cmd"
will fail. It will look for a program with a space in its name called 'ls -l'.
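To see the failure mode concretely, here is a minimal sketch (the exact wording of the error message may differ between shells):
cmd='ls -l'
"$cmd"   # fails: the shell looks for a single program literally named 'ls -l'
$cmd     # works: word splitting breaks the string into 'ls' and '-l'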
Maybe I don't get the idea, but why not use environment variables?
export FOO=bar
echo $FOO
bar

Removing forks in a shell script so that it runs well in Cygwin

I'm trying to run a shell script on Windows in Cygwin. The problem I'm having is that it runs extremely slowly in the following section of code. From a bit of googling, I believe it's due to the large number of fork() calls in the script; since Windows has to go through Cygwin's emulation of fork(), it slows to a crawl.
A typical scenario: on Linux the script completes in under 10 seconds (depending on file size), but on Windows under Cygwin the same file takes nearly 10 minutes.
So the question is: how can I remove some of these forks and still have the script return the same output? I'm not expecting miracles, but I'd like to cut that 10-minute wait down a fair bit.
Thanks.
check_for_customization(){
    filename="$1"
    extended_class_file="$2"
    grep "extends" "$filename" | grep "class" | grep -v -e '^\s*<!--' | while read line; do
        classname="$(echo $line | perl -pe 's{^.*class\s*([^\s]+).*}{$1}')"
        extended_classname="$(echo $line | perl -pe 's{^.*extends\s*([^\s]+).*}{$1}')"
        case "$classname" in
            *"$extended_classname"*) echo "$filename"; echo "$extended_classname |$classname | $filename" >> "$extended_class_file";;
        esac
    done
}
Update: Changed the regex a bit and used a bit more perl:
check_for_customization(){
    filename="$1"
    extended_class_file="$2"
    grep "^\(class\|\(.*\s\)*class\)\s.*\sextends\s\S*\(.*$\)" "$filename" | grep -v -e '^\s*<!--' | perl -pe 's{^.*class\s*([^\s]+).*extends\s*([^\s]+).*}{$1 $2}' | while read classname extended_classname; do
        case "$classname" in
            *"$extended_classname"*) echo "$filename"; echo "$extended_classname | $classname | $filename" >> "$extended_class_file";;
        esac
    done
}
So, using the above code, the run time was reduced from about 8 minutes to 2.5 minutes. Quite an improvement.
If anybody can suggest any other changes I would appreciate it.
Put more of the commands into one Perl script, e.g.:
check_for_customization(){
    filename="$1" extended_class_file="$2" perl -n - "$1" <<\EOF
next if /^\s*<!--/;
next unless /^.*class\s*([^\s]+).*/; $classname = $1;
next unless /^.*extends\s*([^\s]+).*/; $extended_classname = $1;
if (index($extended_classname, $classname) != -1)
{
    print "$ENV{filename}\n";
    open FILEOUT, ">>$ENV{extended_class_file}";
    print FILEOUT "$extended_classname |$classname | $ENV{filename}\n"
}
EOF
}
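Because the whole file is handled by a single perl process, the per-line command substitutions of the original loop (and their fork() calls) disappear, which is exactly what hurts under Cygwin. A hypothetical invocation, just to show the calling convention (both file names are made up):
check_for_customization MyService.java extended_classes.txt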

Find and highlight text in linux command line

I am looking for a Linux command that searches for a string in a text file
and highlights (colors) every occurrence in the file, WITHOUT omitting any text lines (the way grep does).
I wrote this handy little script. It could probably be expanded to handle args better.
#!/bin/bash
if [ "$1" == "" ]; then
    echo "Usage: hl PATTERN [FILE]..."
elif [ "$2" == "" ]; then
    grep -E --color "$1|$" /dev/stdin
else
    grep -E --color "$1|$" $2
fi
It's useful for stuff like highlighting users running processes:
ps -ef | hl "alice|bob"
Try
tail -f yourfile.log | egrep --color 'DEBUG|'
where DEBUG is the text you want to highlight.
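The trick is the alternation: the empty (or $) branch matches every line, so nothing is filtered out, but only the DEBUG text gets colored. The same idea works with plain grep -E; a sketch with a made-up file name:
grep -E --color 'ERROR|$' app.log   # every line prints, ERROR is highlighted where present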
command | grep -iz -e "keyword1" -e "keyword2"
(Drop the extra -e if you are only searching for a single word; -i ignores case; -z treats the whole input as a single line, so no lines are omitted.)
Alternatively, when reading from files:
grep -iz -e "keyword1" -e "keyword2" 'filename'
OR
command | grep -A 99999 -B 99999 -i -e "keyword1" -e "keyword2"
(Again, drop the extra -e for a single word; -i ignores case; -A and -B set how many lines of context to print after and before each match.)
Alternatively, when reading from files:
grep -A 99999 -B 99999 -i -e "keyword1" -e "keyword2" 'filename'
The ack command with the --passthru switch:
ack --passthru pattern path/to/file
I take it you meant "without omitting text lines" (instead of emitting)...
I know of no such command, but you can use a script like the one below. It is a simple solution that takes the filename (without spaces) as the first argument and the search string (also without spaces) as the second:
#!/usr/bin/env bash
ifs_store=$IFS;
IFS=$'\n';
for line in $(cat $1); do
    if [ $(echo $line | grep -c $2) -eq 0 ]; then
        echo $line;
    else
        echo $line | grep --color=always $2;
    fi
done
IFS=$ifs_store
Save it as, for instance, colorcat.sh, set permissions appropriately (so you can execute it), and call it as:
colorcat.sh filename searchstring
I had a requirement like this recently and hacked up a small program to do exactly this. Link
Usage: ./highlight test.txt '^foo' 'bar$'
Note that this is very rough, but could be made into a general tool with some polishing.
Using dwdiff, output differences with colors and line numbers.
echo "Hello world # $(date)" > file1.txt
echo "Hello world # $(date)" > file2.txt
dwdiff -c -C 0 -L file1.txt file2.txt

Saving a command into a variable instead of running it

I'm trying to send the output of the ps command to a file and then use that file to populate a radiolist. So far I'm having problems.
eval "ps -o pid,command">/tmp/process$$
more /tmp/process$$
sed -e '1d' /tmp/process$$ > /tmp/process2$$
while IFS= read -r pid command
do
    msgboxlist="$msgboxlist" $($pid) $($command) "off"
done</tmp/process2$$
height=`wc -l "/tmp/process$$" | awk '{print $1}'`
width=`wc --max-line-length "/tmp/process$$" | awk '{print $1}'`
echo $height $width
dialog \
    --title "Directory Listing" \
    --radiolist "Select process to terminate" "$msgboxlist" $(($height+7)) $(($width+4))
So far, not only does the while read fail to split the columns into two variables ($pid ends up holding the whole line and $command is blank), but when I try to run this, the script tries to execute each line as a command. For example:
+ read -r pid command
++ 7934 bash -x assessment.ba
assessment.ba: line 322: 7934: command not found
+ msgboxlist=
+ off
assessment.ba: line 322: off: command not found
Basically I have no idea where I'm supposed to be putting quotes, double quotes and backslashes. It's driving me wild.
tl;dr: How do I save a command into a variable without running it?
You're trying to execute $pid and $command as commands:
msgboxlist="$msgboxlist" $($pid) $($command) "off"
Try:
msgboxlist="$msgboxlist $pid $command off"
Or use an array:
msgboxlist=() # do this before the while loop
msgboxlist+=($pid $command "off")
# when you need to use the whole list:
echo "${msgboxlist[#]}"
Your script can be refactored by removing some unnecessary calls like this:
ps -o pid=,command= > /tmp/process$$
msgboxlist=""
while read -r pid command
do
    msgboxlist="$msgboxlist $pid $command off"
done < /tmp/process$$
height=$(awk 'END {print NR}' "/tmp/process$$")
width=$(awk '{if (l<length($0)) l=length($0)} END{print l}' "/tmp/process$$")
dialog --title "Directory Listing" \
    --radiolist "Select process to terminate" "$msgboxlist" $(($height+7)) $(($width+4))
I have to admit, I'm not 100% clear on what you're doing; but I think you want to change this:
msgboxlist="$msgboxlist" $($pid) $($command) "off"
to this:
msgboxlist+=("$pid" "$command" off)
which will add the PID, the command, and "off" as three new elements to the array named msgboxlist. You'd then change "$msgboxlist" to "${msgboxlist[@]}" in the dialog command, to include all of those elements as arguments to the command.
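For what it's worth, dialog's radiolist normally expects the text, height, width, and list-height arguments first, followed by tag/item/status triples, so with the array the call might end up looking roughly like this (the list height of 10 is an arbitrary choice):
dialog --title "Directory Listing" \
    --radiolist "Select process to terminate" $(($height+7)) $(($width+4)) 10 \
    "${msgboxlist[@]}"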
Use double quotes when you want variables to be expanded. Use single quotes to disable variable expansion.
Here's an example of a command saved for later execution.
file="readme.txt"
cmd="ls $file" # $file is expanded to readme.txt
echo "$cmd" # ls readme.txt
$cmd # lists readme.txt
Edit addressing the read:
Using read generally reads an entire line. Consider this instead (tested):
ps o pid=,command= | while read line ; do
    set $line
    pid=$1
    command=$2
    echo $pid $command
done
Also note the different usage of 'ps o pid=,command=' to skip displaying headers.
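As a side note, read can do the splitting itself: the last variable absorbs the rest of the line, so a variant along these lines (an untested sketch) keeps multi-word commands intact in $command:
ps o pid=,command= | while read -r pid command ; do
    echo "$pid" "$command"   # $command holds the full command line, spaces and all
done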

How to get the command line args passed to a running process on unix/linux systems?

On SunOS there is the pargs command, which prints the command-line arguments passed to a running process.
Is there any similar command in other Unix environments?
There are several options:
ps -fp <pid>
cat /proc/<pid>/cmdline | sed -e "s/\x00/ /g"; echo
There is more info in /proc/<pid> on Linux, just have a look.
On other Unixes things might be different. The ps command will work everywhere, the /proc stuff is OS specific. For example on AIX there is no cmdline in /proc.
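For example, you can poke around the current shell's own entry (every process has one); on Linux, cmdline, environ, cwd, and exe are among the more useful entries there:
ls /proc/$$                           # what the kernel exposes for this shell
tr '\0' ' ' < /proc/$$/cmdline; echo  # its command line, with NUL separators shown as spaces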
This will do the trick:
xargs -0 < /proc/<pid>/cmdline
Without the xargs there would be no spaces between the arguments, because in cmdline they are separated by NUL characters rather than spaces.
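A quick way to see the difference, using the current shell ($$) as the target:
cat /proc/$$/cmdline ; echo    # arguments run together, since the NUL separators are invisible
xargs -0 < /proc/$$/cmdline    # arguments printed with spaces between them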
Full command line
On Linux and Unix systems you can use ps -ef | grep process_name to get the full command line.
On SunOS systems, if you want the full command line, you can use
/usr/ucb/ps -auxww | grep -i process_name
To get the full command line you need to become the superuser.
List of arguments
pargs -a PROCESS_ID
will give a detailed list of arguments passed to a process. It will output the array of arguments like this:
argv[0]: first argument
argv[1]: second..
argv[*]: and so on..
I didn't find any similar command for Linux, but I would use the following command to get similar output:
tr '\0' '\n' < /proc/<pid>/environ
You can use pgrep with -f (full command line) and -l (long description):
pgrep -l -f PatternOfProcess
This method has a crucial difference from the other responses: it works on Cygwin, so you can use it to obtain the full command line of any process running under Windows (run it elevated if you want data about an elevated/admin process). Any other method of doing this on Windows is more awkward (for example).
Furthermore, in my tests the pgrep approach has been the only one that worked to obtain the full path of scripts running inside Cygwin's Python.
On Linux
cat /proc/<pid>/cmdline
outputs the command line of process <pid> (the command including its args), with each record terminated by a NUL character.
A Bash Shell Example:
$ mapfile -d '' args < /proc/$$/cmdline
$ echo "#${#args[@]}:" "${args[@]}"
#1: /bin/bash
$ echo $BASH_VERSION
5.0.17(1)-release
Another variant of printing /proc/PID/cmdline with spaces in Linux is:
cat -v /proc/PID/cmdline | sed 's/\^@/\ /g' && echo
In this way cat prints NUL characters as ^@ and then you replace them with a space using sed; echo prints a newline.
Rather than using multiple commands to edit the stream, just use one - tr translates one character to another:
tr '\0' ' ' </proc/<pid>/cmdline
ps -eo pid,args prints the PID and the full command line.
You can simply use:
ps -o args= -f -p ProcessPid
In addition to all the above ways of converting the text, if you simply use 'strings', it will put the output on separate lines by default, with the added benefit that it may also keep out any characters that would scramble your terminal.
Both outputs in one command:
strings /proc/<pid>/cmdline /proc/<pid>/environ
The real question is... is there a way to see the real command line of a process in Linux that has been altered so that the cmdline contains the altered text instead of the actual command that was run.
On Solaris
ps -eo pid,comm
and the same can be used on Unix-like systems.
On Linux, with bash, to output as quoted args so you can edit the command and rerun it
</proc/"${pid}"/cmdline xargs --no-run-if-empty -0 -n1 \
bash -c 'printf "%q " "${1}"' /dev/null; echo
On Solaris, with bash (tested with 3.2.51(1)-release) and without gnu userland:
IFS=$'\002' tmpargs=( $( pargs "${pid}" \
| /usr/bin/sed -n 's/^argv\[[0-9]\{1,\}\]: //gp' \
| tr '\n' '\002' ) )
for tmparg in "${tmpargs[@]}"; do
printf "%q " "$( echo -e "${tmparg}" )"
done; echo
Linux bash Example (paste in terminal):
{
## set up initial args
argv=( /bin/bash -c '{ /usr/bin/sleep 10; echo; }' /dev/null 'BEGIN {system("sleep 2")}' "this is" \
"some" "args "$'\n'" that" $'\000' $'\002' "need" "quot"$'\t'"ing" )
## run in background
"${argv[#]}" &
## recover into eval string that assigns it to argv_recovered
eval_me=$(
printf "argv_recovered=( "
</proc/"${!}"/cmdline xargs --no-run-if-empty -0 -n1 \
bash -c 'printf "%q " "${1}"' /dev/null
printf " )\n"
)
## do eval
eval "${eval_me}"
## verify match
if [ "$( declare -p argv )" == "$( declare -p argv_recovered | sed 's/argv_recovered/argv/' )" ];
then
echo MATCH
else
echo NO MATCH
fi
}
Output:
MATCH
Solaris Bash Example:
{
## set up initial args
argv=( /bin/bash -c '{ /usr/bin/sleep 10; echo; }' /dev/null 'BEGIN {system("sleep 2")}' "this is" \
"some" "args "$'\n'" that" $'\000' $'\002' "need" "quot"$'\t'"ing" )
## run in background
"${argv[#]}" &
pargs "${!}"
ps -fp "${!}"
declare -p tmpargs
eval_me=$(
printf "argv_recovered=( "
IFS=$'\002' tmpargs=( $( pargs "${!}" \
| /usr/bin/sed -n 's/^argv\[[0-9]\{1,\}\]: //gp' \
| tr '\n' '\002' ) )
for tmparg in "${tmpargs[@]}"; do
printf "%q " "$( echo -e "${tmparg}" )"
done; echo
printf " )\n"
)
## do eval
eval "${eval_me}"
## verify match
if [ "$( declare -p argv )" == "$( declare -p argv_recovered | sed 's/argv_recovered/argv/' )" ];
then
echo MATCH
else
echo NO MATCH
fi
}
Output:
MATCH
If you want output that is as long as possible (not sure what limits there are), similar to Solaris' pargs, you can use this on Linux and OS X:
ps -ww -o pid,command [-p <pid> ... ]
Try ps -n in a Linux terminal. This will show:
1. All RUNNING processes, their command lines, and their PIDs
2. The program that initiated each process
Afterwards you will know which process to kill.
