How to run more than one command in one terminal? - Linux

I have to run a "for" loop on the Linux terminal itself. How can I do that?
Example: for i in cat ~/log; do grep -l "UnoRuby" $i >> ~/logName; done

Just as you typed it should be fine, except that you need command substitution around cat: for i in $(cat ~/log); do grep -l "UnoRuby" $i >> ~/logName; done

You should prefer $(< file) instead of cat, and a more readable format:
for i in $(< ~/log)
do
    grep -l "UnoRuby" "$i" >> ~/logName
done
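If any path in ~/log contains spaces, the for loop will split it apart; a while read loop is a safer sketch (assuming one path per line in ~/log):
while IFS= read -r f
do
    grep -l "UnoRuby" "$f" >> ~/logName
done < ~/log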

Related

How can I create a file containing 10 random numbers in the Linux shell?

I want to create a Linux command which creates a file containing 10 random numbers.
This is a solution to generate 10 random numbers:
RANDOM=$$
for i in `seq 10`
do
    echo $RANDOM
done
It works for generating random numbers, but how can I combine this with the 'touch' command? Should I create a loop?
Using touch? Like this?
touch file.txt && RANDOM=$$
for i in `seq 10`
do
    echo $RANDOM
done >> file.txt
Not sure why you need touch though, this will also work:
for i in `seq 10`; do echo $RANDOM; done > file.txt
Use >> to append to the file; $1 is the first argument of your .sh file:
FILE=$1
RANDOM=$$
for i in `seq 10`
do
    echo $RANDOM >> "$FILE"
    echo "" >> "$FILE"
done
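A quick usage sketch, assuming the script above is saved as randoms.sh (both file names here are placeholders of mine):
bash randoms.sh numbers.txt
cat numbers.txt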
You could use head(1) and od(1) or GNU gawk with random(4).
For example, perhaps
head -20c /dev/random | od -s > /tmp/tenrandomnumbers.txt
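And the gawk route mentioned above could be a one-liner like this (a sketch of mine; any POSIX awk should do, and the 0-32767 range simply mirrors $RANDOM):
awk 'BEGIN { srand(); for (i = 0; i < 10; i++) print int(rand() * 32768) }' > /tmp/tenrandomnumbers.txt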

Linux: Tail -f multiple options

I want to combine multiple tail scripts into one.
First one:
tail -f /var/script/log/script-log.txt | if grep -q "Text1"; then echo "0:$?:AAC32 ONLINE"
fi
I want to add 5 more lines, each with a different word. Is this possible?
else if, if etc. etc.
Thanks!
tail -f /var/script/log/script-log.txt | if grep -E "Text1|Text2|Text3"; then echo "0:$?:AAC32 ONLINE"; fi
In your case it's enough to use the logical AND operator:
tail -f /var/script/log/script-log.txt | grep -q "text1\|text2\|text3" && echo "0:$?:AAC32 ONLINE"
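If every word should produce its own message (the "else if" behaviour asked about), one possible sketch is a while/case loop over the tail output; the extra messages are placeholders of mine:
tail -f /var/script/log/script-log.txt | while read -r line
do
    case "$line" in
        *Text1*) echo "0:0:AAC32 ONLINE" ;;
        *Text2*) echo "0:0:SECOND SERVICE ONLINE" ;;
        *Text3*) echo "0:0:THIRD SERVICE ONLINE" ;;
    esac
done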
#!/bin/sh
PIPENAME="`mktemp -u "/tmp/something-XXXXXX"`"
mkfifo -m 600 "$PIPENAME"
tail -f /tmp/log.txt >"$PIPENAME" &
while read line < "$PIPENAME"
do
    echo "$line" # Whatever you want goes here
done
rm -f "$PIPENAME"
If you want something Bash-specific, you can use the -u option of read, and then you can rm the named pipe before the loop starts, which better guarantees that things are left clean when you're done.
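A minimal sketch of that Bash-specific variant, wired up by me on the same log path as above (assumes Bash's read -u):
#!/bin/bash
PIPENAME="$(mktemp -u /tmp/something-XXXXXX)"
mkfifo -m 600 "$PIPENAME"
tail -f /tmp/log.txt > "$PIPENAME" &
exec 3< "$PIPENAME"   # keep the FIFO open on fd 3
rm -f "$PIPENAME"     # safe now: fd 3 keeps the pipe alive
while read -r -u 3 line
do
    echo "$line"      # whatever you want goes here
done
exec 3<&-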

grep filenames from an exclude log

I have a problem with my bash script. I want to exclude from processing any files that are listed in exclude.log. After a file is processed, it is written into the exclude log.
for I in `ls $1 | grep ./exclude.log -v`
do
    echo "Processing ...."
    echo $I >> ./exclude.log
done
$I is not assigned a value.
Also, your grep is not formulated correctly.
You possibly want:
LIST=$( grep -v -f /path/to/exclude.log * )
for I in $LIST
do
    echo "Processing ...."
    echo $I >> /path/to/exclude.log
done
Make sure you don't have any empty lines in exclude.log
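One quick way to strip any empty lines from it beforehand (a sketch assuming GNU sed's in-place -i):
sed -i '/^$/d' ./exclude.log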
You can use this while loop:
while read -r l; do
    echo "$l";
done < <(fgrep -v -wf exclude.log <(printf "%s\n" "$1"/*))

Find and highlight text in linux command line

I am looking for a Linux command that searches for a string in a text file,
and highlights (colors) every occurrence of it in the file, WITHOUT omitting text lines (like grep does).
I wrote this handy little script. It could probably be expanded to handle args better:
#!/bin/bash
if [ "$1" == "" ]; then
    echo "Usage: hl PATTERN [FILE]..."
elif [ "$2" == "" ]; then
    grep -E --color "$1|$" /dev/stdin
else
    grep -E --color "$1|$" $2
fi
It's useful for stuff like highlighting users running processes:
ps -ef | hl "alice|bob"
Try
tail -f yourfile.log | egrep --color 'DEBUG|'
where DEBUG is the text you want to highlight.
command | grep -iz -e "keyword1" -e "keyword2" (omit the -e switches if searching for just a single word; -i ignores case; -z treats the whole input as a single line, so no lines are dropped)
Alternatively, when reading from a file:
grep -iz -e "keyword1" -e "keyword2" 'filename'
Or:
command | grep -A 99999 -B 99999 -i -e "keyword1" -e "keyword2" (omit the -e switches if searching for just a single word; -i ignores case; -A and -B set the number of lines to display after/before the keyword)
Alternatively, when reading from a file:
grep -A 99999 -B 99999 -i -e "keyword1" -e "keyword2" 'filename'
The ack command with the --passthru switch:
ack --passthru pattern path/to/file
I take it you meant "without omitting text lines" (instead of emitting)...
I know of no such command, but you can use a script like this simple one, which takes the filename (without spaces) as the first argument and the search string (also without spaces) as the second:
#!/usr/bin/env bash
ifs_store=$IFS
IFS=$'\n'
for line in $(cat $1)
do
    if [ $(echo $line | grep -c $2) -eq 0 ]; then
        echo $line
    else
        echo $line | grep --color=always $2
    fi
done
IFS=$ifs_store
Save it as, for instance, colorcat.sh, set permissions appropriately (so it can be executed), and call it as:
colorcat.sh filename searchstring
I had a requirement like this recently and hacked up a small program to do exactly this. Link
Usage: ./highlight test.txt '^foo' 'bar$'
Note that this is very rough, but could be made into a general tool with some polishing.
Using dwdiff, you can output differences with colors and line numbers:
echo "Hello world # $(date)" > file1.txt
echo "Hello world # $(date)" > file2.txt
dwdiff -c -C 0 -L file1.txt file2.txt

How to get the command line args passed to a running process on unix/linux systems?

On SunOS there is a pargs command that prints the command line arguments passed to the running process.
Is there is any similar command on other Unix environments?
There are several options:
ps -fp <pid>
cat /proc/<pid>/cmdline | sed -e "s/\x00/ /g"; echo
There is more info in /proc/<pid> on Linux, just have a look.
On other Unixes things might be different. The ps command will work everywhere, the /proc stuff is OS specific. For example on AIX there is no cmdline in /proc.
This will do the trick:
xargs -0 < /proc/<pid>/cmdline
Without the xargs, there will be no spaces between the arguments, because they have been converted to NULs.
Full commandline
For Linux and Unix systems, you can use ps -ef | grep process_name to get the full command line.
On SunOS systems, if you want to get the full command line, you can use:
/usr/ucb/ps -auxww | grep -i process_name
To get the full command line you need to become the superuser.
List of arguments
pargs -a PROCESS_ID
will give a detailed list of the arguments passed to a process. It will output the array of arguments like this:
argv[0]: first argument
argv[1]: second..
argv[*]: and so on..
I didn't find any similar command for Linux, but I would use the following command to get similar output:
tr '\0' '\n' < /proc/<pid>/cmdline
You can use pgrep with -f (full command line) and -l (long description):
pgrep -l -f PatternOfProcess
This method has a crucial difference from any of the other responses: it works on CygWin, so you can use it to obtain the full command line of any process running under Windows (run it elevated if you want data about an elevated/admin process). Any other method for doing this on Windows is more awkward.
Furthermore, in my tests, the pgrep way has been the only method that worked to obtain the full path for scripts running inside CygWin's python.
On Linux
cat /proc/<pid>/cmdline
outputs the command line of the process <pid> (the command including its args), with each record terminated by a NUL character.
A Bash Shell Example:
$ mapfile -d '' args < /proc/$$/cmdline
$ echo "#${#args[@]}:" "${args[@]}"
#1: /bin/bash
$ echo $BASH_VERSION
5.0.17(1)-release
Another variant of printing /proc/PID/cmdline with spaces in Linux is:
cat -v /proc/PID/cmdline | sed 's/\^@/ /g' && echo
In this way cat prints NULL characters as ^@ and then you replace them with a space using sed; echo prints a newline.
Rather than using multiple commands to edit the stream, just use one - tr translates one character to another:
tr '\0' ' ' </proc/<pid>/cmdline
ps -eo pid,args prints the PID and the full command line.
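A small usage sketch: pipe it through grep, where bracketing the first character keeps the grep process itself out of the results (nginx is only a placeholder name):
ps -eo pid,args | grep -i '[n]ginx'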
You can simply use:
ps -o args= -f -p ProcessPid
In addition to all the above ways to convert the text, if you simply use 'strings', it will put the output on separate lines by default, with the added benefit that it may also prevent characters that could scramble your terminal from appearing.
Both output in one command:
strings /proc/<pid>/cmdline /proc/<pid>/environ
The real question is: is there a way to see the real command line of a process on Linux when it has been altered, so that cmdline contains the altered text instead of the command that was actually run?
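Not a full answer to that, but one partial cross-check (my own suggestion, not from the answers above): the /proc/<pid>/exe symlink still points at the executable that was actually started, even if the process has rewritten its argv:
readlink /proc/<pid>/exe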
On Solaris
ps -eo pid,comm
A similar command can be used on other Unix-like systems.
On Linux, with bash, to output the args quoted so you can edit the command and rerun it:
</proc/"${pid}"/cmdline xargs --no-run-if-empty -0 -n1 \
bash -c 'printf "%q " "${1}"' /dev/null; echo
On Solaris, with bash (tested with 3.2.51(1)-release) and without gnu userland:
IFS=$'\002' tmpargs=( $( pargs "${pid}" \
| /usr/bin/sed -n 's/^argv\[[0-9]\{1,\}\]: //gp' \
| tr '\n' '\002' ) )
for tmparg in "${tmpargs[@]}"; do
    printf "%q " "$( echo -e "${tmparg}" )"
done; echo
Linux bash Example (paste in terminal):
{
## set up initial args
argv=( /bin/bash -c '{ /usr/bin/sleep 10; echo; }' /dev/null 'BEGIN {system("sleep 2")}' "this is" \
"some" "args "$'\n'" that" $'\000' $'\002' "need" "quot"$'\t'"ing" )
## run in background
"${argv[#]}" &
## recover into eval string that assigns it to argv_recovered
eval_me=$(
printf "argv_recovered=( "
</proc/"${!}"/cmdline xargs --no-run-if-empty -0 -n1 \
bash -c 'printf "%q " "${1}"' /dev/null
printf " )\n"
)
## do eval
eval "${eval_me}"
## verify match
if [ "$( declare -p argv )" == "$( declare -p argv_recovered | sed 's/argv_recovered/argv/' )" ];
then
echo MATCH
else
echo NO MATCH
fi
}
Output:
MATCH
Solaris Bash Example:
{
## set up initial args
argv=( /bin/bash -c '{ /usr/bin/sleep 10; echo; }' /dev/null 'BEGIN {system("sleep 2")}' "this is" \
"some" "args "$'\n'" that" $'\000' $'\002' "need" "quot"$'\t'"ing" )
## run in background
"${argv[#]}" &
pargs "${!}"
ps -fp "${!}"
declare -p tmpargs
eval_me=$(
printf "argv_recovered=( "
IFS=$'\002' tmpargs=( $( pargs "${!}" \
| /usr/bin/sed -n 's/^argv\[[0-9]\{1,\}\]: //gp' \
| tr '\n' '\002' ) )
for tmparg in "${tmpargs[@]}"; do
    printf "%q " "$( echo -e "${tmparg}" )"
done; echo
printf " )\n"
)
## do eval
eval "${eval_me}"
## verify match
if [ "$( declare -p argv )" == "$( declare -p argv_recovered | sed 's/argv_recovered/argv/' )" ];
then
echo MATCH
else
echo NO MATCH
fi
}
Output:
MATCH
If you want to get as long a listing as possible (not sure what limits there are), similar to Solaris' pargs, you can use this on Linux and OS X:
ps -ww -o pid,command [-p <pid> ... ]
Try ps -n in a Linux terminal. This will show:
1. All RUNNING processes, their command lines and their PIDs
2. The program that initiated each process
Afterwards you will know which process to kill.
