is it possible to run a linux command on the output of a previous command ASSUMING that the previous command comes first? - linux

I know that I can use backticks to get the output of a command, for example:
echo `ls`
but is there a way for me to use the ls command first and then run echo on it? For example: ls <some special redirection> echo? I tried ls > echo and it does not do what I want.
The reason I am asking is that sometimes I write complicated commands to get certain output, for example: bjobs -u username01 | grep normal | awk '{print $1}' is a simple "complicated" command (sometimes they are 6 or 7 chained together). Now, I am currently having to do
Mycommand `(complicated string of commands)`
but I would much rather just do
(complicated string of commands) <some special redirection> Mycommand
is this possible?

You may use xargs
ls | xargs echo
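Applied to the pipeline from the question, this becomes (assuming Mycommand accepts the job IDs as command-line arguments):
bjobs -u username01 | grep normal | awk '{print $1}' | xargs Mycommand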

When you need to do more with each result, you can parse them in a loop.
Parsing ls should be avoided; it is only kept here to rewrite the example above:
ls | while read file; do
echo I found ${file}
done
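The same idea applies to the question's pipeline when Mycommand needs to run once per job ID (a sketch):
bjobs -u username01 | grep normal | awk '{print $1}' | while read jobid; do
    Mycommand "$jobid"
done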
This construction can be useful for more difficult parsing:
echo "red ford 2012
blue mustang 1998" | while read color car year; do
echo "My ${color} ${car} is from the year ${year}"
done

Related

Grep function not stopping with head pipe

So I'm currently trying to grep a single result from a random file in a specific directory. The grep itself works just fine and the output file is populated as expected, but for some reason, even after the output file has been filled, the process won't stop. This is the grep command where the program seems to be getting stuck.
searchFILE(){
    case $2 in
        pref)
            echo "Populating output file: $3-$1.data.out"
            dataOutputFile="$3-$1.data.out"
            zgrep -a "\"someParameter\"\:\"$1\"" /folder/anotherFolder/filetemplate.log.* | zgrep -a "\"parameter2\"\:\"$3\"" | head -1 > $dataOutputFile
            ;;
        *)
            echo "Unrecognized command"
            ;;
    esac
    echo "Query finished"
}
What is currently happening is that the output file is being populated as expected with the head pipe, but for some reason I'm not getting the "Query finished" message, and the process seems not to stop at all.
grep does not know that head -n1 is no longer reading from the pipe until it attempts to write to the pipe, which it will only do if another match is found. There is no direct communication between the processes. It will eventually stop, but only once all the data is read, a second match is found and write fails with EPIPE, or some other error occurs.
You can watch this happen in a simple pipeline like this:
cat /dev/urandom | grep -ao "12[0-9]" | head -n1
With a sufficiently rare pattern, you will observe a delay between output and exit.
One solution is to change your stop condition. Instead of waiting for SIGPIPE as your pipeline does, wait for grep to match once using the -m1 option:
cat /dev/urandom | grep -ao -m1 "12[0-9]"
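Applied to the question's pipeline, the -m1 would go on the final zgrep in place of head -1 (a sketch using the question's variables; the upstream zgrep still runs until its next write fails, but it no longer has to wait for a second full match):
zgrep -a "\"someParameter\"\:\"$1\"" /folder/anotherFolder/filetemplate.log.* | zgrep -a -m1 "\"parameter2\"\:\"$3\"" > $dataOutputFile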
I saw better performance results with the zcat myZippedFile | grep whatever approach...
The first change you need to try is piping into | head -z --lines=1
The reason is null-terminated lines instead of newlines (just in case that is the issue).
My example script below worked (drop the case statement to make it simpler). If I hold onto $1 and $2 inside functions, things go wrong. I assign the positional parameters to named variables and use $1, $2 and $# only once, because it also goes wrong for me if I don't, and in any case you can then shift through the arguments and catch them. The positional parameters of the script itself are not the same as the arguments inside bash functions.
grep searching for 2 or more parameters in any order means using grep twice; in your case zgrep | grep. The second grep is a normal grep! Only the first grep needs to be zgrep to do the decompression. Your question is simpler if you drop the case statement, since bash case scares people off: bash has always been ugly, but it works well for short scripts.
zgrep searches text or compressed text, but Linux-style and Windows-style newlines are not the same, so use dos2unix to convert files so that newlines work. I use a compressed file simply because it is strange and rare to see zgrep, so it is demonstrated in a shell script with a compressed file! It works for me. I changed a few things, like >> and "sort -u", but you can obviously change them back.
#!/usr/bin/env bash
# Search for egA AND egB using option go
# COMMAND LINE: ./zgrp egA go egB
A="$1"
cOPT="$2"                    # expecting case go
B="$3"
LOG="./filetemplate.log"     # use parameters for long names.
# Generate some data with gzip and delete the temporary file.
echo "\"pramA\":\"$A\" \"pramB\":\"$B\"" >> $B$A.tmp
rm -f ${LOG}.A; tar czf ${LOG}.A $B$A.tmp
rm -f $B$A.tmp
# Use parameterised $names, not $1 etc., because you may want to do shift etc.
searchFILE()
{
    outFile="$B-$A.data.out"
    case $cOPT in
        go) # This is zgrep | grep, NOT zgrep | zgrep
            zgrep -a "\"pramA\":\"$A\"" ${LOG}.* | grep -a "\"pramB\":\"$B\"" | head -z --lines=1 >> $outFile
            sort -u $outFile > ${outFile}.sorted   # sort unique on your output.
            ;;
        *)  echo -e "ERROR second argument must be go.\n Usage: ./zgrp egA go egB"
            exit 9
            ;;
    esac
    echo -e "\n ============ Done: $0 $# Fin. ============="
}
searchFILE "$#"
cat ${outFile}.sorted

Merge two bash history files

Let's say I have two bash history files as follows:
history1.txt:
1 ls
2 cd foo
...
921 history > history1.txt
history2.txt:
154 vim /etc/nginx/nginx.conf
155 service nginx restart
...
1153 history > history2.txt
I know I could easily write a bash script to merge these two files together so that the resulting file contains lines 1 to 1153 without duplicate history entries... like the following bash script:
merge.sh
HEAD=`head -n 1 history2.txt | sed -e 's/^[[:space:]]*//'`
sed -n -e "1,/$HEAD/p" history1.txt > merged.txt
sed -e "1,$ s/$HEAD//" -e '/^\s*$/d' history2.txt >> merged.txt
But I spent way more time than I'd like to admit trying to find a way to accomplish this using only unnamed pipes, sed, and/or any other common Linux utils WITHOUT variables and command substitution (`command`). I didn't have any success :(
Any Linux shell gurus or sed masters know if this is possible?
NOTE: I know the merge.sh script will not work for all edge cases.
If I understand that correctly, you simply have 2 files with no duplicate entries (though I'm not sure about that).
So you don't need sed or something here. Just:
cat history1.txt history2.txt | sort -n > output
If you have duplicates:
cat history1.txt history2.txt | sort -n -u > output
If I understand correctly, you want to add entries to your history file? Did you consider using the built-in history -r?
$ cat foo
echo "history"
$ history | tail -n 5
1371 rm foo
1372 tail .bash_history > foo
1373 vim foo
1374 cat foo
1375 history | tail
$ history -r foo
$ history | tail -n 5
1374 cat foo
1375 history | tail
1376 history -r foo
1377 echo "history"
1378 history | tail
Maybe give $ help history a look in case there is something that fits your needs.
You've got to learn the right tools to use to solve problems in UNIX as there are many wrong ways that LOOK like they solve a given problem but actually are slow, dangerous, non-portable, fragile, etc., etc.
sed is for simple substitutions on individual lines, that is all. shell is for manipulating files and processes and sequencing calls to UNIX tools, that is all. For general purpose text manipulation, the standard UNIX tool is awk.
The following is untested since you didn't provide sample input/output that we could run a tool against to test but it will be close if not exactly correct:
awk '
NR==FNR { file1[$1]=$0; next }
{
    for (i=(prev+1); i<$1; i++) {
        print file1[i]
    }
    print
    prev = $1
}
' history1.txt history2.txt
If you create the file like this:
1 ls
2 cd foo
921 history > history1.txt
154 vim /etc/nginx/nginx.conf
155 service nginx restart
1153 history >> history1.txt
now everything, both histories, is in one file.
Use the cut command to select only the commands and cut out the line numbers (-d is for cutting on spaces, -f is for selecting from column 4 to the end):
cut -d " " -f 4- history1.txt > temp.txt
Now you have all the commands in one document, the temp.txt file.
If you then use sort and uniq, like
sort temp.txt | uniq
you get all the unique commands you used in the 2 history files in one file.
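Put together, the whole thing can be one pipeline (a sketch, reusing the answer's assumption that the command text starts in field 4):
cat history1.txt history2.txt | cut -d " " -f 4- | sort | uniq > unique_commands.txt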

Backticks can't handle pipes in variable

I have a problem with one bash script that uses the cat command.
This works:
#!/bin/bash
fil="| grep LSmonitor";
log="/var/log/sys.log ";
lines=`cat $log | grep LSmonitor | wc -l`;
echo $lines;
Output: 139
This does not:
#!/bin/bash
fil="| grep LSmonitor";
log="/var/log/sys.log ";
string="cat $log $fil | wc -l";
echo $string;
`$string`;
Output:
cat /var/log/sys.log | grep LSmonitor | wc -l
cat: invalid option -- 'l'
Try 'cat --help' for more information.
$fil is static in this example, but in the real script the parameter comes from an HTML form POST, and if I print it I can see that the content of $fil is correct.
In this case, since you're building a pipeline as a string, you would need:
eval "$string"
But DON'T DO THIS!!!! -- someone can easily enter the filter
; rm -rf *
and then you're hosed.
If you want a regex-based filter, get the user to just enter the regex, and then you'll do:
grep "$fil" "$log" | wc -l
Firstly, allow me to say that this sounds like a really bad idea:
[…] in real script, parameter is get from html form POST, […]
You should not be allowing the content of POST requests to be run by your shell. This is a massive attack vector, and whatever mechanisms you have in place to try to protect it are probably not as effective as you think.
Secondly, | inside variables are not treated as special. This isn't specific to backticks. Parameter expansion (e.g., replacing $fil with | grep LSmonitor) happens after the command is parsed and mostly processed. There's a little bit of post-processing that's done on the results of parameter expansion (including "word splitting", which is why $fil is equivalent to the three arguments '|' grep LSmonitor rather than to the single argument '| grep LSmonitor'), but nothing as dramatic as you describe. So, for example, this:
pipe='|'
echo $pipe cat
prints this:
| cat
Since your use-case is so frightening, I'm half-tempted to not explain how you can do what you want — I think you'll be better off not doing this — but since Stack Overflow answers are intended to be useful for more people than just the original poster, an example of how you can do this is below. I encourage the OP not to read on.
fil='| grep LSmonitor'
log=/var/log/sys.log
string="cat $log $fil | wc -l"
lines="$(eval "$string")"
echo "$lines"
Try using eval (taken from https://stackoverflow.com/a/11531431/2687324).
It looks like it's interpreting | as a string, not a pipe, so when it reaches -l, it treats it as if you're trying to pass in -l to cat instead of wc.
The other answers outline why you shouldn't do it this way.
grep LSmonitor /var/log/syslog | wc -l will do what you're looking for.
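As a side note, grep -c LSmonitor /var/log/syslog gives the same count without the extra wc process.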

How do I grep multiple lines (output from another command) at the same time?

I have a Linux driver running in the background that is able to return the current system data/stats. I view the data by running a console utility (let's call it dump-data) in a console. All data is dumped every time I run dump-data. The output of the utility is like below
Output:
- A=reading1
- B=reading2
- C=reading3
- D=reading4
- E=reading5
...
- variableX=readingX
...
The list of readings returned by the utility can be really long. Depending on the scenario, certain readings would be useful while everything else would be useless.
I need a way to grep only the useful readings, whose names might have nothing in common (via a bash script). I.e. sometimes I'll need to collect A,D,E; and other times I'll need C,D,E.
I'm attempting to graph the readings over time to look for trends, so I can't run something like this:
# forgive my pseudocode
Loop
dump-data | grep A
dump-data | grep D
dump-data | grep E
End Loop
to collect A,D,E, as that would actually give me readings from 3 separate calls of dump-data, which would not be accurate.
If you want to save all the grep results in the same file, you can just join all the expressions into one:
grep -E 'expr1|expr2|expr3'
But if you want to have the results (for expr1, expr2 and expr3) in separate files, things get more interesting.
You can do this using tee >(command).
For example, here I process the same pipe with three different commands:
$ echo abc | tee >(sed s/a/_a_/ > file1) | tee >(sed s/b/_b_/ > file2) | sed s/c/_c_/ > file3
$ grep "" file[123]
file1:_a_bc
file2:a_b_c
file3:ab_c_
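Applied to the question, a sketch (assuming the dump-data output lines look exactly as shown, and you want A, D and E in separate files) would be:
dump-data | tee >(grep '^- A=' > A.out) | tee >(grep '^- D=' > D.out) | grep '^- E=' > E.out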
But the command seems to be too complex.
I would rather save the dump-data results to a file and then grep it.
TEMP=$(mktemp /tmp/dump-data-XXXXXXXX)
dump-data > ${TEMP}
grep A ${TEMP}
grep B ${TEMP}
grep C ${TEMP}
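You may also want to delete the temporary file once you're done with it:
rm -f ${TEMP}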
You can use dump-data | grep -E "A|D|E". Note the -E option of grep. Alternatively you could use egrep without the -E option.
you can simply use:
dump-data | grep -E 'A|D|E'
awk can also write the matches from each input file to its own output file:
awk '/MY PATTERN/{print > "matches-"FILENAME;}' myfile{1,3}
Thanks to Guru at Stack Exchange.

Bash Sorting STDIN

I want to write a bash script that sorts the input by rules into different files. The first rule is to write all chars or strings in file1. The second rule is to write all numbers in file2. The third rule is to write all alphanumerical strings in file3. All special chars must be ignored. Because I am not familiar with bash, I don't know how to realize this.
Could someone help me?
Thanks,
Haniball
Thanks for the answers,
I wrote this script,
#!/bin/bash
inp=0
echo "Which filename for strings?"
read strg
touch $strg
echo "Which filename for nums?"
read nums
touch $nums
echo "Which filename for alphanumerics?"
read alphanums
touch $alphanums
while [ "$inp" != "quit" ]
do
echo "Input: "
read inp
echo $inp | grep -o '\<[a-zA-Z]+>' > $strg
echo $inp | grep -o '\<[0-9]>' > $nums
echo $inp | grep -o -E '\<[0-9]{2,}>' > $nums
done
After I ran it, it only writes strings to the string file.
Greetings, Haniball
Sure can help. See here:
How To Ask Questions The Smart Way
Help Vampires: A Spotter’s Guide
cool site about the bash is here: http://wiki.bash-hackers.org/doku.php
for sorting try man sort
for pattern matching try man grep
other useful tools: man sed man awk man strings man tee
And it is always correct to tag your homework as "homework" ;)
You can try something like:
<input_file strings -1 -a | tee chars_and_strings.txt |\
grep "^[A-Za-z0-9][A-Za-z0-9]*$" | tee alphanum.txt |\
grep "^[0-9][0-9]*$" > numonly.txt
The above is only for ASCII, no international (read: Unicode) chars, where things get a little bit more complicated.
grep is sufficient (your question is a bit vague. If I got something wrong, let me know...)
Using the following input file:
this is a string containing words,
single digits as in 1 and 2 as well
as whole numbers 42 1066
all chars or strings
$ grep -o '\<[a-zA-Z]\+\>' sorting_input
this
is
a
string
containing
words
single
digits
as
in
and
as
well
all single digit numbers
$ grep -o '\<[0-9]\>' sorting_input
1
2
all multiple digit numbers
$ grep -o -E '\<[0-9]{2,}\>' sorting_input
42
1066
Redirect the output to a file, i.e. grep ... > file1
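For example, reusing the three grep commands above (the file names are just placeholders):
grep -o '\<[a-zA-Z]\+\>' sorting_input > file1
grep -o '\<[0-9]\>' sorting_input > file2
grep -o -E '\<[0-9]{2,}\>' sorting_input > file3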
Bash really isn't the best language for this kind of task. While possible, I'd highly recommend the use of perl, python, or tcl for this.
That said, you can write all of stdin to a temporary file with shell redirection, then use a command like grep to send the matches to other files. It might look something like this:
#!/bin/bash
cat > temp
grep pattern1 temp > file1
grep pattern2 temp > file2
grep pattern3 temp > file3
rm -f temp
Then run it like this:
cat file_to_process | ./script.sh
I'll leave the specifics of the pattern matching to you.
