redirect output from one function to another - linux

I'm trying to create a pipeline from user input, but when I redirect the output I get output with no newlines; it's just one huge single line. Here's the code:
function stack(){
    echo $(history|tail -1|cut -d" " -f5-|cut -d "|" -f1) >> ~/commands
    local last=$(tail -1 ~/commands)
    echo $(eval $last) >> ~/output
}
Is there a better way to pipe the output from this to a file? Echo seems to corrupt the output.

I'm not sure I understand the purpose of the cuts, but quotes are missing around $(), so the output is split into words using IFS:
echo "$(eval "$last")"
Maybe cut -c8- is safer than cut -d" " -f5- for history entries whose number does not have exactly 3 digits.
Also, cut -d"|" -f1 can fail if | is used literally, for example echo '|'.
Maybe you can look at Event designators in the bash manual: in an interactive bash, the following will run the last command:
$ !-1
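Putting those suggestions together, a possible rewrite of the function might look like this (an untested sketch; it keeps the original ~/commands and ~/output files):
function stack(){
    # Take the last history entry, strip the history number (cut -c8-)
    # and drop everything from the first literal "|" onwards.
    local cmd
    cmd=$(history | tail -1 | cut -c8- | cut -d'|' -f1)
    printf '%s\n' "$cmd" >> ~/commands

    local last
    last=$(tail -1 ~/commands)

    # Quoting the command substitution keeps the newlines intact;
    # alternatively, skip echo/printf and redirect eval directly:
    #   eval "$last" >> ~/output
    printf '%s\n' "$(eval "$last")" >> ~/output
}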

Related

Using columns in bash

I've used the column command to split some of my output into 3 different columns. The problem is with the final column: the filetype output is being split into a 4th and 5th column because of the spaces.
Can somebody tell me how to change my code so that the output stays under the Filetype column?
list_files()
{
    if [ "$(ls -A ~/.junkdir)" ]
    then
        filesdir=/home/student/.junkdir/*
        echo "Listing files in Junk Directory"
        output="FILENAME SIZE(BYTES) TYPE \n\n---------------- ---------------- ------------------- "
        for listed_file in $filesdir
        do
            file_name=$(basename "file $listed_file" | cut -d ' ' -f1)
            file_size=$(du --bytes $listed_file | awk '{print $1}')
            file_type=$(file $listed_file | cut -d ' ' -f2-)
            output="$output\n${file_name} ${file_size} ${file_type}\n"
        done
        echo -ne $output | column -t
    else
        echo 'Junk directory is empty'
    fi
}
The output at the moment..
Listing files in Junk Directory
FILENAME SIZE(BYTES) TYPE
---------------- ---------------- -------------------
files.txt 216 ASCII text
forLoop 401 Bourne-Again shell script,
ASCII text executable
I rarely give a full solution, but it seems you are really stuck.
list_files2()
{
    filesdir=/home/student/.junkdir/*
    printf "FILENAME\1SIZE(BYTES)\1TYPE\1\n\n----------------\1----------------\1-------------------\n"
    for listed_file in $filesdir
    do
        file_name=$(basename "file $listed_file" | cut -d ' ' -f1)
        file_size=$(du --bytes $listed_file | awk '{print $1}')
        file_type=$(file $listed_file | cut -d ' ' -f2-)
        printf "%s\1%s\1%s\n" "${file_name}" "${file_size}" "${file_type}"
    done
}
list_files()
{
    if [ "$(ls -A ~/.junkdir)" ]
    then
        echo "Listing files in Junk Directory"
        list_files2 | column -t -s $'\1'
    else
        echo 'Junk directory is empty'
    fi
}
I slightly reorganized your code and made some other changes as well. I will explain what I did.
$'\1' is the 0x01 char. Even though I originally proposed $'\0', my version of column has a weird interaction with that character appearing in the input. In shell scripting practice it's generally a bad idea to assume a blank separator anyway. In your case you got caught by spaces, which is understandable: whitespace has so much overloaded meaning that you cannot prevent it from appearing in human-readable text. The solution is to use an exotic character such as 0x00 or 0x01 as the separator instead, since it will almost never show up as part of the text. That makes it very safe to use; in fact, it's common in portable shell scripting to use 0x00 as a separator.
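As a quick illustration (mine, not part of the original answer), a field that contains spaces survives intact when the separator is the 0x01 character:
# the long file type stays in one column because column splits only on 0x01
printf '%s\1%s\1%s\n' 'forLoop' '401' 'Bourne-Again shell script, ASCII text executable' | column -t -s $'\1'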
Do not concatenate strings the way you did. In fact, there are a couple of problems with it.
One is that you keep concatenating the string even though you don't really need the intermediate result.
Another bite-back: what if the text contains a sequence like \n that should be taken literally? echo -e is not going to distinguish that; yet another SO question down the road (see the sketch below).
Using printf is in fact preferable in shell, though I use echo a lot myself as well. Here the benefit of using printf is evident.
I don't have your files so I didn't try it, though I would expect it to work. Let me know if there are glitches here and there.
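A tiny example (my own) of that \n pitfall:
name='report\nfinal.txt'
echo -e "$name"        # expands the \n and breaks the name over two lines
printf '%s\n' "$name"  # prints the string exactly as stored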
Perhaps you can try
output="$output\n${file_name}\t${file_size}\t${file_type}\n"
...
echo -ne $output

store command output to array in shell script

I'm using ssh to connect to a remote machine and read a log file there. From that log file, based on some tokens, I extract specific logs and store them in a variable. Every log is on a new line in the log file, and the data can contain any character, including whitespace.
array=("$(egrep "UserComments/propagateBundle-2013-10-19--04:42:13|UserComments/propagateBundle-2013-10-19--04:38:36|UserComments/propagateBundle-2013-10-19--04:34:24" <path>/propagateBundle.log)")
echo ${array[0]}
echo "$array"
The first echo prints the complete output on one line, separated by whitespace, while the other prints the output on separate lines. The problem is, I'm not able to save this output as an array. I tried this:
newArray=("$array")
max=${#newArray[@]}
echo $max
But echoing 'max' yields '1' on the screen. How can I save the output in an array? I also tried using
IFS=`\n`
but could not get the data in an array.
EDIT
I used the solution given by Anubhav and it worked like a charm. Then I faced a second issue: since my data contains whitespace, the array broke at the whitespace and wrongly split single comments into multiple array elements. So I used
IFS=`\n`
and also used a $ symbol before the backticks. Although this solves my problem, I still get an error in the logs:
test.sh: line 11: n: command not found
Any suggestions?
Don't put quotes in the command substitution:
array=( $(egrep "UserComments/propagateBundle-2013-10-19--04:42:13|UserComments/propagateBundle-2013-10-19--04:38:36|UserComments/propagateBundle-2013-10-19--04:34:24" <path>/propagateBundle.log) )
With quotes, as in your code, the whole output is treated as a single string in the array.
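If the individual lines contain spaces (the problem described in the edit above), one option is to limit word splitting to newlines for that assignment; a rough sketch, assuming bash (pattern shortened here for brevity):
# Split the command substitution on newlines only, so each matching log
# line becomes one array element even if it contains spaces.
IFS=$'\n'
array=( $(egrep "UserComments/propagateBundle-2013-10-19--04:42:13" <path>/propagateBundle.log) )
unset IFS
echo "${#array[@]}"   # number of captured lines
echo "${array[0]}"    # first captured line
# On bash 4+, mapfile avoids changing IFS at all:
#   mapfile -t array < <(egrep "..." <path>/propagateBundle.log)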
I've used IFS=('\n'), otherwise all the "n" chars disappear from the results and the sort command doesn't work properly. See below; it is a customized llq output.
#!/bin/bash
IFS=('\n')
raw=(`llq -f %id %o %gu %gl %st %BS %c`)
echo
echo ${raw[*]} | grep "step(s)"
echo
echo ${raw[*]} | grep "Step"
echo ${raw[*]} | grep "\---*"
echo ${raw[*]} | grep "bgp-fn*" | sort -k5 -r
echo ${raw[*]} | grep "\---*"
echo ${raw[*]} | grep "Step"
echo
echo ${raw[*]} | grep "step(s)"
echo

Line from bash command output stored in variable as string

I'm trying to find a solution to a problem analogous to this one:
#command_A
A_output_Line_1
A_output_Line_2
A_output_Line_3
#command_B
B_output_Line_1
B_output_Line_2
Now I need to compare A_output_Line_2 and B_output_Line_1 and echo "Correct" if they are equal and "Not Correct" otherwise.
I guess the easiest way to do this is to copy a line of output into some variable and then, after executing the two commands, simply compare the variables and echo something.
I need to implement this in a bash script, and any information on how to get a certain line of output stored in a variable would help me put the pieces together.
Also, it would be cool if anyone could tell me not only how to copy/store a whole line, but also just a word or a range such as line 1, bytes 4-12, stored as a string in a variable.
I am not a complete beginner, but also not anywhere near an advanced Linux bash user. Thanks for any help in advance, and sorry for my bad English!
An easier way might be to use diff, no?
Something like:
command_A > command_A.output
command_B > command_B.output
diff command_A.output command_B.output
This will work for comparing multiple strings.
But, since you want to know about single lines (and words in the lines) here are some pointers:
# first line of output of command_A
command_A | head -n 1
The -n 1 option says to use only the first line (the default is 10, I think).
# second line of output of command_A
command_A | head -n 2 | tail -n 1
that will take the first two lines of the output of command_A and then the last of those two lines. Happy times :)
You can now store this information in a variable:
export output_A=`command_A | head -n 2 | tail -n 1`
export output_B=`command_B | head -n 1`
And then compare it:
if [ "$output_A" == "$output_B" ]; then echo 'Correct'; else echo 'Not Correct'; fi
To just get parts of a string, try looking into cut or (for more powerful stuff) sed and awk.
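For example, a small sketch (not from the answer above) of storing line 2, characters 4-12, of a command's output in a variable:
# second line of command_A, then characters 4 through 12 of that line
part_A=$(command_A | head -n 2 | tail -n 1 | cut -c4-12)
echo "$part_A"
# (use cut -b4-12 if you really mean bytes rather than characters)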
Also, just learning a good general-purpose scripting language like Python or Ruby (even Perl) can go a long way with this kind of problem.
Use the IFS (internal field separator) to separate on newlines and store the outputs in an array.
#!/bin/bash
IFS='
'
array_a=( $(./a.sh) )
array_b=( $(./b.sh) )
if [ "${array_a[1]}" = "${array_b[0]}" ]; then
echo "CORRECT"
else
echo "INCORRECT"
fi

Error with a script in bash

I have a little error with a script I wrote in bash and I can't figure out what I'm doing wrong.
Note that I'm using this script for thousands of calculations and this error happened only a few times (like 20 or so), but it still happened.
What the script does is this: basically it takes as input a web page that I got from a site with the w3m utility, and it counts all the occurrences of the words in it. Afterwards it orders them from the most common to the ones that occur only once.
This is the code:
#!/bin/bash
# counts the numbers of words from specific sites #
# writes in a file the occurrences ordered from the most common #
touch check # file used to analyze the occurrences
touch distribution # final file ordered
page=$1 # the web page that needs to be analyzed
occurrences=$2 # temporary file for the occurrences
dictionary=$3 # dictionary used for another purpose (ignore this)
# write the words one by column
cat $page | tr -c [:alnum:] "\n" | sed '/^$/d' > check
# loop to analyze the words
cat check | while read words
do
    word=${words}
    strlen=${#word}
    # ignores blacklisted words or small ones
    if ! grep -Fxq $word .blacklist && [ $strlen -gt 2 ]
    then
        # if the word isn't in the file
        if [ `egrep -c -i "^$word: " $occurrences` -eq 0 ]
        then
            echo "$word: 1" | cat >> $occurrences
        # else if it is already in the file, it calculates the occurrences
        else
            old=`awk -v words=$word -F": " '$1==words { print $2 }' $occurrences`
            ### HERE IS THE ERROR, EITHER THE LET OR THE SED ###
            let "new=old+1"
            sed -i "s/^$word: $old$/$word: $new/g" $occurrences
        fi
    fi
done
# orders the words
awk -F": " '{print $2" "$1}' $occurrences | sort -rn | awk -F" " '{print $2": "$1}' > distribution
# ignore this, not important
grep -w "1" distribution | awk -F ":" '{print $1}' > temp_dictionary
for line in `cat temp_dictionary`
do
    if ! grep -Fxq $line $dictionary
    then
        echo $line >> $dictionary
    fi
done
rm check
rm temp_dictionary
This is the error (I'm translating it, so it could be different in English):
./wordOccurrences line:30 let:x // where x is a number, usually 9 or 10 (but also 11, 13, etc)
1: syntax error in the expression (the error token is 1)
sed: expression -e #1, character y: command 's' not terminated // where y is another number (this one is also usually 9 or 10) with y being different from x
EDIT:
Talking with kev it looks like it's a newline problem
I added an echo between let and sed to print the sed and it worked perfectly for like 5 to 10 minutes until that error. Usually the sed without error looked like this:
s/^CONSULENTI: 6$/CONSULENTI: 7/g
but when I got the error it was like this:
s/^00145: 1
1$/00145: 4/g
how to fix this?
If you get a newline in $old, it means awk printed two lines, so there is a duplicate in $occurrences.
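A quick way (my suggestion, not from the original answer) to confirm that is to list any word that appears more than once in the occurrences file:
# words that occur on more than one line of $occurrences, compared case-insensitively
cut -d: -f1 "$occurrences" | tr '[:upper:]' '[:lower:]' | sort | uniq -d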
The script seems complicated for counting words, and it is not efficient because it launches many processes and processes the file in a loop;
maybe you can do something similar with
sort | uniq -c
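For instance, a rough, untested sketch of that approach, which produces the same "word: count" format ordered from most common to least without the read loop (blacklist and minimum-length filtering omitted):
# one word per line, empty lines dropped, counted, most common first, printed as "word: count"
tr -c '[:alnum:]' '\n' < "$page" | sed '/^$/d' | sort | uniq -c | sort -rn |
    awk '{print $2": "$1}' > distribution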
You should also consider that your case-insensitivity is not consistent throughout the program. I created a page with just "foooo" in it and ran the program, then created one with "Foooo" in it and ran the program again. The 'old=`awk...' line sets 'old' to the empty string because awk is matching case sensitively. This results in the occurrences file not being updated. The subsequent sed and possibly some of the greps are also case sensitive.
This may not be the only error since it doesn't explain the error message you saw, but it is an indication that the same word with different capitalization will be handled erroneously by your script.
The following would separate the words, lowercase them, and then remove the ones smaller than three characters:
tr -cs '[:alnum:]' '\n' <foo | tr '[:upper:]' '[:lower:]' | egrep -v '^.{0,2}$'
Using this at the front of your script would mean that the rest of the script would not have to be case insensitive to be correct.

Count the number of occurrences in a string. Linux

Okay, so what I am trying to figure out is how to count the number of periods in a string and then cut everything up to that point, minus 2. Meaning like this:
string="aaa.bbb.ccc.ddd.google.com"
number_of_periods="5"
number_of_periods=`expr $number_of_periods-2`
string=`echo $string | cut -d"." -f$number_of_periods`
echo $string
result: "aaa.bbb.ccc.ddd"
The way that I was thinking of doing it was sending the string to a text file and then just grepping for the number of occurrences, like this:
grep -c "." infile
The reason I don't want to do that is that I want to avoid creating another text file, since I do not have permission to do so. It would also be simpler for the code I am trying to build right now.
EDIT
I don't think I made it clear but I want to make finding the number of periods more dynamic because the address I will be looking at will change as the script moves forward.
If you don't need to count the dots, but just want to remove the penultimate dot and everything after it, you can use Bash's built-in string manipulation.
${string%substring}
Deletes shortest match of $substring from back of $string.
Example:
$ string="aaa.bbb.ccc.ddd.google.com"
$ echo ${string%.*.*}
aaa.bbb.ccc.ddd
Nice and simple and no need for sed, awk or cut!
What about this:
echo "aaa.bbb.ccc.ddd.google.com"|awk 'BEGIN{FS=OFS="."}{NF=NF-2}1'
(further shortened thanks to a helpful comment from @steve)
gives:
aaa.bbb.ccc.ddd
The awk command:
awk 'BEGIN{FS=OFS="."}{NF=NF-2}1'
works by separating the input line into fields (FS) by ., then joining them as output (OFS) with ., but the number of fields (NF) has been reduced by 2. The final 1 in the command is responsible for the print.
This will reduce a given input line by eliminating the last two period separated items.
This approach is "shell-agnostic" :)
Perhaps this will help:
#!/bin/sh
input="aaa.bbb.ccc.ddd.google.com"
number_of_fields=$(echo $input | tr "." "\n" | wc -l)
interesting_fields=$(($number_of_fields-2))
echo $input | cut -d. -f-${interesting_fields}
grep -o "\." <<<"aaa.bbb.ccc.ddd.google.com" | wc -l
5
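Combining that count with cut gives a version that adapts to however many components the address has (a sketch along the lines of the original attempt):
string="aaa.bbb.ccc.ddd.google.com"
dots=$(grep -o "\." <<<"$string" | wc -l)   # 5 periods, so 6 fields
keep=$((dots - 1))                          # keep all but the last two fields
echo "$string" | cut -d"." -f-"$keep"       # aaa.bbb.ccc.ddd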
