How to save the output of this awk command to a file?

I want to save the output of this command to another text file:
awk '{print $2}'
It extracts a column from a text file. Now I want to save that output to another text file.
Thanks

awk '{ print $2 }' text.txt > outputfile.txt
> => This redirects STDOUT to a file. If the file does not exist, it is created. If it does exist, its contents are truncated (in effect cleared) and the new data is written to it.
>> => Same as above, except that if the file exists, the new data is appended to it.
Eg:
$ cat /etc/passwd | awk -F: '{ print $1 }' | tail -10 > output.txt
$ cat output.txt
_warmd
_dovenull
_netstatistics
_avbdeviced
_krb_krbtgt
_krb_kadmin
_krb_changepw
_krb_kerberos
_krb_anonymous
_assetcache
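For append mode, a minimal sketch reusing the same pipeline (this adds another copy of the ten names to the end of output.txt):
$ cat /etc/passwd | awk -F: '{ print $1 }' | tail -10 >> output.txt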
Alternatively, you can use the tee command for redirection. tee will write STDOUT to a specified file as well as to the terminal screen.
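For example, a sketch using the filenames from the answer above (outputfile.txt is written and the second column is also echoed to the terminal):
$ awk '{ print $2 }' text.txt | tee outputfile.txt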
For more about shell redirection, see the following link:
http://www.techtrunch.com/scripting/redirections-and-file-descriptors

There is a way to do this from within awk itself (docs)
➜ cat text.txt
line 1
line 2
line three
line 4 4 4
➜ awk '{print $2}' text.txt
1
2
three
4
➜ awk '{print $2 >"text.out"}' text.txt
➜ cat text.out
1
2
three
4
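awk also supports >> for appending from within the program; a minimal sketch with the same files (assuming text.out may already contain data you want to keep):
➜ awk '{print $2 >> "text.out"}' text.txt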

Try this command:
cat ORGFILENAME.TXT | awk '{print $2}' > SAVENAME.TXT

Related

Optimize Multiline Pipe to Awk in Bash Function

I have this function:
field_get() {
    while read data; do
        echo $data | awk -F ';' -v number=$1 '{print $number}'
    done
}
which can be used like this:
cat filename | field_get 1
in order to extract the first field from some piped in input. This works but I'm iterating on each line and it's slower than expected.
Does anybody know how to avoid this iteration?
I tried to use:
stdin=$(cat)
echo $stdin | awk -F ';' -v number=$1 '{print $number}'
but the line breaks get lost and it treats all the stdin as a single line.
IMPORTANT: I need to pipe in the input because, in general, I do not simply have a file to cat. Assume the input is multiline; that is exactly the problem. I know I can use "awk something filename", but that won't help me.
Just lose the while. Awk is a while loop in itself:
field_get() {
    awk -F ';' -v number=$1 '{print $number}'
}
$ echo 1\;2\;3 | field_get 2
2
Update:
Not sure what you mean by your comment on multiline pipe and file but:
$ cat foo
1;2
3;4
$ cat foo | field_get 1
1
3
Use either stdin or a file:
field_get() {
    awk -F ';' -v number="$1" '{print $number}' "${2:-/dev/stdin}"
}
Test Results:
$ field_get() {
    awk -F ';' -v number="$1" '{print $number}' "${2:-/dev/stdin}"
}
$ echo '1;2;3;4' >testfile
$ field_get 3 testfile
3
$ echo '1;2;3;4' | field_get 2
2
No need to use a while loop and then awk; awk itself can read the input file. Here $1 is the argument passed to your script:
cat script.ksh
awk -v field="$1" '{print $field}' Input_file
./script.ksh 1
This is a job for the cut command:
cut -d';' -f1 somefile
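If you want to keep the field_get wrapper from the question, here is a minimal sketch of the same helper built on cut (the /dev/stdin fallback is an assumption borrowed from the awk answer above):
field_get() {
    cut -d';' -f"$1" "${2:-/dev/stdin}"
}
cat filename | field_get 1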

Passing two variables instead of two files to awk

Assume two multi-line text files that are dynamically generated during execution of a bash shell script: file1 and file2
$ echo -e "foo-bar\nbar-baz\nbaz-qux" > file1
$ cat file1
foo-bar
bar-baz
baz-qux
$ echo -e "foo\nbar\nbaz" > file2
$ cat file2
foo
bar
baz
Further assume that I wish to use awk involving an operation on the text strings of both files. For example:
$ awk 'NR==FNR{var1=$1;next} {print $var1"-"$1}' FS='-' file1 FS=' ' file2
Is there any way that I can skip having to save the text strings as files in my script and, instead, pass along the text strings to awk as variables (or as here-strings or the like)?
Something along the lines of:
$ var1=$(echo -e "foo-bar\nbar-baz\nbaz-qux")
$ var2=$(echo -e "foo\nbar\nbaz")
$ awk 'NR==FNR{var1=$1;next} {print $var1"-"$1}' FS='-' "$var1" FS=' ' "$var2"
# awk: fatal: cannot open file `foo-bar
# bar-baz
# baz-qux' for reading (No such file or directory)
You can use process substitution, which presents each command's output to awk as a file:
$ awk '{print FILENAME, FNR, $0}' <(echo 'foo') <(echo 'bar')
/dev/fd/63 1 foo
/dev/fd/62 1 bar
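Applying the same idea to the variables from the question, as a sketch (the awk program is kept as-is; printf preserves the embedded newlines):
$ awk 'NR==FNR{var1=$1;next} {print $var1"-"$1}' FS='-' <(printf '%s\n' "$var1") FS=' ' <(printf '%s\n' "$var2")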

Linux Command line Help | extract specific parts from file

Suppose I have a log which has data in the format given below
Time number status
2013-5-10 19:18:43.430 123456 success
2013-5-10 19:28:13.430 134324 fail
2013-5-10 19:58:33.430 456456 success
I want to extract the numbers having success status.
Is there any way on Linux, using command-line tools (grep, sed), to extract the data as described?
Thanks all.
grep only solution:
grep -Po '\d+(?= success)' file
or with awk only (assigning $0=$3 replaces the line with just the number, and the non-empty result acts as a true condition that triggers the default print):
awk '$4=="success"&&$0=$3' input
This prints the numbers whose status is success:
awk '$4 ~ /success/ {print $3}' logfile
You could do
(grep 'success' | cut -d ' ' -f 3) <$file
cat file | grep success | awk '{print $3}'
Using perl:
perl -ne '/success/ && split && print "$_[2]\n"' inputfile

search a keyword, if it matches pick the word beside it

255.255.0.0(queue=banglore)
255.255.0.10(queue=hyderabad)
255.255.1.2(cal = 10)
My script is:
command | awk '{print $3}' | sed '1,9d'
My output in the Linux terminal is roughly as shown above. Using awk and sed I removed some useless material, but now I want only the queue names without the parentheses (i.e. only the words banglore, hyderabad). How do I get that (using sed)? Note that the IP addresses change rapidly.
Thanks in advance.
Simple grep solution:
$ grep -Po '(?<=queue=)[^)]*' file
banglore
hyderabad
perl -lne 'if(/queue=/){m/\(queue=(.*?)\)/g;print $1}' your_file
Tested below:
> cat temp
255.255.0.0(queue=banglore)
255.255.0.10(queue=hyderabad)
255.255.1.2(cal = 10)
>
>
> perl -lne 'if(/queue=/){m/\(queue=(.*?)\)/g;print $1}' temp
banglore
hyderabad
>
awk version:
awk -F '\\(|\\)|=' '{if($3 !~/(^ )/)print $3}' temp
temp file:
255.255.0.0(queue=banglore)
255.255.0.10(queue=hyderabad)
255.255.1.2(cal = 10)
output:
banglore
hyderabad
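Since the question asks for sed specifically, a minimal sed sketch under the same input format (it prints only lines containing queue=):
sed -n 's/.*(queue=\([^)]*\)).*/\1/p' your_file
banglore
hyderabad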

How to count number of tabs in each line using shell script?

I need to write a script to count the number of tabs in each line of a file and print the output to a text file (e.g., output.txt).
How do I do this?
awk's gsub() returns the number of substitutions it made, so printing that count for each line gives the per-line tab count:
awk '{print gsub(/\t/,"")}' inputfile > output.txt
If you treat \t as the field delimiter, there will be one fewer \t than fields on each line:
awk -F'\t' '{ print NF-1 }' input.txt > output.txt
This deletes every non-tab character with sed and then prints the length of what remains on each line:
sed 's/[^\t]//g' input.txt | awk '{ print length }' > output.txt
Based on this answer.
This will give the total number of tabs in the file (note that od -c renders each tab as the two characters \t, so the backslash must be escaped in the grep pattern):
od -c infile | grep -o '\\t' | wc -l > output.txt
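A simpler sketch for the same total count (tr keeps only the tab characters and wc counts them):
tr -cd '\t' < infile | wc -c > output.txt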
This will give you number of tabs line by line:
awk '{print gsub(/\t/,"")}' infile > output.txt
