A bash script to count the number of all files [closed] - linux

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I just started learning Linux.
What I want to do is write a bash script that prints the file name, the number of lines, and the number of words to stdout, for every file in the directory.
For example: Apple.txt 15 155
I don't know how to write a command that works for all the files in the directory.

Based on your most recent comment, I would say you want something like:
wc -lw ./* | awk '{print $3 "\t" $1 "\t" $2}'
Note that you will get a line in the output (from stderr) for each directory that looks something like:
wc: ./this-is-a-directory: Is a directory
If the message about directories is undesirable, you can suppress stderr messages by adding 2>/dev/null to the wc command, like this:
wc -lw ./* 2>/dev/null | awk '{print $3 "\t" $1 "\t" $2}'
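If the per-file handling needs to be explicit, a loop-based sketch (which skips directories instead of silencing stderr) might look like:

```shell
# Print "name  lines  words" for each regular file in the current directory.
for f in ./*; do
    [ -f "$f" ] || continue            # skip directories and other non-files
    # wc -lw < file prints "lines words" with no filename;
    # split those two numbers into $1 and $2 with set --
    set -- $(wc -lw < "$f")
    printf '%s\t%s\t%s\n' "$f" "$1" "$2"
done
```

Reading with `wc -lw < "$f"` (redirection rather than a filename argument) is what keeps the filename out of wc's output, so no awk reordering is needed.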

Try this:
wc -lw ./*
It will be in the format of <lines> <words> <filename>.

Related

How to get from a file exactly what I want in Linux? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 months ago.
How to get from a file exactly what I want in Linux?
I have: 123456789012,refid2141,test1,test2,test3 and I want this: 123456789012 or 123456789012 test3.
$ echo "123456789012,refid2141,test1,test2,test3" | awk -F "," '{print $1}'
123456789012
$ echo "123456789012,refid2141,test1,test2,test3" | awk -F "," '{printf("%s, %s", $1,$5)}'
123456789012, test3
foo.csv:
123456789012,refid2141,test1,test2,test3
import csv

with open("foo.csv", "rt") as fd:
    data = list(csv.reader(fd))
print(data[0][0])
For a bash solution:
cut -d',' -f1 foo.csv
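cut can also pull both requested fields at once; a small sketch using the sample line from the question:

```shell
# extract the first field, or fields 1 and 5, from the sample CSV line
line='123456789012,refid2141,test1,test2,test3'
echo "$line" | cut -d',' -f1      # -> 123456789012
echo "$line" | cut -d',' -f1,5    # -> 123456789012,test3
```

Note that `-f1,5` joins the fields with the delimiter itself (a comma); for a space-separated result, the awk printf shown above is the simpler tool.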

Combine number of lines of more files with filename [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
Specify a command (or set of commands) that displays the number of lines of code in the .c and .h files in the current directory: each file in alphabetical order, followed by ":" and its line count, and finally the total number of lines.
An example that might be displayed would be :
test.c: 202
example.c: 124
example.h: 43
Total: 369
I'd like to find a solution in the shortest form possible. I've experimented with many commands, like:
find . -name '*.c' -o -name '*.h' | xargs wc -l
== it shows 0 ./path/test.c and the total, but isn't close enough
stat -c "%n:%s" *
== it shows test.c:0, but it shows all file types and doesn't show the number of lines or the total
wc -l *.c *.h | tr ' ' '\:'
== it shows 0:test.c and the total, but doesn't search in sub-directories and the order is reversed compared to the problem (filename: number_of_lines).
This one is closer to the answer but I'm out of ideas after searching most commands I saw in similar problems.
This should do it:
wc -l *.c *.h | awk '{print $2 ": " $1}'
Run a subshell in xargs:
xargs -n1 sh -c 'printf "%s: %s\n" "$1" "$(wc -l <"$1")"' --
xargs -n1 sh -c 'echo "$1 $(wc -l <"$1")"' --
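To get closer to the requested "file: count" plus Total format, a sketch combining wc's own total line with awk (assuming all the files fit on one command line) could be:

```shell
# wc prints a final "<n> total" line when given multiple files;
# relabel that line and reorder the others as "name: lines"
wc -l *.c *.h | awk '$2 == "total" {print "Total: " $1; next} {print $2 ": " $1}'
```

The shell glob already sorts each pattern's matches, though all `.c` files come before all `.h` files rather than in one merged alphabetical order.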

sed or awk command to merge two line into a single line [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
I have a text file with the following format.
12345
abcdefg
I need this to be in the same line. So the output should look like this...
12345 abcdefg
How should I proceed? Using sed or awk?
If you want to join every line[i] and line[i+1] with a space,
you could use paste:
paste -d' ' - - < file
For the given input and expected output, either of the approaches below should work.
Using xargs
$ cat infile
12345
abcdefg
$ xargs < infile
12345 abcdefg
Using tr
$ tr -s '\n' ' ' <infile ; echo
12345 abcdefg
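For comparison, an awk sketch that joins each odd-numbered line to the one after it:

```shell
# on odd lines, print without a newline; on even lines, finish the line
printf '12345\nabcdefg\n' | awk 'NR % 2 {printf "%s ", $0; next} {print}'
# -> 12345 abcdefg
```

Unlike `xargs`, this only pairs adjacent lines, so a file with many line pairs stays one pair per output line.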

Sorting numbers in a row on the BASH / Shell [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
There is a line:
00000000000000;000022233333;2;NONE;true;100,100,5,1,28;UNKNOWN
It is necessary to sort 100,100,5,1,28 numbers in descending order.
Example:
00000000000000;000022233333;2;NONE;true;100,100,28,5,1;UNKNOWN
Try this:
#!/bin/bash
while IFS= read -r line
do
    beforeC=$(echo "$line" | cut -f-5 -d';')
    sortcolumn=$(echo "$line" | awk -F ';' '{print $6}' | tr ',' '\n' | sort -rn | xargs | sed 's/ /,/g')
    afterC=$(echo "$line" | cut -f7- -d';')
    echo "$beforeC;$sortcolumn;$afterC"
done < file
user@host:/tmp/test$ cat file
00000000000000;000022233333;2;NONE;true;100,100,5,1,28;UNKNOWN
00000000000000;000022233333;2;NONE;true;99,100,5,1,28;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,99,5,1,28;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,100,4,1,28;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,100,4,0,28;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,100,4,1,27;UNKNOWN
user@host:/tmp/test$ ./sortAColumn.sh
00000000000000;000022233333;2;NONE;true;100,100,28,5,1;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,99,28,5,1;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,99,28,5,1;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,100,28,4,1;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,100,28,4,0;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,100,27,4,1;UNKNOWN
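The core sorting step can be tried on its own, outside the loop:

```shell
# split the comma-separated field onto lines, sort numerically
# descending, and rejoin with commas
echo "100,100,5,1,28" | tr ',' '\n' | sort -rn | paste -sd, -
# -> 100,100,28,5,1
```

`paste -sd, -` is a slightly shorter way to rejoin the lines than the `xargs | sed 's/ /,/g'` pair used in the script.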

awk: Iterate through content of a large list of files [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
So, I have about 60k-70k vCard files and want to check (or, at this point, count) which vCards contain a mail address (EMAIL;INTERNET:me@my-domain.com).
I tried to pass the output of find to awk, but I only get awk to work on the list of file names, not on each file's contents. How can I get awk to do that? I tried several combinations of find, xargs and awk, but I can't get it to work properly.
Thanks for your help,
Wolle
I'd probably use grep for this.
If you want to extract addresses from the files:
grep -rio "EMAIL;INTERNET:.*@[a-z0-9-]*\.[a-z]*" *
Use cut, sed or awk to remove the leading EMAIL;INTERNET::
... | cut -d: -f2
... | sed "s/.*://"
... | awk -F: '{print $2}'
If you want the names of the files containing a particular address:
grep -ril "EMAIL;INTERNET:me@my-domain\.com" *
If grep can't process that many files at once, drop the -r option and try with find and xargs:
find /start/dir -name "*.vcf" -print0 | xargs -0 -I {} grep -io "..." {}
Recursive grep can do this (note that `+` needs extended regular expressions, i.e. `-E`):
grep -rE 'EMAIL.+@'
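Since the question ultimately asks to count matching vCards, one possible sketch (reusing the /start/dir path and the EMAIL;INTERNET: prefix from above) is:

```shell
# list the .vcf files that contain an EMAIL;INTERNET: line, then count them;
# -l prints each matching file name once, and -print0 with xargs -0
# keeps filenames with spaces intact
find /start/dir -name '*.vcf' -print0 \
    | xargs -0 grep -l 'EMAIL;INTERNET:' \
    | wc -l
```

Because `grep -l` emits one line per matching file, `wc -l` on its output is exactly the count of vCards with an address.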
