Linux df command: extract "name" and "available space" [closed]

How can I extract only "name" and "available space" from the Linux df command?

You can use...
df | tail -n +2 | awk '{ print $1, $4 }'
...assuming you don't want the headers too. If you do, remove the tail.
We are piping df's output into tail, which cuts off the first line (the header), and then into awk, which prints the first and fourth columns.
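If your df comes from GNU coreutils, the --output option can select columns by name instead of counting fields, which is a bit more robust (a sketch, assuming GNU df):
df --output=source,avail | tail -n +2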

Assuming name and available space are 1st and 4th columns:
df | awk '{print $1, $4}'

The traditional approach would be:
df | awk '{printf "%-15s%-8s\n",$1,$4}'
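Combining the header skip with the aligned output, and using -h for human-readable sizes, would look something like this (a sketch along the same lines, not from the original answers):
df -h | awk 'NR > 1 { printf "%-20s%8s\n", $1, $4 }'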

Related

How to get from a file exactly what I want in Linux? [closed]

I have a line like 123456789012,refid2141,test1,test2,test3 and I want to extract either 123456789012 on its own or 123456789012 test3.
$ echo "123456789012,refid2141,test1,test2,test3" | awk -F "," '{print $1}'
123456789012
$ echo "123456789012,refid2141,test1,test2,test3" | awk -F "," '{printf("%s, %s", $1,$5)}'
123456789012, test3
foo.csv:
123456789012,refid2141,test1,test2,test3
import csv
with open("foo.csv", "rt") as fd:
    data = list(csv.reader(fd))    # parse every row of the CSV
print(data[0][0])                  # first field of the first row
For a bash solution:
cut -d',' -f1 foo.csv
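A pure-bash alternative (a minimal sketch; the variable names are just placeholders) reads the comma-separated fields directly:
IFS=, read -r id _ _ _ last < foo.csv
echo "$id $last"    # prints: 123456789012 test3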

sed or awk command to merge two lines into a single line [closed]

I have a text file with the following format.
12345
abcdefg
I need this to be in the same line. So the output should look like this...
12345 abcdefg
How should I proceed? Using sed or awk?
If you want to join every line[i] and line[i+1] with a space,
you could use paste:
paste -d' ' - - < file
For the given input and expected output, either of the approaches below should work.
Using xargs
$ cat infile
12345
abcdefg
$ xargs < infile
12345 abcdefg
Using tr
$ tr -s '\n' ' ' <infile ; echo
12345 abcdefg
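Since the question asks specifically about sed or awk, here are sketches of both (each joins every pair of lines with a space):
Using sed
$ sed 'N;s/\n/ /' infile
12345 abcdefg
Using awk
$ awk '{ printf "%s%s", $0, (NR % 2 ? " " : "\n") }' infile
12345 abcdefg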

Sorting numbers in a row in Bash / shell [closed]

There is a line:
00000000000000;000022233333;2;NONE;true;100,100,5,1,28;UNKNOWN
The numbers 100,100,5,1,28 need to be sorted in descending order.
Example:
00000000000000;000022233333;2;NONE;true;100,100,28,5,1;UNKNOWN
Try this:
#!/bin/bash
while read -r line
do
    # fields 1-5, unchanged
    beforeC=$(echo "$line" | cut -f-5 -d';')
    # field 6: split on commas, sort numerically in descending order, rejoin with commas
    sortcolumn=$(echo "$line" | awk -F ";" '{print $6}' | tr , "\n" | sort -r -n | xargs | sed 's/ /,/g')
    # fields 7 onward, unchanged
    afterC=$(echo "$line" | cut -f7- -d';')
    echo "$beforeC;$sortcolumn;$afterC"
done <file
user@host:/tmp/test$ cat file
00000000000000;000022233333;2;NONE;true;100,100,5,1,28;UNKNOWN
00000000000000;000022233333;2;NONE;true;99,100,5,1,28;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,99,5,1,28;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,100,4,1,28;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,100,4,0,28;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,100,4,1,27;UNKNOWN
user@host:/tmp/test$ ./sortAColumn.sh
00000000000000;000022233333;2;NONE;true;100,100,28,5,1;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,99,28,5,1;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,99,28,5,1;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,100,28,4,1;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,100,28,4,0;UNKNOWN
00000000000000;000022233333;2;NONE;true;100,100,27,4,1;UNKNOWN
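If GNU awk is available, the same transformation fits in a single awk call (a sketch relying on gawk's asort with the "@val_num_desc" comparison, which sorts array values numerically in descending order):
gawk -F';' -v OFS=';' '{
    n = split($6, a, ",")           # split the comma list in field 6
    asort(a, a, "@val_num_desc")    # numeric sort, descending (GNU awk only)
    s = a[1]
    for (i = 2; i <= n; i++) s = s "," a[i]
    $6 = s                          # rebuild the record with the sorted list
    print
}' file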

A bash script to count the number of all files [closed]

I just started to learn Linux.
What I want to do is write a bash script that prints the file name, the number of lines, and the number of words to stdout, for all files in the directory.
for example: Apple.txt 15 155
I don't know how to write a command that can work for all the files in the directory.
Based on your most recent comment, I would say you want something like:
wc -lw ./* | awk '{print $3 "\t" $1 "\t" $2}'
Note that wc will also print a message on stderr for each directory, which looks something like:
wc: ./this-is-a-directory: Is a directory
If the message about directories is undesirable, you can suppress stderr messages by adding 2>/dev/null to the wc command, like this:
wc -lw ./* 2>/dev/null | awk '{print $3 "\t" $1 "\t" $2}'
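If you would rather skip directories explicitly than discard the error messages, a per-file loop works too (a minimal sketch, assuming bash):
for f in ./*; do
    [ -f "$f" ] || continue                 # skip directories and other non-regular files
    read -r lines words _ < <(wc -lw "$f")  # wc prints: lines words filename
    printf '%s\t%s\t%s\n' "$f" "$lines" "$words"
done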
Try this:
wc -lw ./*
It will be in the format of <lines> <words> <filename>.

Selecting specific values in a bash script [closed]

I need to get some information from this output of the df command:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 19G 3.8G 14G 22% /
I need to get:
The available space value
And the used space value
Thanks guys!
Use awk:
df / | awk 'FNR>1 {print $3, $4}'
To print the columns of the second line of the command, you can use awk or similar:
$ df -h / | awk 'FNR==2 {print $1, "used: " $3, "avail: " $4}'
For scripting purposes you could read the command output into an array and get your values there:
#!/bin/bash
line=( $(df -h / | tail -n +2) )
printf "%s\n" "${line[0]} used: ${line[2]} avail: ${line[3]}"
This is another solution, without awk.
df / --output=used,avail | tail -n +2
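For scripting, the --output values can also be read straight into shell variables (a sketch, assuming GNU df and bash):
read -r used avail < <(df --output=used,avail / | tail -n +2)
echo "used: $used  avail: $avail"    # values are in 1K blocks by default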
