How can I extract a specific number from df in bash? [duplicate] - linux

This question already has answers here:
How can I *only* get the number of bytes available on a disk in bash?
(7 answers)
Closed 6 years ago.
My aim is to check whether there is still enough space on my disk every time my script (bash) performs a step.
Running df; echo $? prints:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdc4 1869858440 1680951776 93900284 95% /mnt/dd
0
The 0 is the exit status printed by echo $?.
In my case, I only want 93900284 in a variable or as the result.
I already read man df.

df --output=avail /path/to/where/you/want/to/write | tail -n 1
BTW: bash 'return' values (in this case 0 == success) are exit codes; the way you phrase it, it seems you are trying to capture that rather than the output. In that case, you might want to read this.
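Putting that together for the stated goal, here is a minimal sketch of a guard you could call before each step (the mount point /mnt/dd is taken from the question; the 1 GiB threshold is an assumption):
required_kib=1048576    # assumed threshold: 1 GiB in 1K blocks
avail_kib=$(df --output=avail /mnt/dd | tail -n 1)
if (( avail_kib < required_kib )); then
    echo "not enough space left on /mnt/dd" >&2
    exit 1
fi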

You can use awk to extract a suitable field from the output:
BASH_VAR=$(df | awk '/\/dev\/sdc4/ {print $4}')

If what you want to do is display only the available disk space, you can use the following command:
df -k /dev/sdc4 | tail -1 | awk '{print $4}'

Related

Shell script: split the lines of one huge file into two separate files in one go? [duplicate]

This question already has answers here:
How to save both matching and non-matching from grep
(3 answers)
Closed 1 year ago.
Currently my shell script iterates over the lines of one huge file twice (what I want to do is just like the shell script below):
grep 'some_text' huge_file.txt > lines_contains_a.txt
grep -v 'some_text' huge_file.txt > lines_not_contains_a.txt
but it is slow.
How can I do the same thing while iterating over the lines only once?
Thanks!
With GNU awk:
awk '/some_text/ { print >> "lines_contains_a.txt" }
!/some_text/ { print >> "lines_not_contains_a.txt" }' huge_file.txt
With sed:
sed -n '/some_text/ w lines_contains_a.txt
/some_text/! w lines_not_contains_a.txt' huge_file.txt
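A single-pattern awk variant (a sketch using the file names from the question) tests the regex only once per line; note that inside awk, > truncates each output file on its first write and appends afterwards, so repeated runs start clean, while the >> used above keeps appending to leftovers from earlier runs:
awk '/some_text/ { print > "lines_contains_a.txt"; next }
                 { print > "lines_not_contains_a.txt" }' huge_file.txt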

How to line up the contents of a variable [duplicate]

This question already has answers here:
How to join multiple lines of filenames into one with custom delimiter
(22 answers)
Closed 3 years ago.
I run the following commands:
cd /proc
process=$(ls | egrep '[0-9]')
echo $process
I get the following output:
1
108
109
8130
However, I want to have the following output:
1 108 109 8130
How can I do that?
Since your variable process is only used in the echo, I would simplify your script to
cd /proc
echo *[0-9]*
If you really need the process names for postprocessing in a later step, I would store them in an array:
processes=(*[0-9]*)
With this approach, you can display them in a single line using
echo "${processes[#]}"
The easiest way is to use the echo command:
...
process=$(echo [0-9]*)
...
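If the list comes from a pipeline rather than a glob, tr can flatten the newline-separated output onto one line. A sketch (the anchored regex is an assumption: it matches purely numeric names, while the original egrep '[0-9]' also matches names that merely contain a digit):
ls /proc | grep -E '^[0-9]+$' | tr '\n' ' '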

How to display exactly 10 lines of a command output [duplicate]

This question already has answers here:
How can I view only the first n lines of the file?
(2 answers)
Closed 4 years ago.
I am working on an ARM-based processor, and I am preparing a shell script on it that writes the output of a set of commands into a text file.
I want it to write exactly 10 lines of a command's output (for example, the top command), but I don't know how. Would you help me, please?
Thank you.
Which operating system are you working on? If you have awk installed, you can do:
command | awk 'NR<=10' > f.txt
command | head -n 10 > file.txt
If you want a pure Bash solution:
n=0
command | while (( n++ != 10 )) && IFS= read -r line; do
printf '%s\n' "$line"
done
command | sed -n '1,10p' > f.txt
sed filters lines based on a pattern and performs an action on them. Here the address 1,10 selects lines whose number is between 1 and 10, and the action is just to 'p'rint them; -n suppresses sed's default printing of every line (without it, the first 10 lines would appear twice).
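Since the question mentions top: full-screen interactive commands usually need a batch flag before their output can be piped. A sketch for procps top (the flags are assumed to match your build, which may differ on an embedded system):
top -b -n 1 | head -n 10 > file.txt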

Saving a certain string from command line [duplicate]

This question already has an answer here:
how to print certain column with numbers only in awk
(1 answer)
Closed 5 years ago.
I am writing a bash script, and when I execute a certain command from my script it spits out an ID like this:
VM ID: 12345
The IDs are different every time. How can I extract just the ID number and store it in a variable in my script?
I tried to put a ">file" after the command and it does not seem to work.
So, it depends on the possible IDs.
If it is always the 3rd field in the string:
echo "VM ID: 12345" | awk '{print $3}' > file
If the ID is always the only number in the output, you can use tr:
echo "VM ID: 12345" | tr -d '[:alpha:][:blank:][:punct:]'
12345
However, if another number is in the string, it will be included as well:
echo "VM3 ID: 12345" | tr -d '[:alpha:][:blank:][:punct:]'
312345
You can also make a pattern that gets the number after the first ":".
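A sketch of that idea with sed, which also shows how to store the result in a variable (replace the echo with whatever command prints the ID):
vm_id=$(echo "VM ID: 12345" | sed 's/^[^:]*: *//')
echo "$vm_id"
12345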

ubuntu terminal: grep for numbers compare [duplicate]

This question already has answers here:
Is it possible to use egrep to match numbers within a range?
(2 answers)
Closed 6 years ago.
I have a text file with a table |ID | NAME | CREDIT| and its content.
Is it possible to get all lines where CREDIT < 1337 (for example) with grep and ONLY grep, no awk or anything else?
I have no idea, thanks!
You can do it with pure grep, but it's ugly. Here you are:
grep -e " .$" -e " ..$" -e " ...$" -e " 1[0-2]..$" -e " 13[0-2].$" -e " 133[0-6]$"
This is a job very much unsuited to grep. As an artisan, you should select your tools carefully: no one wants to try cutting down a giant Karri tree with a screwdriver :-)
It is almost certainly a job for awk. You haven't specified your content lines, so let's assume for now they're of the form:
|iii|nnnnnnn|ccccc|
where the i, n and c sequences are the relevant column data.
To get those lines where the credit value is less than 1337, it's a simple matter to do:
awk -F'|' '$4 < 1337 {print}' inputFileName
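A quick check with hypothetical sample data (note that the leading | makes $1 an empty field, which is why CREDIT is $4):
printf '|1|alice|1500|\n|2|bob|42|\n' | awk -F'|' '$4 < 1337 {print}'
|2|bob|42|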
