Selecting specific values in a bash script [closed] - linux

I need to get some information from this output (from the df command):
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 19G 3.8G 14G 22% /
I need to get:
The available space value
And the used space value
Thanks guys!

Use awk:
df / | awk 'FNR>1 {print $3, $4}'
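If you want the two values in shell variables rather than just printed, a minimal sketch (assuming the two-line df layout shown above):
read -r used avail < <(df / | awk 'FNR==2 {print $3, $4}')
echo "used: $used  avail: $avail"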

To print the columns of the second line of the command, you can use awk or similar:
$ df -h / | awk 'FNR==2 {print $1, "used: " $3, "avail: " $4}'
For scripting purposes you could read the command output into an array and get your values there:
#!/bin/bash
line=( $(df -h / | tail -n +2) )
printf "%s\n" "${line[0]} used: ${line[2]} avail: ${line[3]}"
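A variant of the same idea that avoids the unquoted command substitution (which would also glob-expand any stray * in the output) is to let read split the fields into the array:
read -r -a line < <(df -h / | tail -n +2)
printf "%s\n" "${line[0]} used: ${line[2]} avail: ${line[3]}"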

This is another solution, without awk.
df / --output=used,avail | tail -n +2
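If the goal is to act on the number rather than just display it, say to warn below some threshold, a sketch assuming GNU df and a hypothetical 1 GiB limit:
avail_kb=$(df -k --output=avail / | tail -n +2 | tr -d ' ')
if [ "$avail_kb" -lt 1048576 ]; then    # 1048576 KiB = 1 GiB
    echo "warning: less than 1 GiB free on /" >&2
fi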

Related

A bash script to count the number of all files [closed]

I just started to learn Linux.
What I want to do is write a bash script that prints the file name, the number of lines, and the number of words to stdout, for every file in the directory.
For example: Apple.txt 15 155
I don't know how to write a command that works for all the files in the directory.
Based on your most recent comment, I would say you want something like:
wc -lw ./* | awk '{print $3 "\t" $1 "\t" $2}'
Note that you will get a line in the output (from stderr) for each directory that looks something like:
wc: ./this-is-a-directory: Is a directory
If the message about directories is undesirable, you can suppress stderr messages by adding 2>/dev/null to the wc command, like this:
wc -lw ./* 2>/dev/null | awk '{print $3 "\t" $1 "\t" $2}'
Try this:
wc -lw ./*
It will be in the format of <lines> <words> <filename>.
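Note that wc also prints a final "total" line when given multiple files, and the awk reordering above will pass that through too. If you want exactly one clean line per regular file, with no directory errors or total line, a loop sketch along these lines should work:
for f in ./*; do
    [ -f "$f" ] || continue                    # skip directories etc.
    read -r lines words _ < <(wc -lw "$f")     # wc prints: lines words name
    printf '%s\t%s\t%s\n' "${f#./}" "$lines" "$words"
done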

for loop in bash to get more than one variable to use in one command [closed]

I have one text file and I need to get two variables from the same text and put them into one command, like:
for i in `cat TEXT | grep -i UID | awk '{print($2)}'` &&
x in `cat TEXT | grep -i LOGICAL | awk '{print($4)}'`
do
echo "naviseccli -h 10.1.1.37 sancopy -create -incremental -name copy_$i -srcwwn $x -destwwn xxxxxxxxxxxxxxxxxxxxxx -verify -linkbw 2048" >> OUTPUT
done
Is there any possible way to accomplish this? I am a storage admin and need to run tons of commands, so I need this script to do it.
You could make use of file descriptors. Moreover, your cat, grep, awk command could be combined into a single awk command:
exec 5< <(awk 'BEGIN{IGNORECASE=1} /UID/ {print $2}' TEXT)
exec 6< <(awk 'BEGIN{IGNORECASE=1} /LOGICAL/ {print $4}' TEXT)
while read i <&5 && read x <&6
do
echo command $i $x # Do something with i and x here!
done
Perform the cat operations first and save the result into two arrays. Then you can iterate with an index over one array and use the same index to also access the other one.
See http://tldp.org/LDP/abs/html/arrays.html about arrays in bash. Especially see the section "Example 27-5. Loading the contents of a script into an array".
With that resource you should be able to both populate your arrays and then also process them.
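For example, a minimal sketch of that array approach (mapfile needs bash 4; the awk filters mirror the case-insensitive grep from the question):
mapfile -t uids < <(awk 'toupper($0) ~ /UID/ {print $2}' TEXT)
mapfile -t wwns < <(awk 'toupper($0) ~ /LOGICAL/ {print $4}' TEXT)
for idx in "${!uids[@]}"; do
    echo "command ${uids[idx]} ${wwns[idx]}"   # use both values here
done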
Since your words don't have spaces in them (by virtue of the use of awk), you could use:
paste <(grep -i UID TEXT | awk '{print($2)}') \
<(grep -i LOGICAL TEXT | awk '{print($4)}') |
while read i x
do
echo "naviseccli -h 10.1.1.37 sancopy -create -incremental -name copy_$i -srcwwn" \
"$x -destwwn xxxxxxxxxxxxxxxxxxxxxx -verify -linkbw 2048" >> OUTPUT
done
This uses Process Substitution to give paste two files which it pieces together. Each line will have two fields on it, which are read into i and x for use in the body of the loop.
Note that there is no need to use cat; you were eligible for a UUOC (Useless Use of cat) award.
I'll take a wild guess that UID and LOGICAL must be on the same line in your incoming TEXT, in which case this might actually make some sense and work:
awk '/LOGICAL/ && /UID/ { print $2, $4 }' TEXT | while read i x
do
echo "naviseccli -h 10.1.1.37 sancopy -create -incremental -name copy_$i -srcwwn" \
"$x -destwwn xxxxxxxxxxxxxxxxxxxxxx -verify -linkbw 2048"
done

awk: Iterate through content of a large list of files [closed]

So, I have about 60k-70k vCard files and want to check (or, at this point, count) which vCards contain an email address (EMAIL;INTERNET:me@my-domain.com).
I tried to pass the output of find to awk, but I only get awk to work with the list of file names, not with each file's content. How can I get awk to do that? I tried several combinations of find, xargs and awk, but I can't get it to work properly.
Thanks for your help,
Wolle
I'd probably use grep for this.
If you want to extract addresses from the files:
grep -rio "EMAIL;INTERNET:.*@[a-z0-9-]*\.[a-z]*" *
Use cut, sed or awk to remove the leading EMAIL;INTERNET::
... | cut -d: -f2
... | sed "s/.*://"
... | awk -F: '{print $2}'
If you want the names of the files containing a particular address:
grep -ril "EMAIL;INTERNET:me@my-domain\.com" *
If grep can't process that many files at once, drop the -r option and try with find and xargs:
find /start/dir -name "*.vcf" -print0 | xargs -0 -I {} grep -io "..." {}
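Since the goal here is to count matching vCards, you can combine that with grep -l (print each matching file name once) and wc -l; a sketch:
find /start/dir -name "*.vcf" -print0 \
    | xargs -0 grep -l "EMAIL;INTERNET:" \
    | wc -l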
A recursive grep can do this (note the + quantifier needs extended regular expressions, hence -E):
grep -rE 'EMAIL.+@'

Command for finding process using too much CPU [closed]

What command can I use to find a process that's using a lot of CPU? Can I do this without installing something new?
Using a few other utils you could do:
ps aux | sort -nrk 3,3 | head -n 5
Change the value of head to get the number of processes you want to see.
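If your ps supports output formatting (procps on Linux does), you can also let ps do the sorting itself; a sketch:
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 6    # header plus top 5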
Try doing this:
top -b -n1 -c
And if you want the process that takes the most %CPU time:
top -b -n1 -c | awk '/PID *USER/{print;getline;print}'
or
top -b -n1 -c | grep -A 2 '^$'

Linux command df extract "name" and "available space" [closed]

How can I extract only "name" and "available space" from the Linux df command?
You can use...
df | tail -n +2 | awk '{ print $1, $4 }'
...assuming you don't want the headers too. If you do, remove the tail.
We are piping df's output into tail, where we cut the first line off (the headers), and then pipe that into awk, using it to print the first and fourth columns.
Assuming name and available space are 1st and 4th columns:
df | awk '{print $1, $4}'
The traditional approach would be:
df | awk '{printf "%-15s%-8s\n",$1,$4}'
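On systems with GNU coreutils, df can also select the columns itself, which avoids counting fields by hand; a sketch:
df --output=source,avail | tail -n +2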
