Grep the most recent value of a particular column from a CSV file [closed] - linux

"cola","colb","colc","cold","cole","colf"
"a","b","c","d","e","f"
"a1","b1","c1","d1","e1","f1"
"a2","b2","c2","d2","e2","f2"
Assuming this is the CSV file, I want to grep the value "e" from the column "cole" and store it into a shell variable. And then use the shell variable as a part of a wget command.
How would I do this?

set -f # disable globbing
variable="$(awk -F, 'NR==2 {gsub(/"/,""); print $5}' file)"
set +f

Awk is well suited to this. Since the fields are comma-separated and quoted, set the field separator to a comma and strip the quotes. If you know the column number you can simply do:
$ awk -F, 'NR==2{gsub(/"/,""); print $5}' file.csv
e
This will print the fifth field on the second line. If you want to use the column name then:
$ awk -F, '{gsub(/"/,"")} NR==1{for(i=1;i<=NF;i++)c[$i]=i} NR==2{print $(c[col])}' col="cole" file.csv
e
Just set col="<name of column to use>".
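For example, with the same file, asking for a different column only means changing that assignment:
$ awk -F, '{gsub(/"/,"")} NR==1{for(i=1;i<=NF;i++)c[$i]=i} NR==2{print $(c[col])}' col="colc" file.csv
c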
You can use command substitution to store the value in a variable:
$ val="$(awk -F, 'NR==2{gsub(/"/,""); print $5}' file.csv)"
$ wget --what-ever-option "$val"
Or just use it in place:
$ wget --what-ever-option "$(awk -F, 'NR==2{gsub(/"/,""); print $5}' file.csv)"
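Putting it together, an end-to-end sketch might look like this (the URL and its id parameter are made up here, since the question does not give the actual wget command):
val="$(awk -F, '{gsub(/"/,"")} NR==1{for(i=1;i<=NF;i++)c[$i]=i} NR==2{print $(c[col])}' col="cole" file.csv)"
wget "http://example.com/download?id=${val}"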

Related

how can I remove some numbers at the end of line in a text file [closed]

I have a text file which contains a series of lines that are the same except at the end.
e.g.
lesi-1-1500-1
lesi-1-1500-2
lesi-1-1500-3
How can I remove the last number? It goes up to 250.
To change the file in place:
sed -i 's/[0-9]\+$//' /path/to/file
or, to write to a new file instead:
sed 's/[0-9]\+$//' /path/to/file > /path/to/output
For example:
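Running the substitution over the sample lines shows the effect (note the trailing hyphen is kept; include it in the pattern, e.g. 's/-[0-9]\+$//', if you want it removed as well):
$ printf 'lesi-1-1500-1\nlesi-1-1500-2\nlesi-1-1500-3\n' | sed 's/[0-9]\+$//'
lesi-1-1500-
lesi-1-1500-
lesi-1-1500-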
You can do it with awk by breaking the line into fields.
echo "lesi-1-1500-2" > foo.txt
echo "lesi-1-1500-3" >> foo.txt
awk -F '-' '{print $1 "-" $2 "-" $3}' foo.txt
The -F switch lets us set the delimiter, which is -. Then we just print the first three fields, joined with - for formatting.
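Running this on foo.txt prints:
lesi-1-1500
lesi-1-1500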

Cut a string after a specific character, but just one field [closed]

This is from a vhost file. This is the output I get:
ServerName uat3-dam-something1.prg-dc.brb.com
I'm wondering how to cut this output so that only this part remains:
something1.prg-dc.brb.com
Keep in mind that "something" could be "something4141411" or "something23", so length-based operations won't work. I tried the cut command and awk, but didn't get it to work. I would be happy to receive tips from the bash experts :)
Like this:
grep -o 'something.*' file
or, more specifically:
grep -oE 'something[0-9]+\..*' file
Output:
something1.prg-dc.brb.com
Could you please try the following, written and tested with the provided samples only.
awk -F'uat3-dam-' '{print $NF}' Input_file
Description: this makes uat3-dam- the field separator and prints the last field.
2nd solution:
awk 'match($0,/something.*/){print substr($0,RSTART,RLENGTH)}' Input_file
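Both solutions print the expected part for the sample line; for instance, feeding the line on standard input instead of Input_file:
$ echo "ServerName uat3-dam-something1.prg-dc.brb.com" | awk -F'uat3-dam-' '{print $NF}'
something1.prg-dc.brb.com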
Using:
echo "ServerName uat3-dam-something1.prg-dc.brb.com" |cut -d\- -f3-4
Will return:
something1.prg-dc.brb.com
And if you change the string (as you mention):
echo "ServerName uat3-dam-something111111.prg-dc.brb.com" |cut -d\- -f3-4
It will keep returning:
something111111.prg-dc.brb.com
$ echo 'ServerName uat3-dam-something1.prg-dc.brb.com' | awk -F- '{sub(".*" $2 FS,"")}1'
something1.prg-dc.brb.com
This will work:
echo "ServerName uat3-dam-something1.prg-dc.brb.com" | sed -E 's/.*(something.*)/\1/'
Or, if the string is in a file named file
sed -E 's/.*(something.*)/\1/' file
Explanation:
-E enables extended regular expressions.
.*(something.*) means "any char 0 or more times, followed by something and then any other char 0 or more times".
\1 prints only the part matched inside the parentheses (the capture group).
You could also use:
echo ${test#*dam-}
Example :
test="ServerName uat3-dam-something1.prg-dc.brb.com"
echo ${test#*dam-}
which gives:
something1.prg-dc.brb.com
Note that the opposite version would be echo ${test%something*}
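With the same test variable, that opposite version prints the other half of the string:
echo "${test%something*}"
ServerName uat3-dam-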

Command for printing part of a String? [closed]

I have a file named test;
it contains the string James Bond 007,
and I want to print only James Bond.
I tried the following commands:
$ strings -n 2 test
$ sed -n '/James/,/Bond/p' test
$ awk '{print substr($1,10)}' test
To print the first two words, you can use awk:
awk '{print $1, $2}' test
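With the sample file that prints:
James Bond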
To print the first ten characters, you can put the file contents in a variable, then use the bash substring operation:
contents=$(cat test)
echo "${contents:0:10}"
Or in awk:
awk '{print substr($0, 1, 10)}' test
Notice that $0 means the whole line, and you have to give both a starting index and length to substr(). Indexes in awk start at 1 rather than 0.
In sed, /James/,/Bond/ is a line range expression; it processes all the lines starting from a line containing James until a line containing Bond. It doesn't process just part of the lines.
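If you do want a sed solution, one option (a sketch that simply drops the last whitespace-separated word, assuming the file holds the single sample line) is a substitution instead of a line range:
$ sed 's/ [^ ]*$//' test
James Bond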

How to create a Unix script to segregate data Line by Line? [closed]

I have some data in a MyFile.CSV file like this:
id,name,country
100,tom cruise,USA
101,Johnny depp,USA
102,John,India
What would be the shell script to take the above file as input and segregate the data into 2 different files according to the country?
I tried using a FOR loop with 2 IFs inside it, but I was unable to make it work. Here is my attempt; how can I do it using awk?
For LINE in MyFile.CSV
Do
If grep "USA" $LINE >0 Then
$LINE >> Out_USA.csv
Else
$LINE >> Out_India.csv
Done
You can try this:
grep -R "USA" /path/to/file >> Out_USA.csv
grep -R "India" /path/to/file >> Out_India.csv
There are many ways to do it.
One way:
$ for i in `awk -F"," '{if(NR>1)print $3}' MyFile.csv | sort -u`;
do
echo $i;
egrep "${i}|country" MyFile.csv > Out_${i}.csv;
done
This assumes that the country name does not clash with values in other columns.
If it does, you can fine-tune that by adding an additional regex.
For example, since country is the last field, you can anchor the pattern with $.
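Since the question asks for awk, a minimal sketch of a one-pass approach (assuming the country is always the third comma-separated field and the first line is a header) would be:
awk -F, 'NR>1 {print > ("Out_" $3 ".csv")}' MyFile.CSV
This writes each data row to Out_USA.csv or Out_India.csv (one output file per distinct country) without needing a shell loop.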

Change the path address in a text file by shell scripting [closed]

In my Bash script, I have to replace a placeholder name with a path (the new address) in a text file:
(MYADDRESS) should change to ( /home/run1/c1 ), and the result should be saved as a new file.
I did it like this: I defined a new variable holding the new address and tried to substitute it for the placeholder in the text file.
I used sed but it has a problem.
My script was:
#!/bin/bash
# To debug
set -x
x=`pwd`
echo $x
sed "s/MYADDRESS/$x/g" < sample1.txt > new.txt
exit
The output of pwd is likely to contain / characters, making your sed expression look something like s/MYADDRESS//home/user/somewhere/. This makes it impossible for sed to sort out what should be replaced with what. There are two solutions:
Use a different delimiter for sed:
sed "s,MYADDRESS,$x,g" < sample1.txt > new.txt
...although this will have the same problem if the current path contains a comma character or something else that is a special character for sed, so the more robust approach is to use awk instead:
awk -v curdir="$(pwd)" '{ gsub("MYADDRESS", curdir); print }' < sample1.txt > new.txt
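For example (the contents of sample1.txt shown here are made up, since the question does not include them):
$ cat sample1.txt
logdir=MYADDRESS/logs
$ cd /home/run1/c1
$ awk -v curdir="$(pwd)" '{ gsub("MYADDRESS", curdir); print }' < sample1.txt > new.txt
$ cat new.txt
logdir=/home/run1/c1/logs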
