I have a csv file with 4 attributes in each line, delimited by comma. I'm trying to come up with a sed command to keep only the second attribute from each line. Any ideas on how to do it?
You'd be better off with cut:
cut -d "," -f 2 file.txt
If you want to remove dupes, and you don't mind the order of the entries, simply do:
cut -d "," -f 2 file.txt | sort -u
And to extend to attributes 1 and 2, simply use:
cut -d "," -f 1,2 file.txt | sort -u
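For example, with a made-up file.txt (sample data, not from the question):

```shell
cd "$(mktemp -d)"                      # work in a scratch directory
printf 'a,red,1,x\nb,blue,2,y\nc,red,3,z\n' > file.txt

cut -d "," -f 2 file.txt               # prints: red, blue, red (one per line)
cut -d "," -f 2 file.txt | sort -u     # prints: blue, red
```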
You do not need sed for this. The fastest way is with cut:
cut -d, -f2 file
However, if you want sed, you can do it like so:
sed 's/[^,]*,\([^,]*\).*/\1/' file
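To sanity-check the sed approach, a substitution that captures the second field, on a sample line (data made up):

```shell
# strip the first field and comma, capture the second field, drop the rest
echo "a,b,c,d" | sed 's/[^,]*,\([^,]*\).*/\1/'
# → b
```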
I am trying to fetch the user name and IP from where they logged in on my system.
I used the following command:
last -i | grep 'Jan 12' | cut -f1,3
But I am getting full line as the result.
But when I use awk :
last -i | grep 'Jan 12' | awk '{print $1, $3}'
I am getting the correct result.
Why is the output wrong in the case of the cut command?
Any help would be highly appreciated.
Default delimiter of cut is a tab, whereas default input field separator in awk is any whitespace i.e. space or tab.
To get the same behavior in cut, you need to add -d ' ' in cut to make it:
last -i | grep 'Jan 12' | tr -s ' ' | cut -d ' ' -f1,3
tr -s ' ' is required to squeeze multiple spaces into a single space.
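To see the squeeze step in isolation (the sample line is made up; real last output has more columns):

```shell
line="alice    pts/0        192.168.1.5"
echo "$line" | tr -s ' '                      # → alice pts/0 192.168.1.5
echo "$line" | tr -s ' ' | cut -d ' ' -f1,3   # → alice 192.168.1.5
```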
However using awk lets you skip grep altogether and use:
last -i | awk '/Jan 12/{print $1, $3}'
In cut, the default delimiter is a Tab. Also, the -d option accepts only a single character as the delimiter.
In the output of last there are 8 spaces in a row.
So the best way is to use awk, as in your example.
A bad but working solution with cut:
last | grep 'Jan 12' | sed 's/\s\s*/ /g' | cut -d' ' -f1,3
I have 2mill lines of content and all lines look like this:
--username:orderID:email:country
I already added a -- prefix to all usernames.
What I need now is to get ONLY the usernames from the file. I think it's possible with grep, matching the part starting with "--" and ending at ":", but I have absolutely no idea how.
So output should be:
username

Thank you all for the help.
THIS WORKED:
cut -d: -f1
Even without adding the prefix, you should be able to get the usernames with cut:
cut -d: -f1
-d says what the delimiter is, -f says which field(s) to return.
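For example, on a made-up line in the question's format:

```shell
# take everything before the first colon
echo "--jdoe:1001:jdoe@example.com:US" | cut -d: -f1
# → --jdoe
```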
Try this:
sed 's/:/\n/g' YOUR_FILE | grep -- '--'
I'm trying to parse file names in specific directory. Filenames are of format:
token1_token2_token3_token(N-1)_token(N).sh
I need to cut the tokens using delimiter '_', and take the string except the last two tokens. In the above example the output should be token1_token2_token3.
The number of tokens is not fixed. I've tried to do it with -f#- option of cut command, but did not find any solution. Any ideas?
With cut:
$ echo t1_t2_t3_tn1_tn2.sh | rev | cut -d_ -f3- | rev
t1_t2_t3
rev reverses each line.
The 3- in -f3- means from the 3rd field to the end of the line (which is the beginning of the line through the third-to-last field in the unreversed text).
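Broken down stage by stage on the sample name:

```shell
echo t1_t2_t3_tn1_tn2.sh | rev                        # → hs.2nt_1nt_3t_2t_1t
echo t1_t2_t3_tn1_tn2.sh | rev | cut -d_ -f3-         # → 3t_2t_1t
echo t1_t2_t3_tn1_tn2.sh | rev | cut -d_ -f3- | rev   # → t1_t2_t3
```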
You may use POSIX defined parameter substitution:
$ name="t1_t2_t3_tn1_tn2.sh"
$ name=${name%_*_*}
$ echo $name
t1_t2_t3
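`${name%_*_*}` removes the shortest suffix matching `_*_*` (here `_tn1_tn2.sh`); the greedy `%%` variant removes the longest match, shown for contrast:

```shell
name="t1_t2_t3_tn1_tn2.sh"
echo "${name%_*_*}"     # shortest suffix match stripped → t1_t2_t3
echo "${name%%_*_*}"    # longest suffix match stripped  → t1
```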
It cannot be done with cut alone; however, you can use sed:
sed -r 's/(_[^_]+){2}$//g'
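Checked on the sample name (note that the extension goes with the last token):

```shell
# drop the last two _token segments at end of line
echo "t1_t2_t3_tn1_tn2.sh" | sed -r 's/(_[^_]+){2}$//g'
# → t1_t2_t3
```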
Just a different way to write ysth's answer:
echo "t1_t2_t3_tn1_tn2.sh" |rev| cut -d"_" -f1,2 --complement | rev
I have a file values.properties which contain data, like:
$ABC=10
$XYZ=20
I want to create a shell script that will take each element one by one from above file.
Say $ABC, then go to file ABC.txt & replace the value of $ABC with 10.
Similarly, then go to file XYZ.txt and replace $XYZ with 20.
I think maybe this should be in the Unix & Linux section. The solution I've hacked together is as follows:
grep "=" values.properties | cut -d "$" -f2 | awk -F "=" '{print "s/$"$1"/"$2"/g "$1".txt"}' | xargs -n2 sed -i
The flow is like so:
Filter out all the value assignments via: grep "="
Remove the '$' via: cut -d "$" -f2
Use awk to split the variable name and value and construct sed replacement command
Use xargs to pull in the replacement parameter and target file via: xargs -n2
Finally, pass sed as the command to xargs: xargs -n2 sed -i
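A quick end-to-end run in a scratch directory (file names and contents are made up; this relies on GNU sed's -i and on $ being treated as a literal character when it is not at the end of the pattern):

```shell
cd "$(mktemp -d)"
printf '$ABC=10\n$XYZ=20\n' > values.properties
echo 'limit=$ABC' > ABC.txt
echo 'limit=$XYZ' > XYZ.txt

# build "s/$NAME/value/g NAME.txt" pairs and feed them to sed two at a time
grep "=" values.properties | cut -d "$" -f2 \
  | awk -F "=" '{print "s/$"$1"/"$2"/g "$1".txt"}' \
  | xargs -n2 sed -i

cat ABC.txt   # → limit=10
cat XYZ.txt   # → limit=20
```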
I have a file (input.txt) with columns of data separated by spaces. I want to get the 9th column of data and onwards.
Normally I would do:
cut -d " " -f 9- input.txt
However, in this file, sometimes the fields are separated by multiple spaces (and the number of spaces varies for each row / column). cut doesn't seem to treat consecutive spaces as one delimiter.
What should I do instead?
sed -r 's/ +/ /g' input.txt | cut -d " " -f 9-
You could use sed to replace n whitespaces with a single whitespace:
sed -r 's/ +/ /g' input.txt | cut -d ' ' -f 9-
Just be sure there aren't any tabs between your columns.
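A quick check on a made-up row with uneven spacing:

```shell
# squeeze runs of spaces, then take field 9 onwards
printf 'c1  c2   c3 c4 c5  c6 c7 c8  c9 c10\n' \
  | sed -r 's/ +/ /g' | cut -d ' ' -f 9-
# → c9 c10
```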