Suppose I have text.txt:
342423423423 - 'namefile.jpg' saved (2423423kb/2423423kb)
I can use
sed 's/^.*- //' text.txt
the output will be:
'namefile.jpg' saved (2423423kb/2423423kb)
It gets rid of the text before 'namefile.jpg', but what if I also want to get rid of the rest of it? I want the output to be like this:
'namefile.jpg'
What sed pattern should I use? Please note that the text after 'namefile.jpg' isn't always the same; it changes from time to time.
You could use capturing groups.
sed 's/^.*- \([^ ]\+\).*/\1/' text.txt
OR
sed 's/^.*- //;s/ .*//' file
^.*- matches all the characters from the start of the line up to "- ", and the first command replaces those matched characters with an empty string. In the capturing version, \([^ ]\+\) grabs the run of non-space characters that follows ('namefile.jpg'), and \1 in the replacement writes just that capture back.
 .* then matches, in the resulting string, all the characters from the first space to the end of the line. Replacing those characters with an empty string gives you the desired output.
Example:
$ echo "342423423423 - 'namefile.jpg' saved (2423423kb/2423423kb" | sed 's/^.*- \([^ ]\+\).*/\1/'
'namefile.jpg'
$ echo "342423423423 - 'namefile.jpg' saved (2423423kb/2423423kb" | sed 's/^.*- //;s/ .*//'
'namefile.jpg'
Or with awk:
echo "342423423423 - 'namefile.jpg' saved (2423423kb/2423423kb)" | awk '{print $3}'
The default field delimiter for awk is whitespace; just print the 3rd field.
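Note that awk splits on runs of whitespace by default, so extra spaces would not shift the field numbers; a quick check:
$ echo "342423423423   -   'namefile.jpg' saved" | awk '{print $3}'
'namefile.jpg'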
You can also try it this way:
sed "s/.*\('.*'\).*/\1/" FileName
Output:
'namefile.jpg'
This is not sed, but it shows how to do it with awk. With the single quote as the field separator, the filename is field 2:
awk -F\' '{print $2}' text.txt
namefile.jpg
or, if you'd like to keep the single quotes, print the field separator around it:
awk -F\' '{print FS$2FS}' text.txt
'namefile.jpg'
Just use a simple cut command
cut -d ' ' -f3 text.txt
or you can also use this,
sed 's/^.*- //' text.txt|cut -d ' ' -f1
Both will give you this output:
'namefile.jpg'
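For example:
$ echo "342423423423 - 'namefile.jpg' saved (2423423kb/2423423kb)" | cut -d ' ' -f3
'namefile.jpg'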
I am trying to change the column names to lowercase in a CSV file. I found code online to do that, but I don't know how to replace the old column names (uppercase) with the new column names (lowercase) in the original file. I did something like this:
$ head -n1 xxx.csv | tr "[A-Z]" "[a-z]"
But it simply prints out the column names in lowercase, which is not enough for me.
I tried to add sed -i but it did not do any good. Thanks!!
Using awk (readability winner):
concise way:
awk 'NR==1{print tolower($0);next}1' file.csv
or using ternary operator:
awk '{print (NR==1) ? tolower($0): $0}' file.csv
or using if/else statements:
awk '{if (NR==1) {print tolower($0)} else {print $0}}' file.csv
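All three variants print the same thing; for example:
$ printf 'COL1,COL2,COL3\nFoO,bAr,baZ\n' | awk 'NR==1{print tolower($0);next}1'
col1,col2,col3
FoO,bAr,baZ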
To change the file for real:
awk 'NR==1{print tolower($0);next}1' file.csv | tee /tmp/temp
mv /tmp/temp file.csv
For your information, sed's in-place edit switch -i does the same thing: it uses a temporary file under the hood.
You can check this by using:
strace -f -s 800 sed -i'' '...' file
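If you have GNU awk 4.1 or later (an assumption about your environment), its inplace extension can edit the file directly, much like sed -i:
gawk -i inplace 'NR==1{$0=tolower($0)} 1' file.csv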
Using perl:
perl -i -pe '$_=lc() if $.==1' file.csv
It replaces the file on the fly thanks to the -i switch.
You can use sed to tell it to replace the first line with all lower-case and then print the rest as-is:
sed '1s/.*/\L&/' ./xxx.csv
Redirect the output or use -i to do an in-place edit.
Proof of Concept
$ echo -e "COL1,COL2,COL3\nFoO,bAr,baZ" | sed '1s/.*/\L&/'
col1,col2,col3
FoO,bAr,baZ
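To actually edit the file in place (GNU sed assumed, since \L and argument-less -i are GNU extensions):
sed -i '1s/.*/\L&/' ./xxx.csv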
I have CSV columns that I'm working with:
info,example-string,super-example-string,otherinfo
I would like to get:
example-string super example string
Right now, I'm running the following command:
awk -F ',' '{print $3}' | sed "s/-//g"
But, then I have to paste the lines together to combine $2 and $3.
Is there any way to do something like this?
awk -F ',' '{print $2" "$3}' | sed "s/-//g"
Except where the sed command is only performed on $3 and $2 stays in place? I'm just concerned that later on, if the lines don't match up, the data could be misaligned.
Please note: I need to keep the pipe to the sed command. I just used a simple example, but I end up running a lot of commands after that as well.
Try:
$ awk -F, '{gsub(/-/," ",$3); print $2,$3}' file
example-string super example string
How it works
-F,
This tells awk to use a comma as the field separator.
gsub(/-/," ",$3)
This replaces all - in field 3 with spaces.
print $2,$3
This prints fields 2 and 3.
Examples using pipelines
$ echo 'info,example-string,super-example-string,otherinfo' | awk -F, '{gsub(/-/," ",$3); print $2,$3}'
example-string super example string
In a pipeline with sed:
$ echo 'info,example-string,super-example-string,otherinfo' | awk -F, '{gsub(/-/," ",$3); print $2,$3}' | sed 's/string/String/g'
example-String super example String
The best solution would be a single sed or a single awk, but since you asked for an awk-plus-sed solution, here is one. It assumes your actual data looks like the sample Input_file.
awk -F, '{print $2,$3}' Input_file | sed 's/\([^ ]*\) \([^-]*\)-\([^-]*\)-\([^-]*\)/\1 \2 \3 \4/'
Output will be as follows.
example-string super example string
I have data in the format below:
2.7G58%/usr
8%/oravl02
49%/oravl01
98%/osszoneusr/osszone/user
and I need to remove everything from the beginning of the line through the percent sign, so that the output for the above sample data would be:
/usr
/oravl02
/oravl01
/osszoneusr/osszone/user
I am specifically looking for a solution using Linux built-in string stripping or sed.
Using awk:
awk -F '%' '{print $2}' file
Using sed:
sed -i.bak 's/^.*%//' file
Using cut:
cut -d% -f2 file
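For example, with the first two sample lines:
$ printf '2.7G58%%/usr\n8%%/oravl02\n' | cut -d% -f2
/usr
/oravl02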
An awk variation. Assigning the last %-separated field to $0 rewrites the line; since the assigned value is a non-empty string, the expression is true and the default print action fires:
awk -F% '$0=$NF' file
/usr
/oravl02
/oravl01
/osszoneusr/osszone/user
Please try the ways below.
cut: the delimiter must be a single character
bcsmc2rtese001 [~/Shell_p]$ cut -d '%' -f2 text1.txt
/usr
/oravl02
/oravl01
/osszoneusr/osszone/user
bcsmc2rtese001 [~/Shell_p]$ awk -F "%" '{print $2}' text1.txt
/usr
/oravl02
/oravl01
/osszoneusr/osszone/user
I have a flat file with multiple occurrences of strings that contain a single quote, e.g. hari's and leader's.
I want to replace all occurrences of the single quote with space, i.e.
all occurrences of hari's to hari s
all occurrences of leader's to leader s
I tried
sed -e 's/"'"/ /g' myfile.txt
and
sed -e 's/"'"/" "/g' myfile.txt
but they are not giving me the expected result.
Try to keep sed commands as simple as possible; otherwise you'll be confused by what you wrote when you read it later. Double quotes around the script let you use the single quote inside it directly:
#!/bin/bash
sed "s/'/ /g" myfile.txt
This will do what you want:
echo "hari's" | sed 's/\x27/ /g'
hari s
It will replace single quotes present anywhere in your file/text, even ones used for quoting. If instead you want to remove a quote within a word but not at a word boundary, you can use the following:
echo "hari's" | sed -re 's/(\<.+)\x27(.+\>)/\1 \2/g'
hari s
HTH
To use a literal single quote inside a single-quoted sed script, close the quoting, insert an escaped quote, and reopen it:
sed 's/'\''/ /g' input
The '\'' sequence is three pieces: the first ' ends the current single-quoted string, \' is an escaped literal quote, and the final ' starts a new single-quoted string.
also possible with a variable:
quote=\'
sed "s/$quote/ /g" input
Here is something based on my own experience.
Please notice how I use the special character ' vs. " after sed.
This won't work (no output; the shell keeps prompting for a closing quote):
2521 #> echo 1'2'3'4'5 | sed 's/'/ /g'
>
>
>
but this will:
2520 #> echo 1'2'3'4'5 | sed "s/'/ /g"
12345
With single quotes around the sed script, the embedded ' ends the quoted string and the shell waits for a closing quote (hence the > continuation prompts above); with double quotes, the ' is just a literal character.
The -i switch will replace it in the file:
sed -i 's/“/"/g' filename.txt
if you want a backup, you can do
sed -i.bak 's/“/"/g' filename.txt
I had to replace the string "0x" with "32'h" and resolved it with:
sed 's/ 0x/ 32\x27h/'
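For example (GNU sed, where \x27 is the hex escape for a single quote):
$ echo " 0xFF" | sed 's/ 0x/ 32\x27h/'
 32'hFF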
I have a file with the following layout:
123,01-08-2006
124,01-09-2007
125,01-10-2009
126,01-12-2010
How can I convert it into the following by using AWK?
123,2006-08-01
124,2007-09-01
125,2009-10-01
126,2010-12-01
Didn't read the question properly the first time. You need a field separator that can be either a dash or a comma. Once you have that you can use the dash as an output field separator (as it's the most common) and fake the comma using concatenation:
awk -F',|-' 'OFS="-" {print $1 "," $4,$3,$2}' file
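A quick check on the first sample line:
$ echo '123,01-08-2006' | awk -F',|-' 'OFS="-" {print $1 "," $4,$3,$2}'
123,2006-08-01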
Pure awk
awk -F"," '{ n=split($2,b,"-");$2=b[3]"-"b[2]"-"b[1];$i=$1","$2 } 1' file
sed
sed -r 's/(^.[^,]*,)([0-9]{2})-([0-9]{2})-([0-9]{4})/\1\4-\3-\2/' file
sed 's/\(^.[^,]*,\)\([0-9][0-9]\)-\([0-9][0-9]\)-\([0-9]\+\)/\1\4-\3-\2/' file
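For example (GNU sed needed for -r; the second command above is the portable equivalent):
$ echo '123,01-08-2006' | sed -r 's/(^.[^,]*,)([0-9]{2})-([0-9]{2})-([0-9]{4})/\1\4-\3-\2/'
123,2006-08-01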
Bash
#!/bin/bash
while IFS="," read -r a b
do
IFS="-"
set -- $b
echo "$a,$3-$2-$1"
done <"file"
Unfortunately, I think older awk implementations only allow a single field-separator character, so you'll have to pre-process the data. You can do this with tr, but if you really want an awk-only solution, use:
pax> echo '123,01-08-2006
124,01-09-2007
125,01-10-2009
126,01-12-2010' | awk -F, '{print $1"-"$2}' | awk -F- '{print $1","$4"-"$3"-"$2}'
This outputs:
123,2006-08-01
124,2007-09-01
125,2009-10-01
126,2010-12-01
as desired.
The first awk changes the comma to a - so that you have four fields separated by the same character (this is the bit I'd usually use tr ',' '-' for).
The second awk prints them out in the order you specified, correcting the field separators at the same time.
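For reference, the tr-based pre-processing mentioned above might look like this:
tr ',' '-' < file | awk -F- '{print $1","$4"-"$3"-"$2}'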
If you're using an awk implementation that allows multiple FS characters, you can use something like:
gawk -F ',|-' '{print $1","$4"-"$3"-"$2}'
If it doesn't need to be awk, you could use Perl too:
$ perl -nle 'print "$1,$4-$3-$2" while (/(\d{3}),(\d{2})-(\d{2})-(\d{4})\s*/g)' < file.txt