The content of my file.txt is something like this:
fghfg
sdfsd
thrt
wefs
dfd
htyj
dfd
gsedf
1 sdfsdf
3 sdfghj
f fgdfhtyu
dfdsfsd
sdgdfg
sdfzcxzc
fgjty
I want to change this part:
1 sdfsdf
3 sdfghj
f fgdfhtyu
to this (by removing the first column and the spaces):
sdfsdf
sdfghj
fgdfhtyu
Then redirect the whole output of file.txt to file2.txt.
How can I do this?
Thanks
Try changing your script to this:
find "$1" -print |
while IFS= read -r line; do
awk '!a[$0]++{print $NF}' "$line" > /tmp/file.txt
done
Does anything need to change?
Using sed, you can do something like this
sed 's/^ \+[^ ] \+//' file1.txt > file2.txt
It removes, at the beginning of the line, a sequence of spaces, followed by a single non-space character, followed by another sequence of spaces. Using the file in your question, the result is:
fghfg
sdfsd
thrt
wefs
dfd
htyj
dfd
gsedf
sdfsdf
sdfghj
fgdfhtyu
dfdsfsd
sdgdfg
sdfzcxzc
fgjty
This should do:
awk '{print $NF}' file.txt > file2.txt
or shortest solution of the day:
awk '{$0=$NF}1' file.txt > file2.txt
cat file2.txt
fghfg
sdfsd
thrt
wefs
dfd
htyj
dfd
gsedf
sdfsdf
sdfghj
fgdfhtyu
dfdsfsd
sdgdfg
sdfzcxzc
fgjty
Just print the last column.
Another simple sed command,
sed 's/.* \(.*\)/\1/g' file1.txt > file2.txt
I have a text file with a column of numbers that I need to transform into a single line, with the numbers separated by ', '.
For example:
$ cat file.txt
1034008
1034043
10340431
1034051
Then I use tr:
tr "\n" "', '" < file.txt > file2.txt
But the result is:
$ cat file2.txt
1034008'1034043'10340431'1034051
So, what do I need to do to get the correct result?
tr can only do one-to-one mapping, not one-to-many.
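A quick way to see that one-to-one behaviour (just an illustration):
$ # tr maps character to character, so "\n" is paired with only the
$ # first character of the second set, a single quote
$ printf '1\n2\n3\n' | tr "\n" "', '"
1'2'3'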
$ # convert all input lines to one line
$ # using , as separator, cannot give multiple character separator
$ paste -sd, ip.txt
1034008,1034043,10340431,1034051
$ # post process it
$ paste -sd, ip.txt | sed 's/,/, /g'
1034008, 1034043, 10340431, 1034051
$ # or use a tool that allows input record separator manipulation
$ perl -pe 's/\n/, / unless eof' ip.txt
1034008, 1034043, 10340431, 1034051
1. We can do this with sed.
The N command of sed reads the next line into the pattern space, so we can use N to merge two lines into one. But how do we merge all the lines into one?
We can set a label at the beginning and use t label to jump back to it, forming a loop.
$ sed ':myLabel;N;s/\n/, /; t myLabel; ' file.txt > file2.txt
$ cat file2.txt
1034008, 1034043, 10340431, 1034051
2. In your question's case, we can use xargs to read all the content onto one line delimited by spaces, and then use sed to replace each space with the string you want.
$ cat file.txt | xargs |sed 's/ /, /g' > file2.txt
$ cat file2.txt
1034008, 1034043, 10340431, 1034051
Refer to:
How the 'N' command works in sed?
https://www.thegeekstuff.com/2009/12/unix-sed-tutorial-6-examples-for-sed-branching-operation/
pure bash, to avoid external commands (faster)
tk="$(< file.txt)"
echo "${tk//$'\n'/, }" > file2.txt
This question already has answers here:
Difference between single and double quotes in Bash
I need help with replacing strings in a file, where the "from" and "to" strings come from a given file.
fromto.txt:
"TRAVEL","TRAVEL_CHANNEL"
"TRAVEL HD","TRAVEL_HD_CHANNEL"
"FROM","TO"
The first column is what I'm searching for, and it is to be replaced with the second column.
So far I wrote this small script:
while read p; do
var1=`echo "$p" | awk -F',' '{print $1}'`
var2=`echo "$p" | awk -F',' '{print $2}'`
echo "$var1" "AND" "$var2"
sed -i -e 's/$var1/$var2/g' test.txt
done <fromto.txt
Output looks good (x AND y), but for some reason it does not replace the first column ($var1) with the second ($var2).
test.txt:
"TRAVEL"
Output:
"TRAVEL" AND "TRAVEL_CHANNEL"
sed -i -e 's/"TRAVEL"/"TRAVEL_CHANNEL"/g' test.txt
"TRAVEL HD" AND "TRAVEL_HD_CHANNEL"
sed -i -e 's/"TRAVEL HD"/"TRAVEL_HD_CHANNEL"/g' test.txt
"FROM" AND "TO"
sed -i -e 's/"FROM"/"TO"/g' test.txt
$ cat test.txt
"TRAVEL"
input:
➜ cat fromto
TRAVEL TRAVEL_CHANNEL
TRAVELHD TRAVEL_HD
➜ cat inputFile
TRAVEL
TRAVELHD
The work:
➜ awk 'BEGIN{while(getline < "fromto") {from[$1] = $2}} {for (key in from) {gsub(key,from[key])} print}' inputFile > output
and output:
➜ cat output
TRAVEL_CHANNEL
TRAVEL_CHANNEL_HD
➜
The first part (BEGIN{}) loads your fromto file into an associative array: from["TRAVEL"] = "TRAVEL_CHANNEL" and so on. It then, rather inefficiently, performs a search and replace line by line, for each array element, on the input file, outputting the results, which I redirected to a separate output file.
The caveat, you'll notice, is that the replacements can interfere with each other; the 2nd line of output is a perfect example: TRAVELHD first becomes TRAVEL_HD, and the TRAVEL replacement then turns that into TRAVEL_CHANNEL_HD. You can try ordering your replacements differently, or use a more precise regex instead of a plain gsub pattern, but note that the iteration order of for (key in from) is not guaranteed in awk. Something to get you started, anyway.
2nd caveat: there's a way to do the gsub over the whole file as a second step instead of line by line, which would probably make this much faster, but I'm not sure what it is.
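One way to do that (a rough sketch, not benchmarked): collect the whole input in a buffer and run each gsub once over it in the END block, so every pattern is applied once per file instead of once per line. The same ordering caveat applies.
awk '
  # load the fromto pairs into an array once
  BEGIN { while ((getline line < "fromto") > 0) { split(line, f, " "); from[f[1]] = f[2] } }
  # accumulate the whole input file in one buffer
  { buf = buf $0 ORS }
  # apply each replacement once over the buffer, then print it
  END { for (key in from) gsub(key, from[key], buf); printf "%s", buf }
' inputFile > output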
You can't do this in one shot; you have to use variables within a script.
Maybe something like the sed command below for a full replacement:
-bash-4.4$ cat > toto.txt
1
2
3
-bash-4.4$ cat > titi.txt
a
b
c
-bash-4.4$ sed 's|^\s*\(\S*\)\s*\(.*\)$|/^\2\\>/s//\1/|' toto.txt | sed -f - titi.txt > toto.txt
-bash-4.4$ cat toto.txt
a
b
c
-bash-4.4$
The contents of file.txt:
"16875170";"172";"50"
"11005137";"28";"39"
"16981017";"9347";"50"
"13771676";"13";"45"
"5865226";"963";"28"
The file with the desired result:
"1";"16875170";"172";"50"
"2";"11005137";"28";"39"
"3";"16981017";"9347";"50"
"4";"13771676";"13";"45"
"5";"5865226";"963";"28"
awk can do this for you pretty easily.
$ cat test.txt
"16875170";"172";"50"
"11005137";"28";"39"
"16981017";"9347";"50"
"13771676";"13";"45"
"5865226";"963";"28"
$ awk '{print "\""NR"\";"$0}' test.txt
"1";"16875170";"172";"50"
"2";"11005137";"28";"39"
"3";"16981017";"9347";"50"
"4";"13771676";"13";"45"
"5";"5865226";"963";"28"
This tells awk to print a literal ", followed by the record number, followed by ";, then the rest of the line. Depending on other needs not stated (e.g. if the quoting is not actually necessary), there may be a better method to use, but given the question and output this works.
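For instance, if the line number did not need to be quoted at all, a simpler form (shown only to illustrate that point) would be:
$ awk '{print NR";"$0}' test.txt
which prints 1;"16875170";"172";"50" and so on, without the quotes around the number.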
Grep solution for funsies:
$ grep ".*" test.txt -n | sed 's/\([0-9]*\):/"\1";/g;'
"1";"16875170";"172";"50"
"2";"11005137";"28";"39"
"3";"16981017";"9347";"50"
"4";"13771676";"13";"45"
"5";"5865226";"963";"28"
For the fun of sed:
sed "=" test.txt | sed "N;s/\([0-9]\{1,\}\)\n/\"\1\";/"
Output:
"1";"16875170";"172";"50"
"2";"11005137";"28";"39"
"3";"16981017";"9347";"50"
"4";"13771676";"13";"45"
"5";"5865226";"963";"28"
also, bash-based:
i=0; cat my_file.txt | while read line; do i=$(( $i + 1 )); echo \"$i\"\;"$line"; done > results.txt
There is also coreutils nl:
<file.txt nl -s';' -w1 | sed 's/[0-9]*/"&"/'
Or perl:
<file.txt perl -pne 's/^/"$.";/'
Or sed and paste:
<file.txt sed = | paste -d\; - - | sed 's/[0-9]*/"&"/'
Output in all cases:
"1";"16875170";"172";"50"
"2";"11005137";"28";"39"
"3";"16981017";"9347";"50"
"4";"13771676";"13";"45"
"5";"5865226";"963";"28"
I need to convert the below in multiple files. The text need not be the same, but it will be in the same format and length.
File 1:
XXXxx81511
XXX is Present
abcdefg
07/09/2014
YES
1
XXX
XXX-XXXX
File 2:
XXXxx81511
XXX is Present
abcdefg
07/09/2014
YES
1
XXX
XXX-XXXX
TO
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXXXXX-XXXX
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXXXXX-XXXX
Basically, I am converting each file's rows into a single comma-separated line and appending it to a new file.
I am trying cat filename | tr '\n' ',' but the results all end up on the same line, like this:
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXXXXX-XXXX,XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXXXXX-XXXX
Use:
paste -sd, file1 file2 .... fileN
#e.g.
paste -sd, *.txt file*
prints
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXX,XXX-XXXX
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXX,XXX-XXXX
and if you need an empty line after each one:
paste -sd, file* | sed G
prints
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXX,XXX-XXXX
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXX,XXX-XXXX
Short perl variant:
perl -pe 'eof||s|$/|,|' files....
You need to insert an echo after tr. Use a script like this:
for f in file1 file2; do
tr '\n' ',' < "$f"; echo
done > files.output
Use a for loop:
for f in file*; do sed ':a;N;$!ba;s/\n/,/g' < $f; done
The sed code was taken from sed: How can I replace a newline (\n)?. tr '\n' ',' didn't work on my limited test setup.
perl -ne 'chomp; print $_ . (($. % 8) ? "," : "\n")' f*
where:
-n reads the file line by line but doesn't print each line
-e executes the code given on the command line
8 is the number of lines in each file
f* is a glob for the files (replace it with something that will select all your files; if you need a specific order, you will probably need something more complicated here)
I have two files
$cat file1.txt
field1=value1
field2=value2
field3=value3
::
::
$cat file2.txt
something.field1.some
otherthing.field2.anything
anything.field3.something
I need to read file1.txt, search file2.txt for each fieldN, and replace it with valueN, so that the result will be:
something.value1.some
otherthing.value2.anything
anything.value3.something
Provided there are no special sed-type characters in your fields and values, you can use a meta-sed approach:
pax> sed -e 's/^/s\/\\./' -e 's/=/\\.\/./' -e 's/$/.\/g/' file1.txt >x.sed
pax> sed -f x.sed file2.txt
something.value1.some
otherthing.value2.anything
anything.value3.something
If you look at the x.sed file, you'll see that the first sed just makes a list of sed commands to be executed on your second file.
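With the three fieldN=valueN lines shown above (leaving aside the :: placeholder rows), x.sed would contain something along these lines:
s/\.field1\./.value1./g
s/\.field2\./.value2./g
s/\.field3\./.value3./g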
use awk
$ awk -F"[=.]" 'FNR==NR{a[$1]=$2;next}{$2=a[$2]}1' OFS="." file1 file2
something.value1.some
otherthing.value2.anything
anything.value3.something
This unfortunately requires the files to be sorted:
tr = . < file1.txt | join -t . -1 1 -2 2 -o 2.1 1.2 2.3 - file2.txt
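If the inputs are not already in order, one possibility (a bash sketch using process substitution; note the output then comes back sorted rather than in the original order):
# sort both sides on their join fields before joining
join -t . -1 1 -2 2 -o 2.1,1.2,2.3 \
    <(tr = . < file1.txt | sort -t. -k1,1) \
    <(sort -t. -k2,2 file2.txt)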