I have a text file with a column of numbers that I need to transform into a single line, with the numbers separated by ', '.
For example:
$ cat file.txt
1034008
1034043
10340431
1034051
Then I use tr:
tr "\n" "', '" < file.txt > file2.txt
But the result is:
$ cat file2.txt
1034008'1034043'10340431'1034051
So what do I need to do to get the correct result?
tr can only do one-to-one mapping, not one-to-many.
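For example, in the command from the question, tr takes only the first character of the second set, so every newline becomes a single quote:
$ printf '1\n2\n3\n' | tr "\n" "', '"
1'2'3'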
$ # convert all input lines to one line
$ # using , as separator; paste cannot use a multi-character separator
$ paste -sd, ip.txt
1034008,1034043,10340431,1034051
$ # post process it
$ paste -sd, ip.txt | sed 's/,/, /g'
1034008, 1034043, 10340431, 1034051
$ # or use a tool that allows input record separator manipulation
$ perl -pe 's/\n/, / unless eof' ip.txt
1034008, 1034043, 10340431, 1034051
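Not part of the answer above, but if you prefer awk, the same whole-file join can be sketched like this (same ip.txt as above):
$ awk '{printf "%s%s", sep, $0; sep=", "} END{print ""}' ip.txt
1034008, 1034043, 10340431, 1034051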
1. We can do this with sed.
The N command of sed reads the next line into the pattern space, so we can use N to merge two lines into one. But how do we merge all lines into one?
We can set a label at the beginning and use t label to jump back to that label, forming a loop.
$ sed ':myLabel; N; s/\n/, /; t myLabel' file.txt > file2.txt
$ cat file2.txt
1034008, 1034043, 10340431, 1034051
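To see what a single N step does before the loop generalizes it, here is a minimal two-line check:
$ printf '1\n2\n' | sed 'N; s/\n/, /'
1, 2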
2. Alternatively, we can use xargs to read all the content into one space-delimited line, and then use sed to replace each space with the string you want.
$ cat file.txt | xargs | sed 's/ /, /g' > file2.txt
$ cat file2.txt
1034008, 1034043, 10340431, 1034051
Refer to:
How the 'N' command works in sed?
https://www.thegeekstuff.com/2009/12/unix-sed-tutorial-6-examples-for-sed-branching-operation/
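As a side note on option 2, xargs with no command defaults to echo, so on its own it already joins all the lines with single spaces:
$ xargs < file.txt
1034008 1034043 10340431 1034051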
Pure bash, to avoid external commands (faster):
tk="$(< file.txt)"
echo "${tk//$'\n'/, }" > file2.txt
Related
I generated 1000 output files containing a single line with (mistakenly) no line break at the end, so that
cat filename_* > outfile
generates a file with a single line. I attempted to remedy this using
cat filename_* | tr '\n' ' ' > outfile
but I get exactly the same result - a file with a single line of output. Why doesn't the latter code (which ought to add a line break for each filename_* file) accomplish what I'm trying to do?
I think you could manually append a line break to each of your 1000 output files, and then cat them all afterwards:
echo | tee -a filename_*
cat filename_* > outfile
Edit:
Change the first step to echo | tee -a filename_* as @rowboat suggested
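A quick sanity check of the tee step on two hypothetical files f1 and f2 (names made up for this demo):
$ printf 'abcd' > f1; printf 'efgh' > f2
$ echo | tee -a f1 f2 > /dev/null
$ cat f1 f2
abcd
efgh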
If all your files are missing the final linefeed then you can use sed for adding it on the fly:
# with GNU sed
sed '$s/$/\n/' filename_* > outfile
# with standard sed and bash, zsh, etc...
sed $'$s/$/\\\n/' filename_* > outfile
# with standard sed and a POSIX shell
sed '$s/$/\
/' filename_* > outfile
tr '\n' ' ' says to replace each \n with a space; you've already stated the inputs do not contain any \n, so the tr does nothing and the final output is just a copy of the input.
Setup:
for ((i=1;i<=5;i++))
do
printf 'abcd' > out${i}
done
$ cat out*
abcdabcdabcdabcdabcd
Many commands can process a file and add a \n; it just depends on how much typing you want to do, e.g.:
$ sed 's/$/&/' out* # or: sed -n '/$/p' out*
abcd
abcd
abcd
abcd
abcd
$ awk '1' out*
abcd
abcd
abcd
abcd
abcd
I can't come up with a way to make cat itself append a \n, but one idea would be to use a user-defined function; assume we want to name our new function catn (cat and add a \n at the end):
$ type -a catn # verify name "catn" not currently in use
-bash: type: catn: not found
$ catn() { awk '1' "${@:--}"; } # wrap function definition around the awk solution
$ catn out*
abcd
abcd
abcd
abcd
abcd
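With a helper like that defined, the original goal reduces to (using the hypothetical catn above):
$ catn filename_* > outfile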
I have a file that contains the information below:
$ cat test.txt
Studentename:Ram
rollno:12
subjects:6
Highest:95
Lowest:65
Studentename:Krish
rollno:13
subjects:6
Highest:90
Lowest:45
Studentename:Sam
rollno:14
subjects:6
Highest:75
Lowest:65
I am trying to place the info for a single student on a single line.
I.e., my output should be:
Studentename:Ram rollno:12 subjects:6 Highest:95 Lowest:65
Studentename:Krish rollno:13 subjects:6 Highest:90 Lowest:45
Studentename:Sam rollno:14 subjects:6 Highest:75 Lowest:65
Below is the command I wrote
cat test.txt | tr "\n" " " | sed 's/Lowest:[0-9]\+/Lowest:[0:9]\n/g'
The above command breaks the line at the regex Lowest:[0-9]\+, but instead of keeping the matched text it literally prints the replacement, Lowest:[0:9].
Please help.
Try:
$ sed '/^Studente/{:a; N; /Lowest/!ba; s/\n/ /g}' test.txt
Studentename:Ram rollno:12 subjects:6 Highest:95 Lowest:65
Studentename:Krish rollno:13 subjects:6 Highest:90 Lowest:45
Studentename:Sam rollno:14 subjects:6 Highest:75 Lowest:65
How it works
/^Studente/{...} tells sed to perform the commands inside the curly braces only on lines that start with Studente. Those commands are:
:a
This defines a label a.
N
This reads in the next line and appends it to the pattern space.
/Lowest/!ba
If the current pattern space does not contain Lowest, this tells sed to branch back to label a.
In more detail, /Lowest/ is true if the line contains Lowest. In sed, ! is negation, so /Lowest/! is true if the line does not contain Lowest. In ba, the b stands for the branch command and a is the label to branch to.
s/\n/ /g
This tells sed to replace all newlines with spaces.
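Since each record in this input is exactly five lines, paste can also fold the file without any pattern matching (a sketch that relies on that fixed five-line layout):
$ paste -d' ' - - - - - < test.txt
Studentename:Ram rollno:12 subjects:6 Highest:95 Lowest:65
Studentename:Krish rollno:13 subjects:6 Highest:90 Lowest:45
Studentename:Sam rollno:14 subjects:6 Highest:75 Lowest:65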
Try this using awk:
awk '{if ($1 !~ /^Lowest/) {printf "%s ", $0} else {print}}' file.txt
Or shorter but more obfuscated:
awk '$1!~/^Lowest/{printf"%s ",$0;next}1' file.txt
Or correcting your command:
tr "\n" " " < file.txt | sed 's/Lowest:[0-9]\+/&\n/g'
Explanation: & is what was matched by the pattern on the left-hand side of the substitution.
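A minimal illustration of & keeping the matched text:
$ echo 'Lowest:65' | sed 's/[0-9]\+/[&]/'
Lowest:[65]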
Another possible GNU sed that doesn't assume Lowest is the last item:
sed ':a; N; /\nStudent/{P; D}; s/\n/ /; ba' test.txt
This might work for you (GNU sed):
sed '/^Studentename:/{:a;x;s/\n/ /gp;d};H;$ba;d' file
Use the hold space to gather up the fields and then remove the newlines to produce a record.
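The same script laid out on separate lines with comments (GNU sed; just a readability sketch of the one-liner above):
sed '
  # a new record starts at a Studentename: line:
  # swap in everything collected so far, join it with spaces and print it
  /^Studentename:/{
    :a
    x
    s/\n/ /gp
    d
  }
  # otherwise keep collecting lines in the hold space
  H
  # at end of input, branch back to flush the final record
  $ba
  d
' file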
I need to delete the <#> in the following pattern:
vdd1a<1>
vdd1b<2>
vdd1c<3>
....
Outputs should be like:
vdd1a
vdd1b
vdd1c
...
I was trying to do this:
sed 's/\(vdd1[a-z]*\).<[0-9]>/\1/' file1 > file2
But it gives me "vdd1" all the way.
How can I do it correctly?
The dot . after the paren is matching the letter after the 1. You need to get rid of it. I.e.,
sed 's/\(vdd1[a-z]*\)<[0-9]>/\1/' file1 > file2
Alternatively, you can just replace the <[0-9]> with a blank pattern, i.e.,
sed 's/<[0-9]>//' file1 > file2
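To see the difference on a single sample line (the stray dot eats the trailing letter in the first form):
$ echo 'vdd1a<1>' | sed 's/\(vdd1[a-z]*\).<[0-9]>/\1/'
vdd1
$ echo 'vdd1a<1>' | sed 's/\(vdd1[a-z]*\)<[0-9]>/\1/'
vdd1a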
If the lines don't contain another < besides the one in the <#> part, you can avoid using sed and use something like cut instead, for example:
cut -d"<" -f1 <<< "vdd1a<1>"
Will print:
vdd1a
Invoking it with the files:
cut -d"<" -f1 < file1 > file2
I need to convert the below in multiple files. The text need not be the same, but it will be in the same format and length.
File 1:
XXXxx81511
XXX is Present
abcdefg
07/09/2014
YES
1
XXX
XXX-XXXX
File 2:
XXXxx81511
XXX is Present
abcdefg
07/09/2014
YES
1
XXX
XXX-XXXX
TO
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXXXXX-XXXX
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXXXXX-XXXX
Basically, I am converting the rows into a single comma-separated line and appending it to a new file.
I am trying cat filename | tr '\n' ',' but the results all end up on one line, like this:
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXXXXX-XXXX,XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXXXXX-XXXX
Use:
paste -sd, file1 file2 .... fileN
#e.g.
paste -sd, *.txt file*
prints
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXX,XXX-XXXX
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXX,XXX-XXXX
And if you need an empty line after each one:
paste -sd, file* | sed G
prints
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXX,XXX-XXXX
XXXxx81511,XXX is Present,abcdefg,07/09/2014,YES,1,XXX,XXX-XXXX
Short perl variant:
perl -pe 'eof||s|$/|,|' files....
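Here $/ is Perl's input record separator (a newline by default), and eof is true on the last line, so the final newline is kept. A quick check on a toy stream:
$ printf 'a\nb\nc\n' | perl -pe 'eof||s|$/|,|'
a,b,c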
You need to insert an echo after tr. Use a script like this:
for f in file1 file2; do
tr '\n' ',' < "$f"; echo
done > files.output
Use a for loop:
for f in file*; do sed ':a;N;$!ba;s/\n/,/g' < "$f"; done
The sed code was taken from sed: How can I replace a newline (\n)?. tr '\n' ',' didn't work on my limited test setup.
perl -ne 'chomp; print $_ . (($. % 8) ? "," : "\n")' f*
where:
-n reads the file line by line but doesn't print each line
-e executes the code from the command line
8 number of lines in each file
f* glob for files (replace with something that will select all your files). If you need a specific order, you will probably need something more complicated here.
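A quick way to check the line-counting logic on toy files (hypothetical two-line files a and b, hence % 2 instead of % 8; note that $. keeps counting across files, which is what makes the modulo work):
$ printf 'x\ny\n' > a; printf 'p\nq\n' > b
$ perl -ne 'chomp; print $_ . (($. % 2) ? "," : "\n")' a b
x,y
p,q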
I have two files:
$ cat file1.txt
field1=value1
field2=value2
field3=value3
::
::
$ cat file2.txt
something.field1.some
otherthing.field2.anything
anything.field3.something
I need to read file1.txt, check file2.txt for each fieldN, and replace it with the corresponding valueN,
so that the result will be:
something.value1.some
otherthing.value2.anything
anything.value3.something
Provided there are no special sed-type characters in your fields and values, you can use a meta-sed approach:
pax> sed -e 's/^/s\/\\./' -e 's/=/\\.\/./' -e 's/$/.\/g/' file1.txt >x.sed
pax> sed -f x.sed file2.txt
something.value1.some
otherthing.value2.anything
anything.value3.something
If you look at the x.sed file, you'll see that the first sed just makes a list of sed commands to be executed on your second file.
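For the three field= lines shown, the generated x.sed contains one substitution command per field (the :: placeholder lines would produce malformed commands, so in practice they'd need to be filtered out first):
s/\.field1\./.value1./g
s/\.field2\./.value2./g
s/\.field3\./.value3./g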
Use awk:
$ awk -F"[=.]" 'FNR==NR{a[$1]=$2;next}{$2=a[$2]}1' OFS="." file1 file2
something.value1.some
otherthing.value2.anything
anything.value3.something
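The same one-liner spread out with comments, purely as a readability sketch of the command above (OFS="." puts the dots back when the line is rebuilt):
awk -F'[=.]' '
  # while reading the first file, build a map from fieldN to valueN
  FNR==NR { a[$1] = $2; next }
  # in the second file, the middle dot-separated part is the field name
  { $2 = a[$2] }
  # 1 is always true, so the rebuilt line is printed
  1
' OFS='.' file1 file2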
This unfortunately requires the files to be sorted:
tr = . < file1.txt | join -t . -1 1 -2 2 -o 2.1 1.2 2.3 - file2.txt
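For reference, the tr step just rewrites file1.txt to use the same . delimiter that file2.txt uses (the :: placeholder lines pass through unchanged), so join can match field 1 of this stream against field 2 of file2.txt:
$ tr = . < file1.txt
field1.value1
field2.value2
field3.value3
::
::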