I have executed the following commands:
[root@gblabvl31 Aakash]# cat > file1.txt
-rwx
[root@gblabvl31 Aakash]# cat file1.txt | tr '[-rwx]' '[0421]' > file2.txt
[root@gblabvl31 Aakash]# cat file2.txt
-]]]
I could not understand why the expected output, which is
0421
was not achieved.
tr is a little special: its arguments are already sets of characters, so the brackets are taken literally; worse, [-r in your first set is parsed as a range from [ to r, which is why r, w and x all map to the padded ]. Thus
$ echo '-rwx' | tr -- '-rwx' '0421'
0421
yields the expected result.
Get rid of the brackets:
$ tr -- '-rwx' '0421' <<<'-rwx'
0421
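To see what the brackets actually do, here is a minimal sketch with made-up strings (GNU tr assumed): the brackets simply map to themselves, so bracketed sets only appear to work when both sides line up.
$ echo '[ab]' | tr '[ab]' '[xy]'
[xy]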
I generated 1000 output files containing a single line with (mistakenly) no line break at the end, so that
cat filename_* > outfile
generates a file with a single line. I attempted to remedy this using
cat filename_* | tr '\n' ' ' > outfile
but I get exactly the same result - a file with a single line of output. Why doesn't the latter code (which ought to add a line break for each filename_* file) accomplish what I'm trying to do?
I think you could manually append a line break to your 1000 output files, and then cat them all afterwards:
echo | tee -a filename_*
cat filename_* > outfile
Edit: changed the first step to echo | tee -a filename_*, as @rowboat suggested.
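Note that tee also copies the newline to stdout; if the stray blank line on the terminal bothers you, redirect it away (a minor refinement, not part of the original suggestion):
echo | tee -a filename_* > /dev/null
cat filename_* > outfile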
If all your files are missing the final linefeed, you can use sed to add it on the fly:
# with GNU sed
sed '$s/$/\n/' filename_* > outfile
# with standard sed and bash, zsh, etc...
sed $'$s/$/\\\n/' filename_* > outfile
# with standard sed and a POSIX shell
sed '$s/$/\
/' filename_* > outfile
tr '\n' ' ' says to replace each \n with a space; you've already stated the inputs do not contain any \n, so the tr does nothing and the final output is just a copy of the input.
Setup:
for ((i=1;i<=5;i++))
do
printf 'abcd' > out${i}
done
$ cat out*
abcdabcdabcdabcdabcd
Many commands can process a file and add a \n; it just depends on how much typing you want to do, e.g.:
$ sed 's/$/&/' out* # or: sed -n '/$/p' out*
abcd
abcd
abcd
abcd
abcd
$ awk '1' out*
abcd
abcd
abcd
abcd
abcd
I can't see a way to make cat itself append a \n, but one idea would be to wrap a user-defined function around one of the solutions above; assume we want to name our new function catn (cat and add a \n on the end):
$ type -a catn # verify name "catn" not currently in use
-bash: type: catn: not found
$ catn() { awk '1' "${@:--}"; } # wrap function definition around the awk solution; "${@:--}" means the arguments, or stdin if none
$ catn out*
abcd
abcd
abcd
abcd
abcd
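Because of the "${@:--}" fallback (use the arguments, or - for stdin when there are none), the function also works as a filter:
$ printf 'abcd' | catn
abcd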
(I need this in bash on Linux.) I have a file with numbers like this:
1.415949602
91.09582241
91.12042924
91.40270349
91.45625033
91.70150341
91.70174342
91.70660043
91.70966213
91.72597066
91.7287678315
91.7398645966
91.7542977976
91.7678146465
91.77196659
91.77299733
abcdefghij
91.7827827
91.78288651
91.7838959
91.7855
91.79080605
91.80103075
91.8050505
sed 's/^91\.//' file (working)
Is there any way I can do these 3 steps?
First I tried this:
cat input | tr -d 91. > 1.txt (didn't work)
cat input | tr -d "91." > 1.txt (didn't work)
cat input | tr -d '91.' > 1.txt (didn't work)
then
grep -x '.\{10\}' (working)
then
grep "^[6-9]" (working)
Final one-line solution:
cat input.txt | sed 's/\91.//g' | grep -x '.\{10\}' | grep "^[6-9]" > output.txt
Your "final" solution:
cat input.txt |
sed 's/\91.//g' |
grep -x '.\{10\}' |
grep "^[6-9]" > output.txt
should avoid the useless cat, and also move the backslash in the sed script to the correct place (I also added a ^ anchor and removed the g flag, since you don't expect more than one match on a line anyway):
sed 's/^91\.//' input.txt |
grep -x '.\{10\}' |
grep "^[6-9]" > output.txt
You might also be able to get rid of at least one useless grep, but at this point I would switch to Awk:
awk '{ sub(/^91\./, "") } /^[6-9].{9}$/' input.txt >output.txt
The sub() does what your sed replacement did; the final condition says to print lines which match the regex.
The same can conveniently, but less readably, be written in sed:
sed -n 's/^91\.\([6-9][0-9]\{9\}\)$/\1/p' input.txt >output.txt
assuming your sed dialect supports BRE regex with repetitions like [0-9]\{9\}.
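For the record, run against the sample input above, either version should print just the four lines whose fractional part is ten digits long and starts with 6-9:
$ awk '{ sub(/^91\./, "") } /^[6-9].{9}$/' input.txt
7287678315
7398645966
7542977976
7678146465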
I have a text file with a column of numbers that I need to transform into a single line, with the numbers separated by ', '.
For example:
$ cat file.txt
1034008
1034043
10340431
1034051
Then I use tr:
tr "\n" "', '" < file.txt > file2.txt
But, result is:
$ cat file2.txt
1034008'1034043'10340431'1034051
So what do I need to do to get the correct result?
tr can only do one-to-one mapping, not one-to-many.
$ # convert all input lines to one line
$ # using , as separator, cannot give multiple character separator
$ paste -sd, ip.txt
1034008,1034043,10340431,1034051
$ # post process it
$ paste -sd, ip.txt | sed 's/,/, /g'
1034008, 1034043, 10340431, 1034051
$ # or use a tool that allows input record separator manipulation
$ perl -pe 's/\n/, / unless eof' ip.txt
1034008, 1034043, 10340431, 1034051
1. We can do this with sed.
sed's N command reads the next line into the pattern space, so we can use N to merge 2 lines into 1. But how do we merge all lines into one? We can set a label at the beginning and use the t command to jump back to that label, forming a loop; the loop ends when N runs out of input.
$ sed ':myLabel;N;s/\n/, /; t myLabel' file.txt > file2.txt
$ cat file2.txt
1034008, 1034043, 10340431, 1034051
2. Alternatively, we can use xargs to read all the content into one space-delimited line, and then use sed to replace each space with the string you want.
$ cat file.txt | xargs | sed 's/ /, /g' > file2.txt
$ cat file2.txt
1034008, 1034043, 10340431, 1034051
Refer to:
How the 'N' command works in sed?
https://www.thegeekstuff.com/2009/12/unix-sed-tutorial-6-examples-for-sed-branching-operation/
Pure bash, to avoid external commands (faster):
tk="$(< file.txt)"
echo "${tk//$'\n'/, }" > file2.txt
I have multiple files which I want to concat with cat.
Let's say
File1.txt
foo
File2.txt
bar
File3.txt
qux
I want to concat them so that the final file looks like:
foo

bar

qux

instead of this, which the usual cat File*.txt > finalfile.txt produces:
foo
bar
qux
What's the right way to do it?
You can do:
for f in *.txt; do (cat "${f}"; echo) >> finalfile.txt; done
Make sure the file finalfile.txt does not exist before you run the above command.
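Assuming each FileN.txt ends with a newline, as text files usually do, the extra echo produces exactly the blank-line-separated result (plus one trailing blank line after the last file):
$ for f in File*.txt; do (cat "${f}"; echo) >> finalfile.txt; done
$ cat finalfile.txt
foo

bar

qux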
If you are allowed to use awk you can do (FNR resets to 1 for each input file, so an empty line is printed before every file; the final 1 prints each input line):
awk 'FNR==1{print ""}1' *.txt > finalfile.txt
If you have few enough files that you can list each one, then you can use process substitution in Bash, inserting a newline between each pair of files:
cat File1.txt <(echo) File2.txt <(echo) File3.txt > finalfile.txt
If it were me doing it I'd use sed:
sed -e '$s/$/\n/' -s *.txt > finalfile.txt
In this sed program, $ has two meanings: as an address it selects the last line only (the range of lines the substitution applies to), and in the substitution pattern it matches the end of the line.
If your version of sed doesn't have -s (process input files separately) you can do it all as a loop though:
for f in *.txt ; do sed -e '$s/$/\n/' "$f" ; done > finalfile.txt
This works in Bash:
for f in *.txt; do cat "$f"; echo; done
In contrast to answers with >> (append), the output of this command can be piped into other programs.
Examples:
for f in File*.txt; do cat "$f"; echo; done > finalfile.txt
(for ... done) > finalfile.txt (parens are optional)
for ... done | less (piping into less)
for ... done | head -n -1 (this strips off the trailing blank line)
You may do it using xargs if you like, but the main idea is still the same:
find *.txt | xargs -I{} sh -c "cat {}; echo ''" > finalfile.txt
That's how I just did it on OS X 10.10.3:
for f in *.txt; do (cat "$f"; echo '') >> fullData.txt; done
since plain echo with no arguments ended up inserting no new lines for me.
In Python 2, this concatenates with blank lines between files (the trailing , suppresses print's own newline, avoiding an extra trailing blank line):
print '\n'.join(open(f).read() for f in filenames),
Here is the ugly Python one-liner that can be called from the shell and prints the output to a file:
python -c "from sys import argv; print '\n'.join(open(f).read() for f in argv[1:])," File*.txt > finalfile.txt
You could use grep, with -h to suppress the filenames:
grep -h "" File*.txt
Will give:
foo
bar
qux
I have two files
$ cat file1.txt
field1=value1
field2=value2
field3=value3
::
::
$ cat file2.txt
something.field1.some
otherthing.field2.anything
anything.field3.something
I need to read file1.txt and check file2.txt for each fieldN, replacing it with the corresponding valueN,
so that the result will be
something.value1.some
otherthing.value2.anything
anything.value3.something
Provided there are no special sed-type characters in your fields and values, you can use a meta-sed approach:
pax> sed -e 's/^/s\/\\./' -e 's/=/\\.\/./' -e 's/$/.\/g/' file1.txt >x.sed
pax> sed -f x.sed file2.txt
something.value1.some
otherthing.value2.anything
anything.value3.something
If you look at the x.sed file, you'll see that the first sed just makes a list of sed commands to be executed on your second file.
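Assuming file1.txt contains just the three sample lines, the generated x.sed looks like this:
pax> cat x.sed
s/\.field1\./.value1./g
s/\.field2\./.value2./g
s/\.field3\./.value3./g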
Use awk: while FNR==NR (i.e. while reading file1), fill array a with the field-to-value map; then replace $2 of each file2 line with its mapped value.
$ awk -F"[=.]" 'FNR==NR{a[$1]=$2;next}{$2=a[$2]}1' OFS="." file1 file2
something.value1.some
otherthing.value2.anything
anything.value3.something
This unfortunately requires the files to be sorted on the join field:
tr = . < file1.txt | join -t . -1 1 -2 2 -o 2.1 1.2 2.3 - file2.txt
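If the files are not already ordered by the join field, a sketch using sort and bash process substitution (note this also reorders the output):
tr = . < file1.txt | sort -t . -k1,1 | join -t . -1 1 -2 2 -o 2.1 1.2 2.3 - <(sort -t . -k2,2 file2.txt)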