Unwanted line break using echo and cat - linux

I'm trying to add a line at the beginning of a file, using
echo 'time/F:x1:x2' | cat - file.txt > newfile.txt
But this produces a line break after each line in the new file (except after the added 'time/F:x1:x2' line). Any ideas on how to avoid this?

Use -n to disable the trailing newline:
echo -n 'time/F:x1:x2' | cat - file.txt > newfile.txt
There are other ways, too:
sed '1s|^|time/F:x1:x2|' file.txt > newfile.txt
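A quick check (the contents of file.txt here are made up) shows that both of these join the new text directly onto the first line of the file, with no break in between:
$ printf '0.1 0.2\n0.3 0.4\n' > file.txt
$ echo -n 'time/F:x1:x2' | cat - file.txt
time/F:x1:x20.1 0.2
0.3 0.4
$ sed '1s|^|time/F:x1:x2|' file.txt
time/F:x1:x20.1 0.2
0.3 0.4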

How about
{ echo 'time/F:x1:x2'; cat file.txt; } >newfile.txt
or
sed '1i\
time/F:x1:x2' file.txt > newfile.txt
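For contrast (using the same made-up two-line file.txt sketched above), these two variants keep the inserted text on its own line:
$ { echo 'time/F:x1:x2'; cat file.txt; } > newfile.txt
$ cat newfile.txt
time/F:x1:x2
0.1 0.2
0.3 0.4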

Actually, you don't even need echo and the pipe if you're using bash. Just use a here-string:
<<< 'time/F:x1:x2' cat - file.txt > newfile.txt
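Note that a bash here-string supplies a trailing newline of its own, so this behaves like plain echo (without -n) and the added text ends up on its own line. A quick way to see the extra byte:
$ wc -c <<< 'time/F:x1:x2'
13
That is 12 characters plus the newline.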

Related

Redirecting and writing in the same file

Hi, I have a script which replaces certain occurrences in .sql files and then writes the result to a new file, so I am unnecessarily creating extra files. Is there any way I can write to the same file?
Below is the relevant part of the script:
sed "s/v1/$value1/g" Save.sql >> CreateViewFinal1.sql
sed "s/v2/$value2/g" CreateViewFinal1.sql >> CreateViewFinal2.sql
sed "s/v3/$value3/g" CreateViewFinal2.sql >> CreateViewFinal3.sql
sed "s/v4/$value4/g" CreateViewFinal3.sql >> CreateViewFinal4.sql
sed "s/v5/$value5/g" CreateViewFinal4.sql >> CreateViewFinal5.sql
sed "s/v6/$value6/g" CreateViewFinal5.sql >> CreateViewFinal6.sql
sed "s/v7/$value7/g" CreateViewFinal6.sql >> CreateViewFinal7.sql
sed "s/v8/$value8/g" CreateViewFinal7.sql >> CreateViewFinal8.sql
sed "s/v9/$value9/g" CreateViewFinal8.sql >> CreateViewFinal9.sql
sed "s/a1/$value10/g" CreateViewFinal9.sql >> CreateViewFinal10.sql
sed "s/b1/$value11/g" CreateViewFinal10.sql >> CreateViewFinal11.sql
sed "s/c1/$value12/g" CreateViewFinal11.sql >> CreateViewFinal12.sql
sqlplus -S -L cimkroger/cimkroger@orcl @CreateViewFinal12.sql
Thanks in advance.
You can use sed's in-place editing and avoid multiple sed commands by chaining -e switches, like this:
sed -i.bak -e "s/v1/$value1/g" -e "s/v2/$value2/g" -e "s/v3/$value3/g" Save.sql
Yes.
You can change a file directly with the -i option of sed.
Hence, sed -i ... file will do the replacement in the same file.
Moreover, instead of so many separate sed lines, you can perform multiple sed actions with the -e option. So instead of:
sed "s/v1/$value1/g" Save.sql >> CreateViewFinal1.sql
sed "s/v2/$value2/g" CreateViewFinal1.sql >> CreateViewFinal2.sql
sed "s/v3/$value3/g" CreateViewFinal2.sql >> CreateViewFinal3.sql
You can do
sed -i -e "s/v1/$value1/g" -e "s/v2/$value2/g" -e "s/v3/$value3/g" Save.sql
and so on.
Example
$ cat file
hello you
$ sed -i -e 's/hello/bye/g' -e 's/you/me/g' file
$ cat file
bye me
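Applied to the script in the question, all twelve substitutions can collapse into a single in-place sed invocation. This is only a sketch: it assumes the $valueN variables contain no characters that are special to sed (such as / or &), and the CreateViewFinal.sql working-copy name is invented so that Save.sql itself stays untouched:
cp Save.sql CreateViewFinal.sql
sed -i \
  -e "s/v1/$value1/g" -e "s/v2/$value2/g" -e "s/v3/$value3/g" \
  -e "s/v4/$value4/g" -e "s/v5/$value5/g" -e "s/v6/$value6/g" \
  -e "s/v7/$value7/g" -e "s/v8/$value8/g" -e "s/v9/$value9/g" \
  -e "s/a1/$value10/g" -e "s/b1/$value11/g" -e "s/c1/$value12/g" \
  CreateViewFinal.sql
sqlplus -S -L cimkroger/cimkroger@orcl @CreateViewFinal.sql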

How can I prepend a string to the beginning of each line in a file?

I have the following bash code which loops through a text file, line by line. I'm trying to prefix the word 'prefix' to each line, but instead I'm getting this error:
rob@laptop:~/Desktop$ ./appendToFile.sh stusers.txt kp
stusers.txt
kp
./appendToFile.sh: line 11: /bin/sed: Argument list too long
115000_210org@house.com,passw0rd
This is the bash script:
#!/bin/bash
file=$1
string=$2
echo "$file"
echo "$string"
for line in `cat $file`
do
sed -e 's/^/prefix/' $line
echo "$line"
done < $file
What am I doing wrong here?
Update:
Performing head on the file dumps all the lines onto a single line of the terminal. Probably related?
rob@laptop:~/Desktop$ head stusers.txt
rob@laptop:~/Desktop$ ouse.com,passw0rd
A one-line awk command should do the trick, too:
awk '{print "prefix" $0}' file
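For instance, with made-up input:
$ printf 'one\ntwo\n' | awk '{print "prefix" $0}'
prefixone
prefixtwo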
Concerning your original error:
./appendToFile.sh: line 11: /bin/sed: Argument list too long
The problem is with this line of code:
sed -e 's/^/prefix/' $line
$line in this context is the file name that sed is running against. To correct your code, you should fix this line as such:
echo $line | sed -e 's/^/prefix/'
(Also note that your original code should not have the < $file at the end.)
William Pursell addresses this issue correctly in both of his suggestions.
However, I believe you have correctly identified that there is an issue with your original text file. dos2unix will not correct this issue, as it only strips the carriage returns Windows sticks on the end of lines. (However, if you are attempting to read a Linux file in Windows, you would get a mammoth line with no returns.)
Assuming that it is not an issue with the end of line characters in your text file, William Pursell's, Andy Lester's, or nullrevolution's answers will work.
A variation on the while read... suggestion:
while read -r line; do echo "PREFIX " $line; done < $file
This could be run directly from the shell (no need for a batch / script file):
while read -r line; do echo "kp" $line; done < stusers.txt
The entire loop can be replaced by a single sed command that operates on the entire file:
sed -e 's/^/prefix/' $file
A Perl way to do it would be:
perl -p -e 's/^/prefix/' filename
or
perl -p -e'$_ = "prefix $_"' filename
In either case, that reads from filename and prints the prefixed lines to STDOUT.
If you add a -i flag, then Perl will modify the file in place. You can also specify multiple filenames and Perl will magically do all of them.
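For example, to prefix every line of filename in place while keeping a backup (a sketch; the .bak suffix is just a choice):
perl -pi.bak -e 's/^/prefix/' filename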
Instead of the for loop, it is more appropriate to use while read...:
while read -r line; do
    echo "$line" | sed -e 's/^/prefix/'
done < "$file"
But you would be much better off with the simpler:
sed -e 's/^/prefix/' $file
Use sed. Just change the word prefix.
sed -e 's/^/prefix/' file.ext
If you want to save the output in another file
sed -e 's/^/prefix/' file.ext > file_new.ext
You don't need sed; just concatenate the strings in the echo command:
while IFS= read -r line; do
echo "prefix$line"
done < filename
Your loop iterates over each word in the file:
for line in `cat file`; ...
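A quick side-by-side sketch (with a made-up two-line file) shows the word-wise for loop versus a line-wise while read:
$ printf 'alpha beta\ngamma\n' > file
$ for line in `cat file`; do echo "[$line]"; done
[alpha]
[beta]
[gamma]
$ while IFS= read -r line; do echo "[$line]"; done < file
[alpha beta]
[gamma]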
sed -i '1a\
Your Text' file1 file2 file3
A solution without sed/awk and while loops:
xargs -n1 printf "$prefix%s\n" < "$file"

How to filter data out of tabulated stdout stream in Bash?

Here's what the output looks like, basically:
? RESTRequestParamObj.cpp
? plugins/dupfields2/_DupFields.cpp
? plugins/dupfields2/_DupFields.h
I need to get the filenames from the second column and pass them to rm. There's an AWK script that goes like awk '{print $2}', but I was wondering if there's another solution.
If you have spaces between the ? and the filename then:
cut -c9-
If they're tabs then:
cut -f2
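For example, with tab-separated columns (a sketch using one of the lines from the question):
$ printf '?\tplugins/dupfields2/_DupFields.h\n' | cut -f2
plugins/dupfields2/_DupFields.h
Appending | xargs rm would then remove the listed files, assuming the names contain no whitespace or quote characters.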
I placed your output in a file:
$> cat ./text
? RESTRequestParamObj.cpp
? plugins/dupfields2/_DupFields.cpp
? plugins/dupfields2/_DupFields.h
Then edit it with sed:
$> cat ./text | sed -r -e 's/(\?[\ \t]*)(.*)/\2/g'
RESTRequestParamObj.cpp
plugins/dupfields2/_DupFields.cpp
plugins/dupfields2/_DupFields.h
Sed here is matching two parts of the line:
the ? together with the tabs or spaces after it, and
the other characters up to the end of the line.
It then replaces the whole line with only the second part.
This might work for you:
echo "? RESTRequestParamObj.cpp" | sed -e 's/^\S\+/rm /' | sh
or using GNU sed
echo "? RESTRequestParamObj.cpp"| sed -r 's/^\S+/rm /e'
A bash-only solution, assuming your output comes from stdin:
while read line; do echo ${line##* }; done
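The ${line##* } expansion strips the longest prefix ending in a space, leaving the filename (a sketch; this assumes space-separated columns, as the answer above does):
$ line='?      RESTRequestParamObj.cpp'
$ echo "${line##* }"
RESTRequestParamObj.cpp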
Use cut or perl instead:
cut -f2 -d$'\t' | xargs rm -rf
<your output> | perl -ne '@cols = split /\t/; print $cols[1]' | xargs rm -rf

Removing line that contains more than one word

I need to remove a line in a specified file if it has more than one word in it, using a bash script on Linux.
e.g. file:
$ cat testfile
This is a text
file
This line should be deleted
this-should-not.
awk 'NF<=1{print}' testfile
where a word is a run of non-whitespace characters.
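Run against the testfile from the question, only the one-word lines survive:
$ awk 'NF<=1{print}' testfile
file
this-should-not.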
Just for fun, here's a pure bash version which doesn't call any other executable (since you asked for it in bash):
$ while read a b; do if [ -z "$b" ]; then echo $a;fi;done <testfile
awk '!/[ \t]/{print $1}' testfile
This reads "print the first element of lines that don't contain a space or a tab".
Empty lines will be output (since they don't contain more than one word).
Easy enough:
$ egrep -v '\S\s+\S' testfile
$ sed '/ /d' << EOF
> This is a text
> file
>
> This line should be deleted
> this-should-not.
> EOF
file
this-should-not.
If you want to edit files in place (without any backups), you may also use ed (see man ed):
cat <<-'EOF' | ed -s testfile
H
,g/^[[:space:]]*/s///
,g/[[:space:]]*$/s///
,g/[[:space:]]/.d
wq
EOF
This should satisfy your needs:
cat filename | sed -n '/^\S*$/p'

How can I add text at the beginning of each line?

How can I add text at the beginning of each line?
For example, I have a file containing:
/var/lib/svn/repos/b1me/products/payone/generic/code/core
/var/lib/svn/repos/b1me/products/payone/generic/code/fees
/var/lib/svn/repos/b1me/products/payone/generic/code/2ds
I want it to become:
svn+ssh://svn.xxx.com.jo/var/lib/svn/repos/b1me/products/payone/generic/code/core
svn+ssh://svn.xxx.com.jo/var/lib/svn/repos/b1me/products/payone/generic/code/fees
svn+ssh://svn.xxx.com.jo/var/lib/svn/repos/b1me/products/payone/generic/code/2ds
In other words, I want to add "svn+ssh://svn.xxx.com.jo" at the beginning of each line of this file.
One way to do this is to use awk.
awk '{ printf "svn+ssh://svn.xxx.com.jo"; print }' <filename>
If you want to modify the file in place, you can use sed with the -i switch.
sed -i -e 's_.*_svn+ssh://svn.xxx.com.jo&_' <filename>
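A quick check on one of the paths from the question shows how the & in the replacement refers back to the whole matched line:
$ echo '/var/lib/svn/repos/b1me/products/payone/generic/code/core' | sed 's_.*_svn+ssh://svn.xxx.com.jo&_'
svn+ssh://svn.xxx.com.jo/var/lib/svn/repos/b1me/products/payone/generic/code/core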
Using sed:
printf "line1\nline2\n" | sed "s/^/new text /"
Using ex:
printf "line1\nline2\n" | ex -s +"%s/^/foo bar /e" +%p -cq! /dev/stdin
Using vim:
printf "line1\nline2\n" | vim - -es +"%s/^/foo bar /e" +%p -cq!
Using shell:
printf "line1\nline2\n" | while read line; do echo foo bar $line; done
ruby -pne 'sub(/^/,"svn+ssh://svn.xxx.com.jo")' file
Simple way:
sed -i 's_^_svn+ssh://svn.xxx.com.jo_' <filename>
It can also be done with Perl:
perl -pe 's#^#svn+ssh://svn.xxx.com.jo#' input.file
