What's the simplest way to do a find and replace for a given input string, say abc, and replace it with another string, say XYZ, in the file /tmp/file.txt?
I am writing an app and using IronPython to execute commands through SSH — but I don't know Unix that well and don't know what to look for.
I have heard that Bash, apart from being a command line interface, can be a very powerful scripting language. So, if this is true, I assume you can perform actions like these.
Can I do it with bash, and what's the simplest (one line) script to achieve my goal?
The easiest way is to use sed (or perl):
sed -i -e 's/abc/XYZ/g' /tmp/file.txt
This invokes sed to do an in-place edit due to the -i option. It can be called from bash.
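If you want to keep a copy of the original, GNU sed also accepts an optional backup suffix directly after -i (a hedged variant; the .bak suffix is just a convention):
sed -i.bak -e 's/abc/XYZ/g' /tmp/file.txt
The unmodified file is then saved as /tmp/file.txt.bak.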
If you really really want to use just bash, then the following can work:
while IFS='' read -r a; do
echo "${a//abc/XYZ}"
done < /tmp/file.txt > /tmp/file.txt.t
mv /tmp/file.txt{.t,}
This loops over each line, doing a substitution, and writing to a temporary file (don't want to clobber the input). The move at the end just moves temporary to the original name. (For robustness and security, the temporary file name should not be static or predictable, but let's not go there.)
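A slightly more robust sketch of the same loop, using mktemp so the temporary name is neither static nor predictable:
tmp=$(mktemp /tmp/file.txt.XXXXXX)
while IFS='' read -r a; do
echo "${a//abc/XYZ}"
done < /tmp/file.txt > "$tmp"
mv "$tmp" /tmp/file.txt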
For Mac users:
sed -i '' 's/abc/XYZ/g' /tmp/file.txt
(See the comment below for why.)
File manipulation isn't normally done by Bash, but by programs invoked by Bash, e.g.:
perl -pi -e 's/abc/XYZ/g' /tmp/file.txt
The -i flag tells it to do an in-place replacement.
See man perlrun for more details, including how to take a backup of the original file.
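For example (a hedged sketch; the .bak suffix is arbitrary), giving -i an extension keeps a backup of the original:
perl -pi.bak -e 's/abc/XYZ/g' /tmp/file.txt
The original file is kept as /tmp/file.txt.bak.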
I was surprised when I stumbled over this...
There is a replace command which ships with the "mysql-server" package, so if you have it installed, try it out:
# replace the string abc with XYZ in files
replace "abc" "XYZ" -- file.txt file2.txt file3.txt
# or pipe an echo to replace
echo "abcdef" | replace "abc" "XYZ"
See man replace for more on this.
This is an old post, but for anyone wanting to use variables: as @centurian said, the single quotes mean nothing will be expanded.
A simple way to get variables in is string concatenation. Since this is done by juxtaposition in bash, the following should work:
sed -i -e "s/$var1/$var2/g" /tmp/file.txt
Bash, like other shells, is just a tool for coordinating other commands. Typically you would try to use standard UNIX commands, but you can of course use Bash to invoke anything, including your own compiled programs, other shell scripts, Python and Perl scripts etc.
In this case, there are a couple of ways to do it.
If you want to read a file, and write it to another file, doing search/replace as you go, use sed:
sed 's/abc/XYZ/g' <infile >outfile
If you want to edit the file in place (as if opening the file in an editor, editing it, then saving it), supply instructions to the line editor 'ex':
echo "%s/abc/XYZ/g
w
q
" | ex file
ex is like vi without the fullscreen mode. You can give it the same commands you would at vi's : prompt.
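The same ex commands can be fed in with a here-document, which avoids embedding newlines in an echo string (a hedged equivalent of the above):
ex /tmp/file.txt <<'EOF'
%s/abc/XYZ/g
w
q
EOF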
I found this thread among others and I agree it contains the most complete answers so I'm adding mine too:
sed and ed are so useful...by hand.
Look at this code from @Johnny:
sed -i -e 's/abc/XYZ/g' /tmp/file.txt
When my restriction is to use it in a shell script, no variable can be used inside it in place of "abc" or "XYZ". The BashFAQ seems to agree with my understanding, at least. So, I can't use:
x='abc'
y='XYZ'
sed -i -e 's/$x/$y/g' /tmp/file.txt
#or,
sed -i -e "s/$x/$y/g" /tmp/file.txt
But what can we do? As @Johnny said, use a while read... but unfortunately that's not the end of the story. The following worked well for me:
#edit user's virtual domain
result=
#if nullglob is set then, unset it temporarily
is_nullglob=$( shopt -s | egrep -i 'nullglob' )
if [[ -n "$is_nullglob" ]]; then
shopt -u nullglob
fi
while IFS= read -r line; do
line="${line//'<servername>'/$server}"
line="${line//'<serveralias>'/$alias}"
line="${line//'<user>'/$user}"
line="${line//'<group>'/$group}"
result="$result""$line"'\n'
done < "$tmp"
echo -e "$result" > "$tmp"
#if nullglob was set then, re-enable it
if [[ -n "$is_nullglob" ]]; then
shopt -s nullglob
fi
#move user's virtual domain to Apache 2 domain directory
......
As one can see, if nullglob is set then it behaves strangely when there is a string containing a *, as in:
<VirtualHost *:80>
ServerName www.example.com
which becomes
<VirtualHost ServerName www.example.com
There is no closing angle bracket, and Apache2 can't even load.
This kind of parsing should be slower than a one-hit search and replace but, as you already saw, there are four variables for four different search patterns handled in one parse cycle.
It is the most suitable solution I can think of given the assumptions of the problem.
You can use sed:
sed -i 's/abc/XYZ/gi' /tmp/file.txt
You can use find and sed if you don't know your filename:
find ./ -type f -exec sed -i 's/abc/XYZ/gi' {} \;
Find and replace in all Python files:
find ./ -iname "*.py" -type f -exec sed -i 's/abc/XYZ/gi' {} \;
Be careful if you replace URLs, which contain the "/" character.
An example of how to do it:
sed -i "s%http://domain.com%http://www.domain.com/folder/%g" "test.txt"
Extracted from: http://www.sysadmit.com/2015/07/linux-reemplazar-texto-en-archivos-con-sed.html
If the file you are working on is not so big, and temporarily storing it in a variable is no problem, then you can use Bash string substitution on the whole file at once - there's no need to go over it line by line:
file_contents=$(</tmp/file.txt)
echo "${file_contents//abc/XYZ}" > /tmp/file.txt
The whole file contents will be treated as one long string, including linebreaks.
XYZ can be a variable eg $replacement, and one advantage of not using sed here is that you need not be concerned that the search or replace string might contain the sed pattern delimiter character (usually, but not necessarily, /). A disadvantage is not being able to use regular expressions or any of sed's more sophisticated operations.
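For example, with the replacement coming from a variable (a minimal sketch; note that $(<file) strips trailing newlines and echo adds one back):
replacement='XYZ'
file_contents=$(</tmp/file.txt)
echo "${file_contents//abc/$replacement}" > /tmp/file.txt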
You may also use the ed command to do in-file search and replace:
# delete all lines matching foobar
ed -s test.txt <<< $'g/foobar/d\nw'
See more in "Editing files via scripts with ed".
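A hedged sketch of the same idea applied to the question's search and replace, using ed's g command with an empty pattern in s// to reuse the search regex:
# replace abc with XYZ on every line that contains it, then write the file
ed -s /tmp/file.txt <<< $'g/abc/s//XYZ/g\nw'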
To edit text in a file non-interactively, you need an in-place text editor such as Vim.
Here is a simple example of how to use it from the command line:
vim -esnc '%s/foo/bar/g|:wq' file.txt
This is equivalent to @slim's answer using the ex editor, which is basically the same thing.
Here are a few practical ex examples.
Replacing text foo with bar in the file:
ex -s +%s/foo/bar/ge -cwq file.txt
Removing trailing whitespaces for multiple files:
ex +'bufdo!%s/\s\+$//e' -cxa *.txt
Troubleshooting (when terminal is stuck):
Add -V1 param to show verbose messages.
Force quit by: -cwq!.
See also:
How to edit files non-interactively (e.g. in pipeline)? at Vi SE
Try the following shell command:
find ./ -type f -name "file*.txt" | xargs sed -i -e 's/abc/xyz/g'
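If the file names may contain spaces or other unusual characters, a NUL-separated variant is safer (a hedged sketch assuming GNU find and xargs):
find ./ -type f -name "file*.txt" -print0 | xargs -0 sed -i -e 's/abc/xyz/g'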
You can use Python within the bash script too, via a here-document. I didn't have much success with some of the top answers here, and found this to work without the need for loops:
#!/bin/bash
# feed the script to python on stdin via a quoted here-document
python - <<'END'
filetosearch = '/home/ubuntu/ip_table.txt'
texttoreplace = 'tcp443'
texttoinsert = 'udp1194'
s = open(filetosearch).read()
s = s.replace(texttoreplace, texttoinsert)
f = open(filetosearch, 'w')
f.write(s)
f.close()
END
The simplest way to replace multiple strings in a file is with a single sed command.
Command:
sed -i 's#a/b/c#D/E#g;s#/x/y/z#D:/X#g;' filename
In the above command, s#a/b/c#D/E#g replaces a/b/c with D/E; after the ; the second expression does the same kind of substitution again.
You can use the rpl command. For example, suppose you want to change a domain name in a whole PHP project:
rpl -ivRpd -x'.php' 'old.domain.name' 'new.domain.name' ./path_to_your_project_folder/
This is not pure Bash, of course, but it's very quick and useful. :)
For macOS users, in case you don't read the comments :)
As mentioned by @Austin, if you get the "invalid command code" error:
For in-place replacements, BSD sed requires a file extension after the -i flag, which it uses to save a backup file with that extension.
sed -i '.bak' 's/find/replace/' /file.txt
You can use the empty string '' if you want to skip the backup.
sed -i '' 's/find/replace/' /file.txt
All credit to @Austin.
Open the file using the vim editor. In command mode:
:%s/abc/xyz/g
This is the simplest way.
To make changes in multiple files together, we can do it in a single line:
user_name=$(whoami)
for file in file1.txt file2.txt file3.txt; do sed -i -e "s/default_user/${user_name}/g" "$file"; done
Adding this in case it could be useful.
I'm currently using the command line to grep for a pattern in a source tree. A line of grep output is in the form:
path/to/a/file.java:123: some text here
If I want to open the file at the location specified in the grep output, I would have to manually enter the vim command as:
$ vim +123 path/to/a/file.java
Is there an easier method that would let me use the raw grep output, have the relevant components parsed, and run vim on the file at that line number?
I am interested in a command line solution. I am aware that I can do greps inside vim.
Thanks
The file-line plugin is exactly what you want. With that installed, you can just run
vim path/to/a/file.java:123
You could simply run grep from Vim itself and benefit from the quickfix list/window:
:grep -Rn foo **/*.h
:cw
(scroll around)
<CR>
Or you could pass your grep output to Vim for the same benefits:
$ vim -q <(grep -Rn foo **/*.h)
:cw
(scroll around)
<CR>
Or, if you are already in Vim, you could insert the output of your grep in a buffer and use gF to jump to the right line of the right file:
:r !grep -Rn foo **/*.h
(scroll around)
gF
Or, from your shell:
$ vim <(grep -Rn foo **/*.h)
(scroll around)
gF
Or, if you just ran your grep, you can reuse it like so:
$ vim <(!!)
(scroll around)
gF
Or, if you know its number in history:
$ vim <(!884)
(scroll around)
gF
> vim $(cat the.file | grep xxx)
This evaluates the $(...): grep finds xxx in the.file, and the resulting output is passed to vim as arguments.
It is also possible with backticks ``:
> vim `cat the.file | grep xxx`
Try this:
grep -nr --null pattern | { IFS= read -rd "" f; IFS=: read -d "" n match; vim +$n "$f" </dev/tty; }
grep does a recursive search for pattern. For the first file that it finds, vim is started with the +linenum parameter to put you on the line of interest.
This approach uses NUL-separated i/o. It should be safe for all file names, even ones that contain white space or other difficult characters.
This was tested on GNU tools (Linux). It may work on BSD/OSX as well.
Multiline version
For those who prefer their commands spread over multiple lines:
grep -nr --null pattern | {
IFS= read -rd "" f
IFS=: read -d "" n match
vim +$n "$f" </dev/tty
}
Convenience function
Because the above command is long, one may want to put it in a shell function:
vigrep() { grep -nr --null "$1" | { IFS= read -rd "" f; IFS=: read -d "" n match; vim +$n "$f" </dev/tty; }; }
Once this has been defined, it can be used to search for a file containing any pattern. For example:
vigrep 'some text here'
To make the definition of vigrep permanent, put it in your ~/.bashrc file.
How it works
grep -nr --null pattern
-r tells grep to search recursively.
-n tells grep to return line number of the matches.
--null tells grep to use NUL-separated output.
pattern is the regex to search for.
IFS= read -rd "" f
This reads the first NUL-separated section of input (which will be a file name) and assigns it to the shell variable f.
IFS=: read -d "" n match
This reads the next NUL-separated section of input using : as the word separator. The first word (which is the line number) is assigned to shell variable n. The rest of this line will be ignored.
vim +$n "$f" </dev/tty
This starts vim on line number $n of file $f using the terminal, /dev/tty, for input.
Generally, when running vim, one really wants to have vim accept input from the keyboard. That is why, for this case, we hard-coded input from /dev/tty.
Using cut-and-paste to launch vim
Start the following and cut-and-paste a line of grep -n output to it:
IFS=: read f n rest; vim +$n "$f"
The read command will wait for a line on standard input. The type of input it expects looks like:
path/to/a/file.java:123: some text here
Because IFS=:, it divides up the line on colons and assigns the file name to shell variable f and the line number to shell variable n. When this is done, it launches the vim command.
This command could also, if desired, be saved as a shell function:
grvim() { IFS=: read f n rest; vim "+$n" "$f"; }
I have this function in my .bashrc:
grep_edit(){
grep "$#" | sed 's/:/ +/;s/:/ /';
}
So, the output is in the form:
path/to/a/file.java +123 some text here
Then I can directly use
$ vi path/to/a/file.java +123
Note: I have also heard of the file-line plugin, but I was not sure how it would work with the netrw plugin.
e.g. vi can open remote files with this syntax:
vi scp://root@remote-system//var/log/daemon.log
But if that is not a concern, then you are better off using the file-line plugin.
I have a requirement to batch edit a bunch of files using vim based on their content. The simplest example is that I'd like to perform a series of, let's say, substitutions on files, but only if the first line of the file matches a certain pattern.
I'm trying to do this kind of thing:
vim -e -s $file < changes.vim
I should add that I have no access to tools like sed and awk and would like to perform the entire operation in vim.
I recommend that you find the list of files you need, and pass that list into the command you want. For this, a combination of awk and xargs would seem useful. There are probably clever shorter things you can do…
awk 'FNR>1 {nextfile} /pattern/ { print FILENAME; nextfile }' filePattern | xargs -I{} sh -c 'vim -e -s "$1" < changes.vim' _ {}
In the above, filePattern gives all the files you want (maybe *.c), and /pattern/ is the regex of the match you are looking for. xargs takes one file name at a time and substitutes it into the following command at the place where I put the {}; the sh -c wrapper ensures each vim invocation reads changes.vim on its standard input.
I want to give a tip of the hat to this link where I found the inspiration for this answer.
vim only solution
EDIT - after I posted this you said you need a "vim only" solution. Here it is…
Step 1: create a conditionalEdits.vim file with the following lines at the start:
let line_num = search('searchExpression') " any regex
if line_num == 1 " first line matched
center " put your editing commands here...
update " save changes
endif
quit
Of course, instead of just centering the first line, you will want to put all your editing commands inside the if statement.
Now, you execute this command with
vim -S '/path/to/my/conditionalEdits.vim' filePattern
where filePattern matches all the files you might be interested in (but you will know for sure after you have looked at line 1 inside…)
Obviously you can navigate through the file in the usual way and look for matches / patterns etc to your heart's content - but this is the basic idea.
Helpful links: http://www.ibm.com/developerworks/library/l-vim-script-1/
and http://learnvimscriptthehardway.stevelosh.com
I highly recommend that you do this in a separate directory, using copies of a handful of files first, to make sure this actually does what you think it does. I would hate to be responsible for a bunch of files being overwritten (you do back up, right?)
You can loop over all the files and, if you find the pattern, open vim. Once it is modified to your needs and closed, the next one will open.
#!/usr/bin/env bash
for file in *; do
if [[ "$(sed '1q' ${file})" == "pattern" ]]; then
vim ${file}
fi
done
Within Vim, you can determine the matching files via :vimgrep; to check for a match in the first line, the \%l atom is handy:
:vimgrep /\%1lcertain pattern/ {file-glob}
Then, you can iterate through all matches with :cfnext, or use the :QFDo command from here.
You can pass those commands either via vim -c {cmd} -c {cmd} ..., or in a separate script, as you outline in your question.
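From the shell that might look roughly like this (a sketch; the pattern and the file glob are placeholders):
vim -c 'vimgrep /\%1lcertain pattern/ **/*.txt' -c 'copen'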
I want to find a string in some file in a subdirectory.
For example, we are in bundle/, and in bundle/ there are multiple subdirectories and multiple txt files.
I want to do something like
find . -type f -exec grep "\<F8\>" {} \;
I want to get the files that contain the string <F8>.
This command does work and finds the string, but it never returns the filename.
I hope someone can give me a better solution, like displaying the filename along with the line containing that string.
grep -rl '<F8>' .
The -r option tells grep to search recursively through directories starting at .
The -l option tells it to show you just the filename that's matched, not the line itself.
Your output will look something like this:
./thisfile
./foo/bar/thatfile
If you want to limit this to only one file, append | head -1 to the end of the line.
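For example (a trivial sketch):
grep -rl '<F8>' . | head -1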
If you want output like:
./thisfile:My text contains the <F8> string
./foo/bar/thatfile:didn't remember to press the <F8> key on the
then you can just leave off the -l option. Note that this output is not safe to parse, as filenames may contain colons, and colons in filenames are not escaped in grep's output.
You can use grep by itself.
grep -r '<F8>' .
This should list out all the files and line numbers that match:
grep -nR '<F8>' *
Personally I find it's easier just to use ack. (ack-grep is the name used in Ubuntu's repos to keep from confusing it with another piece of software with the same name.) It's available in most major repositories.
The command would be ack -a "<F8>" (or ack-grep -a "<F8>" in Ubuntu). The -a option is to search all file types.
Example:
testfile
Line 1
Line 2
Line 3
<F8>
<F9>
<F10>
Line 4
Line 5
Line 6
Output:
$ ack -a "<F8>"
testfile
4:<F8>
I want to jump automatically to the positions of the results in Vim after grepping on the command line. Is there such a feature?
The files to open in Vim, at the lines given by grep:
% grep --colour -n checkWordInFile *
SearchToUser.java:170: public boolean checkWordInFile(String word, File file) {
SearchToUser.java~:17: public boolean checkWordInFile(String word, File file) {
SearchToUser.java~:41: if(checkWordInFile(word, f))
If you pipe the output from grep into vim
% grep -n checkWordInFile * | vim -
you can put the cursor on the filename and hit gF to jump to the line in that file that's referenced by that line of grep output. ^WF will open it in a new window.
From within vim you can do the same thing with
:tabedit
:r !grep -n checkWordInFile *
which is equivalent to but less convenient than
:lgrep checkWordInFile *
:lopen
which brings up the superfantastic quickfix window so you can conveniently browse through search results.
You can alternatively get slower but in-some-ways-more-flexible results by using vim's native grep:
:lvimgrep checkWordInFile *
:lopen
This one uses vim REs and paths (e.g. allowing **). It can take 2-4 times longer to run (maybe more), but you get to use fancy \(\)\#<=s and birds of a feather.
Have a look at "Grep search tools integration with Vim" and "Find in files within Vim". Basically vim provides these commands for searching files:
:grep
:lgrep
:vimgrep
:lvimgrep
The articles feature more information regarding their usage.
You could do this:
% vim "+/checkWordInFile" $(grep -l checkWordInFile *)
This will put in the vim command line a list of all the files that match the regex. The "+/..." option will tell vim to search from the start of each file until it finds the first line that matches the regex.
Correction:
The +/... option will only search the first file for the regex. To search in every file you need this:
% vim "-c bufdo /checkWordInFile" $(grep -l checkWordInFile *)
If this is something you need to do often, you could write a bash function so that you only need to specify the regex once (assuming that the regex is valid for both grep and vim).
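A sketch of such a function (the name vigrep_all is made up here; it assumes the regex works in both grep and vim and that the matching file names contain no spaces):
vigrep_all() { vim "-c bufdo /$1" $(grep -l -- "$1" *); }
# usage: vigrep_all checkWordInFile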
I think this is what you are looking for:
http://www.vim.org/scripts/script.php?script_id=2184
When you open a file:line, for instance when copying and pasting an error from your compiler (or grep output), vim tries to open a file with a colon in its name. With this little script in your plugins folder, if the part after the colon is a number and a file exists with the name specified before the colon, vim will open that file and take you to the line you wanted in the first place.
It's definitely what I was looking for.
I highly recommend ack.vim over grep for this functionality.
http://github.com/mileszs/ack.vim
http://betterthangrep.com/
You probably want to make functions for these. :)
Sequential vim calls (console)
grep -rn "implements" app | # Or any (with "-n") you like
awk '{
split($0,a,":"); # split on ":"
print "</dev/tty vim", a[1], "+" a[2] # results in lines with "</dev/tty vim <foundfile> +<linenumber>
}' |
parallel --halt-on-error 1 -j1 --tty bash -ec # halt on error and "-e" important to make it possible to quit in the middle
Use :cq from vim to stop editing.
Concurrent opening in tabs (gvim)
Start the server:
gvim --servername GVIM
Open the tabs:
grep -rn "implements" app | # again, any grep you like (with "-n")
awk "{ # double quotes because of $PWD
split(\$0,a,\":\"); # split on ":"
print \":tabedit $PWD/\" a[1] \"<CR>\" a[2] \"G\" # Vim commands. Open file, then jump to line
}" |
parallel gvim --servername GVIM --remote-send # of course the servername needs to match
If you use git, results are often more meaningful when you search only the files tracked by git. To open files in vim at the line given by a git grep search result, you will need the fugitive plugin; then:
:copen
:Ggrep pattern
This will give you the list in a buffer, and you can choose to open files from your git grep results.
In this particular example:
vim SearchToUser.java +170