How to do this under Linux/Unix? - linux

There are a million webpages, and each page may contain phone numbers in one of two
formats, (XXX)XXX-XXXX or XXX-XXX-XXXX. How can I find them and update them to a unified format, i.e., 1-xxx-xxx-xxxx, using Linux or Unix commands?

cat ph.txt
111-222-3333-4444
(222)-234-2932-2929
212-939-2929-2929
Using sed you can change a million webpages:
cat ph.txt | sed -e 's/^(//;s/)//;s/^/1-/'
1-111-222-3333-4444
1-222-234-2932-2929
1-212-939-2929-2929
For all HTML files (note the added -i, so sed edits each file in place instead of printing to stdout):
find dirname -type f -name "*.html" -exec sed -i -e 's/^(//;s/)//;s/^/1-/' {} \;

sed -e 's/(\([[:digit:]]\{3\}\))\([[:digit:]]\{3\}-[[:digit:]]\{4\}\)/\1-\2/g' -e 's/[[:digit:]]\{3\}-[[:digit:]]\{3\}-[[:digit:]]\{4\}/1-&/g'
Something like that. The first expression changes the parenthesized style to the hyphenated style; the second prepends 1- to the number.
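For example, with two made-up numbers, one in each format:
$ printf '(123)456-7890\n123-456-7890\n' | sed -e 's/(\([[:digit:]]\{3\}\))\([[:digit:]]\{3\}-[[:digit:]]\{4\}\)/\1-\2/g' -e 's/[[:digit:]]\{3\}-[[:digit:]]\{3\}-[[:digit:]]\{4\}/1-&/g'
1-123-456-7890
1-123-456-7890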

This command works with either format in one step:
sed 's/(\?\([[:digit:]]\{3\}\)[)-]\?\([[:digit:]]\{3\}-[[:digit:]]\{4\}\)/1-\1-\2/g' inputfile
It will change other number formats too, though, including "123456-7890", "(123456-7890" and "123)456-7890".
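For instance, the stray format from that list (GNU sed, since \? is a GNU extension to basic regular expressions):
$ echo '123)456-7890' | sed 's/(\?\([[:digit:]]\{3\}\)[)-]\?\([[:digit:]]\{3\}-[[:digit:]]\{4\}\)/1-\1-\2/g'
1-123-456-7890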

Related

Wildcard in sed command to replace string not working

I'm trying to use the sed command in the terminal to replace a specific line, in all my text files with a certain extension, with a specific string:
sed -i.bak '35s/^.*$/5\) 1\-4/' fitting_file*.feedme
So I am trying to replace line 35 in each of these files with the string "5) 1-4". When I run an ls fitting_file*.feedme | wc -l command in this directory, I get 221 files. However, when I run the above sed command, it only edits the FIRST file in the order of ls fitting_file*.feedme. I know this because grep '5) 1-4' fitting_file*.feedme continually only returns the first file on the list after I run the replacement command. I also tried replacing fitting_file*.feedme with a space-separated list of a couple of these files in my sed command as a test, but it still only operated on the one I chose to list first. Why is this happening?
sed operates on a single stream: it essentially concatenates all the files together and treats the result as one stream, so it replaces the 35th line of the big concatenated stream.
To see this, make a 20-line file called A and a 20-line file called B. Apply your sed command as
sed -i.bak '35s/^.*$/5\) 1\-4/' A B
and you will see the 15th line of B replaced.
I think this should answer your direct question. As for how to get done what you'd like, I assume you've already figured out that wrapping your sed command in a for loop is one way to do it (a sketch follows). :)
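A minimal sketch of that loop, using the -i.bak from the question so each file keeps a backup:
for f in fitting_file*.feedme; do
    sed -i.bak '35s/^.*$/5\) 1\-4/' "$f"
done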
Try
Create a file containing your sed instruction, like this:
#!/bin/bash
sed -i.bak '35s/^.*$/5\) 1\-4/' "$1"
exit 0
and call it prog.sh. Next, make it executable:
chmod u+x prog.sh
Now you can solve your problem using:
find . -name fitting_file\*.feedme -exec ./prog.sh {} \;
You could do all this on one line but frankly the number of escapes required is a bit much. Good luck.
To do what you're trying to do without a shell loop:
awk -i inplace -v inplace::suffix=.bak 'FNR==35{$0="5) 1-4"}1' fitting_file*.feedme
Note that unlike sed, which just counts lines across all input files, awk has NR to track the number of records (lines, by default) across all files, and FNR for the same count within just the current file.
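A tiny illustration of the difference, where A and B are any two multi-line files:
$ awk '{print FILENAME, NR, FNR}' A B
NR keeps climbing across both files, while FNR resets to 1 at the start of B.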
The awk -i inplace command above uses GNU awk for in-place editing, just as GNU sed has -i for that. The default awk on macOS is BSD awk, not GNU awk, but you should install GNU awk, as it doesn't have all the bugs/quirks that BSD awk does and it has a ton of extremely useful extensions.
If you just want to use macOS's awk then it'd be something like:
find . -name 'fitting_file*.feedme' -exec sh -c "\
awk 'FNR==35{\$0=\"5) 1-4\"}1' \"\$1\" > \"\$1.bak\" &&
mv -- \"\$1.bak\" \"\$1\"
" sh {} \;
which is obviously getting kinda complicated - I'd probably put the awk+mv script in a file to execute from sh -c or just resort to a shell loop myself if faced with that alternative (or a similar quoting nightmare with xargs)!

How to replace string in files recursively via sed or awk?

I would like to know how to search from the command line for a string in various files of type .rb.
And replace:
.delay([ANY OPTIONAL TEXT FOR DELETION]).
with
.delay.
Besides sed and awk, are there any other command-line tools included in the OS that are better suited to the task?
Status
So far I have the following regular expression:
.delay\(*.*\)\.
How do I match only up to the first closing parenthesis, and avoid replacing:
.delay([ANY OPTIONAL TEXT FOR DELETION]).sometext(param)
Thanks in advance!
If you need to find and replace text in files, sed seems to be the best command-line solution.
Search for a string in a text file and replace it:
sed -i 's/PATTERN/REPLACEMENT/' file.name
Or, if you need to process multiple occurrences of PATTERN in the file, add the g flag:
sed -i 's/PATTERN/REPLACEMENT/g' file.name
To process multiple files, pipe the list of files to xargs:
echo "${filesList}" | xargs sed -i ...
You can use find to generate your list of files, and xargs to run sed over the result:
find . -type f -print | xargs sed -i 's/\.delay.*/.delay./'
find will generate a list of files contained in your current directory (., although you can of course pass a different directory); xargs will read that list and then run sed with the files as arguments.
Instead of find, which here generates a list of all files, you could use something like grep to generate a list of files that contain a specific term. E.g.:
grep -rl '\.delay' | xargs sed -i ...
For the part of the question where you want to match and replace only up to the first ) and not include a second pair of (), here is how to change your regex:
.delay\(*.*\)\.
->
\.delay\([^\)]*\)
I.e., match "literal dot, delay, opening parenthesis, everything except a closing parenthesis, then the closing parenthesis".
E.g. using sed:
$ echo '.delay([ANY OPTIONAL TEXT FOR DELETION]).sometext(param)' | sed -E "s/\.delay\([^\)]*\)/.delay/"
.delay.sometext(param)
I recommend using grep to find the right files:
grep -rl --include "*.rb" '\.delay' .
Then feed the list into xargs, as recommended by other answers.
Credits to the other answers for providing a solution for feeding multiple files into sed.
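Putting those pieces together, one possible end-to-end sketch (assuming GNU grep/sed/xargs; the -Z and -0 options keep filenames with spaces intact):
grep -rlZ --include '*.rb' '\.delay' . | xargs -0 sed -i -E 's/\.delay\([^)]*\)/.delay/g'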

remove DOS end of file character from many files in Linux

I transferred millions of generated SVG files from DOS to a Linux box and just realized that there is a ^# as the last character of each file (the DOS end-of-file character), which gives an error when I try to display the SVG file in a browser.
In this question:
How can I remove the last character of the last line of a file?
Maroun gives the solution as:
sed '$ s/.$//' your_file
But when I modify it to look like this:
sed '$ s/.$//' *.SVG
or
find . -print | grep .SVG | sed '$ s/.$//'
It does not work.
I would also like to be able to specify that it should only delete the last character if it is the ^#.
Can someone please tell me what I am doing wrong, or how to get this to work? The SVG files are in thousands of subdirectories, so I need to be able to make the change from the top to the bottom of the tree structure.
I'm guessing your first example isn't working because sed treats all the files expanded from *.SVG as one stream, so the $ address only matches the last line of the last file. The second example (the one with the find command) fails because you're passing find's output (the file names) to sed instead of the file contents.
Also, if you want to change the files' contents with sed, you need to use -i so the changes are performed "in place".
You could try this:
for file in *.SVG
do
sed -i '$ s/.$//' "$file"
done
Or:
find . -name "*.SVG" -exec sed -i '$ s/.$//' {} \;

Sed appears to be deleting the full line

Hi guys, I'm trying to use sed to delete part of a string (it's a directory path). I'm using it like so:
sed -i 's/$1//g' ~/Desktop/RecyclingBin/logs/$1
Whenever I open the text file it appears to be blank. Any help would be appreciated.
Also, if there's an easier way to output a file's location to a text file without the actual filename being in the output, that would make life a lot easier. I'm currently using:
find $PWD -type d -name "*$1*" >> ~/Desktop/RecyclingBin/logs/$1
Thank you in advance!
You can do this in find itself:
find . -type d -name "*$1*" -exec bash -c 'echo "${1##*/}"' - {} \;
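Here ${1##*/} is shell parameter expansion that strips everything up to the last slash, leaving just the base name. For example, with a made-up path:
$ set -- /home/user/Desktop/RecyclingBin
$ echo "${1##*/}"
RecyclingBin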
Try my example:
sed -i "s/$1//g" ~/Desktop/RecyclingBin/logs/$1
It works for me.
Always remember: if you want to use a shell variable in a sed expression, you should use double quotes, as in the example posted above. Otherwise, for normal substitution, use single quotes.
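A two-line demonstration of the difference (the pattern foo is arbitrary):
$ pattern=foo
$ echo 'foo bar' | sed "s/$pattern//"
 bar
$ echo 'foo bar' | sed 's/$pattern//'
foo bar
With double quotes the shell expands $pattern before sed runs; with single quotes sed receives the literal text $pattern, which matches nothing.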

How can I use xargs to copy files that have spaces and quotes in their names?

I'm trying to copy a bunch of files below a directory and a number of the files have spaces and single-quotes in their names. When I try to string together find and grep with xargs, I get the following error:
find .|grep "FooBar"|xargs -I{} cp "{}" ~/foo/bar
xargs: unterminated quote
Any suggestions for a more robust usage of xargs?
This is on Mac OS X 10.5.3 (Leopard) with BSD xargs.
You can combine all of that into a single find command:
find . -iname "*foobar*" -exec cp -- "{}" ~/foo/bar \;
This will handle filenames and directories with spaces in them. You can use -name to get case-sensitive results.
Note: The -- flag passed to cp prevents it from processing files starting with - as options.
find . -print0 | grep --null-data 'FooBar' | xargs -0 ...
I don't know whether grep supports --null-data (treating both input and output as NUL-terminated), nor whether xargs supports -0, on Leopard, but on GNU it's all good.
The easiest way to do what the original poster wants is to change the delimiter from any whitespace to just the end-of-line character like this:
find whatever ... | xargs -d "\n" cp -t /var/tmp
This is more efficient as it does not run "cp" multiple times:
find -name '*FooBar*' -print0 | xargs -0 cp -t ~/foo/bar
I ran into the same problem. Here's how I solved it:
find . -name '*FooBar*' | sed 's/.*/"&"/' | xargs cp -t ~/foo/bar
I used sed to substitute each line of input with the same line, but surrounded by double quotes. From the sed man page, "...An ampersand (``&'') appearing in the replacement is replaced by the string matching the RE..." -- in this case, .*, the entire line. (GNU cp's -t names the target directory up front, since xargs appends the file names at the end.)
This solves the xargs: unterminated quote error.
This method works on Mac OS X v10.7.5 (Lion):
find . | grep FooBar | xargs -I{} cp {} ~/foo/bar
I also tested the exact syntax you posted. That also worked fine on 10.7.5.
Just don't use xargs. It is a neat program, but it doesn't go well with find when faced with non-trivial cases.
Here is a portable (POSIX) solution, i.e. one that doesn't require GNU-specific extensions to find, xargs or cp:
find . -name "*FooBar*" -exec sh -c 'cp -- "$#" ~/foo/bar' sh {} +
Note the ending + instead of the more usual ;.
This solution:
correctly handles files and directories with embedded spaces, newlines or whatever exotic characters.
works on any Unix and Linux system, even those not providing the GNU toolkit.
doesn't use xargs, which is a nice and useful program but requires too much tweaking and too many non-standard features to properly handle find's output.
is also more efficient (read: faster) than the accepted answer and most, if not all, of the other answers.
Note also that, despite what is stated in some other replies or comments, quoting {} is useless (unless you are using the exotic fish shell).
Look into using the --null commandline option for xargs with the -print0 option in find.
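For example (GNU find/xargs, or any implementations that provide these options):
find . -name '*FooBar*' -print0 | xargs --null -I{} cp {} ~/foo/bar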
For those who rely on commands other than find, e.g. ls:
find . | grep "FooBar" | tr \\n \\0 | xargs -0 -I{} cp "{}" ~/foo/bar
find | perl -lne 'print quotemeta' | xargs ls -d
I believe that this will work reliably for any character except line-feed (and I suspect that if you've got line-feeds in your filenames, then you've got worse problems than this). It doesn't require GNU findutils, just Perl, so it should work pretty much anywhere.
I have found that the following syntax works well for me.
find /usr/pcapps/ -mount -type f -size +1000000c | perl -lpe ' s{ }{\\ }g ' | xargs ls -l | sort +4nr | head -200
In this example, I am looking for the largest 200 files over 1,000,000 bytes in the filesystem mounted at "/usr/pcapps".
The Perl one-liner between find and xargs escapes/quotes each blank, so xargs passes any filename with embedded blanks to ls as a single argument.
Frame challenge — you're asking how to use xargs. The answer is: you don't use xargs, because you don't need it.
The comment by user80168 describes a way to do this directly with cp, without calling cp for every file:
find . -name '*FooBar*' -exec cp -t /tmp -- {} +
This works because:
the cp -t flag allows you to give the target directory near the beginning of the cp command, rather than near the end. From man cp:
-t, --target-directory=DIRECTORY
copy all SOURCE arguments into DIRECTORY
The -- flag tells cp to interpret everything after it as a filename, not a flag, so files starting with - or -- do not confuse cp; you still need this because the -/-- characters are interpreted by cp, whereas any other special characters are interpreted by the shell.
The find -exec command {} + variant essentially does the same as xargs. From man find:
-exec command {} +
This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of `{}' is allowed within the command, and (when find is being invoked from a shell) it should be quoted (for example, '{}') to protect it from interpretation by shells. The command is executed in the starting directory. If any invocation returns a non-zero value as exit status, then find returns a non-zero exit status. If find encounters an error, this can sometimes cause an immediate exit, so some pending commands may not be run at all. This variant of -exec always returns true.
Using this in find directly avoids the need for a pipe or a shell invocation, so you don't need to worry about any nasty characters in filenames.
With Bash (not POSIX) you can use process substitution to read each line into a variable. Quoting the variable then protects any special characters:
while IFS= read -r line ; do cp -- "$line" ~/bar ; done < <(find . | grep foo)
Be aware that most of the options discussed in other answers are not standard on platforms that do not use the GNU utilities (Solaris, AIX, HP-UX, for instance). See the POSIX specification for 'standard' xargs behaviour.
I also find the behaviour of xargs whereby it runs the command at least once, even with no input, to be a nuisance.
I wrote my own private version of xargs (xargl) to deal with the problem of spaces in names (only newlines separate arguments). The 'find ... -print0' and 'xargs -0' combination is pretty neat, though, given that file names cannot contain ASCII NUL '\0' characters. My xargl isn't as complete as it would need to be to be worth publishing, especially since GNU has facilities that are at least as good.
For me, I was trying to do something a little different. I wanted to copy my .txt files into my tmp folder. The .txt filenames contain spaces and apostrophe characters. This worked on my Mac.
$ find . -type f -name '*.txt' | sed 's/'"'"'/\'"'"'/g' | sed 's/.*/"&"/' | xargs -I{} cp -v {} ./tmp/
If the find and xargs versions on your system don't support the -print0 and -0 switches (for example, AIX find and xargs), you can use this terrible-looking code:
find . -name "*foo*" | sed -e "s/'/\\\'/g" -e 's/"/\\"/g' -e 's/ /\\ /g' | xargs sh -c 'cp -- "$@" /your/dest' sh
Here sed takes care of escaping the spaces and quotes for xargs, and the sh -c wrapper keeps the destination directory last, since xargs appends the file names at the end of the command.
Tested on AIX 5.3
I created a small portable wrapper script called "xargsL" around "xargs" which addresses most of the problems.
Unlike xargs, xargsL accepts one pathname per line. The pathnames may contain any character except (obviously) newline or NUL bytes.
No quoting is allowed or supported in the file list - your file names may contain all sorts of whitespace, backslashes, backticks, shell wildcard characters and the like - xargsL will process them as literal characters, no harm done.
As an added bonus feature, xargsL will not run the command at all if there is no input!
Note the difference:
$ true | xargs echo no data
no data
$ true | xargsL echo no data # No output
Any arguments given to xargsL will be passed through to xargs.
Here is the "xargsL" POSIX shell script:
#! /bin/sh
# Line-based version of "xargs" (one pathname per line which may contain any
# amount of whitespace except for newlines) with the added bonus feature that
# it will not execute the command if the input file is empty.
#
# Version 2018.76.3
#
# Copyright (c) 2018 Guenther Brunthaler. All rights reserved.
#
# This script is free software.
# Distribution is permitted under the terms of the GPLv3.
set -e
trap 'test $? = 0 || echo "$0 failed!" >& 2' 0
if IFS= read -r first
then
    {
        printf '%s\n' "$first"
        cat
    } | sed 's/./\\&/g' | xargs ${1+"$@"}
fi
Put the script into some directory in your $PATH and don't forget to make it executable:
$ chmod +x xargsL
bill_starr's Perl version won't work well for embedded newlines (it only copes with spaces). For those on e.g. Solaris where you don't have the GNU tools, a more complete version might be (using sed)...
find . -type f | sed 's/./\\&/g' | xargs grep string_to_find
Adjust the find and grep arguments or other commands as you require; the sed will escape your embedded spaces, tabs and quotes (embedded newlines will still act as separators, though).
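To see what that escaping does, with a hypothetical name containing a space:
$ printf 'a b\n' | sed 's/./\\&/g'
\a\ \b
Every character comes out backslash-escaped, so xargs reads the whole name back as a single literal argument.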
I used bill_starr's answer, slightly modified, on Solaris:
find . -mtime +2 | perl -pe 's{^}{\"};s{$}{\"}' > ~/output.file
This will put quotes around each line. I didn't use the '-l' option although it probably would help.
The file list I was going through might have '-', but not newlines. I haven't used the output file with any other commands, as I want to review what was found before I just start massively deleting them via xargs.
I played with this a little, started contemplating modifying xargs, and realised that for the kind of use case we're talking about here, a simple reimplementation in Python is a better idea.
For one thing, having ~80 lines of code for the whole thing means it is easy to figure out what is going on, and if different behaviour is required, you can just hack it into a new script in less time than it takes to get a reply on somewhere like Stack Overflow.
See https://github.com/johnallsup/jda-misc-scripts/blob/master/yargs and https://github.com/johnallsup/jda-misc-scripts/blob/master/zargs.py.
With yargs as written (and Python 3 installed) you can type:
find .|grep "FooBar"|yargs -l 203 cp --after ~/foo/bar
to do the copying 203 files at a time. (Here 203 is just a placeholder, of course, and using a strange number like 203 makes it clear that this number has no other significance.)
If you really want something faster and without the need for Python, take zargs and yargs as prototypes and rewrite in C++ or C.
You might need to grep the FooBar directory, like:
find . -name "file.ext" | grep "FooBar" | xargs -I{} cp -p "{}" .
If you are using Bash, you can read stdout into an array of lines with mapfile:
find . | grep "FooBar" | (mapfile -t; cp "${MAPFILE[@]}" ~/foobar)
The benefits are:
It's built-in, so it's faster.
It executes the command with all the file names at once, so it's faster.
You can append other arguments to the file names. For cp, you can also:
find . -name '*FooBar*' -exec cp -t ~/foobar -- {} +
however, some commands don't have such a feature.
The disadvantages:
It may not scale well if there are too many file names. (What's the limit? I don't know, but I tested with a 10 MB list file containing 10,000+ file names with no problem, under Debian.)
Well... who knows if Bash is available on OS X?
