How to replace a text string with dollar sign $ in Linux?

I am trying to replace the text '../../Something' with '$Something' in all .txt files in the current directory. Where am I going wrong?
find . -name "*.txt" | xargs sed -i "s/..\/..\/Something/\'\$Something'/g"
Error - Variable name must contain alphanumeric character
I also tried the following, but it doesn't work:
find . -name "*.txt" | xargs sed -i "s/..\/..\/Something/\\$Something/g"
Any suggestions for correct command?

Your shell is treating the $ as the start of a variable.
There are two ways you can make it work:
Use single quotes, which tells the shell to not perform any variable interpolation (among other things):
find . -name "*.txt" | xargs sed -i 's/..\/..\/Something/\$Something/g'
Escape the $ from the shell and sed. This requires 3 backslashes (the first one escapes the second backslash, the second escapes the dollar sign once the string reaches sed, and the third escapes the dollar sign in the shell so it doesn't get treated as a variable):
find . -name "*.txt" | xargs sed -i s/..\\/..\\/Something/\\\$Something/g

You can try removing the extra backslash from the second command:
find . -name "*.txt" | xargs sed -i "s/..\/..\/Something/\$Something/g"

Tested the following and it worked for me. Let me know if this works
find . -name "*.txt" | xargs sed -i "s,../../Something,$\Something,g"

Related

Can't find a file by pattern [duplicate]

I am having a hard time getting find to look for matches in the current directory as well as its subdirectories.
When I run find *test.c it only gives me the matches in the current directory. (does not look in subdirectories)
If I try find . -name *test.c I would expect the same results, but instead it gives me only matches that are in a subdirectory. When there are files that should match in the working directory, it gives me: find: paths must precede expression: mytest.c
What does this error mean, and how can I get the matches from both the current directory and its subdirectories?
Try putting it in quotes -- you're running into the shell's wildcard expansion, so what you're actually passing to find will look like:
find . -name bobtest.c cattest.c snowtest.c
...causing the syntax error. So try this instead:
find . -name '*test.c'
Note the single quotes around your file expression -- these will stop the shell (bash) expanding your wildcards.
What's happening is that the shell is expanding "*test.c" into a list of files. Try escaping the asterisk as:
find . -name \*test.c
From find manual:
NON-BUGS
Operator precedence surprises
The command find . -name afile -o -name bfile -print will never print
afile because this is actually equivalent to find . -name afile -o \(
-name bfile -a -print \). Remember that the precedence of -a is
higher than that of -o and when there is no operator specified
between tests, -a is assumed.
“paths must precede expression” error message
$ find . -name *.c -print
find: paths must precede expression
Usage: find [-H] [-L] [-P] [-Olevel] [-D ... [path...] [expression]
This happens because *.c has been expanded by the shell resulting in
find actually receiving a command line like this:
find . -name frcode.c locate.c word_io.c -print
That command is of course not going to work. Instead of doing things
this way, you should enclose the pattern in quotes or escape the
wildcard:
$ find . -name '*.c' -print
$ find . -name \*.c -print
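To make the manual's operator-precedence point concrete, explicit grouping prints both names (a small sketch with made-up file names):
find . \( -name afile -o -name bfile \) -print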
Try putting it in quotes:
find . -name '*test.c'
I see this question is already answered. I just want to share what worked for me: I was missing a space between ( and -name. So the correct way of choosing files while excluding some of them would be like below:
find . -name 'my-file-*' -type f -not \( -name 'my-file-1.2.0.jar' -or -name 'my-file.jar' \)
I came across this question when I was trying to find multiple filenames that I could not combine into a regular expression as described in Chris J's answer above. Here is what worked for me:
find . -name one.pdf -o -name two.txt -o -name anotherone.jpg
-o or -or is logical OR. See Finding Files on Gnu.org for more information.
I was running this on Cygwin.
You can try this:
cat $(file $( find . -readable) | grep ASCII | tr ":" " " | awk '{print $1}')
With that, you can find all readable ASCII files and read them with cat.
If you also want to specify the size and exclude executables:
cat $(file $( find . -readable ! -executable -size 1033c) | grep ASCII | tr ":" " " | awk '{print $1}')
In my case I was missing a trailing / in the path.
find /var/opt/gitlab/backups/ -name '*.tar'

Linux: How to replace part of filename with specified character

I want to replace part of filename with specified character.
For example:
$ ls
SubNetwork=RNCRAM955E,MeContext=RNCRAM955E_statsfile.xml
I want to replace RNCRAM955E with RNCMST954E
Here comes my expected output.
$ ls
SubNetwork=RNCMST954E,MeContext=RNCMST954E_statsfile.xml
and below is my code:
$ find ./ -name '*.xml' | xargs -i echo mv {} {} | sed 's/RNCRAM955E/RNCMST954E/3g' | sh
mv ./SubNetwork=RNCRAM955E,MeContext=RNCRAM955E_statsfile.xml ./SubNetwork=RNCMST954E,MeContext=RNCMST954E_statsfile.xml
mv ./SubNetwork=RNCRAM955E,MeContext=RNCRAM955E_statsfile.xml ./SubNetwork=RNCMST954E,MeContext=RNCMST954E_statsfile.xml
mv ./SubNetwork=RNCRAM955E,MeContext=RNCRAM955E_statsfile.xml ./SubNetwork=RNCMST954E,MeContext=RNCMST954E_statsfile.xml
mv ./SubNetwork=RNCRAM955E,MeContext=RNCRAM955E_statsfile.xml ./SubNetwork=RNCMST954E,MeContext=RNCMST954E_statsfile.xml
I can't understand what the 3g in the sed command exactly means.
In my opinion:
Does sed s/xx/xx/3g mean replace matches from the 3rd one to the end, and sed s/xx/xx/3 mean replace only the 3rd match?
BTW, what exactly does | sh mean? I think it makes the shell execute the commands produced by echo, right?
3 means to substitute the 3rd match, and g means to substitute all matches. When you use them together it means to substitute all matches starting at the 3rd one.
Piping to sh means to execute the output of sed as shell commands.
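A tiny demonstration on a throwaway string makes the difference visible (GNU sed; POSIX leaves combining a number with g unspecified):
$ echo 'a a a a a' | sed 's/a/X/3'
a a X a a
$ echo 'a a a a a' | sed 's/a/X/3g'
a a X X X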
Most Linux distributions have a rename command that makes this easier:
find . -name '*.xml' -exec rename 's/RNCRAM955E/RNCMST954E/' {} +
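If the rename you have is the Perl-based one (common on Debian/Ubuntu; some distributions ship a different util-linux rename with other options), a dry run with -n is a safe way to preview the result first:
find . -name '*.xml' -exec rename -n 's/RNCRAM955E/RNCMST954E/' {} +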

I'm trying to grep folders and store the result in a variable for further use

GET_DIR=$ (find ${FIND_ROOT} -type -d 2>/dev/null | grep -Eiv ${EX_PATTERN| grep -Eio ${FIND_PATTERN})
but somehow when I try to print the result, it's empty.
When I run the same grep outside the script, I do get results on the command line.
You could avoid the pipe and grep by using -name or -iname (case insensitive) within find, for example:
find /tmp -type d -iname "*foo*"
This will find directories (-type d) that match the pattern *foo*, ignoring case (-iname), in /tmp.
To save the output in a variable you could use:
FOO=$(find /tmp -type d -iname "*foo*")
From the find man:
-name pattern
True if the last component of the pathname being examined matches pattern. Special shell pattern matching
characters (``['', ``]'', ``*'', and ``?'') may be used as part of pattern. These characters may be matched
explicitly by escaping them with a backslash (``\'').
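If several directories may match and you want them as separate items rather than one whitespace-joined string, a bash array is safer. A sketch assuming bash 4.4+ and find with -print0 (FOO_DIRS is just an illustrative name):
mapfile -t -d '' FOO_DIRS < <(find /tmp -type d -iname "*foo*" -print0)
printf '%s\n' "${FOO_DIRS[@]}"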
Consider using xargs:
GET_DIR=$(find "${FIND_ROOT}" -type d 2>/dev/null | xargs grep -Eiv "${EX_PATTERN}" | grep -Eio "${FIND_PATTERN}")

How to print the output of a find command with a tab at the beginning of each line

I have tried the below command to print the output of a find command with a tab at the beginning.
echo -e "\t"; find /usr/live/class/$client_abbr -name "$line.cls" -exec grep '^#include' {} \;
If the output contains n lines, only the first line is printed with a tab, and it is not applied to the rest of the lines. Please let me know how I could modify the above command to have a tab at the front of all lines.
You will likely find piping to xargs more efficient than using -exec. The extra quotes, -type f, and -print0 are respectively for safety, for specifying that you need a file (not a directory), and for handling file names with embedded white space. With the grep output piped to sed (attribution to Fischer's comment), you get what you need:
find "/usr/live/class/$client_abbr" -type f -name "$line.cls" -print0 |
xargs -0 grep '^#include' |
sed 's/^/\t/'
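One portability note, offered tentatively: GNU sed understands \t in the replacement, but some other implementations (e.g. BSD sed) do not, in which case passing a literal tab from the shell works:
find "/usr/live/class/$client_abbr" -type f -name "$line.cls" -print0 |
xargs -0 grep '^#include' |
sed "s/^/$(printf '\t')/"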

using find command in unix to search for a newline

I would like to search all .java files which have the newline escape sequence \n (backslash followed by 'n') in the files.
I am using this command:
find . –name "*.java" –print | xargs grep “\n”
but the result shows all lines in .java files having the letter n.
I want to search for a newline \n.
Can you please suggest a solution?
Example:
x.java
method abc{
String msg="\n Action not allowed.";}
y.java
method getMsg(){
String errMsg = "\n get is not allowed.";}
I want to search all *.java files having these types of strings defined with the newline escape sequence.
It looks like you want to find lines containing the 2-character sequence \n. To do this, use grep -F, which treats the pattern as a fixed string rather than as a regular expression or escape sequence.
find . –name "*.java" –print | xargs grep -F "\n"
This grep -P command will match a newline character, using '$'.
Since each line in my file ends with a newline, it will match every line.
grep -P '$' 1.c
I don't know why you would want to match a newline character in files. That is strange.
I believe you're looking for this:
find . –name "*.java" –exec grep -H '"[^"]*\n' {} \;
The -H flag is to show the name of the file when there was a pattern match. If that doesn't work for you:
find . –name "*.java" –print0 | xargs -0 grep '"[^"]*\n'
If xargs -0 doesn't work for you:
find . –name "*.java" –print | xargs grep '"[^"]*\n'
If grep doesn't work for you:
find . –name "*.java" –print | xargs egrep '"[^"]*\n'
I needed this last version on Solaris; on modern systems the first one should work.
Finally, not sure if the pattern covers all your corner cases.
