Remove Files with name '\' [duplicate] - linux

This question already has an answer here:
Delete files with backslash in linux
(1 answer)
Closed 2 years ago.
By mistake I created a file with the name '\' and I do not know how to delete it.
How do I remove a file with this name?
-rw-r--r-- 1 root root 1555 Sep 15 12:54 '\'

You could follow two steps to do this.
1- Get the inode number of that specific file by running ls -litr Input_file_name.
2- Then use the following command to delete it by its inode number (replace 1235 with the actual inode number you got in the previous step):
find . -inum 1235 -exec rm {} \;
Working example: it's a dummy/test example, for understanding purposes only.
1- Run ls -lihtr to get the inode number:
total 16K
1227 -rw-r--r-- 1 singh singh 0 Sep 15 08:05 \\\\
2- Now place that in find command as follows to delete that specific file:
find . -inum 1227 -exec rm {} \;
NOTE: As per @JRFerguson's comment, inode numbers are only unique within a single filesystem, so to make sure the correct file is deleted, give either . or a complete path to the find command, and add the -xdev option so find does not cross into other filesystems.
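Putting both steps together, a minimal sketch (the inode number 1227 is a placeholder for whatever your own ls -li reports):
# look up the inode (first column of ls -li)
ls -li
# delete by that inode, staying on this filesystem (-xdev)
# and confirming before removal (rm -i)
find . -xdev -inum 1227 -exec rm -i {} \;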

$ rm \\
Works for me (with bash). Or, if you have an interactive file manager available (which might be mc if all you have is a terminal), just use a point-and-click method. It's the shell's escaping that is causing all the problems here.

If the quote characters are part of the filename itself, double-quote the name and escape the backslash with another backslash:
rm "'\\'"

You need to escape both ' and \ with \.
The following command
rm \'\\\'
should do the trick.
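For reference, if the name really is just a single backslash (the surrounding quotes in modern ls output are only display quoting), these spellings are all equivalent in bash:
rm \\      # unquoted: the shell collapses \\ into one backslash
rm '\'     # single quotes: everything inside is literal
rm "\\"    # double quotes: \\ still collapses into one backslash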

Related

How to rename a file name that contains a backslash in bash?

I got a tar file; after extracting it, there are many files with names like
a
b\c
d\e\f
g\h
I want to convert their names into files in subdirectories, like
a
b/c
d/e/f
g/h
The problem I face is that when a variable contains a backslash, it changes the original file name. I want to write a script to rename them.
Parameter expansion is the way to go. You have everything you need in bash; there is no need for external tools like find.
$ touch a\\b c\\d\\e
$ ls -l
total 0
-rw-r--r-- 1 ghoti staff 0 11 Jun 23:13 a\b
-rw-r--r-- 1 ghoti staff 0 11 Jun 23:13 c\d\e
$ for file in *\\*; do
> target="${file//\\//}"; mkdir -p "${target%/*}"; mv -v "$file" "$target"; done
a\b -> a/b
c\d\e -> c/d/e
The for loop breaks down as follows:
for file in *\\*; do - select all files whose names contain backslashes
target="${file//\\//}"; - swap backslashes for forward slashes
mkdir -p "${target%/*}"; - create the target directory by stripping the filename from $target
mv -v "$file" "$target"; - move the file to its new home
done - end the loop.
The only tricky bit here I think is the second line: ${file//\\//} is an expression of ${var//pattern/replacement}, where the pattern is an escaped backslash (\\) and the replacement is a single forward slash.
Have a look at man bash and search for "Parameter Expansion" to learn more about this.
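As a quick illustration of the two expansions used above (the variable name f is arbitrary):
$ f='a\b\c'
$ echo "${f//\\//}"    # replace every backslash with a slash
a/b/c
$ echo "${f%\\*}"      # strip the last backslash and everything after it
a\b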
Alternately, if you really want to use find, you can still take advantage of bash's Parameter Expansion:
find . -name '*\\*' -type f \
-exec bash -c 't="${0//\\//}"; mkdir -p "${t%/*}"; mv -v "$0" "$t"' {} \;
This uses find to identify each file and process it with an -exec option that does essentially the same thing as the for loop above. One significant difference is that find will traverse subdirectories (unless limited with the -maxdepth option), so ... be careful.
Renaming a file with backslashes is simple: mv 'a\b' 'newname' (just quote it), but you'll need more than that.
You need to:
find all files with a backslash (e.g. a\b\c)
split path from filename (e.g. a\b from c)
create a complete path (e.g. a/b, dir b under dir a)
move the old file under a new name, under a created path (e.g. rename a\b\c to file named c in dir a/b)
Something like this:
#!/bin/bash
find . -name '*\\*' | while IFS= read -r f; do
base="${f%\\*}"
file="${f##*\\}"
path="${base//\\//}"
mkdir -p "$path"
mv "$f" "$path/$file"
done
(Edit: correct handling of filenames with spaces. Note that IFS= read -r is needed so that surrounding whitespace survives and the backslashes we are looking for are not eaten as escape characters.)

How to redirect output of xargs when using sed

Since switching over to a better management system, I want to remove all the redundant logs at the top of each of our source files. In Notepad++ I was able to achieve this by using "replace in files" and replacing matches of \A(//.*\n)+ with nothing. On Linux, however, I am having no such luck and need to resort to xargs and sed.
The sed expression I'm using is:
sed '1,/^[^\/]/{/^[^\/]/b; d}'
Ugly, to be sure, but it does seem to work.
The problem I'm having is that when I try to run it through xargs in order to feed it all the source files in our system, I am unable to redirect the output to 'stripped' files, which I then intend to copy over the originals.
I want something in the line of:
find . -name "*.com" -type f -print0 | xargs -0 -I file sed '1,/^[^\/]/{/^[^\/]/b; d}' "file" > "file.stripped"
However, I'm having grief passing the ">" through to the receiving environment (the shell), as I'm already using too many quote marks. I have tried all manner of escaping and shell "wrappers", but I just can't get it to play ball.
Anyone care to point me in the right direction?
Thanks,
Slarti.
I set up a similar scenario with a simpler sed expression, just as an example; see if it works for you.
I created 3 files with the string "abcd" inside each:
# ls -l
total 12
-rw-r--r-- 1 root root 5 Oct 6 09:05 test.aaaaa.com
-rw-r--r-- 1 root root 5 Oct 6 09:05 test2.aaaaa.com
-rw-r--r-- 1 root root 5 Oct 6 09:05 test3.aaaaa.com
# cat test*
abcd
abcd
abcd
Running the find command as you showed, but with the -exec option instead of xargs, with your sed expression replaced by a silly one that simply changes every "a" to "b", and with the -i option, which writes directly to the input file:
# find . -name "*.com" -type f -print0 -exec sed -i 's/a/b/g' {} \;
./test2.aaaaa.com./test3.aaaaa.com./test.aaaaa.com
# cat test*
bbcd
bbcd
bbcd
In your case it should look like this:
# find . -name "*.com" -type f -print0 -exec sed -i '1,/^[^\/]/{/^[^\/]/b; d}' {} \;
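If you do want the separate .stripped files from your original attempt, one possible sketch is to let a shell spawned by xargs interpret the redirection, so that > is applied once per file rather than once for the whole pipeline:
find . -name "*.com" -type f -print0 |
xargs -0 -I{} bash -c 'sed "1,/^[^\/]/{/^[^\/]/b; d}" "$1" > "$1.stripped"' _ {}
Here the _ only fills $0 of the inner shell, and each filename arrives as $1, so it is never re-parsed by the outer shell no matter what characters it contains.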

Show first 5 lines of every file without the name

I need to show the first 5 lines of every file inside my home folder, but without showing the name of each file. I know it has something to do with the head -n 5 command, and I know I can list files using ls -al | grep ^-, but I don't know how to combine that knowledge to solve my problem. Any tips?
This uses find to find all regular files in the home dir (without recursing into subdirectories) and passes them on to head:
find ~ -maxdepth 1 -type f -exec head -q -n 5 '{}' '+'
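The -q flag is what hides the names: when head is given more than one file, it normally prints a ==> filename <== header before each one, and -q suppresses exactly that. A quick illustration (the file names are made up):
head -n 5 ~/a.txt ~/b.txt     # prints a ==> header before each file's lines
head -q -n 5 ~/a.txt ~/b.txt  # prints only the lines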

Removing files in a subdirectory based on modification date [duplicate]

This question already has answers here:
bash script to remove directories based on modified file date
(3 answers)
Closed 8 years ago.
Hi, so I'm trying to remove old backup files from a subdirectory when the number of files exceeds a maximum, and I found this command to do that:
ls -t | sed -e '1,10d' | xargs -d '\n' rm
And my changes are as follows:
ls -t subdirectory | sed -e '1,$f' | xargs -d '\n' rm
Obviously, when I try running the script, it gives me an error saying unknown commands: f.
My only concern right now is that I'm passing the maximum number of allowed files in as an argument, so I'm storing it in f, but I'm not sure how to use that variable in the command above instead of hard-coding the condition to a specific number.
Can anyone give me any pointers? And is there anything else I'm doing wrong?
Thanks!
The title of your question says "based on modification date", so why not simply use find with its -mtime option?
find subdirectory -mtime +5 -exec rm -v {} \;
This will delete all files older than 5 days.
The problem is that the file list you are passing to xargs does not contain the path information needed to delete the files. When called from the current directory, no path is needed, but if you call it with subdirectory, you would have to rm subdirectory/file from the current directory. Try it:
ls -t subdirectory # returns files with no path info
What you need to do is change to the subdirectory, run the removal pipeline, then change back. In one line it could be done with:
pushd subdirectory &>/dev/null; ls -t | sed -e "1,${f}d" | xargs -d '\n' rm; popd
(Note the sed fix as well: the variable must sit inside double quotes so the shell expands it, and it must be followed by sed's d command, i.e. "1,${f}d"; the missing d is what produced your unknown commands: f error.)
Other than doing it in a similar manner, you are probably better off writing a slightly longer, more flexible script that forms the list of files with the find command, to ensure the path information is retained.
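A sketch combining both fixes without changing directories (f holds the number of newest files to keep; this assumes the backup names contain no newlines):
f=10   # example value; in your script this comes from the argument
ls -t subdirectory | sed -e "1,${f}d" | while IFS= read -r name; do
rm -v "subdirectory/$name"
done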

Delete all .SVN folders in paths with embedded blanks

In this question, and a hundred other places, there are mostly identical Linux solutions for deleting all .svn directories. It works beautifully until the paths happen to include blanks. So, is there a technique to recursively remove .svn files in directories that contain blanks? Perhaps a way to tell find to wrap its answers in quotes?
You can tell find to use NUL as an output delimiter instead of newline with the -print0 action, and then tell xargs to use NUL as an input delimiter with the -0 argument. Example:
find . -name '.svn' -print0 | xargs -0 -I{} rm -r {}
The -I{} argument tells xargs to replace {} with the current item from standard input. Note that no extra quoting around {} is needed (or wanted): xargs passes each name to rm as a single argument without any shell interpretation, so literal quote characters would just become part of the argument. The -r is needed because .svn entries are directories.
find . -name '*.svn' | while read x; do rm -r "$x" ; done
Yes, wrapping in quotes seems to do the trick.
mkdir "x y"
mkdir x\ y/.svn
find . -name '.svn' | awk '{print "rm -rf \""$0"\""}' | bash
And finally:
ls -la x\ y
total 8
drwxrwxr-x 6 dylan dylan 4096 Dec 13 06:09 ..
drwxrwxr-x 2 dylan dylan 4096 Dec 13 06:11 .
find . -type d -name '.svn' -delete
Current versions of GNU find have gained the -delete action. Be aware, though, that -delete only removes a directory if it is empty; for non-empty .svn folders you would still need something like -exec rm -r {} + instead.
If what you need is a clean copy of your repository, have you considered using the svn export command?
You would then get a copy of all the directories present in your repository, including the ones with spaces in their names, but without any .svn folders.
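For example (the repository URL is hypothetical):
# export a clean tree straight from the repository
svn export http://svn.example.com/repo/trunk clean-copy
# or export from an existing working copy
svn export ./working-copy clean-copy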
