In Linux I'm trying to find a file in a directory, back it up under another name, and then replace it with another one.
I tried the first two actions with these commands
find foldername -name filename.html; -exec sed -i .bak;
but it says
bash: -exec: command not found
Try this:
find foldername -name filename.html -exec cp -vp {}{,.bak} \; -exec truncate -s 0 {} \;
This uses find's -exec option, which is what it looks like you were trying to use. The shell's brace expansion turns {}{,.bak} into {} {}.bak before find runs, so cp copies each found file (substituted for {}) to a copy with .bak appended, and the -p option preserves what it can:
preserve the specified attributes (default: mode,ownership,timestamps), if possible additional attributes: context, links, xattr, all
This leaves the original file in place as well; the second -exec then truncates it to zero bytes so it is ready to receive the new content.
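If you also want to drop a replacement into place (the third step in the question), a second -exec cp can overwrite the found file directly, which makes the truncate step unnecessary. This is only a sketch: /path/to/replacement.html is a placeholder, and it assumes GNU find, which substitutes {} even inside a larger argument such as {}.bak:
find foldername -name filename.html -exec cp -vp {} {}.bak \; -exec cp /path/to/replacement.html {} \;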
You can do the following:
find . -name 'FILE_PATTERN_HERE' | xargs -I file_name cp file_name file_name.bkp
You can pipe the output of the find command to cp using xargs. Here file_name acts as a placeholder for each path that find outputs.
Example
find . -name 'logback.xml*'
Output:
./logback.xml
./apache-cassandra-3.11.1/conf/logback.xml
After running the command
find . -name 'logback.xml*' | xargs -I file_name cp file_name file_name.bkp
the same find now also lists the backups:
find . -name 'logback.xml*'
Output:
./logback.xml
./apache-cassandra-3.11.1/conf/logback.xml
./apache-cassandra-3.11.1/conf/logback.xml.bkp
./logback.xml.bkp
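If any of these paths contained spaces or other odd characters, a null-delimited variant of the same idea would be safer (a sketch, assuming GNU find and xargs):
find . -name 'logback.xml*' -print0 | xargs -0 -I file_name cp file_name file_name.bkp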
find -name archive.zip -exec unzip {} file.txt \;
This command finds all files named archive.zip and unzips file.txt into the folder I execute the command from. Is there a way to unzip the file into the same folder where the .zip file was found? For example, given the layout below, I would like file.txt to be unzipped into folder1.
folder1/archive.zip
folder2/archive.zip
I realize dirname is available in a script, but I'm looking for a one-line command if possible.
@iheartcpp - I successfully ran three alternatives using the same base command...
find . -iname "*.zip"
... which provides the list of .zip files to be passed as arguments to the next command.
Alternative 1: find with -exec + Shell Script (unzips.sh)
File unzips.sh:
#!/bin/sh
# Unzip each zip file passed as an argument into the directory the zip file is in
for f in "$@"; do
    unzip -o -d "$(dirname "$f")" "$f"
done
Use this alternative like this:
find . -iname '*.zip' -exec ./unzips.sh {} \;
Alternative 2: find with | xargs + Shell Script (unzips.sh)
Same unzips.sh file.
Use this alternative like this:
find . -iname '*.zip' | xargs ./unzips.sh
Alternative 3: all commands in the same line (no .sh files)
Use this alternative like this:
find . -iname '*.zip' | xargs sh -c 'for f in "$@"; do unzip -o -d "$(dirname "$f")" "$f"; done' sh
Of course, there are other alternatives but hope that the above ones can help.
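One more option worth mentioning for the original one-liner: find's -execdir runs the command from the directory each match was found in, so dirname is not needed at all (a sketch, assuming your find supports -execdir):
find . -iname '*.zip' -execdir unzip -o {} file.txt \;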
I want to recursively delete all binary files in a folder under Linux, using the command line or a bash script. I found
grep -r -m 1 "^" path/to/folder | grep "^Binary file"
to list all binary files in path/to/folder (from How to list all binary file extensions within a directory tree?). I would now like to delete all these files.
I could do
grep -r -m 1 "^" path/to/folder | grep "^Binary file" | xargs rm
but that is rather fishy, as it also tries to delete the (non-existent) files 'Binary', 'file', and 'matches', as in
rm: cannot remove ‘Binary’: No such file or directory
rm: cannot remove ‘file’: No such file or directory
rm: cannot remove ‘matches’: No such file or directory
The question is thus: how do I delete those files correctly?
This command will list all binary executable files recursively within a directory; run it first to verify that the output is what you expect.
find . -type f -executable -exec sh -c "file -i '{}' | grep -q 'x-executable; charset=binary'" \; -print
If that works, you can pass the output to xargs to delete these files.
find . -type f -executable -exec sh -c "file -i '{}' | grep -q 'x-executable; charset=binary'" \; -print | xargs rm -f
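If the file names may contain spaces, a null-delimited variant of the same pipeline keeps xargs from splitting them (a sketch, assuming GNU find and xargs):
find . -type f -executable -exec sh -c "file -i '{}' | grep -q 'x-executable; charset=binary'" \; -print0 | xargs -0 rm -f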
Hope this helped, have an awesome day! :)
I coded a tool, called blobs, that lists runnable binaries.
Its readme mentions how to pipe to any other command.
This should do the job if you are deleting a lot of binary files in a folder.
find . -type f -executable | xargs rm
I want to create a loop in a Linux script that will go through the folders in one directory, copy the photos into one folder, and overwrite photos that have the same name. Can anyone point me in the right direction?
find /path/to/source -type f -exec cp -f {} /path/to/destination \;
Would that work? Keep in mind, that will overwrite files without asking.
If you want it to confirm with you before overwriting, use the -i flag (for interactive mode) in the cp command.
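For example, the interactive variant might look like this (a sketch; cp will then prompt before overwriting each existing file in the destination):
find /path/to/source -type f -exec cp -i {} /path/to/destination \;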
find /path/to/source -type f | xargs -I {} file {} | grep <JPEG or image type> | cut -d ":" -f1 | xargs -I {} cp -rf {} /path/to/destination
With this you can fine-tune your copy by selecting only the image type.
Actually, you don't need to loop through folders to find photos in a script; the find command will do that job for you.
Try using find with xargs and cp:
find source_dir -type f -iname '*.jpg' -print0 | xargs -0 -I {} cp -f {} dest_dir
Replace *.jpg with the format of your photo files (e.g. *.png, etc.).
Note the use of cp's -f option, since you want to overwrite photos that have the same name.
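If you need to pick up several image formats in one pass, find's -o operator can combine patterns (a sketch; the extensions are just examples):
find source_dir -type f \( -iname '*.jpg' -o -iname '*.png' \) -print0 | xargs -0 -I {} cp -f {} dest_dir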
I want to create a clone of the structure of our multi-terabyte file server. I know that cp --parents can copy a file together with its parent structure, but is there any way to copy the directory structure intact?
I want to copy to a Linux system, and our file server is CIFS-mounted there.
You could do something like:
find . -type d > dirs.txt
to create the list of directories, then
xargs mkdir -p < dirs.txt
to create the directories on the destination.
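If the directory names may contain spaces, a null-delimited version of the same two steps should be safer (a sketch, assuming GNU find and xargs):
find . -type d -print0 > dirs.txt
xargs -0 mkdir -p < dirs.txt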
cd /path/to/directories &&
find . -type d -exec mkdir -p -- /path/to/backup/{} \;
Here is a simple solution using rsync:
rsync -av -f"+ */" -f"- *" "$source" "$target"
one line
no problems with spaces
preserve permissions
I found this solution there
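A concrete invocation might look like the sketch below; --include='*/' --exclude='*' is the long form of the same filter rules, and the trailing slashes make rsync copy the contents of the source into the target (the paths are placeholders):
rsync -av --include='*/' --exclude='*' /path/to/source/ /path/to/target/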
1 line solution:
find . -type d -exec mkdir -p /path/to/copy/directory/tree/{} \;
I dunno if you are looking for a solution on Linux. If so, you can try this:
$ mkdir destdir
$ cd sourcedir
$ find . -type d | cpio -pdvm destdir
This copies the directories and file attributes, but not the file data:
cp -R --attributes-only SOURCE DEST
Then you can delete the (empty) files if you are not interested in them:
find DEST -type f -exec rm {} \;
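A slightly safer variant is to restrict the cleanup to zero-length files, since --attributes-only only creates empty placeholders (a sketch, assuming GNU find's -empty and -delete):
find DEST -type f -empty -delete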
This works:
find ./<SOURCE_DIR>/ -type d | sed 's/\.\/<SOURCE_DIR>//g' | xargs -I {} mkdir -p <DEST_DIR>"/{}"
Just replace SOURCE_DIR and DEST_DIR.
The following solution worked well for me in various environments:
sourceDir="some/directory"
targetDir="any/other/directory"
find "$sourceDir" -type d | sed -e "s?$sourceDir?$targetDir?" | xargs mkdir -p
This even solves the problem with whitespace:
In the original/source dir:
find . -type d -exec echo "'{}'" \; > dirs2.txt
then recreate the tree in the newly created dir:
xargs mkdir -p < ../<SOURCEDIR>/dirs2.txt
Substitute target_dir and source_dir with the appropriate values:
cd target_dir && (cd source_dir; find . -type d ! -name .) | xargs -i mkdir -p "{}"
Tested on OSX+Ubuntu.
If you can get access from a Windows machine, you can use xcopy with /T and /E to copy just the folder structure (/E includes empty folders).
http://ss64.com/nt/xcopy.html
[EDIT!]
This one uses rsync to recreate the directory structure but without the files.
http://psung.blogspot.com/2008/05/copying-directory-trees-with-rsync.html
Might actually be better :)
A Python script from Sergiy Kolodyazhnyy, posted on Copy only folders not files?:
#!/usr/bin/env python
import os,sys
dirs=[ r for r,s,f in os.walk(".") if r != "."]
for i in dirs:
    os.makedirs(os.path.join(sys.argv[1],i))
or from the shell:
python -c 'import os,sys;dirs=[ r for r,s,f in os.walk(".") if r != "."];[os.makedirs(os.path.join(sys.argv[1],i)) for i in dirs]' ~/new_destination
FYI:
Copy top level folder structure without copying files in linux
How do I copy a directory tree but not the files in Linux?
Another approach is to use tree, which is pretty handy for navigating directory trees thanks to its many options. There are options for directories only, excluding empty directories, excluding names that match a pattern, including only names that match a pattern, etc. Check out man tree.
Advantage: you can edit or review the list, which is handy if you do a lot of scripting and create batches of empty directories frequently.
Approach: create a list of directories using tree, then use that list as argument input to mkdir.
tree -dfi --noreport > some_dir_file.txt
-dfi lists only directories, prints the full path for each name, and makes tree omit the indentation lines.
--noreport omits the file and directory report at the end of the tree listing, just so the output file doesn't contain any fluff.
Then go to the destination where you want the empty directories and execute
xargs mkdir < some_dir_file.txt
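If some directories should be left out of the clone, tree's -I option can filter them before the list is written (a sketch; the pattern is just an example):
tree -dfi --noreport -I 'node_modules' > some_dir_file.txt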
Copy the directories only, with their associated permissions and ownership:
find source/ -type f | rsync -a --exclude-from - source/ target/
Simple way:
for i in `find . -type d`; do mkdir /home/exemplo/$i; done
cd oldlocation
find . -type d -print0 | xargs -0 -I{} mkdir -p newlocation/{}
You can also create top directories only:
cd oldlocation
find . -maxdepth 1 -type d -print0 | xargs -0 -I{} mkdir -p newlocation/{}
Here is a solution in php that:
copies the directories (not recursively, only one level)
preserves permissions
unlike the rsync solution, it is fast even for directories containing thousands of files, as it does not descend into them
has no problems with spaces
should be easy to read and adjust
Create a file like syncDirs.php with this content:
<?php
foreach (new DirectoryIterator($argv[1]) as $f) {
    if ($f->isDot() || !$f->isDir()) continue;
    mkdir($argv[2].'/'.$f->getFilename(), $f->getPerms());
    chown($argv[2].'/'.$f->getFilename(), $f->getOwner());
    chgrp($argv[2].'/'.$f->getFilename(), $f->getGroup());
}
Run it as a user that has enough rights:
sudo php syncDirs.php /var/source /var/destination