Linux: back up all files with a given extension, with timestamps

I want to back up all files with a given extension in a directory, but I want the copies to carry timestamps.
Given a directory:
Sample/, with multiple subdirectories and a subfolder named BACKUPS.
cd Sample
find . -name '*.xml' -exec cp {} BACKUPS \;
Say I have multiple XML files in this Sample folder and I want them copied to the BACKUPS folder, but timestamped, say:
text.xml.20171107
conf.xml.20171107
I am able to back up the files, but I could not figure out how to append a timestamp to them using the find command.

You could try this:
find . -name '*.xml' -execdir cp {} "$PWD/BACKUPS/{}.$(date +%Y%m%d)" \;
As before, we use find . -name '*.xml' to locate all the files. However, to keep subdirectory names out of the backup filenames, we use -execdir instead of -exec. This runs the specified command from inside the subdirectory each file is in and replaces {} with the file's base name (GNU find actually substitutes ./basename, which resolves to the same path).
This means we have to modify cp's second argument (the target filename). We now prefix it with "$PWD/BACKUPS/" to form an absolute path ($PWD is the current working directory). This way cp always targets the right directory, even when invoked from a subdirectory of Sample.
Finally, the filename we use is constructed from {}.$(date +%Y%m%d). $( ) runs the specified command and substitutes its output (the current date, in this case). This is done by the shell before find is invoked, so find just sees .../{}.20171107. The {} part is replaced by find itself just before it runs each cp.
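Note that embedding {} inside a longer argument, as above, is a GNU find extension; POSIX only guarantees the substitution when {} appears as an argument on its own. A more portable sketch (assuming a POSIX sh is available) builds the target name inside a small inline script instead:
find . -name '*.xml' -exec sh -c 'cp -- "$1" "$2/BACKUPS/${1##*/}.$(date +%Y%m%d)"' sh {} "$PWD" \;
Here ${1##*/} strips the directory part of the path, and date runs once per file inside the inline script rather than once up front, so a run that straddles midnight could stamp two different dates.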

Related

Linux find command explanation

Can someone explain to me what this command does, and if I want to do the same thing for git, how should I modify it?
find . -name CVS -print -exec rm -fr {} \;
This command looks in your current working directory for any directories or files named CVS and prints each full path, then executes a forced recursive removal (rm -fr) on each result returned by find.
Since there is no -type test, the command will remove anything named CVS within your current working directory; for a directory, that means the directory itself plus all subdirectories and files housed within it.
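For the git half of the question, a near-identical sketch would target .git directories instead; this is an assumption about what "the same thing using git" means, and it permanently deletes all repository history, so run it with -print alone first. Adding -prune also stops find from descending into a directory it is about to delete, which avoids spurious "No such file or directory" warnings:
find . -type d -name .git -prune -print -exec rm -fr {} \;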

Extract .7z's recursively by name - P7zip

I have multiple directories with .7z archives inside each, but also other directories under the same starting directory which I do not want to extract. The directories I do want to extract from all share a common name: for example, each has the word "extract" in it. How can I recursively extract each .7z inside a folder whose name contains "extract", to a chosen output folder?
The command you should use is:
find . -name "*extract*.7z" -print -exec 7z x {} \;
I would run the command without -exec first, just to make sure these are the files you actually want to extract:
find . -name "*extract*.7z" -print
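If "extract" is in the directory names rather than the archive names, matching the whole path may be closer to what you described, and 7z's -o switch (written with no space before the directory) sends the output to a chosen folder. A sketch, with /tmp/out as a placeholder destination:
find . -path '*extract*' -name '*.7z' -exec 7z x {} -o/tmp/out \;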

How do I script descending into the deepest directories from a parent directory to execute a command?

I am trying to automate converting .vmx to .ova with ovftool; the .vmx files are generated by ghettoVCB, so I am writing a script to automate the conversion.
My question is: how do I write a shell script that goes through each directory under a parent_directory and executes a command in each one? Alternatively, it could move everything from the deepest folders up to parent_directory (though moving the files that way may be time-consuming).
The directory structure is as follows:
parent_directory/automation/automation-2016-04-18_19-08-32/automation.vmx
parent_directory/bulls/bulls-2016-04-18_18-28-57/bulls.vmx
Here is another view of the same structure:
parent_directory
    automation
        automation-2016-04-18_19-08-32
            automation.vmx
    bulls
        bulls-2016-04-18_18-28-57
            bulls.vmx
The names of the subfolders under parent_directory do not follow any pattern; they could be anything. A folder such as "automation-2016-04-18_19-08-32" is the subfolder name plus a date and time.
Approach 1
move everything from the deepest folder to parent_directory
This command will search subdirectories of the current directory, find all .vmx files, and move them to the current directory:
find . -type f -name '*.vmx' -exec mv {} . \;
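One caveat: if two subdirectories contain .vmx files with the same name, plain mv will silently overwrite one with the other. With GNU or BSD mv, the -n flag refuses to clobber an existing file:
find . -type f -name '*.vmx' -exec mv -n {} . \;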
Approach 2
write a shell script that goes through each directory from a parent_directory and executes a command in each directory
The following command finds every subdirectory of the current directory and runs command from the directory containing each match (note that for a matched directory, -execdir runs the command from its parent, not from inside the match itself):
find . -type d -execdir command \;
If you want to run command in every directory that contains .vmx files and supply the names of those .vmx files as arguments to command, then run:
find . -type f -name '*.vmx' -execdir command {} +
Alternatively, suppose we want to run the command once for each .vmx file that we find:
find . -type f -name '*.vmx' -execdir command {} \;
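Applied to the original goal, here is a sketch that converts each .vmx where it lies, assuming ovftool takes a source and a target path in that order (ovftool source.vmx target.ova):
find . -type f -name '*.vmx' -exec sh -c 'ovftool "$1" "${1%.vmx}.ova"' sh {} \;
The ${1%.vmx}.ova expansion simply swaps the extension, so each .ova lands next to its .vmx.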

Linux command to create the empty file called 'test1'

Enter a Linux command to create the empty file called 'test1' in the directory 'systems' (you are still in your home directory).
Assuming 'systems' is a subdirectory of the current directory:
touch systems/test1
Assuming you only know that the directory 'systems' is some subdirectory within the current directory tree, then:
find . -name systems -type d -exec touch "{}/test1" \;
will create such a file. Alternatively, so will:
find . -name systems -type d -execdir touch systems/test1 \;
However, both will do so in every subdirectory named 'systems' in the current directory tree. We could limit that action to only the first match, the last, or some other criterion, but the list of possible permutations is just too long.
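For instance, with GNU find (this is a GNU extension), the -quit action stops the search after the first match, so only one test1 is created:
find . -name systems -type d -exec touch "{}/test1" \; -quit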
You really have not provided enough information for us to provide a complete answer.

Bash script to recursively step through folders and delete files

Can anyone give me a bash script or one-line command I can run on Linux to recursively go through each folder from the current folder and delete all files or directories starting with '._'?
Change directory to the root directory you want (or change . to the directory) and execute:
find . -name "._*" -print0 | xargs -0 rm -rf
xargs lets you pass many file names to a single command invocation, so this will be faster than the find -exec ... \; syntax. Also, you can first run the search on its own, without the xargs part, to view the files it would delete and make sure it is safe:
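find . -name "._*" -print
With GNU xargs you can also add -r (--no-run-if-empty) so that rm is not invoked at all when nothing matches.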
find . -name '._*' -exec rm -Rf {} \;
I've had a similar problem a while ago (I assume you are trying to clean up a drive that was connected to a Mac which saves a lot of these files), so I wrote a simple python script which deletes these and other useless files; maybe it will be useful to you:
http://github.com/houbysoft/short/blob/master/tidy
find /path -name "._*" -exec rm -fr {} +
The + terminator passes many file names to a single rm invocation, much like the xargs approach above.
Instead of deleting the AppleDouble files, you could merge them with the corresponding files. You can use dot_clean.
dot_clean -- Merge ._* files with corresponding native files.
For each dir, dot_clean recursively merges all ._* files with their corresponding native files according to the rules specified with the given arguments. By default, if there is an attribute on the native file that is also present in the ._ file, the most recent attribute will be used.
If no operands are given, a usage message is output. If more than one directory is given, directories are merged in the order in which they are specified.
Because dot_clean works recursively by default, use:
dot_clean <directory>
If you want to turn off the recursive merge, use -f for a flat merge.
dot_clean -f <directory>
find . -name '._*' -delete
A bit shorter, and it performs better with an extremely long list of files.
