Retrieving the most recently modified sub-directory in a Linux shell script - linux

How can I retrieve the most recently modified sub-directory within a directory?
I am using a shell script on a Linux distribution (Ubuntu).

Sounds like you want the ls options
-t sort by modification time, newest first
And to show only directories, use something like this answer suggests (Listing only directories using ls in bash: An examination):
ls -d */
And if you want each directory listed on its own line (assuming your file and directory names contain no newlines or other unusual characters), I'd add -1. All together, this should list the directories in the current directory with the most recently modified at the top:
ls -1td */
And only the single newest directory:
ls -1td */ | head -n 1
Or, if you want to compare against a specific time, you can use find and its tests such as -cmin, -cnewer, -ctime, -mmin, and -mtime. find can also handle names containing newlines, spaces, and other unusual characters via its null-terminated output options such as -print0.
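Putting those together, here is a minimal sketch that prints the single most recently modified subdirectory while coping with odd names. It assumes GNU find and reasonably recent GNU coreutils (sort, head, and cut with -z support, coreutils 8.25+):
find . -mindepth 1 -maxdepth 1 -type d -printf '%T@\t%p\0' | sort -z -rn | head -z -n 1 | cut -z -f 2-
Each record is "epoch-mtime<TAB>path" terminated by a null byte; sort -z -rn puts the newest first, head -z -n 1 keeps only that one, and cut -z -f 2- strips the timestamp.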

How much the subdirectory has been modified is irrelevant. Do you know the name of the subdirectory? You can get its contents like this:
# Loop over the entries with a glob; safer than parsing the output of ls
for file in subdir-name/*; do
    echo "I see there is a file named ${file##*/}"   # ##*/ strips the directory prefix
done

Related

Shell script to find files in a specific folder based on the "age" of the files

Hello, I got homework to make a shell script in Linux that finds files in a specific folder based on the "age" of those files, and then moves those files to another specific folder.
Thank you in advance.
One way is to use the find command and specify the "age" with -mtime (or -newer if the age is relative to other files). See man find for more details.
To move the files you can use mv (again, see man mv).
Directories can be passed as arguments or stored in variables and then referenced in the commands.
Without knowing anything else about your assignment I'd say use something like this:
find <directory> -mtime <n> | xargs mv -t <destination>
where xargs is used to pass the results from find to the mv command.
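For example, a sketch with hypothetical paths (~/incoming and ~/archive), using -print0/-0 so names with spaces survive, and GNU mv's -t option to name the destination up front:
find ~/incoming -type f -mtime +30 -print0 | xargs -0 mv -t ~/archive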

Script to zip complete file structure depending on file age

Alright, so I have a web server running CentOS at work that is hosting a few websites internally only. It's our development server and thus has lots [read: tons] of old junk websites and whatnot.
I was trying to put together a command that would find files that haven't been modified for over 6 months, group them all in a tarball, and then delete them. Thus far I have tried many different kinds of find commands with various arguments. Our structure looks like this:
/var/www/joomla/username/fileshere/temp
/var/www/username/fileshere
So I tried something along the lines of:
find /var/www -mtime -900 ! -mtime -180 | xargs tar -cf test4.tar
Only to end up with a 10 MB tar, when the expected result would be over 50 GB.
I tried using gzip instead, but I ended up zipping MY WHOLE SERVER, thus making it unusable; I had to transfer the whole filesystem and reinstall a completely new server, with lots of trouble and... you get the idea. So I want to find the right command that won't blow up our server but will find all FILES and DIRECTORIES that haven't been modified for over 6 months.
Be careful with ctime.
ctime is related to changes made to inodes (changing permissions, owner, etc.)
atime is when a file was last accessed (check whether your file system is mounted with the noatime or relatime options; in that case atime may not behave the way you expect)
mtime is when the data in a file was last modified.
Depending on what you are trying to do, mtime is probably your best option.
Besides, you should check the -print0 option. From man find:
-print0
True; print the full file name on the standard output, followed by a null character (instead of the newline character that -print uses). This allows file names that contain newlines or other types of white space to be correctly interpreted by programs that process the find output. This option corresponds to the -0 option of xargs.
I do not know exactly what you are trying to do, but this command could be useful for you:
find /var/www -mtime +180 -print0 | xargs -0 tar -czf example.tar.gz
Try this:
find /var/www -ctime +180 | xargs tar cf test.tar
The -ctime parameter compares the current time against each file's inode change time (note: the change time, not the content modification time; see the caveats above), and if you use + instead of -, it selects files changed more than x days ago.
Then just pass it to tar with xargs and you should be set.
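One caveat with both tar pipelines above: when the file list is long, xargs may invoke tar several times, and each tar -cf run overwrites the archive written by the previous one, which can produce a suspiciously small result. With GNU tar you can sidestep xargs and let tar read the null-separated list itself (a sketch; the archive name is arbitrary):
find /var/www -mtime +180 -print0 | tar --null -T - -czf archive.tar.gz
Here --null tells tar the list is null-separated and -T - makes it read the file names from standard input.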

sed not working as expected, but only for directory depth greater than 1

I am trying to find all instances of a string in all files on my system, up to a specified directory depth. I then want to replace these with another string, and I am using find and sed by piping one into the other.
This works when I use a base path such as cd /home/../.. or any other directory that isn't "/". It also only works if I select a directory depth of 1 (so /test.txt is changed, but /home/test.txt isn't). If I change nothing else and use, say, a depth of 2 or 3, neither /test.txt nor /home/test.txt is changed. In the former case, no warnings appear; in the latter, I get the results below (and no strings are replaced in either of the files).
Worryingly, it did work once out of the blue, but I have no idea how and I can't recreate the result. I should say I know the risks of using these commands as root from the base directory, and the specific use of the programs below is intentional, so I am not looking for an alternative way, just a clue as to why this isn't working and perhaps a suggestion on how to fix it.
cd /;find . -maxdepth 3 -type f -print0 | xargs -0 sed -i 's/teststring123/itworked/gI'
sed: couldn't open temporary file ./sys/kernel/sedoPGqGB: No such file or directory
sed: couldn't open temporary file ./proc/878/sedtqayiq: No such file or directory
As you can see, there are warnings, but nevertheless I would expect it to work. The commands appear correct; is there anything I am missing, folks?
This should be:
find / -maxdepth 3 -type f -print -exec sed -i -e 's/teststring123/itworked/g' {} \;
Although changing all files below / strikes me as a very bad idea indeed (I hope you're not running as root!).
The "couldn't open temporary file ./[...]" errors are likely to be because sed, running as your user, doesn't have permission to create files in /.
My version runs from your current working directory, I assume your ${HOME}, where you'll be able to create the temporary file, but you're still unlikely to be able to replace those files vital to the continued running of your operating system.
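If you really do need to run something like this near /, here is a sketch that prunes the virtual filesystems first (assuming GNU find and sed; adjust the depth and pattern to suit):
find / -maxdepth 3 \( -path /proc -o -path /sys -o -path /dev \) -prune -o -type f -print0 | xargs -0 sed -i 's/teststring123/itworked/gI'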

Linux: include all directories

How would I type a file path in the Ubuntu terminal to include all files in all sub-directories?
If I had a main directory called "books" that had a ton of subdirectories, with all sorts of different names, containing files, how would I type a path to include all files in all subdirectories?
/books/???
From within the books top directory, you can use the command:
find . -type f
Then, if you wanted to, say run each file through cat, you could use the xargs command:
find . -type f | xargs cat
For more info, use commands:
man find
man xargs
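If any of the file names might contain spaces or newlines, the null-separated variant of the pipeline above is safer (a sketch, assuming GNU find and xargs):
find . -type f -print0 | xargs -0 cat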
It is unclear what you actually want... You will probably get a better solution to your problem if you ask for it directly, rather than for another problem you've come across while trying to work around the original one.
Do you mean something like the following?
file */*
where the first * expands to all subdirectories and the second * to all contained files?
I have chosen the file command arbitrarily. You can choose whatever command you want to run on the shell-expanded list of files.
Note that directories will be included as well (unless excluded by a name pattern, e.g. *.png or *.txt).
The wildcard * is not exactly a file path that includes all files in all subdirectories; rather, it expands to a list of all files (or directories) matching the wildcard expression, e.g. file1 file2 file3 file4. See also this tutorial on shell expansion.
Note that there may be easy solutions to related problems, such as copying all files in all subdirectories (cp -a, for example; see man cp).
I also like find very much. It's quite easy to build more flexible search patterns in combination with grep. A random example:
du `find . | grep some_pattern_to_occur | grep -v some_pattern_to_not_occur`
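If your shell is bash 4 or newer, the globstar option gives you a recursive wildcard, which is about as close as a plain path gets to "all files in all subdirectories" (a sketch):
shopt -s globstar   # bash 4+: make ** match any depth
ls books/**/*       # every file and directory under books, at any depth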
./books/*
For example, assuming I'm in the parent directory of 'books':
ls ./books/*
EDIT:
Actually, to list the whole tree recursively you should use:
ls -R ./books/*

How to recursively list files with size and last modified time?

Given a directory, I'm looking for a bash one-liner to get a recursive list of all files with their size and modified time, tab-separated for easy parsing. Something like:
cows/betsy 145700 2011-03-02 08:27
horses/silver 109895 2011-06-04 17:43
You can use stat(1) to get the information you want, if you don't want the full ls -l output, and you can use find(1) to get a recursive directory listing. Combining them into one line, you could do this:
# Find all regular files under the current directory and print out their
# filenames, sizes, and last modified times
find . -type f -exec stat -f '%N %z %Sm' '{}' +
(Note that these are BSD stat format options, as found on macOS and FreeBSD; see the GNU equivalent below.) If you want to make the output more parseable, you can use %m instead of %Sm to get the last modified time as a time_t instead of as a human-readable date.
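On Linux, GNU stat uses -c with different format letters; a rough equivalent of the command above (a sketch, assuming GNU coreutils: %n is the name, %s the size in bytes, %y the human-readable mtime, and %Y the epoch time):
find . -type f -exec stat -c '%n %s %y' '{}' +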
find is perfect for recursively searching through directories. The -ls action tells it to output its results in ls -l format:
find /dir/ -ls
On Linux machines you can print customized output using the -printf action:
find /dir/ -printf '%p\t%s\t%t\n'
See man find for full details on the format specifiers available with -printf. (This is not POSIX-compatible and may not be available on other UNIX flavors.)
find * -type f -printf '%p\t%s\t%TY-%Tm-%Td %Tk:%TM\n'
If you prefer fixed-width fields rather than tabs, you can do things like changing %s to %10s.
I used find * ... to avoid the leading "./" on each file name. If you don't mind that, use . rather than * (which also shows files whose names start with .). You can also pipe the output through sed 's/^\.\///'.
Note that the output order will be arbitrary. Pipe through sort if you want an ordered listing.
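For example, putting the timestamp first makes plain text sorting chronological, so you can list the newest files first (a sketch built on the -printf action described above):
find . -type f -printf '%TY-%Tm-%Td %TH:%TM\t%s\t%p\n' | sort -r | head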
You could try this for a recursive listing from a folder called /from_dir:
find /from_dir/* -print0 | xargs -0 stat -c "%n|%A|%a|%U|%G" > permissions_list.txt
This lists files and directories, passes them through to the stat command, and puts all the info into a file called permissions_list.txt.
The format "%n|%A|%a|%U|%G" will give you the following result in the file:
from_dir|drwxr-sr-x|2755|root|root
from_dir/filename|-rw-r--r--|644|root|root

Cheers!

