Go to bottom-most directory? - linux

I'm working with a directory with a lot of nested folders like /path/to/project/users/me/tutorial
I found a neat way to navigate up the folders here:
https://superuser.com/questions/449687/using-cd-to-go-up-multiple-directory-levels
But I'm wondering how to go down them. This seems significantly more difficult, but a couple of things about the directory structure help: each directory contains only one other directory, or maybe a directory and a README.
The directory I'm looking for looks more like a traditional project and might have random directories and files in it (more than any of the other higher directories certainly).
Right now I'm working on a solution using, uh, a recursive bash function that cd's into the only directory underneath until there are either 0 or 2+ directories. This doesn't work yet.
Am I overcomplicating this? I feel like there could be some sweet solution using find. Ideally I want to be able to type something like:
down path
where path is a top-level folder. And that will take me down to the bottom folder tutorial.

There is an environment variable named CDPATH. cd uses it to search for the directory you name, the same way the shell uses PATH to search for executables.
For example, if you have the following directories:
/path/to/project/users/me
/path/to/project/users/me/tutorial
/path/to/project/users/him
/path/to/project/users/him/test
/path/to/project/users/her
/path/to/project/users/her/uat
/path/to/project/users/her/dev
/path/to/application
/path/to/application/conf
/path/to/application/bin
/path/to/application/share
export CDPATH=/path/to/project/users/me:/path/to/project/users/him:/path/to/project/users/her:/path/to/application
A simple command such as cd tutorial will search the above paths for a directory named tutorial.
Let's pretend /path/to/application has the directories conf, bin, and share underneath it. A simple cd conf will send you to /path/to/application/conf, as long as no earlier path contains a conf directory. This behavior is the same as for executables in PATH: the first occurrence always wins.
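For example, with the export above in effect and starting from an unrelated directory, a session looks like this (when cd resolves a name through CDPATH rather than the current directory, it prints the directory it changed to):
cd tutorial
/path/to/project/users/me/tutorial
cd conf
/path/to/application/conf
Adding . at the front of CDPATH is a common refinement, so that subdirectories of the current directory still take precedence.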

My attempt - this actually works now! I'm still afraid it could easily go infinite with symbolic links or some such.
Also, since a script can't change its parent shell's working directory, I have to source it:
. down
from within the starting folder.
#!/bin/bash
function GoDownOnce {
    # List the immediate subdirectories of the current directory.
    # Note: -type d does not match symlinks, so symlink loops are not a risk.
    Dirs=$(find . -maxdepth 1 -mindepth 1 -type d)
    # Count lines rather than words, so names containing spaces count as one.
    NumDirs=$(printf '%s\n' "$Dirs" | grep -c .)
    echo "$Dirs"
    echo "$NumDirs"
    # Descend only while there is exactly one subdirectory.
    if [ "$NumDirs" -eq 1 ]; then
        cd "$Dirs" || return
        GoDownOnce
    fi
}
GoDownOnce
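To get the down path usage asked for above, the same idea works as a shell function in ~/.bashrc (a sketch; a function runs in the current shell, so no sourcing is needed):
down() {
    cd "$1" || return
    local only
    # Keep descending while there is exactly one subdirectory.
    # find's -type d does not match symlinks, so this cannot loop forever.
    while only=$(find . -mindepth 1 -maxdepth 1 -type d) &&
          [ "$(printf '%s\n' "$only" | grep -c .)" -eq 1 ]; do
        cd "$only" || return
    done
}
With the example tree above, down /path/to/project/users/me stops in tutorial; descent halts wherever there are zero or two or more subdirectories.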

A friend also suggested this sweet one-liner:
cd "$(find . -type d -name tutorial)"
Admittedly this isn't quite what I asked for, but it gets the job done pretty well.
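Note that the substitution misbehaves if more than one directory matches; a slightly safer variant (still a sketch) stops at the first match:
cd "$(find . -type d -name tutorial | head -n 1)"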

Related

bash: get path from current directory given sub-directory name

Trying to write a script to clean up environment files after a resource is deleted. The problem is that all the script is given as input is the name of the resource (this cannot be changed), with zero identifying information beyond that. How can I find the path of the directory the resource is sitting in?
The directory is set up a bit like the following, although much more extensive. All of these are directories, not files. There can be as many as 40+ directories to search, but the desired one is generally not more than 2-3 directories deep.
foo
    aaa
        aaa_green
        aaa_blue
    bbb
    ccc
        ccc_green
bar
    ddd
    eee
        eee_green
        eee_blue
    fff
        fff_green
        fff_blue
        fff_pink
I might be handed input like aaa_green or just ddd.
As an example, given eee_blue as input, I need to know eee_blue's path from the working directory so I can cd there and delete the directory. I.e., I would expect bar/eee/eee_blue/ or bar/eee/ to be returned; either is acceptable.
The "best" option I can see currently is to cd into the lowest level of each directory via multiple greps, list each one's contents, and look for a match, saving the accumulated path when it (eventually) matches. This frankly sounds awful and inefficient.
The only alternative I could think of was a straight recursive grep, but I tested it and after 8 minutes it still hadn't finished running.
This script needs to run on both mac and linux, although in a desperate pinch I could go linux only.
The standard Unix tool for doing this sort of task is the find command. The GNU version of find has more extensive options than the POSIX specification (by quite a margin). The version on macOS Sierra (and Mac OS X) is similar to the GNU version. I found an online manual for OS X 10.9 at Apple find, but there's probably a better location somewhere.
It looks like you might want to run:
find . -name 'eee_blue'
which will print the names of matching files or directories, or perhaps:
find . -name 'eee_blue' -exec rm -fr {} +
which will run the rm -fr command on each name. You can run a custom script of your own in place of rm -fr if you prefer; if the logic is complex, that's what I do.
Be extremely cautious before using rm -fr automatically!
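As a sketch of the whole cleanup flow (the depth limit and the dry-run default are my assumptions; it sticks to POSIX sh, and -maxdepth, while not strictly POSIX, is supported by both GNU and macOS find):
#!/bin/sh
# Usage: cleanup.sh RESOURCE_NAME
resource=$1
# Search only a few levels deep; the question says the target is rarely deeper.
target=$(find . -maxdepth 4 -type d -name "$resource" | head -n 1)
if [ -n "$target" ]; then
    echo "Found: $target"
    # rm -rf "$target"    # uncomment once the match has been verified
else
    echo "No directory named $resource found" >&2
    exit 1
fi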

How can I delete files that are not used in code files in linux?

I am running Fedora 18 Linux and I have a PHP project that I have been working on for some time. I am trying to clean things up for a production deploy of a web application. I have a folder of icon images that over time has collected files that are no longer used in my code, either because I changed to a different icon in code, or because the image file was only used to create other icons. What I am looking to do is make a backup copy of the entire code project and then, HOPEFULLY using a combination of find, rm and grep on the command line, scan the entire folder of images and delete any image that is not used anywhere in my code files. I did some searching on the web and found things that find a line of text in a file and delete it, but nothing quite like what I am trying to do.
Any help is appreciated...
So here is what I came up with. I put together a shell script that does what I need. For the benefit of those who stumble upon this, and for those who want to critique my solution, here it is. I chose to skip .xcf files because they are only used as sources to create many of the icon files, and some of the .png names would grep-match inside those .xcf files.
#!/bin/bash
FILES=/var/www/html/support_desk/templates/default/images/icons/*
codedir=/var/www/html/support_desk_branch/
for f in $FILES
do
    bn=$(basename "$f")
    ext="${bn##*.}"
    echo "Processing $bn file..."
    # Delete the image only if its name appears nowhere in the code tree.
    if ! fgrep --quiet -R "$bn" "$codedir"; then
        # Keep .xcf sources even when they are unreferenced.
        if [ "$ext" != 'xcf' ]; then
            rm "$f"
        fi
    fi
done
Now I have ONLY the image files that are used in the PHP script files. So as not to miss any of the icon files used in the menu, which is defined in a table in a MySQL database, I created an SQL dump of that table's data and put it in the path of the application files before running the shell script.
The simplest way to find unused icon files would be to do a build of your complete project and then look at the access times of the icon files. Those that were not read recently (including by grep, of course) show up readily.
For instance, suppose you did a backup an hour ago and a build ten minutes ago; the access times would be distinct. Then
find . -amin +15 -type f
should give a nice list of "unused" files. (One caveat: filesystems mounted with the noatime option never update access times, in which case this trick cannot work.) If you're sure of the list (you did do a backup, right?) then you can purge the unused files:
find . -amin +15 -type f -exec rm -i {} \;
If you are really certain, you can remove the -i option.

How to delete 600 GB of small files?

This was an interview question. They gave no information about the files (extension? hidden files? stored in a single directory or in a directory tree?), so my first reaction was:
rm -fr *
oh no, wait, should be:
rm -fr -- *
Then I realized that the above command would not remove hidden files, and quite frankly directories like . and .. might interfere, so my second and final thought was a shell script that uses find.
find . -depth -type f -delete
(GNU find lets you omit the starting path, and -delete implies -depth anyway.) I'm not sure if this is the right way of doing it; I'm wondering if there is a better way of doing this task.
It's not as obvious as it seems:
http://linuxnote.net/jianingy/en/linux/a-fast-way-to-remove-huge-number-of-files.html
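For reference, one frequently cited fast approach (not necessarily the one the article settles on) is to let rsync mirror an empty directory over the target, which avoids shell globbing entirely:
mkdir /tmp/empty
# rsync deletes everything in the target that is not in the (empty) source.
rsync -a --delete /tmp/empty/ /path/to/dir/with/files/
rmdir /tmp/empty
rsync --delete is often reported to be competitive with, or faster than, find -delete on directories with millions of entries.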

Find folders with specific name and no symlink pointing to them

I'm trying to write a shell script under Linux which lists all folders (recursively) with a certain name that have no symlink pointing to them.
For example, I have:
/home/htdocs/cust1/typo3_src-4.2.11
/home/htdocs/cust2/typo3_src-4.2.12
/home/htdocs/cust3/typo3_src-4.2.12
Now I want to go through all subdirectories of /home/htdocs and find those typo3_* folders that no symlink points to.
It should be possible with a shell script or a command, but I have no idea how.
Thanks for your help
Stefan
I think none of the common file systems record in a file's inode whether any symlinks point to it, so you would have to scan all other files to see whether each one is a symlink to the folder in question. If you don't limit the depth of your search, this might take a very long time. To perform that search in /home/htdocs, for example, it would work something like this:
# find the specified folders:
find /home/htdocs -name 'typo3_*' -type d | while read -r folder; do
    # list all symlinks resolving to $folder (the folder itself also
    # matches -samefile, so filter it out)
    links=$(find -L /home/htdocs -samefile "$folder" | grep -v "^$folder\$")
    # print the folder only when no symlink points to it
    [ -z "$links" ] && echo "$folder"
done

Is there a way to check if there are symbolic links pointing to a directory?

I have a folder on my server to which I had a number of symbolic links pointing. I've since created a new folder and I want to change all those symbolic links to point to the new folder. I'd considered replacing the original folder with a symlink to the new folder, but it seems that if I continued with that practice it could get very messy very fast.
What I've been doing is manually changing the symlinks to point to the new folder, but I may have missed a couple.
Is there a way to check if there are any symlinks pointing to a particular folder?
I'd use the find command.
find . -lname /particular/folder
That will recursively search the current directory for symlinks to /particular/folder. Note that it will only find absolute symlinks. A similar command can be used to search for all symlinks pointing at objects called "folder":
find . -lname '*folder'
From there you would need to weed out any false positives.
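The absolute-only limitation can be worked around by resolving every link before comparing; a sketch, assuming GNU readlink -f (which canonicalizes the target, so relative links are caught too):
find . -type l | while read -r l; do
    # readlink -f resolves the link to an absolute, canonical path
    [ "$(readlink -f "$l")" = "/particular/folder" ] && echo "$l"
done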
You can audit symlinks with the symlinks program written by Mark Lord -- it will scan an entire filesystem, normalize symlink paths to absolute form and print them to stdout.
There isn't really any direct way to check for such symlinks. Consider that you might have a filesystem that isn't mounted all the time (e.g. an external USB drive), which could contain symlinks to another volume on the system.
You could do something with:
find / -type l | while read -r a; do echo "$a -> $(readlink "$a")"; done | grep destfolder
I note that FreeBSD's find does not support the -lname option, which is why I ended up with the above.
With GNU find, the same listing is a one-liner:
find . -type l -printf '%p -> %l\n'
Apart from checking all other folders for links pointing to the original folder, I don't think it is possible. If it is, I would be interested.
find / -lname 'fullyqualifiedpathoffile'
find /foldername -type l -exec ls -lad {} \;
For hard links, you can get the inode of your directory with ls -i.
Then find with -inum will locate all of the hard links to it.
For soft links, you may have to run ls -l on all files, look for the text after "->", and normalize it to make sure it's an absolute path.
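A sketch of the inode approach described above (the path is hypothetical):
# The first field of ls -di is the inode number of the directory itself.
inode=$(ls -di /some/directory | awk '{print $1}')
# Inode numbers are only unique within one filesystem, so -xdev keeps
# find from crossing mount points.
find / -xdev -inum "$inode" 2>/dev/null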
To any programmers looking here (command-line tool questions should probably go to unix.stackexchange.com nowadays):
You should know that the Linux/BSD function fts_open() gives you an easy-to-use iterator for traversing all subdirectory contents while also detecting such symlink recursions.
Most command-line tools use this function to handle that case for them. Those that don't often have trouble with symlink recursions, because doing this "by hand" is difficult (and anyone aware of it should just use the above function instead).
