Here's the problem. I had a bunch of files in a directory. Then I created another directory in that directory. Then I cobbled together this command:
find . -maxdepth 1 -type f -exec mv {} ./1 \;
This command was supposed to take all the files in the directory and move them to that newly-created directory, but instead of providing the name of the directory, I screwed up and typed 1, as you can see from the code snippet. So, I ended up having just one text file named 1 that now contains the stuff from one of the disappeared files and that's all.
Is there any chance I could recover the lost files (or possibly the actual data from the files--they were all text files) or are they pretty much permanently gone?
Before:
misha@hp-laptop:~/Documents/prgmg/work$ ls
add.s bubble.s cpuid.s div.s hello.s mult.s sum.s test.s
a.out c demo.s gas.txt max.s print_arr.s test.c
misha@hp-laptop:~/Documents/prgmg/work$ mkdir asm
After:
misha@hp-laptop:~/Documents/prgmg/work$ ls
1 asm c
So, as you can see, I wanted to put all assembly language files into the asm directory. And as things stand now, 1 is a text file and it contains the stuff from gas.txt.
No. Not easily. Sorry.
Each mv renamed a file to ./1 in turn, replacing the previous one, so every file except the last was unlinked and its data blocks were returned to the filesystem's free space.
Restoring from backup would be the best option.
See the answers to the question "Recovering accidentally deleted files" over at Unix & Linux, if you feel like doing a bit of low-level file access.
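For future reference, here is a sketch of how the intended move could be written so that a mistyped destination fails loudly instead of silently renaming files onto it (the asm name and the *.s pattern are taken from the session above):
mkdir -p asm
# The trailing slash makes mv fail if ./asm/ is not an existing directory,
# instead of renaming each file onto a regular file named asm.
find . -maxdepth 1 -type f -name '*.s' -exec mv {} ./asm/ \;
With GNU mv, -exec mv -t ./asm {} + performs the same existence check and moves the files in batches.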
I have a file that I copied some time back, but I forgot its source. Is there a way to find the source of the copied file? I don't remember which terminal I used, so I can't check the history with Esc+P.
Command used: cp -rf $source/file $destination/file
Thanks in advance!
You could try history | grep your_filename.
A Linux system has many files (and if you count /proc/, the set changes from moment to moment), and some other process can be writing, creating, appending to, or truncating files at any time (e.g. some crontab(1) job).
Assume you do know some parent directory containing the source file. Suppose it is /home/foo.
Then, you might use find(1) and some hashing command like md5sum(1) to compute and collect the hash of every file.
Use the property that two files A and B with identical contents (the same sequence of bytes) have the same md5sum. The converse does not strictly hold, but an accidental md5 collision is extremely unlikely in practice.
So run first
find /home/foo -type f -exec md5sum '{}' \; > /tmp/foo-md5
then extract just the hash field for your file A:
seekingmd5=$(md5sum A | awk '{print $1}')
then grep "$seekingmd5" /tmp/foo-md5 will find the lines for files having the same md5 as your original A
Depending on your filesystem and hardware, this could take hours.
You could accelerate slightly things by writing a C program using nftw(3) with md5init etc...
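Put together, a minimal sketch of the whole procedure (the /home/foo search root and the file name A are placeholders for your actual paths):
#!/bin/sh
# Hash every regular file under the search root (the slow part, run once).
find /home/foo -type f -exec md5sum '{}' \; > /tmp/foo-md5
# Hash the copied file, keeping only the hash field.
seekingmd5=$(md5sum A | awk '{print $1}')
# Every matching line names a candidate source of A.
grep "$seekingmd5" /tmp/foo-md5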
Trying to write a script to clean up environment files after a resource is deleted. The problem is all the script is given as input is the name of the resource (this cannot be changed) with zero identifying information beyond that. How can I find the path of the directory the resource is sitting in?
The directory is set up a bit like the following, although much more extensive. All of these are directories, not files. There can be as many as 40+ directories to search, but the desired one is generally not more than 2-3 directories deep.
foo
aaa
aaa_green
aaa_blue
bbb
ccc
ccc_green
bar
ddd
eee
eee_green
eee_blue
fff
fff_green
fff_blue
fff_pink
I might be handed input like aaa_green or just ddd.
As an example, given eee_blue as input, I need to know eee_blue's path from the working directory so I can cd there and delete the directory. IE, I would expect to return bar/eee/eee_blue/ or bar/eee/, either is acceptable.
The "best" option I can see currently is to cd into the lowest level of each directory via multiple greps, get each's contents and look for a match, and when it does (eventually) match save that cd'ing as the path. This frankly sounds awful and inefficient.
The only other alternative method I could think of was a straight recursive grep, but I tested it and at 8 minutes it still hadn't finished running.
This script needs to run on both mac and linux, although in a desperate pinch I could go linux only.
The standard Unix tool for doing this sort of task is the find command. The GNU version of find has more extensive options than the POSIX specification (by quite a margin). The version on macOS Sierra (and Mac OS X) is similar to the GNU version. I found an online manual for OS X 10.9 at Apple find, but there's probably a better location somewhere.
It looks like you might want to run:
find . -name 'eee_blue'
which will print the names of matching files or directories, or perhaps:
find . -name 'eee_blue' -exec rm -fr {} +
which will run the rm -fr command on each name. You can run a custom script of your own in place of rm -fr if you prefer; when the logic is complex, that's what I do.
Be extremely cautious before using rm -fr automatically!
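As a sketch of what such a script might look like (the cleanup.sh name is hypothetical, and head -n 1 simply takes the first match, which works with both GNU and BSD find):
#!/bin/sh
# Usage: cleanup.sh resource_name
resource=$1
# First directory anywhere below the working directory matching the name.
dir=$(find . -type d -name "$resource" | head -n 1)
if [ -n "$dir" ]; then
    printf 'Removing %s\n' "$dir"
    rm -rf -- "$dir"
fi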
I'm working with a directory with a lot of nested folders like /path/to/project/users/me/tutorial
I found a neat way to navigate up the folders here:
https://superuser.com/questions/449687/using-cd-to-go-up-multiple-directory-levels
But I'm wondering how to go down them. This seems significantly more difficult, but a couple things about the directory structure help. Each directory only has another directory in it, or maybe a directory and a README.
The directory I'm looking for looks more like a traditional project and might have random directories and files in it (more than any of the other higher directories certainly).
Right now I'm working on a solution using uh.. recursive bash functions cd'ing into the only directory underneath until there are either 0 or 2+ directories to loop through. This doesn't work yet..
Am I overcomplicating this? I feel like there could be some sweet solution using find. Ideally I want to be able to type something like:
down path
where path is a top-level folder. And that will take me down to the bottom folder tutorial.
There is an environment variable named CDPATH. This variable is used by cd in much the same way that PATH is used when searching for executables.
For example, if you have the following directories:
/path/to/project/users/me
/path/to/project/users/me/tutorial
/path/to/project/users/him
/path/to/project/users/him/test
/path/to/project/users/her
/path/to/project/users/her/uat
/path/to/project/users/her/dev
/path/to/application
/path/to/application/conf
/path/to/application/bin
/path/to/application/share
export CDPATH=/path/to/project/users/me:/path/to/project/users/him:/path/to/project/users/her:/path/to/application
A simple command such as cd tutorial will search the above paths for tutorial.
Let's say /path/to/application has the directories conf, bin, and share underneath it. A simple cd conf will send you to /path/to/application/conf, as long as none of the paths listed before it contain a conf directory. This behavior is similar to executable lookup in PATH: the first occurrence always gets chosen.
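A quick illustration, assuming the current directory contains neither a tutorial nor a conf subdirectory of its own (putting . first in CDPATH keeps plain relative cd working as usual; cd prints the resulting path whenever CDPATH supplied the match):
$ export CDPATH=.:/path/to/project/users/me:/path/to/application
$ cd tutorial
/path/to/project/users/me/tutorial
$ cd conf
/path/to/application/conf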
My attempt - this actually works now! I'm still afraid it could easily go infinite with symbolic links or some such.
Also, I have to source it like
. down
from within the first folder, since a script run normally cannot change its parent shell's directory.
#!/bin/bash
function GoDownOnce {
    # Immediate subdirectories only (find does not follow symlinks by
    # default, so a symlinked directory will not match -type d).
    Dirs=$(find ./ -maxdepth 1 -mindepth 1 -type d)
    NumDirs=$(echo "$Dirs" | wc -w)
    echo "$Dirs"
    echo "$NumDirs"
    # Keep descending while there is exactly one subdirectory.
    if [ "$NumDirs" = "1" ]; then
        cd "$Dirs" || return
        GoDownOnce
    fi
}
GoDownOnce
A friend also suggested this sweet one-liner:
cd $(find . -type d -name tutorial)
Admittedly this isn't quite what I asked, but it gets the job done pretty well.
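If more than one directory might match, or a path could contain spaces, a quoted variant that takes only the first match is a bit more predictable:
cd "$(find . -type d -name tutorial | head -n 1)"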
Now, I get the feeling that some people will think that there was no original file of a hard link, but I would strongly disagree because of the following experiment I did.
Let's create a file with the content pwd and make a hard link to a subfolder:
echo "pwd" > original
mkdir subfolder
cp -l original subfolder/hardlink
Now let's see what the files output if I run it with shell:
sh original
sh subfolder/hardlink
The output is the same, even though the file hardlink is in a subfolder!
Sorry for the long intro, but I wanted to make sure that nobody says my following question is irrelevant.
So my question now is: If the content of the original file was not conveniently pwd, how do I find out the path to the original file from a hard link file?
I know that Linux programs seem to know the path somehow, but not the filename, because some programs returned error messages saying that <path to original file>/hardlinkname was not found. But how do they do that?
Thanks in advance for an answer!
Edit: Btw, I fixed the error messages mentioned above by naming the hard links the same as the original file.
But how do they do that?
By looking for the same inode value. Here's one way you can list files with the same inode:
find /home -xdev -samefile original
replace /home with whatever directory you want find to start searching from.
how do I find out the path to the original file from a hard link file?
For hard links there are no multiple files, just one file (inode) with multiple (file) names.
ADDENDUM:
is there no other way to find the hard links of an inode than searching through folders?
ln, ls, find, and stat are the common ways of discovering and querying the filesystem for inodes. Depending on what you want to accomplish next, many file, directory, archiving, and searching commands also recognize inode values; some need a special option such as -inum or --follow to work with them.
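For example, a sketch of turning a name into an inode number and back into names (stat -c is the GNU form; BSD/macOS stat uses -f %i instead):
inode=$(stat -c %i original)      # inode number of the file
find /home -xdev -inum "$inode"   # every name on this filesystem that uses it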
The find example I gave above is just one such usage. Another is to combine with xargs to operate on all the found files. Here's one way to delete them all:
find /home -xdev -samefile original -print0 | xargs -0 rm
(the -print0/-0 pair keeps file names containing whitespace intact)
Look at the --help output of the other standard OS commands. Most Linux distributions also come with documentation that explains inodes and which tools work with them.
pwd prints the present working directory, so of course the output is the same: you never cd'd into your subfolder, so both invocations ran from the same place.
Sorry to say, but there is no "original" file if you create other hardlinks. If you want to get other hardlinks of a file, look at How to find all hard links to a given file? for example.
Agree with @Emacs User. Your example of pwd is irrelevant and has confused you.
There is no concept of an original file for hard links. Each file name is just another reference to the content on disk identified by the inode, which keeps a count of how many names refer to it (see ls -li original subfolder/hardlink). So even if you delete the original name, the hardlink still points to the same content.
It is impossible to find out, as all hard links are treated the same way: they all point to the one inode.
I have a folder on my server to which I had a number of symbolic links pointing. I've since created a new folder and I want to change all those symbolic links to point to the new folder. I'd considered replacing the original folder with a symlink to the new folder, but it seems that if I continued with that practice it could get very messy very fast.
What I've been doing is manually changing the symlinks to point to the new folder, but I may have missed a couple.
Is there a way to check if there are any symlinks pointing to a particular folder?
I'd use the find command.
find . -lname /particular/folder
That will recursively search the current directory for symlinks to /particular/folder. Note that it will only find absolute symlinks. A similar command can be used to search for all symlinks pointing at objects called "folder":
find . -lname '*folder'
From there you would need to weed out any false positives.
You can audit symlinks with the symlinks program written by Mark Lord -- it will scan an entire filesystem, normalize symlink paths to absolute form and print them to stdout.
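Usage is roughly as follows (flags quoted from memory, so treat them as an assumption and check the man page):
symlinks -rv /path/to/filesystem   # -r recurse, -v report every link found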
There isn't really any direct way to check for such symlinks. Consider that you might have a filesystem that isn't mounted all the time (e.g. an external USB drive), which could contain symlinks to another volume on the system.
You could do something with:
find / -type l | while read -r a; do echo "$a -> $(readlink "$a")"; done | grep destfolder
I note that FreeBSD's find does not support the -lname option, which is why I ended up with the above.
find . -type l -printf '%p -> %l\n'
Apart from checking every other folder for links pointing to the original folder, I don't think it is possible. If it is, I would be interested to hear how.
find / -lname 'fullyqualifiedpathoffile'
find /foldername -type l -exec ls -lad {} \;
For hardlinks, you can get the inode of your directory with one of the "ls" options (-i, I think).
Then a find with -inum will locate all common hardlinks.
For softlinks, you may have to do an ls -l on all files looking for the text after "->" and normalizing it to make sure it's an absolute path.
To any programmers looking here (cmdline tool questions probably should instead go to unix.stackexchange.com nowadays):
You should know that the Linux/BSD function fts_open() gives you an easy-to-use iterator for traversing all subdirectory contents while also detecting such symlink recursions.
Most command line tools use this function to handle this case for them. Those that don't often have trouble with symlink recursions, because doing this "by hand" is difficult (and anyone aware of it should just use that function instead).