Years ago I used to do:
$ od -c .
to get a dump of the current directory and show the inodes.
This no longer works ... does anyone know why, or what I can do instead?
I was just demonstrating the 'beauty' of "everything is a file" to someone.
Craig
od internally calls open() and then read(); from man 2 read, ERRORS section:
EISDIR fd refers to a directory.
That is, you cannot read bytes from a directory. Maybe in some old version of Linux or in some other *nix system it was allowed, but not in today's Linux.
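If you want to see the failing call for yourself, strace makes it visible (a quick sketch, assuming strace is installed):
strace -e trace=open,openat,read od -c . 2>&1 | tail
The open of "." succeeds, but the subsequent read() returns -1 EISDIR (Is a directory).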
Actually, I wondered:
How is the '->' shown in front of the directory information in Linux when doing ll -a?
Is it something related to the repository/Git, or was it done by another person intentionally?
The printed line with the -> is a symbolic link, not really a directory (or a regular file).
The name of the link is latest (a size of 5 characters). The target of this link is 9.0.2 (in the same directory).
So it was done intentionally, by another person or by an install program.
Read man ln for further information; ln is the command for creating links.
There is no relation with Git or anything like it.
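For illustration, a minimal sketch of how such a link is typically created (using the names from your listing):
ln -s 9.0.2 latest
ls -l latest    # shows something like: lrwxrwxrwx ... 5 ... latest -> 9.0.2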
I want to know how we can have 2 symlinks (not 2 regular files, or a hardlink and a symlink, but 2 symlinks) with the same inode and pointing to the same inode. I tried a lot of combinations of hardlinks, symlinks and regular files; I can get two files with the same inode and the same pointed-to inode, but then they aren't both symlinks.
N.B.: I use the os library under Python 3 to get all the information: os.path.islink to know whether something is a symlink, and os.stat / os.lstat for the inodes.
Thanks a lot.
You have to make sure that the tool you use for making the hardlink does not follow the symlink you are trying to link. On the command line, you can do it like this, for example:
cp -l -P symlink1 symlink2
These options mean:
-l, --link
hard link files instead of copying
-P, --no-dereference
never follow symbolic links in SOURCE
In Python, starting from version 3.3, you can do the following:
os.link("symlink1", "symlink2", follow_symlinks=False)
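A quick way to check the result (illustrative names; assumes GNU coreutils for cp and stat):
touch f1
ln -s f1 symlink1
cp -l -P symlink1 symlink2
ls -li symlink1 symlink2               # both lines show the same inode number and "-> f1"
stat -L -c '%i %n' symlink1 symlink2   # same target inode for both once the links are followed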
Thanks to all of your answers, but actually I solved the problem: I can have two files with the same inode and pointing to the same inode.
I needed to create a hardlink of the symlink of a file (file f1 as an example):
ln symlinkf1 hardlink_ofsymlinkf1
Then I get inode_symlinkf1 = inode_hardlink_ofsymlinkf1 (1)
and inodepoint_symlinkf1 = inodepoint_hardlink_ofsymlinkf1 (2)
but (1) ≠ (2).
And this works well only on Linux, where I get both of the files as softlinks (with os.path.islink(file_name) == True); weirdly, it doesn't work on Mac OS.
I'm not even sure if this is easily possible, but I would like to list the files that were recently deleted from a directory, recursively if possible.
I'm looking for a solution that does not require the creation of a temporary file containing a snapshot of the original directory structure against which to compare, because write access might not always be available. Edit: If it's possible to achieve the same result by storing the snapshot in a shell variable instead of a file, that would solve my problem.
Something like:
find /some/directory -type f -mmin -10 -deletedFilesOnly
Edit: OS: I'm using Ubuntu 14.04 LTS, but the command(s) would most likely be running in a variety of Linux boxes or Docker containers, most or all of which should be using ext4, and to which I would most likely not have access to make modifications.
You can use the debugfs utility.
debugfs is the interactive file system debugger for ext2/ext3/ext4 file systems (part of e2fsprogs); despite the name, it is not the kernel's RAM-based debugfs pseudo-filesystem.
First, run debugfs /dev/hda13 as root in your terminal (replacing /dev/hda13 with your own disk/partition).
(NOTE: You can find the name of the partition holding / by running df / in the terminal.)
Once in debug mode, you can use the command lsdel to list the inodes corresponding to deleted files.
When files are removed in Linux they are only unlinked; their inodes (the on-disk structures recording where the file's data actually lives) are not immediately cleared.
To get the paths of these deleted files you can use debugfs -R "ncheck 320236" /dev/hda13, replacing the number with your particular inode (and the device with your partition).
Inode Pathname
320236 /path/to/file
From here you can also inspect the contents of deleted files with debugfs's own cat command. (NOTE: You can also recover the data from here if necessary.)
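Roughly, a session might look like this (a sketch; the inode number is whatever lsdel reported, and the angle brackets tell debugfs to treat it as an inode rather than a path name):
debugfs:  cat <320236>
debugfs:  dump <320236> /tmp/recovered_file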
Great post about this here.
So a few things:
1. You may have zero success if your partition is ext2; it works best with ext4.
2. df / (to find the device that holds /)
3. sudo debugfs, filling in the device from #2; in my case:
sudo debugfs /dev/mapper/q4os--desktop--vg-root
4. lsdel
5. q (to exit out of debugfs)
6. sudo debugfs -R 'ncheck 528754' /dev/sda2 2>/dev/null (replace the inode number with one from step #4, and the device with yours from step #2)
Thanks for your comments & answers, guys. debugfs seems like an interesting solution to the initial requirements, but it is a bit overkill for the simple & light solution I was looking for; if I'm understanding correctly, it needs root access to the raw disk device and only works on ext-family file systems. Unfortunately, that won't really work for my use-case; I must be able to provide a solution for existing, "basic" setups to which I may not have privileged access.
As this seems virtually impossible to accomplish, I've been able to negotiate and relax the requirements down to listing the number of files that were recently deleted from a directory, recursively if possible.
This is the solution I ended up implementing:
1. A simple find command piped into wc to count the original number of files in the target directory (recursively). The result can then easily be stored in a shell or script variable, without requiring write access to the file system.
DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)
2. We can then run the same command again later to get the updated number of files.
DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
3. Then we can store the difference between the two in another variable and update the original amount.
DEL_SCAN_DEL_AMOUNT=$(($DEL_SCAN_ORIG_AMOUNT - $DEL_SCAN_NEW_AMOUNT));
DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
4. We can then print a simple message if the number of files went down.
if [ $DEL_SCAN_DEL_AMOUNT -gt 0 ]; then echo "$DEL_SCAN_DEL_AMOUNT deleted files"; fi;
5. Return to step 2 (the whole loop is sketched below).
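Put together, the whole procedure can run as a simple polling loop (the 60-second interval is just illustrative):
DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)
while true; do
    sleep 60
    DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
    DEL_SCAN_DEL_AMOUNT=$(($DEL_SCAN_ORIG_AMOUNT - $DEL_SCAN_NEW_AMOUNT))
    DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
    if [ $DEL_SCAN_DEL_AMOUNT -gt 0 ]; then echo "$DEL_SCAN_DEL_AMOUNT deleted files"; fi
done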
Unfortunately, this solution won't report anything if the same number of files has been created and deleted during an interval, but that's not a huge issue for my use case.
To circumvent this, I'd have to store the actual list of files instead of the count, but I haven't been able to make that work using shell variables. If anyone could figure that out, it would help me immensely, as it would meet the initial requirements!
I'd also like to know if anyone has comments on either of the two approaches.
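For what it's worth, a rough sketch of the list-based variant (assumes bash for the process substitution, that the file list fits comfortably in a variable, and that file names contain no newlines):
DEL_SCAN_ORIG_LIST=$(find /some/directory -type f | sort)
# ...later...
DEL_SCAN_NEW_LIST=$(find /some/directory -type f | sort)
# lines present in the old list but missing from the new one are the deleted files:
comm -23 <(printf '%s\n' "$DEL_SCAN_ORIG_LIST") <(printf '%s\n' "$DEL_SCAN_NEW_LIST")
DEL_SCAN_ORIG_LIST=$DEL_SCAN_NEW_LIST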
Try:
lsof -nP | grep -i deleted
Note that this only shows deleted files that are still held open by some process.
history >> history.txt
Look for all rm statements.
In Ubuntu, I give these commands and obtain this output:
soujanya@LLN-Ubuntu:~/workspace/openEAR-0.1.0$ ls -l SMILExtract
-rwxr-xr-x 1 soujanya soujanya 3789876 Aug 20 2009 SMILExtract
soujanya@LLN-Ubuntu:~/workspace/openEAR-0.1.0$ whoami
soujanya
soujanya@LLN-Ubuntu:~/workspace/openEAR-0.1.0$ ./SMILExtract
bash: ./SMILExtract: No such file or directory
soujanya@LLN-Ubuntu:~/workspace/openEAR-0.1.0$
SMILExtract is an executable file (not a shell script) and I do not have access to its source code. Maybe it calls system() somewhere, maybe not; there is no way for me to know.
I have heard that this error can occur if the file is a 32-bit binary run on a 64-bit system, in which case No such file or directory refers to the loader and not to this file. I think this is not the cause in my case, but anyway, my question is:
Is there a way to find out WHICH file is "No such file or directory"? Maybe a special variable in Bash, or something like that.
You can run programs with strace, a tool that shows you which system calls are used by a program. It'll produce a lot of output, but you can see the files your program attempts to open. Run your program like this:
strace ./SMILExtract
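To narrow the output down to the interesting part, the trace can be limited to the open calls and filtered for failures (a sketch; the exact syscall names in use can vary by architecture and libc version):
strace -f -e trace=open,openat ./SMILExtract 2>&1 | grep ENOENT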
To be sure about the 32/64-bit question, you could run file ./SMILExtract.
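If file reports a 32-bit ELF, the "No such file or directory" usually refers to the 32-bit dynamic loader named as the binary's interpreter rather than to the binary itself; whether it exists can be checked directly (the path below is the usual one on Ubuntu, assumed here):
file ./SMILExtract        # look for "ELF 32-bit" and the "interpreter" path
ls -l /lib/ld-linux.so.2  # the error appears when this loader is missing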
There was a situation where somebody moved the whole root dir into a subdir on a remote system, so all the system tools like cp, mv, etc. didn't work anymore. We still had an active session, but couldn't find a way to copy/move the files back using only bash built-ins.
Does somebody know of a way to achieve this?
I even thought about copying the cp or mv binary into the current dir with
while read -r LINE; do echo "$LINE"; done
and then redirecting this to a file, but it didn't work. I guess that's because of all the special non-printable characters in a binary file, which can't be copied/displayed using echo.
Thanks.
/newroot/lib/ld-linux.so.2 --library-path /newroot/lib \
/newroot/bin/mv /newroot/* /
(Similar for Solaris, but I think the dynamic linker is named ld.so.1 or something along those lines.)
Or, if your shell is sh-like (not csh-like),
LD_LIBRARY_PATH=/newroot/lib /newroot/bin/mv /newroot/* /
If you have prepared by installing sash beforehand, it is statically linked and has a copy built in (-cp).
Otherwise LD_LIBRARY_PATH=/copied/to/path/lib /copied/to/path/bin/cp might work?
I think it might have a problem with not having ld.so in the expected place.
Here's a reasonable ghetto replacement for cp. You'll want echo -E if the file ends with a newline (like most text files), and echo -nE if it doesn't (like most binaries).
echo -nE "`< in.file`" > out.file
Old thread, but I made exactly the same stupid mistake: /lib64 was moved to /lib64.bak remotely and everything stopped working.
This was an x86_64 install, so ephemient's solution was not working as written:
# /lib64.bak/ld-linux.so.2 --library-path /lib64.bak/ /bin/mv /lib64.bak/ /lib64
/bin/mv: error while loading shared libraries: /bin/mv: wrong ELF class: ELFCLASS64
In that case, a different ld-linux had to be used:
# /lib64.bak/ld-linux-x86-64.so.2 --library-path /lib64.bak/ /bin/mv /lib64.bak/ /lib64
Now the system is salvaged. Thanks ephemient!
/subdir/bin/mv /subdir /
or am I missing something in your explanation?
If you have access to another machine, one solution is to download and compile a Busybox binary. It will be a single binary containing most of the common tools you need to restore your system. This might not work if your system is remote, though.
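A rough outline of what that might look like on the helper machine (the URL and the static-build option are from memory and may differ between Busybox versions; treat this as a sketch):
git clone https://git.busybox.net/busybox
cd busybox
make defconfig
make menuconfig   # enable "Build static binary (no shared libs)" (CONFIG_STATIC)
make
# copy the resulting ./busybox over, then applets can be run as, e.g.:
# ./busybox mv /newroot/* /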