In Linux, how to (quickly) get a list of all files in a directory, with their file sizes

I need to get a list of all the files, along with their sizes, in Linux.
The filesystem is ext4 on a USB-connected drive, and the machine has very little RAM.
The functions I'm using are these; is there a better technique?
a) opendir()
b) readdir()
c) stat()
I believe I'm getting hit pretty hard by the stat() call; I don't have much RAM and the drive is connected over USB.
Is there a way to say "give me all the files in the directory, along with their sizes"? My guess is that I'm being hurt because stat() needs to go query the inode for the size, leading to lots of seeks.

No, not really. If you don't want to hit the disk, you would need to have the inodes cached in memory. That's the trade-off in this case.
You could try tuning the ext4 inode_readahead_blks mount option and the vfs_cache_pressure sysctl, though.
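For reference, here is a minimal C sketch of the loop the question describes (opendir()/readdir() plus a per-entry stat). Using fstatat() relative to the directory's own file descriptor saves the repeated path lookups, but each entry's inode still has to be read, so the seeks discussed above remain.

/* List every entry in a directory with its size.
 * fstatat(dirfd, ...) avoids re-resolving the directory path per entry;
 * it does not avoid reading each inode. */
#include <dirent.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";
    DIR *d = opendir(path);
    if (!d) {
        perror("opendir");
        return 1;
    }

    int dfd = dirfd(d);
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        struct stat st;
        if (fstatat(dfd, e->d_name, &st, AT_SYMLINK_NOFOLLOW) == 0)
            printf("%12lld  %s\n", (long long)st.st_size, e->d_name);
    }

    closedir(d);
    return 0;
}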

Use cd to get to the directory whose file and directory sizes you want, then type:
du -sh *
This will give you the sizes of all the files and directories in it.
The -s flag gives a total for each argument, -h makes the output human-readable, and the * expands to everything in that directory.
Type man du to find out more about the du command (q or Ctrl-C to exit the pager).

Related

How to prove that a directory is a file in Linux

"Everything is a file in Linux". How can i prove that directories are represented as files in linux. Also the physical hardware devices everything creates and is represented as files in Linux. But how can i prove this concept with supporting examples to someone.
Viewing the Directory and other physical hardwares as files in Liniux.( POC)
The "Everything is a file in Linux" statement is a bit of an oversimplification. There are many things in Linux that appear as files, but don't quite 'act' as you think they would in a conventional sense.
Block files (e.g. /dev/loop0) are a great example of this as they are used as a way of communicating with device drivers.
That said, directories are their own 'special' kind of file that contains inode IDs pointing to each file's inode. I suppose a simple 'proof' of sorts would be to ls -l any directory: you will notice that most (if not all) of them have a listed file size of 4096 bytes, rather than the collective size of their contents.
4096 bytes is a single block on most filesystems and is usually more than enough to hold all the information (names and inode IDs) a directory keeps. So rather than holding its files' data directly, a directory holds metadata about them.
Alternatively, using stat on any directory will display its own inode number (as well as the number of links it has).
EDIT: Directory files contain the inode id (a pointer to a file's inode) not the inode itself. I have edited the answer.
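A small C illustration of the same point (pass a directory path as the argument; "." is used by default): the directory has its own inode and a block-sized st_size, just like any other file.

/* Show that a directory is stat()-able like any file:
 * it has an inode number, a size, and S_ISDIR set in st_mode. */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : ".";
    struct stat st;

    if (stat(path, &st) != 0) {
        perror("stat");
        return 1;
    }

    printf("%s: inode %llu, size %lld bytes, %s\n",
           path,
           (unsigned long long)st.st_ino,
           (long long)st.st_size,
           S_ISDIR(st.st_mode) ? "is a directory" : "is not a directory");
    return 0;
}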

Linux deleted file recovery

Is there a way to create a file in Linux that links to a specific inode?
Take this scenario: there is a file that is in the course of being written (a log, maybe) and that file is deleted, but a link in /proc is still pointing at it. In this case we don't need a bare copy of it but a hard link to it, so that we get the future modifications, including the very last one before the process closes it and the system deletes it.
If we have the inode number, is there a way to achieve this goal?
Since there is no syscall that works on inode numbers (the inode is a concept of the extX filesystems, and bypassing the layers would be a stovepipe rather than a proper chain of responsibility, as M.E.L. suggests), the only answer to this question is NO: at the VFS level we handle file paths and names, not other internal representations.
BUT, to achieve the goal of tracking the latest modifications, we can use continuous monitoring and duplication with tail:
tail -c+1 -f --pid=PID /proc/PID/fd/FD > /path/to/the/copy
where PID is the pid of the process that still has the deleted file open and FD is its file descriptor number. With -f, tail opens and holds the file to follow further modifications; with -c+1 it starts "tailing" from the first byte; and with --pid=PID tail is told to exit when that pid exits.
You can use lsof to recover deleted files (sometimes)...
> lsof | grep testing.txt
less  4607  juliet  4r  REG  254,4  21  8880214  /home/juliet/testing.txt (deleted)
Be sure to read the original article for full details before attempting this, unless you're a maverick like me.
> ls -l /proc/4607/fd/4
lr-x------ 1 juliet juliet 64 Apr 7 03:19 /proc/4607/fd/4 -> /home/juliet/testing.txt (deleted)
> cp /proc/4607/fd/4 testing.txt.bk
http://www.linuxplanet.com/linuxplanet/tips/6767/1
Enjoy
It's always difficult to answer a question like "can I do ..." confidently in the negative. But as far as I can see, neither /sys nor /proc provides a mapping of open file descriptors other than as symlinks. I assume that by "BUT a link in the dir /proc is still pointing at it" you mean that the /proc/<pid>/fd/ entries look like symlinks? I'm almost sure you cannot recover the original file.
I take that back: as user2676075 pointed out, copying does work. Just hard-linking doesn't ...
UPDATE: If you think about it, it's quite logical.
/proc and /sys are filesystems different from your hard disk, so they can't provide file-like directory entries which one could hardlink to a destination on the hard disk.
The /proc/*/fd/ entries pretend to be symlinks, but they are actually different, else the copying would not work. I think they pretend to be symlinks to provide meaningful information with ls -l.
Regarding the (missing) capability to hardlink to some inode (let's say with some system call): this cannot be part of the kernel or the VFS interface, for the following reasons:
It would violate the integrity of the filesystem. The filesystem is not supposed to keep the disk blocks of completely deleted files around in the same manner as those of files that persist.
The inode might be a completely virtual concept that identifies a "slot where a data stream is stored". I assume there can be implementations that would have a problem converting a slot that has no reference into a slot which is referred to by a name in the filesystem.
I admit the case against the possibility of such a system call is not watertight. But given the current state of the VFS interface (which AFAIR doesn't provide for such a call), it would be a heavy burden for any filesystem implementation (including e.g. distributed filesystems) to provide a call to link a file into a directory by inode.
ATM I wonder whether fstat, called before and after deleting the last reference, is actually required to return the same inode information ...
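A quick, hedged experiment for that last question (the file name below is just a placeholder): on the usual local filesystems, fstat() on the still-open fd keeps returning the same inode number after unlink(), with st_nlink dropping to 0.

/* Does fstat() on an open fd keep reporting the same inode after the
 * last name is removed?  The inode itself is only freed on last close. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("scratch.tmp", O_CREAT | O_RDWR, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct stat before, after;
    fstat(fd, &before);

    unlink("scratch.tmp");          /* remove the only directory entry */
    fstat(fd, &after);              /* fd still refers to the inode    */

    printf("inode %llu -> %llu, nlink %llu -> %llu\n",
           (unsigned long long)before.st_ino,   (unsigned long long)after.st_ino,
           (unsigned long long)before.st_nlink, (unsigned long long)after.st_nlink);

    close(fd);                      /* inode is freed here */
    return 0;
}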

How to make file sparse?

If I have a big file containing many zeros, how can I efficiently make it a sparse file?
Is the only possibility to read the whole file (including all the zeros, which may partially be stored sparsely already) and to rewrite it to a new file, using seek to skip the zero areas?
Or is there a way to do this in an existing file (e.g. File.setSparse(long start, long end))?
I'm looking for a solution in Java or some Linux commands; the filesystem will be ext3 or similar.
A lot has changed in 8 years.
Fallocate
fallocate -d filename can be used to punch holes in existing files. From the fallocate(1) man page:
-d, --dig-holes
Detect and dig holes. This makes the file sparse in-place,
without using extra disk space. The minimum size of the hole
depends on filesystem I/O block size (usually 4096 bytes).
Also, when using this option, --keep-size is implied. If no
range is specified by --offset and --length, then the entire
file is analyzed for holes.
You can think of this option as doing a "cp --sparse" and then
renaming the destination file to the original, without the
need for extra disk space.
See --punch-hole for a list of supported filesystems.
(That list:)
Supported for XFS (since Linux 2.6.38), ext4 (since Linux
3.0), Btrfs (since Linux 3.7) and tmpfs (since Linux 3.5).
tmpfs being on that list is the one I find most interesting. The filesystem itself is efficient enough to only consume as much RAM as it needs to store its contents, but making the contents sparse can potentially increase that efficiency even further.
GNU cp
Additionally, somewhere along the way GNU cp gained an understanding of sparse files. Quoting the cp(1) man page regarding its default mode, --sparse=auto:
sparse SOURCE files are detected by a crude heuristic and the corresponding DEST file is made sparse as well.
But there's also --sparse=always, which activates the file-copy equivalent of what fallocate -d does in-place:
Specify --sparse=always to create a sparse DEST file whenever the SOURCE file contains a long enough sequence of zero bytes.
I've finally been able to retire my tar cpSf - SOURCE | (cd DESTDIR && tar xpSf -) one-liner, which for 20 years was my graybeard way of copying sparse files with their sparseness preserved.
Some filesystems on Linux / UNIX have the ability to "punch holes" into an existing file. See:
LKML posting about the feature
UNIX file truncation FAQ (search for F_FREESP)
It's not very portable and not done the same way across the board; as of right now, I believe Java's IO libraries do not provide an interface for this.
If hole punching is available either via fcntl(F_FREESP) or via any other mechanism, it should be significantly faster than a copy/seek loop.
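On Linux, the corresponding mechanism is fallocate(2) with FALLOC_FL_PUNCH_HOLE, which Java code could only reach via native code. A minimal sketch, with a placeholder offset and length (a real tool would first scan for zero-filled ranges, which is what fallocate -d does):

/* Punch a hole into an existing file without changing its apparent size. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Deallocate 1 MiB starting at offset 4096; the file size is kept. */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  4096, 1024 * 1024) != 0)
        perror("fallocate");   /* e.g. EOPNOTSUPP on filesystems without support */

    close(fd);
    return 0;
}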
I think you would be better off pre-allocating the whole file and maintaining a table/BitSet of the pages/sections which are occupied.
Making a file sparse would result in those sections being fragmented if they were ever re-used. Perhaps saving a few TB of disk space is not worth the performance hit of a highly fragmented file.
You can use truncate -s filesize filename in a Linux terminal to create a sparse file consisting of metadata only.
NOTE: filesize is in bytes.
According to this article, it seems there is currently no easy solution, except for using the FIEMAP ioctl. However, I don't know how you can turn "non-sparse" zero blocks into "sparse" ones.
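As an aside, if you just want to see which ranges of an existing file are already holes, lseek() with SEEK_DATA/SEEK_HOLE (Linux 3.1+) is a simpler alternative to FIEMAP. A hedged sketch (it reports allocation, not zero bytes, so it shows the result of hole punching rather than which blocks could still be dug out):

/* Walk a file and print its data extents; everything in between is a hole. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    off_t end = lseek(fd, 0, SEEK_END);
    off_t pos = 0;

    while (pos < end) {
        off_t data = lseek(fd, pos, SEEK_DATA);   /* next byte not inside a hole */
        if (data < 0)
            break;                                /* only a hole remains up to EOF */
        off_t hole = lseek(fd, data, SEEK_HOLE);  /* end of that data extent */
        printf("data: %lld .. %lld\n", (long long)data, (long long)hole);
        pos = hole;
    }

    close(fd);
    return 0;
}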

Creating a hard link to partial file contents in linux

I have fileA, which has a size of 200 MB. Now I want to create a hard link to fileA, named fileB, but I only want this file to point to the first 100 MB of fileA. So basically I need fileB to point to the same data blocks, but with a different length. It doesn't necessarily have to be a real hard link; it could be a virtual file proxying the contents.
I was thinking about duplicating the inode somehow and changing the length, but I presume this could cause filesystem coherency issues (when data blocks move around etc.). Is there any Linux tool or user-level system call that could let me do this?
You cannot do this directly in the manner you are talking about. There are filesystems that sort of do this. For example LessFS can do this. I also believe that the underlying structure of btrfs supports this, so if someone put the hooks in, it would be accessible at user level.
But, there is no way to do this with any of the ext filesystems, and I believe that it happens implicitly in LessFS.
There is a really ugly way of doing it in btrfs. If you make a snapshot, then truncate the snapshot file to 100M, you have effectively achieved your goal.
I believe this would also work with btrfs and a sufficiently recent version of cp: you could just copy the file with cp and then truncate one copy. The version of cp would have to have the --reflink option, and if you want to be extra sure about this, give the --reflink=always option.
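For completeness, the same reflink-then-truncate idea is available at the syscall level as the FICLONE ioctl (Linux 4.5+, on filesystems with reflink support such as btrfs or XFS with reflink enabled). A hedged sketch using the file names from the question:

/* Share fileA's data blocks with fileB (copy-on-write), then keep only
 * the first 100 MB of fileB. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <unistd.h>

int main(void)
{
    int src = open("fileA", O_RDONLY);
    int dst = open("fileB", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (src < 0 || dst < 0) {
        perror("open");
        return 1;
    }

    /* Clone fileA's extents into fileB; no data is copied. */
    if (ioctl(dst, FICLONE, src) != 0) {
        perror("ioctl(FICLONE)");   /* e.g. EOPNOTSUPP on ext4 */
        return 1;
    }

    /* Cut fileB down to the first 100 MB. */
    if (ftruncate(dst, 100 * 1024 * 1024) != 0) {
        perror("ftruncate");
        return 1;
    }

    close(src);
    close(dst);
    return 0;
}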
Adding to @Omnifarious's answer:
What you're describing is not a hard link. A hard link is essentially a reference to an inode, by a path name. (A soft link is a reference to a path name, by a path name.) There is no mechanism to say, "I want this inode, but slightly different, only the first k blocks". A copy-on-write filesystem could do this for you under the covers. If you were using such a filesystem, you would simply say
cp fileA fileB && truncate -s 100M fileB
Of course, this also works on a non-copy-on-write filesystem, but it takes up an extra 100 MB instead of just the filesystem overhead.
Now, that said, you could still implement something like this easily on Linux with FUSE. You could implement a filesystem that mirrors some target directory but simply artificially sets a maximum length to files (at say 200 MB).
FUSE
FUSE Hello World
Maybe you can check ChunkFS. I think this is what you need (I didn't try it).

What happens if there are too many files under a single directory in Linux?

If there are something like 1,000,000 individual files (mostly around 100 KB in size) in a single directory, flat (no other directories or files inside it), are there going to be any compromises in efficiency or disadvantages in any other possible ways?
ARG_MAX is going to take issue with that... for instance, rm -rf * (while in the directory) is going to say "argument list too long". Utilities that want to do some kind of globbing (or a shell) will have some functionality break.
If that directory is available to the public (let's say via FTP or a web server) you may encounter additional problems.
The effect on any given filesystem depends entirely on that filesystem. How frequently are these files accessed, and what is the filesystem? Remember, Linux (by default) prefers keeping recently accessed files in memory while putting processes into swap, depending on your settings. Is this directory served via HTTP? Is Google going to see and crawl it? If so, you might need to adjust VFS cache pressure and swappiness.
Edit:
ARG_MAX is a system-wide limit on the size of the argument list that can be presented to a program's entry point. So, let's take 'rm' and the example "rm -rf *": the shell is going to expand '*' into a space-delimited list of files, which in turn becomes the arguments to 'rm'.
The same thing is going to happen with ls and several other tools. For instance, ls foo* might break if too many files start with 'foo'.
I'd advise (no matter what fs is in use) to break it up into smaller directory chunks, just for that reason alone.
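For reference, the limit itself can be queried at runtime (getconf ARG_MAX prints the same number from the shell); a small sketch:

/* Print the exec argument-size limit discussed above. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long arg_max = sysconf(_SC_ARG_MAX);
    printf("ARG_MAX = %ld bytes\n", arg_max);
    return 0;
}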
My experience with large directories on ext3 and dir_index enabled:
If you know the name of the file you want to access, there is almost no penalty
If you want to do operations that need to read in the whole directory entry (like a simple ls on that directory) it will take several minutes for the first time. Then the directory will stay in the kernel cache and there will be no penalty anymore
If the number of files gets too high, you run into ARG_MAX et al problems. That basically means that wildcarding (*) does not always work as expected anymore. This is only if you really want to perform an operation on all the files at once
Without dir_index however, you are really screwed :-D
Most distros use Ext3 by default, which can use b-tree indexing for large directories.
Some distros have this dir_index feature enabled by default; in others you'd have to enable it yourself. If you enable it, there's no slowdown even for millions of files.
To see if the dir_index feature is activated, run (as root):
tune2fs -l /dev/sdaX | grep features
To activate the dir_index feature (as root):
tune2fs -O dir_index /dev/sdaX
e2fsck -D /dev/sdaX
Replace /dev/sdaX with partition for which you want to activate it.
When you accidentally execute "ls" in that directory, or use tab completion, or want to execute "rm *", you'll be in big trouble. In addition, there may be performance issues depending on your filesystem.
It's considered good practice to group your files into directories which are named by the first 2 or 3 characters of the filenames, e.g.
aaa/
  aaavnj78t93ufjw4390
  aaavoj78trewrwrwrwenjk983
  aaaz84390842092njk423
  ...
abc/
  abckhr89032423
  abcnjjkth29085242nw
  ...
...
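A hedged sketch of that bucketing scheme in C (the helper name is made up for the example): derive the subdirectory from the first few characters of the file name.

/* Build "aaa/aaavnj78t93ufjw4390"-style paths: the bucket is the first
 * (up to) three characters of the file name. */
#include <stdio.h>
#include <string.h>

static void bucket_path(const char *name, char *out, size_t outlen)
{
    size_t n = strlen(name) < 3 ? strlen(name) : 3;
    snprintf(out, outlen, "%.*s/%s", (int)n, name, name);
}

int main(void)
{
    char path[512];
    bucket_path("aaavnj78t93ufjw4390", path, sizeof path);
    printf("%s\n", path);   /* prints aaa/aaavnj78t93ufjw4390 */
    return 0;
}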
The obvious answer is that the folder will be extremely difficult for humans to use long before you hit any technical limit (the time taken to read the output of ls for one; there are dozens of other reasons). Is there a good reason why you can't split it into subfolders?
Not every filesystem supports that many files.
On some of them (ext2, ext3, ext4) it's very easy to hit the inode limit.
I've got a host with 10M files in a directory. (don't ask)
The filesystem is ext4.
It takes about 5 minutes to
ls
One limitation I've found is that my shell script to read the files (because AWS snapshot restore is a lie and files aren't present until first read) wasn't able to handle the argument list, so I needed to do two passes. First, construct a file list with find (-wholename in case you want to do partial matches):
find /path/to_dir/ -wholename '*.ldb' | tee filenames.txt
Then read from the file containing the filenames and read all the files (with limited parallelism):
while read -r line; do
    if test "$(jobs | wc -l)" -ge 10; then
        wait -n                  # keep at most 10 background jobs
    fi
    {
        # do something with the file in "$line", e.g. read it to force the restore
        cat "$line" > /dev/null
    } &
done < filenames.txt
Posting here in case anyone finds the specific work-around useful when working with too many files.
