Create a sparse file - Linux

I tried lseek, and dd if=/dev/urandom seek=7 bs=4096 count=2 of=fwh; neither produced a sparse file on my computer. The ls -lks result is:
44 -rw-rw-r-- 1 guangmu guangmu 36864 Aug 10 18:19 fwh
I tried rm-ing the file, rebooting, and even clearing the blocks used by fwh via debugfs. None of that helped.
My filesystem is ext4 and the OS is Ubuntu 14.04. Here is the result of sudo tune2fs -l /dev/sda5:
tune2fs 1.42.9 (4-Feb-2014)
Filesystem volume name: <none>
Last mounted on: /
Filesystem UUID: e051336c-6a7a-4683-9c24-1230676170b1
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 34594816
Block count: 138359808
Reserved block count: 6917990
Free blocks: 109566416
Free inodes: 33280312
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 991
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Mon Jan 20 20:56:20 2014
Last mount time: Mon Aug 11 11:08:31 2014
Last write time: Mon Aug 11 11:08:30 2014
Mount count: 387
Maximum mount count: -1
Last checked: Mon Jan 20 20:56:20 2014
Check interval: 0 (<none>)
Lifetime writes: 743 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
First orphan inode: 18877404
Default directory hash: half_md4
Directory Hash Seed: 780cc1b8-1fa1-4751-9385-270b563b29cd
Journal backup: inode blocks
Did I do something wrong?

No, that is correct. You have seeked (or sought, maybe?) 7 blocks of 4096 bytes into fwh and then written 2 blocks of 4096 bytes to it, so you would expect fwh to report a size of 9 blocks of 4096 bytes, which is 36,864 bytes - exactly as you have. Note that the 44 in the first column of ls -lks is the space actually allocated, in 1 KB units: if the 7 skipped blocks were real holes you would see roughly 8 there instead, so this file is not sparse at all.
Or have I missed something? What were you expecting?

It was my carelessness. :(
/home is mounted as eCryptfs, which doesn't support sparse files.
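For anyone hitting the same thing: a minimal sketch of how to check for sparseness, assuming an ext4 (or other hole-supporting) filesystem; the file name sparse-test and the sample numbers are illustrative, not from the original post:
$ dd if=/dev/urandom of=sparse-test bs=4096 seek=7 count=2   # skip 7 blocks, write 2
$ ls -lks sparse-test                  # first column is allocated KB: ~8 if sparse, ~40 if not
$ du -k --apparent-size sparse-test    # logical size: 36 KB either way
$ du -k sparse-test                    # allocated size: ~8 KB only when the holes are real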

Related

Size of directory .(DOT) does not decrease?

I am studying the Linux file system and ran an experiment to explore how Linux stores hard links.
I made 1000 hard links to a file in the same directory, and the size of . (dot) increased to 28672. Then I removed 500 of the hard links, but the size of . did not decrease. (I used "stat ." to check the size.) Why doesn't the size decrease?
This is my experiment:
I have a folder named test, which contains only one small file, testfile, and a script; the status was like this:
York:~/test$ ll -li
total 84
7995940 drwxr-xr-x 2 York domain_users 4096 Jul 17 19:20 ./
7995939 drwxr-xr-x 3 York domain_users 69632 Jul 17 19:20 ../
7996494 -rwxrwxrwx 1 York domain_users 94 Jul 17 19:14 copy.sh*
8026281 -rw-r--r-- 1 York domain_users 7 Jul 17 19:17 testfile
York:~/test$ stat .
File: `.'
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fc03h/64515d Inode: 7995940 Links: 2
Access: (0755/drwxr-xr-x) Uid: (2060469376/York) Gid: (2060452353/domain_users)
Access: 2015-07-17 19:20:06.288345960 +0200
Modify: 2015-07-17 19:20:05.420340318 +0200
Change: 2015-07-17 19:20:05.420340318 +0200
Birth: -
Then I ran the script:
for i in $(seq 200000 200999); do
    ln testfile "$i"    # each ln adds one more directory entry for the same inode
done
After that, I got the following result:
York:~/test$ stat .
File: `.'
Size: 28672 Blocks: 64 IO Block: 4096 directory
Device: fc03h/64515d Inode: 7995940 Links: 2
Access: (0755/drwxr-xr-x) Uid: (2060469376/York) Gid: (2060452353/domain_users)
Access: 2015-07-17 19:21:25.364862751 +0200
Modify: 2015-07-17 19:21:11.064768884 +0200
Change: 2015-07-17 19:21:11.064768884 +0200
Birth: -
And I could see that the link count was 1001, which was what I expected:
York:~/test$ ll -li testfile
8026281 -rw-r--r-- 1001 York domain_users 7 Jul 17 19:17 testfile
I used "rm" to remove 500 hard links, I saw:
York:~/test$ ll -li testfile
8026281 -rw-r--r-- 501 York domain_users 7 Jul 17 19:17 testfile
But the size of the directory did not decrease:
York:~/test$ stat .
File: `.'
Size: 28672 Blocks: 64 IO Block: 4096 directory
Device: fc03h/64515d Inode: 7995940 Links: 2
Access: (0755/drwxr-xr-x) Uid: (2060469376/York) Gid: (2060452353/domain_users)
Access: 2015-07-17 19:24:35.138125221 +0200
Modify: 2015-07-17 19:24:35.142125246 +0200
Change: 2015-07-17 19:24:35.142125246 +0200
Birth: -
My understanding of directories in the file system is this: for each directory, an inode is allocated to hold its attributes (owner, permissions, timestamps, and so on), and one or more data blocks hold the entries for the files and subdirectories inside it. Because each hard link needs its own entry, 1000 hard links need more space than one data block provides, so additional data blocks are allocated and the size of . (dot) increases. Vice versa, if I remove 500 hard links, the size should decrease.
But the experiment showed that the size did not decrease. Where am I wrong?
Thank you in advance!
Best Wishes,
York
What you're seeing is correct. Many Linux filesystems never shrink the size of a directory inode; they just blank out the entries for deleted files, allowing them to be reused if more files are added to the directory later. The only way to return the directory to its original size may be to delete it and create a new one with the same name.
Frequently, directory indexes are implemented as B-trees and, as a minor optimization, they can grow as needed but are never collapsed.
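A quick demonstration of that answer (a sketch, assuming ext4 or similar; the names d, f, and the exact sizes are illustrative): the directory grows when entries are added, keeps its size when they are removed, and only shrinks once recreated:
$ mkdir d && cd d && touch f
$ for i in $(seq 1000); do ln f "link$i"; done   # add 1000 directory entries
$ stat -c %s .    # grown, e.g. 28672
$ rm link*
$ stat -c %s .    # still 28672: the entries are blanked, not released
$ cd .. && mv d/f . && rmdir d && mkdir d && mv f d/
$ stat -c %s d    # back to 4096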

JFFS2 filesystem corrupts immediately (Magic bitmask 0x1985 not found errors)

I have created a root filesystem with buildroot that is using squashfs. It works fine, and now I would like to create an overlayfs, which would hold /home and /etc directories.
For this purpose, I wanted to create a simple jffs2 filesystem with a couple of files:
jlumme#simppa:~/projects/jffs2_home$ ls -la
total 20
drwxrwxr-x 4 jlumme jlumme 4096 Apr 21 16:21 .
drwxrwxr-x 6 jlumme jlumme 4096 Apr 21 16:21 ..
drwxrwxr-x 2 jlumme jlumme 4096 Apr 21 13:45 default
drwxrwxr-x 2 jlumme jlumme 4096 Apr 21 13:45 ftp
-rw-rw-r-- 1 jlumme jlumme 24 Apr 21 15:34 test.txt
The flash chip I use is an SST25VF064C, so I believe its erase block size is 64 KB, and thus I create a filesystem image from that folder:
mkfs.jffs2 -r jffs2_home/ -e 64 -o home.jffs2
$ ls -la
-rw-r--r-- 1 jlumme jlumme 496 Apr 21 15:42 home.jffs2
(Surprisingly, if I set -e 32, or even -e 4, the resulting binary image doesn't change at all?)
Nevertheless, moving on: I have aligned the mtdblock that contains home to 64 KB, and my flash layout looks like this:
uboot/<0x00000000 0x40000>
kernel/<0x00040000 0x3D9000>
dtb/<0x00419000 0x10000>
rootfs/<0x00429000 0x1F7000>
home/<0x00620000 0x1E0000>
On my board I can mount mtdblock4 fine and read the file contents properly. However, if I modify a file and try to save it, vi complains:
[ 77.030000] jffs2: Node totlen on flash (0xffffffff) != totlen from node ref (0x00000044)
Now, if I unmount the filesystem, and remount it, I start getting complaints immediately:
# mount -t jffs2 /dev/mtdblock4 /home/
[ 99.740000] jffs2: jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x001d4070: 0xff0a instead
[ 99.760000] jffs2: Empty flash at 0x001d4074 ends at 0x001d412c
[ 99.770000] jffs2: jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x001d412c: 0xffff instead
[ 99.790000] jffs2: Empty flash at 0x001d4130 ends at 0x001d4194
[ 99.790000] jffs2: jffs2_scan_eraseblock(): Magic bitmask 0x1985 not found at 0x001d4194: 0xff0a instead
I suppose my filesystem is now already corrupted, and I don't really understand the reason for it.
Any ideas where I am going wrong with this? Thanks for all suggestions.
This is what I did to solve the issue:
Updated to newer MTD drivers from http://www.linux-mtd.infradead.org/
- There was new code for the SST25VF064C chip
Made sure the area reserved for JFFS2 was initialized to 0xFF
(Possibly optional) Specified the creation of the jffs2 file system more precisely:
mkfs.jffs2 -e 64 -l -p -s 4096 -r jffs2_home/ -o home.jffs2
With these changes the file system now reads and writes as expected.
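For reference, a sketch of how the second step fits into deploying the image on the board, assuming the home partition is /dev/mtd4 (matching the layout above) and that mtd-utils is installed; flash_erase resets the whole partition to 0xFF, so JFFS2 never scans stale data left over from earlier writes:
$ flash_erase /dev/mtd4 0 0         # start at offset 0, 0 = erase all blocks
$ flashcp -v home.jffs2 /dev/mtd4   # write and verify the image (NOR flash)
$ mount -t jffs2 /dev/mtdblock4 /home/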

Database backups not writing to disc, not enough space?

I just inherited an AIX project which I know very little about. A cronjob that does a full backup of my database (DB2) has been failing for a few days now. Looking at the logs, I'm seeing this:
SQL2419N The target disk "/home/dbtmp/backups" has become full.
When checking out this directory:
(/var/spool/cron)> df -g /home/dbtmp
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/dbtmplv 10.00 0.96 91% 85 1% /home/dbtmp
The size of the previous backups:
(/var/spool/cron)> ll /home/dbtmp/backups
total 18365248
-rw------- 1 hsprd cics 4411498496 Feb 12 18:01 HSPRD.0.hsprd.NODE0000.CATN0000.20130212180036.001
-rw------- 1 hstrn cics 874287104 Feb 12 18:08 HSTRN.0.hstrn.NODE0000.CATN0000.20130212180747.001
-rw------- 1 hstst cics 3242835968 Feb 12 18:05 HSTST.0.hstst.NODE0000.CATN0000.20130212180443.001
What options do I have to fix this? Thank you.
As you can see, the size of your backup files exceeds the free space on the device: the three backups from Feb 12 alone total roughly 8 GB, while /home/dbtmp is a 10 GB filesystem with only 0.96 GB free, so the next full backup cannot fit. You need a larger device, or to prune old backups before each run.
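A sketch of how you might confirm the numbers and buy space, assuming the volume group has free partitions to grow into (the chfs +5G syntax needs a reasonably recent AIX release; older ones take the size in 512-byte blocks):
$ du -sk /home/dbtmp/backups     # total KB used by the existing backups
# move or delete old backups elsewhere, then grow the filesystem:
$ chfs -a size=+5G /home/dbtmp   # AIX: extend /home/dbtmp by 5 GB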

Why doesn't "total" from ls -l add up to total file sizes listed? [closed]

Why is the total in the output of ls -l printed as 64 and not 26078 which is the total of all files listed?
$ ls -l ~/test/ls
total 64
-rw-r--r-- 1 root root 15276 Oct 5 2004 a2ps.cfg
-rw-r--r-- 1 root root 2562 Oct 5 2004 a2ps-site.cfg
drwxr-xr-x 4 root root 4096 Feb 2 2007 acpi
-rw-r--r-- 1 root root 48 Feb 8 2008 adjtime
drwxr-xr-x 4 root root 4096 Feb 2 2007 alchemist
You can find the definition of that line in the ls documentation for your platform. For coreutils ls (the one found on a lot of Linux systems), the information can be found via info coreutils ls:
For each directory that is listed, preface the files with a line
`total BLOCKS', where BLOCKS is the total disk allocation for all
files in that directory.
The Formula: What is that number?
total = SUM over all listed files of (physical_blocks_in_use * physical_block_size / ls_block_size)
Where:
ls_block_size is the unit ls reports in (normally 512 or 1024 bytes). It is freely modifiable with the --block-size=<int> flag on ls, with the POSIXLY_CORRECT=1 GNU environment variable (to get 512-byte units), or with the -k flag to force 1 KB units.
physical_block_size is the OS-dependent value of an internal block interface, which may or may not be connected to the underlying hardware. This value is normally 512 B or 1 KB, but is completely dependent on the OS. It can be revealed through the %B value of stat or fstat. Note that this value is (almost always) unrelated to the number of physical blocks on a modern storage device.
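To see the unit dependence concretely, here is a sketch with GNU ls, run in the total 64 directory from the question; each variant reports the same allocation, just in different units:
$ ls -l | head -n 1                     # default 1 KB units
total 64
$ POSIXLY_CORRECT=1 ls -l | head -n 1   # 512-byte units: twice the count
total 128
$ ls -l --block-size=4096 | head -n 1   # 4 KB units
total 16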
Why so confusing?
This number is fairly detached from any physical or meaningful metric. Many junior programmers haven't had experience with file holes or hard/sym links, and the documentation available on this specific topic is virtually non-existent.
The disjointedness and ambiguity of the term "block size" is a result of numerous different measures being easily confused, and of the relatively deep levels of abstraction involved in disk access.
Examples of conflicting information: du (or ls -s) vs stat
Running du * in a project folder yields the following (note: ls -s returns the same results):
dactyl:~/p% du *
2 check.cc
2 check.h
1 DONE
3 Makefile
3 memory.cc
5 memory.h
26 p2
4 p2.cc
2 stack.cc
14 stack.h
Total: 2+2+1+3+3+5+26+4+2+14 = 62 Blocks
Yet when one runs stat we see a different set of values. Running stat in the same directory yields:
dactyl:~/p% stat * --printf="%b\t(%B)\t%n: %s bytes\n"
3 (512) check.cc: 221 bytes
3 (512) check.h: 221 bytes
1 (512) DONE: 0 bytes
5 (512) Makefile: 980 bytes
6 (512) memory.cc: 2069 bytes
10 (512) memory.h: 4219 bytes
51 (512) p2: 24884 bytes
8 (512) p2.cc: 2586 bytes
3 (512) stack.cc: 334 bytes
28 (512) stack.h: 13028 bytes
Total: 3+3+1+5+6+10+51+8+3+28 = 118 Blocks
Note: You can use the command stat * --printf="%b\t(%B)\t%n: %s bytes\n" to output (in order) the number of blocks, (in parentheses) the size of those blocks, the name of the file, and the size in bytes, as shown above.
There are two important takeaways:
stat reports both the physical_blocks_in_use and the physical_block_size used in the formula above. Note that these values come from OS interfaces.
du provides what is generally accepted as a fairly accurate estimate of physical disk utilization.
For reference, here is the ls -l of the directory above. Note the units: stat reported 118 blocks of 512 bytes, which is 118 * 512 / 1024 = 59 of the 1 KB units ls uses by default - exactly the total below:
dactyl:~/p% ls -l
total 59
-rw-r--r--. 1 dhs217 grad 221 Oct 16 2013 check.cc
-rw-r--r--. 1 dhs217 grad 221 Oct 16 2013 check.h
-rw-r--r--. 1 dhs217 grad 0 Oct 16 2013 DONE
-rw-r--r--. 1 dhs217 grad 980 Oct 16 2013 Makefile
-rw-r--r--. 1 dhs217 grad 2069 Oct 16 2013 memory.cc
-rw-r--r--. 1 dhs217 grad 4219 Oct 16 2013 memory.h
-rwxr-xr-x. 1 dhs217 grad 24884 Oct 18 2013 p2
-rw-r--r--. 1 dhs217 grad 2586 Oct 16 2013 p2.cc
-rw-r--r--. 1 dhs217 grad 334 Oct 16 2013 stack.cc
-rw-r--r--. 1 dhs217 grad 13028 Oct 16 2013 stack.h
That is the total number of file system blocks, including indirect blocks, used by the listed files. If you run ls -s on the same files and sum the reported numbers you'll get that same number.
Just to mention: you can use -h (ls -lh) to print this in human-readable format.
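You can verify the relationship yourself; a sketch with GNU coreutils, run in the same directory (the awk simply skips the total line that ls -s prints first). Summing the per-file block counts from ls -s reproduces the total from ls -l:
$ ls -l | head -n 1
total 59
$ ls -s | awk 'NR > 1 { sum += $1 } END { print sum }'
59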

Knowing a device special file's major and minor numbers in Linux

All files in /dev are special files... they represent devices of the computer.
They were created with the mknod syscall. My question is: how can I find the major and minor numbers that were used to create a given special file?
The list is called the LANANA Linux Device List, and it is administered by Alan Cox.
You can find the latest copy online (direct link), or in the Linux source. Its filename in the kernel tree is Documentation/devices.txt.
To see the major and minor numbers of a node in /dev (or any device node, for that matter), simply use ls with the -l option:
22:26 jsmith#undertow% ls -l /dev/xvd?
brw-rw---- 1 root disk 202, 0 Nov 1 20:31 /dev/xvda
brw-rw---- 1 root disk 202, 16 Nov 1 20:31 /dev/xvdb
brw-rw---- 1 root disk 202, 32 Nov 1 20:31 /dev/xvdc
In this example, 202 is the three devices' major number, and 0, 16, and 32 are minors. The b at left indicates that the node is a block device. The alternative is c, a character device:
crw-rw-rw- 1 root tty 5, 0 Nov 22 00:29 /dev/tty
$ ls -l /dev/fd0 /dev/null
brw-rw---- 1 root floppy 2, 0 Nov 22 19:48 /dev/fd0
crw-rw-rw- 1 root root 1, 3 Nov 22 19:48 /dev/null
$ stat -c '%n: %F, major %t minor %T' /dev/fd0 /dev/null
/dev/fd0: block special file, major 2 minor 0
/dev/null: character special file, major 1 minor 3
Most device numbers are fixed (i.e. /dev/null will always be character device 1:3) but on Linux, some are dynamically allocated.
$ cat /proc/devices
Character devices:
...
10 misc
...
Block devices:
...
253 mdp
254 device-mapper
$ cat /proc/misc
...
57 device-mapper
...
For example, on this system, it just so happens that /dev/mapper/control will be c:10:57 while the rest of /dev/mapper/* will be b:254:*, and this could differ from one boot cycle to another -- or even as modules are loaded/unloaded and devices are added/removed.
You can explore these device registrations further in /sys.
$ readlink /sys/dev/block/2:0
../../devices/platform/floppy.0/block/fd0
$ cat /sys/devices/platform/floppy.0/block/fd0/dev
2:0
$ readlink /sys/dev/char/1:3
../../devices/virtual/mem/null
$ cat /sys/devices/virtual/mem/null/dev
1:3
You can also use stat; note that %t and %T print the numbers in hexadecimal:
$ stat -c 'major: %t minor: %T' <file>
For block devices specifically, lsblk shows them directly:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 90G 0 disk
├─sda1 8:1 0 4G 0 part [SWAP]
├─sda2 8:2 0 4G 0 part /
An alternative that doesn't depend on stat:
$ cat /sys/class/*/random/dev
1:8
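And to close the loop with mknod from the question: once you know a device's type and numbers, you can recreate an equivalent node anywhere, which shows that the numbers, not the name or path, identify the device. A sketch; it requires root, and /tmp/mynull is an arbitrary name:
$ sudo mknod /tmp/mynull c 1 3     # character device, major 1, minor 3
$ ls -l /tmp/mynull                # shows c and "1, 3" - the same identity as /dev/null
$ echo discarded | sudo tee /tmp/mynull > /dev/null   # writes vanish, just like /dev/null
$ sudo rm /tmp/mynull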
