Why does rsync --sparse produce bigger qcow2 files than the original qcow2 files? - linux

Problem:
When I copy the whole disk holding my virtual machines onto an empty disk with rsync --sparse, the disk images (qcow2 files) on the new disk are bigger than the original files.
Old Disk:
/dev/sda1 => /ssdstor
New Disk:
/dev/sdb1 => /new
Details:
Hardware:
2x SSD Crucial M500 960GB Firmware MU5
OS: Proxmox 3.4
Filesystem: XFS
Command:
rsync -axHv --force --progress --stats --sparse /ssdstor/ /new/
Rsync Version:
dpkg -l | grep rsync
ii rsync 993.1.1-1 amd64 fast, versatile, remote (and local) file-copying tool
File / disk comparison after the first copy
(to check everything was transferred correctly):
rsync -axHv --dry-run --force --progress --stats --sparse /ssdstor/ /new/
sending incremental file list
Number of files: 90,545 (reg: 70,269, dir: 9,395, link: 10,817, dev: 4, special: 60)
Number of created files: 0
Number of deleted files: 0
Number of regular files transferred: 0
Total file size: 634,456,255,674 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 65,536
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 2,097,654
Total bytes received: 9,993
sent 2,097,654 bytes received 9,993 bytes 1,405,098.00 bytes/sec
total size is 634,456,255,674 speedup is 301,025.86 (DRY RUN)
mount | egrep '(sda|sdb)'
/dev/sda1 on /ssdstor type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)
/dev/sdb1 on /new type xfs (rw,noatime,nodiratime,attr2,inode64,noquota)
df -h | egrep '(sda|sdb)'
/dev/sda1 894G 388G 506G 44% /ssdstor
/dev/sdb1 894G 430G 465G 49% /new
ls -alshR /ssdstor | grep qcow2
77G -rw-r--r-- 1 root root 103G Jul 14 09:09 vm-100-disk-1.qcow2
6,2G -rw-r--r-- 1 root root 14G Jul 14 09:07 vm-101-disk-1.qcow2
2,0G -rw-r--r-- 1 root root 4,1G Jul 14 09:07 vm-101-disk-2.qcow2
17G -rw-r--r-- 1 root root 61G Feb 18 09:10 vm-102-disk-1.qcow2
40G -rw-r--r-- 1 root root 78G Jul 14 09:06 vm-103-disk-1.qcow2
40G -rw-r--r-- 1 root root 41G Jul 14 09:05 vm-103-disk-2.qcow2
31G -rw-r--r-- 1 root root 44G Jul 14 09:05 vm-104-disk-1.qcow2
5,2G -rw-r--r-- 1 root root 41G Mai 1 01:00 vm-105-disk-2.qcow2
63G -rw-r--r-- 1 root root 65G Jul 14 10:04 vm-106-disk-1.qcow2
26G -rw-r--r-- 1 root root 65G Jul 14 09:14 vm-107-disk-2.qcow2
51G -rw-r--r-- 1 root root 51G Mai 19 21:21 vm-108-disk-1.qcow2
ls -alshR /new | grep qcow2
79G -rw-r--r-- 1 root root 103G Jul 14 09:09 vm-100-disk-1.qcow2
6,2G -rw-r--r-- 1 root root 14G Jul 14 09:07 vm-101-disk-1.qcow2
2,0G -rw-r--r-- 1 root root 4,1G Jul 14 09:07 vm-101-disk-2.qcow2
17G -rw-r--r-- 1 root root 61G Feb 18 09:10 vm-102-disk-1.qcow2
40G -rw-r--r-- 1 root root 78G Jul 14 09:06 vm-103-disk-1.qcow2
41G -rw-r--r-- 1 root root 41G Jul 14 09:05 vm-103-disk-2.qcow2
37G -rw-r--r-- 1 root root 44G Jul 14 09:05 vm-104-disk-1.qcow2
34G -rw-r--r-- 1 root root 41G Mai 1 01:00 vm-105-disk-2.qcow2
63G -rw-r--r-- 1 root root 65G Jul 14 10:04 vm-106-disk-1.qcow2
33G -rw-r--r-- 1 root root 65G Jul 14 09:14 vm-107-disk-2.qcow2
51G -rw-r--r-- 1 root root 51G Mai 19 21:21 vm-108-disk-1.qcow2
Does anyone have an idea?
More Tests:
cp --sparse=always vm-105-disk-2.qcow2 vm-105-disk-2.qcow2.new
5,2G -rw-r--r-- 1 root root 41G Jul 16 08:07 vm-105-disk-2.qcow2
34G -rw-r--r-- 1 root root 41G Jul 16 11:51 vm-105-disk-2.qcow2.new
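For reference, a quick way to compare a file's apparent size with the blocks actually allocated on disk (a generic sketch; only the file name is taken from the listings above, and the sizes in the comments are what those listings suggest, not measured output):
# run in the directory that holds the image
du -h --apparent-size vm-105-disk-2.qcow2   # logical size (~41G, the ls size column)
du -h vm-105-disk-2.qcow2                   # blocks actually allocated (~5.2G on the source disk)
stat -c '%n: %s bytes, %b blocks of %B bytes' vm-105-disk-2.qcow2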

Related

Unexpected bash readable test result with GitHub Actions

MWE
I have a GitHub Actions workflow that recently stopped working without me making changes. The error:
==> ERROR: /etc/makepkg.conf not found.
Aborting...
This is from running sudo -Eu builder makepkg --printsrcinfo.
The order of the logs seems to be wrong, but it is correct in an earlier log (possibly due to ls -l having a large output).
The source of this error seems to be libmakepkg/util/config.sh.in:
# Source the config file; fail if it is not found
if [[ -r $MAKEPKG_CONF ]]; then
	source_safe "$MAKEPKG_CONF"
else
	error "$(gettext "%s not found.")" "$MAKEPKG_CONF"
	plainerr "$(gettext "Aborting...")"
	exit $E_CONFIG_ERROR
fi
I added the following to my entrypoint script:
echo "Writing SRCINFO..."
# Debug
echo "---"
ls -l /
echo "---"
ls -l /etc
echo "---"
sudo -Eu builder cat /etc/makepkg.conf
echo "---"
sudo -Eu builder /bin/bash -c "[[ -r "/etc/makepkg.conf" ]] && echo 1 || echo 0"
echo "---"
sudo -Eu builder makepkg --printsrcinfo > .SRCINFO
The builder user is created in build.sh:
useradd builder -m
echo "builder ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
I got:
Setting permissions...
Writing SRCINFO...
---
total 52
lrwxrwxrwx 1 root root 7 Feb 1 19:19 bin -> usr/bin
drwxr-xr-x 2 root root 4096 Jan 19 01:32 boot
drwxr-xr-x 5 root root 340 Feb 7 11:58 dev
-rwxr-xr-x 1 root root 836 Feb 7 11:57 entrypoint.sh
drwxr-xr-x 1 root root 4096 Feb 7 11:58 etc
drwxr-xr-x 6 root root 4096 Feb 7 11:58 github
drwxr-xr-x 1 root root 4096 Feb 7 11:57 home
lrwxrwxrwx 1 root root 7 Feb 1 19:19 lib -> usr/lib
lrwxrwxrwx 1 root root 7 Feb 1 19:19 lib64 -> usr/lib
drwxr-xr-x 2 root root 4096 Jan 19 01:32 mnt
drwxr-xr-x 2 root root 4096 Jan 19 01:32 opt
dr-xr-xr-x 159 root root 0 Feb 7 11:58 proc
drwxr-x--- 2 root root 4096 Jan 19 01:32 root
drwxr-xr-x 1 root root 4096 Feb 7 11:58 run
lrwxrwxrwx 1 root root 7 Feb 1 19:19 sbin -> usr/bin
drwxr-xr-x 4 root root 4096 Feb 1 19:19 srv
dr-xr-xr-x 12 root root 0 Feb 7 11:58 sys
drwxrwxrwt 2 root root 4096 Jan 19 01:32 tmp
drwxr-xr-x 1 root root 4096 Feb 7 11:57 usr
drwxr-xr-x 1 root root 4096 Feb 1 19:19 var
---
total 640
-rw-r--r-- 1 root root 0 Jan 19 01:32 arch-release
drwxr-xr-x 3 root root 4096 Feb 1 19:19 audit
-rw-r--r-- 1 root root 28 Dec 20 18:44 bash.bash_logout
-rw-r--r-- 1 root root 618 Dec 20 18:44 bash.bashrc
-rw-r--r-- 1 root root 447 Dec 2 16:02 bindresvport.blacklist
drwxr-xr-x 2 root root 4096 Dec 16 14:38 binfmt.d
drwxr-xr-x 4 root root 4096 Feb 1 19:19 ca-certificates
-rw------- 1 root root 722 Jan 19 01:32 crypttab
drwxr-xr-x 2 root root 4096 Feb 1 19:19 default
drwxr-xr-x 2 root root 4096 Jan 7 19:51 depmod.d
-rw-r--r-- 1 root root 685 Jan 31 20:31 e2scrub.conf
-rw-r--r-- 1 root root 97 Jan 13 22:50 environment
-rw-r--r-- 1 root root 1362 Jan 20 21:31 ethertypes
-rw-r--r-- 1 root root 126 Jan 19 01:32 fstab
-rw-r--r-- 1 root root 2584 Feb 6 00:09 gai.conf
-rw-r--r-- 1 root root 626 Feb 7 11:57 group
-rw-r--r-- 1 root root 610 Jan 31 00:20 group-
-rw------- 1 root root 558 Feb 7 11:57 gshadow
-rw------- 1 root root 546 Jan 31 00:20 gshadow-
-rw-r--r-- 1 root root 73 Jan 19 01:32 host.conf
-rw-r--r-- 1 root root 13 Feb 7 11:58 hostname
-rw-r--r-- 1 root root 174 Feb 7 11:58 hosts
-rw-r--r-- 1 root root 714 Dec 8 17:48 inputrc
drwxr-xr-x 2 root root 4096 Feb 1 19:19 iproute2
drwxr-xr-x 2 root root 4096 Feb 1 19:19 iptables
-rw-r--r-- 1 root root 20 Jan 19 01:32 issue
drwxr-xr-x 3 root root 4096 Feb 1 19:19 kernel
drwxr-xr-x 2 root root 4096 Jul 7 2020 keyutils
-rw-r--r-- 1 root root 369 Jan 14 00:32 krb5.conf
-rw-r--r-- 1 root root 18096 Feb 7 11:57 ld.so.cache
-rw-r--r-- 1 root root 117 Jan 19 01:32 ld.so.conf
drwxr-xr-x 1 root root 4096 Feb 7 11:57 ld.so.conf.d
-rw-r----- 1 root root 191 Jan 13 22:33 libaudit.conf
drwxr-xr-x 2 root root 4096 Feb 1 19:19 libnl
-rw-r--r-- 1 root root 17 Jan 31 00:19 locale.conf
-rw-r--r-- 1 root root 18 Jan 31 00:19 locale.gen
-rw-r--r-- 1 root root 9984 Feb 6 00:09 locale.gen.pacnew
-rw-r--r-- 1 root root 5645 Sep 7 13:42 login.defs
-rw-r--r-- 1 root root 5792 Jul 1 2020 makepkg.conf
-rw-r--r-- 1 root root 812 Jan 31 20:31 mke2fs.conf
drwxr-xr-x 2 root root 4096 Jan 7 19:51 modprobe.d
drwxr-xr-x 2 root root 4096 Dec 16 14:38 modules-load.d
-rw-r--r-- 1 root root 0 Jan 19 01:32 motd
lrwxrwxrwx 1 root root 12 Feb 7 11:58 mtab -> /proc/mounts
-rw-r--r-- 1 root root 767 Dec 2 16:02 netconfig
-rw-r--r-- 1 root root 2717 Feb 6 00:09 nscd.conf
-rw-r--r-- 1 root root 328 Jan 19 01:32 nsswitch.conf
drwxr-xr-x 1 root root 4096 Feb 7 11:57 openldap
lrwxrwxrwx 1 root root 19 Feb 1 19:19 os-release -> /usr/lib/os-release
-rw-r--r-- 1 root root 3264 Feb 7 11:57 pacman.conf
-rw-r--r-- 1 root root 2883 Jul 1 2020 pacman.conf.pacnew
drwxr-xr-x 1 root root 4096 Feb 7 11:57 pacman.d
drwxr-xr-x 1 root root 4096 Feb 7 11:57 pam.d
-rw-r--r-- 1 root root 744 Feb 7 11:57 passwd
-rw-r--r-- 1 root root 699 Jan 31 00:20 passwd-
drwxr-xr-x 2 root root 4096 Feb 1 19:19 pkcs11
-rw-r--r-- 1 root root 1020 Jan 19 01:32 profile
drwxr-xr-x 1 root root 4096 Feb 7 11:57 profile.d
-rw-r--r-- 1 root root 3171 Jan 3 17:14 protocols
-rw-r--r-- 1 root root 1814 Jul 7 2020 request-key.conf
drwxr-xr-x 2 root root 4096 Jul 7 2020 request-key.d
-rw-r--r-- 1 root root 649 Feb 7 11:58 resolv.conf
-rw-r--r-- 1 root root 1634 Feb 6 00:09 rpc
-rw-r--r-- 1 root root 139 Jan 19 01:32 securetty
drwxr-xr-x 2 root root 4096 Feb 1 19:19 security
-rw-r--r-- 1 root root 297708 Jan 3 17:14 services
-rw------- 1 root root 375 Feb 7 11:57 shadow
-rw------- 1 root root 346 Jan 31 00:20 shadow-
-rw-r--r-- 1 root root 83 Jan 19 01:32 shells
drwxr-xr-x 2 root root 4096 Feb 1 19:19 skel
drwxr-xr-x 5 root root 4096 Feb 1 19:19 ssl
-rw-r--r-- 1 root root 3975 Jan 26 18:34 sudo.conf
-r--r----- 1 root root 3160 Feb 7 11:57 sudoers
drwxr-x--- 2 root root 4096 Jan 26 18:34 sudoers.d
-rw-r--r-- 1 root root 6169 Jan 26 18:34 sudo_logsrvd.conf
drwxr-xr-x 2 root root 4096 Dec 16 14:38 sysctl.d
drwxr-xr-x 1 root root 4096 Feb 7 11:57 systemd
drwxr-xr-x 2 root root 4096 Dec 16 14:38 tmpfiles.d
drwxr-xr-x 1 root root 4096 Feb 7 11:57 udev
drwxr-xr-x 1 root root 4096 Feb 1 19:19 X11
-rw-r--r-- 1 root root 642 May 7 2020 xattr.conf
drwxr-xr-x 1 root root 4096 Feb 1 19:19 xdg
drwxr-xr-x 2 root root 4096 Feb 1 19:19 xinetd.d
---
#!/hint/bash
#
# /etc/makepkg.conf
#
...
SRCEXT='.src.tar.gz'
---
0
---
==> ERROR: /etc/makepkg.conf not found.
Aborting...
(The full contents of /etc/makepkg.conf were printed by cat; abbreviated above.)
I also accidentally did [[ -r "/etc/makepkg.conf" ]] && echo 1 || echo 0 (as root) and I also got 0.
How is it possible that a file is not readable yet I can cat it? I also tried running the exact same commands in a local container and couldn't reproduce this issue, but this has happened every GitHub Actions run since it started.
This makes me think the Actions setup is causing an issue, but nothing seems odd there:
/usr/bin/docker build -t 442333:35a065f0b9f356b32f2852ba2f6b7296 -f "/home/runner/work/visual-studio-code-insiders-arch/visual-studio-code-insiders-arch/./.github/actions/pkg/Dockerfile" "/home/runner/work/visual-studio-code-insiders-arch/visual-studio-code-insiders-arch/.github/actions/pkg"
/usr/bin/docker run --name a065f0b9f356b32f2852ba2f6b7296_baf94b --label 442333 --workdir /github/workspace --rm -e pythonLocation -e LD_LIBRARY_PATH -e HOME -e GITHUB_JOB -e GITHUB_REF -e GITHUB_SHA -e GITHUB_REPOSITORY -e GITHUB_REPOSITORY_OWNER -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RETENTION_DAYS -e GITHUB_ACTOR -e GITHUB_WORKFLOW -e GITHUB_HEAD_REF -e GITHUB_BASE_REF -e GITHUB_EVENT_NAME -e GITHUB_SERVER_URL -e GITHUB_API_URL -e GITHUB_GRAPHQL_URL -e GITHUB_WORKSPACE -e GITHUB_ACTION -e GITHUB_EVENT_PATH -e GITHUB_ACTION_REPOSITORY -e GITHUB_ACTION_REF -e GITHUB_PATH -e GITHUB_ENV -e RUNNER_OS -e RUNNER_TOOL_CACHE -e RUNNER_TEMP -e RUNNER_WORKSPACE -e ACTIONS_RUNTIME_URL -e ACTIONS_RUNTIME_TOKEN -e ACTIONS_CACHE_URL -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/visual-studio-code-insiders-arch/visual-studio-code-insiders-arch":"/github/workspace" 442333:35a065f0b9f356b32f2852ba2f6b7296
Thanks for sharing this. I have precisely the same problems.
I added the following in my Dockerfile
RUN sed -i 's/\tif \[\[ -r $MAKEPKG_CONF \]\]; then/\tif \[\[ -f $MAKEPKG_CONF \]\]; then/' /usr/share/makepkg/util/config.sh
which replaces the check for read permission (-r) with a check for whether the file exists and is a regular file (-f).
Now my GitHub Action gets past this particular check, but fails at the next one:
==> ERROR: You do not have write permission for the directory $BUILDDIR (/tmp/aurutils).
Aborting...
This is not a solution and does not explain the underlying issue, but I hope it helps anyway.
This issue is caused by using glibc >= 2.33 in the container together with an outdated Docker engine on the host: glibc 2.33 performs the access check behind bash's -r test via the new faccessat2 syscall, which the old default seccomp profile rejects with EPERM, so the test fails even though the file can still be opened and read (which is why cat works).
You can fix it by patching glibc in your container:
patched_glibc=glibc-linux4-2.33-4-x86_64.pkg.tar.zst
curl -LO https://repo.archlinuxcn.org/x86_64/$patched_glibc
bsdtar -C / -xvf $patched_glibc
Thanks to lxqt-panel for the workaround.
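To confirm this is the cause before patching anything, a rough diagnostic sketch (not from the thread) is to compare the container's glibc with the Docker engine on the runner:
# inside the container
ldd --version | head -n1     # glibc >= 2.33 is the trigger
# on the Actions runner host (a separate, non-container step)
docker --version             # an old engine ships a seccomp profile that rejects
                             # faccessat2 with EPERM, making bash's -r test fail
                             # while plain open()/cat still works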

about /etc/fstab and chown

I have mounted /dev/sdb2 to /base in /etc/fstab, like this:
/dev/sda1 /base ntfs defaults,user,errors=remount-ro 0 0
Then /base looks like this:
-rwxrwxrwx 1 root root 29696 Oct 19 00:11 Mac-OS-Lion(Docky).tar*
drwxrwxrwx 1 root root 0 Dec 8 18:13 MySQL/
Then I try to chown:
DevOps mysql # ll /base/MySQL/
total 40
drwxrwxrwx 1 root root 0 Dec 8 18:13 ./
drwxrwxrwx 1 root root 20480 Dec 16 10:40 ../
drwxrwxrwx 1 root root 20480 Dec 8 18:14 mysql_data/
drwxrwxrwx 1 root root 0 Dec 8 10:50 mysql_log/
DevOps mysql # chown -R mysql:mysql /base/MySQL/
DevOps mysql # ll /base/MySQL/
total 40
drwxrwxrwx 1 root root 0 Dec 8 18:13 ./
drwxrwxrwx 1 root root 20480 Dec 16 10:40 ../
drwxrwxrwx 1 root root 20480 Dec 8 18:14 mysql_data/
drwxrwxrwx 1 root root 0 Dec 8 10:50 mysql_log/
Is something wrong?
The issue is due to the NTFS format: NTFS has no generic Unix permission handling, so you should use masks to define the permissions, for example:
/dev/sda1 /base ntfs defaults,utf8,uid=1000,gid=1000,dmask=022,fmask=133 0 0
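A minimal sketch of applying the new options without rebooting (assuming the /base line above is the entry you edited, and running as root as in the question):
umount /base
mount /base             # re-reads the entry from /etc/fstab
ls -l /base/MySQL/      # should now show uid 1000 / gid 1000 with modes derived from dmask/fmask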

ls -h reporting more disk usage than du -h, how?

If I do an ls -h, I get a total of 126 GB, whereas du -h reports half of it: 63 GB.
It's a directory with 24 files. If I add up all the individual file sizes I get a total of 126 GB. There are no symbolic links.
What's causing the difference?
ls -alh
total 126G
drwxrwxrwx 3 root root 4.0K Dec 11 12:48 .
drwxrwxrwx 3 root root 4.0K May 19 2008 ..
-rw-rw-rw- 1 root root 0 Dec 11 10:28 auto-opschoning.errtmp
-rw-rw-rw- 1 root root 11M Dec 11 12:33 auto-opschoning.logtmp
drwxrwxrwx 2 root root 4.0K Feb 19 2016 backup
-rw-rw-rw- 2 root root 9.7M Dec 11 12:48 batchkop
-rw-rw-rw- 2 root root 9.7M Dec 11 12:48 batchkop.his
-rw-rw-rw- 2 root root 9.2G Dec 11 12:48 dispudet
-rw-rw-rw- 2 root root 9.2G Dec 11 12:48 dispudet.his
-rw-rw-rw- 2 root root 1.2G Dec 11 12:48 dispukop
-rw-rw-rw- 2 root root 1.2G Dec 11 12:48 dispukop.his
-rw-rw-rw- 2 root root 765M Dec 11 12:48 loktrail
-rw-rw-rw- 2 root root 765M Dec 11 12:48 loktrail.his
-rw-rw-rw- 2 root root 19G Dec 11 12:48 orddet
-rw-rw-rw- 2 root root 19G Dec 11 12:48 orddet.his
-rw-rw-rw- 2 root root 4.1G Dec 11 12:48 orddetkl
-rw-rw-rw- 2 root root 4.1G Dec 11 12:48 orddetkl.his
-rw-rw-rw- 2 root root 977M Dec 11 12:48 ordkop
-rw-rw-rw- 2 root root 977M Dec 11 12:48 ordkop.his
-rw-rw-rw- 2 root root 12G Dec 11 12:48 trail
-rw-rw-rw- 2 root root 12G Dec 11 12:48 trail.his
-rw-rw-rw- 2 root root 5.7G Dec 11 12:48 verzdud
-rw-rw-rw- 2 root root 7.4G Dec 11 12:48 verzdudd
-rw-rw-rw- 2 root root 7.4G Dec 11 12:48 verzdudd.his
-rw-rw-rw- 2 root root 5.7G Dec 11 12:48 verzdud.his
-rw-rw-rw- 2 root root 251M Dec 11 12:48 verzduk
-rw-rw-rw- 2 root root 251M Dec 11 12:48 verzduk.his
-rw-rw-rw- 2 root root 3.5G Dec 11 12:48 voorsnap
-rw-rw-rw- 2 root root 3.5G Dec 11 12:48 voorsnap.his
du -h
4.0K ./backup
63G .
I think the difference here is related to the kind of files whose space you are measuring.
Some files are sparse files.
Sparse files are files whose space is not fully physically allocated (it is allocated virtually, not physically).
They are used a lot as virtual machine storage files, and some data structures need them.
You can use dd to create a sparse file and test with it.
Check this example I just did:
h#localhost:~$ mkdir test
h#localhost:~$ cd test/
h#localhost:~/test$ dd if=/dev/zero of=file.img bs=1 count=0 seek=512M
0+0 records in
0+0 records out
0 bytes (0 B) copied, 0.000214033 s, 0.0 kB/s
h#localhost:~/test$ ls -h
file.img
h#localhost:~/test$ ls -alh
total 8.0K
drwxr-xr-x 2 h h 4.0K Dec 16 14:04 .
drwxr-xr-x 3 h h 4.0K Dec 16 14:02 ..
-rw-r--r-- 1 h h 512M Dec 16 14:04 file.img
h#localhost:~/test$ du -c
4 .
4 total
h#localhost:~/test$
And as the link posted in the comments says, the difference between ls -h and du -c is that du -c reports the actual used space, not the virtually allocated space, while ls -h gives the virtually allocated (apparent) size.
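To make that comparison explicit on the file created above, a short follow-up sketch (the exact numbers will differ slightly on other systems):
ls -ls file.img                  # first column: allocated blocks (~0); size column: 512M
du -h file.img                   # actual allocation, effectively 0
du -h --apparent-size file.img   # apparent size, 512M -- the figure ls reports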

Wget occasionally failed to download some file

I'm using the following command to download some files from another machine.
It works fine when the senselist.txt files are small (<1 GB maybe; no accurate number, just an estimate).
When the files grow bigger, it fails to download one of the senselist.txt files.
wget -r -N -l inf -nd -np -q --accept=senselist.txt* \
-t 30 -T 60 -c --limit-rate=100M \
ftp://ftp:ftp@hostname/home/work/data/20150922 \
-P ./data_tmp/20150921000000/tmp_dir
Currently there are eight senselist.txt files (each with an .md5 file) on the source machine. Sometimes I fail to download senselist.txt4, but it can't be reproduced reliably; it happens only occasionally.
How can I fix this, or how can I find the reason for it?
-rw-rw-rw- 1 root root 1.4G Sep 21 19:04 senselist.txt0
-rw-rw-rw- 1 root root 100 Sep 21 19:04 senselist.txt0.md5
-rw-rw-rw- 1 root root 1019.8M Sep 21 19:33 senselist.txt1
-rw-rw-rw- 1 root root 100 Sep 21 19:34 senselist.txt1.md5
-rw-rw-rw- 1 root root 1.1G Sep 21 20:42 senselist.txt2
-rw-rw-rw- 1 root root 100 Sep 21 20:42 senselist.txt2.md5
-rw-rw-rw- 1 root root 1.1G Sep 21 21:25 senselist.txt3
-rw-rw-rw- 1 root root 100 Sep 21 21:25 senselist.txt3.md5
-rw-rw-rw- 1 root root 1017.0M Sep 21 21:59 senselist.txt4
-rw-rw-rw- 1 root root 100 Sep 21 22:00 senselist.txt4.md5
-rw-rw-rw- 1 root root 895.2M Sep 21 22:37 senselist.txt5
-rw-rw-rw- 1 root root 100 Sep 21 22:38 senselist.txt5.md5
-rw-rw-rw- 1 root root 1.2G Sep 21 23:22 senselist.txt6
-rw-rw-rw- 1 root root 100 Sep 21 23:22 senselist.txt6.md5
-rw-rw-rw- 1 root root 1.2G Sep 21 23:54 senselist.txt7
-rw-rw-rw- 1 root root 100 Sep 21 23:54 senselist.txt7.md5
UPDATE
I wrote the command in a shell script and run it with the following line:
eval "$cmd_str" 2>>$ERRFILE 1>>$LOGFILE
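Since -q suppresses wget's own messages, one way to look for the reason (a sketch, not a confirmed fix; the log path is made up) is to drop -q and write wget's diagnostics to a per-run log, then inspect the log after a failure:
wget -r -N -l inf -nd -np --accept=senselist.txt* \
 -t 30 -T 60 -c --limit-rate=100M \
 -o "./data_tmp/wget_$(date +%Y%m%d%H%M%S).log" \
 ftp://ftp:ftp@hostname/home/work/data/20150922 \
 -P ./data_tmp/20150921000000/tmp_dir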

Linux file permission

There is a process which is running under root user.
ps aux | grep ProcessX
root 11565 0.0 0.7 82120 22976 ? Ssl 14:57 0:02 ProcessX
Now ls -l /proc/11565/ (the PID of the process) gives this result.
total 0
dr-xr-xr-x 2 root root 0 Aug 9 16:06 attr
-rw-r--r-- 1 root root 0 Aug 9 16:06 autogroup
-r-------- 1 root root 0 Aug 9 16:06 auxv
-r--r--r-- 1 root root 0 Aug 9 16:06 cgroup
--w------- 1 root root 0 Aug 9 16:06 clear_refs
-r--r--r-- 1 root root 0 Aug 9 16:06 cmdline
-rw-r--r-- 1 root root 0 Aug 9 16:06 coredump_filter
-r--r--r-- 1 root root 0 Aug 9 16:06 cpuset
lrwxrwxrwx 1 root root 0 Aug 9 16:06 cwd -> /usr/local/bin
-r-------- 1 root root 0 Aug 9 16:06 environ
lrwxrwxrwx 1 root root 0 Aug 9 16:06 exe -> /usr/local/bin/ProcessX
dr-x------ 2 root root 0 Aug 9 16:06 fd
dr-x------ 2 root root 0 Aug 9 16:06 fdinfo
-r-------- 1 root root 0 Aug 9 16:06 io
-rw------- 1 root root 0 Aug 9 16:06 limits
-rw-r--r-- 1 root root 0 Aug 9 16:06 loginuid
-r--r--r-- 1 root root 0 Aug 9 16:06 maps
-rw------- 1 root root 0 Aug 9 16:06 mem
-r--r--r-- 1 root root 0 Aug 9 16:06 mountinfo
-r--r--r-- 1 root root 0 Aug 9 16:06 mounts
-r-------- 1 root root 0 Aug 9 16:06 mountstats
dr-xr-xr-x 6 root root 0 Aug 9 16:06 net
-r--r--r-- 1 root root 0 Aug 9 16:06 numa_maps
-rw-r--r-- 1 root root 0 Aug 9 16:06 oom_adj
-r--r--r-- 1 root root 0 Aug 9 16:06 oom_score
-rw-r--r-- 1 root root 0 Aug 9 16:06 oom_score_adj
-r--r--r-- 1 root root 0 Aug 9 16:06 pagemap
-r--r--r-- 1 root root 0 Aug 9 16:06 personality
lrwxrwxrwx 1 root root 0 Aug 9 16:06 root -> /
-rw-r--r-- 1 root root 0 Aug 9 16:06 sched
-r--r--r-- 1 root root 0 Aug 9 16:06 schedstat
-r--r--r-- 1 root root 0 Aug 9 16:06 sessionid
-r--r--r-- 1 root root 0 Aug 9 16:06 smaps
-r--r--r-- 1 root root 0 Aug 9 16:06 stack
-r--r--r-- 1 root root 0 Aug 9 16:06 stat
-r--r--r-- 1 root root 0 Aug 9 16:06 statm
-r--r--r-- 1 root root 0 Aug 9 16:06 status
-r--r--r-- 1 root root 0 Aug 9 16:06 syscall
dr-xr-xr-x 6 root root 0 Aug 9 16:06 task
-r--r--r-- 1 root root 0 Aug 9 16:06 wchan
Now the file permissions for both status and maps are the same (-r--r--r--). But when I issue cat /proc/11565/maps as a non-privileged (non-root) user, it gives me a permission denied error, while cat /proc/11565/status outputs as expected.
Is there something I am missing here?
It's because the file permissions are not the only protection you're encountering.
Those aren't just regular text files on a file system; procfs is a window into process internals, and you have to get past both the file permissions and whatever other protections are in place.
The maps file shows potentially dangerous information about memory usage and where executable code is located within the process's address space. If you look into ASLR, you'll see it was introduced precisely to prevent potential attackers from knowing where code is loaded, and it wouldn't make sense to reveal that in a world-readable entry in procfs.
This protection was added way back in 2007:
This change implements a check using "ptrace_may_attach" before allowing access to read the maps contents. To control this protection, the new knob /proc/sys/kernel/maps_protect has been added, with corresponding updates to the procfs documentation.
Within ptrace_may_attach() (actually within one of the functions it calls) lies the following code:
if (((current->uid != task->euid) ||
     (current->uid != task->suid) ||
     (current->uid != task->uid) ||
     (current->gid != task->egid) ||
     (current->gid != task->sgid) ||
     (current->gid != task->gid)) && !capable(CAP_SYS_PTRACE))
        return -EPERM;
so that, unless you have the same real user/group ID, saved user/group ID and effective user/group ID (i.e., no sneaky setuid stuff) and they're the same as the user/group ID that owns the process, you're not allowed to see inside that "file" (unless your process has the CAP_SYS_PTRACE capability of course).
The process uid must match the smaps uid, and the process gid must match the smaps gid.
$ ls -l /proc/15889/smaps /proc/16139/smaps
-r--r--r--. 1 oracle dba 0 Feb 10 16:42 /proc/15889/smaps
-r--r--r--. 1 oracle asmadmin 0 Feb 10 16:42 /proc/16139/smaps
$ wc /proc/15889/smaps /proc/16139/smaps
6851 23498 224275 /proc/15889/smaps
wc: /proc/16139/smaps: Permission denied
6851 23498 224275 total
$ id
uid=400(oracle) gid=400(dba) groups=400(dba),522(asmadmin),etc.
Same for environ, io, and all memory maps.
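A quick way to see the difference yourself as an unprivileged user (a sketch; the PID from the question will obviously differ on your system):
PID=11565
ls -l /proc/$PID/maps /proc/$PID/status   # both appear as -r--r--r--
head -n 3 /proc/$PID/status               # works: status really is world-readable
cat /proc/$PID/maps                       # "Permission denied" unless you own the process
                                          # or hold CAP_SYS_PTRACE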
