Database backups not writing to disk, not enough space? - linux

I just inherited an AIX project which I know very little about. A cron job that does a full backup of my database (DB2) has been failing for a few days now. Looking at the logs, I'm seeing this:
SQL2419N The target disk "/home/dbtmp/backups" has become full.
When checking out this directory:
(/var/spool/cron)> df -g /home/dbtmp
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/dbtmplv 10.00 0.96 91% 85 1% /home/dbtmp
The size of the previous backups:
(/var/spool/cron)> ll /home/dbtmp/backups
total 18365248
-rw------- 1 hsprd cics 4411498496 Feb 12 18:01 HSPRD.0.hsprd.NODE0000.CATN0000.20130212180036.001
-rw------- 1 hstrn cics 874287104 Feb 12 18:08 HSTRN.0.hstrn.NODE0000.CATN0000.20130212180747.001
-rw------- 1 hstst cics 3242835968 Feb 12 18:05 HSTST.0.hstst.NODE0000.CATN0000.20130212180443.001
What options do I have to fix this? Thank you.

As you can see, a single backup is already larger than the free space left on the device (0.96 GB free out of 10 GB). You need a larger device, or more space on this one.
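If you can't simply prune older backups, one option on AIX is to grow the filesystem, provided its volume group has free physical partitions; another is to point the backup at a larger filesystem. A rough sketch (the size increment and target path are placeholders, not values from your system):

(/var/spool/cron)> lslv dbtmplv                 # shows which volume group dbtmplv belongs to
(/var/spool/cron)> lsvg <volume_group>          # check FREE PPs before growing
(/var/spool/cron)> chfs -a size=+8G /home/dbtmp # grow the filesystem by 8 GB (placeholder amount)
(/var/spool/cron)> db2 backup database HSPRD to /some/larger/fs compress   # or redirect and compress the backup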

Related

/var/log/daemon.log taking too much space: how to reduce it?

Below are the files:
-rw-r----- 1 root adm 4.4G Mar 6 09:04 daemon.log
-rw-r----- 1 root adm 6.2G Mar 1 06:26 daemon.log.1
-rw-r----- 1 root adm 50M Feb 23 06:26 daemon.log.2.gz
-rw-r----- 1 root adm 41M Feb 17 06:25 daemon.log.3.gz
-rw-r----- 1 root adm 72K Feb 9 06:25 daemon.log.4.gz
How can I remove them? Will anything break if I delete them directly?
Thanks in advance.
The best way to manage the logs would be to use Logrotate.
This is Serhii's comment on your other similar question:
Have a look at this Logrotate tutorial: linode.com/docs/uptime/logs/use-logrotate-to-manage-log-files. You can
use size to force log rotation when it grows bigger than the
specified [value], also you can use rotate to control how many
times a log is rotated before old logs are removed (If you set it to
0 logs will be removed immediately after they are rotated).
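As a concrete illustration of size and rotate, a logrotate stanza might look like the sketch below; the size limit and rotate count are placeholders to adapt:

/var/log/daemon.log {
    size 500M        # force a rotation once the log exceeds 500 MB
    rotate 4         # keep 4 rotated logs; older ones are deleted
    compress         # gzip rotated logs, like the .gz files above
    missingok        # don't complain if the log is missing
    notifempty       # skip rotation when the log is empty
    copytruncate     # truncate in place so the writing daemon keeps its file handle
}

On Debian-style systems daemon.log is usually written by rsyslog and may already have a rotation policy under /etc/logrotate.d/, so check there before adding a new stanza.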
You can delete the logs, but it depends on the software you're running: if any of it needs those logs or uses them in some way, deleting them may stop it from working as intended.
You can also have a look at the logs and analyse them to see which software writes the most data, then try to reconfigure it so the amount of log data generated drops significantly. That, combined with logrotate, should yield satisfactory results.
And if that's not enough you can store your logs in a bucket and mount it as a disk in your VM's filesystem. That way any software installed on your VM will be able to write to it.
But this will incur some charges for using the bucket storage so keep that in mind.
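As an illustrative sketch of that last option, assuming a Google Cloud context (the bucket name and mount point are placeholders), you could mount a bucket with gcsfuse; s3fs offers the same idea on AWS:

gsutil mb gs://my-log-archive               # create the bucket (hypothetical name)
mkdir -p /var/log/archive
gcsfuse my-log-archive /var/log/archive     # mount it; software can now write there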

MTD start and config at runtime

I've got an embedded system that I have root shell access to.
I cannot enter the U-Boot boot menu (bootdelay=0).
The device boots from a NOR flash and loads the filesystem from eMMC.
It does not create /dev/mtd devices.
I want to access the NOR flash.
There are MTD drivers on the system, so that seems the best option.
(No experience with this at all, so please correct me if I'm wrong.)
drwxrwxr-x 2 1000 root 1024 Jul 29 2013 chips
drwxrwxr-x 2 1000 root 1024 Jul 29 2013 maps
-rw-rw-r-- 1 1000 1000 21544 Jul 29 2013 mtd.ko
-rw-rw-r-- 1 1000 1000 8560 Jul 29 2013 mtd_blkdevs.ko
-rw-rw-r-- 1 1000 1000 6132 Jul 29 2013 mtdblock.ko
-rw-rw-r-- 1 1000 1000 9648 Jul 29 2013 mtdchar.ko
If I load the MTD modules with modprobe, /proc/mtd is created, but nothing appears in dmesg.
root:/proc# cat /proc/mtd
dev: size erasesize name
So no partitions.
How can I configure MTD to be able to access the NOR flash?
(The physical addresses are known.)
Thanks
You need to describe your NOR partitions in a board-specific file in the kernel. In U-Boot, you should be able to see them with smeminfo.
In your Linux kernel, you'll need to populate an array of mtd_partitions (see the sketch below).
Find more here: http://free-electrons.com/blog/managing-flash-storage-with-linux/
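For illustration, here is a rough sketch of the legacy platform-data approach that the linked article describes. Every name, address, and size below is a placeholder; substitute the physical layout you already know. This goes in your board file and requires rebuilding the kernel:

/* Hypothetical board-file sketch; all names, addresses and sizes are placeholders. */
#include <linux/kernel.h>
#include <linux/platform_device.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/physmap.h>

static struct mtd_partition board_nor_parts[] = {
    { .name = "u-boot", .offset = 0,                  .size = 512 * 1024 },
    { .name = "env",    .offset = MTDPART_OFS_APPEND, .size = 128 * 1024 },
    { .name = "data",   .offset = MTDPART_OFS_APPEND, .size = MTDPART_SIZ_FULL },
};

static struct physmap_flash_data board_nor_data = {
    .width    = 2,                          /* bus width in bytes (16-bit NOR) */
    .parts    = board_nor_parts,
    .nr_parts = ARRAY_SIZE(board_nor_parts),
};

static struct resource board_nor_resource = {
    .start = 0x08000000,                          /* placeholder: NOR base address */
    .end   = 0x08000000 + (8 * 1024 * 1024) - 1,  /* placeholder: 8 MiB window */
    .flags = IORESOURCE_MEM,
};

static struct platform_device board_nor_device = {
    .name          = "physmap-flash",   /* matches the physmap map driver */
    .id            = 0,
    .dev           = { .platform_data = &board_nor_data },
    .resource      = &board_nor_resource,
    .num_resources = 1,
};

/* ...and in the board init function: */
platform_device_register(&board_nor_device);

Once this probes successfully, /proc/mtd should list the partitions and /dev/mtd0, /dev/mtd1, ... should appear.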

Is it better to put the php-fpm Unix socket in ephemeral storage or EBS?

I'm trying to tune my EC2 performance. One part of that is to use the ephemeral storage for all I/O. For php-fpm, I'm using a Unix socket instead of TCP/IP, since everything is local. Considering that EBS storage only has 24 IOPS (for 8 GB), I'm wondering if it's better to move the php-fpm socket to ephemeral storage. Is there any I/O activity inside the Unix socket file, given that the file size is always 0?
[root# php-fpm]# ls -al
total 12
drwxr-xr-x 2 root root 4096 Aug 5 19:37 .
drwxr-xr-x 16 root root 4096 Aug 7 03:27 ..
-rw-r--r-- 1 root root 4 Aug 5 19:37 php-fpm.pid
srw-rw-rw- 1 nginx nginx 0 Aug 5 19:37 php-fpm.sock
EBS is a network-based service, so every single operation depends on the network. The docs say:
An Amazon EBS volume is off-instance storage that can persist independently from the life of an instance.
Consider ephemeral storage for your socket. If you use EBS, don't forget to touch every block of the volume with dd before first use:
dd if=/dev/zero of=/dev/xvdf bs=1M
But don't do it on the root (/) disk, just on an extra EBS disk if you prefer to use that.
P.S. For all the details on how to warm up EBS volumes, please read the official docs.
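Note that the socket inode on disk is only a rendezvous point: data sent through a Unix socket travels through kernel buffers, not through the filesystem, which is why the file stays at 0 bytes. If you still want to relocate it, you only need to point both sides at the same path; a sketch (the paths below are placeholders):

; php-fpm pool config, e.g. /etc/php-fpm.d/www.conf
listen = /mnt/ephemeral/php-fpm.sock
listen.owner = nginx
listen.group = nginx
listen.mode = 0660

# nginx server block
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/mnt/ephemeral/php-fpm.sock;
}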

size vs ls -la vs du -h: which one is the correct size?

I was compiling a custom kernel, and I wanted to test the size of the image file.
These are the results:
ls -la | grep vmlinux
-rwxr-xr-x 1 root root 8167158 May 21 12:14 vmlinux
du -h vmlinux
3.8M vmlinux
size vmlinux
text data bss dec hex filename
2221248 676148 544768 3442164 3485f4 vmlinux
Since all of them show different sizes, which one is closest to the actual image size?
Why are they different?
They are all correct; they just measure different things.
ls shows the size of the file (when you open and read it, that's how many bytes you will get)
du shows actual disk usage which can be smaller than the file size due to holes
size shows the size of the runtime image of an object/executable which is not directly related to the size of the file (bss uses no bytes in the file no matter how large, the file may contain debugging information that is not part of the runtime image, etc.)
If you want to know how much RAM/ROM an executable will take excluding dynamic memory allocation, size gives you the information you need.
Two distinctions need to be understood:
1. runtime size vs. stored size (this is why size differs)
2. individual files vs. directory depth (this is why du differs)
Look at the below example:
[root@localhost test]# ls -l
total 36
-rw-r--r-- 1 root root 712 May 12 19:50 a.c
-rw-r--r-- 1 root root 3561 May 12 19:42 a.h
-rwxr-xr-x 1 root root 71624 May 12 19:50 a.out
-rw-r--r-- 1 root root 1403 May 8 00:15 b.c
-rw-r--r-- 1 root root 1403 May 8 00:15 c.c
[root@localhost test]# du -abch --max-depth=1
1.4K ./b.c
1.4K ./c.c
3.5K ./a.h
712 ./a.c
70K ./a.out
81K .
81K total
[root@localhost test]# size a.out
text data bss dec hex filename
3655 640 16 4311 10d7 a.out
If you use size on something that is not an object file or executable, it will report an error.
Empirically, differences happen most often for sparse files and for compressed files, and they can go in both directions.
du < ls
For a sparse file, the inode records a nominal (apparent) size, which ls reads and reports, while du only counts the blocks actually allocated. For example:
truncate -s 1m test.dat
creates a sparse file consisting entirely of nulls with no disk usage, i.e. du shows 0 and ls shows 1M.
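You can see both numbers side by side with GNU coreutils (assuming a filesystem that supports holes, such as ext4):

$ truncate -s 1m test.dat
$ ls -l test.dat                  # apparent size: 1048576 bytes
$ du -h test.dat                  # allocated: 0
$ du -h --apparent-size test.dat  # 1.0M, matching ls
$ stat -c '%s bytes, %b blocks of %B bytes' test.dat
1048576 bytes, 0 blocks of 512 bytes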
du > ls
On the other hand, du can report more than ls: a file may be spread across many blocks that are not all completely filled, so its byte size (measured by ls) is smaller than its allocated size (measured by du, which looks at occupied blocks). I have observed this rather prominently for some Python pickle files, for example.

Why doesn't "total" from ls -l add up to total file sizes listed? [closed]

Why is the total in the output of ls -l printed as 64 and not 26078 which is the total of all files listed?
$ ls -l ~/test/ls
total 64
-rw-r--r-- 1 root root 15276 Oct 5 2004 a2ps.cfg
-rw-r--r-- 1 root root 2562 Oct 5 2004 a2ps-site.cfg
drwxr-xr-x 4 root root 4096 Feb 2 2007 acpi
-rw-r--r-- 1 root root 48 Feb 8 2008 adjtime
drwxr-xr-x 4 root root 4096 Feb 2 2007 alchemist
You can find the definition of that line in the ls documentation for your platform. For coreutils ls (the one found on a lot of Linux systems), the information can be found via info coreutils ls:
For each directory that is listed, preface the files with a line
`total BLOCKS', where BLOCKS is the total disk allocation for all
files in that directory.
The Formula: What is that number?
total = sum over all listed files of (physical_blocks_in_use * physical_block_size / ls_block_size)
Where:
ls_block_size is an arbitrary display unit (normally 512 or 1024 bytes), freely modifiable with the --block-size=<int> flag on ls, the POSIXLY_CORRECT=1 GNU environment variable (to get 512-byte units), or the -k flag to force 1 KiB units.
physical_block_size is the OS-dependent value of an internal block interface, which may or may not be connected to the underlying hardware. This value is normally 512 bytes or 1 KiB, but is completely dependent on the OS. It can be revealed through the %B value on stat or fstat. Note that this value is (almost always) unrelated to the number of physical blocks on a modern storage device.
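To see the effect of ls_block_size concretely (GNU ls assumed; the numbers are derived from the 118-block example further down):

$ ls -l | head -1                    # default 1024-byte display units
total 59
$ POSIXLY_CORRECT=1 ls -l | head -1  # POSIX 512-byte units
total 118
$ ls -l --block-size=512 | head -1
total 118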
Why so confusing?
This number is fairly detached from any physical or meaningful metric. Many junior programmers haven't had experience with file holes or hard/sym links. In addition, the documentation available on this specific topic is virtually non-existent.
The disjointedness and ambiguity of the term "block size" has been a result of numerous different measures being easily confused, and the relatively deep levels of abstraction revolving around disk access.
Examples of conflicting information: du (or ls -s) vs stat
Running du * in a project folder yields the following: (Note: ls -s returns the same results.)
dactyl:~/p% du *
2 check.cc
2 check.h
1 DONE
3 Makefile
3 memory.cc
5 memory.h
26 p2
4 p2.cc
2 stack.cc
14 stack.h
Total: 2+2+1+3+3+5+26+4+2+14 = 62 Blocks
Yet when one runs stat we see a different set of values. Running stat in the same directory yields:
dactyl:~/p% stat * --printf="%b\t(%B)\t%n: %s bytes\n"
3 (512) check.cc: 221 bytes
3 (512) check.h: 221 bytes
1 (512) DONE: 0 bytes
5 (512) Makefile: 980 bytes
6 (512) memory.cc: 2069 bytes
10 (512) memory.h: 4219 bytes
51 (512) p2: 24884 bytes
8 (512) p2.cc: 2586 bytes
3 (512) stack.cc: 334 bytes
28 (512) stack.h: 13028 bytes
Total: 3+3+1+5+6+10+51+8+3+28 = 118 Blocks
Note: You can use the command stat * --printf="%b\t(%B)\t%n: %s bytes\n" to output (in order) the number of blocks, (in parentheses) the size of those blocks, the name of the file, and the size in bytes, as shown above.
There are two important takeaways:
stat reports both the physical_blocks_in_use and physical_block_size as used in the formula above. Note that these are values based on OS interfaces.
du provides what is generally accepted as a fairly accurate estimate of physical disk utilization.
(Plugging the stat numbers into the formula: 118 blocks of 512 bytes is exactly 59 KiB, which is the total 59 that ls -l prints below, since GNU ls defaults to 1 KiB units. du reports 62 because it rounds each file up to a whole 1 KiB unit.)
For reference, here is the ls -l of the directory above:
dactyl:~/p% ls -l
total 59
-rw-r--r--. 1 dhs217 grad 221 Oct 16 2013 check.cc
-rw-r--r--. 1 dhs217 grad 221 Oct 16 2013 check.h
-rw-r--r--. 1 dhs217 grad 0 Oct 16 2013 DONE
-rw-r--r--. 1 dhs217 grad 980 Oct 16 2013 Makefile
-rw-r--r--. 1 dhs217 grad 2069 Oct 16 2013 memory.cc
-rw-r--r--. 1 dhs217 grad 4219 Oct 16 2013 memory.h
-rwxr-xr-x. 1 dhs217 grad 24884 Oct 18 2013 p2
-rw-r--r--. 1 dhs217 grad 2586 Oct 16 2013 p2.cc
-rw-r--r--. 1 dhs217 grad 334 Oct 16 2013 stack.cc
-rw-r--r--. 1 dhs217 grad 13028 Oct 16 2013 stack.h
That is the total number of file system blocks, including indirect blocks, used by the listed files. If you run ls -s on the same files and sum the reported numbers you'll get that same number.
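A quick way to check that (GNU tools assumed): sum the per-file block counts printed by ls -s and compare with the total line. Note the two can differ slightly, because ls -s rounds each file up to a whole display unit while the total is converted from the raw block sum:

$ ls -s | head -1        # the total line
$ ls -s | awk 'NR > 1 { sum += $1 } END { print sum }'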
Just to mention: you can use -h (ls -lh) to convert this into human-readable format.
