Are these stats normal? I am having problems with my PHP products, so I want to know whether these numbers are healthy.
stats
STAT pid 2312
STAT uptime 5292037
STAT time 1253692925
STAT version 1.2.8
STAT pointer_size 64
STAT rusage_user 2600.605647
STAT rusage_system 9533.168738
STAT curr_items 1153303
STAT total_items 139795434
STAT bytes 435570863
STAT curr_connections 288
STAT total_connections 135128959
STAT connection_structures 1018
STAT cmd_flush 1
STAT cmd_get 171491050
STAT cmd_set 139795434
STAT get_hits 127840250
STAT get_misses 43650800
STAT evictions 24166536
STAT bytes_read 2731502572454
STAT bytes_written 2889855000126
STAT limit_maxbytes 536870912
STAT threads 2
STAT accepting_conns 1
STAT listen_disabled_num 802
END
It is impossible to say what is wrong with your application, but your memcached usage is not optimal:
STAT cmd_get 171491050
STAT cmd_set 139795434
STAT get_hits 127840250
STAT get_misses 43650800
These numbers mean that about 139m items have been stored in the cache and 171m retrieval attempts have been made, of which only 127m were hits, a hit rate of roughly 75%. Measured against the number of sets, that is only about 0.9 hits per stored item (127/139), so on average an item you write is read back less than once, and the 24m evictions show memcached is constantly throwing items away to make room. That is not an effective use of the cache, because many of the items stored in it are never used. To me, that suggests you are caching the wrong data. You should try to figure out which data is used most frequently and cache only that, especially since you are running out of cache space so often.
Yes, why? Is anything wrong?
STAT bytes 435570863
STAT limit_maxbytes 536870912
You may want to increase the cache size: you are already using about 81% of the 512 MiB limit, and the eviction count above shows memcached is dropping items to make room for new ones.
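If you want to watch these ratios without reading the raw stats by hand, a quick sketch like the following works; it assumes memcached is listening on localhost:11211 and that awk and a netcat that understands -w are available:
echo stats | nc -w 1 localhost 11211 | tr -d '\r' | awk '
  $2 == "get_hits"       { hits = $3 }
  $2 == "get_misses"     { misses = $3 }
  $2 == "bytes"          { used = $3 }
  $2 == "limit_maxbytes" { max = $3 }
  END { printf "hit rate: %.1f%%  memory used: %.1f%%\n", 100*hits/(hits+misses), 100*used/max }'
With your numbers that prints a hit rate of about 75% and memory usage of about 81%.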
I am getting the error "Too many open files" but 99.5% of inodes are free. The ulimit is 1024 for soft and 4076 for hard. Is it possible that the error may be due to some other problem?
Inodes are not related to open files. You can check the currently open files using lsof (something like lsof | wc -l). I would suggest simply raising the limit in /etc/security/limits.conf.
Try adding something like:
* soft nofile 20000
* hard nofile 20000
And see if that helps.
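You can also check how many descriptors the offending process actually has open, to confirm that it really is hitting the per-process limit. The PID below is just a placeholder:
ls /proc/1234/fd | wc -l              # descriptors currently open by PID 1234
grep "open files" /proc/1234/limits   # the limit that actually applies to that process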
I created a file using truncate -s 1024M a.test.
I was expecting the size of a.test to be 1024M, but du somehow does not report the size I expected.
Below is what I ran.
$du -sh a.test
4.0K a.test
When I check with ls -l a.test, the size is correct:
$ ll a.test
-rw-rw-r-- 1 work work 1073741824 Jul 12 17:26 a.test
Can someone help me out with this issue?
du tells you how much disk space the file actually uses. Since your file does not contain any data, the OS stores it as a sparse file, so its actual disk usage is much smaller than its apparent size. If you check it with du --apparent-size -sh a.test, it will report the size you expected.
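You can see the same thing with stat, which reports both the apparent size and the number of blocks actually allocated. A minimal sketch, reusing the file name from the question:
truncate -s 1024M a.test
stat -c 'size=%s bytes, blocks allocated=%b' a.test   # large size, (almost) no blocks
du -sh a.test                                         # disk usage: a few KB
du -sh --apparent-size a.test                         # apparent size: 1.0G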
How do I tie values in the 'inode' column of /proc/net/tcp to files in /proc/<pid>/fd/?
I was under the impression that the inode column in /proc/net/tcp was the decimal representation of the socket's inode, but that doesn't seem to be the case.
For example, if I run telnet localhost 80, I see the following (telnet is pid 9021).
/proc/net/tcp contains
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
23: 0100007F:CE2A 0100007F:0050 01 00000000:00000000 00:00000000 00000000 1000 0 361556 1 00000000 20 0 0 10 -1
which makes me think that the inode of the socket connected to 127.0.0.1:80 is 361556. But if I run ls --inode -alh /proc/9021/fd, I see
349886 lrwx------ 1 me me 64 Dec 26 10:51 3 -> socket:[361556]
The inode is 349886, which is different from the value in the inode column of the tcp table: 361556. But the link target seems to have the right name. Similarly, stat /proc/9021/fd/3 shows:
File: ‘/proc/9021/fd/3’ -> ‘socket:[361556]’
Size: 64 Blocks: 0 IO Block: 1024 symbolic link
Device: 3h/3d Inode: 349886 Links: 1
What is the number in the inode column of the tcp table? Why doesn't it line up with the inode reported by ls or stat?
(I'm running Ubuntu 14.10, if that matters)
The inode shown by ls and stat is for the symlink that points to the inode associated with the socket. Running ls -iLalh shows the right inode. Ditto for stat -L.
Herpa derp derp. I only figured this out when I was composing my question. ;_;
An inode number identifies a file within a particular filesystem mount (proc, sysfs, NTFS, ext4, ...), so, as you probably understand, you are dealing with two different filesystems here: procfs and the kernel's pseudo socket filesystem.
The files under the /proc/<pid>/fd/ directories are symbolic links, which have their own inodes in procfs.
Those links point into the other filesystem: the socket filesystem.
What stat -L and ls -iLalh do is give you the inode of the file the link points to.
You can also see that target explicitly with readlink /proc/<pid>/fd/<fdnum>.
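Going the other way, from a socket inode in /proc/net/tcp to the process holding it, is just a matter of scanning the fd directories for a link whose target matches. A rough sketch using the inode from the question (run as root so all fd directories are readable):
inode=361556
for fd in /proc/[0-9]*/fd/*; do
    [ "$(readlink "$fd" 2>/dev/null)" = "socket:[$inode]" ] && echo "$fd"
done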
How to get the current count of file descriptors in the system?
I know how to get the maximum.
% sysctl kern.maxfiles
kern.maxfiles: 8232
Ref:
http://www.freebsd.org/cgi/man.cgi?query=fstat&apropos=0&sektion=0&manpath=FreeBSD+9.0-RELEASE&arch=default&format=html
Are you looking for kern.openfiles?
[ghoti@pc ~]$ sysctl -ad | grep 'kern.*files:'
kern.maxfiles: Maximum number of files
kern.openfiles: System-wide number of open files
[ghoti@pc ~]$
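For scripting you can print just the values, e.g.:
sysctl -n kern.openfiles   # current system-wide count of open files
sysctl -n kern.maxfiles    # the limit, for comparison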
cat /proc/sys/fs/file-nr
The three columns are: the number of allocated file handles, the number of allocated but unused file handles, and the system-wide maximum number of file handles.
See the proc(5) man page for additional info on file-nr.
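If you only want the current count as a single number on Linux, something like this works; the second column is subtracted because those handles are allocated but currently unused:
awk '{ printf "in use: %d  max: %d\n", $1 - $2, $3 }' /proc/sys/fs/file-nr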
mysqldump: Couldn't execute 'show fields from `tablename`': Out of resources when opening file './databasename/tablename#P#p125.MYD' (Errcode: 24) (23)
Checking error 24 in the shell gives:
>>perror 24
OS error code 24: Too many open files
How do I solve this?
First, to identify the limits for the relevant user or group, do the following:
root@ubuntu:~# sudo -u mysql bash
mysql@ubuntu:~$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 71680
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 71680
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
mysql@ubuntu:~$
The important line is:
open files (-n) 1024
As you can see, your operating system vendor ships this version with the default Linux configuration of 1024 open files per process.
This is obviously not enough for a busy MySQL installation.
Now, to fix this, add the following lines to /etc/security/limits.conf:
mysql soft nofile 24000
mysql hard nofile 32000
Some flavors of Linux also require additional configuration to make this stick for daemon processes as opposed to login sessions. In Ubuntu 10.04, for example, you also need to set the PAM session limits by adding the following line to /etc/pam.d/common-session:
session required pam_limits.so
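After making those changes (and starting a fresh session or restarting mysqld), verify that they actually took effect. A small sketch, assuming a single mysqld process:
sudo -u mysql bash -c 'ulimit -n'                  # the limit a new shell for the mysql user gets
grep "open files" "/proc/$(pidof mysqld)/limits"   # the limit the running daemon actually has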
Quite an old question but here are my two cents.
What you may be running into is that the MySQL server did not get its open_files_limit variable set high enough.
You can see how many files you are allowing MySQL to open with:
mysql> SHOW VARIABLES LIKE 'open_files_limit';
It is probably set to 1024 even if you have already raised the operating system limits.
You can use the option --open-files-limit=XXXXX on the command line for mysqld.
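If you would rather not pass it on the command line, the same setting can go into my.cnf (the value here is only an example; another answer below uses 4096):
[mysqld]
open_files_limit = 8192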
Cheers
Add --single-transaction to your mysqldump command.
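For example, a hypothetical invocation (adjust the user and database name):
mysqldump --single-transaction -u root -p databasename > databasename.sql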
It could also be that some code accessing the tables did not close them properly, so that over time the number of open files crept up to the limit.
Please refer to http://dev.mysql.com/doc/refman/5.0/en/table-cache.html for a possible reason as well.
Restarting mysql should cause this problem to go away (although it might happen again unless the underlying problem is fixed).
You can increase your OS limits by editing /etc/security/limits.conf.
You can also install the lsof (LiSt Open Files) command to see the relation between files and processes.
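For example, to see how many files the mysql user or the mysqld process has open (assuming mysqld runs as the mysql user and there is a single mysqld process):
lsof -u mysql | wc -l               # all open files for the mysql user
lsof -p "$(pidof mysqld)" | wc -l   # just the mysqld process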
There is no need to configure PAM, I think. On my system (Debian 7.2 with Percona 5.5.31-rel30.3-520.squeeze) I have:
Before my.cnf changes:
# cat /proc/12345/limits | grep "open files"
Max open files 1186 1186 files
After adding "open_files_limit = 4096" to my.cnf and restarting mysqld, I got:
# cat /proc/23456/limits | grep "open files"
Max open files 4096 4096 files
12345 and 23456 are the mysqld process PIDs, of course.
SHOW VARIABLES LIKE 'open_files_limit' shows 4096 now.
Everything looks OK, while "ulimit" shows no changes:
# su - mysql -c bash
# ulimit -n
1024
There is no guarantee that "24" is an OS-level error number, so don't assume that this means that too many file handles are open. It could be some type of internal error code used within mysql itself. I'd suggest asking on the mysql mailing lists about this.