There is a command, chcpu. I know how to use it with a single CPU number. How do I use it with a list or set of CPU numbers?
chcpu -e cpu-list
How do I write this cpu-list?
From man chcpu
Some options have a cpu-list argument. Use this argument to specify a comma-separated list of CPUs. The list can contain individual CPU addresses or ranges of addresses. For example, 0,5,7,9-11 makes the command applicable to the CPUs with the addresses 0, 5, 7, 9, 10, and 11.
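So, to enable that same set of CPUs (assuming they exist and support hot-plugging on your machine):
chcpu -e 0,5,7,9-11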
Related
In the 2017 book "UNIX and Linux System Administration" I read the passage below:
Modern systems manage their device files automatically. However, a few rare corner
cases may still require you to create devices manually with the mknod command.
So here’s how to do it:
mknod filename type major minor
Here, filename is the device file to be created, type is c for a character device or b
for a block device, and major and minor are the major and minor device numbers.
If you are creating a device file that refers to a driver that’s already present in your
kernel, check the documentation for the driver to find the appropriate major and
minor device numbers.
Where can I find this documentation, and how do I find the major and minor numbers for a device driver?
The command cat /proc/devices shows the character and block major device numbers in use by drivers in the currently running Linux kernel, but provides no information about minor device numbers.
There is a list of pre-assigned (reserved) device numbers in the Linux kernel user's and administrator's guide: Linux allocated devices (4.x+ version). (The same list also appears in "Documentation/admin-guide/devices.txt" in the Linux kernel sources.) The list shows how minor device numbers are interpreted for each pre-assigned character and block major device number.
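For example, that list assigns character major 1, minor 3 to the null device, so a device file equivalent to /dev/null could be created manually with (the filename mynull here is arbitrary):
mknod /dev/mynull c 1 3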
Some major device numbers are reserved for local or experimental use, or for dynamic assignment:
60-63 char LOCAL/EXPERIMENTAL USE
60-63 block LOCAL/EXPERIMENTAL USE
Allocated for local/experimental use. For devices not
assigned official numbers, these ranges should be
used in order to avoid conflicting with future assignments.
120-127 char LOCAL/EXPERIMENTAL USE
120-127 block LOCAL/EXPERIMENTAL USE
Allocated for local/experimental use. For devices not
assigned official numbers, these ranges should be
used in order to avoid conflicting with future assignments.
234-254 char RESERVED FOR DYNAMIC ASSIGNMENT
Character devices that request a dynamic allocation of major number will
take numbers starting from 254 and downward.
240-254 block LOCAL/EXPERIMENTAL USE
Allocated for local/experimental use. For devices not
assigned official numbers, these ranges should be
used in order to avoid conflicting with future assignments.
384-511 char RESERVED FOR DYNAMIC ASSIGNMENT
Character devices that request a dynamic allocation of major
number will take numbers starting from 511 and downward,
once the 234-254 range is full.
Character device drivers that call alloc_chrdev_region() to register a range of character device numbers will be assigned an unused major device number from the dynamic range. The same is true for character device drivers that call __register_chrdev() with the first argument (major) set to 0.
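For illustration, here is a minimal sketch of a character driver module that requests a dynamic major this way (the driver name "mydrv" is invented for the example):

#include <linux/module.h>
#include <linux/fs.h>

static dev_t devno;

static int __init mydrv_init(void)
{
	/* Ask the kernel for an unused major number and one minor, starting at minor 0. */
	int ret = alloc_chrdev_region(&devno, 0, 1, "mydrv");

	if (ret < 0)
		return ret;
	pr_info("mydrv: allocated major %d, minor %d\n", MAJOR(devno), MINOR(devno));
	return 0;
}

static void __exit mydrv_exit(void)
{
	unregister_chrdev_region(devno, 1);
}

module_init(mydrv_init);
module_exit(mydrv_exit);
MODULE_LICENSE("GPL");

The allocated major comes from the dynamic range described above, and it shows up in /proc/devices under the name passed to alloc_chrdev_region().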
Some external ("out-of-tree") Linux kernel modules have a module parameter to allow their default major device number to be specified at module load time. That is useful for drivers that do not create their "/dev" entries dynamically, but want some flexibility for the system administrator to choose a major device number when creating device files manually with mknod.
docs:
https://www.oreilly.com/library/view/linux-device-drivers/0596000081/ch03s02.html
https://tldp.org/LDP/tlk/dd/drivers.html
How to find the major and minor numbers currently in use for a device:
ls -l /dev/
For device files, ls -l prints the major and minor numbers where the file size would normally appear. cat /proc/devices lists the major numbers claimed by each driver, and lsblk shows the MAJ:MIN pair for each block device.
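Programmatically, the same information can be read with stat(2): for a device node, the st_rdev field holds the device number, which the major() and minor() macros decode. A minimal sketch:

#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>	/* major(), minor() */

int main(int argc, char **argv)
{
	struct stat st;

	if (argc < 2 || stat(argv[1], &st) != 0) {
		fprintf(stderr, "usage: %s /dev/<device>\n", argv[0]);
		return 1;
	}
	printf("%s: major %u, minor %u\n", argv[1],
	       major(st.st_rdev), minor(st.st_rdev));
	return 0;
}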
I have been trying to learn about Linux block drivers from https://linux-kernel-labs.github.io/refs/heads/master/labs/block_device_drivers.html
I looked at the code behind the register_blkdev() function and came across
struct blk_major_name {
	struct blk_major_name *next;
	int major;
	char name[16];
} *major_names[BLKDEV_MAJOR_HASH_SIZE];
I understood from the code of register_blkdev() that:
major = major number of block driver, same as index of array `major_names`
name = name of block driver
but I cannot understand the use of next in the above struct.
This seems similar to a linked list node but I'm not sure. Please help. And please feel free to correct me if above information is not correct.
The major device number is not necessarily the same as the major_names index. The index is in the range [0, 254] ([0, BLKDEV_MAJOR_HASH_SIZE-1]), but the major device number is in the range [1, 511] ([0, BLKDEV_MAJOR_MAX-1]). The major device number is hashed to an index by index = major_to_index(major); which is equivalent to index = major % BLKDEV_MAJOR_HASH_SIZE;.
More than one major device number can map to the same index. Major device numbers 1, 256 and 511 all map to major_names index 1. (Three major device numbers mapping to index 1 is the worst case. All indices other than 1 are mapped to by two major device numbers.)
The next member of struct blk_major_name is needed to search through all the registered major device numbers that map to the same major_names index.
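To see why next is needed, here is a small userspace toy model (not the kernel code itself) of that hash table, chaining the three majors that collide on index 1:

#include <stdio.h>

#define BLKDEV_MAJOR_HASH_SIZE 255

/* Toy copy of the kernel structure: entries whose majors hash to the
 * same index are linked together through `next`. */
struct blk_major_name {
	struct blk_major_name *next;
	int major;
	char name[16];
};

static struct blk_major_name *major_names[BLKDEV_MAJOR_HASH_SIZE];

static int major_to_index(unsigned int major)
{
	return major % BLKDEV_MAJOR_HASH_SIZE;
}

int main(void)
{
	static struct blk_major_name a = { NULL, 1,   "drv-a" };
	static struct blk_major_name b = { NULL, 256, "drv-b" };
	static struct blk_major_name c = { NULL, 511, "drv-c" };

	/* 1 % 255 == 256 % 255 == 511 % 255 == 1, so all three land in
	 * bucket 1 and must be chained via `next`. */
	major_names[major_to_index(1)] = &a;
	a.next = &b;
	b.next = &c;

	for (struct blk_major_name *n = major_names[1]; n; n = n->next)
		printf("index 1 -> major %d (%s)\n", n->major, n->name);
	return 0;
}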
Each index from 1 to 254 is mapped to by, among others, the identical major device number. When register_blkdev() is called with major number 0, the code allocates an unused major in the range [1, 254] by looking for an index in the range [1, 254] whose slot is empty. There is room for improvement here (at least up to and including kernel version 5.10, the latest at the time of writing), because the code only checks for unused indices, not unused majors. For example, if major number 254 has not been registered but major number 509 has been, then, since both majors map to the same index 254, major number 254 will not be allocated dynamically in this case, even though it is available.
How do I disable this feature in a block device driver?
What I mean: as described in the documentation below, I want to set the value of that "flag" to 2. Where do I do that, preferably in the block device driver code?
What: /sys/block/<disk>/queue/nomerges
Date: January 2010
Contact:
Description:
Standard I/O elevator operations include attempts to
merge contiguous I/Os. For known random I/O loads these
attempts will always fail and result in extra cycles
being spent in the kernel. This allows one to turn off
this behavior on one of two ways: When set to 1, complex
merge checks are disabled, but the simple one-shot merges
with the previous I/O request are enabled. When set to 2,
all merge tries are disabled. The default value is 0 -
which enables all types of merge tries.
First check the nomerges value -
cat /sys/block/sda/queue/nomerges
if it's not already 2, then do:
echo 2 > /sys/block/sda/queue/nomerges
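If you want the same effect from inside the driver rather than via sysfs: writing 2 to nomerges sets the QUEUE_FLAG_NOMERGES flag on the request queue (writing 1 sets QUEUE_FLAG_NOXMERGES instead; see queue_nomerges_store() in block/blk-sysfs.c). A minimal sketch, assuming a kernel recent enough to have blk_queue_flag_set() (4.17+) and that q is the request_queue your driver created:

#include <linux/blkdev.h>

/* Driver-side equivalent of "echo 2 > /sys/block/<disk>/queue/nomerges":
 * disable all merge attempts on this queue. Call it once the driver
 * has set up its request_queue. */
static void mydrv_disable_merges(struct request_queue *q)
{
	blk_queue_flag_set(QUEUE_FLAG_NOMERGES, q);
}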
OPEN_MAX is the constant that defines the maximum number of open files allowed for a single program.
According to Beginning Linux Programming, 4th Edition, Page 101:
The limit, usually defined by the constant OPEN_MAX in limits.h, varies from system to system, ...
On my system, the file limits.h in the directory /usr/lib/gcc/x86_64-linux-gnu/4.6/include-fixed does not have this constant. Am I looking at the wrong limits.h, or has the location of OPEN_MAX changed since 2008?
For what it's worth, the 4th edition of Beginning Linux Programming was published in 2007; parts of it may be a bit out of date. (That's not a criticism of the book, which I haven't read.)
It appears that OPEN_MAX is deprecated, at least on Linux systems. The reason appears to be that the maximum number of files that can be opened simultaneously is not fixed, so a macro that expands to an integer literal is not a good way to get that information.
There's another macro, FOPEN_MAX, that should be similar; I can't think of a reason why OPEN_MAX and FOPEN_MAX, if they're both defined, should have different values. But FOPEN_MAX is mandated by the C language standard, so systems don't have the option of not defining it. The C standard says that FOPEN_MAX
expands to an integer constant expression that is the minimum number of files that
the implementation guarantees can be open simultaneously
(If the word "minimum" is confusing, it's a guarantee that a program can open at least that many files at once.)
If you want the current maximum number of files that can be opened, take a look at the sysconf() function; on my system, sysconf(_SC_OPEN_MAX) returns 1024. (The sysconf() man page refers to a symbol OPEN_MAX. This is not a count, but a value recognized by sysconf(). And it's not defined on my system.)
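A minimal sketch of that query (values will differ between systems, and can be changed at runtime with ulimit -n / setrlimit()):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Current per-process limit on open files; -1 means "indeterminate". */
	long max = sysconf(_SC_OPEN_MAX);

	if (max == -1)
		printf("open-file limit is indeterminate\n");
	else
		printf("this process may have up to %ld files open\n", max);
	return 0;
}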
I've searched for OPEN_MAX (word match, so excluding FOPEN_MAX) on my Ubuntu system, and found the following (these are obviously just brief excerpts):
/usr/include/X11/Xos.h:
# ifdef __GNU__
# define PATH_MAX 4096
# define MAXPATHLEN 4096
# define OPEN_MAX 256 /* We define a reasonable limit. */
# endif
/usr/include/i386-linux-gnu/bits/local_lim.h:
/* The kernel header pollutes the namespace with the NR_OPEN symbol
and defines LINK_MAX although filesystems have different maxima. A
similar thing is true for OPEN_MAX: the limit can be changed at
runtime and therefore the macro must not be defined. Remove this
after including the header if necessary. */
#ifndef NR_OPEN
# define __undef_NR_OPEN
#endif
#ifndef LINK_MAX
# define __undef_LINK_MAX
#endif
#ifndef OPEN_MAX
# define __undef_OPEN_MAX
#endif
#ifndef ARG_MAX
# define __undef_ARG_MAX
#endif
/usr/include/i386-linux-gnu/bits/xopen_lim.h:
/* We do not provide fixed values for
ARG_MAX Maximum length of argument to the `exec' function
including environment data.
ATEXIT_MAX Maximum number of functions that may be registered
with `atexit'.
CHILD_MAX Maximum number of simultaneous processes per real
user ID.
OPEN_MAX Maximum number of files that one process can have open
at anyone time.
PAGESIZE
PAGE_SIZE Size of bytes of a page.
PASS_MAX Maximum number of significant bytes in a password.
We only provide a fixed limit for
IOV_MAX Maximum number of `iovec' structures that one process has
available for use with `readv' or `writev'.
if this is indeed fixed by the underlying system.
*/
Aside from the link given by cste, I would like to point out that there is a /proc/sys/fs/file-max entry that provides the number of files THE SYSTEM can have open at any given time.
Here's some docs:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Directory_Server/8.2/html/Performance_Tuning_Guide/system-tuning.html
Note that this is not to say that there's a GUARANTEE you can open that many files - if the system runs out of some resource (e.g. "no more memory available"), then it may well fail.
FOPEN_MAX indicates that the C library allows this many files to be opened (at least that many, as discussed), but there are other limits that may kick in first. Say, for example, the SYSTEM limit is 4000 files, and some applications already running have 3990 files open. Then you won't be able to open more than 7 files [since stdin, stdout, and stderr take up three slots too]. And if rlimit is set to 5, then you can only open 2 files of your own.
In my opinion, the best way to know whether you can open a file is to open it. If that fails, you have to do something else. If you have some process that needs to open MANY files [e.g., a multithreaded search/compare on a machine with 256 cores and 8 threads per core, where each thread uses three files (files "A", "B", and "diff")], then you may need to ensure that your FOPEN_MAX allows for 3 * 8 * 256 files being opened before you start creating threads, as a thread that fails to open its files is useless. But for most ordinary applications, just try to open the file; if it fails, tell the user (log it, or something), and/or try again...
I suggest using the magic of grep to find this constant under /usr/include:
grep -rn --col OPEN_MAX /usr/include
...
...
/usr/include/stdio.h:159: FOPEN_MAX Minimum number of files that can be open at once.
...
...
Hope it helps you
When issuing this command on Linux:
# cat /proc/loadavg
0.75 0.35 0.25 1/25 1747
The first three numbers are load averages. What are the last 2 numbers?
The last one keeps increasing by 2 every second; should I be worried?
/proc/loadavg
The first three fields in this file are load average figures giving
the number of jobs in the run queue (state R) or waiting for disk
I/O (state D) averaged over 1, 5, and 15 minutes. They are the
same as the load average numbers given by uptime(1) and other
programs.
The fourth field consists of two numbers separated by a
slash (/). The first of these is the number of currently executing
kernel scheduling entities (processes, threads); this will be less
than or equal to the number of CPUs. The value after the slash is the
number of kernel scheduling entities that currently exist on the
system.
The fifth field is the PID of the process that was most
recently created on the system.
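For completeness, a minimal C sketch that reads and splits those five fields (same layout as described above):

#include <stdio.h>

int main(void)
{
	double load1, load5, load15;
	int runnable, total, last_pid;
	FILE *f = fopen("/proc/loadavg", "r");

	if (!f)
		return 1;
	if (fscanf(f, "%lf %lf %lf %d/%d %d",
		   &load1, &load5, &load15, &runnable, &total, &last_pid) == 6)
		printf("load: %.2f %.2f %.2f, runnable: %d, total tasks: %d, last PID: %d\n",
		       load1, load5, load15, runnable, total, last_pid);
	fclose(f);
	return 0;
}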
I would like to comment on the accepted answer.
The fourth field consists of two numbers separated by a slash (/). The
first of these is the number of currently executing kernel scheduling
entities (processes, threads); this will be less than or equal to the
number of CPUs.
I wrote a test program that reads an integer N from input and then creates N threads, which run forever. On my RHEL 6.5 machine I have 8 processors, each with hyper-threading. Anyway, if I run my test and it creates 128 threads, I see values greater than 128 in the fourth field, for example 135. That is clearly greater than the number of CPUs. This post supports my observation: http://juliano.info/en/Blog:Memory_Leak/Understanding_the_Linux_load_average
It is worth noting that the current explanation in proc(5) manual page
(as of man-pages version 3.21, March 2009) is wrong. It reports the
first number of the forth field as the number of currently executing
scheduling entities, and so predicts it can't be greater than the
number of CPUs. That doesn't match the real implementation, where this
value reports the current number of runnable threads.
The first three columns measure CPU and I/O utilization of the last one, five, and 15 minute periods. The fourth column shows the number of currently running processes and the total number of processes. The last column displays the last process ID used.
https://docs.fedoraproject.org/en-US/Fedora/17/html/System_Administrators_Guide/s2-proc-loadavg.html
The following page explains these in detail:
http://www.brendangregg.com/blog/2017-08-08/linux-load-averages.html
Some interpretations:
If the averages are 0.0, then your system is idle.
If the 1 minute average is higher than the 5 or 15 minute averages, then load is increasing.
If the 1 minute average is lower than the 5 or 15 minute averages, then load is decreasing.
If they are higher than your CPU count, then you might have a performance problem (it depends).
You can consult the proc manual page for /proc/loadavg:
$ man proc | sed -n '/loadavg/,/^$/ p'
/proc/loadavg
The first three fields in this file are load average figures giving the number of jobs in the run queue (state R) or waiting for disk I/O (state D) averaged over 1, 5, and 15 minutes. They are the same as the load average numbers given by uptime(1) and other programs. The fourth field consists of two numbers separated by a slash (/). The first of these is the number of currently runnable kernel scheduling entities (processes, threads). The value after the slash is the number of kernel scheduling entities that currently exist on the system. The fifth field is the PID of the process that was most recently created on the system.
For that, you need to install the man-pages package on CentOS 7/RHEL 7, or the manpages package on Ubuntu 20.04/22.04 LTS.