What is the total number of users that can operate on a Linux OS at a time? - linux

I wanted to know the answer since I couldn't find it anywhere.

It depends on the maximum UID and the maximum PID. UIDs are 32-bit, so in principle there can be 4,294,967,296 of them, but the PID range is narrower: 2^22, which is exactly 4,194,304. That is the theoretical maximum; in the real world some daemons are already running, so figure roughly 4 million. (On 32-bit systems the PID limit is only 32,768.)
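If you want to see the actual ceiling on a particular machine, here is a quick sketch (the paths are standard on modern kernels, but defaults vary by distribution):
# Highest PID the kernel will hand out (defaults to 32,768; can be raised up to 2^22 on 64-bit)
cat /proc/sys/kernel/pid_max
# Raise it for the running kernel (needs root; not persistent across reboots)
sudo sysctl -w kernel.pid_max=4194304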

You can count the currently logged-in users with the word count command, like below:
users | wc -w
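Note that users prints one word per login session, so a user with several terminals open is counted more than once. A small sketch that counts distinct users instead:
# Count distinct logged-in users rather than login sessions
who | awk '{print $1}' | sort -u | wc -l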

Theoretically you can have as many users as the user ID space supports. To determine this on a particular system, check the definition of the uid_t type; it is usually defined as unsigned int or int. On Intel architectures, the sizes are defined in /usr/include/bits/typesizes.h. You can check the value on your system by typing the following command in a terminal:
cat /usr/include/bits/typesizes.h | grep UID_T
In my system, output of this command shows:
#define __UID_T_TYPE __U32_TYPE
This means the system can host 4,294,967,296 (2^32) different users. However, other resources may be exhausted before you reach this limit, e.g. disk space. If you create a home directory for each user, then even with just 1 MB of space per user you need over 4 PB of storage. Also, a large number of users leaving processes running in the background, scheduling cron jobs, and opening FTP and/or SSH sessions can put a severe burden on the system.
Limit for simultaneous logins:
When logging in over SSH you use a pseudo-terminal (a pty) allocated by the SSH daemon, not a real one (a tty). Pseudo-terminals are created and destroyed as needed. You can find the maximum number of ptys that may be allocated at one time with
cat /proc/sys/kernel/pty/max
In my system, output of this command shows:
4096
This means 4096 users can simultaneously log in to this machine remotely.
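If that pty limit ever becomes the bottleneck, it can be raised at runtime through sysctl (a sketch; the value 8192 is arbitrary, and the change is lost on reboot unless persisted in /etc/sysctl.conf):
# Show the current limit (same value as /proc/sys/kernel/pty/max)
sysctl kernel.pty.max
# Raise the limit to 8192 pseudo-terminals (requires root)
sudo sysctl -w kernel.pty.max=8192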
PS: My Linux distribution is 64-bit Fedora 23

Related

Limiting the memory usage of a program in Linux

I'm new to Linux and the terminal (or whatever kind of command prompt it uses), and I want to control the amount of RAM a process can use. I have already looked for hours to find an easy-to-use guide. I have a few requirements for limiting it:
Multiple instances of the program will be running, but I only want to limit some of the instances.
I do not want the process to crash once it exceeds the limit. I want it to use HDD page swap.
The program will run under WINE, and is a .exe.
So can somebody please help with the command to limit the RAM usage on a process in Linux?
The fact that you’re using Wine makes no difference in this particular context, which leaves requirements 1 and 2. Requirement 2 –
I do not want the process to crash once it exceeds the limit. I want it to use HDD page swap.
– is known as limiting the resident set size or rss of the process, and it’s actually rather nontrivial to do on Linux, as is demonstrated by a question asked in 2010. You’ll need to set up Linux control groups (cgroups). Fortunately, Justin L.’s answer gives a brief rundown on how to do so. Note that
instead of jlebar, you should use your own Unix user name, and
instead of your/program, you should use wine /path/to/Windows/program.exe.
Using cgroups will also satisfy your other requirements – you can start as many instances of the program as you wish, but only those which you start with cgexec -g memory:limited will be limited.
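A condensed sketch of that setup, assuming the libcgroup tools (cgcreate/cgexec) are installed and the cgroup v1 memory controller is mounted under /sys/fs/cgroup/memory; the group name "limited" and the .exe path are placeholders:
# Create a memory cgroup named "limited" that your own user may manage
sudo cgcreate -a $USER:$USER -t $USER:$USER -g memory:limited
# Cap the group's RAM at 1 GiB; past that point excess pages are swapped out rather than the process being killed (as long as no memsw limit is set)
echo 1G | sudo tee /sys/fs/cgroup/memory/limited/memory.limit_in_bytes
# Start only the instances you want limited inside the group
cgexec -g memory:limited wine /path/to/Windows/program.exe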

How does "ulimit -v" work in the Linux OS?

I would like to limit memory used by a process started through bash with the ulimit command on Linux. I was wondering what OS mechanism is used to support ulimit. In particular, is it based on cgroups?
The Linux API functions for getting and setting limits are getrlimit(2) and setrlimit(2).
Limits are managed within the process space. A child process inherits the limits of its parent. Limits are part of the POSIX standard, so all POSIX-compliant operating systems support them (Linux, BSD, OS X).
cgroups are Linux-specific and are not even required in a Linux install. I'm not sure whether it is possible to manage limits with cgroups, but it would definitely be non-standard to do so.
"ulimit" is basically an anachronism. You shouldn't hit any real limits out of the box if you need the resources, and there are better ways to establish quotas if you want to limit resources.
Here's a good overview:
http://www.gnu.org/software/libc/manual/html_node/Limits-on-Resources.html
Several man pages to look at include:
man 2 getrlimit
man 2 setrlimit
man 3 ulimit OBSOLETE!
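As a quick illustration of how the shell builtin maps onto setrlimit(2), a sketch (./your_program is a placeholder):
# Cap the virtual address space (RLIMIT_AS) at ~512 MiB for everything started in this subshell;
# like all rlimits, the cap is inherited by child processes
( ulimit -v 524288; ./your_program )
# Show the limits currently in effect for your own shell
cat /proc/self/limits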
I use softlimit, part of DJB's daemontools package.
By specifying something like softlimit -m 1048576 nautilus, for example, the program (nautilus) will never be allowed to exceed 1 MiB of memory usage (which in this case also causes it to fail immediately).

LINUX: How to lock the pages of a process in memory

I have a LINUX server running a process with a large memory footprint (some sort of a database engine). The memory allocated by this process is so large that part of it needs to be swapped (paged) out.
What I would like to do is lock the memory pages of all the other processes (or a subset of the running processes) in memory, so that only the pages of the database process get swapped out. For example, I would like to make sure that I can continue to connect remotely and monitor the machine without those processes being impacted by swapping, i.e. I want sshd, X, top, vmstat, etc. to have all their pages memory-resident.
On Linux there are the mlock() and mlockall() system calls, which seem to offer the right knob to do the pinning. Unfortunately, it seems to me that I need to make an explicit call inside every process and cannot invoke mlock() from a different process or from the parent (mlock() is not inherited across fork() or execve()).
Any help is greatly appreciated. Virtual pizza & beer offered :-).
It has been a while since I've done this so I may have missed a few steps.
Make a GDB command file that contains something like this:
call mlockall(3)
detach
Then on the command line, find the PID of the process you want to mlock. Type:
gdb --pid [PID] --batch -x [command file]
If you get fancy with pgrep that could be:
gdb --pid $(pgrep sshd) --batch -x [command file]
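Putting the pieces together, a minimal sketch (the literal 3 is MCL_CURRENT | MCL_FUTURE from <sys/mman.h>; sshd is only an example target, and the cast helps newer GDB versions that otherwise complain about an unknown return type):
# Write the GDB command file: lock all current and future pages, then detach
printf 'call (int)mlockall(3)\ndetach\n' > /tmp/lockall.gdb
# Attach to the running process, run the commands and exit (needs root or ptrace permission)
sudo gdb --pid "$(pgrep -o sshd)" --batch -x /tmp/lockall.gdb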
Actually locking the pages of most of the stuff on your system seems a bit crude/drastic, not to mention being such an abuse of the mechanism it seems bound to cause some other unanticipated problems.
Ideally, what you probably actually want is to control the "swappiness" of groups of processes so the database is first in line to be swapped while essential system admin tools are the last, and there is a way of doing this.
While searching for mlockall information I ran across this tool. You may be able to find it for your distribution. I only found the man page.
http://linux.die.net/man/8/memlockd
Nowadays, the easy and right way to tackle the problem is cgroups.
Just restrict the memory usage of the database process:
1. create a memory cgroup
sudo cgcreate -g memory:$test_db -t $User:$User -a $User:$User
2. limit the group's RAM usage to roughly 1 GB (1000 MB).
echo 1000M > /sys/fs/cgroup/memory/$test_db/memory.limit_in_bytes
or
echo 1000M > /sys/fs/cgroup/memory/$test_db/memory.soft_limit_in_bytes
3. run the database program in the $test_db cgroup
cgexec -g memory:$test_db $db_program_name
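To check that the limit took effect and see how much the group is actually using (same cgroup v1 paths as above):
# Current memory consumption of everything running inside the cgroup
cat /sys/fs/cgroup/memory/$test_db/memory.usage_in_bytes
# The hard limit configured in step 2
cat /sys/fs/cgroup/memory/$test_db/memory.limit_in_bytes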

How to find or calculate a Linux process's page table size and other kernel accounting?

How can I find out how big a Linux process's page table is, along with any other variable-size process accounting?
If you are really interested in the page tables, do a
$ cat /proc/meminfo | grep PageTables
PageTables: 24496 kB
Since Linux 2.6.10, the amount of memory used by a single process' page tables has been exposed via the VmPTE field of /proc/<pid>/status.
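So for a single process, a sketch (your_program is a placeholder for whatever you are inspecting):
# Page table memory charged to one specific process (kernel 2.6.10 or later)
grep VmPTE /proc/$(pidof -s your_program)/status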
Not sure about Linux, but most UNIX variants provide sysctl(3) for this purpose. There is also the sysctl(8) command line utility.
Hmmm, back in Ye Olden Tymes, we used to call nlist(3) to get the system address for the data we were interested in, then open /dev/kmem, seek to the address, then read the data. Not sure if this works in Linux, but it might be worth typing "man 3 nlist" and seeing what comes back.
You should describe your problem, not ask about details. If you fork too much (especially with a process that has a large address space) there are all kinds of things that can go wrong (including running out of memory); hitting a maximum page table size is IMHO not a realistic problem.
That said, I would also be interested in reading a process's page table share on Linux.
As a simple rule of thumb you can, however, assume that each process occupies a share of the page tables that is proportional to its virtual size, for example 6 bytes for each page. So, for example, if you have an Oracle database with an 8 GB SGA and 500 processes sharing it, each of the processes will use about 14 MB of page tables, which results in 7 GB of page tables + 8 GB SGA. (Sample numbers from http://kevinclosson.wordpress.com/2009/07/25/little-things-doth-crabby-make-%E2%80%93-part-ix-sometimes-you-have-to-really-really-want-your-hugepages/)

Limit the memory and cpu available for a user in Linux

I am a little concerned about the amount of resources that I can use on a shared machine. Is there any way to test whether the administrator has put a limit on the amount of resources that I can use? And if there is, to make the question more complete, how can I set up such a limit?
For process related limits, you can have a look in /etc/security/limits.conf (read the comments in the file, use google or use man limits.conf for more information). And as jpalecek points out, you may use ulimit -a to see (and possibly modify) all such limits currently in effect.
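For reference, entries in /etc/security/limits.conf look roughly like this (a sketch; the user name, group name and values are made up):
# <domain>   <type>  <item>  <value>
alice        hard    nproc   200       # at most 200 processes
alice        hard    as      1048576   # address space capped at ~1 GiB (value in KiB)
@students    hard    cpu     60        # CPU time capped at 60 minutes for the whole group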
You can use the command quota to see if a disk quota is in effect.
You can try running
ulimit -a
to see what resource limits are in effect. Also, if you are allowed to change such limits, you can change them with the ulimit command, e.g.
ulimit -c unlimited
lifts the limit on the size of core files a process can create.
At the C level, the relevant functions (actually system calls, see syscalls(2)) could be setrlimit(2), setpriority(2) and sched_setattr(2). You would probably want to invoke them from your shell.
See also proc(5) -and try cat /proc/self/limits and sched(7).
You may want to use the renice(1) command.
If you run a long-lasting program (for several hours) not requiring user interaction, you could consider using some batch processing. Some Linux systems have a batch or at command.
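A few examples of the commands mentioned above (the program name and the PID are placeholders):
# Start a long-running job at the lowest CPU priority
nice -n 19 ./long_job &
# Lower the priority of a process that is already running
renice -n 10 -p 12345
# Queue a job to run when the system load drops (requires the at/batch daemon)
echo "./long_job" | batch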
