How to compare different default stack sizes? [closed] - linux

I changed the default stack size on my Linux machine from 8 MB to 2 MB, and I want to compare the amount of memory I have saved with this change. How can I compare the effect of the change between a system with an 8 MB stack size and one with a 2 MB stack size?

Write a non-tail-recursive function that prints an increasing number, for example in C++ (you can use any language), and see how far it goes before the stack overflows. Most programs don't need that much stack:
#include <iostream>

// Build without optimizations (-O0) so the tail call is not optimized into a loop.
void stackOverFlowMe(int i) {
    std::cout << i << "\n";
    stackOverFlowMe(i + 1);
}

int main() { stackOverFlowMe(0); }
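If you want to confirm which limit is actually in effect (8 MB vs. 2 MB), you can also query it programmatically. Here is a minimal sketch using getrlimit(RLIMIT_STACK); the output wording is just illustrative:
#include <cstdio>
#include <iostream>
#include <sys/resource.h>   // getrlimit, RLIMIT_STACK

int main() {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {
        std::perror("getrlimit");
        return 1;
    }
    // rlim_cur is the soft limit that actually constrains stack growth.
    if (rl.rlim_cur == RLIM_INFINITY)
        std::cout << "stack soft limit: unlimited\n";
    else
        std::cout << "stack soft limit: " << rl.rlim_cur / 1024 << " KB\n";
    return 0;
}
With ulimit -s 2048 in effect this should print 2048 KB, and 8192 KB with the default.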
Following your comment: you can check memory usage on any Linux distribution using top in the shell. The first lines show the global info:
top - 11:27:46 up 18 days, 21:08, 13 users, load average: 0.71, 0.23, 0.16
Tasks: 277 total, 2 running, 274 sleeping, 1 stopped, 0 zombie
%Cpu(s): 1.4 us, 0.4 sy, 0.0 ni, 98.1 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 8105520 total, 1798056 free, 3223720 used, 3083744 buff/cache
KiB Swap: 5192700 total, 5165132 free, 27568 used. 3993932 avail Mem
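If you want to compare the two configurations without eyeballing top, you can snapshot the relevant /proc/meminfo fields under each stack-size setting and diff them. A rough sketch; the field names are those of recent kernels and the values are reported in kB:
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream meminfo("/proc/meminfo");
    std::string line;
    // Print only the lines relevant for a before/after comparison.
    while (std::getline(meminfo, line)) {
        if (line.rfind("MemTotal:", 0) == 0 ||
            line.rfind("MemFree:", 0) == 0 ||
            line.rfind("MemAvailable:", 0) == 0)
            std::cout << line << "\n";
    }
    return 0;
}
Run it (or simply grep /proc/meminfo) once under each configuration with a comparable workload, and compare the numbers.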

When programs on your Linux box run, they add and remove data from the stack on a regular basis as they execute. The stack size refers to how much space is allocated in memory for the stack. Increasing the stack size allows a program to make deeper chains of function calls: each time a function is called, its data can be added to the stack, stacked on top of the previous routine's data.
Unless the program is very complex or designed for a special purpose, a stack size of 8192 KB is normally fine. Some programs, such as graphics-processing programs, may require you to increase the stack size because they store a lot of data on the stack. Below are some commands for changing the stack size. Hope this helps.
SunOS/Solaris:
==============
> limit # shows the current stack size
> unlimit # changes the stack size to unlimited
> setenv STACKSIZE 32768 # limits the stack size to 32M bytes
Linux:
======
> ulimit -a # shows the current stack size
> ulimit -s 32768 # sets the stack size to 32M bytes
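If you would rather set the limit from inside a program than from the shell, the counterpart of ulimit -s is setrlimit(RLIMIT_STACK). A hedged sketch; the 32 MB value just mirrors the command above, and the change only affects the calling process and whatever it spawns afterwards:
#include <cstdio>
#include <sys/resource.h>   // setrlimit, RLIMIT_STACK

int main() {
    struct rlimit rl;
    rl.rlim_cur = 32 * 1024 * 1024;   // soft limit: 32 MB
    rl.rlim_max = 32 * 1024 * 1024;   // hard limit: 32 MB (cannot be raised again without privilege)
    if (setrlimit(RLIMIT_STACK, &rl) != 0) {
        std::perror("setrlimit");
        return 1;
    }
    std::printf("stack limit set to 32 MB for this process and its children\n");
    return 0;
}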

Related

Linux "free -m": Total, used and free memory values don't add up [closed]

On a Linux system, running free gives the following values:
              total        used        free      shared  buff/cache   available
Mem:       26755612      873224      389320      286944    25493068    25311948
Swap:             0           0           0
The total, used and free values don't add up. I'm expecting total = used + free.
Question:
What am I missing here?
For the main memory, the total amount of memory can be calculated as used + free + buffers + cache, or equivalently used + free + buff/cache, because buff/cache = buffers + cache. The man page of free defines used as "Used memory (calculated as total - free - buffers - cache)", which is the same relationship rearranged. As the man page of free says:
total Total installed memory (MemTotal and SwapTotal in /proc/meminfo)
used Used memory (calculated as total - free - buffers - cache)
free Unused memory (MemFree and SwapFree in /proc/meminfo)
shared Memory used (mostly) by tmpfs (Shmem in /proc/meminfo,
available on kernels 2.6.32, displayed as zero if not available)
buffers Memory used by kernel buffers (Buffers in /proc/meminfo)
cache Memory used by the page cache and slabs (Cached and Slab in
/proc/meminfo)
buff / cache Sum of buffers and cache
available Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free)
In your case,
873224(used) + 389320(free) + 25493068(buff/cache) = 26755612(total)
Linux likes to cache every file that it opens. Every time you open a file for reading, Linux will cache it, but it will drop those caches if it needs the memory for something more important - like when a process on the system wants to allocate more memory. These caches simply make Linux faster when the same files are used over and over again: instead of going to disk every time it wants to read a file, it just gets it from memory, and memory is a lot faster than disk. That is why your system shows 25493068 kB used in buff/cache but also shows 25311948 kB available. Much of that cached data can be freed if the system needs it.

How to know process working set size in Linux /proc [closed]

Process working set info in Linux.
I am trying to find the working set size of a process in the /proc folder.
This link says that I can find the working set size in /proc, but I don't know how. I thought RSS was the working set size, but RSS is different from the working set size. Can I find the working set size using /proc/[pid]/statm?
I don't believe that /proc/[pid]/statm gives the WSS, or /proc/[pid]/status for that matter.
WSS is the number of pages a process needs in memory to keep "working".
RSS is the number of pages of a process that actually reside in main memory.
So RSS >= WSS. Meaning that RSS may include some pages that the process doesn't really need right now. Maybe it used those stale pages in the past.
From my understanding of linux internals, the kernel doesn't really keep track of the WSS on a per-process basis. WSS is too involved to track continuously and doesn't have an exact formula. RSS is simpler to calculate, so the kernel just reports that.
Note that if the sum of WSS of all processes is greater than or equal to the main memory size (i.e. the system is thrashing or close to thrashing) then RSS equals WSS because only the pages absolutely needed by a process are kept in the main memory. Got it?
RSS (resident set size) is how much memory this process currently has in main memory (RAM). VSZ (virtual size) is how much virtual memory the process has in total.
From your question I believe you're after virtual size, i.e. total memory usage.
Regarding statm - from Linux manpages:
/proc/[pid]/statm
Provides information about memory usage, measured in pages. The columns are:
size (1) total program size
(same as VmSize in /proc/[pid]/status)
resident (2) resident set size
(same as VmRSS in /proc/[pid]/status)
share (3) shared pages (i.e., backed by a file)
text (4) text (code)
lib (5) library (unused in Linux 2.6)
data (6) data + stack
dt (7) dirty pages (unused in Linux 2.6)
So you need the first integer, which gives the total program size as a page count. If, however, you need more readable output, /proc/[pid]/status provides the information in kilobytes. For example:
rr-@burza:~$ cat /proc/29262/status | grep -i rss
VmRSS: 1736 kB
rr-@burza:~$ cat /proc/29262/status | grep -i vmsize
VmSize: 5980 kB
This means process 29262 uses 5980 kB of virtual memory, out of which 1736 kB resides in RAM.
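If you want to do the same conversion yourself, remember that statm reports pages, so multiply by the page size. A small sketch; it inspects the calling process via /proc/self purely for illustration:
#include <fstream>
#include <iostream>
#include <unistd.h>   // sysconf(_SC_PAGESIZE)

int main() {
    long page_kb = sysconf(_SC_PAGESIZE) / 1024;   // usually 4 kB pages
    std::ifstream statm("/proc/self/statm");
    long size_pages = 0, resident_pages = 0;
    statm >> size_pages >> resident_pages;         // first two fields: size, resident
    std::cout << "VmSize: " << size_pages * page_kb << " kB\n";
    std::cout << "VmRSS:  " << resident_pages * page_kb << " kB\n";
    return 0;
}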

Linux RAM is not free even though the CPU is ~idle [closed]

I'm a newbie to Linux.
My Linux server says it has 47 GB of RAM and a quad-core CPU, but it is not as fast as it should be.
I used the free -m command and it shows:
Available: ~47 GB
Used: ~45 GB
Free: ~2 GB
At that time the server was not being used by anyone else.
I used the top command and it showed the CPU is 0.1% used.
Is the used value shown by the free command correct?
If the data is reliable, what could be using 45 GB?
It is a Fedora 64-bit kernel and it supports PAE (physical address extension).
Please help, and let me know if this is a known question.
Yes, it is a known question, and the answer is: your memory is all there, most of it is effectively free, and it is not the source of your slowdowns. Take a look at your memory with free. For example:
$ free -tm
             total       used       free     shared    buffers     cached
Mem:          3833       3751         82          0       1056       1107
-/+ buffers/cache:       1587       2246
Swap:         2000         83       1916
Total:        5833       3834       1999
In the first line used does not mean currently in use.
Looking at the first line it says I have 3833 total and have 3751 used. Is that a problem? No. Why? When Linux uses memory, it marks the memory as used and when it is done, it releases the buffers and cached memory that is no longer needed. The memory that was used, but is now free is not returned to total and subtracted from used, rather the buffers and cache are simply returned to the system and are available for re-use by any other process that may need it.
If you look further to the right, you see I have 1056 buffers and 1107 cached. The next line explains that of the total there is only 1587 used and 2246 free. The 2246 roughly being the sum of the original 82 free + (1056 buffers + 1107 cached) that have been released for re-use. This is the current memory in use and available.
The next line shows the swap available and its use and the last line shows the rough sums of lines 1 and 3. So no need to panic, if there is a slowdown, it is most likely not because your memory has all been used.
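If you want to reproduce the -/+ buffers/cache arithmetic yourself, the same numbers come from /proc/meminfo. A rough sketch; newer kernels also expose a ready-made MemAvailable field, which modern versions of free use instead:
#include <fstream>
#include <iostream>
#include <map>
#include <string>

int main() {
    std::ifstream meminfo("/proc/meminfo");
    std::map<std::string, long> kb;
    std::string key, rest;
    long value;
    while (meminfo >> key >> value) {
        kb[key] = value;                 // values are in kB
        std::getline(meminfo, rest);     // discard the rest of the line (" kB")
    }
    long total   = kb["MemTotal:"];
    long free_   = kb["MemFree:"];
    long buffers = kb["Buffers:"];
    long cached  = kb["Cached:"];
    long used    = total - free_;
    std::cout << "used minus buffers/cache: " << (used - buffers - cached) / 1024 << " MB\n";
    std::cout << "free plus buffers/cache:  " << (free_ + buffers + cached) / 1024 << " MB\n";
    return 0;
}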

Understanding Linux top CPU utilisation output [closed]

I'm using a small single-core ARM processor running under Debian and have problems understanding the CPU utilisation output of top, see:
top - 15:31:54 up 30 days, 23:00, 2 users, load average: 0.90, 0.89, 0.87
Tasks: 44 total, 1 running, 43 sleeping, 0 stopped, 0 zombie
Cpu(s): 65.0%us, 20.3%sy, 0.0%ni, 14.5%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st
Mem: 61540k total, 40056k used, 21484k free, 0k buffers
Swap: 0k total, 0k used, 0k free, 22260k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
26028 root      20   0  2536 1124  912 R  1.9  1.8   0:00.30 top
31231 root      19  -1 45260  964  556 S  1.9  1.6   1206:15 owserver
    3 root      15  -5     0    0    0 S  0.3  0.0   0:08.68 ksoftirqd/0
  694 root      20   0 28640  840  412 S  0.3  1.4 468:26.74 rsyslogd
The %CPU column is very low for all processes; in this example it adds up to just 4.4% (all other processes below were at 0%).
But the overall CPU on line 3 shows 65% us and 20% sy, both very high values - and by the way, this is how the system feels: very slow :-(
The system is almost always in this condition: very low CPU for all processes, but high user+system CPU.
Can anybody explain why there is such a big inconsistency within the top output?
And what tool can I use to better find out what causes the high user+system CPU utilisation - top seems to be useless here.
Update: meanwhile I've found this thread, which discusses a similar question, but I can't verify what is written there:
The command uptime shows the average CPU utilisation per 1/5/15 minutes.
This is close to what the first line of top outputs as the sum of %us+%sy, but that value changes much more often - maybe it is an average over 10 s?
Even when watching the top output for a longer time, the sum of %us+%sy is always several times higher than the sum of all the %CPU values.
Thanks
Achim
You should read the manpage of top to understand its output better. From the manpage:
%CPU -- CPU usage
The task's share of the elapsed CPU time since the last screen update, expressed as a percentage of total CPU time. The default screen update interval is 3 seconds, which can be changed with top -d ss.tt. To measure cumulative CPU usage, run top -S.
-S : Cumulative time mode toggle
Starts top with the last remembered 'S' state reversed. When 'Cumulative mode' is On, each process is listed with the cpu time that it and its dead children have used.
The CPU states are shown in the Summary Area. They are always shown as a percentage and are for the time between now and the last refresh.
us -- User CPU time
The time the CPU has spent running users' processes that are not niced.
sy -- System CPU time
The time the CPU has spent running the kernel and its processes.
ni -- Nice CPU time
The time the CPU has spent running users' processes that have been niced.
wa -- iowait
Amount of time the CPU has been waiting for I/O to complete.
hi -- Hardware IRQ
The amount of time the CPU has been servicing hardware interrupts.
si -- Software Interrupts
The amount of time the CPU has been servicing software interrupts.
st -- Steal Time
The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine).
Under normal circumstances, %us+%sy in the summary should always be higher than the sum of the individual %CPU values.
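For completeness, the summary-line percentages come from the cumulative counters in /proc/stat, sampled twice and differenced. The sketch below reports user and system time over a one-second window; the field positions are taken from proc(5), and it is a simplification of what top does:
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

// Read the aggregate "cpu" line of /proc/stat:
// cpu  user nice system idle iowait irq softirq steal ...
static std::vector<long long> read_cpu_counters() {
    std::ifstream procstat("/proc/stat");
    std::string label;
    procstat >> label;             // skip the leading "cpu" token
    std::vector<long long> v;
    long long x;
    while (procstat >> x)          // stops at "cpu0", the next non-numeric token
        v.push_back(x);
    return v;
}

int main() {
    auto a = read_cpu_counters();
    std::this_thread::sleep_for(std::chrono::seconds(1));
    auto b = read_cpu_counters();
    if (a.size() < 4 || b.size() < a.size())
        return 1;

    long long total = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        total += b[i] - a[i];
    if (total <= 0)
        return 1;

    std::cout << "us: " << 100.0 * (b[0] - a[0]) / total << "%  "   // field 0: user
              << "sy: " << 100.0 * (b[2] - a[2]) / total << "%\n";  // field 2: system
    return 0;
}
Per-process %CPU is derived similarly, from the utime/stime fields in each /proc/[pid]/stat.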

Linux memory reporting discrepancy [closed]

I'm getting a memory usage discrepancy between meminfo and ps: free is reporting much less free memory than I would expect given what processes are apparently using according to ps.
According to free, I have only 3188 MB free:
free -m
             total       used       free     shared    buffers     cached
Mem:         15360      13273       2086          0         79       1022
-/+ buffers/cache:      12171       3188
Swap:            0          0          0
I tried to track down where the memory is going using ps (the output below is snipped to non-zero RSS values):
ps -A --sort -rss -o comm,pmem,rss
COMMAND         %MEM     RSS
mysqld          13.1 2062272
java             6.2  978072
ruby             0.7  114248
ruby             0.7  114144
squid            0.1   30716
ruby             0.0   11868
apache2          0.0   10132
apache2          0.0    9092
apache2          0.0    8504
PassengerHelper  0.0    5784
sshd             0.0    3008
apache2          0.0    2420
apache2          0.0    2228
bash             0.0    2120
sshd             0.0    1708
rsyslogd         0.0    1164
PassengerLoggin  0.0     880
ps               0.0     844
dbus-daemon      0.0     736
sshd             0.0     736
ntpd             0.0     664
squid            0.0     584
cron             0.0     532
ntpd             0.0     512
exim4            0.0     504
nrpe             0.0     496
PassengerWatchd  0.0     416
dhclient3        0.0     344
mysqld_safe      0.0     316
unlinkd          0.0     284
logger           0.0     252
init             0.0     200
getty            0.0     120
However, this doesn't make sense: adding up the RSS column gives a total memory usage of only around 3287 MB, which should leave almost 12 GB free!
I'm using kernel 2.6.16.33-xenU #2 SMP x86_64 on Amazon AWS.
Where is my memory going? Can anyone shed some light on how to track this down?
Check the usage of the Slab cache (Slab:, SReclaimable: and SUnreclaim: in /proc/meminfo). This is a cache of in-kernel data structures, and is separate from the page cache reported by free.
If the slab cache is responsible for a large portion of your "missing memory", check /proc/slabinfo to see where it has gone. If it is dentries or inodes, you can use sync; echo 2 > /proc/sys/vm/drop_caches to get rid of them.
You can also use the slabtop tool to show the current usage of the slab cache in a friendly format. Pressing c sorts the list by current cache size.
You cannot just add up the RSS or VSZ columns to get the amount of memory used. Unfortunately, memory usage on Linux is much more complicated than that. For a more thorough description see Understanding memory usage on Linux, which explains how shared libraries are shared between processes, but double-counted by tools like ps.
I don't know offhand how free computes the numbers it displays but if you need more details you can always dig up its source code.
I believe that you are missing the shared memory values. I don't think ps reports the shared RAM as part of the RSS field. Compare with the top RES field to see.
Of course if you do add in the shared RAM, how much do you add? Because it is shared the same RAM may show up credited to many different processes.
You can try to solve that problem by creative parsing of the /proc/[pid]/smaps files.
But still, that only gets you part of the way. Some memory pages are shared but accounted as resident. These pages become shared after a fork() call. They can become unshared at any time, but until they are, they don't count toward total used system RAM. The proc smaps file doesn't show these either.
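One way to get a less double-counted per-process figure is to sum the Pss ("proportional set size") lines from /proc/[pid]/smaps, which charge each shared page fractionally to every process mapping it. A rough sketch; pass the PID as an argument, note that you need permission to read the target's smaps, and very old kernels may not expose Pss:
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::cerr << "usage: " << argv[0] << " <pid>\n";
        return 1;
    }
    std::ifstream smaps(std::string("/proc/") + argv[1] + "/smaps");
    std::string line;
    long pss_kb = 0;
    while (std::getline(smaps, line)) {
        // Lines look like "Pss:                 123 kB"; one per mapping.
        if (line.rfind("Pss:", 0) == 0)
            pss_kb += std::strtol(line.c_str() + 4, nullptr, 10);
    }
    std::cout << "PSS: " << pss_kb << " kB\n";
    return 0;
}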
