So is /proc/meminfo the best way to say there is X free memory? I ask because my company states that it is not an accurate way to represent free memory; their figure is the sum of MemFree, Cached, Buffers and Slab, plus, on a VMware guest, the balloon value from /proc/vmmemctl. Thoughts?
Regarding the VMware case, I had a similar problem with one of our servers; see https://serverfault.com/questions/435307/redhat-linux-server-paging-sum-of-res-rss-buffers-cached-total-who-is-u
I think the problem with meminfo is that it doesn't clearly show the memory used by vmmemctl. Since vmmemctl is a driver, the memory it uses doesn't show up in top/ps either.
I believe that /proc/meminfo is accurate (but I can't prove that), and more generally the /proc pseudo-file system gives you a lot of information about the system (and each process).
Of course it gives information about the kernel providing it (so the kernel running inside your VMware virtual machine).
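For what it's worth, here is a minimal C sketch of the computation the question describes: summing MemFree, Buffers, Cached and Slab from /proc/meminfo. The VMware balloon term is left out, since /proc/vmmemctl only exists when the balloon driver is loaded; treat the result as an estimate, not an authoritative "free memory" number.

    /* free_estimate.c - sum MemFree + Buffers + Cached + Slab from /proc/meminfo.
     * This mirrors the formula described in the question; it is only an estimate. */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256], key[64];
        unsigned long long kb, total = 0;

        if (!f) {
            perror("fopen /proc/meminfo");
            return 1;
        }
        while (fgets(line, sizeof line, f)) {
            if (sscanf(line, "%63[^:]: %llu", key, &kb) == 2 &&
                (!strcmp(key, "MemFree") || !strcmp(key, "Buffers") ||
                 !strcmp(key, "Cached") || !strcmp(key, "Slab")))
                total += kb;
        }
        fclose(f);
        printf("Estimated free memory: %llu kB\n", total);
        return 0;
    }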
If you are concerned about a specific process, see also this answer.
Is there a way to determine how much physical memory is being used by the network subsystem in the Linux kernel at any given point in time? I understand the per-connection memory limits can be specified via sysctl. But is there a tool to peek inside the TCP/IP stack and ask it how much buffered data it has per connection?
Did you try: ss -m? The documentation of the reported values seems scarce but you can make educated guesses based on their full names defined in linux/sock_diag.h.
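If you only need the numbers for sockets your own program owns, recent kernels (4.9 and later, if I remember correctly) expose the same counters directly via getsockopt(SO_MEMINFO), using the indices defined in linux/sock_diag.h that ss -m's field names come from. A rough sketch, assuming fd is an already-connected socket:

    /* sockmem.c - per-socket memory counters via SO_MEMINFO (Linux 4.9+). */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <linux/sock_diag.h>    /* SK_MEMINFO_* indices */

    #ifndef SO_MEMINFO
    #define SO_MEMINFO 55           /* may be missing from older userspace headers */
    #endif

    int print_sock_mem(int fd)
    {
        unsigned int mem[SK_MEMINFO_VARS];
        socklen_t len = sizeof(mem);

        if (getsockopt(fd, SOL_SOCKET, SO_MEMINFO, mem, &len) < 0) {
            perror("getsockopt(SO_MEMINFO)");
            return -1;
        }
        printf("rmem_alloc=%u rcvbuf=%u wmem_alloc=%u sndbuf=%u wmem_queued=%u\n",
               mem[SK_MEMINFO_RMEM_ALLOC], mem[SK_MEMINFO_RCVBUF],
               mem[SK_MEMINFO_WMEM_ALLOC], mem[SK_MEMINFO_SNDBUF],
               mem[SK_MEMINFO_WMEM_QUEUED]);
        return 0;
    }

For other processes' connections you are back to ss -m (or the sock_diag netlink interface it uses).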
I don't know whether there is a malloc library that provides an interface for returning freed memory back to the OS when you call it.
If there isn't one, what can I do to achieve that?
Under some operating systems, you can use sbrk to reduce the size of your arena. This may or may not hand that memory back to the OS.
In today's world of virtual memory, it may not really be necessary. There's a good chance that, if you just stop using the memory, it'll get swapped out and never brought back into main storage (although it may still take up address space and swap file space) - that all depends on the OS.
It should happen automatically on free(), but sometimes an explicit malloc_trim() helps:
http://man7.org/linux/man-pages/man3/malloc_trim.3.html
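A minimal, glibc-specific sketch of what that looks like; the blocks are kept small so they come from the main heap (large allocations are mmap'd and already go back to the kernel on free()):

    /* trim.c - glibc-specific: hand freed heap pages back to the kernel. */
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <malloc.h>     /* malloc_trim() is a glibc extension */

    #define NBLOCKS 100000
    #define BLKSIZE 1024

    int main(void)
    {
        static char *blocks[NBLOCKS];
        int i;

        for (i = 0; i < NBLOCKS; i++) {
            blocks[i] = malloc(BLKSIZE);
            if (blocks[i])
                memset(blocks[i], 1, BLKSIZE);   /* touch the pages */
        }
        for (i = 0; i < NBLOCKS; i++)
            free(blocks[i]);    /* back to the allocator, not necessarily to the OS */

        malloc_trim(0);         /* ask glibc to release trimmed heap pages to the kernel */
        pause();                /* compare the RSS of this process before/after the trim */
        return 0;
    }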
I have a program that collects various kstat information on our Solaris systems and, now that we've introduced Linux into our data center, I'd like to do the same for Linux.
I'm having trouble, however, finding equivalents for many of the kstats. I was wondering if there is a library or utility that mimics kstats for the Linux environment. Even a partial implementation would be helpful.
As of right now, I've been parsing files in /proc but finding the right information has been hit or miss. For example, kstat has the following data:
unix::vminfo
swap_alloc
swap_avail
swap_free
swap_resv
In Linux, you have the entries "SwapTotal" and "SwapFree" but
a) It appears that swap_free actually corresponds to "SwapTotal" and swap_avail corresponds to "SwapFree"
b) I can't find values for swap_alloc (maybe SwapTotal minus SwapFree?) nor for swap_resv
Any ideas?
I'm not aware of a Linux kstat implementation but anyway, you are first facing a terminology issue here.
The Solaris kstat swap statistics you are referencing use "swap" to mean the whole of virtual memory, i.e. the swap areas plus a large part of the RAM.
On the other hand, the Linux SwapTotal and SwapFree statistics are only related to the swap area (i.e. on disk).
Another issue is that Linux overcommits memory allocations, so a memory reservation counter might not be maintained and wouldn't be very useful anyway.
There is this meminfo documentation take 2 article on LWN which describes all fields from /proc/meminfo and says the following about SwapTotal and SwapFree:
SwapTotal: total amount of swap space available
SwapFree: Memory which has been evicted from RAM, and is temporarily
on the disk
There is also some discussion at http://kerneltrap.org/node/4097.
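If you only need the Linux swap-area numbers programmatically (the SwapTotal/SwapFree sense, i.e. disk swap only, not the Solaris virtual-memory sense), sysinfo(2) saves you from parsing /proc/meminfo at all; a small sketch:

    /* swapinfo.c - swap-area totals via sysinfo(2); matches SwapTotal/SwapFree. */
    #include <stdio.h>
    #include <sys/sysinfo.h>

    int main(void)
    {
        struct sysinfo si;

        if (sysinfo(&si) < 0) {
            perror("sysinfo");
            return 1;
        }
        printf("swap total: %llu kB\n",
               (unsigned long long)si.totalswap * si.mem_unit / 1024);
        printf("swap free:  %llu kB\n",
               (unsigned long long)si.freeswap * si.mem_unit / 1024);
        return 0;
    }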
Perl version:
https://github.com/zfsonlinux/linux-kstat
"This is an implementation of the Sun::Solaris::Kstat Perl module
for Linux ZFS. It should behave identically to the Solaris version."
Ruby version:
https://www.rubydoc.info/gems/linux-kstat/Linux/Kstat
"The Kstat class encapsulates Linux kernel statistics derived from /proc/stat."
I thought this was expected behavior?
From: http://classic.chem.msu.su/cgi-bin/ceilidh.exe/gran/gamess/forum/?C35e9ea936bHW-7675-1380-00.htm
Paraphrased summary: "Working on the Linux port we found that cudaHostAlloc/cuMemHostAlloc CUDA API calls return un-initialized pinned memory. This hole may potentially allow one to examine regions of memory previously used by other programs and Linux kernel. We recommend everybody to stop running CUDA drivers on any multiuser system."
My understanding was that "Normal" malloc returns un-initialized memory, so I don't see what the difference here is...
The way I understand how memory allocation works would allow the following to happen:
-userA runs a program on a system that crunches a bunch of sensitive information. When the calculations are done, the results are written to disk, the processes exits, and userA logs off.
-userB logs in next. userB runs a program that requests all available memory in the system, and writes the content of his un-initialized memory, which contains some of userA's sensitive information that was left in RAM, to disk.
I have to be missing something here. What is it? Is memory zero'd-out somewhere? Is kernel/pinned memory special in a relevant way?
Memory returned by malloc() may be nonzero, but only after being used and freed by other code in the same process. Never another process. The OS is supposed to rigorously enforce memory protections between processes, even after they have exited.
Kernel/pinned memory is only special in that it apparently gave a kernel mode driver the opportunity to break the OS's process protection guarantees.
So no, this is not expected behavior; yes, this was a bug. Kudos to NVIDIA for acting on it so quickly!
The only part of a CUDA installation that requires root privileges is the NVIDIA driver. As a result, everything the NVIDIA compiler and linker do can be done with regular system calls and standard compilation (provided you have the proper information -lol-). If any security hole lies there, it remains whether or not cudaHostAlloc/cuMemHostAlloc is modified.
I am dubious about the first answer on this post. The man page for malloc specifies that
the memory is not cleared, and the man page for free does not mention any clearing of the memory either.
Clearing memory seems to be the responsibility of the coder of a sensitive section -lol-, which still leaves the problem of an unexpected (rare) exit. Apart from VMS (a good but not widely used OS), I don't think any OS accepts the performance cost of systematically clearing memory. I am also not clear how the system could track, within newly allocated heap memory, what was previously part of the process's own area and what was not.
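If clearing is left to the coder, a minimal sketch of that sensitive section might look like the following; explicit_bzero() is a glibc 2.25+ extension used here because a plain memset() right before free() may be optimized away, and on other systems a volatile memset loop is the usual substitute. None of this helps against an unexpected exit.

    /* wipe.c - scrub sensitive data before free(), since neither malloc() nor
     * free() clears memory for you. */
    #include <stdlib.h>
    #include <string.h>

    void handle_secret(void)
    {
        size_t len = 4096;
        char *secret = malloc(len);

        if (!secret)
            return;
        /* ... fill and use the sensitive buffer ... */
        explicit_bzero(secret, len);    /* scrub before releasing the block */
        free(secret);
    }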
My conclusion is: if you need a strict level of privacy, do not use a multi-user system
(or use VMS).
I have recently ended up with some faulty RAM, and despite having already found out about this, I would like to try a much easier approach: write a program that allocates the faulty regions of RAM and never releases them. It might not work well if they get allocated before the program runs, but it would be much easier to reboot on failure than to build a kernel with patches.
So the question is:
How can I write a program that allocates given physical addresses (or the pages containing them) and, if possible, reports whether it was successful?
This will be problematic. To understand why, you have to understand the relationship between physical and virtual memory.
On any modern Operating System, programs will get a very large address space for themselves, with the remainder of the address space being used for the OS itself. Other programs are simply invisible: there's no address at which they're found. How is this possible? Simple: processes use virtual addresses. A virtual address does not correspond directly to physical RAM. Instead, there's an address translation table, managed by the OS. When your process runs, the table only contains mappings for RAM that's allocated to you.
Now, that implies that the OS decides what physical RAM is allocated to your program. It can (and will) change that at runtime. For instance, swapping is implemented using the same mechanism. When swapping out, a page of RAM is written to disk and its mapping is deleted from the translation table. When you try to use the virtual address, the OS detects the missing mapping, restores the page from disk to RAM, and puts back a mapping. It's unlikely that you get back the same page of physical RAM, but the virtual address doesn't change during the whole swap-out/swap-in. So, even if you happened to allocate a page of bad memory, you couldn't keep it. Programs don't own RAM; they own a virtual address space.
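You can actually observe this from user space: /proc/self/pagemap holds one 64-bit entry per virtual page, with the physical frame number in the low 55 bits (readable only as root/CAP_SYS_ADMIN on kernels since 4.0). A rough sketch; the virtual address stays fixed while the physical frame behind it is whatever the kernel currently chose:

    /* pfn.c - look up the physical frame backing one of our own virtual pages. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        long pagesize = sysconf(_SC_PAGESIZE);
        char *buf = malloc(pagesize);
        uint64_t entry;
        int fd;

        if (!buf)
            return 1;
        buf[0] = 1;    /* touch the page so it is actually mapped */

        fd = open("/proc/self/pagemap", O_RDONLY);
        if (fd < 0) {
            perror("open /proc/self/pagemap");
            return 1;
        }
        if (pread(fd, &entry, sizeof(entry),
                  ((uintptr_t)buf / pagesize) * sizeof(entry)) != sizeof(entry)) {
            perror("pread");
            return 1;
        }
        close(fd);

        printf("virtual %p -> %s, PFN 0x%llx\n", (void *)buf,
               (entry & (1ULL << 63)) ? "present" : "not present",
               (unsigned long long)(entry & ((1ULL << 55) - 1)));
        return 0;
    }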
Now, Linux does offer some specific kernel functions that allocate memory in a slightly different way, but it seems that you want to bypass the kernel entirely. You can find a much more detailed description in http://lwn.net/images/pdf/LDD3/ch08.pdf
Check out BadRAM: it seems to do exactly what you want.
Well, it's not an answer on how to write such a program, but it fixes the issue without recompiling the kernel:
Use memmap or mem parameters:
http://gquigs.blogspot.com/2009/01/bad-memory-howto.html
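For reference, what that howto boils down to is a kernel command-line entry: memmap=<size>$<start> reserves the bad page so the kernel never uses it, while mem=<limit> simply stops using all RAM above the first bad address. The values below are hypothetical, substitute what memtest86+ reported, and note that the $ usually has to be escaped when the line goes into a GRUB config file:

    memmap=4K$0x36de0000
    mem=878M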
I will edit this answer when I get it running and give details.
Another option is to write your own kernel module, which can allocate specific physical addresses, and make the memory non-swappable, e.g. with mlock(2).
I've never tried it. No warranty.