In Windows, I can check the largest contiguous free memory blocks via 'feature memstats', but this does not work on Linux. I am currently working on a project which needs to load a very large matrix (59107200x17), and I ran into an 'out of memory' error. Is there any way to check this and other memory information in MATLAB on Linux?
Have you tried the memory function? I think it should be available on Linux as well.
By the way, a 59107200x17 array of doubles requires about 8 GB of memory (each double occupies 8 bytes). Do you have that much?
I wrote a C program which runs for only a few milliseconds, and I want to know how much (stack and heap) memory is required to run it.
I used Valgrind (massif), but it only samples memory usage periodically.
How can I get the peak usage?
Thanks.
If your code is in C or C++ you might be able to use getrusage(), which returns various statistics about the memory and time usage of your process.
Not all platforms support this, though, and some will return 0 values for the memory-use fields.
Instead you can look at the virtual file /proc/[pid]/statm (where [pid] is replaced by your process id, which you can obtain from getpid()).
This file looks like a text file with 7 integers, all measured in pages. You are probably most interested in the first (total program size) and sixth (data + stack size) numbers.
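Here is a minimal sketch of both approaches (my own example, not from the answer above); on Linux, getrusage() reports the peak resident set size in ru_maxrss, and /proc/self/statm can be read like an ordinary file:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* Peak resident set size; on Linux, ru_maxrss is in kilobytes. */
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0)
        printf("peak RSS: %ld kB\n", ru.ru_maxrss);

    /* Current usage from /proc/self/statm; all values are in pages. */
    long size, resident, shared, text, lib, data, dt;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f != NULL) {
        if (fscanf(f, "%ld %ld %ld %ld %ld %ld %ld",
                   &size, &resident, &shared, &text, &lib, &data, &dt) == 7)
            printf("total: %ld pages, data+stack: %ld pages\n", size, data);
        fclose(f);
    }
    return 0;
}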
Alternatively, GNU time (the binary, not the shell built-in) can report a program's peak memory use after it exits:
$ /usr/bin/time -v /path/to/your/program
Look for the "Maximum resident set size" line in its output.
I have access to a shared workstation running Linux and have to load a large .csv file. However, I am uncertain how much memory that requires of the system, as there will be some overhead, and I am not allowed to use more than a specific amount of memory.
So can I by any means limit the memory usage, either inside Matlab or as I start the job itself? Everything needs to happen through the terminal.
If you are using MATLAB R2015b or later you can set up the array size limits in the Preferences:
http://de.mathworks.com/help/matlab/matlab_env/set-workspace-and-variable-preferences.html
In my opinion it would be a better solution to control the array sizes from your script/function.
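If you need a hard cap enforced from the terminal itself, one option (my suggestion, not from the answer above; note that it limits the whole process, not individual arrays) is to restrict the shell's virtual address space with ulimit before launching MATLAB:
$ ulimit -v 8388608    # cap virtual memory at 8 GB (the value is in kB)
$ matlab -nodisplay
Allocations beyond the cap will then fail with an out-of-memory error inside MATLAB instead of exhausting the shared machine.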
I have an Ubuntu 12.10 (kernel 3.9.0-rc2) installation running on VMware. I've given it 512 MB of RAM.
cat /proc/meminfo shows:
MemTotal: 507864 kB
MemFree:  440180 kB
I want to use the swap (for some reason), so I wrote a C program which allocates a 500 MB array (using malloc()) and fills it with junk. However, the program gets killed before it can fill the entire array, and the message "Killed" appears on the screen.
I wanted to ask if this is normal behavior, and what is the reason behind it? In my opinion, the swap should be used, because the free RAM is insufficient.
Edit: I did not mention that I have 1 GB of swap. cat /proc/swaps shows:
/dev/sda5  Size: 1046524  Used: 14672
The "Used" amount increases when I run the memory-eating program, but as you can see, plenty of swap is left over. So why did the program get 'Killed'?
So I couldn't find a definitive answer, but I have a temporary solution:
I had modified the virtual machine settings to give the VM 512 MB of RAM. I have now reverted to 2 GB and ran 5 parallel programs, each consuming 500 MB. Thankfully, all of them run and the swap gets used.
I just needed to use the swap for a project on swap management.
It also matters how you have written your C program to allocate the memory, and what the compiler flags are. For example, statically allocating memory (such as double A[N][N]) behaves differently from dynamically allocating it (such as with malloc/calloc). Static allocations are limited by the memory model of the compiler (small, medium, etc., which can often be specified). Perhaps a good starting point is:
http://en.wikipedia.org/wiki/C_dynamic_memory_allocation
Does this help?
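To make the distinction concrete, here is a minimal sketch (my own example): the static array is carved out of the executable's data/BSS segment at build time, while the malloc version requests memory at runtime and can fail gracefully:

#include <stdio.h>
#include <stdlib.h>

#define N 8192

/* Static allocation: the size is fixed at compile/link time and
   counts against the program's data/BSS segment. */
static double A[N][N];               /* 8192*8192*8 bytes = 512 MB */

int main(void)
{
    /* Dynamic allocation: requested at runtime; failure shows up
       as a NULL return value rather than a failure to load. */
    double *B = malloc((size_t)N * N * sizeof(double));
    if (B == NULL) {
        fprintf(stderr, "malloc failed\n");
        return 1;
    }
    A[0][0] = B[0] = 1.0;            /* touch both so the pages are really used */
    printf("%f %f\n", A[0][0], B[0]);
    free(B);
    return 0;
}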
The memory that I'm trying to allocate is not huge or anything; I just can't allocate 1.5 to 1.7 gigabytes of contiguous memory. From what I know, Windows gives you 2 gigabytes of virtual address space to use in your application, so a call like malloc(1500*1024*1024) is not totally crazy. I tried malloc, new[], and VirtualAlloc; none of them worked.
Is there something I'm missing here?
Someone told me it has something to do with physical memory. I totally dismissed that, because why were virtual address space, address translation tables, and TLBs invented if I'm allocating physical memory?
If I'm allocating 1.5 GB on a machine with 256 megabytes of RAM and I try to access it, shouldn't the system be thrashing but still working?
Different versions of Windows have different memory restrictions. If you're using a 32-bit version, you may need to use the 4GB tuning techniques to allocate more than 2GB.
If you are running a 32-bit version of Windows, you have a maximum of 2 GB of user-mode virtual address space. Your compiled program and the C/C++ runtime libraries each use up some part of it, along with preallocated code and data segments, so you have less free address space than you think. I'll agree that 1.5 GB doesn't sound unreasonable, but then you would think that MS products weren't unreasonable too, right?
Try smaller pieces as a sanity check (e.g., 1 GB); I suspect that will succeed. And try the big allocations on a 64-bit system, where there isn't any practical upper limit.
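As a quick way to run that sanity check, here is a minimal probe (my own sketch, not from the answer) that halves the request until an allocation succeeds, revealing the largest contiguous block malloc can currently find:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t size = (size_t)2048 * 1024 * 1024;   /* start at 2 GB */
    void *p = NULL;

    /* Halve the request until an allocation succeeds. */
    while (size > 0 && (p = malloc(size)) == NULL)
        size /= 2;

    if (p != NULL) {
        printf("largest successful block: %zu MB\n", size / (1024 * 1024));
        free(p);
    }
    return 0;
}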
Are you using ODBC? The ODBC DLLs in 32-bit Windows seem to insert themselves at an awkward place in the virtual address space, causing large allocations like yours to fail. A workaround is to configure your app to delay-load the ODBC DLLs, and then make sure you allocate your big chunk before you call anything that uses ODBC.
Linux noob question:
If I have 500 MB of RAM and 500 MB of swap space, can the OS and processes then use 1 GB of memory?
In other words, is the total amount of memory available to programs and the OS the sum of the physical memory size and the swap size?
I'm trying to figure out which SNMP counters to query, but I need to understand how Linux uses virtual memory a little better first.
Thanks
Actually, it IS essentially correct, but your "virtual" memory does NOT reside beside your "physical" memory (as Matthew Scharley stated).
Your "virtual memory" is an abstraction layer covering both "physical" (as in RAM) and "swap" (as in hard disk, which is of course just as physical as RAM) memory.
Virtual memory is in essence an abstraction layer. Your program always addresses a "virtual" address, which your OS translates to an address in RAM or on disk (which needs to be loaded into RAM first), depending on where the data resides. So your program never has to worry about a lack of memory.
Nothing is ever quite so simple anymore...
Memory pages are lazily allocated. A process can malloc() a large quantity of memory and never use it. So on your 500 MB RAM + 500 MB swap system, I could -- at least in theory -- allocate 2 GB of memory off the heap, and things will run merrily along until I try to use too much of that memory. (At which point whatever process couldn't acquire more memory pages gets nuked. Hopefully it's my process. But not always.)
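A minimal sketch of that lazy allocation (my example, assuming default Linux overcommit settings): the malloc below can succeed even for more memory than RAM + swap combined, because nothing is committed until the pages are touched:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Ask for 2 GB; with overcommit this often succeeds even on a
       machine with far less than 2 GB of RAM + swap. */
    size_t size = (size_t)2048 * 1024 * 1024;
    char *p = malloc(size);
    printf("malloc(2 GB) %s\n", p != NULL ? "succeeded" : "failed");

    /* Only touching the pages forces the kernel to back them with
       real memory; uncommenting this is what could get us nuked.
    memset(p, 1, size);
    */
    free(p);                         /* free(NULL) is harmless */
    return 0;
}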
Individual processes are limited to 4 GB as a hard address limitation on 32-bit systems. Even when you have more than 4 GB of RAM in the machine and you're using that bizarre segmented 36-bit atrocity-from-hell addressing scheme (PAE), individual processes are still limited to 4 GB each. Some of those 4 GB have to go to shared libraries and program code, so you're down to 2-3 GB of stack + heap as an ADDRESSING limitation.
You can mmap files in, effectively giving you more memory. It basically acts as extra swap: rather than loading a program's binary data into memory and then swapping it out to the swap file, the file is just mmapped, and pages are swapped into RAM directly from the file as needed. (A sketch follows below.)
You can get into some interesting stuff with sparse data and mmapped sparse files. I've seen X Windows claim enormous memory usage when in fact it was only using a tiny bit.
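A minimal sketch of mapping a file this way (my example; "data.bin" is a hypothetical file name):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);    /* hypothetical input file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    fstat(fd, &st);

    /* Map the whole file; pages are faulted into RAM directly from
       the file on first access, with no trip through swap. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("first byte: %d\n", p[0]);       /* this access faults the page in */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}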
BTW: "free" might help you, as might "cat /proc/meminfo" or the Vm lines in /proc/$PID/status (especially VmData and VmStk). Or perhaps "ps up $PID".
Although it's mostly true, it's not entirely correct. For a particular process, the environment you run it in may limit the memory available to it. Check the output of ulimit -v as well.
Yes, this is essentially correct. The actual numbers might be (very) marginally lower, but for all intents and purposes, if you have x physical memory and y virtual memory (swap in Linux), then you have x + y memory available to the operating system and any programs running under it.