Apache YARN - Allocating more memory than the physical RAM

I was considering changing yarn.nodemanager.resource.memory-mb to a value higher than the RAM available on my machine. Doing a quick search revealed that not many people are doing this.
Many long-lived applications on YARN are bound to have a JVM heap in which some memory is used frequently and some is rarely touched. In that case it would make perfect sense for such applications to have some of their infrequently used memory swapped to disk, freeing physical memory for other applications that need it.
Given the above background, can someone either corroborate my reasoning or offer an alternate perspective? Can you also clarify how the parameter yarn.nodemanager.vmem-pmem-ratio would work in the above case?

This is not a good idea. Trying to use more memory than what is available will eventually crash your NodeManager hosts.
There is already a feature called opportunistic containers, which uses the spare memory not actually being used on the NMs to schedule additional containers on those hosts. Refer to:
YARN-1011 [Umbrella] Schedule containers based on utilization of currently allocated containers
In addition, Pepperdata has a product that does almost the same thing if you can't wait for YARN-1011.
https://www.pepperdata.com/products/capacity-optimizer/
As for yarn.nodemanager.vmem-pmem-ratio, don't enable this as it's not recommended anymore.
YARN-782 vcores-pcores ratio functions differently from vmem-pmem ratio in misleading way
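To clarify how yarn.nodemanager.vmem-pmem-ratio behaves (this is general YARN behaviour, assuming the virtual-memory check is enabled via yarn.nodemanager.vmem-check-enabled; the answer above recommends leaving it off): a minimal sketch of the check the NodeManager performs.

    # A container allocated pmem_mb of physical memory is killed by the
    # NodeManager once its virtual memory exceeds pmem_mb * vmem-pmem-ratio.
    # 2.1 is the usual default ratio.
    def vmem_limit_mb(pmem_mb, vmem_pmem_ratio=2.1):
        return pmem_mb * vmem_pmem_ratio

    print(vmem_limit_mb(4096))  # 8601.6 -> a 4 GB container may use ~8.6 GB of virtual memory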

Related

How to implement a swapfile across operating systems

There are use cases where I can't have a lot of RAM: docker-based services often don't provide more than 512 MB-1 GB of RAM, or I may run multiple Rust-based GUI apps that each take about 100 MB of RAM normally. How can I implement a swapfile / virtual RAM to exceed the allotted RAM? Also, OS-level swapfiles don't let users choose which app uses real RAM and which uses the swapfile, so that can become a problem too. I want to use the swapfile as much as possible, and hardly any real RAM if possible. Users and hosting services usually provide a lot of storage (more than 10 GB normally), so it would be a good way to use the available storage too!
If a swapfile or anything like that isn't possible, I would like to know whether there is any difference in speed and CPU consumption between apps that "cache data in RAM" and apps that "cache data in a file and read it when required". If the latter is normally slower and less efficient than a swapfile, I would like to know how the OS manages to make swapfiles so much more efficient than what an app can do.
An application does not control whether the memory it allocates ends up in real RAM, on a swap partition, or elsewhere. You just ask for memory, and the OS is responsible for finding available memory to give to you.
Besides that, note that using swap (sometimes called swapping) is extremely bad performance-wise. How much depends a lot on your hardware, but it's about three orders of magnitude. This is even amplified if you are interacting with a user: a program that is fetching some resources will not be too bothered if it has to wait one minute to get them instead of a few milliseconds because the system is under heavy load, but a user will generally not be that patient.
Also note that, when swapping, the OS does not choose at random which applications keep the faster RAM and which get swapped out. It tries to determine which applications should be prioritized, and by how much, based on how it was configured (at least for the Linux kernel), so in reality it is the user who, in the end, decides which applications get the most RAM (ahead of time, of course: they are not prompted with a little pop-up each time the kernel has to make that decision...).
Finally, modern OSes allow applications to collectively allocate more memory than physically exists, as long as each application is not fully using the memory it asked for (which is quite common), letting you run applications that in theory require more RAM than you actually have.
That was the OS part; now for the application part. Usually, when you write a program (whose purpose is not specifically RAM-related), you should not really worry about memory consumption (up to a point), especially in Rust. Not only is that usually handled by the OS if you use a little too much memory, but when it's possible, most people prefer to trade a little more memory usage (even a lot more) for better CPU performance, because RAM is a lot cheaper than CPU.
There are exceptions, of course, where memory consumption is so high that you can't afford not to pay attention. In those cases, either you let the user deal with the problem ("this application is known to consume a lot of memory because there is no other way to do it, so if you want to use it, have a lot of memory"), as video games often do; or you rethink your application to reduce memory usage at the cost of some CPU efficiency, as is done, for example, when handling graphs so huge you couldn't even store them on all the hard disks in the world (in which case the application has to be smart enough to work on small parts of the graph at a time); or the big resource you are working with can be stored on the hard disk, in which case you just write it to a file and access it chunk-by-chunk, as some database managers do.
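To make the last point concrete, here is a minimal sketch of chunk-by-chunk access to a file on disk (in Python purely for illustration, even though the question mentions Rust; the function name and chunk size are arbitrary): peak memory stays near the chunk size regardless of how large the file is.

    CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB per read; tune to taste

    def checksum_file(path):
        # Process the file one chunk at a time instead of loading it all into RAM.
        total = 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                total = (total + sum(chunk)) & 0xFFFFFFFF  # toy per-chunk work
        return total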

How to determine Node --max_old_space_size for a given memory limit?

We run Node processes inside Docker containers with hard memory caps of 1GB, 2GB, or 4GB. Each container generally just runs a single Node process (plus maybe a tiny shell script wrapper). Let's assume for the purposes of this question that the Node process never forks more processes.
For our larger containers, if we don't set --max_old_space_size ourselves, then in the version of Node we use (on a 64-bit machine) it defaults to 1400MB. (This will change to 2048MB in a later version of Node.)
Ideally we want our Node process to use as much of the container as possible without going over and running out of memory. The question is: what number should we use? My understanding is that this particular flag tunes the size of one of the largest pools of memory used by Node, but it's not the only pool; e.g. there's a "non-old" part of the heap, there's the stack, etc. How much should I subtract from the container's size when setting this flag in order to stay away from the cgroup memory limit but still make maximal use of the amount of memory allowed in this container?
I do note that from the same place where kMaxOldSpaceSizeHugeMemoryDevice is defined, it looks like the default "max semi space" is 16MB and the default "max executable size" is 512MB. So I suspect this means I should subtract at least 528MB from the container's memory limit when determining the value for this flag. But surely there are other ways that Node uses memory?
(To be more specific, we are a hosting service that sells containers of particular sizes to our users, most of which use them for Node processes. We'd like to be able to advise our customers as to what flag to set so that they neither are killed by our limits nor pay us for capacity that Node's configuration doesn't let them actually use.)
There is, unfortunately, no particularly satisfactory answer to this question.
The constants you've found control the size of the garbage-collected heap, but as you've already guessed, there are many ways to consume memory that's not part of that heap:
For example, big strings and big TypedArrays are typically managed by the embedder (i.e. node and its modules, not V8 itself), and outside the GC'ed heap.
Node modules, in general, can consume whatever memory they want. Presumably you don't want to restrict what modules your customers can run, but that implies that you also can't predict how much memory those modules are going to require.
V8 also uses temporary memory outside the GC'ed heap for parsing and compilation. How much depends on the code that's being run; anything from a few kilobytes up to a gigabyte or more (e.g. for huge asm.js codebases) is possible. These are relatively short-lived memory consumption peaks, so on the one hand you probably don't want to limit long-lived heap memory to account for them, but on the other hand that means they can make your processes run into the system limit.
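For concreteness, the arithmetic sketched in the question might look like the following; this is a rough, hypothetical starting point only, since (as the answer explains) module and off-heap usage cannot be predicted. The safety margin used here is purely an assumption, not a recommendation from the answer.

    def max_old_space_mb(container_mb,
                         semi_space_mb=16,       # default "max semi space" quoted above
                         executable_mb=512,      # default "max executable size" quoted above
                         safety_margin_mb=200):  # arbitrary assumption for other off-heap use
        return container_mb - semi_space_mb - executable_mb - safety_margin_mb

    print(max_old_space_mb(4096))  # e.g. a 4 GB container -> 3368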

Behaviour of Mesos when using cgroups for Spark

I was wondering what the behaviour of Spark in fine-grained mode on Mesos would be when cgroups are enabled.
One concern is the following: when I use Mesos + Spark without cgroups, it already shows that the actual Spark executor process uses at least 10% more memory than what it promised to Mesos it would use. When cgroups are enabled, would they kill the Spark executors?
Second, how is the file cache handled? Spark relies heavily on the file cache. Is the file cache counted against the amount of memory in Mesos? Probably not, but could we influence this? For example, ideally I want Spark to use 8GB in total, of which 5GB should be used for the java process (assuming that Spark plays nicely and does not grow beyond 5GB) and 3GB should be used as file cache (max).
I hope someone has experience with this, because in order to test these things myself I would have to go through a lot of support requests from our cluster sysadmin, as cgroups rely on root credentials at one point, and I'd hate for it to be in vain without having asked others first.
To answer your first question, it seems you've got something mixed up about how cgroups work. The executor simply would not be able to allocate more memory than the cgroup allows (and indeed it cannot, as I can confirm). So Mesos would not actually act as a process killer or anything*. But some types of programs do break when they are unable to allocate more memory; whether a program then quits, or keeps running fine but with less memory and/or lower performance, depends on the program.
For your second question, there don't seem to be any configuration settings to influence the actual cgroup memory amount. There seems to be a 1-to-1 mapping between the executor memory setting and what Spark gets from Mesos. However, I do think there is a hidden factor, because I can see Spark asks for roughly 5.8GB while I actually set the executor memory to 5GB. (I'll update this once I can find the hidden factor of probably 15% in the source code.)
Update: the setting you want is spark.mesos.executor.memoryOverhead. You give it a number of megabytes, which is added to the executor memory; the sum is the total memory that will be requested as a Mesos resource, and thus used as the cgroup memory limit.
* Update 2: actually, cgroups does by default kill processes that grow beyond the control group's limit. I can confirm that memory.oom_control in /cgroups/memory/x/ is set to '0' (which, counter-intuitively, means the OOM killer is enabled). However, in the case of Spark it is the aforementioned 10-15% overhead that gives enough leeway not to hit the OOM killer.
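As a sketch of the sizing described above (the 15% figure is this answer's own estimate of the hidden factor, not a verified Spark default):

    def mesos_request_mb(executor_mb, overhead_mb=None, overhead_fraction=0.15):
        # The Mesos resource request (and thus the cgroup limit) is executor
        # memory plus an overhead, either explicit (memoryOverhead) or estimated.
        if overhead_mb is None:
            overhead_mb = int(executor_mb * overhead_fraction)
        return executor_mb + overhead_mb

    print(mesos_request_mb(5 * 1024))                        # 5888 MB (~5.75 GB), i.e. the "roughly 5.8GB" observed above
    print(mesos_request_mb(5 * 1024, overhead_mb=3 * 1024))  # 8192 MB: 5 GB heap + 3 GB headroom, as in the question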

Erlang garbage collection

I need your help investigating an issue with Erlang memory consumption. How typical, isn't it?
We have two different deployment schemes.
In the first scheme we run many identical nodes on small virtual machines (in Amazon AWS), one node per machine. Each machine has 4 GB of RAM.
In the other deployment scheme we run these nodes on big bare-metal machines (with 64 GB of RAM), with many nodes per machine. In this deployment the nodes are isolated in docker containers (with the memory limit set to 4 GB).
I've noticed that the heaps of processes in dockerized nodes hog up to 3 times more RAM than the heaps in non-dockerized nodes under identical load. I suspect that garbage collection in the non-dockerized nodes is more aggressive.
Unfortunately, I don't have any garbage collection statistics yet, but I would like to obtain them ASAP.
To give more information: we are using HiPE R17.1 on Ubuntu 14.04 with a stock kernel. In both schemes we run 8 schedulers per node and use the default fullsweep_after setting.
My blind guess is that Erlang's default garbage collection relies (somehow) on /proc/meminfo (which is not accurate in a dockerized environment).
I am not a C guy and am not familiar with the emulator internals, so could someone point me to the places in the Erlang sources that are responsible for garbage collection, and to emulator options I can use to tweak this behaviour?
Unfortunately, VMs often try to be smarter with memory management than necessary, and that does not always play nicely with the Erlang memory management model. Erlang tends to allocate and release a large number of small chunks of memory, which is very different from normal applications, which usually allocate and release a small number of big chunks of memory.
One of those technologies is Transparent Huge Pages (THP), which some OSes enable by default and which causes Erlang nodes running in such VMs to grow (until they crash).
https://access.redhat.com/solutions/46111
https://www.digitalocean.com/company/blog/transparent-huge-pages-and-alternative-memory-allocators/
https://docs.mongodb.org/manual/tutorial/transparent-huge-pages/
So ensuring THP is switched off is the first thing you should check.
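A quick way to check the current THP mode (the sysfs path below is the usual upstream location; some distributions use a slightly different path):

    # Print the Transparent Huge Pages setting; the bracketed value is the
    # active one, e.g. "always madvise [never]".
    with open("/sys/kernel/mm/transparent_hugepage/enabled") as f:
        print(f.read().strip())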
The other thing to try is tweaking the memory allocator options used when starting the Erlang VM itself; for example, see this post:
Erlang: discrepancy of memory usage figures
Resulting options that worked for us:
-MBas aobf -MBlmbcs 512 -MEas aobf -MElmbcs 512
Some more theory about memory allocators:
http://www.erlang-factory.com/static/upload/media/139454517145429lukaslarsson.pdf
And a more detailed description of the memory allocator flags:
http://erlang.org/doc/man/erts_alloc.html
The first thing to know is that garbage collection in Erlang is per-process. Each process is garbage-collected in its own time, independently of the others. So garbage collection in your system depends only on the data in your processes, not on the operating system itself.
That said, there can be differences between memory consumption from the Erlang point of view and from the system point of view. That's why comparing erlang:memory() to what your system is saying is always a good idea (it could show you binary leaks or other memory problems).
If you would like to understand a little more about Erlang internals, I would recommend these two talks:
https://www.youtube.com/watch?v=QbzH0L_0pxI
https://www.youtube.com/watch?v=YuPaX11vZyI
And for somewhat better debugging of your memory management, I would recommend starting with http://ferd.github.io/recon/

Why are semaphores limited in Linux

We just ran out of semaphores on our Linux box, due to the use of too many Websphere Message Broker instances or somesuch.
A colleague and I got to wondering why this is even limited - it's just a bit of memory, right?
I thoroughly googled and found nothing.
Anyone know why this is?
cheers
Semaphores, when being used, require frequent access with very, very low overhead.
Having an expandable system where the memory for each newly requested semaphore structure is allocated on the fly would introduce complexity that slows down access: the kernel would first have to look up where the particular semaphore in question is currently stored, then go fetch that memory and check the value. It is easier and faster to keep them in one compact block of fixed memory that is readily at hand.
Having them dispersed throughout memory via dynamic allocation would also make it more difficult to efficiently use memory pages that are locked (that is, not subject to being swapped out when there are high demands on memory). The use of "locked in" memory pages for kernel data is especially important for time-sensitive and/or critical kernel functions.
Having the limit be a tunable parameter (see the links in the comments on the original question) allows it to be increased at runtime if needed, via an "expensive" reallocation and relocation of the block. But typically this is done once at system initialization, before much of anything is using semaphores.
That said, the amount of memory used by a semaphore set is rather tiny. With modern systems having many gigabytes of memory, the original default limits on the number of them might seem a bit stingy. But keep in mind that on many systems semaphores are rarely used by user-space processes, and the Linux kernel finds its way into lots of small embedded systems with rather limited memory, so setting the default limit arbitrarily high just in case seems wasteful.
The few software packages that do depend on having many semaphores available, such as the Oracle database, typically recommend increasing the system limits in their installation and/or system-tuning advice.
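For reference, the tunable limits discussed above can be inspected on a Linux box; a minimal sketch:

    # /proc/sys/kernel/sem (the kernel.sem sysctl) holds four numbers:
    #   SEMMSL  max semaphores per set
    #   SEMMNS  system-wide max number of semaphores
    #   SEMOPM  max operations per semop() call
    #   SEMMNI  max number of semaphore sets
    with open("/proc/sys/kernel/sem") as f:
        semmsl, semmns, semopm, semmni = map(int, f.read().split())
    print(semmsl, semmns, semopm, semmni)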
