Why does an ANTLR4 parser accumulate ATNConfig objects? - antlr4

When developing a parser for C++ using ANTLR, we made a batch parsing test case in which a new parser is constructed to parse each C++ source file. The performance is acceptable at the start - about 15 seconds per file. But after parsing some 150+ files, each file takes longer and longer to parse, and finally the JVM throws a "GC limit exceeded" error.
Using JProfiler, we found many ATNConfig objects accumulating progressively after each file is parsed. Starting from about 70M, they steadily pile up to beyond 500M until the heap is nearly full and GC takes 100% of CPU time. The biggest objects (those that retain the most objects in the heap) identified by JProfiler include a DFA[] and a PredictionContextCache.
One thing to note is that we used two threads to run the parsing tasks concurrently. Although the threads don't share any parser or parse tree objects, we noticed that the parser generated by ANTLR uses static fields, which may contribute to the memory issue in a multi-threaded setup. This is only a suspicion, though.
Does anyone have a clue what causes this accumulation of ATNConfig objects? Is there an existing solution?

The ATNConfig instances are used for the dynamic DFA construction at runtime. The number of instances required is a function of the grammar and input. There are a few solutions available:
Increase the amount of memory you provide to the application. Try -Xmx12g as a starting point to see whether the problem is simply too little memory.
Each ATNConfig belongs to a DFA instance which represents the DFA for a particular decision. If you know the decision(s) which contain the most ATNConfig instances, you can work to simplify those decisions in the grammar.
Periodically clear the cached DFA by calling Recognizer.clearDFA() (see the sketch after this list). Note that clearing the DFA too often will hurt performance (if possible, do not clear the DFA at all).
You can use the "optimized" fork of ANTLR 4. This fork of the project is designed to reduce the memory footprint, which can tremendously help performance for complicated grammars at the expense of speed for certain simple grammars.
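If you go the clearDFA() route, a minimal sketch of the batch-parsing loop could look like the code below. It is an illustration only, under assumptions: CppLexer, CppParser, its translationUnit entry rule, and the FILES_PER_CLEAR interval are hypothetical stand-ins for your generated classes and your own tuning, and in recent mainline ANTLR 4 (Java) the clearDFA() call is reached through getInterpreter() on the lexer and parser (the optimized fork exposes it directly on the recognizer, as mentioned above).

import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import java.nio.file.Path;
import java.util.List;

public class BatchParse {
    // Hypothetical interval; tune it by profiling your own corpus.
    private static final int FILES_PER_CLEAR = 50;

    public static void parseAll(List<Path> sources) throws Exception {
        int parsed = 0;
        for (Path source : sources) {
            // CppLexer/CppParser stand in for whatever classes your grammar generates.
            CppLexer lexer = new CppLexer(CharStreams.fromPath(source));
            CppParser parser = new CppParser(new CommonTokenStream(lexer));
            parser.translationUnit(); // hypothetical entry rule

            // Drop the dynamically built DFA every FILES_PER_CLEAR files so the
            // shared cache cannot grow without bound; clearing after every file
            // would force ANTLR to rebuild the DFA constantly and hurt throughput.
            if (++parsed % FILES_PER_CLEAR == 0) {
                lexer.getInterpreter().clearDFA();
                parser.getInterpreter().clearDFA();
            }
        }
    }
}

Clearing every N files trades some re-analysis cost for a bounded steady-state footprint; profiling with JProfiler, as you already do, is the way to pick N.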

Related

Why does v8 report duplicate module strings in heap in my jest tests?

In the process of upgrading node (16.1.x => 16.5.0), I observed that I'm getting OOM issues from jest. In troubleshooting, I'm periodically taking heap snapshots. I'm regularly seeing entries in "string" for module source (same shallow/retained size). In this example screenshot, you can see that the exact same module (React) is listed 2x. Sometimes, the module string is listed even 4x for any given source module.
Upon expansion, it says "system / Map", which suggests to me, I think, that there's some v8-wide reference to this module string? That makes sense--maybe. Node has a require cache, jest has a module cache, and I'd assume v8 and node... share module references? The strings and compiled code buckets do increase regularly, but I expect them to get GC'd. In fact, I can see that many do--expanding the items shows the refs belonging to GC Roots. But I suspect something is holding on to these module references, and I fear it's not at the user level but at the tooling level. This is somewhat evidenced by the observation that only the node.js upgrade induces the OOM failure mode.
Why would my jest test have multiple instances of the same module? (I am using --runInBand, so I don't expect multiple workers.)
What tips would you offer to diagnose further?
I do show multiple VM Contexts, which I think makes sense--I suppose jest is running some test suites in some sort of isolation.
I do not have a reproduction--I am looking for discussion, best-known methods, and diagnostic ideas.
I can offer some thoughts:
"system / Map" does not mean "some v8 wide reference". "Map" is the internal name for "hidden class", which you may have heard of. The details don't even matter here; TL;DR: some internal thing, totally normal, not a sign of a problem.
Having several copies of the same string on the heap is also quite normal, because strings don't get deduplicated by default. So if you run some string-producing operation twice (such as: reading an external file), you'll get two copies of the string. I have no idea what jest does under the hood, but it's totally conceivable that running tests in parallel in mostly-isolated environments has a side effect of creating duplicate strings. That may be inefficient in a sense, but as long as they get GC'ed after a while, it's not really a problem.
If the specific hypothesis implied above (there are several tests in each file, and jest creates an in-memory copy of the entire file for each executing test) holds, then a possible mitigation might be to split your test files into smaller chunks (1.8MB is quite a lot for a single file). I don't have much confidence in this, but maybe it'd be easy for you to try it and see.
More generally: in the screenshot, there are 36MB of memory used by strings. That's far from being an OOM reason.
It might be insightful to measure the memory consumption of both Node versions. If, for example, it used to consume 4GB and now crashes when it reaches 2GB, that would indicate that the limit has changed. If it used to consume 2GB and now crashes when it reaches 4GB, that would imply that something major has changed. If it used to consume 1.98GB and now crashes when it reaches 2.0GB, then chances are something tiny has changed and you just happened to get lucky with the old version.
Until contradicting evidence turns up, I would operate under the assumption that the resource consumption is normal and simply must be accommodated. You could try giving Node more memory, or reducing the number of parallel test executions.
This seems to be a known issue with Jest on Node.js v16.11.0+ and has already been reported on GitHub.

Coding with Revit API: tips to reduce memory use?

I have quite a 'general' question. I am developing with the Revit API (with Python), and I sometimes observe that the Revit session gets slower during my tests and trials (the longer Revit stays open, the more it seems to happen). It's not getting to the point where it would be really problematic, but it made me think about it anyway.
So, since I have no programming background, I am pretty sure that my code is filled with really 'unorthodox' things that could be far better.
Would there be some basic 'tips and tricks' that I could follow (related to the Revit API, I mean) to help the speed of code execution? Or maybe I should say: to help reduce memory use?
For instance, I've read about the 'Dispose' method, notably when using Transactions (for instance here: http://thebuildingcoder.typepad.com/blog/2012/09/disposal-of-revit-api-objects.html ), but in the end it's not very clear to me whether that's actually important to do or not (and furthermore, since I'm using Python, I don't know where that puts me in the discussion about using "using" or not).
Should I just 'Dispose' everything? ;)
Besides the 'Dispose' method, is there something else?
Thanks a lot,
Arnaud.
Basics:
Okay, let's talk about a few important points here:
You're running scripts under IronPython, which is an implementation of Python in C#.
C# (on the .NET runtime) uses a garbage collector to reclaim unused memory.
The garbage collector (GC) is a piece of the runtime that executes at intervals to collect unused objects. It uses a series of techniques to group and categorize the target memory areas for later collection.
Your main program is paused to allow the GC to collect memory. This means that if the GC needs more time to do its job at each interval, your program will get slow and you'll experience some lag.
Issue:
Now to the heart of this issue:
Python is an object-oriented programming language at heart, and IronPython creates objects (similar in concept to Elements in Revit) for everything, from your variables to the methods of a class to functions and everything else. This means that all these objects need to be collected when they're not used anymore.
When Python is used as a scripting language for a program, there is generally a single Python engine that executes all user input.
However, Revit does not have a command prompt with an associated Python engine. So every time you run a script in Revit, a new engine is created; it executes the program and dies at the end.
This dramatically increases the amount of unused memory for the GC to collect.
Solution:
I'm the creator and maintainer of pyRevit, and this issue was resolved in pyRevit v4.2.
The solution was to set LightweightScopes = true when creating the IronPython engine, which forces the engine to create smaller objects. This dramatically decreased the memory used by IronPython and increased the time before the user experiences Revit performance degradation.
Sorry, I can't comment because of low reputation. I use another way to reduce memory; it's less pretty than the LightweightScopes trick, but it works for one-time cleanup after expensive operations:
import gc
my_object = some_huge_object
# [operation]
del my_object  # drop the reference (my_object = [] also does the job for a list or dict)
gc.collect()  # force an immediate collection instead of waiting for the next GC cycle

Opposite of a 'streaming' implementation

I'm struggling to find a word that describes a solution which is not streaming but processes everything in phases or stages, and therefore keeps everything in memory. 'Processing in bulk' is not the right term.
To give you an example: we currently have a mechanism that works on a list of identifiers. The list has about a million entries. It can either be loaded into memory entirely, processed, and then freed, or, in a streaming solution, read line by line with each line processed immediately, so the memory footprint is much smaller than in the first solution.
So what is the word to describe the first algorithm/solution?
I’m facing a similar naming issue, and decided to settle on “buffered”.

Limiting work in progress of parallel operations of a streamed resource

I've found myself recently using the SemaphoreSlim class to limit the work in progress of a parallelisable operation on a (large) streamed resource:
// The code below illustrates the structure; it omits the handling of tasks that do
// not run to completion, which production code should include.
SemaphoreSlim semaphore = new SemaphoreSlim(Environment.ProcessorCount * someMagicNumber);
foreach (var result in StreamResults())
{
    semaphore.Wait();
    var task = DoWorkAsync(result).ContinueWith(t => semaphore.Release());
    ...
}
This is to avoid bringing too many results into memory and the program being unable to cope (generally evidenced by an OutOfMemoryException). Though the code works and is reasonably performant, it still feels ungainly - notably the someMagicNumber multiplier, which, although tuned via profiling, may not be as optimal as it could be and isn't resilient to changes in the implementation of DoWorkAsync.
In the same way that thread pooling can overcome the obstacle of scheduling many things for execution, I would like something that can overcome the obstacle of scheduling many things to be loaded into memory based on the resources that are available.
Since it is deterministically impossible to decide whether an OutOfMemoryException will occur, I appreciate that what I'm looking for may only be achievable via statistical means or even not at all, but I hope that I'm missing something.
Here I'd say that you're probably overthinking this problem. The consequences of overshooting are rather high (the program crashes). The consequences of setting it too low are that the program might be slowed down. As long as you still have some buffer beyond a minimum value, further increases to the buffer will generally have little to no effect, unless the processing time of that task in the pipe is extraordinarily volatile.
If your buffer is constantly filling up, it generally means that the task before it in the pipe executes quite a bit quicker than the task that follows it, so even with a fairly small buffer the following task is likely to always have some work. The buffer size needed to get 90% of the benefits of a buffer is usually quite small (a few dozen items, maybe), whereas the size needed to cause an OOM error is likely 6+ orders of magnitude higher. As long as you're somewhere in between those two numbers (and that's a pretty big range to land in), you'll be just fine.
Just run your static tests, pick a static number, maybe add a few percent extra "just in case", and you should be good. At most, I'd move some of the magic numbers to a config file so that they can be altered without a recompile in the event that the input data or the machine specs change radically.

Specifying runtime behavior of a program

Are there any languages/extensions that allow the programmer to define general runtime behavior of a program during specific code segments?
Some garbage-collected languages let you modify the behavior of the GC at runtime. In Lua, for example, the collectgarbage function lets you do this. So you can stop the GC when you want to be sure that CPU resources aren't spent on garbage collection during a critical section of code (after which you start the GC again).
I'm looking for a general way to specify the intended behavior of the program without resorting to specific GC tweaks. I'm interested even in an on-paper sort of specification method (i.e. something a programmer would code toward, but not program syntax that would actually implement that behavior). The point would be that this could be used to specify critical sections of code that shouldn't be interrupted (latency-dependent activity) or other intended attributes of certain code paths (maximum time between an output and an input or two outputs, average running time, etc.).
For example, this syntax might specify that the maximum time latencyDependentStuff should take is 5 milliseconds:
requireMaxTime(5) {
    latencyDependentStuff();
}
Has anyone seen anything like this anywhere before?
