How to clear memory to prevent an "out of memory" error in VBA (Excel)?

I am running VBA code on a large Excel spreadsheet. How do I clear memory between procedures/calls to prevent an "out of memory" error from occurring?

The best way to help memory get freed is to nullify references to large objects once you are done with them:
Sub Whatever()
    Dim someLargeObject As SomeObject
    Set someLargeObject = New SomeObject 'SomeObject stands in for your own class
    'expensive computation using someLargeObject
    Set someLargeObject = Nothing 'drop the reference so the object can be reclaimed
End Sub
Also note that global variables remain allocated from one call to the next, so if you don't need persistence you should either avoid global variables or nullify them when you no longer need them.
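A minimal sketch of clearing a global between runs (the variable name and object type are illustrative assumptions):
'in a standard module
Public gBigData As Object 'e.g. a Scripting.Dictionary holding cached rows

Sub CleanupGlobals()
    Set gBigData = Nothing 'release the global reference when persistence isn't needed
End Sub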
However this won't help if:
you need the object after the procedure (obviously)
your object does not fit in memory
Another possibility is to switch to a 64-bit version of Excel, which should be able to use more RAM before crashing (32-bit versions are typically limited to around 1.3 GB of usable memory).

I've found a workaround. At first it seemed it would take more time, but it actually makes everything run smoother and faster thanks to less swapping and more available memory. This is not a scientific approach, and it needs some testing before you rely on it.
In the code, make Excel save the workbook every now and then. I had to loop through a sheet with 360,000 rows and it choked badly. After every 10,000 rows I made the code save the workbook, and now it works like a charm even on 32-bit Excel.
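A minimal sketch of the idea (the row count and save interval mirror my case; the loop body is a placeholder):
Dim i As Long
For i = 1 To 360000
    'process row i here
    If i Mod 10000 = 0 Then ThisWorkbook.Save 'periodic save frees accumulated memory
Next i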
If you start Task Manager at the same time you can see the memory utilization go down drastically after each save.

The answer is that you can't do it explicitly, but you should be freeing memory in your routines.
Some tips to help with memory:
Make sure you set objects to Nothing before exiting your routine.
Ensure you call Close on objects if they require it.
Don't use global variables unless absolutely necessary.
I would also recommend checking the memory usage after running the routine repeatedly; you may have a memory leak.
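A minimal sketch of the first two tips combined (the ADODB Recordset is just an illustrative example of an object with a Close method):
Dim rs As Object
Set rs = CreateObject("ADODB.Recordset")
'open and use rs here
rs.Close          'release the resources the object holds open
Set rs = Nothing  'drop the reference before exiting the routine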

Found this thread while looking for a solution to my problem. Mine required a different fix, which I figured out and which might be of use to others. My macro was deleting rows, shifting up, and copying rows to another worksheet. Memory usage was exploding to several gigabytes and causing "out of memory" after processing only around 4,000 records. What solved it for me?
Application.ScreenUpdating = False
Added that at the beginning of my code (be sure to set it back to True at the end).
I knew that would make it run faster, which it did, but I had no idea about the memory effect.
After making this small change, memory usage didn't exceed 135 MB. Why did that work? No idea, really. But it's worth a shot and might apply to you.
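For reference, a minimal sketch of the pattern, with an error handler so screen updating is restored even if the macro fails partway (the procedure name and body are placeholders):
Sub ProcessRecords()
    Application.ScreenUpdating = False
    On Error GoTo Cleanup
    'delete rows, shift up, and copy rows to the other worksheet here
Cleanup:
    Application.ScreenUpdating = True
End Sub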

If you operate on a large dataset, it is very likely that arrays will be used.
For me, creating a few arrays from a 500,000-row by 30-column worksheet caused this error. I solved it simply by using the line below to get rid of an array that I no longer needed, before creating another one:
Erase vArray
Also, if only 2 of the 30 columns are actually used, it is a good idea to create two 1-column arrays instead of one array with 30 columns. It doesn't affect speed, but there will be a noticeable difference in memory usage.
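A minimal sketch combining both ideas (sheet name, column letters and row count are illustrative):
Dim vColA As Variant, vColB As Variant
With ThisWorkbook.Worksheets("Data")
    vColA = .Range("A1:A500000").Value 'load only the columns you need
    vColB = .Range("B1:B500000").Value
End With
'... work with vColA and vColB ...
Erase vColA 'free each array as soon as it is no longer needed
Erase vColB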

I had a similar problem that I resolved myself. I think it was partially my code hogging too much memory while there were too many "big things" in my application: the workbook goes out and grabs another department's "daily report", and I extract all the information our team needs (to minimize mistakes and data entry).
I pull in their sheets directly, but I hate the fact that they use merged cells, which I get rid of (i.e. unmerge, then find the resulting blank cells and fill them with the values from above).
I made my problem go away by:
a) unmerging only the "used cells" rather than attempting entire columns, i.e. finding the last used row in the column and unmerging only that range (there are literally thousands of rows on each of the sheets I grab);
b) knowing that Undo only keeps roughly the last 16 events, between each unmerge I inserted trivial events to flush what is stored in the Undo stack and minimize the memory tied up there (i.e. go to some cell with data in it and copy/paste-special values). I was guessing that the accumulated total of 30 sheets, each with 3 columns' worth of data, might be taxing the memory set aside for undoing.
Yes, this removes any chance of an Undo, but the entire purpose is to purge the old information and pull in new, time-sensitive data for analysis, so that wasn't an issue.
It sounds corny, but my problem went away.
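For point (a), a minimal sketch of unmerging only the used part of a column (sheet and column names are illustrative):
Dim ws As Worksheet, lastRow As Long
Set ws = ThisWorkbook.Worksheets("DailyReport")
lastRow = ws.Cells(ws.Rows.Count, "A").End(xlUp).Row
ws.Range("A1:A" & lastRow).UnMerge 'touch only the used range, not the whole column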

I was able to fix this error by simply initializing a variable that was being used later in my program. At the time, I wasn't using Option Explicit in my class/module.
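For reference, a minimal sketch of what that looks like (names are placeholders):
Option Explicit 'at the top of the module: every variable must be declared

Sub Example()
    Dim total As Long
    total = 0 'declare and initialize explicitly before use
    '...
End Sub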

Related

VBA - Export Image from Excel *without* using Clipboard (Copy/Paste)

There are a lot of great examples of how to take an Excel range, create an image from it, and save it to the drive. Here is one: Export pictures from excel file into jpg using VBA
This works great on a small scale, but when you try to run it through 3,000 or more iterations, a "memory leak" caused by repeated use of the clipboard eventually erodes the process and the macro fails somewhere along the way. This occurs even when running 64-bit Excel on a powerful machine (50+ GB of RAM).
Are there any ways to do this without using the clipboard? My first thought was to fix the memory leak itself, but all of those attempts have been unsuccessful. For context, I'm basically using the exact code provided in the solution at the link above (with a couple of added features to try to reduce the leaking, like auto-saving the workbook after every 100 images, etc.).
I'm also looking for what you mentioned; here's how to do it with a chart:
Dim file As String 'the path to the saved image, in the temp dir
file = Environ$("temp") & "\chart.gif"
Sheets("Sheet1").ChartObjects(1).Activate 'ChartObjects is 1-based in VBA
Sheets("Sheet1").ChartObjects(1).Chart.Export Filename:=file, FilterName:="GIF"
There was ultimately no solution for the memory leak; it seems to be a systemic problem with VBA.
For those trying to programmatically generate charts, it is much easier to build them in PHP.

VBA Excel Automation - Memory leak and two dot rule?

I'm having an issue with a memory leak while running some VBA code I wrote to look at a source spreadsheet, pull new data, do some work on it, and save it to other spreadsheets. The code then uses Application.OnTime to call itself again in a few minutes, giving me a continually updating dataset. All the Excel files involved are under 10 MB. The result is that after a few hours of running, the Excel process is multiple gigabytes, as is the Kernel Memory Paged Pool. Alternatively, I've tried controlling the looping from a Word macro so that I can kill the Excel process after each run completes. This keeps the Excel process's memory usage in check, but the Kernel Memory Paged Pool still grows seemingly without end; after about two days the paged pool will be about 10 GB.
I've seen some advice about being wary of using two dots with COM objects as a source of memory issues.
How do I properly clean up Excel interop objects?
https://www.add-in-express.com/creating-addins-blog/2013/11/05/release-excel-com-objects/
But from what I've seen, these don't address the issue if you're coding within the Microsoft Visual Basic for Applications side of Excel. Does the two-dot issue remain if my code is all in VBA?
If it does, how far do I need to take the idea when dealing with things like ranges, rows, columns, etc.? For example, a common task is finding the number of used rows in a sheet, which I do with:
Dim ws As Excel.Worksheet
Dim rowsize As Long
Set ws = ThisWorkbook.Worksheets("name")
With ws
    rowsize = .Range("A1", .Range("A" & .Rows.Count).End(xlUp)).Rows.Count
End With
Do you need to create variables to hold Worksheet.Range and Range.Rows to avoid double dots? If so, what would the above code look like when properly written to observe the no-two-dot rule of thumb?
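For illustration, here is roughly what that would look like if the rule were taken literally, with every intermediate COM object held in its own variable and released afterwards; this is a sketch of what the question is asking about, not a claim that VBA requires it:
Dim ws As Excel.Worksheet
Dim allRows As Excel.Range
Dim bottomCell As Excel.Range
Dim lastCell As Excel.Range
Dim usedCol As Excel.Range
Dim usedRows As Excel.Range
Dim rowsize As Long

Set ws = ThisWorkbook.Worksheets("name")
Set allRows = ws.Rows
Set bottomCell = ws.Range("A" & allRows.Count)
Set lastCell = bottomCell.End(xlUp)
Set usedCol = ws.Range("A1", lastCell)
Set usedRows = usedCol.Rows
rowsize = usedRows.Count

'release in reverse order of acquisition
Set usedRows = Nothing
Set usedCol = Nothing
Set lastCell = Nothing
Set bottomCell = Nothing
Set allRows = Nothing
Set ws = Nothing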
PS
I've tried debugging the memory leak more directly using poolmon.exe. It repeatedly shows CMNb as the culprit tag, but I can't get any further down this debugging path, as I'm unable to locate the tag using strings and findstr as shown in the link below:
https://blogs.technet.microsoft.com/markrussinovich/2009/03/10/pushing-the-limits-of-windows-paged-and-nonpaged-pool/

Netlogo 5.1 (and 5.05) Behavior Space Memory Leak

I have posted about this before, but thought I had tracked it down to the NW extension; however, the memory leakage still occurs in the latest version. I found this thread, which discusses a similar issue but attributes it to BehaviorSpace:
http://netlogo-users.18673.x6.nabble.com/Behaviorspace-Memory-Leak-td5003468.html
I have found the same symptoms. My model starts out at around 650 MB, but over each run the private working set memory rises to the point where it hits the 1024 MB limit. I have sufficient memory to raise this limit, but in reality that will only delay the onset. I am using the table output, since based on previous discussions this helps, and it does, but it only slows the rate of increase. Eventually the memory usage rises to a point where the PC starts to struggle. I am clearing all data between runs, so there should be no hangover. I noticed in the highlighted thread that they were going to run headless. I will try this, but I wondered if anyone else had noticed the issue? My other option is to break the BehaviorSpace simulation into a few batches so the issue never arises, but it would be nice to let the model run and walk away, as it takes around 2 hours to go through.
Some possible next steps:
1) Isolate the exact conditions under which the problem does or does not occur. Can you make it happen without involving the nw extension, or not? Does it still happen if you remove some of the code from your model? What if you keep removing code: when does the problem go away? What is the smallest program that still causes the problem? Almost any bug can be demonstrated with only a small amount of code, and finding that smallest demonstration is exactly what is needed in order to track down the cause and fix it.
2) Use standard memory profiling tools for the JVM to see what kind of objects are using the memory. This might provide some clues to possible causes.
In general, we are not receiving other bug reports from users along these lines. It has been routine for many years now for people to use BehaviorSpace (both headless and not) to run experiments that last for hours or even days. So whatever it is you're experiencing almost certainly has a more specific cause, most likely in the nw extension, that could be isolated.

Reducing memory usage in an extended Mathematica session

I'm doing some rather long computations, which can easily span a few days. In the course of these computations, Mathematica will sometimes run out of memory. To deal with this, I've ended up resorting to something along the lines of:
ParallelEvaluate[$KernelID]; (* Force the kernels to launch *)
kernels = Kernels[];
Do[
  If[Mod[iteration, n] == 0,
    CloseKernels[kernels];
    LaunchKernels[kernels];
    ClearSystemCache[]];
  (* Complicated stuff here *)
  Export[...], (* If a computation ends early I don't want to lose past results *)
  {iteration, min, max}]
This is great and all, but over time the main kernel accumulates memory. Currently, my main kernel is eating up roughly 1.4 GB of RAM. Is there any way I can force Mathematica to clear out the memory it's using? I've tried littering Share and Clear throughout the many Modules I'm using in my code, but the memory still seems to build up over time.
I've also tried to make sure I have nothing big and complicated running outside of a Module, so that nothing stays in scope too long. But even with this I still have memory issues.
Is there anything I can do about this? I'm always going to have a large amount of memory being used, since most of my calculations involve several large and dense matrices (usually 1200 x 1200, but it can be more), so I'm wary about using MemoryConstrained.
Update:
The problem was exactly what Alexey Popkov stated in his answer. If you use Module, memory will leak slowly over time. It happened to be exacerbated in this case because I had multiple Module[..] statements. The "main" Module was within a ParallelTable where 8 kernels were running at once. Tack on the (relatively) large number of iterations, and this was a breeding ground for lots of memory leaks due to the bug with Module.
Since you are using Module extensively, I think you may be interested in knowing about this bug with non-deleted temporary Module variables.
Example (non-deleting unlinked temporary variables with their definitions):
In[1]:= $HistoryLength=0;
a[b_]:=Module[{c,d},d:=9;d/;b===1];
Length@Names[$Context<>"*"]
Out[3]= 6
In[4]:= lst=Table[a[1],{1000}];
Length@Names[$Context<>"*"]
Out[5]= 1007
In[6]:= lst=.
Length@Names[$Context<>"*"]
Out[7]= 1007
In[8]:= Definition@d$999
Out[8]= Attributes[d$999]={Temporary}
d$999:=9
Note that in the above code I set $HistoryLength = 0; to stress this buggy behavior of Module. If you do not do this, temporary variables can still be referenced by the history variables (In and Out), and for that reason they will not be removed along with their definitions in a broader set of cases (which is not a bug but a feature, as Leonid mentioned).
UPDATE: Just for the record, there is another old bug, dating to v5.2, where unreferenced Module variables are not deleted after Part assignments to them; it is not completely fixed even in version 7.0.1:
In[1]:= $HistoryLength=0;$Version
Module[{L=Array[0&,10^7]},L[[#]]++&/@Range[100];];
Names["L$*"]
ByteCount@Symbol@#&/@Names["L$*"]
Out[1]= 7.0 for Microsoft Windows (32-bit) (February 18, 2009)
Out[3]= {L$111}
Out[4]= {40000084}
Have you tried evaluating $HistoryLength=0; in all subkernels as well as in the master kernel? History tracking is the most common cause of running out of memory.
Have you tried not using the slow and memory-consuming Export, and using the fast and efficient Put instead?
It is not clear from your post where you evaluate ClearSystemCache[]: in the master kernel or in the subkernels? It looks like you evaluate it in the master kernel only. Try evaluating it in all subkernels too (e.g. via ParallelEvaluate[ClearSystemCache[]]) before each iteration.

How to find an address in memory every time

I have been working on a server, and it works with 2 programs I made: one is the server, and one is the error handler, which restarts the main server if it fails. The 2nd program's main way of handling data is by reading values directly from the server's memory (when I was debugging, I was filling in the addresses by hand), because writing the values to a text file would take too long and would also take up space I really need. Anyway, I have about 100,000 values, BUT I only need about 100 of them. I need to find ONLY those, and if I get the wrong one I might crash the server by trying to fix what's "wrong" when nothing is. (Sometimes there are far more, but there will not be more than 100k of them by the time I need to know the addresses.)
I don't need people to tell me some other way to do it; I would really just like to know how to find one value among all of the others. And I can't write them to a text file; I can only read them from memory, because of the way I set it up, and I don't want to spend 2-3 weeks recoding it.
~edit~
Sorry if I was not clear.
I need the address of a value in memory (i.e. an int, bool, etc.), so I can find it.
Also, I really don't want to share anything between the 2 programs, because if one crashes it might take the other with it. If they are sharing memory and one crashes and does not restart, my server will be offline until someone tells me or I do an update, so a day or two.
If anyone else is confused, sorry; just ask and I'll edit.
You won't be able to find them in memory unless you already know their values.
And if you already know their values, why bother looking them up?
If it'd take you 2-3 weeks to re-code it, you should probably spend those 2-3 weeks rewriting your "server" application so that it's more maintainable.
Sorry, it doesn't work that way. Many "values" (variables) are not stored in memory. Instead, they are stored in CPU registers. This is done because registers are a lot faster. However, they are also scarce, so in a big program like yours they will be reused. At different times, different variables will be mapped to a particular register. As a result, even if you know that localVariable732 is sometimes mapped to the ECX register, you won't know whether the ECX register currently contains the localVariable732 value.
