I noticed memory leaks in my program and tracked them down to the signal handling. It seems crazy that there isn't a leak-free way to do this. I'm not worried about the "still reachable" bytes reported by Valgrind - I'm worried about the "possibly lost" bytes.
Minimal reproducible example:
use tokio::signal;
use tokio::time::{sleep, Duration};

async fn sleep1() {
    loop {
        sleep(Duration::from_secs(1)).await;
        println!("sleep1");
    }
}

async fn sleep2() {
    loop {
        sleep(Duration::from_secs(2)).await;
        println!("sleep2");
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let s1 = Box::pin(sleep1());
    let s2 = Box::pin(sleep2());
    let sig = Box::pin(signal::ctrl_c());
    tokio::select! {
        _ = s1 => {},
        _ = s2 => {},
        _ = sig => {},
    };
    println!("shutting down");
    Ok(())
}
excerpt from Cargo.toml file:
edition = "2021"
tokio = { version = "1", features = ["full"] }
valgrind output:
==1366460== Command: target/debug/simulation
==1366460==
sleep1
sleep2
sleep1
sleep1
^Cshutting down
==1366460==
==1366460== HEAP SUMMARY:
==1366460== in use at exit: 25,884 bytes in 82 blocks
==1366460== total heap usage: 617 allocs, 535 frees, 145,635 bytes allocated
==1366460==
==1366460== LEAK SUMMARY:
==1366460== definitely lost: 0 bytes in 0 blocks
==1366460== indirectly lost: 0 bytes in 0 blocks
==1366460== possibly lost: 1,188 bytes in 3 blocks
==1366460== still reachable: 24,696 bytes in 79 blocks
==1366460== suppressed: 0 bytes in 0 blocks
==1366460== Rerun with --leak-check=full to see details of leaked memory
==1366460==
==1366460== For lists of detected and suppressed errors, rerun with: -s
==1366460== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
According to the Tokio developers, this is a false positive resulting from the use of global variables; see here.
Also, I found that if one really wishes to get rid of the Valgrind error, it's possible to implement the signal handler in C and call it from Rust. The Rust program would spawn a blocking thread (with something like tokio::task::spawn_blocking) that waits for the C code to catch a signal and terminate.
I've been learning about memory management in Node.js and I'm trying to understand why the following two behaviors occur:
PS: I'm using the following utility functions to help me print memory usage to the console:
function toMb (bytes) {
  return (bytes / 1000000).toFixed(2);
}

function printMemoryData() {
  const memory = process.memoryUsage();
  return {
    rss: `${toMb(memory.rss)} MB -> Resident Set Size - total memory allocated for the process execution`,
    heapTotal: `${toMb(memory.heapTotal)} MB -> total size of the allocated heap`,
    heapUsed: `${toMb(memory.heapUsed)} MB -> actual memory used during the execution`,
    external: `${toMb(memory.external)} MB -> V8 external memory`,
  };
}
Part 1) fs.readFile with encoding vs buffers
When I do:
const fs = require('fs');

let data;
fs.readFile('path/to/500MB', {}, function (err, buffer) {
  data = buffer;
  console.log('Memory usage after files read:', printMemoryData());
});
I get the following output:
Memory usage after files read: {
rss: '565.22 MB -> Resident Set Size - total memory allocated for the process execution',
heapTotal: '11.01 MB -> total size of the allocated heap',
heapUsed: '5.66 MB -> actual memory used during the execution',
external: '524.91 MB -> V8 external memory'
}
Even though I'm storing the data in a local `data` variable (a V8 object), the heap isn't used.
But when I add the encoding:
fs.readFile('path/to/500MB', {encoding: 'utf-8'}, function (err, buffer) {
  console.log('Memory usage after files read:', printMemoryData());
});
I get the following output:
Memory usage after files read: {
rss: '1088.71 MB -> Resident Set Size - total memory allocated for the process execution',
heapTotal: '535.30 MB -> total size of the allocated heap',
heapUsed: '529.95 MB -> actual memory used during the execution',
external: '524.91 MB -> V8 external memory'
}
Why does the heap get used here, but not in the first call without an encoding? I don't even have to store the result in a local variable for the heap to be used. I also understand that after the next event-loop tick in the second example the heap will be cleaned up. But this leads me to my next question in Part 2.
Part 2) This part is the same as part 1 but with streams.
const readStream = fs.createReadStream('path/to/500MB');

let data = '';
readStream.on('data', (buffer) => {
  data += buffer;
});

readStream.on('close', () => {
  console.log(printMemoryData());
});
I get the output:
{
rss: '574.57 MB -> Resident Set Size - total memory allocated for the process execution',
heapTotal: '692.75 MB -> total size of the allocated heap',
heapUsed: '508.72 MB -> actual memory used during the execution',
external: '7.97 MB -> V8 external memory'
}
Why do streams use the heap in Part 2, while the first call without an encoding in Part 1 doesn't?
Both show an increase in RSS, but only with streams does the heap get used when I store the data in a local variable (a V8 object).
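One thing worth noting about the Part 2 snippet: data += buffer coerces each Buffer chunk to a string, so the accumulated result is an ordinary V8 heap string rather than external Buffer memory, which is why the heap grows there. The coercion in isolation (hypothetical chunk, not the 500 MB file):

```javascript
let data = '';
const chunk = Buffer.from('abc'); // the chunk's bytes count as external memory
data += chunk;                    // Buffer is stringified; result is a heap-allocated string
console.log(typeof data);         // 'string'
console.log(data);                // 'abc'
```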
Thanks for any feedback.
use winapi::um::processthreadsapi::{OpenProcess, CreateRemoteThreadEx};
use winapi::um::errhandlingapi::GetLastError;
use winapi::shared::minwindef::DWORD;
use winapi::shared::ntdef::NULL;
use winapi::um::winnt::{
    HANDLE, MEM_COMMIT, MEM_RELEASE, MEM_RESERVE, PAGE_EXECUTE_READWRITE, PROCESS_ALL_ACCESS,
};

pub struct KernelObj {
    handle: HANDLE,
}

unsafe fn get_process(process_id: u32) -> Result<KernelObj, DWORD> {
    println!("inside get_process => {}", process_id);
    let process = OpenProcess(PROCESS_ALL_ACCESS, 0, process_id);
    if process == NULL {
        Err(GetLastError())
    } else {
        Ok(KernelObj { handle: process })
    }
}
I am using the Rust winapi crate on Windows 11 and I am trying to do process injection.
Whenever I try to open explorer.exe, it always fails,
returning error 5 (ERROR_ACCESS_DENIED).
[target.'cfg(windows)'.dependencies]
winapi = { version = "0.3", features = ["winuser","errhandlingapi"] }
<read memory from 0x674 failed (0 of 1 bytes read)>
Edit: I found that the code actually still works, but the debugger shows that error while debugging.
I develop a Node.js application with MongoDB on Windows. To check memory I usually use the Windows Task Manager, but I don't think it's a good option.
How can I check the exact memory usage of MongoDB queries? I can aggregate all the needed data in one query, but maybe two projections would be a better option.
You can use a simple one-liner with Object.entries() and adjust the values to a readable format:
Object.entries(process.memoryUsage()).forEach(item => console.log(`${item[0]}: ${(item[1] / 1024 / 1024).toFixed(4)} MB`))
output:
rss: 70.7695 MB
heapTotal: 85.4063 MB
heapUsed: 55.2614 MB
external: 0.0794 MB
You can use the built-in process.memoryUsage() method. The values are reported in bytes.
Here is a simple example:
for (const [key, value] of Object.entries(process.memoryUsage())) {
  console.log(`${key}: ${value} bytes`);
}
example log:
rss: 634034 bytes
heapTotal: 340239 bytes
heapUsed: 129323 bytes
external: 10232 bytes
This code
const file = require("fs").createWriteStream("./test.dat");
for (var i = 0; i < 1e7; i++) {
  file.write("a");
}
gives this error message after running for about 30 seconds
<--- Last few GCs --->
[47234:0x103001400]    27539 ms: Mark-sweep 1406.1 (1458.4) -> 1406.1 (1458.4) MB, 2641.4 / 0.0 ms  allocation failure GC in old space requested
[47234:0x103001400]    29526 ms: Mark-sweep 1406.1 (1458.4) -> 1406.1 (1438.9) MB, 1986.8 / 0.0 ms  last resort GC in old space requested
[47234:0x103001400]    32154 ms: Mark-sweep 1406.1 (1438.9) -> 1406.1 (1438.9) MB, 2628.3 / 0.0 ms  last resort GC in old space requested
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x30f4a8e25ee1 <JSObject>
1: /* anonymous */ [/Users/matthewschupack/dev/streamTests/1/write.js:~1] [pc=0x270efe213894](this=0x30f4e07ed2f1 <Object map = 0x30f4ede823b9>,exports=0x30f4e07ed2f1 <Object map = 0x30f4ede823b9>,require=0x30f4e07ed2a9 <JSFunction require (sfi = 0x30f493b410f1)>,module=0x30f4e07ed221 <Module map = 0x30f4edec1601>,__filename=0x30f493b47221 <String[49]: /Users/matthewschupack/dev/streamTests/...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
1: node::Abort() [/usr/local/bin/node]
2: node::FatalException(v8::Isolate*, v8::Local<v8::Value>, v8::Local<v8::Message>) [/usr/local/bin/node]
3: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [/usr/local/bin/node]
4: v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationSpace) [/usr/local/bin/node]
5: v8::internal::Runtime_AllocateInTargetSpace(int, v8::internal::Object**, v8::internal::Isolate*) [/usr/local/bin/node]
6: 0x270efe08463d
7: 0x270efe213894
8: 0x270efe174048
[1] 47234 abort node write.js
whereas this code
const file = require("fs").createWriteStream("./test.dat");
for (var i = 0; i < 1e6; i++) {
  file.write("aaaaaaaaaa"); // ten a's
}
runs perfectly, almost instantly, and produces a 10 MB file. As I understood it, the point of streams is that both versions should run in about the same amount of time since the data is identical. Even increasing the number of a's to 100 or 1000 per iteration hardly increases the running time, and it writes a 1 GB file without any issues. Writing a single character per iteration at 1e6 iterations also works fine.
What's going on here?
The out-of-memory error happens because you're not waiting for the drain event to be emitted; without waiting, Node.js will buffer all written chunks until maximum memory usage is reached.
.write will return false if the internal buffer is greater than highWaterMark, which defaults to 16384 bytes (16 KiB). In your code you're not handling the return value of .write, so the buffer is never flushed.
This can be tested very easily using: tail -f test.dat
When executing your script, you will see that nothing is written to test.dat until the script finishes.
For 1e7 writes, the buffer should be drained about 610 times:
1e7 / 16384 ≈ 610
A solution is to check the return value of .write and, if false is returned, use file.once('drain') wrapped in a promise to wait until the drain event is emitted.
NOTE: writable.writableHighWaterMark was added in node v9.3.0
const file = require("fs").createWriteStream("./test.dat");

(async () => {
  for (let i = 0; i < 1e7; i++) {
    if (!file.write('a')) {
      // Will pause every 16384 iterations until `drain` is emitted
      await new Promise(resolve => file.once('drain', resolve));
    }
  }
})();
Now if you do tail -f test.dat you will see how data is written while the script is still running.
As for why you get memory issues with 1e7 and not 1e6, we have to take a look at how Node.js does the buffering, which happens in the writeOrBuffer function.
This sample code will allow us to have a rough estimate of the memory usage:
const count = Number(process.argv[2]) || 1e6;
const state = { bufferedRequestCount: 0 };

function nop() {}

const buffer = (data) => {
  const last = state.lastBufferedRequest;

  state.lastBufferedRequest = {
    chunk: Buffer.from(data),
    encoding: 'buffer',
    isBuf: true,
    callback: nop,
    next: null
  };

  if (last)
    last.next = state.lastBufferedRequest;
  else
    state.bufferedRequest = state.lastBufferedRequest;

  state.bufferedRequestCount += 1;
};

const start = process.memoryUsage().heapUsed;

for (let i = 0; i < count; i++) {
  buffer('a');
}

const used = (process.memoryUsage().heapUsed - start) / 1024 / 1024;
console.log(`${Math.round(used * 100) / 100} MB`);
When executed:
// node memory.js <count>
1e4: 1.98 MB
1e5: 16.75 MB
1e6: 160 MB
5e6: 801.74 MB
8e6: 1282.22 MB
9e6: 1442.22 MB - Out of memory
1e7: 1602.97 MB - Out of memory
So each object uses ~0.16 KB, and when doing 1e7 writes without waiting for the drain event, you have 10 million of those objects in memory (to be fair, it crashes before reaching 10M).
It doesn't matter whether you write a single a or 1000 per chunk; the memory increase from that is negligible.
You can increase the maximum memory used by Node with the --max_old_space_size={MB} flag (of course this is not the solution; it's just for checking the memory consumption without crashing the script):
node --max_old_space_size=4096 memory.js 1e7
UPDATE: I made a mistake in the memory snippet which led to a 30% increase in memory usage. I was creating a new callback for every .write; Node reuses the nop callback.
UPDATE II
If you're always writing the same value (doubtful in a real scenario), you can greatly reduce memory usage and execution time by passing the same buffer every time:
const buf = Buffer.from('a');

for (let i = 0; i < 1e7; i++) {
  if (!file.write(buf)) {
    // Will pause every 16384 iterations until `drain` is emitted
    await new Promise(resolve => file.once('drain', resolve));
  }
}
It is quite surprising to see the counters heapUsed and external showing a reduction while heapTotal still shows a spike.
***Memory Log - Before soak summarization 2
"rss":217214976,"heapTotal":189153280,"heapUsed":163918648,"external":1092977
Spike in rss: 4096
Spike in heapTotal: 0
Spike in heapUsed: 22240
Spike in external: 0
***Memory Log - Before summarizing log summary for type SOAK
"rss":220295168,"heapTotal":192294912,"heapUsed":157634440,"external":318075
Spike in rss: 3080192
Spike in heapTotal: 3141632
Spike in heapUsed: -6284208
Spike in external: -774902
Any ideas why heapTotal is drastically increasing despite heapUsed and external going drastically down? I really thought that heapTotal = heapUsed + external.
I am using the following code to track memory
const fs = require('fs');
const pathUtils = require('path');
const _ = require('lodash');

var prevStats;

function logMemory (path, comment) {
  if (!fs.existsSync(path)) {
    fs.mkdirSync(path, DIR_MODE); // DIR_MODE is defined elsewhere
  }
  path = pathUtils.posix.join(path, "memoryLeak.txt");

  if (comment) comment = comment.replace(/(.+)/, ' - $1');

  var memStats = process.memoryUsage();
  fs.appendFileSync(path, `\r\n\r\n***Memory Log ${comment}\r\n` + JSON.stringify(memStats));

  if (prevStats) {
    _.forEach(
      Object.keys(memStats),
      key => {
        if (memStats.hasOwnProperty(key)) {
          fs.appendFileSync(path, `\r\nSpike in ${key}: ${memStats[key] - prevStats[key]}`);
        }
      }
    );
  }

  prevStats = memStats;
}
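For context on the counters themselves (a general note, not specific to this log): heapTotal is the memory V8 has reserved for its heap, including committed-but-unused space, while external tracks allocations that live outside the V8 heap entirely, so heapTotal = heapUsed + external is not an identity. The only invariant you can rely on is heapTotal >= heapUsed:

```javascript
const m = process.memoryUsage();

// heapTotal is reserved heap space; heapUsed is the part currently occupied
console.log(m.heapTotal >= m.heapUsed); // true

// external is reported separately and is not a component of heapTotal
console.log(typeof m.external); // 'number'
```

That is why heapTotal can grow (V8 reserving more pages) at the same moment heapUsed and external shrink.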