What diagnostic tools are available for Node.js applications?

There are many tools out there; which diagnostic tools are good for diagnosing memory leak issues in Node.js applications?

IDDE is a powerful tool not only for memory leak detection but also for diagnosing a wide variety of Node.js misbehaviors, including crashes and hangs.
Here is the link for the overview, installation, and what's-new information: https://www.ibm.com/developerworks/java/jdk/tools/idde
I would start with the !nodeoverview command. Note that every command starts with a bang (!) and is submitted with Ctrl+Enter.
!nodeoverview {
Heap and Garbage Collection
Memory allocator, used: 981 MB, available: 482 MB
GC Count: 144
}
This shows the occupancy of the heap.
Then, use !jsmeminfo to figure out the predominant resident objects in the heap.
!jsmeminfo {
Memory allocator, used: 981 MB, available: 482 MB
Total Heap Objects: 21559924
Largest 5 heap objects Type Size (bytes) More information
0x00000000de06d319 FIXED_ARRAY_TYPE 131112 !array 0x00000000de06d319
0x00000000de0ac6d9 FIXED_ARRAY_TYPE 98360 !array 0x00000000de0ac6d9
0x00000000e90e2f09 ASCII_STRING_TYPE 48152 !string 0x00000000e90e2f09
0x00000000e9035099 ASCII_STRING_TYPE 48088 !string 0x00000000e9035099
0x00000000e9004101 ASCII_STRING_TYPE 40936 !string 0x00000000e9004101
Most Frequent 5 object types Frequency
JS_OBJECT_TYPE 15371393
FIXED_ARRAY_TYPE 6175379
ASCII_INTERNALIZED_STRING_TYPE 3476
BYTE_ARRAY_TYPE 1572
JS_FUNCTION_TYPE 1434
}
Review the application based on this information and see whether the objects holding up the memory, as shown, are justified or not.
If you want to 'dissect' the objects further to see the content, use object expansion commands such as !jsobject or !array:
!array 0x00000000de06d319 {
Array type : FIXED_ARRAY_TYPE
Len : 16387
Showing first 100 elements only
0 : 0xd9400000000 (SMI)
1 : 0x3fe00000000 (SMI)
2 : 0x400000000000 (SMI)
3 : 0x9a1103d1 (ASCII_INTERNALIZED_STRING_TYPE : !print 0x000000009A1103D1 )
4 : 0x9a1042a9 (ASCII_INTERNALIZED_STRING_TYPE : !print 0x000000009A1042A9 )
...
}
If you want to 'segregate' the entire heap into sections based on the objects' internal types, use !jsgroupobjects. This is most useful when you have multiple dumps taken at different time intervals and want to compare which objects grew over time.
!jsgroupobjects {
Representative Object Address Object Type Num Objects Constructor Num Properties Properties
!jsobject 0x00000000c8244fd1 JS_OBJECT_TYPE 6133503 Object 0
!jsobject 0x00000000c8004161 JS_OBJECT_TYPE 6133499 Database 0
!jsobject 0x00000000c8004101 JS_OBJECT_TYPE 3066750 MyRecord 0
!jsobject 0x00000000c869b111 JS_OBJECT_TYPE 37302 Object 0
!jsobject 0x00000000de05b959 JS_FUNCTION_TYPE 542 0
!jsobject 0x00000000de04bcc1 JS_FUNCTION_TYPE 267 0
!jsobject 0x00000000de04aa09 JS_FUNCTION_TYPE 251 0
!jsobject 0x00000000de04a911 JS_FUNCTION_TYPE 227 0
!jsobject 0x00000000de0a48c9 JS_ARRAY_TYPE 190 Array 0
!jsobject 0x00000000de04a7e9 JS_FUNCTION_TYPE 102 0
!jsobject 0x00000000de04e379 JS_ARRAY_TYPE 34 Array 0
!jsobject 0x00000000de050db1 JS_OBJECT_TYPE 30 Object 0
!jsobject 0x00000000c2938151 JS_REGEXP_TYPE 18 RegExp 0
!jsobject 0x00000000c2955a11 JS_OBJECT_TYPE 15 NativeModule 0
!jsobject 0x00000000c2944519 JS_OBJECT_TYPE 11 Object 0
!jsobject 0x00003abc617bee71 JS_OBJECT_TYPE 102 CallSite 3 receiver, fun, pos
}
If you want to examine a single object, run !jsobject on the object address.
!jsobject 0x00003abc617bee71 {
Object has fast properties
Number of descriptors : 3
Name Value More Information
receiver 0x0000251abe506c91
fun 0x00003abc617bb241
pos 0x00001dfd00000000 SMI = 0x1dfd
}

There is also the appmetrics module (https://www.npmjs.com/package/appmetrics), but it is more for monitoring and profiling. You can check it out; it is useful.
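For instance, a minimal sketch of watching garbage-collection behavior with appmetrics, assuming the monitor() event-emitter API described on its npm page (event field names may differ between versions):
var appmetrics = require('appmetrics');
var monitoring = appmetrics.monitor();

// 'gc' events fire after each collection; if 'used' keeps climbing
// from one collection to the next, that is a classic leak signal.
monitoring.on('gc', function (gc) {
    console.log('GC (' + gc.type + '): size=' + gc.size +
                ' bytes, used=' + gc.used + ' bytes, pause=' + gc.duration + ' ms');
});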

Related

While running, is it possible to display the currently allocated buffers managed by LeakSanitizer?

I have a very large program (okay, only 13,000 lines of code according to cloc) which leaks. I know because over time, it uses more and more resident memory.
I have the sanitizer option turned on, but on a clean exit, my C++ software properly cleans everything up as expected, so I don't see anything growing in the sanitizer output.
What would be useful in this case is a way to call a function which displays the (large) list of allocated buffers while the code is running. I could then look at a diff of two such outputs and see what was allocated anew. The leaked buffers would be in there...
At this point, though, I just don't see any header with sanitizer functions I could call to see such a list. Does it exist?
The LSan interface is available in sanitizer/lsan_interface.h, but AFAIK it has no API to print allocation info. The best you can get is to compile your code with ASan (which includes LSan as well) and use __asan_print_accumulated_stats to get basic allocation statistics:
$ cat tmp.c
#include <sanitizer/asan_interface.h>
#include <stdlib.h>
int main() {
  malloc(100);                       /* deliberately never freed */
  __asan_print_accumulated_stats();  /* dump ASan's allocation counters */
  return 0;
}
$ gcc -fsanitize=address -g tmp.c && ./a.out
Stats: 0M malloced (0M for red zones) by 2 calls
Stats: 0M realloced by 0 calls
Stats: 0M freed by 0 calls
Stats: 0M really freed by 0 calls
Stats: 0M (0M-0M) mmaped; 5 maps, 0 unmaps
mallocs by size class: 7:1; 11:1;
Stats: malloc large: 0
Stats: StackDepot: 2 ids; 0M allocated
Stats: SizeClassAllocator64: 0M mapped in 256 allocations; remains 256
07 (112): mapped: 64K allocs: 128 frees: 0 inuse: 128 num_freed_chunks 457 avail: 585 rss: 4K releases: 0
11 (176): mapped: 64K allocs: 128 frees: 0 inuse: 128 num_freed_chunks 244 avail: 372 rss: 4K releases: 0
Stats: LargeMmapAllocator: allocated 0 times, remains 0 (0 K) max 0 M; by size logs:
=================================================================
==15060==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 100 byte(s) in 1 object(s) allocated from:
#0 0x7fdf2194fb40 in __interceptor_malloc (/usr/lib/x86_64-linux-gnu/libasan.so.4+0xdeb40)
#1 0x559ca08a7857 in main /home/yugr/tmp.c:5
#2 0x7fdf214a1bf6 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21bf6)
SUMMARY: AddressSanitizer: 100 byte(s) leaked in 1 allocation(s).
Unfortunately there is no way to print exact allocations.
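That said, the "diff two outputs" idea from the question works at a coarse level with this same call: print the accumulated stats before and after a suspect phase and compare the counters by hand. A sketch (aggregate counters only, not a per-buffer list):
#include <sanitizer/asan_interface.h>
#include <stdlib.h>

int main() {
  __asan_print_accumulated_stats();   /* snapshot 1 */
  for (int i = 0; i < 100; i++)
    malloc(64);                       /* suspect phase: leaks 100 buffers */
  __asan_print_accumulated_stats();   /* snapshot 2: malloc count is +100 */
  return 0;
}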

Golang : fatal error: runtime: out of memory

I'm trying to use this package from GitHub for string matching. My dictionary is 4 MB. When creating the trie, I get fatal error: runtime: out of memory. I am using Ubuntu 14.04 with 8 GB of RAM and Go 1.4.2.
The error seems to come from line 99 (now) here: m.trie = make([]node, max)
The program stops at this line.
This is the error:
fatal error: runtime: out of memory
runtime stack:
runtime.SysMap(0xc209cd0000, 0x3b1bc0000, 0x570a00, 0x5783f8)
/usr/local/go/src/runtime/mem_linux.c:149 +0x98
runtime.MHeap_SysAlloc(0x57dae0, 0x3b1bc0000, 0x4296f2)
/usr/local/go/src/runtime/malloc.c:284 +0x124
runtime.MHeap_Alloc(0x57dae0, 0x1d8dda, 0x10100000000, 0x8)
/usr/local/go/src/runtime/mheap.c:240 +0x66
goroutine 1 [running]:
runtime.switchtoM()
/usr/local/go/src/runtime/asm_amd64.s:198 fp=0xc208518a60 sp=0xc208518a58
runtime.mallocgc(0x3b1bb25f0, 0x4d7fc0, 0x0, 0xc20803c0d0)
/usr/local/go/src/runtime/malloc.go:199 +0x9f3 fp=0xc208518b10 sp=0xc208518a60
runtime.newarray(0x4d7fc0, 0x3a164e, 0x1)
/usr/local/go/src/runtime/malloc.go:365 +0xc1 fp=0xc208518b48 sp=0xc208518b10
runtime.makeslice(0x4a52a0, 0x3a164e, 0x3a164e, 0x0, 0x0, 0x0)
/usr/local/go/src/runtime/slice.go:32 +0x15c fp=0xc208518b90 sp=0xc208518b48
github.com/mf/ahocorasick.(*Matcher).buildTrie(0xc2083c7e60, 0xc209860000, 0x26afb, 0x2f555)
/home/go/ahocorasick/ahocorasick.go:104 +0x28b fp=0xc208518d90 sp=0xc208518b90
github.com/mf/ahocorasick.NewStringMatcher(0xc208bd0000, 0x26afb, 0x2d600, 0x8)
/home/go/ahocorasick/ahocorasick.go:222 +0x34b fp=0xc208518ec0 sp=0xc208518d90
main.main()
/home/go/seme/substrings.go:66 +0x257 fp=0xc208518f98 sp=0xc208518ec0
runtime.main()
/usr/local/go/src/runtime/proc.go:63 +0xf3 fp=0xc208518fe0 sp=0xc208518f98
runtime.goexit()
/usr/local/go/src/runtime/asm_amd64.s:2232 +0x1 fp=0xc208518fe8 sp=0xc208518fe0
exit status 2
This is the content of the main function (taken from the same repo: test file)
var dictionary = InitDictionary()
var bytes = []byte(""Partial invoice (€100,000, so roughly 40%) for the consignment C27655 we shipped on 15th August to London from the Make Believe Town depot. INV2345 is for the balance.. Customer contact (Sigourney) says they will pay this on the usual credit terms (30 days).")
var precomputed = ahocorasick.NewStringMatcher(dictionary)// line 66 here
fmt.Println(precomputed.Match(bytes))
Your structure is awfully inefficient in terms of memory; let's look at the internals. But before that, a quick reminder of the space required for some Go types on a 32-bit platform (on 64-bit, int, uintptr, and pointers take 8 bytes each and a slice header takes 24 bytes, roughly doubling the figures below):
bool: 1 byte
int: 4 bytes
uintptr: 4 bytes
[N]type: N*sizeof(type)
[]type: 12 + len(slice)*sizeof(type)
Now, let's have a look at your structure:
type node struct {
root bool // 1 byte
b []byte // 12 + len(slice)*1
output bool // 1 byte
index int // 4 bytes
counter int // 4 bytes
child [256]*node // 256*4 = 1024 bytes
fails [256]*node // 256*4 = 1024 bytes
suffix *node // 4 bytes
fail *node // 4 bytes
}
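You can check these numbers on your own platform with unsafe.Sizeof; a minimal sketch (the node definition is copied from above, and the sizes in the comments assume the usual 32-bit vs. amd64 layouts):
package main

import (
	"fmt"
	"unsafe"
)

type node struct {
	root    bool
	b       []byte
	output  bool
	index   int
	counter int
	child   [256]*node
	fails   [256]*node
	suffix  *node
	fail    *node
}

func main() {
	// Prints 2084 on a 32-bit platform; on amd64, where ints and
	// pointers are 8 bytes and a slice header is 24, it prints 4168.
	fmt.Println(unsafe.Sizeof(node{}))
}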
OK, you should have a guess of what happens here: each node weighs more than 2 KB (over 4 KB on a 64-bit platform); this is huge! Finally, we'll look at the code that you use to initialize your trie:
func (m *Matcher) buildTrie(dictionary [][]byte) {
max := 1
for _, blice := range dictionary {
max += len(blice)
}
m.trie = make([]node, max)
// ...
}
You said your dictionary is 4 MB. If it is 4 MB in total, then at the end of the for loop, max = 4M. If it instead holds 4M different words, then max = 4M * avg(word_length).
We'll take the first scenario, the nicer one. You are initializing a slice of 4M nodes, each of which uses about 2 KB. Yup, that makes a nice 8 GB necessary.
You should review how you build your trie. From the Wikipedia page on the Aho-Corasick algorithm, each node contains one character, so there are at most 256 edges leaving the root, not 4M.
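As a purely hypothetical illustration of that direction (this is not the package's actual implementation), a node that allocates its edges lazily instead of reserving two 256-pointer tables per node could look like this:
package main

import "fmt"

// leanNode stores only the edges that actually exist, so memory grows
// with the number of distinct prefixes, not with dictionary size * 2 KB.
type leanNode struct {
	child  map[byte]*leanNode
	output bool // true if a dictionary word ends at this node
}

func insert(root *leanNode, word []byte) {
	n := root
	for _, c := range word {
		if n.child == nil {
			n.child = make(map[byte]*leanNode)
		}
		next := n.child[c]
		if next == nil {
			next = &leanNode{}
			n.child[c] = next
		}
		n = next
	}
	n.output = true
}

func main() {
	root := &leanNode{}
	for _, w := range []string{"fizz", "buzz", "123"} {
		insert(root, []byte(w))
	}
	fmt.Println(len(root.child)) // 3 distinct first bytes -> 3 edges from the root
}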
Some material to make it right: https://web.archive.org/web/20160315124629/http://www.cs.uku.fi/~kilpelai/BSA05/lectures/slides04.pdf
The node type has a memory size of 2084 bytes.
I wrote a little program to demonstrate the memory usage: https://play.golang.org/p/szm7AirsDB
As you can see, the three strings (11(+1) bytes in size) of dictionary := []string{"fizz", "buzz", "123"} require about 24 KB of memory (12 nodes * 2084 bytes).
If your dictionary is 4 MB long, you would need about 4,194,304 * 2084 bytes ≈ 8.1 GB of memory.
So you should try to decrease the size of your dictionary.
Setting the resource limit to unlimited worked for me: if ulimit -a reports 0, run ulimit -c unlimited.
Maybe set a real size limit to be more secure.

write error: No space left on device in embedded linux

Hi all,
I have an embedded board running Linux, with yaffs2 as the root filesystem.
I run a program on it, but after some time it fails with the error "No space left on device", even though I checked the flash and there is still a lot of free space.
I only write some config files, and they are rarely updated. The program also writes logs to flash, with the log size limited to 2 MB.
I don't know why this happens or how to solve it.
Please help! (English is not my first language, sorry. I hope you understand what I am saying.)
some debug info:
# ./write_test
version 1.0
close file :: No space left on device
return errno 28
# cat /proc/yaffs
YAFFS built:Nov 23 2015 16:57:34
Device 0 "rootfs"
start_block........... 0
end_block............. 511
total_bytes_per_chunk. 2048
use_nand_ecc.......... 1
no_tags_ecc........... 1
is_yaffs2............. 1
inband_tags........... 0
empty_lost_n_found.... 0
disable_lazy_load..... 0
refresh_period........ 500
n_caches.............. 10
n_reserved_blocks..... 5
always_check_erased... 0
data_bytes_per_chunk.. 2048
chunk_grp_bits........ 0
chunk_grp_size........ 1
n_erased_blocks....... 366
blocks_in_checkpt..... 0
n_tnodes.............. 749
n_obj................. 477
n_free_chunks......... 23579
n_page_writes......... 6092
n_page_reads.......... 11524
n_erasures............ 96
n_gc_copies........... 5490
all_gcs............... 1136
passive_gc_count...... 1136
oldest_dirty_gc_count. 95
n_gc_blocks........... 96
bg_gcs................ 96
n_retired_writes...... 0
n_retired_blocks...... 0
n_ecc_fixed........... 0
n_ecc_unfixed......... 0
n_tags_ecc_fixed...... 0
n_tags_ecc_unfixed.... 0
cache_hits............ 0
n_deleted_files....... 0
n_unlinked_files...... 289
refresh_count......... 1
n_bg_deletions........ 0
Device 2 "data"
start_block........... 0
end_block............. 927
total_bytes_per_chunk. 2048
use_nand_ecc.......... 1
no_tags_ecc........... 1
is_yaffs2............. 1
inband_tags........... 0
empty_lost_n_found.... 0
disable_lazy_load..... 0
refresh_period........ 500
n_caches.............. 10
n_reserved_blocks..... 5
always_check_erased... 0
data_bytes_per_chunk.. 2048
chunk_grp_bits........ 0
chunk_grp_size........ 1
n_erased_blocks....... 10
blocks_in_checkpt..... 0
n_tnodes.............. 4211
n_obj................. 24
n_free_chunks......... 658
n_page_writes......... 430
n_page_reads.......... 467
n_erasures............ 7
n_gc_copies........... 421
all_gcs............... 20
passive_gc_count...... 13
oldest_dirty_gc_count. 3
n_gc_blocks........... 6
bg_gcs................ 4
n_retired_writes...... 0
n_retired_blocks...... 0
n_ecc_fixed........... 0
n_ecc_unfixed......... 0
n_tags_ecc_fixed...... 0
n_tags_ecc_unfixed.... 0
cache_hits............ 0
n_deleted_files....... 0
n_unlinked_files...... 2
refresh_count......... 1
n_bg_deletions........ 0
#
The log and config files are stored on "data".
Thanks!
In general this could be your disk space (here, flash). First of all, check your flash space with df -h (or other commands you have; df is present in BusyBox). If your flash space (especially on your program's partition) is OK, this could be an "inode" (directory) space problem; you can see your inode usage with the df -i command. (A good link for this: https://wiki.gentoo.org/wiki/Knowledge_Base:No_space_left_on_device_while_there_is_plenty_of_space_available)
If none of these is the cause, I think you have to take a deeper look at your code, especially where it deals with disk I/O.
It is also worth mentioning that you should keep an eye on memory and heap usage, and free all allocated buffers in your functions.
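If you want to check both conditions (blocks and inodes) from inside the program, statvfs() reports free blocks and free inodes for a mount point. A sketch ("/data" is a placeholder, not from the original post; use your data partition's mount point):
#include <stdio.h>
#include <sys/statvfs.h>

int main(void) {
    struct statvfs vfs;
    /* "/data" is a placeholder mount point. */
    if (statvfs("/data", &vfs) != 0) {
        perror("statvfs");
        return 1;
    }
    /* ENOSPC can be triggered by either of these hitting zero. */
    printf("free blocks: %lu, free inodes: %lu\n",
           (unsigned long)vfs.f_bfree, (unsigned long)vfs.f_ffree);
    return 0;
}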

Valgrind Memory Leak in strdup

I am doing a small project and checking for memory leaks using the tool Valgrind. When I use this tool, I get the information below.
> 584 bytes in 74 blocks are definitely lost in loss record 103 of 104
> ==4628== at 0x402BE68: malloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
> ==4628== by 0x41CF8D0: strdup (strdup.c:43)
> ==4628== by 0x8060B95: main (in mycall)
>
> LEAK SUMMARY:
> ==4628== definitely lost: 584 bytes in 74 blocks
> ==4628== indirectly lost: 0 bytes in 0 blocks
> ==4628== possibly lost: 0 bytes in 0 blocks
> ==4628== still reachable: 21,414 bytes in 383 blocks
> ==4628== suppressed: 0 bytes in 0 blocks
> ==4628==
> ==4628== For counts of detected and suppressed errors, rerun with: -v
> ==4628== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
These are the places where I have used the function strdup, in my lex rules:
{string} {
yylval.string = strdup(yytext + 1);
yylval.string[yyleng - 2] = 0;
return PPSTRING;
}
{numvar} { yylval.string = strdup(yytext);return(PPNUMVAR); }
{sysnumvar} { yylval.string = (char *) strdup(yytext);return(PPSYSNUMVAR); }
I don't know at which point the memory is leaked.
strdup implicitly allocates the memory needed to store a copy of the source string; you need to free the returned string (i.e., yylval.string in your code) manually.
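In a lex/yacc setup that usually means calling free() on yylval.string in whichever parser action consumes the token. A standalone sketch of the pairing (the string literal is just an example):
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *copy = strdup("some yytext contents");  /* allocates a heap copy */
    if (copy == NULL)
        return 1;
    /* ... use copy ... */
    free(copy);  /* without this, Valgrind reports it as "definitely lost" */
    return 0;
}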

Decipher garbage collection output

I was running a sample program using
rahul#g3ck0:~/programs/Remodel$ GOGCTRACE=1 go run main.go
gc1(1): 0+0+0 ms 0 -> 0 MB 422 -> 346 (422-76) objects 0 handoff
gc2(1): 0+0+0 ms 0 -> 0 MB 2791 -> 1664 (2867-1203) objects 0 handoff
gc3(1): 0+0+0 ms 1 -> 0 MB 4576 -> 2632 (5779-3147) objects 0 handoff
gc4(1): 0+0+0 ms 1 -> 0 MB 3380 -> 2771 (6527-3756) objects 0 handoff
gc5(1): 0+0+0 ms 1 -> 0 MB 3511 -> 2915 (7267-4352) objects 0 handoff
gc6(1): 0+0+0 ms 1 -> 0 MB 6573 -> 2792 (10925-8133) objects 0 handoff
gc7(1): 0+0+0 ms 1 -> 0 MB 4859 -> 3059 (12992-9933) objects 0 handoff
gc8(1): 0+0+0 ms 1 -> 0 MB 4554 -> 3358 (14487-11129) objects 0 handoff
gc9(1): 0+0+0 ms 1 -> 0 MB 8633 -> 4116 (19762-15646) objects 0 handoff
gc10(1): 0+0+0 ms 1 -> 0 MB 9415 -> 4769 (25061-20292) objects 0 handoff
gc11(1): 0+0+0 ms 1 -> 0 MB 6636 -> 4685 (26928-22243) objects 0 handoff
gc12(1): 0+0+0 ms 1 -> 0 MB 6741 -> 4802 (28984-24182) objects 0 handoff
gc13(1): 0+0+0 ms 1 -> 0 MB 9654 -> 5097 (33836-28739) objects 0 handoff
gc1(1): 0+0+0 ms 0 -> 0 MB 209 -> 171 (209-38) objects 0 handoff
Help me understand the first part, i.e.:
0+0+0 => Mark + Sweep + Clean times
Does 422 -> 346 mean that there has been memory cleanup from 422 MB to 346 MB?
If yes, then how come the memory has been reduced when there was nothing to be cleaned up?
In Go 1.5, the format of this output has changed considerably. For the full documentation, head over to http://godoc.org/runtime and search for "gctrace:"
gctrace: setting gctrace=1 causes the garbage collector to emit a single line to standard
error at each collection, summarizing the amount of memory collected and the
length of the pause. Setting gctrace=2 emits the same summary but also
repeats each collection. The format of this line is subject to change.
Currently, it is:
gc # ##s #%: #+...+# ms clock, #+...+# ms cpu, #->#-># MB, # MB goal, # P
where the fields are as follows:
gc # the GC number, incremented at each GC
##s time in seconds since program start
#% percentage of time spent in GC since program start
#+...+# wall-clock/CPU times for the phases of the GC
#->#-># MB heap size at GC start, at GC end, and live heap
# MB goal goal heap size
# P number of processors used
The phases are stop-the-world (STW) sweep termination, scan,
synchronize Ps, mark, and STW mark termination. The CPU times
for mark are broken down in to assist time (GC performed in
line with allocation), background GC time, and idle GC time.
If the line ends with "(forced)", this GC was forced by a
runtime.GC() call and all phases are STW.
The output is generated from this line: http://golang.org/src/pkg/runtime/mgc0.c?#L2147
So the different parts are:
0+0+0 ms : mark, sweep and clean duration in ms
1 -> 0 MB : heap before and after in MB
209 -> 171 : objects before and after
(209-38) objects : number of allocs and frees
handoff (and in Go 1.2 steal and yields) are internals of the algorithm.
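For reference, on current Go toolchains the setting lives in GODEBUG rather than GOGCTRACE. A small program to provoke a few trace lines, assuming a reasonably modern Go release:
package main

import "runtime"

// Run with: GODEBUG=gctrace=1 go run main.go
// (very old releases, as in the question, used GOGCTRACE=1 instead).
func main() {
	for i := 0; i < 5; i++ {
		_ = make([]byte, 10<<20) // allocate 10 MB to create garbage
		runtime.GC()             // force a collection; each one prints a trace line
	}
}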
