How to view the v8.log created after running node-profiler - node.js

I used node-profiler to do CPU profiling of my node.js server. It created a log file called v8.log. I also downloaded node-tick-processor, which produced the data below:
Statistical profiling result from v8.log, (298287 ticks, 2 unaccounted, 0 excluded).
[Unknown]:
   ticks  total  nonlib   name
       2   0.0%
[Shared libraries]:
   ticks  total  nonlib   name
  295618  99.1%    0.0%   /lib/x86_64-linux-gnu/libc-2.19.so
    1999   0.7%    0.0%   /usr/local/bin/node
     119   0.0%    0.0%   7fff509b3000-7fff509b5000... /lib/x86_64-linux-gnu/libpthread-2.19.so
      59   0.0%    0.0%   7fff509b3000-7fff509b5000
       5   0.0%    0.0%   /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
I have no idea how to interpret the above log. Any help with this would be much appreciated.

I recently had the same problem. Try installing the "tick" processor instead; I think it is more compatible across node versions: https://www.npmjs.com/package/tick. Also, if you are on a Mac, make sure you pass "--mac" when running it.
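For reference, the whole sequence typically looks something like this (a sketch, not from the original answer: it assumes the package installs a node-tick-processor binary, that app.js stands in for your server entry point, and that the processor is run in the directory containing v8.log; pass --mac only on macOS):
npm install -g tick
node --prof app.js
node-tick-processor --mac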

Related

Change in max-old-space-size between nodejs 16 and nodejs 14?

We are using --max-old-space-size=8192 to run our complete E2E jest 26 tests with npm test.
node --max-old-space-size=8192 node_modules/jest/bin/jest --runInBand --coverage --detectOpenHandles --logHeapUsage --no-cache
We upgraded to node 16.14.2 and suddenly the tests stop at exactly 4 GB with an OOM, under Windows as well as Ubuntu 20.04.4 LTS.
The same behavior occurs with node 17.8.0.
I switched back to node 14.18.1 and see the following performance graph with Process Explorer.
With node 16 I get the OOM at 4 GB right at the beginning of the E2E test.
<--- Last few GCs --->
[14184:00000277700CA440] 1097059 ms: Mark-sweep (reduce) 2799.4 (3161.8) -> 2798.8 (3123.2) MB, 1520.8 / 0.4 ms (average mu = 0.099, current mu = 0.064) last resort GC in old space requested
[14184:00000277700CA440] 1098475 ms: Mark-sweep (reduce) 2798.8 (3123.2) -> 2798.7 (3116.2) MB, 1416.0 / 1.6 ms (average mu = 0.053, current mu = 0.000) last resort GC in old space requested
I switched between the node versions with nvm-windows.
The packages were all installed with npm from node 16. They run perfectly on node 14.
I tried several other space-related V8 options, but none had a positive effect on node 16 and 17.
I didn't want to open an issue on the node GitHub repository yet, as the problem cannot be isolated easily.
Any suggestions?
Update:
My first deeper finding in node 16's V8 is that --huge-max-old-generation-size is now true by default.
This limits the memory to 4 GB.
See also https://github.com/v8/v8/commit/b2f75b008d14fd1e1ef8579c9c4d2bc7d374efd3,
and Heap::MaxOldGenerationSize
and Heap::HeapSizeFromPhysicalMemory.
As far as I understood, max-old-space is limited down to 4 GB there (at least when huge-max-old-generation-size is on).
However, setting --no-huge-max-old-generation-size --max-old-space-size=8192 still has no effect, and I get the OOM at 4 GB again.
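One way to double-check which flags actually reach V8 is to print the heap limit it reports at startup. A minimal sketch (check-limit.js is a hypothetical file name; note that in Update 2 below the reported limit is well above 4 GB even though the process still dies there):
// check-limit.js - print the heap limit V8 actually applied for the current flags
// run e.g.: node --no-huge-max-old-generation-size --max-old-space-size=8192 check-limit.js
const v8 = require('v8');
const limitMb = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log('heap_size_limit: ' + Math.round(limitMb) + ' MB');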
Update 2:
I tracked the V8 heap statistics, and just before the OOM at 4 GB I see the following figures from v8.getHeapSpaceStatistics() and v8.getHeapStatistics():
total_heap_size : 3184 MB
total_heap_size_executable : 127 MB
total_physical_size : 3184 MB
total_available_size : 9162 MB
used_heap_size : 2817 MB
heap_size_limit : 12048 MB
malloced_memory : 2 MB
peak_malloced_memory : 44 MB
does_zap_garbage : 0 MB
number_of_native_contexts : 0 MB
number_of_detached_contexts : 0 MB
read_only_space : size : 0 MB, used: 0 MB, avail: 0 MB, phy: 0 MB
old_space : size : 2425 MB, used: 2111 MB, avail: 268 MB, phy: 2425 MB
code_space : size : 127 MB, used: 110 MB, avail: 8 MB, phy: 127 MB
map_space : size : 44 MB, used: 39 MB, avail: 4 MB, phy: 44 MB
large_object_space : size : 555 MB, used: 541 MB, avail: 0 MB, phy: 555 MB
code_large_object_space : size : 0 MB, used: 0 MB, avail: 0 MB, phy: 0 MB
new_large_object_space : size : 0 MB, used: 0 MB, avail: 15 MB, phy: 0 MB
new_space : size : 32 MB, used: 13 MB, avail: 2 MB, phy: 32 MB
<--- Last few GCs --->
[7940:000001B87F118E70] 546939 ms: Mark-sweep (reduce) 2774.1 (3123.5) -> 2773.6 (3084.7) MB, 498.6 / 0.3 ms (average mu = 0.080, current mu = 0.044) last resort GC in old space requested
[7940:000001B87F118E70] 547453 ms: Mark-sweep (reduce) 2773.6 (3084.7) -> 2773.4 (3077.2) MB, 513.2 / 0.3 ms (average mu = 0.040, current mu = 0.000) last resort GC in old space requested
<--- JS stacktrace --->
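For anyone who wants to capture figures like the above during a test run, here is a minimal sketch of that kind of logging (the interval, formatting, and placement, e.g. in a jest setup file, are illustrative assumptions, not the exact code used here):
// heap-logging.js - periodically dump V8 heap and heap-space statistics
const v8 = require('v8');
const toMb = (bytes) => Math.round(bytes / 1024 / 1024);

setInterval(() => {
  const heap = v8.getHeapStatistics();
  console.log(`used ${toMb(heap.used_heap_size)} MB, heap_size_limit ${toMb(heap.heap_size_limit)} MB`);
  for (const space of v8.getHeapSpaceStatistics()) {
    console.log(`  ${space.space_name}: size ${toMb(space.space_size)} MB, ` +
                `used ${toMb(space.space_used_size)} MB, avail ${toMb(space.space_available_size)} MB`);
  }
}, 10000).unref();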
Update 3:
Upgraded to jest 27.5.1: no difference. node 14 is fine, but node 16/17 get stuck at 4 GB while their heap statistics report a huge amount of available space.
For now the only solution is to use node 16.10.0 for running the jest tests.
The problem is discussed in github.com/facebook/jest/issues/11956, but none of the suggested jest config changes seem to work in general.
Large jest test suites still run into the memory leak (or memory limit).

CircleCI Angular ng build - Allocation Failure (Memory issue)?

We've been running our builds on CircleCI for a while. Recently they sometimes fail with an allocation failure when running ng build.
The specific build command we are using is
ng build --prod --sourcemaps --aot true --build-optimizer --env=stage
This is the output log.
70% building modules 1562/1562 modules 0 active
79% module and chunk tree optimization
80% chunk modules optimization
81% advanced chunk modules optimization
82% module reviving
83% module order optimization
84% module id optimization
85% chunk reviving
86% chunk order optimization
87% chunk id optimization
88% hashing
89% module assets processing
90% chunk assets processing
91% additional chunk assets processing
92% recording
91% additional asset processing
92% chunk asset optimization
<--- Last few GCs --->
121548 ms: Scavenge 1327.9 (1434.3) -> 1327.8 (1434.3) MB, 21.8 / 0 ms (+ 1.6 ms in 9 steps since last GC) [allocation failure].
121572 ms: Scavenge 1327.9 (1434.3) -> 1327.9 (1434.3) MB, 22.7 / 0 ms (+ 0.3 ms in 1 steps since last GC) [allocation failure].
121595 ms: Scavenge 1327.9 (1434.3) -> 1327.9 (1434.3) MB, 22.9 / 0 ms [allocation failure].
121617 ms: Scavenge 1327.9 (1434.3) -> 1327.9 (1434.3) MB, 22.0 / 0 ms [allocation failure].
<--- JS stacktrace --->
Cannot get stack trace in GC.
FATAL ERROR: Scavenger: semi-space copy
Allocation failed - process out of memory
Aborted (core dumped)
Exited with code 134
When run locally, with top filtered to the PID of node, it hits about 1.4 GB of memory usage; without sourcemaps it hits about 800 MB.
CircleCI allows 4 GB of memory (from what I can find), so I don't understand why I am getting this error (randomly).
Any ideas are much appreciated.
There are numerous open/closed/duplicate issues on GitHub about this, so I'm just posting the important information from those issues. Maybe one or more of these suggestions will work (I personally haven't encountered the bug yet!):
Disable sourcemaps if you don't need them
Downgrade angular-cli and check if it solves your issue
Install and use the increase-memory-limit package in your app
Increase max_old_space_size as specified here (see the example after this list)
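For example, the last suggestion could look roughly like this (an illustrative invocation, not taken from the linked issue; the node_modules/@angular/cli/bin/ng path and the 4096 value are assumptions to adjust for your project and CI memory limit):
node --max-old-space-size=4096 node_modules/@angular/cli/bin/ng build --prod --sourcemaps --aot true --build-optimizer --env=stage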
I hope it helps!
References:
https://github.com/angular/angular-cli/issues/10897
https://github.com/angular/angular-cli/issues/5618

node not allowing custom heap limit (--max-old-space-size)

I'm on Ubuntu 15.10 and am running node v6.2.1.
My machine has 15GB of RAM:
>> sudo lshw -class memory
*-memory
description: System memory
physical id: 4
size: 15GiB
But, when I try to start node with an increased heap limit:
node --max-old-space-size=2048
...it immediately runs out of memory:
<--- Last few GCs --->
25 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.7 / 0 ms [allocation failure] [GC in old space requested].
26 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.8 / 0 ms [allocation failure] [GC in old space requested].
27 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.9 / 0 ms [allocation failure] [GC in old space requested].
28 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.7 / 0 ms [last resort gc].
29 ms: Mark-sweep 1.9 (19.5) -> 1.9 (19.5) MB, 0.8 / 0 ms [last resort gc].
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x3d67857d <JS Object>
2: replace [native string.js:134] [pc=0x4bb3f0c7] (this=0xb523c015 <Very long string[2051]>,N=0xb523d05d <JS RegExp>,O=0xb520b269 <String[2]: \">)
3: setupConfig [internal/process.js:112] [pc=0x4bb3d146] (this=0xb523727d <an Object with map 0x2ea0bc25>,_source=0x454086c1 <an Object with map 0x2ea0deb1>)
4: startup(aka startup) [node.js:51] [pc=0x4bb3713e] (this=0x3d6080c9 <undefined>)
...
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
Aborted (core dumped)
Any advice on how I can start a node process with a higher heap limit?
As far as I know, there was an issue with (old-space) memory on 6.2.1. Update to 6.4 and see what happens. I had a similar issue while using gulp watchers. At first sight, 4 GB was not enough, so I tried pushing it up to 11.5 (which seemed to be the maximum). In the end, the problem was with gulp.run(), which is deprecated now.
What I want to say is that the bug doesn't always reside in the memory settings :)
That doesn't quite answer your question, but perhaps it's of help.

Can't execute Sphinx search: command not found

I have an openSUSE 13.2 operating system, and I installed the Sphinx search engine. Everything was fine until I tried to execute the search command in a terminal:
search: command not found
Before that I configured Sphinx, ran the indexer, and ran searchd, all without problems.
Here is the searchd output:
vitalik-opensuse:/home/vitalik # searchd
Sphinx 2.2.8-id64-release (rel22-r4942)
Copyright (c) 2001-2015, Andrew Aksyonoff
Copyright (c) 2008-2015, Sphinx Technologies Inc(http://sphinxsearch.com)
using config file '/etc/sphinx/sphinx.conf'...
listening on all interfaces, port=9312
listening on all interfaces, port=9306
precaching index 'deal'
precaching index 'deal-have'
precaching index 'deal-want'
precached 3 indexes in 0.086 sec
Here is the indexer output:
vitalik-opensuse:/home/vitalik # indexer --all
Sphinx 2.2.8-id64-release (rel22-r4942)
Copyright (c) 2001-2015, Andrew Aksyonoff
Copyright (c) 2008-2015, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/etc/sphinx/sphinx.conf'...
indexing index 'deal'...
collected 7 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 7 docs, 1355 bytes
total 0.021 sec, 62708 bytes/sec, 323.95 docs/sec
indexing index 'deal-have'...
collected 7 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 7 docs, 1355 bytes
total 0.007 sec, 193240 bytes/sec, 998.28 docs/sec
indexing index 'deal-want'...
collected 7 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 7 docs, 1355 bytes
total 0.006 sec, 207758 bytes/sec, 1073.29 docs/sec
skipping non-plain index 'dist1'...
total 12 reads, 0.000 sec, 0.8 kb/call avg, 0.0 msec/call avg
total 36 writes, 0.000 sec, 0.4 kb/call avg, 0.0 msec/call avg
But I still can't execute the search command to try searching from the terminal.
Sphinx version is 2.2.8, installed from
opensuse-13.2-server-search-repository
What am I doing wrong?
Thank you.
The search tool no longer exists. It was removed a few versions ago.
It had long been broken, and was not a realistic search experience.
Use test.php etc. from the api folder to test searchd.
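If the api folder from the Sphinx distribution is available, a quick smoke test could look roughly like this (the -i index option and the query term are assumptions based on the indexes shown above; adjust the path to wherever the api folder is installed):
php api/test.php -i deal somekeyword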

Oracle Coherence segmentation fault

I just installed Oracle Coherence 3.6 on RHEL 5.5. When I execute cache-server.sh, I get a lot of GC warnings about allocating large blocks, and then it fails with a segmentation fault. Suggestions? Here is the output:
GC Warning: Repeated allocation of very large block (appr. size 1024000):
May lead to memory leak and poor performance.
GC Warning: Repeated allocation of very large block (appr. size 1024000):
May lead to memory leak and poor performance.
./bin/cache-server.sh: line 24: 6142 Segmentation fault $JAVAEXEC -server -showversion $JAVA_OPTS -cp "$COHERENCE_HOME/lib/coherence.jar" com.tangosol.net.DefaultCacheServer $1
[root@localhost coherence_3.6]# swapon -s
Filename                          Type       Size     Used  Priority
/dev/mapper/VolGroup00-LogVol01   partition  2097144  0     -1
[root@localhost coherence_3.6]# free
             total       used       free     shared    buffers     cached
Mem:       3631880     662792    2969088          0     142636     353244
-/+ buffers/cache:      166912    3464968
Swap:      2097144          0    2097144
[root@localhost coherence_3.6]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      147G  6.7G  133G   5% /
/dev/sda1              99M   12M   82M  13% /boot
tmpfs                 1.8G     0  1.8G   0% /dev/shm
/dev/hdb              2.8G  2.8G     0 100% /media/RHEL_5.5 Source
/dev/hda               57M   57M     0 100% /media/VBOXADDITIONS_4.2.16_86992
[root@localhost coherence_3.6]#
I haven't seen this issue before, but to start, I'd suggest the following:
Check for Linux updates. JVMs now try to use large pages, for example, and there have been bugs in RHEL related to large pages that are fixed in the latest versions.
Download the latest Java 7 JDK. While no JDK is entirely bug-free, we have done extensive testing with JDK 7 patch levels 15, 21 and 40.
Download the latest version of Coherence. Coherence 12.1.2 is now out, but if you don't want to go for the very latest, then Coherence 3.7.1 is the suggested version. (The release after 3.7.1 is called 12.1.2. That is to align with Oracle versioning.)
I would check your space allocation on disk and memory/swap. You are probably running out of space somewhere.
df -h
free
You could also check your Java version - make sure that you are well patched.
Are you using Java 6 or Java 7?
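For that check, something as simple as the following shows which JVM and patch level cache-server.sh will pick up (assuming the script resolves java from JAVA_HOME or PATH as usual):
java -version
echo $JAVA_HOME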
There are Oracle forums for Coherence; you should try asking the question there - that's where the real experts hang out.
