Memory usage of a shared library on an NFS-mounted file system - Linux

I am using an NFS-mounted file system on a Linux-based embedded box. I have a few shared libraries whose sizes vary from 1 MB to 20 MB, and I am running an application that depends on these libraries.
While the application was running, I checked /proc/<TaskPID>/smaps and saw the following entry for one of the libraries:
Size: 4692 kB
Rss: 1880 kB
Pss: 1880 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 1880 kB
Private_Dirty: 0 kB
Referenced: 1880 kB
Anonymous: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
As per my understanding, this means the library is only partially loaded (since Rss is smaller than Size). If so, referencing a portion that is not yet in memory will fault it in on demand (I hope my understanding is correct), which is more costly on an NFS-mounted system. So can we make it load everything before running?
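One possible approach (a minimal sketch of my own, not from the original question) is to fault in and pin every mapped page at startup with mlockall(2), so that no page of the shared libraries is demand-paged over NFS later. This requires CAP_IPC_LOCK or a sufficiently large RLIMIT_MEMLOCK:

/* Sketch: lock all current and future mappings into RAM so that no
 * page of the shared libraries is demand-paged over NFS later on. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* MCL_CURRENT faults in and locks everything already mapped,
     * including shared libraries; MCL_FUTURE covers later mappings. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        perror("mlockall");
        return 1;
    }

    /* ... run the real application logic here ... */
    return 0;
}

Note that mlockall also pins the heap and stack, which can be expensive on a small embedded system; a softer alternative is to read the library files once at startup to warm the page cache, accepting that those pages can still be evicted later.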

Related

How to solve "Java Failed to write core dump ..." for RStudio running on EMR

I'm running a data-preparation script with sparklyr in RStudio (running on EMR). The execution crashes, and I get this error report, which I don't properly understand:
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f99b77fb968, pid=12298, tid=0x00007f99b8167940
#
# JRE version: OpenJDK Runtime Environment (8.0_171-b10) (build 1.8.0_171-b10)
# Java VM: OpenJDK 64-Bit Server VM (25.171-b10 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libR.so+0xbe968]
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
/proc/meminfo:
MemTotal: 32949048 kB
MemFree: 11480880 kB
MemAvailable: 19345760 kB
Buffers: 455776 kB
Cached: 5598268 kB
SwapCached: 0 kB
Active: 14572612 kB
Inactive: 3704688 kB
Active(anon): 12223280 kB
Inactive(anon): 48 kB
Active(file): 2349332 kB
Inactive(file): 3704640 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 748 kB
Writeback: 0 kB
AnonPages: 12223264 kB
Mapped: 198336 kB
...
I want to know how to fix the problem so my code can run to completion.
Thanks in advance!
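As an aside, the crash report's hint about running "ulimit -c unlimited" before starting Java can also be applied from native code. Here is a minimal sketch of my own (not part of the report) using setrlimit(2); an unprivileged process can only raise its soft limit up to the current hard limit:

/* Sketch: enable core dumps programmatically, the equivalent of
 * running "ulimit -c unlimited" in the shell before starting Java. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Raise the soft core-file-size limit to the hard limit;
     * RLIMIT_CORE set to 0 is what disables core dumps. */
    if (getrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_CORE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* ... exec the crash-prone process here so it inherits the limit ... */
    return 0;
}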

Java heap out-of-memory exception, Tomcat on Linux

Please help me: my live application sometimes throws a Java heap out-of-memory exception, even though I set the maximum heap size to 512 MB, half of the virtual server's memory.
I've searched on Google and traced my server as in the attached image.
Can anyone tell me where the error is, please?
The console data is below:
System load: 0.01 Processes: 74
Usage of /: 16.2% of 29.40GB Users logged in: 0
Memory usage: 60%
Swap usage: 0%
developer@pc:/$ free -m
                     total       used       free     shared    buffers     cached
Mem:                   994        754        239          0         24        138
-/+ buffers/cache:                592        401
Swap:                    0          0          0

erl on CentOS: "Failed to create main carrier for ll_alloc"

I have a CentOS VPS on which I installed Erlang with the command:
rpm -Uvh erlang-17.4-1.el6.x86_64.rpm
Now whenever I try to run my rabbitmq-server, or even just issue the erl command, I get this error:
Failed to create main carrier for ll_alloc
Aborted
Is it some memory issue, i.e. Erlang being unable to get enough free memory, or what?
Here are the memory stats of the machine:
sudo cat /proc/meminfo
MemTotal: 4194304 kB
MemFree: 104520 kB
Cached: 2718800 kB
Buffers: 0 kB
Active: 1729508 kB
Inactive: 2170684 kB
Active(anon): 559168 kB
Inactive(anon): 627436 kB
Active(file): 1170340 kB
Inactive(file): 1543248 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 44 kB
Writeback: 0 kB
AnonPages: 1186604 kB
Shmem: 5212 kB
Slab: 189472 kB
SReclaimable: 155768 kB
SUnreclaim: 33704 kB
What should I do?
I figured out that it was indeed a memory issue: when I shut down Tomcat to make a few more MBs of memory available, erl started.
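For context, ll_alloc's "main carrier" is a single large allocation made at VM start-up, and on a box this starved of free memory that one allocation can fail. Below is a rough diagnostic sketch of my own to check whether a large contiguous allocation currently succeeds; the 512 MB size is an arbitrary example, not Erlang's actual carrier size, and touching the pages can invite the OOM killer on a tight box:

/* Sketch: probe whether one large anonymous mapping (similar in
 * spirit to an allocator's main carrier) can be created and used. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t size = 512UL * 1024 * 1024;  /* 512 MB, arbitrary example */
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* With default overcommit the mapping can succeed even when memory
     * is tight, so touch the pages to force real allocation. */
    memset(p, 1, size);
    printf("Allocated and touched %zu MB\n", size >> 20);
    munmap(p, size);
    return 0;
}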

Knowing which real node is up in a Cassandra cluster with virtual nodes enabled

Virtual nodes are a powerful feature in Cassandra that eases the burden of assigning a proper initial token to each node, but I find it painful to read the output of nodetool ring, where each node is described by tons of lines. For example:
node-1 155 Up Normal 228.55 KB 8.31% 7196378057413163154
node-1 155 Up Normal 228.55 KB 8.31% 7215375135797395653
node-1 155 Up Normal 228.55 KB 8.31% 7299851409832649823
node-1 155 Up Normal 228.55 KB 8.31% 7361899028342316034
node-1 155 Up Normal 228.55 KB 8.31% 7470359832465044920
node-1 155 Up Normal 228.55 KB 8.31% 7631123206720404219
node-1 155 Up Normal 228.55 KB 8.31% 7675034684873781539
node-1 155 Up Normal 228.55 KB 8.31% 7871044212864174985
node-1 155 Up Normal 228.55 KB 8.31% 7888407753199222932
node-1 155 Up Normal 228.55 KB 8.31% 7916197345035903777
node-1 155 Up Normal 228.55 KB 8.31% 7940203367286725631
node-1 155 Up Normal 228.55 KB 8.31% 7981190016602200507
node-1 155 Up Normal 228.55 KB 8.31% 8015518064513163806
node-1 155 Up Normal 228.55 KB 8.31% 8018007479871405889
.....
If my goal is simply to know which real nodes are up, and how much data each real node holds, how should I do that?
You should use nodetool status, which outputs just one line per node, e.g.:
$ bin/nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Tokens  Owns    Host ID                               Rack
UN  127.0.0.1  152.64 KB  256     100.0%  22f70e40-4070-483a-9fa6-e272556b7164  rack1

Java segmentation fault at libglib (Red Hat Enterprise Linux Server release 5.5)

Has anyone ever seen the following Java segmentation fault in libglib's g_list_last? The stack shows nothing more than g_list_last, and it says "Current thread is native thread".
The Java 6 VM was running JBoss 6, and there was no custom native code.
The server runs normally for some hours and then breaks, always with exactly the same error. I'm posting the most interesting excerpts from the hs_err file.
Thanks in advance for any clue!
Regards,
Doug
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000003e5022a5e3, pid=14845, tid=1196464448
#
# JRE version: 6.0_23-b05
# Java VM: Java HotSpot(TM) 64-Bit Server VM (19.0-b09 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libglib-2.0.so.0+0x2a5e3] g_list_last+0x13
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
#
--------------- T H R E A D ---------------
Current thread is native thread
siginfo:si_signo=SIGSEGV: si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x0000010068f06abb
Registers:
RAX=0x0000010068f06ab3, RBX=0x000000004d59ee10, RCX=0x000000004e60aeb0, RDX=0x0000000000000000
RSP=0x0000000047508e18, RBP=0x00002aaab9afcca0, RSI=0x00002aaab9afcca0, RDI=0x0000010068f06ab3
R8 =0x0000000000000001, R9 =0x0000000000003a93, R10=0x0000000000000000, R11=0x0000003e5022abb0
R12=0x000000047c6556b8, R13=0x00002aaab8c7a3f0, R14=0x000000004d698e40, R15=0x000000004da3c4b0
RIP=0x0000003e5022a5e3, EFL=0x0000000000010202, CSGSFS=0x0000000000000033, ERR=0x0000000000000004
TRAPNO=0x000000000000000e
...
R11=0x0000003e5022abb0
0x0000003e5022abb0: g_list_append+0 in /lib64/libglib-2.0.so.0 at 0x0000003e50200000
R12=0x000000047c6556b8
[error occurred during error reporting (printing registers, top of stack, instructions near pc), id 0xb]
Stack: [0x00000000474c9000,0x000000004750a000], sp=0x0000000047508e18, free space=255k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libglib-2.0.so.0+0x2a5e3] g_list_last+0x13
--------------- P R O C E S S ---------------
VM state:not at safepoint (normal execution)
VM Mutex/Monitor currently owned by a thread: None
Heap
PSYoungGen total 4767296K, used 4345622K [0x00000006c2800000, 0x0000000800000000, 0x0000000800000000)
eden space 4368704K, 99% used [0x00000006c2800000,0x00000007caaac208,0x00000007cd250000)
from space 398592K, 4% used [0x00000007cd250000,0x00000007ce369990,0x00000007e5790000)
to space 373184K, 0% used [0x00000007e9390000,0x00000007e9390000,0x0000000800000000)
PSOldGen total 10403840K, used 1828930K [0x0000000447800000, 0x00000006c2800000, 0x00000006c2800000)
object space 10403840K, 17% used [0x0000000447800000,0x00000004b7210910,0x00000006c2800000)
PSPermGen total 288448K, used 288427K [0x0000000347800000, 0x00000003591b0000, 0x0000000447800000)
object space 288448K, 99% used [0x0000000347800000,0x00000003591aaf10,0x00000003591b0000)
...
--------------- S Y S T E M ---------------
OS:Red Hat Enterprise Linux Server release 5.5 (Tikanga)
uname:Linux 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64
libc:glibc 2.5 NPTL 2.5
rlimit: STACK 10240k, CORE 0k, NPROC 1056767, NOFILE 16384, AS infinity
load average:1.01 0.58 0.40
/proc/meminfo:
MemTotal: 132086452 kB
MemFree: 12656648 kB
Buffers: 1441372 kB
Cached: 107627992 kB
SwapCached: 0 kB
Active: 77778444 kB
Inactive: 39851400 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 132086452 kB
LowFree: 12656648 kB
SwapTotal: 61440552 kB
SwapFree: 61440552 kB
Dirty: 864 kB
Writeback: 0 kB
AnonPages: 8560164 kB
Mapped: 84312 kB
Slab: 1645472 kB
PageTables: 31956 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 127483776 kB
Committed_AS: 20373196 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 297932 kB
VmallocChunk: 34359436991 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
CPU:total 32 (8 cores per cpu, 2 threads per core) family 6 model 47 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, ht
Memory: 4k page, physical 132086452k(12656648k free), swap 61440552k(61440552k free)
vm_info: Java HotSpot(TM) 64-Bit Server VM (19.0-b09) for linux-amd64 JRE (1.6.0_23-b05), built on Nov 12 2010 14:12:21 by "java_re" with gcc 3.2.2 (SuSE Linux)
