Knowing which physical node is up in a Cassandra cluster with virtual nodes

Virtual nodes are a powerful feature in Cassandra that eases the burden of assigning a proper initial token to each node, but I find the output of nodetool ring painful to read under this setting, since each node is described by tons of lines. For example:
node-1 155 Up Normal 228.55 KB 8.31% 7196378057413163154
node-1 155 Up Normal 228.55 KB 8.31% 7215375135797395653
node-1 155 Up Normal 228.55 KB 8.31% 7299851409832649823
node-1 155 Up Normal 228.55 KB 8.31% 7361899028342316034
node-1 155 Up Normal 228.55 KB 8.31% 7470359832465044920
node-1 155 Up Normal 228.55 KB 8.31% 7631123206720404219
node-1 155 Up Normal 228.55 KB 8.31% 7675034684873781539
node-1 155 Up Normal 228.55 KB 8.31% 7871044212864174985
node-1 155 Up Normal 228.55 KB 8.31% 7888407753199222932
node-1 155 Up Normal 228.55 KB 8.31% 7916197345035903777
node-1 155 Up Normal 228.55 KB 8.31% 7940203367286725631
node-1 155 Up Normal 228.55 KB 8.31% 7981190016602200507
node-1 155 Up Normal 228.55 KB 8.31% 8015518064513163806
node-1 155 Up Normal 228.55 KB 8.31% 8018007479871405889
.....
If my goal is simply to know which physical nodes are up, and how much data each physical node holds, how should I do that?

You should use nodetool status, which outputs just one line per node, e.g.:
$ bin/nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 127.0.0.1 152.64 KB 256 100.0% 22f70e40-4070-483a-9fa6-e272556b7164 rack1
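If you want only the address, status, and load without the rest, you can also post-process that output. A minimal sketch, assuming the status layout shown above (node lines start with a two-letter code such as UN):

$ bin/nodetool status | awk '/^[UD][NLJM]/ {print $1, $2, $3, $4}'
UN 127.0.0.1 152.64 KB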

Related

erl with centos "Failed to create main carrier for ll_alloc"

I have a CentOS VPS. I installed Erlang with the command:
rpm -Uvh erlang-17.4-1.el6.x86_64.rpm
Now whenever I try to run my rabbitmq-server, or just issue the erl command, I get this error:
Failed to create main carrier for ll_alloc Aborted
Is it some memory issue, i.e. Erlang is unable to get free memory, or what?
Here are the memory stats of the machine:
sudo cat /proc/meminfo
MemTotal: 4194304 kB
MemFree: 104520 kB
Cached: 2718800 kB
Buffers: 0 kB
Active: 1729508 kB
Inactive: 2170684 kB
Active(anon): 559168 kB
Inactive(anon): 627436 kB
Active(file): 1170340 kB
Inactive(file): 1543248 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 44 kB
Writeback: 0 kB
AnonPages: 1186604 kB
Shmem: 5212 kB
Slab: 189472 kB
SReclaimable: 155768 kB
SUnreclaim: 33704 kB
What should I do?
I figured out it was a memory issue: when I shut down Tomcat to make a few more MB of memory available, erl started.
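Note that MemFree alone understates what the kernel can actually hand out, since Cached and SReclaimable pages can be reclaimed. A minimal sketch to sum the effectively available memory from the /proc/meminfo fields shown above:

$ awk '/^(MemFree|Cached|SReclaimable):/ {sum += $2} END {printf "available: ~%d MB\n", sum/1024}' /proc/meminfo

If that number is still small, freeing memory (as with Tomcat above) or adding swap is the likely fix.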

Cassandra load is high on one of the nodes

I have an 8-node Cassandra cluster (Cassandra 2.0.8). When I run status using nodetool, I see the following. I am a newbie and am wondering why the load on one of the nodes (my initial seed node) is so high compared to the others.
I also noticed that when I push data into a Cassandra table (column family) using Pig, that one node runs at very high CPU (95%+) while the others do not (20-30%).
Note: Ownership information does not include topology; for complete information, specify a keyspace
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN xxx.xxx.xx.xxx 15.55 MB 256 6.2% ------------------------------------ rack1
UN xxx.xxx.xx.xxx 36.89 MB 256 6.2% ------------------------------------ rack1
UN xxx.xxx.xx.xxx 3.77 GB 256 6.2% ------------------------------------ rack1
UN xxx.xxx.xx.xxx 1.04 GB 256 56.2% ------------------------------------ rack1
UN xxx.xxx.xx.xxx 43.49 MB 256 6.2% ------------------------------------ rack1
UN xxx.xxx.xx.xxx 40.36 MB 256 6.2% ------------------------------------ rack1
UN xxx.xxx.xx.xxx 43.69 MB 256 6.2% ------------------------------------ rack1
UN xxx.xxx.xx.xxx 40.23 MB 256 6.2% ------------------------------------ rack1
Any help is appreciated. Thank you.
You mentioned that you are pushing data through Pig. If so, are you using Cassandra's Hadoop support?
If yes, it is likely that your input splits are causing this.
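One way to confirm the skew independently of the Load column is to compare write activity per node. A minimal sketch, assuming the keyspace name and host list as placeholders and the cfstats output format of the 2.0.x line:

$ for h in node1 node2 node3; do
    echo "== $h =="
    nodetool -h $h cfstats | grep -A 4 'Keyspace: mykeyspace' | grep 'Write Count'
  done

If one node shows a disproportionate write count, the splits (or a hot partition) are steering most mutations to it.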

All inserts go to same node in Cassandra despite Murmur3 partitioner

I am storing the following column family in Cassandra
Mac Address // PKEY
TimeStamp // PKEY
LocationID
ownerName
Signal Strength
The primary key is (Mac Address, TimeStamp).
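In CQL terms the table would look roughly like this; the exact column names are my assumption, but the key structure matches what is described, i.e. the MAC address alone is the partition key and the timestamp is a clustering column:

CREATE TABLE readings (
    macaddress     text,
    ts             timestamp,
    locationid     int,
    ownername      text,
    signalstrength int,
    PRIMARY KEY (macaddress, ts)
);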
The size of each row is approximately 100 bytes. I have a Cassandra cluster of 4 nodes, each with 1 GB / 512 MB RAM and 25 GB of disk. I inserted 20 million rows of the above column family (which translates to 2 GB). Each inserted row has a different MAC address, so no two rows share the same MAC address. From what I understand, partitioning happens on MacAddress, and since all MACs are distinct, the 20 million rows should be distributed evenly (well, almost). I was surprised to find that 99.7% of the data resided on only one of the nodes. The other three nodes held very few rows. My replication factor is set to 1 (since I'm using this setup only for a PoC).
Is there any case in which such a thing could happen? I could not find anything on the web close to this issue; any help would be appreciated.
Datacenter: datacenter1
==========
Replicas: 1
Address Rack Status State Load Owns Token
9158199868423703627
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367184151655397632
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367194907514299830
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367205663373202028
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367216419232104226
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367227175091006423
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367237930949908621
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367248686808810819
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367259442667713017
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367270198526615214
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367280954385517412
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367291710244419610
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367302466103321808
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367313221962224005
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367323977821126203
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367334733680028401
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367345489538930599
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367356245397832796
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367367001256734994
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367377757115637192
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 367388512974539390
.....
.....
.....
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368646948466096527
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368657704324998725
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368668460183900923
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368679216042803121
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368689971901705318
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368700727760607516
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368711483619509714
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368722239478411912
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368732995337314109
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368743751196216307
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368754507055118505
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368765262914020703
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368776018772922900
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368786774631825098
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368797530490727296
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368808286349629494
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368819042208531691
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368829798067433889
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368840553926336087
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368851309785238285
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368862065644140482
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368872821503042680
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368883577361944878
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368894333220847076
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368905089079749273
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368915844938651471
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368926600797553669
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368937356656455867
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368948112515358064
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368958868374260262
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368969624233162460
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368980380092064658
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 368991135950966855
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369001891809869053
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369012647668771251
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369023403527673449
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369034159386575646
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369044915245477844
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369055671104380042
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369066426963282240
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369077182822184437
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369087938681086635
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369098694539988833
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369109450398891031
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369120206257793228
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369130962116695426
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369141717975597624
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369152473834499822
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369163229693402019
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369173985552304217
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369184741411206415
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369195497270108613
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369206253129010810
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369217008987913008
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369227764846815206
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369238520705717404
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369249276564619601
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369260032423521799
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369270788282423997
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369281544141326195
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369292300000228392
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369303055859130590
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369313811718032788
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369324567576934986
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369335323435837183
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369346079294739381
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369356835153641579
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369367591012543777
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369378346871445974
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369389102730348172
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369399858589250370
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369410614448152568
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369421370307054765
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369432126165956963
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369442882024859161
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369453637883761359
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369464393742663556
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369475149601565754
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369485905460467952
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369496661319370150
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369507417178272347
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369518173037174545
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369528928896076743
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369539684754978941
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369550440613881138
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369561196472783336
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369571952331685534
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369582708190587732
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369593464049489929
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369604219908392127
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369614975767294325
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369625731626196523
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369636487485098720
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369647243344000918
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369657999202903116
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369668755061805314
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369679510920707511
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369690266779609709
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369701022638511907
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369711778497414105
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369722534356316302
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369733290215218500
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369744046074120698
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369754801933022896
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369765557791925093
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369776313650827291
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369787069509729489
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369797825368631687
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369808581227533884
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369819337086436082
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369830092945338280
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369840848804240478
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369851604663142675
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369862360522044873
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369873116380947071
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369883872239849269
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369894628098751467
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369905383957653665
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369916139816555863
192.168.121.176 rack1 Up Normal 758.61 KB 0.03% 369926895675458061
192.168.121.129 rack1 Up Normal 9.5 MB 0.18% 3898129039052635843
192.168.121.129 rack1 Up Normal 9.5 MB 0.18% 3898262574450307387
192.168.121.129 rack1 Up Normal 9.5 MB 0.18% -4748915780896388021
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% -4676858186858460085
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% -4604800592820532149
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% -4532742998782604213
....
....
....
....
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 3970053097692892235
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4042110691730820171
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4114168285768748107
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4186225879806676043
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4258283473844603979
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4330341067882531915
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4402398661920459851
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4474456255958387787
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4546513849996315723
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4618571444034243659
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4690629038072171595
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4762686632110099531
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4834744226148027467
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4906801820185955403
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 4978859414223883339
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5050917008261811275
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5122974602299739211
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5195032196337667147
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5267089790375595083
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5339147384413523019
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5411204978451450955
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5483262572489378891
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5555320166527306827
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5627377760565234763
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5699435354603162699
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5771492948641090635
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5843550542679018571
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5915608136716946507
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 5987665730754874443
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6059723324792802379
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6131780918830730315
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6203838512868658251
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6275896106906586187
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6347953700944514123
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6420011294982442059
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6492068889020369995
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6564126483058297931
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6636184077096225867
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6708241671134153803
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6780299265172081739
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6852356859210009675
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6924414453247937611
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 6996472047285865547
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7068529641323793483
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7140587235361721419
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7212644829399649355
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7284702423437577291
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7356760017475505227
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7428817611513433163
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7500875205551361099
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7572932799589289035
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7644990393627216971
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7717047987665144907
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7789105581703072843
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7861163175741000779
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 7933220769778928715
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8005278363816856651
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8077335957854784587
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8149393551892712523
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8221451145930640459
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8293508739968568395
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8365566334006496331
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8437623928044424267
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8509681522082352203
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8581739116120280139
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8653796710158208075
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8725854304196136011
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8797911898234063947
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8869969492271991883
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 8942027086309919819
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 9014084680347847755
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 9086142274385775691
192.168.121.192 rack1 Up Normal 5.86 GB 99.72% 9158199868423703627
How did you "see" that 99% of the data was on one node? The reason I ask is that I had a similar problem a few days ago, though I'm not sure my scenario is the same as yours. I had installed the DataStax Community edition of 1.2 when it first came out, and saw that 99.7% of my data was on one node. I then used their latest AMI for 1.2, ran the data-insertion test again, and did nodetool -h localhost ring. I guess the nodetool script was updated, because this time I saw an even distribution, and instead of listing all 512 token entries it listed the 2 nodes.
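If your nodetool still prints one line per token, you can collapse the ring output yourself. A minimal sketch, assuming the column layout shown above (Address Rack Status State Load Owns Token, with Load occupying two fields):

$ nodetool ring | awk '/Up +Normal/ {load[$1] = $5 " " $6 " " $7} END {for (h in load) print h, load[h]}'

Since every vnode line for a host repeats the same Load and Owns values, keeping the last one per address is enough. On newer versions, nodetool status gives the same one-line-per-host view directly.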

Java segmentation fault at libglib (Red Hat Enterprise Linux Server release 5.5)

Has anyone ever seen the following Java segmentation fault at libglib's g_list_last? The stack shows nothing more than g_list_last, and it says that the "Current thread is native thread".
The Java 6 VM was running JBoss 6 and there was no custom native code.
The server runs normally for some hours and then breaks, always with exactly the same error. I'm posting the most interesting excerpts from the hs_err file.
Thanks in advance for any clue!
Regards,
Doug
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000003e5022a5e3, pid=14845, tid=1196464448
#
# JRE version: 6.0_23-b05
# Java VM: Java HotSpot(TM) 64-Bit Server VM (19.0-b09 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libglib-2.0.so.0+0x2a5e3] g_list_last+0x13
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
#
--------------- T H R E A D ---------------
Current thread is native thread
siginfo:si_signo=SIGSEGV: si_errno=0, si_code=1 (SEGV_MAPERR), si_addr=0x0000010068f06abb
Registers:
RAX=0x0000010068f06ab3, RBX=0x000000004d59ee10, RCX=0x000000004e60aeb0, RDX=0x0000000000000000
RSP=0x0000000047508e18, RBP=0x00002aaab9afcca0, RSI=0x00002aaab9afcca0, RDI=0x0000010068f06ab3
R8 =0x0000000000000001, R9 =0x0000000000003a93, R10=0x0000000000000000, R11=0x0000003e5022abb0
R12=0x000000047c6556b8, R13=0x00002aaab8c7a3f0, R14=0x000000004d698e40, R15=0x000000004da3c4b0
RIP=0x0000003e5022a5e3, EFL=0x0000000000010202, CSGSFS=0x0000000000000033, ERR=0x0000000000000004
TRAPNO=0x000000000000000e
...
R11=0x0000003e5022abb0
0x0000003e5022abb0: g_list_append+0 in /lib64/libglib-2.0.so.0 at 0x0000003e50200000
R12=0x000000047c6556b8
[error occurred during error reporting (printing registers, top of stack, instructions near pc), id 0xb]
Stack: [0x00000000474c9000,0x000000004750a000], sp=0x0000000047508e18, free space=255k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libglib-2.0.so.0+0x2a5e3] g_list_last+0x13
--------------- P R O C E S S ---------------
VM state:not at safepoint (normal execution)
VM Mutex/Monitor currently owned by a thread: None
Heap
PSYoungGen total 4767296K, used 4345622K [0x00000006c2800000, 0x0000000800000000, 0x0000000800000000)
eden space 4368704K, 99% used [0x00000006c2800000,0x00000007caaac208,0x00000007cd250000)
from space 398592K, 4% used [0x00000007cd250000,0x00000007ce369990,0x00000007e5790000)
to space 373184K, 0% used [0x00000007e9390000,0x00000007e9390000,0x0000000800000000)
PSOldGen total 10403840K, used 1828930K [0x0000000447800000, 0x00000006c2800000, 0x00000006c2800000)
object space 10403840K, 17% used [0x0000000447800000,0x00000004b7210910,0x00000006c2800000)
PSPermGen total 288448K, used 288427K [0x0000000347800000, 0x00000003591b0000, 0x0000000447800000)
object space 288448K, 99% used [0x0000000347800000,0x00000003591aaf10,0x00000003591b0000)
...
--------------- S Y S T E M ---------------
OS:Red Hat Enterprise Linux Server release 5.5 (Tikanga)
uname:Linux 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64
libc:glibc 2.5 NPTL 2.5
rlimit: STACK 10240k, CORE 0k, NPROC 1056767, NOFILE 16384, AS infinity
load average:1.01 0.58 0.40
/proc/meminfo:
MemTotal: 132086452 kB
MemFree: 12656648 kB
Buffers: 1441372 kB
Cached: 107627992 kB
SwapCached: 0 kB
Active: 77778444 kB
Inactive: 39851400 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 132086452 kB
LowFree: 12656648 kB
SwapTotal: 61440552 kB
SwapFree: 61440552 kB
Dirty: 864 kB
Writeback: 0 kB
AnonPages: 8560164 kB
Mapped: 84312 kB
Slab: 1645472 kB
PageTables: 31956 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 127483776 kB
Committed_AS: 20373196 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 297932 kB
VmallocChunk: 34359436991 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
CPU:total 32 (8 cores per cpu, 2 threads per core) family 6 model 47 stepping 2, cmov, cx8, fxsr, mmx, sse, sse2, sse3, ssse3, sse4.1, sse4.2, popcnt, ht
Memory: 4k page, physical 132086452k(12656648k free), swap 61440552k(61440552k free)
vm_info: Java HotSpot(TM) 64-Bit Server VM (19.0-b09) for linux-amd64 JRE (1.6.0_23-b05), built on Nov 12 2010 14:12:21 by "java_re" with gcc 3.2.2 (SuSE Linux)
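The HotSpot VM does not normally load glib on its own, so the calls into libglib-2.0 almost certainly come from some native code loaded into the process (a JNI library, a native agent, or GTK-related code pulled in via AWT). A first step is to find out what has it mapped; a minimal sketch, using the pid from the crash header above:

$ pmap -x 14845 | grep glib     # show libglib mappings in the live process
$ lsof -p 14845 | grep -i glib  # same information via open files

The full hs_err file also lists every mapped library in its "Dynamic libraries" section, which usually narrows down which component dragged glib in.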

Memory usage of Shared Library in NFS mounted File system

I am using an NFS-mounted file system on a Linux-based embedded box. I have a few shared libraries whose sizes vary from 1 MB to 20 MB, and I am running an application which depends on these libraries.
While running the application, I checked /proc/<pid>/smaps:
Size: 4692 kB
Rss: 1880 kB
Pss: 1880 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 1880 kB
Private_Dirty: 0 kB
Referenced: 1880 kB
Anonymous: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Now, as per my understanding, this means the library is only partially loaded (since Rss reports a smaller value than Size)? If so, a reference to a not-yet-loaded portion will fault that part into memory on demand (I hope my understanding is correct), which is more costly on an NFS-mounted system. So can we make it load everything before running?
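For context: mmap'd libraries are demand-paged, so only the pages actually touched become resident, which matches the Rss < Size observation above. One simple way to warm things up is to read the library files through the page cache before starting the application, so later faults are served from local RAM instead of going over NFS. A minimal sketch (the path is a placeholder, and the pages can still be evicted under memory pressure unless the process locks them, e.g. with mlockall):

$ for lib in /nfs/libs/*.so*; do cat "$lib" > /dev/null; done

If the third-party vmtouch utility is available on the box, vmtouch -t (touch) or vmtouch -l (lock) does the same thing more explicitly.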
