Cassandra nodes going down when trying to query

Cassandra nodes go down and the query fails with a consistency error.
INFO [Service Thread] 2017-07-10 02:49:18,159 GCInspector.java:258 - ConcurrentMarkSweep GC in 6330ms. CMS Old Gen: 2908389776 -> 2987845256; Par Eden Space: 671088640 -> 0;
INFO [Service Thread] 2017-07-10 02:49:27,138 GCInspector.java:258 - ConcurrentMarkSweep GC in 8897ms. CMS Old Gen: 2987845256 -> 3324514112; Par Eden Space: 671088640 -> 0;
INFO [Service Thread] 2017-07-10 02:49:34,948 GCInspector.java:258 - ConcurrentMarkSweep GC in 7667ms. CMS Old Gen: 3324514112 -> 3342860256; Par Eden Space: 671088640 -> 277520992;
INFO [Service Thread] 2017-07-10 02:49:45,485 GCInspector.java:258 - ConcurrentMarkSweep GC in 9951ms. CMS Old Gen: 3342860256 -> 3342860216; Par Eden Space: 671088640 -> 671088632; Par Survivor Space: 83886072 -> 21614264
INFO [Service Thread] 2017-07-10 02:49:54,541 GCInspector.java:258 - ConcurrentMarkSweep GC in 8684ms. CMS Old Gen: 3342860264 -> 3342860232; Par Eden Space: 671088632 -> 671088616; Par Survivor Space: 83886064 -> 72300944
Garbage Collection seems to be taking a long time.
What could be causing it? How can I fix the problem?

Related

Cassandra : high GC activity while the cluster seems to do nothing

I have shut down every web service that uses Cassandra.
I have shut down every ETL job that uses Cassandra.
The last domain-level table compaction was yesterday (2021-11-18T15:47:00.822). Since then, only compactions on system tables have occurred:
Compaction History:
id keyspace_name columnfamily_name compacted_at bytes_in bytes_out rows_merged
c0f4b1e0-4917-11ec-bf5a-0d5dfeeee6e2 system sstable_activity 2021-11-19T10:04:51.198 78314 19505 {1:12, 4:601}
5cd3e350-490f-11ec-bf5a-0d5dfeeee6e2 system size_estimates 2021-11-19T09:04:47.237 115889 26314 {4:6}
9ba752d0-48fe-11ec-bf5a-0d5dfeeee6e2 system sstable_activity 2021-11-19T07:04:51.197 77987 19558 {1:12, 4:601}
3786d260-48f6-11ec-bf5a-0d5dfeeee6e2 system size_estimates 2021-11-19T06:04:47.238 115994 26169 {4:6}
765a41e0-48e5-11ec-bf5a-0d5dfeeee6e2 system sstable_activity 2021-11-19T04:04:51.198 77853 19531 {1:8, 4:601}
12399a60-48dd-11ec-bf5a-0d5dfeeee6e2 system size_estimates 2021-11-19T03:04:47.238 115978 26290 {4:6}
510cbbc0-48cc-11ec-bf5a-0d5dfeeee6e2 system sstable_activity 2021-11-19T01:04:51.196 78419 19595 {1:12, 4:601}
ecec1440-48c3-11ec-bf5a-0d5dfeeee6e2 system size_estimates 2021-11-19T00:04:47.236 115838 26175 {4:6}
2bbf83c0-48b3-11ec-bf5a-0d5dfeeee6e2 system sstable_activity 2021-11-18T22:04:51.196 77380 19566 {1:12, 4:601}
c79edc40-48aa-11ec-bf5a-0d5dfeeee6e2 system size_estimates 2021-11-18T21:04:47.236 116007 26208 {4:6}
06735d30-489a-11ec-bf5a-0d5dfeeee6e2 system sstable_activity 2021-11-18T19:04:51.203 76300 19101 {1:9, 2:3, 3:2, 4:599}
a2517d30-4891-11ec-bf5a-0d5dfeeee6e2 system size_estimates 2021-11-18T18:04:47.235 115858 26258 {4:6}
e3e5a870-4882-11ec-bf5a-0d5dfeeee6e2 system_distributed repair_history 2021-11-18T16:19:14.807 5220983 5232639 {1:49, 2:1, 3:2}
e10c5ba0-4880-11ec-bf5a-0d5dfeeee6e2 system sstable_activity 2021-11-18T16:04:51.034 75302 19166 {1:46, 2:33, 3:50, 4:549}
But the Cassandra cluster still shows high garbage collector activity:
WARN [Service Thread] 2021-11-18 19:14:17,736 GCInspector.java:283 - ParNew GC in 1073ms. CMS Old Gen: 13461870544 -> 13461916520; Par Eden Space: 1716774456 -> 0; Par Survivor Space: 13116112 -> 57443048
WARN [Service Thread] 2021-11-18 19:14:19,116 GCInspector.java:283 - ParNew GC in 1070ms. CMS Old Gen: 13461916520 -> 13461979400; Par Eden Space: 1714728464 -> 0; Par Survivor Space: 57443048 -> 37282896
WARN [Service Thread] 2021-11-18 19:14:20,466 GCInspector.java:283 - ParNew GC in 1070ms. CMS Old Gen: 13461979400 -> 13462018112; Par Eden Space: 1718091776 -> 0; Par Survivor Space: 37282896 -> 17129408
WARN [Service Thread] 2021-11-18 19:14:21,816 GCInspector.java:283 - ParNew GC in 1070ms. CMS Old Gen: 13462018112 -> 13462045144; Par Eden Space: 1718091776 -> 0; Par Survivor Space: 17129408 -> 39569800
WARN [Service Thread] 2021-11-18 19:14:23,164 GCInspector.java:283 - ParNew GC in 1071ms. CMS Old Gen: 13462045144 -> 13462076376; Par Eden Space: 1717080600 -> 0; Par Survivor Space: 39569800 -> 26910864
WARN [Service Thread] 2021-11-18 19:14:24,524 GCInspector.java:283 - ParNew GC in 1071ms. CMS Old Gen: 13462076376 -> 13462113800; Par Eden Space: 1718091776 -> 0; Par Survivor Space: 26910864 -> 36179936
WARN [Service Thread] 2021-11-18 19:14:25,869 GCInspector.java:283 - ParNew GC in 1069ms. CMS Old Gen: 13462113800 -> 13462137272; Par Eden Space: 1717733528 -> 0; Par Survivor Space: 36179936 -> 30547296
WARN [Service Thread] 2021-11-18 19:14:27,230 GCInspector.java:283 - ParNew GC in 1069ms. CMS Old Gen: 13462137272 -> 13462163256; Par Eden Space: 1718091776 -> 0; Par Survivor Space: 30547296 -> 33604888
WARN [Service Thread] 2021-11-18 19:14:28,574 GCInspector.java:283 - ParNew GC in 1073ms. CMS Old Gen: 13462163256 -> 13462187040; Par Eden Space: 1715261960 -> 0; Par Survivor Space: 33604888 -> 28871272
WARN [Service Thread] 2021-11-18 19:14:29,946 GCInspector.java:283 - ParNew GC in 1069ms. CMS Old Gen: 13462187040 -> 13462216656; Par Eden Space: 1718091776 -> 0; Par Survivor Space: 28871272 -> 37053656
WARN [Service Thread] 2021-11-18 19:14:31,328 GCInspector.java:283 - ParNew GC in 1070ms. CMS Old Gen: 13462216656 -> 13462237976; Par Eden Space: 1718091776 -> 0; Par Survivor Space: 37053656 -> 23342920
WARN [Service Thread] 2021-11-18 19:14:32,743 GCInspector.java:283 - ParNew GC in 1071ms. CMS Old Gen: 13462237976 -> 13462278432; Par Eden Space: 1718091776 -> 0; Par Survivor Space: 23342920 -> 21896200
WARN [Service Thread] 2021-11-18 19:14:34,206 GCInspector.java:283 - ParNew GC in 1071ms. CMS Old Gen: 13462278432 -> 13462343008; Par Eden Space: 1718091776 -> 0; Par Survivor Space: 21896200 -> 20168000
WARN [Service Thread] 2021-11-18 19:14:35,696 GCInspector.java:283 - ParNew GC in 1070ms. CMS Old Gen: 13462343008 -> 13462438104; Par Eden Space: 1717981344 -> 0; Par Survivor Space: 20168000 -> 29781856
WARN [Service Thread] 2021-11-18 19:14:37,115 GCInspector.java:283 - ParNew GC in 1072ms. CMS Old Gen: 13462438104 -> 13462532752; Par Eden Space: 1717180224 -> 0; Par Survivor Space: 29781856 -> 15873392
...
WARN [Service Thread] 2021-11-19 10:34:10,753 GCInspector.java:283 - ParNew GC in 1081ms. CMS Old Gen: 21366236160 -> 22047866248; Par Eden Space: 1692018856 -> 0;
WARN [Service Thread] 2021-11-19 10:34:11,961 GCInspector.java:283 - ParNew GC in 1080ms. CMS Old Gen: 22047866248 -> 22711292400; Par Eden Space: 1718091776 -> 0;
WARN [Service Thread] 2021-11-19 10:34:13,190 GCInspector.java:283 - ParNew GC in 1082ms. CMS Old Gen: 22711292400 -> 23322328920; Par Eden Space: 1718091776 -> 0;
WARN [Service Thread] 2021-11-19 10:34:14,414 GCInspector.java:283 - ParNew GC in 1076ms. CMS Old Gen: 23322328920 -> 23938244632; Par Eden Space: 1710429576 -> 0;
WARN [Service Thread] 2021-11-19 10:34:15,628 GCInspector.java:283 - ParNew GC in 1083ms. CMS Old Gen: 23938244632 -> 24531937352; Par Eden Space: 1718091776 -> 0;
WARN [Service Thread] 2021-11-19 10:34:17,014 GCInspector.java:283 - ParNew GC in 1079ms. CMS Old Gen: 24531937352 -> 25077213400; Par Eden Space: 1718091776 -> 0;
WARN [Service Thread] 2021-11-19 10:34:18,219 GCInspector.java:283 - ParNew GC in 1082ms. CMS Old Gen: 25077213400 -> 25634088464; Par Eden Space: 1689565160 -> 0;
WARN [Service Thread] 2021-11-19 10:34:19,423 GCInspector.java:283 - ParNew GC in 1085ms. CMS Old Gen: 25634088464 -> 26549529728; Par Eden Space: 1714413672 -> 0;
WARN [Service Thread] 2021-11-19 10:34:20,656 GCInspector.java:283 - ParNew GC in 1088ms. CMS Old Gen: 26549529728 -> 27291610392; Par Eden Space: 1707391776 -> 0;
WARN [Service Thread] 2021-11-19 10:34:21,951 GCInspector.java:283 - ParNew GC in 1080ms. CMS Old Gen: 27290538440 -> 27875777144; Par Eden Space: 1718054488 -> 0;
WARN [Service Thread] 2021-11-19 10:34:23,171 GCInspector.java:283 - ParNew GC in 1082ms. CMS Old Gen: 27788203256 -> 28539500224; Par Eden Space: 1717476200 -> 0;
WARN [Service Thread] 2021-11-19 10:34:24,404 GCInspector.java:283 - ParNew GC in 1084ms. CMS Old Gen: 28313984168 -> 28943208880; Par Eden Space: 1690698568 -> 0;
WARN [Service Thread] 2021-11-19 10:34:25,674 GCInspector.java:283 - ParNew GC in 1079ms. CMS Old Gen: 28649641192 -> 29197701416; Par Eden Space: 1667998792 -> 0;
WARN [Service Thread] 2021-11-19 10:34:26,911 GCInspector.java:283 - ParNew GC in 1075ms. CMS Old Gen: 28973128960 -> 29454364992; Par Eden Space: 1718091776 -> 0;
WARN [Service Thread] 2021-11-19 10:34:28,137 GCInspector.java:283 - ParNew GC in 1079ms. CMS Old Gen: 29252627776 -> 29846619728; Par Eden Space: 1718091776 -> 0;
WARN [Service Thread] 2021-11-19 10:34:29,345 GCInspector.java:283 - ParNew GC in 1083ms. CMS Old Gen: 28703301152 -> 29313662360; Par Eden Space: 1684884992 -> 0;
How is this possible?
Thank you
A contractor of ours had set the GC to CMS, even though the memory allocated to the heap was > 32 GB. That is why we saw such messages and long GC pauses.
Switching the GC to G1GC solved the issue.
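For reference, a minimal sketch of what that change looks like in Cassandra's conf/jvm.options (the flag names are standard HotSpot options; the exact file layout and CMS flag list vary by Cassandra version):

```shell
# conf/jvm.options -- comment out the CMS settings ...
#-XX:+UseParNewGC
#-XX:+UseConcMarkSweepGC
#-XX:+CMSParallelRemarkEnabled

# ... and enable G1 instead (pause target is an assumption, tune to taste)
-XX:+UseG1GC
-XX:MaxGCPauseMillis=500
```

G1 is generally the recommended collector for heaps above ~32 GB, where CMS pause times degrade badly.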

Java eden space is not 8 times larger than s0 space

According to Oracle's docs, the default value of SurvivorRatio is 8, which means each survivor space should be one-eighth the size of the eden space.
But in my application it doesn't work that way:
$ jmap -heap 48865
Attaching to process ID 48865, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.45-b02
using thread-local object allocation.
Parallel GC with 8 thread(s)
Heap Configuration:
MinHeapFreeRatio = 0
MaxHeapFreeRatio = 100
MaxHeapSize = 4294967296 (4096.0MB)
NewSize = 89128960 (85.0MB)
MaxNewSize = 1431306240 (1365.0MB)
OldSize = 179306496 (171.0MB)
NewRatio = 2
SurvivorRatio = 8
MetaspaceSize = 21807104 (20.796875MB)
CompressedClassSpaceSize = 1073741824 (1024.0MB)
MaxMetaspaceSize = 17592186044415 MB
G1HeapRegionSize = 0 (0.0MB)
Heap Usage:
PS Young Generation
Eden Space:
capacity = 67108864 (64.0MB)
used = 64519920 (61.53099060058594MB)
free = 2588944 (2.4690093994140625MB)
96.14217281341553% used
From Space:
capacity = 11010048 (10.5MB)
used = 0 (0.0MB)
free = 11010048 (10.5MB)
0.0% used
To Space:
capacity = 11010048 (10.5MB)
used = 0 (0.0MB)
free = 11010048 (10.5MB)
0.0% used
PS Old Generation
capacity = 179306496 (171.0MB)
used = 0 (0.0MB)
free = 179306496 (171.0MB)
0.0% used
7552 interned Strings occupying 605288 bytes.
But in VisualVM the eden space is 1.332 GB and S0 is 455 MB, so eden is only about 3 times larger than S0, not 8 times.
You have neither disabled the adaptive size policy (-XX:-UseAdaptiveSizePolicy) nor set -Xms equal to -Xmx, so the JVM is free to resize the heap generations (and survivor spaces) at runtime. In this case the estimated maximum survivor size is
MaxSurvivor = NewGen / MinSurvivorRatio
where -XX:MinSurvivorRatio=3 by default. Note: this is an estimated maximum, not the actual size. It matches your numbers: MaxNewSize is 1365 MB, and 1365 / 3 = 455 MB, exactly the S0 capacity VisualVM reports.
See also this answer.
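If you want the configured 8:1 ratio to actually hold, a sketch of the flags that pin it (heap sizes here are illustrative assumptions, and app.jar is a placeholder):

```shell
# Pin the heap (-Xms == -Xmx) and turn off adaptive resizing so the
# JVM cannot grow the survivor spaces beyond Eden/8 at runtime.
java -Xms4g -Xmx4g \
     -XX:-UseAdaptiveSizePolicy \
     -XX:SurvivorRatio=8 \
     -jar app.jar
```

Be aware that disabling the adaptive size policy trades away the Parallel GC's self-tuning, so only do this when you genuinely need fixed generation sizes.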

I am suffering a Java G1 issue

Has anyone encountered this kind of issue with the Java G1 GC?
In the first highlighted line the user time is about 4 s,
but in the second one the user time is 0 s and the system time is about 4 s.
In G1 GC the system time shouldn't be this high; is this a bug in G1 GC?
Below are my GC arguments:
-Xms200g -Xmx200g -Xmn30g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSCompactAtFullCollection -XX:CMSMaxAbortablePrecleanTime=5000 -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -verbose:gc -XX:+PrintPromotionFailure -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC
2018-01-07T04:54:39.995+0800: 906650.864: [GC (Allocation Failure) 2018-01-07T04:54:39.996+0800: 906650.865: [ParNew
Desired survivor size 1610612736 bytes, new threshold 6 (max 6)
- age 1: 69747632 bytes, 69747632 total
- age 2: 9641544 bytes, 79389176 total
- age 3: 10522192 bytes, 89911368 total
- age 4: 11732392 bytes, 101643760 total
- age 5: 9158960 bytes, 110802720 total
- age 6: 10917528 bytes, 121720248 total
: 25341731K->170431K(28311552K), 0.2088528 secs] 153045380K->127882325K(206569472K), 0.2094236 secs] [Times: **user=4.53 sys=0.00, real=0.21 secs]**
Heap after GC invocations=32432 (full 10):
par new generation total 28311552K, used 170431K [0x00007f6058000000, 0x00007f67d8000000, 0x00007f67d8000000)
eden space 25165824K, 0% used [0x00007f6058000000, 0x00007f6058000000, 0x00007f6658000000)
from space 3145728K, 5% used [0x00007f6658000000, 0x00007f666266ffe0, 0x00007f6718000000)
to space 3145728K, 0% used [0x00007f6718000000, 0x00007f6718000000, 0x00007f67d8000000)
concurrent mark-sweep generation total 178257920K, used 127711893K [0x00007f67d8000000, 0x00007f9258000000, 0x00007f9258000000)
Metaspace used 54995K, capacity 55688K, committed 56028K, reserved 57344K
}
2018-01-07T04:54:40.205+0800: 906651.074: Total time for which application threads were stopped: 0.2269738 seconds, Stopping threads took: 0.0001692 seconds
{Heap before GC invocations=32432 (full 10):
par new generation total 28311552K, used 25336255K [0x00007f6058000000, 0x00007f67d8000000, 0x00007f67d8000000)
eden space 25165824K, 100% used [0x00007f6058000000, 0x00007f6658000000, 0x00007f6658000000)
from space 3145728K, 5% used [0x00007f6658000000, 0x00007f666266ffe0, 0x00007f6718000000)
to space 3145728K, 0% used [0x00007f6718000000, 0x00007f6718000000, 0x00007f67d8000000)
concurrent mark-sweep generation total 178257920K, used 127711893K [0x00007f67d8000000, 0x00007f9258000000, 0x00007f9258000000)
Metaspace used 54995K, capacity 55688K, committed 56028K, reserved 57344K
2018-01-07T04:55:02.541+0800: 906673.411: [GC (Allocation Failure) 2018-01-07T04:55:02.542+0800: 906673.411: [ParNew
Desired survivor size 1610612736 bytes, new threshold 6 (max 6)
- age 1: 93841912 bytes, 93841912 total
- age 2: 11310104 bytes, 105152016 total
- age 3: 8967160 bytes, 114119176 total
- age 4: 10278920 bytes, 124398096 total
- age 5: 11626160 bytes, 136024256 total
- age 6: 9077432 bytes, 145101688 total
: 25336255K->195827K(28311552K), 0.1926783 secs] 153048149K->127918291K(206569472K), 0.1932366 secs] [Times: **user=0.00 sys=4.07, real=0.20 secs]**
Heap after GC invocations=32433 (full 10):
par new generation total 28311552K, used 195827K [0x00007f6058000000, 0x00007f67d8000000, 0x00007f67d8000000)
eden space 25165824K, 0% used [0x00007f6058000000, 0x00007f6058000000, 0x00007f6658000000)
from space 3145728K, 6% used [0x00007f6718000000, 0x00007f6723f3cf38, 0x00007f67d8000000)
to space 3145728K, 0% used [0x00007f6658000000, 0x00007f6658000000, 0x00007f6718000000)
concurrent mark-sweep generation total 178257920K, used 127722463K [0x00007f67d8000000, 0x00007f9258000000, 0x00007f9258000000)
Metaspace used 54995K, capacity 55688K, committed 56028K, reserved 57344K
}
2018-01-07T04:55:02.735+0800: 906673.604: Total time for which application threads were stopped: 0.2149603 seconds, Stopping threads took: 0.0002262 seconds
2018-01-07T04:55:14.673+0800: 906685.542: Total time for which application threads were stopped: 0.0183883 seconds, Stopping threads took: 0.0002046 seconds
2018-01-07T04:55:14.797+0800: 906685.666: Total time for which application threads were stopped: 0.0135349 seconds, Stopping threads took: 0.0002472 seconds
2018-01-07T04:55:14.810+0800: 906685.679: Total time for which application threads were stopped: 0.0129019 seconds, Stopping threads took: 0.0001014 seconds
2018-01-07T04:55:14.823+0800: 906685.692: Total time for which application threads were stopped: 0.0125939 seconds, Stopping threads took: 0.0002915 seconds
2018-01-07T04:55:21.597+0800: 906692.466: Total time for which application threads were stopped: 0.0137018 seconds, Stopping threads took: 0.0001683 seconds
{Heap before GC invocations=32433 (full 10):
Your command line specifies -XX:+UseConcMarkSweepGC (and -XX:+UseParNewGC), so this isn't a G1 issue: you are running the CMS collector, not G1.
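If the goal really was G1, the ParNew/CMS flags would need to be replaced rather than added to. A sketch, keeping the heap sizes from the question (the pause target is an assumption, and app.jar is a placeholder):

```shell
# G1 sizes the young generation itself, so drop -Xmn and all CMS flags.
java -Xms200g -Xmx200g \
     -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=200 \
     -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
     -jar app.jar
```

As for the user=0.00 sys=4.07 pattern itself: high system time during a ParNew pause usually points at the OS (e.g. page faults, transparent huge pages, or swapping on a 200 GB heap) rather than the collector.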

Haskell space leak in hash table insertion

I have been coding a histogram, and I have had some great help on here. I store the keys and frequency values in a hash table because the distribution of the keys is unknown, so they might not be sorted or consecutive.
The problem with my code is that it spends too much time in GC, which looks like a space leak: 60.3% of the time is spent in GC, so my productivity is a poor 39.7%.
What is going wrong? I have tried to make things strict in the histogram function and I have also inlined it (GC time went from 69.1% to 59.4%).
Please note I have simplified this code by not updating the frequencies in the HT.
{-# LANGUAGE BangPatterns #-}
import qualified Data.HashTable.IO as H
import qualified Data.Vector as V
type HashTable k v = H.BasicHashTable k v
n :: Int
n = 5000000
kv :: V.Vector (Int,Int)
kv = V.zip k v
where
k = V.generate n (\i -> i `mod` 10)
v = V.generate n (\i -> 1)
histogram :: V.Vector (Int,Int) -> Int -> IO (H.CuckooHashTable Int Int)
histogram vec !n = do
ht <- H.newSized n
go ht (n-1)
where
go ht = go'
where
go' (-1) = return ht
go' !i = do
let (k,v) = vec V.! i
H.insert ht k v
go' (i-1)
{-# INLINE histogram #-}
main :: IO ()
main = do
ht <- histogram kv n
putStrLn "done"
Here's how it is compiled:
ghc --make -O3 -fllvm -rtsopts histogram.hs
Diagnosis:
jap#devbox:~/dev$ ./histogram +RTS -sstderr
done
863,187,472 bytes allocated in the heap
708,960,048 bytes copied during GC
410,476,592 bytes maximum residency (5 sample(s))
4,791,736 bytes maximum slop
613 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 1284 colls, 0 par 0.46s 0.46s 0.0004s 0.0322s
Gen 1 5 colls, 0 par 0.36s 0.36s 0.0730s 0.2053s
INIT time 0.00s ( 0.00s elapsed)
MUT time 0.51s ( 0.50s elapsed)
GC time 0.82s ( 0.82s elapsed)
EXIT time 0.03s ( 0.04s elapsed)
Total time 1.36s ( 1.36s elapsed)
%GC time 60.3% (60.4% elapsed)
Alloc rate 1,708,131,822 bytes per MUT second
Productivity 39.7% of total user, 39.7% of total elapsed
For the sake of comparison, this is what I get running your code as posted:
863,187,472 bytes allocated in the heap
708,960,048 bytes copied during GC
410,476,592 bytes maximum residency (5 sample(s))
4,791,736 bytes maximum slop
613 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 1284 colls, 0 par 1.01s 1.01s 0.0008s 0.0766s
Gen 1 5 colls, 0 par 0.81s 0.81s 0.1626s 0.4783s
INIT time 0.00s ( 0.00s elapsed)
MUT time 1.04s ( 1.04s elapsed)
GC time 1.82s ( 1.82s elapsed)
EXIT time 0.04s ( 0.04s elapsed)
Total time 2.91s ( 2.91s elapsed)
%GC time 62.6% (62.6% elapsed)
Alloc rate 827,493,210 bytes per MUT second
Productivity 37.4% of total user, 37.4% of total elapsed
Given that your vector elements are just (Int, Int) tuples, we have no reason not to use Data.Vector.Unboxed instead of plain Data.Vector. That already leads to significant improvement:
743,148,592 bytes allocated in the heap
38,440 bytes copied during GC
231,096,768 bytes maximum residency (4 sample(s))
4,759,104 bytes maximum slop
226 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 977 colls, 0 par 0.23s 0.23s 0.0002s 0.0479s
Gen 1 4 colls, 0 par 0.22s 0.22s 0.0543s 0.1080s
INIT time 0.00s ( 0.00s elapsed)
MUT time 1.04s ( 1.04s elapsed)
GC time 0.45s ( 0.45s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 1.49s ( 1.49s elapsed)
%GC time 30.2% (30.2% elapsed)
Alloc rate 715,050,070 bytes per MUT second
Productivity 69.8% of total user, 69.9% of total elapsed
Next, instead of hand-rolling recursion over the vector, we might use the optimised functions the vector library provides for that purpose. Code...
import qualified Data.HashTable.IO as H
import qualified Data.Vector.Unboxed as V
n :: Int
n = 5000000
kv :: V.Vector (Int,Int)
kv = V.zip k v
where
k = V.generate n (\i -> i `mod` 10)
v = V.generate n (\i -> 1)
histogram :: V.Vector (Int,Int) -> Int -> IO (H.CuckooHashTable Int Int)
histogram vec n = do
ht <- H.newSized n
V.mapM_ (\(k, v) -> H.insert ht k v) vec
return ht
{-# INLINE histogram #-}
main :: IO ()
main = do
ht <- histogram kv n
putStrLn "done"
... and result:
583,151,048 bytes allocated in the heap
35,632 bytes copied during GC
151,096,672 bytes maximum residency (3 sample(s))
3,003,040 bytes maximum slop
148 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 826 colls, 0 par 0.20s 0.20s 0.0002s 0.0423s
Gen 1 3 colls, 0 par 0.12s 0.12s 0.0411s 0.1222s
INIT time 0.00s ( 0.00s elapsed)
MUT time 0.92s ( 0.92s elapsed)
GC time 0.32s ( 0.33s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 1.25s ( 1.25s elapsed)
%GC time 25.9% (26.0% elapsed)
Alloc rate 631,677,209 bytes per MUT second
Productivity 74.1% of total user, 74.0% of total elapsed
Roughly 80 MB saved, not bad at all. Can we do even better?
A heap profile (which should be the first thing you think of when having memory consumption woes - debugging them without one is shooting in the dark) will reveal that, even with the original code, peak memory consumption happens very early on. Strictly speaking we do not have a leak; we just spend a lot of memory from the beginning. Now, note that the hash table is created with ht <- H.newSized n, with n = 5000000. Unless you expect to have so many different keys (as opposed to elements), that is extremely wasteful. Changing the initial size to 10 (the number of keys you actually have in your test) improves things dramatically:
432,059,960 bytes allocated in the heap
50,200 bytes copied during GC
44,416 bytes maximum residency (2 sample(s))
25,216 bytes maximum slop
1 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 825 colls, 0 par 0.01s 0.01s 0.0000s 0.0000s
Gen 1 2 colls, 0 par 0.00s 0.00s 0.0002s 0.0003s
INIT time 0.00s ( 0.00s elapsed)
MUT time 0.90s ( 0.90s elapsed)
GC time 0.01s ( 0.01s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 0.91s ( 0.90s elapsed)
%GC time 0.6% (0.6% elapsed)
Alloc rate 481,061,802 bytes per MUT second
Productivity 99.4% of total user, 99.4% of total elapsed
Finally, we might as well make our life simpler and try using the pure, yet efficient, hash map from unordered-containers. Code...
import qualified Data.HashMap.Strict as M
import qualified Data.Vector.Unboxed as V
n :: Int
n = 5000000
kv :: V.Vector (Int,Int)
kv = V.zip k v
where
k = V.generate n (\i -> i `mod` 10)
v = V.generate n (\i -> 1)
histogram :: V.Vector (Int,Int) -> M.HashMap Int Int
histogram vec =
V.foldl' (\ht (k, v) -> M.insert k v ht) M.empty vec
main :: IO ()
main = do
print $ M.size $ histogram kv
putStrLn "done"
... and result.
55,760 bytes allocated in the heap
3,512 bytes copied during GC
44,416 bytes maximum residency (1 sample(s))
17,024 bytes maximum slop
1 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 0 colls, 0 par 0.00s 0.00s 0.0000s 0.0000s
Gen 1 1 colls, 0 par 0.00s 0.00s 0.0002s 0.0002s
INIT time 0.00s ( 0.00s elapsed)
MUT time 0.34s ( 0.34s elapsed)
GC time 0.00s ( 0.00s elapsed)
EXIT time 0.00s ( 0.00s elapsed)
Total time 0.34s ( 0.34s elapsed)
%GC time 0.0% (0.0% elapsed)
Alloc rate 162,667 bytes per MUT second
Productivity 99.9% of total user, 100.0% of total elapsed
~60% faster. It remains to be seen how it would scale with a larger number of keys, but with your test data unordered-containers ends up being not only more convenient (pure functions; actually updating the histogram values only takes changing M.insert to M.insertWith) but also faster.
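For completeness, a minimal sketch of that M.insertWith variant (a toy input vector stands in for the kv generator above):

```haskell
import qualified Data.HashMap.Strict as M
import qualified Data.Vector.Unboxed as V

-- insertWith (+) combines the new value with any existing one,
-- so repeated keys accumulate counts instead of being overwritten.
histogram :: V.Vector (Int, Int) -> M.HashMap Int Int
histogram = V.foldl' (\ht (k, v) -> M.insertWith (+) k v ht) M.empty

main :: IO ()
main = print (M.lookup 0 (histogram (V.fromList [(0,1),(1,1),(0,1),(2,1),(0,1)])))
```

Running this prints Just 3, since key 0 appears three times in the toy input.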

Decipher garbage collection output

I was running a sample program using
rahul#g3ck0:~/programs/Remodel$ GOGCTRACE=1 go run main.go
gc1(1): 0+0+0 ms 0 -> 0 MB 422 -> 346 (422-76) objects 0 handoff
gc2(1): 0+0+0 ms 0 -> 0 MB 2791 -> 1664 (2867-1203) objects 0 handoff
gc3(1): 0+0+0 ms 1 -> 0 MB 4576 -> 2632 (5779-3147) objects 0 handoff
gc4(1): 0+0+0 ms 1 -> 0 MB 3380 -> 2771 (6527-3756) objects 0 handoff
gc5(1): 0+0+0 ms 1 -> 0 MB 3511 -> 2915 (7267-4352) objects 0 handoff
gc6(1): 0+0+0 ms 1 -> 0 MB 6573 -> 2792 (10925-8133) objects 0 handoff
gc7(1): 0+0+0 ms 1 -> 0 MB 4859 -> 3059 (12992-9933) objects 0 handoff
gc8(1): 0+0+0 ms 1 -> 0 MB 4554 -> 3358 (14487-11129) objects 0 handoff
gc9(1): 0+0+0 ms 1 -> 0 MB 8633 -> 4116 (19762-15646) objects 0 handoff
gc10(1): 0+0+0 ms 1 -> 0 MB 9415 -> 4769 (25061-20292) objects 0 handoff
gc11(1): 0+0+0 ms 1 -> 0 MB 6636 -> 4685 (26928-22243) objects 0 handoff
gc12(1): 0+0+0 ms 1 -> 0 MB 6741 -> 4802 (28984-24182) objects 0 handoff
gc13(1): 0+0+0 ms 1 -> 0 MB 9654 -> 5097 (33836-28739) objects 0 handoff
gc1(1): 0+0+0 ms 0 -> 0 MB 209 -> 171 (209-38) objects 0 handoff
Help me understand the first part, i.e.
0 + 0 + 0 => Mark + Sweep + Clean times
Does 422 -> 346 mean that memory has been cleaned up from 422 MB to 346 MB?
If yes, then how come memory was reduced when there was nothing to clean up?
In Go 1.5, the format of this output has changed considerably. For the full documentation, head over to http://godoc.org/runtime and search for "gctrace:"
gctrace: setting gctrace=1 causes the garbage collector to emit a single line to standard
error at each collection, summarizing the amount of memory collected and the
length of the pause. Setting gctrace=2 emits the same summary but also
repeats each collection. The format of this line is subject to change.
Currently, it is:
gc # ##s #%: #+...+# ms clock, #+...+# ms cpu, #->#-># MB, # MB goal, # P
where the fields are as follows:
gc # the GC number, incremented at each GC
##s time in seconds since program start
#% percentage of time spent in GC since program start
#+...+# wall-clock/CPU times for the phases of the GC
#->#-># MB heap size at GC start, at GC end, and live heap
# MB goal goal heap size
# P number of processors used
The phases are stop-the-world (STW) sweep termination, scan,
synchronize Ps, mark, and STW mark termination. The CPU times
for mark are broken down in to assist time (GC performed in
line with allocation), background GC time, and idle GC time.
If the line ends with "(forced)", this GC was forced by a
runtime.GC() call and all phases are STW.
The output is generated from this line: http://golang.org/src/pkg/runtime/mgc0.c?#L2147
So the different parts are:
0+0+0 ms : mark, sweep and clean duration in ms
1 -> 0 MB : heap before and after in MB
209 -> 171 : objects before and after
(209-38) objects : number of allocs and frees
handoff (and in Go 1.2 steal and yields) are internals of the algorithm.
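Note that GOGCTRACE=1 is the pre-1.5 spelling; since Go 1.5 the switch lives under GODEBUG, so the equivalent invocation (assuming the same main.go) is:

```shell
# Emit one gctrace line per collection on modern Go.
GODEBUG=gctrace=1 go run main.go
```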
