How is bounce rate calculated in Google Analytics? - node.js

I am pulling some metrics from Google Analytics (including bounce rate); the saved data is given below.
Which metrics are used to calculate bounce rate?
How can I calculate bounce rate from the other metric values?
{
"_id" : ObjectId("5ecd09c83f80224b219b6827"),
"source" : "(direct)",
"medium" : "(none)",
"pagePath" : "/",
"channelGrouping" : "Direct",
"deviceCategory" : "desktop",
"date" : "20180326",
"users" : 6,
"sessions" : 6,
"bounces" : 3,
"avgSessionDuration" : 95.3333333333333,
"pageviews" : 6,
"newUsers" : 5,
"sessionDuration" : 572,
"pageviewsPerSession" : 1,
"bounceRate" : 50,
"goal" : 0,
"accId" : "92025510",
"agencyId" : ObjectId("5e3136e4c2a1b60c89ae07cc"),
"accountMongoId" : ObjectId("5e4ee454cdc4db6a02696405"),
"dataForDate" : "2018-04-01",
"dataForDateTime" : ISODate("2018-03-26T00:00:00.000Z")
}
/* 2 */
{
"_id" : ObjectId("5ecd09c83f80224b219b682c"),
"source" : "(direct)",
"medium" : "(none)",
"pagePath" : "/",
"channelGrouping" : "Direct",
"deviceCategory" : "desktop",
"date" : "20180401",
"users" : 1,
"sessions" : 1,
"bounces" : 1,
"avgSessionDuration" : 0,
"pageviews" : 1,
"newUsers" : 1,
"sessionDuration" : 0,
"pageviewsPerSession" : 1,
"bounceRate" : 100,
"goal" : 0,
"accId" : "92025510",
"agencyId" : ObjectId("5e3136e4c2a1b60c89ae07cc"),
"accountMongoId" : ObjectId("5e4ee454cdc4db6a02696405"),
"dataForDate" : "2018-04-01",
"dataForDateTime" : ISODate("2018-04-01T00:00:00.000Z")
}

bounceRate
The percentage of single-page sessions (i.e., sessions in which the person left the property from the first page).
This is a calculation Google does for you; rather than deriving it yourself, I suggest requesting the ga:bounceRate metric directly.
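That said, the saved documents above are consistent with bounceRate being bounces / sessions * 100 (3 / 6 gives 50, 1 / 1 gives 100). A minimal sketch for recomputing it from the stored fields, assuming rows shaped like the documents in the question and aggregated at whatever granularity you need:
// Recompute bounce rate from stored rows: bounceRate = bounces / sessions * 100.
// Assumes rows shaped like the saved documents above ({ sessions, bounces, ... }).
function bounceRate(rows) {
  const sessions = rows.reduce((sum, r) => sum + r.sessions, 0);
  const bounces = rows.reduce((sum, r) => sum + r.bounces, 0);
  return sessions === 0 ? 0 : (bounces / sessions) * 100;
}

// e.g. combining the two rows above: (3 + 1) / (6 + 1) * 100 ≈ 57.14
console.log(bounceRate([{ sessions: 6, bounces: 3 }, { sessions: 1, bounces: 1 }]));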

Related

Why is MongoDB query performance so much slower on the production instance vs the local instance?

I have a local Mongo running on my machine and a MongoDB running on EC2 (an m5.large with EBS storage).
I'm aware that there will always be some difference (networking, etc.) between making a request locally and making a request to a production Mongo in the cloud.
However, I'm finding that this (I assume) fairly trivial query takes much longer than I expected.
The Node.js Mongo query:
let query = {
  date: {
    $gte: from,
  },
  game_id: gameId,
};
let documents = await gameStreams.find(query).toArray();
I am using Node.js to make the query to MongoDB. This code is run locally against both the local (my laptop) and production (EC2) versions of Mongo.
**local**: time_taken 4003.124ms
**production**: time_taken 71187.316ms
Locally it takes around 4-9 seconds, but when making the same query against production it takes 70-80 seconds. Why would there be such a huge difference here?
Note: I am using toArray() and the number of documents returned is 300,000.
db.getCollection('gamestreams').stats()
{
"ns" : "gaming.gamestreams",
"size" : 4138284702.0,
"count" : 14011415,
"avgObjSize" : 295,
"storageSize" : 900091904,
"freeStorageSize" : 671744,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},
"creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=false),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
"type" : "file",
"uri" : "statistics:table:collection-0--7606124955649373726",
"LSM" : {
"bloom filter false positives" : 0,
"bloom filter hits" : 0,
"bloom filter misses" : 0,
"bloom filter pages evicted from cache" : 0,
"bloom filter pages read into cache" : 0,
"bloom filters in the LSM tree" : 0,
"chunks in the LSM tree" : 0,
"highest merge generation in the LSM tree" : 0,
"queries that could have benefited from a Bloom filter that did not exist" : 0,
"sleep for LSM checkpoint throttle" : 0,
"sleep for LSM merge throttle" : 0,
"total size of bloom filters" : 0
},
"block-manager" : {
"allocations requiring file extension" : 51772,
"blocks allocated" : 58597,
"blocks freed" : 7000,
"checkpoint size" : 899403776,
"file allocation unit size" : 4096,
"file bytes available for reuse" : 671744,
"file magic number" : 120897,
"file major version number" : 1,
"file size in bytes" : 900091904,
"minor version number" : 0
},
"btree" : {
"btree checkpoint generation" : 5456,
"btree clean tree checkpoint expiration time" : NumberLong(9223372036854775807),
"column-store fixed-size leaf pages" : 0,
"column-store internal pages" : 0,
"column-store variable-size RLE encoded values" : 0,
"column-store variable-size deleted values" : 0,
"column-store variable-size leaf pages" : 0,
"fixed-record size" : 0,
"maximum internal page key size" : 368,
"maximum internal page size" : 4096,
"maximum leaf page key size" : 2867,
"maximum leaf page size" : 32768,
"maximum leaf page value size" : 67108864,
"maximum tree depth" : 4,
"number of key/value pairs" : 0,
"overflow pages" : 0,
"pages rewritten by compaction" : 0,
"row-store empty values" : 0,
"row-store internal pages" : 0,
"row-store leaf pages" : 0
},
"cache" : {
"bytes currently in the cache" : 2587594533.0,
"bytes dirty in the cache cumulative" : 1180911380.0,
"bytes read into cache" : 11221545998.0,
"bytes written from cache" : 4664026027.0,
"checkpoint blocked page eviction" : 0,
"data source pages selected for eviction unable to be evicted" : 339,
"eviction walk passes of a file" : 13253,
"eviction walk target pages histogram - 0-9" : 3513,
"eviction walk target pages histogram - 10-31" : 4855,
"eviction walk target pages histogram - 128 and higher" : 0,
"eviction walk target pages histogram - 32-63" : 2504,
"eviction walk target pages histogram - 64-128" : 2381,
"eviction walks abandoned" : 539,
"eviction walks gave up because they restarted their walk twice" : 145,
"eviction walks gave up because they saw too many pages and found no candidates" : 4738,
"eviction walks gave up because they saw too many pages and found too few candidates" : 2980,
"eviction walks reached end of tree" : 7126,
"eviction walks started from root of tree" : 8404,
"eviction walks started from saved location in tree" : 4849,
"hazard pointer blocked page eviction" : 14,
"history store table reads" : 0,
"in-memory page passed criteria to be split" : 669,
"in-memory page splits" : 338,
"internal pages evicted" : 383,
"internal pages split during eviction" : 5,
"leaf pages split during eviction" : 1012,
"modified pages evicted" : 1398,
"overflow pages read into cache" : 0,
"page split during eviction deepened the tree" : 1,
"page written requiring history store records" : 1180,
"pages read into cache" : 125966,
"pages read into cache after truncate" : 1,
"pages read into cache after truncate in prepare state" : 0,
"pages requested from the cache" : 33299436,
"pages seen by eviction walk" : 14244088,
"pages written from cache" : 57731,
"pages written requiring in-memory restoration" : 45,
"tracked dirty bytes in the cache" : 0,
"unmodified pages evicted" : 147454
},
"cache_walk" : {
"Average difference between current eviction generation when the page was last considered" : 0,
"Average on-disk page image size seen" : 0,
"Average time in cache for pages that have been visited by the eviction server" : 0,
"Average time in cache for pages that have not been visited by the eviction server" : 0,
"Clean pages currently in cache" : 0,
"Current eviction generation" : 0,
"Dirty pages currently in cache" : 0,
"Entries in the root page" : 0,
"Internal pages currently in cache" : 0,
"Leaf pages currently in cache" : 0,
"Maximum difference between current eviction generation when the page was last considered" : 0,
"Maximum page size seen" : 0,
"Minimum on-disk page image size seen" : 0,
"Number of pages never visited by eviction server" : 0,
"On-disk page image sizes smaller than a single allocation unit" : 0,
"Pages created in memory and never written" : 0,
"Pages currently queued for eviction" : 0,
"Pages that could not be queued for eviction" : 0,
"Refs skipped during cache traversal" : 0,
"Size of the root page" : 0,
"Total number of pages currently in cache" : 0
},
"checkpoint-cleanup" : {
"pages added for eviction" : 3,
"pages removed" : 0,
"pages skipped during tree walk" : 1992915,
"pages visited" : 4892576
},
"compression" : {
"compressed page maximum internal page size prior to compression" : 4096,
"compressed page maximum leaf page size prior to compression " : 131072,
"compressed pages read" : 125583,
"compressed pages written" : 55619,
"page written failed to compress" : 0,
"page written was too small to compress" : 2112
},
"cursor" : {
"Total number of entries skipped by cursor next calls" : 0,
"Total number of entries skipped by cursor prev calls" : 0,
"Total number of entries skipped to position the history store cursor" : 0,
"bulk loaded cursor insert calls" : 0,
"cache cursors reuse count" : 72908,
"close calls that result in cache" : 0,
"create calls" : 305,
"cursor next calls that skip greater than or equal to 100 entries" : 0,
"cursor next calls that skip less than 100 entries" : 14756445,
"cursor prev calls that skip greater than or equal to 100 entries" : 0,
"cursor prev calls that skip less than 100 entries" : 1,
"insert calls" : 14072870,
"insert key and value bytes" : 4212628410.0,
"modify" : 0,
"modify key and value bytes affected" : 0,
"modify value bytes modified" : 0,
"next calls" : 14756445,
"open cursor count" : 0,
"operation restarted" : 109,
"prev calls" : 1,
"remove calls" : 0,
"remove key bytes removed" : 0,
"reserve calls" : 0,
"reset calls" : 224078,
"search calls" : 13128942,
"search history store calls" : 0,
"search near calls" : 14772,
"truncate calls" : 0,
"update calls" : 0,
"update key and value bytes" : 0,
"update value size change" : 0
},
"reconciliation" : {
"approximate byte size of timestamps in pages written" : 84197280,
"approximate byte size of transaction IDs in pages written" : 77480,
"dictionary matches" : 0,
"fast-path pages deleted" : 0,
"internal page key bytes discarded using suffix compression" : 91023,
"internal page multi-block writes" : 747,
"internal-page overflow keys" : 0,
"leaf page key bytes discarded using prefix compression" : 0,
"leaf page multi-block writes" : 1368,
"leaf-page overflow keys" : 0,
"maximum blocks required for a page" : 1,
"overflow values written" : 0,
"page checksum matches" : 23657,
"page reconciliation calls" : 3073,
"page reconciliation calls for eviction" : 999,
"pages deleted" : 3,
"pages written including an aggregated newest start durable timestamp " : 1173,
"pages written including an aggregated newest stop durable timestamp " : 0,
"pages written including an aggregated newest stop timestamp " : 0,
"pages written including an aggregated newest stop transaction ID" : 0,
"pages written including an aggregated oldest start timestamp " : 1065,
"pages written including an aggregated oldest start transaction ID " : 19,
"pages written including an aggregated prepare" : 0,
"pages written including at least one prepare" : 0,
"pages written including at least one start durable timestamp" : 31568,
"pages written including at least one start timestamp" : 31568,
"pages written including at least one start transaction ID" : 63,
"pages written including at least one stop durable timestamp" : 0,
"pages written including at least one stop timestamp" : 0,
"pages written including at least one stop transaction ID" : 0,
"records written including a prepare" : 0,
"records written including a start durable timestamp" : 5262330,
"records written including a start timestamp" : 5262330,
"records written including a start transaction ID" : 9685,
"records written including a stop durable timestamp" : 0,
"records written including a stop timestamp" : 0,
"records written including a stop transaction ID" : 0
},
"session" : {
"object compaction" : 0
},
"transaction" : {
"race to read prepared update retry" : 0,
"update conflicts" : 0
}
},
"nindexes" : 8,
"indexBuilds" : [],
"totalIndexSize" : 2627395584.0,
"totalSize" : 3527487488.0,
"indexSizes" : {
"_id_" : 270036992,
"service_id_1" : 303837184,
"service_type_1" : 249204736,
"name_1" : 245424128,
"date_1" : 226025472,
"game_id_1" : 247881728,
"service_id_1_service_type_1_name_1_date_1_game_id_1" : 996036608,
"game_id_1_date_1" : 88948736
},
"scaleFactor" : 1,
"ok" : 1.0,
"$clusterTime" : {
"clusterTime" : Timestamp(1607345809, 1),
"signature" : {
"hash" : { "$binary" : "CP8MT7YsL4t+raZn62AZhZdmTFg=", "$type" : "00" },
"keyId" : NumberLong(6855964983600087045)
}
},
"operationTime" : Timestamp(1607345809, 1)
}
db.getCollection('gamestreams').find({ date: { $gte: new Date('2019-07-03T23:59:59.999Z') }, game_id: '24024' }).explain()
{
"queryPlanner" : {
"plannerVersion" : 1,
"namespace" : "gaming.gamestreams",
"indexFilterSet" : false,
"parsedQuery" : {
"$and" : [
{
"game_id" : {
"$eq" : "24024"
}
},
{
"date" : {
"$gte" : ISODate("2019-07-03T23:59:59.999Z")
}
}
]
},
"queryHash" : "FCE2088F",
"planCacheKey" : "A564EAFA",
"winningPlan" : {
"stage" : "FETCH",
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"game_id" : 1.0,
"date" : 1.0
},
"indexName" : "game_id_1_date_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"game_id" : [],
"date" : []
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"game_id" : [
"[\"24024\", \"24024\"]"
],
"date" : [
"[new Date(1562198399999), new Date(9223372036854775807)]"
]
}
}
},
"rejectedPlans" : [
{
"stage" : "FETCH",
"filter" : {
"game_id" : {
"$eq" : "24024"
}
},
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"date" : 1
},
"indexName" : "date_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"date" : []
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"date" : [
"[new Date(1562198399999), new Date(9223372036854775807)]"
]
}
}
},
{
"stage" : "FETCH",
"filter" : {
"date" : {
"$gte" : ISODate("2019-07-03T23:59:59.999Z")
}
},
"inputStage" : {
"stage" : "IXSCAN",
"keyPattern" : {
"game_id" : 1
},
"indexName" : "game_id_1",
"isMultiKey" : false,
"multiKeyPaths" : {
"game_id" : []
},
"isUnique" : false,
"isSparse" : false,
"isPartial" : false,
"indexVersion" : 2,
"direction" : "forward",
"indexBounds" : {
"game_id" : [
"[\"24024\", \"24024\"]"
]
}
}
}
]
},
"serverInfo" : {
"host" : "mongo-prod-mongodb-0",
"port" : 27017,
"version" : "4.4.1",
"gitVersion" : "ad91a93a5a31e175f5cbf8c69561e788bbc55ce1"
},
"ok" : 1.0,
"$clusterTime" : {
"clusterTime" : Timestamp(1607346119, 1),
"signature" : {
"hash" : { "$binary" : "XMuLYE5NVhHktX9wZ79LteksXFs=", "$type" : "00" },
"keyId" : NumberLong(6855964983600087045)
}
},
"operationTime" : Timestamp(1607346119, 1)
}
The stats above are taken from the production Mongo; if there is any other information you need me to provide, please let me know.
Note: using the cursor with a batchSize(10000) helps a little, reducing the time taken on production to around 40 seconds, but that still does not seem right compared to the performance I get locally.
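Roughly, the cursor-based variant looks like this (a sketch using the Node.js driver's batchSize option, assuming a driver version whose cursor is async-iterable; names match the query above):
// Stream documents in batches instead of buffering the whole result set with toArray().
const cursor = gameStreams.find(query).batchSize(10000);
const documents = [];
for await (const doc of cursor) {
  documents.push(doc);
}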
There are a lot of possible reasons, one of which is the network.
Are you fetching a large amount of data?
If yes, try to use limit() to cap the number of documents you fetch at once; you would then need pagination.
If you do not want pagination, I suggest you exclude unwanted fields (use a projection) before fetching the documents. It will help your query return faster; see the sketch below.
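For example, a minimal sketch with the Node.js driver, assuming you only need a few fields (the projected field names are placeholders, not a recommendation of which fields to keep):
// Only pull back the fields you actually use, and cap the result size.
const documents = await gameStreams
  .find(query)
  .project({ game_id: 1, date: 1, _id: 0 }) // hypothetical projection
  .limit(10000)
  .toArray();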
Solution:
If you have a large amount of data, you could use MongoDB sharding (a distributed database):
https://docs.mongodb.com/manual/sharding/
There is no need to retrieve all the data at the same time; you can use MongoDB's limit to page through it:
https://docs.mongodb.com/compass/master/query/limit
And make sure you use MongoDB indexes for fast queries:
https://docs.mongodb.com/manual/indexes/
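Since the explain() output above already shows the compound index game_id_1_date_1 winning the plan, it can also help to rerun explain with execution stats to see how many keys and documents are examined and how long the scan itself takes (a mongo shell sketch; the hint is optional):
db.getCollection('gamestreams')
  .find({ date: { $gte: new Date('2019-07-03T23:59:59.999Z') }, game_id: '24024' })
  .hint('game_id_1_date_1') // optional: pin the compound index
  .explain('executionStats')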

Structured Spark streaming metrics retrieval

I have an application using Spark Structured Streaming and I would like to get some metrics like scheduling delay, latency, etc. Usually such metrics can be found in the Spark UI Streaming tab; however, as far as I know, that functionality does not exist for Structured Streaming.
So how can I get these metric values?
For now I have tried to use the query progress, but not all of the required metrics can be found in the results:
QueryProgress {
"timestamp" : "2019-11-19T20:14:07.011Z",
"batchId" : 1,
"numInputRows" : 8,
"inputRowsPerSecond" : 0.8429038036034138,
"processedRowsPerSecond" : 1.1210762331838564,
"durationMs" : {
"addBatch" : 6902,
"getBatch" : 1,
"getEndOffset" : 0,
"queryPlanning" : 81,
"setOffsetRange" : 20,
"triggerExecution" : 7136,
"walCommit" : 41
},
"stateOperators" : [ {
"numRowsTotal" : 2,
"numRowsUpdated" : 2,
"memoryUsedBytes" : 75415,
"customMetrics" : {
"loadedMapCacheHitCount" : 400,
"loadedMapCacheMissCount" : 0,
"stateOnCurrentVersionSizeBytes" : 17815
}
} ],
"sources" : [ {
"description" : "KafkaV2[Subscribe[tweets]]",
"startOffset" : {
"tweets" : {
"0" : 579
}
},
"endOffset" : {
"tweets" : {
"0" : 587
}
},
"numInputRows" : 8,
"inputRowsPerSecond" : 0.8429038036034138,
"processedRowsPerSecond" : 1.1210762331838564
} ]
}

How to get the sum of a particular field in a MongoDB collection using PyMongo?

My MongoDB collection contains the following data:
{
"_id" : ObjectId("5c1b742eb1829b69963029e8"),
"duration" : 12,
"cost" : 450,
"tax" : 81,
"tags" : [],
"participants" : [
ObjectId("5c1b6a8f348ddb15e4a8aac7"),
ObjectId("5c1b742eb1829b69963029e7")
],
"initiatorId" : ObjectId("5c1b6a8f348ddb15e4a8aac7"),
"context" : "coach",
"accountId" : ObjectId("5bdfe7b01cbf9460c9bb5d68"),
"status" : "over",
"webhook" : "http://d4bdc1ef.ngrok.io/api/v1/webhook_callback",
"hostId" : "5be002109a708109f862a03e",
"createdAt" : ISODate("2018-12-20T10:51:26.143Z"),
"updatedAt" : ISODate("2018-12-20T10:51:44.962Z"),
"__v" : 0,
"endedAt" : ISODate("2018-12-20T10:51:44.612Z"),
"startedAt" : ISODate("2018-12-20T10:51:32.992Z"),
"type" : "voip"
}
{
"_id" : ObjectId("5c1b7451b1829b69963029ea"),
"duration" : 1,
"cost" : 150,
"tax" : 27,
"tags" : [],
"participants" : [
ObjectId("5c1b6a8f348ddb15e4a8aac7"),
ObjectId("5c1b7451b1829b69963029e9")
],
"initiatorId" : ObjectId("5c1b6a8f348ddb15e4a8aac7"),
"context" : "coach",
"accountId" : ObjectId("5bdfe7b01cbf9460c9bb5d68"),
"status" : "over",
"webhook" : "http://d4bdc1ef.ngrok.io/api/v1/webhook_callback",
"hostId" : "5be002109a708109f862a03e",
"createdAt" : ISODate("2018-12-20T10:52:01.560Z"),
"updatedAt" : ISODate("2018-12-20T10:52:08.018Z"),
"__v" : 0,
"endedAt" : ISODate("2018-12-20T10:52:07.667Z"),
"startedAt" : ISODate("2018-12-20T10:52:06.762Z"),
"type" : "voip"
}
I want to get the total duration (the sum of the duration field) for a particular accountId where status equals "over", within a particular date range. Is there any way to accomplish this using PyMongo? I am unable to form the query.
Well, I was making some pretty basic mistakes while converting the query to a PyMongo aggregation pipeline. All I would say is: be careful with the query structure, and especially note that the keys have to be enclosed in quotes (""). To solve this, all I had to do was:
from bson.objectid import ObjectId

# accountId, startDate and EndDate come from the surrounding code;
# startDate and EndDate are Python datetime objects (see the reminder below).
pipe = [
    # Keep only this account's calls that are "over" and fall in the date range.
    {"$match": {"accountId": ObjectId(accountId),
                "status": "over",
                "startedAt": {"$gte": startDate, "$lte": EndDate}}},
    # Add a human-readable date string next to the fields we keep.
    {"$project": {"readableDate":
                      {"$dateToString": {"format": "%Y-%m-%d", "date": "$startedAt"}},
                  "accountId": "$accountId",
                  "duration": "$duration"}},
    # Sum the duration per (date, accountId) pair.
    {"$group": {"_id": {"date": "$readableDate", "accountId": "$accountId"},
                "totalCallDuration": {"$sum": "$duration"}}},
]

for doc in db.VoiceCall.aggregate(pipe):
    print(doc)
Just a reminder: startDate and EndDate are Python datetime objects.

How to count value of matching with the field value in mongoDB

I have a collection on which I need to perform two tasks. I am new to MongoDB; the tasks are small and I will try to explain them.
1. Count the users whose startHour, startMin and startSec are less than or equal to the start_hour and start_min defined in dObjectData. (That is, count the users who arrive at the organisation ("orgId") early or exactly on time; in the sample data the shift starts at start_hour 10 and start_min 0.)
2. Count the users whose startHour, startMin and startSec are greater than the start_hour and start_min defined in dObjectData. (That is, count the users who arrive at the organisation ("orgId") late.)
NOTE
Treat start_sec and end_sec in dObjectData as zero (0), and keep in mind that the result should be grouped by organisation Id and shiftId (each orgId/shiftId pair should appear once).
The MongoDB documents are given below.
{
"_id" : {
"orgId" : "af39bc69-1938-4149-b9f7-f101fd9baf73",
"userId" : "68c88420-4c22-11e7-ad9f-e5d77ee44b6c",
"checkinShiftId" : "7803421a-a680-4173-99af-a55c6f211724"
},
"lastSeen" : ISODate("2018-01-19T16:44:47.467+05:30"),
"firstSeen" : ISODate("2018-01-19T10:27:44.217+05:30"),
"dObjectInformation" : [ ],
"startHour" : 10,
"startMin" : 27,
"startSec" : 44,
"lastHour" : 16,
"lastMin" : 44,
"lastSec" : 47
},
{
"_id" : {
"orgId" : "af39bc69-1938-4149-b9f7-f101fd9baf73",
"userId" : "5cce5410-dca1-11e6-a87c-29432312f160",
"checkinShiftId" : "7803421a-a680-4173-99af-a55c6f211724"
},
"lastSeen" : ISODate("2018-01-19T20:32:50.041+05:30"),
"firstSeen" : ISODate("2018-01-19T11:05:10.095+05:30"),
"dObjectInformation" : [
{
"_id" : "5de84f30-6c62-11e7-9a37-dfd3a80c2c8c",
"dObjectData" : {
"end_min" : "0",
"end_hour" : "19",
"start_min" : "0",
"start_hour" : "10",
"dObjectId" : "7803421a-a680-4173-99af-a55c6f211724",
"objectName" : "shift",
"objectId" : "5a6ecc6a-1d71-4400-adc6-2eb3b39cb5d0",
"updatedOn" : ISODate("2018-01-16T17:48:22.990+05:30"),
"addedOn" : ISODate("2017-07-19T14:42:19.620+05:30")
}
}
],
"startHour" : 11,
"startMin" : 5,
"startSec" : 10,
"lastHour" : 20,
"lastMin" : 32,
"lastSec" : 50
}
Expected Result
{
"shiftId":"avc",
"orgId":"2323"
"numberOfUserReachBeforeOrEqualOnTime":"12",
"numberOfUserReachAfterOnTime":"4"
}
Please suggest how I can write the aggregation query. Thanks!
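A minimal sketch of one possible pipeline (mongo shell), assuming a hypothetical collection name checkins, that each relevant user document carries its shift's dObjectData, and that start_sec is treated as 0 per the note; documents with an empty dObjectInformation array are skipped by the $unwind, and $toInt requires MongoDB 4.0+:
// Compare the user's arrival time with the shift start, both in seconds since midnight.
db.getCollection('checkins').aggregate([
  { $unwind: "$dObjectInformation" },
  { $addFields: {
      userStartSeconds: { $add: [
        { $multiply: ["$startHour", 3600] },
        { $multiply: ["$startMin", 60] },
        "$startSec"
      ] },
      shiftStartSeconds: { $add: [
        { $multiply: [{ $toInt: "$dObjectInformation.dObjectData.start_hour" }, 3600] },
        { $multiply: [{ $toInt: "$dObjectInformation.dObjectData.start_min" }, 60] }
      ] } // start_sec is taken as 0
  } },
  { $group: {
      _id: { orgId: "$_id.orgId", shiftId: "$_id.checkinShiftId" },
      numberOfUserReachBeforeOrEqualOnTime: {
        $sum: { $cond: [{ $lte: ["$userStartSeconds", "$shiftStartSeconds"] }, 1, 0] }
      },
      numberOfUserReachAfterOnTime: {
        $sum: { $cond: [{ $gt: ["$userStartSeconds", "$shiftStartSeconds"] }, 1, 0] }
      }
  } }
])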

Mongo Secondary Sync Stuck in Recovery State

I have one primary node and two secondary nodes. Due to some issues with one secondary node, it was stopped a few days back. Now I am trying to resync that secondary, but after some time it goes into the RECOVERING state and gets stuck there.
For the sync I deleted the data directory and restarted the mongod service.
DB size: 1.1 TB
sms3:PRIMARY> rs.status()
{
"set" : "sms3",
"date" : ISODate("2015-09-01T08:33:40Z"),
"myState" : 1,
"members" : [
{
"_id" : 9,
"name" : "abc:27117",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 9415375,
"optime" : Timestamp(1441096420, 7),
"optimeDate" : ISODate("2015-09-01T08:33:40Z"),
"self" : true
},
{
"_id" : 10,
"name" : "def:27117",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 9411728,
"optime" : Timestamp(1441096418, 159),
"optimeDate" : ISODate("2015-09-01T08:33:38Z"),
"lastHeartbeat" : ISODate("2015-09-01T08:33:38Z"),
"lastHeartbeatRecv" : ISODate("2015-09-01T08:33:39Z"),
"pingMs" : 0,
"syncingTo" : "db330.oak1.omniture.com:27117"
},
{
"_id" : 11,
"name" : "ghi:27117",
"health" : 1,
"state" : 3,
"stateStr" : "RECOVERING",
"uptime" : 53615,
"optime" : Timestamp(1441042830, 300),
"optimeDate" : ISODate("2015-08-31T17:40:30Z"),
"lastHeartbeat" : ISODate("2015-09-01T08:33:39Z"),
"lastHeartbeatRecv" : ISODate("2015-09-01T08:33:39Z"),
"pingMs" : 0,
"syncingTo" : "db330.oak1.omniture.com:27117"
}
],
"ok" : 1
}
sms3:PRIMARY> rs.config()
{
"_id" : "sms3",
"version" : 87615,
"members" : [
{
"_id" : 9,
"host" : "abc:27117"
},
{
"_id" : 10,
"host" : "def:27117",
"priority" : 0.5
},
{
"_id" : 11,
"host" : "ghi:27117",
"priority" : 0.5
}
]
}
As I understand it, your secondary has fallen off the oplog (a capped collection), so it has gone out of sync. You will need to clear everything from the secondary and then restart the sync.
I found the solution: I copied the data directory from another secondary node that was up onto the secondary node that was down, and then restarted mongod.
