ArangoDB replication applier not working - arangodb

I am trying to set up a master-slave replication model in ArangoDB. The initial batch sync works, but the applier for live sync is not working. It keeps failing on a unique index constraint that works perfectly fine on the master and has no duplicate key issue there.
require("#arangodb/replication").setupReplication({
...> endpoint: "tcp://master:8529",
...> username: “name”,
...> password: “pass”,
...> autoStart: true,
...> incremental:true,
...> verbose:true,
...> });
Applier state:
{
  "state" : {
    "started" : "2020-12-08T07:21:50Z",
    "running" : false,
    "phase" : "inactive",
    "lastAppliedContinuousTick" : null,
    "lastProcessedContinuousTick" : null,
    "lastAvailableContinuousTick" : null,
    "safeResumeTick" : null,
    "progress" : {
      "time" : "2020-12-09T07:07:44Z",
      "message" : "applier shut down",
      "failedConnects" : 0
    },
    "totalRequests" : 4,
    "totalFailedConnects" : 0,
    "totalEvents" : 0,
    "totalDocuments" : 0,
    "totalRemovals" : 0,
    "totalResyncs" : 3,
    "totalOperationsExcluded" : 0,
    "totalApplyTime" : 0,
    "averageApplyTime" : 0,
    "totalFetchTime" : 0,
    "averageFetchTime" : 0,
    "lastError" : {
      "errorNum" : 0
    },
    "time" : "2020-12-09T07:13:02Z"
  },
  "server" : {
    "version" : "3.6.4",
    "serverId" : "237391144398597"
  },
  "endpoint" :
I have tried everything (sync, async). It only does the first batch update; live updates are not happening. Somehow the applier just shuts down. Please help.

Can you try either
require("#arangodb/replication").setupReplication({
endpoint: "tcp://master:8529",
username: “name”,
password: “pass”,
autoStart: true,
incremental:true,
verbose:true,
includeSystem: true
});
for starting the applier on the current database, or the following for starting the applier for all databases (the entire server):
require("#arangodb/replication").setupReplicationGlobal({
endpoint: "tcp://master:8529",
username: “name”,
password: “pass”,
autoStart: true,
incremental:true,
verbose:true
});
In the latter case (setupReplicationGlobal) you can later check the state of the applier via
require("#arangodb/replication").globalApplier.state();
(mind the globalApplier here vs. just applier)
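For example, if the applier shuts down again, the state output shown above carries the reason; a quick arangosh check along these lines (a sketch using the same module):

var rep = require("@arangodb/replication");
var s = rep.globalApplier.state();   // or rep.applier.state() for a single database
if (!s.state.running) {
  // errorNum is 0 when the applier was shut down cleanly rather than on error
  print("applier not running, lastError: " + JSON.stringify(s.state.lastError));
  print("last progress: " + s.state.progress.message);
}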

Related

MongoDB taking too much time for old entries

I am new to MongoDB and I am facing an issue. I have millions of documents in my collection, and I am trying to find a single entry using the findOne({}) command. When I look for recent entries, the response comes in milliseconds, but when I fetch older entries, around the 600 millionth document, it takes around 2 minutes in the mongo shell and my node server gives
{ MongoError : connection 1 to 127.0.0.1:27017 timed out }
and my Node.js server sends an empty response. Can anyone tell me what I should do to resolve this issue? Thanks in advance.
explain gives me:
db.contacts.find({"phoneNumber":"9165900137"}).explain("executionStats")
{
  "queryPlanner" : {
    "plannerVersion" : 1,
    "namespace" : "meanApp.contacts",
    "indexFilterSet" : false,
    "parsedQuery" : {
      "phoneNumber" : {
        "$eq" : "9165900137"
      }
    },
    "winningPlan" : {
      "stage" : "COLLSCAN",
      "filter" : {
        "phoneNumber" : {
          "$eq" : "9165900137"
        }
      },
      "direction" : "forward"
    },
    "rejectedPlans" : [ ]
  },
  "executionStats" : {
    "executionSuccess" : true,
    "nReturned" : 1,
    "executionTimeMillis" : 321188,
    "totalKeysExamined" : 0,
    "totalDocsExamined" : 495587806,
    "executionStages" : {
      "stage" : "COLLSCAN",
      "filter" : {
        "phoneNumber" : {
          "$eq" : "9165900137"
        }
      },
      "nReturned" : 1,
      "executionTimeMillisEstimate" : 295230,
      "works" : 495587808,
      "advanced" : 1,
      "needTime" : 495587806,
      "needYield" : 0,
      "saveState" : 3871779,
      "restoreState" : 3871779,
      "isEOF" : 1,
      "invalidates" : 0,
      "direction" : "forward",
      "docsExamined" : 495587806
    }
  },
  "serverInfo" : {
    "host" : "li1025-15.members.linode.com",
    "port" : 27017,
    "version" : "3.2.16",
    "gitVersion" : "056bf45128114e44c5358c7a8776fb582363e094"
  },
  "ok" : 1
}
As indicated in the explain plan results, the current query is doing a collection scan (COLLSCAN). This means it has to scan every document in the collection to produce the match, and you have about half a billion documents.
Try adding this index; it may take a while to build on a collection of this size.
db.contacts.createIndex( { phoneNumber: 1 }, { background: true } )
Run the query once the index build has completed; you should see a dramatic improvement in performance. To confirm that the index got picked up, run explain again: it should no longer say COLLSCAN.
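To illustrate what to look for, here is a sketch of that verification step (field names as produced by explain in this MongoDB version):

// re-run after the index build finishes
db.contacts.find({ "phoneNumber" : "9165900137" }).explain("executionStats")
// the winning plan should now be an IXSCAN feeding a FETCH stage, e.g.
//   "winningPlan" : { "stage" : "FETCH",
//                     "inputStage" : { "stage" : "IXSCAN",
//                                      "keyPattern" : { "phoneNumber" : 1 } } }
// and totalDocsExamined should drop from 495587806 to roughly nReturned.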

Getting null properties when launching JBoss EAP 6.4 Management Console

I am getting a weird JSON response when I try to access my local JBoss EAP 6.4 management console for the first time.
Steps taken:
Unzipped jboss-eap-6.4.zip
cd jboss-eap-6.4/bin
sh add-user.sh
Created a Management User with a specific password and also an Application User with a specific password.
Started my local instance:
sh standalone.sh
Everything went perfectly well on server startup...
When I tried to access the management console, entering the management user name and password, by going to this URL:
http://127.0.0.1:9990/management
This is the JSON I received:
{"management-major-version" : 1, "management-micro-version" : 0,
"management-minor-version" : 7, "name" : "mycomputer", "namespaces" : [],
"product-name" : "EAP", "product-version" : "6.4.0.GA", "profile-name" : null,
"release-codename" : "Janus", "release-version" : "7.5.0.Final-redhat-21",
"schema-locations" : [], "core-service" : {"platform-mbean" : null,
"management" : null, "service-container" : null,
"server-environment" : null, "patching" : null, "module-loading" : null},
"deployment" : null, "deployment-overlay" : null, "extension" :
{"org.jboss.as.clustering.infinispan" : null, "org.jboss.as.connector" : null,
"org.jboss.as.deployment-scanner" : null, "org.jboss.as.ee" : null,
"org.jboss.as.ejb3" : null, "org.jboss.as.jaxrs" : null, "org.jboss.as.jdr" : null,
"org.jboss.as.jmx" : null, "org.jboss.as.jpa" : null,
"org.jboss.as.jsf" : null, "org.jboss.as.logging" : null,
"org.jboss.as.mail" : null, "org.jboss.as.naming" : null,
"org.jboss.as.pojo" : null, "org.jboss.as.remoting" : null,
"org.jboss.as.sar" : null, "org.jboss.as.security" : null,
"org.jboss.as.threads" : null, "org.jboss.as.transactions" : null,
"org.jboss.as.web" : null, "org.jboss.as.webservices" : null,
"org.jboss.as.weld" : null}, "interface" : {"management" : null,
"public" : null, "unsecure" : null}, "path" :
{"jboss.server.log.dir" : null, "jboss.server.data.dir" : null,
"jboss.server.base.dir" : null, "jboss.server.config.dir" : null,
"user.dir" : null, "user.home" : null, "jboss.server.temp.dir"
null, "jboss.controller.temp.dir" : null, "jboss.home.dir" : null,
"java.home" : null}, "socket-binding-group" : {"standard-sockets" :
null}, "subsystem" : {"jaxrs" : null, "jpa" : null, "ee" : null,
"transactions" : null, "remoting" : null, "web" : null, "jmx" :
null, "security" : null, "weld" : null, "pojo" : null, "infinispan" : null,
"jca" : null, "datasources" : null, "logging" : null, "naming" : null,
"webservices" : null, "jsf" : null, "jdr" : null, "deployment-scanner" : null,
"ejb3" : null, "mail" : null, "threads" : null, "sar" : null,
"resource-adapters" : null}, "system-property" : null}
Is this supposed to be the correct response?
Is the management console supposed to be accessed by going to:
http://127.0.0.1:9990/
I would really appreciate it if someone could clarify this.
Thanks for taking the time to read this.
Yes, this looks to be expected behavior, as you are hitting the management REST API endpoint. You want to hit the JBoss Management Console instead. Try:
http://localhost:9990/console/App.html

Correct way to connect node.js to a sharded replica cluster in MongoDB using mongoose

So recently we redesigned our MongoDB database cluster to use SSL and replica sets in addition to the sharding we had already implemented. SSL wasn't too difficult to get working; we just needed to split up the private key and certificate, and then everything worked fine. However, getting my Node.js app to connect to both mongos instances is proving to be more difficult than I anticipated.
Before we implemented replica sets, we just had two shards, each of them running a mongos router, and in mongoose I gave it the following connection string:
mongodb://Host1:27017,Host2:27017/DatabaseName
Then, in the options object to the connection, I passed in the following:
{mongos: true}
This seemed to work just fine. However, after the replica sets were implemented, whenever I pass the mongos option, the application never connects. Our cluster is now set up so that there are 4 MongoDB servers in 2 replica sets of 2 servers each. The master in each replica set is also running a mongos router instance. I assumed I should be able to connect the same way as before; however, it never connects. If I create the connection using just 1 shard with no options, the application connects just fine. However, this is not ideal, as the whole point is to have redundancy among the router instances. Can anyone offer some insight here?
Here is the output of sh.status():
--- Sharding Status ---
sharding version: {
  "_id" : 1,
  "minCompatibleVersion" : 5,
  "currentVersion" : 6,
  "clusterId" : ObjectId("57571fc5bfe098f05bbbe370")
}
shards:
  { "_id" : "rs0", "host" : "rs0/mongodb-2:27018,mongodb-3:27018" }
  { "_id" : "rs1", "host" : "rs1/mongodb-4:27018,mongodb-5:27018" }
active mongoses:
  "3.2.7" : 4
balancer:
  Currently enabled: yes
  Currently running: no
  Failed balancer rounds in last 5 attempts: 0
  Migration Results for the last 24 hours:
    No recent migrations
databases:
  { "_id" : "Demo", "primary" : "rs0", "partitioned" : true }
I was asked to output rs.config(); here it is from the first master node:
{
  "_id" : "rs0",
  "version" : 1,
  "protocolVersion" : NumberLong(1),
  "members" : [
    {
      "_id" : 0,
      "host" : "mongodb-2:27018",
      "arbiterOnly" : false,
      "buildIndexes" : true,
      "hidden" : false,
      "priority" : 1,
      "tags" : {
      },
      "slaveDelay" : NumberLong(0),
      "votes" : 1
    },
    {
      "_id" : 1,
      "host" : "mongodb-3:27018",
      "arbiterOnly" : false,
      "buildIndexes" : true,
      "hidden" : false,
      "priority" : 1,
      "tags" : {
      },
      "slaveDelay" : NumberLong(0),
      "votes" : 1
    }
  ],
  "settings" : {
    "chainingAllowed" : true,
    "heartbeatIntervalMillis" : 2000,
    "heartbeatTimeoutSecs" : 10,
    "electionTimeoutMillis" : 10000,
    "getLastErrorModes" : {
    },
    "getLastErrorDefaults" : {
      "w" : 1,
      "wtimeout" : 0
    },
    "replicaSetId" : ObjectId("57571692c490a699f61e3784")
  }
}
Alright, so I finally figured it out. I went through the logs on the server and saw that the client was trying to connect without SSL, so it kept getting booted by the server. This was confusing to me because I had set SSL in the server options and had the correct keys and cert bundle, and I was able to connect to a single instance just fine. Then I looked through the mongo driver options here. It shows that there are options you need to set for mongos itself regarding SSL. After setting these explicitly, I was able to connect.
In summary, this options object allowed me to connect:
var options = {
"server": {
"ssl": true,
"sslCA": sslCAbuffer,
"sslCert": sslCertbuffer,
"sslKey": sslKeybuffer
},
"mongos": {
"ssl": true,
"sslCA": sslCAbuffer,
"sslCert": sslCertbuffer,
"sslKey": sslKeybuffer
}
}
while this options object did not:
var options = {
"server": {
"ssl": true,
"sslCA": sslCAbuffer,
"sslCert": sslCertbuffer,
"sslKey": sslKeybuffer
},
"mongos": true
}
I think the server object is probably redundant, but I left it in.
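For reference, here is a sketch of how these options plug into a mongoose connection (mongoose 4.x-era API; the certificate file names are placeholders, not from the original setup):

var mongoose = require("mongoose");
var fs = require("fs");

// placeholder file names -- substitute your actual CA bundle, client cert and key
var sslCAbuffer = [fs.readFileSync("ca.pem")];
var sslCertbuffer = fs.readFileSync("client.pem");
var sslKeybuffer = fs.readFileSync("client-key.pem");

var options = {
  "server": { "ssl": true, "sslCA": sslCAbuffer, "sslCert": sslCertbuffer, "sslKey": sslKeybuffer },
  "mongos": { "ssl": true, "sslCA": sslCAbuffer, "sslCert": sslCertbuffer, "sslKey": sslKeybuffer }
};

// same multi-mongos connection string as before
mongoose.connect("mongodb://Host1:27017,Host2:27017/DatabaseName", options);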

Query time mongoose occasionally taking 3-4 seconds

I've written a back-end node server for a multiplayer game I'm developing, and most of the time each request takes about 20-100ms to resolve. However, sometimes (maybe 1 out of 50 requests) the same request will take 2000+ms to resolve.
The server is written entirely in Node.js and is hosted on Heroku. I am using mongoose to make the calls to the database.
Here is a screenshot of the logs; at the top you can see how queries normally function. The request comes in at 19:03:03.68 and the response is sent out at 19:03:03.73; saving all the data finishes at 19:03:03.74. Heroku logs the request as taking 58ms, which is the desired and expected outcome.
Below that is when the issue occurs. You can see multiple requests come in from two separate clients (each client sends ~1 request per second, which is correct). However, the requests build up, and after about 2000-5000ms they all quickly resolve one after another. I’ve tried narrowing down the issue without much luck, but I believe it’s related to when I query the database: multiple requests come in, but the first query to the database doesn’t actually resolve until around 2300ms later. As far as I can tell, these requests are identical to the ones that resolve in 20-100ms and occur completely at random.
The actual code is similar to this on the server (Simplified for the sake of this question):
console.log("request received");
Game.findOne({ id: gameID }, function (err, theGame) {
  console.log("First Query");
  // ... (rest of the handler)
});
I also opened up the mongo shell for the database to look for queries taking an excessive amount of time (>2000ms) with this code:
db.system.profile.find({ millis: { $gt: 2000 } }).sort({ ts: 1 });
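(As an aside, system.profile is only populated when profiling is enabled; a minimal sketch, assuming the same 2000 ms threshold:)

// record operations slower than 2000 ms in db.system.profile
db.setProfilingLevel(1, 2000)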
Here are the slightly modified results which should include everything relevant:
{ "op" : "update", "ns" : "theDb.players", "query" :
{ "_id" : ObjectId("572b8eb242d70903005df0df")
}, "updateobj" :
{ "$set" :
{ "lastSeen" : ISODate("2016-05-05T18:19:30.761Z"), "timeElapsed" : 16
}
}, "nscanned" : 1, "nscannedObjects" : 1, "nMatched" : 1, "nModified" : 1, "fastmod" : true, "keyUpdates" : 0, "writeConflicts" : 0, "numYield" : 0, "locks" :
{ "Global" :
{ "acquireCount":
{ "r" : NumberLong(2), "w" : NumberLong(2) }
}, "MMAPV1Journal" :
{ "acquireCount" :
{ "w" : NumberLong(2) }, "acquireWaitCount" :
{ "w" : NumberLong(1) }, "timeAcquiringMicros" :
{ "w" : NumberLong(7294179) }
}, "Database" :
{ "acquireCount" :
{ "w" : NumberLong(2) }
}, "Collection" :
{ "acquireCount" :
{ "W" : NumberLong(1) }
}, "oplog" :
{ "acquireCount" :
{ "w" : NumberLong(1) }
}
}, "milli" : 2298, "execStats" : {}, "ts" : ISODate("2016-05-05T18:19:33.060Z")
Second Result:
{ "op" : "update", "ns" : "theDb.connections", "query" :
{ "_id" : ObjectId("572b8eaf42d70903005df0dd")
}, "updateobj" :
{ "$set" :
{ "internalCounter" : 3, "lastCount" : 3, "lastSeen" : ISODate("2016-05-05T18:19:30.761Z"), "playerID" : 128863276517, "sinceLast" : 0
}
}, "nscanned" : 1, "nscannedObjects" : 1, "nMatched" : 1, "nModified" : 1, "keyUpdates" : 0, "writeConflicts" : 0, "numYield" : 0, "locks" :
{ "Global" :
{ "acquireCount" :
{ "r" : NumberLong(2), "w" : NumberLong(2)
}
}, "MMAPV1Journal" :
{ "acquireCount" :
{ "w" : NumberLong(2) }, "acquireWaitCount" :
{ "w" : NumberLong(1) }, "timeAcquiringMicros" :
{ "w" :NumberLong(7294149) }
}, "Database" :
{ "acquireCount" :
{ "w" : NumberLong(2) }
}, "Collection" :
{ "acquireCount" :
{ "W" : NumberLong(1) }
}, "oplog" :
{ "acquireCount" :
{ "w" : NumberLong(1) }
}
}, "millis" : 2299, "execStats" : {},"ts" : ISODate("2016-05-05T18:19:33.061Z")
I really need to ensure the latency for any request never exceeds 500ms; otherwise it is extremely irritating in the game itself. I’m really at a loss as to what might be causing this and how to find out more.
I'm assuming the cause of the issue is that timeAcquiringMicros is so long; note that it is reported in microseconds, so NumberLong(7294179) is roughly 7.3 seconds spent waiting to acquire the MMAPV1 journal lock. I'm unsure of what is causing this, though.
*Note: the client is requesting the data with just standard HTTP requests; I’m not currently using any sockets.
Alright, I've finally solved the issue. The problem wasn't actually connected to anything that I had done. I was using the sandbox plan that mLab offers in conjunction with Heroku, which had my application competing for processing time with other people also using the sandbox plan. Their queries were slowing down the database, causing those spikes in response times.
The solution: I had to upgrade to their shared cluster plan. Since upgrading, I haven't had any irregularities in query times.

arangodb replication applier stopped with error 1413: no start tick

I followed the replication (master-slave) setup guide and got data replicated fine, but the applier stopped with the following error. I searched the manual and googled without finding much other than the error message itself. I would really appreciate it if you could provide some clues on how to troubleshoot this. Thanks!
arangosh [_system]> require("org/arangodb/replication").applier.state();
{
  "state" : {
    "running" : false,
    "lastAppliedContinuousTick" : null,
    "lastProcessedContinuousTick" : null,
    "lastAvailableContinuousTick" : null,
    "progress" : {
      "time" : "2014-04-15T20:20:13Z",
      "message" : "applier stopped",
      "failedConnects" : 0
    },
    "totalRequests" : 5,
    "totalFailedConnects" : 0,
    "totalEvents" : 0,
    "lastError" : {
      "time" : "2014-04-15T20:20:13Z",
      "errorMessage" : "no start tick",
      "errorNum" : 1413
    },
    "time" : "2014-04-15T20:37:50Z"
  },
  "server" : {
    "version" : "2.0.4",
    "serverId" : "83323931320193"
  },
  "endpoint" : "tcp://master:8529",
  "database" : "_system"
}
This might happen if you start the applier for the very first time without specifying a start tick. Use
require("org/arangodb/replication").applier.start(1)
instead.
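More generally, the initial sync returns the tick the applier should resume from, so you can avoid hard-coding one. A sketch of the full sequence using the same 2.0-era module path (endpoint and credentials are placeholders):

var rep = require("org/arangodb/replication");

// full initial sync from the master; the result carries the log tick
// at which continuous replication should pick up
var syncResult = rep.sync({
  endpoint: "tcp://master:8529",
  username: "name",
  password: "pass"
});

// point the applier at the master, then start it from the returned tick
rep.applier.properties({
  endpoint: "tcp://master:8529",
  username: "name",
  password: "pass"
});
rep.applier.start(syncResult.lastLogTick);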
