Graylog2 Failed Upgrade

We upgraded from Graylog 2.1.3 to 2.3.2 and now receive the message below repeatedly. Some parts of the UI load, but not Search or Streams. Alerts are still going out. Does anyone know how I can fix this? Rolling back does not seem to work at all.
Could not apply filter [StreamMatcher] on message <d8fa4293-dc7a-11e7-bc81-0a206782e8c1>:
java.lang.IllegalStateException: index set must not be null! (stream id=5a00a043a9b2c72984c581b6 title="My Streams")

What seems to have happened is that some streams did not get an "index_set_id" field added to their documents in the streams collection in MongoDB. Here is an example of a bad one:
{
"_id" : ObjectId("5a1d6bb2a9b2c72984e24dc0"),
"creator_user_id" : "admin",
"matching_type" : "AND",
"description" : "EU2 Queue Prod",
"created_at" : ISODate("2017-11-28T13:59:14.546Z"),
"disabled" : false,
"title" : "EU2 Queue Prod",
"content_pack" : null
}
I was able to add the "index_set_id" : "59bb08b469d42f3bcfa6f18e" value in and restore the streams:
{
"_id" : ObjectId("5a1d6bb2a9b2c72984e24dc0"),
"creator_user_id" : "admin",
"index_set_id" : "59bb08b469d42f3bcfa6f18e",
"matching_type" : "AND",
"description" : "EU2 Queue Prod",
"created_at" : ISODate("2017-11-28T13:59:14.546Z"),
"disabled" : false,
"title" : "EU2 Queue Prod",
"content_pack" : null
}
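
For anyone hitting the same thing, here is a hedged sketch of that fix applied to every affected stream at once from the mongo shell (the database name "graylog" is an assumption based on a default install, and the index set id must be replaced with the id of your own default index set):

use graylog
// Stamp the default index set id onto every stream document that is missing one.
db.streams.update(
    { "index_set_id" : { "$exists" : false } },
    { "$set" : { "index_set_id" : "59bb08b469d42f3bcfa6f18e" } },
    { "multi" : true }
)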

I faced this issue too, with another version of Graylog in a Kubernetes environment.
I took the following steps to fix it:
In the Graylog UI, open the Streams menu, click "More actions" next to the affected stream (in your case "My Streams"), choose "Edit stream", and select "Default index set" from the drop-down list.
Do this for all the available streams.

Related

How to synchronize an Azure IoT Hub device twin with device changes?

For our IoT solution we are trying to tackle a synchronization issue with the device twin.
In the normal situation the cloud is in charge: it sets a desired property in the IoT Hub device twin, the device gets a notification, changes the property locally, and writes the reported property to confirm that it is in sync.
But in our case the user of the device can also change properties locally. The reported property then changes and is out of sync with the desired property.
How should we handle this? Update the desired property? Leave it as is?
Another case is that properties can be deleted from either side; see the attached picture of the written use cases.
Here is an example of the JSON twin:
"desired" : {
"recipes" : {
"recipe1" : {
"uri" : "blob.name.csv",
"version" : "1"
},{
"recipe2" : {
"uri" : "blob.name.csv",
"version" : "1"
},{
"recipe3" : {
"uri" : "blob.name.csv",
"version" : "1"
}
}
},
"reported" : {
"recipes" : {
"recipe1" : {
"uri" : "blob.name.csv",
"version" : "1"
},{
"recipe2" : {
"uri" : "blob.name.csv",
"version" : "3"
},{
"recipe3" : {
"uri" : "blob.name.csv",
"version" : "2"
}
}
I hope the question is clear. Thanks in advance.
Kind regards,
Marc
The approach to conflict resolution is specific to the business; it's not possible to define a universal rule. In some scenarios the user's intent is more important than the service's, and vice versa.
For instance, an employee working late wants an office temperature of 76°F while the automatic building-management service wants 70°F out of hours; in this case the user wins (the desired property is discarded). In another example, an employee wants to enter the office building out of hours and turn on all the lights, but the building-management service won't allow it (while a building admin would be allowed instead), etc.
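
One common pattern for the locally-changed case is to keep the device writing only reported properties and let a back-end job compare reported against desired, deciding per business rule whether to accept the change (update desired to match) or override it (re-push the previous desired value). Below is a minimal device-side sketch using the Node.js azure-iot-device SDK; the connection string, the recipes shape, and the applyRecipesLocally/onUserChangedRecipe helpers are illustrative assumptions, not part of the original question.

const { Client } = require('azure-iot-device');
const { Mqtt } = require('azure-iot-device-mqtt');

const client = Client.fromConnectionString(process.env.DEVICE_CONNECTION_STRING, Mqtt);

// Hypothetical local-side hook: persist the recipes the device should use.
function applyRecipesLocally(recipes) {
  console.log('applying recipes locally:', JSON.stringify(recipes));
}

client.open((err) => {
  if (err) { return console.error('open failed:', err.message); }
  client.getTwin((err, twin) => {
    if (err) { return console.error('getTwin failed:', err.message); }

    // Cloud -> device: apply the desired change, then acknowledge it in reported.
    twin.on('properties.desired', (delta) => {
      if (delta.recipes) {
        applyRecipesLocally(delta.recipes);
        twin.properties.reported.update({ recipes: delta.recipes }, (err) => {
          if (err) { console.error('reported update failed:', err.message); }
        });
      }
    });

    // Device -> cloud: a user change on the device only updates reported; the
    // back end then decides whether to adopt it into desired or push back.
    function onUserChangedRecipe(name, recipe) {
      twin.properties.reported.update({ recipes: { [name]: recipe } }, (err) => {
        if (err) { console.error('reported update failed:', err.message); }
      });
    }
  });
});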

Why doesn't MongoDB cursor.maxTimeMS work?

Maybe I'm missing something, but according to the documentation and all the posts online, setting
cursor.maxTimeMS(1000).toArray(...)
should time out after 1000ms, and MongoDB should kill the operation after timeout.
But none of this is happening.
First, there is no timeout. It keeps going.
Second, I check db.currentOp() and the operation is still there, eating up all the memory. This later adds up and crashes the database with OOM.
Anyway running db.currentOp() after several minutes of no response prints:
{
"inprog" : [
{
"host" : "db2:27017",
"desc" : "conn20",
"connectionId" : 20,
"client" : "127.0.0.1:59214",
"clientMetadata" : {
"driver" : {
"name" : "nodejs",
"version" : "3.1.4"
},
"os" : {
"type" : "Linux",
"name" : "linux",
"architecture" : "x64",
"version" : "4.15.0-30-generic"
},
"platform" : "Node.js v8.10.0, LE, mongodb-core: 3.1.3"
},
"active" : true,
"currentOpTime" : "2018-09-14T00:10:29.903+0000",
"opid" : 11056,
"lsid" : {
"id" : UUID("78a2d853-30bf-4d6d-a208-0a150d9bf8be"),
"uid" : BinData(0,"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=")
},
"secs_running" : NumberLong(649),
"microsecs_running" : NumberLong(649968360),
"op" : "command",
As you can see, this has been running for 649 seconds, even though I explicitly specified 1000ms.
What is going on here? I've been pulling my hair out for two days and can't figure this out.
I had the same issue and had to update the MongoDB Node.js driver from 3.1.1 to 3.3.5, and it worked like a charm!
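
For reference, a minimal sketch of the call with a 3.3+ driver (the connection string, database name, "events" collection and its filter are assumptions); with the fixed driver the promise rejects with "operation exceeded time limit" instead of hanging:

const { MongoClient } = require('mongodb');

async function run() {
  const client = await MongoClient.connect('mongodb://localhost:27017', { useUnifiedTopology: true });
  try {
    const events = client.db('test').collection('events');
    const docs = await events
      .find({ processed: false })  // hypothetical filter
      .maxTimeMS(1000)             // ask the server to abort the operation after ~1000ms
      .toArray();
    console.log('fetched', docs.length, 'documents');
  } catch (err) {
    console.error('query failed or timed out:', err.message);
  } finally {
    await client.close();
  }
}

run();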

How to get the status of all jobs via the Spark hidden REST API

I am using Spark 1.6.2 and the hidden REST API (http://arturmkrtchyan.com/apache-spark-hidden-rest-api).
How can I get the status of all jobs in one REST call, instead of calling http://spark-cluster-ip:6066/v1/submissions/status/driver-20151008145126-0000 once per driver?
Depending on exactly what you need, you can use :8080/json to get a JSON document describing all the applications. You should see an activeapps array with short info on each application, including its state (e.g. RUNNING).
For example, if I open up spark-shell I get the following field in the json:
"memoryused" : 82944,
"activeapps" : [ {
"starttime" : 1484638046648,
"id" : "app-20170117022726-0113",
"name" : "Spark shell",
"cores" : 60,
"user" : "assaf",
"memoryperslave" : 27648,
"submitdate" : "Tue Jan 17 02:27:26 EST 2017",
"state" : "RUNNING",
"duration" : 26954
} ],
Note that this is basically adding /json to the UI port rather than going to the submission port.
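
If you want this from code rather than a browser, here is a small Node.js sketch against the same endpoint (the master host name is an assumption; use your own master's UI address):

const http = require('http');

http.get('http://spark-master:8080/json', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    const info = JSON.parse(body);
    // One call returns every active application with its state (e.g. RUNNING).
    for (const app of info.activeapps || []) {
      console.log(app.id, app.name, app.state);
    }
  });
}).on('error', (err) => console.error('request failed:', err.message));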

How should I do multi-threaded insertion in Neo4j?

I am trying to solve a problem that occurs when inserting related nodes in Neo4j. Nodes are inserted by several threads using the standard save method of org.springframework.data.neo4j.repository.GraphRepository.
Sometimes the insertion fails when fetching a related node in order to define a relationship. The exception messages are like this: org.neo4j.graphdb.NotFoundException: '__type__' on http://neo4j:7474/db/data/relationship/105550.
Calling this URL from curl returns a JSON object which appears to have __type__ correctly defined, which suggests that the exception is caused by a race between inserting threads.
The method that originates the calls to the repository is annotated @Neo4jTransactional. What atomicity and transaction isolation does @Neo4jTransactional guarantee, and how should I use it for multi-threaded insertions?
Update:
I have now been able to see this happening in the debugger. The code is trying to fetch the node at one end of this relationship, together with all its relationships. It throws an exception because the __type__ attribute is missing. This is the JSON initially returned:
{
"extensions" : { },
"start" : "http://localhost:7474/db/data/node/617",
"property" : "http://localhost:7474/db/data/relationship/533/properties/{key}",
"self" : "http://localhost:7474/db/data/relationship/533",
"properties" : "http://localhost:7474/db/data/relationship/533/properties",
"type" : "CONTAINS",
"end" : "http://localhost:7474/db/data/node/650",
"metadata" : {
"id" : 533,
"type" : "CONTAINS"
},
"data" : { }
}
A few seconds later, the same REST call returns this JSON:
{
"extensions" : { },
"start" : "http://localhost:7474/db/data/node/617",
"property" : "http://localhost:7474/db/data/relationship/533/properties/{key}",
"self" : "http://localhost:7474/db/data/relationship/533",
"properties" : "http://localhost:7474/db/data/relationship/533/properties",
"type" : "CONTAINS",
"end" : "http://localhost:7474/db/data/node/650",
"metadata" : {
"id" : 533,
"type" : "CONTAINS"
},
"data" : {
"__type__" : "ProductRelationship"
}
}
I can't understand why there is such a long delay between inserting the relationship and specifying the type. Why doesn't it all happen at once?

MongoDB increased db.currentOp() count issue

My site uses MongoDB for its chat application. MongoDB queries are timing out, so I checked db.currentOp(). Below are the currentOp() counts and MongoDB details:
637 active operations
750 inactive operations
Other details about MongoDB:
MongoDB is running with sharding.
I have two databases:
a) The first database has only two collections.
b) The second database has 5 collections.
My questions are: why did the currentOp() count increase suddenly, and what do we have to take care of when the currentOp() count goes up? Please help me on this, and apologies for my English.
Below is a sample of my currentOp() output:
MongoDB shell version: 1.8.2
> db.currentOp()
{
"inprog" : [
{
"opid" : "msdata1:234234234",
"active" : true,
"lockType" : "read",
"waitingForLock" : false,
"secs_running" : 43534,
"op" : "getmore",
"ns" : "local.oplog.rs",
"query" : {
},
"client_s" : "70.52.078.123:12345",
"desc" : "conn"
},
{
"opid" : "msdata1:2342323423",
"active" : true,
"lockType" : "read",
"waitingForLock" : false,
"secs_running" : 231231,
"op" : "query",
"ns" : "ichat.chatmemberlist",
"query" : {
"count" : "chatmemberlist",
"query" : {
"Mid" : "23423",
"bmid" : "23423"
}
},
"client_s" : "70.52.078.123:12345",
"desc" : "conn"
},
{
"opid" : "msdata1:2342323423",
"active" : false,
"lockType" : "write",
"waitingForLock" : true,
"op" : "update",
"ns" : "?ichat.useravail",
"query" : {
"Mid" : "23423"
},
"client_s" : "70.512.078234.423:12345",
"desc" : "conn"
},
...
...
...
From the limited amount of info, I can see that your queries are simply running for a very long time: "secs_running" : 231231 means that query has been running for 231,231 seconds (over two and a half days). It's likely that you don't have enough resources available for the type of queries you are running. That could mean you don't have enough memory, or perhaps too many queries are contending for a lock. If you're not on MongoDB 2.0.x yet, you might also want to upgrade, as it has vastly improved locking: http://blog.pythonisito.com/2011/12/mongodbs-write-lock.html
I would advise checking the mongod log file to see which queries are slow, then using explain() to figure out whether you have indexes on the queried fields, and then either adding indexes or seeing whether re-designing your schema would be a better solution.
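
As a concrete starting point, here is a mongo shell sketch using the fields visible in the currentOp() output above (the index choice is an assumption; verify it against your real workload):

use ichat
// See how the slow count query is executed; "cursor" : "BasicCursor" means no index is used.
db.chatmemberlist.find({ "Mid" : "23423", "bmid" : "23423" }).explain()
// Add a compound index covering both fields (ensureIndex is the call on a 1.8/2.0-era shell).
db.chatmemberlist.ensureIndex({ "Mid" : 1, "bmid" : 1 })
// Re-run explain(); it should now report a BtreeCursor and far fewer scanned objects.
db.chatmemberlist.find({ "Mid" : "23423", "bmid" : "23423" }).explain()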
