Can't find reference in MongoDB documentation - Node.js

I have been struggling with the MongoDB documentation for some time now. I want to know what each method returns and what each field in the result means, but that is not in their documentation.
For example, I want to know what db.collection.deleteMany() returns. The documentation does not tell you what it returns when it succeeds.
To work around this, I figured I could find out myself by calling db.collection.deleteMany() and logging its raw result, which gave me:
{
"result": {
"n": 32,
"ok": 1
},
"connection": {
"_events": {},
"_eventsCount": 4,
"id": 1,
"address": "127.0.0.1:27017",
"bson": {},
"socketTimeout": 360000,
"monitorCommands": false,
"closed": false,
"destroyed": false,
"lastIsMasterMS": 2
},
"deletedCount": 32,
"n": 32,
"ok": 1
}
But then I realized I don't know what those fields mean.
What is result.n? What is result.ok? What is connection._events?
Am I doing this wrong? How can I find those definitions? Please help, thanks.

I just had to go to the API reference for the particular driver I was using, not the generic method references.
For Node.js, here is the documentation for the API reference.
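For what it's worth, here is a minimal sketch of inspecting that result with the Node.js driver (the connection string, database, collection, and filter are placeholders, not from the original post). In the raw server reply, n is the number of documents the command affected and ok is 1 when the command succeeded; deletedCount is the documented convenience field:

// Minimal sketch: inspect the result of deleteMany() with the Node.js driver.
// The URI, database, collection, and filter below are placeholders.
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient('mongodb://127.0.0.1:27017');
  await client.connect();
  const items = client.db('test').collection('items');

  const result = await items.deleteMany({ status: 'stale' });
  // deletedCount is the documented field; the raw reply also carries
  // "n" (documents affected) and "ok" (1 on success).
  console.log(result.deletedCount);

  await client.close();
}

main().catch(console.error);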

Related

ArangoDB search view is not consistent with collection

I am using ArangoDB version 3.9.3 and I am experiencing an inconsistency between an ArangoDB collection and the corresponding ArangoSearch view on top of it.
The number of documents found by querying the collection is nearly half of the number of documents found by querying the view.
I am using the following query
RETURN COUNT(FOR n IN collection RETURN n)
which returns 4353.
And the query on the view
RETURN COUNT(FOR n IN view SEARCH true RETURN n)
returns 7303.
Because of this inconsistency, the LIMIT operation is not working as expected and queries are also returning extra results. Moreover, it seems that it is returning old documents as well.
I also tested it on older ArangoDB versions: it happens on 3.7.18, 3.8.7, and the latest 3.9.3. I am using a single-node instance.
My workflow is something like this:
I just delete and recreate a document (changing a few of its attributes) multiple times (say 1000 times) in a loop, which leads to inconsistency in the view.
This only seems to happen when I have primarySortOrder/storedValues defined.
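For context, here is a sketch of that delete/recreate loop using the arangojs driver (the database URL, document key, and attribute values are illustrative, not taken from the original post):

// Sketch of the repro loop, assuming the arangojs driver.
// URL, key, and attribute values are illustrative.
const { Database } = require('arangojs');

async function churn() {
  const db = new Database({ url: 'http://127.0.0.1:8529' });
  const nodes = db.collection('FilterNodes');

  for (let i = 0; i < 1000; i++) {
    // Delete the document if it exists, then recreate it with changed attributes.
    try { await nodes.remove('doc1'); } catch (e) { /* ignore a missing document */ }
    await nodes.save({ _key: 'doc1', name: `name-${i}`, city: 'Berlin' });
  }
}

churn().catch(console.error);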
I create the view on app startup; it looks like this:
{
"writebufferSizeMax": 33554432,
"writebufferIdle": 64,
"cleanupIntervalStep": 2,
"commitIntervalMsec": 250,
"consolidationIntervalMsec": 500,
"consolidationPolicy": {
"type": "tier",
"segmentsBytesFloor": 2097152,
"segmentsBytesMax": 5368709120,
"segmentsMax": 10,
"segmentsMin": 1,
"minScore": 0
},
"primarySortCompression": "none",
"writebufferActive": 0,
"links": {
"FilterNodes": {
"analyzers": [
"identity"
],
"fields": {
"name": {},
"city": {}
},
"includeAllFields": false,
"storeValues": "none",
"trackListPositions": false
}
},
"globallyUniqueId": "h9943DEC4CDFB/231",
"id": "231",
"storedValues": [],
"primarySort": [
{
"field": "name",
"asc": true
},
{
"field": "city",
"asc": true
}
],
"type": "arangosearch"
}
I am unsure whether I am doing something wrong or whether this is a longstanding bug; either way, it seems like a pretty severe inconsistency.
Has anyone else encountered this? Can anyone help?
Thanks

LLRP for Zebra FX7500 with llrpjs doesn't read tags

Using the llrpjs library for Node.js, we are attempting to read tags from the Zebra FX7500 (Motorola?). This discussion points to the RFID Reader Software Interface Control Guide pages 142-144, but does not indicate potential values to set up the device.
From what we can gather, we should issue a SET_READER_CONFIG with a custom parameter (MotoDefaultSpec = VendorIdentifier: 161, ParameterSubtype: 102, UseDefaultSpecForAutoMode: true). Do we need to include the ROSpec and/or AccessSpec values as well (are they required)? After sending the SET_READER_CONFIG message, do we still need to send the regular LLRP messages (ADD_ROSPEC, ENABLE_ROSPEC, START_ROSPEC)? Without the MotoDefaultSpec, even after sending the regular LLRP messages, sending a GET_REPORT does not retrieve tags nor does a custom message with MOTO_GET_TAG_EVENT_REPORT. They both trigger a RO_ACCESS_REPORT event message, but the tagReportData is null.
The README file for llrpjs lists "Vendor definitions support" as a TODO item. While that is somewhat vague, is it possible that the library just hasn't implemented custom LLRP extension (messages/parameters) support, which is why none of our attempts are working? The MotoDefaultSpec parameter and MOTO_GET_TAG_EVENT_REPORT are custom to the vendor/chipset. The MOTO_GET_TAG_EVENT_REPORT custom message seems to trigger a RO_ACCESS_REPORT similar to the base LLRP GET_REPORT message, so we assume that part is working.
It is worth noting that Zebra's 123RFID Desktop setup and optimization tool connects and reads tags as expected, so the device and antenna are working (reading tags).
Could these issues be related to the ROSPEC file we are using (see below)?
{
"$schema": "https://llrpjs.github.io/schema/core/encoding/json/1.0/llrp-1x0.schema.json",
"id": 1,
"type": "ADD_ROSPEC",
"data": {
"ROSpec": {
"ROSpecID": 123,
"Priority": 1,
"CurrentState": "Disabled",
"ROBoundarySpec": {
"ROSpecStartTrigger": {
"ROSpecStartTriggerType": "Immediate"
},
"ROSpecStopTrigger": {
"ROSpecStopTriggerType": "Null",
"DurationTriggerValue": 0
}
},
"AISpec": {
"AntennaIDs": [1, 2, 3, 4],
"AISpecStopTrigger": {
"AISpecStopTriggerType": "Null",
"DurationTrigger": 0
},
"InventoryParameterSpec": {
"InventoryParameterSpecID": 1234,
"ProtocolID": "EPCGlobalClass1Gen2"
}
},
"ROReportSpec": {
"ROReportTrigger": "Upon_N_Tags_Or_End_Of_ROSpec",
"N": 1,
"TagReportContentSelector": {
"EnableROSpecID": true,
"EnableAntennaID": true,
"EnableFirstSeenTimestamp": true,
"EnableLastSeenTimestamp": true,
"EnableSpecIndex": false,
"EnableInventoryParameterSpecID": false,
"EnableChannelIndex": false,
"EnablePeakRSSI": false,
"EnableTagSeenCount": true,
"EnableAccessSpecID": false
}
}
}
}
}
For anyone having a similar issue, we found that attempting to configure more antennas than the Zebra device has connected caused the entire spec to fail. In our case, we had two antennas connected, so including antennas 3 and 4 in the spec was causing the problem.
See below for the working ROSPEC. Removing the extra antennas from the data.AISpec.AntennaIDs property allowed our application to connect and read tags.
We are still having some issues with llrpjs when trying to STOP_ROSPEC because it sends an RO_ACCESS_REPORT response without a resName value. See the issue on GitHub for more information.
That said, our application works without sending the STOP_ROSPEC command.
{
"$schema": "https://llrpjs.github.io/schema/core/encoding/json/1.0/llrp-1x0.schema.json",
"id": 1,
"type": "ADD_ROSPEC",
"data": {
"ROSpec": {
"ROSpecID": 123,
"Priority": 1,
"CurrentState": "Disabled",
"ROBoundarySpec": {
"ROSpecStartTrigger": {
"ROSpecStartTriggerType": "Null"
},
"ROSpecStopTrigger": {
"ROSpecStopTriggerType": "Null",
"DurationTriggerValue": 0
}
},
"AISpec": {
"AntennaIDs": [1, 2],
"AISpecStopTrigger": {
"AISpecStopTriggerType": "Null",
"DurationTrigger": 0
},
"InventoryParameterSpec": {
"InventoryParameterSpecID": 1234,
"ProtocolID": "EPCGlobalClass1Gen2",
"AntennaConfiguration": {
"AntennaID": 1,
"RFReceiver": {
"ReceiverSensitivity": 0
},
"RFTransmitter": {
"HopTableID": 1,
"ChannelIndex": 1,
"TransmitPower": 170
},
"C1G2InventoryCommand": {
"TagInventoryStateAware": false,
"C1G2RFControl": {
"ModeIndex": 23,
"Tari": 0
},
"C1G2SingulationControl": {
"Session": 1,
"TagPopulation": 32,
"TagTransitTime": 0,
"C1G2TagInventoryStateAwareSingulationAction": {
"I": "State_A",
"S": "SL"
}
}
}
}
}
},
"ROReportSpec": {
"ROReportTrigger": "Upon_N_Tags_Or_End_Of_AISpec",
"N": 1,
"TagReportContentSelector": {
"EnableROSpecID": true,
"EnableAntennaID": true,
"EnableFirstSeenTimestamp": true,
"EnableLastSeenTimestamp": true,
"EnableTagSeenCount": true,
"EnableSpecIndex": false,
"EnableInventoryParameterSpecID": false,
"EnableChannelIndex": false,
"EnablePeakRSSI": false,
"EnableAccessSpecID": false
}
}
}
}
}

What is a GitLab line_code as referenced when creating a new merge request thread

I'm trying to create a discussion note on a merge request on a certain line of a file with the GitLab API, using this endpoint: https://docs.gitlab.com/ee/api/discussions.html#create-new-merge-request-thread
Part of the payload asks for a line_code. The documentation describes the attribute as:
position[line_range][start][line_code] (string, required): Line code for the start line
When I issue a POST I get a response with:
"message": "400 (Bad request) \"Note {:line_code=>[\"can't be blank\", \"must be a valid line code\"], :position=>[\"is incomplete\"]}\" not given"
What is this line_code? Is it some kind of calculated value? The documentation is rather vague here.
When I issue a GET for all the current notes on a merge_request I can see some notes have this line_code (see below). I'm trying to figure out how to create that value for new notes.
{
"id": 89,
"type": "DiffNote",
"body": "4",
"attachment": null,
"author": {
"id": 6,
"name": "brian c",
"username": "bc",
"state": "active",
"avatar_url": "https://www.gravatar.com/avatar/f590a9cf57136732dd0cb5z9b1563390?s=80&d=identicon",
"web_url": "http://gitlab.mycompany.us/thisIsMe"
},
"created_at": "2021-01-11T21:46:23.861Z",
"updated_at": "2021-01-11T21:46:23.861Z",
"system": false,
"noteable_id": 21,
"noteable_type": "MergeRequest",
"position": {
"base_sha": "3bf8094f0d54fc70a66698bd582f25c77243de3b",
"start_sha": "3bf8094f0d54fc70a66698bd582f25c77243de3b",
"head_sha": "a10e73cf84eae38286df56f4b58fa221d7eefc44",
"old_path": "b.txt",
"new_path": "b.txt",
"position_type": "text",
"old_line": null,
"new_line": 4,
"line_range": {
"start": {
"line_code": "aceba96ffdf13ce4cd4171c0248420cc03108ef0_0_4",
"type": "new",
"old_line": null,
"new_line": 4
},
"end": {
"line_code": "aceba96ffdf13ce4cd4171c0248420cc03108ef0_0_4",
"type": "new",
"old_line": null,
"new_line": 4
}
}
},
"resolvable": true,
"resolved": false,
"resolved_by": null,
"confidential": false,
"noteable_iid": 3,
"commands_changes": {}
}
The line_code is the SHA1 hash of the file path, followed by an underscore, the old line number, another underscore, and the new line number.
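A minimal sketch of computing it, assuming the hash is the SHA1 of the file path (which matches the 40-character hex prefix in the example note above):

// Sketch: derive a GitLab line_code, assuming the format is
// sha1(file_path) + "_" + old_line + "_" + new_line.
const crypto = require('crypto');

function lineCode(filePath, oldLine, newLine) {
  const hash = crypto.createHash('sha1').update(filePath).digest('hex');
  return `${hash}_${oldLine}_${newLine}`;
}

// e.g. a note on new line 4 of b.txt, where the line has no old counterpart:
console.log(lineCode('b.txt', 0, 4));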
The documentation is wrong. line_code is required only if you are using position.line_range, which is itself only required for a diff note spanning multiple lines of the diff. You don't need to deal with line_code for single-line diff notes, so it is not a required parameter. You can just use position.old_line or position.new_line.
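For reference, a minimal sketch of such a single-line note payload (the SHAs and paths are taken from the example note above; the project ID and merge request IID are placeholders):

POST /api/v4/projects/:id/merge_requests/:merge_request_iid/discussions

{
  "body": "Comment on line 4",
  "position": {
    "position_type": "text",
    "base_sha": "3bf8094f0d54fc70a66698bd582f25c77243de3b",
    "start_sha": "3bf8094f0d54fc70a66698bd582f25c77243de3b",
    "head_sha": "a10e73cf84eae38286df56f4b58fa221d7eefc44",
    "old_path": "b.txt",
    "new_path": "b.txt",
    "new_line": 4
  }
}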
That represents the line in the file you want the comment to appear on. For Merge Requests, comments can either be on the code or general discussions (though the API names seem to be backwards).
To add a general discussion comment, you can use the Notes API: https://docs.gitlab.com/ee/api/notes.html#create-new-merge-request-note. This will look like the comment here: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/52673#note_495396729
To add a comment to the changed code in a Merge Request, you can use the Discussions API here: https://docs.gitlab.com/ee/api/discussions.html#create-new-merge-request-thread. This operation has options to set the file path and line number a comment should start on, a range that a comment applies to, etc. This is an example of a comment on the code itself: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/52673/diffs#2eda52c44979de93f257b305ada778372eacba0b_6_5
It's not a bug; I have run into this situation before.
Maybe your old_line does not actually exist.
Just set old_line and old_path to null and it will work normally.

Azure Search Fails to Return Expected Result When No OR Multiple SearchFields Are Defined

I have a fairly basic Azure Search index with several fields of searchable string data, for example [abridged]...
"fields": [
{
"name": "Field1",
"type": "Edm.String",
"facetable": false,
"filterable": true,
"key": true,
"retrievable": true,
"searchable": true,
"sortable": false,
"analyzer": null,
"indexAnalyzer": null,
"searchAnalyzer": null,
"synonymMaps": [],
"fields": []
},
{
"name": "Field2",
"type": "Edm.String",
"facetable": false,
"filterable": true,
"retrievable": true,
"searchable": true,
"sortable": false,
"analyzer": "en.microsoft",
"indexAnalyzer": null,
"searchAnalyzer": null,
"synonymMaps": [],
"fields": []
}
]
Field1 is loaded with alphanumeric id data and Field2 is loaded with English language string data, specifically the name/title of the record. searchMode=all is also being used to ensure the accuracy of the results.
Let's say one of the records indexed has the following Field2 data: BA (Hons) in Business, Organisational Behaviour and Coaching. Putting that into the en.microsoft analyzer, this is the result we get out:
"tokens": [
{
"token": "ba",
"startOffset": 0,
"endOffset": 2,
"position": 0
},
{
"token": "hon",
"startOffset": 4,
"endOffset": 8,
"position": 1
},
{
"token": "hons",
"startOffset": 4,
"endOffset": 8,
"position": 1
},
{
"token": "business",
"startOffset": 13,
"endOffset": 21,
"position": 3
},
{
"token": "organizational",
"startOffset": 23,
"endOffset": 37,
"position": 4
},
{
"token": "organisational",
"startOffset": 23,
"endOffset": 37,
"position": 4
},
{
"token": "behavior",
"startOffset": 38,
"endOffset": 47,
"position": 5
},
{
"token": "behaviour",
"startOffset": 38,
"endOffset": 47,
"position": 5
},
{
"token": "coach",
"startOffset": 52,
"endOffset": 60,
"position": 7
},
{
"token": "coaching",
"startOffset": 52,
"endOffset": 60,
"position": 7
}
]
As you can see, the tokens returned are what you'd expect for such a string. However, when it comes to using that same indexed string value as a search term (sadly a valid use case in this instance), the results returned are not as expected unless you explicitly use searchFields=Field2.
Query 1 (Returns 0 results):
?searchMode=all&search=BA%20(Hons)%20in%20Business%2C%20Organisational%20Behaviour%20and%20Coaching
Query 2 (Returns 0 results):
?searchMode=all&searchFields=Field1,Field2&search=BA%20(Hons)%20in%20Business%2C%20Organisational%20Behaviour%20and%20Coaching
Query 3 (Returns 1 result as expected):
?searchMode=all&searchFields=Field2&search=BA%20(Hons)%20in%20Business%2C%20Organisational%20Behaviour%20and%20Coaching
So why does this only return the expected result with searchFields=Field2, and not with no searchFields defined or with searchFields=Field1,Field2? I would not expect a non-match on Field1 to exclude a result that clearly matches on Field2.
Furthermore, removing the "in" and "and" within the search term seems to correct the issue and return the expected result. For example:
Query 4 (Returns 1 result as expected):
?searchMode=all&search=BA%20(Hons)%20Business%2C%20Organisational%20Behaviour%20Coaching
(It is almost as if one analyzer is tokenizing the indexed data and a completely different analyzer is tokenizing the search term, although that theory doesn't hold up against Query 3, which matches using the exact same indexed data and search term.)
Is anybody able to shed some light on what's going on here? I'm completely out of ideas and can't find anything more in the documentation.
NB. Please bear in mind that I'm looking to understand why Azure Search is behaving in this way, not necessarily wanting a workaround.
The reason you don't get any hits is due to how stopwords are handled when you use searchMode=all. The standard analyzer does not remove stopwords, but the Lucene and Microsoft analyzers for English do. I verified this by creating an index with your property definitions and sample data. If you use the standard analyzer, stopwords are not removed and you will get a match even when using searchMode=all. To get a match when using either the Lucene or Microsoft analyzers with the simple query mode, you would have to use a phrase search.
When you test the en.microsoft analyzer in your example, you only get the response from what the first stage of the analyzer does. It splits your query into tokens. In your case, two of the tokens are also stopwords in English (in, and). Stopword removal is part of lexical analysis, which is done later in stage 2 as explained in the article called Anatomy of a search request. Furthermore, lexical analysis is only applied to "query types that require complete terms", like searchMode=all. See Exceptions to lexical analysis for more examples.
There is a previous post here about this that explains in more detail. See Queries with stopwords and searchMode=all return no results
I know you did not ask for workarounds, but to better understand what goes on it could be useful to list some possible workarounds.
For English analyzers, use phrase search by wrapping the query in quotes: search="BA (Hons) in Business, Organisational Behaviour and Coaching"&searchMode=all
The standard analyzer works the way you expect: search=BA (Hons) in Business, Organisational Behaviour and Coaching&searchMode=all
Disable lexical analysis by defining a custom analyzer.
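As a sketch of that last option, a custom analyzer that keeps Microsoft-style tokenization but omits the stopword filter could be added to the index definition like this (the analyzer name is illustrative, and this particular tokenizer/filter combination is an assumption, not the only valid one):

{
  "name": "en_no_stopwords",
  "@odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
  "tokenizer": "microsoft_language_stemming_tokenizer",
  "tokenFilters": [
    "lowercase"
  ]
}

A field would then use "analyzer": "en_no_stopwords" in place of "en.microsoft".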

Fetching host availability to external webpage in Nagios

Is there any way to fetch the live availability of a host/host group from the Nagios monitoring tool (where hosts/host groups are already configured) so that it can be redirected to or captured by an external web page?
Are there any exposed APIs to do that? I couldn't find a way.
Nagios is on a Linux host.
Any help or info is appreciated.
EDIT1:
I have a host group, say 'All_prod', containing around 20 Linux hosts. For all of these hosts there are some metrics/checks defined (for example availability, CPU load, free memory, etc.). I want a report of just the availability metric for all of the hosts (for example: if availability was down for 10 minutes in a 24-hour window, it should report that it was down for 10 minutes in 24 hours, or give me any related info that I can evaluate).
It would be great if there were an API to fetch that information and return the data as JSON/XML.
You can use the Nagios JSON API; there is a query builder at http://NAGIOSURL/jsonquery.html.
But, to answer your specific question, the queries for hosts would look like this:
http://NAGIOSURL/cgi-bin/statusjson.cgi?query=host&hostname=localhost
Which will output something similar to the following:
{
"format_version": 0,
"result": {
"query_time": 1497384499000,
"cgi": "statusjson.cgi",
"user": "nagiosadmin",
"query": "host",
"query_status": "released",
"program_start": 1497368240000,
"last_data_update": 1497384489000,
"type_code": 0,
"type_text": "Success",
"message": ""
},
"data": {
"host": {
"name": "localhost",
"plugin_output": "egsdda",
"long_plugin_output": "",
"perf_data": "",
"status": 8,
"last_update": 1497384489000,
"has_been_checked": true,
"should_be_scheduled": false,
"current_attempt": 10,
"max_attempts": 10,
"last_check": 1496158536000,
"next_check": 0,
"check_options": 0,
"check_type": 1,
"last_state_change": 1496158536000,
"last_hard_state_change": 1496158536000,
"last_hard_state": 1,
"last_time_up": 1496158009000,
"last_time_down": 1496158536000,
"last_time_unreachable": 1480459504000,
"state_type": 1,
"last_notification": 1496158536000,
"next_notification": 1496165736000,
"no_more_notifications": false,
"notifications_enabled": true,
"problem_has_been_acknowledged": false,
"acknowledgement_type": 0,
"current_notification_number": 2,
"accept_passive_checks": true,
"event_handler_enabled": true,
"checks_enabled": false,
"flap_detection_enabled": true,
"is_flapping": false,
"percent_state_change": 0,
"latency": 0.49,
"execution_time": 0,
"scheduled_downtime_depth": 0,
"process_performance_data": true,
"obsess": true
}
}
}
And for hostgroups:
http://NAGIOSURL/nagios/cgi-bin/statusjson.cgi?query=hostlist&hostgroup=linux-servers
Which will output something similar to the following:
{
"format_version": 0,
"result": {
"query_time": 1497384613000,
"cgi": "statusjson.cgi",
"user": "nagiosadmin",
"query": "hostlist",
"query_status": "released",
"program_start": 1497368240000,
"last_data_update": 1497384609000,
"type_code": 0,
"type_text": "Success",
"message": ""
},
"data": {
"selectors": {
"hostgroup": "linux-servers"
},
"hostlist": {
"localhost": 8
}
}
}
Hope this helps!
EDIT 1 (To correspond with the question's EDIT 1):
What you're asking for isn't built in by default. You can use the above methods to grab the data for each host (but it sounds like you want it for each service), so again we will use the JSON API found at http://YOURNAGIOSURL/jsonquery.html to grab service data:
http://YOURNAGIOSURL/nagios/cgi-bin/statusjson.cgi?query=service&hostname=localhost&servicedescription=Current+Load
We'll get the following output (something similar, anyway):
{
"format_version": 0,
"result": {
"query_time": 1497875258000,
"cgi": "statusjson.cgi",
"user": "nagiosadmin",
"query": "service",
"query_status": "released",
"program_start": 1497800686000,
"last_data_update": 1497875255000,
"type_code": 0,
"type_text": "Success",
"message": ""
},
"data": {
"service": {
"host_name": "localhost",
"description": "Current Load",
"plugin_output": "OK - load average: 0.00, 0.00, 0.00",
"long_plugin_output": "",
"perf_data": "load1=0.000;5.000;10.000;0; load5=0.000;4.000;6.000;0; load15=0.000;3.000;4.000;0;",
"max_attempts": 4,
"current_attempt": 1,
"status": 2,
"last_update": 1497875255000,
"has_been_checked": true,
"should_be_scheduled": true,
"last_check": 1497875014000,
"check_options": 0,
"check_type": 0,
"checks_enabled": true,
"last_state_change": 1497019191000,
"last_hard_state_change": 1497019191000,
"last_hard_state": 0,
"last_time_ok": 1497875014000,
"last_time_warning": 1497019191000,
"last_time_unknown": 0,
"last_time_critical": 1497018891000,
"state_type": 1,
"last_notification": 0,
"next_notification": 0,
"next_check": 1497875314000,
"no_more_notifications": false,
"notifications_enabled": true,
"problem_has_been_acknowledged": false,
"acknowledgement_type": 0,
"current_notification_number": 0,
"accept_passive_checks": true,
"event_handler_enabled": true,
"flap_detection_enabled": true,
"is_flapping": false,
"percent_state_change": 0,
"latency": 0,
"execution_time": 0,
"scheduled_downtime_depth": 0,
"process_performance_data": true,
"obsess": true
}
}
}
The most important line for what you're trying to do (as far as I understand it) is the perf_data line:
"perf_data": "load1=0.000;5.000;10.000;0; load5=0.000;4.000;6.000;0; load15=0.000;3.000;4.000;0;",
This is the data you'd use to generate whatever custom metrics report you're trying to build.
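As a sketch, fetching that service status and pulling numbers out of perf_data could look like this in Node.js 18+ (the URL, credentials, host name, and service description are placeholders):

// Sketch: fetch a Nagios service status and parse its perf_data string.
// The URL, credentials, host name, and service description are placeholders.
const url = 'http://NAGIOSURL/nagios/cgi-bin/statusjson.cgi'
  + '?query=service&hostname=localhost&servicedescription=Current+Load';

async function main() {
  const auth = Buffer.from('nagiosadmin:password').toString('base64');
  const res = await fetch(url, { headers: { Authorization: `Basic ${auth}` } });
  const body = await res.json();

  // perf_data entries look like "label=value;warn;crit;min[;max]",
  // separated by spaces; keep just label -> current value here.
  const metrics = {};
  for (const entry of body.data.service.perf_data.trim().split(/\s+/)) {
    const [label, values] = entry.split('=');
    if (values) metrics[label] = parseFloat(values.split(';')[0]);
  }
  console.log(metrics); // e.g. { load1: 0, load5: 0, load15: 0 }
}

main().catch(console.error);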
Keep in mind this is sort of built in to Nagios XI (though not in an exportable format like you're requesting), and the metrics component does allow you to easily drill down and look at some metric-specific data.
Hope this helps!
