Read BLOB from Oracle Database in Node.js

I am trying to read a BLOB data type from Oracle using the Node.js npm package oracledb. This is my select statement; the column holds a picture:
select media from test_data
I have tried a few solutions but none of them work.
Here is the response from res.json:
{
  "MEDIA": {
    "_readableState": {
      "objectMode": false,
      "highWaterMark": 16384,
      "buffer": {
        "head": null,
        "tail": null,
        "length": 0
      },
      "length": 0,
      "pipes": [],
      "flowing": null,
      "ended": false,
      "endEmitted": false,
      .....
},
How do I get the hex value of it?

You are seeing an instance of the node-oracledb Lob class, which you can stream from.
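Since a Lob is a Node.js readable stream, one way to read it is with the standard stream events. This is a minimal sketch, not from the original answer, assuming lob is the value in result.rows[0][0] when fetchAsBuffer is not set:
const chunks = [];
lob.on('data', (chunk) => chunks.push(chunk)); // each chunk is a Buffer for a BLOB
lob.on('error', (err) => console.error(err));
lob.on('end', () => {
  const buf = Buffer.concat(chunks); // the whole BLOB as one Buffer
  console.log(buf.toString('hex')); // hex, as the question asks
});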
However, it is often easiest to get a Buffer directly. Do this by setting oracledb.fetchAsBuffer, for example:
oracledb.fetchAsBuffer = [ oracledb.BLOB ];
const result = await connection.execute(`SELECT b FROM mylobs WHERE id = 2`);
if (result.rows.length === 0) {
  console.error("No results");
} else {
  const blob = result.rows[0][0];
  console.log(blob.toString()); // assuming printable characters
}
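To answer the hex part of the question: once the column comes back as a Buffer, Node's built-in encoding argument gives the hex string directly, for example:
const hex = blob.toString('hex'); // hex representation of the BLOB bytes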
Refer to the documentation Simple LOB Queries and PL/SQL OUT Binds.

Related

Facing TLS write failed: -2 issue while writing to aerospike

I have Aerospike 6.0.0 installed on the server, used as a cache, and I am using the Node.js client 5.2.0 to connect to it. The Aerospike nodes are configured to use TLS. The config of the namespace I am using is:
namespace application_cache {
    memory-size 5G
    replication-factor 2
    default-ttl 1h
    nsup-period 1h
    high-water-memory-pct 80
    stop-writes-pct 90
    storage-engine memory
}
The config I am passing while connecting to the DB is:
"config": {
  "captureStackTraces": true,
  "connTimeoutMs": 10000,
  "maxConnsPerNode": 1000,
  "maxCommandsInQueue": 300,
  "maxCommandsInProcess": 100,
  "user": "test",
  "password": "test",
  "policies": {
    "read": {
      "maxRetries": 3,
      "totalTimeout": 0,
      "socketTimeout": 0
    },
    "write": {
      "maxRetries": 3,
      "totalTimeout": 0,
      "socketTimeout": 0
    },
    "info": {
      "timeout": 10000
    }
  },
  "tls": {
    "enable": true,
    "cafile": "./certs/ca.crt",
    "keyfile": "./certs/server.key",
    "certfile": "./certs/server.crt"
  },
  "hosts": [
    {
      "addr": "-----.com",
      "port": 4333,
      "tlsname": "----"
    },
    {
      "addr": "---.com",
      "port": 4333,
      "tlsname": "-----"
    },
    {
      "addr": "----.com",
      "port": 4333,
      "tlsname": "-----"
    }
  ]
}
This is the function I am using to write to a set in this namespace:
async function updateApp(appKey, bins) {
  let result = {};
  try {
    const writePolicy = new Aerospike.WritePolicy({
      maxRetries: 3,
      socketTimeout: 0,
      totalTimeout: 0,
      exists: Aerospike.policy.exists.CREATE_OR_UPDATE,
      key: Aerospike.policy.key.SEND
    });
    const key = new Aerospike.Key(namespace, set, appKey);
    let meta = {
      ttl: 3600
    };
    result = await asc.put(key, bins, meta, writePolicy);
  } catch (err) {
    console.error("Aerospike Error:", JSON.stringify({params: {namespace, set, appKey}, err: {code: err.code, message: err.message}}));
    return err;
  }
  return result;
}
It works most of the time but once in a while I get this error:
Aerospike Error: {"params":{"namespace":"application_cache","set":"apps","packageName":"com.test.application"},"err":{"code":-6,"message":"TLS write failed: -2 34D53CA5C4E0734F 10.57.49.180:4333"}}
Every record in this set has about 12-15 bins, and the size of a record varies between 300 KB and 1.5 MB. The bigger the record, the higher the chance of this error showing up. Has anyone faced this issue before, and what could be causing it?
I answered this on the community forum but figured I'd add it here as well.
Looking at the server error codes, code 6 is AS_ERR_BIN_EXISTS. Your write policy is set up with exists: Aerospike.policy.exists.CREATE_OR_UPDATE, but the options for exists are:
IGNORE: Write the record, regardless of existence. (I.e. create or update.)
CREATE: Create a record, ONLY if it doesn't exist.
UPDATE: Update a record, ONLY if it exists.
REPLACE: Completely replace a record, ONLY if it exists.
CREATE_OR_REPLACE: Completely replace a record if it exists, otherwise create it.
My guess is that it is somehow defaulting to CREATE only, causing it to throw that error. Try updating your write policy to use exists: Aerospike.policy.exists.IGNORE and see if you still get that error.
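For example, the write policy from the question would become (same fields, only the exists value changed):
const writePolicy = new Aerospike.WritePolicy({
  maxRetries: 3,
  socketTimeout: 0,
  totalTimeout: 0,
  exists: Aerospike.policy.exists.IGNORE, // write regardless of existence, i.e. create or update
  key: Aerospike.policy.key.SEND
});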

LLRP for Zebra FX7500 with llrpjs doesn't read tags

Using the llrpjs library for Node.js, we are attempting to read tags from the Zebra FX7500 (Motorola?). This discussion points to the RFID Reader Software Interface Control Guide pages 142-144, but does not indicate potential values to set up the device.
From what we can gather, we should issue a SET_READER_CONFIG with a custom parameter (MotoDefaultSpec = VendorIdentifier: 161, ParameterSubtype: 102, UseDefaultSpecForAutoMode: true). Do we need to include the ROSpec and/or AccessSpec values as well (are they required)? After sending the SET_READER_CONFIG message, do we still need to send the regular LLRP messages (ADD_ROSPEC, ENABLE_ROSPEC, START_ROSPEC)? Without the MotoDefaultSpec, even after sending the regular LLRP messages, sending a GET_REPORT does not retrieve tags, nor does a custom message with MOTO_GET_TAG_EVENT_REPORT. Both trigger an RO_ACCESS_REPORT event message, but the tagReportData is null.
The README file for llrpjs lists "Vendor definitions support" as a TODO item. While that is somewhat vague, is it possible that the library just hasn't implemented custom LLRP extension (messages/parameters) support, which is why none of our attempts are working? The MotoDefaultSpec parameter and MOTO_GET_TAG_EVENT_REPORT are custom to the vendor/chipset. The MOTO_GET_TAG_EVENT_REPORT custom message seems to trigger a RO_ACCESS_REPORT similar to the base LLRP GET_REPORT message, so we assume that part is working.
It is worth noting that Zebra's 123RFID Desktop setup and optimization tool connects and reads tags as expected, so the device and antenna are working (reading tags).
Could these issues be related to the ROSPEC file we are using (see below)?
{
  "$schema": "https://llrpjs.github.io/schema/core/encoding/json/1.0/llrp-1x0.schema.json",
  "id": 1,
  "type": "ADD_ROSPEC",
  "data": {
    "ROSpec": {
      "ROSpecID": 123,
      "Priority": 1,
      "CurrentState": "Disabled",
      "ROBoundarySpec": {
        "ROSpecStartTrigger": {
          "ROSpecStartTriggerType": "Immediate"
        },
        "ROSpecStopTrigger": {
          "ROSpecStopTriggerType": "Null",
          "DurationTriggerValue": 0
        }
      },
      "AISpec": {
        "AntennaIDs": [1, 2, 3, 4],
        "AISpecStopTrigger": {
          "AISpecStopTriggerType": "Null",
          "DurationTrigger": 0
        },
        "InventoryParameterSpec": {
          "InventoryParameterSpecID": 1234,
          "ProtocolID": "EPCGlobalClass1Gen2"
        }
      },
      "ROReportSpec": {
        "ROReportTrigger": "Upon_N_Tags_Or_End_Of_ROSpec",
        "N": 1,
        "TagReportContentSelector": {
          "EnableROSpecID": true,
          "EnableAntennaID": true,
          "EnableFirstSeenTimestamp": true,
          "EnableLastSeenTimestamp": true,
          "EnableSpecIndex": false,
          "EnableInventoryParameterSpecID": false,
          "EnableChannelIndex": false,
          "EnablePeakRSSI": false,
          "EnableTagSeenCount": true,
          "EnableAccessSpecID": false
        }
      }
    }
  }
}
For anyone having a similar issue: we found that attempting to configure more antennas than the Zebra device has connected caused the entire spec to fail. In our case, we had two antennas connected, so including antennas 3 and 4 in the spec was causing the problem.
See below for the working ROSPEC. Removing the extra antennas from the data.AISpec.AntennaIDs property allowed our application to connect and read tags.
We are still having some issues with llrpjs when trying to STOP_ROSPEC because it sends an RO_ACCESS_REPORT response without a resName value. See the issue on GitHub for more information.
That said, our application works without sending the STOP_ROSPEC command.
{
  "$schema": "https://llrpjs.github.io/schema/core/encoding/json/1.0/llrp-1x0.schema.json",
  "id": 1,
  "type": "ADD_ROSPEC",
  "data": {
    "ROSpec": {
      "ROSpecID": 123,
      "Priority": 1,
      "CurrentState": "Disabled",
      "ROBoundarySpec": {
        "ROSpecStartTrigger": {
          "ROSpecStartTriggerType": "Null"
        },
        "ROSpecStopTrigger": {
          "ROSpecStopTriggerType": "Null",
          "DurationTriggerValue": 0
        }
      },
      "AISpec": {
        "AntennaIDs": [1, 2],
        "AISpecStopTrigger": {
          "AISpecStopTriggerType": "Null",
          "DurationTrigger": 0
        },
        "InventoryParameterSpec": {
          "InventoryParameterSpecID": 1234,
          "ProtocolID": "EPCGlobalClass1Gen2",
          "AntennaConfiguration": {
            "AntennaID": 1,
            "RFReceiver": {
              "ReceiverSensitivity": 0
            },
            "RFTransmitter": {
              "HopTableID": 1,
              "ChannelIndex": 1,
              "TransmitPower": 170
            },
            "C1G2InventoryCommand": {
              "TagInventoryStateAware": false,
              "C1G2RFControl": {
                "ModeIndex": 23,
                "Tari": 0
              },
              "C1G2SingulationControl": {
                "Session": 1,
                "TagPopulation": 32,
                "TagTransitTime": 0,
                "C1G2TagInventoryStateAwareSingulationAction": {
                  "I": "State_A",
                  "S": "SL"
                }
              }
            }
          }
        }
      },
      "ROReportSpec": {
        "ROReportTrigger": "Upon_N_Tags_Or_End_Of_AISpec",
        "N": 1,
        "TagReportContentSelector": {
          "EnableROSpecID": true,
          "EnableAntennaID": true,
          "EnableFirstSeenTimestamp": true,
          "EnableLastSeenTimestamp": true,
          "EnableTagSeenCount": true,
          "EnableSpecIndex": false,
          "EnableInventoryParameterSpecID": false,
          "EnableChannelIndex": false,
          "EnablePeakRSSI": false,
          "EnableAccessSpecID": false
        }
      }
    }
  }
}

How to iterate/loop through next pages in an API request in PowerQuery/PowerBI?

The API I am working with returns data like below (tried from Postman):
Example call 1:
https://api.aaaaaa.com/api/v1/survey?step=1&limit=1000
{
  "_links": {
    "base": "http://api.aaaaaa.com",
    "self": "/api/v1/survey?step=1&limit=1000",
    "next": "/api/v1/survey?step=2&limit=1000"
  },
  "start": 1,
  "limit": 1000,
  "size": 1000,
  "total": 3158,
  "results": [
    {....//data
    }
  ]
}
Example call 2:
https://api.aaaaaa.com/api/v1/survey?step=2&limit=1000
{
  "_links": {
    "base": "http://api.aaaaaa.com",
    "self": "/api/v1/survey?step=2&limit=1000",
    "prev": "/api/v1/survey?step=1&limit=1000",
    "next": "/api/v1/survey?step=3&limit=1000"
  },
  "start": 1001,
  "limit": 1000,
  "size": 1000,
  "total": 3158,
  "results": [
    {....//data
    }
  ]
}
In Power BI/Power Query I am trying to iterate over the next links, appending each to the base API URL to fetch the next page of records.
However, I am only getting one page in the list, i.e. GeneratedList{0}, whereas the Postman responses show there are 3 pages (step=1,2,3), so I was expecting 3 values in the list, one per page. Here's the code:
let
    BaseUrl = "https://api.aaaaaa.com",
    SurveyUrl = Text.Combine(BaseUrl, "/api/v1/survey?step=1&limit=1000"),
    Options = [ApiKeyName = "TOKEN"],
    url = Web.Contents(SurveyUrl, [Headers = [TOKEN = "zzzzzzzzzzzzzzz"]]),
    FnGetOnePage =
        (url) as record =>
            let
                Source = Json.Document(url),
                data = try Source[results] otherwise null,
                next = try Source[_links][next] otherwise null,
                res = [Data = data, Next = Text.Combine(SurveyUrl, next)]
            in
                res,
    GeneratedList =
        List.Generate(
            () => [i = 0, res = try FnGetOnePage(url) otherwise null],
            each [res][Data] <> null,
            each [i = [i] + 1, res = try FnGetOnePage([res][Next]) otherwise null],
            each [res][Data])
in
    GeneratedList
Please suggest where I am getting it wrong in the above steps.
Another approach I could think of is to directly loop/iterate over the step number value in the url, i.e.
/api/v1/survey?step=<loop_over_this_number>&limit=1000
but I haven't tried that yet. Any suggestions are most welcome.
Reference:- https://datachant.com/2016/06/27/cursor-based-pagination-power-query/
Thanks!
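One likely culprit: Text.Combine expects a list as its first argument, so Text.Combine(SurveyUrl, next) raises an error that the surrounding try ... otherwise null then swallows, which would stop the generator after the first page. A minimal corrected sketch (assuming the same TOKEN header and endpoint; untested against this API):
let
    BaseUrl = "https://api.aaaaaa.com",
    Token = "zzzzzzzzzzzzzzz",
    // Fetch one page given a relative path; return its rows and the next path
    FnGetOnePage = (relPath as text) as record =>
        let
            Source = Json.Document(
                Web.Contents(BaseUrl, [
                    RelativePath = relPath,
                    Headers = [TOKEN = Token]
                ])),
            data = try Source[results] otherwise null,
            next = try Source[_links][next] otherwise null
        in
            [Data = data, Next = next],
    GeneratedList =
        List.Generate(
            () => try FnGetOnePage("/api/v1/survey?step=1&limit=1000") otherwise null,
            each _ <> null and [Data] <> null,
            each if [Next] <> null then (try FnGetOnePage([Next]) otherwise null) else null,
            each [Data]),
    // Flatten the per-page lists into a single list of records
    AllRows = List.Combine(GeneratedList)
in
    AllRows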

Replacing a value for JSON object in Node.js

I tried replacing the value for the key "Information" in the JSON object below using the code
itsmdata.incidentParamsJSON.IncidentContainerJson.replace("Information", option);
but I am getting an error that the object is not defined (see attachment).
{
  "ServiceName": "IM_LogOrUpdateIncident",
  "objCommonParameters": {
    "_ProxyDetails": {
      "ProxyID": 0,
      "ReturnType": "JSON",
      "OrgID": 1,
      "TokenID": null
    },
    "incidentParamsJSON": {
      "IncidentContainerJson": "{\"SelectedAssets\":null,\"Ticket\":{\"Caller_EmailID\":null,\"Closure_Code_Name\":null,\"Description_Name\":\"Account Unlock\",\"Instance\":null},\"TicketInformation\":{\"Information\":\"account locked out\"},\"CustomFields\":null}"
    },
    "RequestType": "RemoteCall"
  }
}
If you are trying to update properties of itsmdata, note that incidentParamsJSON sits under objCommonParameters, so you can try this:
let str = itsmdata.objCommonParameters.incidentParamsJSON.IncidentContainerJson;
itsmdata.objCommonParameters.incidentParamsJSON.IncidentContainerJson = str.replace(/Information/gi, option);
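Be aware that the regex replace rewrites every occurrence of the word, including the "TicketInformation" key itself, not just the value. If the intent is to change only the value, a safer sketch (assuming the structure shown above) is to parse the embedded JSON string, update it, and serialize it back:
// IncidentContainerJson is itself a JSON string, so parse it first
const container = JSON.parse(itsmdata.objCommonParameters.incidentParamsJSON.IncidentContainerJson);
container.TicketInformation.Information = option; // replace only the value
itsmdata.objCommonParameters.incidentParamsJSON.IncidentContainerJson = JSON.stringify(container);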

Fetching host availability to external webpage in Nagios

Is there any possible way to fetch the live availability of a host/hostgroup from the Nagios monitoring tool (where hosts/hostgroups are already configured) that can be redirected/captured to an external webpage?
Are there any exposed APIs to do that? I couldn't find a way.
Nagios is on a Linux host.
Any help or info is appreciated.
EDIT1:
I have a hostgroup, for example 'All_prod', containing around 20 Linux hosts. For all the hosts there are metrics/checks defined (e.g. availability, CPU load, free memory, etc.). I want a report of only the availability metric for all the hosts (for example: if in 24 hours availability was down for 10 minutes, it should report that it was down for 10 minutes in those 24 hours, or give me any related info that I can evaluate).
It would be great if there are APIs to fetch that information, returning the data as JSON/XML.
You can use the Nagios JSON API; there is a query builder at http://NAGIOSURL/jsonquery.html.
But, to answer your specific question, the queries for hosts would look like this:
http://NAGIOSURL/cgi-bin/statusjson.cgi?query=host&hostname=localhost
Which will output something similar to the following:
{
  "format_version": 0,
  "result": {
    "query_time": 1497384499000,
    "cgi": "statusjson.cgi",
    "user": "nagiosadmin",
    "query": "host",
    "query_status": "released",
    "program_start": 1497368240000,
    "last_data_update": 1497384489000,
    "type_code": 0,
    "type_text": "Success",
    "message": ""
  },
  "data": {
    "host": {
      "name": "localhost",
      "plugin_output": "egsdda",
      "long_plugin_output": "",
      "perf_data": "",
      "status": 8,
      "last_update": 1497384489000,
      "has_been_checked": true,
      "should_be_scheduled": false,
      "current_attempt": 10,
      "max_attempts": 10,
      "last_check": 1496158536000,
      "next_check": 0,
      "check_options": 0,
      "check_type": 1,
      "last_state_change": 1496158536000,
      "last_hard_state_change": 1496158536000,
      "last_hard_state": 1,
      "last_time_up": 1496158009000,
      "last_time_down": 1496158536000,
      "last_time_unreachable": 1480459504000,
      "state_type": 1,
      "last_notification": 1496158536000,
      "next_notification": 1496165736000,
      "no_more_notifications": false,
      "notifications_enabled": true,
      "problem_has_been_acknowledged": false,
      "acknowledgement_type": 0,
      "current_notification_number": 2,
      "accept_passive_checks": true,
      "event_handler_enabled": true,
      "checks_enabled": false,
      "flap_detection_enabled": true,
      "is_flapping": false,
      "percent_state_change": 0,
      "latency": 0.49,
      "execution_time": 0,
      "scheduled_downtime_depth": 0,
      "process_performance_data": true,
      "obsess": true
    }
  }
}
And for hostgroups:
http://NAGIOSURL/nagios/cgi-bin/statusjson.cgi?query=hostlist&hostgroup=linux-servers
Which will output something similar to the following:
{
  "format_version": 0,
  "result": {
    "query_time": 1497384613000,
    "cgi": "statusjson.cgi",
    "user": "nagiosadmin",
    "query": "hostlist",
    "query_status": "released",
    "program_start": 1497368240000,
    "last_data_update": 1497384609000,
    "type_code": 0,
    "type_text": "Success",
    "message": ""
  },
  "data": {
    "selectors": {
      "hostgroup": "linux-servers"
    },
    "hostlist": {
      "localhost": 8
    }
  }
}
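Since the goal is to surface this on an external webpage, here is a minimal Node.js sketch of pulling a host's status from the JSON API (an illustration only; it assumes Node 18+ for the built-in fetch, basic-auth access to the Nagios CGIs, and placeholder credentials):
async function getHostStatus(hostname) {
  const url = "http://NAGIOSURL/cgi-bin/statusjson.cgi?query=host&hostname=" + encodeURIComponent(hostname);
  const res = await fetch(url, {
    headers: {
      // Nagios CGIs are typically behind basic auth
      Authorization: "Basic " + Buffer.from("nagiosadmin:password").toString("base64")
    }
  });
  const body = await res.json();
  // Host state codes in statusjson.cgi: 1 = PENDING, 2 = UP, 4 = DOWN, 8 = UNREACHABLE
  return body.data.host.status;
}

getHostStatus("localhost").then((status) => console.log("host status:", status));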
Hope this helps!
EDIT 1 (To correspond with the question's EDIT 1):
What you're asking for isn't built in by default. You can use the above methods to grab the data for each host (but it sounds like you want it for each service), so again we will use the JSON API found at http://YOURNAGIOSURL/jsonquery.html to grab service data:
http://YOURNAGIOSURL/nagios/cgi-bin/statusjson.cgi?query=service&hostname=localhost&servicedescription=Current+Load
We'll get the following output (something similar, anyway):
{
  "format_version": 0,
  "result": {
    "query_time": 1497875258000,
    "cgi": "statusjson.cgi",
    "user": "nagiosadmin",
    "query": "service",
    "query_status": "released",
    "program_start": 1497800686000,
    "last_data_update": 1497875255000,
    "type_code": 0,
    "type_text": "Success",
    "message": ""
  },
  "data": {
    "service": {
      "host_name": "localhost",
      "description": "Current Load",
      "plugin_output": "OK - load average: 0.00, 0.00, 0.00",
      "long_plugin_output": "",
      "perf_data": "load1=0.000;5.000;10.000;0; load5=0.000;4.000;6.000;0; load15=0.000;3.000;4.000;0;",
      "max_attempts": 4,
      "current_attempt": 1,
      "status": 2,
      "last_update": 1497875255000,
      "has_been_checked": true,
      "should_be_scheduled": true,
      "last_check": 1497875014000,
      "check_options": 0,
      "check_type": 0,
      "checks_enabled": true,
      "last_state_change": 1497019191000,
      "last_hard_state_change": 1497019191000,
      "last_hard_state": 0,
      "last_time_ok": 1497875014000,
      "last_time_warning": 1497019191000,
      "last_time_unknown": 0,
      "last_time_critical": 1497018891000,
      "state_type": 1,
      "last_notification": 0,
      "next_notification": 0,
      "next_check": 1497875314000,
      "no_more_notifications": false,
      "notifications_enabled": true,
      "problem_has_been_acknowledged": false,
      "acknowledgement_type": 0,
      "current_notification_number": 0,
      "accept_passive_checks": true,
      "event_handler_enabled": true,
      "flap_detection_enabled": true,
      "is_flapping": false,
      "percent_state_change": 0,
      "latency": 0,
      "execution_time": 0,
      "scheduled_downtime_depth": 0,
      "process_performance_data": true,
      "obsess": true
    }
  }
}
The most important line for what you're trying to do (as far as I understand it) is the perf_data line:
"perf_data": "load1=0.000;5.000;10.000;0; load5=0.000;4.000;6.000;0; load15=0.000;3.000;4.000;0;",
This is the data you'd use to generate whatever custom metrics report you're trying to generate.
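If it helps, parsing that perf_data string in Node.js is straightforward; a sketch (assuming the standard Nagios plugin perfdata format, label=value;warn;crit;min;max):
// Parse "load1=0.000;5.000;10.000;0; ..." into { load1: 0, load5: 0, load15: 0 }
function parsePerfData(perfData) {
  const metrics = {};
  for (const entry of perfData.trim().split(/\s+/)) {
    const [label, values] = entry.split("=");
    if (!values) continue; // skip stray fragments
    metrics[label] = parseFloat(values.split(";")[0]); // keep the value, drop the thresholds
  }
  return metrics;
}

console.log(parsePerfData("load1=0.000;5.000;10.000;0; load5=0.000;4.000;6.000;0; load15=0.000;3.000;4.000;0;"));
// => { load1: 0, load5: 0, load15: 0 }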
Keep in mind this is something that is sort of built in to Nagios XI (though not in an exportable format like you're requesting), but the metrics component does allow you to easily drill down and take a look at some metric-specific data.
Hope this helps!
