I have Aerospike 6.0.0 installed on the server, where it is used as a cache, and I am using Node.js client 5.2.0 to connect to the database. The Aerospike nodes are configured to use TLS. The config of the namespace I am using is:
namespace application_cache {
memory-size 5G
replication-factor 2
default-ttl 1h
nsup-period 1h
high-water-memory-pct 80
stop-writes-pct 90
storage-engine memory
}
The config I am passing when connecting to the DB is:
"config": {
"captureStackTraces":true,
"connTimeoutMs": 10000,
"maxConnsPerNode": 1000,
"maxCommandsInQueue" : 300,
"maxCommandsInProcess" : 100,
"user": "test",
"password": "test",
"policies": {
"read": {
"maxRetries": 3,
"totalTimeout": 0,
"socketTimeout": 0
},
"write": {
"maxRetries": 3,
"totalTimeout": 0,
"socketTimeout": 0
},
"info": {
"timeout": 10000
}
},
"tls": {
"enable": true,
"cafile": "./certs/ca.crt",
"keyfile": "./certs/server.key",
"certfile": "./certs/server.crt"
},
"hosts": [
{
"addr": "-----.com",
"port": 4333,
"tlsname": "----"
},
{
"addr": "---.com",
"port": 4333,
"tlsname": "-----"
},
{
"addr": "----.com",
"port": 4333,
"tlsname": "-----"
}
]
}
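For context, a minimal sketch of how a config object like this is passed to the client (asc here is the client handle used in the function below):
const Aerospike = require('aerospike');

// Aerospike.connect returns a promise that resolves to a client
// once the cluster is reachable.
const asc = await Aerospike.connect(config);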
This is the function I am using to write to a set in this namespace:
async function updateApp(appKey, bins) {
let result = {};
try {
const writePolicy = new Aerospike.WritePolicy({
maxRetries: 3,
socketTimeout: 0,
totalTimeout: 0,
exists: Aerospike.policy.exists.CREATE_OR_UPDATE,
key: Aerospike.policy.key.SEND
});
const key = new Aerospike.Key(namespace, set, appKey);
let meta = {
ttl: 3600
};
result = await asc.put(key, bins, meta, writePolicy);
} catch (err) {
console.error("Aerospike Error:", JSON.stringify({params: {namespace, set, appKey}, err: {code: err.code, message: err.message}}));
return err;
}
return result;
}
It works most of the time but once in a while I get this error:
Aerospike Error: {"params":{"namespace":"application_cache","set":"apps","packageName":"com.test.application"},"err":{"code":-6,"message":"TLS write failed: -2 34D53CA5C4E0734F 10.57.49.180:4333"}}
Every record in this set has about 12-15 bins, and the size of a record varies between 300 KB and 1.5 MB. The bigger the record, the higher the chance of this error showing up. Has anyone faced this issue before, and what could be causing it?
I answered this on the community forum but figured I'd add it here as well.
Looking at the server error codes, code 6 is AS_ERR_BIN_EXISTS. Your write policy is set up with exists: Aerospike.policy.exists.CREATE_OR_UPDATE, but the options for exists are:
IGNORE: Write the record, regardless of existence. (I.e. create or update.)
CREATE: Create a record, ONLY if it doesn't exist.
UPDATE: Update a record, ONLY if it exists.
REPLACE: Completely replace a record, ONLY if it exists.
CREATE_OR_REPLACE: Completely replace a record if it exists, otherwise create it.
My guess is that it's somehow defaulting to CREATE only, causing it to throw that error. Try updating your write policy to use exists: Aerospike.policy.exists.IGNORE and see if you still get that error.
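For reference, a minimal sketch of the adjusted policy, reusing the asc client, namespace, set, appKey, and bins from the question:
const writePolicy = new Aerospike.WritePolicy({
  maxRetries: 3,
  socketTimeout: 0,
  totalTimeout: 0,
  exists: Aerospike.policy.exists.IGNORE, // valid option: write regardless of existence
  key: Aerospike.policy.key.SEND
});

const key = new Aerospike.Key(namespace, set, appKey);
await asc.put(key, bins, { ttl: 3600 }, writePolicy);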
Related
I am trying to update a batch of jobs to use some instance pools with the Databricks API, but when I use the update endpoint, the job just does not update. It says it executed without errors, but when I check the job, it was not updated.
What am I doing wrong?
What I did to update the job:
I used the get endpoint with the job_id to fetch my job settings.
I updated the resulting data with the values I needed (shown below) and executed the call to update the job.
'custom_tags': {'ResourceClass': 'Serverless'},
'driver_instance_pool_id': 'my-pool-id',
'driver_node_type_id': None,
'instance_pool_id': 'my-other-pool-id',
'node_type_id': None
I used this documentation: https://docs.databricks.com/dev-tools/api/latest/jobs.html#operation/JobsUpdate
Here is my payload:
{
"created_time": 1672165913242,
"creator_user_name": "email#email.com",
"job_id": 123123123123,
"run_as_owner": true,
"run_as_user_name": "email#email.com",
"settings": {
"email_notifications": {
"no_alert_for_skipped_runs": false,
"on_failure": [
"email1#email.com",
"email2#email.com"
]
},
"format": "MULTI_TASK",
"job_clusters": [
{
"job_cluster_key": "the_cluster_key",
"new_cluster": {
"autoscale": {
"max_workers": 4,
"min_workers": 2
},
"aws_attributes": {
"availability": "SPOT_WITH_FALLBACK",
"ebs_volume_count": 0,
"first_on_demand": 1,
"instance_profile_arn": "arn:aws:iam::XXXXXXXXXX:instance-profile/instance-profile",
"spot_bid_price_percent": 100,
"zone_id": "us-east-1a"
},
"cluster_log_conf": {
"s3": {
"canned_acl": "bucket-owner-full-control",
"destination": "s3://some-bucket/log/log_123123123/",
"enable_encryption": true,
"region": "us-east-1"
}
},
"cluster_name": "",
"custom_tags": {
"ResourceClass": "Serverless"
},
"data_security_mode": "SINGLE_USER",
"driver_instance_pool_id": "my-driver-pool-id",
"enable_elastic_disk": true,
"instance_pool_id": "my-worker-pool-id",
"runtime_engine": "PHOTON",
"spark_conf": {...},
"spark_env_vars": {...},
"spark_version": "..."
}
}
],
"max_concurrent_runs": 1,
"name": "my_job",
"schedule": {...},
"tags": {...},
"tasks": [{...},{...},{...}],
"timeout_seconds": 79200,
"webhook_notifications": {}
}
}
I tried the update endpoint and read through the docs, but I found nothing related to the issue.
I finally got it.
I was using the partial update and found that it does not work for the whole job payload.
So I changed the endpoint to use the full update (reset) instead, and it worked.
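For illustration, a rough sketch of the full-update call using Node.js fetch; the instance hostname and token are placeholders, and new_settings must carry the complete settings object, not a partial diff:
// POST to the Jobs API reset endpoint to replace the job's settings wholesale.
const resp = await fetch('https://<databricks-instance>/api/2.1/jobs/reset', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.DATABRICKS_TOKEN}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    job_id: 123123123123,
    new_settings: { /* the full "settings" block from the get response, with edits */ }
  })
});
if (!resp.ok) throw new Error(`Reset failed: ${resp.status}`);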
After struggling with:
mongoose to determine update-upsert is doing insert or update
I'm still at a loss for determining whether an update or an insert occurred with something like this:
let updateResult = await myDB.updateOne(
searchKey,
newData,
{ upsert: true }
);
updateResult contains:
{ n: 1, nModified: 1, ok: 1 }
nModified does not seem to indicate whether the operation resulted in an update or an insert; rather, it seems to have some other, undetermined meaning.
$ mongod --version
db version v4.4.18
Build Info: {
"version": "4.4.18",
"gitVersion": "8ed32b5c2c68ebe7f8ae2ebe8d23f36037a17dea",
"openSSLVersion": "OpenSSL 1.1.1f 31 Mar 2020",
"modules": [],
"allocator": "tcmalloc",
"environment": {
"distmod": "ubuntu2004",
"distarch": "x86_64",
"target_arch": "x86_64"
}
}
"mongoose": "^5.10.13",
If you are using Mongoose, then you will get the result below when you update:
{
"acknowledged": true,
"modifiedCount": 1, // if document is updated
"upsertedId": null, // id of new inserted document (if upsert)
"upsertedCount": 0, // count of inserted documents
"matchedCount": 1 // count of documents that match with condition for update
}
Using the llrpjs library for Node.js, we are attempting to read tags from the Zebra FX7500 (Motorola?). This discussion points to the RFID Reader Software Interface Control Guide, pages 142-144, but does not indicate what values to use when setting up the device.
From what we can gather, we should issue a SET_READER_CONFIG with a custom parameter (MotoDefaultSpec = VendorIdentifier: 161, ParameterSubtype: 102, UseDefaultSpecForAutoMode: true). Do we need to include the ROSpec and/or AccessSpec values as well (are they required)? After sending the SET_READER_CONFIG message, do we still need to send the regular LLRP messages (ADD_ROSPEC, ENABLE_ROSPEC, START_ROSPEC)? Without the MotoDefaultSpec, even after sending the regular LLRP messages, sending a GET_REPORT does not retrieve tags nor does a custom message with MOTO_GET_TAG_EVENT_REPORT. They both trigger a RO_ACCESS_REPORT event message, but the tagReportData is null.
The README file for llrpjs lists "Vendor definitions support" as a TODO item. While that is somewhat vague, is it possible that the library just hasn't implemented custom LLRP extension (messages/parameters) support, which is why none of our attempts are working? The MotoDefaultSpec parameter and MOTO_GET_TAG_EVENT_REPORT are custom to the vendor/chipset. The MOTO_GET_TAG_EVENT_REPORT custom message seems to trigger a RO_ACCESS_REPORT similar to the base LLRP GET_REPORT message, so we assume that part is working.
It is worth noting that Zebra's 123RFID Desktop setup and optimization tool connects and reads tags as expected, so the device and antennas are working.
Could these issues be related to the ROSPEC file we are using (see below)?
{
"$schema": "https://llrpjs.github.io/schema/core/encoding/json/1.0/llrp-1x0.schema.json",
"id": 1,
"type": "ADD_ROSPEC",
"data": {
"ROSpec": {
"ROSpecID": 123,
"Priority": 1,
"CurrentState": "Disabled",
"ROBoundarySpec": {
"ROSpecStartTrigger": {
"ROSpecStartTriggerType": "Immediate"
},
"ROSpecStopTrigger": {
"ROSpecStopTriggerType": "Null",
"DurationTriggerValue": 0
}
},
"AISpec": {
"AntennaIDs": [1, 2, 3, 4],
"AISpecStopTrigger": {
"AISpecStopTriggerType": "Null",
"DurationTrigger": 0
},
"InventoryParameterSpec": {
"InventoryParameterSpecID": 1234,
"ProtocolID": "EPCGlobalClass1Gen2"
}
},
"ROReportSpec": {
"ROReportTrigger": "Upon_N_Tags_Or_End_Of_ROSpec",
"N": 1,
"TagReportContentSelector": {
"EnableROSpecID": true,
"EnableAntennaID": true,
"EnableFirstSeenTimestamp": true,
"EnableLastSeenTimestamp": true,
"EnableSpecIndex": false,
"EnableInventoryParameterSpecID": false,
"EnableChannelIndex": false,
"EnablePeakRSSI": false,
"EnableTagSeenCount": true,
"EnableAccessSpecID": false
}
}
}
}
}
For anyone having a similar issue, we found that attempting to configure more antennas than the Zebra device has connected causes the entire spec to fail. In our case, we had two antennas connected, so including antennas 3 and 4 in the spec was causing the problem.
See below for the working ROSPEC. Removing the extra antennas from the data.AISpec.AntennaIDs property allowed our application to connect and read tags.
We are still having some issues with llrpjs when trying to STOP_ROSPEC because it sends an RO_ACCESS_REPORT response without a resName value. See the issue on GitHub for more information.
That said, our application works without sending the STOP_ROSPEC command.
{
"$schema": "https://llrpjs.github.io/schema/core/encoding/json/1.0/llrp-1x0.schema.json",
"id": 1,
"type": "ADD_ROSPEC",
"data": {
"ROSpec": {
"ROSpecID": 123,
"Priority": 1,
"CurrentState": "Disabled",
"ROBoundarySpec": {
"ROSpecStartTrigger": {
"ROSpecStartTriggerType": "Null"
},
"ROSpecStopTrigger": {
"ROSpecStopTriggerType": "Null",
"DurationTriggerValue": 0
}
},
"AISpec": {
"AntennaIDs": [1, 2],
"AISpecStopTrigger": {
"AISpecStopTriggerType": "Null",
"DurationTrigger": 0
},
"InventoryParameterSpec": {
"InventoryParameterSpecID": 1234,
"ProtocolID": "EPCGlobalClass1Gen2",
"AntennaConfiguration": {
"AntennaID": 1,
"RFReceiver": {
"ReceiverSensitivity": 0
},
"RFTransmitter": {
"HopTableID": 1,
"ChannelIndex": 1,
"TransmitPower": 170
},
"C1G2InventoryCommand": {
"TagInventoryStateAware": false,
"C1G2RFControl": {
"ModeIndex": 23,
"Tari": 0
},
"C1G2SingulationControl": {
"Session": 1,
"TagPopulation": 32,
"TagTransitTime": 0,
"C1G2TagInventoryStateAwareSingulationAction": {
"I": "State_A",
"S": "SL"
}
}
}
}
}
},
"ROReportSpec": {
"ROReportTrigger": "Upon_N_Tags_Or_End_Of_AISpec",
"N": 1,
"TagReportContentSelector": {
"EnableROSpecID": true,
"EnableAntennaID": true,
"EnableFirstSeenTimestamp": true,
"EnableLastSeenTimestamp": true,
"EnableTagSeenCount": true,
"EnableSpecIndex": false,
"EnableInventoryParameterSpecID": false,
"EnableChannelIndex": false,
"EnablePeakRSSI": false,
"EnableAccessSpecID": false
}
}
}
}
}
I am trying to read a BLOB data type from Oracle in Node.js using the oracledb npm package. This is my select statement; the column holds a picture:
select media from test_data
I have tried a few solutions, but none of them work.
Here is the response from res.json:
{
"MEDIA": {
"_readableState": {
"objectMode": false,
"highWaterMark": 16384,
"buffer": {
"head": null,
"tail": null,
"length": 0
},
"length": 0,
"pipes": [],
"flowing": null,
"ended": false,
"endEmitted": false,
.....
},
How do I get the hex value of it?
You are seeing an instance of the node-oracledb Lob class, which you can stream from.
However, it is often easiest to get a Buffer directly. Do this by setting oracledb.fetchAsBuffer, for example:
oracledb.fetchAsBuffer = [ oracledb.BLOB ];
const result = await connection.execute(`SELECT b FROM mylobs WHERE id = 2`);
if (result.rows.length === 0)
console.error("No results");
else {
const blob = result.rows[0][0];
console.log(blob.toString()); // assuming printable characters
}
Refer to the documentation Simple LOB Queries and PL/SQL OUT Binds.
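Since the question asks for the hex value: once you have a Buffer, Node's built-in hex encoding does the rest. A short sketch using the blob Buffer from the snippet above:
// Binary image data is not printable, so encode it as hex (or base64)
// rather than calling toString() with the default utf8 encoding.
const hex = blob.toString('hex');
console.log(hex.slice(0, 32)); // first 16 bytes as hex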
{
members {
id
lastName
}
}
When I tried to get the data from the members table, I got the following response:
{ "data": {
"members": [
{
"id": "TWVtYmVyOjE=",
"lastName": "temp"
},
{
"id": "TWVtYmVyOjI=",
"lastName": "temp2"
}
] } }
However, when I tried to update a row with an 'id' where clause, the console showed an error.
mutation {
updateMembers(
input: {
values: {
email: "testing#test.com"
},
where: {
id: 3
}
}
) {
affectedCount
clientMutationId
}
}
"message": "Unknown column 'NaN' in 'where clause'",
Some of the results above confused me:
Why is the returned id not a numeric value? In the DB it is a number.
When updating a record, can I use a numeric id value in the where clause?
I am using Node.js, apollo-client, and graphql-sequelize-crud.
TL;DR: check out my (possibly not Relay-compatible) PR here: https://github.com/Glavin001/graphql-sequelize-crud/pull/30
Basically, the internal source code calls the fromGlobalId API from graphql-relay but passes a primitive value into it (e.g. your 3), causing it to return undefined. I simply removed that call from the source code and made a pull request.
P.S. This bug took me 2 hours to solve, and my fix failed the build, so the solution may not be robust enough.
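For background on why the ids look opaque: they are Relay global IDs, i.e. base64-encoded "Type:id" strings. A quick sketch with graphql-relay:
const { fromGlobalId, toGlobalId } = require('graphql-relay');

// "TWVtYmVyOjE=" is base64 for "Member:1", which is why the id is not numeric.
console.log(fromGlobalId('TWVtYmVyOjE=')); // { type: 'Member', id: '1' }

// Going the other way, the numeric DB id 3 becomes:
console.log(toGlobalId('Member', '3')); // "TWVtYmVyOjM="
Passing a raw numeric 3 into fromGlobalId cannot be parsed this way, which is presumably how the NaN in the error message arises.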
Please try this, passing the id as a string:
mutation {
updateMembers(
input: {
values: {
email: "testing#test.com"
},
where: {
id: "3"
}
}
) {
affectedCount
clientMutationId
}
}