JMS Text Message on Oracle AQ from Node.js

I'm trying to enqueue a JMS text message on Oracle AQ from Node.js.
const enqueue = async () => {
  try {
    await oracle.createPool();
    const connection = await oracle.getConnection();
    const jmsMessageType = "SYS.AQ$_JMS_TEXT_MESSAGE";
    const queue = await connection.getQueue(bv.REQUEST_QUEUE_NAME, { payloadType: jmsMessageType });
    const theRequest = new queue.payloadTypeClass({
      text_length: request.length,
      text_vc: request
    });
    await queue.enqOne(theRequest);
    await connection.commit();
  } catch (e) {
    console.error(e);
  }
}
enqueue();
I can see that the message is queued in the AQ table in Oracle, but the consumer breaks when trying to dequeue the message:
oracle.jms.AQjmsException: JMS-120: Dequeue failed
at oracle.jms.AQjmsError.throwEx(AQjmsError.java:337)
at oracle.jms.AQjmsConsumer.jdbcDequeueCommon(AQjmsConsumer.java:1995)
at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:1374)
at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:1292)
at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:1270)
at oracle.jms.AQjmsConsumer.receiveNoWait(AQjmsConsumer.java:1068)
...
Caused by: java.lang.NullPointerException
at oracle.jms.AQjmsTextMessage.readTextMessageContainer(AQjmsTextMessage.java:328)
at oracle.jms.AQjmsTextMessage.<init>(AQjmsTextMessage.java:161)
at oracle.jms.AQjmsConsumer.jdbcDequeueCommon(AQjmsConsumer.java:1751)
... 19 more
Any ideas on the correct structure of the JMSTextMessage type?

Basically I just had to get the definitions of the types and use UPPER CASE for the property names. Upper case is very important - it just ignores lower-case property names.
SYS.AQ$_JMS_TEXT_MESSAGE
SYS.AQ$_JMS_HEADER
SYS.AQ$_JMS_USERPROPARRAY
SYS.AQ$_JMS_USERPROPERTY
Look here if you need more:
https://docs.oracle.com/cd/B10501_01/appdev.920/a96612/t_jms3.htm
const theRequest = new queue.payloadTypeClass({
  HEADER: {
    USERID: "YOUR_USER",
    PROPERTIES: [
      {
        NAME: "JMS_OracleDeliveryMode",
        TYPE: 100,
        STR_VALUE: "2",
        NUM_VALUE: null,
        JAVA_TYPE: 27
      },
      {
        NAME: "JMS_OracleTimestamp",
        TYPE: 200,
        STR_VALUE: null,
        NUM_VALUE: new Date().getTime(),
        JAVA_TYPE: 24
      }
    ]
  },
  TEXT_LEN: request.length,
  TEXT_VC: request
});
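If you want to double-check the attribute names from Node.js itself, node-oracledb exposes them on the database object class; a quick sketch, using the same connection as above:

// List the attribute names of the JMS text message type
const TextMessageClass = await connection.getDbObjectClass("SYS.AQ$_JMS_TEXT_MESSAGE");
console.log(Object.keys(TextMessageClass.attributes)); // e.g. HEADER, TEXT_LEN, TEXT_VC, ...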

Related

How can I fix the update_mask.paths error on AnalyticsAdminServiceClient (GA4) in Node.js?

I tried to update "DataRetentionSettings" using the Google Analytics Admin Client (GA4) in Node.js, but I got the following error.
Error: 3 INVALID_ARGUMENT: One or more values in the field 'update_mask.paths_list' was invalid, but all values must be valid.eventDataRetention, resetUserDataOnNewActivity
at Object.callErrorFromStatus (C:\my_working_path\GA4_manager\node_modules\@grpc\grpc-js\build\src\call.js:31:26)
at Object.onReceiveStatus (C:\my_working_path\GA4_manager\node_modules\@grpc\grpc-js\build\src\client.js:189:52)
at Object.onReceiveStatus (C:\my_working_path\GA4_manager\node_modules\@grpc\grpc-js\build\src\client-interceptors.js:365:141)
at Object.onReceiveStatus (C:\my_working_path\GA4_manager\node_modules\@grpc\grpc-js\build\src\client-interceptors.js:328:181)
at C:\my_working_path\GA4_manager\node_modules\@grpc\grpc-js\build\src\call-stream.js:187:78
at processTicksAndRejections (node:internal/process/task_queues:78:11) {
code: 3,
details: "One or more values in the field 'update_mask.paths_list' was invalid, but all values must be valid.eventDataRetention, resetUserDataOnNewActivity",
metadata: Metadata {
internalRepr: Map(1) { 'grpc-server-stats-bin' => [Array] },
options: {}
},
note: 'Exception occurred in retry method that was not classified as transient'
}
The code is as follows.
const analyticsAdmin = require("@google-analytics/admin");

class Main {
  constructor() {
    this.analyticsAdminClient = new analyticsAdmin.AnalyticsAdminServiceClient({
      keyFilename: "./mykeyfile.json",
    });
  }

  async updateDataRetentionSettings() {
    const name = "properties/*********/dataRetentionSettings";
    const request = {
      dataRetentionSettings: {
        name: name,
        eventDataRetention: "FOURTEEN_MONTHS",
        resetUserDataOnNewActivity: true,
      },
      updateMask: {
        paths: ["eventDataRetention", "resetUserDataOnNewActivity"],
      },
    };
    let retention = {};
    try {
      retention = await this.analyticsAdminClient.updateDataRetentionSettings(request);
    } catch (e) {
      console.log(e);
      process.exit(1);
    }
    return retention[0];
  }
}

const client = new Main();
client.updateDataRetentionSettings();
I also added "name" to the paths property of updateMask and the result was the same.
Here is the document I referred to.
AnalyticsAdminServiceClient
And the client version is 4.0.0.
How can I update DataRetentionSettings via API?
To update a property in GA4, you can try the following:
const { AnalyticsAdminServiceClient } = require('@google-analytics/admin').v1alpha; // this dependency must be installed

const credentialFile = '/usr/local/credentialFile.json'; // path to your service account's JSON key file

const adminClient = new AnalyticsAdminServiceClient({ keyFilename: credentialFile });

async function callUpdateProperty() {
  // Construct request
  const updateMask = {
    paths: ["display_name"] // note: mask paths must be snake_case, so 'displayName' becomes 'display_name'
  };
  const property = {
    name: "properties/123",
    displayName: "New Display Name"
  };
  const request = {
    property,
    updateMask,
  };
  // Run request
  const response = await adminClient.updateProperty(request);
  console.log(response);
}

callUpdateProperty();
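Applying the same snake_case rule to the original DataRetentionSettings call would give an update mask like this (a sketch; the property ID placeholder is kept from the question):

const request = {
  dataRetentionSettings: {
    name: "properties/*********/dataRetentionSettings",
    eventDataRetention: "FOURTEEN_MONTHS",
    resetUserDataOnNewActivity: true,
  },
  updateMask: {
    // snake_case field names instead of camelCase
    paths: ["event_data_retention", "reset_user_data_on_new_activity"],
  },
};
const [settings] = await this.analyticsAdminClient.updateDataRetentionSettings(request);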

How to make kuzzle-device-manager plugin API actions work?

I successfully installed and loaded kuzzle-device-manager in the backend file:
import { Backend } from 'kuzzle';
import { DeviceManagerPlugin } from 'kuzzle-device-manager';

const app = new Backend('playground');
console.log(app.config);

const deviceManager = new DeviceManagerPlugin();
const mappings = {
  updatedAt: { type: 'date' },
  payloadUuid: { type: 'keyword' },
  value: { type: 'float' }
};

deviceManager.devices.registerMeasure('humidity', mappings);

app.plugin.use(deviceManager);

app.start()
  .then(async () => {
    // Interact with Kuzzle API to create a new index if it does not already exist
    console.log(' started!');
  })
  .catch(console.error);
But when I try to use controllers from that plugin, for example device-manager/device with the create action, I get an error.
Here is my "client" code in JS:
const { Kuzzle, WebSocket } = require("kuzzle-sdk");

const kuzzle = new Kuzzle(
  new WebSocket('KUZZLE_IP')
);

kuzzle.on('networkError', error => {
  console.error('Network Error: ', error);
});

const run = async () => {
  try {
    // Connects to the Kuzzle server
    await kuzzle.connect();
    // Creates an index
    const result = await kuzzle.query({
      index: "nyc-open-data",
      controller: "device-manager/device",
      action: "create",
      body: {
        model: "model-1234",
        reference: "reference-1234"
      }
    }, {
      queuable: false
    });
    console.log(result);
  } catch (error) {
    console.error(error.message);
  } finally {
    kuzzle.disconnect();
  }
};

run();
And the result log:
API action "device-manager/device":"create" not found
Note: The nyc-open-data index exists and is empty.
We apologize for this mistake in the documentation; the device-manager/device:create action is not available because the plugin uses auto-provisioning until v2.
You should send a payload to your decoder instead; the plugin will automatically provision the device if it does not exist: https://docs.kuzzle.io/official-plugins/device-manager/1/guides/decoders/#receive-payloads
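As a rough illustration of that flow, the client sends the raw payload to the plugin's payload controller instead of device-manager/device:create (a sketch; the action name is generated from whichever decoder you registered, so "dummy-sensor" below is an assumption):

// Send a raw payload; the decoder registered in the backend will
// decode it and auto-provision the device if needed.
const result = await kuzzle.query({
  controller: "device-manager/payload",
  action: "dummy-sensor", // hypothetical: depends on your registered decoder
  body: {
    reference: "reference-1234"
    // ...raw fields your decoder expects
  }
});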

Elasticsearch Node.js point-in-time search_phase_execution_exception

const body = {
  query: {
    geo_shape: {
      geometry: {
        relation: 'within',
        shape: {
          type: 'polygon',
          coordinates: [$polygon],
        },
      },
    },
  },
  pit: {
    id: "t_yxAwEPZXNyaS1wYzYtMjAxN3IxFjZxU2RBTzNyUXhTUV9XbzhHSk9IZ3cAFjhlclRmRGFLUU5TVHZKNXZReUc3SWcAAAAAAAALmpMWQkNwYmVSeGVRaHU2aDFZZExFRjZXZwEWNnFTZEFPM3JReFNRX1dvOEdKT0hndwAA",
    keep_alive: "1m",
  },
};
The query fails with search_phase_execution_exception at onBody.
Without the pit clause the query works fine, but PIT is needed to retrieve more than 10,000 hits.
Well, using PIT in the Node.js Elasticsearch client is not clear, or at least it is not well documented. You can create a PIT using the client like this:
const pitRes = await elastic.openPointInTime({
  index: index,
  keep_alive: "1m"
});
const pit_id = pitRes.body.id;
But there is no way to use that pit_id in the search method, and it's not documented properly :S
BUT, you can use the scroll API as follows:
const scrollSearch = elastic.helpers.scrollSearch({
  index: index,
  body: {
    size: 10000,
    query: {
      query_string: {
        fields: ["vm_ref", "org", "vm"],
        query: organization + moreQuery
      }
    },
    // note: sort belongs at the body level, not inside query
    sort: [
      { utc_date: "desc" }
    ]
  }
});
And then read the results as follows:
let res = [];
try {
  for await (const result of scrollSearch) {
    res.push(...result.body.hits.hits);
  }
} catch (e) {
  console.log(e);
}
I know that's not the exact answer to your question, but I hope it helps ;)
The usage of point-in-time for pagination of search results is now documented in Elasticsearch. You can find more or less detailed explanations here: Paginate search results
I prepared an example that may give an idea of how to implement the workflow described in the documentation:
const { Client } = require("@elastic/elasticsearch");

async function searchWithPointInTime(cluster, index, chunkSize, keepAlive) {
  if (!chunkSize) {
    chunkSize = 5000;
  }
  if (!keepAlive) {
    keepAlive = "1m";
  }
  const client = new Client({ node: cluster });
  let pointInTimeId = null;
  let searchAfter = null;
  try {
    // Open point in time
    pointInTimeId = (await client.openPointInTime({ index, keep_alive: keepAlive })).body.id;
    // Query the next chunk of data
    while (true) {
      const response = await client.search({
        // Pay attention: no index here (because it will come from the point-in-time)
        body: {
          size: chunkSize,
          track_total_hits: false, // This will make the query faster
          query: {
            // (1) TODO: put any filter you need here (instead of match_all)
            match_all: {},
          },
          pit: {
            id: pointInTimeId,
            keep_alive: keepAlive,
          },
          // Sorting should be by _shard_doc or at least include _shard_doc
          sort: [{ _shard_doc: "desc" }],
          // The next parameter is very important - it tells Elastic to bring us the next portion
          ...(searchAfter !== null && { search_after: [searchAfter] }),
        },
      });
      const { hits } = response.body.hits;
      if (!hits || !hits.length) {
        break; // No more data
      }
      for (const hit of hits) {
        // (2) TODO: Do whatever you need with the results
      }
      // Check if we are done reading the data
      if (hits.length < chunkSize) {
        break; // We finished reading all the data
      }
      // Get the next value for the 'search after' position
      // by extracting the _shard_doc from the sort key of the last hit
      searchAfter = hits[hits.length - 1].sort[0];
    }
  } catch (ex) {
    console.error(ex);
  } finally {
    // Close the point in time
    if (pointInTimeId) {
      await client.closePointInTime({ body: { id: pointInTimeId } });
    }
  }
}
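A call then looks like this (cluster URL and index name are placeholders):

searchWithPointInTime("http://localhost:9200", "my-index", 5000, "1m");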

Mock multiple API calls inside one function using Moxios

I am writing a test case for my service class. I want to mock multiple calls inside one function, as I am making two API calls from one function. I tried the following, but it is not working:
it('should get store info', async done => {
  const store: any = DealersAPIFixture.generateStoreInfo();
  moxios.wait(() => {
    const request = moxios.requests.mostRecent();
    request.respondWith({
      status: 200,
      response: store
    });
    const nextRequest = moxios.requests.at(1);
    nextRequest.respondWith({
      status: 200,
      response: DealersAPIFixture.generateLocation()
    });
  });
  const params = {
    dealerId: store.dealerId,
    storeId: store.storeId,
    uid: 'h0pw1p20'
  };
  return DealerServices.retrieveStoreInfo(params).then((data: IStore) => {
    const expectedOutput = DealersFixture.generateStoreInfo(data);
    expect(data).toMatchObject(expectedOutput);
  });
});
const nextRequest is always undefined, and it throws TypeError: Cannot read property 'respondWith' of undefined.
Here is my service class:
static async retrieveStoreInfo(
  queryParam: IStoreQueryString
): Promise<IStore> {
  const res = await request(getDealerStoreParams(queryParam));
  try {
    const locationResponse = await graphQlRequest({
      query: locationQuery,
      variables: { storeId: res.data.storeId }
    });
    res.data['inventoryLocationCode'] =
      locationResponse.data?.location?.inventoryLocationCode;
  } catch (e) {
    res.data['inventoryLocationCode'] = 'N/A';
  }
  return res.data;
}
Late to the party, but I had to solve this same problem just today.
My (not ideal) solution is to use moxios.stubRequest for each request except the last one. This solution relies on the fact that moxios.stubRequest pushes requests to moxios.requests, so you'll be able to analyze all the requests after responding to the last call.
The code will look something like this (assuming you have 3 requests to make):
moxios.stubRequest("get-dealer-store-params", {
  status: 200,
  response: {
    name: "Audi",
    location: "Berlin",
  }
});

moxios.stubRequest("graph-ql-request", {
  status: 204,
});

moxios.wait(() => {
  const lastRequest = moxios.requests.mostRecent();
  lastRequest.respondWith({
    status: 200,
    response: {
      isEverythingWentFine: true,
    },
  });

  // Here you can analyze any request you want

  // Assert getDealerStoreParams's request
  const dealerStoreParamsRequest = moxios.requests.first();
  expect(dealerStoreParamsRequest.config.headers.Accept).toBe("application/x-www-form-urlencoded");

  // Assert graphQlRequest
  const graphQlRequest = moxios.requests.get("POST", "graph-ql-request");
  ...

  // Assert last request
  expect(lastRequest.config.url).toBe("status");
});
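For completeness, this assumes moxios is installed and uninstalled around each test, which is the standard moxios setup (not shown in the original snippet):

beforeEach(() => moxios.install());
afterEach(() => moxios.uninstall());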

Job processing microservices using bull

I would like to process scheduled jobs using Node.js Bull. Basically I have two processors that handle two types of jobs. There is one configurator that configures the jobs, which are added to the Bull queue using cron.
The scheduler will be in one microservice, and each of the processors will be a separate microservice, so I will have three microservices.
My question is: am I using the correct pattern with Bull?
index.js
const Queue = require('bull');
const fetchQueue = new Queue('MyScheduler');
fetchQueue.add("fetcher", {name: "earthQuakeAlert"}, {repeat: {cron: '1-59/2 * * * *'}, removeOnComplete: true});
fetchQueue.add("fetcher", {name: "weatherAlert"}, {repeat: {cron: '3-59/3 * * * *'}, removeOnComplete: true});
processor-configurator.js
const Queue = require('bull');
const scheduler = new Queue("MyScheduler");
scheduler.process("processor", __dirname + "/alert-processor");
fetcher-configurator.js
const Queue = require('bull');
const scheduler = new Queue("MyScheduler");
scheduler.process("fetcher", __dirname+"/fetcher");
fetcher.js
const Queue = require('bull');
const moment = require('moment');

module.exports = function (job) {
  const scheduler = new Queue('MyScheduler');
  console.log("Inside processor ", job.data, moment().format("YYYY-MM-DD hh:mm:ss"));
  scheduler.add('processor', {name: 'Email needs to be sent'}, {removeOnComplete: true});
  return Promise.resolve();
};
alert-processor.js
const Queue = require('bull');
const moment = require('moment');

module.exports = function (job) {
  const scheduler = new Queue('MyScheduler');
  console.log("Inside processor ", job.data, moment().format("YYYY-MM-DD hh:mm:ss"));
  scheduler.add('processor', {name: 'Email needs to be sent'}, {removeOnComplete: true});
  return Promise.resolve();
};
There will be three microservices:
node index.js
node fetcher-configurator.js
node processor-configurator.js
I see inconsistent behavior from Bull. Sometimes I get the error Missing process handler for job type.
Quoting myself, in the hope this will be helpful for someone else:
This is because both workers use the same queue. A worker tries to get the next job from the queue, receives a job with the wrong type (e.g. "fetcher" instead of "processor"), and fails because it only knows how to handle "processor" and doesn't know what to do with "fetcher". Bull doesn't let you take only compatible jobs from a queue; both workers must be able to process all types of jobs. The simplest solution is to use two different queues, one for processors and one for fetchers. Then you can remove the names from jobs and processors; they won't be needed anymore, since the type is defined by the queue. A minimal sketch of that layout follows the link below.
https://github.com/OptimalBits/bull/issues/1481
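A minimal sketch of that two-queue layout (queue names are illustrative, not from the original post):

// fetcher-configurator.js
const Queue = require('bull');
const fetchQueue = new Queue('fetch-jobs');
fetchQueue.process(__dirname + '/fetcher'); // no job name: this queue only carries fetch jobs

// processor-configurator.js
const Queue = require('bull');
const processQueue = new Queue('process-jobs');
processQueue.process(__dirname + '/alert-processor');

// index.js
const Queue = require('bull');
const fetchQueue = new Queue('fetch-jobs');
fetchQueue.add({name: 'earthQuakeAlert'}, {repeat: {cron: '1-59/2 * * * *'}, removeOnComplete: true});
fetchQueue.add({name: 'weatherAlert'}, {repeat: {cron: '3-59/3 * * * *'}, removeOnComplete: true});

// fetcher.js would then add its follow-up jobs to the process queue:
// new Queue('process-jobs').add({name: 'Email needs to be sent'}, {removeOnComplete: true});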
For reference, here is a similar setup that uses one queue per job type:
expiration-queue.js
import Queue from 'bull';
import { ExpirationCompletePublisher } from '../events/publishers/expiration-complete-publisher';
import { natsWrapper } from '../nats-wrapper';

interface Payload {
  orderId: string;
}

const expirationQueue = new Queue<Payload>('order:expiration', {
  redis: {
    host: process.env.REDIS_HOST,
  },
});

expirationQueue.process(async (job) => {
  console.log('Expired order id', job.data.orderId);
  new ExpirationCompletePublisher(natsWrapper.client).publish({
    orderId: job.data.orderId,
  });
});

export { expirationQueue };
promotionEndQueue.js
import Queue from 'bull';
import { PromotionEndedPublisher } from '../events/publishers/promotion-ended-publisher';
import { natsWrapper } from '../nats-wrapper';

interface Payload {
  promotionId: string;
}

const promotionEndQueue = new Queue<Payload>('promotions:end', {
  redis: {
    host: process.env.REDIS_HOST, // look at expiration-depl.yaml
  },
});

promotionEndQueue.process(async (job) => {
  console.log('Expired promotion id', job.data.promotionId);
  new PromotionEndedPublisher(natsWrapper.client).publish({
    promotionId: job.data.promotionId,
  });
});

export { promotionEndQueue };
order-created-listener.js
import { Listener, OrderCreatedEvent, Subjects } from '@your-lib/common';
import { queueGroupName } from './queue-group-name';
import { Message } from 'node-nats-streaming';
import { expirationQueue } from '../../queues/expiration-queue';

export class OrderCreatedListener extends Listener<OrderCreatedEvent> {
  subject: Subjects.OrderCreated = Subjects.OrderCreated;
  queueGroupName = queueGroupName;

  async onMessage(data: OrderCreatedEvent['data'], msg: Message) {
    // delay = expiredTime - currentTime
    const delay = new Date(data.expiresAt).getTime() - new Date().getTime();
    // console.log("delay", delay)
    await expirationQueue.add(
      {
        orderId: data.id,
      },
      {
        delay,
      }
    );
    msg.ack();
  }
}
promotion-started-listener.js
import {
  Listener,
  PromotionStartedEvent,
  Subjects,
} from '@your-lib/common';
import { queueGroupName } from './queue-group-name';
import { Message } from 'node-nats-streaming';
import { promotionEndQueue } from '../../queues/promotions-end-queue';

export class PromotionStartedListener extends Listener<PromotionStartedEvent> {
  subject: Subjects.PromotionStarted = Subjects.PromotionStarted;
  queueGroupName = queueGroupName;

  async onMessage(data: PromotionStartedEvent['data'], msg: Message) {
    // delay = expiredTime - currentTime
    const delay = new Date(data.endTime).getTime() - new Date().getTime();
    // console.log("delay", delay)
    await promotionEndQueue.add(
      {
        promotionId: data.id,
      },
      {
        delay,
      }
    );
    msg.ack();
  }
}
