I am working on a project that uses a few RabbitMQ queues. One of the queues requires that messages be delayed for processing at a time in the future. I noticed in the RabbitMQ documentation that there is a new plugin called RabbitMQ Delayed Message Plugin that seems to provide this functionality. In past RabbitMQ-based projects I used seneca-amqp-transport for adding and processing messages. The issue is that I have not found any documentation for seneca, or any examples, outlining how to add header properties.
It seems as if I initially need to make sure the exchange is declared with type x-delayed-message and an x-delayed-type argument. Additionally, as each message is added to the queue, I need to make sure the x-delay header is added to the message before it is sent to RabbitMQ. Is there a way to pass this parameter, x-delay, with seneca-amqp-transport?
Here is my current code for adding a message to the queue:
return new Promise((resolve, reject) => {
  const client = require('seneca')()
    .use('seneca-amqp-transport')
    .client({
      type: 'amqp',
      pin: 'action:perform_time_consuming_act',
      url: process.env.AMQP_SEND_URL
    }).ready(() => {
      client.act('action:perform_time_consuming_act', {
        message: { data: 'this is a test' }
      }, (err, res) => {
        if (err) {
          return reject(err);
        }
        resolve(true);
      });
    });
});
In the code above, where would header-related data go?
I just looked at the library's code; under lib/client/publisher.js, this should do the trick:
function publish(message, exchange, rk, options) {
  const opts = Object.assign({}, options, {
    replyTo: replyQueue,
    contentType: JSON_CONTENT_TYPE,
    headers: { 'x-delay': 5000 },
    correlationId: correlationId
  });
  return ch.publish(exchange, rk, Buffer.from(message), opts);
}
Give it a try; it should work. Here the delay value is set to 5000 milliseconds. You could also overload the publish method to take the delay value as a parameter. Note that AMQP message headers go in the headers property of the publish options, and 'x-delay' must be quoted since it is not a valid JavaScript identifier.
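For completeness, here is a minimal sketch of declaring the delayed exchange when talking to RabbitMQ directly with amqplib, assuming the rabbitmq_delayed_message_exchange plugin is enabled on the broker; the exchange and routing key names are illustrative:

const amqp = require('amqplib');

async function publishDelayed() {
  const conn = await amqp.connect(process.env.AMQP_SEND_URL);
  const ch = await conn.createChannel();
  // The plugin adds the 'x-delayed-message' exchange type; the
  // 'x-delayed-type' argument tells it how to route once the delay expires.
  await ch.assertExchange('delayed.exchange', 'x-delayed-message', {
    arguments: { 'x-delayed-type': 'direct' }
  });
  // The per-message delay (in milliseconds) goes in the 'x-delay' header.
  ch.publish('delayed.exchange', 'some.routing.key',
    Buffer.from(JSON.stringify({ data: 'this is a test' })),
    { headers: { 'x-delay': 5000 } });
}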
Referencing https://github.com/grpc/proposal/blob/master/A6-client-retries.md, it is not clear where the retry policy is actually placed or referenced. Is it part of
protoLoader.loadSync(PROTO_PATH, {
  keepCase: true,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true
})
A supplemental question: once retry is set up, regarding the call.on('xxxxxx', ...) handlers, where in the API docs are the possible events listed? Using VS Code I don't get any lint suggestions, but Copilot gave me these; is there a more comprehensive list?
call.on('end', () => {
  console.log("---END---")
})
call.on('error', err => {
  console.log("---ERROR---:" + JSON.stringify(err))
})
call.on('status', status => {
  console.log("---STATUS---")
})
call.on('metadata', metadata => {
  console.log("---METADATA---")
})
call.on('cancelled', () => {
  console.log("---CANCELLED---")
})
call.on('close', () => {
  console.log("---CLOSE---")
})
call.on('finish', () => {
  console.log("---FINISH---")
})
call.on('drain', () => {
  console.log("---DRAIN---")
})
First, the retry functionality is currently not supported in the Node gRPC library (but it is in development).
Second, once the retry functionality is supported, it can be configured in the service config, as specified in the Integration with Service Config section of the proposal you linked. The service config can be provided to the client automatically by the service owner through the name resolution mechanism, or it can be provided when constructing a Client or Channel object by setting the grpc.service_config channel argument with a value that is a string containing a JSON-encoded service config.
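For illustration, once retry support lands, providing the service config through the channel argument might look like the following sketch with @grpc/grpc-js; the EchoService client and service name are hypothetical, and the retryPolicy fields follow the JSON schema in the linked proposal:

const grpc = require('@grpc/grpc-js');

// JSON-encoded service config containing a retry policy, per proposal A6.
const serviceConfig = {
  methodConfig: [{
    name: [{ service: 'echo.EchoService' }], // no method given: applies to all methods
    retryPolicy: {
      maxAttempts: 4,
      initialBackoff: '0.1s',
      maxBackoff: '1s',
      backoffMultiplier: 2,
      retryableStatusCodes: ['UNAVAILABLE']
    }
  }]
};

// Pass it as the 'grpc.service_config' channel argument when constructing the client.
const client = new EchoService('localhost:50051',
  grpc.credentials.createInsecure(),
  { 'grpc.service_config': JSON.stringify(serviceConfig) });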
Third, the call objects returned when calling methods are Node stream objects with additional events for metadata and status. Depending on how the method is defined in the .proto file, the call can be a Readable stream, a Writable stream, both, or neither, and it will emit the corresponding events. The cancelled event is only emitted by server call objects, which do not emit the metadata or status events.
I am experimenting with Node.js and the Application Insights SDK in two separate function apps. Node.js is just what I am comfortable with for a quick POC and might not be the final language, so I am not looking for language-specific solutions, simply how Application Insights behaves in the context of function apps and what it expects in order to draw a proper application map.
My goal is to be able to write simple queries in log analytics to get the full chain of a single request through multiple function apps, no matter how these are connected. I also want an accurate (as possible) view of the system in the application map in application insights.
My assumption is that properly set operation_Id and operation_ParentId values would yield both a queryable trace using Kusto and a proper application map.
I've set up the following flow:
Function1 only exposes an HTTP trigger, whereas Function2 exposes both an HTTP and a Service Bus trigger.
The full flow looks like this:
1. I call Function1 using GET http://function1.com?input=test
2. Function1 calls Function2 over REST at GET http://function2.com?input=test
3. Function1 uses the response from Function2 to add a message to a service bus queue
4. Function2 has a trigger on that same queue
I am mixing patterns here just to see what the application map does and understand how to use this correctly.
For steps 1 through 3, I can see the entire chain in my logs under a single operation_Id; in the screenshot I captured, the same operation_Id spans two different function apps.
What I would also expect to find in this log is the service bus trigger, which is called ServiceBusTrigger. The service bus does trigger on the message; it just gets a different operation_Id.
To get the REST correlation to work, I followed the guidelines from the applicationinsights npm package, in the section called Setting up Auto-Correlation for Azure Functions.
This is what Function1 looks like (the entrypoint and start of the chain)
let appInsights = require('applicationinsights')
appInsights.setup()
  .setAutoCollectConsole(true, true)
  .setDistributedTracingMode(appInsights.DistributedTracingModes.AI_AND_W3C)
  .start()

const https = require('https')

const httpTrigger = async function (context, req) {
  context.log('JavaScript HTTP trigger function processed a request.')
  const response = await callOtherFunction(req)
  context.res = {
    body: response
  }
  context.log("Sending response on service bus")
  context.bindings.outputSbQueue = response;
}

async function callOtherFunction(req) {
  return new Promise((resolve, reject) => {
    https.get(`https://function2.azurewebsites.net/api/HttpTrigger1?code=${process.env.FUNCTION_2_CODE}&input=${req.query.input}`, (resp) => {
      let data = ''
      resp.on('data', (chunk) => {
        data += chunk
      })
      resp.on('end', () => {
        resolve(data)
      })
    }).on("error", (err) => {
      reject("Error: " + err.message)
    })
  })
}

module.exports = async function contextPropagatingHttpTrigger(context, req) {
  // Start an AI Correlation Context using the provided Function context
  const correlationContext = appInsights.startOperation(context, req);

  // Wrap the Function runtime with correlationContext
  return appInsights.wrapWithCorrelationContext(async () => {
    const startTime = Date.now(); // Start trackRequest timer

    // Run the Function
    const result = await httpTrigger(context, req);

    // Track Request on completion
    appInsights.defaultClient.trackRequest({
      name: context.req.method + " " + context.req.url,
      resultCode: context.res.status,
      success: true,
      url: req.url,
      time: new Date(startTime),
      duration: Date.now() - startTime,
      id: correlationContext.operation.parentId,
    });
    appInsights.defaultClient.flush();

    return result;
  }, correlationContext)();
};
And this is what the HTTP trigger in Function2 looks like:
let appInsights = require('applicationinsights')
appInsights.setup()
  .setAutoCollectConsole(true, true)
  .setDistributedTracingMode(appInsights.DistributedTracingModes.AI_AND_W3C)
  .start()

const httpTrigger = async function (context, req) {
  context.log('JavaScript HTTP trigger function processed a request.')
  context.res = {
    body: `Function 2 received ${req.query.input}`
  }
}

module.exports = async function contextPropagatingHttpTrigger(context, req) {
  // Start an AI Correlation Context using the provided Function context
  const correlationContext = appInsights.startOperation(context, req);

  // Wrap the Function runtime with correlationContext
  return appInsights.wrapWithCorrelationContext(async () => {
    const startTime = Date.now(); // Start trackRequest timer

    // Run the Function
    const result = await httpTrigger(context, req);

    // Track Request on completion
    appInsights.defaultClient.trackRequest({
      name: context.req.method + " " + context.req.url,
      resultCode: context.res.status,
      success: true,
      url: req.url,
      time: new Date(startTime),
      duration: Date.now() - startTime,
      id: correlationContext.operation.parentId,
    });
    appInsights.defaultClient.flush();

    return result;
  }, correlationContext)();
};
The Node.js application insights documentation says:
The Node.js client library can automatically monitor incoming and outgoing HTTP requests, exceptions, and some system metrics.
So this seems to work for HTTP, but what is the proper way to do this over (for instance) a service bus queue, so as to get a useful message trace and a correct application map? The above solution from the applicationinsights SDK seems to apply only to HTTP requests, where you use the req object on the context. How is the operation_Id persisted in cross-app communication in these instances?
What is the proper way of doing this across other messaging channels? What do I get for free from application insights, and what do I need to stitch myself?
UPDATE
I found this piece of information in the application map documentation, which seems to support the working theory that only REST/HTTP calls can be traced. But then the question remains: how does the output binding work if it is not an HTTP call?
The app map finds components by following HTTP dependency calls made between servers with the Application Insights SDK installed.
UPDATE 2
In the end I gave up on this. In conclusion, Application Insights traces some things, but it is very unclear when and how that works, and it also depends on the language. The Node.js docs say:
The Node.js client library can automatically monitor incoming and outgoing HTTP requests, exceptions, and some system metrics. Beginning in version 0.20, the client library also can monitor some common third-party packages, like MongoDB, MySQL, and Redis. All events related to an incoming HTTP request are correlated for faster troubleshooting.
I solved this by taking inspiration from OpenTracing. Our entire stack runs in Azure Functions, so I implemented logic that passes a correlationId through all processes. Each process is a span, and each function/process is responsible for logging according to a structured logging framework.
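As a rough sketch of that approach (the envelope shape and header name are illustrative conventions, not an Application Insights feature), the correlationId travels inside the Service Bus message body:

const { randomUUID } = require('crypto')

// Function1: embed a correlationId in the outgoing Service Bus message body.
module.exports = async function (context, req) {
  const correlationId = req.headers['x-correlation-id'] || randomUUID()
  context.log(`[${correlationId}] enqueueing message`)
  context.bindings.outputSbQueue = JSON.stringify({
    correlationId,
    payload: req.query.input
  })
}

// Function2 (Service Bus trigger): unwrap the envelope and log under the same id,
// so the whole chain can be queried on one correlationId in log analytics.
module.exports = async function (context, message) {
  const { correlationId, payload } = JSON.parse(message)
  context.log(`[${correlationId}] processing message`, payload)
}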
I am trying to send a message without a MessageGroupId, because I basically don't need one. I have a few microservices running that should be able to read from the queue at any time, and if I use the same group ID for everything it means only one service can read the messages, one by one.
Generating a UUID as the MessageGroupId sounds like bad practice.
Is there a way to omit the MessageGroupId, or to send a default value that won't act as a MessageGroupId?
const params = {
  MessageDeduplicationId: `${uuidv1()}`,
  MessageBody: JSON.stringify({
    name: 'Ben',
    lastName: 'Beri',
  }),
  QueueUrl: `https://sqs.us-east-1.amazonaws.com/${accountId}/${queueName}`,
};

sqs.sendMessage(params, (err, data) => {
  if (err) {
    console.log('error! ' + err.message);
    return;
  }
  console.log(data.MessageId);
});
error! The request must contain the parameter MessageGroupId.
You can't send a message to a FIFO queue without a MessageGroupId. If you want messages to be picked up sequentially, use the same MessageGroupId for all messages; otherwise, use a unique value for each.
What are the implications you are facing with using a UUID as the MessageGroupId?
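For what it's worth, a minimal sketch of the unique-value option, reusing the params from the question; a fresh UUID per message means no two messages share a group, so consumers are not serialized against each other:

const params = {
  MessageGroupId: `${uuidv1()}`,          // unique per message: no cross-message ordering
  MessageDeduplicationId: `${uuidv1()}`,
  MessageBody: JSON.stringify({
    name: 'Ben',
    lastName: 'Beri',
  }),
  QueueUrl: `https://sqs.us-east-1.amazonaws.com/${accountId}/${queueName}`,
};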
I'm sure this kind of problem has been solved here many times, but I can't find how those questions were formulated.
I have a microservice that handles the communication between my infrastructure and an MQTT broker. Every time an HTTP request is received, I send a "Who is alive in room XXX?" message to the MQTT broker; every client registered on the "XXX/alive" topic has to answer, and I wait Y milliseconds before closing the request by sending the responses received back to the client.
It works well when I'm handling one request, but it breaks down when more than one request arrives at a time.
Here is the Express route handling the HTTP requests:
app.get('/espPassports', (req, res) => {
  mqttHelper.getESPPassports(req.query.model_Name).then((passports) => {
    res.send(passports).end();
  }).catch(err => {
    res.send(err).end();
  })
})
Here is how getESPPassports works:
  getESPPassports: async (model_Name) => {
    return new Promise((resolve, reject) => {
      // Say there is a request performed
      ongoing_request.isOpen = true;
      ongoing_request.model_Name = model_Name;
      // Ask who is alive
      con.publish(topic, "ASK");
      setTimeout(() => {
        // If no answer after given timeout
        if (ongoing_request.passports.length == 0) {
          reject({ error: "No MQTT passports found" });
        // Else send a deep clone of the answers (else it's empty)
        } else {
          resolve(JSON.parse(JSON.stringify(ongoing_request.passports)));
        }
        // Delete the current request object and 'close' it
        ongoing_request.passports.length = 0;
        ongoing_request.isOpen = false;
        ongoing_request.model_Name = ""
      }, process.env.mqtt_timeout || 2000)
    })
  }
};
And here is the MQTT listener:
con.on("message", (topic, message) => {
  // If a passport is received, check the topic and whether a request is open
  if (_checkTopic(topic) && ongoing_request.isOpen) {
    try {
      ongoing_request.passports.push(JSON.parse(message));
    } catch (error) {
      // do stuff if error
    }
  }
})
I know the problem comes from the boolean I'm using to mark whether a request is ongoing. I was thinking of creating an object for each new request and identifying them by a unique id (like a timestamp), but I have no way to make the MQTT listeners aware of this unique id.
I have some other solutions in mind, but I'm not sure they'll work, and I feel like there is a clean way to handle this that I don't know about.
Have a good day.
You need to generate a unique id for each request and include it in the MQTT message; you can then cache the Express response object keyed by the unique id (see the sketch after this answer).
The devices need to include the unique id in their responses so each answer can be paired up with the right request.
The other approach is just to cache responses from the devices and assign the cache a Time to Live so you don't need to ask the devices every time.
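A minimal sketch of the first approach, assuming the con, topic, and _checkTopic names from the question; the JSON envelope that the devices would have to echo back is an illustrative convention:

const { randomUUID } = require('crypto');

// Pending requests keyed by a generated id, instead of one shared boolean.
const pending = new Map();

function getESPPassports(model_Name) {
  return new Promise((resolve, reject) => {
    const requestId = randomUUID();
    pending.set(requestId, []);
    // Include the id in the ASK so devices can echo it back.
    con.publish(topic, JSON.stringify({ type: "ASK", requestId, model_Name }));
    setTimeout(() => {
      const passports = pending.get(requestId);
      pending.delete(requestId); // close this request only
      if (passports.length == 0) {
        reject({ error: "No MQTT passports found" });
      } else {
        resolve(passports);
      }
    }, process.env.mqtt_timeout || 2000);
  });
}

con.on("message", (topic, message) => {
  if (!_checkTopic(topic)) return;
  try {
    const { requestId, passport } = JSON.parse(message);
    const passports = pending.get(requestId); // undefined if already closed
    if (passports) passports.push(passport);
  } catch (error) {
    // do stuff if error
  }
});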
I would like to prevent registration with an email address that already exists. Is it possible to use express-validator's new syntax for this? For example:
router.post('/register', [
  check('email').custom((value, { req }) => {
    return new Promise((resolve, reject) => {
      Users.findOne({ email: req.body.email }, function (err, user) {
        if (err) {
          return reject(new Error('Server Error'))
        }
        if (user) {
          return reject(new Error('E-mail already in use'))
        }
        resolve(true)
      });
    });
  })
]
....
How would I pass Users?
express-validator is only aware of the request object itself, which keeps its complexity pretty low for the end user.
More importantly, it only truly knows about the request's input locations -- body, cookies, headers, query and params.
Your custom validator is completely correct. That said, it might not be testable, as you seem to be depending on global context.
In order to make it testable, the two options I see are:
1. Inject req.Users:
This one would involve using some middleware that sets your store objects onto req:
// Validator definition
const emailValidator = (value, { req }) => {
  return req.Users.findOne({ email: value }).then(...);
}

// In production code
// Sets req.Users, req.ToDo, req.YourOtherBusinessNeed
app.use(myObjectsStore.middleware);
app.post('/users', check('email').custom(emailValidator));

// In tests
req = { Users: MockedUsersObject };
expect(emailValidator('foo@bar.com', { req })).rejects.toThrow('email exists');
2. Write a factory function that returns an instance of your validator:
This is my preferred solution, as it doesn't involve using the request object anymore.
// Validator definition
const createEmailValidator = Users => value => {
  return Users.findOne({ email: value }).then(...);
};

// In production code
app.post('/users', [
  check('email').custom(createEmailValidator(myObjectsStore.Users)),
]);

// Or in tests
expect(createEmailValidator(MockedUsersObject)('foo@bar.com')).rejects.toThrow('email exists');
Hope this helps!
Converting my comments into a final, conclusive answer here:
A validator is simply supposed to validate the fields of request entities against given criteria of data type, length, or pattern.
You need to write the method yourself to determine whether the user already exists. express-validator (or rather, any validator) will not do the job of checking whether an item exists in your list of items (or your data source), nor should it interact with the data source at all.
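For illustration, here is one hedged way the elided .then(...) body from the factory-function option above could look, assuming a Mongoose-style Users model whose findOne returns a promise; the error message is illustrative:

// Reject when a user with the given email already exists.
const createEmailValidator = Users => value =>
  Users.findOne({ email: value }).then(user => {
    if (user) {
      throw new Error('E-mail already in use');
    }
    return true;
  });

// Usage, as in the production example above:
app.post('/users', [
  check('email').custom(createEmailValidator(myObjectsStore.Users)),
]);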