I have a few Zeit micro services. This setup is a RESTful API for multiple frontends/domains/clients
I need to differentiate between these clients in the configs that are spread throughout the apps. In my handlers I could set a process.env.CLIENT_ID, for example, that my config handler uses to know which config to load. However, this would mean launching a new http/micro process for each requesting domain (or whatever method I use; info such as the client id will probably come in a header) in order to keep process.env.CLIENT_ID stable throughout the request and not have it overwritten by a simultaneous request from another client.
So each microservice would have to check the client ID, determine whether it has already launched a process for that client, and use it, or else launch a new one.
This seems messy, but I'm not sure how else to handle things. Passing the client id around with every call (i.e. getConfig(client, key)) is not practical in my situation and I would like to avoid it.
Options:
Pass client id around everywhere
Launch new process per host
?
Is there a better way or have I made a mistake in my assumptions?
If the process-per-client approach is the better way, I am wondering if there is an existing solution to manage this? I've looked at http-proxy, micro-cluster, etc. but none seem to provide a solution to this issue.
Well, I found this nice tool: https://github.com/othiym23/node-continuation-local-storage
// Micro handler
const { createNamespace } = require('continuation-local-storage')

const namespace = createNamespace('foo')

const handler = async (req, res) => {
  // e.g. read the client id from a header or the host (header name is a placeholder)
  const clientId = req.headers['client-id']
  namespace.run(() => {
    namespace.set('clientId', clientId)
    someCode()
  })
}
// Some other file
const { getNamespace } = require('continuation-local-storage')

const someCode = () => {
  const namespace = getNamespace('foo')
  console.log(namespace.get('clientId'))
}
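Worth noting: on newer Node versions (12.17+), the built-in AsyncLocalStorage from the async_hooks module covers the same use case without a third-party dependency. A minimal sketch (the header name is a placeholder, as above):

// Built-in alternative (Node 12.17+), same request-scoped context idea
const { AsyncLocalStorage } = require('async_hooks')
const als = new AsyncLocalStorage()

const someCode = () => {
  // getStore() returns whatever object run() was started with
  console.log(als.getStore().clientId)
}

const handler = async (req, res) => {
  const clientId = req.headers['client-id'] // hypothetical header name
  als.run({ clientId }, () => someCode())
}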
Related
I am required to save a log of each request and response made to the backend into a MySQL database. The issue is that we are migrating to a microservices architecture. The backend was made with NodeJS and Express, and it has a middleware that does this task. Currently, this middleware is attached to each microservice.
I would like to isolate this middleware as its own microservice. The issue is that I don't know how to redirect the traffic this way.
I would like to do it this way because we can make changes or add features to the middleware without having to implement them in each microservice. This is the middleware's code:
const connection = require("../database/db");

// Express middleware that captures the response body by wrapping
// res.write / res.end, then logs the request and response to MySQL.
const viewLog = (req, res, next) => {
  const oldWrite = res.write,
    oldEnd = res.end,
    chunks = [],
    now = new Date();

  res.write = function (chunk) {
    chunks.push(Buffer.from(chunk)); // normalize strings to Buffers
    return oldWrite.apply(res, arguments);
  };

  res.end = function (chunk, error) {
    if (chunk) chunks.push(Buffer.from(chunk));
    const bodyRes = Buffer.concat(chunks).toString("utf8");
    connection.query("CALL HospitalGatifu.insertLog(?,?,?,?,?)", [
      `[${req.method}] - ${req.url}`,
      `${JSON.stringify(req.body) || "{}"}`,
      bodyRes,
      res.statusCode === 400 ? 1 : 0, // error flag for the stored procedure
      now,
    ]);
    return oldEnd.apply(res, arguments);
  };

  next();
};

module.exports = viewLog;
I think there might be a way to manage this with Nginx, which is the reverse proxy we are using. I would like to get an idea of how to restructure the logging middleware.
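For illustration, here is a rough sketch (not production-ready; host, port, and the logging call are assumptions) of what such a standalone log service could look like: a plain Node pass-through proxy that Nginx could route traffic through before it reaches a microservice:

// Hypothetical standalone log service: proxies every request to the real
// microservice and records request/response bodies along the way.
const http = require("http");

const TARGET_HOST = "microservice-a"; // placeholder upstream host
const TARGET_PORT = 3000;             // placeholder upstream port

http.createServer((req, res) => {
  const reqChunks = [];
  req.on("data", (chunk) => reqChunks.push(chunk));
  req.on("end", () => {
    const reqBody = Buffer.concat(reqChunks);
    const proxyReq = http.request(
      { host: TARGET_HOST, port: TARGET_PORT, path: req.url, method: req.method, headers: req.headers },
      (proxyRes) => {
        const resChunks = [];
        proxyRes.on("data", (chunk) => resChunks.push(chunk));
        proxyRes.on("end", () => {
          const resBody = Buffer.concat(resChunks);
          // Here you would call the same stored procedure as in the middleware,
          // e.g. connection.query("CALL HospitalGatifu.insertLog(?,?,?,?,?)", ...)
          console.log(`[${req.method}] - ${req.url}`, reqBody.toString(), resBody.toString());
          res.writeHead(proxyRes.statusCode, proxyRes.headers);
          res.end(resBody);
        });
      }
    );
    proxyReq.on("error", () => {
      res.statusCode = 502; // don't take the app down if proxying fails
      res.end();
    });
    proxyReq.end(reqBody);
  });
}).listen(8080);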
Perhaps you might want to take a look at the sidecar pattern, which is used in microservice architectures for common tasks like logging.
In short, a sidecar runs in a container alongside your microservice container. One task of the sidecar could be intercepting network traffic and logging requests and responses (among many other possible tasks). The major advantage of this pattern is that you don't need to change any code in your microservices, and you don't have to manage traffic redirection yourself; the latter is handled by the sidecar itself.
The disadvantage is that you are required to run your microservices containerized and use some kind of container orchestration solution. I assume this is the case, since you are moving towards a microservices-based application.
One question about the log service sitting between the web app and the Nginx server: what if the logging service goes down for some reason? Is it acceptable for the entire application to go down?
Let me give you not exactly what you requested, but something to think about.
I can think of 3 solutions for the issue of logging in microservices, each one with its own advantages and disadvantages:
Create a shared library that handles the logs. I think it's the best choice in most cases; see the sketch after this list. An article I wrote about shared libraries
You can create an API gateway; it is a great solution for logic shared across all requests. It will probably be more work, but it can then be reused for other shared logic. Further reading (not written by me :) )
A third option (which I personally don't like) is to create a log microservice that listens for a LogEvent or something like that, and have your microservices publish this event whenever needed.
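To make option 1 concrete, here is a minimal sketch of extracting the logging middleware into a shared package (the package name is made up); exporting a factory keeps the library decoupled from any single service's database module:

// Hypothetical shared package, e.g. "@yourorg/view-log"
// index.js
module.exports = function createViewLog(connection) {
  return function viewLog(req, res, next) {
    // ...the same capture-and-log logic as the middleware shown earlier...
    next();
  };
};

// In each microservice:
// const viewLog = require("@yourorg/view-log")(require("./database/db"));
// app.use(viewLog);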
I have a library where I am making the API calls. The client used to make the calls is passed an agent created with HttpsAgent from agentkeepalive.
The edge case I am trying to handle is that when the endpoint I am trying to reach matches, let's say, https://somedomain.com, because of an SSL certificate issue I need to update the servername/hostname of the request so it will go through.
The problem is that I know how to do that through nodeHTTPS.Agent, and now I need to find a way to preserve the initial configuration passed to the agent in the first place. Consider this example:
// node api
const response = await makeReq({
  ...etcOptions,
  agent: keepAliveAgent // agentkeepalive instance with some configuration
})

// now, in the make-request library
function makeReq(options) {
  if (options.address.match(/^https:\/\/(.*)\.somedomain\.com$/)) {
    // here, either I do something like: if options.agent -> options.agent.servername = "*.etcsomedomain.com"
    // or
    options.agent = new nodeHTTPS.Agent({
      ...(options.agent && { ...options.agent.options }), // trying to retrieve the initial options passed to this agent
      servername: '*.etcsomedomain.com'
    });
  }
}
Another problem is that the passed agent.options structure might be different, since that agent is created through a library that might differ from nodeHTTPS.Agent.
Any idea how I can handle this edge case correctly?
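Not an authoritative answer, but one defensive sketch: Node's core agents keep their constructor options on agent.options, so you can spread those when they exist and fall back to an empty object for agents from libraries that don't expose them:

const nodeHTTPS = require("https");

// Sketch: rebuild the agent, preserving prior options where available.
// Assumption: the incoming agent exposes its constructor options on
// agent.options (true for core Node agents and agentkeepalive).
function withServername(agent, servername) {
  const baseOptions = agent && agent.options ? agent.options : {};
  return new nodeHTTPS.Agent({ ...baseOptions, servername });
}

// In makeReq:
// options.agent = withServername(options.agent, "*.etcsomedomain.com");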
Greetings, Stack Overflow.
I've been using Stack Overflow for years to find answers, and this is my first attempt at asking a question myself, so feel free to tell me if I'm doing it the wrong way.
Currently I'm developing a data analytics system based on a microservice architecture.
It is assumed that this system will consist of a dozen self-sufficient microservices communicating with each other via RabbitMQ. Each of them is encapsulated in a Docker container, and the whole system is powered by Docker Swarm in production.
In particular, each microservice is a node.js application plus a related database, connected through some ORM interface. Its task is to manage and serve data in a CRUD manner, and to provide the results of some prepared queries based on the contained data. Nothing extraordinary.
To provide microservice-to-microservice communication I intend to use amqplib, but the way to use it is still uncertain.
My current question is how to make use of amqplib in an OOP manner to link the inter-microservice communication network with the application's object-related functionality. By OOP manner, I mean the possibility to replace amqplib (and RabbitMQ itself) in the long run without the need to make changes to the data-related logic.
What I'm really searching for is an example of a currently working microservice application utilizing AMQP. I'd very much appreciate it if somebody could give a link to one.
And the second part of my question.
Does it make sense to build a microservice application based on event-driven principles, and just pass messages from RabbitMQ to the application's main event queue? That way each procedure would be called the same way, regardless of whether it was triggered by an internal or external event.
As for an abstract example of a single microservice:
Let's say I have an event service and a listener connected to this service:
const EventEmitter = require("events");

class UserManager {
  constructor(eventService) {
    this.eventService = eventService;
    this.eventService.on("users.user.create-request", (payload) => {
      User.create(payload); // User interface is omitted in this example
    });
  }
}

const eventService = new EventEmitter();
const userManager = new UserManager(eventService);
On the other hand, I've got a RabbitMQ connection that is waiting for messages:
const amqp = require('amqplib');

amqp.connect('amqp-service-in-docker').then(connection => {
  connection.createChannel().then(channel => {
    // Here we use the topic exchange type to be able to filter only related messages
    channel.assertExchange('some-exchange', 'topic');
    channel.assertQueue('', { exclusive: true }).then(queue => {
      // And here we wait for only the related messages
      // ('users.#' matches multi-segment keys like 'users.user.create-request';
      // 'users.*' would match a single segment only)
      channel.bindQueue(queue.queue, 'some-exchange', 'users.#');
      channel.consume(queue.queue, message => {
        // And here is the crucial part
      });
    });
  });
});
What I'm currently thinking of is to just parse and forward this message to eventService and use its routing key as the name of the event, like this:
channel.consume(queue.queue, message => {
  const eventName = message.fields.routingKey;
  const eventPayload = JSON.parse(message.content.toString());
  eventService.emit(eventName, eventPayload);
});
But what about RPCs? Should I make another exchange, or even another channel for them, with another approach, something like:
// In RPC channel
channel.consume(queue.queue, message => {
  eventService.once('users.user.create-response', response => {
    const recipient = message.properties.replyTo;
    const correlationId = message.properties.correlationId;
    // Send the response to the specified recipient
    channel.sendToQueue(
      recipient,
      Buffer.from(JSON.stringify(response)),
      {
        correlationId: correlationId
      }
    );
    channel.ack(message);
  });
  // Same thing
  const eventName = message.fields.routingKey;
  const eventPayload = JSON.parse(message.content.toString());
  eventService.emit(eventName, eventPayload);
});
And then my User class should fire a 'users.user.create-response' event every time it creates a new user. Isn't this a crutch?
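Not a definitive design, but one way to get the "replaceable amqplib" property described above is a thin adapter that speaks the EventEmitter interface on one side and amqplib on the other; the class name and wiring below are made up:

const EventEmitter = require('events');
const amqp = require('amqplib');

// Domain code only ever sees the EventEmitter API (on/once/emit/publish),
// so swapping RabbitMQ for another broker means rewriting this class only.
class AmqpEventService extends EventEmitter {
  static async connect(url, exchange) {
    const connection = await amqp.connect(url);
    const channel = await connection.createChannel();
    await channel.assertExchange(exchange, 'topic');
    const { queue } = await channel.assertQueue('', { exclusive: true });
    await channel.bindQueue(queue, exchange, '#');

    const service = new AmqpEventService();
    service.channel = channel;
    service.exchange = exchange;

    await channel.consume(queue, message => {
      // Route every broker message through the local emitter,
      // mirroring the routing-key-as-event-name idea above.
      service.emit(message.fields.routingKey, JSON.parse(message.content.toString()));
      channel.ack(message);
    });
    return service;
  }

  publish(eventName, payload) {
    this.channel.publish(this.exchange, eventName, Buffer.from(JSON.stringify(payload)));
  }
}

// const eventService = await AmqpEventService.connect('amqp-service-in-docker', 'some-exchange');
// const userManager = new UserManager(eventService);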
I am using this library (contentful-export) in my express app like so:
const express = require('express');
const app = express();
...
app.get('/export', (req, res, next) => {
  const contentfulExport = require('contentful-export');
  const options = {
    ...
  };
  contentfulExport(options).then((result) => {
    res.send(result);
  });
});
Now, this does work, but the method takes a bit of time and sends status/progress messages to the node console. I would like to keep the user updated as well. Is there a way I can send the node console progress messages to the client?
This is my first time using node/express, so any help would be appreciated. I'm not sure if this already has an answer, since I'm not entirely sure what to call it.
Looking at the documentation for contentful-export, I don't think this is possible. The way this usually works in Node is that you have an object (contentfulExport in this case); you call a method on this object, and the same object is also an EventEmitter. This way you'd get a hook to react to fired events.
// pseudo code
someLibrary.on('someEvent', (event) => { /* do something */ })
someLibrary.doLongRunningTask()
  .then(/* ... */)
This is not documented for contentful-export, so I assume there is no way to hook into the log messages that are sent to the console.
Your question has another tricky angle, though. In the code you shared you have a single endpoint (/export). If you would like to display updates or show some progress, you'd probably need a second endpoint giving information about the progress of your long-running task (which you cannot get from contentful-export, though).
The way this is usually handled is that you kick off a long-running task via one HTTP endpoint and then use another endpoint that serves info via polling or a web socket connection (rough sketch below).
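A rough sketch of that pattern (all names are placeholders; since contentful-export exposes no progress hook, the job here can only report running/done/failed, not granular progress):

const express = require('express');
const crypto = require('crypto');
const app = express();

const jobs = new Map(); // jobId -> { status, result? }

app.get('/export', (req, res) => {
  const jobId = crypto.randomUUID();
  jobs.set(jobId, { status: 'running' });
  const contentfulExport = require('contentful-export');
  contentfulExport({ /* options as in the question */ })
    .then(result => jobs.set(jobId, { status: 'done', result }))
    .catch(() => jobs.set(jobId, { status: 'failed' }));
  res.json({ jobId }); // respond immediately; the client polls below
});

app.get('/export/:jobId/status', (req, res) => {
  res.json(jobs.get(req.params.jobId) || { status: 'unknown' });
});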
Sorry that I can't give a proper solution, but due to the limitations of contentful-export I don't think there is a clean/easy way to show the progress of the exported data.
Hope that helps. :)
This is a follow-up question to my issue outlined here.
The Gateway serves as an entry point to the application, to which every request from the client is made. The gateway then allocates the request to the responsible microservices and also handles authentication.
In this case the gateway listens for HTTP POST /book and notifies Microservice A to create a book. Thus Microservice A is responsible for managing and storing everything about the book entity.
The following pseudo-code is a simplified implementation of this architecture:
Queue Communication
Gateway
router.post('/book', (req, res) => {
  queue.publish('CreateBook', req.body);
  queue.consume('BookCreated', (book) => {
    const user = getUserFromOtherMicroService(book.userId);
    res.json({ book, user });
  });
});
Microservice A
queue.consume('CreateBook', (payload) => {
  const book = createBook(payload);
  eventStore.insert('BookCreated', book);
  const createdBook = updateProjectionDatabase(book);
  queue.publish('BookCreated', createdBook);
});
But I am not quite sure about this, for the following reasons:
The listener for consuming BookCreated in the Gateway will be recreated every time a user requests to create a new book
What if 2 users create a book simultaneously and the wrong book is returned to one of them?
I don't know how to fetch additional data (e.g. getUserFromOtherMicroService)
That's why I thought about implementing this architecture:
Direct and Queue Communication
Gateway
router.post('/book', async (req, res) => {
  const book = await makeHttpRequest('microservice-a/create-book', req.body);
  const user = await makeHttpRequest('microservice-b/getUser', book.userId);
  res.json({ book, user });
});
Microservice A
router.post('/create-book', (req, res) => {
  const book = createBook(req.body);
  eventStore.insert('BookCreated', book);
  const createdBook = updateProjectionDatabase(book);
  queue.publish('BookCreated', createdBook);
  res.json(createdBook);
});
But I am also not really sure about this implementation because:
Don't I violate CQRS when I return the book after creation? (because I should only return OK or ERROR)
Isn't it inefficient to make another HTTP request in a microservices system?
Based on the comments above:
Approach 1
In this case your API gateway will be used to drop the message in the queue. This approach is more appropriate if your process is going to take a long time and you have queue workers sitting behind it to pick up the messages and process them. But your client side has to poll to get the results. Say you are looking for an airline ticket: you drop the message, you get an ID to poll with, and your client keeps polling until the results are available.
But in this case you will have a challenge: as you drop the message, how are you going to generate the ID that the client would poll? Do you assign the ID to the message at the Gateway, drop it in the queue, and return the same ID for the client to poll to get the result (a sketch of this follows below)? Again, this approach is good for a web/worker kind of scenario.
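A rough sketch of that idea, continuing the pseudo-code style from the question (resultStore is a hypothetical lookup that a BookCreated consumer would populate):

const crypto = require('crypto');

router.post('/book', (req, res) => {
  const correlationId = crypto.randomUUID(); // the Gateway assigns the ID
  queue.publish('CreateBook', { ...req.body, correlationId });
  res.status(202).json({ correlationId }); // client polls with this ID
});

router.get('/book/status/:correlationId', async (req, res) => {
  const result = await resultStore.get(req.params.correlationId);
  res.json(result || { status: 'pending' });
});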
Approach 2
Since your API gateway is a custom application that handles the authentication and redirects the request to the respective service, your Microservice A would create the book and publish the event, and your Microservices B and C would consume it. Your Gateway will wait for Microservice A to return a response with the ID (or event metadata of the newly created object) of the book that was created, so you don't have to poll for it later and the client has it. If you want additional information from other microservices, you can fetch it at this time and send back an aggregated response.
Any data that is available in Microservices A, B, or C you will get via the Gateway. Make sure your gateway is highly available.
Hope that helps. Let me know if you have any questions!