WCF server (net.tcp) against a back end running non-thread-safe unmanaged code - multithreading

I've been tasked with writing a WCF service host (server) for an existing (session-full) service -- not hard so far. The service is an ADO proxy server that proxies ADO connections to various back-end databases. This works well in most cases, but one of the ADO.NET data providers I need to support is implemented as a driver over an unmanaged (C) API that is not thread-safe.
The preferred solutions (making the code thread-safe, or implementing a thread-safe managed driver) are off the table right now. It's been suggested that I could spin up multiple processes as a sort of back-end or second-level proxy, but this struck me as a nightmare to implement when I first heard it, and even more so once I did a trial implementation.
My question is: is there another solution I'm missing here? I've played around so far with ConcurrencyMode.Single and UseSynchronizationContext = true, but the real heart of the matter is being both session-full and having a non-thread-safe back end. So far no luck. Am I stuck with proxying the connection through multiple processes, or can someone suggest a more elegant solution?
Thanks!

What I would do (actually, I have been in this very situation myself) is spin up a dedicated thread that services all requests dispatched to the unmanaged API. The thread sits there waiting for a request message; the request message instructs the thread to do something with the API. Once the thread has finished processing a request, it constructs a response message containing the returned data. The pattern is super easy if you use a BlockingCollection to queue the request and response messages.
using System.Collections.Concurrent;
using System.Threading;

public class SingleThreadedApiAbstraction
{
    private readonly BlockingCollection<Request> requests = new BlockingCollection<Request>();
    private readonly BlockingCollection<Response> responses = new BlockingCollection<Response>();
    private readonly object gate = new object();

    public SingleThreadedApiAbstraction()
    {
        var thread = new Thread(Run);
        thread.IsBackground = true; // Don't keep the process alive on shutdown.
        thread.Start();
    }

    public /* Return Type */ SomeApiMethod(/* Parameters */)
    {
        // Serialize callers so that two concurrent calls cannot
        // take each other's responses off the queue.
        lock (gate)
        {
            var request = new Request(/* Parameters */);
            requests.Add(request);           // Submit the request.
            var response = responses.Take(); // Block until the response arrives.
            return response.ReturnValue;
        }
    }

    private void Run()
    {
        while (true)
        {
            var request = requests.Take(); // Block until a request arrives.
            // Forward the request parameters to the unmanaged API,
            // then wrap the returned data in a response object.
            var response = new Response(/* Returned Data */);
            responses.Add(response); // Publish the response.
        }
    }
}
The idea is that the API is only ever accessed from this one dedicated thread; it does not matter how, or from where, SomeApiMethod is called. It is important to note that Take blocks while its queue is empty. That blocking is where the magic happens.

Related

Node.js: prevent a new request before sending the response to the last request

How can I prevent new requests from coming in before the response to the last request has been sent? In other words, I want to process only one request at a time.
app.get('/get', function (req, res) {
  // Stop new requests from entering here.
  someAsyncFunction(function (result) {
    res.send(result);
    // New requests can enter now.
  });
});
Even though I agree with jfriend00 that this might not be the optimal way to do it, if you decide it's the way to go, I would just use some kind of state management to check whether that /get request may be accessed, and return a different response if it may not.
You can use your database to do this. I strongly recommend Redis, because it's in-memory and really quick, so it's super convenient. You can use MongoDB or MySQL if you prefer, but Redis would be the best. This is how it would look, abstractly:
Let's say you have an entry in your database called isLoading, set to false by default.
app.get('/get', function (req, res) {
  // Get isLoading from your state management of choice and check its value.
  if (isLoading == true) {
    // If the app is busy, notify the client that it should wait.
    // You can check for the status code in your client and react accordingly.
    return res.status(226).json({ message: "I'm currently being used, hold on" });
  }
  // Code below executes only if isLoading is not true.
  // Set your isLoading DB variable to true, then proceed to do what you have to.
  isLoading = true;
  someAsyncFunction(function (result) {
    // Only after this is done is isLoading set to false,
    // and someAsyncFunction can be run again.
    isLoading = false;
    return res.send(result);
  });
});
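For reference, here is a minimal sketch of the same flow against an actual Redis instance (assuming the node-redis v4 client, and assuming someAsyncFunction returns a promise here), mirroring the isLoading logic above:
const { createClient } = require('redis');

const redis = createClient();
redis.connect(); // In real code, await this before serving requests.

app.get('/get', async function (req, res) {
  // Read the flag from Redis instead of a local variable.
  if (await redis.get('isLoading') === 'true') {
    return res.status(226).json({ message: "I'm currently being used, hold on" });
  }
  // Note: GET-then-SET is not atomic; under real concurrency you would
  // use redis.set('isLoading', 'true', { NX: true }) instead.
  await redis.set('isLoading', 'true');
  const result = await someAsyncFunction();
  await redis.set('isLoading', 'false');
  res.send(result);
});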
Hope this helps
Uhhhh, servers are designed to handle multiple requests from multiple users, so while one request is being processed with asynchronous operations, other requests can be processed. Without that, they don't scale beyond a few users. That is the design of any server framework for node.js, including Express.
So, whatever problem you're actually trying to solve, that is NOT how you should solve it.
If you have some sort of concurrency issue that is pushing you to ask for this, then please share the ACTUAL concurrency problem you need to solve because it's much better to solve it a different way than to handicap your server into one request at a time.

How to implement RabbitMQ in a node.js microservice app the right way?

Greetings, Stack Overflow.
I've been using Stack Overflow for years to find answers, and this is my first attempt at asking a question myself, so feel free to tell me if I'm doing it the wrong way.
Currently I'm developing a data analytical system based on microservice architecture.
It is assumed that this system will consist of a dozen self-sufficient microservices communicating with each other via RabbitMQ. Each is encapsulated in a Docker container, and the whole system is powered by Docker Swarm in production.
In particular, each microservice is a node.js application with an associated database, connected through some ORM interface. Its task is to manage and serve data in a CRUD manner, and to provide the results of some prepared queries based on the contained data. Nothing extraordinary.
To provide microservice-to-microservice communication I plan to use amqplib, but the way to use it is still uncertain.
My current question is how to make use of amqplib in an OOP manner to link the inter-microservice communication network with the application's object-related functionality. By an OOP manner, I mean the possibility of replacing amqplib (and RabbitMQ itself) in the long run without having to change the data-related logic.
What I'm really searching for is an example of a currently working microservice application utilizing AMQP. I'd pretty much appreciate it if somebody could give a link to one.
And the second part of my question.
Does it make sense to build a microservice application on event-driven principles, and just pass messages from RabbitMQ into the application's main event queue? That way every procedure would be called the same way, regardless of whether the event is internal or external.
As for the abstract example of single microservice:
Let's say I have an event service and a listener connected to this service:
class UserManager {
  constructor(eventService) {
    this.eventService = eventService;
    this.eventService.on("users.user.create-request", (payload) => {
      User.create(payload); // User interface is omitted in this example
    });
  }
}

const EventEmitter = require('events');
const eventService = new EventEmitter();
const userManager = new UserManager(eventService);
On the other hand I've got RabbitMQ connection, that is waiting for messages:
const amqp = require('amqplib');

amqp.connect('amqp-service-in-docker').then(connection => {
  connection.createChannel().then(channel => {
    // Here we use the topic type of exchange to be able to filter only related messages
    channel.assertExchange('some-exchange', 'topic');
    channel.assertQueue('').then(queue => {
      // And here we receive only the related messages
      channel.bindQueue(queue.queue, 'some-exchange', 'users.*');
      channel.consume(queue.queue, message => {
        // And here is the crucial part
      });
    });
  });
});
What I'm currently thinking of is to just parse this message and forward it to eventService, using its routing key as the name of the event, like this:
channel.consume(queue.queue, message => {
  const eventName = message.fields.routingKey;
  const eventPayload = JSON.parse(message.content.toString());
  eventService.emit(eventName, eventPayload);
});
But what about RPCs? Should I make another exchange, or even another channel, for them, with an approach like this:
// In the RPC channel
channel.consume(queue.queue, message => {
  eventService.once('users.user.create-response', response => {
    const recipient = message.properties.replyTo;
    const correlationId = message.properties.correlationId;
    // Send the response to the specified recipient
    channel.sendToQueue(
      recipient,
      Buffer.from(JSON.stringify(response)),
      {
        correlationId: correlationId
      }
    );
    channel.ack(message);
  });
  // Same thing as before
  const eventName = message.fields.routingKey;
  const eventPayload = JSON.parse(message.content.toString());
  eventService.emit(eventName, eventPayload);
});
And then my User class would have to fire a 'users.user.create-response' event every time it creates a new user. Isn't this a crutch?

Keep a nodejs request waiting until the first one is completed

I have a situation with a nodejs API. What I want to achieve is: when the same user hits the same API at the same time, I want to block or queue his second request until the first is completed.
PS: I want to apply this only to requests from the same user.
Thanks in advance.
I am not sure doing anything on the server side (like semaphores) will solve this issue if the app is both stateless and going to be scaled horizontally in production over two or more replicas.
All the Pods (app servers) would have to maintain the same semaphore value for the endpoint being used.
I think you can achieve the same mechanism with a database flag, or use Redis to indicate that the operation is in progress on one of the app servers.
It is as good as having sessions (in terms of maintaining a certain state) for each client request.
You will also need a recovery mechanism to reset the flag if the operation carried out by that endpoint fails or crashes.
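To make that concrete, here is a minimal, hedged sketch of such a per-user Redis flag (node-redis v4 client assumed; the key name and the 30-second timeout are arbitrary choices). The EX expiry is the recovery mechanism: if a handler crashes without releasing the flag, Redis clears it automatically, and since all replicas share the same Redis, the flag holds across Pods:
// Assumes `redis` is an already-connected node-redis v4 client.
// NX sets the flag only if it is not already set; EX auto-expires it
// after 30s so a crashed handler cannot wedge the user forever.
async function acquireUserFlag(redis, userId) {
  const ok = await redis.set('lock:user:' + userId, '1', { NX: true, EX: 30 });
  return ok !== null; // null means another replica already holds the flag.
}

async function releaseUserFlag(redis, userId) {
  await redis.del('lock:user:' + userId);
}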
You can do this using a semaphore. The semaphore is kept per client: each client gets a semaphore with a capacity of one, so while the server is processing that client's request it locks out further requests, and after responding the lock is released.
Demo:
const semaphore = require('semaphore');
let clientSemaphores = {};

var server = require('http').createServer(function (req, res) {
  var client = req.url.split("/")[1]; // Client id is taken from the URL.
  console.log(client, " request received");
  if (!clientSemaphores[client] || clientSemaphores[client].current < clientSemaphores[client].capacity) {
    clientSemaphores[client] = clientSemaphores[client] || semaphore(1); // One slot per client.
    clientSemaphores[client].take(function () {
      setTimeout(() => {
        res.write(client + " Then good day, madam!\n");
        res.end(client + " We hope to see you soon for tea.");
        clientSemaphores[client].leave(); // Release the lock after responding.
      }, 5000);
    });
  } else {
    res.end(client + " Request already processing... please wait...");
  }
});
server.listen(8000);
OR
HTTP Pipelining
Persistent HTTP allows us to reuse an existing connection between multiple application requests, but it implies a strict first in, first out (FIFO) queuing order on the client: dispatch request, wait for the full response, dispatch next request from the client queue. HTTP pipelining is a small but important optimization to this workflow, which allows us to relocate the FIFO queue from the client (request queuing) to the server (response queuing).
Reference: HTTP Pipelining

Unique configuration per vhost for Micro

I have a few Zeit Micro services. This setup is a RESTful API for multiple frontends/domains/clients.
I need to differentiate between these clients in my configs, which are spread throughout the apps. In my handlers I can set up a process.env.CLIENT_ID, for example, that my config handler can use to know which config to load. However, this would mean launching a new http/micro process for each requesting domain (or whatever method I use; the info, such as a client id, will probably come in a header) in order to keep process.env.CLIENT_ID stable throughout the request and not have it overwritten by another simultaneous request from another client.
So each microservice would have to check the client id, determine whether it has already launched a process for that client, and either use that process or launch a new one.
This seems messy, but I'm not sure how else to handle it. Passing the client id around with every call (i.e. getConfig(client, key)) is not practical in my situation, and I would like to avoid that.
Options:
Pass client id around everywhere
Launch new process per host
?
Is there a better way, or have I made a mistake in my assumptions?
If the process-per-client approach is the better way, I'm wondering if there is an existing solution to manage this. I've looked at http-proxy, micro-cluster, etc., but none seem to provide a solution to this issue.
Well, I found this nice tool: https://github.com/othiym23/node-continuation-local-storage
// Micro handler
const { createNamespace } = require('continuation-local-storage')
const namespace = createNamespace('foo')

const handler = async (req, res) => {
  const clientId = req.headers['x-client-id'] // or however the client is identified
  namespace.run(function () {
    namespace.set('clientId', clientId)
    someCode()
  })
}

// Some other file
const { getNamespace } = require('continuation-local-storage')

const someCode = () => {
  const namespace = getNamespace('foo')
  console.log(namespace.get('clientId'))
}
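For what it's worth, Node now ships a built-in successor to continuation-local-storage: AsyncLocalStorage from the async_hooks module (available since Node 12.17). A minimal sketch of the same idea, using the same hypothetical x-client-id header as above:
const { AsyncLocalStorage } = require('async_hooks')
const als = new AsyncLocalStorage() // Export this so other files can import it.

// Micro handler
const handler = async (req, res) => {
  const clientId = req.headers['x-client-id'] // assumed header, as above
  // Everything called inside this callback, sync or async,
  // sees the same store, isolated per request.
  als.run(new Map([['clientId', clientId]]), () => someCode())
}

// Some other file (importing the same als instance)
const someCode = () => {
  console.log(als.getStore().get('clientId'))
}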

How to lock (Mutex) in NodeJS?

There are external resources (accessing available inventories through an API) that can only be accessed one thread at a time.
My problems are:
The NodeJS server handles requests concurrently, so we might have multiple requests trying to reserve inventories at the same time.
If I hit the inventory API concurrently, it will return duplicate available inventories.
Therefore, I need to make sure that I am hitting the inventory API one request at a time.
There is no way for me to change the inventory API (legacy), therefore I must find a way to synchronize my nodejs server.
Note:
There is only one nodejs server, running one process, so I only need to synchronize the requests within that server
Low traffic server running on express.js
I'd use something like the async module's queue and set its concurrency parameter to 1. That way, you can put as many tasks in the queue as you need to run, but they'll only run one at a time.
The queue would look something like:
var async = require('async');

var inventoryQueue = async.queue(function(task, callback) {
  // Use the values in "task" to call your inventory API here,
  // then pass your results to "callback" when you're done.
}, 1);
Then, to make an inventory API request, you'd do something like:
var inventoryRequestData = { /* data you need to make your request; product id, etc. */ };

inventoryQueue.push(inventoryRequestData, function(err, results) {
  // This will be called with your results.
});
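As an alternative sketch that avoids the async dependency (my own illustration, not part of the original answer): a tiny promise-chain mutex gives the same one-at-a-time behavior. callInventoryApi below is a hypothetical wrapper around the legacy inventory API that returns a promise:
// Each caller chains onto the previous caller's promise,
// so the wrapped functions run strictly one at a time.
let last = Promise.resolve();

function withLock(fn) {
  const run = last.then(() => fn());
  last = run.catch(() => {}); // Keep the chain alive even if fn rejects.
  return run;
}

// Usage: route every inventory call through the lock.
function reserveInventory(productId) {
  return withLock(() => callInventoryApi(productId)); // hypothetical API wrapper
}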
