WCF net.TCP traffic clarification - multithreading

I have implemented a WCF service that implements a callback.
I have a client web app that connects to the WCF service via an HTTP API,
and a remote client app that runs on Windows and connects to the WCF service over net.TCP with callback support.
The client sends actions to the remote app; the remote app executes them and returns the status via the callback return value.
I have a thread that, every 2 minutes, sends an ImAlive (calls bool WCF.ImALIVE(machineID)) to keep the net.TCP connection alive if there is no other activity.
My question:
if I get a callback action from the client, and while the remote app is executing it the ImAlive thread wakes up and calls WCF.ImALIVE, will there be any issue of blocking, deadlock, or timeout?

It depends on the ServiceBehavior ConcurrencyMode; read more here: http://www.codeproject.com/Articles/89858/WCF-Concurrency-Single-Multiple-and-Reentrant-and.
Here is also a duplicate of what you are trying:
Use WCF Service with basic authentication user
You can also see this related question, about instancing on the server side:
Can I use PerCall instancing and Reentrant concurrency in the same service?
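To make that concrete, here is a minimal sketch of a duplex net.TCP contract and service (the contract and method names are placeholders, not your actual service). With ConcurrencyMode.Reentrant the instance lock is released while the service calls back to the remote app, so an ImALIVE call arriving in the meantime is served rather than queued; with ConcurrencyMode.Single and a two-way callback it could block and eventually time out.

```csharp
using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IMachineCallback))]
public interface IMachineService
{
    [OperationContract] bool ImALIVE(string machineId);
    [OperationContract] void ExecuteAction(string machineId, string action);
}

public interface IMachineCallback
{
    [OperationContract(IsOneWay = true)]
    void ReportStatus(string machineId, string status);
}

// Reentrant releases the instance lock while the service makes its outgoing
// callback, so a keep-alive call on the same session is not blocked behind it.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession,
                 ConcurrencyMode = ConcurrencyMode.Reentrant)]
public class MachineService : IMachineService
{
    public bool ImALIVE(string machineId)
    {
        return true; // lightweight keep-alive, returns immediately
    }

    public void ExecuteAction(string machineId, string action)
    {
        var callback = OperationContext.Current
            .GetCallbackChannel<IMachineCallback>();
        callback.ReportStatus(machineId, "started");
        // ... do the work, then report the final status the same way
    }
}
```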

Best way to connect 2 separate node processes with socket.io communicating to a client

I'm new to working with sockets and have a small system design question:
I have 2 separate node processes for a web app: one is a simulator that is constantly running, and the second is an API server. Both share the same MongoDB database, and we have a React app running for the client, served by the API server.
I'm looking to implement socket.io for real-time notifications and so I've set up a simple connection between the api and client.
My problem is that while the simulator runs, there are some events that I also want to trigger push notifications for, so my question is how to hook that into everything.
The file hierarchy is like:
app/
simulator/
api/
client/
I saw this article for communication between node processes and I currently have 3 solutions in mind:
1. Leave hierarchy as it is and install socket.io package inside simulator as well. I'm not sure if sockets work this way but can both simulator and api connect to the same socket?
2. Move simulator file into api file to fork as a child process so that the 2 processes can communicate via child/parent messaging. simulator will message api which will then emit updates through the socket to client
3. Leave hierarchy as is and communicate via node-ipc. Same situation as above with simulator messaging api first before api emits that to client
If 1 is possible, that seems like the best solution in my impression. It seems like extra work to add an additional layer of messaging for 2 and 3.
Leave hierarchy as it is and install socket.io package inside simulator as well. I'm not sure if sockets work this way but can both simulator and api connect to the same socket?
The client would have to create a separate socket.io connection to the simulator process: two separate, independent connections, one to the API server and one to the simulator. The client then receives data from the API server over one connection and from the simulator over the other. The simulator and API server cannot share the same socket unless they are in the same process.
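For illustration, a rough sketch of what that looks like from the browser side (ports are made up):

```js
// Browser/React client: two independent socket.io connections,
// one to the API server and one to the simulator process.
const io = require('socket.io-client');

const apiSocket = io('http://localhost:3000');  // API server
const simSocket = io('http://localhost:3001');  // simulator

apiSocket.on('notification', (msg) => console.log('from api:', msg));
simSocket.on('notification', (msg) => console.log('from simulator:', msg));
```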
Move simulator file into api file to fork as a child process so that the 2 processes can communicate via child/parent messaging. simulator will message api which will then emit updates through the socket to client
This is really part of a broader option in which the simulator communicates with the API server and sends it data that the API server can then send to the client over the single socket.io connection that the client made to the API server.
There are lots of different ways for the simulator process to communicate with the API server.
Since it's already an API server, you can just make an API for this (probably non-public). The simulator calls an API to send data to the client. The API server receives that data and sends it to the client.
As you suggest, if the simulator is run from the API server as a child process, then you can use parent/child communication messaging built into node.js. Note, you don't have to move the simulator files into the API file at all. You can just use child_process to launch the simulator as another nodejs app from another project. You just have to know the path to that other project.
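A small sketch of that parent/child approach (file paths and event names are made up):

```js
// api/server.js -- launch the simulator from its own project directory
const path = require('path');
const { fork } = require('child_process');
const io = require('socket.io')(3000);

const simulator = fork(path.join(__dirname, '../simulator/index.js'));

// Relay anything the simulator reports to the connected browsers.
simulator.on('message', (msg) => {
  io.emit('notification', msg);
});

// simulator/index.js -- wherever an interesting event happens:
//   process.send({ type: 'notification', text: 'something happened' });
```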
You can use any other communication mechanism you want between the simulator process and the API server process. There could be a socket.io connection between them. You could use several forms of IPC, etc.
If 1 is possible, that seems like the best solution in my impression.
Your #1 option is not possible as separate processes can't use the same socket.io connection.
It seems like extra work to add an additional layer of messaging for 2 and 3.
My options #1 and #2 are not much code in each server. You're doing interprocess communication. You should expect to use some code to enable that. But, it's not hard at all.
If the lifetime of the simulator server and the API server are always together (they have no independent uses), then I'd probably do the child process thing where the API server launches the simulator and then use parent/child messaging to communicate between them. You do NOT have to combine sources to do this.
The child_process module can run the simulator process by just knowing what directory it is located in.
Otherwise, I'd probably make a small web server on a non-public port in the API server and have the simulator just send data to that other web server. I often refer to this as a control port. It's a way of "controlling or diagnosing" the API server internals and can only be accessed from within the private network and/or with credentials. The reason I'd use a separate web server (in the same nodejs app as the API server) is to make it easy to secure so it can't be accessed from the outside world like the regular public APIs can. You just put the internal web server on a port that is not exposed to the outside world.
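A sketch of that control-port layout inside the API server process (ports and the route name are assumptions):

```js
const express = require('express');
const http = require('http');

// Regular public API + socket.io connection for the browsers.
const publicApp = express();
const publicServer = http.createServer(publicApp).listen(3000);
const io = require('socket.io')(publicServer);

// Internal "control port": bound to localhost so it is not reachable
// from the outside world like the regular public APIs are.
const controlApp = express();
controlApp.use(express.json());
controlApp.post('/notify', (req, res) => {
  io.emit('notification', req.body);  // relay simulator data to clients
  res.sendStatus(204);
});
http.createServer(controlApp).listen(4000, '127.0.0.1');

// The simulator then just POSTs its events to http://127.0.0.1:4000/notify.
```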
You should also check the Socket.IO docs about adapters and emitters. They let separate node processes emit events to the same set of connected clients, and they help with scalability.
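For example, with the Redis adapter and emitter packages from the socket.io 2.x era (assuming a Redis instance on localhost:6379):

```js
// In the API server, which owns the client connections:
const io = require('socket.io')(3000);
const redisAdapter = require('socket.io-redis');
io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));

// In the simulator process, which has no socket.io server of its own:
const emitter = require('socket.io-emitter')({ host: '127.0.0.1', port: 6379 });
emitter.emit('notification', { text: 'simulation event' });
```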

socket.io server to relay/proxy some events

I currently have a socket.io server spawned by a nodeJS web API server.
The UI runs separately and connects to the API via web socket. This is mostly used for notifications and connectivity status checks.
However, the API also acts as a gateway for several micro services. One of these is responsible for computing the data necessary for the UI to render some charts. This operation is long-running, and for various reasons the computation only starts when a request is received.
In a nutshell, the UI sends a REST request to the API and the API currently uses gRPC to send the request to the micro service. This is bad because it locks both API and UI.
To avoid locking, the socket server on the API should be able to relay the UI request and the "computation ended" event received from the micro service; this way nothing would be locked. This could eventually lead to the gRPC server on the micro service being removed.
Is this something achievable with socket.io?
If not, is the only way for the API to spawn a secondary socket connection to the micro service for each one received by the UI?
Is this a bad idea?
I hope this is clear, thanks.
I actually ended up not using socket.io. However, this can still be done with it: the API spawns a socket.io server and has the different services connect to it as clients; rooms and namespaces (https://socket.io/docs/rooms-and-namespaces/) can be used.
This way messages can be relayed, and even broadcast from the server to both sides when something happens.
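A rough sketch of that setup (namespace and event names are made up):

```js
// API process: socket.io server with one namespace for the UI and one for services.
const io = require('socket.io')(3000);
const ui = io.of('/ui');
const services = io.of('/services');

services.on('connection', (socket) => {
  socket.on('computation-ended', (result) => {
    ui.emit('computation-ended', result);  // relay (or broadcast) to the UI
  });
});

// Micro service process: connects as a plain socket.io client.
const service = require('socket.io-client')('http://api-host:3000/services');
service.emit('computation-ended', { chartData: [] });
```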

grpc complete async java service request/response mapping

A Java service (let's call it portal) is both a gRPC client and a gRPC server. It serves millions of gRPC clients (as a server), each client requesting some task/resource. Based on the incoming request, portal will figure out the backend services and talk to one or more of them, and send the returned response(s) to the originating client. Hence, the requirements are:
Original millions of clients will have their own timeouts
The portal should not have a thread blocking for the millions of clients (async). It should also not have a thread blocking for each client's call to the backend services (async). We can use the same thread which received a client call for invoking the backend services.
If the original client times out, portal should be able to communicate it to the backend services or terminate the specific call to the backend services.
On error from backend services, portal should be able to communicate it back to the specific client whose call failed.
So the questions here are:
We have to use async unary calls here, correct?
How does the intermediate server (portal) match the original requests to the responses from the backend services?
In case of errors on backend services, how does the intermediate server propagate the error?
How does the intermediate server propagate the deadlines?
How does the intermediate server cancel the requests on the backend services, if the originating client terminates?
gRPC Java can make a proxy relatively easily. Using async stubs for such a proxy would be common. When the proxy creates its outgoing RPCs, it can save a reference to the original RPC in the callback of the outgoing RPC. When the outgoing RPC's callback fires, simply issue the same call to the original RPC. That solves both messages and errors.
Deadline and cancellation propagation are automatically handled by io.grpc.Context.
You may want to reference this grpc-level proxy example (which has not been merged to grpc/grpc-java). It uses ClientCall/ServerCall because it was convenient and because it did not want to parse the messages. It is possible to do the same thing using the StreamObserver API.
The main difficulty in such a proxy would be to observe flow control. The example I referenced does this. If using StreamObserver API you should cast the StreamObserver passed to the server to ServerCallStreamObserver and get a ClientCallStreamObserver by passing a ClientResponseObserver to the client stub.
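As a very reduced sketch of the callback pattern described above, using async stubs (the service and message names are hypothetical, and it ignores the flow-control concerns just mentioned):

```java
import io.grpc.ManagedChannel;
import io.grpc.stub.StreamObserver;

// Portal-side implementation of a backend service it also proxies.
public class PortalProxy extends TaskServiceGrpc.TaskServiceImplBase {
  private final TaskServiceGrpc.TaskServiceStub backend;

  public PortalProxy(ManagedChannel backendChannel) {
    this.backend = TaskServiceGrpc.newStub(backendChannel);
  }

  @Override
  public void getTask(TaskRequest request, StreamObserver<TaskReply> client) {
    // The callback of the outgoing RPC keeps a reference to the original
    // client observer; messages, completion and errors are simply relayed.
    // Deadline and cancellation propagate automatically via io.grpc.Context.
    backend.getTask(request, new StreamObserver<TaskReply>() {
      @Override public void onNext(TaskReply reply) { client.onNext(reply); }
      @Override public void onError(Throwable t)   { client.onError(t); }
      @Override public void onCompleted()          { client.onCompleted(); }
    });
  }
}
```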

How do request and response get processed in ServiceStack?

I am using ServiceStack to build RESTful services, but I don't have deep knowledge of it. It works by sending a request and getting a response back. I have a scenario, and my questions depend on it.
Scenario: I am sending a request from a browser or any client from which I can send a request to the server. Assume the server takes 3 seconds to process a single request and send the response back to the browser. After one second, I send another request to the server from the same browser (client). Now I get the response of the second request, which I sent later.
Question 1: What happens behind the scenes to the first request, for which I did not get a response?
Question 2: How can I stop processing of the orphaned request?
Edit: I am using IIS to host the services.
ServiceStack executes requests concurrently on multi-threaded web servers, whether you're hosting on ASP.NET/IIS or self-hosting, so 2 concurrent requests run concurrently on different threads. Different scenarios are possible if you're executing async tasks in your Services, in which case the thread is freed up to execute other tasks, but the implementation details are largely irrelevant here.
HTTP web requests are each executed to the end; even when the client connection is lost, your Services are never notified and no exceptions are raised.
But for long-running Services you can enable ServiceStack's high-level Cancellable Requests feature, which gives clients a way to cancel long-running requests.
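A rough sketch of how that feature is wired up, based on the ServiceStack docs (LongRunningRequest, GetWorkItems and Process are placeholders):

```csharp
// In AppHost.Configure():
Plugins.Add(new CancellableRequestsFeature());

// A long-running Service that honours cancellation. The client sends an X-Tag
// header identifying the request and can later cancel it by calling the
// built-in CancelRequest service with the same tag.
public object Any(LongRunningRequest request)
{
    using (var cancellableRequest = base.Request.CreateCancellableRequest())
    {
        foreach (var step in GetWorkItems())
        {
            cancellableRequest.Token.ThrowIfCancellationRequested();
            Process(step);
        }
        return new LongRunningResponse();
    }
}
```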

Using RabbitMQ to capture web application log

I'm trying to set up RabbitMQ to ship web application logs to a log server.
My log server will listen to one channel and store the logs that come in.
There are several web applications that need to send info to the log server.
With many connections (users) hitting the web server, what is the best design to publish messages to RabbitMQ without locking each other? Is it a good idea to keep opening a new connection to the MQ for each web request? Is there some sort of message queue pool?
I'm using IIS for a web server.
I assume you’re leveraging the .NET framework to build your application, given that it’s hosted in IIS. If so, you can also leverage Daishi.AMQP, which has a built-in QueuePool feature. Here is a tutorial that outlines the mechanism in full.
To answer your question, you should initially establish a connection to RabbitMQ from your application server. You can then open a channel (a lightweight virtual connection that lives within the underlying connection) to serve each HTTP request. It is not a good idea to establish a new connection for each request.
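If you are using the official RabbitMQ.Client package rather than Daishi.AMQP, the same pattern looks roughly like this (queue name and host are assumptions):

```csharp
using System.Text;
using RabbitMQ.Client;

public static class LogPublisher
{
    // One long-lived connection shared by the whole web application.
    private static readonly IConnection Connection =
        new ConnectionFactory { HostName = "localhost" }.CreateConnection();

    public static void Publish(string message)
    {
        // Channels are cheap and not thread-safe: open one per request/unit of work.
        using (var channel = Connection.CreateModel())
        {
            channel.QueueDeclare(queue: "app-logs", durable: true,
                exclusive: false, autoDelete: false, arguments: null);
            channel.BasicPublish(exchange: "", routingKey: "app-logs",
                basicProperties: null, body: Encoding.UTF8.GetBytes(message));
        }
    }
}
```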
RabbitMQ has a built-in queue feature. It is well documented; have a look at the official docs: http://www.rabbitmq.com/getstarted.html
