How to duplicate web requests to multiple servers? - IIS

We have a memory leak that is difficult to reproduce outside of production. We were thinking it would be best to have a duplicate server that receives all the same requests as the primary server but whose responses are ignored. Our web server is IIS. Is there any way to do this in a Windows server environment?
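To illustrate what I mean, here is a rough sketch of the idea as a tiny Node.js shadowing proxy sitting in front of the servers (Node.js is just for illustration, not an IIS feature; the hostnames and port are placeholders):

    var http = require('http');

    var PRIMARY = { host: 'primary.internal', port: 80 }; // placeholder
    var SHADOW  = { host: 'shadow.internal',  port: 80 }; // placeholder

    // Build request options for a given target from the incoming request.
    function options(target, req) {
      return { host: target.host, port: target.port,
               method: req.method, path: req.url, headers: req.headers };
    }

    http.createServer(function (req, res) {
      // Relay to the primary and stream its response back to the client.
      var upstream = http.request(options(PRIMARY, req), function (upRes) {
        res.writeHead(upRes.statusCode, upRes.headers);
        upRes.pipe(res);
      });
      upstream.on('error', function () { res.statusCode = 502; res.end(); });

      // Send the same request to the shadow server and discard its reply.
      var shadow = http.request(options(SHADOW, req), function (sRes) {
        sRes.resume(); // drain and ignore
      });
      shadow.on('error', function () {}); // a dead shadow must not break production

      // A readable stream can be piped to more than one writable.
      // (Caveat: a very slow shadow can backpressure the primary pipe.)
      req.pipe(upstream);
      req.pipe(shadow);
    }).listen(8080);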

Related

Can we consider Apollo server as an application server?

I am lost on the difference between a web server and an application server, but as far as I understand, a web server serves static content while an application server is used to manage data, for example under large-scale traffic.
So, can I consider Apollo Server, for example, an application server, since it can receive client queries, retrieve data from a database, and manipulate it?
I know this question may make no sense to backend experts, but I am a frontend developer trying to grasp some backend concepts.

Horizontal scaling with a node.js app & socket.io

My team and I are working on a digital signage platform.
We have ~2000 Raspberry Pis around the world connected to a Node.js server using Socket.IO. The Raspberry Pis initiate the connection.
We would like to scale our application horizontally across multiple servers, but there is a problem we can't figure out.
Basically, the application stores the sockets of the connected Raspberry Pis in an array.
We have an external program that calls the API on the server; the server then searches for the sockets that will be "impacted" by the API call and sends them the information.
After a lot of searching, we assume that we have to store the sockets (or their IDs) elsewhere (Redis?) to make the application stateless. Then any server could respond to an API call and look up the sockets in a central place.
Unfortunately, we can’t find any detailed example on how to do that.
Can you please help us?
Thanks
(You can't store sockets from multiple server instances in a shared datastore like Redis: they only make sense in the context of the server where they were initiated.)
You will need a cluster of node.js servers to handle this. There are various ways to make a cluster. They all involve directing incoming connections from your RPis to a "generic" hostname, for example server.example.com. Behind that server.example.com hostname will be multiple node.js servers.
Each incoming connection from each RPi connects to just one of those multiple servers. (You know this, I believe.) This means one node.js server in your cluster "owns" each individual RPi.
(Telling you how to rig up a cluster of node.js servers is beyond the scope of this answer. Hints: round-robin DNS or a reverse-proxy nginx front end.)
Then, you want to route -- to fan out -- the incoming data from each API call to each server in the cluster, so the server can route it to the RPis it owns.
Here's a good way to handle that:
Set up a redis cache or other shared data store. It can be very small.
When each node.js server starts, have it register itself as active. That is, have it place its own specific address for handling API calls into the shared data store. The specific address is probably of the form 12.34.56.78:3000: that is, an IP address and port.
Have each server update that address every so often, once a minute or so, to show it is still alive.
When an API call arrives at server.example.com, it will come to a more-or-less randomly chosen node.js server instance.
Get that server to read the list of server addresses from the redis cache.
Get that server to repeat the API call to all servers except itself. Add a parameter like repeated=yes to the repeated API calls.
Then, each server looks at its list of connected sockets and does what your application requires.
On server shutdown, have the server unregister itself -- remove its address from redis -- if possible.
In other words, build a way of fanning out the API calls to all active node.js servers in your cluster.
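Here is a rough sketch of the registration and fan-out steps, assuming Express and the classic callback-style redis client; the /notify route, the servers: key prefix, and SERVER_ADDR are made-up names:

    var express = require('express');
    var redis = require('redis');
    var http = require('http');

    var SERVER_ADDR = process.env.SERVER_ADDR; // e.g. "12.34.56.78:3000"
    var client = redis.createClient();
    var app = express();

    // Register this instance and refresh the entry every minute. The
    // 90-second expiry lets a crashed server fall out of the list by
    // itself even if it never unregisters.
    function heartbeat() {
      client.setex('servers:' + SERVER_ADDR, 90, '1');
    }
    heartbeat();
    setInterval(heartbeat, 60 * 1000);

    app.post('/notify', function (req, res) {
      // ... look at this server's own connected sockets and deliver ...

      // Don't fan out a call that was already fanned out to us.
      if (req.query.repeated === 'yes') return res.end();

      // Read the list of active servers and repeat the call to the others.
      client.keys('servers:*', function (err, keys) {
        (keys || []).forEach(function (key) {
          var addr = key.slice('servers:'.length);
          if (addr === SERVER_ADDR) return; // skip ourselves
          var parts = addr.split(':');
          var repeat = http.request({ host: parts[0], port: parts[1],
                                      method: 'POST',
                                      path: '/notify?repeated=yes' });
          repeat.on('error', function () {}); // ignore unreachable peers
          repeat.end();
        });
        res.end();
      });
    });

    app.listen(Number(SERVER_ADDR.split(':')[1]));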
If this must scale up to a very large number (more than a hundred or so) node.js servers, or to many hundreds of API calls a minute, you probably should investigate using message queuing software.
SECURE YOUR REDIS server from random cybercreeps on the internet.

Node socket.io on load balanced Amazon EC2

I have a standard LAMP EC2 instance setup running on Amazon's AWS. Having also installed Node.js, socket.io and Express to meet the demands of live updating, I am now at the stage of load balancing the application. That's all working, but my sockets aren't. This is how my setup looks:
                 --- EC2 >> Node.js + socket.io
                /
Client >> ELB --
                \
                 --- EC2 >> Node.js + socket.io

[RDS MySQL - EC2 instances communicate to this]
As you can see, each instance has an installation of Node and socket.io. However, the Chrome dev tools will occasionally show the socket request failing with a 400 and the reason {"code":1,"message":"Session ID unknown"}, and I guess this is because it's communicating with the other instance.
Additionally, let's say I am on page A and the socket needs to emit to page B - because of the load balancer these two pages might well be on different instances (they will both be open at the same time). Using something like Sticky Sessions, to my knowledge, wouldn't work in that scenario because both pages would be restricted to their respective instances.
How can I get around this issue? Will I need a whole dedicated instance just for Node? That seems somewhat overkill...
The issues come up when you consider both websocket traffic (layer 4-ish) and HTTP traffic (layer 7) moving across a load balancer that can only inspect one layer at a time. For example, if you set the ELB to load balance on layer 7 (HTTP/HTTPS) then websockets will not work at all across the ELB. However, if you set the ELB to load balance on layer 4 (TCP) then any fallback HTTP polling requests could end up at any of the upstream servers.
You have two options here. You can figure out a way to effectively load balance both HTTP and websocket requests or find a way to deterministically map requests to upstream servers regardless of the protocol.
The first one is pretty involved and requires another load balancer. A good walkthrough can be found here. It's worth noting that when that post was written HAProxy didn't have native SSL support. Now that this is the case it might be possible to just remove the ELB entirely, if that's the route you want to go. If that's the case the second option might be better.
Otherwise you can use HAProxy on its own (or a paid version of Nginx) to implement a deterministic load balancing mechanism. In this case you would use IP hashing, since socket.io does not provide a route-based mechanism for identifying a particular server the way sockjs does. This uses the first 3 octets of the IP address to determine which upstream server gets each request, so unless the user changes IP addresses between HTTP polls, this should work.
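To make the polling failure mode concrete, here is a client-side sketch (assuming a socket.io 1.x-style client; the hostname is a placeholder):

    // Default behaviour: start with HTTP long-polling, then upgrade.
    // Each poll is an independent HTTP request, so a TCP-level (layer 4)
    // balancer can hand consecutive polls to different servers, which
    // yields errors like {"code":1,"message":"Session ID unknown"}.
    var socket = io('http://app.example.com', {
      transports: ['polling', 'websocket'] // the default order
    });

    // Pinning the client to a single long-lived websocket connection
    // avoids the mis-routing, at the cost of losing the polling fallback.
    var wsOnly = io('http://app.example.com', {
      transports: ['websocket']
    });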
The solution would be for the two (or more) node.js installs to use a common session source.
Here is a previous question on using Redis as a common session store for node.js: How to share session between NodeJs and PHP using Redis?
and another
Node.js Express sessions using connect-redis with Unix Domain Sockets
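As a rough sketch, sharing Express sessions through Redis might look like this (assuming the express-session package and the classic connect-redis API; the secret and Redis address are placeholders):

    var express = require('express');
    var session = require('express-session');
    var RedisStore = require('connect-redis')(session);

    var app = express();

    // Every instance points at the same Redis, so a request served by
    // either server sees the same session data.
    app.use(session({
      store: new RedisStore({ host: 'localhost', port: 6379 }),
      secret: 'replace-me', // placeholder
      resave: false,
      saveUninitialized: false
    }));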

Node.js & Socket.io with High Availability

We have a node.js server that is primarily used with socket.io for browser interconnectivity in a web application.
We want a high-availability solution which would theoretically consist of two node.js servers, one as the primary server and the other as a backup should the primary fail. The solution would allow that, if or when the primary node.js server goes down, the backup takes over to provide seamless functionality without interruption.
Is there a solution that allows socket.io to maintain the array of client connections over multiple servers without duplication of clients or of messages sent?
Is there another paradigm we should be considering for HA and node.js?
There is no way to have a webSocket automatically fail over to a new server, without any interruption, when the one it is currently connected to goes down. The webSockets that were connected to the server that went down will die. That's just how TCP sockets work.
Fortunately with socket.io, the client will quickly realize that the connection has been lost (within seconds) and will try to reconnect fairly quickly. If your backup server is immediately in place (e.g. a hot standby) to handle the incoming socket.io connections, then the reconnect will be fairly seamless, appearing to the client as just a momentary network interruption.
On the server, however, you need to not only have a backup, but you have to be able to restore any state that was present for each connection. If the connections are just pipes for delivering notifications and are stateless, then this is fairly easy since your backup server that receives the reconnects will immediately be in business.
If your socket.io connections are stateful on the server-side, then you will need a way to restore/access that state when the backup server takes over. One way of doing this is by keeping the state in a redis server that is separate from your web server (though you will then need a backup/high availability plan for the redis server too).
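A rough sketch of that idea, keying the per-connection state by a client-supplied id rather than by the (server-specific) socket; the userId query parameter and the event names are made up:

    var io = require('socket.io')(3000);
    var redis = require('redis').createClient();

    io.on('connection', function (socket) {
      // A stable, client-supplied identity; socket.id changes on every
      // reconnect and is useless as a cross-server key.
      var userId = socket.handshake.query.userId;

      // Restore whatever state the previous server had for this client.
      redis.hgetall('state:' + userId, function (err, state) {
        socket.emit('restore', state || {});
      });

      // Persist each state change so a standby server can pick it up.
      socket.on('update', function (key, value) {
        redis.hset('state:' + userId, key, String(value));
      });
    });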
Socket.IO in both the primary and backup servers can be connected to a Redis server. This maintains the sessions from the primary server so that the backup server can use them; the clients then reconnect to the new server when the primary fails.
Socket.IO-Redis
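For reference, wiring both instances to the same Redis with the socket.io-redis adapter takes only a few lines (a minimal sketch; the host and port are placeholders):

    var io = require('socket.io')(3000);
    var redisAdapter = require('socket.io-redis');

    // Both the primary and the backup run this, pointing at the same
    // Redis; broadcasts are published through Redis so they reach
    // clients regardless of which instance they are connected to.
    io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));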
HAProxy is used for load balancing between multiple node.js instances. How you use HAProxy will depend on how you deal with the failure of the primary server. If you have some method to automatically switch over the primary server, then HAProxy will not be of much use; otherwise, you can configure HAProxy to forward requests to the backup server when the primary server is unreachable.
Other options similar to HAProxy are:
node-http-proxy
Nginx

Designing real-time web application (Node.js and socket.io)

I want to ask about some good practices. I have a Node.js (Express) web server and socket.io push server (in case technology matters). I can turn both of them into one application but I want them separated (they can communicate with each other if necessary). There are two reasons to do that:
It will be easier to manage, debug and develop the app;
It will be a lot easier to scale the app. I can just add another instance of push server or web server if necessary;
This is at least what I believe. The only problem is that when a client connects to the separate socket.io server, it won't send cookies (different port, cross-domain policy).
The workaround I came up with is to put a reverse proxy (written in Node.js as well) in front, check what kind of request we are dealing with, and send it to the web server or push server accordingly. Great: now we have cookies on both the web server and the push server. The reverse proxy can double as a load balancer, which is an added bonus.
It looks like a good idea to me. What do you think about this design? Perhaps there is another workaround for the cookie problem?
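For concreteness, here is roughly what I have in mind, sketched with the node-http-proxy package (the ports and the routing rule are placeholders):

    var http = require('http');
    var httpProxy = require('http-proxy');

    var proxy = httpProxy.createProxyServer({});

    var server = http.createServer(function (req, res) {
      // socket.io handshakes and polling requests start with /socket.io/;
      // route those to the push server, everything else to the web server.
      if (req.url.indexOf('/socket.io/') === 0) {
        proxy.web(req, res, { target: 'http://localhost:4000' });
      } else {
        proxy.web(req, res, { target: 'http://localhost:3000' });
      }
    });

    // Websocket upgrades bypass the request handler above, so they need
    // their own hook to reach the push server.
    server.on('upgrade', function (req, socket, head) {
      proxy.ws(req, socket, head, { target: 'http://localhost:4000' });
    });

    // A single public origin means the browser sends its cookies with
    // both kinds of requests - no cross-port/cross-domain problem.
    server.listen(80);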
I recently did something similar. We initially used a Node.js reverse proxy but ran into reliability/scalability problems; we found that serving static files and proxying requests was best left to nginx. HAProxy is also a very viable solution for standalone proxying.
HAProxy
Nginx as a reverse proxy
