I have a situation where I want two servers to talk to each other in a test. As in Server A is triggered and then sends a message to Server B.
I can send requests to both Server A and B individually, but when I try to get Server A to talk to B at 0.0.0.0:[port] I get ConnectionRefused, message: "Connection refused"
Edit: I wrote out an example:
https://github.com/JesseAbram/rocket_testing_issue
The Rocket Client used for testing uses a function called launch_local() which starts the server without binding to any listening addresses. So instead of using that, I spun up servers as normal in separate threads so they were not blocking, and then used reqwest to talk to them.
I have created a simple GraphQL subscription using Nest.js/Apollo GraphQL over Node.js. My client application, which is a React.js/Apollo client, works fine with the server. The client subscribes to the server via GraphQL, similar to:
subscription {
  studentAdded {
    id
  }
}
My problem is that it works only locally. When I deploy my server back end to a hosted Docker container on the internet, the client doesn't receive data anymore.
I have traced the client: it sends a GET request to ws://api.example.com:8010/graphql and receives the successful HTTP/1.1 101 Switching Protocols response. However, nothing is received from the server, unlike when the server was on my local machine. Checking the remote server log showed me that the client successfully connects to the server; I can see onConnect log messages there.
Now I need some guidance to solve the problem.
I checked several things myself. First, I thought the WebSocket address might be blocked on the network, but then realized it uses the same port as normal HTTP. Second, I supposed that WebSocket messages/frames are transmitted over UDP, but I was wrong: they go over TCP, so there is no need to worry about network settings.
Additionally, I have read several GitHub threads and Stack Overflow questions but did not find any clue. I am not using Node.js/WebSocket directly; instead, I am using Nest.js/GraphQL subscriptions, which has made my search tougher.
Your help is highly appreciated.
My team and I are working on a digital signage platform.
We have ~2000 Raspberry Pis around the world connected to a Node.js server using Socket.IO. The Raspberry Pis initiate the connection.
We would like to be able to scale our application horizontally across multiple servers, but we have a problem that we can’t figure out.
Basically, the application stores the sockets of the connected Raspberry Pis in an array.
We have an external program that calls the API on the server; the server then searches for the sockets that will be "impacted" by the API call and sends them the information.
After a lot of searching, we assume that we have to store the sockets (or their IDs) elsewhere (Redis?) to make the application stateless. Then any server could respond to an API call and look the sockets up in a central place.
Unfortunately, we can’t find any detailed example on how to do that.
Can you please help us?
Thanks
(You can't store sockets from multiple server instances in a shared datastore like redis: they only make sense in the context of the server where they were initiated).
You will need a cluster of node.js servers to handle this. There are various ways to make a cluster. They all involve directing incoming connections from your RPis to a "generic" hostname, for example server.example.com. Behind that server.example.com hostname will be multiple node.js servers.
Each incoming connection from each RPi connects to just one of those multiple servers. (You know this, I believe.) This means one node.js server in your cluster "owns" each individual RPi.
(Telling you how to rig up a cluster of node.js servers is beyond the scope of this answer. Hints: round-robin DNS or a reverse-proxy nginx front end.)
Then, you want to route -- to fan out -- the incoming data from each API call to each server in the cluster, so the server can route it to the RPis it owns.
Here's a good way to handle that:
Set up a redis cache or other shared data store. It can be very small.
When each node.js server starts, have it register itself as active. That is, have it place its own specific address for handling API calls into the shared data store. The specific address is probably of the form 12.34.56.78:3000: that is, an IP address and port.
Have each server update that address every so often, once a minute or so, to show it is still alive.
When an API call arrives at server.example.com, it will come to a more-or-less randomly chosen node.js server instance.
Get that server to read the list of server addresses from the redis cache.
Get that server to repeat the API call to all servers except itself. Add a parameter like repeated=yes to the repeated API calls.
Then, each server looks at its list of connected sockets and does what your application requires.
On server shutdown, have the server unregister itself -- remove its address from redis -- if possible.
In other words, build a way of fanning out the API calls to all active node.js servers in your cluster.
If this must scale up to a very large number (more than a hundred or so) node.js servers, or to many hundreds of API calls a minute, you probably should investigate using message queuing software.
SECURE YOUR REDIS server from random cybercreeps on the internet.
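Here is a rough sketch of the register / refresh / fan-out steps above, assuming Node 18+ (for fetch), Express and the node-redis v4 client; the key prefix, the /api/display route and SELF_ADDRESS are illustrative placeholders of mine, not part of your application:

const express = require('express');
const { createClient } = require('redis');

const SELF_ADDRESS = process.env.SELF_ADDRESS; // e.g. "12.34.56.78:3000"
const redis = createClient({ url: process.env.REDIS_URL });

async function registerSelf() {
  // The key expires after 90 seconds; the interval below keeps it alive.
  await redis.set(`servers:${SELF_ADDRESS}`, '1', { EX: 90 });
}

async function activeServers() {
  // Every key still present belongs to a server that refreshed recently.
  const keys = await redis.keys('servers:*');
  return keys.map((key) => key.slice('servers:'.length));
}

const app = express();
app.use(express.json());

app.post('/api/display', async (req, res) => {
  // Send to the sockets this instance owns (application-specific).
  sendToLocalSockets(req.body);

  // Fan the call out to the other instances, unless it is already a repeat.
  if (req.query.repeated !== 'yes') {
    const others = (await activeServers()).filter((addr) => addr !== SELF_ADDRESS);
    await Promise.allSettled(others.map((addr) =>
      fetch(`http://${addr}/api/display?repeated=yes`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(req.body),
      })
    ));
  }
  res.sendStatus(200);
});

function sendToLocalSockets(payload) {
  // Look up the impacted sockets in this instance's array and emit to them.
}

redis.connect().then(async () => {
  await registerSelf();
  setInterval(registerSelf, 60 * 1000); // refresh once a minute
  app.listen(3000);
});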
I am working on a Node.js app with Socket.IO. I did a test in a single process using PM2 and there were no errors. Then I moved to our production environment (we use a Google Cloud Compute instance).
I run 3 app processes, and an iOS client connects to the server.
By the way, the iOS client doesn't keep the socket connection: it doesn't send a disconnect to the server, but it gets disconnected and reconnects, and this happens continuously.
I am not sure why the server disconnects the client.
If you have any hint or answer for this, I would appreciate it.
That's probably because requests end up on a different machine rather than the one they originated from.
Straight from Socket.io Docs: Using Multiple Nodes:
If you plan to distribute the load of connections among different processes or machines, you have to make sure that requests associated with a particular session id connect to the process that originated them.
What you need to do:
Enable session affinity, a.k.a sticky sessions.
If you want to work with rooms/namespaces you also need to use a centralised memory store to keep track of namespace information, such as Redis with the Redis Adapter.
But I'd advise you to read the documentation piece I posted; things might have changed a bit since the last time I implemented something like this.
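For reference, wiring up the Redis adapter looks roughly like this (a sketch based on the current @socket.io/redis-adapter docs; sticky sessions themselves are configured at the load balancer, not in this code):

const { Server } = require('socket.io');
const { createAdapter } = require('@socket.io/redis-adapter');
const { createClient } = require('redis');

const io = new Server(3000);
const pubClient = createClient({ url: 'redis://localhost:6379' });
const subClient = pubClient.duplicate();

Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
  // Rooms/namespaces and broadcasts are now shared across all processes.
  io.adapter(createAdapter(pubClient, subClient));
});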
By default, the socket.io client "tests" out the connection to its server with a couple of http requests. If you have multiple servers and those initial http requests don't go to the exact same server each time, then the socket.io connection will never get established properly, will not switch over to webSocket, and will keep attempting to use http polling.
There are two ways to fix this.
You can configure your clients to just assume the webSocket protocol will work. This will initiate the connection with one and only one http connection which will then be immediately upgraded to the webSocket protocol (with socket.io running on top of that). In socket.io, this is a transport option specified with the initial connection.
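In the JavaScript client, for example, that option looks like this (a sketch; the URL is a placeholder):

const { io } = require('socket.io-client');

// Skip the initial HTTP long-polling and go straight to the webSocket transport.
const socket = io('https://server.example.com', {
  transports: ['websocket'],
});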
You can configure your server infrastructure to be sticky so that a request from a given client always goes back to the exact same server. There are lots of ways to do this depending upon your server architecture and how the load balancing is done between your servers.
If your servers are keeping any client state local to the server (and not in a shared database that all servers access), then even a dropped connection and reconnect will need to go back to the same server, and sticky connections are your only solution. You can read more about sticky sessions on the socket.io website here.
Thanks for your replies.
I finally figured out the issue. It was caused by the TTL of the backend service in the Google Cloud Load Balancer. The default TTL was 30 seconds, which made each socket connection disconnect and reconnect.
So I updated the value to 3600s and then I could keep the connection.
I implemented two web servers with Express. One is the main server, the other is a microservice.
They communicate through an HTTP REST API, and historically we had a socket.io server started on the microservice so that the main server could watch its up/down status.
              ----HTTP-----
[main server]               [microservice]
              --socket.io--
I then realized that socket.io is not the right tool for that. So I decided to trade socket.io for a raw TCP socket.
So the question is: is it possible to start the HTTP server "ON TOP" of a raw TCP server (on the same port), allowing me to connect with a TCP client AND to send HTTP requests?
I have this so far:
const express = require('express');
const http = require('http');

const app = express();
const server = http.createServer(app);
// const io = sio(server);
server.listen(config.port, config.ip, callback);
and I'm trying to integrate with this
What I'm trying to achieve, and achieved successfully with socket.io, is starting a socket server on the microservice, connecting to it from the main server, keeping it alive, and watching for events to keep a global boolean variable "connected" in sync with it. I'm using this variable to inform my frontend of the microservice's state, to pre-check whether I should try to request the microservice when it is needed, and also for logging purposes. I'd like to avoid manual polling, firstly for maintainability and also for real-time purposes.
Is it possible to start the http server "ON TOP" of a raw TCP server (on the same port)?
Sort of, not really. HTTP runs on top of TCP. So, you could technically open a raw TCP server and then write your own code to parse incoming HTTP requests and send out legal HTTP responses. But, now you've just written your own HTTP server so it's no longer raw TCP.
The challenge with trying to have a single server that accepts both HTTP and some other protocol is that your server has to be able to figure out, for any given incoming packet, what it is supposed to do with it. Is it an HTTP request, or is it your other type of custom request? It would be technically feasible to write such a thing.
Or, you could use the webSocket technique that starts out as an HTTP request, but requests an upgrade to some other protocol using the upgrade header. It is fully defined in the http spec how to do this.
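A rough sketch of that upgrade approach on a plain Node http server (my own illustration; the protocol name is made up):

const http = require('http');

const server = http.createServer((req, res) => {
  res.end('normal HTTP response\n'); // regular requests are answered as HTTP
});

server.on('upgrade', (req, socket) => {
  if (req.headers.upgrade !== 'my-protocol') {
    socket.destroy();
    return;
  }
  socket.write('HTTP/1.1 101 Switching Protocols\r\n' +
               'Upgrade: my-protocol\r\n' +
               'Connection: Upgrade\r\n\r\n');
  // From here on the socket is a raw TCP stream for the custom protocol.
  socket.on('data', (chunk) => socket.write(chunk)); // simple echo
});

server.listen(3001);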
But, unless you have some network restriction that you can only have one server or one open port, I'd ask why? It's a complicated way to do things. It doesn't really cost anything to just use a different port and a different listening server for the different type of communication. And, when each server is listening only for one type of traffic, things are a heck of a lot simpler. You can use a standard HTTP server for your HTTP requests and you can use your own custom TCP server for your custom TCP requests.
I can't really tell from your question what the real problem is here that you're trying to solve. If you just want to test if your HTTP server is up/down, then use some external process that just queries one of your HTTP REST API calls every once in a while and then discerns whether the server is responding as expected. There are many existing bodies of code that can be configured to do this too (it's a common task to check on the well being of a web server).
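If that's the route you take, the check itself can be as small as this (a sketch assuming Node 18+ for fetch; the URL and interval are placeholders):

let connected = false; // the global flag the rest of the app reads

async function checkMicroservice() {
  try {
    const res = await fetch('http://localhost:3001/api/health', {
      signal: AbortSignal.timeout(2000), // don't hang on a dead microservice
    });
    connected = res.ok;
  } catch {
    connected = false;
  }
}

checkMicroservice();
setInterval(checkMicroservice, 10 * 1000); // re-check every 10 seconds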
The code you link to shows a sample server that just sends back any message that it receives (called an echo server). This is just a classic test server for a client to connect to as a test. The second code block is a sample piece of client code to connect to a server, send a short message and then disconnect.
From your comments:
The underlying TCP server wouldn't even be used for messaging, it just would be used to watch connect/disconnect events
The http server already inherits from a TCP server so it has all the same events for the server itself. You can see all those events in the http server doc. I don't know exactly what you want, but there are server lifetime events such as:
listening (server now listening)
close (server now closed)
And, there are server activity events such as:
connection (new client connected)
request (new client issues a request)
And, from the request event, you can get both the request (http.IncomingMessage) and response (http.ServerResponse) objects, which allow you to monitor the lifetime of an individual connection, including even getting the actual socket object of an incoming connection.
Here's a code example for the connect event right in the http server doc.
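Putting those events together, watching the HTTP server's own lifecycle and connections looks roughly like this (a sketch of mine, not the doc's example):

const express = require('express');
const http = require('http');

const app = express();
const server = http.createServer(app);

server.on('listening', () => console.log('microservice listening'));
server.on('close', () => console.log('microservice closed'));

server.on('connection', (socket) => {
  // 'connection' fires for every new TCP connection to the HTTP server.
  console.log('client connected from', socket.remoteAddress);
  socket.on('close', () => console.log('client connection closed'));
});

server.listen(3001);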
I have one server and multiple clients. The server wants to run a shell script on whichever devices it chooses. Surely it's not possible via a simple socket because we may have thousands of devices. Also, the server and devices should always be connected via a socket. After a lot of searching I found out that the solution might be NAT-T, but I still don't know how to use it, or whether there is another solution.
Please help me figure out what I should do on the clients and the server.
If you don't know the clients' addresses and ports upfront, you need to have the clients connect to the server. Thousands of devices are no problem; you only run into a socket limit at around 65,000 open ports (check ulimit). Build an object stream between client and server and execute the script based on the object the client receives (see the sketch below the links). You could also set an interval on the clients and let them check over simple HTTP(S) every n seconds whether there is something for them to do.
See for example here: Node Stream Docs
Or here: Node HTTP Docs
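Here is a rough sketch of the object-stream idea on a client device (my own illustration; the host, port, device ID, message format and script directory are all assumptions):

const net = require('net');
const readline = require('readline');
const { execFile } = require('child_process');

// The device connects out to the server, so NAT/firewalls on the device side
// are not a problem as long as the server itself is reachable.
const socket = net.connect({ host: 'server.example.com', port: 4000 }, () => {
  // Identify this device so the server can target it individually.
  socket.write(JSON.stringify({ type: 'hello', deviceId: 'device-123' }) + '\n');
});

// Read newline-delimited JSON objects from the server.
const rl = readline.createInterface({ input: socket });
rl.on('line', (line) => {
  let msg;
  try { msg = JSON.parse(line); } catch { return; }
  if (msg.type === 'run') {
    // Only run named scripts from a known local directory, never raw shell strings.
    execFile(`/opt/scripts/${msg.script}`, msg.args || [], (err, stdout) => {
      socket.write(JSON.stringify({
        type: 'result',
        error: err ? err.message : null,
        stdout,
      }) + '\n');
    });
  }
});

// If the connection drops, exit and let a supervisor (systemd, pm2, ...) restart us.
socket.on('close', () => process.exit(1));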