socket.io does not fire events instantly after hosting in AWS Docker - node.js

I am dealing with an error that says the WebSocket connection failed in my MERN web app. Here is my
backend-side code
frontend code
node.conf code
Everything works fine on localhost, but after hosting, the socket connection fails and socket.io events no longer fire instantly in my chat and notification services.

Related

Does the socket.io server need to be separate from the backend when deploying?

I am building a React app. I have my client folder and my backend folder, which contains all my MongoDB models, routes, functions, etc.
I now realize that my app needs to use socket.io.
My frontend is on localhost:3000 and my backend is on localhost:5000
My understanding is that socket.io needs its own port.
Does this mean when I deploy to heroku I need to deploy a backend server, frontend server, and a socket.io server?
My understanding is that socket.io needs its own port.
This is incorrect. socket.io can use the same port as your backend just fine. Incoming requests to create a socket.io connection can be distinguished from other web requests by the Upgrade header that the underlying WebSocket handshake uses (and by the /socket.io/ request path). This allows socket.io/WebSocket and your HTTP server to use the exact same port.
Does this mean when I deploy to heroku I need to deploy a backend server, frontend server, and a socket.io server?
No. You can still have just a frontend server and a backend server, and the backend server can handle both your regular backend requests and the socket.io connections.
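As a minimal sketch of that shared-port setup (assuming Express and socket.io v4; the route, event names, and port are illustrative):

    // server.js - one process, one port, serving both HTTP routes and socket.io
    const express = require("express");
    const http = require("http");
    const { Server } = require("socket.io");

    const app = express();
    const server = http.createServer(app);    // wrap the Express app...
    const io = new Server(server);             // ...and hand the same server to socket.io

    app.get("/api/hello", (req, res) => res.json({ ok: true }));   // regular REST route

    io.on("connection", (socket) => {
      socket.on("chat message", (msg) => io.emit("chat message", msg));
    });

    server.listen(5000);   // one port serves both kinds of traffic

The client then connects to the same origin it already uses for HTTP calls, e.g. io("http://localhost:5000").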

Frequent disconnection and re-connection of WebSocket requests after deployment in K8s

I am working on the development of a chat application using the socket.io library in the back end and ngx-socket-io in the front end. The chat functionality works fine in the local environment, and there is only one WebSocket connection in the network tab of the browser.
But when I deploy the code to the Kubernetes cluster, the WebSocket connection does not persist for long: the previous WebSocket request is closed and a new one is initiated, i.e. the connection keeps disconnecting and re-connecting.
It is not persistent even with a single active pod or service in the Kubernetes cluster.
I want a single WebSocket connection to persist for a longer duration; only then can live chat work, since it breaks once a new WebSocket connection is initiated.
You need to apply the following annotations to the Ingress for the WebSocket protocol. For example:
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
This issue was solved by using the Traefik ingress controller, which is a more advanced controller, instead of the NGINX ingress controller.

socket.io on AWS EC2 Instance

I have an Express Node server running as the backend for a REST API and WebSockets (for the chat feature), while my client is React.js.
When I deploy my server to an AWS EC2 instance with security groups set up, I am able to make HTTP API calls, but my socket.io connection doesn't work. I have tested the server locally on my localhost, which works.
I think this has something to do with the proxy in package.json, because when testing locally, if I change the proxy to my EC2 instance's public IP while keeping my socket.io connection pointed at my localhost, it does not work.
I am not getting any "connection refused" errors.
So my question is: how does the React.js proxy affect the socket.io connection?
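To make the setup being described concrete, here is a minimal sketch of the two pieces involved (assuming create-react-app's dev proxy and socket.io-client v4; the EC2 address and ports are placeholders):

    // client/package.json - the CRA dev proxy only applies to requests that hit
    // the dev server itself, i.e. relative URLs such as fetch("/api/...")
    //   "proxy": "http://<ec2-public-ip>:5000"

    // A socket.io client given an absolute URL bypasses the dev server (and
    // therefore the proxy) entirely and connects straight to that address:
    import { io } from "socket.io-client";
    const socket = io("http://localhost:5000");   // still localhost unless you change it

With this arrangement, pointing the proxy at the EC2 IP while leaving the socket URL on localhost means the REST traffic and the socket traffic go to different machines.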

Connecting to Socket.IO server hosted on Azure Mobile Services returns Error

I am attempting to connect to a Socket.IO server hosted in the extensions folder of my Azure Mobile Service using the startup script, but I am unable to do so and receive the error:
"WebSocket connection to 'ws://mymobileservice.azure-mobile.net/extensions/socket.io/?EIO=2&transport=websocket' failed: Error during WebSocket handshake: Unexpected response code: 503"
I am using port 80 with a path of 'extensions/socket.io'
I've also used port 443 with a secure connection, and that has yielded the same error.
Am I on the right track?
Any help is greatly appreciated.
You figured this out, but here's a definitive answer.
The best way to host a socket.io server is to host it on Azure Websites directly (Azure > New > Website) instead of Azure Mobile Services. Once you publish your socket.io server to the website, don't forget to go to the "Configure" tab and enable "WebSockets" for the server.
There is an amazing tutorial on deploying the socket.io chat app to azure websites here:
https://azure.microsoft.com/en-us/documentation/articles/web-sites-nodejs-chat-app-socketio/

Connectivity between NodeJS applications behind load balancer

I'm currently working on a Node.js application and I have a small issue.
My Node.js application consists of two parts:
An internal API used by our other applications. Let's call this part API.
A user-facing web server (Express + Socket.io). Let's call this part Web.
We receive a lot of calls to the API from our other internal applications. Some of these calls generate notifications for web users (let's imagine it's an online chat).
So if we have a message for client #1000 and he's online (connected to the Web application through Socket.io), we emit the message to this client through Socket.io. Everything works fine.
But there is an issue.
We're going to introduce a load balancer in front of our Node.js application (it's one application, so both parts, API and Web, would be behind the load balancer). Now let's imagine we have a load balancer and two servers running this application: server1 and server2.
Thus some API calls are sent to server1 and some to server2. Now imagine an API call arrives at server1 and should send a message to client #1000, but that client has an open connection to server2.
The question is: are there any best practices or common solutions for how these two servers should communicate? One possible solution could be to open socket connections between all servers running the Node.js application and, when we need to send a message to a client, broadcast it so every server can check whether the client is currently connected to it and deliver the message to the correct client.
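The pattern described above (broadcast to every server, and each server delivers to the clients it holds connections for) is what socket.io's Redis adapter implements through Redis pub/sub. A minimal sketch, assuming socket.io v4, the @socket.io/redis-adapter package, and a Redis instance both servers can reach (the host, port, and per-user room naming are illustrative):

    const http = require("http");
    const { Server } = require("socket.io");
    const { createAdapter } = require("@socket.io/redis-adapter");
    const { createClient } = require("redis");

    const httpServer = http.createServer();
    const io = new Server(httpServer);

    // Every app server publishes and subscribes through Redis, so an emit
    // on server1 also reaches sockets that are connected to server2.
    const pubClient = createClient({ url: "redis://redis-host:6379" });
    const subClient = pubClient.duplicate();

    Promise.all([pubClient.connect(), subClient.connect()]).then(() => {
      io.adapter(createAdapter(pubClient, subClient));
      httpServer.listen(5000);
    });

    io.on("connection", (socket) => {
      // Illustrative: put each socket in a per-user room so the API part
      // can target a specific client by id.
      socket.join("user:" + socket.handshake.query.userId);
    });

    // From the API part, on whichever server handled the call:
    //   io.to("user:1000").emit("notification", payload);

This keeps the cross-server delivery logic inside socket.io itself rather than in a hand-rolled mesh of server-to-server connections.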

Resources