Has anyone used Primus with websockets behind an AWS Elastic Load Balancer? - node.js

I have a node.js application server running on port 80, and I recently added realtime messaging through Primus with the websockets transformer on port 9001.
It works well on a single instance. I deployed the messaging to an Elastic Beanstalk environment with the following configuration.
AWS Elastic Beanstalk
Platform version v2.0.0
Nodejs version v0.12.6
Primus version v4.0.5
Port 9001 is added to the security group of the instance as shown in the screenshot.
The proxy server is set to "none" in the configuration options.
A TCP listener is added in the Elastic Load Balancer configuration.
Proxy-Protocol is enabled as mentioned in the AWS documentation.
Added proxywrap to the Primus server configuration.
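A minimal sketch of that setup, assuming proxywrap is used to wrap the http module before the server is handed to Primus (the port and request handler here are placeholders):

var http = require('http');
var Primus = require('primus');

// proxywrap strips the PROXY protocol header that the ELB prepends
// when Proxy-Protocol is enabled on a TCP listener
var proxiedHttp = require('proxywrap').proxy(http);

var server = proxiedHttp.createServer(function (req, res) {
  res.end('ok');
});

var primus = new Primus(server, { transformer: 'websockets' });

primus.on('connection', function (spark) {
  spark.write('connected');
});

server.listen(9001);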
But client requests don't reach the instance and the connection times out. Has anyone used Primus with websockets behind AWS ELB?
Please let me know the configuration that enables websocket communication behind Elastic Beanstalk.

I managed to get websockets (https://github.com/websockets/ws) working on both port 80 and port 8080 behind ELB with the configuration below, and that's without enabling Proxy-Protocol.
Security group:
Load balancer listeners:
Container options:
Load balancer:
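For reference, a minimal ws echo server of the kind this setup assumes could look like this (the env-var port fallback is an assumption; match it to the instance port in the listener configuration):

var WebSocketServer = require('ws').Server;

// listen on the instance port the ELB listener forwards to
var wss = new WebSocketServer({ port: process.env.PORT || 8080 });

wss.on('connection', function (ws) {
  ws.on('message', function (message) {
    ws.send(message); // simple echo
  });
});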

You need to do two things:
Increase the idle timeout on the ELB
In the EC2 dashboard, go to your load balancer's settings and open the Description tab. Look for the "Idle timeout" setting and enter something like "600" (for 10 minutes).
Ping periodically
Implement a WS ping every 5 minutes (or another interval, but it needs to be lower than the idle timeout on the ELB). If Primus doesn't support this in its API, implement it yourself by sending a dummy message to the client.
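For example, a minimal sketch of such a keep-alive using Primus's broadcast write (the 5-minute interval and message shape are arbitrary placeholders):

// primus is the Primus instance attached to your HTTP server
var KEEP_ALIVE_INTERVAL = 5 * 60 * 1000; // must stay below the ELB idle timeout

setInterval(function () {
  // primus.write broadcasts a dummy message to every connected client
  primus.write({ type: 'keepalive', ts: Date.now() });
}, KEEP_ALIVE_INTERVAL);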

Related

ECS Health check failures AWS - copilot

Whenever I try to redeploy my load balanced service in AWS (via copilot) I keep getting health check failures (502 Bad Gateway). Here's the error message:
(service my-app-my-env-my-service-Service-n6SienH8zSJt) (port 3000) is unhealthy in
(target-group arn:aws:elasticloadbalancing:us-east-1:[my target group]) due to (reason Health checks failed).
I have a cluster (ECS) with two services (one backend service working totally fine, and then one load balanced service that's causing the issues) that each run one task (Fargate). The load balanced service is a meteor/node app which is listening on port 3000.
The Elastic Load Balancer (application) is listening on port 80 and it should be forwarding traffic to a target group for the service mentioned above which should be listening on port 3000.
This target group for the load balanced service has:
Target type: IP
IP address type: IPv4
Protocol: Port -- HTTP:3000
Protocol Version: HTTP1
The targets for this group have their own IP addresses with port 3000.
The target type is IP address since I use Fargate and not EC2 for my tasks. So when a task starts up, I correctly see the private IP of the task registering in the target group.
A few notes:
The server is launching correctly. I'm receiving logs that indicate a healthy server, and no errors are showing up.
I have a /_health route which I set up and which works locally (I get a 200 status with a curl request to localhost:3000/_health). I'm pretty convinced that NO routes are working, because I changed my app to render a static page regardless of the route and I still have issues connecting. This makes me think the issue lies between the load balancer and the service.
Been stuck on this for a week so if anyone knows what I'm missing that would be particularly helpful! I'm happy to share more information about my cluster if that will help! Thanks in advance :)
What is the mapping of the health check route in your copilot manifest?
By default copilot configures health checks to target '/'.
I feel pretty silly about this, but I'm pretty sure I found the solution. While I configured port: 3000 correctly on the image in the manifest.yml, I needed an additional environment variable called PORT: 3000 in the variables section of the manifest. This seemed to do the trick... like I said, silly mistake!
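A sketch of the relevant manifest.yml pieces (the service name and health check path are illustrative, and worth double-checking against the copilot docs for your version):

name: my-service                # illustrative name
type: Load Balanced Web Service

image:
  build: Dockerfile
  port: 3000                    # container port copilot routes traffic to

http:
  path: '/'
  healthcheck: '/_health'       # point the health check at the route that returns 200

variables:
  PORT: 3000                    # the app reads this to decide which port to bind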

Application Load Balancer https request to EC2 nodejs running on port 4000

I have an Application Load Balancer that is in HTTPS and will make a request to my EC2 instance. I have configured the ALB listeners to HTTP/HTTPS and then I have specified the target group (for the application load balancer) as the EC2 instance.
On this EC2 instance, I have two things running: 1) A website 2) A nodejs service (on port 4000)
In the target group I have specified the relevant EC2 instance twice (once with port 80 and once with port 4000). On port 4000, the health check seems to be failing (even though the service is running only on port 4000).
When I make the https request to my website, the response is fine. However, when I make the https request to my nodejs service (running on port 4000), I get a connection refused error.
A target group should only forward to one application, so you would want separate target groups for your website and your node js service. Then use the ALB's advanced routing rules to determine which one to forward to.
An example of the above would be:
A path of /api/* forwards to the node js target group.
The default rule forwards to the website.
Secondly, your health check must pass on your node js app; it's currently returning a 404, indicating that the path you are health checking against is not available.
Finally, make sure you connect based on the listener configuration (HTTP/HTTPS and its port), not the port from the target group.
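As a sketch of the health check point, assuming the target group checks a path like /health (adjust to whatever path your target group actually uses), the node js service just needs to answer it with a 200:

var http = require('http');

http.createServer(function (req, res) {
  if (req.url === '/health') {
    // return 200 so the ALB target group marks this instance healthy
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    return res.end('OK');
  }

  // ...normal service handling would go here...
  res.writeHead(404);
  res.end();
}).listen(4000);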

node.js autoscaling/load balancing any methods? with socket.io

I am trying to deploy a node.js project so it can auto scale behind a load balancer.
I have tried AWS Elastic Beanstalk with ELB. After much time spent, the ELB still doesn't handle the socket.io connection as expected.
From this I have searched for many methods to fix the issue:
nginx proxying
changed listeners on the load balancer to allow TCP on port 3000 (other ports too, depending on the forum/question I looked at)
a redis server to share socket connections between different nodes
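The redis attempt was roughly the standard socket.io-redis adapter setup (the host, port, and event name below are placeholders):

var io = require('socket.io')(3000);
var redisAdapter = require('socket.io-redis');

// share connection state between node processes through redis
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', function (socket) {
  socket.emit('news', { hello: 'world' });
});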
I can't really list all the sites I've gone through trying to get this working, but none of it has worked at all.
The only thing I probably haven't tried is using route53...
So the question is:
If someone has a proper node.js app with socket.io working on Elastic Beanstalk (or on EC2 instances with a load balancer and auto scaling), is there a way to do this, and what would that method be?
If there are links with a few step-by-step methods, that would be awesome.
Thanks in advance!

amazon beanstalk tcp app not responding

I am running a nodejs TCP app on my AWS Linux EC2 instance. The basic code is given below:
var net = require('net');
net.createServer(function (socket) {
  socket.write('hello\n');
  socket.on('data', function (data) {
    socket.write(data.toString().toUpperCase());
  });
}).listen(8080);
It runs like a charm, but I wanted to run this same app on AWS Elastic Beanstalk (just to get the benefit of auto scaling). Yeah, I am not an AWS ninja. By the way, to get a public IP on Beanstalk I use an AWS VPC.
Beanstalk app connected to the VPC = checked.
VPC port 8080 open = checked.
Changed the hard-coded port 8080 to process.env.PORT = checked.
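That change looks roughly like this (the 8080 fallback is just for running locally):

var net = require('net');

// Beanstalk injects PORT; fall back to 8080 when running locally
var port = process.env.PORT || 8080;

net.createServer(function (socket) {
  socket.write('hello\n');
  socket.on('data', function (data) {
    socket.write(data.toString().toUpperCase());
  });
}).listen(port);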
But if I hit anything on port 8080, it does not return 'hello' from the application. What am I missing?
Your application is not implementing HTTP. ElasticBeanstalk by default is going to configure the Elastic Load Balancer (ELB) to act as an HTTP load balancer. This means your instance is not healthy and is not being put into service by the ELB, and the ELB itself would also be rejecting the non-HTTP requests.
Important note: While it would be possible to modify ElasticBeanstalk to work for your use case, you are going to be using it in a non-standard way so there will be some risks. If you are regularly creating and deleting environments using CloudFormation or the API then you will likely run into a lot of headaches.
If you are going to just create an environment and leave it running then I suggest you take the following steps.
First off, ElasticBeanstalk's nodejs configuration is going to set up an Nginx server on the EC2 instance; since you are using TCP you will want to bypass this entirely. This can be done by re-configuring the ELB and security groups. It is easiest to just leave Nginx running, it simply will not be used; just make sure it is not on the same port as nodejs.
By default the ELB configuration will look like this:
The step you missed was updating the ELB to use TCP load balancing on the appropriate ports. You can go into the EC2 web console under Load Balancers and update the load balancer configuration for the already created Beanstalk to look like this:
You will also want to modify the health check of the load balancer to be on the correct port:
Lastly, double check that the security groups for both the load balancer and the EC2 instances allow the appropriate ports. The final thing to check, though you already mentioned you looked, is that your VPC's NACLs also allow the appropriate ports.
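If you prefer the CLI over the console, the equivalent changes for the classic ELB look roughly like this (the load balancer name and port 8080 are placeholders for your environment's values):

# switch the listener to plain TCP on the app's port
aws elb create-load-balancer-listeners \
  --load-balancer-name awseb-my-env-elb \
  --listeners Protocol=TCP,LoadBalancerPort=8080,InstanceProtocol=TCP,InstancePort=8080

# point the health check at the same TCP port
aws elb configure-health-check \
  --load-balancer-name awseb-my-env-elb \
  --health-check Target=TCP:8080,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2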

Allow WebSockets in Google Compute Engine (GCE)

I'm using Compute Engine (GCE) to run my socket server with Socket.IO (Node.js)
It's only working with polling. When I try to use a web client I receive this error code:
WebSocket connection to 'ws://myapp-socket.appspot.com/socket.io/?EIO=3&transport=websocket&sid=Tt4uNFR2fU82zsCIAADo' failed: Unexpected response code: 400
What am I doing wrong? Is it a GCE configuration problem?
You cannot use the myapp-socket.appspot.com domain in your script when using WebSockets. Instead, you will need to use the external IP of the GCE instance and connect directly to that, opening any firewall ports you may be using.
I believe traffic going to the appspot.com domain also goes through frontend web servers, and socket.io needs a direct connection to the server.
Virtual machines in Google Compute Engine have port 80 open for HTTP and port 443 for HTTPS. Using these ports for websockets solved the issue.
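For example, a client sketch that connects directly to the instance instead of going through appspot.com (the IP below is a placeholder for your instance's external IP, and this assumes the socket.io client script is loaded on the page):

// force the websocket transport and connect straight to the instance's external IP
var socket = io('http://203.0.113.10:80', { transports: ['websocket'] });

socket.on('connect', function () {
  console.log('websocket transport connected');
});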
