Setup of AWS Elastic Beanstalk with WebSockets - node.js

I'm trying to set up WebSockets so I can send messages to AWS, process them there, pass some payload on to other resources in the cloud, and deliver custom responses back to the client.
But I cannot get this to work.
The main goal is to send messages to AWS over wss:// (or ws:// as a first approach, in case that's possible); depending on the payload content, it should return a custom response, then close the connection if no further operation is needed.
I've tried the suggestions posted here, here and here. But either my lack of knowledge about load balancing, WebSockets, TCP and HTTP is keeping me from seeing the missing pieces of the solution, or I'm doing everything wrong, or both.
As of now, I have an Elastic Beanstalk sample project structured like this:
+ nodejs-v1
|--+ .ebextensions
| |--- socketupgrade.config
|
|--+ .elasticbeanstalk
| |--- config.yaml
|
|--- .gitignore
|--- app.js
|--- cron.yaml
|--- index.html
|--- package.json
The Elastic Beanstalk environment and application were created with standard settings, and I made sure the balancer is an Application Load Balancer, not a Classic one, since the Application Load Balancer supports WebSockets out of the box, as many sources and the documentation state.
It's set up with HTTP on port 80, and stickiness is enabled with a one-day duration.
Here's the code being used:
app.js:
'use strict';

const express = require('express');
const socketIO = require('socket.io');
const path = require('path');

const PORT = process.env.PORT || 3000;
const INDEX = path.join(__dirname, 'index.html');

const serber = express()
  .use((req, res) => res.sendFile(INDEX))
  .listen(PORT, () => console.log(`Listening on ${PORT}`));

const io = socketIO(serber);

io.on('connection', (socket) => {
  console.log('Client connected');
  socket.on('disconnect', () => console.log('Client disconnected'));
});

setInterval(() => io.emit('time', new Date().toTimeString()), 1000);
index.html:
<script src="/socket.io/socket.io.js"></script>
<script>
  var socket = io.connect('http://localhost');
  socket.on('news', function (data) {
    console.log(data);
  });
</script>
package.json:
{
  "name": "Elastic-Beanstalk-Sample-App",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "express": "*",
    "socket.io": "*"
  },
  "scripts": {
    "start": "node app.js"
  }
}
.ebextensions/socketupgrade.config:
container_commands:
  enable_websockets:
    command: |
      sed -i '/\s*proxy_set_header\s*Connection/c \
        proxy_set_header Upgrade $http_upgrade;\
        proxy_set_header Connection "upgrade";\
      ' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
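For reference, here is roughly what the relevant location block should look like after that substitution. This is a sketch assuming the stock Elastic Beanstalk Node.js nginx template, where the node process sits behind an upstream named nodejs (platform versions may differ):
location / {
    proxy_pass          http://nodejs;
    proxy_http_version  1.1;

    # These two headers are what let the WebSocket upgrade handshake
    # pass through nginx to the node process.
    proxy_set_header    Upgrade     $http_upgrade;
    proxy_set_header    Connection  "upgrade";

    proxy_set_header    Host        $host;
}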
I'm only getting 504s and 502s; sometimes, when tweaking configurations at random in pointless attempts, I get a 200, and on other attempts there's no protocol error, but messages like disconnections and such...
I appreciate your time and attention reading this desperate topic! Any hint will be appreciated as well... Just, anything... T-T
Thanks!
Kind regards,
Jon M.
Update 1
I'll start by quoting @RickBaker:

    Personally, what I would do first is remove the load balancer from the equation. If your EC2 instance has a public IP, go into your security groups and make sure the proper port your app is listening to is open to the public. And see if you can at least get it working without the load balancer complicating things. – Rick Baker 21 hours ago
I changed the environment type of the Elastic Beanstalk application from Load Balancing, Auto Scaling to Single Instance. Important to note: I made the change from the Elastic Beanstalk web console, not from EC2 directly, since I think changing it there could break the Elastic Beanstalk environment as a whole.
Anyway, after the change, once the environment and its application finished setting up again, I updated and deployed the following:
index.html:
<script src="/socket.io/socket.io.js"></script>
<script>
  var socket = io();
</script>
Once everything was running, I tested with a webpage call to the index page, and the node logs show life:
-------------------------------------
/var/log/nodejs/nodejs.log
-------------------------------------
Listening on 8081
Client connected
Client disconnected
Client connected
Then I started searching for a server-to-server setup, found these docs, and started digging a bit into how to connect to a WSS server.
So, the main goal is to establish and maintain a session from the AWS EB application to another server that accepts WSS connections. The AWS EB application should be responsible for establishing and maintaining that connection, so that when events happen at the Network Server, the application at EB can send responses to the requests those events produce.
Then I read this topic and realized that the NodeJS/socket.io approach won't work, based on the posts I read. So I don't know what to do now. ( '-')
AWS EB can set up a Python environment with WSGI but, geez... I don't know what to do next. I'll try to connect over WS if possible, and if not then WSS, and see if something works out. I'll update right after I have results, whether positive or not.
Jon over and out.

Update 2
After combining previous iterations with some more documentation reading, I came to realize that the connection indeed starts from AWS's side, via NodeJS using the ws package.
So now I'm able to communicate with the Network Server via WSS, and both request and provide data.
The app.js:
var WebSocket = require('ws');

var wss = new WebSocket('wss://example.com');

wss.on('open', function connection() {
  console.log("WSS connection opening");
});

wss.on('message', function incoming(data) {
  console.log("Jot:");
  console.log(data);
  setTimeout(function timeout() {
    console.log("Sending response");
    wss.send(JSON.stringify(
      {
        "key": "Hi there"
      }
    ));
  }, 500);
});
The package.json:
{
  "name": "Elastic-Beanstalk-Sample-App",
  "version": "0.0.1",
  "private": true,
  "dependencies": {
    "express": "*",
    "socket.io": "*",
    "ws": "*"
  },
  "scripts": {
    "start": "node app.js"
  }
}
The structure of the project remains almost the same:
+ nodejs-v1
|--+ .ebextensions
| |--- socketupgrade.config
|
|--+ .elasticbeanstalk
| |--- config.yaml
|
|--- .gitignore
|--- app.js
|--- cron.yaml
|--- package.json
As you can see, there's no index.html, since it's not used.
From here, how data is sent and received is up to the solution's requirements, along with making sure the connection is established and recovered when it drops.
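If the session has to survive drops, a minimal reconnect wrapper around ws could look like this (just a sketch; the endpoint URL and retry delay are placeholders, not values from the real setup):
var WebSocket = require('ws');

var RETRY_DELAY_MS = 5000; // arbitrary back-off chosen for this sketch

function connect() {
  var ws = new WebSocket('wss://example.com'); // placeholder endpoint

  ws.on('open', function () {
    console.log('WSS connection opened');
  });

  ws.on('message', function (data) {
    console.log('Received:', data);
  });

  // ws emits 'close' after 'error' as well, so scheduling the retry
  // only here avoids double reconnects.
  ws.on('close', function () {
    console.log('Connection lost, retrying in ' + RETRY_DELAY_MS + ' ms');
    setTimeout(connect, RETRY_DELAY_MS);
  });

  ws.on('error', function (err) {
    console.log('WSS error: ' + err.message);
  });
}

connect();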

Related

CRA Socket.io request returns net::ERR_CONNECTION_TIMED_OUT

Hello. I've spent some time without luck trying to understand the problem here.
I've looked through every question on Stack Overflow that seems to deal with the same problem, though nothing has worked so far.
I have a simple chat app built using Create React App and Socket.io (which runs fine on localhost), but when it's deployed to my Node server I receive ERR_CONNECTION_TIMED_OUT errors and no response. The website itself runs fine, but calls to my Socket.io server error out.
I'm guessing this is down to my lack of knowledge with how Node and Socket.io want to work.
Some info:
server.js
const path = require("path");
const express = require("express");

const app = express();
const http = require("http").createServer(app);
const port = 8080;

http.listen(port, () => console.log(`http: Listening on port ${port}`));

const io = require("socket.io")(http, { cookie: false });

app.use(express.static(path.join(__dirname, "build")));

app.get("/*", function (req, res) {
  res.sendFile(path.join(__dirname, "build", "index.html"));
});

io.on("connection", (socket) => {
  console.log("New client connected");

  // Emitting a new message. Will be consumed by the client
  socket.on("messages", (data) => {
    socket.broadcast.emit("messages", data);
  });

  // A special namespace "disconnect" for when a client disconnects
  socket.on("disconnect", () => console.log("Client disconnected"));
});
client.js
....
const socket =
  process.env.NODE_ENV === "development"
    ? io("http://localhost:4001")
    : io("https://my-test-site:8080");

socket.on("messages", (msgs: string[]) => {
  setMessages(msgs);
});
....
docker-compose.yml
version: "X.X"
services:
  app:
    image: "my-docker-image"
    build:
      context: .
      dockerfile: Dockerfile
      args:
        DEPENDENCY: "my-deps"
    ports:
      - 8080:8080
Dockerfile
...
RUN yarn build
# run my server.js
CMD node server.js
...
UPDATE: I got around this problem by making sure my main port was only used to run Express (with socket.io) - in my setup that was port 8080. When running in the same Docker container, I don't think I needed to create and use the https version of Express's createServer.
This looks like you forgot to map the port of your Docker container. The EXPOSE statement in your Dockerfile only advertises to other Docker containers sharing a Docker network with yours that they can connect to port 4001 of your container.
The port mapping can be configured with the -p flag for docker run commands. In your case the full command would look something like this:
docker run -p 4001:4001 your_image_name
Also, do you have a signed certificate? Browsers will likely block the connection if they do not trust your server's certificate.
I got around this problem by keeping just one port available (in my case :8080). This port is what Express/socket.io uses (originally I had two different ports: one for my site, one for Express). Also, in my case, when running in the same Docker container, I didn't need the require("https").createServer(app) (https) version of the server, as http was sufficient.
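With a single origin serving both the page and the socket, the client can also drop the hard-coded host and port and simply connect back to wherever the page came from. A sketch, assuming the socket.io client script is served by the same Express app:
// client.js: with no URL argument, socket.io connects to the origin
// that served the page, i.e. the one exposed port (8080 here).
const socket = io();

socket.on("messages", (msgs) => {
  setMessages(msgs);
});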

How do I deploy socket.io to Google App Engine?

I created my first node.js app using socket.io. Specifically, I implemented the chat example published by socket.io. It works perfectly locally. I then tried deploying it to Google App Engine (making some code tweaks for node to work).
Everything shows up, indicating that the node part is working well. However, the chat doesn't work, indicating that the socket.io part isn't. You can see the deployed app (and page source) here.
Do I have to do anything additional? Something in the yaml or json files?
yaml content:
runtime: nodejs
vm: true
skip_files:
  - ^(.*/)?.*/node_modules/.*$
json content:
{
  "name": "Chaty",
  "description": "chatrooms app",
  "version": "0.0.1",
  "private": true,
  "license": "Apache Version 2.0",
  "author": "McChatface",
  "engines": {
    "node": "~4.2"
  },
  "scripts": {
    "start": "node app.js",
    "monitor": "nodemon app.js",
    "deploy": "gcloud preview app deploy"
  },
  "dependencies": {
    "express": "^4.13.4",
    "socket.io": "^1.4.6"
  }
}
In short, this cannot be done in production, and it appears to be a work in progress. The right architecture is to have a chat server on Google Compute Engine, as outlined here.
But as a proof of concept, using socket.io on Google App Engine is very similar to what's shown in the Google App Engine samples for websockets.
In the case of socket.io, follow these steps on the server side (code snippet below):
Create a second Express app and server.
Attach/use socket.io with the new server.
Listen on port 65080.
Open the firewall for port 65080 on Google Compute Engine.
Link to working repository.
socket.io changes on server side
var app_chat = require('express')();
var server1 = require('http').Server(app_chat);
var io = require('socket.io')(server1);

server1.listen(65080);

io.on('connection', function (socket) {
  console.log('user connected');
  socket.on('chat_message', function (data) {
    console.log('client sent:', data);
    socket.emit('chat_message', 'Server is echoing your message: ' + data);
  });
});
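On the client side, the connection then has to target the instance's external IP and that same port, since the regular App Engine HTTP frontend won't carry the socket traffic. A sketch, with EXTERNAL_IP standing in for whatever your instance's address actually is:
// Connect straight to the instance on the opened port.
var socket = io('http://EXTERNAL_IP:65080'); // EXTERNAL_IP is a placeholder

socket.emit('chat_message', 'hello');
socket.on('chat_message', function (data) {
  console.log(data); // "Server is echoing your message: hello"
});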
Open the firewall with this command:
gcloud compute firewall-rules create default-allow-websockets \
    --allow tcp:65080 \
    --target-tags websocket \
    --description "Allow websocket traffic on port 65080"
I hope Google comes up with a production-ready solution soon, as this will become a key weapon in any PaaS arsenal.
GAE support for persistent socket connections arrived in February 2019!
To make this work, you'll need to be using the flex environment and modify your app.yaml to include session_affinity:
network:
  session_affinity: true
Note that I still had to open port 65080 to get this working, but no other changes were required for me.
Read the deets at:
https://cloud.google.com/appengine/docs/flexible/nodejs/using-websockets-and-session-affinity
Google has an example app using WebSockets here; you need to do the following to get it working correctly:
Open up a firewall port for the server so clients can reach your server
Fetch your external IP in Google App Engine, so clients know what IP to connect to
Echo out your IP from your server via something like a REST API or an HTML page
That should be it (don't take my word for it though, this is what I've been able to find out after doing some research on the docs), hope it helps!
Fetching your external IP from within Google App Engine
// request isn't in the original snippet; it's needed for the call below
var request = require('request');

var METADATA_NETWORK_INTERFACE_URL = 'http://metadata/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip';

function getExternalIp(cb) {
  var options = {
    url: METADATA_NETWORK_INTERFACE_URL,
    headers: {
      'Metadata-Flavor': 'Google'
    }
  };

  request(options, function (err, resp, body) {
    if (err || resp.statusCode !== 200) {
      console.log('Error while talking to metadata server, assuming localhost');
      return cb('localhost');
    }
    return cb(body);
  });
}
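To actually get that IP to the browser, one option is a tiny endpoint that echoes it out. A sketch, where the /ip route name is made up for illustration:
var express = require('express');
var app = express();

// Hypothetical route: hands the instance's external IP to clients,
// which can then open the socket.io connection on port 65080.
app.get('/ip', function (req, res) {
  getExternalIp(function (ip) {
    res.send(ip);
  });
});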
Opening the firewall port
gcloud compute firewall-rules create default-allow-websockets \
--allow tcp:65080 \
--target-tags websocket \
--description "Allow websocket traffic on port 65080"
This app.yaml configuration worked for me:
runtime: nodejs
env: flex

manual_scaling:
  instances: 1

network:
  session_affinity: true
And I enabled the firewall rules with this command:
gcloud compute firewall-rules create default-allow-websockets \
    --allow tcp:65080 \
    --target-tags websocket \
    --description "Allow websocket traffic on port 65080"

How to setup gulp browser-sync for a node / react project that uses dynamic url routing

I am trying to add BrowserSync to my React.js Node project. My problem is that my project manages the URL routing, listening port and mongoose connection through the server.js file, so naturally, when I run a browser-sync task and check the localhost URL http://localhost:3000, I get a Cannot GET /.
Is there a way to force browser-sync to use my server.js file? Should I be using a secondary nodemon server or something (and if I do, how can the cross-browser syncing work)? I am really lost, and all the examples I have seen add more confusion. Help!!
gulp.task('browser-sync', function() {
  browserSync({
    server: {
      baseDir: "./"
    },
    files: [
      'static/**/*.*',
      '!static/js/bundle.js'
    ]
  });
});
We had a similar issue that we were able to fix by using proxy-middleware (https://www.npmjs.com/package/proxy-middleware). BrowserSync lets you add middleware, so you can process each request. Here is a trimmed-down example of what we were doing:
var proxy = require('proxy-middleware');
var url = require('url');

// the base url where to forward the requests
var proxyOptions = url.parse('https://appserver:8080/api');

// which route browserSync should forward to the gateway
proxyOptions.route = '/api';

// So an ajax request to browserSync at http://localhost:3000/api/users is
// sent via proxy to http://appserver:8080/api/users, while any request
// that doesn't have /api at the beginning of its path falls back to the
// default behavior.
browserSync({
  // other browserSync options
  // ....
  server: {
    middleware: [
      // proxy /api requests to the api gateway
      proxy(proxyOptions)
    ]
  }
});
The cool thing about this is that you can change where the proxy points, so you can test against different environments. One thing to note is that all of our routes start with /api, which makes this approach a lot easier. It would be a little trickier to pick and choose which routes to proxy, but hopefully the example above gives you a good starting point.
The other option would be to use CORS, but if you aren't dealing with that in production, it may not be worth messing with for your dev environment.
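If you did go the CORS route, the server side can be as small as this. A sketch using the cors middleware package (an assumption; any middleware that sets the Access-Control-Allow-Origin header would do):
var express = require('express');
var cors = require('cors');

var app = express();

// Let the browser-sync dev origin (localhost:3000 here) call this API directly.
app.use(cors({ origin: 'http://localhost:3000' }));

app.get('/api/users', function (req, res) {
  res.json([]); // placeholder payload
});

app.listen(8080);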

node server hosted on heroku not serving socket.io file

So I've made a multiplayer space shooter using node.js, socket.io and KineticJS.
My node.js server does not actually serve the client's page; my client-side files are currently hosted on a local Apache server on my computer.
The node server is up and running on Heroku right now, and I can't seem to get socket.io loaded on the client side. I keep getting the "io is not defined" error.
This is how I import the script:
<script src="http://xxx-xxx-xxxx.herokuapp.com:5000/socket.io/socket.io.js"></script>
I have followed the instructions shown here: https://devcenter.heroku.com/articles/nodejs
And my package.json file looks like this:
{
  "name": "Grid-Frontier",
  "version": "0.0.1",
  "dependencies": {
    "socket.io": "0.9.x"
  },
  "engines": {
    "node": "0.6.x"
  }
}
On localhost everything is fine and I can just do the following:
// Importing on client side
<script src="http://localhost:8080/socket.io/socket.io.js"></script>
// Server-side
server.listen(8080);
socket = io.listen(server);
Because Heroku only allows you to communicate over port 80, you cannot use other ports, so the address should be http://xxx-xxx-xxxx.herokuapp.com/socket.io/socket.io.js, without port 5000. There is actually nothing on port 5000; it is internal to the machine.
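In other words: bind whatever internal port Heroku assigns through the environment, and let clients always hit the default HTTP port. A minimal sketch of the server side under that assumption, reusing the socket.io 0.9 API from the question:
// Heroku assigns the dyno's internal port via $PORT; externally the
// app is always reached on port 80, so the client-side script tag
// needs no port at all.
var port = process.env.PORT || 8080;
server.listen(port);
socket = io.listen(server);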

Sharing one port among multiple node.js HTTP processes

I have a root server running with several node.js projects on it. They are supposed to run separately in their own processes and directories. Consider this file structure:
/home
+-- /node
+-- /someProject | www.some-project.com
| +-- index.js
| +-- anotherFile.img
| +-- ...
+-- /anotherProject | www.another-project.com
| +-- /stuff
| +-- index.js
| +-- ...
+-- /myWebsite | www.my-website.com
| +-- /static
| +-- index.js
| +-- ...
+-- ... | ...
Each index.js should be started as an individual process with its cwd set to its parent folder (someProject, anotherProject, etc.).
Think of vHosts: each project starts a webserver which listens on its own domain. And there's the problem: only one script can start, since they all try to bind to port 80. I dug into the node.js API and looked for a possible solution: child_process.fork().
Sadly this doesn't work very well. When I try to send a server instance to the master process (to emit a request later on), or an object consisting of request and response from the master to the slave, I get errors. This is because node.js internally tries to convert these advanced objects to a JSON string and then reconvert them to their original form. This makes all the objects lose their references and functionality.
Second approach, child.js:
var http = require("http");

var server = http.createServer(function(req, res) {
  // stuff...
});

server.listen(80);
process.send(server); // Nope
First approach, master.js:
var http = require("http"),
    cp = require("child_process");

// note: cwd (not env) is the fork option that sets the working directory
var child = cp.fork("/home/node/someProject/index.js", [], { cwd: "/home/node/someProject" });

var router = http.createServer(function(req, res) {
  // domaincheck, etc...
  child.send({ request: req, response: res }); // Nope
});

router.listen(80);
So this is a dead end. But hey! Node.js offers some kind of handles, which are sendable. Here's an example from the documentation:
master.js
var server = require('net').createServer();
var child = require('child_process').fork(__dirname + '/child.js');

// Open up the server object and send the handle.
server.listen(1337, function() {
  child.send({ server: true }, server._handle);
});
child.js
process.on('message', function(m, serverHandle) {
  if (serverHandle) {
    var server = require('net').createServer();
    server.listen(serverHandle);
  }
});
Here the child listens directly on the master's server, so there is no domain check in between. So here's a dead end too.
I also thought about Cluster, but it uses the same technology as the handle and therefore has the same limitations.
So... are there any good ideas?
What I currently do is rather hackish. I've made a package called distroy. It binds to port 80 and internally proxies all requests to Unix domain socket paths like /tmp/distroy/http/www.example.com, on which the separate apps listen. This also (kinda) works for HTTPS (see my question on SNI).
The remaining problem is that the original IP address is lost, as it's now always 127.0.0.1. I think I can circumvent this by monkey-patching net.Server so that I can transmit the IP address before opening the connection.
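For what it's worth, forwarding a request to one of those domain sockets needs nothing exotic: Node's http client accepts a socketPath option in place of host and port. A minimal sketch (not the actual distroy code) following the /tmp/distroy layout described above:
var http = require('http');

http.createServer(function (req, res) {
  // Pick the target app's socket from the requested host and
  // replay the incoming request onto it.
  var proxyReq = http.request({
    socketPath: '/tmp/distroy/http/' + req.headers.host,
    path: req.url,
    method: req.method,
    headers: req.headers
  }, function (proxyRes) {
    res.writeHead(proxyRes.statusCode, proxyRes.headers);
    proxyRes.pipe(res);
  });

  req.pipe(proxyReq);
}).listen(80);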
If you are interested in a node.js solution, check out bouncy, a websocket- and https-capable HTTP router proxy/load balancer in node.js.
Define your routes.json like:
{
  "beep.example.com": 8000,
  "boop.example.com": 8001
}
and then run bouncy using
bouncy routes.json 80
Personally, I'd just have them all listen on dedicated ports, or preferably sockets, and then stick everything behind either a dedicated router script or nginx. It's the simplest approach, IMO.
For Connect middleware there is the vhost extension. Maybe you could copy some of its concepts; a rough sketch of that idea follows.
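A sketch with Express and the vhost package (the two example apps are made up; the APIs are real):
var express = require('express');
var vhost = require('vhost');

var someProject = express();
someProject.get('/', function (req, res) { res.send('some-project'); });

var anotherProject = express();
anotherProject.get('/', function (req, res) { res.send('another-project'); });

// One master app owns port 80 and dispatches by Host header,
// so each project keeps its own routes.
var router = express();
router.use(vhost('www.some-project.com', someProject));
router.use(vhost('www.another-project.com', anotherProject));
router.listen(80);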
