Having issues running a Meteor app with SSL on AWS OpsWorks - node.js

My base case is that my Meteor App runs perfectly on Opsworks.
I do a Meteor build, tweak the files and all is good (without HTTPS/SSL). I am not using Meteor Up; I just upload my tweaked build file and deploy on OpsWorks.
Also, I am using the out-of-the-box OpsWorks HAProxy load balancer.
I then install the SSL certificates for my app and set Meteor to listen on PORT=443.
In the browser, I see:
503 Service Unavailable
No server is available to handle this request.
In the log files I see:
Mar 8 03:22:51 nodejs-app1 monit[2216]: 'node_web_app_buzzy' start: /bin/bash
Mar 8 03:23:51 nodejs-app1 monit[2216]: 'node_web_app_buzzy' failed, cannot open a connection to INET[127.0.0.1:443/] via TCPSSL
Any ideas welcome

Your HAProxy configuration is expecting Meteor/Node to respond with SSL.
It should instead terminate SSL and talk to Node/Meteor over plain HTTP. This is because Meteor doesn't do SSL itself; it expects a server in front of it to handle that.
Solution:
Update the frontend https-in section to terminate SSL and route to the plain-HTTP backend:
defaults
    # ... add this line to enable the `X-Forwarded-For` header
    option forwardfor
    # ...

# .... update this section ...
frontend https-in
    # header manipulation and the host ACL below need HTTP mode, not TCP
    mode http
    # this bit makes HAProxy terminate TLS itself rather than just forward the encrypted connection
    bind :443 ssl crt /path/to/your/certificate
    reqadd X-Forwarded-Proto:\ https
    # now direct it to your plain HTTP application
    acl nodejs_application_buzzy_domain_buzzy hdr_end(host) -i buzzy
    use_backend nodejs_app_servers if nodejs_application_buzzy_domain_buzzy
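The use_backend line above points at a plain-HTTP backend that isn't shown; a minimal sketch of what it might look like (the server address and port are assumptions; use whatever PORT your Meteor app actually listens on):
backend nodejs_app_servers
    mode http
    balance roundrobin
    # Meteor/Node listening on plain HTTP behind HAProxy
    server node1 127.0.0.1:8080 check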

Related

Running a local dev IPFS gateway that supports HTTPS

I'm building a distributed web app designed to be hosted on IPFS. I want to do development in a web browser, using my local gateway to serve my files, but I use JavaScript APIs that are not available unless the page is served over HTTPS.
I tried starting a reverse proxy with a self-signed SSL certificate pointing at my local IPFS HTTP gateway, but when I visit links through the reverse proxy, say https://___hashhere___.ipfs.localhost:8081/, I'm redirected to http://___hashhere___.ipfs.localhost:8080/:
GATEWAY_PORT=$(ipfs config Addresses.Gateway | cut -d'/' -f 5)
HTTPS_PORT=$((GATEWAY_PORT+1))
echo "https proxy to your ipfs gateway now at: https://localhost:$HTTPS_PORT"
exec npx local-ssl-proxy --source $HTTPS_PORT --target $GATEWAY_PORT
How can I run a local https+ipfs gateway in a command or two? I guess I need a reverse proxy that rewrites URLs in responses?
If you use a Chromium-based browser, then http://___hashhere___.ipfs.localhost:8080/ will have window.isSecureContext set to true and you will have access to all Web APIs. No TLS setup is needed for dev on localhost with Chromium (Firefox has a bug here).
If you are running IPFS Companion, you may want to disable it while you develop your app, to ensure requests for IPFS resources are not redirected to the gateway set in the browser extension's Preferences.
In production, you deploy go-ipfs behind a reverse proxy and that proxy terminates TLS. You can control the protocol scheme and host used in some of the redirects via the X-Forwarded-Proto and X-Forwarded-Host headers, as noted in go-ipfs/docs/config.md.
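As a rough illustration of that last point, a reverse-proxy location block might set those headers like this (an nginx sketch, not from the go-ipfs docs; the gateway port 8080 is an assumption):
location / {
    proxy_pass http://127.0.0.1:8080;          # local go-ipfs gateway
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;  # so go-ipfs builds https:// redirects
    proxy_set_header X-Forwarded-Host $host;
}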

Run nodejs app through HTTPS

I have a node app that is set up over SSH by running node osjs run --hostname=dc-619670cb94e6.vtxfactory.org --port=4100.
It starts at http://dc-619670cb94e6.vtxfactory.org:4100/ without problems, but I want to serve it over HTTPS at https://dc-619670cb94e6.vtxfactory.org:4100/, where I get an ERR_CONNECTION_CLOSED error.
If I use the port I'm unable to reach it over HTTPS, but https://dc-619670cb94e6.vtxfactory.org/ is accessible.
How can I serve port 4100 through HTTPS?
Thanks.
This is an implementation detail of OS.js. Their docs recommend setting up a reverse proxy in front of the server. Doing this will give you more control over SSL and ports, like you want; a rough sketch is below.
https://manual.os-js.org/installation/
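A minimal sketch of such a proxy, assuming nginx (the certificate paths are placeholders; the upgrade headers cover OS.js's WebSocket connection):
server {
    listen 443 ssl;
    server_name dc-619670cb94e6.vtxfactory.org;

    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:4100;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}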

AWS SSL on EC2 instance without Load Balancer - NodeJS

Is it possible to have an EC2 instance running, listening on port 443, without a load balancer? I'm trying right now in my Node.JS app but it doesn't work when I call the page using https://. However, if I set it to port 80 everything works fine with http://.
I had it working earlier with a load balancer and route53, but I don't want to pay $18/mo for an ELB anymore, especially when I only have one server running.
Thanks for the help
You're right, if it's only the one instance and you feel like you don't need to be prepared for large increases in traffic, you shouldn't have to pay for an ELB.
From a high-level standpoint you'll have to go through the following steps:
Install an nginx server to serve your NodeJS application.
Install your SSL certificates on the nginx server (a minimal nginx sketch follows this list).
-- Either do this manually, ssh'ing into the server and installing the certs as described here.
-- OR include the necessary files in your application (I believe this only works for Elastic Beanstalk?), which will overwrite the nginx configuration files automatically as described here.
Make sure nginx is listening on port 443 (this should already be covered by the previous step).
Open the ports on the EC2 instance's security group where you want traffic to enter the server (port 80 / port 443).
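Here is the sketch referred to above. The domain, certificate paths, and the Node app's port (3000) are placeholders to substitute with your own values:
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;     # your Node.js app
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}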
Is it possible? Yes of course. It sounds like you had an SSL certificate installed on the ELB and now you've deleted the ELB. You will have to install an SSL certificate on the EC2 server now. You can't use AWS ACM SSL certificates without an ELB or CloudFront distribution. If you don't want to pay for either of those services you will have to obtain an SSL certificate elsewhere.
For our projects (much like the other poster described) we used this setup:
nginx as load balancer and proxy for all calls on port 80 (no direct call to node.js server on port 3000 which is closed to the public)
pm2 as process manager for Node.js (and for deployment)
keymetrics.io for monitoring
Nodejs v6.9.3 boron/lts (through NVM)
Mongodb 3.2 with WiredTiger Engine (Compose.io)
Amazon EC2 instances for hosting (Amazon Linux not Ubuntu)
This setup works very well for us, and with it we're able to set up SSL without using the Amazon load balancers.
Once you have your certificate files, it's not so hard. You can even do this without Nginx.
Let's first create an Express web server (this assumes express, https, path and fs's readFileSync have been required at the top of the file):
const app = express();
For the sake of example, you could put a static website inside a folder.
const wwwFolder = express.static(path.join(__dirname, '/../www'));
app.use(wwwFolder);
Next, you basically need to read your certificate files:
const key = readFileSync(__dirname + '/ssl/privkey.pem', 'utf8');
const cert = readFileSync(__dirname + '/ssl/cert.pem', 'utf8');
const ca = readFileSync(__dirname + '/ssl/chain.pem', 'utf8');
const serverOptions: https.ServerOptions = { key, cert, ca };
And finally, you create a https server using those certificates.
const server = https.createServer(serverOptions, app);
server.listen(httpsPort, () => log.debug("createWebServers", `server is listening on port ${httpsPort}`));
For security reasons it's usually not possible to listen directly on port 443, because ports below 1024 require root privileges. Instead, listen on a port like 4201 and use port forwarding.
If you use systemd to start/stop your service, then this port forwarding can be defined in your service configuration file. An easy solution:
[Unit]
Description=my.service
After=network.target
[Service]
Type=simple
TimeoutSec=0
User=ubuntu
PermissionsStartOnly=true
ExecStartPre=/sbin/iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 4201
ExecStart=/usr/local/bin/node /home/ubuntu/project/server.js
ExecStopPost=/sbin/iptables -t nat -D PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 4201
Restart=on-failure
[Install]
WantedBy=multi-user.target
There are various ways to create and refresh your certificate files, so I won't go into detail about that here. Most importantly, you don't need an Amazon certificate to accomplish it. Let's Encrypt is free, easy, and works fine.
Usually I also add an HTTP server (without HTTPS) that just redirects to HTTPS, and I use port forwarding for that as well, so I add a second port-forwarding rule in the service file. A sketch of that redirect server is below.
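A minimal sketch of such a redirect server (the port 4200 is an assumption; it would be forwarded from port 80 with an iptables rule analogous to the 443 -> 4201 one above):
const http = require("http");

const httpPort = 4200; // forwarded from port 80 via iptables
http.createServer((req, res) => {
  // permanent redirect to the HTTPS version of the same URL
  res.writeHead(301, { Location: `https://${req.headers.host}${req.url}` });
  res.end();
}).listen(httpPort);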

HAproxy and Node.js+Spdy

I'm currently using node spdy to serve files. This works beautifully.
However, I would like to use HAProxy to load balance across these node servers. But when my node/spdy server is behind HAProxy, request.isSpdy is false... so SPDY is suddenly not supported?
Here's my HAproxy configuration:
global
    maxconn 4096

defaults
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http_proxy
    mode http
    bind *:80
    redirect prefix https://awesome.com code 301

frontend https_proxy
    mode tcp
    bind *:443
    default_backend webservers

backend webservers
    balance source
    server server1 127.0.0.1:10443 maxconn 4096
    # server server2 127.0.0.1:10444 maxconn 4096
Thanks!
You can't use HAProxy's HTTP load balancing mechanism with SPDY. First, you need to use the latest development branch to enable support for NPN (and hence SPDY), and after that, you will have to configure it to run closer to simple TCP load-balancing mode -- HAProxy does not understand SPDY.
For an example HAProxy + SPDY config script, see here:
http://www.igvita.com/2012/10/31/simple-spdy-and-npn-negotiation-with-haproxy/
I ran into this same issue. Instead of using spdy, I went back to using express and made HAProxy speak the HTTP/2 protocol:
frontend http-in
    bind *:80
    mode http
    redirect scheme https code 301

frontend https-in
    mode http
    bind *:443 ssl crt /path/to/cert.pem alpn h2,http/1.1
The key here is the alpn h2,http/1.1 part; ALPN is how HAProxy negotiates HTTP/2 with the browser during the TLS handshake.
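For completeness, the matching backend can stay plain HTTP/1.1, since HAProxy speaks h2 to the browser and ordinary HTTP to the Express app. A sketch (the backend name, server address and port are assumptions, and the frontend above would also need a use_backend or default_backend line pointing at it):
backend express_servers
    mode http
    server app1 127.0.0.1:3000 check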

Node.JS, HAproxy and Socket.IO through NGINX, app sits in subdirectory

I've been trying for hours and have read what this site and the internet have to offer. I just can't quite seem to get Socket.IO to work properly here. I know nginx by default can't handle Socket.IO; however, HAProxy can. I want nginx to serve the Node apps through unix sockets, and that works great. Each app has a subdirectory location set by nginx; however, now I need Socket.IO for the last app and I'm at a loss configuring it at this point.
I have the latest socket.io, HAProxy 1.4.8 and nginx 1.2.1. Running Ubuntu.
So reiterating, I need to get socket.io working through nginx to a node app in a subdirectory, e.g. localhost/app/.
Diagram:
WEB => HAproxy => Nginx => {/app1 app1, /app2 app2, /app3 app3}
Let me know if I can offer anything else!
There is no reason to get "socket.io working through nginx". Instead, just route HAProxy directly to Socket.IO (without nginx in the middle).
I recommend you check out the following links:
https://gist.github.com/1014904
http://blog.mixu.net/2011/08/13/nginx-websockets-ssl-and-socket-io-deployment/
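A rough HAProxy sketch of that idea (not taken from the linked posts): Socket.IO traffic bypasses nginx and goes straight to the Node app, while everything else still goes to nginx. The paths, ports and backend names are assumptions; adjust the path if Socket.IO is mounted under your app's subdirectory.
frontend http_in
    mode http
    bind *:80
    acl is_socketio path_beg /socket.io/
    use_backend socketio_app if is_socketio
    default_backend nginx_servers

backend socketio_app
    mode http
    # long server timeout keeps WebSocket / long-polling connections open
    timeout server 3600s
    server app3 127.0.0.1:3000

backend nginx_servers
    mode http
    server nginx1 127.0.0.1:8000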
You could use Haproxy on port 80 to front several node.js apps running on different ports.
E.g.
URL:80/app1 -> haproxy -> node app1:8080
URL:80/app2 -> haproxy -> node app2:8081
URL:80/app3 -> haproxy -> node app3:8083
UPDATE:
The following is an example HAProxy configuration that routes requests made to http://server:80/hello to localhost:20002 and requests made to http://server:80/echo to localhost:20001
backend hello
    server hellosvr 127.0.0.1:20002

backend echo
    server echosvr 127.0.0.1:20001

frontend http_in
    option httpclose
    option forwardfor except 127.0.0.1 # stunnel already adds the header
    bind *:80
    acl rec_hello path_beg /hello/
    use_backend hello if rec_hello
    acl rec_echo path_beg /echo
    use_backend echo if rec_echo
