I have a Python 3 Flask app running in an ECS cluster. The Flask app is configured to run in SSL mode. The app can't be accessed via the ALB CNAME; it returns connection refused, as seen here -
curl -Il https://tek-app.example.com/health
curl: (7) Failed to connect to tek-app.example.com port 443: Connection refused
When the ALB is hit directly and the SSL cert mismatch is ignored, it works, as seen here -
curl -Il -k https://tek-w-appli-1234.eu-west-1.elb.amazonaws.com/health
HTTP/2 200
date: Sun, 24 Feb 2019 14:49:27 GMT
content-type: text/html; charset=utf-8
content-length: 9
server: Werkzeug/0.14.1 Python/3.7.2
I understand the main recommendation is to run it behind an Nginx or Apache proxy and to set the X-Forwarded-* headers via their configs, but I feel this is over-engineering the solution.
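(For context, that recommendation usually amounts to a small Nginx server block along these lines; the certificate paths and the upstream port here are placeholders, not from my setup:)

server {
    listen 443 ssl;
    server_name tek-app.example.com;
    # placeholder certificate paths
    ssl_certificate     /etc/nginx/certs/tek-app.crt;
    ssl_certificate_key /etc/nginx/certs/tek-app.key;

    location / {
        # Flask listening in plain HTTP on an assumed local port
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}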
I've also tried enabling the following in the app -
from flask import Flask
from werkzeug.contrib.fixers import ProxyFix  # location in Werkzeug 0.14.x
...
app = Flask(__name__)
# trust the X-Forwarded-* headers added by the ALB (one proxy hop by default)
app.wsgi_app = ProxyFix(app.wsgi_app)
...
This fix now produces the correct source IPs in the CloudWatch logs, but it doesn't allow connections via the ALB CNAME.
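(Side note: on Werkzeug 0.15 and later, werkzeug.contrib is deprecated and the fixer lives under werkzeug.middleware; a rough equivalent of the snippet above would be:)

from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix  # Werkzeug >= 0.15

app = Flask(__name__)
# trust one proxy hop for X-Forwarded-For and X-Forwarded-Proto set by the ALB
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1)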
Is there something simple that I'm missing here?
Reply to first answer
Thank you - the CNAME is pointing to the correct ALB. I ran into a similar issue two weeks back with an Apache server, and the fix was to ensure X-Forwarded-Proto was in use in the Apache vhosts.conf file. So I'm thinking this may be something similar.
I did it again: while developing locally, I edited my /etc/hosts file to add a local entry to play with. Then, when the Flask app was pushed to the cloud and tested from the same desktop, curl was resolving the local hosts entry instead of the public DNS record, hence the connection refused. With the local entry removed, all is now working.
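For anyone debugging something similar, a quick way to spot a stale /etc/hosts entry is to compare what the local resolver returns with what DNS itself says, for example:

getent hosts tek-app.example.com    # honours /etc/hosts
dig +short tek-app.example.com      # queries DNS directly, bypassing /etc/hosts

If the two disagree, the hosts file is the culprit.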
Related
I have a site deployed on AWS ROSA. The site is served over HTTPS. I am trying to create a Route53 health check for it, but the health check fails with the following reason -
Failure: Resolved IP: x.x.x.x. SSL communication failure: Received fatal alert: protocol_version
However, I am able to access the site in a browser.
As mentioned at-
https://aws.amazon.com/premiumsupport/knowledge-center/route-53-fix-unhealthy-health-checks/
the following curl command returns HTTP code 200 and a total time of less than 1 second:
curl -Ik -w "HTTPCode=%{http_code} TotalTime=%{time_total}\n" <http/https>://<domain-name/ip address>:<port>/<path> -so /dev/null
Also, when I ran the same command with an older curl version, I got this error -
routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
The error from Route53 also mentions protocol_version; I'm not sure if these two are related.
Has anyone come across this issue? Any pointers will be highly appreciated.
My expectation is that if the site URL is accessible in a browser, the Route53 health check should return a healthy state.
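One way to narrow this down is to check which TLS versions the endpoint actually accepts, since both the Route53 error and the old-curl error point at a protocol-version mismatch; for example (substitute the real hostname):

# requires a reasonably recent OpenSSL; -tls1_3 needs OpenSSL 1.1.1+
openssl s_client -connect <domain-name>:443 -servername <domain-name> -tls1_2 < /dev/null
openssl s_client -connect <domain-name>:443 -servername <domain-name> -tls1_3 < /dev/null

If only one of these handshakes succeeds, clients offering the other protocol version (such as an old curl) will be rejected with a protocol_version alert.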
I have two VMs set up to learn Puppet - one running puppetserver as my master and another running just a Puppet agent, which hosts DNS.
The VMs are running in Hyper-V (Windows 10) and are on the same virtual switch.
After setting up the internal DNS server using this Puppet module - https://github.com/ajjahn/puppet-dns - my second (DNS) VM can no longer connect to the puppetserver. I receive this error on puppet agent -t runs:
Error: Could not request certificate: No route to host - connect(2) for "puppet.myname.homelab" port 8140
On the puppetserver I have reissued its own agent cert, which changed the cert from puppet <sha-omitted> to "puppet.myname.homelab" <sha omitted> (alt names: "DNS:puppet", "DNS:puppet.myname.homelab")
Running puppet agent -t on the puppetserver to update itself works fine post cert renewal.
I am able to successfully perform an nslookup on any of the hosts using the DNS server, and they do resolve with the new myname.homelab domain.
I still have DHCP enabled on my home router, but I have it set to be the second nameserver in /etc/resolv.conf on both VMs:
search myname.homelab
nameserver 192.168.1.107
nameserver 192.168.1.1
I am running Ubuntu 16.04 and Puppet 4 on both VMs. I have allowed port 8140 in UFW on both VMs, and have even tried disabling UFW with no luck.
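(For reference, allowing and checking the port in UFW looks roughly like this:)

sudo ufw allow 8140/tcp   # permit agent connections to the puppetserver
sudo ufw status           # confirm the rule is active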
I'm still learning Puppet and am a novice at networking, so any suggestions on what else to try, or a pointer in the right direction, would be appreciated.
Thanks!
I slept on it and realized this morning that my router had reassigned my Puppetserver to a new IP, so the DNS A record for it was wrong, even though it was manually assigned in the router's DHCP.
Correcting that did the trick and now everything is working.
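If you want to confirm the same failure mode, comparing the A record with the address the server actually has makes it obvious, roughly:

dig +short puppet.myname.homelab   # what the agents resolve
ip -4 addr show                    # what the puppetserver is actually bound to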
Same issue but another cause: the firewall on the Puppet server blocked port 8140. This can be checked from the client as follows:
$ curl -k -I https://puppet:8140
curl: (7) couldn't connect to host
After disabling the firewall on the server (e.g. systemctl stop firewalld):
$ curl -k -I https://puppet:8140
HTTP/1.1 404 Not Found
Date: Thu, 24 Oct 2019 11:27:26 GMT
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html; charset=ISO-8859-1
Content-Length: 278
Server: Jetty(9.2.z-SNAPSHOT)
which is the expected output; the puppet agent run then also works as expected.
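Rather than leaving the firewall disabled, opening just the Puppet port permanently should be enough; with firewalld (the service stopped above) that would be something like:

sudo firewall-cmd --permanent --add-port=8140/tcp
sudo firewall-cmd --reload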
I am running CoreOS in EC2.
I have a Node.js API Docker image and I'm running it on a couple of ports (25001 and 25002). When I curl them, I see the proper response.
My intent is to have HAProxy in front of these (running on 25000) to load balance between the two.
So here are the steps I took:
Dockerfile for HAProxy:
FROM haproxy:1.5
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
haproxy.cfg:
global
    # daemon
    maxconn 256
    log /dev/log local0

defaults
    mode http
    log global
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:25000
    default_backend node_api

backend node_api
    mode http
    balance roundrobin
    server api1 localhost:25001
    server api2 localhost:25002
Result:
When I curl the individual services, they work:
curl -i localhost:25001/ping
HTTP/1.1 200 OK
X-Powered-By: Express
Access-Control-Allow-Origin: *
Content-Type: application/json; charset=utf-8
Content-Length: 68
ETag: W/"44-351401c7"
Date: Sat, 06 Jun 2015 17:22:09 GMT
Connection: keep-alive
{"error":0,"msg":"loc receiver is alive and ready for data capture"}
Same works for 25002
But when I curl 25000, I get a timeout error like the one below:
curl -i localhost:25000/ping
HTTP/1.0 504 Gateway Time-out
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body></html>
I am wondering what I am doing wrong here. Any help would be appreciated.
When you tell HAProxy that the back-end server is located at
server api1 localhost:25001
you're giving an address relative to the HAProxy container. But your Node servers aren't running in that container, so there's nobody listening on localhost there.
You've got a few options here.
You could use the --link option for docker run to connect HAProxy to your two back-ends.
You could use the --net=host option for docker run, and then your servers can find each other at localhost (see the sketch below).
You could provide HAProxy the address of your host as the back-end address
The first option is the most container-y, but the performance of Docker's bridged network is poor at high loads. The second option is good as long as you don't mind that you're letting everything break out of its container when it comes to the network. The third is kludgey but doesn't have the other two problems.
Docker's article on networking has more details.
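As a concrete sketch of the second option with the config above (the image name is an assumption):

docker build -t my-haproxy .
# host networking: HAProxy shares the host's network namespace, so
# localhost:25001 and localhost:25002 reach the Node containers' published
# ports, and the frontend is reachable on host port 25000
docker run -d --net=host --name haproxy my-haproxy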
My base case is that my Meteor app runs perfectly on OpsWorks.
I do a Meteor build, tweak the files, and all is good (without HTTPS/SSL). I am not using Meteor Up; I just upload my tweaked build file and deploy on OpsWorks.
Also, I am using the out-of-the-box OpsWorks HAProxy load balancer.
I then install the SSL certificates for my app and set Meteor to listen on PORT=443, per the screenshot.
In the browser, I see:
503 Service Unavailable
No server is available to handle this request.
In the log files I see:
Mar 8 03:22:51 nodejs-app1 monit[2216]: 'node_web_app_buzzy' start: /bin/bash
Mar 8 03:23:51 nodejs-app1 monit[2216]: 'node_web_app_buzzy' failed, cannot open a connection to INET[127.0.0.1:443/] via TCPSSL
Any ideas welcome
Your HAProxy configuration is expecting Meteor/Node to respond with SSL.
It should instead terminate SSL and talk to Node/Meteor in plain HTTP, because Meteor doesn't do SSL itself; it expects a server in front of it to handle that.
Solution:
Update the frontend https-in section to terminate SSL and direct the traffic to the plain-HTTP backend:
defaults
    # ... add this line to enable the `X-Forwarded-For` header
    option forwardfor
    # ...

# .... update this section ...
frontend https-in
    # HTTP mode so the header rule and host ACL below actually apply
    mode http
    # this bit makes HAProxy terminate TLS rather than just forward the connection
    bind :443 ssl crt /path/to/your/certificate
    reqadd X-Forwarded-Proto:\ https
    # now direct it to your plain HTTP application
    acl nodejs_application_buzzy_domain_buzzy hdr_end(host) -i buzzy
    use_backend nodejs_app_servers if nodejs_application_buzzy_domain_buzzy
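The nodejs_app_servers backend referenced above isn't shown in the question; as a sketch (the address and port are assumptions - use whatever plain-HTTP port Meteor listens on once it is no longer bound to 443):

backend nodejs_app_servers
    mode http
    server meteor1 127.0.0.1:3000 check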
I have a Node.js app on an Ubuntu EC2 instance with Dokku. My domain points at the server with a wildcard record, and I have a wildcard SSL certificate as well. Some time ago I added the keys to Dokku in app/tls/. Back then I had two apps online, production and staging. Whichever app was created (and deployed) last on Dokku intercepted all requests to the host - api.my.domain, api-stage.my.domain and anything else - and if I typed http:// there was no redirect. The deadline was close, so I stopped fighting it and just made production the app that intercepts everything. Today I had problems with deployment - I was seeing rejects over and over. I deleted some plugins, including the unused dokku-domains plugin, restarted Docker a few times, and ran this command:
sudo wget -O /etc/init/docker.conf https://raw.github.com/dotcloud/docker/master/contrib/init/upstart/docker.conf
After that there were no more rejects, but all requests to the host now return 502 Bad Gateway, including those with the green padlock. I remember that previously, during an app's deployment, there was some output about configuring SSL; now there is none. After deleting an app and creating it from scratch there is no nginx.conf file, and SSL doesn't work at all.
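A few things worth checking in this state (these are the usual Dokku paths and commands; they may differ between Dokku versions):

docker ps                        # is the app container actually running?
ls -la /home/dokku/<app>/        # does an nginx.conf exist for the app?
ls -la /home/dokku/<app>/tls/    # are the wildcard certificate and key still in place?
sudo service nginx status        # is nginx itself up after the Docker restarts?

If the per-app nginx.conf is missing, Dokku never regenerated its nginx config on the last deploy, which would line up with the missing SSL output during deployment.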