How to do HTTPS in Varnish?

I need to accept HTTPS from clients, and my backend is also HTTPS.
How can Varnish listen for HTTPS and forward requests to the backend over HTTPS?
VARNISH_LISTEN_PORT=443
# how to add SSL certs?

Varnish, at least the open source version, does not support HTTPS. Varnish Software released Hitch a while ago, which can be used to terminate HTTPS in front of a Varnish caching proxy. Many setups that I have seen also use nginx for SSL termination with Varnish as the backend.
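For example, the nginx-in-front-of-Varnish setup mentioned above could look roughly like this; the hostname, certificate paths, and the Varnish port (6081) are assumptions, so adjust them to your environment:
server {
    listen 443 ssl;
    server_name example.com;                          # assumption: your public hostname

    # assumption: paths to your certificate and private key
    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # hand the decrypted traffic to Varnish, which listens on plain HTTP
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}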
I just found out that the commercial product Varnish Plus in fact supports TLS/SSL.

Hitch is open source, does in fact support SSL termination, and can be used as a proxy to turn an HTTP server into an HTTPS server.
The following guide and example show how an OpenStack deployment running on HTTP was converted to HTTPS for all communication.
Create a VM (it will be used as the proxy node).
Download Hitch on the proxy node:
git clone https://github.com/varnish/hitch.git
official link: https://github.com/varnish/hitch
Install Hitch as shown below.
To install Hitch, you can follow the docs at the official link; the commands are included below anyway.
$ ./bootstrap
$ ./configure
$ make
$ sudo make install
After successful installation of Hitch, prepare a certificate for the proxy node (a .pem file).
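For example, a self-signed certificate could be prepared like this (the file names and CN are illustrative; use a CA-signed certificate in production). Hitch expects the private key and the certificate concatenated into a single .pem file:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=devstackipv6" \
    -keyout devstack.key -out devstack.crt
cat devstack.key devstack.crt > devstack.pem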
Start the proxy using Hitch as shown below.
[root@testing_tools hitch]# hitch --tls -f "[*]:443" -b "[2001::29]:80" devstack.pem -u hitch -g hitch
[root@testing_tools hitch]# hitch --tls -f "[*]:9696" -b "[2001::29]:9696" devstack.pem -u hitch -g hitch
Make the following changes (in the keystone database) at the OpenStack end for the endpoints, i.e. configure all endpoints for HTTPS.
mysql> select * from endpoint;
+----------------------------------+--------------------+-----------+----------------------------------+-------------------------------------------------+-------+---------+-----------+
| id | legacy_endpoint_id | interface | service_id | url | extra | enabled | region_id |
+----------------------------------+--------------------+-----------+----------------------------------+-------------------------------------------------+-------+---------+-----------+
| 01c5333a2edf4505a14987770a762a8a | NULL | public | f883c99bc5514dd6b8d3b417fb8a121c | https://devstackipv6/volume/v1/$(project_id)s | {} | 1 | RegionOne |
| 1766694b9c5b4814a421a074d44b2d32 | NULL | admin | 68a37fb109aa4f878f893fc87c262f94 | https://devstackipv6/heat-api-cfn/v1 | {} | 1 | RegionOne |
| 29e5c59cd68443d6beb96272b2d57143 | NULL | internal | eff63e56a0264b08a4cc9dc5de4ac8c4 | https://devstackipv6/heat-api/v1/$(project_id)s | {} | 1 | RegionOne |
+----------------------------------+--------------------+-----------+----------------------------------+-------------------------------------------------+-------+---------+-----------+
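Rather than editing each row by hand, the endpoint URLs can also be switched in one statement; a rough sketch against the endpoint table shown above (back up the keystone database first):
UPDATE endpoint
SET url = REPLACE(url, 'http://', 'https://')
WHERE url LIKE 'http://%';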

Related

Nginx proxy in Docker to Azure AKS with HTTPS

I have a web application running in Azure AKS. It has its own public DNS name and HTTPS configured.
But it has no authentication.
I want to authenticate users of this application using the built-in authentication feature of Azure App Service and then proxy requests to my app in AKS.
I have installed an nginx proxy in Docker and set up authentication. Authentication works, but in the web browser I get a 404 "Not found" and requests never reach the application in AKS. Why?
The application itself is running fine and can be reached on its DNS name without the proxy.
+----------------+                  +------------+
|  Azure         |                  |  Azure AD  |  Authentication
|  Kubernetes    |                  +------------+
|  app           |
|  (my.aks.io)   |                  +-------------+
|                |                  | App Service |
|                |       404!       | Docker      |      200 ok
|                |<-------https-----| Nginx-proxy |<-------http-----
+----------------+    my.aks.io     +-------------+
My Nginx proxy config
server {
    listen 80;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://my.aks.io;
    }
}

Confused about hosting a dynamic app with custom api with express

I'm a little bit confused. I know how to serve a user a dynamic page using EJS with Express. Usually, when the user accesses the domain, the API on that domain responds with a page that lives on the same domain, so we just have to host the API on a server.
Now I'm learning React on the front end and using Express with MongoDB on the back end. How do I connect the front end and the back end? How does the back end serve the React app to the user? I searched on YouTube, but no one talks about the back-end part; they only talk about hosting the front-end part.
I don't know if I'm asking this the right way or not, but please help me.
The idea is to have one web server that serves static files (the React app) and one API server (Express) that exposes the API.
In order to consolidate both of these servers under one domain, you will need to use a reverse proxy. I would suggest using Nginx (inside Docker).
                    +----------+
                    |          |
                    |  Client  |
                    |          |
                    +----+-----+
                         |
                         | myDomain.com
                         |
                     [INTERNET]
                         |
                         |
                         v :80
                +--------+-------+
      /*        |                |      /api
   +------------+  Reverse Proxy +------------+
   |            |                |            |
   |            +----------------+            |
   | :8080                              :3000 |
   |                                          |
+--+------------+                 +-----------+---+
|               |                 |               |
|     React     |                 |    Express    |
|  (webserver)  |                 |  (API server) |
|               |                 |               |
+---------------+                 +---------------+
Any request to myDomain.com will be checked: if the path contains /api, the request will be passed to the back end; any other request will be passed to the web server that hosts the SPA's static files.
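A minimal nginx sketch of that routing, with the ports taken from the diagram above (the server name and upstream addresses are assumptions):
server {
    listen 80;
    server_name myDomain.com;

    # anything under /api goes to the Express API server
    location /api {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }

    # everything else is served by the React web server
    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
    }
}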

Node global tunnel and loggly and linkerd

I'm using linkerd and have to use global tunnel to proxy everything via localhost:4140. The problem is that this seems to cause Loggly to stop working: as soon as the global tunnel is active, Loggly doesn't receive any messages. How can I fix this?
globalTunnel.initialize({
    host: 'localhost',
    port: 4140
});
I have seen that I can pass a proxy variable in the config for the Loggly instance.
var logglyStream = new Bunyan2Loggly(logglyConfig);
Thanks for the help.
globalTunnel overrides all http requests, so assuming that the Loggly library uses the standard http library, further proxy configuration in the Loggly library is not necessary.
I think there may be two issues here:
Linkerd Routing Rules
linkerd needs routing rules to proxy to the outside internet. You'll need a dtab that recognizes host:port requests and routes them accordingly:
dtab: |
  /ip-hostport => /$/inet;
  /svc => /$/io.buoyant.hostportPfx/ip-hostport;
Confirm routing works with this command:
$ http_proxy=localhost:4140 curl -s -o /dev/null -w "%{http_code}" www.google.com:80
200
Loggly header processing
It appears that Loggly fails all requests that contain headers with forward slashes in them:
# working request:
$ curl -H "foo: bar" -s -o /dev/null -w "%{http_code}" logs-01.loggly.com
403
# failed request:
$ curl -H "foo: /bar" -s -o /dev/null -w "%{http_code}" logs-01.loggly.com
400
Linkerd sets several headers on outbound requests for tracing, service discovery, and context information. Some of those headers include strings with forward slashes.
To get around this, we have two options:
Modify linkerd to clear headers on outbound requests. I've filed github.com/linkerd/linkerd/issues/1218 to track this work.
Set up a proxy server to handle outbound requests for Loggly, as documented in https://github.com/loggly/loggly-jslogger#setup-proxy-for-ad-blockers. Then, assuming that service is set up at internal-nginx-proxy, you can use this routing rule:
dtab: |
  /svc/logs-01.loggly.com => /$/inet/internal-nginx-proxy/80;
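If you go that route, the internal proxy itself can be fairly simple; a rough sketch, assuming it only needs to forward requests on to logs-01.loggly.com (the linked Loggly docs describe the exact requirements):
server {
    listen 80;
    server_name internal-nginx-proxy;    # assumption: the internal hostname used in the dtab above

    location / {
        # forward log traffic on to Loggly
        proxy_pass http://logs-01.loggly.com;
        proxy_set_header Host logs-01.loggly.com;
    }
}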
I'm not familiar with linkerd, but the Loggly client sends logs to logs-01.loggly.com on either port 80 or, for secure transport, 443. Is that traffic proxied through your tunnel?

fail2ban ipv6 support doesn't work

I've installed fail2ban on my web host and it is monitoring WordPress login attempts through the access_log file. Once I configured fail2ban to filter wp-login attempts with this regexp:
failregex = ^<HOST> .* "POST /wp-login.php
... the attack switched to an IPv6 host. I read the fail2ban docs and noticed that there is no IPv6 support in fail2ban (yet). I then applied this workaround:
fail2ban IPv6 support (in French)
As you can see in that tutorial, I created two new actions called iptables46* and defined them in jail.local so that they are executed when fail2ban matches the new regexp for IPv4 and IPv6 (changed in the patched Python scripts).
I've checked the fail2ban logs and it seems to be detecting the IPv6 requests, but a warning is displayed before each filter match:
2016-10-26 23:00:55,539 fail2ban.filter [24963]: WARNING Unable to find a corresponding IP address for 127.0.0.1/8: [Errno -2] Name or service not known
2016-10-26 23:00:55,540 fail2ban.filter [24963]: INFO [wp-auth] Found xxxx:xxxx:xxx::xxxx:xxx
(xxxx:xxxx:xxx::xxxx:xxx is the attacker's IPv6 address)
I've checked the fail2ban status with fail2ban-regex access_log /etc/fail2ban/filter.d/wp-auth.conf and there are a lot of results (the regexp and the filter are fine), but the host is not blocked by iptables. I've checked ip6tables with:
ip6tables -S | grep f2b
and the results are:
-A f2b-default -s 2002:5bc8:c41::5bc8:c41/128 -j REJECT --reject-with icmp6-port-unreachable
Also, if I check the status of the fail2ban jail with fail2ban-client status wp-auth:
Status for the jail: wp-auth
|- Filter
| |- Currently failed: 1
| |- Total failed: 93
| `- File list: /opt/wordpress/logs/access_log
`- Actions
|- Currently banned: 1
|- Total banned: 2
`- Banned IP list: xxxx:xxxx:xxx::xxxx:xxx
It seems that the IPv6 address is not actually blocked, because the host is still sending requests.
I don't know why the fail2ban log displays that WARNING (related to 127.0.0.1/8: [Errno -2]) if the created ip6tables rule is fine, and I don't know why the host is not being blocked.
Any help will be appreciated.
The good news is that fail2ban released support for IPv6 recently.
For Debian IPv6 servers I would recommend following this tutorial.
For CentOS IPv6 servers, I would recommend downloading it here and then executing these commands, replacing the version number accordingly:
tar xvfj fail2ban-0.11.0.tar.bz2
cd fail2ban-0.11.0
python setup.py install
Make sure a jail for sshd is enabled in /etc/fail2ban/jail.local, for example:
[sshd]
enabled=1
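For the WordPress case from the question, the corresponding jail might look like this (the filter name and log path are taken from the question; maxretry and bantime are just illustrative values):
[wp-auth]
enabled  = true
filter   = wp-auth
logpath  = /opt/wordpress/logs/access_log
maxretry = 5
bantime  = 3600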

BigCouch error: "couldn't connect to host" when joining a node

I am trying to set up 2 nodes with BigCouch. I set an FQDN in /etc/hostname on both machines (example: may2.test.com). I also edited /opt/bigcouch/etc/vm.args:
-name bigcouch@may2.test.com
-setcookie monster (the default)
Then I try:
curl localhost:5984 ->
{"couchdb":"Welcome","version":"1.1.1","bigcouch":"0.4.2"}
curl localhost:5986 -> {"couchdb":"Welcome","version":"1.1.1"}
curl may2.test.com:5984 ->
{"couchdb":"Welcome","version":"1.1.1","bigcouch":"0.4.2"}
curl may2.test.com:5986 -> curl: (7) couldn't connect to host
Can anyone give me some ideas to fix this and make BigCouch work properly? Thanks a lot.
I also fixed this error:
Remove CouchDB; BigCouch is enough.
Because I installed BigCouch on localhost, my query must look like:
curl -X PUT http://localhost:5986/nodes/bigcouch@slave3.test.com -d '{}'
In your configuration file /opt/bigcouch/etc/default.ini, in the [httpd] section, set bind_address = 0.0.0.0 and then run sudo sv restart bigcouch.
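For reference, the relevant part of /opt/bigcouch/etc/default.ini would then contain (only the changed setting shown):
[httpd]
bind_address = 0.0.0.0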
