How can I use Caddy to proxy for another site? - caddy

I have a service on foo.bar.com and I need to move it to foo.example.com. To give stragglers a chance to catch up, I was hoping to put Caddy on the server handling foo.bar.com and have it proxy to foo.example.com. I can't get even a basic example working, like:
Caddyfile
:2015
reverse_proxy https://example.com

This is a correct example. You did not provide any debug output or any information about the error you are getting.
By default, Caddy serves HTTPS unless the site address is an IP address or you explicitly declare it an HTTP endpoint.
So it should work for you with curl https://localhost:2016/
Enable debugging and show us any errors.
To increase verbosity, put this in your Caddyfile:
:2015 {
    log {
        level DEBUG
        output stdout
    }
    reverse_proxy https://example.com
}
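When the upstream is a different public site served over HTTPS (as with foo.example.com in the question), the upstream usually expects its own hostname in the Host header. A minimal sketch of that, assuming foo.example.com is the new address, would be:
:2015 {
    reverse_proxy https://foo.example.com {
        header_up Host {upstream_hostport}
    }
}
This uses Caddy v2's header_up subdirective with the {upstream_hostport} placeholder so the proxied request carries the upstream's hostname rather than the original client's.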

Related

How to execute OpenTest thru Apache reverse proxy along with other applications

First some context
We have an Ubuntu Server 18.04 LTS server running on Azure
Our company security policies only allow ports 80 and 443 to be accessed through HTTP/HTTPS.
Any applications, such as Jenkins or Node.js ones, running on other ports should go through a reverse proxy in Apache.
The same server already has Jenkins running on port 8080. Jenkins itself can be configured with what they call a "--path" parameter, which makes it accessible through the URL http://localhost:8080/jenkins, so the reverse proxy is straightforward to configure: anything going to "/jenkins" can just be passed to http://localhost:8080/jenkins. The current Apache config (which is working for Jenkins) is as follows:
# Jenkins
ProxyPass /jenkins http://localhost:8080/jenkins nocanon
ProxyPassReverse /jenkins http://localhost:8080/jenkins
ProxyRequests Off
AllowEncodedSlashes NoDecode
<Proxy http://localhost:8080/jenkins*>
    Order deny,allow
    Allow from all
</Proxy>
The problem we are facing
To run OpenTest, we have to install it as an npm package, which can then be executed by running the opentest server command. It will start the application on port 3000 by default (http://localhost:3000), but it is possible to change the preferred port through configuration: https://getopentest.org/reference/configuration.html#server-configuration
The problem is that we need to re-route anything going to, let's say, "/opentest" to the OpenTest server app, but that doesn't work for all static assets, API URLs, etc., since the app is just running on port 3000 (http://localhost:3000) and doesn't seem to have something like Jenkins' "--path", so we can't just mimic the same reverse proxy we have for Jenkins. The idea would be to have OpenTest under the path "/opentest", something like http://localhost:3000/opentest.
We were not able to find any OpenTest configuration that allows something like http://localhost:3000/opentest, and we are new to pm2, so we can't tell if it is possible to use pm2 to run the OpenTest application under a "path" or some sort of "local known application domain" which we could use to re-route the reverse proxy to.
Any thoughts, ideas, workarounds or solutions are welcome; we might be taking the wrong approach here so we would also appreciate any insights in that regard.
Thanks!
Starting with version 1.2.0, you can use the urlPrefix configuration parameter in server.yaml to accomplish this:
#...
urlPrefix: /opentest
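With that prefix in place, the Apache side could mirror the existing Jenkins block. A sketch, assuming OpenTest stays on its default port 3000 and uses the urlPrefix above (adjust to your setup):
# OpenTest
ProxyPass /opentest http://localhost:3000/opentest
ProxyPassReverse /opentest http://localhost:3000/opentest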

How to set up Caddy to get HTTPS on my server

I've been having issues getting HTTPS working on my server. Let's say I have a domain www.mydomain.com.
If I run this command it works just fine and I can get HTTPS:
caddy -host www.domain.com
But I have some proxies that I use for Django, so I have a Caddyfile. This is how the Caddyfile is set up:
# Django
www.mydomain.com {
    root /root/my_projects/my_project
    proxy / 127.0.0.1:8000 {
        transparent
        except /static
    }
    log /var/log/caddy.log
}
So if I run this command:
caddy -host CaddyFile
it's not giving me HTTPS. Instead, this is what the output is:
Activating privacy features... done.
Serving HTTP on port 2015
http://.:2015/caddyfile
So how should I configure the file or what command should I use to get HTTPS on my server with the proxy and the root folder that I set in the CaddyFile?
Thanks.
I'm guessing you're using Caddy v1.
The Caddy docs say:
-host
The default hostname or IP address to listen on. Sites defined in the Caddyfile without a hostname will assume this one. This is usually used with -port to quickly get simple sites up and running without a Caddyfile.
With the -host option, your Caddyfile may have been ignored.
If your Caddyfile is in the same directory as the caddy binary, try removing all arguments and just run caddy. It will automatically pick up the Caddyfile.
Otherwise, try caddy -conf <path/to/your/Caddyfile>
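For example, assuming the Caddyfile from the question lives alongside the project at /root/my_projects/my_project/Caddyfile (a hypothetical path based on the root directive above), the invocation would be:
caddy -conf /root/my_projects/my_project/Caddyfile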

Use Reverse Proxy from HTTPS client to HTTP server running locally on my machine

I have a published site that uses HTTPS. The site needs to communicate with an HTTP Node Express API. The API runs on my local machine. Everything worked fine until I switched the client application to use HTTPS; now I receive mixed content warnings. I have been reading about reverse proxies and wonder if this could be the solution to my problem. Is it possible to proxy a request to my localhost? Or will localhost point to the server the proxy is on?
I have been looking at using nginx as the reverse proxy server, but I have zero experience with proxies and am not positive how to go about it.
I am mainly wondering if it is possible or not before I dig any deeper.
Yes, this is a pretty standard use case for nginx (or any other reverse proxy). You would configure the location prefixes, etc. that need to go to your backend application and proxy to it (via the proxy_pass directive). Any static content can be served directly from nginx. All of this then sits behind nginx.
Assuming that your application never issues absolute URLs that use "http://", this should resolve your mixed content warnings.
You will probably want to read some tutorials but the basics of your configuration would be:
server {
    listen 443 ssl; # you can also add http2
    server_name hostnames that you listen for;
    ssl_certificate_key /path/to/cert.key;
    ssl_certificate /path/to/cert.pem;

    root /var/www/sites/foo.com;

    location /path/handled/by/application {
        proxy_pass http://localhost:8000; # or whatever port is
    }
}
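If the backend needs to know the original host or that the client connected over HTTPS, a common addition inside that location block is to forward a few headers. A sketch (optional for the basic setup; the path and port are placeholders carried over from the example above):
location /path/handled/by/application {
    proxy_pass http://localhost:8000;
    proxy_set_header Host $host;                                   # preserve the original Host header
    proxy_set_header X-Forwarded-Proto $scheme;                    # lets the backend know the client used https
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;   # original client IP chain
}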

Nginx and multiple domains

I bought some domains at godaddy.com (e.g. mydomain.com) for my droplet at digitalocean.com (e.g. 199.216.110.210). I run a Node.js application on port 80 on the droplet. From godaddy.com, I forward with masking mydomain.com to 199.216.110.210, and I can see my app.
Now I want to run several Node applications on different ports on 199.216.110.210, using nginx as a reverse proxy. I followed the instructions here (www.digitalocean.com/community/articles/how-to-host-multiple-node-js-applications-on-a-single-vps-with-nginx-forever-and-crontab).
My nginx .conf file is
server {
    listen 80;
    server_name mydomain.com;

    location / {
        proxy_pass http://localhost:3000;
        # same as in the link above
    }
}
(and I am sure it is read: when nginx starts, if I put an error there, nginx reports it).
I start the Node.js application on port 3000.
I try mydomain.com, but nginx always shows the default welcome page.
mydomain.com:3000 does not work either;
it works only with 199.216.110.210:3000.
From godaddy.com, if I forward with masking mydomain.com to 199.216.110.210:3000, I can see my app.
But I do not like this solution. I would like domains pointing to my droplet without specifying the port, and to manage them with nginx.
How can I get a domain name to use with nginx as a reverse proxy to select my apps, with different domains mapped to different ports? I suppose that forwarding from godaddy.com is somehow limited.
On your server, go to /var/log/nginx and do a tail -F *log. Now, in another shell, restart nginx.
I suspect that your domain name is too long and nginx will complain that its server_names_hash_bucket_size is too small. If this is the case, open /etc/nginx/nginx.conf and make sure that the line
server_names_hash_bucket_size 64;
exists, has a value of 64, and is uncommented. Then do sudo service nginx reload, and check if everything works as expected.
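For reference, that directive belongs in the http block of /etc/nginx/nginx.conf, roughly like this (a sketch; the rest of the file is omitted):
http {
    server_names_hash_bucket_size 64;
    # ... other http-level settings ...
}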
I am going to detail, step by step, how I am able to do it in my AWS EC2 instance:
I set up a DNS record to my instance, so I can point mydomain.com to 192.168.123.123 (my specific IP).
Inside my instance I have forever running my Node.js app on port 3000 (I test that it works by issuing curl localhost:3000 from the command line).
I then download this .sh file in order to properly instantiate nginx: curl -o nginxStarter.sh https://gist.githubusercontent.com/renatoargh/dda1fbc854f7957ec7b3/raw/c0bc1a1ec76e50cdb4336182c53a0b222edb6c0e/start.sh
I configure nginx with this configuration file, putting it in /etc/nginx/nginx.conf.
I start nginx with this command: sudo sh nginxStarter.sh start
PS: For multiple apps, just replicate the lines that route the requests to specific ports, very easy! If not needed, you can eliminate the lines regarding SSL. A rough example is sketched below.
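A minimal sketch of what "replicating the lines" could look like, assuming a second app on port 3001 and a second domain otherdomain.com (both hypothetical names; SSL omitted):
server {
    listen 80;
    server_name mydomain.com;
    location / {
        proxy_pass http://localhost:3000;
    }
}

server {
    listen 80;
    server_name otherdomain.com;
    location / {
        proxy_pass http://localhost:3001;
    }
}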

WebServer on EC2 returns 503/404/nothing

I have a Linux EC2 instance. Apache is installed and up, so when I'm ssh'ed into my instance and do
curl localhost
I see a webpage served by my Apache. But when I try to access this page by URL (like http://ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com), I get back only a 503 error page on one Internet connection and a 404 error page on another. access_log and error_log show no activity when I try to access the server by URL. I'm stuck. Please give me some tips on how to solve this issue.
I guess the missing local logs hint that the HTTP error messages are returned by amazonaws.com itself, not by your Apache server. Did you set up the security rule for TCP port 80? The SSH port is open by default, but I am not sure about port 80.
I fixed this by turning iptables off, so the firewall was the problem. Thank you guys for the help.
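For anyone hitting the same issue, a less drastic check than disabling the firewall entirely might be to inspect the rules and allow inbound HTTP; a sketch (rule ordering and persistence depend on your distribution):
# list current rules
sudo iptables -L -n
# allow inbound HTTP
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT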
