Have a Lando web server listen on localhost:8000

Long story, but due to previous configurations of 3rd party services, it'll be much easier if I can have Lando listen on port 8000 instead of the randomly assigned Docker port (it's different every time). I've tried overrides such as:
overrides:
  ports:
    - '8000'
Is it possible to configure Lando so that my apache server listens on port 8000?

It is not a direct answer to your question, but if you use the Lando proxy, the changing appserver port is no longer a problem: you can point your HTTP requests at the hostname you choose.
Example of .lando.yml:
name: test
proxy:
  appserver:
    - test.localhost
    - my.local-domain.test
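With a config like this, the Lando proxy routes requests by hostname on the standard HTTP/HTTPS ports, so the randomly assigned appserver port no longer matters. A quick check, assuming the app has been started and that test.localhost resolves to 127.0.0.1 on your machine (most modern resolvers handle *.localhost this way):
lando start
curl http://test.localhost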

Related

My express https server works locally but not in a docker container

I currently have two docker containers running:
ab1ae510f069 471b8de074c4 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:3001->3001/tcp hopeful_bassi
2d4797b77fbf 5985005576a6 "nginx -g 'daemon of…" 25 minutes ago Up 25 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp wizardly_cori
One is my client and the other (port 3001) is my server.
The issue I'm facing is I've just added SSL to my site, but now I can't access the server. My theory is that the server needs both port 443 and port 3001 open, but I can't have port 443 open on both containers. I can run the server via HTTP locally, so I think that also points to this conclusion.
Is there anything I can do to have both using https? The client won't talk to the server if the server uses http (for obvious reasons).
Edit:
I'm now not sure if it is to do with port 443, as I killed my client and tried to just run the server, but it still gave me connection refused:
docker run -dit --restart unless-stopped -p 3001:3001 -p 443:443 471b8de074c4
If you open port 443 for a docker container, a docker-managed forwarder process is started. This (incidentally, rather sub-optimal) tool forwards TCP requests arriving at your host's port 443 to the container.
If you want two containers to use port 443, docker would have to start this port forwarder twice on the same port. As your docker output shows, that can only happen once. By digging (deeply) into the (nearly non-existent) docker logs, you may also find the relevant error message.
The problem you've found is not docker-dependent; it is the same problem you would face in a container-less environment: you simply can't start multiple service processes listening on the same TCP port.
The solution also comes from the world before containers.
You need a central proxy service. It would listen on your port 443 and forward requests, depending on the requested virtual host, to the corresponding container.
Dig around the existing docker images; it is almost certain that such an HTTPS reverse proxy already exists. This third container will forward the requests wherever you want. Of course, you will need to configure it.
From that point on, you don't even need HTTPS inside your containers (although you can keep it if you want), which helps a lot in production, correctly certified SSL environments. Only your proxy will need the certificates. So:
                    /---> container A (tcp:80)
tcp:443 --- proxy --
                    \---> container B (tcp:80)
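As an illustration only (not part of the original answer), a minimal nginx config for such a TLS-terminating proxy could look like the sketch below. The hostnames client and api, the upstream ports, and the certificate paths are all assumptions; adapt them to your containers.
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    # Static client container, plain HTTP behind the proxy
    location / {
        proxy_pass http://client:80;
        proxy_set_header Host $host;
    }

    # API/server container on port 3001, also plain HTTP behind the proxy
    location /api/ {
        proxy_pass http://api:3001/;
        proxy_set_header Host $host;
    }
}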

How do I make a NodeJs project publicly accessible on port 3000?

I have a Node.js/Express project on an Alibaba Cloud Ubuntu server.
When I run the project and access it with curl localhost:3000 or curl 127.0.0.1:3000, it works!
When I access it with the public IP, e.g. curl 192.x.x.x:3000, it doesn't work, even though I have changed the Express code to server.listen(3000,"0.0.0.0") OR server.listen("3000","192.x.x.x").
FYI, I have Apache on this server, and accessing it over the Internet with the public IP works fine.
What can I do to solve this problem? Thanks in advance.
PS: 192.x.x.x is my public IP, and accessing the Apache project with it works.
Issue the following command to open port 3000 for TCP traffic.
sudo ufw allow 3000/tcp
You have to configure your security group and create an inbound rule to allow port 3000. Follow this guideline:
https://www.alibabacloud.com/help/doc-detail/25471.htm
Make sure you allow TCP traffic (or all traffic) from all sources to port 3000 in the inbound rule.
The fact that you can access your service locally but not publicly points to two possible causes:
The server running your application has blocked port 3000.
You have not configured your server to map port 80 (for a specific route) to port 3000.
It is quite possible that an essential part of your server configuration has not been done; the quick checks below help narrow it down.
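As a quick sanity check (a sketch assuming a standard Ubuntu box with ufw; adjust for your distribution), you can verify both points from the server itself:
# Is the app bound to 0.0.0.0:3000 (all interfaces) rather than 127.0.0.1?
sudo ss -tlnp | grep 3000
# Is the host firewall letting traffic to port 3000 through?
sudo ufw status verbose
If the app is listening on all interfaces and the host firewall is open, the remaining suspect is the cloud-side security group rule mentioned above.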

Cannot Access Google App Engine Instance Externally

I'm running a Node.js app on Google Cloud using the Cloud Shell. I've deployed using gcloud app deploy, and everything reports success. If I use gcloud app logs tail -s default I can see the logs; they say my app is listening on port 3000, which is the first debug message I see from my app.
When I invoke the endpoint without the port on the end, i.e.
https://myapp.appspot.com/myendpoint
I get an error,
"GET /myendpoint" 502
If I try with port 3000, i.e.
https://myapp.appspot.com:3000/myendpoint
The request just times out and I get no log messages from the shell.
I have port 3000 opened on the firewall, and my app.yaml is,
runtime: nodejs
env: flex
service: default

manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
Update 1:
I've also tried adding a forwarding port to my app.yaml,
network:
  forwarded_ports:
    - 3000/tcp
And allowed port 3000 in the VPC Firewall, but this seems to make no difference.
Update 2:
I can SSH into the instance and access the endpoint using a wget http://127.0.0.1:3000/myendpoint command but still no external access.
Update 3:
I've also tried port 443, listening on IP 0.0.0.0, but it seems to bind to the IPv6 address 0 and the port changes to 8443 (somehow). This is just insane...
I resolved the issue by binding my service to port 8080 and removing the "service" field from my app.yaml. The external calls are all routed to port 8080 by default.
External calls have no port specified.
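For reference, a minimal sketch of a server that matches this answer (it assumes Express is installed; App Engine flexible also sets the PORT environment variable, which defaults to 8080):
// app.js - minimal sketch, not the poster's actual code
const express = require('express');
const app = express();

app.get('/myendpoint', (req, res) => res.send('ok'));

// App Engine routes external requests to port 8080 by default.
const port = process.env.PORT || 8080;
app.listen(port, () => console.log('Listening on port ' + port));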

Access Node.js server by URL without port at the end

My server is running in a Node.js environment with Express. The server works fine, but I can't get rid of the port at the end of the URL after the domain name.
What is the right way to access my app with a URL that has no port at the end?
Client side
By default, the port is 80 when a browser makes an HTTP request.
If you type localhost, the real request is localhost:80 because no port is specified. It is the same with any domain name: if you type example.com, the real request is example.com:80.
It is the client (here the browser) which chooses on which port it makes its request to the server.
You can force your browser to send a request on any port by adding :port_number after the domain name, such as localhost:3000 or example.com:3000. Here we change the port from 80 to 3000.
Server side
The web server chooses on which port it listens for requests. It can be 80, 3000 or any other port.
If a client makes an HTTP request, your web server needs to listen on the right port. If the client requests example.com:4000, your web server must listen on port 4000 to receive and process the request.
To make a web server, you can use Node.js, Apache (used in LAMP), Nginx, etc. You can have multiple web servers running on your system and each of them can use multiple ports, but you can't make two of them listen on the same port; one of your web servers may fail to start, take precedence over the others, or crash...
The solutions are to use only one web server, or to use multiple web servers on different ports. In your situation you are using LAMP, so the Apache web server; it is probably running on port 80 in its configuration. In that case you can't run a Node web server on port 80 because the port is already in use, so you should choose another port, 3000 for example. Both Node and Apache will then run on your system, but on different ports (3000 and 80 respectively).
In this last situation, you can reach Apache directly, but not Node without specifying port 3000. To reach the Node web server on port 80 without stopping Apache, you need to go through Apache and make it forward the relevant requests to your Node server. To do that, you need to configure a proxy in Apache. Note that it would be the same if you were using Nginx or another web server.
Example
Let's take a simple Express server on port 3000:
// server.js
var express = require('express'),
    app = express(),
    http = require('http').createServer(app),
    port = 3000;

app.get('*', function (req, res, next) {
    res.sendFile(__dirname + '/views/index.html');
});

http.listen(port, function () {
    console.log('App running & listening on port ' + port);
});
If you run node server.js in the terminal, you can access it from the browser at localhost:3000, but not at localhost, because no web server is running on port 80.
If you change the port variable to 80, you can access it from the browser at localhost or localhost:80, but not at localhost:3000 anymore.
If you edit /etc/hosts (sudo nano /etc/hosts) and add the line 127.0.0.1 example.com, you can access it from the browser at example.com if the port is 80, otherwise at example.com:port_number, such as example.com:3000. This third option maps the domain name to the IP address on your local machine only.
If the chosen port, 80 for example, is already in use by another process (such as LAMP), your Node server may not work. In that case you should stop the other process first or choose another port for your Node process. In the third example, if you stop the LAMP stack first, you can access Node from the browser at example.com; if you choose another port for Node instead, you can access it at example.com:port_number, such as example.com:3000, and still reach your LAMP server on port 80.
Don't forget that 80 is the default port used by the browser if no port is specified. If you use another port, you must specify it in the browser by adding :port_number after your domain.
Now, if you own a real domain name, you will need a real DNS mapping, not just an /etc/hosts edit. Configure the DNS in your registrar account (where you bought your domain name) to make it point to your server's IP. That way, when a client makes an HTTP request to the domain name, it is directed to your server.
To have both Apache and Node.js running and available on port 80, you should set up a proxy as explained above. Indeed, your problem is probably that you already have a web server running on port 80 (Apache from LAMP) and you also want your Node.js app available on port 80 so clients don't have to specify the port at the end of the URL. To fix that, you need to set up a proxy in the Apache conf that forwards requests arriving for the specific domain name to your local Node server process on the right port.
Something like this in your Apache conf:
<VirtualHost *:80>
    ServerName example.com
    ServerAlias www.example.com

    ProxyRequests Off
    ProxyPreserveHost On
    ProxyVia Full

    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
Here, when a request arrives at your server on port 80, Apache checks whether it was made for example.com and, if so, forwards it to 127.0.0.1:3000, where your Node server takes over. The two processes (Apache and Node) run at the same time on your server, on different ports.
If you want to reach your Node.js server without any port in the URL, simply as http://localhost, then make your Express server listen on port 80, as in the sketch below.
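A minimal sketch of that (assuming Express is installed; binding to port 80 on Linux normally requires running the process with sudo or equivalent privileges):
// minimal sketch - listen directly on port 80
const app = require('express')();
app.get('/', (req, res) => res.send('Hello'));
app.listen(80, () => console.log('Listening on port 80'));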
You could either do as stated by the previous answers and run on port 80, OR
you could keep the server running on whatever port you want, set up a proxy server such as nginx, and have it forward the HTTP requests to that server (a minimal config is sketched after this answer).
This could be helpful in case you want to spin up multiple instances or even different processes.
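A minimal nginx sketch of that idea (the domain, port and header settings are assumptions, not the poster's setup): nginx listens on port 80 and forwards everything to the Node app on port 3000.
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}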
When you see a URL, without a port, it means one of two ports are being served:
https:// - port 443
http:// - port 80
Even assuming the port is not in use, you can't serve directly on port 80 without superuser privileges, because port 80 and port 443 are privileged ports.
If you want to test the server running on port 80 directly:
sudo node index.js
Where index.js is the name of your Express application.
Keeping it running
Because you tagged apache, I'm assuming you want to know how to set up a node server using Apache. If you don't need a production quality server and just want to keep it running all the time, you can do that too.
Dev/Just keep it running
You can daemonize your server. A quick look for a "node" solution exposes forever as a way to do that. Simply install and run like this:
yarn global add forever
# or
# npm i -g forever
# remember, sudo for port 80
sudo forever start index.js
Production/Apache
Use a non-privileged port for Node, and set up a proxy in Apache. Something like:
ProxyPass / http://localhost:8000
That is, if you set your Node port to 8000. Put that inside a <VirtualHost>. Examples here. You would likely still want to daemonize your Node.js application using forever or a similar daemon tool (systemd is great for Linux services); a sketch of a systemd unit follows.
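For the systemd route, a minimal unit sketch (the service name, paths and user are assumptions; it also assumes the app reads the PORT environment variable or hard-codes 8000). Save it as /etc/systemd/system/myapp.service and enable it with sudo systemctl enable --now myapp:
[Unit]
Description=Node.js Express app
After=network.target

[Service]
ExecStart=/usr/bin/node /var/www/myapp/index.js
WorkingDirectory=/var/www/myapp
Environment=PORT=8000
Restart=always
User=www-data

[Install]
WantedBy=multi-user.target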

My websites are running in docker containers; how do I implement virtual hosts?

I am running two websites in two separate docker containers on a VPS,
e.g. www.myblog.com and www.mybusiness.com.
How can I implement virtual hosts on the VPS so that the two websites can both use port 80?
I asked this question somewhere else, and was suggested to take a look at: https://github.com/hipache/hipache and https://www.tutum.co/
They look a bit convoluted. I am trying to find out whether there is a more straightforward way to achieve this. Thanks!
In addition, I forgot to mention that my VPS is an Ubuntu 14.04 box.
Take a look at the jwilder/nginx-proxy project.
Automated nginx proxy for Docker containers using docker-gen
It's the easiest way to proxy your docker containers. You don't need to edit the proxy config file every time you restart a container or start a new one; it all happens automatically for you via docker-gen, which generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
Usage
To run it:
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock \
jwilder/nginx-proxy
Then start any container you want proxied with an env var VIRTUAL_HOST=subdomain.yourdomain.com:
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
Provided your DNS is set up to forward foo.bar.com to the host running nginx-proxy, the request will be routed to the container with that VIRTUAL_HOST env var set.
Multiple Ports
If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one. If your container only exposes one port and it has a VIRTUAL_HOST env var set, that port will be selected.
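For example, a hedged sketch (the image name my-api and port 8080 are assumptions, not from the original answer):
$ docker run -e VIRTUAL_HOST=api.bar.com -e VIRTUAL_PORT=8080 my-api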
You need a reverse proxy. We use nginx and haproxy; they both work well and are easy to run from a docker container. A nice way to run the entire setup would be to use docker-compose (formerly fig) to create the two website containers with no externally visible ports, plus, say, an haproxy container with links to both website containers. Then the entire combination exposes exactly one port (80) to the network, and the haproxy container forwards traffic to one or the other container based on the hostname of the request.
---
proxy:
  build: proxy
  ports:
    - "80:80"
  links:
    - blog
    - work

blog:
  build: blog

work:
  build: work
Then a haproxy config such as,
global
    log 127.0.0.1 local0
    maxconn 2000
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    log     global
    option  dontlognull
    option  redispatch
    retries 3
    timeout connect 5000s
    timeout client 1200000s
    timeout server 1200000s

### HTTP frontend
frontend http_proxy
    mode http
    bind *:80
    option forwardfor except 127.0.0.0/8
    option httplog
    option http-server-close

    acl blog_url hdr_beg(host) myblog
    use_backend blog if blog_url

    acl work_url hdr_beg(host) mybusiness
    use_backend work if work_url

### HTTP backends
backend blog
    mode http
    server blog1 blog:80 check

backend work
    mode http
    server work1 work:80 check
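To try it out (a sketch, assuming the compose file and haproxy config above are in place and DNS or /etc/hosts points both domains at the VPS; <vps-ip> is a placeholder):
docker-compose up -d
curl -H 'Host: www.myblog.com' http://<vps-ip>/
curl -H 'Host: www.mybusiness.com' http://<vps-ip>/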
