Add non-docker service to traefik v2 - site resources missing - linux

Question update below!
I have set up Traefik over the last few days, and it seems to work great for Docker containers. What does not work is setting up a non-Docker backend. I have a netdata dashboard (https://github.com/netdata/netdata) running on port 19999 on the host.
I have defined a file provider:
[providers.file]
  directory = "/home/myname/traefik"
  filename = "providers.toml"
  watch = true
Where I defined the service and router for my netdata dashboard:
[http.routers]
  [http.routers.netdata]
    service = "netdata"
    middlewares = ["replacepath"]
    rule = "Host(`my.host.name`) && Path(`/netdata`)"

[http.middlewares]
  [http.middlewares.replacepath.replacePath]
    path = "/"

[http.services]
  [http.services.netdata]
    [http.services.netdata.loadBalancer]
      [[http.services.netdata.loadBalancer.servers]]
        url = "http://192.168.178.60:19999/"  # my server's local IP
I use replacepath to strip the path so I don't end up one directory further down, which doesn't exist.
However, when I visit http://my.host.name/netdata it serves me what looks like raw HTML only; I get 404s for the .css and .js content.
What do I have to do to get all the files in the website directory delivered? I feel like there is an easy solution to this which I can't see right now...
I found several tutorials using older Traefik versions, where they use frontends and backends; to my understanding, these have been replaced by routers, middlewares and services.
I tried using "http://localhost:19999" instead of my local IP, with no success (it results in a Bad Gateway error).
I also tried setting the Traefik container to the "host" network, reasoning that the containers are isolated from the rest of the host, so Traefik cannot communicate with the netdata server. But as I said, I do get at least part of the website, so that can't be the issue?
Update #1, 30 Jan 20:
After some more tries and a failed attempt to make it work with nginx, I realized that the proxy itself is not the problem here. I noticed that whatever service I run at root level (so, no path rules in Traefik, or location / in nginx) works, but everything that gets a path/location is broken or not working at all. One service I wanted to proxy via a route is the dashboard from my homebridge (https://github.com/nfarina/homebridge), but it seems like Angular has trouble with custom paths. Same problem with my netdata dashboard and my onionbox status site. I am leaving this question open; maybe someone finds a (hacky) way of making it work.

You must use "PathPrefix" on the router and "replacePathRegex" on the middleware.
Try it this way... it works for me:
[http]
  [http.services]
    [http.services.netdata]
      [http.services.netdata.loadBalancer]
        [[http.services.netdata.loadBalancer.servers]]
          url = "http://172.24.0.1:19999"

  [http.middlewares]
    [http.middlewares.rem_subfolder]
      [http.middlewares.rem_subfolder.replacePathRegex]
        regex = "/netdata/(.*)"
        replacement = "/$1"

  [http.routers]
    [http.routers.netdata]
      rule = "PathPrefix(`/netdata/`)"
      entrypoints = ["web", "websecure"]
      middlewares = ["rem_subfolder"]
      service = "netdata"
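To see what the replacePathRegex middleware does to incoming request paths, here is a quick local approximation in Node (Traefik uses Go's regexp engine, but for this simple pattern the behavior is the same):

```javascript
// Approximate the replacePathRegex middleware locally:
// "/netdata/<rest>" is rewritten to "/<rest>" before the
// request is forwarded to the netdata backend.
const regex = /\/netdata\/(.*)/;
const replacement = "/$1";

console.log("/netdata/dashboard.css".replace(regex, replacement)); // "/dashboard.css"
console.log("/netdata/".replace(regex, replacement));              // "/"
```

So asset requests like /netdata/dashboard.css reach netdata as /dashboard.css, which is why PathPrefix plus this middleware fixes the 404s on .css and .js files.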
Run the following command to get your host IP (the default route), and set it as the "url" of the service.
docker exec -it traefik ip route
Remember to change bind to = * to bind to = 172.24.0.1 in netdata.conf, to make netdata accessible only from Traefik.
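For reference, the relevant netdata.conf fragment might look like this (a sketch; 172.24.0.1 is the Docker gateway IP from this answer and may differ on your setup):

```ini
[web]
    # Listen only on the Docker bridge address so that
    # only Traefik (and other containers) can reach netdata.
    bind to = 172.24.0.1
```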

Related

Docker request to own server

I have a docker instance running apache on port 80 and node.js+express running on port 3000. I need to make an AJAX request from the apache-served website to the node server running on port 3000.
I don't know what the appropriate URL to use is. I tried localhost, but that resolved to the localhost of the client browsing the webpage (i.e. the end user) instead of the localhost of the Docker image.
Thanks in advance for your help!
First you should split your containers - it is good Docker practice to have one container per process.
Then you will need some tool for orchestration of these containers. You can start with docker-compose, which is IMO the simplest one.
It will launch all your containers and manage their network settings for you by default.
So, imagine you have the following docker-compose.yml file for launching your apps:
docker-compose.yml
version: '3'
services:
  apache:
    image: httpd # the official Apache image on Docker Hub is "httpd"
  node:
    image: node # or whatever
With such a simple configuration you will have the host names apache and node available in your network. So from inside your node application, you will see the apache container under the host name apache.
Just launch it with docker-compose up
make an AJAX request from the [...] website to the node server
The JavaScript, HTML, and CSS that Apache serves up is all read and interpreted by the browser, which may or may not be running on the same host as the servers. Once you're at the browser level, code has no idea that Docker is involved with any of this.
If you can get away with only sending links without hostnames, like <img src="/assets/foo.png">, that will always work without any configuration. Otherwise you need to use the DNS name or IP address of the host, in exactly the same way you would if you were running the two services directly on the host without Docker.

Run node.js on cpanel hosting server

Here is a simple piece of Node.js code.
var http = require('http');
http.createServer(function(req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello World!');
}).listen(8080);
I uploaded it to a cPanel hosting server, installed Node.js, and ran it.
On a normal server I could check the script's result by accessing 'http://{serverip}:8080'. But cPanel hosts domains and subdomains, and every domain is matched to its own site; even http://{serverip} is not a valid URL.
How can I access my node.js result?
Kindly teach me.
Thanks.
bingbing.
Install/Setup NodeJS with CPanel
1. Log in to your account using SSH (if it is not enabled for your account, contact the support team).
2. Download Node.js
wget https://nodejs.org/dist/latest/node-v10.0.0-linux-arm64.tar.xz
3. Extract the Node.js files
tar xvf node-v10.0.0-linux-arm64.tar.xz
4. Now rename the folder to "nodejs". To do this, type the following command:
mv node-v10.0.0-linux-arm64 nodejs
5. Now to install the node and npm binaries, type the following commands:
mkdir ~/bin
cp nodejs/bin/node ~/bin
cd ~/bin
ln -s ../nodejs/lib/node_modules/npm/bin/npm-cli.js npm
6. Node.js and npm are now installed on your account. To verify this, type the following commands:
node --version
npm --version
The ~/bin directory is in your path by default, which means you can run node and npm from any directory in your account.
7. Start Node.js Application
nohup node my_app.js &
8. Stop the Application
pkill node
9. Integrating a Node.js application with the web server(optional)
Depending on the type of Node.js application you are running, you may want to be able to access it using a web browser. To do this, you need to select an unused port for the Node.js application to listen on, and then define server rewrite rules that redirect visitors to the application.
In a text editor, add the following lines to the .htaccess file in the /home/username/public_html directory, where username represents your account username:
RewriteEngine On
RewriteRule ^$ http://127.0.0.1:XXXXX/ [P,L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ http://127.0.0.1:XXXXX/$1 [P,L]
In both RewriteRule lines, replace XXXXX with the port on which your Node.js application listens.
To run a Node.js application on a managed server, you must select an unused port, and the port number must be between 49152 and 65535 (inclusive).
Save the changes to the .htaccess file, and then exit the text editor. Visitors to your website are redirected to the Node.js application listening on the specified port.
If your application fails to start, the port you chose may already be in use. Check the application log for error codes like EADDRINUSE that indicate the port is in use. If it is, select a different port number, update your application’s configuration and the .htaccess file, and then try again.
cPanel typically runs Apache or another web server that is shared among all the cPanel/unix accounts. The web server listens on port 80. Depending on the domain name in the requested URL, the web server uses "Virtual Hosting" to figure out which cPanel/unix account should process the request, i.e. in which home directory to find the files to serve and scripts to run. If the URL only contains an IP address, cPanel has to default to one of cPanel accounts.
Ordinarily, without root access, a job run by a cPanel account cannot listen on port 80. Indeed, the available ports might be quite restrictive. If 8080 doesn't work, you might try 60000. To access a running node.js server, you'll need to have the port number it's listening on. Since that is the only job listening on that port on that server, you should be able to point your browser to the domain name of any of the cPanel accounts or even the IP address of the server, adding the port number to the URL. But, it's typical to use the domain name for the cPanel account running the node.js job, e.g. http://cPanelDomainName.com:60000/ .
Of course port 80 is the default for web services, and relatively few users are familiar with optional port numbers in URLs. To make things easier for users, you can use Apache to "reverse proxy" requests on port 80 to the port that the node.js process is listening on. This can be done using Apache's RewriteRule directive in a configuration or .htaccess file. This reverse proxying of requests arguably has other benefits as well, e.g. Apache may be a more secure, reliable and manageable front-end for facing the public Internet.
Unfortunately, this setup for node.js is not endorsed by all web hosting companies. One hosting company that supports it, even on its inexpensive shared hosting offerings, is A2Hosting.com. They also have a clearly written description of the setup process in their Knowledge Base.
Finally, it's worth noting that the developers of cPanel are working on built-in node.js support. "If all of the stars align we might see this land as soon as version 68," i.e. perhaps early 2018.
References
Apache Virtual Hosting - http://httpd.apache.org/docs/2.4/vhosts/
Apache RewriteRule Directive - http://httpd.apache.org/docs/2.4/mod/mod_rewrite.html
A2Hosting.com Knowledge Base Article on Configuring Node.js - https://www.a2hosting.com/kb/installable-applications/manual-installations/installing-node-js-on-managed-hosting-accounts
cPanel Feature Request Thread for node.js Support - https://features.cpanel.net/topic/nodejs-hosting
Related StackOverflow Questions
How to host a Node.Js application in shared hosting
Why node.js can't run on shared hosting?
Yes, it's possible, but it has a few dependencies which may or may not be supported by your cPanel hosting provider or the plan you opted in for.
The steps I mention below are just for demo purposes. If you are a student or just want to play with it, you can try them out. I'm not a security expert, so I really don't know how good this is from a security point of view.
With that being said, let's see how I configured it. I have a Hostinger cPanel hosting subscription, and the following are the steps:
Enable SSH access
Connect to the shared machine via SSH
Check your Linux distro, then download & set up Node.js
In my case following are the commands for that:
Download node & extract it using curl:
curl https://nodejs.org/dist/v12.18.3/node-v12.18.3-linux-x64.tar.gz |tar xz
This will download & extract node & create a directory; you can confirm that using the ls command.
At this point you can check the versions.
The node command works as-is, but for the npm command we have to modify the invocation as follows:
./node-v12.18.3-linux-x64/bin/node ./node-v12.18.3-linux-x64/lib/node_modules/npm/bin/npm-cli.js --version
Further, we can create aliases to make life a little easier.
I tried using .bashrc/.bash_profile, but somehow that didn't work.
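A minimal sketch of such aliases, assuming node was extracted to ~/node-v12.18.3-linux-x64 (the directory from the curl step above); since the profile files didn't work, these would be typed directly into the interactive shell:

```shell
# Hypothetical aliases for the extracted Node.js tree; adjust the
# path if your version or extraction directory differs.
alias node="$HOME/node-v12.18.3-linux-x64/bin/node"
alias npm="$HOME/node-v12.18.3-linux-x64/bin/node $HOME/node-v12.18.3-linux-x64/lib/node_modules/npm/bin/npm-cli.js"
```

After this, plain node --version and npm --version work for the current session.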
And that's all - a node server running on a shared cPanel machine.
Next, I wanted Express-based REST API support. The problem with that is that the API will be hosted locally on whatever port I give it. Check the example below:
var express = require('express')
var app = express()

app.get('/', function (req, res) {
    res.send('hosting node js base express api using php & shared hosting a great way to start yjtools')
})

console.log("listening yjtools node server on port 49876...")
app.listen(49876)
The problem here is that even though it will execute, I will not be able to access it over the network. This is because only fixed, predefined ports (like 80, 21, 3306, etc.) are allowed/open on the shared cPanel machine. Because of this, the Express app I hosted is only available locally on port 49876.
Let's see what we have:
An Express-based app hosted locally on the cPanel machine.
A PHP-based Apache server available over http/https.
So we can make use of PHP, a rewrite rule, and cURL to bridge the gap.
Following are the changes I made to make it work:
In the .htaccess file, add a rewrite rule; say domain/api is what I want my REST API path to be.
RewriteRule api/(.*)$ api/api.php?request=$1 [QSA,NC,L]
In the api/api.php file (this is the path I chose; you can choose any path):
<?php
echo "Hello " . $_REQUEST['username'];
echo '<hr>';

// Proxy the request to the local Express app
// (note: this must be the same port the app listens on)
$curl = curl_init('http://127.0.0.1:49876/');
curl_setopt($curl, CURLOPT_HEADER, 1);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);

// Get the full response
$resp = curl_exec($curl);

if ($resp === false) {
    // Could not connect to the local node server
    echo 'Error: ' . curl_error($curl);
} else {
    // Split response headers and body
    list($head, $body) = explode("\r\n\r\n", $resp, 2);
    $headarr = explode("\n", $head);

    // Forward headers to the client
    foreach ($headarr as $headval) {
        header($headval);
    }

    // Print body
    echo $body;
}

// Close connection
curl_close($curl);
?>
And at the SSH prompt, just run the app.js file:
node api/app.js
Here is a similar thing which I referred to for my program; we can also make this Node call via PHP itself.
Now I have Express-based REST API support, an Angular app hosted, and MySQL for the database, everything on cPanel.
You can use any domain pointed to that cPanel server; instead of accessing http://server-ip:8080, try accessing http://domain.tld:8080. By default cPanel does not bind on port 8080. Be sure to check whether there is a firewall on the server; if there is, allow incoming connections on TCP port 8080. Depending on your WHM server configuration, it should also work with http://server-ip:8080.
cPanel Version 80 has nodejs 10.x support: https://documentation.cpanel.net/display/80Docs/80+Release+Notes#id-80ReleaseNotes-InstallanduseNode.jsapplications
Install and use Node.js applications
You can now install and use Node.js applications on your server. To use Node.js, install the ea-nodejs10 module in the Additional Packages section of WHM's EasyApache 4 interface (WHM >> Home >> Software >> EasyApache 4).
You can register Node.js applications in cPanel's Application Manager interface (cPanel >> Home >> Software >> Application Manager). For more information, read our Guide to Node.js Installations documentation.
For Application Manager to be enabled: https://documentation.cpanel.net/display/80Docs/Application+Manager
Your hosting provider must enable the Application Manager feature in WHM's Feature Manager interface (WHM >> Home >> Packages >> Feature Manager).
Your hosting provider must install the following Apache modules:
The ea-ruby24-mod_passenger module. Note: This module disables Apache's mod_userdir module.
The ea-apache24-mod_env module. Note: This module allows you to add environment variables when you register your application. For more information about environment variables, read the Environment Variables section below.
The ea-nodejs10 module if you want to register a Node.js™ application.
You can see what Application Manager looks like in this YouTube video:
https://www.youtube.com/watch?v=ATxMYzLbRco
For anyone who wants to know how to deploy a Node.js app to cPanel, this is a good source; it explains the process thoroughly, so please check it.

Why does swapping between container IP and alias cause difference in AJAX request?

I have a small sample project located here that illustrates the problem I am seeing when working with a nginx + node + host docker stack.
I have 2 containers:
A node (express) application that simply returns a JSON object. It is CORS-enabled based on this website. It has its port published to the host via 3000:80.
An nginx server that is also CORS-enabled based on this website. It only serves static content (index.html and main.js) from the default location (/usr/share/nginx/html). Its port is published via 8080:80.
When running the containers individually from host I can access the node server and see the JSON object being returned. When I access the nginx server, I see my index.html and the javascript code from main.js runs.
Now I have the node app container linked to the nginx server container. From inside my main.js file in the nginx container, I attempt to access the server at http://nodeapp/api. I am seeing a CORS error:
XMLHttpRequest cannot load http://nodeapp/api. No
'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://localhost:8080' is therefore not allowed
access.
The strange thing is, the response header indicates it is coming from nginx and not my express application as I would expect. The nginx container is also not logging anything.
Things that worked
If I change the url for the XMLHttpRequest to the node container's IP (say 172.17.0.2) it works as expected and the response header indicates it is coming from the express server. In my /etc/hosts file there is an entry:
172.17.0.2 nodeapp abc123ContainerId quickserve_nodeapp_run_1
When I curl the node container from an interactive tty container it also works as expected.
If I load the node container and use http-server (server on host) it works as expected and the response header indicates it is coming from the express server.
Just in case it has an influence: that old thread (2013) mentioned a CORS option on the docker daemon.
Nowadays (Q4 2015), the docker daemon includes:
--api-cors-header="" Set CORS headers in the remote API
To allow cross-origin requests to the remote API, give values to --api-cors-header when running Docker in daemon mode. Setting * (asterisk) allows all; default or blank means CORS disabled.
$ docker -d -H="192.168.1.9:2375" --api-cors-header="http://foo.bar"
That might be a setting to use in your case.

How to connect to docker container's link alias from host

I have 3 separate pieces to my dockerized application:
nodeapp: A node:latest docker container running an expressjs app that returns a JSON object when accessed from /api. This server is also CORS-enabled according to this site.
nginxserver: A nginx:latest static server that simply hosts an index.html file that allows the user to click a button which would make the XMLHttpRequest to the node server above.
My host machine
The node:latest has its port exposed to the host via 3000:80.
The nginx:latest has its port exposed to the host via 8080:80.
From host I am able to access both nodeapp and nginxserver individually: I can make requests and see the JSON object returned from the node server using curl from the command line, and the button (index.html) is visible on the screen when I hit localhost:8080.
However, when I try clicking the button, the call to XMLHttpRequest('GET', 'http://nodeapp/api', true) fails, seemingly without hitting the nodeapp server (no log entry is present). I'm assuming this is because the host does not understand http://nodeapp/api.
Is there a way to tell docker that while a container is running to add its container linking alias to my hosts file?
I don't know if my question describes the proper solution to my problem. It looks as though I'm getting a CORS error returned, but I don't think the request is ever hitting my server. Does this have to do with accessing the application from my host machine?
Here is a link to an example repo
Edit: I've noticed that, when using the stack, clicking the button gets a response from my nginx container. I'm confused as to why it is routing through that server, as nodeapp is in my hosts file, so it should recognize the correlation there?
Problem:
nodeapp exists on an internal network which is visible to your nginxserver only; you can check this by entering nginxserver:
docker exec -it nginxserver bash
# cat /etc/hosts
Most importantly, your service setup is not correct: nginxserver should act as a reverse proxy in front of nodeapp:
host (client) -> nginxserver -> nodeapp
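A minimal nginx sketch of that reverse-proxy arrangement (an illustration, not the repo's actual config; it assumes the compose service name nodeapp listening on port 80, and the /api path from the question):

```nginx
# Inside the server { } block of the nginx container's config:
location /api {
    # nginx resolves "nodeapp" via Docker's embedded DNS / linked hosts entry
    proxy_pass http://nodeapp:80;
    proxy_set_header Host $host;
}
```

With this, the browser only ever talks to http://localhost:8080, so the request is same-origin and the CORS problem disappears entirely.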
Dirty Quick Solution:
If you really, really want your client (the host) to access the internal application nodeapp, then simply change the code below:
XMLHttpRequest('GET', 'http://nodeapp/api', true)
To
XMLHttpRequest('GET', 'http://localhost:3000/api', true)
Since in your docker-compose.yml the nodeapp service's port 80 is published to the host network as 3000, it can be accessed directly.
Better solution
You need to redesign your service stack to make nginxserver the frontend node; see this sample: http://schempy.com/2015/08/25/docker_nginx_nodejs/

403 Forbidden after successfully installing Ghost

I have been spending days figuring out how to install the viral Ghost platform, and have experienced numerous errors. Luckily, I managed to install it - Ghost gives me a positive "Ghost is running..." message in SSH after I run npm start --production. However, when I browse to my website - http://nick-s.se - Apache displays its default page, and when I go to the Ghost login area - /ghost - the site returns a 403 Forbidden.
P.S. I have specifically installed Ghost on a different port than the one Apache is running on. I don't know what's going on...
Update - I have found out that I can access my Ghost installation by adding the port number 2368, which I've configured in config.js. Now, however, my problem is: how can I run Ghost without using such ports?...
Tell your browser you want to connect to the port Ghost is running on: http://nick-s.se:2368
So, a few things, based on visiting the site:
1) It seems Apache isn't proxying the request onward to Ghost. Are you sure that you've configured it properly?
2) It also looks like Apache doesn't have access to the directory that you set as root. This shouldn't be necessary anyway if proxying is set up correctly, but could become an issue later if you wanted to use apache to serve things like the static assets.
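For reference, the proxying mentioned in point 1 might look roughly like this in Apache (a sketch, not the asker's actual config; it assumes Ghost listens on 127.0.0.1:2368 as set in config.js, and that mod_proxy and mod_proxy_http are enabled):

```apache
<VirtualHost *:80>
    ServerName nick-s.se
    # Forward all requests to the Ghost instance and rewrite
    # redirect headers coming back from it
    ProxyPass / http://127.0.0.1:2368/
    ProxyPassReverse / http://127.0.0.1:2368/
</VirtualHost>
```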
If you are open to nginx instead of Apache, I have written a how-to on this: link. You can skip the section on configuring nginx. Otherwise, it still might be useful if you work out how to convert the rules from nginx to Apache.
If you don't have any other sites running on your VPS, you can just turn Apache off, run Ghost on port 80, and not have to deal with Apache proxying requests to port 2368. If your VPS is running CentOS, you can check out this how-to on disabling Apache and running Ghost on port 80.
