Changing domain on Node-RED Raspberry Pi - dns

When I want to access Node-RED via browser, I type 192.168.0.24:1880/ui/.
Now I just want to reach the Node-RED site via a local domain, something like website.test.
I have already changed the port from 1880 to 80.
I also edited the /etc/hosts file to add -> 192.168.0.24 website.test
But when I test it, I can't access the Node-RED website with this domain.
Does anyone know how to accomplish this?

First, binding Node-RED to port 80 will require you to run it as root on the Pi. This is a VERY BAD idea unless you 100% understand the consequences, as it opens up a LOT of potential security issues.
A better solution is to bind Node-RED to 127.0.0.1 so it only listens on localhost, then use something like nginx to proxy it on port 80.
You can change the bind address by uncommenting the uiHost line at the top of the settings.js file in the userDir (which is normally /home/pi/.node-red).
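For reference, the relevant fragment of settings.js looks something like this (a sketch; the surrounding comments vary between Node-RED versions):
module.exports = {
    // the TCP port that the Node-RED web server listens on
    uiPort: process.env.PORT || 1880,
    // uncommented so Node-RED only listens on localhost and is only
    // reachable through the nginx proxy running on the same machine
    uiHost: "127.0.0.1",
    // ... rest of the default settings unchanged
};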
A basic nginx config would look like this:
server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://127.0.0.1:1880;

        # Defines the HTTP protocol version for proxying;
        # by default it is set to 1.0.
        # For WebSockets and keepalive connections you need version 1.1.
        proxy_http_version 1.1;

        # Sets conditions under which the response will not be taken from a cache.
        proxy_cache_bypass $http_upgrade;

        # These header fields are required if your application uses WebSockets.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # The $host variable contains, in order of precedence:
        # the hostname from the request line, the hostname from the
        # Host request header field, or the server name matching the request.
        proxy_set_header Host $host;
    }
}
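On Raspberry Pi OS / Debian you would typically drop that server block into a site file and enable it like this (a sketch assuming the stock Debian nginx layout; the file name node-red is arbitrary):
sudo nano /etc/nginx/sites-available/node-red    # paste the server block above
sudo ln -s /etc/nginx/sites-available/node-red /etc/nginx/sites-enabled/
sudo nginx -t                                    # check the config parses
sudo systemctl reload nginx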
Second, editing the /etc/hosts file on the Pi will only map a hostname for processes running on the Pi, not anywhere else (e.g. on your PC).
If your other machine is another Linux machine or a Mac, then you can probably use mDNS to access the Pi, which by default will be found at raspberrypi.local. Unfortunately Windows doesn't support mDNS out of the box, so that won't work (you may be able to add support by installing some of Apple's printer packages).
You can edit the hosts file on Windows (C:\Windows\System32\drivers\etc\hosts), but this isn't a great solution, as you will need to do it on every machine that wants to access the Node-RED instance.
Other options include adding an entry on your local router or running your own DNS server, but both of those options are far too complicated to get into here.

Related

Running .NET Core 5.0 on Debian with Nginx

I wanted to move a project I have working on Windows to a Unix machine.
The machine is running Debian 9 with Nginx.
This project runs absolutely fine on Windows with IIS.
I've followed all the instructions from here: created a service so the app runs on machine start, and an Nginx configuration to proxy connections from the port I want to use to port 5000.
When I start the application with dotnet Myddl.dll, it starts and says it is only listening on port 5000.
Then when I try to access it, I see a warning:
warn: Microsoft.AspNetCore.HttpsPolicy.HttpsRedirectionMiddleware[3]
      Failed to determine the HTTPS port for redirect.
I know it is related to my app redirecting to HTTPS and not knowing where to redirect to, but how do I resolve this?
My service
[Unit]
Description=Myapp API
[Service]
WorkingDirectory=/var/www/myapp/publish
ExecStart=/usr/bin/dotnet /var/www/myapp/publish/myapp.dll
Restart=always
# Restart service after 10 seconds if the dotnet service crashes:
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=dotnet-example
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production
Environment=DOTNET_PRINT_TELEMETRY_MESSAGE=false
[Install]
WantedBy=multi-user.target
server {
    listen 6969;
    server_name mysite.net *.mysite.net;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EDIT:
I've been trying to resolve this and still can't. When I start the app on the Unix machine, I get the following:
root@myhost:/var/www/myapp/publish# dotnet Myapp.dll
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: localhost:5000
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /var/www/myapp/publish
Obviously it is missing the https option, and I can't figure out why.
EDIT2:
I've published the app as self-contained for linux-x64, and now I do not get the warning saying it couldn't determine the https port; in my browser I get redirected to https://mydomain:5001 when I access http://mydomain:6969.
Still, the app does not listen on https on Unix, just on Windows.
EDIT3:
Noticed that if I go to one of my endpoints, e.g. http://IP:6969/api/users, I get a 500 response.
EDIT4:
When I was loading my application locally, I got straight through to the Swagger page at /swagger/index.html. For some reason my API, when compiled for Linux, does not accept this URL and returns a 404, but if I go to one of my endpoints, e.g. /api/users, it does return the data I was expecting.
The default https port should be 5001.
You can set the https port by following the official docs.
Or disable the related middleware; use nginx for https termination if needed.
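For example, one documented way is to set the ASPNETCORE_HTTPS_PORT environment variable, which in the systemd unit above would be one extra line (a sketch; 5001 assumes Kestrel's default HTTPS port):
[Service]
# tell HttpsRedirectionMiddleware which port to redirect to
Environment=ASPNETCORE_HTTPS_PORT=5001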
I was trying to migrate an API, and since on Windows it dropped me onto my Swagger page at /swagger/index.html, I was expecting the same to happen here, which for some reason it doesn't.
So if I access one of my endpoints (e.g. /api/users), it does work fine.

Accessing CUPS from Node within a Docker container

So I have a Node server within a Docker container. I would like it to communicate with the host system's CUPS server. However, when I make an AJAX call to the server with port 631 exposed, I get a 400 Bad Request error.
Looking at the CUPS logs, it gives this reason for the rejection:
Request from "localhost" using invalid Host: field "host.docker.internal:631"
Now, to even reach the host machine I have to use host.docker.internal, but I have not figured out a way to get CUPS to ignore the Host header or treat it as localhost.
CUPS is watching for any ServerAlias, and anything on port 631, so it "should" accept the call. Any ideas?
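For reference, the relevant cupsd.conf lines would look something like this (a sketch of just the directives mentioned, not the full file):
# /etc/cups/cupsd.conf
Port 631
# accept requests addressed to any hostname in the Host: header
ServerAlias *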
I had the same problem with CUPS (2.3.4) on macOS. I spent several hours fixing the invalid Host: field error.
It seems there's a bug: the error occurs even when using ServerAlias * in the CUPS conf.
For those who are looking for a workaround:
We have to change the Host header sent from the Docker container to localhost. To do so, I set up an Nginx container listening on port 8888 that rewrites the Host field while proxy_passing to the host's CUPS server.
This is the nginx conf.d:
server {
    listen 8888;

    location / {
        proxy_pass http://host.docker.internal:631;
        proxy_set_header Host localhost;
    }
}
Now, instead of connecting to host.docker.internal:631, we connect the CUPS client to localhost:8888. (I set up the nginx server in the same Docker container; you might want a separate container depending on your needs.)
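If you'd rather run the proxy as its own container, a minimal sketch (assuming the conf above is saved as cups-proxy.conf and you're on Docker Desktop, where host.docker.internal resolves out of the box):
docker run -d --name cups-proxy \
  -p 8888:8888 \
  -v "$(pwd)/cups-proxy.conf:/etc/nginx/conf.d/default.conf:ro" \
  nginx:alpine
The Node container would then talk to the proxy by container name over a shared Docker network, rather than via localhost:8888.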

Nginx - Load balancing returns 404 on Windows

I have installed Nginx for Windows (64-bit) from here, because the official binaries are 32-bit. The aim is to use Nginx for load balancing NodeJS applications. I am following the instructions from here, where a link to a sample basic configuration file also exists.
The following configuration file works successfully on Linux, where nginx was installed via the Ubuntu PPA. The servers themselves are started via pm2.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

upstream top_servers {
    # Use IP hash for session persistence.
    # Note: only one load-balancing method can be active per upstream,
    # so the least-connected algorithm is commented out here.
    ip_hash;
    # least_conn;

    # List of Node.js application servers
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    server 127.0.0.1:3004;
}

server {
    listen 80;
    server_name ip.address;

    location /topserver/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://top_servers;
        proxy_set_header X-Request-Id $request_id;
    }
}
However, this file does not work on Windows. I am getting a 'No such file or directory' error pointing at the html folder of the Nginx installation on Windows. I didn't need any such setting on Linux.
Can you please help me convert the above configuration file for Windows?
NOTE: I don't have a choice - Windows is a must for this project.
So, I overwrote the contents of conf/nginx.conf with the contents shown above. First, I got an error: '"map" directive is not allowed here'. Then, after removing that directive, I got another error: '"upstream" directive is not allowed here'. I think the binaries I am using do not support load balancing.
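(Note: per the nginx docs, "map" and "upstream" are only valid inside the http {} context. The Ubuntu package's nginx.conf includes site files inside http {} for you, while overwriting conf/nginx.conf wholesale leaves these directives at top level. A minimal sketch of a skeleton that accepts them:)
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    # paste the map, upstream and server blocks from above here;
    # they are only valid at this level, inside http {}
}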

Install Ghost Blog

I've been trying to get Ghost.io installed on my web server for quite some time. I have a VPS with CentOS 6 and cPanel.
Today I found a script at http://www.allaboutghost.com/one-click-ghost-install-script/ that said you could just enter a command into your SSH terminal and have it all installed for you.
Command
wget -O - https://raw.github.com/howtoinstallghost/installghost.sh/master/installGhost.sh | sudo bash
I did this and it appears to have worked; I didn't get any errors, but now I am not able to find the install either in FileZilla or with my web browser. The website says that it installs in the /var/www/ghost/ directory, but I can't find that. If I use cd /var/www/ghost/ over SSH, it takes me right to it and even lets me edit the config.example.js file.
If I point my browser to www.mydomain.com:80, since the site says it installs on port 80, it just takes me back to my home page.
What am I missing and what do I need to do?
As per the comments, I did follow the instructions on the GitHub page. Now all I get when I visit mydomainname.com/ghost/
Ghost is really easy to install. You'd better not use third-party scripts for that, as the process can vary a lot from system to system. All you need is to get Node.js installed and then follow the instructions, for example from here: https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ghost-on-ubuntu-16-04. They are very detailed and should work for CentOS as well.
The most common errors are:
- Not configuring a reverse proxy (nginx or Apache) to link to your Ghost install on port 2368. Here is an example for nginx:
server {
    listen 80;
    server_name your_domain_or_ip_address;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:2368;
    }
}
If you now see a default home page, it means some web server is already running, intercepting all requests and parking them at a default location (e.g. /var/www).
- If you want Ghost to be the one and only web server on your VPS, you have to remove or shut down the currently installed web server and try to configure Ghost to answer on port 80, like this:
server: {
    host: '0.0.0.0',
    port: '80'
}
I haven't tested it, but it should work. This kind of install is not recommended and is insecure. I suppose you can configure a reverse proxy from cPanel, but I'm not sure.
The best and easiest way to set up Ghost is using SSH.
Hope it helps. For any further help, you'll have to provide more details, and possibly logs and configs. You'll catch most of the possible errors when installing or starting your blog with npm:
sudo npm install --production
sudo npm start --production
Good luck with your deployment.
Hey, you don't need to do all that much - you can just launch Ghost from DigitalPress for free.
Learn how to host a Ghost blog for free here: https://treanches.digitalpress.blog/hosting-ghost-blog/
You may think of this as self-promotion, but the article is easy to follow and you won't regret reading it.

Setup (https) SSL on localhost for Meteor development

How do you create a self-signed SSL certificate to use on a local server on Mac OS 10.9?
I need my localhost served as https://localhost.
I am using the LinkedIn API. The feature which requires SSL on localhost is explained here:
https://developer.linkedin.com/documents/exchange-jsapi-tokens-rest-api-oauth-tokens
In brief, LinkedIn will send the client a bearer token after the client authorises my app to access their data. The built-in JavaScript library by LinkedIn will automatically send this cookie to my server/backend. This JSON info is used for user authentication.
However, LinkedIn will not send the private cookie if the server is not HTTPS.
A quick and easy solution that works in dev/prod mode, using http-proxy on top of your app:
1) Add the tarang:ssl package:
meteor add tarang:ssl
2) Add your certificate and key to a directory in your app's /private folder, e.g. /private/key.pem and /private/cert.pem.
Then in your /server code:
Meteor.startup(function() {
    SSLProxy({
        port: 6000, // or 443 (normal port, requires sudo)
        ssl: {
            key: Assets.getText("key.pem"),
            cert: Assets.getText("cert.pem"),
            // optional CA chain:
            //ca: Assets.getText("ca.pem")
        }
    });
});
Then fire up your app and load https://localhost:6000. Be sure not to mix up your ports between https and http, as they are served separately.
With this I'm assuming you know how to create your own self-signed certificate; there are loads of resources on how to do this. Just in case, here are some links:
http://www.akadia.com/services/ssh_test_certificate.html
https://devcenter.heroku.com/articles/ssl-certificate-self
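Just as an example, a self-signed key/cert pair matching the key.pem/cert.pem names used above can be generated with one standard openssl invocation:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout private/key.pem -out private/cert.pem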
An alternative to self-signed certs: it may be better to use an official certificate for your app's domain and use /etc/hosts to create a loopback on your local computer too. This is because it's tedious to have to switch certs between dev and prod.
Or you could just use ngrok to port-forward :)
1) Start your server (i.e. at localhost:3000)
2) Start ngrok from the command line: ./ngrok http 3000
That should give you http and https URLs to access from any device.
Another solution is to use NGINX. The following steps were tested on Mac El Capitan, assuming your local website runs on port 3000:
1. Add a host to your local machine:
Edit your hosts file: vi /etc/hosts
Add a line for your local dev domain: 127.0.0.1 dev.yourdomain.com
Flush your cache: dscacheutil -flushcache
Now you should be able to reach your local website at http://dev.yourdomain.com:3000
2. Create a self-signed SSL certificate as explained here: http://mac-blog.org.ua/self-signed-ssl-for-nginx/
3. Install nginx and configure it to map https traffic to your local website:
brew install nginx
sudo nginx
Now you should be able to reach http://localhost:8080 and see an Nginx welcome message.
This is the default conf, so now you have to set up the https conf:
Edit your conf file :
vi /usr/local/etc/nginx/nginx.conf
Uncomment the HTTPS server section and change the following lines:
server_name dev.yourdomain.com;
Point it to the certificates you just created:
ssl_certificate /path-to-your-keys/nginx.pem;
ssl_certificate_key /path-to-your-keys/nginx.key;
Replace the location section with this one:
location / {
    proxy_pass http://localhost:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Client-Verify SUCCESS;
    proxy_set_header X-Client-DN $ssl_client_s_dn;
    proxy_set_header X-SSL-Subject $ssl_client_s_dn;
    proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
    proxy_read_timeout 1800;
    proxy_connect_timeout 1800;
}
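Put together, the edited HTTPS section would look roughly like this (a sketch based on the commented-out section that ships in the default nginx.conf; your certificate paths will differ):
server {
    listen       443 ssl;
    server_name  dev.yourdomain.com;

    ssl_certificate      /path-to-your-keys/nginx.pem;
    ssl_certificate_key  /path-to-your-keys/nginx.key;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        # plus the X-Real-IP / X-Forwarded-For / X-SSL-* headers from above
        proxy_read_timeout 1800;
        proxy_connect_timeout 1800;
    }
}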
Restart nginx :
sudo nginx -s stop
sudo nginx
And now you should be able to access https://dev.yourdomain.com
