How do you create a self-signed SSL certificate to use on a local server on Mac OS X 10.9?
I need my localhost served as https://localhost.
I am using the LinkedIn API. The feature that requires SSL on localhost is explained here:
https://developer.linkedin.com/documents/exchange-jsapi-tokens-rest-api-oauth-tokens
In brief, LinkedIn sends the client a bearer token after the client authorises my app to access their data. LinkedIn's built-in JavaScript library automatically sends this cookie to my server/backend, where the token is used for user authentication.
However, LinkedIn will not send the private cookie if the server is not running HTTPS.
A quick and easy solution that works in both dev and prod mode, using http-proxy on top of your app:
1) Add the tarang:ssl package:
meteor add tarang:ssl
2) Add your certificate and key to a directory in your app's /private folder, e.g. /private/key.pem and /private/cert.pem.
Then, in your /server code:
Meteor.startup(function() {
  SSLProxy({
    port: 6000, // or 443 (normal HTTPS port, requires sudo)
    ssl: {
      key: Assets.getText("key.pem"),
      cert: Assets.getText("cert.pem"),
      // Optional CA chain:
      // ca: Assets.getText("ca.pem")
    }
  });
});
Then fire up your app and load https://localhost:6000. Be sure not to mix up your HTTPS and HTTP ports, as they are served separately.
With this I'm assuming you know how to create your own self-signed certificate; there are loads of resources on how to do this. Just in case, here are some links:
http://www.akadia.com/services/ssh_test_certificate.html
https://devcenter.heroku.com/articles/ssl-certificate-self
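If you don't yet have a certificate, a single openssl command is usually enough for local development. A minimal sketch (the file names match the key.pem and cert.pem used above; the subject is just an example):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout key.pem -out cert.pem \
        -subj "/CN=localhost"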
An alternative to self-signed certs: it may be better to use an official certificate for your app's domain and use /etc/hosts to create a loopback on your local computer too, because it's tedious to have to switch certs between dev and prod.
Or you could just use ngrok to port forward :)
1) Start your server (e.g. at localhost:3000)
2) Start ngrok from the command line: ./ngrok http 3000
That should give you HTTP and HTTPS URLs to access from any device.
Another solution is to use NGINX. The following steps were tested on OS X El Capitan, assuming your local website runs on port 3000:
1. Add a host to your local machine:
Edit your hosts file: vi /etc/hosts
Add a line for your local dev domain: 127.0.0.1 dev.yourdomain.com
Flush your DNS cache: dscacheutil -flushcache
Now you should be able to reach your local website at http://dev.yourdomain.com:3000
2. Create a self-signed SSL certificate, as explained here: http://mac-blog.org.ua/self-signed-ssl-for-nginx/
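In case that link goes stale, something along these lines produces the two files referenced in the nginx config below (adjust the subject and validity as needed):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout nginx.key -out nginx.pem \
        -subj "/CN=dev.yourdomain.com"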
3. Install nginx and configure it to map HTTPS traffic to your local website:
brew install nginx
sudo nginx
Now you should be able to reach http://localhost:8080 and see the Nginx welcome message.
That is the default conf, so now you have to set up the HTTPS conf.
Edit your conf file:
vi /usr/local/etc/nginx/nginx.conf
Uncomment the HTTPS server section and change the following lines:
server_name dev.yourdomain.com;
Point these at the certificates you just created:
ssl_certificate /path-to-your-keys/nginx.pem;
ssl_certificate_key /path-to-your-keys/nginx.key;
Replace the location section with this one:
location / {
    proxy_pass http://localhost:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Client-Verify SUCCESS;
    proxy_set_header X-Client-DN $ssl_client_s_dn;
    proxy_set_header X-SSL-Subject $ssl_client_s_dn;
    proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
    proxy_read_timeout 1800;
    proxy_connect_timeout 1800;
}
Restart nginx:
sudo nginx -s stop
sudo nginx
And now you should be able to access https://dev.yourdomain.com.
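You can check it from the terminal too; the -k flag tells curl to accept the self-signed certificate:

    curl -k https://dev.yourdomain.com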
I'm new to Docker and nginx, so this may be a simple question, but I've been searching through questions and answers for a while and haven't found the correct solution.
I'm trying to run an nginx server through Docker to reroute all requests from my.example.com/api/... to localhost:3000/api/...
I have the following Dockerfile:
FROM nginx
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
and the nginx.conf file:
server {
    server_name my.example.com;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
When I make the calls to the API on localhost:3000 this works fine, but when I try to run against my.example.com I get a network error that the host isn't found. To be clear, the domain I want to 'redirect' traffic from to localhost is a valid server address, but I want to mock its API for development.
This isn't working because your nginx is proxying the request to localhost, which is the container itself, but your app is running on the host's port 3000, outside of the container. Check this article.
Change
proxy_pass http://localhost:3000/;
to
proxy_pass http://host.docker.internal:3000/;
and add 127.0.0.1 example.com my.example.com to your /etc/hosts.
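Putting both changes together, the nginx.conf would look something like this (a sketch; host.docker.internal resolves out of the box on Docker Desktop for Mac and Windows, while on Linux you may need to run the container with --add-host=host.docker.internal:host-gateway):

server {
    server_name my.example.com;

    location / {
        # proxy to the app running on the Docker host, not in this container
        proxy_pass http://host.docker.internal:3000/;
        proxy_set_header Host $host;
    }
}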
I wanted to move a project I have working on Windows to run on a Unix machine.
The machine is running Debian 9 with Nginx.
This project runs absolutely fine on Windows with IIS.
I've followed all the instructions on here: created a service so the app runs when the machine starts, and an Nginx configuration to proxy the connection from the port I want to use to port 5000.
When I start the application with dotnet Myddl.dll, it starts and says it is only listening on port 5000.
Then, when I try to access it, I can see a warning:
warn: Microsoft.AspNetCore.HttpsPolicy.HttpsRedirectionMiddleware[3]
      Failed to determine the HTTPS port for redirect.
I know it is related to my app redirecting to HTTPS and not knowing where to redirect, but how do I resolve this?
My service
[Unit]
Description=Myapp API
[Service]
WorkingDirectory=/var/www/myapp/publish
ExecStart=/usr/bin/dotnet /var/www/myapp/publish/myapp.dll
Restart=always
# Restart service after 10 seconds if the dotnet service crashes:
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=dotnet-example
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production
Environment=DOTNET_PRINT_TELEMETRY_MESSAGE=false
[Install]
WantedBy=multi-user.target
And my nginx configuration:
server {
    listen 6969;
    server_name mysite.net *.mysite.net;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EDIT:
I've been trying to resolve this and still can't. When I start the app on the Unix machine I get the following:
root@myhost:/var/www/myapp/publish# dotnet Myapp.dll
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: localhost:5000
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /var/www/myapp/publish
Obviously it is missing the HTTPS option, and I can't figure out why.
EDIT2:
I've published the app as self-contained for linux-x64, and now I do not get the warning saying that it couldn't determine the HTTPS port. In my browser I get redirected to https://mydomain:5001 when I access http://mydomain:6969.
Still, the app does not listen on HTTPS on Unix, just on Windows.
EDIT3:
I noticed that if I go to one of my endpoints, e.g. http://IP:6969/api/users, I get a 500 response.
EDIT4:
When I was loading my application locally, I got straight through to the Swagger page at /swagger/index.html. For some reason my API, when compiled for Linux, does not accept this URL and returns a 404, but if I go to one of my endpoints, e.g. /api/users, it does return the data I was expecting.
The default HTTPS port should be 5001.
You can set the HTTPS port by following the official docs.
Or disable the related middleware and use nginx for HTTPS termination if needed.
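For example, since nginx terminates TLS here, you can tell the redirect middleware which port to send clients to. A sketch for ASP.NET Core 3.1 (443 is an assumption; use whatever port nginx listens on for HTTPS):

// Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    // Kestrel itself only listens on HTTP behind nginx, so the
    // HttpsRedirectionMiddleware needs an explicit port to redirect to.
    services.AddHttpsRedirection(options =>
    {
        options.HttpsPort = 443; // assumption: nginx serves HTTPS on 443
    });

    services.AddControllers();
}

Alternatively, set the ASPNETCORE_HTTPS_PORT environment variable in the systemd unit, or remove app.UseHttpsRedirection() entirely and let nginx handle the redirect.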
I was trying to migrate an API, and since on Windows it was returning my Swagger page located at /swagger/index.html, I was expecting the same to happen, which for some reason it doesn't.
So if I access one of my endpoints (e.g. /api/users) it does work fine.
When I want to access Node-RED via a browser, I type 192.168.0.24:1880/ui/.
Now I just want to reach the Node-RED site via a local domain, something like website.test.
I have already changed the port from 1880 to 80.
I also edited the /etc/hosts file to add: 192.168.0.24 website.test
But when I test it, I can't access the Node-RED website with this domain.
Does anyone know how to accomplish this?
First, binding Node-RED to port 80 will require you to run it as root on the Pi. This is a VERY BAD idea unless you 100% understand the consequences, as it opens up a LOT of potential security issues.
A better solution is to change the address that Node-RED binds to to 127.0.0.1, so it only listens on localhost, then use something like nginx to proxy it on to port 80.
You can change the bind address by uncommenting the uiHost line at the top of the settings.js file in the userDir (which is normally /home/pi/.node-red).
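Once uncommented, that part of settings.js looks roughly like this (1880 is the default port):

// /home/pi/.node-red/settings.js
uiPort: process.env.PORT || 1880,

// listen on localhost only, so Node-RED is not directly reachable
uiHost: "127.0.0.1",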
A basic nginx config would look like this:
server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://127.0.0.1:1880;

        # Defines the HTTP protocol version for proxying;
        # by default it is set to 1.0. For WebSockets and
        # keepalive connections you need to use version 1.1.
        proxy_http_version 1.1;

        # Sets conditions under which the response will not be taken from a cache.
        proxy_cache_bypass $http_upgrade;

        # These header fields are required if your application is using WebSockets.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # The $host variable contains, in this order of precedence:
        # the hostname from the request line, the hostname from the Host
        # request header field, or the server name matching the request.
        proxy_set_header Host $host;
    }
}
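After changing settings.js and the nginx config, restart both so the changes take effect (assuming Node-RED was installed as a service by the Pi install script, and nginx from apt):

    sudo systemctl restart nodered
    sudo nginx -t && sudo systemctl restart nginx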
Second, editing the /etc/hosts file on the Pi will only map a hostname for processes running on the Pi, not anywhere else (e.g. on your PC).
If your other machine is another Linux machine or a Mac, then you can probably use mDNS to access the Pi, which by default will be found at raspberrypi.local. Unfortunately, Windows doesn't support mDNS, so that won't work. (You may be able to add support by installing some printer packages from Apple.)
You can edit the hosts file on Windows (C:\Windows\System32\drivers\etc\hosts), but this isn't a great solution, as you will need to do it on every machine that wants to access the Node-RED instance.
Other options include adding an entry on your local router or running your own DNS server, but both of those options are far too complicated to get into here.
I have installed Nginx for Windows (64-bit) from here, because the official binaries are 32-bit. The aim is to use Nginx for load balancing NodeJS applications. I am following instructions from here, where the link to a sample basic configuration file also exists.
The following configuration file works successfully on Linux, where nginx was installed via the Ubuntu PPA. The servers themselves are started via pm2.
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream top_servers {
    # Use IP Hash for session persistence
    ip_hash;

    # Least connected algorithm
    least_conn;

    # List of Node.JS application servers
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
    server 127.0.0.1:3004;
}

server {
    listen 80;
    server_name ip.address;

    location /topserver/ {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://top_servers;
        proxy_set_header X-Request-Id $request_id;
    }
}
However, this file does not work on Windows. I am getting a 'No such file or directory' error pointing at the html folder of the Nginx installation on Windows. I didn't need any such setting on Linux.
Can you please help me convert the above configuration file for Windows?
NOTE: I don't have a choice; Windows is a must for this project.
So, I overwrote the contents of conf/nginx.conf with the contents shown above. First, I got an error that the "map" directive is not allowed here. Then, after removing this directive, I got another error that the "upstream" directive is not allowed here. I think the binaries I am using do not support load balancing.
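For what it's worth, those errors do not mean the Windows binaries lack load balancing: map and upstream are only valid inside the http context, and conf/nginx.conf is parsed starting at the top-level (main) context, so the snippet needs wrapping before it can replace the whole file. A minimal sketch (note nginx allows only one balancing method per upstream, so keep either ip_hash or least_conn, not both):

events {
    worker_connections 1024;
}

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    upstream top_servers {
        least_conn;
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
        server 127.0.0.1:3004;
    }

    server {
        listen 80;
        server_name ip.address;

        location /topserver/ {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_pass http://top_servers;
        }
    }
}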
I am working on building a very old program that has many outdated links to dependencies.
These links might be in other dependencies downloaded from the web, which rules out the option of changing the URLs.
I am able to find all of these dependencies at other links, but changing the paths has become an endless task.
Is it possible to create a list of rules for outgoing URLs that map one to one?
For example:
http://Oldserver.com/this/is/one/OldDependency.jar -> http://Newserver.com/this/is/one/with/other/url/NewDependency.jar
It does not matter what tool is used for the routing, iptables or something else; I am willing to set anything up for this.
This needs to happen at the OS level because the paths are inside tar files.
I was able to get this working by using a local nginx; the best solution here was a dockerized nginx container.
I will use the example above:
http://Oldserver.com/this/is/one/OldDependency.jar -> http://Newserver.com/this/is/one/with/other/url/NewDependency.jar
Steps:
Edit your hosts file to route the host to your localhost:
$ sudo vim /etc/hosts
Add this line to your hosts file:
127.0.0.1 Oldserver.com
Pull the nginx docker container:
docker pull nginx
Save this nginx configuration file to some path:
events {
    worker_connections 4096; ## Default: 1024
}

http {
    server {
        listen 80;
        server_name Oldserver.com;

        location = /this/is/one/OldDependency.jar {
            proxy_pass http://Newserver.com/this/is/one/with/other/url/NewDependency.jar;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location / {
            proxy_pass http://Oldserver.com;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
If you have more paths, add them above the wildcard location /.
The location / block forwards all paths not matched above to the original server, with the path preserved.
Set permissions on the config file:
chmod 600 /some/path/to/nginx.conf
Start up an nginx docker container with the configuration file:
docker run --name proxy -v /some/path/to/nginx.conf:/etc/nginx/nginx.conf:ro -p 80:80 -d nginx
Now every request to Oldserver.com will go through your nginx proxy and be rerouted if it matches any of your location configurations.
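A quick way to check the mapping (-I shows just the response headers):

    curl -I http://Oldserver.com/this/is/one/OldDependency.jar

With the hosts entry in place, the request hits the local nginx, which should answer with the headers of NewDependency.jar fetched from Newserver.com.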
I had a similar problem to this, needing to rewrite an outgoing URL. In my case it was in a docker container running on Kubernetes.
In my case it was because of this issue: https://stackoverflow.com/a/63712440
The app runtime's (.NET Core 3.1) crypto code only checks the first URL in the list of certificate revocation endpoints. I was doing an SSL client certificate setup (mTLS).
The PKI cert I was issued contained an internal domain address first and second, and then a publicly addressable URL third:
X509v3 CRL Distribution Points:
    Full Name:
        URI:http://some.domain.1/CRL/Cert.crl
        URI:http://some.domain.2/CRL/Cert.crl
        URI:http://x509.domain.com/CRL.crl
Because the domain addresses use a 'CRL' folder in the path but the public URL does not, just mapping the public IP address to the local domain host via /etc/hosts (or k8s hostAliases) didn't work.
To solve this in k8s, I added a sidecar to my pod; here are the details:
First, start with an nginx.conf:
events { }

http {
    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:5555;
        }

        location /CRL/ {
            proxy_pass http://x509.domain.com/;
        }
    }
}
This kind of looks like a reverse proxy, but really it's just an actual proxy. My dotnet app serves on port 5555 inside the pod; 127.0.0.1 will route to the pod, not the nginx container. Note that the second proxy_pass value doesn't include the 'CRL' path, which allows the URL to be rewritten, not just redirected.
I then built an nginx docker image called crl-rewrite-proxy:
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
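Built from the directory holding the Dockerfile and nginx.conf with something like (the tag is illustrative):

    docker build -t crl-rewrite-proxy .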
I then added the image into my pod yaml as a sidecar:
- name: crl-rewrite-proxy
  image: $(image):crl-rewrite-proxy
  ports:
    - containerPort: 80
And then added an alias for the internal domain address, so outgoing calls to it from the app would route back into the pod:
hostAliases:
  - ip: "127.0.0.1"
    hostnames:
      - "some.domain.1"
Lastly I defined an ingress in my k8s yaml, so the aliased calls will be routed to the sidecar:
- kind: Ingress
  apiVersion: networking.k8s.io/v1
  metadata:
    name: $(name)-crl-rewrite-proxy
    namespace: $(namespace)
  spec:
    rules:
      - host: $(name).$(host)
        http:
          paths:
            - path: /CRL
              pathType: ImplementationSpecific
              backend:
                service:
                  name: $(name)
                  port:
                    number: 80
The app runtime makes a call to http://some.domain.1/CRL/Cert.crl; the host alias routes that to 127.0.0.1:80; k8s passes that to the sidecar; the sidecar passes that to nginx; nginx rewrites the host and URL to a public IP on a different path; the resource then gets fetched successfully.
Thanks to thor above for the local setup; I used it to verify this would work locally before doing the k8s bits.