I have a web application running in Azure AKS. It has its own public DNS name and HTTPS configured.
But it has no authentication.
I want to authenticate users who access this application using the built-in authentication feature of Azure App Service, and then proxy requests to my app in AKS.
I have installed an Nginx proxy in Docker and set up authentication. Authentication works, but in the web browser I get a 404 "Not found" and the request never reaches the application in AKS. Why?
The application itself is running fine and can be reached on its DNS name without proxy.
+----------------+                  +------------+
|     Azure      |                  |  Azure AD  |  authentication
|   Kubernetes   |                  +------------+
|      app       |
|  (my.aks.io)   |                 +--------------+
|                |                 | App Service  |
|                |      404!       |    Docker    |      200 OK
|                |<------https-----|  Nginx proxy |<------http------
+----------------+    my.aks.io    +--------------+
My Nginx proxy config
server {
    listen 80;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://my.aks.io;
    }
}
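My current guess is that the Host header might be the problem: the proxy receives requests on the App Service hostname, so $host no longer matches my.aks.io, and an ingress that routes by host name could answer 404. A variant I am considering (just a sketch, assuming the AKS ingress routes requests by host name) pins the header:
server {
    listen 80;

    location / {
        # Send the hostname the AKS ingress expects, not the App Service host
        proxy_set_header Host my.aks.io;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Present the correct server name during the upstream TLS handshake
        proxy_ssl_server_name on;
        proxy_pass https://my.aks.io;
    }
}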
Related
I wanted to move a project that works on Windows to run on a Unix machine.
The machine is running Debian 9 with Nginx.
This project runs absolutely fine on Windows with IIS.
I've followed all the instructions here: created a service so the app runs when the machine starts, and an Nginx configuration to proxy connections from the port I want to use to port 5000.
When I start the application with dotnet Myddl.dll, it starts and says it is only listening on port 5000.
Then when I try to access it, I can see a warning.
warn: Microsoft.AspNetCore.HttpsPolicy.HttpsRedirectionMiddleware[3]
Failed to determine the HTTPS port for redirect.
I know it is related to my app redirecting to HTTPS and not knowing where to redirect it, but how do I resolve this?
My service
[Unit]
Description=Myapp API
[Service]
WorkingDirectory=/var/www/myapp/publish
ExecStart=/usr/bin/dotnet /var/www/myapp/publish/myapp.dll
Restart=always
# Restart service after 10 seconds if the dotnet service crashes:
RestartSec=10
KillSignal=SIGINT
SyslogIdentifier=dotnet-example
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production
Environment=DOTNET_PRINT_TELEMETRY_MESSAGE=false
[Install]
WantedBy=multi-user.target
server {
    listen 6969;
    server_name mysite.net *.mysite.net;

    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EDIT:
I've been trying to resolve this and still can't. When I start the app on the Unix machine I get the following:
root@myhost:/var/www/myapp/publish# dotnet Myapp.dll
info: Microsoft.Hosting.Lifetime[0]
      Now listening on: localhost:5000
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /var/www/myapp/publish
Obviously it is missing the HTTPS option, and I can't figure out why.
EDIT2:
I've published the app as self-contained for linux-x64, and now I do not get the warning saying that it couldn't determine the HTTPS port. In my browser I get redirected to https://mydomain:5001 when I access http://mydomain:6969.
Still, the app does not listen on HTTPS on Unix, only on Windows.
EDIT3:
I noticed that if I go to one of my endpoints, e.g. http://IP:6969/api/users, I get a 500 response.
EDIT4:
When I was loading my application locally, I got straight through to the Swagger page at /swagger/index.html. For some reason my API, when compiled for Linux, does not accept this URL and returns a 404, but if I go to one of my endpoints, e.g. /api/users, it does return the data I was expecting.
The default HTTPS port should be 5001.
You can set the HTTPS port by following the official docs, or disable the related middleware.
Use Nginx for HTTPS termination if needed.
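For example (a minimal sketch, assuming the systemd unit from the question and that HTTPS redirection stays enabled), the redirect port can be supplied through the ASPNETCORE_HTTPS_PORT environment variable:
[Service]
# Tells HttpsRedirectionMiddleware which port to redirect to
Environment=ASPNETCORE_HTTPS_PORT=5001
If Nginx terminates HTTPS instead, you can drop app.UseHttpsRedirection() and let the proxy handle it.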
I was trying to migrate an API, and since on Windows it returned my Swagger page at /swagger/index.html, I was expecting the same to happen here, which for some reason it doesn't.
If I access one of my endpoints (e.g. /api/users), it does work fine.
I'm a little bit confused. I know how to serve a user a dynamic page using EJS with Express. Usually when the user accesses the domain, the API on that domain responds with a page from the same domain, so we just have to host the API on a server.
Now I'm learning React on the front end and using Express with MongoDB on the back end. How do I connect the front end and back end? How does the back end serve the React app to the user? I searched on YouTube, but no one talks about the back-end part; they only talk about hosting the front-end part.
I don't know if I'm asking this the right way or not, but please help me.
The idea is to have one web server that serves the static files (the React app) and one API server (Express) that exposes the API.
In order to consolidate both of these servers under one domain, you will need to use a reverse proxy. I would suggest using Nginx (inside Docker).
          +----------+
          |          |
          |  Client  |
          |          |
          +----+-----+
               |
               | myDomain.com
               |
           [INTERNET]
               |
               |
               v :80
       +-------+--------+
  /*   |                |   /api
+------+  Reverse Proxy +--------+
|      |                |        |
|      +----------------+        |
| :8080                          | :3000
|                                |
+---------------+       +--------+------+
|               |       |               |
|     React     |       |    Express    |
|  (webserver)  |       |  (API server) |
|               |       |               |
+---------------+       +---------------+
Any request to myDomain.com will be checked: if the path contains /api, the request will be passed to the backend; any other request will be passed to the web server that holds the SPA's static files.
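A minimal Nginx sketch of that routing (the upstream names react-web and api-server are placeholders for whatever your containers are called):
server {
    listen 80;
    server_name myDomain.com;

    # Requests whose path starts with /api go to the Express API server
    location /api {
        proxy_pass http://api-server:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else goes to the web server holding the React build
    location / {
        proxy_pass http://react-web:8080;
        proxy_set_header Host $host;
    }
}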
I need to accept HTTPS from the client, and my backend is also HTTPS.
How can I listen for HTTPS in Varnish and forward the request to the backend over HTTPS?
VARNISH_LISTEN_PORT=443
# how to add SSL certs?
Varnish, at least in the open source version, does not support HTTPS. Varnish Software released Hitch a while ago, which can be used to terminate HTTPS in front of a Varnish caching proxy. Many setups that I have seen also use Nginx for SSL termination, with Varnish as the backend.
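As an illustration, a minimal Nginx sketch that terminates TLS and hands requests to Varnish (the port 6081, the server name, and the certificate paths are assumptions, not part of the question):
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        # Varnish listening for plain HTTP on 6081 in this sketch
        proxy_pass http://127.0.0.1:6081;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}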
I just found out that the commercial product Varnish Plus in fact supports TLS/SSL.
Hitch is open source and in fact supports SSL termination; it can be used as a proxy to convert your HTTP server into an HTTPS server.
Following is a guide and example in which OpenStack running on HTTP has been converted to HTTPS for all communication.
Create a VM (to be used as the proxy node).
Download Hitch on the proxy node:
git clone https://github.com/varnish/hitch.git
official link: https://github.com/varnish/hitch
Install Hitch as shown below.
To install Hitch, the docs at the official link can be followed; below are the commands anyway.
$ ./bootstrap
$ ./configure
$ make
$ sudo make install
After successful installation of Hitch, prepare a certificate for the proxy node (a .pem file).
Start the proxy using Hitch as shown below.
[root@testing_tools hitch]# hitch --tls -f "[*]:443" -b "[2001::29]:80" devstack.pem -u hitch -g hitch
[root@testing_tools hitch]# hitch --tls -f "[*]:9696" -b "[2001::29]:9696" devstack.pem -u hitch -g hitch
Make the following changes to the endpoints at the OpenStack end (keystone database), i.e. configure all endpoints for HTTPS (one way to do it is shown after the table below).
mysql> select * from endpoint;
+----------------------------------+--------------------+-----------+----------------------------------+-------------------------------------------------+-------+---------+-----------+
| id                               | legacy_endpoint_id | interface | service_id                       | url                                             | extra | enabled | region_id |
+----------------------------------+--------------------+-----------+----------------------------------+-------------------------------------------------+-------+---------+-----------+
| 01c5333a2edf4505a14987770a762a8a | NULL               | public    | f883c99bc5514dd6b8d3b417fb8a121c | https://devstackipv6/volume/v1/$(project_id)s   | {}    |       1 | RegionOne |
| 1766694b9c5b4814a421a074d44b2d32 | NULL               | admin     | 68a37fb109aa4f878f893fc87c262f94 | https://devstackipv6/heat-api-cfn/v1            | {}    |       1 | RegionOne |
| 29e5c59cd68443d6beb96272b2d57143 | NULL               | internal  | eff63e56a0264b08a4cc9dc5de4ac8c4 | https://devstackipv6/heat-api/v1/$(project_id)s | {}    |       1 | RegionOne |
+----------------------------------+--------------------+-----------+----------------------------------+-------------------------------------------------+-------+---------+-----------+
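The change itself can be a single UPDATE (a sketch only; it assumes every endpoint uses the devstackipv6 host and that you have backed up the keystone database first):
mysql> UPDATE endpoint SET url = REPLACE(url, 'http://devstackipv6', 'https://devstackipv6');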
How do you create a self-signed SSL certificate to use on a local server on Mac OS X 10.9?
I need my localhost served as https://localhost.
I am using the LinkedIn API. The feature which requires SSL on localhost is explained here:
https://developer.linkedin.com/documents/exchange-jsapi-tokens-rest-api-oauth-tokens
In brief, LinkedIn will send the client a bearer token after the client authorizes my app to access their data. The built-in JavaScript library by LinkedIn will automatically send this cookie to my server/backend. This JSON info is used for user authentication.
However, LinkedIn will not send the private cookie if the server is not HTTPS.
A quick and easy solution that works in dev/prod mode, using http-proxy on top of your app.
1) Add in the tarang:ssl package
meteor add tarang:ssl
2) Add your certificate and key to a directory in your app under /private, e.g. /private/key.pem and /private/cert.pem
Then in your /server code
Meteor.startup(function() {
  SSLProxy({
    port: 6000, //or 443 (normal port/requires sudo)
    ssl: {
      key: Assets.getText("key.pem"),
      cert: Assets.getText("cert.pem"),
      //Optional CA
      //ca: Assets.getText("ca.pem")
    }
  });
});
Then fire up your app and load https://localhost:6000. Be sure not to mix up your HTTPS and HTTP ports, as they are served separately.
With this I'm assuming you know how to create your own self-signed certificate; there are loads of resources on how to do this. Just in case, here are some links:
http://www.akadia.com/services/ssh_test_certificate.html
https://devcenter.heroku.com/articles/ssl-certificate-self
An alternative to self-signed certs: it may be better to use an official certificate for your app's domain and use /etc/hosts to create a loopback on your local computer too, because it's tedious to have to switch certs between dev and prod.
Or you could just use ngrok to port forward :)
1) Start your server (i.e. at localhost:3000)
2) Start ngrok from the command line: ./ngrok http 3000
That should give you HTTP and HTTPS URLs to access from any device.
Another solution is to use Nginx. The following steps were tested on Mac OS X El Capitan, assuming your local website runs on port 3000:
1. Add a host to your local machine:
Edit your hosts file: vi /etc/hosts
Add a line for your local dev domain: 127.0.0.1 dev.yourdomain.com
Flush your cache: dscacheutil -flushcache
Now you should be able to reach your local website at http://dev.yourdomain.com:3000
2. Create a self-signed SSL certificate as explained here: http://mac-blog.org.ua/self-signed-ssl-for-nginx/
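For example, a one-line sketch with openssl (the key and certificate paths and the CN are placeholders that match the Nginx config below):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /path-to-your-keys/nginx.key \
  -out /path-to-your-keys/nginx.pem \
  -subj "/CN=dev.yourdomain.com"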
3. Install nginx and configure it to map https traffic to your local website:
brew install nginx
sudo nginx
Now you should be able to reach http://localhost:8080 and get an Nginx message.
This is the default conf, so now you have to set the HTTPS conf:
Edit your conf file:
vi /usr/local/etc/nginx/nginx.conf
Uncomment the HTTPS server section and change the following lines:
server_name dev.yourdomain.com;
Point it to the certificates you just created:
ssl_certificate /path-to-your-keys/nginx.pem;
ssl_certificate_key /path-to-your-keys/nginx.key;
Replace the location section with this one:
location / {
    proxy_pass http://localhost:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Client-Verify SUCCESS;
    proxy_set_header X-Client-DN $ssl_client_s_dn;
    proxy_set_header X-SSL-Subject $ssl_client_s_dn;
    proxy_set_header X-SSL-Issuer $ssl_client_i_dn;
    proxy_read_timeout 1800;
    proxy_connect_timeout 1800;
}
Restart nginx:
sudo nginx -s stop
sudo nginx
And now you should be able to access https://dev.yourdomain.com
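Putting those edits together, the HTTPS server section ends up looking roughly like this (a sketch; only the directives mentioned above are shown):
server {
    listen 443 ssl;
    server_name dev.yourdomain.com;

    ssl_certificate /path-to-your-keys/nginx.pem;
    ssl_certificate_key /path-to-your-keys/nginx.key;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}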
I am running Meteor on AWS Elastic Beanstalk. Everything is up and running, except that WebSockets are not working, failing with the following error:
WebSocket connection to 'ws://MYDOMAIN/sockjs/834/sxx0k7vn/websocket' failed: Error during WebSocket handshake: Unexpected response code: 400
My understanding was that I should add something like:
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
to the proxy config, via my YAML config file.
Via my .ebextensions config file:
files:
  "/etc/nginx/conf.d/proxy.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
I have ssh'd into the server and I can see the proxy.conf with those two lines in it.
When I hit my webserver I still see the "Error during WebSocket handshake: " error.
I have my Beanstalk load balancer configured with sticky sessions and the following ports:
BTW, I did see https://meteorhacks.com/load-balancing-your-meteor-app.html and I tried to:
Enable HTTP load balancing with sticky sessions on port 80
Enable TCP load balancing on port 8080, which allows WebSockets
But I could not seem to get that working either.
Adding another shot at some YAML that does NOT work here: https://gist.github.com/adamgins/0c0258d6e1b8203fd051
Any help appreciated.
With a LOT of help from AWS paid support, I got this working. The reality is I was not far off; it came down to some sed syntax.
Here's what currently works (Gist):
option_settings:
  - option_name: AWS_SECRET_KEY
    value: <SOMESECRET>
  - option_name: AWS_ACCESS_KEY_ID
    value: <SOMEKEY>
  - option_name: PORT
    value: 8081
  - option_name: ROOT_URL
    value: <SOMEURL>
  - option_name: MONGO_URL
    value: <SOMEMONGOURL>
  - option_name: MONGO_OPLOG_URL
    value: <SOMEMONGOURL>
  - namespace: aws:elasticbeanstalk:container:nodejs
    option_name: ProxyServer
    value: nginx
  - option_name: GzipCompression
    value: true
  - namespace: aws:elasticbeanstalk:container:nodejs:staticfiles
    option_name: /public
    value: /public
container_commands:
  01_nginx_static:
    command: |
      sed -i '/\s*proxy_set_header\s*Connection/c \
              proxy_set_header Upgrade $http_upgrade;\
              proxy_set_header Connection "upgrade";\
      ' /tmp/deployment/config/#etc#nginx#conf.d#00_elastic_beanstalk_proxy.conf
In addition to this, you need to go into your Load Balancer and change the Listener from HTTP to TCP,
as described here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.managing.elb.html
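If you want that listener change in code as well, a sketch using the standard aws:elb:listener options (assuming a classic load balancer on port 80) looks like this:
option_settings:
  - namespace: aws:elb:listener:80
    option_name: ListenerProtocol
    value: TCP
  - namespace: aws:elb:listener:80
    option_name: InstanceProtocol
    value: TCP
  - namespace: aws:elb:listener:80
    option_name: InstancePort
    value: 80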
Have you checked out Meteor WebSocket handshake error 400 with nginx? I think their configuration might be a bit different from yours. I'm in the same boat as you, trying to get this exact same set up working.
This no longer works. I posted more here: https://solitaired.com/websockets-elastic-beanstalk but the gist is to create a file in .ebextensions (called websockets.config, for example) with the following:
files:
  "/etc/nginx/conf.d/websockets.conf":
    content: |
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";