I have a Dockerized ASP.NET Core application built from
mcr.microsoft.com/dotnet/core/runtime:3.1.3-alpine
When it launches, the only reference to a port is this environment variable inherited from the base image:
ASPNETCORE_URLS http://+:80
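For context, the container is built from a Dockerfile along these lines; the paths and assembly name here are illustrative, not my exact file:

    FROM mcr.microsoft.com/dotnet/core/runtime:3.1.3-alpine
    WORKDIR /app
    COPY ./publish .                      # published ASP.NET Core output
    # The base image already sets ASPNETCORE_URLS=http://+:80,
    # so Kestrel listens on plain HTTP port 80 inside the container.
    EXPOSE 80
    ENTRYPOINT ["dotnet", "MyApp.dll"]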
I deployed the app to Azure, set up the container registry, and created a new Web App.
I configured the TLS/SSL settings so the site works over HTTPS only.
Everything works.
Question:
I want to know how this is possible, since I don't configure any certificate in my container. I suppose the Kudu service (the reverse proxy) rebinds port 443 to port 80 of the container. Is this true? Could the plain HTTP traffic between Kudu and the container on port 80 be a security hole?
If I deploy a container with NGINX as a reverse proxy for ASP.NET Core, must I configure TLS/SSL in NGINX? In ASP.NET Core? Not at all?
I want to understand how Kudu, NGINX, and reverse proxies in general work with and without SSL/TLS.
With a reverse proxy, the client never connects directly to the HTTP server in your application, in your case Kestrel. The connections you get are requests coming from the reverse proxy, and you send your responses back to the reverse proxy. Most of the HTTP request is copied from the incoming client request and passed along to your application, but the reverse proxy can terminate the SSL tunnel, offload authentication, and perform other request transformations.
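In ASP.NET Core this usually means leaving Kestrel on plain HTTP and only teaching the app to trust the proxy's forwarded headers, so it knows the original request came in over HTTPS. A minimal sketch for ASP.NET Core 3.1, assuming the proxy sends the standard X-Forwarded-* headers (illustrative, not required for the App Service case above):

    // Startup.cs sketch: Kestrel keeps listening on plain HTTP, and the app trusts the
    // X-Forwarded-* headers set by the reverse proxy so Request.Scheme reports "https"
    // for requests that arrived at the proxy over TLS.
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.HttpOverrides;

    public class Startup
    {
        public void Configure(IApplicationBuilder app)
        {
            app.UseForwardedHeaders(new ForwardedHeadersOptions
            {
                ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto
            });

            // ... the rest of the pipeline (routing, endpoints, etc.)
        }
    }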
Related
I'm trying to deploy a custom server on an App Service on Azure that only accepts requests over HTTPS instead of HTTP.
My idea is to deploy on App Service to avoid managing any SSL certificate myself.
I have found this in the App Service documentation:
App Service terminates TLS/SSL at the front ends. That means that TLS/SSL requests never get to your app. You don't need to, and shouldn't implement any support for TLS/SSL into your app.
The front ends are located inside Azure data centers. If you use TLS/SSL with your app, your traffic across the Internet will always be safely encrypted.
So when I try to access the app via HTTPS on port 443, the requests are sent to port 80 over plain HTTP. I tried to expose port 443 directly using the WEBSITES_PORT setting, but since that port doesn't accept HTTP requests, the App Service does not start and keeps restarting for a while.
2022-09-14T16:05:22.335Z ERROR - Container xxxx_3_4a82d922 didn't respond to HTTP pings on port: 443, failing site start. See container logs for debugging.
My question is: is there any way to have those HTTPS requests forwarded to port 443 as HTTPS on the App Service?
Thanks!
Your App Service essentially runs on a VM in isolated regions of Azure data centers, often referred to as stamps or scale units.
Unless you are on an ASE (App Service Environment), your App Service lives on these stamps, which are multi-tenant environments sharing a few incoming load balancers; the latter are where TLS/SSL is terminated and are the entry point for your app. From the load balancer, the traffic is routed over HTTP to a proxy on the VM (Nginx for Linux apps) and forwarded from there to the port exposed by your App Service app (a Docker container on Linux). The defaults are 80 or 8080, but you can change this using the WEBSITES_PORT app setting (note the exact name and casing).
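For example, assuming the Azure CLI and placeholder resource names, pointing App Service at a container that listens on 8080 would look roughly like this:

    az webapp config appsettings set \
      --resource-group <my-resource-group> \
      --name <my-app-name> \
      --settings WEBSITES_PORT=8080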
So you wouldn't really need end-to-end TLS given the above architecture. You can turn on the HTTPS Only flag in your App Service -> Configuration -> General settings blade, which redirects all HTTP requests at the front end to HTTPS. This still would not result in end-to-end TLS.
TLS is often terminated outside the application, in the infrastructure (an API gateway or Traffic Manager, for instance); this is by design and offers many benefits (less overhead, centralized certificate management, etc.).
I am working on a supply chain management application. I developed the frontend of the application in ReactJS and the backend in Node.js running in Docker. My question is: if I deploy my backend, i.e. the Node.js service, on Docker Swarm, can I access the deployed APIs from a different computer?
You can make the service (backend) reachable from the outside in primarily two ways:
Either you expose the service's port directly via the ports configuration and connect straight to it (not recommended for any real deployment). The exposed port is then available to reach the service from the outside world.
Or you deploy another service that acts as a reverse proxy / API gateway. The proxy (nginx, traefik, ...) listens for all incoming requests, terminates SSL, and forwards each request to the right service; see the compose sketch below. This is the recommended way because you hide your actual service behind the proxy and keep all of the auth/SSL details on the proxy itself, so your service doesn't need to know anything about those technical details.
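A rough sketch of what that could look like as a swarm stack file; the image names, port numbers, and network are assumptions, not taken from your setup:

    # docker-compose.yml (deploy with: docker stack deploy -c docker-compose.yml mystack)
    version: "3.8"
    services:
      proxy:
        image: nginx:alpine
        ports:
          - "443:443"                     # the only port published to the outside world
        volumes:
          - ./nginx.conf:/etc/nginx/nginx.conf:ro
          - ./certs:/etc/nginx/certs:ro
        networks:
          - app-net
      api:
        image: my-node-api:latest         # hypothetical Node.js backend image
        networks:
          - app-net                       # reachable only from the proxy, e.g. http://api:3000
    networks:
      app-net:
        driver: overlay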
Little background....
We are already using IIS 7.5 for our .NET web applications. Now we are developing a Web API in Node.js and want to deploy it on the existing production infrastructure.
Is it possible to use port 80 for both IIS and Nginx?
One possible solution I think will work is to configure two public IPs on the live infrastructure and have each server, i.e. IIS and Nginx (Nginx as the proxy for the Node app), listen on port 80 for HTTP and 443 for HTTPS of its respective public IP.
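As a sketch (the IP address, port, and server name are placeholders), the Nginx side would bind only to its own public IP so IIS can keep the other one:

    # nginx server block (sketch; 203.0.113.20 is a placeholder public IP)
    server {
        listen 203.0.113.20:80;                    # bind to Nginx's IP only; IIS binds to the other IP
        server_name api.example.com;

        location / {
            proxy_pass http://127.0.0.1:3000;      # the Node.js app
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

The IIS site bindings would then be restricted to the other public IP instead of "All Unassigned".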
Note: I do not wish to use IIS URL Rewrite to forward requests from IIS to Nginx or vice versa.
Would appreciate it if you could point me in the right direction.
Thanks
I'm developing a security system. It has a proxy server built on Nginx that acts as an SSL termination point and forwards all TCP/IP connections from clients to other third-party systems.
The client-proxy connections must be authenticated and securely encrypted with SSL.
And my problem is:
Each client is a computer running Windows 7 or higher. It has third-party applications installed that cannot be modified. For a better user experience, all of the clients' outbound TCP/IP requests from any application must be transparently "converted" into (or "wrapped" in) SSL requests before reaching the proxy server. My first idea is to develop a network driver that intercepts these requests using a Windows API, namely WFP (Windows Filtering Platform). I have read its online documentation but it's not easy to understand. Can you help me find some projects like this, or tell me which sections of the documentation I should focus on? Thank you in advance!
Your issue is a bit unclear but here are my thoughts:
You want full encryption all the way from the end-user client to the App Service.
Current:
Client --(443: TLS)--> NGINX (terminates TLS) --(clear; port 80)--> App Service
Change:
Client --(443: TLS)--> NGINX (terminates TLS) --(TLS; port 443)--> App Proxy (Nginx with self-signed cert) --(plain; port 80)--> App Service
The change is to add an additional Nginx server on the app server to provide the last layer of TLS between the load balancer and the App Service.
If your App Service can serve SSL connections directly, that's even better: you can configure that in place of running an additional Nginx server on the app host. (If you wanted, you could run Apache or any other web server that supports proxy/load-balancing capabilities.)
If you are worried about the App Service port, it won't make a difference; the idea is that the App Proxy (Nginx or the like) handles the encryption on a different port and then passes the traffic via localhost to the App Service in plain text.
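A rough sketch of what that extra Nginx hop (the "App Proxy") could look like on the app host; the certificate paths, hostname, and ports are assumptions:

    # Nginx on the app host: terminate TLS from the load balancer with a self-signed cert,
    # then hand the request to the app over plain HTTP on localhost.
    server {
        listen 443 ssl;
        server_name app.internal.example;                      # placeholder

        ssl_certificate     /etc/nginx/certs/selfsigned.crt;   # self-signed certificate
        ssl_certificate_key /etc/nginx/certs/selfsigned.key;

        location / {
            proxy_pass http://127.0.0.1:80;                    # the app itself, plain HTTP on localhost
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto https;
        }
    }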
Additional Resources:
Can Nginx do TCP load balance with SSL termination?
https://serverfault.com/questions/978922/nginx-proxy-pass-to-https
https://reinout.vanrees.org/weblog/2017/05/02/https-behind-proxy.html
https://nginx.org/en/docs/http/ngx_http_ssl_module.html
I am curious to understand how the IIS 7.5 reverse proxy is implemented with the URL Rewrite Module (v2).
I am planning to set up a website that will handle proxying between requests coming from users (the internet) and my HTTP services deployed on the same server. I have set up a website within IIS and configured the reverse proxy logic. I've then set up another website on the same server and deployed all my WCF REST services there. I am planning to implement offloading of common tasks (such as authentication) on the reverse proxy website before the request reaches the actual services (similar to the WCF routing service for SOAP). The configuration is working perfectly fine.
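For reference, the reverse-proxy logic on the front website boils down to a URL Rewrite rule along these lines; the target host and port are placeholders, and ARR plus the URL Rewrite Module are assumed to be installed:

    <!-- web.config sketch on the proxy website -->
    <system.webServer>
      <rewrite>
        <rules>
          <rule name="ReverseProxyToServices" stopProcessing="true">
            <match url="(.*)" />
            <action type="Rewrite" url="http://localhost:8081/{R:1}" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>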
However, I am trying to understand the implications of this setup. When IIS does reverse proxying, does it create a new HTTP request (and a new TCP connection on a new port) between those two websites, even when both sites are on the same server? Should I expect the number of TCP connections opened on this server to double when the reverse proxy is used?
Furthermore, has anyone experienced any performance/resource issues with a similar setup?
Cheers,
OS