Scenario
We have a multi-tenant SaaS application deployed on a VM hosted with a service provider. Multiple domains point to this VM (let us call them abc.com and xyz.com), and each of our tenants gets a unique sub-domain URL under one of these domains.
In our IIS installation no host names are explicitly defined in the site bindings; as a result, every request that hits IIS is automatically routed to the default site.
With this arrangement, we are able to serve any number of sub-domains for each of the primary domains pointing to the VM without having to explicitly create them. For example, t1.abc.com, t2.abc.com, t3.xyz.com, and t4.xyz.com are all served by our application without these sub-domains ever being created in the IIS instance.
When our application receives a request, it identifies the tenant by inspecting the requested URL. All further data access is then automatically restricted to the data created by that particular tenant.
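For illustration only, here is a minimal sketch of how such a tenant lookup might work, keyed on the Host header of the incoming request; the function name and the hard-coded parent domains are assumptions, not our actual application code.

# Minimal sketch (assumed names): derive the tenant key from the Host header
# for requests such as t1.abc.com or t3.xyz.com.
PARENT_DOMAINS = {"abc.com", "xyz.com"}

def tenant_from_host(host: str):
    """Return the sub-domain part (the tenant key), or None if the host is not a tenant URL."""
    host = host.split(":")[0].lower()          # drop any port, e.g. t1.abc.com:443
    for parent in PARENT_DOMAINS:
        suffix = "." + parent
        if host.endswith(suffix):
            subdomain = host[: -len(suffix)]
            return subdomain or None           # "t1" for t1.abc.com
    return None

assert tenant_from_host("t1.abc.com") == "t1"
assert tenant_from_host("t4.xyz.com:443") == "t4"
assert tenant_from_host("example.org") is None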
Issue
We need to provide secure communication to all our application users through SSL. We can purchase a wildcard SSL certificate for each of the domains (viz. abc.com and xyz.com). The issue is: how do we deploy multiple SSL certificates on a single website/application defined in IIS?
This will require us to have two separate sites defined in IIS for abc.com and xyz.com. Unfortunately, these will not be "catch-all" sites for the corresponding sub-domains. By default, IIS allows only one "catch-all" site.
From what I understand from some of the posts, we can have multiple "catch-all" sites (for separate domains) in IIS, provided each of them is bound to a separate IP address. However, I could not find any document describing the steps for this. Can someone point me to the documentation or the steps for doing so?
It is not necessary to have two separate boxes for the abc.com and xyz.com domains. You can ask your CA to combine all the DNS names into a single SSL certificate containing all the required Subject Alternative Names (SANs).
We ran into this problem before, and a SAN certificate worked really well with our multi-tenant configuration. I would also recommend using a load balancer such as Nginx for SSL offload, to serve your clients faster and more securely. In that case you can simply point each new client to the load balancer.
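If it helps, once a combined SAN certificate is installed, a quick sanity check like the sketch below (Python standard library only; the host name is a placeholder) will show which DNS names the served certificate actually covers.

import socket
import ssl

def dns_sans(hostname: str, port: int = 443):
    """Connect over TLS and return the DNS entries in the served certificate's SAN list."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    return [value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"]

# Placeholder host name; with a combined certificate you would expect to see
# entries such as *.abc.com, abc.com, *.xyz.com and xyz.com in the output.
print(dns_sans("t1.abc.com"))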
Related
How secure are shared hosting solutions?
We have some sites on shared hosting accounts where SMTP is accessed via a subdomain, e.g. smtp.domain.co.za. When that address is visited in a browser it exposes a generic login interface to the domain's mail, and that interface is not covered by the inexpensive Let's Encrypt SSL certificates that clients who choose shared hosting in the first place typically opt for, which leads to security concerns (e.g. when a security audit gets done).
I expected it would be possible to restrict access to the subdomain with .htaccess, e.g. only allowing access from scripts running in the main domain (or some other condition); these are typically microsites where the only SMTP traffic originates from the site at the domain itself. However, when visiting the 'subdomain' in a browser it seems the .htaccess in the main domain is not applied.
So, short of purchasing a more expensive certificate that covers the subdomain, or a separate certificate for the subdomain (either of which would negate the little merit of using shared hosting in the first place), is there a way of resolving the problem?
TL;DR;
What is the way to distribute SSL certificates across regions, so that no matter which region the application is hosted in, it serves the SSL certificate for the requested custom domain?
Explanation:
We have an Azure Web App where we add custom domains per user. We want to scale the app into different geographic regions behind a Traffic Manager, so that when the website is accessed from Australia it is served from the Australian Web App, and when the request comes from Europe the web app in Europe serves it. In the current situation, regardless of where the request comes from, it is always served from one location, for example Europe.
The challenge is that we can add a custom domain to only one of the web apps, because its CNAME entry has to point to an individual URL; it cannot point at two different URLs at the same time. It is possible to route requests to the individual apps, but the other web app will not be able to serve the SSL certificate if it is mapped to App1 in region 1.
How can we distribute or maintain a pool of certificates that can be accessed by the web apps in the different regions? Is there any way to do this with Microsoft Azure?
Update:
We are going to have N custom domains, and therefore N SSL certificates to handle. AFAIK, with Azure Front Door and Azure Traffic Manager we can map a custom domain to their own endpoints, and that is limited to one custom domain. Here I am talking about handling thousands of external custom domains and SSL certificates.
Thanks in Advance! 🙏
Instead of using Traffic Manager, I would use Azure Front Door. It has built-in SSL certificate management; you don't even need to purchase the certificates yourself.
What I understood from the question is that you basically want requests to be served from the same region rather than from one central location. In that case, I would suggest having a look at Azure Application Gateway. There you can define path-based load-balancing rules, where one path segment identifies the location, say /api/emea/images or /api/apac/images. Of course, you first need to design your API along these lines so that it carries some kind of region identifier. Once that is done, you can create the corresponding load-balancing rules in Application Gateway and set up different backend pools, say one sitting in the EMEA region with four or five virtual machines that handles EMEA traffic, and likewise for the other regions. Try implementing it along these lines. You can also explore the Front Door option, as it handles load balancing globally and your certificate-related requirements should be addressed there as well. It should address your problem.
In the Azure tutorial for setting up a custom domain for Azure Front Door, a few points confused me:
A brief period of downtime for the domain can occur.
A custom domain and its sub-domain can be associated with only a single Front Door at a time.
The custom domain also must have routing rule with a default path ('/*') associated with it
We have a production site running that has multiple subdomains. I need to map one subdomain to one Front Door. For example, we have https://web.contoso.com, https://api.contoso.com and https://admin.contoso.com, and we have created a Front Door frontend for the API services: https://busymonk.azurefd.net.
Now we need to CNAME only api.contoso.com to busymonk.azurefd.net. Will the domain downtime mentioned above affect the main domain and the other subdomains?
How should I add the routing for the custom domain? Even this example got me confused. Do I need to add routing between the custom domain and my backend pool, or do I need to make a backend pool for https://busymonk.azurefd.net and then add routing from api.contoso.com to busymonk.azurefd.net?
When you map only api.contoso.com to your Front Door endpoint, only the subdomain api.contoso.com may experience downtime.
To avoid interruption of web traffic, you could first map the temporary afdverify sub-domain. With this method, users can access your domain without interruption while the DNS mapping occurs.
Source Type Destination
afdverify.api.contoso.com CNAME afdverify.busymonk.azurefd.net
Once you have verified that the afdverify subdomain has been successfully mapped to your Front Door, you can map the permanent custom domain. After that, you can delete the temporary afdverify CNAME record.
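If you want to script that verification step instead of checking it by hand, a small sketch along these lines can confirm that the temporary CNAME resolves to the Front Door endpoint before you flip the permanent record (it uses the third-party dnspython package, and the host names are just the ones from the example above).

import dns.resolver  # third-party: pip install dnspython

def cname_target(name: str):
    """Return the CNAME target for a DNS name, or None if no CNAME record exists."""
    try:
        answer = dns.resolver.resolve(name, "CNAME")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None
    return str(answer[0].target).rstrip(".")

# Host names taken from the example above.
target = cname_target("afdverify.api.contoso.com")
print("mapped" if target == "afdverify.busymonk.azurefd.net" else f"not mapped yet: {target}")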
As for the routing: once you add the custom domain api.contoso.com to the Front Door, it is up to you; you only need to make sure there is a path from the frontend hosts to the backend pools via valid routing rules.
For example, to make the custom domain api.contoso.com work, you need to add a new routing rule, or change an existing one, that uses api.contoso.com as the frontend host with a default path /* associated with it, and select the existing backend pool of your backend web app host, such as the App Service xxx.azurewebsites.net.
Hope this could help you.
Be aware that if you use the afdverify approach and enable HTTPS with an AFD-managed certificate, you will be waiting an excessive amount of time (24+ hours) for DigiCert to validate the domain for certificate provisioning. It appears to be a manual process on their end, and if your domain's WHOIS registrant email is not displayed because it is private, then you will need to be able to receive email at X@customdomain, where X is admin, administrator, hostmaster, postmaster, or webmaster. You are better off opening a ticket with Microsoft support; they will work directly with DigiCert to get your certificate provisioned.
I have one website configured on Windows Server 2012 / IIS 8. This one website can be accessed via xyz.com or abc.com (two different top-level domain names). Is it possible to configure SSL certificates for both?
Yes. You can configure two different domains with two different certificates on the same IP address and port (443). After entering the host name in the binding, enable the Require Server Name Indication checkbox (IIS 8). If you do not enable this checkbox, both websites will end up with the same single certificate, and if you change one website's binding it will be reflected in the other website as well; I have experienced this.
Please refer to the link below.
http://www.orcsweb.com/blog/fred/host-different-ssls-on-one-ip-with-iis-8-sni/
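For background on what that checkbox enables: with SNI the client sends the requested host name during the TLS handshake, and the server picks the matching certificate before any HTTP data is exchanged. Below is a minimal, IIS-independent sketch of that selection logic in Python; the certificate paths and the two-level parent-domain assumption are placeholders, not anything IIS itself exposes.

import ssl

# Placeholder certificate/key locations for the two domains (assumptions).
CERTS = {
    "abc.com": ("/certs/abc.com/fullchain.pem", "/certs/abc.com/privkey.pem"),
    "xyz.com": ("/certs/xyz.com/fullchain.pem", "/certs/xyz.com/privkey.pem"),
}

contexts = {}
for domain, (cert_file, key_file) in CERTS.items():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(cert_file, key_file)
    contexts[domain] = ctx

default_context = contexts["abc.com"]

def select_certificate(ssl_socket, server_name, _context):
    """Called during the TLS handshake with the SNI host name sent by the client."""
    if server_name:
        parent = ".".join(server_name.split(".")[-2:])   # t1.abc.com -> abc.com
        ssl_socket.context = contexts.get(parent, default_context)

# The listening socket is wrapped with default_context; the callback swaps in
# the per-domain context based on the SNI value (Python 3.7+).
default_context.sni_callback = select_certificate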
As Windows Azure Web Sites are powered by IIS, you can see from the offerings that it is possible to bind multiple SSL certificates to a single site:
http://azure.microsoft.com/en-us/pricing/details/web-sites/#web-sites
The trick is to use SNI (Server Name Indication):
http://www.iis.net/learn/get-started/whats-new-in-iis-8/iis-80-server-name-indication-sni-ssl-scalability
Yes, you can add multiple HTTPS bindings, each with its own separate SSL certificate, to the same site in IIS. However, you'll want to bind them to separate IP addresses, so that the certificate for xyz.com is bound using the IP address for xyz.com and the certificate for abc.com is bound using the IP address for abc.com. Frankly, though, it would be better practice to have one domain redirect to the other, or just create two sites in IIS and keep everything separate.
I am creating software that allows users either to have their own custom subdomain (e.g. theirsubdomain.mydomain.com) or to point a CNAME from their own domain to my website address (e.g. theirsubdomain.theirdomain.com).
I've contacted my host about this and the first subdomain option is cool: they will set up a wildcard subdomain script for me.
The CNAME option, they said, I can't do automatically. I will have to manually go into my account and add the domain to point to my website address, otherwise Apache won't know where to look for the files.
Is this common practice or is there a way around this that is automated?
The issue is the HTTP header. When you request a Web page the browser sends a request that starts out with:
GET /mypage.html HTTP/1.1
Host: www.mysite.com
The Host item allows a single Web server to serve pages for multiple domains. By looking at the Host, the server knows that mypage.html should come from its stored files for mysite.com, and not from the files of myothersite.com which is on the same server.
I am guessing your site is on a shared Web server at your hosting company, and they use this functionality to differentiate between requests for your site and requests for other sites that sit on that same virtual box. Some of these hosting providers, like HostGator, will allow you to specify other domains that should be accepted on this Host line and where the returned documents should come from. This is often a more premium service; for example, HostGator says "The Baby and Business hosting plans allow for unlimited domains to be hosted on just one single account", whereas the basic Hatchling plan does not allow this.
If you have your own rented machine, with your own installation of Apache, you can manage the processing of this HTTP header information yourself. Apache supports virtual hosts, see the following documentation: http://httpd.apache.org/docs/2.2/vhosts/
So basically, you have to have some way to tell Apache (or whatever server you are using) that the files for a particular Host value correspond to the files for your domain, since a single Apache server may be serving files for hundreds of different domains. If you are not administering your own Apache server, where you can set up virtual hosts as shown in the documentation, the hosting service has to provide some custom way to get this information to Apache.
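To make the "files for a particular Host value" idea concrete, here is a tiny sketch of the lookup a name-based virtual host performs before serving a file; it uses only the Python standard library, and the host names and document roots are hypothetical, not your host's actual setup.

from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

# Hypothetical mapping from Host header values to document roots; a real
# Apache virtual-host configuration performs the equivalent lookup.
DOC_ROOTS = {
    "www.mysite.com": Path("/var/www/mysite"),
    "www.myothersite.com": Path("/var/www/myothersite"),
}

class HostDispatchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0].lower()
        root = DOC_ROOTS.get(host)
        if root is None:
            self.send_error(404, "Unknown host")
            return
        candidate = (root / self.path.lstrip("/")).resolve()
        # Refuse paths that escape the chosen document root, then serve the file.
        if root not in candidate.parents or not candidate.is_file():
            self.send_error(404, "Not found")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(candidate.read_bytes())

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HostDispatchHandler).serve_forever()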