Azure Traffic Manager with my own SSL cert?

I've been using Azure to host my web apps for a while now, and various ones have had my own wildcard cert attached with no problem. Recently, however, one of my clients wanted a certain degree of uptime/performance (not that there have been any problems so far, but they're willing to pay for it, and who am I to turn down money?), so I've set up mirrored sites and am using Traffic Manager to route between them.
It works like a charm but for one problem: I have a CNAME pointing a friendly URL to the Traffic Manager address and, if I try to connect via HTTPS, it craps out and wants to use its own *.azurewebsites.net cert no matter what I try.
So my question is: am I missing something here? How do I use my own custom *.mycompany.com cert in this case?
Or, for that matter, is there a better way of doing what I'm ultimately trying to accomplish here?
Here is my set up:
Endpoint 1: MyWebApp-East (type: Azure Endpoint, SSL cert installed and proper host info added)
Endpoint 2: MyWebApp-West (type: Azure Endpoint, SSL cert installed and proper host info added)
Traffic Manager: Routing Type - Performance
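For reference, here's a sketch of that setup with the Azure CLI. The resource names are hypothetical, and the key point is that Traffic Manager works only at the DNS level, so the custom hostname and the wildcard cert have to be bound on each Web App endpoint, not on the profile:

# Create a Performance-routed profile (names and DNS label are placeholders)
az network traffic-manager profile create \
  --name MyProfile --resource-group MyRG \
  --routing-method Performance \
  --unique-dns-name mycompany-tm \
  --protocol HTTPS --port 443 --path "/"

# Add each regional Web App as an Azure endpoint (repeat for MyWebApp-West)
az network traffic-manager endpoint create \
  --name MyWebApp-East --profile-name MyProfile --resource-group MyRG \
  --type azureEndpoints \
  --target-resource-id "$(az webapp show -n MyWebApp-East -g MyRG --query id -o tsv)"

# Bind the friendly hostname on BOTH web apps;
# the *.mycompany.com cert must also be uploaded to each app
az webapp config hostname add \
  --webapp-name MyWebApp-East --resource-group MyRG \
  --hostname app.mycompany.com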
UPDATE
Oddly enough, I got it to work. I must have had something wrong somewhere. I took a scorched-earth approach by deleting EVERYTHING (sites, Traffic Manager, DNS entries, etc.) and starting over. It works perfectly now!

I posted this in the update above, but so as not to leave the question open, I'll repost the solution I found: I took a scorched-earth approach, deleted EVERYTHING (sites, Traffic Manager, DNS entries, etc.), and started over. It works perfectly now.
Sometimes, to go forwards, you have to destroy everything.

Related

Best way to securely redirect one domain to another in IIS without having a website

I would like to know the best way to redirect everything from marketing-address.com to real-address.com.
"Best" means:
- as little effort as possible,
- as cheap as possible,
- as secure as possible.
In detail:
Less effort: if possible, without the need to create a website or write code like JavaScript.
Secure: https://marketing-address.com should be accepted by browsers with no warning.
Cheap: if possible, without buying a certificate (I don't think that this is possible) and without running a second web server.
So, in theory, the communication would go like this:
1. Make the address point to the same IP address.
2. Make the existing IIS listen on that address.
3. Have IIS tell the caller: "yes, you're totally right here, but I have neither a website nor a certificate; you don't need either of those, since you're getting redirected anyway..."
Is there a chance to accomplish that? If not, I would need to buy a certificate. What would the solution be then?
There are two restrictions:
- We are using an Azure App Service to host an ASP.NET Core site, which seems to be very restricted in its configuration options.
- The browser should definitely show real-address.com in the URL, not marketing-address.com.
Have you tried using an Application Gateway in front of the IIS/web app backend?
I believe the AppGW will solve these issues: it can redirect the hostname to another web address, as many as you want.
https://learn.microsoft.com/en-us/azure/application-gateway/ssl-overview#tls-termination
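If you go the Application Gateway route, here is a hedged sketch of the redirect piece with the Azure CLI (the gateway, listener, and rule names are hypothetical, and the gateway with an HTTPS listener for marketing-address.com would have to exist already):

# Create a permanent redirect to the real site, preserving path and query string
az network application-gateway redirect-config create \
  --gateway-name MyAppGw --resource-group MyRG \
  --name MarketingToReal \
  --type Permanent \
  --target-url https://real-address.com \
  --include-path true --include-query-string true

# Attach the redirect to the listener for marketing-address.com
az network application-gateway rule create \
  --gateway-name MyAppGw --resource-group MyRG \
  --name MarketingRedirectRule \
  --http-listener MarketingListener \
  --rule-type Basic \
  --redirect-config MarketingToReal

One caveat: the HTTPS listener still needs a certificate for marketing-address.com; there is no way to serve a browser-trusted HTTPS redirect without one.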

Azure Traffic Manager, Priority Mode: Browser refresh won't go to secondary node when primary goes down

We are testing out Traffic Manager to see if it is a viable solution for failover. If our primary Azure region becomes unavailable for any reason, we want end users to be directed to a secondary location where they can continue using the site.
I have followed the documentation for setting this up: three basic API pages as endpoints in three different regions, each of which just reports which one you are hitting. I have them prioritized 1, 2, and 3.
When hitting the .trafficmanager.net URL, the primary is displayed as it should. All 3 show "online" in the traffic manager profile. If I stop the primary site, then refresh my browser, I get a 403 error stating that the site has stopped.
I set the TTL in the traffic manager profile configuration to 60 seconds. However, after 15+ minutes, the browser still displays the 403. The only way I seem to be able to get the secondary site to pull up is by starting a new browser session. It's like there is some sort of caching and/or TTL issue with the browser session that prevents it from trying the secondary site.
This obviously wouldn't be acceptable in a live, production environment. There has to be a way around this, right? Has anyone else dealt with this issue?
The browser might be using Keep-Alive
Keep in mind that Azure Traffic Manager works at the DNS level, so rather than using a browser to get a repro, try to reproduce it with DNS tools like dig, nslookup, etc.
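For example, something like this (profile name hypothetical) shows what Traffic Manager is actually answering, independent of any browser caching or open connections:

# Current answer; with priority routing this should be the primary endpoint
dig +short myprofile.trafficmanager.net

# Full answer including the remaining TTL; after stopping the primary and
# letting the 60-second TTL expire, this should flip to the secondary
dig myprofile.trafficmanager.net

# nslookup works as well
nslookup myprofile.trafficmanager.net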
This isn't just a browser setting. Your IIS server could be configured to use keep-alive to reduce strain on itself, leaving open connections that completely bypass Traffic Manager's DNS rules. I had these exact same symptoms and was able to alleviate them by following the steps I posted here. Whether it'll prove useful in a real-world scenario has yet to be seen, but I'm hoping this will help you get further.
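The linked steps aren't reproduced here, but for a self-hosted IIS box one common way to rule keep-alive out (an assumption on my part, not necessarily what the linked post does) is to disable it and retest:

# Disable HTTP keep-alive in IIS so every request opens a fresh connection
# (run from an elevated prompt; appcmd ships with IIS)
%windir%\system32\inetsrv\appcmd set config /section:httpProtocol /allowKeepAlive:false

Expect a throughput hit; this is for diagnosing the failover behavior, not something to leave on in production.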

Access internet via Apache2 ProxyPass

Recently, I made a setup where I pointed some websites at a redirect server, which in turn served the website requests using Apache2's ProxyPass directive. It worked like a charm, without a single problem, for my websites.
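For context, that redirect-server setup looks roughly like this (a minimal sketch; the domain names and paths are placeholders, and mod_proxy/mod_proxy_http must be enabled):

# Enable the proxy modules (Debian/Ubuntu layout)
sudo a2enmod proxy proxy_http

# A minimal reverse-proxy vhost; example.com and the backend are placeholders
sudo tee /etc/apache2/sites-available/redirect.conf <<'EOF'
<VirtualHost *:80>
    ServerName example.com
    ProxyPreserveHost On
    ProxyPass        / http://backend.example.com/
    ProxyPassReverse / http://backend.example.com/
</VirtualHost>
EOF

sudo a2ensite redirect && sudo systemctl reload apache2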
So, based on that, I've got an idea: access the internet via Apache2. Please note this is because I don't have access to fast internet; every provider here is too lousy to deliver decent connection speeds even for the lot of money I pay them.
The idea is to get rid of VPN and SSH tunnel redirects and instead resolve every domain on my Mac to a single server IP address: a redirect server that fetches every web request made from my Mac and brings it back. Possible? This would let me always use HTTPS to my own redirect server. HTTPS has better speeds for me than VPN; whenever I'm on VPN, things are too slow, maybe because of the level of encryption. Please note that I do not want a solution using PPTP, L2TP, or anything else lighter than OpenVPN (using Pritunl).
Please let me know if anything like that is possible, and if yes, then how.
Even if it doesn't work, my mind keeps coming back to this idea. I just want someone to shed light on it and shut it down if it's the worst idea by far. Thanks in advance.
Also, I have seen proxy sites where I put any website link on their site and it works like a browser, as if I were surfing on their remote server. Maybe something like that could be useful and speedy for me, but I don't want to use them because I don't trust those sites with security. No way.
Got a solution myself, without any kind of VPN.
Actually, I needed to make my DNS and my connections to my server apps secure. For that, I tried DNSCrypt-Proxy, and it's working great, resolving my DNS queries over HTTPS (443).
And I'm using a Chrome add-on for "always HTTPS" connections, which blocks every HTTP request in Chrome. Perfect!
So now all surfing traffic on my Mac goes over HTTPS and is well protected. I don't care about other connections made by my other Mac apps; I just care about security while I'm browsing or making payments for shopping.
DNSCrypt-Proxy:
Go to https://dnscrypt.org/#dnscrypt-osx and you will find all the help there on how to install and run it on your Mac.
brew install dnscrypt-proxy --with-plugins
sudo dnscrypt-proxy --ephemeral-keys --resolver-name=cisco
^ You can find the resolver name in the spreadsheet that comes with this package.
Then add a DNS entry for your network interface pointing to 127.0.0.1. Please note: remove all other entries.
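If you prefer the command line, something like this should do it on macOS (the service name "Wi-Fi" is an assumption; check yours first):

# Find the exact name of your active network service
networksetup -listallnetworkservices

# Point DNS at the local dnscrypt-proxy listener, replacing all existing entries
sudo networksetup -setdnsservers Wi-Fi 127.0.0.1

# Verify the change, then confirm resolution goes through the local proxy
networksetup -getdnsservers Wi-Fi
dig @127.0.0.1 example.com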
"Always HTTPS for Chrome":
https://chrome.google.com/webstore/detail/https-everywhere/gcbommkclmclpchllfjekcdonpmejbdp?hl=en
Enjoy solid security on your Mac, if you don't care about IP address anonymity. And always keep it legal!!!

NodeJS OpenShift App times out on https, but not http

I've got a fairly simple app deployed on OpenShift that uses CloudFlare as a DNS provider, since they support CNAME records for the root domain, which our current domain provider does not.
The issue with this setup is somewhere along the line https is not working. I believe this is an OpenShift issue because it's the same kind of issue you get when you've mapped the domain name to your app but haven't added the proper aliases yet - you get a timeout essentially.
We've got two aliases - with www and without. There's no option to specify https or anything with OpenShift aliases from what I can see. There aren't any SSL certificates assigned to these aliases as we do not need or use https - we're on the Free plan.
The main URL to access the site is http://www.jcuri.com. Notice this works as expected; however, https://www.jcuri.com times out.
Initially we were thinking of using CloudFlare page rules to auto-redirect to a non-https URL however this is locked down behind a paywall which we're hoping to avoid, as we don't need any of the Pro features.
Is there something I'm missing here? It seems that OpenShift is just denying any https connections purely because we don't have certificates assigned to the aliases. I wouldn't even mind if there were certificate errors, at least that would give us a chance to do a redirect on the actual NodeJS application, but we don't even reach that point.
Can anyone offer some advice on this?
Since those domains are not pointed directly at OpenShift via CNAME but are seemingly redirected via another service (from what I can tell from the DNS), it is hard to say whether it is OpenShift causing the HTTPS issues. If you do not have a custom SSL certificate installed on OpenShift, you will just get an invalid certificate error; but since you are using a redirect service, maybe that service is checking the certificate first, seeing an error, and then refusing to connect?
Since the HTTPS page rules you mentioned are behind a paywall, it actually makes a lot of sense that they are blocking it, not OpenShift. GoDaddy provides a forwarding service that lets you point both www and the naked domain at OpenShift correctly using CNAMEs; I have used it before.
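One way to narrow down where HTTPS actually dies is to test each hop directly (the www.jcuri.com hostname is from the question; the app's own OpenShift address is a placeholder):

# Does a TLS handshake against the public name complete at all?
curl -vk --max-time 10 https://www.jcuri.com/

# Compare against the app's own OpenShift hostname (placeholder name);
# a certificate warning here would be fine, a timeout would not
curl -vk --max-time 10 https://yourapp-yournamespace.rhcloud.com/

# Inspect whatever certificate (if any) is presented on port 443
openssl s_client -connect www.jcuri.com:443 -servername www.jcuri.com </dev/null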

Staging area protection based on IP... what if my client has a dynamic IP?

I'm trying to put a staging area online for an upcoming website. I'd usually use an htaccess rule so that only my client and I can see the website... I think it's safer, and you don't need to remember passwords and so on.
But this time my client has an internet provider that doesn't give him a static IP; apparently his IP changes every day or so... so I have to keep changing my htaccess!
Is there any solution for that?
First of all, dynamic IPs are very common; a lot of providers disconnect the client at intervals of 12 or 24 hours, which usually means they get a new IP assigned.
Second, just giving out a username/password combination not only seems safer but is also more hassle-free. You are about to invest time into a solution that's probably not worth it. I also don't see how you would obtain the client's current IP address to update your .htaccess file, apart from having the client install a service that updates a dynamic DNS entry, maybe; more of a hassle than remembering a login, if you ask me.
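For comparison, the password route really is only a couple of commands (the paths and the username are placeholders):

# Create the password file (-c only the first time); you'll be prompted for a password
htpasswd -c /etc/apache2/.htpasswd client

# Protect the staging directory with basic auth (Apache 2.4 syntax)
cat > /var/www/staging/.htaccess <<'EOF'
AuthType Basic
AuthName "Staging area"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
EOF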
You could have him use a dynamic DNS service like dyndns.com or no-ip.com. That way he can set up a domain name like someguy.dyndns.com which always resolves to his IP (he'll probably need to install a small daemon/service/program to automatically update it). Then you can add a rule to your .htaccess like Allow from someguy.dyndns.com.
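One caveat to hedge on: Apache checks Allow from by hostname with a reverse DNS lookup on the connecting IP, and the reverse record for a home connection usually points at the ISP's name, not at someguy.dyndns.com, so it may never match. A workaround is a small cron job that resolves the dynamic DNS name yourself and regenerates the rules. A sketch, using Apache 2.2-style syntax to match the Allow from rule above (the hostname, paths, and your own static IP are placeholders):

#!/bin/sh
# Resolve the client's dynamic DNS name to its current address
IP=$(dig +short someguy.dyndns.com | tail -n1)
# Bail out if resolution failed so we don't lock everyone out
[ -z "$IP" ] && exit 1
# Rewrite the access rules: your own static IP plus the client's current one
cat > /var/www/staging/.htaccess <<EOF
Order deny,allow
Deny from all
Allow from 203.0.113.10
Allow from $IP
EOF

Run it from cron every few minutes, e.g. */10 * * * * /usr/local/bin/update-staging-htaccess.sh.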
