Just read this article about Static website hosting in Azure Storage :
https://learn.microsoft.com/en-gb/azure/storage/blobs/storage-blob-static-website.
At the moment, I deploy my static content through Azure CDN.
So far, here are the differences that I've found:
Network performance
Pricing
Custom 404 page (Azure Static Website Hosting)
Custom DNS is only supported for Azure CDN? (Is that right?)
I am wondering what could be a good candidate for Static website hosting in Azure Storage:
Content for a local application that does not need to scale globally?
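For context, enabling static website hosting on a storage account (including the custom 404 page mentioned above) is a one-off setting. A minimal Azure CLI sketch, with a placeholder account name, might look like this:

    # Placeholder account name; turns on static website hosting ($web container)
    # with a custom index document and 404 document.
    az storage blob service-properties update \
        --account-name mystorageaccount \
        --static-website \
        --index-document index.html \
        --404-document 404.html

Content uploaded to the $web container is then served from the account's static website endpoint.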
As the official documentation says, the benefits of using Azure CDN to deliver website assets include:
Better performance and improved user experience for end users, especially when using applications in which multiple round-trips are required to load content.
Large scaling to better handle instantaneous high loads, such as the start of a product launch event.
Distribution of user requests and serving of content directly from edge servers so that less traffic is sent to the origin server.
I don't think that hosting a static website in Azure Storage alone is a good candidate when performance matters. The points below describe the features of a CDN that Azure Storage does not have:
A CDN is a content delivery network built on top of the existing network. It relies on edge servers deployed in many locations, together with the load-balancing, content-distribution and scheduling modules of a central platform, so that users obtain the content they need from a server close to them. This reduces network congestion and improves response time and hit rate. The key technologies of a CDN are content storage and content distribution.
The basic principle of a CDN is to deploy caching servers widely, placing them in the areas or networks where users are concentrated. When a user accesses the website, global load balancing directs the request to the nearest healthy cache server, and that cache server answers the user's request directly.
A content delivery network is a network construction approach specifically optimised for distributing broadband media over traditional IP networks; in a broader sense, a CDN represents a network service model based on quality and order.
Simply speaking, a CDN is a strategic deployment of an overall system comprising four elements: distributed storage, load balancing, redirection of network requests, and content management. Content management and global traffic management are the core of a CDN. By judging user proximity and server load, the CDN ensures that content is served to users' requests in a highly efficient way.
In general, content services are based on caching servers, also known as surrogates (proxy caches), which are located at the edge of the network, only a single hop away from the user. At the same time, the proxy cache is a transparent mirror of the content provider's origin server (usually located in the data centre of the CDN service provider). This architecture lets CDN service providers deliver, on behalf of their customers (the content providers), the best possible experience to end users who cannot tolerate any delay in request response time.
For more information about CDNs, we can refer to: Content Delivery Network and its Business Benefits
In addition, custom DNS is not only for CDN: Setting up Custom DNS Records to point to Azure Web Sites - with Stefan Schackow
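To make the comparison concrete, here is a rough Azure CLI sketch (all names below are placeholders, and the origin host name should be your storage account's actual static website endpoint) of putting an Azure CDN endpoint in front of a static website and attaching a custom domain to it:

    # Placeholder names; create a CDN profile and an endpoint whose origin is
    # the storage account's static website host name.
    az cdn profile create \
        --name my-cdn-profile \
        --resource-group my-rg \
        --sku Standard_Microsoft

    az cdn endpoint create \
        --name my-cdn-endpoint \
        --profile-name my-cdn-profile \
        --resource-group my-rg \
        --origin mystorageaccount.z13.web.core.windows.net

    # Custom DNS: after creating a CNAME record pointing your host name at the
    # CDN endpoint, register the custom domain on the endpoint.
    az cdn custom-domain create \
        --endpoint-name my-cdn-endpoint \
        --profile-name my-cdn-profile \
        --resource-group my-rg \
        --name my-custom-domain \
        --hostname www.contoso.example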
Related
I am creating a standard multi-tenant site that has the following structure:
example.com
tenant1.example.com
tenant2.example.com
The hosting is Azure web apps. Tenants are generated dynamically (there could be many) and the site includes real-time components, so utilises Azure SignalR. The site will have a wildcard SSL/TLS certificate to enable the subdomain structure.
Rather than going direct to the app service in one region, I'd like to put a load balancer in front of this and route traffic to regional clusters, or maybe even isolated instances for larger clients. It would also be good to have the DDoS protection that comes built in with these services.
Azure Front Door was my first investigation; it can handle wildcard certificates but doesn't support SignalR.
Application Gateway was my next investigation; it can handle SignalR but doesn't support wildcard certificates.
In terms of DDoS attacks, it seems we can enable a form of protection directly on the web apps. However, to me, this seems like it would throttle an attack rather than provide (low-cost) protection, as I believe a load balancer would.
How can I load balance this situation please?
I have a web app which has a decent amount of static content: images, CSS, some JS, etc. I was trying to figure out how to deliver those resources.
My first thought was to use the storage + CDN setup, yet I would run into issues since I'm also using a Zuul proxy as an application gateway.
Therefore, if every request goes through the gateway, the gateway itself needs to be replicated across regions and cache content in order to achieve the same performance as a CDN, right?
If this is the case, how does the Azure Application Gateway integrate with its CDN? Does it maintain the performance of the CDN or drastically lower it?
Edit
Also, if I have both the Application Gateway and the CDN replicated so much, is it even worth having a CDN? Wouldn't activating some caching rules inside the gateway do the trick?
I have been looking at setting up geo-DNS routing using Azure Traffic Manager (Performance mode). Basically I have an application (Web App and Azure SQL Database) set up in East US, North Europe and Australia East. For compliance reasons data cannot be shared between data centres, and I do not want the user to have to make a choice regarding which data centre they use:
us.app.com
eu.app.com
au.app.com
I want to be able to use app.com and have it routed based on the user's location. Traffic Manager does all of this - however, it will also fail over to other data centres if the closest data centre is unavailable. I don't want the fail-over behaviour: if for some reason the Web App is down in the closest region, I want the user to receive an error.
Does anyone have experience of other providers that offer such a facility? Can the fail-over behaviour be turned off in Traffic Manager?
Interesting question!
Firstly, please note that the 'Performance' mode routing in Traffic Manager is not guaranteed to route a given user to the same data centre if that user travels... for example, if an EU user accesses the service whilst visiting the US, they will be routed to the US endpoint. For this reason, where there is a strong constraint to link a user with a particular region, an application-level redirect may be required.
To address the question you actually asked...there's no built-in ability to disable endpoint monitoring / failover in Traffic Manager today. As a workaround, I suggest making a placeholder site that hosts an error page using Azure Web Apps, then using nested Traffic Manager profiles as follows
3 child profiles, each with 2 endpoints - one of your service endpoints plus the error page web app. These will use the 'Priority' traffic-routing method (aka 'failover' if you're using the old ASM APIs)
1 parent profile, with 3 endpoints, namely the 3 child profiles above. This should use the 'Performance' traffic-routing method. You'll have to specify the location of each endpoint, which should be the same as the app that it contains.
In this way, if one of your apps fails, traffic will be directed to the error page site instead of to the other apps.
Configuring nested Traffic Manager profiles isn't supported in the Azure Portal today. You will need to use Azure PowerShell or Azure CLI (which supports Windows, Linux and macOS).
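As a rough illustration of the layout above (resource names, DNS prefixes and the region are placeholders, and the commands are shown for one region only), the Azure CLI version could look something like this:

    # Child profile for one region: Priority routing, app first, error page second.
    az network traffic-manager profile create \
        --name child-eastus \
        --resource-group my-rg \
        --routing-method Priority \
        --unique-dns-name myapp-child-eastus

    az network traffic-manager endpoint create \
        --name app-eastus \
        --profile-name child-eastus \
        --resource-group my-rg \
        --type azureEndpoints \
        --priority 1 \
        --target-resource-id <app-resource-id>

    az network traffic-manager endpoint create \
        --name errorpage \
        --profile-name child-eastus \
        --resource-group my-rg \
        --type azureEndpoints \
        --priority 2 \
        --target-resource-id <error-page-webapp-id>

    # Parent profile: Performance routing across the child profiles.
    az network traffic-manager profile create \
        --name parent \
        --resource-group my-rg \
        --routing-method Performance \
        --unique-dns-name myapp-parent

    az network traffic-manager endpoint create \
        --name eastus \
        --profile-name parent \
        --resource-group my-rg \
        --type nestedEndpoints \
        --target-resource-id <child-eastus-profile-id> \
        --min-child-endpoints 1 \
        --endpoint-location "East US"

Repeat the child profile for North Europe and Australia East, then add each child as a nested endpoint on the parent with its matching location.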
Regards,
Jonathan Tuliani
Program Manager
Azure Networking - DNS and Traffic Manager
I have my website (abc.azurewebsites.net) hosted on Azure Web Apps using Visual Studio.
Now, after one month, I am facing problems with traffic management. My CPU is always at 90-95% as the number of requests is too high.
Does anyone know how to add traffic management to this web app without changing the domain abc.azurewebsites.net? Is it hard-coded in my application?
I thought of changing the web app to a virtual machine, but since it's already deployed I am worried about losing the domain.
When you scale your Web App, you add instances of your current pricing tier, and Azure deploys your Web App package to each of them.
There's a load balancer in front of all your instances, so traffic is automatically load balanced between them. You shouldn't need a virtual machine for this, and you don't need to configure any extra Traffic Manager.
I can vouch that my company is using Azure Web Apps to serve more than 1,000 concurrent users making thousands of requests with just 2-3 instances. It all depends on what your application does, what other resources it accesses, whether or not you have implemented a caching strategy, and what kind of data storage you are using.
High CPU does not always mean high traffic; it's a mix of CPU and HTTP queue length that gives you an idea of how well your instances are handling traffic.
Your solution might involve a combination of things:
Performance-tune your application
Add caching strategies (a distributed cache like Azure Cache for Redis is a good option)
Increase Web App instances by configuring auto-scaling based on HTTP queue length / CPU, as sketched below.
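A minimal Azure CLI sketch of that last point (all names are placeholders), with one scale-out rule on CPU and one on HTTP queue length for the App Service plan:

    # Placeholder names; create an autoscale setting targeting the App Service
    # plan, with between 2 and 10 instances.
    az monitor autoscale create \
        --resource-group my-rg \
        --resource $(az appservice plan show --name my-plan --resource-group my-rg --query id -o tsv) \
        --name my-autoscale \
        --min-count 2 --max-count 10 --count 2

    # Scale out by one instance when average CPU stays above 70%.
    az monitor autoscale rule create \
        --resource-group my-rg \
        --autoscale-name my-autoscale \
        --condition "CpuPercentage > 70 avg 10m" \
        --scale out 1

    # Scale out by one instance when the HTTP queue length builds up.
    az monitor autoscale rule create \
        --resource-group my-rg \
        --autoscale-name my-autoscale \
        --condition "HttpQueueLength > 100 avg 5m" \
        --scale out 1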
You should not have to change your domain to autoscale a Web App, but you may have to change your pricing tier. Scaling to multiple instances is available at the Basic pricing tier, and autoscaling starts at the Standard tier. Custom domains are allowed at these levels, but you don't have to change your domain if you don't want to.
Here is the overview of scaling a web app: https://azure.microsoft.com/en-us/documentation/articles/web-sites-scale/
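For example, a rough Azure CLI sketch (placeholder names) of moving the App Service plan to the Standard tier and scaling out to three instances:

    # Placeholder plan and resource group; the web app's URL does not change.
    az appservice plan update \
        --name my-plan \
        --resource-group my-rg \
        --sku S1 \
        --number-of-workers 3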
Adding a virtual machine (VM) is very costly compared to adding an instance. On top of that, redundancy (recommended) for the VMs, adding NICs, etc., will blow up the cost. Maintenance is another challenge. PaaS (Web Apps etc.) is always a better option than IaaS here.
Serverless offerings like Azure Functions can also be considered. They support HTTP triggers and scale really well.
I'm trying to add web app endpoints from the same location to an Azure Traffic Manager profile. When I try to do this, it tells me that App Service will use load balancing to do this for me when web apps are in the same location.
My understanding is that load balancing is for distributing requests between multiple VMs within one web app. The plan was to use our single DNS name and allow Traffic Manager to determine which endpoint to go to using round-robin or failover. How will load balancing know to direct requests to one of the web apps from this single address?
Azure Web Apps already have built-in load balancing between instances within the web app. So, for example, if you have a web app with 10 instances under the endpoint tester.azurewebsites.net, Azure load balances appropriately across those instances.
When you bring in Traffic Manager, it expects different endpoints to route between. Incoming requests will be routed based on proximity to the endpoints it is managing, load, and whether the endpoint is available. Traffic Manager takes care of all of those complexities for you.
This allows you to have a single endpoint such as myapp.trafficmanager.net (or a custom domain pointing to it), which may route to myapp-west.azurewebsites.net and myapp-east.azurewebsites.net. That routing, as I indicated, is based on proximity, load and availability.
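As a rough sketch (placeholder names), creating such a Performance-routed profile and adding one of the regional web apps with Azure CLI might look like this:

    # Placeholder names; the profile gets its own DNS name under trafficmanager.net.
    az network traffic-manager profile create \
        --name myapp-tm \
        --resource-group my-rg \
        --routing-method Performance \
        --unique-dns-name myapp-tm

    # Each regional web app is added as an Azure endpoint; Traffic Manager
    # monitors it and only returns healthy endpoints in DNS responses.
    az network traffic-manager endpoint create \
        --name west-endpoint \
        --profile-name myapp-tm \
        --resource-group my-rg \
        --type azureEndpoints \
        --target-resource-id $(az webapp show --name myapp-west --resource-group my-rg --query id -o tsv)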
How it actually works is the magic sauce of Azure Traffic Manager. I use it in production and it has been working very well for me. I primarily use it for routing based on proximity, and I have yet to experience a failure on a web app that would test a production failover reroute.
Hope that helps!