GCP: dispatch.yaml route limit - node.js

We are using GCP to set up our project, and it requires a dispatch.yaml file. We want to route requests to different service URLs without changing the domain name in the browser. dispatch.yaml fulfills this need, but it allows only 20 routing rules.
How can we overcome this limitation of dispatch.yaml, given that we require more than 20 routing rules?
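For context, a minimal dispatch.yaml looks like the sketch below; the service names are placeholders. Every path pattern you want routed counts against the 20-rule cap.

```yaml
# dispatch.yaml: routes URL patterns to services without changing
# the domain the browser sees. App Engine caps this file at 20 rules.
dispatch:
  - url: "*/api/*"
    service: api-backend
  - url: "*/admin/*"
    service: admin-ui
  # ...at most 20 rules in total
```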

Per this Google Groups discussion, this appears to be a hard limit that is unlikely to change. GCP support said:
With more dispatch rules comes more CPU and memory resource requirements per app. [...].
Ideally it is recommended to design your application to abide by App Engine quotas as they are set to protect the overall underlying architecture. Treating other micro-services as backend services and directly routing requests to them using their full target address via your main frontend default service is the recommended approach. This way you only have to maintain the URL requests to your main frontend client-facing default service, and not every service.
It seems the quota-increase request was denied for the user in the Groups discussion, and they worked out a solution using Google Cloud Endpoints and the direct module/service addressing scheme in GAE.
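As a hedged illustration of that recommended pattern, the default Node.js frontend service could proxy path prefixes to the other services via their full "-dot-" addresses, so dispatch.yaml only needs a rule or two pointing at the default service. The service and project names below are placeholders.

```typescript
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

// Each entry replaces what would otherwise be a dispatch.yaml rule;
// targets use App Engine's direct "-dot-" service addressing.
const routes: Record<string, string> = {
  "/reports": "https://reports-dot-my-project.appspot.com",
  "/billing": "https://billing-dot-my-project.appspot.com",
};

for (const [path, target] of Object.entries(routes)) {
  // changeOrigin rewrites the Host header to match the target service.
  app.use(path, createProxyMiddleware({ target, changeOrigin: true }));
}

app.listen(Number(process.env.PORT) || 8080);
```

This keeps all routing maintenance in the one client-facing default service, as the support quote above suggests.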
The discussion is relatively old, but I believe they won't change the limit. Nevertheless, I would recommend contacting GCP support and explaining your current situation, so they can recommend the proper way for your app to avoid the limit.
There's also the possibility of a feature request to increase the limit. One has already been created for this; you can reply to the post stating that you would also like the limit increased, so the GCP engineers know that more users are affected by it.

Related

What to use for routing thousands of subdomains in Azure?

We have an application that we are hosting in multiple environments in Microsoft Azure. We want to route the traffic based on subdomains, like xxx.mydomain.com should go to the webapp that I have in North Europe and yyy.mydomain.com and zzz.mydomain.com should go to the webapp that I have in the East US.
I know it sounds like simple DNS, but it is more than that. Because:
I need to be able to add or update entries dynamically using code so an API should be available for that.
A normal DNS entry has a 24-hour time to live (TTL), meaning that if I want to move my app from one environment to another, users may hit both environments for up to 24 hours.
I expect to have hundreds of thousands of subdomains. Azure DNS has a limit of 25,000 entries.
I've looked into Azure Traffic Manager. It doesn't seem to have an option for traffic based on subdomains.
Also, I've looked at Azure Application Gateway. It seems to be the correct choice and it supports APIs, but I cannot find its limits for subdomains.
Any suggestions?
From the criteria, it seems you're looking for a load-balancer/proxy/application-delivery-controller solution that's controllable through an API. I'll add my five cents here, as we've just gone through a very similar problem. These are, however, more suggestions to look for answers elsewhere than Azure.
Azure
Azure Traffic Manager and Azure Application Gateway have limits you can't fit within. For example, with 200 rules per Application Gateway you could host at most 200 HTTPS sites, and the moment you need to serve both HTTP and HTTPS you're limited to 100 sites per gateway. You'd need to split your solution across multiple subscriptions in order to stay within subscription-wide limits. Also, the Application Gateway API is a bit too convoluted for my liking.
Azure DNS is also a bit problematic, as cached DNS records can persist for up to 24 hours. You'd therefore lose the ability to switch/route traffic to a different origin instantly.
Self-hosted
You could look into more old-school solutions: run HAProxy or Nginx and programmatically modify their configuration (text files) on the fly, then reload it. HAProxy also has a socket "API" that can simplify the configuration modification and reload for you.
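As a minimal sketch of that approach, assuming Nginx with one vhost file per subdomain (paths and upstream URLs are placeholders):

```typescript
import { writeFileSync } from "fs";
import { execSync } from "child_process";

// Write a dedicated vhost file for the subdomain, then hot-reload Nginx.
function addSubdomainRoute(subdomain: string, upstream: string): void {
  const conf = `
server {
    listen 80;
    server_name ${subdomain}.mydomain.com;
    location / {
        proxy_pass ${upstream};
        proxy_set_header Host $host;
    }
}
`;
  // One file per subdomain keeps adds/removes atomic and scriptable.
  writeFileSync(`/etc/nginx/conf.d/${subdomain}.conf`, conf);
  // "nginx -s reload" re-reads configuration without dropping connections.
  execSync("nginx -s reload");
}

addSubdomainRoute("xxx", "https://northeurope.example.net");
```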
There's also a newer set of cloud-native controllers meant for service mesh solutions, such as Kong, which offers a simple Admin API through which you can manage and route traffic easily.
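For a feel of how little it takes, here is a hedged sketch against Kong's Admin API (listening on its default port 8001; service names and upstream URLs are placeholders):

```typescript
const admin = "http://localhost:8001";

// Register an upstream web app and route a subdomain's Host header to
// it; adding or moving a subdomain is just a couple of API calls.
async function routeSubdomain(sub: string, upstreamUrl: string): Promise<void> {
  await fetch(`${admin}/services`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name: `webapp-${sub}`, url: upstreamUrl }),
  });
  await fetch(`${admin}/services/webapp-${sub}/routes`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ hosts: [`${sub}.mydomain.com`] }),
  });
}

routeSubdomain("xxx", "https://northeurope.example.net");
```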
SaaS
If you're into buying this as a service, edge cloud providers such as Cloudflare, Fastly, or others are indeed "one big proxy server", and it is possible to configure them programmatically to route traffic to different origins; it's what they do, after all.
Azure Application Gateway is indeed perhaps one of the best options for your scenario.
As you already said, it has an API that you could use to dynamically add rules based on your subdomains (a hedged sketch follows this answer).
The limits for Application Gateway only allow for 200 rules per gateway.
But you can have 1,000 gateways per subscription, so if you could chain the gateways, that would give you roughly 200,000 rules.
The Microsoft documentation doesn't show that you can request an increase in these limits, but maybe if you ask really nicely they might allow it.
Maybe this is not the answer to your question but it might be an answer.
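For the API route mentioned above, here is a hedged sketch using @azure/arm-network. It assumes a gateway that already has a frontend IP, frontend port, backend pool, and HTTP settings, and it appends one host-based listener plus one routing rule; resource names and the subscription ID are placeholders.

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { NetworkManagementClient } from "@azure/arm-network";

const client = new NetworkManagementClient(
  new DefaultAzureCredential(),
  "<subscription-id>"
);

async function addSubdomainRule(rg: string, gwName: string, sub: string) {
  // Fetch the current gateway model; ARM updates are whole-resource PUTs.
  const gw = await client.applicationGateways.get(rg, gwName);

  // A listener that matches on the subdomain's Host header.
  gw.httpListeners!.push({
    name: `listener-${sub}`,
    hostName: `${sub}.mydomain.com`,
    protocol: "Http",
    frontendIPConfiguration: { id: gw.frontendIPConfigurations![0].id },
    frontendPort: { id: gw.frontendPorts![0].id },
  });

  // A rule tying that listener to an existing backend pool and settings.
  gw.requestRoutingRules!.push({
    name: `rule-${sub}`,
    ruleType: "Basic",
    priority: 100 + gw.requestRoutingRules!.length, // priorities must be unique
    httpListener: { id: `${gw.id}/httpListeners/listener-${sub}` },
    backendAddressPool: { id: gw.backendAddressPools![0].id },
    backendHttpSettings: { id: gw.backendHttpSettingsCollection![0].id },
  });

  await client.applicationGateways.beginCreateOrUpdateAndWait(rg, gwName, gw);
}
```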
If anyone's interested, we've ended up using Azure DNS. We contacted Microsoft and they confirmed that they can increase the quota to 500,000, which is more than enough for us. :)

Securely allow Google App Engine to internal company network/servers for Google Apps Scripts

It is well documented that Google Apps Scripts run on Google App Engine servers that would not have access to a company's internal network/servers:
https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app
https://cloud.google.com/appengine/kb/#static-ip
https://developers.google.com/apps-script/guides/jdbc#using_jdbcgetconnectionurl
Per the documentation, if you want a Google Apps Script project to have access to an internal network/server then you will have to white-list Google's IPs. But we all know that isn't the safest option. In fact, the documentation even says so:
Note that using static IP address filtering is not considered a safe and effective means of protection. For example, an attacker could set up a malicious App Engine app which could share the same IP address range as your application. Instead, we suggest that you take a defense in depth approach using OAuth and Certs.
The issue is I cannot find any documentation, reference material, or articles on how best an organization should do what it suggests.
So my question is, how can an organization using G-Suite Enterprise securely allow Google Apps Script projects to access the company's internal network?
The documentation makes it quite clear that, since Apps Scripts are run on shared App Engine instances, it is impossible to restrict by IP, which also implies that the networking capability is very limited (i.e., no VPC peering or the like). Therefore, as in the highlighted block, they suggest implementing authentication rather than just IP restriction.
Apart from authentication, Apps Script also supports encrypting and authenticating the server with SSL (sample code). This should protect the connection from being eavesdropped on when sent over the Internet.
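In practice that mostly means leaving certificate validation on in UrlFetchApp; the URL below is a placeholder:

```javascript
// validateHttpsCertificates is true by default; stating it explicitly
// guards against someone disabling it later and accepting spoofed certs.
var resp = UrlFetchApp.fetch("https://internal.example.com/api", {
  validateHttpsCertificates: true,
});
```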
Furthermore, you can implement a "semi IP restriction" mechanism, technically called port knocking, which briefly works as follows:
First, create a special endpoint that requires authentication and accepts an IP address as input. When it is requested, open your firewall to accept connections from that IP to your internal network for a limited time (e.g. 5 minutes).
In your Apps Script, use URL Fetch to request that endpoint, so that your script's instance is temporarily allowed to access your network.
Of course that will not be perfect, since one App Engine instance runs many scripts concurrently and the whitelist is opened for a set time, but still this is considerably better than persistently opening the port to all Google (App Engine) IPs.
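A hedged sketch of the knocking flow from the Apps Script side; the /knock endpoint, its bearer token, and the internal URL are all assumptions about your gateway, not an existing API:

```javascript
function fetchInternalReport() {
  var token = PropertiesService.getScriptProperties().getProperty("KNOCK_TOKEN");

  // Step 1: authenticated knock. The gateway can read this request's
  // source IP itself and open the firewall to it for a few minutes.
  UrlFetchApp.fetch("https://gateway.example.com/knock", {
    method: "post",
    headers: { Authorization: "Bearer " + token },
  });

  // Step 2: within that window, the internal endpoint is reachable.
  var resp = UrlFetchApp.fetch("https://internal.example.com/report", {
    headers: { Authorization: "Bearer " + token },
  });
  return resp.getContentText();
}
```

Having the gateway derive the IP from the knock request itself avoids the script needing to discover its own egress address.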
Apps Script is a great tool for simplifying tasks when you are using G Suite services; unfortunately, what you are trying to achieve is not available. Also, keep in mind Apps Script is not built on App Engine; it's a completely different product.
Therefore, if what is shown in the documentation can't fulfill your requirements, please check other Google alternatives like App Engine or Google Cloud Platform instead of G Suite.

Azure App Service Hosting Requirements/Options

I'm interested in using Azure as a PaaS solution to host a Node.js app that I'll be developing in the coming months. I've done a fair bit of research on the pricing models and tiers, so I sort of have a grasp on that; however, I'm not sure how to accurately spec my server requirements. When looking at pure CPU, memory, and storage specifications between the Basic, Standard, and Premium plans, they all look similar, with the exception of storage I suppose.
The application I intend to build will primarily perform CRUD-based actions. It will not host large images/videos, and static files will be limited to JS libraries and small theming images (icons, logos, etc.; I'm hoping there's a CDN). I anticipate no more than 1,000 web page requests per day, and the App Service is only intended to serve as a Web API and web server; I intend to host the DB on mLab.
I'm looking for an option that will give me reasonable page-load and server response times (1-2 seconds). The App Service also needs support for SSL; is that something I need to get from Microsoft, or can I purchase it elsewhere and apply it?
Finally, I'd love to be able to test and dev on Azure, as from my experience it is better to do so on an architecture that matches your production. Are there any low-cost dev/test server options that I can use instead of the production service (which I anticipate would exceed my test performance requirements and also cost more)?
While the CPU, memory, and storage options may look similar between the plans, the VM underneath and the additional features are not.
For plans:
Shared/Free are plans where you share a VM with other users. You have quotas for how much of the VM's resources you are allowed to use, and if you go over them the site will be shut down until the quota resets. This is fine for dev/test environments, but can be risky for production, as a traffic spike can cause your site to be turned off temporarily.
Basic plans give you a dedicated VM for your app, so there are no quotas, which removes the risk of having your site shut off if it gets too popular.
Standard adds autoscaling (the ability to increase and decrease resources based on usage metrics) and SSL.
Premium is similar to Standard, but the underlying VM is running on better hardware.
The Shared plan and higher (basically anything but Free) offer load balancing and custom domains. You can purchase a domain within your Azure account or bring your own.
The default yourwebsite.azurewebsites.net is protected by the azurewebsites.net SSL cert. However, if you use a custom domain and need SSL support, then you need to be on a Standard plan or higher. As with domains, you can purchase one through Azure or bring your own.
You can put a CDN of your own choosing in front of your Azure App Service, or you can use Azure's CDN. It is not included in the App Service plan.
For production with a custom domain and SSL, you are looking at one of the Standard plans.
For dev/test there are a couple ways you could go. If your dev server doesn't use any of the extra features like custom domains, you can scale the plan up and down as you please. That means you can scale up to the matching plan for final testing of a release, but leave it in a lower tier the majority of the time.
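A hedged sketch of that scale-up/scale-down flow using @azure/arm-appservice; the resource group, plan name, region, and subscription ID are placeholders:

```typescript
import { DefaultAzureCredential } from "@azure/identity";
import { WebSiteManagementClient } from "@azure/arm-appservice";

const client = new WebSiteManagementClient(
  new DefaultAzureCredential(),
  "<subscription-id>"
);

// Move the plan between tiers, e.g. B1 day-to-day, S1 for release tests.
async function setTier(skuName: "B1" | "S1"): Promise<void> {
  await client.appServicePlans.beginCreateOrUpdateAndWait("my-rg", "my-plan", {
    location: "westeurope",
    sku: { name: skuName, tier: skuName === "S1" ? "Standard" : "Basic" },
  });
}

// Scale up before final testing of a release, then back down afterwards.
setTier("S1").catch(console.error);
```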
The second option is to use deployment slots to create your dev site on the same VM as your prod site. You need to be on a Standard plan or higher to use this feature, and it comes with some added benefits. Particularly that you can swap which code is in production or funnel some of your traffic to a staging slot before swapping new code into production.

What is a good Azure architecture for Web App Services

I have been researching for a couple of days and looking at Pluralsight courses, but I can't seem to find a decent answer on how to set up a proper Azure infrastructure.
I have a client app, api backend, and a database as a core of my overall application. I know I need 2 different Web App services and an SQL database.
I also have a need to only allow access to all 3 from our company's IP address.
I'm getting lost with all the VNET and VPN talk, and I am wondering whether that is even required. Is it considered good practice to do IP restrictions and call it a day? Should I add an Application Gateway in front of the client application nonetheless?
If VNETs are required, is it a must to do site-to-site? (I don't think we have the authority to do that.) If not, how do we access the backend services like the database and API if everything is locked down?
Any help is appreciated because there is too much information and I can't seem to make sense of any of it.
Thanks
It depends a lot on the purpose of your client application, web application, and database, as well as the capabilities that currently exist within your organisation. Have you had a look at the reference architectures Microsoft provides as a starting point?
If you are looking at a fairly simple application, deployed to Azure with minimal, internal-only use, then start from something like this reference architecture: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/vpn. You can actually simplify it a little further by removing the load balancers, etc., if you think traffic will generally be low.
If you are looking for an external application that can only be managed internally, you should adopt something similar to this reference architecture: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/n-tier/n-tier-sql-server. Maybe even add a VPN component to the management jump box similar to this architecture: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/vpn.
Even this, however, may be too complicated for your use case. If your application is pretty basic, is secured using username/password or identity federation, and handles low-risk data, then the basic web application architecture would do fine; just read through the various considerations here: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/app-service-web-app/basic-web-app

Azure Website Logs Including Internal IPs in Entries

For the last couple of weeks, we have been seeing an increasing amount of entries in the web logs of our Azure website whose originating IP address (in the c-ip column of the log) appears to be in the range 100.90.X.X. It has now reached more than half of all the traffic being logged, and is interfering with our ability to perform analytics and threat detection.
According to the Wikipedia entry on reserved IP addresses, this block is "Used for communications between a service provider and its subscribers when using a Carrier-grade NAT, as specified by RFC 6598", so could this be a problem in Azure?
Looking at the logs, the traffic comes from many different user agents (both normal users and the common legitimate bots) and is requesting a broad range of resources, so does not immediately appear suspicious other than the IPs. It looks more like legitimate traffic is being given an incorrect (internal) IP.
It seems to be only affecting static content (e.g. images and XML files), but not ALL static content.
We are using a single Small Standard instance in Western Europe, with a single web app running on it. We are not using any scaling features. There is a linked SQL database, and the website runs primarily over HTTPS. 95%+ of our traffic comes from UK sources. We have not made any changes to logging, which is handled by Azure.
Is there any way that we can return to seeing the actual IPs here, or is this malicious traffic?
It's possible to alter the logging, but not in an app. The app diagnostic setting is pretty rudimentary: just a switch, "to log or not to log?"
What you'll be interested in is this comparison between apps (they were called "sites" then), roles (available through Cloud Services), and virtual machines. The article mentions that there is more control over logging in the roles environment, which I would assume means that you can set up custom logs. This article details how to set up logging for the headers you choose in IIS. Now, you can fiddle with IIS in a virtual machine, but there is a chance a cut-down version of this would work in a web role, for example. This article discusses how to enable diagnostics logging in your cloud-service-hosted application.
Moving from the app environment to a cloud service is not trivial, since you have many more things to set up. Possibly you're looking at changing your solution's structure, maybe altering the architecture of your app. So I wouldn't consider doing it just to see a client's IP.
The simplest thing you can do is try attaching analytics. There used to be a solution straight from Azure, but I can't find it in the portal. Google analytics is my go-to solution for traffic analysis. It may get you the information you want.
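If you also log at the application layer, the original client address usually arrives in the X-Forwarded-For header added by the platform's front end; a hedged Express sketch, assuming a Node app:

```typescript
import express from "express";

const app = express();

// Trust the platform proxy so req.ip reflects X-Forwarded-For instead
// of the internal 100.64.0.0/10 hop seen in the c-ip column.
app.set("trust proxy", true);

// Minimal request log line with the recovered client IP.
app.use((req, _res, next) => {
  console.log(`${new Date().toISOString()} ${req.ip} ${req.method} ${req.url}`);
  next();
});

app.get("/", (_req, res) => res.send("ok"));
app.listen(Number(process.env.PORT) || 8080);
```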
It's really annoying how Microsoft rebrands an Azure service every few months.
