Here is my situation.
I have a project hosted on Google Cloud, more specifically GAE (NodeJS) and Firestore.
I have a queue stored in Firestore that can grow to 30-40k entries.
Each entry is basically an object with which I'll have to make an API call to an external service.
That external service allows only 10 requests/s per IP.
At the moment I take batches of 10 and make an API call for each one, but it's too slow.
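Roughly, my current approach looks like this (a simplified sketch - the names below are placeholders, not my real code):

```
interface QueueEntry {
  id: string;
  payload: unknown;
}

// Placeholder for the real external call (the actual API is not shown here).
async function callExternalService(entry: QueueEntry): Promise<void> {
  console.log(`calling external service for entry ${entry.id}`);
}

async function processQueue(entries: QueueEntry[]): Promise<void> {
  for (let i = 0; i < entries.length; i += 10) {
    const batch = entries.slice(i, i + 10);
    // Fire the 10 calls in parallel...
    await Promise.all(batch.map((entry) => callExternalService(entry)));
    // ...then wait roughly a second so this instance stays under 10 requests/s.
    await new Promise((resolve) => setTimeout(resolve, 1000));
  }
}
```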
I already tried to instantiate multiple instances of the GAE service, but I still hit the limitation (do the instances use the same IP?!).
Another option would be to move the API call into a Cloud Function and hit it there, but I think I would get the same outcome as with the GAE instances.
So, what do you think ?
Many thanks!
In my opinion, the requests-per-second-per-IP limit is put in place to throttle the overall amount of incoming requests, and gaming this rule may cause issues for that service. The best way to handle this situation is either to get a paid subscription or to discuss the issue directly with the service provider.
Regarding the App Engine instances and IP addresses the short answer is:
No, GAE instances don't have their own dynamic IPs.
For reference, you can confirm this in the FAQ for App Engine:
App Engine does not currently provide a way to map static IP addresses to an application. In order to optimize the network path between an end user and an App Engine application, end users on different ISPs or geographic locations might use different IP addresses to access the same App Engine application. DNS might return different IP addresses to access App Engine over time or from different network locations.
A tcptraceroute to a Google service shows one of these points:
lga34s14-in-f14.1e100.net
According to the description of Google Edge Network:
Our Edge Points of Presence (PoPs) are where we connect Google's network to the rest of the internet via peering. We are present on over 90 internet exchanges and at over 100 interconnection facilities around the world.
To sum it up: your application should exit Google's network from the Edge Point closest to its target, so it would make sense that it is always the same point with the same IP; and given the number of services and client applications GCP hosts, you can expect Google to be using a reverse proxy.
I'm running an Azure Function which gets data from an API and stores it in a blob. Everything worked fine, then it stopped working out of nowhere. We got in contact with our provider and they told us they had made some changes to their API. After we made the necessary changes in our code, we started getting an IP-denied error from their side. I then searched and found the possible outbound IP addresses for the Azure Function. They whitelisted the whole list, and still:
- they aren't getting any requests from those IPs,
- we are not able to access that data, for the same reason: our IP is denied.
We've been running the code in a local machine and it works completely fine, but this is just a temporary fix and we want to keep everything in the cloud.
I've been stuck with this for about 3 weeks. I've looked into different solutions and I found out about Azure Logic Apps and Azure Service Fabric.
Is there something missing in my Azure Function that isn't allowing me to make requests to the API? Am I using the wrong outbound IP? Also, if I use either of the other two services, will I encounter the same problem? I did some research on them and I think they both also use multiple outbound IP addresses, so I'm worried I'll run into the same issue.
Using a NAT gateway you can specify a static IP address for outbound traffic; your function app needs to be attached to a subnet, which is not available on the Consumption plan.
Here is where you should be getting the Azure IP address ranges from. Azure Functions originate from the App Service Plan ranges. Note that this is updated weekly, things change, but not too often. Your provider will need to open all the relevant ranges and keep up to date with any changes. If your solution is not mission critical with a high SLA, then having your service provider open the relevant ranges and deal with failures, updating the ranges on an ad-hoc basis should be fine.
Secondly, if you have a good relationship with the provider, ask them to check the firewall; they will be able to give you an indication of the IPs getting blocked by checking the firewall logs. This will help you find the right range.
The only guaranteed way to solve this in a mission-critical solution is to run your Azure Functions from a dedicated App Service plan with a dedicated IP address. This is an expensive option but will be the most robust.
Additional helpful information on how App Service works with IPs can be found here.
So just a quick summary of what we are doing to put everything into context. We have a socket server running as an Azure Cloud Service (worker role) within the South Central US region. All of our other components (queue, DBs, web app, API etc.) are located in East US. The reason is, sadly, that we are not able to modify the static IP address that was created for South Central US a few years ago. The devices in the field cannot alter their IP either :/ So we are stuck communicating cross-region.
So what I'm asking is: is there a way to improve latency? Can we "port forward"? What other options do we have? I'm assuming latency is our biggest enemy here as we pipe data back and forth.
Looking at load balancing at moment - https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-overview
Thoughts?
Load Balancer is a regional service and cannot direct traffic across regions.
There are a couple of options:
1) build your own VMs with a TCP proxy to achieve your scenario. You could use Load Balancer to scale and protect your TCP proxy instance if you want to pursue that path; a minimal sketch follows below.
2) explore using Application Gateway for this scenario since it is a proxy and can direct to IP address destinations. This is essentially a managed service for option 1, although limited to HTTP & HTTPS.
3) migrate to a DNS approach for locating your service and orchestrating a migration across regions over time.
Either way, traffic would remain on Microsoft's own backbone between regions.
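For option 1, the TCP proxy itself can be very small. A minimal sketch, assuming Node.js on the proxy VM; the ports and the remote endpoint below are placeholders you would replace with your own values:

```
// Minimal TCP proxy sketch using Node's built-in 'net' module:
// listen locally in one region and forward to the service in the other region.
import * as net from "net";

const LOCAL_PORT = 5000;                        // port the devices connect to
const REMOTE_HOST = "your-service.example.com"; // placeholder remote endpoint
const REMOTE_PORT = 5000;

const proxy = net.createServer((client) => {
  // Open a connection to the real backend in the other region...
  const upstream = net.connect(REMOTE_PORT, REMOTE_HOST);
  // ...and pipe bytes in both directions.
  client.pipe(upstream);
  upstream.pipe(client);
  // Tear down both sockets if either side errors out.
  client.on("error", () => upstream.destroy());
  upstream.on("error", () => client.destroy());
});

proxy.listen(LOCAL_PORT, () => {
  console.log(`TCP proxy listening on port ${LOCAL_PORT}`);
});
```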
Best regards,
Christian
Hello, I've been searching everywhere and have not found a solution to my problem, which is: how can I access my API through the gateway-configured endpoint only? Currently I can access my API using localhost:9000 as well as localhost:8000, which is the Kong gateway port that I secured and configured, but what's the point of using this gateway if the initial port is still accessible?
So I am wondering: is there a way to disable port 9000 and only access my API through Kong?
Firewalls / security groups (in cloud), private (virtual) networks and multiple network adapters are usually used to differentiate public vs private network access. Cloud vendors (AWS, Azure, etc) and hosting infrastructures usually have such mechanisms built in, e.g. Kubernetes, Cloud Foundry etc.
In a production environment, Kong's external endpoint would run with public network access and all the service endpoints would sit in a private network.
You are currently running everything locally on a single machine/network, so your best option is probably to use a firewall to restrict access by ports.
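As a simple illustration of that public/private split (assuming, purely for the example, a Node-based upstream rather than your actual service), you can bind the upstream API to the loopback address so that only Kong, running on the same machine, can reach it:

```
// Illustration only (assumes a Node upstream; your service may differ):
// bind the upstream API to loopback so it is not reachable from outside,
// and let Kong on port 8000 proxy to it.
import * as http from "http";

const upstream = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ message: "hello from the upstream API" }));
});

// Listening on 127.0.0.1 instead of 0.0.0.0 means port 9000 is only
// reachable from the same machine, i.e. through Kong.
upstream.listen(9000, "127.0.0.1");
```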
Additionally, it is possible to configure separate roles for multiple Kong nodes - one (or more) can be "control plane" nodes that only you can access, and that are used to set and review Kong's configuration, access metrics, etc.
One (or more) other Kong nodes can be "data plane" nodes that accept and route API proxy traffic - but do not accept any Kong Admin API commands. See https://konghq.com/blog/separating-data-control-planes/ for more details.
Thanks for the answers, they give different perspectives. Since I have a Scala/Play microservice, I added one of Play Framework's built-in HTTP filters in my application.conf, allowing only the Kong gateway; now when trying to access my application via localhost:9000 I get denied, which is exactly what I was looking for.
I hope this answer will be helpful to others in the same situation.
I wrote a small utility that utilizes Azure blob storage to push some files across for a secondary backup (~100GB). Thus far it works really well, however since it is sitting in a colocation area, my bandwidth usage can hit 190mb/s+ which is a bill I'd rather not pay. Given this, I have two questions:
Outbound traffic on a server with multiple IPs utilizes the first IP configured as the "main" one. I know in C# I can get a list of network adapters and change properties, but is it possible to tell an app that its traffic needs to utilize a specific IP (instead of the default) for outgoing connections? We could use this to filter anything coming out of that IP, regardless of destination, and only this app would use that address.
If not, is it possible to configure an app to send all traffic on a separate adapter that would have a single IP, so we could filter outbound at our router level to throttle that traffic?
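To illustrate what I mean by the first question, here is a rough sketch in Node (just to show the concept - my actual utility is C#, and the IP and host below are placeholders): an outgoing connection can be bound to a chosen local address so its traffic leaves from that IP.

```
// Concept sketch only (Node, not my actual C# utility): bind an outgoing
// HTTPS request to a specific local IP via the localAddress socket option.
import * as https from "https";

const LOCAL_IP = "203.0.113.25"; // placeholder for the secondary IP on the server

const req = https.request(
  {
    host: "example.blob.core.windows.net", // placeholder destination
    path: "/",
    method: "GET",
    localAddress: LOCAL_IP, // outgoing traffic uses this source IP
  },
  (res) => {
    console.log(`status: ${res.statusCode}, sent from ${LOCAL_IP}`);
  }
);

req.on("error", (err) => console.error(err));
req.end();
```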
Alternatively (if we're attacking this from the wrong angle), is it possible to limit Azure transfers to a maximum bandwidth allotment in some capacity? That's all I'm really after, as any other traffic should be able to use the maximum it can (meaning QoS doesn't apply - there isn't contention here, just too much outgoing in general).
For your backup needs, did you already evaluate RA-GRS? It provides built-in data replication to a secondary location with read-only access to the data.
https://azure.microsoft.com/en-us/documentation/articles/storage-redundancy/
As far as I can tell there is no API that allows you to set a limit on the bandwidth consumed; however, you can enable storage monitoring so that you have a better idea of how many transactions were triggered.
https://azure.microsoft.com/en-us/documentation/articles/storage-monitor-storage-account/
By the way, one thing that might address your cost concern is to set up a spending limit for your Azure subscription, but this depends on the type of your subscription.
https://azure.microsoft.com/en-us/pricing/spending-limits/
I am starting to develop a webservice that will be hosted in the cloud but needs higher availability than typical cloud SLAs provide.
Typical SLAs, e.g. for Windows Azure, promise an availability of 99.9%, i.e. up to 43 min of downtime per month. I am looking for an order of magnitude better availability (<5 min of downtime per month). While I can configure several load-balanced database back-ends to resolve that part of the issue, I see a bottleneck at the webserver. If the webserver fails, the whole service is unavailable to the customer. What are the options for reducing that risk without introducing another possible single point of failure? I see the following solutions and drawbacks to each:
SRV-record:
I duplicate the whole infrastructure (and take care that the databases are in sync) and add additional SRV records for the domain, so that the user trying to access www.example.com will automatically get forwarded to example.cloud1.com or, if that one is offline, to example.cloud2.com. Googling around, it seems that SRV records are not supported by any major browser - is that true?
second A-record:
Add an additional A-record as an alternative. Drawbacks:
a) at my hosting provider I do not see any option to add a second A-record, only one... is that normal?
b) if one of the two servers is down, I am not sure whether the user gets automatically redirected to the other one or whether 50% of all users get a 404 or some other error
Any clues for a best-practice would be appreciated
Cheers,
Sebastian
The availability of the instance, i.e. the SLA specified by the cloud provider, refers to the instance's health as a server running in the context of the hypervisor or Fabric Controller. With that said, you need to make an effort to ensure the instance is not failing because of your app, your OS, or pretty much anything else running inside the instance. There are a few things which devops tend to miss and that hit back hard, for instance forgetting to configure OS updates and patches.
The fundamental axiom of availability is redundancy: the more redundant your application and infrastructure are, the more available your app is.
I recommend you look into Azure Traffic Manager and then rework your architecture. You need not worry about the SRV record or A-record; just a CNAME pointing to the Traffic Manager would do the trick.
The idea of Traffic Manager is simple: you tell Traffic Manager to sit behind the domain name (the domain name resolution of the app), and Traffic Manager then decides where to send each request based on factors like round-robin distribution, disaster management, etc.
With the combination of Traffic Manager and a multi-region infrastructure setup, you will march towards your high-availability goal.
Links
Azure Traffic Manager Overview
Cloud Power: How to scale Azure Websites globally with Traffic Manager
Maybe you should configure a Corosync cluster with DRBD?
DRBD will ensure that the data on both nodes is replicated (for example website files and DB files).
Apache as the web server will be available under a virtual IP to which the domain is pointed. In case one server goes down, Corosync will move all services to the second server within a few seconds.