Dynamic action calls are getting through Amazon CloudFront

We have configured a CDN to speed up our website. The site makes some AJAX action calls that take a long time to get a response from the origin server, because they run heavy queries.
A query can take 40-50 seconds to execute, so for most actions that take more than 30 seconds we get a 504 timeout error from CloudFront.
Is there any option in CloudFront to increase this limit for dynamic calls, or a way to have CloudFront ignore these actions? Since they are all dynamic, they shouldn't be routed through the CloudFront CDN at all.

There is no way to set CloudFront timeouts.
A couple of methods:
Route the dynamic calls directly to your server. As you suggested, CloudFront is going to offer zero benefit for those calls, so don't use the CloudFront URLs and instead use the backend URLs.
Polling. The goal is to change your long request into lots of short ones: one call to kick off the job, then subsequent calls to check on its status (see the sketch below). This is clearly more effort, as it will require some coding changes. However, at some point your jobs are going to grow until they time out at the browser level as well, so it might be something to think about now. (You could also use something like WebSockets, where there is a persistent connection that you pass data over.)
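A minimal sketch of that polling pattern with Express (the endpoint names, the in-memory job store, and runHeavyQuery are illustrative stand-ins, not a prescribed design):

```javascript
// Polling sketch: one call starts the job, later calls check its status.
const express = require('express');
const crypto = require('crypto');

const app = express();
const jobs = new Map(); // jobId -> { status, result }; use a real store in production

// 1. Kick off the long-running work and return immediately with a job id.
app.post('/jobs', (req, res) => {
  const jobId = crypto.randomUUID();
  jobs.set(jobId, { status: 'running', result: null });

  runHeavyQuery() // stand-in for the 40-50 second query
    .then((result) => jobs.set(jobId, { status: 'done', result }))
    .catch((err) => jobs.set(jobId, { status: 'failed', result: String(err) }));

  res.status(202).json({ jobId });
});

// 2. Clients poll this cheap endpoint until status is 'done' or 'failed'.
app.get('/jobs/:id', (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.status(404).end();
  res.json(job);
});

async function runHeavyQuery() {
  // Replace with the real heavy query.
  return new Promise((resolve) => setTimeout(() => resolve({ rows: [] }), 45000));
}

app.listen(3000);
```

Each individual request now completes well inside CloudFront's 30-second window; only the overall job takes 40-50 seconds.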

Related

Send an HTTP request at an exact time in the future with Node.js

I need to make a POST HTTP request at an exact timestamp in the future, as accurately as possible, down to the millisecond. But there is network latency as well. How can I achieve such a goal?
setTimeout is not enough here, because it always takes some extra time to fire, resulting in a late request on top of varying network latency. And firing the request before the target timestamp may result in it arriving early.
My goal is a request that is guaranteed to arrive at the server after the target timestamp, but as soon as possible after it. Could you suggest any solutions with Node.js?
The best you can do in Node.js (which is not a real-time system) is the following:
Premeasure the expected latency so you know roughly how far in advance to send the request.
Use setTimeout() to schedule the send at precisely the one-way latency time before your target time; there is no other mechanism in Node.js that would be more precise (see the sketch after this list).
If your request involves a DNS lookup, you can resolve your hostname's address ahead of time and take the DNS lookup out of your request cycle, or at least prime the local DNS cache.
Create a dedicated Node.js program that does nothing else, so its event loop will not be busy with anything else at the time the setTimeout() needs to run. You could run this as a child_process from your larger program if desired.
Run a number of tests to see how the timing works out and, if you are consistently off by some margin, adjust your latency offset.
You can run a recurring latency test to determine whether the latency changes over time.
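A minimal sketch of the measure-then-schedule idea above (assumes Node 18+ for the built-in fetch; the URLs, sample count, and safety bias are illustrative):

```javascript
// Measure one-way latency, then schedule the send with setTimeout.
const TARGET_TS = Date.parse('2030-01-01T00:00:00.000Z'); // example target time

async function measureOneWayLatencyMs(url, samples = 10) {
  let total = 0;
  for (let i = 0; i < samples; i++) {
    const start = Date.now();
    await fetch(url, { method: 'HEAD' }); // cheap probe request
    total += (Date.now() - start) / 2;    // rough one-way estimate: RTT / 2
  }
  return total / samples;
}

async function scheduleSend(url, body) {
  const latency = await measureOneWayLatencyMs(url);
  // Fire early by the estimated one-way latency so the request *arrives* near
  // the target; the small positive bias trades lateness for never being early.
  const fireAt = TARGET_TS - latency + 5; // +5ms bias, tune from your test runs
  const delay = Math.max(0, fireAt - Date.now());
  setTimeout(() => {
    fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    });
  }, delay);
}

scheduleSend('https://example.com/endpoint', { hello: 'world' });
```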
As others have said, there is no way to predict the natural response time of the target server (how long it takes to start processing your request from the moment your network packets arrive there). If lots of incoming requests are all racing for the same time slot, then your request will get interleaved among the others and served in some order that you do not control.
Other things you can consider: if the target server supports the latest HTTP specifications, you can keep a pre-established HTTP connection with the host (perhaps by targeting some other endpoint) alive, ready for you to send your precisely timed request on. This would take some experimentation to figure out what the target host supports and whether this would work.

How to handle multiple API Requests

I am working with the Google Admin SDK to create Google Groups for my organization. I can't add members to a group while creating it; ideally, when I create a new group I'd like to add roughly 60 members. In addition, the ability to add members in bulk after the group is created (a batch request) was deprecated in August 2020. Right now, after I create a group, I need to make a web request to the API to add each member of the group individually (about 60 members).
I am using Node.js and Express. Is there a good way to handle 60 web requests to an API? I don't know how taxing this will be on my server. If anyone has any resources to share where I can learn about the impact this would have on a Node.js server, that would be great.
Fortunately, these groups aren't created often, maybe 15 a month.
One idea I have is to offload the work to something like a cloud function, so my Node server makes one request to the cloud function, and the cloud function then makes all the additional requests to add members to the group. I'm not 100% sure this is necessary, and I'm curious about other approaches.
Limits and Quotas
Note that adding group members may take 10 minutes to propagate.
The rate limit for the Directory API is 3000 queries per 100 seconds per IP address, which works out to around 30 per second. 60 requests is not a large number, but if you try to send them all within a few milliseconds the system may extrapolate the rate and deem it over the limit. I wouldn't expect that to happen here, though it's probably best to test it on your end with your own system and connection.
Exponential Backoff
If you do need to make many requests, this is the method Google recommends. It involves retrying a failed request and exponentially increasing the wait time between attempts until it reaches 16 seconds (you can always implement a longer wait before retrying). It's only 60 requests, after all.
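A minimal backoff sketch (addMember is a hypothetical wrapper around the Directory API member-insert call, not part of any library):

```javascript
// Retry with exponentially increasing waits, capped near the 16s Google suggests.
async function withBackoff(fn, maxRetries = 5) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      // Waits of ~1s, 2s, 4s, 8s, 16s plus up to 1s of random jitter.
      const waitMs = Math.min(2 ** attempt * 1000, 16000) + Math.random() * 1000;
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
  }
}

// Usage, e.g. inside a loop over your 60 members:
// await withBackoff(() => addMember(groupId, email));
```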
Batch Requests
The previously mentioned methods should work without issue for you; since there are only 60 requests to make, it won't put any real stress on the system. That said, the most performant way to handle many requests to the Directory API is to use a batch request. This allows you to bundle all your member calls into one large batch of up to 1000 calls, which also gives you a nice cushion in case you need to increase your request volume in the future!
EDIT - I am sorry, I missed that you mentioned that batching is deprecated. Only global batching is deprecated. If you send a batch request to a specific API, batching is still supported. What you can no longer do is send a single batch request spanning different APIs, like Directory and Sheets in one.
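A hedged sketch of such a single-API batch call, following Google's documented homogeneous-batching format (OAuth token acquisition is omitted; groupKey and emails are examples; the endpoint and per-batch limits are worth verifying against the current docs; assumes Node 18+ for fetch):

```javascript
// Bundle member-insert calls into one multipart/mixed batch request.
async function addMembersBatch(groupKey, emails, accessToken) {
  const boundary = 'batch_members';
  const parts = emails.map((email, i) => [
    `--${boundary}`,
    'Content-Type: application/http',
    `Content-ID: <item-${i}>`,
    '',
    `POST /admin/directory/v1/groups/${groupKey}/members`,
    'Content-Type: application/json',
    '',
    JSON.stringify({ email, role: 'MEMBER' }),
  ].join('\r\n')).join('\r\n');

  const res = await fetch('https://www.googleapis.com/batch/admin/directory_v1', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': `multipart/mixed; boundary=${boundary}`,
    },
    body: `${parts}\r\n--${boundary}--`,
  });
  return res.text(); // a multipart/mixed response, one part per call
}
```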
References
Limits and Quotas
Exponential Backoff
Batch Requests

Throttling requests to third-party APIs when using cloud functions

We're running our Node backend on Firebase Functions, and have to frequently hit a third-party API (HubSpot), which is rate-limited to 100 requests / 10 seconds.
We're making these requests to HubSpot from our cloud functions, and we often exceed HubSpot's rate limit during campaigns or other spikes in website usage. Also, since they are all write requests that update data on HubSpot, these requests cannot be made out of order.
Is there a way to throttle our requests to HubSpot, so as to not exceed their rate limit? Open to suggestions that may not necessarily involve cloud functions, although that would be preferred.
Note: When I say "throttle", I mean that all requests to HubSpot need to go through. I'm trying to achieve something similar to what Lodash's throttle method does, if that makes sense.
What we usually do in this case is store the data in a database, and then pass it over to HubSpot in a tempered way (i.e. without exceeding their rate limit) using a cron job that runs every minute. For every data item that we successfully pass to HubSpot, we mark it as "success" in the database.
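A rough sketch of that pattern on Firebase (the collection name, ordering field, pacing, and sendToHubSpot are all illustrative; the where + orderBy query needs a composite index):

```javascript
// A scheduled function drains a Firestore-backed queue slowly enough to stay
// under HubSpot's 100 requests / 10 seconds limit, in write order.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.drainHubSpotQueue = functions.pubsub
  .schedule('every 1 minutes')
  .onRun(async () => {
    const snap = await admin.firestore().collection('hubspotQueue')
      .where('status', '==', 'pending')
      .orderBy('createdAt') // preserve write order; requires a composite index
      .limit(100)           // keeps each run well inside the function timeout
      .get();

    for (const doc of snap.docs) {
      await sendToHubSpot(doc.data().payload);
      await doc.ref.update({ status: 'success' });
      await new Promise((resolve) => setTimeout(resolve, 120)); // ~8 req/s
    }
  });

async function sendToHubSpot(payload) {
  // Stand-in: replace with the real HubSpot API write.
}
```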
Cloud Functions cannot be rate limited; it will always attempt to service requests and events as fast as they arrive. But you can use Cloud Tasks to create a task queue that spreads the load of some work over time using a configured rate limit. A task queue can target another HTTP function. This effectively makes your processing asynchronous, but it is really the only mechanism Google Cloud gives you to smooth out load.
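A sketch of that approach with the @google-cloud/tasks client (project, region, queue name, and target URL are placeholders; note that Cloud Tasks does not strictly guarantee ordering, although a queue limited to one concurrent dispatch comes close):

```javascript
// Enqueue HubSpot updates on a queue whose rate limit matches HubSpot's
// 100 requests / 10s. Queue creation is typically a one-time step, e.g.:
//   gcloud tasks queues create hubspot-queue \
//     --max-dispatches-per-second=10 --max-concurrent-dispatches=1
const { CloudTasksClient } = require('@google-cloud/tasks');
const client = new CloudTasksClient();

async function enqueueHubSpotUpdate(payload) {
  const parent = client.queuePath('my-project', 'us-central1', 'hubspot-queue');
  await client.createTask({
    parent,
    task: {
      httpRequest: {
        httpMethod: 'POST',
        // An HTTP-triggered function that performs the actual HubSpot write.
        url: 'https://us-central1-my-project.cloudfunctions.net/pushToHubSpot',
        headers: { 'Content-Type': 'application/json' },
        body: Buffer.from(JSON.stringify(payload)).toString('base64'),
      },
    },
  });
}
```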

One instance, one request at a time on App Engine Flexible

I am using:
App Engine Flexible, custom runtime
Node.js as the base image
Express
Cloud Tasks for queuing the requests
a Puppeteer job
My Requirements:
20GB RAM
a long-running process
Because of my unique requirement, I want one request to be handled by only one instance; only when the instance gets free or the request times out should it get a new request.
I have managed to reject other requests while the instance is processing a request, but I am not able to figure out the appropriate automatic scaling settings.
Please suggest the best way to achieve this.
Thanks in advance!
In your app.yaml, try restricting max_instances and max_concurrent_requests; a hedged sketch follows below.
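Something like the following, with the caveat that the Flexible environment spells the instance cap max_num_instances (max_instances is the Standard environment's key) and exposes concurrency as the scaling target target_concurrent_requests; all values are illustrative and worth checking against the current app.yaml reference:

```yaml
runtime: custom
env: flex

automatic_scaling:
  min_num_instances: 1
  max_num_instances: 5          # cap how far the app can scale out
  target_concurrent_requests: 1 # treat a busy instance as fully loaded

resources:
  memory_gb: 20                 # matches the 20GB RAM requirement
```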
I also recommend looking into rate limiting your Cloud Tasks queue in order to reduce unnecessary attempts to send requests. You may also want to increase your MIN_INTERVAL for retry attempts to spread out requests.
Your task queue will continue to process and send tasks at the rate you have set, so if your instance rejects a request, it will go into a retry pattern. It seems like you're focused on the scaling of App Engine, but your issue is with Cloud Tasks. You may want to schedule your tasks so they fire at the interval you want.
You could set readiness checks on your app.
When an instance is handling a request, have the readiness check return a non-ready status; 429 (Too Many Requests) seems like a good option.
This should keep traffic away from that specific instance.
Once the request is finished, return a 200 from the readiness endpoint to signal that the instance is ready to accept a new request.
However, I'm not sure how this will work with autoscaling. Since the app will only scale up once the average CPU is over the defined threshold, if all instances are occupied but do not reach that threshold, the load balancer won't know where to route requests (no instances are ready), and it won't scale up.
You could play around a little with this idea and manual scaling, or by programmatically changing min_instances (in automatic scaling) through the GAE Admin API.
Be sure to always return a 200 for the liveness check, or the instance will be killed because it is considered unhealthy.
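A minimal Express sketch of that readiness flip (the paths assume the Flexible environment's updated split health checks, and runPuppeteerJob is a stand-in for the long-running work):

```javascript
const express = require('express');
const app = express();

let busy = false;

// Non-ready while busy, so the load balancer stops routing to this instance.
app.get('/readiness_check', (req, res) => res.status(busy ? 429 : 200).end());

// Always healthy, so the instance is never killed as unhealthy.
app.get('/liveness_check', (req, res) => res.status(200).end());

app.post('/task', async (req, res) => {
  if (busy) return res.status(503).end(); // reject overlapping requests
  busy = true;
  try {
    await runPuppeteerJob();
    res.status(200).end();
  } finally {
    busy = false; // ready for the next request
  }
});

async function runPuppeteerJob() {
  // Stand-in for the long-running Puppeteer work.
}

app.listen(process.env.PORT || 8080);
```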

Amazon CloudFront timeout error

I am working on a Node project which generates data using the mongodb dataset-generator, and I've added my data-generation server code to AWS Lambda, which I've exposed through AWS API Gateway.
The issue is that CloudFront times out the request after 30 seconds, and the computation I am doing cannot be broken into multiple API hits. Can anyone from the community help me out here, or suggest an alternative that lets me make the request without it timing out?
I believe I originally misinterpreted the nature of the problem you are experiencing.
The issue is that CloudFront times out the request after 30 seconds
I assumed, since you mentioned CloudFront, that you had explicitly configured CloudFront in front of your API Gateway endpoint.
It may be true that you didn't, since API Gateway implicitly uses services from "the AWS Edge Network" (a.k.a. CloudFront) to provide a portion of its service.
My assumption was that API Gateway's "hidden" CloudFront distributions had different behavior than a standard CloudFront distribution, but apparently that is not the case to any extent that is relevant, here.
In fact, API Gateway also has a 30-second response timeout, and its limits table lists "Can Be Increased" as "No". So the "CloudFront" timeout is essentially the same timeout as the one imposed by API Gateway.
This, of course, takes precedence over any longer timeout on your Lambda function.
There isn't a simple and obvious workaround. This seems like a task that is outside the scope of the design of API Gateway.
One option, which I personally tend to dislike when APIs force it on me, is to require pagination. I really hate that... just give me the data, I can handle it... but it has its practical applications. If the request is for 1,000,000 rows, return rows 1 through 1000 along with a next_url that fetches rows 1001 through 2000.
Another option is for the initial function to submit the request to a second Lambda function, using asynchronous invocation, for processing, and to return a redirect that sends the user to a new URL where the data can be fetched. Now, stick with me, because this solution sounds really horrible, but it's theoretically viable. The asynchronous function would do the work in the background and store the response in S3. The URL where the data is fetched would be a third Lambda function that polls the key in the S3 bucket where the data is to be stored, say once per second for 20 seconds. If the file shows up, it pre-signs a URL for that location and issues a final redirect to the browser with the signed URL as the Location. If the file does not show up, it redirects the browser back to itself so that polling continues until the file shows up or the browser gets tired of the redirect loop.
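A rough sketch of that third (polling) function, assuming the background worker writes its result to a known S3 key (the bucket, key scheme, and simplified redirect loop are illustrative; AWS SDK v3):

```javascript
const { S3Client, HeadObjectCommand, GetObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

const s3 = new S3Client({});
const BUCKET = 'my-results-bucket'; // illustrative

exports.handler = async (event) => {
  const jobId = event.queryStringParameters.jobId;
  const key = `results/${jobId}.json`;

  try {
    // If the worker has finished, the object exists: presign and redirect.
    await s3.send(new HeadObjectCommand({ Bucket: BUCKET, Key: key }));
    const url = await getSignedUrl(
      s3,
      new GetObjectCommand({ Bucket: BUCKET, Key: key }),
      { expiresIn: 60 }
    );
    return { statusCode: 302, headers: { Location: url } };
  } catch (err) {
    // Not there yet: bounce the browser back here so polling continues.
    return { statusCode: 302, headers: { Location: `/poll?jobId=${jobId}` } };
  }
};
```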
Sketchy? Yes. Viable? Probably. A good idea? That's debatable... but it seems as if you are doing something that really is outside the fundamental design parameters of API Gateway, so either a fairly complex workaround is needed, or you'll want to implement this somewhere other than API Gateway.
Of course, you could write your own "API Gateway" that runs on EC2 and invokes Lambda functions directly through the Lambda API, returning results to the caller. Lambda still handles the work and the scaling, but you avoid the 30-second timeout, and 30 seconds is a long time to wait for a web response anyway.
I see that this is an old question, but I need to say that starting from March 2017 it is possible to change the origin response timeout and keep-alive timeout:
https://aws.amazon.com/about-aws/whats-new/2017/03/announcing-configure-read-timeout-and-keep-alive-timeout-values-for-your-amazon-cloudfront-custom-origins/
The maximum value for the origin response timeout is 60 seconds, but if needed AWS can increase it to 180 seconds (with a support request).
