CAP - Gateway Timeout - How to increase the timeout of incoming requests - sap-cloud-sdk

I trigger a POST function import (an action in CDS), which typically takes about 2 minutes of processing. The POST operation completes successfully in Java; however, I get a Gateway Timeout.
How can I increase the timeout of incoming requests? I have tried setting the property INCOMING_CONNECTION_TIMEOUT: 0 in the mta.yaml of the service project, as well as using the commands
cf set-env x-service-name-blue INCOMING_CONNECTION_TIMEOUT 0
cf restage x-service-name-blue
It did not work either.
Could you assist?

Update: I think the correct environment variable on the approuter is called SESSION_TIMEOUT. Can you try this one instead?
This is for the XS Advanced approuter, and I'm not sure whether it still applies to the one used for CF apps, but this documentation suggests that it's a property of the approuter, so you can try setting it there.
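For reference, a minimal sketch of how such a property could be set on the approuter module in mta.yaml (the module name and value are illustrative, not taken from your project):

modules:
  - name: x-service-approuter   # hypothetical approuter module name
    type: approuter.nodejs
    path: approuter
    properties:
      SESSION_TIMEOUT: 30       # session timeout in minutes; set it above the expected processing time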

Related

Batch headers are not considered for individual requests with Cloud SDK 3.66

Ours is a DWC-based application, Master Data Proxy Service (MDPS).
We are getting an error because the required DWC headers (dwc-tenant, dwc-subdomain, dwc-jwt, etc.)
are not being propagated to the individual request contexts of a batch request.
I did some debugging on this and here are my observations:
We create a destination with DwcHeaderProvider as a header provider, using the following code:
DefaultHttpDestination.builder(megacliteUri + MEGACLITE_VERSION + serviceBinding)
    .keyStore(dwcUtil.getKeyStore())
    .keyStorePassword("changeit")
    .proxyType(ProxyType.INTERNET)
    .headerProviders(new DwcHeaderProvider())
    // The destination that Megaclite should use to perform the request
    .header(Constants.DESTINATION_NAME, Constants.DEFAULT_DESTINATION_VALUE)
    .build();
DwcHeaderProvider in turn fetches all the relevant headers, including the DWC headers, but with the new version this is no longer happening.
I can see that internally the headers are fetched from a header facade, which in previous versions used to be DefaultRequestHeaderFacade.
Now the facade is initialized as com.sap.cds.integration.cloudsdk.facade.CdsRequestHeaderFacade, which comes from the jar
com/sap/cds/cds-integration-cloud-sdk/1.23.0/cds-integration-cloud-sdk-1.23.0.jar.
Can you look into this? It is a high-priority issue for us, since batch requests are completely broken and our UI relies on them.
Thanks,
Sachin
Update: Please use CAP 1.24 - the issue is fixed already.
Until the problem is solved and a proper fix is released, can you try a workaround?
Before instantiating the destination run the following code snippet:
import com.sap.cloud.sdk.cloudplatform.requestheader.RequestHeaderAccessor;
import com.sap.cloud.sdk.cloudplatform.requestheader.DefaultRequestHeaderFacade;

// Force the default header facade instead of the CDS one, so DwcHeaderProvider sees the incoming request headers again
RequestHeaderAccessor.setHeaderFacade(new DefaultRequestHeaderFacade());

How to increase the AWS lambda to lambda connection timeout or keep the connection alive?

I am using boto3 lambda client to invoke a lambda_S from a lambda_M. My code looks something like
import json
import boto3
import botocore.config

cfg = botocore.config.Config(
    retries={'max_attempts': 0},
    read_timeout=840,
    connect_timeout=600,
    # tried also by including region_name="us-east-1"
)
lambda_client = boto3.client('lambda', config=cfg)  # even tried without config
invoke_response = lambda_client.invoke(
    FunctionName=lambda_name,
    InvocationType='RequestResponse',
    Payload=json.dumps(request)
)
lambda_S is supposed to run for about 6 minutes, and I want lambda_M to stay alive to receive the response back from lambda_S, but lambda_M is timing out after logging a CloudWatch message like
"Failed to connect to proxy URL: http://aws-proxy..."
I searched and found something like "configure your HTTP client, SDK, firewall, proxy or operating system to allow for long connections with timeout or keep-alive settings". But the issue is I have no idea how to do any of these with Lambda. Any help is highly appreciated.
I would approach this a bit differently. Lambdas charge you by the second, so in general you should avoid waiting in them. One way you can do that is to create an SNS topic and use it as the messenger to trigger another Lambda.
Workflow goes like this.
SNS-A -> triggers Lambda-A
SNS-B -> triggers lambda-B
So if your lambda-B wants to send something to lambda-A for processing and needs the results back, then from lambda-B you send a message to the SNS-A topic and quit.
SNS-A triggers lambda-A, which does its work and at the end sends a message to SNS-B.
SNS-B triggers lambda-B.
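As a rough sketch of that hand-off (the topic ARN and payload shape are placeholders, not taken from your setup), lambda-B publishes and exits instead of waiting:

import json
import boto3

sns = boto3.client("sns")

def handler(event, context):
    # Hand the work to SNS-A and return immediately;
    # lambda-A will post its result to SNS-B when it is done.
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:sns-a",  # placeholder ARN
        Message=json.dumps({"request": event}),
    )
    return {"status": "submitted"}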
AWS has example documentation on what policies you should put in place, here is one.
I don't know how you are automating the deployment of native assets like SNS and Lambda; assuming you will use CloudFormation:
you create your AWS::Lambda::Function,
you create an AWS::SNS::Topic,
and in its definition you add a 'Subscription' property and point it to your lambda.
So in our example, your SNS-A will have a subscription defined for lambda-A.
Lastly, you grant SNS permission to trigger the lambda: AWS::Lambda::Permission.
When these three are in place, you are all set to send messages to the SNS topic, which will now be able to trigger the lambda (a minimal sketch follows).
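A minimal CloudFormation sketch of that topic-to-lambda wiring (resource names are placeholders, and LambdaA is assumed to be an AWS::Lambda::Function defined elsewhere in the template):

Resources:
  SnsTopicA:
    Type: AWS::SNS::Topic
    Properties:
      Subscription:
        - Protocol: lambda
          Endpoint: !GetAtt LambdaA.Arn   # subscribe lambda-A to SNS-A
  LambdaAInvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !GetAtt LambdaA.Arn
      Principal: sns.amazonaws.com
      SourceArn: !Ref SnsTopicA           # allow SNS-A to invoke lambda-A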
You will find SO answers to questions on how to do this in CloudFormation (example), but you can also read up on the AWS CloudFormation documentation.
If you are not worried about automating this and just want to test it manually, then the aws-cli is your friend.

Azure Function Timer Trigger & API management - Manual execution returns 404

I have a function app with:
a few functions triggered by a Timer Trigger
and some triggered by the HTTP Trigger.
I have also an Azure API Management service set up for the function app, where the HTTP Triggered functions have their endpoints defined.
I am trying to trigger one of my timer triggered functions manually as per the guide here https://learn.microsoft.com/en-us/azure/azure-functions/functions-manually-run-non-http
I am however getting a 404 result in Postman, despite the seemingly correct URL and x-functions-key.
(Screenshots of the function, the key, and the request omitted.)
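In code form, the request I'm sending looks roughly like this (host, function name, and key are placeholders):

import requests

url = "https://<APP_NAME>.azurewebsites.net/admin/functions/<FUNCTION_NAME>"
headers = {
    "x-functions-key": "<MASTER_KEY>",   # the function app's master key
    "Content-Type": "application/json",
}
# The admin endpoint expects a JSON body; an empty "input" is enough for a timer trigger
response = requests.post(url, headers=headers, json={"input": ""})
print(response.status_code)   # expecting 202 Accepted, but getting 404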
I also noticed that:
if I don't include the x-functions-key header, then I get a 401 Unauthorized result,
and if I include an incorrect key, then I get 403 Forbidden.
Could it be related to the API management service being set up for the function app?
How can I troubleshoot this further?
I have managed to solve it.
It turns out that the Azure Functions timer trigger requires a six-part CRON expression (I was only aware of the five-part style).
Without that it does not work, and sadly this is not easily noticeable in the UI.
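For example, with an every-five-minutes schedule:

*/5 * * * *        the five-part form that did not work for me
0 */5 * * * *      the six-part NCRONTAB form: {second} {minute} {hour} {day} {month} {day-of-week}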
I have realized that by investigating Application Insights logs:
The function page shows that everything is fine:
Changing the CRON format fixed the 404 issue, and I started getting a 202 Accepted response.
As a bonus note, I have to add:
Even though the response was 202 Accepted, the triggering didn't work correctly, because my function's return type was Task<IActionResult>, which is not accepted for timer-triggered functions.
Again, only ApplicationInsights showed that anything is wrong:
The 'MonkeyUserRandom' function is in error: Microsoft.Azure.WebJobs.Host: Error indexing method 'MonkeyUserRandom'. Microsoft.Azure.WebJobs.Host: Cannot bind parameter '$return' to type IActionResult&. Make sure the parameter Type is supported by the binding. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
That's a bonus tip for the "manual triggering of a non-HTTP function does not work" case.
I tested it on my side and it works fine. Please refer to the screenshot below:
Please check whether you are requesting https://xxx.azurewebsites.net/admin/functions/TimerTrigger1 and not https://xxx.azurewebsites.net/admin/functions/TimerTrigger. Note that it's "TimerTrigger1".
I requested .../TimerTrigger in my first test because the documentation shows QueueTrigger, and it responded with 404.

In Azure functions (Http triggers, Python 3), how do I control the maximum concurrent requests the server can handle?

Using Python 3.8 for an Azure Functions app in which all the functions are HTTP triggers. We have HTTP/2 enabled ...
Below is our host.json file
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[1.*, 2.0.0)"
  }
}
We are sending 30 requests at the same time from the client (Angular 9) application to the server (15 are OPTIONS requests and the other 15 are GETs), and we notice that 20 of those are handled relatively quickly but the rest take noticeably longer to process. Below are two of the requests side by side.
For the longer requests, I have verified through curl and Postman that individually they return much more quickly, which leads me to believe there is some concurrency setting on the server I can adjust, but I can't figure out where.
Edit: Here's a little more information. My anonymous function begins like the below ...
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    """."""
    logging.info("received request")
but note the times reported in the Azure log for that function when the function responds slowly ...
2020-11-17 14:29:24.094 Executing 'Functions.download-image' (Reason='This function was programmatically called via the host APIs.', Id=xxx-xxx)
Information
2020-11-17 14:29:32.143 received request
There is an 8 second delay between when I'm told the function was invoked and the first logging statement from the function. Below is what my "Scale Out" looks like ...
For this problem, you can check the Scale out tab of your function app.
You can scale out the instances manually, or you can also define custom autoscale according to your requirements.
As you mentioned the first 20 requests are handled relatively quickly, I guess you chose the P1 App Service plan (as the screenshot below shows), because the P1 plan has a maximum of 20 instances. If your current App Service plan only allows a maximum of 20 instances, you need to scale up your plan to a higher pricing tier.
By the way, if you want the requests to be handled quickly, you'd better enable "Always on" in your first screenshot. Otherwise your function app will go idle if it hasn't received a request for a long time.
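If instance count turns out not to be the limit, host.json also exposes HTTP concurrency knobs that may be worth experimenting with; a minimal sketch (the values are illustrative, not recommendations):

{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxConcurrentRequests": 100,
      "maxOutstandingRequests": 200
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[1.*, 2.0.0)"
  }
}

For Python apps, the FUNCTIONS_WORKER_PROCESS_COUNT application setting is another knob that affects how many invocations a single instance processes in parallel.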

How to set Infinite Timeout for Azure Function app v2.0

I have a very long-running process which is hosted using an Azure Function App (even though that's not recommended for long-running processes), targeting v2.0. Earlier it was targeting the v1.0 runtime, so I didn't face any function timeout issues.
But now, after updating the runtime to target v2.0, I am not able to find any way to set the function timeout to infinite, as was possible with v1.0.
Can someone please help me out on this ?
From your comments it looks like breaking the work up into smaller functions, or using something other than Functions, isn't an option for you currently. In that case, AFAIK you can still do it with v2.0 as long as you're ready to use an "App Service Plan".
The max limit of 10 minutes only applies to the "Consumption Plan".
In fact, the documentation explicitly suggests that if you have functions that run continuously or near continuously, then an App Service plan can be more cost-effective as well.
You can use the "Always On" setting. Read about it on Microsoft Docs here.
Azure Functions scale and hosting
Also, the documentation clearly states that the default value for the timeout with an App Service plan is 30 minutes, but it can be set to unlimited manually.
Changes in features and functionality
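For reference, the setting in question lives in host.json; a minimal sketch with an explicit, purely illustrative two-hour limit on an App Service plan might look like:

{
  "version": "2.0",
  "functionTimeout": "02:00:00"
}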
UPDATE
From our discussion in the comments, as a null value isn't working for you like it did in version 1.x, please try taking out the "functionTimeout" setting completely.
I came across 2 different SO posts mentioning something similar, and the Microsoft documentation text also says there is no real limit. Here are the links to the SO posts I came across:
SO Post 1
SO Post 2
One way of doing it is to implement eternal orchestrations from Durable Functions. It allows you to implement an infinite loop with dynamic intervals. Of course, you need to slightly modify your code to support stopping/starting the function at any time (you must pass the state between calls).
[FunctionName("Long_Running_Process")]
public static async Task Run(
    [OrchestrationTrigger] DurableOrchestrationContext context)
{
    var initialState = context.GetInput<object>();
    var state = await context.CallActivityAsync<object>("Run_Long_Running_Process", initialState);
    if (state == ???) // stop execution when the long running process is completed
    {
        return;
    }
    context.ContinueAsNew(state);
}
You cannot set an Azure Function App timeout to infinite. I believe the longest any Azure Function app will consistently run is 10 minutes. As you stated, Azure Functions are not meant for long-running processes. You may need to find a new solution for your app, especially if you will need to scale it up at all in the future.
