Should it be possible to cache server responses to service calls done via Gateway.Send() from within another service?
I've seen your comments stating that if I enable caching with the [CacheResponse] attribute it should work.
However, it isn't caching the response for me. I'm using ServiceStack v5.1.0.
Thanks
ServiceStack's caching features only cache HTTP requests, which are serialized in the registered cache provider and written directly to the response output stream.
In-process Service Gateway requests are never serialized, cached or written to a stream; they're effectively an in-process C# method call when calling an internal Service. If the gateway request is routed to a remote Service that's cached, it will return the cached response as per a normal HTTP request.
Otherwise, if you want to cache an in-memory Service Gateway request, you can use a ConcurrentDictionary in your Service implementation and memoize results, just as you would when caching any other expensive C# logic.
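For example, here is a minimal sketch of that memoization approach, assuming hypothetical GetPrices / GetPricesResponse DTOs and a request keyed by a Symbol property (cache expiry and invalidation are omitted for brevity):

using System.Collections.Concurrent;
using ServiceStack;

public class GetPrices : IReturn<GetPricesResponse> { public string Symbol { get; set; } }
public class GetPricesResponse { public decimal Price { get; set; } }
public class GetQuote : IReturn<GetPricesResponse> { public string Symbol { get; set; } }

public class QuoteService : Service
{
    // Memoized results of the in-process Gateway call, keyed by symbol
    static readonly ConcurrentDictionary<string, GetPricesResponse> Cache =
        new ConcurrentDictionary<string, GetPricesResponse>();

    public object Any(GetQuote request) =>
        Cache.GetOrAdd(request.Symbol, key => Gateway.Send(new GetPrices { Symbol = key }));
}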
Related
I'm in the process of creating a Kubernetes application. I have one microservice that performs an operation on data that it receives.
My API does some basic authentication and request validation checks; however, here is where I'm confused about what to do.
The API gateway has two endpoints: one performs an individual operation, forwarding the request to the microservice, and the other receives an array of data, which it then sends off as individual requests to the microservice, allowing it to scale independently.
The main thing I'm worried about is scaling. The microservice depends on external APIs and could take a while to respond, or could fail to scale quickly enough. How would this impact the API gateway?
I'm worried that the API gateway could end up being overwhelmed by external requests. What is the best way to implement this?
Should I use some kind of custom metric and somehow tell Kubernetes not to send traffic to API gateway pods that are handling more than X requests? Or should I set a hard cap, using a counter on the API gateway to limit the number of requests that pod is handling by returning an error or something?
I'm using Node.js for the API gateway code, so aside from memory limits, I'm not sure if there's an upper limit to how many requests the gateway can handle.
I have a service behind an Azure API Management instance running in the Consumption tier. When no traffic has been sent to the API Management instance in a while (15 minutes isn't enough to trigger it, but an hour is), the first request sent takes about 3 minutes 50 seconds and returns an HTTP 500 with this body content:
<html><head><title>500 - The request timed out.</title></head><body> <font color ="#aa0000"> <h2>500 - The request timed out.</h2></font> The web server failed to respond within the specified time.</body></html>
Following requests work fine. Based on application logs and testing with an API Management instance pointing to my local machine via ngrok, it doesn't look like API management is even trying to connect to the backend for these requests. For the local test, I ran my app under the debugger, put a breakpoint in my service method (there's no auth that could get in the way) and watched the "output" window in Visual Studio. It never hit my breakpoint, and never showed anything in the output window for that "500 request timed out" request. When I made another request to API Management, it forwarded along to my service as expected, giving me output and hitting my breakpoint.
Is this some known issue with the API Management Consumption tier that I need to find some way to work around (i.e. a service regularly pinging it)? Or is it a possible configuration issue with the way I've set up my API Management instance?
My API management instance is deployed via an ARM template using the consumption tier in North Central US and has some REST and some SOAP endpoints (this request I've been using for testing is one of the SOAP ones and uses the envelope header to specify the SOAP action).
Additional information:
The request in question is about 2KB, and a response from the server (which doesn't come into play in this scenario, as the call never makes it to my server) is about 1KB, so it's not an issue with request/response sizes.
When I turn on request tracing (by sending the Ocp-Apim-Subscription-Key + Ocp-Apim-Trace headers), this 500 response I'm getting doesn't have the Ocp-Apim-Trace-Location header with the trace info that other requests do.
I get this behavior when I send 2 requests (to get the 4-minute 500 response and then a normal 5s 200 response), wait an hour, and make another request (which gets the 4-minute delay and 500 response), so I don't believe this could be related to the instance serving too much traffic (at least too much of my traffic).
Further testing shows that this happens about once every 60 to 90 minutes, even if I send one request every minute trying to keep the APIM instance "alive".
The HTTP 500 (Internal Server Error) status code indicates that the server encountered an unexpected condition that prevented it from fulfilling the request (possibly due to a large payload). There is no issue at the APIM level. Analyze the APIM inspector trace and you should see the HTTP 500 status code under the 'forward-request' response attribute.
You need to understand what is returning these HTTP 500 responses: APIM or the backend SOAP API. The best way to get that answer is to collect an APIM inspector trace to inspect the request and response. See Debug your APIs using request tracing.
The Consumption tier exposes serverless properties. It runs on shared infrastructure, can scale down to zero in times of no traffic, and is billed per execution. Connections are pooled and reused unless explicitly closed by the back end. See API Management service limits.
1. This pattern of symptoms is also often known to occur due to source network address translation (SNAT) port limits with your APIM service.
Whenever a client calls one of your APIM APIs, Azure API Management service opens a SNAT port to access your backend API. Azure uses SNAT and a Load Balancer (not exposed to customers) to communicate with end points outside Azure in the public IP address space, as well as end points internal to Azure that aren't using Virtual Network service endpoints. (This situation is only applicable to backend APIs exposed on public IPs.)
Each instance of API Management service is initially given a pre-allocated number of SNAT ports. That limit affects opening connections to the same host and port combination. SNAT ports are used up when you have repeated calls to the same address and port combination. Once a SNAT port has been released, the port is available for reuse as needed. The Azure Network load balancer reclaims SNAT ports from closed connections only after waiting four minutes.
A rapid succession of client requests to your APIs may exhaust the pre-allocated quota of SNAT ports if these ports are not closed and recycled fast enough, preventing your APIM service from processing client requests in a timely manner.
The following strategies can be considered:
Use multiple IPs for your backend URLs
Place your APIM and backend service in the same VNet
Place your APIM in a virtual network and route outbound calls to Azure Firewall
Consider response caching and other backend performance tuning (configuring certain APIs with response caching reduces latency for client applications calling your API and reduces load on your APIM backend).
Consider implementing access restriction policies (a rate-limiting policy can be used to prevent API usage spikes on a per-key basis by limiting the call rate over a specified time period).
2. The forward-request policy forwards the incoming request to the backend service specified in the request context. The backend service URL is specified in the API settings and can be changed using the set-backend-service policy.
Policy statement:
<forward-request timeout="time in seconds" follow-redirects="false | true" buffer-request-body="false | true" buffer-response="true | false" fail-on-error-status-code="false | true"/>
Example:
The following API level policy forwards all API requests to the backend service with a timeout interval of 60 seconds.
<!-- api level -->
<policies>
<inbound>
<base/>
</inbound>
<backend>
<forward-request timeout="60"/>
</backend>
<outbound>
<base/>
</outbound>
</policies>
Attribute: timeout="integer"
Description: The amount of time in seconds to wait for the HTTP response headers to be returned by the backend service before a timeout error is raised. Minimum value is 0 seconds. Values greater than 240 seconds may not be honored, as the underlying network infrastructure can drop idle connections after this time.
Required: No
Default: None
This policy can be used in the following policy sections and scopes.
Policy sections: backend
Policy scopes: all scopes
Check out similar feedback for your reference. Also, refer to the detailed troubleshooting guidance for 500 errors in APIM.
I am facing an issue with Azure App Services: when we send a request with the HTTP CONNECT method, App Service returns a Bad Gateway error. Along with the response, it exposes the Server header as well. Is there a way to fix this?
I am already removing the Server header in Web.config and in the Application_Start method of Global.asax.
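For reference, this kind of removal usually looks something like the snippet below (a common approach shown as an assumption about what the question means; one common variant uses Application_PreSendRequestHeaders rather than Application_Start, and, as the answer that follows notes, it only affects headers the application itself emits, not the one added by Azure's front end):

// Global.asax.cs — one common way to strip the Server header from responses
// produced by the application itself (assumed approach, not the asker's exact code)
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
    {
        // Remove the Server header just before response headers are sent
        // (requires the IIS integrated pipeline)
        HttpContext.Current.Response.Headers.Remove("Server");
    }
}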
I raised a support issue with Microsoft Azure. This response is not coming from the application server; it is handled by their front end, and they can't remove that specific header.
I need to migrate an API server powered by node restify to something using API Gateway + Lambda functions provided by AWS.
The API server (simple GET/POST stuff, no DB involved) serves as a proxy to talk to a CMS system and fetch data for clients.
At the moment, ETag caching is done through restify middleware. I wonder what I need to do to achieve the same thing in the new solution (API Gateway + Lambda)?
As a side note, what I came up with is: save the response from the CMS into S3/CloudFront with the ETag caching mechanism enabled and let them determine whether the cached response in the browser can be used.
I wonder if that is a good practice?
thanks
First of all, API Gateway has a separate caching option where you can cache responses for a particular TTL, which is more suitable for API content caching.
If your CMS responses mainly contain static content and the requirement for the proxy is to pass through and cache the content, use AWS CloudFront directly in front of your CMS.
If you are using API Gateway and Lambda as a proxy and significant data transformation or generation is done in Lambda, then you can set up AWS CloudFront in front of API Gateway to cache the responses.
If only very light data transformation and generation happens at the proxy (API Gateway with Lambda), then you can use just CloudFront in front of your CMS and use Lambda@Edge, which runs at CloudFront edge locations, to do the light modifications to the responses coming from the CMS, also with caching.
I don't see a clear need for storing the responses in S3 and then serving them through CloudFront, unless your CMS has direct support for pushing content to S3 automatically.
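If the ETag logic ends up staying in the Lambda itself (as the restify middleware did), the usual pattern is to hash the response body and honour If-None-Match. Below is a minimal sketch assuming a C# Lambda behind an API Gateway proxy integration and a hypothetical FetchFromCms helper; the same idea applies in any Lambda runtime:

using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;

public class EtagProxy
{
    public APIGatewayProxyResponse Handle(APIGatewayProxyRequest request, ILambdaContext context)
    {
        string body = FetchFromCms(request.Path);   // hypothetical call to the CMS
        string etag = "\"" + Convert.ToHexString(MD5.HashData(Encoding.UTF8.GetBytes(body))) + "\"";

        // If the client already has the current version, return 304 with no body
        if (request.Headers != null &&
            request.Headers.TryGetValue("If-None-Match", out var clientEtag) &&
            clientEtag == etag)
        {
            return new APIGatewayProxyResponse
            {
                StatusCode = 304,
                Headers = new Dictionary<string, string> { ["ETag"] = etag }
            };
        }

        return new APIGatewayProxyResponse
        {
            StatusCode = 200,
            Body = body,
            Headers = new Dictionary<string, string>
            {
                ["ETag"] = etag,
                ["Cache-Control"] = "public, max-age=60"   // lets CloudFront/browsers cache too
            }
        };
    }

    static string FetchFromCms(string path) => "{ \"placeholder\": true }";   // stand-in for the real CMS call
}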
My WCF service (ConcurrencyMode.Multiple) receives a request and then contacts another TCP/IP service synchronously to provide a response to the client.
I'm confused: is it OK to contact the TCP/IP service synchronously? I don't want to block the channel so that it serves only one request at a time.
Am I correct in assuming that, since the service supports multiple concurrent calls, a synchronous call to the TCP/IP service doesn't affect other requests?
Please comment if you have any suggestions.
Your WCF service will continue to accept requests and will create a thread for each request, including its call to the external service.
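A minimal sketch of that setup, with hypothetical contract and host names: with ConcurrencyMode.Multiple each incoming request is dispatched on its own thread, so the synchronous TCP call below only blocks the thread serving that particular request, not the whole channel.

using System.Net.Sockets;
using System.ServiceModel;
using System.Text;

[ServiceContract]
public interface ILookupService
{
    [OperationContract]
    string Lookup(string key);
}

[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple,
                 InstanceContextMode = InstanceContextMode.Single)]
public class LookupService : ILookupService
{
    public string Lookup(string key)
    {
        // Synchronous call to the external TCP/IP service (hypothetical host/port);
        // blocks only the thread handling this WCF request
        using (var client = new TcpClient("backend-host", 9000))
        using (var stream = client.GetStream())
        {
            var request = Encoding.UTF8.GetBytes(key);
            stream.Write(request, 0, request.Length);

            var buffer = new byte[4096];
            int read = stream.Read(buffer, 0, buffer.Length);
            return Encoding.UTF8.GetString(buffer, 0, read);
        }
    }
}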