I want to serve the response from a different HTTP endpoint. According to the API Gateway documentation, we can achieve this by creating a new resource in API Gateway with its integration type set to HTTP, and we can also define an "Endpoint URL" to which the request is passed for further processing.
Refer to this image to understand how it works.
I have added this resource manually inside API Gateway. Is there any way to define it in the serverless.yml file, so that this proxy resource also gets created on every deployment?
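One way to express this (a sketch only, not confirmed by the original setup) is to drop raw CloudFormation into the resources section of serverless.yml. The ApiGatewayRestApi logical ID below is the default name the Serverless Framework gives to the REST API it provisions; the path part "proxy" and the backend URL are placeholders.

# Sketch only: path part and endpoint URL are placeholders.
resources:
  Resources:
    ProxyResource:
      Type: AWS::ApiGateway::Resource
      Properties:
        RestApiId:
          Ref: ApiGatewayRestApi
        ParentId:
          Fn::GetAtt: [ApiGatewayRestApi, RootResourceId]
        PathPart: proxy
    ProxyMethod:
      Type: AWS::ApiGateway::Method
      Properties:
        RestApiId:
          Ref: ApiGatewayRestApi
        ResourceId:
          Ref: ProxyResource
        HttpMethod: GET
        AuthorizationType: NONE
        MethodResponses:
          - StatusCode: "200"
        Integration:
          Type: HTTP
          IntegrationHttpMethod: GET
          Uri: https://example.com/backend   # placeholder "Endpoint URL"
          IntegrationResponses:
            - StatusCode: "200"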
One of the ways of creating an API Gateway on AWS with Terraform is to create a resource for each method/route and each integration (the resource that handles the request), along with an API Deployment.
When we remove the resources for a route from our configuration, Terraform detects the change and deletes the integration, the Lambda, and so on. But this also requires a new deployment. Since the dependent resources are being deleted, they are no longer part of any depends_on clause. This results in the following behavior:
The deployment is created before the resources are deleted from the API Gateway, so the old resources are still part of the API Gateway at the time the deployment happens. Since a deployment is a snapshot of the resources configured at that time, the old resources are still part of the snapshot.
How can we tell Terraform that the API deployment resource should only be updated after all other resources (no longer in the template at this point) are destroyed?
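For context, here is a rough sketch of the pattern described above, with placeholder names and an HTTP proxy integration standing in for the Lambda integration, where the deployment depends on the integrations that exist at plan time:

# Sketch only: names, path, and backend URI are placeholders.
resource "aws_api_gateway_rest_api" "api" {
  name = "example"
}

resource "aws_api_gateway_resource" "route" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_rest_api.api.root_resource_id
  path_part   = "route"
}

resource "aws_api_gateway_method" "get" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.route.id
  http_method   = "GET"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "get" {
  rest_api_id             = aws_api_gateway_rest_api.api.id
  resource_id             = aws_api_gateway_resource.route.id
  http_method             = aws_api_gateway_method.get.http_method
  type                    = "HTTP_PROXY"
  integration_http_method = "GET"
  uri                     = "https://example.com/backend"
}

# Removing the route above also removes it from this list, so the new
# deployment no longer waits for the deletion; that is the ordering
# problem described in the question.
resource "aws_api_gateway_deployment" "deploy" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  depends_on  = [aws_api_gateway_integration.get]
}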
I was given a link to an OpenAPI 3.0.1 definition hosted on SwaggerHub and was told to deploy it. On the Terraform side, I see far too many resources, and it's not clear which one to use. What's the most straightforward way to deploy an API gateway via Terraform when it's already fully configured in an OpenAPI definition? Is there a resource that would simply let me provide an OpenAPI definition URL to the API gateway, or would I have to copy-paste the actual JSON somewhere?
The AWS API Gateway service has two main usage patterns:
1. Directly specify individual resources, methods, requests, integrations, and responses as individual objects in the API Gateway API.
2. Submit an OpenAPI definition of the entire API as a single unit and have API Gateway itself split that out into all of the separate objects in API Gateway's data model.
Since the underlying API supports both models, it can be hard to see initially which parts are relevant to each usage pattern. The Terraform provider for AWS follows the underlying API design, and so that confusion appears there too.
It sounds like you are intending to take the second path I described above, in which case the definition in Terraform is comparatively straightforward, and in particular it typically involves only a single Terraform resource to define the API itself. (You may need to use others to "deploy" the API, etc, but that seems outside of the scope of your current question.)
The aws_api_gateway_rest_api resource type is the root resource type for defining an API Gateway REST API, and for the OpenAPI approach it is the only one required to define your entire API surface, by specifying the OpenAPI definition in its body argument:
resource "aws_api_gateway_rest_api" "example" {
name = "example"
body = file("${path.module}/openapi.json")
}
In the above example I've assumed that you've saved the API definition in JSON format in an openapi.json file in the same directory as the .tf file which would contain the resource configuration. I'm not familiar with SwaggerHub, but if there is a Terraform provider available for it which has a data source for retrieving the definition directly from that system then you could potentially combine those, but the principle would be the same; it would only be the exact expression for the body argument that would change.
The other approach with the resources/etc defined explicitly via the API Gateway API would have a separate resource for each of API Gateway's separate object types describing an API, which makes for a much more complicated Terraform configuration. However, none of those need be used (and indeed, none should be used, to avoid conflicts) when you have defined your API using an OpenAPI specification.
NOTE: The above is about API Gateway REST APIs, which is a separate offering from "API Gateway v2", which offers so-called "HTTP APIs" and "WebSocket APIs". As far as I know, API Gateway v2 doesn't support OpenAPI definitions and therefore I've assumed you're asking about the original API Gateway, and thus "REST APIs".
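To fill in the "deploy" step mentioned in passing above, one common (but not the only) way to do it looks roughly like this; the stage name and the redeployment trigger are assumptions, not something the answer prescribes:

# Sketch only: redeploys whenever the OpenAPI body changes and publishes a "prod" stage.
resource "aws_api_gateway_deployment" "example" {
  rest_api_id = aws_api_gateway_rest_api.example.id

  triggers = {
    redeployment = sha1(aws_api_gateway_rest_api.example.body)
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_api_gateway_stage" "example" {
  rest_api_id   = aws_api_gateway_rest_api.example.id
  deployment_id = aws_api_gateway_deployment.example.id
  stage_name    = "prod"
}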
Is there a way to map an entire subdomain to an Azure Function and pass in the folder structure without proxying it through a VM or App Service? For instance, https://example.com/path/to/name would pass into an Azure Function something like "path/to/name"?
I was able to resolve this with the help of https://learn.microsoft.com/en-us/azure/azure-functions/functions-proxies.
After creating a new Function App with an Azure Function using JavaScript with an HTTP Trigger, create a Functions Proxy with a route template of {*path} and a Backend URL of https://localhost/api/<trigger name>?code=<trigger code>&path={path}. On a GET, the path is available at req.query.path, and the original URL is available at req.originalUrl.
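Expressed in a proxies.json file, that proxy would look something like the following; the trigger name and code placeholders are carried over from the description above, and the proxy name "catchAll" is arbitrary:

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "catchAll": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/{*path}"
      },
      "backendUri": "https://localhost/api/<trigger name>?code=<trigger code>&path={path}"
    }
  }
}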
I've set up a load balancer in my resource group with a backend pool and inbound NAT rules for HTTP and HTTPS.
Now, when I try to create an auto-scale set through a template, I have to reference a "loadBalancerInboundNatPool". But, from what I can decipher from the error messages, this is not the same as the InboundNatRules.
How do I create/find the name of my InboundNatPool, so I can reference it from my template and create my auto-scale set correctly?
The loadBalancerInboundNatPools reference lives in the VMSS resource; you need to add it to the VMSS.
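A heavily abbreviated sketch of where the two pieces live (all names and ports are placeholders, and required properties such as apiVersion, location, sku, and osProfile are left out): the NAT pool is declared on the load balancer under inboundNatPools, which is a different object from the inbound NAT rules, and the scale set then references it by ID from its ipConfigurations:

{
  "resources": [
    {
      "type": "Microsoft.Network/loadBalancers",
      "name": "myLb",
      "properties": {
        "inboundNatPools": [
          {
            "name": "myNatPool",
            "properties": {
              "frontendIPConfiguration": {
                "id": "[resourceId('Microsoft.Network/loadBalancers/frontendIPConfigurations', 'myLb', 'frontend')]"
              },
              "protocol": "Tcp",
              "frontendPortRangeStart": 50000,
              "frontendPortRangeEnd": 50119,
              "backendPort": 22
            }
          }
        ]
      }
    },
    {
      "type": "Microsoft.Compute/virtualMachineScaleSets",
      "name": "myVmss",
      "properties": {
        "virtualMachineProfile": {
          "networkProfile": {
            "networkInterfaceConfigurations": [
              {
                "name": "nic",
                "properties": {
                  "primary": true,
                  "ipConfigurations": [
                    {
                      "name": "ipconfig",
                      "properties": {
                        "loadBalancerInboundNatPools": [
                          {
                            "id": "[resourceId('Microsoft.Network/loadBalancers/inboundNatPools', 'myLb', 'myNatPool')]"
                          }
                        ]
                      }
                    }
                  ]
                }
              }
            ]
          }
        }
      }
    }
  ]
}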
You can take a look at existing examples doing exactly this thing:
https://github.com/Azure/azure-quickstart-templates/tree/master/201-2-vms-loadbalancer-natrules
https://github.com/Azure/azure-quickstart-templates/tree/master/201-vmss-linux-nat
and several others in that repo. I don't fully understand your question, but those templates will let you do exactly what you're aiming for.
I was able to successfully create an Azure Functions proxy that routes requests to my blob storage. However, it only works if I specify the Backend URL with the full URL to the blob file:
ex:
https://account.blob.core.windows.net/site/index.html
where '/site' is my container name and 'index.html' is my blob name.
My understanding was that I could use '/site' as the route template, and if I left the Backend URL as 'https://account.blob.core.windows.net/site/', whatever comes after the last '/' would be routed to my storage account. Did I understand wrong?
UPDATE
After reading this other question, Azure Function App Proxy to a blob storage account, and updating the route template / Backend URL, it works, but if my blob name has an extension (such as .html) it does not work. Any clues?
Yes, we have identified a bug when the URL ends with an extension, and we will release the fix in the next few days. Thanks much for the feedback.
The Azure Functions Proxies documentation describes how to capture request parameters and pass them to your backend service.
Your template can be /site/{*restOfPath}
And your backend would be https://account.blob.core.windows.net/site/{restOfPath}
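In proxies.json form, that corresponds roughly to the following (the proxy name is arbitrary, and the account and container names are the placeholders from the question):

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "site": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/site/{*restOfPath}"
      },
      "backendUri": "https://account.blob.core.windows.net/site/{restOfPath}"
    }
  }
}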
I was able to get this working only on files that do NOT have a file extension. So I was able to add an index blob and get to it from https://myfunction.azurewebsites.net/index; however, when I tried index.html, the proxy returned the message "The resource you are looking for has been removed, had its name changed, or is temporarily unavailable."