How do you set up an API Gateway Step Function integration using Terraform and aws_apigatewayv2_integration?

I am looking for an example of how to start execution of a Step Function from API Gateway using Terraform and the aws_apigatewayv2_integration resource. I am using an HTTP API (I have only found an older example for REST APIs on Stack Overflow).
Currently I have this:
resource "aws_apigatewayv2_integration" "workflow_proxy_integration" {
api_id = aws_apigatewayv2_api.default.id
credentials_arn = aws_iam_role.api_gateway_step_functions.arn
integration_type = "AWS_PROXY"
integration_subtype = "StepFunctions-StartExecution"
description = "The integration which will start the Step Functions workflow."
payload_format_version = "1.0"
request_parameters = {
StateMachineArn = aws_sfn_state_machine.default.arn
}
}
Right now, my State Machine receives an empty input ("input": {}). When I try to add input to the request_parameters section, I get this error:
Error: error updating API Gateway v2 integration: BadRequestException: Parameter: input does not fit schema for Operation: StepFunctions-StartExecution.

I spent over an hour looking for a solution to a similar problem I was having with request_parameters. AWS's documentation currently uses camelCase for the keys in all of its examples (stateMachineArn, input, etc.), which made this difficult to research.
You'll want to use PascalCase for your keys, just as you already did for StateMachineArn. So instead of input, use Input.
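For example, here is a sketch of the corrected integration with the execution input mapped from the request body (resource names follow the question; the Input mapping shown is illustrative):

resource "aws_apigatewayv2_integration" "workflow_proxy_integration" {
  api_id                 = aws_apigatewayv2_api.default.id
  credentials_arn        = aws_iam_role.api_gateway_step_functions.arn
  integration_type       = "AWS_PROXY"
  integration_subtype    = "StepFunctions-StartExecution"
  description            = "The integration which will start the Step Functions workflow."
  payload_format_version = "1.0"

  request_parameters = {
    # PascalCase keys, as expected by the StepFunctions-StartExecution subtype
    StateMachineArn = aws_sfn_state_machine.default.arn
    Input           = "$request.body" # forwards the HTTP request body as the execution input
  }
}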

Related

Incorrect component name in App Insights on Azure, using OpenTelemetry

I'm using OpenTelemetry to trace my service running on Azure.
Azure Application Map is showing an incorrect name for the components sending the trace. It is also sending an incorrect Cloud RoleName, which is probably why this is happening. The Cloud RoleName that App Insights displays is the Function App name and not the Function name.
In my Azure Function (called FirewallCreate), I start a trace using the following util method:
def get_otel_new_tracer(name=__name__):
    # Setup a TracerProvider(). For more details read:
    # https://learn.microsoft.com/en-us/azure/azure-monitor/app/opentelemetry-enable?tabs=python#set-the-cloud-role-name-and-the-cloud-role-instance
    trace.set_tracer_provider(
        TracerProvider(
            resource=Resource.create(
                {
                    SERVICE_NAME: name,
                    SERVICE_NAMESPACE: name,
                    # SERVICE_INSTANCE_ID: "my-instance-id"
                }
            )
        )
    )
    # Send messages to Exporter in batches
    span_processor = BatchSpanProcessor(
        AzureMonitorTraceExporter.from_connection_string(
            os.environ["TRACE_APPINSIGHTS_CONNECTION_STRING"]
        )
    )
    trace.get_tracer_provider().add_span_processor(span_processor)
    return trace.get_tracer(name)
def firewall_create():
    tracer = get_otel_new_tracer("FirewallCreate")
    with tracer.start_as_current_span("span-firewall-create") as span:
        # do some stuff
        ...
The traces show another function name, in the same function app (picture attached).
What mistake could I be making?
The component shows 0 ms and 0 calls. What does that mean? How should I interpret it?
Hey thanks for trying out OpenTelemetry with Application Insights! First of all, are you using the azure-monitor-opentelemetry-exporter, and if so, what version are you using? I've also answered your questions inline:
What mistake could I be making?
What name are you trying to set your component to? By default, the exporter sets your cloudRoleName to <service_namespace>.<service_name>, taken from the values you set in your Resource. If <service_namespace> is not populated, it simply uses the value of <service_name>.
The component shows 0 ms and 0 calls. What does that mean? How should I interpret it?
Is this the only node in your application map? We first need to find out whether this node is created by the exporter or by the Functions runtime itself.
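As an illustration, here is a minimal sketch of a Resource that separates the two values (assuming the azure-monitor-opentelemetry-exporter and the standard OpenTelemetry SDK resource constants; the namespace string is a placeholder):

from opentelemetry.sdk.resources import Resource, SERVICE_NAME, SERVICE_NAMESPACE

resource = Resource.create(
    {
        SERVICE_NAME: "FirewallCreate",        # the function name you want displayed
        SERVICE_NAMESPACE: "my-function-app",  # placeholder namespace value
    }
)
# With both values set, the exporter should derive cloudRoleName as
# "my-function-app.FirewallCreate", which is what Application Map displays.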

Azure Functions Orchestration using Logic App

I have multiple Azure Functions which carry out small tasks. I would like to orchestrate those tasks together using Logic Apps, as you can see here:
(image: Logic App Flow)
I am taking the output of Function 1 and inputting parts of it into Function 2. As I was creating the Logic App, I realized I have to parse the response of Function 1 as JSON in order to access the specific parameters I need. Parse JSON requires me to provide an example schema, however, and I need to be able to parse the response as JSON without this manual step.
One solution I thought would work was to register Function 1 with APIM and provide a response schema. This doesn't seem to be any different from calling the Function directly.
Does anyone have any suggestions for how to get the response of a Function as a JSON/XML?
You can run JavaScript snippets and dynamically parse the response from Function 1 without providing a sample schema.
e.g.
var data = Object.keys(workflowContext.trigger.outputs.body.Body);
var key = data.filter(s => s.includes('Property')).toString(); // to get element - Property - dynamic content
return workflowContext.trigger.outputs.body.Body[key];
https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-add-run-inline-code?tabs=consumption

Get AWS SSM Parameters Tags without Get Parameter

I am trying to list all Parameters along with all their tags, without listing the values of the parameters.
My initial approach was to call describe_parameters and then loop through the key names and perform list_tags; while doing so I found out that ARNs are needed to perform list_tags, and these are not returned by describe_parameters.
Is there a way to get the parameters along with their tags without actually getting the parameters?
You can do this with the Resource Groups Tagging API IF THEY ARE ALREADY TAGGED. Here's a basic example below, without pagination.
import boto3

profile = "your_profile_name"
region = "us-east-1"

session = boto3.session.Session(profile_name=profile, region_name=region)
client = session.client('resourcegroupstaggingapi')

response = client.get_resources(
    ResourceTypeFilters=[
        'ssm',
    ],
)
print(response)
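If there are more parameters than a single response returns, here is a hedged sketch of the same call with boto3's built-in paginator (profile and region values are placeholders):

import boto3

session = boto3.session.Session(profile_name="your_profile_name", region_name="us-east-1")
client = session.client('resourcegroupstaggingapi')

paginator = client.get_paginator('get_resources')
for page in paginator.paginate(ResourceTypeFilters=['ssm']):
    for resource in page['ResourceTagMappingList']:
        # Each entry carries the parameter ARN and its tags, but not the value.
        print(resource['ResourceARN'], resource['Tags'])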
If you want to discover untagged parameters, this won't work. A better approach would be to set up AWS Config rules to highlight these issues without you having to manage searching for them.

How to pass session parameters with python to snowflake?

The code below is my attempt at passing a session parameter to Snowflake through Python. This is part of an existing codebase which runs in AWS Glue, and the only part of the following that doesn't work is the session_parameters.
I'm trying to understand how to add session parameters from within this code. Any help in understanding what is going on here is appreciated.
sf_credentials = json.loads(CACHE["SNOWFLAKE_CREDENTIALS"])

CACHE["sf_options"] = {
    "sfURL": "{}.snowflakecomputing.com".format(sf_credentials["account"]),
    "sfUser": sf_credentials["user"],
    "sfPassword": sf_credentials["password"],
    "sfRole": sf_credentials["role"],
    "sfDatabase": sf_credentials["database"],
    "sfSchema": sf_credentials["schema"],
    "sfWarehouse": sf_credentials["warehouse"],
    "session_parameters": {
        "QUERY_TAG": "Something",
    },
}
In AWS CloudWatch, I can see that the parameter was sent with the other options. In Snowflake, the parameter was never set.
I can add more detail where necessary; I just wasn't sure what details are needed.
It turns out that there is no need to specify that a given parameter is a session parameter when you are using the Spark Connector. So instead:
sf_credentials = json.loads(CACHE["SNOWFLAKE_CREDENTIALS"])

CACHE["sf_options"] = {
    "sfURL": "{}.snowflakecomputing.com".format(sf_credentials["account"]),
    "sfUser": sf_credentials["user"],
    "sfPassword": sf_credentials["password"],
    "sfRole": sf_credentials["role"],
    "sfDatabase": sf_credentials["database"],
    "sfSchema": sf_credentials["schema"],
    "sfWarehouse": sf_credentials["warehouse"],
    "QUERY_TAG": "Something",
}
This works perfectly.
I found this in the Snowflake documentation for using the Spark connector: here's the section on setting Session Parameters.
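As a usage illustration, here is a minimal sketch of reading a table with these options through the Snowflake Spark connector (assuming a Glue/Spark job with a SparkSession named spark and the connector on the classpath; the table name is a placeholder):

df = (
    spark.read.format("net.snowflake.spark.snowflake")
    .options(**CACHE["sf_options"])           # QUERY_TAG travels along with the other sfOptions
    .option("dbtable", "MY_SCHEMA.MY_TABLE")  # placeholder table
    .load()
)
df.show()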

Cloud Datastore Projection Query with filters with AppEngine NodeJS Standard

I am learning GCP and have searched through the documentation. The projection queries documentation states that they can be used with filters, albeit with some limitations. As far as I understand, I am not running into those limitations, but I still cannot make it work.
What I want to do is a simple
SELECT property FROM kind WHERE enabled = TRUE
The properties are marked as indexed, and I have also deployed an index.yaml. My code is the following:
const selectQuery = programTypeQuery
    .select(entityNameProperty)
    .filter('enabled', true);
When I comment out the select line, the query works. When I comment out the filter line, it also works, but when running both I get the following message in Postman:
{
    "code": 9,
    "metadata": {
        "_internal_repr": {}
    },
    "note": "Exception occurred in retry method that was not classified as transient"
}
My log just shows a 400 status error.
Any help will be appreciated.
EDIT:
This is the full code. I have a parameter that indicates the language of the name. In the database I have nameEn and nameEs as properties, so I want to return only the name in the selected language. enabled is a boolean property that indicates whether the product is active or not.
const Datastore = require('@google-cloud/datastore');
const datastore = Datastore();

const programTypeQuery = datastore.createQuery('programType');
const entityNameProperty = 'name' + req.params.languageCode;

const selectQuery = programTypeQuery
    .select(entityNameProperty)
    .filter('enabled', true);

selectQuery.run()
    .then((results) => {
        res.json(results);
    })
    .catch(err => res.status(400).json(err));
From the details you provided it is hard to tell where this issue originates. Can you use the Google APIs Explorer for the Datastore API and try your query? I prepared the request body according to your description; you can click here and execute it by just changing the projectId. You will then receive either a successful response or an error message with details, which might make it easier to find the root cause.
Most likely you are missing a composite index definition. You should be able to look at your GAE logs in Stackdriver to see the error message returned from Cloud Datastore.
Since your property name is dynamic you won't be able to use a composite index effectively. You'll probably need to change your data model to something that doesn't use dynamic property names.
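For reference, here is a hedged sketch of the composite index entries such a projection-plus-filter query would need, one per language-specific property, since index.yaml cannot express dynamic property names (kind and property names follow the question):

indexes:
- kind: programType
  properties:
  - name: enabled
  - name: nameEn
- kind: programType
  properties:
  - name: enabled
  - name: nameEs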
