Lambda-Backed Custom Resource - python-3.x

I'm trying to create a custom resource in a CFT that will run a Lambda function when the stack is created. I've looked at the AWS documentation for Lambda-backed custom resources, but I'm still a bit confused about the topic, as the documentation is fairly sparse. I've included the JSON for my custom resource, and I'm wondering if there's anything else I have to do to ensure that this resource invokes the function when the stack is created.
"LambdaRunner": {
    "Type": "AWS::CloudFormation::CustomResource",
    "Properties": {
        "ServiceToken": {
            "Fn::GetAtt": [
                "DistroDBPop",
                "Arn"
            ]
        }
    }
}
Note: The Lambda function it references takes a CSV from an S3 bucket and uses the information to create and populate a DynamoDB table.

That looks sufficient for calling the function, assuming the CloudFormation template also contains a Lambda function whose logical ID is DistroDBPop.
If you look at Walkthrough: Looking Up Amazon Machine Image IDs - AWS CloudFormation, you'll see that several other elements are also needed:
The Lambda function
A Role for the Lambda function
A special callback in the Lambda function to indicate that it has completed
There's some good example Lambda code at: stelligent/cloudformation-custom-resources - GitHub
There is also a cfnresponse module that makes it easier to send the callback at the end of the Lambda function. See: cfn-response Module
Finally, make sure the Lambda function understands that it will be invoked on Create, Update, and Delete of the stack, so it may need to ignore events that aren't relevant.
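For reference, a minimal handler for this pattern might look like the sketch below. It hand-rolls the callback instead of using the cfn-response module so the shape of the response body is visible; the CSV-to-DynamoDB work is elided, and the fallback physical resource ID is a made-up name:

```python
import json
import urllib.request

def build_response(event, status, physical_id=None, data=None):
    """Assemble the JSON body CloudFormation waits for from a custom resource."""
    return {
        "Status": status,  # "SUCCESS" or "FAILED"
        "Reason": "See the CloudWatch log stream for details",
        # Reuse the existing ID on Update/Delete; the fallback name is arbitrary
        "PhysicalResourceId": physical_id
            or event.get("PhysicalResourceId")
            or "distro-db-pop-loader",
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},
    }

def handler(event, context):
    status = "SUCCESS"
    try:
        if event["RequestType"] == "Create":
            pass  # ...read the CSV from S3 and populate the DynamoDB table...
        # Update and Delete requests arrive too; a one-shot loader just acknowledges them
    except Exception:
        status = "FAILED"
    body = json.dumps(build_response(event, status)).encode()
    req = urllib.request.Request(event["ResponseURL"], data=body, method="PUT")
    urllib.request.urlopen(req)  # the stack stays IN_PROGRESS until this lands
```

If the handler never sends this callback (or raises before reaching it), the stack hangs until CloudFormation times out, which is the most common failure mode with hand-written custom resources.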

Related

cdk python not fetching value from SSM parameter store even during Deploy Time

I have a line of code that is supposed to get the value for a parameter name in the SSM Parameter Store, as per the documentation here. This is my code:
table_name = ssm.StringParameter.from_string_parameter_attributes(
    self, "activity-details-table-name",
    parameter_name="/XAZ-1019/activity/table-name",
).string_value
print(table_name)
When I do a cdk deploy I only get ${Token[TOKEN.358]} in the print statement right after
This is the expected behaviour of from_string_parameter_attributes. The string_value method returns a Token value, which the CDK converts to a CloudFormation Ref intrinsic function in the template.
In other words, you are right that the CDK is not fetching the actual value. It's CloudFormation that gets the value from the SSM Parameter Store at deploy-time. No matter. We can merrily pass the opaque token value around in our CDK code, and leave it to the CDK to include the right references in the template.
If you really need synth-time resolution of the actual parameter value, there's always value_from_lookup. It is one of a handful of context methods that have the CDK itself make (and cache) cloud-side lookups.
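Side by side, the two approaches look roughly like this (a sketch, not a complete app; the construct ID and parameter name are taken from the question, and value_from_lookup additionally requires the stack to have an explicit env so the CDK knows which account and region to query):

```python
from aws_cdk import aws_ssm as ssm

# Deploy-time: string_value is an opaque Token; CloudFormation resolves it.
table_name = ssm.StringParameter.from_string_parameter_attributes(
    self, "activity-details-table-name",
    parameter_name="/XAZ-1019/activity/table-name",
).string_value
print(table_name)  # prints something like ${Token[TOKEN.358]}

# Synth-time: the CDK itself fetches the value and caches it in cdk.context.json.
table_name = ssm.StringParameter.value_from_lookup(
    self, "/XAZ-1019/activity/table-name",
)
print(table_name)  # prints the actual stored string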

NodeJS - Simplify/Resolve GraphQL query

I am currently writing a Lambda authorizer for an AWS AppSync API, however the authorization depends on the target resource being accessed.
Every resource has their own ACL listing the users and conditions for allowing access to it.
Currently, the best approach I could find would be to get the identity of the caller, look at all the ACLs, and authorize the call while denying access to all the other resources, which is not only highly inefficient but also extremely impractical, if not impossible.
The solution I had originally come up with was to get the target resource, retrieve its ACL, and check whether the user fits the specified criteria. The problem is that I am unable to reliably determine the target resource. What I get from AWS is a request like this:
{
"authorizationToken": "ExampleAUTHtoken123123123",
"requestContext": {
"apiId": "aaaaaa123123123example123",
"accountId": "111122223333",
"requestId": "f4081827-1111-4444-5555-5cf4695f339f",
"queryString": "mutation CreateEvent {...}\n\nquery MyQuery {...}\n",
"operationName": "MyQuery",
"variables": {}
}
}
So, I only have the query string and variables, leaving the actual parsing to me. I managed to convert it to an AST using graphql-js, but it's still extremely verbose and, most importantly, its structure varies greatly.
My first code to retrieve the target worked for the AppSync console queries, but not the Amplify front end, for example. I also can't rely on something as simple as the variable name, as an attacker could quite easily craft a query with an arbitrary name, or even not use variables at all.
I thought about implementing this authorization logic within Lambda resolvers, which should be simpler in a way, but that would require using resolvers as authorizers, which doesn't seem ideal, and implementing the entire resolver logic when I just want the most trivial possible resolvers.
Ideally I'd like something like this:
/* Schema:
type Query {
operationName(key: KEY!): responseType
}*/
/* Query:
query arbitraryQueryName($var1: KEY!) {
operationName(key: $var1) {
field1
field2
}
}*/
/* Variables:
{ "var1": "value1" } */
parsedQuery = {
operation: "operationName",
params: { "key": "value1" },
fields: [ "field1", "field2" ]
};
Is there any way to resolve/simplify the queries from GraphQL to JSON/similar in a way that this information can be easily extracted?
Well, couldn't find anything on it, so I made something myself.
On the off chance someone needs something similar, here's the gist with the code I used: https://gist.github.com/Iorpim/6544dad46060522dd0b17477871bc434
I didn't make it a proper full library, as it's a very specific use case and likely a one-off; I'm also not sure how reliable it is, but it solves my problem!
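For a sense of what that extraction involves, here is a stdlib-only toy (my own sketch, not the gist's code) that handles only single-operation queries of the exact shape shown in the question; anything fancier really needs a proper AST from graphql-js or graphql-core:

```python
import re

def parse_simple_query(query, variables):
    """Toy extractor for single-operation queries like the example above.
    Not a real GraphQL parser; use graphql-js/graphql-core in production."""
    # Match: { operationName(key: $var1) { field1 field2 } }
    m = re.search(r"\{\s*(\w+)\s*(?:\(([^)]*)\))?\s*\{([^}]*)\}", query)
    if not m:
        raise ValueError("unsupported query shape")
    operation, arg_src, field_src = m.group(1), m.group(2) or "", m.group(3)
    params = {}
    for name, val in re.findall(r"(\w+)\s*:\s*(\$?\w+)", arg_src):
        # Resolve $var references against the variables map
        params[name] = variables[val[1:]] if val.startswith("$") else val
    return {
        "operation": operation,
        "params": params,
        "fields": field_src.split(),
    }
```

Running it on the example query and variables yields the parsedQuery structure above; because variable references are resolved against the variables map, the arbitrary names an attacker picks for $vars don't matter, and a query with an unexpected shape fails the match instead of being mis-parsed.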

Azure Functions Orchestration using Logic App

I have multiple Azure Functions which carry out small tasks. I would like to orchestrate those tasks together using Logic Apps, as you can see here:
Logic App Flow
I am taking the output of Function 1 and feeding parts of it into Function 2. As I was creating the logic app, I realized I have to parse the response of Function 1 as JSON in order to access the specific parameters I need. Parse JSON requires me to provide an example schema, however, and I need to be able to parse the response as JSON without this manual step.
One solution I thought would work was to register Function 1 with APIM and provide a response schema, but this doesn't seem to be any different from calling the Function directly.
Does anyone have any suggestions for how to get the response of a Function as a JSON/XML?
You can run JavaScript snippets (the Inline Code action) and dynamically parse the response from Function 1 without providing a sample schema.
e.g.
// Grab the property names of Function 1's response body
var data = Object.keys(workflowContext.trigger.outputs.body.Body);
// Find the key we care about without hard-coding a schema
var key = data.filter(s => s.includes('Property')).toString();
return workflowContext.trigger.outputs.body.Body[key];
https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-add-run-inline-code?tabs=consumption

Issue with validating Arm template using DeploymentsOperations.StartValidate

I am currently working on a project where I deploy multiple ARM templates, each deploying a VM and performing a few operations on it. I wanted to handle quota issues by calling template validation before triggering the first deployment. So I created a template that contains the logic to create the required VMs, and I am using this template only for validation (to check that the quota will not be exceeded).
Since our code already has the ResourceManagementClient, I tried the following code:
Deployment parameters = new Deployment(
    new DeploymentProperties(DeploymentMode.Incremental)
    {
        Template = templateFile,
        Parameters = parameterFile,
    });
DeploymentsValidateOperation dp = deployments.StartValidate(groupName, "validation", parameters);
But when I try to access Value on the variable dp, I keep getting the following exception:
System.InvalidOperationException: The operation has not completed yet.
   at Azure.Core.ArmOperationHelpers`1.get_Value()
   at Azure.ResourceManager.Resources.DeploymentsValidateOperation.get_Value()
   at DeployTemplate.Program.d__3.MoveNext() in \Program.cs:line 88
I even added a loop after StartValidate to wait until dp.HasCompleted is set to true, but this seems to run indefinitely. I also tried the StartValidateAsync method, which seems to have the same issue.
I wanted to understand whether I am using this method correctly, and whether there is a better way to do template validation. I could not find any examples of this method's usage; if possible, please share any code snippet where it is used, for my reference.
Note: since this is not working, I am currently testing the Fluent API approach instead. That seems to work, but it requires a lot of changes in our code, as it creates ambiguity with many classes in Azure.ResourceManager.Resources that are already used for other operations.
I found that even though the deployment operation's HasCompleted field is not set, calling dp.GetRawResponse() returns the exact errors expected.
I now use this to validate my templates.

How do you set up an API Gateway Step Function integration using Terraform and aws_apigatewayv2_integration

I am looking for an example of how to start execution of a Step Function from API Gateway using Terraform and the aws_apigatewayv2_integration resource. I am using an HTTP API (I have only found an older example for REST APIs on Stack Overflow).
Currently I have this:
resource "aws_apigatewayv2_integration" "workflow_proxy_integration" {
  api_id                 = aws_apigatewayv2_api.default.id
  credentials_arn        = aws_iam_role.api_gateway_step_functions.arn
  integration_type       = "AWS_PROXY"
  integration_subtype    = "StepFunctions-StartExecution"
  description            = "The integration which will start the Step Functions workflow."
  payload_format_version = "1.0"

  request_parameters = {
    StateMachineArn = aws_sfn_state_machine.default.arn
  }
}
Right now, my State Machine receives an empty input ("input": {}). When I try to add input to the request_parameters section, I get this error:
Error: error updating API Gateway v2 integration: BadRequestException: Parameter: input does not fit schema for Operation: StepFunctions-StartExecution.
I spent over an hour looking for a solution to a similar problem I was having with request_parameters. AWS's documentation currently uses camelCase keys (stateMachineArn, input, etc.) in all of its examples, which made this difficult to research.
You'll want to use PascalCase for your keys, similar to how you already did for StateMachineArn. So instead of input, you'll use Input.
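Putting that together, the integration from the question would become something like this (the $request.body selection expression is an assumption; point it at whatever part of the incoming request should become the execution input):

```terraform
resource "aws_apigatewayv2_integration" "workflow_proxy_integration" {
  api_id                 = aws_apigatewayv2_api.default.id
  credentials_arn        = aws_iam_role.api_gateway_step_functions.arn
  integration_type       = "AWS_PROXY"
  integration_subtype    = "StepFunctions-StartExecution"
  description            = "The integration which will start the Step Functions workflow."
  payload_format_version = "1.0"

  request_parameters = {
    StateMachineArn = aws_sfn_state_machine.default.arn
    # PascalCase key; "input" (camelCase) fails schema validation
    Input           = "$request.body"
  }
}
```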
