In the AWS CDK, I can write a Jest unit test that checks whether a resource has a specific property. But how do I test a resource's DeletionPolicy value, which is NOT a property?
cdk.out/example.template.json (simplified):
"AppsUserPool8FD9D0C0": {
  "Type": "AWS::Cognito::UserPool",
  "Properties": {
    "UserPoolName": "test",
    ...
  },
  "UpdateReplacePolicy": "Retain",
  "DeletionPolicy": "Retain",
  "Metadata": {}
}
A Jest unit test passes for a property (simplified):
expect(stack).toHaveResourceLike('AWS::Cognito::UserPool', {
  "UserPoolName": "test"
});
A Jest unit test fails for the DeletionPolicy (simplified):
expect(stack).toHaveResourceLike('AWS::Cognito::UserPool', {
  "DeletionPolicy": "Retain"
});
You can use haveResource() with the parameter ResourcePart.CompleteDefinition, as in the following example from the aws-cdk repository:
https://github.com/aws/aws-cdk/blob/775a0c930a680f8a52bb4a40084d07492f7f9fee/packages/%40aws-cdk/aws-cloudformation/test/test.resource.ts#L57
Snippet from the example:
expect(stack).to(haveResource('AWS::CloudFormation::CustomResource', {
  DeletionPolicy: 'Retain',
  UpdateReplacePolicy: 'Retain',
}, ResourcePart.CompleteDefinition));
Updated snippet for CDK 2.x, using the assertions module:
import { Template } from 'aws-cdk-lib/assertions';

const template = Template.fromStack(stack);
template.hasResource('AWS::Cognito::UserPool', {
  DeletionPolicy: 'Retain',
  UpdateReplacePolicy: 'Retain',
});
Here's an updated snippet, confirmed working on CDK 1.107.0:
import { ResourcePart } from '@aws-cdk/assert';
import '@aws-cdk/assert/jest'; // registers the toHaveResource Jest matcher

test('stack has correct policies', async () => {
  expect(stack).toHaveResource('AWS::Cognito::UserPool', {
    DeletionPolicy: 'Retain',
    UpdateReplacePolicy: 'Retain',
  }, ResourcePart.CompleteDefinition);
});
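For context, here is a minimal self-contained sketch (CDK 2.x, Jest) of a stack that satisfies the assertion above; the construct id AppsUserPool mirrors the logical id from the question's template and is an assumption:

import { App, RemovalPolicy, Stack } from 'aws-cdk-lib';
import { Template } from 'aws-cdk-lib/assertions';
import * as cognito from 'aws-cdk-lib/aws-cognito';

test('user pool is retained on delete and replace', () => {
  const stack = new Stack(new App(), 'TestStack');
  const pool = new cognito.UserPool(stack, 'AppsUserPool', {
    userPoolName: 'test',
  });
  // applyRemovalPolicy is what emits "DeletionPolicy": "Retain" (and the
  // matching "UpdateReplacePolicy") on the synthesized resource.
  pool.applyRemovalPolicy(RemovalPolicy.RETAIN);

  Template.fromStack(stack).hasResource('AWS::Cognito::UserPool', {
    DeletionPolicy: 'Retain',
    UpdateReplacePolicy: 'Retain',
  });
});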
Related
I have an Azure DevOps pipeline whose resources section is given below:
resources:
  repositories:
    - repository: test
      type: git
      name: Hackfest/template
  pipelines:
    - pipeline: Build
      source: mybuild
      branch: main
      # version: # Latest by default
      trigger:
        branches:
          include:
            - main
I'm trying to invoke the pipeline using a REST API call. The body of the REST API call is given below:
$body='{
  "definition": { "id": "3321" },
  "resources": {
    "pipelines": {
      "Build": {
        "version": "20220304.15",
        "source": "mybuild"
      }
    }
  },
  "sourceBranch": "main"
}'
With the above JSON string I'm able to invoke the pipeline, but it does not pick up the artifacts from version 20220304.15 of the build "mybuild". Instead, it takes the latest artifact version of mybuild and starts the build.
How should I modify the above body to pick the correct version of "mybuild"?
With the Runs - Run Pipeline API, this worked for me:
"resources": {
"repositories": {
"self": {
"refName": "refs/heads/dev"
}
},
"pipelines": {
"Build": {
"version": "Build_202203040100.1"
}
}
}
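As an illustration, here is a minimal sketch of that call against the Runs - Run Pipeline endpoint. It assumes Node 18+ (for the built-in fetch); the {org}/{project} placeholders, the pipeline id 3321 from the question, and the AZDO_PAT environment variable are all assumptions:

// Queue a run, pinning the consumed pipeline artifact to a specific version.
async function runPipeline(): Promise<void> {
  const pat = process.env.AZDO_PAT ?? ''; // hypothetical PAT variable
  const url =
    'https://dev.azure.com/{org}/{project}/_apis/pipelines/3321/runs?api-version=7.0';

  const body = {
    resources: {
      repositories: { self: { refName: 'refs/heads/main' } },
      // Pin the consumed artifact to a specific run by its build number.
      pipelines: { Build: { version: 'Build_202203040100.1' } },
    },
  };

  const res = await fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Basic ' + Buffer.from(':' + pat).toString('base64'),
    },
    body: JSON.stringify(body),
  });
  console.log(res.status, await res.json());
}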
I have a workflow in a Standard logic app that has an HTTP trigger. When the workflow is triggered, it retrieves some data from a Cosmos DB via the Cosmos DB connector.
That connector requires an API connection. I have already created and deployed a 'V2' API connection; let's call it myCosmosCon.
Also, in the ARM template for my logic app, I have already added the connectionRuntimeUrl of my API connection (myCosmosCon) to the appSettings (configuration):
....
"siteConfig": {
  "appSettings": [
    {
      "name": "subscriptionId",
      "value": "[subscription().subscriptionId]"
    },
    {
      "name": "resourceGroup_name",
      "value": "[resourceGroup().name]"
    },
    {
      "name": "location_name",
      "value": "[resourceGroup().location]"
    },
    {
      "name": "connectionRuntimeUrl",
      "value": "[reference(resourceId('Microsoft.Web/connections', parameters('connection_name')),'2016-06-01', 'full').properties.connectionRuntimeUrl]"
    },
    .....
  ]
},
Then I wrote the following in the connections.json:
{
  "managedApiConnections": {
    "documentdb": {
      "api": {
        "id": "/subscriptions/@appsetting('subscriptionId')/providers/Microsoft.Web/locations/@appsetting('location_name')/managedApis/documentdb"
      },
      "connection": {
        "id": "/subscriptions/@appsetting('subscriptionId')/resourceGroups/@appsetting('resourceGroup_name')/providers/Microsoft.Web/connections/myCosmosCon"
      },
      "connectionRuntimeUrl": "@appsetting('connection_runtimeUrl')",
      "authentication": {
        "type": "ManagedServiceIdentity"
      }
    }
  }
}
Now, when I deploy the ARM template of my logic app, workflow, etc., I see no errors, and the workflow also looks good. The only problem is that the URL of the HTTP trigger is not generated, so I can't run the workflow.
However, if I change connection_runtimeUrl in the connections.json file to the actual value, so that it looks something like:
"connectionRuntimeUrl": "https://xxxxxxxxxxxxx.xx.common.logic-norwayeast.azure-apihub.net/apim/myCosmosCon/xxxxxxxxxxxxxxxxxxxxxxxx/",
The URL is generated immediately and I can simply run the workflow. AFTER that, if I revert connection_runtimeUrl to what it was (a call to appsetting()), it still works, and the link stays there.
It looks like when I deploy the logic app and the workflow, connections.json is not evaluated (the appsetting() call is never resolved), so Azure thinks there is an error and does not generate the link.
Any idea how to solve this problem?
Thanks!
Not sure, but this could be the issue:
When you create an API connection for a Logic App Standard, you also need to create an access policy at the API connection level for the system-assigned identity running the Logic App Standard.
param location string = resourceGroup().location
param cosmosDbAccountName string
param connectorName string = '${cosmosDbAccountName}-connector'

// The principalId of the logic app standard system-assigned identity
param principalId string

// Get a reference to the Cosmos DB account
resource cosmosDbAccount 'Microsoft.DocumentDB/databaseAccounts@2021-06-15' existing = {
  name: cosmosDbAccountName
}

// Create the related connection api
resource cosmosDbConnector 'Microsoft.Web/connections@2016-06-01' = {
  name: connectorName
  location: location
  kind: 'V2'
  properties: {
    displayName: connectorName
    parameterValues: {
      databaseAccount: cosmosDbAccount.name
      accessKey: listKeys(cosmosDbAccount.id, cosmosDbAccount.apiVersion).primaryMasterKey
    }
    api: {
      id: 'subscriptions/${subscription().subscriptionId}/providers/Microsoft.Web/locations/${location}/managedApis/documentdb'
    }
  }
}

// Grant permission to the logic app standard to access the connection api
resource cosmosDbConnectorAccessPolicy 'Microsoft.Web/connections/accessPolicies@2016-06-01' = {
  name: '${cosmosDbConnector.name}/${principalId}'
  location: location
  properties: {
    principal: {
      type: 'ActiveDirectory'
      identity: {
        tenantId: subscription().tenantId
        objectId: principalId
      }
    }
  }
}

output connectionRuntimeUrl string = reference(cosmosDbConnector.id, cosmosDbConnector.apiVersion, 'full').properties.connectionRuntimeUrl
I'm having trouble with the exact same issue/bug. The only workaround, as I see it, is to deploy the workflow twice: the first time with an actual URL pointing to a dummy connection, and the second time with the appsetting reference.
I have seen lots of answers on how to debug a Lambda function offline in VS Code, and I have got that working, to the extent that I can set breakpoints and step through it.
However, I am unsure how to specify the payload input for the Lambda function when testing.
{
  "configurations": [
    {
      "type": "aws-sam",
      "request": "direct-invoke",
      "name": "Downloads:charge.handler (nodejs10.x)",
      "invokeTarget": {
        "target": "code",
        "projectRoot": "",
        "lambdaHandler": "charge.handler"
      },
      "lambda": {
        "runtime": "nodejs10.x",
        "payload": {},
        "environmentVariables": {}
      }
    }
  ]
}
It seems that whatever I put into the payload JSON field, I only ever see an empty param object when my Lambda function runs. I also have an SSM key saved on the AWS server. Will that automatically be available to my locally debugged Lambda function, which I have set up with the SAM CLI, Docker, and the AWS CLI?
Any help would be greatly appreciated.
Thanks,
Greg
Okay, so I wasn't specifying the payload properly. It should be:
"payload": { "json": { "body": {
"item1": 1, "item2": 2, ...
}}}
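For illustration, here is a hypothetical charge.ts matching the "lambdaHandler": "charge.handler" target from the launch configuration above; the event the handler receives is the object under "json":

// charge.ts - hypothetical handler for "lambdaHandler": "charge.handler".
// With "payload": { "json": { "body": { "item1": 1, "item2": 2 } } } in
// launch.json, the handler's event argument is { body: { item1: 1, item2: 2 } }.
export const handler = async (event: { body?: Record<string, unknown> }) => {
  console.log('received event:', JSON.stringify(event));
  return { statusCode: 200, body: JSON.stringify(event.body ?? {}) };
};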
I've written a simple Lambda function in Micronaut/Groovy to return Allow/Deny policies as an AWS API Gateway authorizer. When it is used as the API Gateway authorizer, the JSON cannot be parsed:
Execution failed due to configuration error: Could not parse policy
When testing locally, the response has the correct property case in the JSON, e.g.:
{
  "principalId": "user",
  "PolicyDocument": {
    "Context": {
      "stringKey": "1551172564541"
    },
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "execute-api:Invoke",
        "Effect": "Allow",
        "Resource": "arn:aws:execute-api:eu-west-1:<account>:<ref>/*/GET/"
      }
    ]
  }
}
When this is run in AWS, the JSON response has the properties all in lowercase:
{
  "principalId": "user",
  "policyDocument": {
    "context": {
      "stringKey": "1551172664327"
    },
    "version": "2012-10-17",
    "statement": [
      {
        "resource": "arn:aws:execute-api:eu-west-1:<account>:<ref>/*/GET/",
        "action": "execute-api:Invoke",
        "effect": "Allow"
      }
    ]
  }
}
I'm not sure whether the casing is the issue, but I cannot see what else it might be (I've tried many variations of the output).
I've tried various Jackson annotations (@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class) etc.) but they do not seem to have an effect on the output in AWS.
Any idea how to sort this? Thanks.
Example code (trying to get the output to look like the example above).
I'm running the example locally using:
runtime "io.micronaut:micronaut-function-web"
runtime "io.micronaut:micronaut-http-server-netty"
Lambda function handler:
AuthResponse sessionAuth(APIGatewayProxyRequestEvent event) {
    AuthResponse authResponse = new AuthResponse()
    authResponse.principalId = 'user'
    authResponse.policyDocument = new PolicyDocument()
    authResponse.policyDocument.version = "2012-10-17"
    authResponse.policyDocument.setStatement([new session.auth.Statement(
        Effect: Statement.Effect.Allow,
        Action: "execute-api:Invoke",
        Resource: "arn:aws:execute-api:eu-west-1:<account>:<ref>/*/GET/"
    )])
    return authResponse
}
AuthResponse looks like:
@CompileStatic
class AuthResponse {
    String principalId
    PolicyDocument policyDocument
}

@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class)
@CompileStatic
class PolicyDocument {
    String Version
    List<Statement> Statement = []
}

@JsonNaming(PropertyNamingStrategy.UpperCamelCaseStrategy.class)
@CompileStatic
class Statement {
    String Action
    String Effect
    String Resource
}
It looks like you cannot rely on the AWS Lambda Java serializer not to change your JSON response if you are relying on some kind of annotation or mapper. If you want the response to be untouched, you'll need to use the raw output-stream type of handler.
See the end of the AWS doc Handler Input/Output Types (Java).
I'm creating a Python Flask-RESTPlus server application, and I'm trying to create a model for an input body (of a POST operation) with the 'allOf' operator, equivalent to the following example, taken from a swagger.yaml I created with the Swagger editor:
definitions:
  XXXOperation:
    description: something...
    properties:
      oper_type:
        type: string
        enum:
          - oper_a
          - oper_b
          - oper_c
      operation:
        allOf:
          - $ref: '#/definitions/OperA'
          - $ref: '#/definitions/OperB'
          - $ref: '#/definitions/OperC'
It should be something like (just in my crazy imagination):
xxx_oper_model = api.model('XXXOperation', {
    'oper_type': fields.String(required=True, enum=['oper_a', 'oper_b', 'oper_c']),
    'operation': fields.Nested([OperA, OperB, OperC], type='anyof')
})
where OperA, OperB, OperC are also defined as models.
How can I do that?
Actually, I'd prefer to use 'oneOf', but as I understand it, that's not supported even in the Swagger editor, so I'm trying to use 'allOf' with non-required fields.
Versions: flask restplus: 0.10.1, flask: 0.12.2, python: 3.6.2
Thanks a lot
You need to use api.inherit. As mentioned on page 30 of the documentation, for example:
parent = api.model('Parent', {
    'name': fields.String,
    'class': fields.String(discriminator=True)
})

child = api.inherit('Child', parent, {
    'extra': fields.String
})
This way, Child will have all the properties of Parent plus its own additional property, extra. The generated Swagger definition is:
{
  "Child": {
    "allOf": [
      {
        "$ref": "#/definitions/Parent"
      },
      {
        "properties": {
          "extra": {
            "type": "string"
          }
        }
      }
    ]
  }
}