GitHub Action running "terraform plan" failing with a 404

I am running the following GitHub Action, which should post the output from terraform plan to the PR in GitHub, but it's giving me a 404. I have pretty much copied the action from the Terraform documentation.
The only thing I can see that looks suspect is that the URL generating the 404 appears to be missing a value - it says
"https://api.github.com/repos/orgname/reponame/issues//comments" but I think it probably should be
"https://api.github.com/repos/orgname/reponame/issues/SOMETHING/comments":
- name: Run Terraform Plan
  id: plan
  run: |
    cd operations/app/terraform/vars/staging
    terraform plan -no-color -input=false
  continue-on-error: true
- uses: actions/github-script@v6
  env:
    PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      const output = `#### Terraform Format and Style 🖌 \`${{ steps.fmt.outcome }}\`
      #### Terraform Initialization ⚙️\`${{ steps.init.outcome }}\`
      #### Terraform Validation 🤖\`${{ steps.validate.outcome }}\`
      #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
      <details><summary>Show Plan</summary>
      \`\`\`\n
      ${process.env.PLAN}
      \`\`\`
      </details>
      *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`;
      github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: output
      })
I get the following:
Plan: 0 to add, 1 to change, 0 to destroy.
─────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
RequestError [HttpError]: Not Found
    at /home/runner/work/_actions/actions/github-script/v6/dist/index.js:4469:21
    at processTicksAndRejections (node:internal/process/task_queues:96:5) {
  status: 404,
  response: {
    url: 'https://api.github.com/repos/myorg/myrepo/issues//comments',
Error: Unhandled error: HttpError: Not Found
    status: 404,
    headers: {
      <snip headers>
    data: {
      message: 'Not Found',
      documentation_url: 'https://docs.github.com/rest'
    }
  },
  request: {
    method: 'POST',
    url: 'https://api.github.com/repos/orgname/reponame/issues//comments',
    headers: {
      accept: 'application/vnd.github.-preview+json',
      'user-agent': 'actions/github-script octokit-core.js/3.5.1 Node.js/16.13.0 (linux; x64)',
      authorization: 'token [REDACTED]',
      'content-type': 'application/json; charset=utf-8'
    },

Try putting await in front of the function call in your action.
await github.rest.issues.createComment({
  issue_number: context.issue.number,
  owner: context.repo.owner,
  repo: context.repo.repo,
  body: output
})

The issue was that this was running on both push and pull_request - but on a push there is no PR for it to add the comment to, so that portion failed. Adding a conditional to the github-script step got it working as expected:
- uses: actions/github-script@v6
  if: github.event_name == 'pull_request'
  env:

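Putting it together with the step from the question, the guarded step ends up looking roughly like this (same env, token, and script as above; the script is trimmed here to the comment call):

- uses: actions/github-script@v6
  if: github.event_name == 'pull_request'   # only comment when there is actually a PR
  env:
    PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    script: |
      // ... build output exactly as in the question, then:
      await github.rest.issues.createComment({
        issue_number: context.issue.number,
        owner: context.repo.owner,
        repo: context.repo.repo,
        body: output
      })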
Related

Errno 20 Not a directory: '/tmp/tmp6db1q_sn/pyproject.toml/egg-info'

I'm wrapping my colleague's code in an AWS SAM app. I made a Lambda that just calls the existing code's run method, like this:
import json
from sentance_analyser_v2.main import run

def lambda_handler(event, context):
    if (event['path'] == '/analyse'):
        return RunAnalyser(event, context)
    else:
        return {
            "statusCode": 200,
            "body": json.dumps({
                "message": "Hello"
            }),
        }

def RunAnalyser(event, context):
    source = event['queryStringParameters']['source']
    output = run(source)
    return {
        "statusCode": 200,
        "body": json.dumps({
            "output": json.dumps(output)
        })
    }
After running into numerous package problems I realized I forgot to use the sam build --use-container command. I was hoping a container build would solve some of my package errors, but instead I ran into this error -
Error: PythonPipBuilder:ResolveDependencies - [Errno 20] Not a directory: '/tmp/tmp6db1q_sn/pyproject.toml/egg-info'
After trying to understand what the freak that even is, I started pulling my hair out, because there is nothing on this...
Here is the basic template that I currently use, just so I can test that my wrapping worked. I'm still very new to AWS & SAM, so I don't want to overcomplicate the template just yet.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  sam-app
  Sample SAM Template for sam-app

Globals:
  Function:
    Timeout: 30
    Tracing: Active
  Api:
    TracingEnabled: True

Resources:
  MainLambda:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: main_lambda/
      Handler: app.lambda_handler
      Runtime: python3.9
      Architectures:
        - x86_64
      Events:
        Analyser:
          Type: Api
          Properties:
            Path: /analyse
            Method: get

Outputs:
  HelloEfsApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello EFS function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"

Convert an Azure Pipeline condition/variable to a JSON boolean

Currently, I'm working on a pipeline that should call an Azure Function in a certain way, depending on the outcome/result of a previous job in that pipeline.
The Azure Function should be called when the result of the previous job is Succeeded, SucceededWithIssues, or Failed. We want to ignore Skipped and Cancelled.
The body sent to the Azure Function differs based on the result (Succeeded/SucceededWithIssues vs. Failed), but only by a single boolean in the payload called DeploymentFailed.
The current implementation uses two separate tasks for calling the Azure Function. This was necessary because I couldn't find a way to convert the outcome of the previous job to a boolean.
The current pipeline as is:
trigger:
  - master

parameters:
  - name: jobA
    default: 'A'
  - name: correlationId
    default: '90c7e477-2141-45db-812a-019a9f88bdc8'

pool:
  vmImage: ubuntu-latest

jobs:
  - job: job_${{parameters.jobA}}
    steps:
      - script: echo "This job could potentially fail."
  - job: job_B
    dependsOn: job_${{parameters.jobA}}
    variables:
      failed: $[dependencies.job_${{parameters.jobA}}.result]
    condition: in(variables['failed'], 'Succeeded', 'SucceededWithIssues', 'Failed')
    pool: server
    steps:
      - task: AzureFunction@1
        displayName: Call function succeeded
        condition: in(variables['failed'], 'Succeeded', 'SucceededWithIssues')
        inputs:
          function: "<azure-function-url>"
          key: "<azure-function-key>"
          method: 'POST'
          waitForCompletion: false
          headers: |
            {
              "Content-Type": "application/json"
            }
          body: |
            {
              "CorrelationId": "${{parameters.correlationId}}",
              "DeploymentFailed": false # I would like to use the outcome of `variable.failed` here and cast it to a JSON bool.
            }
      - task: AzureFunction@1
        displayName: Call function failed
        condition: in(variables['failed'], 'Failed')
        inputs:
          function: "<azure-function-url>"
          key: "<azure-function-key>"
          waitForCompletion: false
          method: 'POST'
          headers: |
            {
              "Content-Type": "application/json"
            }
          body: |
            {
              "CorrelationId": "${{parameters.correlationId}}",
              "DeploymentFailed": true # I would like to use the outcome of `variable.failed` here and cast it to a JSON bool.
            }
My question: how can I use the outcome of the previous job so that I need only one Azure Function invoke task?
You can map the condition directly to a variable:
variables:
  failed: $[dependencies.job_${{parameters.jobA}}.result]
  result: $[lower(notIn(dependencies.job_${{parameters.jobA}}.result, 'Succeeded', 'SucceededWithIssues', 'Failed'))]
and then:
- task: AzureFunction@1
  displayName: Call function succeeded
  condition: in(variables['failed'], 'Succeeded', 'SucceededWithIssues')
  inputs:
    function: "<azure-function-url>"
    key: "<azure-function-key>"
    method: 'POST'
    waitForCompletion: false
    headers: |
      {
        "Content-Type": "application/json"
      }
    body: |
      {
        "CorrelationId": "${{parameters.correlationId}}",
        "DeploymentFailed": $(result)
      }
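If DeploymentFailed should be true exactly when the upstream job failed, the same mapping idea can also be written with eq - a variant sketch, not part of the original answer:

variables:
  failed: $[dependencies.job_${{parameters.jobA}}.result]
  # lowercased 'true' only when the upstream job result is 'Failed'
  result: $[lower(eq(dependencies.job_${{parameters.jobA}}.result, 'Failed'))]

$(result) then expands to an unquoted true/false in the JSON body, as in the task above.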

API gateway returns 401 and doesn't invoke custom authorizer

I've implemented a custom 'REQUEST' type authorizer for an API Gateway which validates a JWT token passed in the 'Authorization' header. I've tested the Lambda independently and it works as expected. I've also attached the authorizer to my routes and I can test it in the AWS console - again, everything seems to work (see image):
(Screenshot: successful invoke via console)
However, when I try to invoke my endpoints with the token in the Authorization header, I always receive an UNAUTHORIZED response:
{
  "errors": [
    {
      "category": "ClientError",
      "code": "UNAUTHORIZED",
      "detail": "Unauthorized",
      "method": "GET",
      "path": "/cases",
      "requestId": "004eb254-a926-45ad-96a5-ce3527621c81",
      "retryable": false
    }
  ]
}
From what I have gathered, API Gateway never invokes my authorizer, as I don't see any log events in its CloudWatch logs. I was able to enable CloudWatch logging for API Gateway itself, and the only log information I see is as follows:
| timestamp     | message |
|---------------|---------|
| 1578275720543 | (dac0d4f6-1380-4049-bcee-bf776ca78e5c) Extended Request Id: F2v9WFfiIAMF-9w= |
| 1578275720543 | (dac0d4f6-1380-4049-bcee-bf776ca78e5c) Unauthorized request: dac0d4f6-1380-4049-bcee-bf776ca78e5c |
| 1578275720543 | (dac0d4f6-1380-4049-bcee-bf776ca78e5c) Extended Request Id: F2v9WFfiIAMF-9w= |
| 1578275720544 | (dac0d4f6-1380-4049-bcee-bf776ca78e5c) Gateway response type: UNAUTHORIZED with status code: 401 |
| 1578275720544 | (dac0d4f6-1380-4049-bcee-bf776ca78e5c) Gateway response body: {"errors": [{"category": "ClientError","code": "UNAUTHORIZED","detail": "Unauthorized","method": "GET","path": "/cases","requestId": "dac0d4f6-1380-4049-bcee-bf776ca78e5c","retryable": false }]} |
| 1578275720544 | (dac0d4f6-1380-4049-bcee-bf776ca78e5c) Gateway response headers: {} |
| 1578275720544 | (dac0d4f6-1380-4049-bcee-bf776ca78e5c) Gateway response type: UNAUTHORIZED with status code: 401 |
At this point I am completely stuck and not sure how to debug this further. I'm assuming something must be configured wrong, but the log information I can find doesn't give any indication of what the problem is. I've also pasted a copy of my authorizer's configuration in the images below:
(Screenshot: authorizer configuration)
(Screenshot: one endpoint configured to use the authorizer)
I figured out the problem I was having:
I needed to set identitySource: method.request.header.Authorization in the authorizer field of the endpoint as well as in the CF stack.
Custom authorizer definition in raw CloudFormation:
service:
  name: api-base

frameworkVersion: ">=1.2.0 <2.0.0"

plugins:
  - serverless-plugin-optimize
  - serverless-offline
  - serverless-pseudo-parameters
  - serverless-domain-manager

custom:
  stage: ${self:provider.stage, 'dev'}
  serverless-offline:
    port: ${env:OFFLINE_PORT, '4000'}
  false: false
  cognitoStack: marley-auth
  customDomain:
    domainName: ${env:BE_HOST, ''}
    enabled: ${env:EN_CUSTOM_DOMAIN, self:custom.false}
    stage: ${self:provider.stage, 'dev'}
    createRoute53Record: true

provider:
  name: aws
  runtime: nodejs10.x
  versionFunctions: true
  apiName: public
  logs:
    restApi: true
  stackTags:
    COMMIT_SHA: ${env:COMMIT_SHA, 'NO-SHA'}
  environment:
    USER_POOL_ID: ${cf:${self:custom.cognitoStack}-${self:custom.stage}.UserPoolId}
    CLIENT_ID: ${cf:${self:custom.cognitoStack}-${self:custom.stage}.UserPoolClientId}
  timeout: 30
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "lambda:InvokeFunction"
      Resource: "*"

functions:
  authorizer:
    handler: handler/authorize.handler

resources:
  - Outputs:
      ApiGatewayRestApiId:
        Value:
          Ref: ApiGatewayRestApi
        Export:
          Name: ${self:custom.stage}-${self:provider.apiName}-ApiGatewayRestApiId
      ApiGatewayRestApiRootResourceId:
        Value:
          Fn::GetAtt:
            - ApiGatewayRestApi
            - RootResourceId
        Export:
          Name: ${self:custom.stage}-${self:provider.apiName}-ApiGatewayRestApiRootResourceId
      SharedAuthorizerId:
        Value:
          Ref: SharedAuthorizer
        Export:
          Name: ${self:custom.stage}-${self:provider.apiName}-ApiGatewaySharedAuthorizerId
  - Resources:
      SharedAuthorizer:
        Type: AWS::ApiGateway::Authorizer
        Properties:
          Name: public
          AuthorizerUri: !Join
            - ''
            - - 'arn:aws:apigateway:'
              - !Ref 'AWS::Region'
              - ':lambda:path/2015-03-31/functions/'
              - !GetAtt
                - AuthorizerLambdaFunction
                - Arn
              - /invocations
          RestApiId: !Ref 'ApiGatewayRestApi'
          Type: REQUEST
          IdentitySource: method.request.header.Authorization
          AuthorizerResultTtlInSeconds: '300'
        DependsOn: AuthorizerLambdaFunction
      ApiAuthLambdaPermission:
        Type: AWS::Lambda::Permission
        Properties:
          Action: lambda:InvokeFunction
          FunctionName: !Ref AuthorizerLambdaFunction
          Principal: apigateway.amazonaws.com
          SourceArn: !Sub "arn:aws:execute-api:#{AWS::Region}:#{AWS::AccountId}:#{ApiGatewayRestApi}/authorizers/*"
        DependsOn: ApiGatewayRestApi
Using the authorizer in another stack - note that I have specified identitySource here as well as in the definition of the authorizer; for some reason I had to set it in both places.
authorizer:
  type: CUSTOM
  authorizerId: ${cf:api-base-${self:custom.stage}.SharedAuthorizerId}
  identitySource: method.request.header.Authorization
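For context, that block sits under an http event of a function in the consuming service; a minimal sketch (the function name, handler path, and route are hypothetical):

functions:
  getCases:                        # hypothetical function name
    handler: handler/cases.get     # hypothetical handler path
    events:
      - http:
          path: cases
          method: get
          authorizer:
            type: CUSTOM
            authorizerId: ${cf:api-base-${self:custom.stage}.SharedAuthorizerId}
            identitySource: method.request.header.Authorization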

How to create a state machine with notifications and an SNS topic in the same template?

Consider this code:
serverless.yml
service: my-service

frameworkVersion: ">=1.38.0 <2.0.0"

plugins:
  - serverless-step-functions
  - serverless-pseudo-parameters
  - serverless-cf-vars
  - serverless-parameters

provider:
  name: aws
  stage: ${opt:stage}
  region: us-east-1

stepFunctions:
  stateMachines:
    MyStateMachine:
      name: my_state_machine
      notifications:
        ABORTED:
          - sns:
              Ref: SnsTopic
        FAILED:
          - sns:
              Ref: SnsTopic
      definition:
        StartAt: "Just Pass"
        States:
          "Just Pass":
            Type: Pass
            End: true

Resources:
  SnsTopic:
    Type: AWS::SNS::Topic
    Properties:
      TopicName: MySnsTopic
package.json
{
  "devDependencies": {
    "serverless-pseudo-parameters": "^2.5.0",
    "serverless-step-functions": "^2.10.0",
    "serverless-cf-vars": "^0.3.2",
    "serverless-domain-manager": "3.2.7",
    "serverless-aws-nested-stacks": "^0.1.2",
    "serverless-parameters": "0.1.0"
  }
}
This fails with the following error:
Error --------------------------------------------------
Error: The CloudFormation template is invalid: Template format error: Unresolved resource dependencies [SnsTopic] in the Resources block of the template
So it looks like the SnsTopic resource does not exist yet when the state machine is created. But how can I create it before the state machine?
A DependsOn attribute on the state machine leads to the same error. Any ideas?
The fix is quite trivial (facepalm): the topic has to be declared under the lowercase resources: key, which is where the Serverless Framework expects custom CloudFormation resources. With a top-level Resources: block the topic never makes it into the generated template, so the Ref cannot resolve:
resources:
  Resources:
    SnsTopic:
      Type: AWS::SNS::Topic
      Properties:
        TopicName: MySnsTopic
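For clarity, the relevant pieces of the corrected serverless.yml then reference each other like this (trimmed to the parts that matter):

stepFunctions:
  stateMachines:
    MyStateMachine:
      name: my_state_machine
      notifications:
        FAILED:
          - sns:
              Ref: SnsTopic   # resolvable now that the topic is merged into the generated template

resources:
  Resources:
    SnsTopic:
      Type: AWS::SNS::Topic
      Properties:
        TopicName: MySnsTopic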

LambdaFunction - Value of property Variables must be an object with String (or simple type) properties

I am using Serverless to deploy my Lambda-based application. It was deploying just fine, and then it stopped for some reason. I pared the entire package down to the serverless.yml below and one function in the handler - but I keep getting this error:
Serverless Error ---------------------------------------
An error occurred: TestLambdaFunction - Value of property Variables must be an object with String (or simple type) properties.
Stack Trace --------------------------------------------
Here is the serverless.yml
# serverless.yml
service: some-api

provider:
  name: aws
  runtime: nodejs6.10
  stage: prod
  region: us-east-1
  iamRoleStatements:
    $ref: ./user-policy.json
  environment:
    config:
      region: us-east-1

plugins:
  - serverless-local-dev-server
  - serverless-dynamodb-local
  - serverless-step-functions

package:
  exclude:
    - .gitignore
    - package.json
    - README.md
    - .git
    - ./**.test.js

functions:
  test:
    handler: handler.test
    events:
      - http: GET test

resources:
  Outputs:
    NewOutput:
      Description: Description for the output
      Value: Some output value
Test Lambda Function in Package
// handler.test
module.exports.test = (event, context, callback) => {
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({
      message: 'sadfasd',
      input: event
    })
  })
}
It turns out this issue does not have any relationship to the Lambda function. Here is what caused the error.
This does NOT work:
environment:
  config:
    region: us-east-1
This DOES work:
environment:
  region: us-east-1
Simply put, I don't think you can have more than one level of nesting in your YAML environment variables.
Even sls print as a sanity check won't surface this issue; it only shows up on sls deploy.
You have been warned, and hopefully saved!
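If you still want the grouping that the nested config block was giving you, a flattened equivalent keeps every value a plain string (the variable name here is just an illustration):

environment:
  CONFIG_REGION: us-east-1   # flat key/value instead of a nested object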
Another thing that might cause this kind of error is invalid YAML syntax.
It's easy to get confused about this.
Valid syntax for environment variables
environment:
  key: value
Invalid syntax for environment variables
environment:
  - key: value
Notice the little dash in the invalid example above?
In YAML, - denotes an array item, so that block is interpreted as an array, not an object.
That's why the error says "Value of property Variables must be an object with String (or simple type) properties."
This can be easily fixed by removing the - in front of the keys.
