Warned - no cfnRole set and unnecessary files were created after deploy - python-3.x

I get a "no cfnRole set" warning and unnecessary files are created after deploy:
Serverless: Safeguards Processing...
Serverless: Safeguards Results:
Summary --------------------------------------------------
passed - no-unsafe-wildcard-iam-permissions
passed - framework-version
warned - require-cfn-role
passed - allowed-runtimes
passed - no-secret-env-vars
passed - allowed-regions
passed - allowed-stages
Details --------------------------------------------------
1) Warned - no cfnRole set
details: http://slss.io/sg-require-cfn-role
Require the cfnRole option, which specifies a particular role for CloudFormation to assume while deploying.
I went to the site linked in the details (http://slss.io/sg-require-cfn-role), but I still don't know how to fix it.
Also, s_hello.py and s_hello2.py are always generated after deploy.
This is my serverless.yaml file
service: myapp
app: sample-app
org: xxx
provider:
  name: aws
  runtime: python3.7
  stage: dev
  region: us-east-1
package:
  individually: true
functions:
  hello:
    handler: src/handler/handler.hello
  hello2:
    handler: src/handler2/handler2.hello2
This always happens even though I followed that site.
My Lambda function package always contains "s_xxx.py" files (where xxx is the name of the handler file).

I solved this issue by creating a cfn-role in AWS IAM, following these steps:
Roles -> Create Role -> AWS Service -> Select CloudFormation from the list
Next: Permissions
You need to choose all the policies you need to deploy your lambda function (S3FullAccess, SQSFullAccess, LambdaFullAccess...)
One of them, AWSConfigRole, is mandatory; it allows the Serverless Framework to use this role.
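If you prefer to script those console steps, here is a rough, hedged sketch using the Node aws-sdk; the role name and the attached policy ARNs are only examples, pick the policies your stack actually needs:
// hedged sketch: create a CloudFormation deploy role programmatically
const AWS = require("aws-sdk")
const iam = new AWS.IAM()

// trust policy so CloudFormation can assume the role
const trustPolicy = {
  Version: "2012-10-17",
  Statement: [{
    Effect: "Allow",
    Principal: { Service: "cloudformation.amazonaws.com" },
    Action: "sts:AssumeRole"
  }]
}

async function createCfnRole() {
  const { Role } = await iam.createRole({
    RoleName: "cfn-Role",                                   // example name
    AssumeRolePolicyDocument: JSON.stringify(trustPolicy)
  }).promise()

  // attach whichever managed policies your deployment needs (examples only)
  const policies = [
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/AWSLambda_FullAccess"
  ]
  for (const PolicyArn of policies) {
    await iam.attachRolePolicy({ RoleName: "cfn-Role", PolicyArn }).promise()
  }

  return Role.Arn   // this is the value to use for cfnRole below
}

createCfnRole().then(arn => console.log("cfnRole:", arn))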
After creating the role, copy its ARN and add a cfnRole entry under the provider section, like this:
provider:
  name: aws
  runtime: python3.7
  stage: ${opt:stage, 'dev'}
  profile: ${self:custom.deploy-profile.${opt:stage, 'dev'}}
  region: us-west-2
  environment:
    TEMP: "/tmp"
  cfnRole: arn:aws:iam::xxxxxxxxxx:role/cfn-Role
That works for me, I hope it helps!

Related

Accessing environment configs defined in serverless.yaml in standalone nodejs script

I have recently started working on a project in which we are using the Serverless Framework. We are using Docker to make the dev environment easier to set up.
As part of this Docker setup we have created a script that creates S3 buckets and tables, among other things. We were previously defining environment variables in the docker-compose file and accessing them in our Node.js app. For deployment to other environments, our devops team defined a few environment variables in the serverless.yaml file, resulting in environment configs being present in two places. We are now planning to move all the environment configs defined in our docker-compose file to serverless.yaml. This works well for our lambda functions, as they are able to read these configs, but it doesn't work for the standalone setup script that we have written.
I tried using the serverless-scriptable-plugin in an attempt to read these env variables, but I am still unable to do so.
Here is my serverless.yaml file
service:
  name: my-service
frameworkVersion: '2'
configValidationMode: error
provider:
  name: aws
  runtime: nodejs14.x
  region: 'us-east-1'
  profile: ${env:PROFILE, 'dev'}
  stackName: stack-${self:provider.profile}
  apiName: ${self:custom.environment_prefix}-${self:service.name}-my-api
  environment: ${self:custom.environment_variables.${self:provider.profile}}
plugins:
  - serverless-webpack
  - serverless-scriptable-plugin
  - serverless-offline-sqs
  - serverless-offline
functions:
  myMethod:
    handler: handler.myHandler
    name: ${self:custom.environment_prefix}-${self:service.name}-myHandler
    events:
      - sqs:
          arn:
            Fn::GetAtt:
              - MyQueue
              - Arn
resources:
  Resources:
    MyQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:custom.queuename}
        Tags:
          - Key: product
            Value: common
          - Key: service
            Value: common
          - Key: function
            Value: ${self:service.name}
          - Key: region
            Value: ${env:REGION}
package:
  individually: true
custom:
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
  serverless-offline:
    host: 0.0.0.0
    port: 3000
  serverless-offline-sqs:
    apiVersion: '2012-11-05'
    endpoint: http://sqs:9324
    region: ${self:provider.region}
    accessKeyId: root
    secretAccessKey: root
    skipCacheInvalidation: false
  localstack:
    stages:
      - local
    lambda:
      mountCode: true
    debug: true
  environment_prefixes:
    staging: staging
    production: production
    dev: dev
  environment_prefix: ${self:custom.environment_prefixes.${self:provider.profile}}
  queuename: 'myQueue'
  environment_variables:
    dev:
      AWS_ACCESS_KEY_ID: test
      AWS_SECRET_ACCESS_KEY: test
      BUCKET_NAME: my-bucket
      S3_URL: http://localstack:4566
      SLS_DEBUG: '*'
  scriptable:
    commands:
      setup: node /app/dockerEntrypoint.js
In my Dockerfile I try executing the script with sls setup as the CMD. I initially thought that running it through the sls command might expose the environment variables defined in the serverless.yaml file, but that doesn't seem to happen.
Is there any other way this can be achieved? I am trying to access these variables using process.env, which works for the lambdas but not for my standalone script. Thanks!
There's not a good way to get access to these environment variables if you're running the lambda code as a script.
The Serverless Framework injects these variables into the Lambda function runtime configuration via CloudFormation.
It does not insert/update the raw serverless.yml file, nor does it somehow intercept calls to process.env via the node process.
You could use the scriptable plugin to run after packaging and then export each variable into your local Docker environment, but that seems pretty heavy for the variables in your env.
Instead, you might consider something like dotenv, which will load variables from a .env file into your environment.
There is a serverless-dotenv plugin you could use, and then your script could also call dotenv before running.
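As an illustration only, here is a minimal sketch of what the standalone script could look like with the dotenv approach; the .env file and the bucket-creation step are assumptions, not part of the original setup, and the env var names come from the serverless.yaml above:
// dockerEntrypoint.js -- hedged sketch, assuming the shared values live in a .env file
require("dotenv").config()          // loads AWS_ACCESS_KEY_ID, BUCKET_NAME, S3_URL, ... into process.env

const AWS = require("aws-sdk")

// point the SDK at the localstack endpoint from the env config
const s3 = new AWS.S3({
  endpoint: process.env.S3_URL,     // e.g. http://localstack:4566
  s3ForcePathStyle: true            // path-style addressing for localstack-like endpoints
})

s3.createBucket({ Bucket: process.env.BUCKET_NAME })
  .promise()
  .then(() => console.log(`created bucket ${process.env.BUCKET_NAME}`))
  .catch(err => console.error("setup failed:", err.message))
The lambdas can then reference the same .env values from serverless.yaml (for example via ${env:...} or the dotenv plugin), so both the script and the deployed functions read from a single source.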

Lambda functions do not appear in the AWS Console when I deploy from VSCode with Serverless Deploy

My problem is that when I write a Lambda function in VSCode, I cannot get it to show up in the AWS console after deploying.
I have an AWS account and provided credentials to use in VSCode. I am just testing the deployment of a simple Lambda function to the AWS Console with the serverless deploy command. So far no success: it creates the bucket on S3 and puts the zipped code there.
The ConsoleTest function was created manually in the AWS Lambda Console.
My serverless.yml looks like this:
service: myservice
provider:
  name: aws
  runtime: nodejs12.x
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: users/create
          method: get
Result in the terminal (I correctly get a JSON response).
I was following the official guide: https://serverless.com/framework/docs/providers/aws/guide/deploying/
Any help, please?
Found a solution.
The problem was that it deployed to the wrong region. I also installed the AWS CLI and specified the region in its config file, and added the region property to the provider. I'm not sure which one helped, because they basically do the same thing.
When I put
service: myservice
provider:
  name: aws
  runtime: nodejs12.x
  stage: development
  region: eu-central-1
Everything started to work correctly and the function was deployed and visible in my AWS console.
Change the region selected in the AWS console to the one you deployed to.

How do AWS credentials work with GitHub Actions?

In my unit tests, I'm using aws-sdk to test SES, which needs some credentials, and we are facing a problem accessing the secrets with GitHub Actions.
At the beginning I was trying to write the values to ~/.aws/credentials using the run command from GitHub workflows:
# .github/workflows/nodejs.yml
steps:
  ...
  - name: Unit Test
    run: |
      mkdir -p ~/.aws
      touch ~/.aws/credentials
      echo "[default]
      aws_access_key_id = ${{ secrets.AWS_ACCESS_KEY_ID }}
      aws_secret_access_key = ${{ secrets.AWS_SECRET_KEY_ID }}
      region = ${AWS_DEFAULT_REGION}
      [github]
      role_arn = arn:aws:iam::{accountID}:role/{role}
      source_profile = default" > ~/.aws/credentials
      npm test
    env:
      AWS_DEFAULT_REGION: us-east-1
      CI: true
My original test file:
// ses.test.js
const AWS = require("aws-sdk")
const credentials = new AWS.SharedIniFileCredentials({ profile: "github" })
AWS.config.update({ credentials })
...
I also tried another way to get credentials in my tests, but it doesn't work either:
const AWS = require("aws-sdk")
const credentials = new AWS.ChainableTemporaryCredentials({
  params: { RoleArn: "arn:aws:iam::{accountID}:role/{role}" },
  masterCredentials: new AWS.EnvironmentCredentials("AWS")
})
AWS.config.update({ credentials })
Finally, I tried to create a custom action (using the actions JS libraries @actions/core, @actions/io, @actions/exec) to get the AWS env values and write them to ~/.aws/credentials, but that also doesn't work as expected.
One way that worked was exposing AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY directly (without using GitHub Actions secrets, which is not ideal for security reasons).
Does anyone have ideas on how AWS credentials could work in GitHub Actions with secrets?
Thanks a lot for your attention.
Luckily, the aws-sdk should automatically detect credentials set as environment variables and use them for requests.
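For example (a minimal sketch, the region value is an assumption): once AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are exported in the step's env, the test no longer needs SharedIniFileCredentials or a ~/.aws/credentials file at all:
// ses.test.js -- hedged sketch; no explicit credentials needed, the SDK's default
// chain finds AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY in the environment
const AWS = require("aws-sdk")

const ses = new AWS.SES({ region: "us-east-1" })   // region is an example value

// simple smoke test that the injected credentials are usable
ses.getSendQuota().promise()
  .then(quota => console.log("SES send quota:", quota))
  .catch(err => console.error("credential problem:", err.message))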
To get access to secrets in your action, you need to set them in the repo. Then you can expose them to the step as an env var.
For more details see GitHub Encrypted secrets
On GitHub, navigate to the main page of the repository
Under your repository name, click the ⚙ Settings tab
In the left sidebar, click Secrets
Type a name for your secret in the "Name" input box
Type the value for your secret
Click Add secret
In your case you will want to add secrets for both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Now that those are set you can pass those values into the action via the workflow yaml:
steps:
  ...
  - name: Unit Test
    uses: ...
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    run: ...
Avoid using long-term, hard-coded credentials.
The configure-aws-credentials action provides a mechanism to configure AWS credential and region environment variables for use in other GitHub Actions. The environment variables will be detected by both the AWS SDKs and the AWS CLI to determine the credentials and region to use for AWS API calls.
I recommend configuring configure-aws-credentials to use OpenID Connect (OIDC). This allows your GitHub Actions workflows to access resources in AWS, without needing to store the AWS credentials as long-lived GitHub secrets. The GitHub Configuring OpenID Connect in AWS post walks through setting this up.
To give you a practical example, I set up a pipeline to upload dummy data to an S3 bucket. First set up an OpenID Connect provider and a role for GitHub to federate into in your AWS account. The examples in configure-aws-credentials are written in CloudFormation, but I've translated them to the Python Cloud Development Kit (CDK) below. Make sure to change the role condition to match your repository.
github_oidc_provider = iam.OpenIdConnectProvider(
    self,
    "GithubOIDC",
    url="https://token.actions.githubusercontent.com",
    thumbprints=["a031c46782e6e6c662c2c87c76da9aa62ccabd8e"],
    client_ids=[
        "sts.amazonaws.com"
    ]
)

github_actions_role = iam.Role(
    self,
    "DeployToBucketRole",
    max_session_duration=cdk.Duration.seconds(3600),
    role_name="github-actions-role",
    description="Github actions deployment role to S3",
    assumed_by=iam.FederatedPrincipal(
        federated=github_oidc_provider.open_id_connect_provider_arn,
        conditions={
            "StringLike": {
                # <GITHUB USERNAME>/<YOUR REPO NAME>
                "token.actions.githubusercontent.com:sub": 'repo:arbitraryrw/cdk-github-actions-demo:*'
            }
        },
        assume_role_action="sts:AssumeRoleWithWebIdentity"
    )
)

bucket = s3.Bucket(
    self,
    f"example_bucket",
    bucket_name="cdk-github-actions-demo",
    encryption=s3.BucketEncryption.S3_MANAGED,
    enforce_ssl=True,
    block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
    removal_policy=cdk.RemovalPolicy.DESTROY,
    auto_delete_objects=True
)

# Give the role permissions to read / write to the bucket
bucket.grant_read_write(github_actions_role)
You can then reference this in your pipeline and run AWS CLI / SDK commands using these credentials. Notice that the snippet references Github Encrypted Secrets, I recommend leveraging this functionality:
name: Example CDK Pipeline
on:
  push:
    branches: [ main ]
jobs:
  build:
    name: Emulate build step
    runs-on: ubuntu-latest
    steps:
      - name: Checking out repository
        uses: actions/checkout@v2
      - name: "Upload artifacts"
        uses: actions/upload-artifact@v2
        with:
          name: build-artifacts
          path: ${{ github.workspace }}/resources
  deploy:
    needs: build
    name: Deploy build artifacts to S3
    runs-on: ubuntu-latest
    # These permissions are needed to interact with GitHub's OIDC Token endpoint.
    permissions:
      id-token: write
      contents: read
    steps:
      - name: "Download build artifacts"
        uses: actions/download-artifact@v2
        with:
          name: build-artifacts
          path: ${{ github.workspace }}/resources
      - name: Configure AWS credentials from Test account
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-region: us-east-1
          role-to-assume: ${{ secrets.AWS_ROLE_FOR_GITHUB }}
          role-session-name: GitHubActions
      - run: aws sts get-caller-identity
      - name: Copy files to the test website with the AWS CLI
        run: |
          aws s3 sync ./resources s3://${{ secrets.BUCKET_NAME }}
For a full example on how to set this up using the CDK you can take a look at the cdk-github-actions-demo repo I set up.
Take a look at: https://github.com/aws-actions/configure-aws-credentials
It allows you to configure AWS credential and region environment variables for use in other GitHub Actions. The environment variables will be detected by both the AWS SDKs and the AWS CLI to determine the credentials and region to use for AWS API calls.
I was hitting my head against the wall on the same thing for a while.
In my case the setting profile = default was the issue.
I was able to remove that from my script and keep only the env block. If I had both, it would throw an error.
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'us-east-1'
If running aws from the command line is acceptable for you, you can set the following ENV vars and just use aws commands without needing to run aws configure:
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: us-east-1
  AWS_DEFAULT_OUTPUT: json

Cannot deploy Node.js app inside AWS Lambda using Serverless Framework

I am new to the Serverless Framework.
I was trying to deploy my code to Lambda using Serverless.
service:
  name: store-consumer
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-XXXXXX-1
functions:
  lambda:
    handler: index.handler
The content of the serverless.yml file is as given above.
But when I hit 'sls deploy' in the terminal, my code is just zipped and uploaded to an S3 bucket. How do I deploy my code to the corresponding Lambda function using Serverless?
I assume I'll have to give some credentials for the Lambda, but how do I do that in the .yml file? What am I not getting right?
You can specify the Lambda function name explicitly using the name field. Example:
service:
  name: store-consumer
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: ap-XXXXXX-1
functions:
  lambda:
    handler: index.handler
    name: myfunc
With this config file your deployed Lambda function will have the name myfunc.
See line 129 in https://serverless.com/framework/docs/providers/aws/guide/serverless.yml/.
Using the name of an already existing Lambda function will not work; you still have to delete the old Lambda function beforehand.

Serverless not including my node_modules

I have a nodejs serverless project that has this structure:
- node_modules
- package.json
- serverless.yml
- functions
  - medium
    - mediumHandler.js
my serverless.yml:
service: googleAnalytic
provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: us-east-1
package:
  include:
    - node_modules/**
functions:
  mediumHandler:
    handler: functions/medium/mediumHandler.mediumHandler
    events:
      - schedule:
          name: MediumSourceData
          description: 'Captures data between set dates'
          rate: rate(2 minutes)
      - cloudwatchEvent:
          event:
            source:
              - "Lambda"
            detail-type:
              - ""
      - cloudwatchLog: '/aws/lambda/mediumHandler'
my sls info shows:
Service Information
service: googleAnalytic
stage: dev
region: us-east-1
stack: googleAnalytic-dev
api keys:
None
endpoints:
None
functions:
mediumHandler: googleAnalytic-dev-mediumHandler
When I run
serverless invoke local -f mediumHandler
it works, and my script, which uses googleapis and aws-sdk, works too. But when I deploy, those modules are skipped and no error is shown.
When debugging Serverless's packaging process, use sls package (or sls deploy --noDeploy for old versions). You'll get a .serverless directory that you can inspect to see what's inside the deployment package.
From there, you can see whether node_modules is included or not and adjust your serverless.yml accordingly, without needing to deploy every time you make a change.
Serverless will exclude development packages by default. Check your package.json and ensure your required packages are in the dependencies object, as devDependencies will be excluded.
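As a quick sanity check (a hedged sketch, not part of the original answer), you can verify where the modules your handler requires are declared; googleapis and aws-sdk are the ones mentioned in the question:
// checkDeps.js -- warns if a required module would be excluded from the package
const pkg = require("./package.json")

for (const name of ["googleapis", "aws-sdk"]) {
  if (pkg.dependencies && pkg.dependencies[name]) continue   // will be packaged
  if (pkg.devDependencies && pkg.devDependencies[name]) {
    console.warn(`${name} is only in devDependencies and will be excluded on deploy`)
  } else {
    console.warn(`${name} is not declared in package.json at all`)
  }
}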
I mistakenly put this in my serverless.yml, which caused the same issue you're facing:
package:
  patterns:
    - '!node_modules/**'
