Deploying multiple AWS lambdas separately - python-3.x

I'm using Serverless Framework 1.32.0 with AWS Lambda and Python 3.6. I would like to package and deploy my Lambdas separately, since at the moment every deployment bundles my whole directory for each Lambda, which will get confusing once there are many Lambdas in the near future.
This is my current project structure:
└── cat_service
    ├── hello_cat
    │   ├── hello_cat-functions.yml
    │   └── service.py
    ├── random_cat_fact
    │   ├── random_cat_fact-functions.yml
    │   └── service.py
    └── serverless.yml
serverless.yml
service: cat-service

provider:
  name: aws
  runtime: python3.6
  stage: dev
  stackName: cat-service
  deploymentBucket:
    name: test-cat-bucket
  role: arn:aws:iam::#{AWS::AccountId}:role/lambda-cat-role
  cfnRole: arn:aws:iam::#{AWS::AccountId}:role/cloudformation-cat-role

functions:
  - ${file(lambdas/hello_cat/hello_cat-functions.yml)}

stepFunctions:
  stateMachines:
    catStateMachine:
      definition:
        Comment: "Get cat hello"
        StartAt: hello_cat
        States:
          hello_cat:
            Type: Task
            Resource: "arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:${self:service}-${opt:stage}-hello_cat"
            End: true

plugins:
  - serverless-step-functions
  - serverless-pseudo-parameters
hello_cat-functions.yml
msc_cat_facts:
  handler: service.handler
  name: ${self:service}-${opt:stage}-msc_cat_facts
The problem is that when I deploy with serverless deploy --stage dev, it zips the full project and does not separate the Lambdas: the function shows up in the AWS console as hello_cat but contains the entire project structure instead of only the files in its own directory.
Is there a way to deploy separate lambdas in the same project with only one serverless.yml?
Thanks in advance.

You'll need to configure Serverless to package functions individually.
To do this, add the following to your serverless.yml:
package:
  individually: true
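If each zip should contain only that function's own files, a common companion setting is to exclude everything at the service level and add files back per function. This is a sketch based on Serverless v1 packaging options, not part of the answer above:
package:
  individually: true
  exclude:
    - "./**"   # start each function's package empty; functions add back their own files
Each function then declares its own include patterns, for example in its functions file (see the sketch after the next answer).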

Besides adding the following to serverless.yml, as proposed by @thomasmichaelwallace:
package:
  individually: true
also try changing the handler path in hello_cat-functions.yml from handler: service.handler to:
msc_cat_facts:
  handler: hello_cat/service.handler
  name: ${self:service}-${opt:stage}-msc_cat_facts
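Putting the two answers together, a hedged sketch of what lambdas/hello_cat/hello_cat-functions.yml could look like so this function only packages its own directory (the include pattern is an assumption based on Serverless v1 per-function packaging; adjust it to your real layout):
msc_cat_facts:
  handler: hello_cat/service.handler
  name: ${self:service}-${opt:stage}-msc_cat_facts
  package:
    include:
      - hello_cat/**   # assumption: only this function's directory goes into its zip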

Related

GitHub Actions and Azure ASP.NET | Configure directory

I'm new to GitHub Actions and Azure. I have a Blazor Server app that I'm trying to deploy, but the build fails because the default Actions workflow doesn't find my project structure when it runs dotnet build (the layout comes from JetBrains Rider). The fix is just to step one directory down, but I'm unsure of the best way to rewrite my workflow.
name: Build and deploy ASP.Net Core app to Azure Web App - quantumblink

on:
  push:
    branches:
      - master
  workflow_dispatch:

jobs:
  build:
    runs-on: windows-latest

    steps:
      - uses: actions/checkout@v2

      - name: Set up .NET Core
        env:
          PROJECT_PATH: ./QuantumBlinkSite
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '6.0.x'
          include-prerelease: true

      - name: Build with dotnet
        run: dotnet build --configuration Release

      - name: dotnet publish
        run: dotnet publish -c Release -o $PROJECT_PATH

      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v2
        with:
          name: .net-app
          path: PROJECT_PATH

  deploy:
    runs-on: windows-latest
    needs: build
    environment:
      name: 'Production'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}

    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v2
        with:
          name: .net-app

      - name: Deploy to Azure Web App
        id: deploy-to-webapp
        uses: azure/webapps-deploy@v2
        with:
          app-name: 'quantumblink'
          slot-name: 'Production'
          publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_9C448CC12BC94139931D6147F846D187 }}
          package: .
I did some googling and it seemed that setting an environment variable for PROJECT_PATH would work, but I guess not.
My project structure is:
QuantumBlinkSite
QuantumBlinkSite
├── Data
├── Pages
├── Properties
├── Shared
├── bin
│   └── Debug
│       └── net6.0
├── obj
│   └── Debug
│       └── net6.0
│           ├── ref
│           ├── refint
│           ├── scopedcss
│           │   ├── Pages
│           │   ├── Shared
│           │   ├── bundle
│           │   └── projectbundle
│           └── staticwebassets
└── wwwroot
What is the best way to specify the correct directory?
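One common way to handle this, offered as a sketch under the assumption that the project sits one directory below the repository root (./QuantumBlinkSite, as the PROJECT_PATH in the workflow suggests), is a job-level working-directory default plus a publish output path the upload step can reference:
jobs:
  build:
    runs-on: windows-latest
    defaults:
      run:
        working-directory: ./QuantumBlinkSite   # assumed project directory

    steps:
      - uses: actions/checkout@v2

      - name: Set up .NET Core
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '6.0.x'
          include-prerelease: true

      # run steps below execute inside the working-directory set above
      - name: Build with dotnet
        run: dotnet build --configuration Release

      - name: dotnet publish
        run: dotnet publish -c Release -o ${{ github.workspace }}/publish

      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v2
        with:
          name: .net-app
          path: ${{ github.workspace }}/publish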

gcloud app deploy "This deployment has too many files"

I get this error even though I only have around 10 files in my directory.
I am using GitHub Actions to deploy my React app.
This is my current directory:
├── app.yaml
└── build
    ├── index.html
    ├── favicon.ico
    └── ...etc (around 10 files)
Here I execute gcloud app deploy, but it returns an error saying "too many files".
Although I don't have 10,000 files in the current directory, I do have more than 10,000 files in a shallower directory in the hierarchy. Does app deploy see those?
I mean:
.
├── node_modules ← 10000 files
└── current_directory
    ├── app.yaml
    └── build
        ├── index.html
        ├── favicon.ico
        └── ...etc (around 10 files)
This is the error:
ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: This deployment has too many files. New versions are limited to 10000 files for this app.
- '@type': type.googleapis.com/google.rpc.BadRequest
  fieldViolations:
  - description: This deployment has too many files. New versions are limited to 10000
      files for this app.
    field: version.deployment.files[...]
This is my app.yaml:
runtime: nodejs12
api_version: 1
threadsafe: true

handlers:
- url: /
  static_files: build/index.html
  upload: build/index.html
- url: /
  static_dir: build
I'm not sure why it is trying to upload more than 10,000 files.
Thanks
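A hedged note: gcloud app deploy uploads everything under the directory that contains app.yaml unless a .gcloudignore excludes it, so if node_modules (or another large directory) ends up under that directory in CI, the file count explodes. A minimal .gcloudignore sketch placed next to app.yaml (the node_modules entry is an assumption about what is being picked up):
.gcloudignore
.git
.gitignore
node_modules/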

Lambda Function can't resolve layer code (nodejs)

Introduction
I am using layers to avoid duplicate code. Currently I have one Lambda function and one layer, but the function has problems using the layer.
I am using AWS SAM to deploy it.
Project
The project looks like this:
backend/
├── lambdas/
│   └── onSignup/
│       ├── app.ts
│       └── package.json
├── layers/
│   └── SQLLayer/
│       └── nodejs/
│           ├── package.json
│           └── index.ts
└── template.yaml
template.yaml:
Resources:
  SQLLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: sql-dependencies
      Description: sql dependencies
      ContentUri: layers/SQLLayer/
      CompatibleRuntimes:
        - nodejs14.x
      LicenseInfo: 'MIT'
      RetentionPolicy: Retain
    Metadata: # Manage esbuild properties
      BuildMethod: nodejs14.x

  onSignup:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: lambdas/onSignup/
      Handler: app.lambdaHandler
      Runtime: nodejs14.x
      Timeout: 10
      Architectures:
        - x86_64
      Layers:
        - !Ref SQLLayer
    Metadata: # Manage esbuild properties
      BuildMethod: esbuild
      BuildProperties:
        Minify: false # set True on release
        Target: "es2020"
        Sourcemap: true
        EntryPoints:
          - app.ts
Problem
I am now trying to import this layer in the Lambda function's app.ts like this, but it can't find the module:
var executeSqlStatement = require("/opt/SQLLayer/")
I have also tried countless other paths, but none of them work; almost every resource I found uses a different path. The function and layer are uploaded correctly when I deploy, I'm just not sure how they are put together at runtime.
Other mentions
I am not sure if this is a problem but when building I get this output:
Building layer 'SQLLayer'
package.json file not found. Continuing the build without dependencies. <=== COULD THIS BE A PROBLEM?
Running NodejsNpmBuilder:CopySource
Building codeuri: C:\coding\apps\iota\backend\lambdas\onSignup runtime: nodejs14.x metadata: {'BuildMethod': 'esbuild', 'BuildProperties': {'Minify': False, 'Target': 'es2020', 'Sourcemap': True, 'EntryPoints': ['app.ts']}} architecture: x86_64 functions: ['onSignup']
Running NodejsNpmEsbuildBuilder:CopySource
Running NodejsNpmEsbuildBuilder:NpmInstall
Running NodejsNpmEsbuildBuilder:EsbuildBundle
Build Succeeded
It says package.json was not found, but the build continues and succeeds, and the layer can be found in the build folder, so I don't think this is a problem.
The other odd thing is that in the layer's build folder there are two nodejs folders nested inside each other. That might be normal, but I wanted to mention it for completeness.
I hope you have enough information to help me.
Thanks!
It's because, in fact, that path doesn't really exist.
You can do the following.
Do the normal import:
var executeSqlStatement = require("/opt/SQLLayer/")
and in tsconfig.json change the settings to the following (the path mapping has to live under compilerOptions and map to an array):
{
  "compilerOptions": {
    "baseUrl": "./",
    "paths": {
      "/opt/SQLLayer/": ["<your local path>"]
    }
  }
}
You did not specify which version of SAM CLI you are using; the information below is based on SAM CLI version 1.55.0.
The deployed layer does not follow your local hierarchy, nor does it include the name of the layer itself. The warning about the missing package.json tells you the code is not being picked up from where you expect.
You need to remove the nodejs folder so your code looks like this:
├── layers/
│   └── SQLLayer/
│       ├── package.json
│       └── index.ts
This will create a layer with the following hierarchy:
├── nodejs/
│   ├── package.json
│   └── index.ts
In your actual Lambda function this will look like this:
├── /opt/
│   └── nodejs/
│       ├── package.json
│       └── index.ts
This means you can import the module in your Lambda function with require('/opt/nodejs').
I hope this helps you understand how to structure your Lambda layer source folders when using the SAM CLI.
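One more hedged note, based on SAM CLI's documented esbuild build properties rather than on the question itself: since the function above is bundled with esbuild, the /opt/nodejs import usually also has to be listed as external so esbuild leaves the require to be resolved at runtime. A sketch of the function's Metadata block:
Metadata: # Manage esbuild properties
  BuildMethod: esbuild
  BuildProperties:
    Minify: false
    Target: "es2020"
    Sourcemap: true
    External:
      - "/opt/nodejs"   # assumption: keep the layer import out of the bundle
    EntryPoints:
      - app.ts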

Managing multiple terraform statefiles with gitlab

Just as an example, my git repository would look like so:
└── git_repo_example
    ├── modules
    │   ├── module_1
    │   │   ├── main.tf
    │   │   └── var.tf
    │   └── module_2
    │       ├── main.tf
    │       └── var.tf
    └── projects
        ├── project_1
        │   ├── main.tf
        │   ├── terraform.tfstate
        │   └── vars.tf
        └── project_2
            ├── main.tf
            ├── terraform.tfstate
            └── vars.tf

7 directories, 10 files
My team wants to make our terraform state files gitlab-managed, so that the statefiles would be locked in case multiple people want to run or modify a single project at the same time.
All of the examples I can find for managing terraform via gitlab only seem to assume 1 tfstate file and project, but my repository has multiple. Breaking this up into multiple repositories would be difficult to manage since they all reference the same modules, and it seems that placing everything into one folder is against terraform best-practices.
How would one best go about managing one repository with multiple terraform projects / statefiles via gitlab?
I have a similar-ish directory structure that works well with GitLab managed state per project directory.
I'd recommend replacing local TF development with GitLab CI/CD pipelines, using the provided GitLab container image as it supports the GitLab backend by default.
I use environments (one per project directory) to manage the pipeline (CI/CD variables are managed per environment). The TF state file is named according to the TF_ADDRESS variable:
image: registry.gitlab.com/gitlab-org/terraform-images/stable:latest

variables:
  TF_ROOT: ${CI_PROJECT_DIR}/${ENVIRONMENT}
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${ENVIRONMENT}
Here a build job is defined to create the TF plan; it runs only when the development directory is modified on the default branch. An identical job is defined for the production directory. Each environment references a unique TF state file managed by GitLab:
.plan:
  stage: build
  environment:
    name: $ENVIRONMENT
  script:
    - gitlab-terraform plan
    - gitlab-terraform plan-json
  cache:
    policy: pull # don't update the cache
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json

Development Build Terraform Plan:
  extends: .plan
  variables:
    ENVIRONMENT: development
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - "*"
        - "development/**/*"

Production Build Terraform Plan:
  extends: .plan
  variables:
    ENVIRONMENT: production
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - "*"
        - "production/**/*"
I have exactly the same kind of Terraform scripts repository, with a "run" script at the top which, for each application, does:
cd modules/modulex
terraform init -backend-config=backend.tf -reconfigure
With backend.tf (here for Azure):
container_name      = "tfbackend"
key                 = "aStorageAccount/aFileShare/path/to/modulex.tfstate"
resource_group_name = "xxx"
That does create a modules/modulex/.terraform/terraform.tfstate.
However, this file is local and not versioned nor "locked".

How to deploy to appengine flexible using bazel and google cloud deploy using non containerized artifact?

I have a project structure like:
.
├── app
│   ├── BUILD
│   ├── entry.py
│   ├── forms.py
│   ├── __init__.py
│   ├── jinja_custom_filter.py
│   ├── models.py
│   ├── __pycache__
│   ├── static
│   ├── templates
│   ├── utils.py
│   └── views.py
├── app.db
├── app.yaml
├── BUILD
├── cloudbuild.yaml
├── config.py
├── __init__.py
├── LICENSE
├── manage.py
├── requirements.txt
├── run.py
└── WORKSPACE
4 directories, 20 files
The project uses Flask and SQLAlchemy (see further below).
How does one deploy to App Engine using Google Cloud Build with the non-containerized deployment option, i.e. just an artifact?
Here is my cloudbuild.yaml:
# In this directory, run the following command to build this builder.
# $ gcloud builds submit
steps:
# Fetch source.
#- name: "docker.io/library/python:3.6.8"
#  args: ['pip', 'install', '-t', '/workspace/lib', '-r', '/workspace/requirements.txt']
#- name: 'gcr.io/cloud-builders/git'
#  args: ['clone', '--single-branch', '--branch=develop', 'https://github.com/codecakes/<myproject>_gae.git', '<myproject>_gae']
# Build the Bazel builder and output the version we built with.
#- name: 'gcr.io/cloud-builders/docker'
#  args: ['build', '--tag=gcr.io/$PROJECT_ID/deploy:latest', '.']
# Build the targets.
#- name: 'gcr.io/$PROJECT_ID/bazel'
- name: 'gcr.io/cloud-builders/bazel'
  args: ['build', '--spawn_strategy=standalone', '//app:entry', '--copt', '--force_python=PY3', '--color=yes', '--curses=yes', '--jobs=10', '--loading_phase_threads=HOST_CPUS', '--aspects=@bazel_tools//tools/python:srcs_version.bzl%find_requirements', '--output_groups=pyversioninfo', '--verbose_failures']
  dir: '.'
- name: 'gcr.io/cloud-builders/bazel'
#  args: ['run', '--spawn_strategy=standalone', '//:run', '--copt', '--verbose_failures=true', '--show_timestamps=true', '--python_version=PY3', '--build_python_zip', '--sandbox_debug', '--color=yes', '--curses=yes', '--jobs=10', '--loading_phase_threads=HOST_CPUS', '--aspects=@bazel_tools//tools/python:srcs_version.bzl%find_requirements', '--output_groups=pyversioninfo']
  args: ['build', '--spawn_strategy=standalone', ':run', '--copt', '--aspects=@bazel_tools//tools/python:srcs_version.bzl%find_requirements', '--verbose_failures=true', '--show_timestamps=true', '--python_version=PY3', '--build_python_zip', '--sandbox_debug', '--color=yes', '--curses=yes', '--jobs=10', '--loading_phase_threads=HOST_CPUS']
  dir: '.'

artifacts:
  objects:
    location: 'gs://<myproject>/'
    paths: ['cloudbuild.yaml']
#images: ['gcr.io/$PROJECT_ID/deploy:latest']
I then do
sudo gcloud builds submit --config cloudbuild.yaml ./
which gives me
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
d4dfd7dd-0f77-49d1-ac4c-4e3a1c84e3ea 2019-05-04T07:52:13+00:00 1M32S gs://<myproject>_cloudbuild/source/1556956326.46-3f4abd9a558440d8ba669b3d55248de6.tgz - SUCCESS
Then I do
sudo gcloud app deploy app.yaml --user-output-enabled --account=<myemail>
which gives me:
Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [3] Docker image asia.gcr.io/<myproject>/appengine/default.20190502t044929:latest was either not found, or is not in Docker V2 format. Please visit https://cloud.google.com/container-registry/docs/ui
My workspace file is:
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("#bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
# Use the git repository
git_repository(
name = "bazel_for_gcloud_python",
remote = "https://github.com/weisi/bazel_for_gcloud_python.git",
branch="master",
)
# https://github.com/bazelbuild/rules_python
git_repository(
name = "io_bazel_rules_python",
remote = "https://github.com/bazelbuild/rules_python.git",
# NOT VALID! Replace this with a Git commit SHA.
branch= "master",
)
# Only needed for PIP support:
load("#io_bazel_rules_python//python:pip.bzl", "pip_repositories")
pip_repositories()
load("#io_bazel_rules_python//python:pip.bzl", "pip_import")
# This rule translates the specified requirements.txt into
# #my_deps//:requirements.bzl, which itself exposes a pip_install method.
pip_import(
name = "my_deps",
requirements = "//:requirements.txt",
)
# Load the pip_install symbol for my_deps, and create the dependencies'
# repositories.
load("#my_deps//:requirements.bzl", "pip_install")
pip_install()
And app.yaml:
runtime: custom
#vm: true
env: flex
entrypoint: gunicorn -b :$PORT run:app
runtime_config:
# You can also specify 2 for Python 2.7
python_version: 3
handlers:
- url: /$
secure: always
script: auto
- url: /.*$
static_dir: app/static
secure: always
# This sample incurs costs to run on the App Engine flexible environment.
# The settings below are to reduce costs during testing and are not appropriate
# for production use. For more information, see:
# https://cloud.google.com/appengine/docs/flexible/python/configuring-your-app-with-app-yaml
manual_scaling:
instances: 1
resources:
cpu: 1
memory_gb: 0.5
disk_size_gb: 10
packages
alembic==0.8.4
Babel==2.2.0
blinker==1.4
coverage==4.0.3
decorator==4.0.6
defusedxml==0.4.1
Flask==0.10.1
Flask-Babel==0.9
Flask-Login==0.3.2
Flask-Mail==0.9.1
Flask-Migrate==1.7.0
Flask-OpenID==1.2.5
flask-paginate==0.4.1
Flask-Script==2.0.5
Flask-SQLAlchemy==2.1
Flask-WhooshAlchemy==0.56
Flask-WTF==0.12
flipflop==1.0
future==0.15.2
guess-language==0.2
gunicorn==19.9.0
itsdangerous==0.24
Jinja2==2.8
Mako==1.0.3
MarkupSafe==0.23
pbr==1.8.1
pefile==2016.3.28
PyInstaller==3.2
python-editor==0.5
python3-openid==3.0.9
pytz==2015.7
six==1.10.0
speaklater==1.3
SQLAlchemy==1.0.11
sqlalchemy-migrate==0.10.0
sqlparse==0.1.18
Tempita==0.5.2
Werkzeug==0.11.3
Whoosh==2.7.0
WTForms==2.1
As I understand it after talking to Google Cloud developers, you have to use a Dockerfile, because Cloud Build builds with containers in order to push your image to App Engine.
See my own workaround.
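For reference, a hedged sketch of what a container-based cloudbuild.yaml for this flow could look like (the image name, the presence of a Dockerfile at the repository root, and the use of gcloud app deploy --image-url are assumptions, not taken from the answer above):
steps:
# Build the application image from a Dockerfile at the repository root (assumed to exist).
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '--tag=gcr.io/$PROJECT_ID/myproject-app:latest', '.']
# Push the image so App Engine can pull it.
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/myproject-app:latest']
# Deploy the pushed image to the App Engine flexible environment (runtime: custom).
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/$PROJECT_ID/myproject-app:latest']
Note that the Cloud Build service account needs App Engine deployment permissions for the last step.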
