Problem with fs-extra while deploying python using serverless - node.js

I'm not much of an expert with npm and Bitbucket Pipelines, but I want to create a pipeline on Bitbucket to deploy my Python (Flask) project to AWS Lambda using Serverless. It deploys fine locally, but when I run it through the Bitbucket pipeline, this happens:
Error: Cannot find module '/opt/atlassian/pipelines/agent/build/node_modules/fs-extra/lib/index.js'. Please verify that the package.json has a valid "main" entry
Here is my code:
bitbucket-pipelines.yml
image: node:14.13.1-alpine3.10

pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          script:
            - apk add python3
            - npm install
            - npm install -g serverless
            - serverless config credentials --stage dev --provider aws --key ${AWS_DEV_LAMBDA_KEY} --secret ${AWS_DEV_LAMBDA_SECRET}
            - serverless deploy --stage dev
serverless.yml
service: serverless-flask

plugins:
  - serverless-python-requirements
  - serverless-wsgi

custom:
  wsgi:
    app: app.app
    packRequirements: false
  pythonRequirements:
    dockerizePip: non-linux

provider:
  name: aws
  runtime: python3.8
  stage: dev
  region: us-west-2

functions:
  app:
    handler: wsgi.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
  alert:
    handler: alerts.run
    events:
      - schedule: rate(1 day)

package:
  exclude:
    - .venv/**
    - venv/**
    - node_modules/**
    - bitbucket-pipelines.yml
How can I fix this?

What helped me in the same situation was:
- delete the node_modules folder
- run npm install inside the service folder
- run serverless deploy
The same reset can be scripted in the pipeline itself; see the sketch below.
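A minimal sketch of those steps inside the Bitbucket step script, assuming a stale node cache is the culprit (npm ci needs a package-lock.json; fall back to npm install if you don't have one):

script:
  - rm -rf node_modules           # discard possibly stale cached modules
  - npm ci                        # clean install from package-lock.json
  - npm install -g serverless
  - serverless deploy --stage dev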

I had the same issue and resolved it by (re)installing fs-extra:
npm install fs-extra
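If reinstalling doesn't help, a quick diagnostic (my addition, not part of either answer) is to check whether the module resolves at all from the build directory before deploying:

node -e "console.log(require.resolve('fs-extra'))"   # prints the resolved path, or throws if the module is broken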

Related

How to run Node.js code in AWS Amplify

I have a React/Node app that I am trying to host on AWS Amplify. On the first try my app deployed, but I saw that some pages/buttons were not working because of the Node.js part. Then I did some searching and saw that I needed to modify the amplify.yml file to:
version: 1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
I'm getting build issues (build timeout) with the above build settings.
Make sure you have created a user with AdministratorAccess-Amplify privileges in IAM.
Then replace line 6 of the hands-on amplify.yml with:
npm install -g @aws-amplify/cli
The app should now build and deploy correctly, completing the hands-on.
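For reference, the backend build phase would then look something like this (a sketch based on the instruction above; the rest of the file stays unchanged):

backend:
  phases:
    build:
      commands:
        - npm install -g @aws-amplify/cli   # replaces the comment line
        - amplifyPush --simple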

Building and deploying depending on frontend or backend changes in GitLab

I'm starting to use the GitLab CI/CD pipeline but have some doubts about the build process. Suppose I have a single project (repo) in which the frontend and backend are separated by the project structure, e.g.:
CarProject
  .gitlab-ci.yml
  |__FrontEndCarProject
  |__BackendCarProject
Let's say that every time I change something in the frontend I need to build it and deploy it to S3, but there is no need to build the backend (a Java application) and deploy it to Elastic Beanstalk (and vice versa when I change the backend). Is there a way to detect where the changes were made (FrontEndCarProject/BackendCarProject) in GitLab and have .gitlab-ci.yml run a script depending on whether I have to deploy to S3 or Elastic Beanstalk?
Note: another approach is to manually change the yml file depending on where I want to deploy, but is there a way to autodetect this and automate it?
.gitlab-ci.yml
Just to get the idea across, here's an example that runs linearly. How can I conditionally build/deploy depending on frontend or backend changes? Should I keep them in different repos for simplicity? Is that good practice?
variables:
  ARTIFACT_NAME: cars-api-v$CI_PIPELINE_IID.jar
  APP_NAME: cars-api

stages:
  - build
  - deploy

# ONLY build when the frontend (FrontEndCarProject) is changed
build_front:
  stage: build
  image: node:latest
  script:
    - npm install
  artifacts:
    paths:
      - ./dist

# ONLY build when the backend (BackendCarProject) is changed
build_back:
  stage: build
  image: openjdk:12-alpine
  script:
    - ./gradlew build
  artifacts:
    paths:
      - ./build/libs/

# ONLY deploy when the frontend (FrontEndCarProject) is changed
deploy_s3:
  stage: deploy
  image:
    name: python:latest
  script:
    - aws configure set region us-east-1
    - aws s3 cp ./build/libs/cars-api.jar s3://$S3_BUCKET/cars-api.jar

# ONLY deploy when the backend (BackendCarProject) is changed
deploy_back_end:
  stage: deploy
  image:
    name: banst/awscli
  script:
    - aws configure set region us-east-1
    - aws s3 cp ./build/libs/$ARTIFACT_NAME s3://$S3_BUCKET/$ARTIFACT_NAME
    - aws elasticbeanstalk create-application-version --application-name $APP_NAME --version-label $CI_PIPELINE_IID --source-bundle S3Bucket=$S3_BUCKET,S3Key=$ARTIFACT_NAME
    - aws elasticbeanstalk update-environment --application-name $APP_NAME --environment-name "production" --version-label=$CI_PIPELINE_IID
If your frontend and backend can be built and deployed separately, then you can use rules:changes to check whether a change happened, and needs with optional: true (available since GitLab 13.10) to pick up the respective build artifacts only when they exist.
variables:
  ARTIFACT_NAME: cars-api-v$CI_PIPELINE_IID.jar
  APP_NAME: cars-api

stages:
  - build
  - deploy

# ONLY build when the frontend (FrontEndCarProject) is changed
build_front:
  stage: build
  image: node:latest
  script:
    - npm install
  rules:
    - changes:
        - FrontEndCarProject/**/*
  artifacts:
    paths:
      - ./dist

# ONLY build when the backend (BackendCarProject) is changed
build_back:
  stage: build
  image: openjdk:12-alpine
  script:
    - ./gradlew build
  rules:
    - changes:
        - BackendCarProject/**/*
  artifacts:
    paths:
      - ./build/libs/

# ONLY deploy when the frontend (FrontEndCarProject) is changed
deploy_s3:
  stage: deploy
  image:
    name: python:latest
  script:
    - aws configure set region us-east-1
    - aws s3 cp ./build/libs/cars-api.jar s3://$S3_BUCKET/cars-api.jar
  needs:
    - job: build_front
      artifacts: true
      optional: true

# ONLY deploy when the backend (BackendCarProject) is changed
deploy_back_end:
  stage: deploy
  image:
    name: banst/awscli
  script:
    - aws configure set region us-east-1
    - aws s3 cp ./build/libs/$ARTIFACT_NAME s3://$S3_BUCKET/$ARTIFACT_NAME
    - aws elasticbeanstalk create-application-version --application-name $APP_NAME --version-label $CI_PIPELINE_IID --source-bundle S3Bucket=$S3_BUCKET,S3Key=$ARTIFACT_NAME
    - aws elasticbeanstalk update-environment --application-name $APP_NAME --environment-name "production" --version-label=$CI_PIPELINE_IID
  needs:
    - job: build_back
      artifacts: true
      optional: true
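One caveat (my assumption about the intended behaviour, not tested): with optional: true the deploy jobs are still created even when their build job was skipped, so to really deploy only on matching changes you would add the same rules to the deploy jobs as well, e.g.:

deploy_s3:
  rules:
    - changes:
        - FrontEndCarProject/**/*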

Static file referenced by handler not found: build/index.html - Bitbucket Pipeline React App Engine

I have an issue with my Bitbucket CI/CD pipeline. The pipeline itself runs fine, but the application is broken when I try to access it. The pipeline deploys a React App Engine Node.js application. The problem comes when I access the site; this is the error I receive in Google Logging: "Static file referenced by handler not found: build/index.html".
If I deploy the application manually, I have no issues and the application works fine. The error only occurs when the deployment happens in the Bitbucket pipeline.
Here is the app.yaml
runtime: nodejs12

handlers:
  # Serve all static files with url ending with a file extension
  - url: /(.*\..+)$
    static_files: build/\1
    upload: build/(.*\..+)$
  # Catch all handler to index.html
  - url: /.*
    static_files: build/index.html
    upload: build/index.html
Here is the bitbucket-pipelines.yml
pipelines:
  branches:
    master:
      - step:
          name: NPM Install and Build
          image: node:14.15.1
          script:
            - npm install
            - unset CI
            - npm run build
      - step:
          name: Deploy to App Engine
          image: google/cloud-sdk
          script:
            - gcloud config set project $GCLOUD_PROJECT
            - 'echo "$GOOGLE_APPLICATION_CREDENTIALS" > google_application_credentials.json'
            - gcloud auth activate-service-account --key-file google_application_credentials.json
            - gcloud app deploy app.yaml
Any help would be greatly appreciated. Thank you so much.
Bitbucket Pipelines does not share files between steps by default. You need to declare an artifacts config in the build step so that its output can be referenced in the deploy step. Something like this:
pipelines:
  branches:
    master:
      - step:
          name: NPM Install and Build
          image: node:14.15.1
          script:
            - npm install
            - unset CI
            - npm run build
          artifacts: # Declare artifacts here for later steps
            - build/**
      - step:
          name: Deploy to App Engine
          image: google/cloud-sdk
          script:
            - gcloud config set project $GCLOUD_PROJECT
            - 'echo "$GOOGLE_APPLICATION_CREDENTIALS" > google_application_credentials.json'
            - gcloud auth activate-service-account --key-file google_application_credentials.json
            - gcloud app deploy app.yaml
See here for more details: https://support.atlassian.com/bitbucket-cloud/docs/use-artifacts-in-steps/
Note that I have not tested this.
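To confirm the artifact actually reaches the second step, you could list the build directory before deploying (a debugging aid of my own, remove it once things work):

- step:
    name: Deploy to App Engine
    image: google/cloud-sdk
    script:
      - ls -la build   # should contain index.html if the artifact was passed along
      - gcloud app deploy app.yaml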

serverless-domain-manager cannot be found by serverless deployment

I was getting the below error while deploying the Lambda to AWS using a Bitbucket pipeline:
Error: Could not set up basepath mapping. Try running sls create_domain first.
Error: 'staging-api.simple.touchsuite.com' could not be found in API Gateway.
ConfigError: Missing region in config
at getDomain.then.then.catch (/opt/atlassian/pipelines/agent/build/node_modules/serverless-domain-manager/index.js:181:15)
at
at runMicrotasksCallback (internal/process/next_tick.js:121:5)
at _combinedTickCallback (internal/process/next_tick.js:131:7)
at process._tickDomainCallback (internal/process/next_tick.js:218:9)
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information
Operating System: linux
Node Version: 8.10.0
Framework Version: 1.61.3
Plugin Version: 3.2.7
SDK Version: 2.3.0
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
So, I updated serverless-domain-manager to the newest version, 3.3.1.
I tried to deploy the Lambda after updating serverless-domain-manager, and now I am getting the below error.
Serverless Error
Serverless plugin "serverless-domain-manager" not found. Make sure it's installed and listed in the "plugins" section of your serverless config file.
serverless.yml snippet
plugins:
  - serverless-plugin-warmup
  - serverless-offline
  - serverless-log-forwarding
  - serverless-domain-manager

custom:
  warmup:
    schedule: 'cron(0/10 12-23 ? * MON-FRI *)'
    prewarm: true
  headers:
    - Content-Type
    - X-Amz-Date
    - Authorization
    - X-Api-Key
    - X-Amz-Security-Token
    - TS-Staging
    - x-tss-correlation-id
    - x-tss-application-id
  stage: ${opt:stage, self:provider.stage}
  domains:
    prod: api.simple.touchsuite.com
    staging: staging-api.simple.touchsuite.com
    dev: dev-api.simple.touchsuite.com
  customDomain:
    basePath: 'svc'
    domainName: ${self:custom.domains.${self:custom.stage}}
    stage: ${self:custom.stage}
bitbucket-pipeline.yml snippet
image: node:8.10.0

pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          name: Run tests
          script:
            - npm install --global copy
            - npm install
            - NODE_ENV=test npm test
      - step:
          caches:
            - node
          name: Deploy to Staging
          deployment: staging # set to test, staging or production
          script:
            - npm install --global copy
            - npm run deploy:staging
            - npm run deploy:integrations:staging
            - node -e 'require("./scripts/bitbucket.js").triggerPipeline()'
I need some insight: what am I missing that is causing this error?
I have found that with Bitbucket I needed to add an npm install command to make sure my modules and plugins were all installed before trying to run them. This may be what is missing in your case. You can also turn on caching for the resulting node_modules folder so that it doesn't have to download every module each time you deploy.
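Applied to the pipeline above, that would mean adding npm install before the deploy scripts; a minimal sketch of just the Deploy to Staging step:

- step:
    caches:
      - node
    name: Deploy to Staging
    deployment: staging
    script:
      - npm install            # restores serverless-domain-manager and the other plugins
      - npm install --global copy
      - npm run deploy:staging
      - npm run deploy:integrations:staging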

Serverless not including my node_modules

I have a nodejs serverless project that has this structure:
-node_modules
-package.json
-serverless.yml
-funcitons
-medium
-mediumHandler.js
my serverless.yml:
service: googleAnalytic

provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: us-east-1

package:
  include:
    - node_modules/**

functions:
  mediumHandler:
    handler: functions/medium/mediumHandler.mediumHandler
    events:
      - schedule:
          name: MediumSourceData
          description: 'Captures data between set dates'
          rate: rate(2 minutes)
      - cloudwatchEvent:
          event:
            source:
              - "Lambda"
            detail-type:
              - ""
      - cloudwatchLog: '/aws/lambda/mediumHandler'
my sls info shows:
Service Information
service: googleAnalytic
stage: dev
region: us-east-1
stack: googleAnalytic-dev
api keys:
None
endpoints:
None
functions:
mediumHandler: googleAnalytic-dev-mediumHandler
When I run:
serverless invoke local -f mediumHandler
it works, and my script, which uses googleapis and aws-sdk, works too. But when I deploy, those packages are skipped and no error is shown.
When debugging Serverless's packaging process, use sls package (or sls deploy --noDeploy for old versions). You'll get a .serverless directory that you can inspect to see what's inside the deployment package.
From there, you can see whether node_modules is included and adjust your serverless.yml accordingly, without needing to deploy every time you make a change.
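For example, a quick way to check whether node_modules made it into the bundle (the zip is named after the service, so adjust googleAnalytic.zip to your own service name):

sls package
unzip -l .serverless/googleAnalytic.zip | grep node_modules | head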
Serverless will exclude development packages by default. Check your package.json and ensure your required packages are in the dependencies object, as devDependencies will be excluded.
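If a required package did land in devDependencies, reinstalling it with --save records it under dependencies instead (googleapis here is just the package from the question; verify package.json afterwards):

npm install googleapis --save   # moves the entry into "dependencies" so it gets bundled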
I was dumb enough to put this in my serverless.yml, which caused the same issue you're facing:
package:
  patterns:
    - '!node_modules/**'
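Dropping the leading ! (or removing the pattern entirely) lets the modules be packaged again; a sketch of the corrected block:

package:
  patterns:
    - 'node_modules/**'   # no '!', so node_modules is included in the bundle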
