Problem description
I have a problem with serverless-offline: I have tried installing it both locally and globally, and it doesn't work.
When I run the command I get this output:
sls offline
Serverless command "offline" not found. Run "serverless help" for a list of all available commands.
This is the configuration in my serverless file:
provider:
  name: aws
  runtime: nodejs14.x
  environment:
    NODE_ENV: ${env:NODE_ENV}
plugins:
  - serverless-plugin-typescript
  - serverless-offline
These are the commands I used to install it.
Globally on the machine:
npm i -g serverless-offline
In the project:
yarn add serverless-offline -D
Try it in your project with the npx prefix: npx sls offline or npx serverless offline.
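Plugins listed in serverless.yml are resolved from the project's own node_modules, which is why the project-local install plus npx works while the global install doesn't. One convenient way to wire this up (a sketch; the script name and version ranges are just placeholders, not from the question) is an npm script in package.json:

```json
{
  "devDependencies": {
    "serverless": "^2.0.0",
    "serverless-offline": "^8.0.0"
  },
  "scripts": {
    "offline": "serverless offline"
  }
}
```

npm run scripts put node_modules/.bin on the PATH, so `npm run offline` resolves the project-local binary the same way npx does.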
Related
I've created an action for a deployment on GitHub Actions. It all works with composer install and pulling the master branch. However, on my DigitalOcean droplet I get the error
bash: line 4: npm: command not found
If I ssh into my server I can use npm perfectly fine. It was installed via nvm and uses the latest version, but for some reason it's not accessible from the action.
My deployment script is:
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy Laravel APP
        uses: appleboy/ssh-action@v0.1.4
        with:
          host: ${{ secrets.SSH_HOST }}
          key: ${{ secrets.SSH_KEY }}
          username: ${{ secrets.SSH_USER }}
          script: |
            cd /var/www/admin
            git pull origin master
            composer install
            npm install
            npm run prod
I presume this has more to do with how nvm is set up, since I can use npm over ssh, and the action logs in over ssh as the same user, so I can't see what's different.
Any ideas how I can resolve this and allow GitHub Actions to use npm?
I didn't find a solution for the nvm issue; installing npm a different way resolved it.
I had the same issue, and finally found the solution.
I could solve it by adding the following lines before running the npm commands:
export NVM_DIR=~/.nvm
source ~/.nvm/nvm.sh
These commands let the shell find the Node installation managed by nvm.
Reference link is here.
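Applied to the workflow above, the script block would look like this (a sketch; the paths assume nvm's default install location in the deploy user's home directory):

```yaml
script: |
  export NVM_DIR=~/.nvm
  source ~/.nvm/nvm.sh
  cd /var/www/admin
  git pull origin master
  composer install
  npm install
  npm run prod
```

Sourcing nvm.sh explicitly is needed because non-interactive ssh sessions typically skip the shell startup files where nvm adds itself to the PATH.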
I'm new to Continuous Integration and recently set up my first project in CircleCI.
Unfortunately, it seems it's not completely working as expected.
I want to deploy my application to Firebase (Hosting and Functions).
Of course I added Environment Variables to the project in CircleCI.
But Firebase Functions doesn't have access to my environment variables, so it runs into errors.
In the functions folder I created a new Node.js application, including the dotenv package, and I read the variables with process.env.CIRCLECI_VARIABLE.
It would be great if someone could give me a hint about what's missing.
config.yml
version: 2.1
jobs:
  build:
    docker:
      - image: circleci/node:10
    steps:
      - checkout
      - run:
          name: Install packages
          command: yarn install
      - run:
          name: Build project
          command: yarn build
      - run:
          name: Install functions packages
          command: cd ./functions && yarn install
  deploy:
    docker:
      - image: circleci/node:10
    steps:
      - checkout
      - run:
          name: Install packages
          command: yarn install
      - run:
          name: Build project
          command: yarn build
      - run:
          name: Install functions packages
          command: cd ./functions && yarn install
      - run:
          name: Installing Firebase-Tools
          command: yarn add firebase-tools
      - run:
          name: Firebase Deploy
          command: ./node_modules/.bin/firebase deploy --token "$FIREBASE_TOKEN"
workflows:
  build_and_deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
I've found the solution.
I didn't know that I had to add the environment variables to the Google Cloud Function itself.
Now everything is working as expected.
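For reference, firebase-tools exposes a runtime config that can be set from the CLI and read back in the function via functions.config() (a sketch; the key name someservice.api_key is just an example, not from the question):

```shell
# store a value in the Functions runtime config, then redeploy
firebase functions:config:set someservice.api_key="$CIRCLECI_VARIABLE"
firebase deploy --only functions
```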
I have a Node.js application which I build with AWS CodeBuild and then deploy with SAM to AWS Lambda. I want to remove all devDependencies from the project after the build phase. In the build phase I run all the tests, which require the devDependencies, but I don't want them zipped with the other modules when the code is pushed to S3 as artifacts.
My buildspec.yml
version: 0.2
phases:
  install:
    commands:
      # Update libs
      - echo Executing the install phase.
    runtime-versions:
      nodejs: 10
  pre_build:
    commands:
      - npm install
  build:
    commands:
      - echo Executing the build phase.
      - npm run test
      - export BUCKET=alexa-v1
      - aws cloudformation package --template-file template.yml --s3-bucket $BUCKET --output-template-file outputtemplate.yml
  post_build:
    commands:
      - echo Build complete
artifacts:
  type: zip
  files:
    - template.yml
    - outputtemplate.yml
I am not sure whether adding npm prune --production in post_build is the right way to do it.
Try using the AWS SAM CLI instead of the regular AWS CLI. In particular, there is the sam package command that can be used to package your application, e.g.
sam package \
  --template-file template.yml \
  --s3-bucket $BUCKET \
  --output-template-file outputtemplate.yml
You could insert npm prune --production before the last line of the build phase (the aws cloudformation package command).
This ensures all devDependencies are removed before the code is deployed to Lambda.
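In the buildspec above, that would make the build phase look like this (a sketch of just that phase):

```yaml
build:
  commands:
    - echo Executing the build phase.
    - npm run test            # tests still see devDependencies here
    - npm prune --production  # drop devDependencies before packaging
    - export BUCKET=alexa-v1
    - aws cloudformation package --template-file template.yml --s3-bucket $BUCKET --output-template-file outputtemplate.yml
```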
I ran into the same issue. Also worth knowing: when NODE_ENV=production is set, npm install skips devDependencies entirely.
I'm facing the problem that I can't build my Angular app through the AWS Amplify Console:
"You are running version v8.12.0 of Node.js, which is not supported by Angular CLI 8.0+.
The official Node.js version that is supported is 10.9 or greater.
Please visit https://nodejs.org/en/ to find instructions on how to update Node.js."
Now I want to set the default Node version of the Docker container in the provision step to VERSION_NODE_10, which is already defined in the container.
# Framework Versions
ENV VERSION_NODE_8=8.12.0
ENV VERSION_NODE_6=6
ENV VERSION_NODE_10=10
ENV VERSION_NODE_DEFAULT=$VERSION_NODE_8   # <-- change this to $VERSION_NODE_10
ENV VERSION_RUBY_2_3=2.3.6
ENV VERSION_RUBY_2_4=2.4.3
ENV VERSION_RUBY_DEFAULT=$VERSION_RUBY_2_3
ENV VERSION_HUGO=0.51
ENV VERSION_YARN=1.13.0
amplify.yml:
version: 0.1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - node -v
        - npm run-script build
  artifacts:
    baseDirectory: dist/cr-client
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
Does anyone know how to change the default?
The accepted answer actually isn't the right one.
You should use a custom NodeJS build image to run your build with the proper Node version, rather than switching versions with nvm.
To do that:
Open the "Amplify Console"
Open "All Apps"
Choose the app whose NodeJS version you want to change
Open "Build Settings"
Scroll down to "Build image settings" box and hit "edit" button
At "Build Image" dropdown, choose the option "Build image"
A new input field will appear just below the dropdown; enter the Docker image name you want (the same form used in a Dockerfile), for example node:12.16.1
Save
Redeploy any build.
AWS Amplify uses nvm to manage the Node version. Try this:
version: 0.1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - nvm use $VERSION_NODE_10
        - npm ci
    build:
      commands:
        - nvm use $VERSION_NODE_10
        - node -v
        - npm run-script build
  artifacts:
    baseDirectory: dist/cr-client
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
A custom NodeJS build image is a lot of pain.
I usually do this instead:
App settings > Build settings > Build image settings, click Edit.
Then under Live package updates set Node.js version to the version you need.
The accepted answer did not work for me.
The only way to change the Node version in the provision step is to bring your own build image.
However, there is an easier way to accomplish this.
In my case I wanted the latest Node 10 release, and adding nvm install in the preBuild step worked.
frontend:
  phases:
    preBuild:
      commands:
        - nvm install 10
You can install and use any Node version in Amplify by installing it in the preBuild step; use nvm to switch the Node version.
preBuild:
  commands:
    - nvm install <node version>
Amplify Console output:
# Executing command: nvm install 10
2020-09-09T13:36:19.465Z [INFO]: Downloading and installing node v10.22.0...
2020-09-09T13:36:19.544Z [WARNING]: Downloading https://nodejs.org/dist/v10.22.0/node-v10.22.0-linux-x64.tar.gz...
2020-09-09T13:36:19.664Z [WARNING]: ########
2020-09-09T13:36:19.665Z [WARNING]: 11.9%
2020-09-09T13:36:19.765Z [WARNING]: #######
2020-09-09T13:36:19.765Z [WARNING]: ######################## 43.5%
2020-09-09T13:36:19.832Z [WARNING]: ################################
2020-09-09T13:36:19.832Z [WARNING]: ######################################## 100.0%
2020-09-09T13:36:19.844Z [WARNING]: Computing checksum with sha256sum
2020-09-09T13:36:19.934Z [WARNING]: Checksums matched!
2020-09-09T13:36:20.842Z [INFO]: Now using node v10.22.0 (npm v6.14.6)
Following on @richard's solution, you can put a .nvmrc file ($ node --version > .nvmrc) in the root of your repo with the specific Node version you used to build your project, and run nvm install instead of nvm use $VERSION_NODE_10.
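Combined, the preBuild section would look something like this (a sketch; nvm install with no argument reads the version pinned in .nvmrc):

```yaml
frontend:
  phases:
    preBuild:
      commands:
        - nvm install   # picks up the version from .nvmrc
        - npm ci
```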
Update as of 4th Dec 2022:
What worked for me was using a custom build of the NodeJS Docker image from Docker Hub.
Here's what you would need to do:
Go to AWS Amplify
Go to "Build settings"
Scroll down to "Build image settings"
Click on "Edit" button
Under "Build image" click on the dropdown button
Select "Build Image" (by default Linux:2 is selected, at least for me)
In the text field type, for example, "node:18.12.1"
Go back to the latest deploy and click the "Redeploy this version" button
Roll a J and smoke it, everything should be green now
That way you can use whatever NodeJS build you need; at least NodeJS 18 worked for me, and I didn't need another.
During the build you can see in the Provision tab that it actually pulls the custom image from Docker Hub.
I tried two of the answers above and they did not work for me. The approach that did work was shared by the user "dncrews" on GitHub; check it out.
February 2023
To do so, open amplify/backend/function/function-name/function-name-cloudformation-template.json and set the Runtime property in the LambdaFunction resource to nodejs18.x
https://docs.amplify.aws/cli/function/configure-options/#updating-the-runtime
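An abbreviated sketch of the relevant part of that template (only the Runtime property shown; the resource name follows the linked Amplify docs, and the surrounding properties are omitted):

```json
{
  "Resources": {
    "LambdaFunction": {
      "Type": "AWS::Lambda::Function",
      "Properties": {
        "Runtime": "nodejs18.x"
      }
    }
  }
}
```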
I am trying to deploy a Node.js React-based isomorphic application using a Dockerfile linked up to Elastic Beanstalk.
When I run my Docker build locally it succeeds, although I've noticed the npm install step takes a fair amount of time to complete.
When I try to deploy the application using the eb deploy command, it pretty much crashes the Amazon service, or I get an error like this:
ERROR: Timed out while waiting for command to Complete
My guess is that this is down to my node_modules folder being 300MB. I have also tried adding an artifact declaration to the config.yml file and deploying that way, but I get the same error.
Is there a best-practice way of deploying a Node application to AWS Elastic Beanstalk, or is it better to manually set up an EC2 instance and rely on CodeCommit git hooks?
My Dockerfile is below:
FROM node:argon
ADD package.json /tmp/package.json
RUN npm config set registry https://registry.npmjs.org/
RUN npm set progress=false
RUN cd /tmp && npm install --silent
RUN mkdir -p /usr/src/app && cp -a /tmp/node_modules /usr/src/app
WORKDIR /usr/src/app
ADD . /usr/src/app
EXPOSE 8000
CMD npm run build && npm run start
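One thing worth checking with the Dockerfile above (an assumption about the repo layout: node_modules sits at the project root): `ADD . /usr/src/app` ships the local node_modules into the build context, which inflates every upload. A .dockerignore excluding it keeps the context small:

```
node_modules
.git
```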
...and this is my config.yml file:
branch-defaults:
  develop:
    environment: staging
  master:
    environment: production
global:
  application_name: website-2016
  default_ec2_keyname: key-pair
  default_platform: 64bit Amazon Linux 2015.09 v2.0.6 running Docker 1.7.1
  default_region: eu-west-1
  profile: eb-cli
  sc: git
You should change your platform to a more current one (I'm using Docker 1.9.1, and there might be newer versions).
I'm using an image from Docker Hub to deploy my apps into Beanstalk. I build the images on our CI servers and then run a deploy command that pulls them from Docker Hub. This can save you a lot of build errors (and build time) and is more in keeping with the Docker philosophy of immutable infrastructure.
300MB of node_modules is not small, but it should present no problem; we regularly deploy dependencies and code of this size.