I have a React/Node app which I am trying to host on AWS Amplify. On my first try, the app deployed, but I saw that some pages/buttons were not working because of Node.js. Then I did some searching and found that I need to modify the amplify.yml file to:
version: 1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
I am getting build issues (build timeout) with the above build settings.
Make sure you have created a user with AdministratorAccess-Amplify privileges in IAM.
Then it is necessary to replace line 6 of the Hands-On amplify.yml with
npm install -g @aws-amplify/cli
The app should now display correctly, completing the Hands-On.
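For reference, a minimal amplify.yml with that change applied might look like the sketch below (an assumption based on the standard Hands-On file, with the CLI install added as the first backend build command; adjust to your project):

```yaml
version: 1
backend:
  phases:
    build:
      commands:
        # Install the Amplify CLI inside the build container before pushing the backend
        - npm install -g @aws-amplify/cli
        - amplifyPush --simple
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
```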
I've been trying to set up a CI/CD pipeline on my repo which runs common tasks like linting, tests, etc. I've successfully set up a GitLab Runner which is working fine. The only part I'm stuck on is the "deploy" part.
When I run my build, how do I actually get the files into my /var/www/xyz folder?
I get that everything is running in a Docker container and I can't just magically copy-paste my files there, but I don't get how I get the files into my actual server directory. I've been searching for days for good docs/explanations, so as always, Stack Overflow is my last resort for help.
I'm running on an Ubuntu 20.04 LTS VPS with a SaaS GitLab repository, if that info is needed. Here is my .gitlab-ci.yml:
image: timbru31/node-alpine-git

before_script:
  - git fetch origin

stages:
  - setup
  - test
  - build
  - deploy

# All Setup Jobs
Install Dependencies:
  stage: setup
  interruptible: true
  script:
    - npm install
    - npm i -g @nrwl/cli
  artifacts:
    paths:
      - node_modules/

# All Test Jobs
Lint:
  stage: test
  script: npx nx run nx-fun:lint

Tests:
  stage: test
  script: npx nx run nx-fun:test

Deploy:
  stage: build
  script:
    - ls /var/www/
    - npx nx build --prod --output-path=dist/
    - cp -r dist/* /var/www/html/neostax/
  only:
    refs:
      - master
Normally I would SSH into my server, run the build, and then copy the build to the corresponding web directory.
TL;DR - How do I get files from a GitLab Runner to an actual directory on the server?
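One common pattern is to copy the build output from the runner's container to the server over SSH. A sketch, assuming the runner can reach the VPS and that an SSH private key is stored as a masked CI/CD variable (the variable name, user, and host below are placeholders):

```yaml
Deploy:
  stage: deploy
  image: timbru31/node-alpine-git
  only:
    refs:
      - master
  before_script:
    # apk because the image is Alpine-based; SSH_PRIVATE_KEY is a placeholder CI/CD variable
    - apk add --no-cache openssh-client rsync
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan your-server.example.com >> ~/.ssh/known_hosts
  script:
    - npx nx build --prod --output-path=dist/
    # rsync the build artifacts from the container to the web directory on the VPS
    - rsync -avz --delete dist/ deploy@your-server.example.com:/var/www/html/neostax/
```

Alternatively, a shell-executor runner installed directly on the VPS runs jobs on the host itself, so a plain cp into /var/www works without SSH.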
I followed a tutorial on YouTube (Link) to create a CI/CD pipeline for deployment on Netlify.
The builds are successful, but the deployed site shows a 404 page.
Link to deployed site
I'm posting the YAML below:
stages:
  - lint
  - build
  - deploy

lint project:
  stage: lint
  image: node:15
  script:
    - npm install
    - npm run lint

build project:
  stage: build
  image: node:15
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - .next

netlify:
  stage: deploy
  image: node:15
  script:
    - npm set prefix ~/.npm; path+=$HOME/.npm/bin
    - path+=./node_modules/.bin
    - npm install -g netlify-cli
    - netlify deploy --dir=.next --prod
I'm trying to create a Next.js boilerplate to be used in my future projects, and through a CI/CD pipeline I'm looking for automated deployment. Any help or links to resources would be appreciated.
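For what it's worth, a 404 after publishing `.next` often indicates that Netlify was handed Next.js's internal build output rather than a static publish directory. A hedged sketch of a deploy job that exports a static site first, assuming the pages are fully static (so `next export` applies) and that NETLIFY_AUTH_TOKEN and NETLIFY_SITE_ID are set as CI/CD variables:

```yaml
netlify:
  stage: deploy
  image: node:15
  script:
    - npm install
    - npm run build
    # next export writes a static copy of the site to out/ (static pages only)
    - npx next export
    # Publish the static directory instead of the internal .next build folder
    - npx netlify-cli deploy --dir=out --prod
```

For server-rendered pages, Netlify's Next.js runtime (rather than a plain directory deploy) would be the alternative.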
I was getting the below error while deploying a Lambda on AWS using a Bitbucket pipeline:
Error: Could not set up basepath mapping. Try running sls create_domain first.
Error: 'staging-api.simple.touchsuite.com' could not be found in API Gateway.
ConfigError: Missing region in config
    at getDomain.then.then.catch (/opt/atlassian/pipelines/agent/build/node_modules/serverless-domain-manager/index.js:181:15)
    at
    at runMicrotasksCallback (internal/process/next_tick.js:121:5)
    at _combinedTickCallback (internal/process/next_tick.js:131:7)
    at process._tickDomainCallback (internal/process/next_tick.js:218:9)
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information
Operating System: linux
Node Version: 8.10.0
Framework Version: 1.61.3
Plugin Version: 3.2.7
SDK Version: 2.3.0
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
So I updated serverless-domain-manager to the newest version, 3.3.1.
I tried to deploy the Lambda after updating serverless-domain-manager, and now I am getting the below error:
Serverless Error
Serverless plugin "serverless-domain-manager" not found. Make sure it's installed and listed in the "plugins" section of your serverless config file.
serverless.yml snippet
plugins:
  - serverless-plugin-warmup
  - serverless-offline
  - serverless-log-forwarding
  - serverless-domain-manager

custom:
  warmup:
    schedule: 'cron(0/10 12-23 ? * MON-FRI *)'
    prewarm: true
  headers:
    - Content-Type
    - X-Amz-Date
    - Authorization
    - X-Api-Key
    - X-Amz-Security-Token
    - TS-Staging
    - x-tss-correlation-id
    - x-tss-application-id
  stage: ${opt:stage, self:provider.stage}
  domains:
    prod: api.simple.touchsuite.com
    staging: staging-api.simple.touchsuite.com
    dev: dev-api.simple.touchsuite.com
  customDomain:
    basePath: 'svc'
    domainName: ${self:custom.domains.${self:custom.stage}}
    stage: ${self:custom.stage}
bitbucket-pipeline.yml snippet
image: node:8.10.0

pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          name: Run tests
          script:
            - npm install --global copy
            - npm install
            - NODE_ENV=test npm test
      - step:
          caches:
            - node
          name: Deploy to Staging
          deployment: staging  # set to test, staging or production
          script:
            - npm install --global copy
            - npm run deploy:staging
            - npm run deploy:integrations:staging
            - node -e 'require("./scripts/bitbucket.js").triggerPipeline()'
I need some insight: what am I missing that is causing the error?
I have found that with Bitbucket I needed to add an npm install command to make sure that my modules and the plugins were all installed before trying to run them. This may be what is missing in your case. You can also turn on caching for the resulting node_modules folder so that it doesn't have to download all the modules every time you deploy.
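Concretely, the staging step might be adjusted along these lines (a sketch only; the deploy scripts are the ones already defined in your package.json):

```yaml
- step:
    caches:
      - node  # reuse node_modules between pipeline runs
    name: Deploy to Staging
    deployment: staging
    script:
      # Install dependencies (including serverless-domain-manager) in this step's own container
      - npm install
      - npm run deploy:staging
      - npm run deploy:integrations:staging
```

Each Bitbucket step runs in a fresh container, so modules installed in the test step are not present in the deploy step unless reinstalled or cached.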
I'm trying to set up GitLab CI for a monorepo.
For the sake of argument, let's say I want to process 2 JavaScript packages:
app
cli
I have defined 4 stages:
install
test
build
deploy
Because I'm reusing the files from previous steps, I use the GitLab cache.
My configuration looks like this:
stages:
  - install
  - test
  - build
  - deploy

install_app:
  stage: install
  image: node:8.9
  cache:
    policy: push
    paths:
      - app/node_modules
  script:
    - cd app
    - npm install

install_cli:
  stage: install
  image: node:8.9
  cache:
    policy: push
    paths:
      - cli/node_modules
  script:
    - cd cli
    - npm install

test_app:
  image: node:8.9
  cache:
    policy: pull
    paths:
      - app/node_modules
  script:
    - cd app
    - npm test

test_cli:
  image: node:8.9
  cache:
    policy: pull
    paths:
      - cli/node_modules
  script:
    - cd cli
    - npm test

build_app:
  stage: build
  image: node:8.9
  cache:
    paths:
      - app/node_modules
      - app/build
  script:
    - cd app
    - npm run build

deploy_app:
  stage: deploy
  image: registry.gitlab.com/my/gcloud/image
  only:
    - master
  environment:
    name: staging
    url: https://example.com
  cache:
    policy: pull
    paths:
      - app/build
  script:
    - gcloud app deploy app/build/app.yaml
      --verbosity info
      --version master
      --promote
      --stop-previous-version
      --quiet
      --project "$GOOGLE_CLOUD_PROJECT"
The problem is in the test stage. Most of the time the test_app job fails because the app/node_modules directory is missing. Sometimes a retry works, but mostly it doesn't.
Also, I would like to use two caches for the build_app job: I want to pull app/node_modules and push app/build. I can't find a way to accomplish this, which makes me feel like I don't fully understand how the cache works.
Why are my cache files gone? Do I misunderstand how the GitLab CI cache works?
The cache is provided on a best-effort basis, so don't expect it to always be present.
If you have hard dependencies between jobs, use artifacts and dependencies instead.
Anyway, if it is just node_modules, I suggest you run npm install in every job instead of passing the directory along; you will not save much time with artifacts.
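For example, switching the app jobs from cache to artifacts with explicit dependencies could look like this sketch (based on the jobs above; the expire_in value is an arbitrary choice):

```yaml
install_app:
  stage: install
  image: node:8.9
  script:
    - cd app
    - npm install
  artifacts:
    # Artifacts are uploaded to GitLab and guaranteed available to later jobs,
    # unlike the best-effort runner-local cache
    paths:
      - app/node_modules
    expire_in: 1 hour

test_app:
  stage: test
  image: node:8.9
  dependencies:
    - install_app  # fetch only this job's artifacts
  script:
    - cd app
    - npm test
```

The same pattern applies to build_app: consume app/node_modules via dependencies and publish app/build as its own artifact, which also sidesteps the pull/push cache limitation.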
I have a problem with the scss-lint test in my Node.js project.
When the tests reach scss-lint, they give an error.
How can I make sure the tests do not fail when the lint itself passes?
Here is my gitlab-ci.yml:
image: node:wheezy

cache:
  paths:
    - node_modules/

stages:
  - build
  - test

gem_lint:
  image: ruby:latest
  stage: build
  script:
    - gem install scss_lint
  artifacts:
    paths:
      - node_modules/
  only:
    - dev
  except:
    - master

install_dependencies:
  stage: build
  script:
    - npm install
  artifacts:
    paths:
      - node_modules/
  only:
    - dev
  except:
    - master

scss-lint:
  stage: test
  script:
    - npm run lint:scss-lint
  artifacts:
    paths:
      - node_modules/
  only:
    - dev
  except:
    - master
You are doing it wrong.
Each job you define (gem_lint, install_dependencies, and scss-lint) runs in its own context.
So your problem here is that the last job doesn't find the scss_lint gem you installed, because the context has switched.
You should execute all the scripts in the same job, in the same context:
script:
  - gem install scss_lint
  - npm install
  - npm run lint:scss-lint
Of course, for this you need a Docker image that has both npm and gem installed (maybe you can find one on Docker Hub), or you can choose one (for example ruby:latest) and add, as the first script line, one that installs npm:
- curl -sL https://deb.nodesource.com/setup_6.x | bash -
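Put together, the single job could look roughly like this sketch (assuming ruby:latest, which is Debian-based, so the NodeSource setup script and apt-get apply):

```yaml
scss-lint:
  image: ruby:latest
  stage: test
  script:
    # Install Node.js on top of the Ruby image (NodeSource setup for Debian)
    - curl -sL https://deb.nodesource.com/setup_6.x | bash -
    - apt-get install -y nodejs
    # Everything runs in one job, so the gem and npm packages share a context
    - gem install scss_lint
    - npm install
    - npm run lint:scss-lint
  only:
    - dev
  except:
    - master
```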