Remove/resolve Travis CI weird messages when deploying to npm - node.js

When the package is being deployed to the npm registry, some odd additional messages appear in the Travis CI console:
Standard messages:
Deploying application
NPM API key format changed recently. If your deployment fails, check your API key in ~/.npmrc.
http://docs.travis-ci.com/user/deployment/npm/
~/.npmrc size: 48
+ my-package@1.0.0
Then weird messages follow:
Already up-to-date!
Not currently on any branch.
nothing to commit, working tree clean
Dropped refs/stash@{0} (bff3fdd...1c6d37a)
.travis.yml file:
dist: trusty
sudo: required
env:
  - CXX="g++-4.8"
addons:
  apt:
    sources:
      - ubuntu-toolchain-r-test
    packages:
      - g++-4.8
language: node_js
node_js:
  - 5
  - 6
  - 7
deploy:
  provider: npm
  email: me@me.xx
  api_key:
    secure: ja...w=
  on:
    tags: true
    branch: master
How do I get rid of these messages, and why are they there?
Cheers!

Related

Azure Web application deployment successful, but does not update the web application

Previously I was having an error with the deployment of my React application on Web Service Linux on Azure. That problem was solved in a previous post I made; follow the link:
My Azure Web Application on Linux is not working. The error message in the Azure logs is "react-scripts: not found" and on GitHub "npm ERR! code ELIFECYCLE".
Now I am having another problem, which consists of the following:
After deploying to the Azure platform (I'm using the GitHub option for deployment) and receiving a successful deployment notification, upon entering my GitHub repository I received the error
"npm ERR! code ELIFECYCLE" (follow the link to view the entire log: https://mega.nz/folder/eth0WSiL#pGvXl2yShQfUrNELCKD3cA). Upon entering the application and testing it, I noticed that the deployment really did not work.
An important point worth mentioning is that in the previous problem the solution posted by @JasonPan worked, but when we tested it I was still using the classic Azure deployment center, which was removed a few days ago; after trying to use the current deployment center I came across this error.
I managed to solve the problem. I needed to do two things within my .yml file:
add CI: false and remove the npm run test step
Here is the code:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Node.js version
        uses: actions/setup-node@v1
        with:
          node-version: '14.x'
      - name: npm install, build
        run: |
          npm install
          npm run build --if-present
      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v2
        with:
          name: node-app
          path: .

  deploy:
    runs-on: ubuntu-latest
    needs: build
    environment:
      CI: false
      name: 'production'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
The .yml file before it was changed:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Node.js version
        uses: actions/setup-node@v1
        with:
          node-version: '14.x'
      - name: npm install, build, and test
        run: |
          npm install
          npm run build --if-present
          npm run test --if-present
      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v2
        with:
          name: node-app
          path: .

  deploy:
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: 'production'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}

Caching npm dependencies in circleci

I'm setting up the CI for an existing Express server project that lives in my repo's backend/core folder. Starting with just basic setup and linting. I was able to get npm install and linting to work but I wanted to cache the dependencies so that it wouldn't take 4 minutes to load for each push.
I used the caching scheme they describe here, but it still seemed to run the full install each time. Or, if it was using cached dependencies, it installed grpc each time, which took a while. Any ideas what I can do?
My config.yml for reference:
# Use the latest 2.1 version of CircleCI pipeline process engine. See: https://circleci.com/docs/2.0/configuration-reference

# default executors
executors:
  core-executor:
    docker:
      - image: 'cimg/base:stable'

commands:
  init_env:
    description: initialize environment
    steps:
      - checkout
      - node/install
      - restore_cache:
          keys:
            # when lock file changes, use increasingly general patterns to restore cache
            - node-v1-{{ .Branch }}-{{ checksum "backend/core/package-lock.json" }}
            - node-v1-{{ .Branch }}-
            - node-v1-
      - run: npm --prefix ./backend/core install
      - save_cache:
          paths:
            - ~/backend/core/usr/local/lib/node_modules # location depends on npm version
          key: node-v1-{{ .Branch }}-{{ checksum "backend/core/package-lock.json" }}

jobs:
  install-node:
    executor: core-executor
    steps:
      - checkout
      - node/install
      - run: node --version
      - run: pwd
      - run: ls -A
      - run: npm --prefix ./backend/core install
  lint:
    executor: core-executor
    steps:
      - init_env
      - run: pwd
      - run: ls -A
      - run: ls backend
      - run: ls backend/core -A
      - run: npm --prefix ./backend/core run lint

orbs:
  node: circleci/node@4.1.0

version: 2.1

workflows:
  test_my_app:
    jobs:
      #- install-node
      - lint
      #    requires:
      #      - install-node
I think the best thing to do is to use npm ci, which is faster. The best explanation of this is here: https://stackoverflow.com/a/53325242/4410223. Even though it reinstalls every time, it is consistent, so it is better than caching. When using it, I am unsure what the point of continuing to use a cache in your pipeline is, but caching still seems to be recommended with npm ci.
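As a hypothetical drop-in change (assuming your npm version honors --prefix together with ci), the install step in the config above would become:

      - run: npm --prefix ./backend/core ci

npm ci removes node_modules and installs exactly what package-lock.json specifies, which is what makes it reproducible.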
However, the best way to do this is to just use the node orb you already have in your config. A single step of - node/install-packages will do all that work for you: it replaces your restore_cache, npm install and save_cache steps. You can even see all the steps it performs here: https://circleci.com/developer/orbs/orb/circleci/node#commands-install-packages. Just open the command source and look at the steps on line 71.
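For illustration, a minimal sketch of the lint job rewritten around the orb; the app-dir parameter (which tells the orb where package.json and the lock file live) is an assumption to verify against the orb's documentation and your layout:

  lint:
    executor: core-executor
    steps:
      - checkout
      - node/install
      # restores the dependency cache, installs packages, and saves the cache in one step
      - node/install-packages:
          app-dir: ./backend/core
      - run: npm --prefix ./backend/core run lint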

serverless-domain-manager cannot be found by serverless deployment

I was getting the below error while deploying the lambda on AWS using a Bitbucket pipeline:
Error: Could not set up basepath mapping. Try running sls create_domain first.
Error: 'staging-api.simple.touchsuite.com' could not be found in API Gateway.
ConfigError: Missing region in config
at getDomain.then.then.catch (/opt/atlassian/pipelines/agent/build/node_modules/serverless-domain-manager/index.js:181:15)
at
at runMicrotasksCallback (internal/process/next_tick.js:121:5)
at _combinedTickCallback (internal/process/next_tick.js:131:7)
at process._tickDomainCallback (internal/process/next_tick.js:218:9)
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information
Operating System: linux
Node Version: 8.10.0
Framework Version: 1.61.3
Plugin Version: 3.2.7
SDK Version: 2.3.0
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
So I updated serverless-domain-manager to the newest version, 3.3.1.
I tried to deploy the lambda after updating serverless-domain-manager, and now I am getting the below error.
Serverless Error
Serverless plugin "serverless-domain-manager" not found. Make sure it's installed and listed in the "plugins" section of your serverless config file.
serverless.yml snippet
plugins:
  - serverless-plugin-warmup
  - serverless-offline
  - serverless-log-forwarding
  - serverless-domain-manager

custom:
  warmup:
    schedule: 'cron(0/10 12-23 ? * MON-FRI *)'
    prewarm: true
  headers:
    - Content-Type
    - X-Amz-Date
    - Authorization
    - X-Api-Key
    - X-Amz-Security-Token
    - TS-Staging
    - x-tss-correlation-id
    - x-tss-application-id
  stage: ${opt:stage, self:provider.stage}
  domains:
    prod: api.simple.touchsuite.com
    staging: staging-api.simple.touchsuite.com
    dev: dev-api.simple.touchsuite.com
  customDomain:
    basePath: 'svc'
    domainName: ${self:custom.domains.${self:custom.stage}}
    stage: ${self:custom.stage}
bitbucket-pipeline.yml snippet
image: node:8.10.0

pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          name: Run tests
          script:
            - npm install --global copy
            - npm install
            - NODE_ENV=test npm test
      - step:
          caches:
            - node
          name: Deploy to Staging
          deployment: staging # set to test, staging or production
          script:
            - npm install --global copy
            - npm run deploy:staging
            - npm run deploy:integrations:staging
            - node -e 'require("./scripts/bitbucket.js").triggerPipeline()'
I need some insight: what am I missing that is creating the error?
I have found that with Bitbucket I needed to add an npm install command to make sure my modules and the plugins were all installed before trying to run them. This may be what is missing in your case. You can also turn on caching for the resulting node_modules folder so that it doesn't have to download all the modules every time you deploy.
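For example, a sketch of the deploy step from the question with only an extra npm install added at the top of the script:

      - step:
          caches:
            - node
          name: Deploy to Staging
          deployment: staging
          script:
            - npm install # makes sure serverless-domain-manager and the other plugins are present
            - npm install --global copy
            - npm run deploy:staging
            - npm run deploy:integrations:staging

Each step runs in a fresh container, so node_modules from the "Run tests" step is not available later unless it is cached or reinstalled.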

jHipster app crashes, probably because Cloud Foundry activates the cloud profile

The deployment of my small JHipster app "customerapp" fails, probably because Cloud Foundry sets the profile "cloud" in addition to the profile "dev". I am using several spaces in Cloud Foundry for the different stages of development: dev, staging and prod.
I used the JHipster generator and added some entities: customer, address and contacts. The app runs locally without any issues.
I also use GitLab CI to build, test and deploy my software. My .gitlab-ci.yml looks like this (I deleted some unnecessary parts).
image: mydockerregistry.xxxxx.de/jutoro/jhipster_test/jhipster-dockerimage

services:
  - docker:dind

cache:
  key: "$CI_COMMIT_REF_NAME"
  paths:
    - node_modules
    - .maven

before_script:
  - chmod +x mvnw
  - export MAVEN_USER_HOME=`pwd`/.maven

stages:
  - build
  - package
  - deployToCF

mvn-build:
  stage: build
  only:
    - dev
    - prod
  script:
    - npm install
    - ./mvnw compile -DskipTests -Dmaven.repo.local=$MAVEN_USER_HOME -Dspring.profiles.active=dev

mvn-package-dev:
  stage: package
  only:
    - dev
  script:
    - npm install
    - ./mvnw package -Pdev -DskipTests -Dmaven.repo.local=$MAVEN_USER_HOME -Dspring.profiles.active=dev
  artifacts:
    paths:
      - target/*.war

mvn-package-prod:
  stage: package
  only:
    - prod
  script:
    - npm install
    - ./mvnw package -Pprod -DskipTests -Dmaven.repo.local=$MAVEN_USER_HOME -Dspring.profiles.active=prod
  artifacts:
    paths:
      - target/*.war

deployToCloudFoundry-dev:
  image: pivotalpa/cf-cli-resource
  stage: deployToCF
  only:
    - dev
  cache:
    paths:
      - bin/
  script:
    - bash ci/scripts/deployToCloudFoundry.sh

deployToCloudFoundry-prod:
  image: pivotalpa/cf-cli-resource
  stage: deployToCF
  only:
    - prod
  cache:
    paths:
      - bin/
  script:
    - bash ci/scripts/deployToCloudFoundry.sh
The Dockerfile (the image is also built and pushed to our Docker registry with GitLab CI):
# DOCKER-VERSION 1.8.2
FROM openjdk:8
MAINTAINER Robert Zieschang
RUN apt-get install -y curl
# install node.js
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get install -y nodejs python g++ build-essential && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# install yeoman
RUN npm install -g yo
The deployToCloudFoundry.sh shell script:
cf login -a $CF_API_ENDPOINT -u $CF_USER -p $CF_PASS -o "${CF_ORG^^}" -s ${CI_COMMIT_REF_NAME^^}
cf push -n $CI_PROJECT_NAME-$CI_COMMIT_REF_NAME
My manifest file:
---
applications:
- name: customerapp
memory: 1024M
#buildpack: https://github.com/cloudfoundry/java-buildpack#v3.19.2
path: target/customerapp-0.0.1-SNAPSHOT.war
services:
- postgresql
env:
#SPRING_PROFILES_ACTIVE: dev
#SPRING_PROFILES_DEFAULT: dev
#JAVA_OPTS: -Dspring.profiles.active=dev
The pipeline runs well, the app is packaged into the war file and uploaded to Cloud Foundry, but it crashes, and I assume it is because Cloud Foundry somehow still applies the profile 'cloud', which overrides important configuration from JHipster's 'dev' profile.
[...]
2019-01-02T19:03:16.05+0100 [APP/PROC/WEB/0] OUT 2019-01-02 18:03:16.055 INFO 8 --- [ main] pertySourceApplicationContextInitializer : 'cloud' property source added
2019-01-02T19:03:16.05+0100 [APP/PROC/WEB/0] OUT 2019-01-02 18:03:16.056 INFO 8 --- [ main] nfigurationApplicationContextInitializer : Reconfiguration enabled
2019-01-02T19:03:16.06+0100 [APP/PROC/WEB/0] OUT 2019-01-02 18:03:16.064 INFO 8 --- [ main] com.jutoro.cco.CustomerappApp : The following profiles are active: cloud,dev,swagger
[...]
This later leads to:
2019-01-02T19:03:29.17+0100 [APP/PROC/WEB/0] OUT 2019-01-02 18:03:29.172 ERROR 8 --- [ main] com.jutoro.cco.CustomerappApp : You have misconfigured your application! It should not run with both the 'dev' and 'cloud' profiles at the same time.
[...]
After that, Cloud Foundry stops the app.
2019-01-02T19:04:11.09+0100 [CELL/0] OUT Cell 83899f60-78c9-4323-8d3c-e6255086c8a7 stopping instance 74be1834-b656-4445-506c-bdfa
The generated application-dev.yml and bootstrap.yml were only modified in a few places:
bootstrap.yml
uri: https://admin:${jhipster.registry.password}@url.tomy.jhipsterregistryapp/config
name: customerapp
profile: dev # profile(s) of the property source
label: config-dev
application-dev.yml
client:
  service-url:
    defaultZone: https://admin:${jhipster.registry.password}@url.tomy.jhipsterregistryapp/eureka/
What I tried in order to set the dev profile in CF:
added -Dspring.profiles.active=dev in gitlab-ci.yml in addition to -Pdev
added SPRING_PROFILES_ACTIVE: dev in the manifest env: section
added SPRING_PROFILES_DEFAULT: dev in the manifest env: section
added SPRING_APPLICATION_JSON: {"spring.cloud.dataflow.applicationProperties.stream.spring.profiles.active": "dev"} (as mentioned in https://github.com/spring-cloud/spring-cloud-dataflow/issues/2317)
added JAVA_OPTS: -Dspring.profiles.active=dev in the manifest env: section (cf env customerapp shows that it was set)
set the JAVA_OPTS -Dspring.profiles.active=dev with cf set-env and cf restage
Any help is appreciated.
Regards
Robert
Forget the answer below: it turned out that, deep down, it was a datasource problem which made the app not respond to the heartbeats.
Uncommenting
#hibernate.connection.provider_disables_autocommit: true
in the application properties fixed this.
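In a generated JHipster project this property typically sits under spring.jpa.properties in src/main/resources/config/application-dev.yml (the exact location is an assumption based on a standard JHipster layout); uncommented, it looks like this:

spring:
  jpa:
    properties:
      # let Hibernate delegate autocommit handling to the connection pool
      hibernate.connection.provider_disables_autocommit: true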
Maybe any "future" person may stumble upon the same behaviour.
I was able to deploy my jhipster app to cloud foundry.
I somehow "fixed" it, but I am not aware of further consequences. Yet.
It turned out Cloud Foundry had a problem monitoring my JHipster app via the standard health-check-type http (the "heartbeat" check).
So I decided to switch the monitoring behaviour to a non-heartbeat approach.
Just switch health-check-type to process in your manifest.yml file:
health-check-type: process
The app is now running.
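For reference, a minimal sketch of the manifest from the question with only the health-check-type attribute added:

---
applications:
  - name: customerapp
    memory: 1024M
    path: target/customerapp-0.0.1-SNAPSHOT.war
    # only checks that the container process is still running,
    # instead of expecting the app to answer HTTP health checks
    health-check-type: process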

Cache files are gone in my GitLab CI pipeline

I'm trying to set up GitLab CI for a mono repository.
For the sake of argument, let's say I want to process 2 JavaScript packages:
app
cli
I have defined 4 stages:
install
test
build
deploy
Because I'm reusing the files from previous steps, I use the GitLab cache.
My configuration looks like this:
stages:
  - install
  - test
  - build
  - deploy

install_app:
  stage: install
  image: node:8.9
  cache:
    policy: push
    paths:
      - app/node_modules
  script:
    - cd app
    - npm install

install_cli:
  stage: install
  image: node:8.9
  cache:
    policy: push
    paths:
      - cli/node_modules
  script:
    - cd cli
    - npm install

test_app:
  image: node:8.9
  cache:
    policy: pull
    paths:
      - app/node_modules
  script:
    - cd app
    - npm test

test_cli:
  image: node:8.9
  cache:
    policy: pull
    paths:
      - cli/node_modules
  script:
    - cd cli
    - npm test

build_app:
  stage: build
  image: node:8.9
  cache:
    paths:
      - app/node_modules
      - app/build
  script:
    - cd app
    - npm run build

deploy_app:
  stage: deploy
  image: registry.gitlab.com/my/gcloud/image
  only:
    - master
  environment:
    name: staging
    url: https://example.com
  cache:
    policy: pull
    paths:
      - app/build
  script:
    - gcloud app deploy app/build/app.yaml
        --verbosity info
        --version master
        --promote
        --stop-previous-version
        --quiet
        --project "$GOOGLE_CLOUD_PROJECT"
The problem is in the test stage. Most of the time the test_app job fails because the app/node_modules directory is missing. Sometimes a retry works, but mostly it doesn't.
Also, I would like to use two caches for the build_app job. I want to pull app/node_modules and push app/build. I can't find a way to accomplish this. This makes me feel like I don't fully understand how the cache works.
Why are my cache files gone? Do I misunderstand how GitLab CI cache works?
The cache is provided on a best-effort basis, so don't expect it to always be present.
If you have hard dependencies between jobs, use artifacts and dependencies.
Anyway, if it is just node_modules, I suggest you install it in every job instead of using artifacts; you will not save much time with artifacts.
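A sketch of the artifacts/dependencies approach for one pair of jobs, reusing the job names from the question (expire_in is optional and only limits artifact storage):

install_app:
  stage: install
  image: node:8.9
  artifacts:
    paths:
      - app/node_modules
    expire_in: 1 hour
  script:
    - cd app
    - npm install

test_app:
  stage: test
  image: node:8.9
  # download the artifacts produced by install_app before running the script
  dependencies:
    - install_app
  script:
    - cd app
    - npm test

Unlike the cache, artifacts are uploaded to GitLab after the job and are reliably downloaded in later stages.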
