JHipster app crashes, probably because Cloud Foundry activates the cloud profile - jhipster

The deployment of my small JHipster app "customerapp" fails, probably because Cloud Foundry sets the profile "cloud" in addition to the profile "dev". I am using several spaces in Cloud Foundry for the different stages of development: dev, staging and prod.
I used the JHipster generator and added some entities: customer, address and contacts. The app runs locally without any issues.
I also use GitLab CI to build, test and deploy my software. My .gitlab-ci.yml looks like this (I deleted some unnecessary parts):
image: mydockerregistry.xxxxx.de/jutoro/jhipster_test/jhipster-dockerimage

services:
  - docker:dind

cache:
  key: "$CI_COMMIT_REF_NAME"
  paths:
    - node_modules
    - .maven

before_script:
  - chmod +x mvnw
  - export MAVEN_USER_HOME=`pwd`/.maven

stages:
  - build
  - package
  - deployToCF

mvn-build:
  stage: build
  only:
    - dev
    - prod
  script:
    - npm install
    - ./mvnw compile -DskipTests -Dmaven.repo.local=$MAVEN_USER_HOME -Dspring.profiles.active=dev

mvn-package-dev:
  stage: package
  only:
    - dev
  script:
    - npm install
    - ./mvnw package -Pdev -DskipTests -Dmaven.repo.local=$MAVEN_USER_HOME -Dspring.profiles.active=dev
  artifacts:
    paths:
      - target/*.war

mvn-package-prod:
  stage: package
  only:
    - prod
  script:
    - npm install
    - ./mvnw package -Pprod -DskipTests -Dmaven.repo.local=$MAVEN_USER_HOME -Dspring.profiles.active=prod
  artifacts:
    paths:
      - target/*.war

deployToCloudFoundry-dev:
  image: pivotalpa/cf-cli-resource
  stage: deployToCF
  only:
    - dev
  cache:
    paths:
      - bin/
  script:
    - bash ci/scripts/deployToCloudFoundry.sh

deployToCloudFoundry-prod:
  image: pivotalpa/cf-cli-resource
  stage: deployToCF
  only:
    - prod
  cache:
    paths:
      - bin/
  script:
    - bash ci/scripts/deployToCloudFoundry.sh
The Dockerfile (the image is also built and pushed to our Docker registry via GitLab CI):
# DOCKER-VERSION 1.8.2
FROM openjdk:8
MAINTAINER Robert Zieschang
RUN apt-get install -y curl
# install node.js
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get install -y nodejs python g++ build-essential && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# install yeoman
RUN npm install -g yo
The deployToCloudFoundry.sh shell script:
cf login -a $CF_API_ENDPOINT -u $CF_USER -p $CF_PASS -o "${CF_ORG^^}" -s ${CI_COMMIT_REF_NAME^^}
cf push -n $CI_PROJECT_NAME-$CI_COMMIT_REF_NAME
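As an aside, the ^^ in the cf login line is Bash parameter expansion that upper-cases the value, which here maps the lower-case branch names (dev, prod) to upper-cased Cloud Foundry org/space names. A minimal illustration, assuming the spaces really are named in upper case:

# ${var^^} upper-cases the whole value (Bash 4+),
# so the branch "dev" targets the Cloud Foundry space "DEV".
CI_COMMIT_REF_NAME=dev
echo "${CI_COMMIT_REF_NAME^^}"   # prints: DEV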
My manifest file:
---
applications:
  - name: customerapp
    memory: 1024M
    #buildpack: https://github.com/cloudfoundry/java-buildpack#v3.19.2
    path: target/customerapp-0.0.1-SNAPSHOT.war
    services:
      - postgresql
    env:
      #SPRING_PROFILES_ACTIVE: dev
      #SPRING_PROFILES_DEFAULT: dev
      #JAVA_OPTS: -Dspring.profiles.active=dev
The pipeline runs well, the app is packaged into the war file and uploaded to Cloud Foundry, but it crashes, and I assume it is because Cloud Foundry somehow still applies the profile 'cloud', which overrides important configuration from JHipster's 'dev' profile.
[...]
2019-01-02T19:03:16.05+0100 [APP/PROC/WEB/0] OUT 2019-01-02 18:03:16.055 INFO 8 --- [ main] pertySourceApplicationContextInitializer : 'cloud' property source added
2019-01-02T19:03:16.05+0100 [APP/PROC/WEB/0] OUT 2019-01-02 18:03:16.056 INFO 8 --- [ main] nfigurationApplicationContextInitializer : Reconfiguration enabled
2019-01-02T19:03:16.06+0100 [APP/PROC/WEB/0] OUT 2019-01-02 18:03:16.064 INFO 8 --- [ main] com.jutoro.cco.CustomerappApp : The following profiles are active: cloud,dev,swagger
[...]
This later leads to:
2019-01-02T19:03:29.17+0100 [APP/PROC/WEB/0] OUT 2019-01-02 18:03:29.172 ERROR 8 --- [ main] com.jutoro.cco.CustomerappApp : You have misconfigured your application! It should not run with both the 'dev' and 'cloud' profiles at the same time.
[...]
After that, Cloud Foundry stops the app.
2019-01-02T19:04:11.09+0100 [CELL/0] OUT Cell 83899f60-78c9-4323-8d3c-e6255086c8a7 stopping instance 74be1834-b656-4445-506c-bdfa
The generated application-dev.yml and bootstrap.yml were only modified in a few places:
bootstrap.yml
      uri: https://admin:${jhipster.registry.password}@url.tomy.jhipsterregistryapp/config
      name: customerapp
      profile: dev # profile(s) of the property source
      label: config-dev
application-dev.yml
client:
  service-url:
    defaultZone: https://admin:${jhipster.registry.password}@url.tomy.jhipsterregistryapp/eureka/
What I tried in order to set the dev profile in CF:
added -Dspring.profiles.active=dev in gitlab-ci.yml in addition to -Pdev
added SPRING_PROFILES_ACTIVE: dev in the manifest env: section
added SPRING_PROFILES_DEFAULT: dev in the manifest env: section
added SPRING_APPLICATION_JSON: {"spring.cloud.dataflow.applicationProperties.stream.spring.profiles.active": "dev"} (as mentioned in https://github.com/spring-cloud/spring-cloud-dataflow/issues/2317)
added JAVA_OPTS: -Dspring.profiles.active=dev in the manifest env: section (cf env customerapp shows that it was set)
set JAVA_OPTS to -Dspring.profiles.active=dev with cf set-env and cf restage (see the commands below)
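For reference, the last item corresponds to CLI calls roughly like these (app name as in the manifest):

cf set-env customerapp JAVA_OPTS "-Dspring.profiles.active=dev"
cf restage customerapp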
Any help is appreciated.
Regards
Robert

Forget my earlier answer. It turned out that deep down it was a datasource problem, which made the app not respond to the heartbeats.
Uncommenting
#hibernate.connection.provider_disables_autocommit: true
in the application properties fixed this.
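For anyone looking for the exact spot: in the generated JHipster configuration this property lives under spring.jpa.properties, so the uncommented fragment should read roughly like this (a sketch; your generated file may differ):

spring:
  jpa:
    properties:
      hibernate.connection.provider_disables_autocommit: true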

In case any "future" person stumbles upon the same behaviour:
I was able to deploy my JHipster app to Cloud Foundry.
I somehow "fixed" it, but I am not aware of further consequences. Yet.
It turned out Cloud Foundry had a problem monitoring my JHipster app via the standard health-check-type http (the "heartbeat").
So I decided to switch the monitoring behaviour to a non-heartbeat mechanism.
Just switch health-check-type to process in your manifest.yml file.
health-check-type: process
The app is now running.
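Applied to the manifest from the question, the relevant part becomes:

---
applications:
  - name: customerapp
    memory: 1024M
    path: target/customerapp-0.0.1-SNAPSHOT.war
    health-check-type: process
    services:
      - postgresql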

Related

How to deploy Angular App with GitLab CI/CD

I've been trying to set up a CI/CD pipeline on my repo which runs common tasks like linting, tests, etc. I've successfully set up a GitLab Runner which is working fine. The only part I'm stuck on is the "deploy" part.
When I run my build, how do I actually get the files into my /var/www/xyz folder?
I get that everything is running in a Docker container and I can't just magically copy and paste my files there, but I don't get how I get the files into my actual server directory. I've been searching for days for good docs/explanations, so as always, Stack Overflow is my last resort for help.
I'm running on an Ubuntu 20.04 LTS VPS and a SaaS GitLab repository, if that info is needed. This is my .gitlab-ci.yml:
image: timbru31/node-alpine-git

before_script:
  - git fetch origin

stages:
  - setup
  - test
  - build
  - deploy

# All Setup Jobs
Install Dependencies:
  stage: setup
  interruptible: true
  script:
    - npm install
    - npm i -g @nrwl/cli
  artifacts:
    paths:
      - node_modules/

# All Test Jobs
Lint:
  stage: test
  script: npx nx run nx-fun:lint

Tests:
  stage: test
  script: npx nx run nx-fun:test

Deploy:
  stage: build
  script:
    - ls /var/www/
    - npx nx build --prod --output-path=dist/
    - cp -r dist/* /var/www/html/neostax/
  only:
    refs:
      - master
Normally I would SSH into my server, run the build, and then copy the build to the corresponding web directory.
TL;DR - How do I get files from a GitLab Runner into an actual directory on the server?
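One way is to have a deploy job mirror that manual SSH-and-copy workflow. A minimal sketch, assuming a hypothetical $SSH_PRIVATE_KEY CI/CD variable holding a deploy key, plus hypothetical $SSH_USER and $SSH_HOST variables (none of these are in the original post):

# Sketch only: $SSH_PRIVATE_KEY, $SSH_USER and $SSH_HOST are hypothetical
# CI/CD variables you would define in the GitLab project settings.
Deploy to server:
  stage: deploy
  only:
    refs:
      - master
  before_script:
    - apk add --no-cache openssh-client rsync # timbru31/node-alpine-git is Alpine-based
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan "$SSH_HOST" >> ~/.ssh/known_hosts
  script:
    - npx nx build --prod --output-path=dist/
    - rsync -az dist/ "$SSH_USER@$SSH_HOST:/var/www/html/neostax/"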

How to run the Node.js code in AWS Amplify

I have a React/Node app which I am trying to host on AWS Amplify. On the first try, my app deployed, but I saw that some pages/buttons were not working because of the Node.js part. Then I did some searching and saw that I need to modify the amplify.yml file as follows:
version: 1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
I'm getting build issues (build timeout) with the above build settings.
Make sure you have created a user with AdministratorAccess-Amplify privileges in IAM.
Then it is necessary to replace line 6 of the Hands-On amplify.yml with
npm install -g @aws-amplify/cli
The code should now display correctly, completing the Hands-On.
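Applied to the amplify.yml above, the backend phase would then start like this (a sketch; only the replaced line changes):

backend:
  phases:
    build:
      commands:
        - npm install -g @aws-amplify/cli
        - amplifyPush --simple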

Restoring from Azure Artefacts in CircleCI

I'm trying to restore from Azure Artefacts in CircleCI. I've been trying to piece together bits and pieces from artifacts-credprovider, including the Docker image example. My CircleCI config looks like this:
version: 2.1
jobs:
  build:
    docker:
      - image: mcr.microsoft.com/dotnet/core/sdk:3.1
    working_directory: ~/repo
    steps:
      - checkout
      - run:
          name: Install Artifacts Credprov
          command: |
            wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
      - run:
          name: Restore Packages
          command: dotnet restore
      - run:
          name: Build
          command: dotnet build
workflows:
  main:
    jobs:
      - build
In the CircleCI Project I've also set the following env vars:
DOTNET_SYSTEM_NET_HTTP_USESOCKETSHTTPHANDLER=0
NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED=true
VSS_NUGET_EXTERNAL_FEED_ENDPOINTS={"endpointCredentials": [{"endpoint":"https://pkgs.dev.azure.com/paulgrenyer0243/_packaging/paulgrenyer0243/nuget/v3/index.json", "username":"...", "password":"..."}]}
and when restoring this is the error I get:
#!/bin/bash -eo pipefail
dotnet restore
/usr/share/dotnet/sdk/3.1.201/NuGet.targets(124,5): error : Unable to load the service index for source https://pkgs.dev.azure.com/paulgrenyer0243/_packaging/paulgrenyer0243/nuget/v3/index.json. [/root/repo/NugetClient.csproj]
/usr/share/dotnet/sdk/3.1.201/NuGet.targets(124,5): error : Response status code does not indicate success: 401 (Unauthorized). [/root/repo/NugetClient.csproj]
Exited with code exit status 1
I'm assuming that either the env vars aren't being picked up, the env vars have the wrong values or I'm trying the wrong approach.
Does anyone have this working or can see what I'm doing wrong?
I agree with you. The dotnet restore task did not get the env vars from the CircleCI Project.
You could try to set the environment variable in a container, like:
version: 2.1
jobs:
  build:
    docker:
      - image: mcr.microsoft.com/dotnet/core/sdk:3.1
        environment:
          # quote the JSON value so YAML treats it as a string, not a mapping
          VSS_NUGET_EXTERNAL_FEED_ENDPOINTS: '{"endpointCredentials": [{"endpoint":"https://pkgs.dev.azure.com/XXX/_packaging/XXX/nuget/v3/index.json", "username":"...", "password":"..."}]}'
Then specify the sources explicitly when restoring packages:
      - run:
          name: Restore Packages
          command: dotnet restore -s "https://pkgs.dev.azure.com/XXX/_packaging/XXX/nuget/v3/index.json" -s "https://api.nuget.org/v3/index.json"
You could check this document and this thread for some more details.
Hope this helps.

React App + Node GitLab CI/CD pipeline for GAE (Build and Deploy)

How do you create two stages, one to build the React app and one to deploy the files to GAE?
My current YML looks like this:
image: 'google/cloud-sdk:slim'

build-stage:
  stage: build
  image: 'node:latest'
  script:
    - 'npm install'
    - 'npm install --prefix ./client'
    - 'npm run build --prefix ./client'
  only:
    - master

deployment-stage:
  stage: deploy
  script:
    - 'gcloud auth activate-service-account --key-file $GOOGLE_SERVICE_ACCOUNT_FILE'
    - 'gcloud app deploy app.yaml --project $GOOGLE_PROJECT_ID --set-env-vars **redacted**'
  only:
    - master
Google App Engine doesn't show any builds happening on the Builds tab. I have made a service account with these permissions: Here
I have also set my CI/CD variables in GitLab; here is the output from the jobs so far.
Build stage:
$ npm run build --prefix ./client
> client#0.1.0 build /builds/**redacted**/client
> react-scripts build
Creating an optimized production build...
Compiled successfully.
File sizes after gzip:
276.17 KB build/static/js/2.63b40945.chunk.js
59.19 KB build/static/css/2.2e872fcd.chunk.css
4.3 KB build/static/js/main.7cffe524.chunk.js
923 B build/static/css/main.433538f4.chunk.css
772 B build/static/js/runtime-main.ef76e641.js
The project was built assuming it is hosted at /.
You can control this with the homepage field in your package.json.
The build folder is ready to be deployed.
You may serve it with a static server:
npm install -g serve
serve -s build
Find out more about deployment here:
bit.ly/CRA-deploy
Job succeeded
Deploy stage:
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/**redacted**/.git/
Created fresh repository.
From https://gitlab.com/**redacted**
* [new ref] refs/pipelines/124363587 -> refs/pipelines/124363587
* [new branch] master -> origin/master
Checking out f2026f12 as master...
Skipping Git submodules setup
$ gcloud auth activate-service-account --key-file $GOOGLE_SERVICE_ACCOUNT_FILE
00:02
Activated service account credentials for: [**redacted**]
$ gcloud app deploy app.yaml --project $GOOGLE_PROJECT_ID --set-env-vars **redacted**
Job succeeded
I think the issue is that the build files are not being uploaded, as they are in a separate container.
I tried to run it all in one script step, but google/cloud-sdk:slim doesn't contain npm to do the builds or installs.
Thanks!
You need to use GitLab artifacts. That's how you can pass an output from Stage 1 to Stage 2.
E.g.
stages:
  - build
  - deploy

build_project:
  stage: build
  image: node:15
  script:
    - echo "Start building App"
    - npm install
    - cd client
    - npm install
    - CI=false npm run build
    - echo "Build successfully!"
  artifacts:
    expire_in: 1 hour
    paths:
      - client/build
      - node_modules

deploy_production:
  stage: deploy
  image: google/cloud-sdk:alpine
  environment: Production
  only:
    - master
  script:
    - echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud --quiet --project $PROJECT_ID app deploy app.yaml
  after_script:
    - rm /tmp/$CI_PIPELINE_ID.json

deploy_staging:
  stage: deploy
  image: google/cloud-sdk:alpine
  environment: Staging
  only:
    - staging
  script:
    - echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud --quiet --project $PROJECT_ID app deploy staging-app.yaml --verbosity=info
  after_script:
    - rm /tmp/$CI_PIPELINE_ID.json
Figured this out with some trial and error...
image: python:2.7

before_script:
  - echo "deb http://packages.cloud.google.com/apt cloud-sdk-jessie main" | tee /etc/apt/sources.list.d/google-cloud-sdk.list
  - curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
  - apt-get update
  - apt-get -qq -y install google-cloud-sdk
  - apt-get -qq -y install npm
  - npm install
  - npm install --prefix ./client
  - npm run build --prefix ./client

deploy:
  stage: deploy
  environment: Production
  only:
    - master
  script:
    - gcloud auth activate-service-account --key-file $GOOGLE_SERVICE_ACCOUNT_FILE
    - gcloud app deploy app.yaml --project $GOOGLE_PROJECT_ID
I moved the set-env-vars to the app.yaml, as they cannot be set as a flag on gcloud app deploy, per the docs: https://cloud.google.com/appengine/docs/standard/nodejs/config/appref#environment_variables
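A minimal sketch of that app.yaml section (MY_ENV_VAR is a hypothetical placeholder, not from the original post):

# app.yaml (sketch): App Engine reads environment variables from env_variables.
runtime: nodejs10
env_variables:
  MY_ENV_VAR: "some-value"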

Change directory in Bitbucket pipeline

My folder structure:
-backend
-frontend
My React app is placed in the frontend directory.
image: node:10.15.3

pipelines:
  default:
    - step:
        caches:
          - node
        script: # Modify the commands below to build your repository.
          - yarn install
          - yarn test
          - yarn build
This one fails. How do I go into the frontend directory to run these commands?
Bitbucket Pipelines run on a Bitbucket cloud server, so, just as in a local command-line interface, you can navigate using commands like cd and mkdir.
image: node:10.15.3

pipelines:
  default:
    - step:
        caches:
          - node
        script: # Modify the commands below to build your repository.
          - cd frontend
          - yarn install
          - yarn test
          - yarn build
          - cd ../ # if you need to go back
          # Then, you will probably need to deploy your app, so you can use:
          - apt-get update
          - apt-get -qq install git-ftp
          - git ftp push --user $FTP_USERNAME --passwd $FTP_PASSWORD $FTP_HOST
If you need to test the syntax of your yml file, try here.
