Codeship independent CI for microservices in monorepo - node.js

Currently we have a NodeJS monolith app. The tests run in Codeship, and if the tests are green, the code is deployed to Heroku. That is pretty easy.
We would now like to break up our monolith app into microservices, and we prefer a monorepo solution.
For example, we have service-1 and service-2 in the repo. We would like to set up an independent CI and deployment pipeline for each service on Codeship.
my-repo
- service-1
  - src
  - package.json
  - docker-compose.yml
  - codeship-steps.yml
- service-2
  - src
  - package.json
  - docker-compose.yml
  - codeship-steps.yml
Do you have any idea how we can set up the ideal CI?

Yes, CodeShip Pro provides a Docker Compose-like approach to setting up multiple services from the same project space. Assuming the microservices are already split up into their own folders, a codeship-services.yml may look like the following:
service-a:
  build:
    context: ./service-a
    dockerfile: Dockerfile # The Dockerfile in the ./service-a directory
service-b:
  build:
    context: ./service-b
Please check out our comprehensive intro guide for more information.
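Each service can then be wired to its own steps. A minimal sketch of a codeship-steps.yml, assuming each service defines an npm test script (the step names here are hypothetical):
- name: service-a-tests
  service: service-a
  command: npm test
- name: service-b-tests
  service: service-b
  command: npm test
A per-service deploy step can be added the same way, as one more step per service.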

Related

Gitlab docker ci template - what is the header name for

In the first example, the image name is docker:latest.
And a stage is part of the pipeline definition, so I can have build, test, and deploy stages.
Snippet 1
gitlab-ci.yml
docker-build:
  # Use the official docker image.
  image: docker:latest
  stage: build
May I know the definition of docker-build?
Can I name it build or something else? What is its usage?
Snippet 2
gitlab-ci.yml
image: docker:latest
services:
  - docker:dind
build:
  stage: build
  script:
    - docker build -t test .
In this other example, services is defined. Why do I need services, and when don't I need it?
Can I say this example must have another file, 'Dockerfile', for the docker build command to work?
Once the build succeeds, will the image be named docker:latest?
Job naming:
There are a few reserved keywords that you cannot use as a job name, like stages, services, etc.; see https://docs.gitlab.com/ee/ci/yaml/#unavailable-names-for-jobs
You can name your job anything else you like.
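To make that concrete: the docker-build job from Snippet 1 could just as well be called build, since any non-reserved name works:
build:
  # identical job, only the name changed
  image: docker:latest
  stage: build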
Stages
As you have written, there is a certain set of predefined stages: .pre, build, test, deploy, and .post - but you can also define your own stages with
stages:
  - build
  - build-docker
  - test
  - deploy
Dockerfile
Yes, you need a Dockerfile for docker build to work, and the tag of your image will be test (not docker:latest), as it is defined with -t test.
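On the services question: the docker:dind service starts a Docker daemon alongside the job, which the docker CLI inside the docker:latest image needs to talk to; without it, docker build has no daemon to connect to. Putting it all together, a sketch of a job that builds and pushes an explicitly tagged image, assuming GitLab's predefined registry variables ($CI_REGISTRY_IMAGE and friends) are available:
image: docker:latest
services:
  - docker:dind # the Docker daemon the docker CLI connects to
docker-build:
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"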
Regarding building Docker images with GitLab CI, I can recommend reading https://blog.callr.tech/building-docker-images-with-gitlab-ci-best-practices/.
I hope this helps somehow. Generally speaking, I recommend you read the GitLab documentation and the getting started guide: https://docs.gitlab.com/ee/ci/quick_start/ - it explains a lot of the default concepts. And I would recommend not asking too many questions within one Stack Overflow question; keep it focused on one topic.

Dockerize and reuse NodeJS dependency

I'm developing an application based on a microfrontend architecture, and in a production environment, the goal is to have each microfrontend as a dockerized NodeJS application.
Right now, each microfrontend depends on an internal NPM package developed by the company, and I would like to know if it's possible to have that dependency as an independent image, where each microfrontend would somehow reuse it instead of installing it multiple times (once per microfrontend).
I've been running some tests, and I've managed to dockerize the internal dependency, but I haven't been able to make it reachable from the microfrontends. I was hoping there was a way to set it up in package.json, similar to how it's done for a local path, but since each image's scope is isolated, they can't find that dependency.
Thanks in advance.
There are at least two solutions to your question:
1. Create a package and import it in every project (see Verdaccio for a local npm registry, and the .npmrc sketch below).
2. Use a single Docker image with a shared node_modules, and change the command in docker-compose.
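For solution 1, a minimal sketch of the .npmrc each project would carry, assuming a Verdaccio instance running at its default address:
# .npmrc - point npm at the local Verdaccio registry (default port 4873)
registry=http://localhost:4873/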
Solution 2
Basically, the idea is to put all your microservices into a single Docker image, in a structure like this:
/service1
/service2
/service3
/node_modules
/package.json
Then, in your docker-compose.yaml:
version: '3'
services:
  service1:
    image: my-image:<version or latest>
    command: npm run service1:start
    environment:
      ...
  service2:
    image: my-image:<version or latest>
    command: npm run service2:start
    environment:
      ...
  service3:
    image: my-image:<version or latest>
    command: npm run service3:start
    environment:
      ...
The advantage is that you now have a single image to deploy in production, and all the shared code is in one place.
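For reference, a sketch of the scripts section in the shared package.json that the compose file above assumes (the entry-point paths are hypothetical):
{
  "name": "my-monorepo",
  "scripts": {
    "service1:start": "node service1/index.js",
    "service2:start": "node service2/index.js",
    "service3:start": "node service3/index.js"
  }
}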

Problem deploying MERN app with Docker to GCP App Engine - should deploy take multiple hours?

I am inexperienced with DevOps, which drew me to using Google App Engine to deploy my MERN application. Currently, I have the following Dockerfile and entrypoint.sh:
# Dockerfile
FROM node:13.12.0-alpine
WORKDIR /app
COPY . ./
RUN npm install --silent
WORKDIR /app/client
RUN npm install --silent
WORKDIR /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT [ "/app/entrypoint.sh" ]
# Entrypoint.sh
#!/bin/sh
node /app/index.js &
cd /app/client
npm start
The React front end is in a client folder, which is located in the base directory of the Node application. I am attempting to deploy these together, and would generally prefer to deploy together rather than separate. Running docker-compose up --build successfully redeploys my application on localhost.
I have created a very simple app.yaml file which is needed for Google App Engine:
# app.yaml
runtime: custom
env: standard
I read in the docs here to use runtime: custom when using a Dockerfile to configure the runtime environment. I initially selected a standard environment over a flexible environment, and so I've added env: standard as the other line in the app.yaml.
After installing and running gcloud app deploy, things kicked off; however, for the last several hours this is what I've seen in my terminal window:
Hours seems like a much longer time than deploying an application should take, and I've begun to think that I've done something wrong.
You are probably uploading more files than you need.
Use a .gcloudignore file to describe the files/folders that you do not want to upload. LINK
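It uses gitignore-style syntax; a sketch for the layout above (node_modules in particular, since both npm install steps recreate it inside the image anyway):
# .gcloudignore - keep heavy/irrelevant files out of the upload
.git
.gitignore
node_modules/
client/node_modules/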
You may need to change the file structure of your current project.
Additionally, it might be worth researching the standard nodejs10 runtime further. It uploads and starts much faster than the flexible alternative (a custom env is part of App Engine Flex). Then you can deploy each part to a different service.
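If you go that route, the app.yaml shrinks to something like this sketch; on the standard environment the Dockerfile is not used, and App Engine starts the app with npm start by default:
# app.yaml - sketch for the standard environment
runtime: nodejs10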

Problem to run nodejs app on Gitlab Pages

I'm trying to run my NodeJS app on GitLab Pages. I use a gitlab-ci.yml file for this, in which I deploy and run the NodeJS app. Unfortunately, the pipeline kills the process after 1 hour, because the pipeline treats running the NodeJS app as part of the build script. I have two questions:
- Can you run a nodejs app on gitlab pages?
- If so, what is the best way to start the app?
Below you find the gitlab-ci.yml file.
Thanks!
image: node:latest
stages:
  - build
cache:
  paths:
    - node_modules/
install_dependencies:
  stage: build
  script:
    - npm install
    - npm install -g nodemon
    - NODE_ENV=production nodemon app.js
  artifacts:
    paths:
      - node_modules/
Can you run a nodejs app on gitlab pages?
No! GitLab Pages only lets you host static websites: https://about.gitlab.com/product/pages/
If so, what is the best way to start the app?
If your app is static, try using a static site generator! If you want to run NodeJS, try other hosting platforms like Heroku or Clever Cloud.
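For the static case, a sketch of the special pages job that GitLab expects, assuming a build script that writes the site into public/ (the directory Pages serves from):
image: node:latest
pages:
  stage: deploy
  script:
    - npm ci
    - npm run build # hypothetical script that outputs static files to public/
  artifacts:
    paths:
      - public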

How to manage multiple backend stacks for development?

I am looking for the best/simplest way to manage a local development environment for multiple stacks. For example, on one project I'm building a MEAN stack backend.
Docker was recommended to me; however, I believe it would complicate the deployment process, because shouldn't you have one container for mongo, one for express, etc., as found in this question on Stack Overflow?
How do developers manage multiple environments without VMs?
And in particular, what are best practices doing this on ubuntu?
Thanks a lot.
With Docker Compose you can easily create multiple containers in one go. For development, the containers are usually configured to mount a local folder into the container's filesystem. This way you can easily work on your code and have live reloading. A sample docker-compose.yml could look like this:
version: '2'
services:
  node:
    build: ./node
    ports:
      - "3000:3000"
    volumes:
      - ./node:/src
      - /src/node_modules
    links:
      - mongo
    command: nodemon --legacy-watch /src/bin/www
  mongo:
    image: mongo
You can then just type
docker-compose up
And your stack will be up in seconds.
