What is the best practice for running Cypress tests in a CI/CD pipeline? - node.js

I'm working in a private enterprise and want to integrate Cypress E2E tests into our GitLab CI/CD pipeline.
Since Cypress needs to download and install a binary zip file, I can't simply run npm i and expect things to work.
I'm weighing three options and I'm not sure which is ideal:
Somehow install the Cypress binary during CI, cache it, and deploy the app image to both test & dev environments. One will serve for testing and one for the actual app deployment.
(if possible, this is the fastest way since it doesn't require building an additional image, and I would love to see a gitlab-ci.yml example :) - a sketch follows this question)
Use two different images:
nodejs-8-rhel7 image - which will be used for app deployments (and is currently in use)
cypress-included image - which will include Cypress as well as the app code (and will be used for testing).
In this case I would have to build the app image twice, once with Cypress and once without, which looks like a slow process.
Use the cypress-included image for both app & testing deployments.
Since our current application base image is nodejs-8-rhel7, I'm afraid changing the base image will cause trouble (Node.js version differences, etc.).
Which of the above is the best option? Do you have a better solution?
Thanks!
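For option 1, a minimal gitlab-ci.yml sketch of one way to cache the Cypress binary between pipeline runs. The job name, stage, and image are assumptions; CYPRESS_CACHE_FOLDER is Cypress's documented way to relocate its binary cache so GitLab's cache can pick it up:

```yaml
# .gitlab-ci.yml (excerpt) - cache node_modules and the Cypress binary.
variables:
  # Keep the Cypress binary inside the project dir so GitLab can cache it.
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/cache/Cypress"

cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - node_modules/
    - cache/Cypress/

e2e:
  image: node:8        # hypothetical; match your app's base image
  stage: test
  script:
    - npm ci
    - npx cypress verify   # confirms the cached binary is usable
    - npx cypress run
```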

Related

Node / Vite one build for multi environments

I'm using the Vite framework to build my frontend application.
I would like to be able to build a version once, let's say 1.0, then run it in the dev environment (with dev variables), then in production (with prod variables), as I do with Java apps for example.
This is quite important because I don't want to "build for prod" once my tests in dev pass, taking the risk of building something slightly different from dev (dependencies, or even commits).
I know that Node frontend projects are simply plain JS files once the application is built, so I don't think reading an environment variable at runtime is possible.
Any ideas?
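One common way to get a build-once artifact with Vite (a sketch, not from the original thread) is to ship a small config file next to the static build and fetch it at application startup; the deployment pipeline then swaps the file per environment. The file name public/config.json and the apiUrl key are assumptions for illustration:

```js
// src/runtime-config.js - load environment-specific settings at runtime,
// so the same built bundle can run in dev, test, and prod.
let config = null;

export async function loadConfig() {
  // public/config.json is copied into the build output untouched by Vite,
  // and is replaced per environment at deploy time, e.g.
  // { "apiUrl": "https://api.dev.example.com" }
  const response = await fetch('/config.json');
  if (!response.ok) {
    throw new Error(`Could not load runtime config: ${response.status}`);
  }
  config = await response.json();
  return config;
}

export function getConfig() {
  if (!config) throw new Error('loadConfig() must complete before getConfig()');
  return config;
}
```

Call loadConfig() before mounting the app, e.g. loadConfig().then(() => mountApp()).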

Can Google App Engine (flexible environment) perform a build step defined in package.json just before deployment?

I couldn't find any documentation about build steps in the flexible environment. The only thing I found is that App Engine will run the start script from your package.json file after deployment, but is it possible to make it run the build script first? This is what Heroku does and I want to replicate it.
What you're looking for is the script called gcp-build, as it can perform a custom build step at deployment, just before starting the application. While this is only documented for the Standard Environment as of now (I've let the engineers know), there are multiple public resources confirming that this works in both environments. See the following links as reference:
Why does Google App Engine flex build step fail while standard works for the same code?
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/appengine/typescript
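A minimal package.json sketch showing where gcp-build fits (the TypeScript compile step is just an example build command):

```json
{
  "scripts": {
    "gcp-build": "npm run build",
    "build": "tsc -p .",
    "start": "node dist/server.js"
  }
}
```

App Engine runs gcp-build during deployment and then runs start once the app is live.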

Read Azure App Settings in Angular4 CLI

I have an Angular4 web app, deployed on Azure. Now I want to deploy this app to other environments on Azure: one for testing, one for acceptance and one for production. Every environment has different API endpoints and may have other variables, like Application Insights. All those environments run Angular in production mode.
The way Angular advises you to do this is with the environment files (environment.test.ts, environment.acc.ts, environment.prod.ts). I could configure all the different API endpoints in those files and run my build with --prod for production, for example.
But that is not the way I want to do this. I want to use the exact same application package deployed to test for my acceptance environment, without rebuilding the project. In Visual Studio Online, this is also really simple to configure.
The point is: how can I make my API endpoints differ per environment in that way?
The way I want to do this is with the App Settings in Azure. But Angular can't get to those environment variables because it runs on the client side. Node.js runs on the server side and could read those App Settings, but if that's the way I need to do it, how do I make Node.js (used by the Angular4 CLI) send those server variables to the client side? And what about the performance impact of this solution?
How did you fix this problem for your Angular4 apps on Azure? Is it just impossible to fix this problem with the Azure App Settings?
For everyone with the same question: I didn't fix this problem the way I described above.
In the end, I did it the way Angular wants you to do it: rebuild for dev, rebuild for acc, and rebuild for prod.
In Visual Studio Online, at build time, it builds and tests our code and saves the uncompiled/unminified code. At release time, it builds and tests it again and releases it to the right environment with the right environment variables (--prod for example).
I don't think there is another way to fix this.
The solution is pretty old school but it works! You can also use branching or tagging for this purpose instead of copying the code into the package.
The best solution, as you said, is to use the Azure App Settings: they are exposed as environment variables, so you should implement an API with Node.js that shares the variables you want with the client.
Of course there is an impact because of the additional HTTP call, but it happens only once at application start (roughly 5 ms at most), and whether that matters depends on each application's requirements.
Another option could be to move the variables to a JSON file in the assets folder and change it at deploy time with the release pipeline. That's easier to implement, but the disadvantage is that you will have to use release variables instead of App Settings, and if you have config changes you will have to update the variable value first and redeploy. That works most of the time, but sometimes you just want to change something like a connection string, and you will still have to redeploy.
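A minimal sketch of the Node.js API approach described above, assuming an Express server fronts the Angular bundle; the endpoint path and setting names are hypothetical:

```js
// server.js - serves the Angular build and exposes selected App Settings.
const express = require('express');
const app = express();

// Azure App Settings are surfaced to the server as environment variables.
app.get('/api/config', (req, res) => {
  res.json({
    apiEndpoint: process.env.API_ENDPOINT,        // hypothetical setting name
    appInsightsKey: process.env.APPINSIGHTS_KEY,  // hypothetical setting name
  });
});

// Serve the compiled Angular app.
app.use(express.static('dist'));

app.listen(process.env.PORT || 3000);
```

The Angular app then fetches /api/config once at startup, which matches the single HTTP call mentioned above.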

Creating a Web UI for StrongLoop build & deploy processes?

I want to build a web UI for StrongLoop that would let a user run the build and deploy process, like StrongLoop Arc does.
There are simple Node applications (web services) that were not created with StrongLoop tools, and I need to deploy these applications via the web UI. The solution I have in mind is a set of server-side processes, with the steps listed below:
Upload a zip file (the Node application) to the server
Extract the zip and build it into a tar.gz with a shell command (slc build) through the Node.js child_process API
Deploy the tar.gz file to the relevant StrongLoop host with a shell command (slc deploy..) through the API mentioned in the previous step (a sketch of steps 2 and 3 follows this question)
I wonder: is there any alternative way to deploy a Node application (not created with StrongLoop tools) to a StrongLoop host via a web UI using some StrongLoop API?
I have looked through the API but could not find a specific solution.
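A sketch of steps 2 and 3 with the child_process API, as the question proposes. The directory layout and host URL are hypothetical, and slc flags are omitted since they depend on your packaging needs:

```js
// deploy.js - run `slc build` and `slc deploy` from a web UI's upload handler.
const { execFile } = require('child_process');
const { promisify } = require('util');
const execFileAsync = promisify(execFile);

async function buildAndDeploy(appDir, strongLoopHost) {
  // Step 2: package the extracted app with `slc build`.
  await execFileAsync('slc', ['build'], { cwd: appDir });

  // Step 3: push the resulting package to the StrongLoop host.
  await execFileAsync('slc', ['deploy', strongLoopHost], { cwd: appDir });
}

// Hypothetical usage:
// buildAndDeploy('/tmp/uploads/my-app', 'http://deploy-host:8701')
//   .catch(err => console.error('Deployment failed:', err));
```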
What you require is a CDP (continuous delivery pipeline) setup. There seem to be many ways to achieve this (the easiest is using Codeship or similar platforms), but if you want to know how it works, it takes a bit of orchestration tooling. To describe the steps I'll be using the following tools:
Docker (what is docker?)
Ansible (Use Cases and How it works?)
Jenkins (What is it and Why to use it?)
"There are many other combination of tools that you can look at, but this should give you an idea"
Now that we have the tools, I'll try to describe the deployment pipeline with a very basic use-case.
Step I "Ideally" - Creating a docker image for your nodejs application.
What generally everyone suggests is that you create a docker image of your application. Then save this image on docker-hub. How this will help you is that, now your nodejs application is contained inside a docker image which makes it independent of the Host and can be deployed anywhere you want.
To create this image all you need to do is create a Dockerfile, which is described in the in the link I've shared.
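A minimal Dockerfile sketch for a Node.js app (the port and entry file are assumptions):

```dockerfile
# Dockerfile - containerize the Node.js application.
FROM node:8

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm install --production

# Copy the rest of the application source.
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
```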
Step II "Ideally" - Creating an Ansible playbook to mimic the setup steps of your application.
Ansible playbooks are basically used to automate every manual process you would otherwise need to perform to set up, deploy, and run your application. This removes the need to run even trivial tasks like "slc build" by hand; a tiny playbook sketch follows.
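A sketch for Step II, assuming the image from Step I was pushed to Docker Hub; the image name and host group are made up, and module options vary across Ansible versions:

```yaml
# deploy.yml - pull and run the application container on the target hosts.
- hosts: app_servers
  become: yes
  tasks:
    - name: Pull the application image from Docker Hub
      docker_image:
        name: myorg/my-node-app   # hypothetical image
        source: pull

    - name: Run the application container
      docker_container:
        name: my-node-app
        image: myorg/my-node-app
        ports:
          - "3000:3000"
        restart_policy: always
```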
Step III "Ideally" - This is where we get to the UI stuff
By using Jenkins, you get a UI that helps you configure tasks which, combined with GitHub hooks, can trigger the deployment as soon as you make a commit. This is explained in more detail in the link shared above.
To summarize, this is roughly what goes on in the background to automate the build and deployment of your application through a UI. I hope this serves as a good starting point for your requirements; and in case you want to skip these steps at the start, you can always go with Codeship or similar tools to handle the steps you've mentioned.

How to speed up CI build times when using docker?

I currently use Docker + Travis CI to test and deploy my app. This works great locally because I have data volumes for things like node_modules, and Docker's layer caching speeds up builds.
However, when I push the code to Travis it has to rebuild and install everything from scratch, and it takes forever! Travis doesn't support caching Docker layers at the moment. Is there some other way to speed up my builds, or another similar tool that allows Docker layer caching?
You might want to investigate how i3wm has solved a similar problem.
The main developer has written on the design behind his Travis CI workflow. Quoting the relevant part:
The basic idea is to build a Docker container based on Debian testing and then run all build/test commands inside that container. Our Dockerfile installs compilers, formatters and other development tools first, then installs all build dependencies for i3 based on the debian/control file, so that we don't need to duplicate build dependencies for Travis and for Debian.
This solves the immediate issue nicely, but comes at a significant cost: building a Docker container adds quite a bit of wall clock time to a Travis run, and we want to give our contributors quick feedback. The solution to long build times is caching: we can simply upload the Docker container to the Docker Hub and make subsequent builds use the cached version.
We decided to cache the container for a month, or until inputs to the build environment (currently the Dockerfile and debian/control) change. Technically, this is implemented by a little shell script called ha.sh (get it? hash!) which prints the SHA-256 hash of the input files. This hash, appended to the current month, is what we use as tag for the Docker container, e.g. 2016-03-3d453fe1.
See our .travis.yml for how to plug it all together.
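A hedged sketch of what that pull-or-build pattern can look like in a .travis.yml. The image name is a stand-in, the inline hash replaces i3's ha.sh for brevity, and pushing assumes a prior docker login:

```yaml
# .travis.yml (excerpt) - reuse a cached build image while its inputs are unchanged.
services:
  - docker

before_install:
  # Tag = current month + hash of the build inputs, mirroring the ha.sh idea.
  - export TAG="$(date +%Y-%m)-$(sha256sum Dockerfile | cut -c1-8)"
  # Pull the cached image if it exists; otherwise build and push it.
  - docker pull myorg/build-env:"$TAG" || (docker build -t myorg/build-env:"$TAG" . && docker push myorg/build-env:"$TAG")

script:
  - docker run -v "$PWD:/src" myorg/build-env:"$TAG" make -C /src test
```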
