I have a repository that serves as a driver for Kubernetes and Docker, built using Node.js.
I have created several test cases for this repo to work on Docker and Kubernetes, and they all pass locally.
The only thing I don't know how to do is set up Kubernetes in the Travis YAML file. Surprisingly, I can't find any decent examples anywhere. Below is the YAML file I have; the gaps I need help filling are marked between <>.
sudo: required
language: node_js
node_js: 6.9.5
services:
  - docker
  - <kubernetes>
branches:
  only:
    - staging
    - master
addons:
  hosts:
    - localhost
    - dev-controller
before_script:
  - npm install -g grunt-cli
  - docker pull soajsorg/soajs
  - <pull the soajsorg/soajs image and load it to kubernetes>
script:
  - grunt coverage
You could try installing Minikube (http://github.com/kubernetes/minikube) if you need Kubernetes running on Travis CI. Otherwise, I would suggest connecting to a self-managed Kubernetes cluster, or using Google's CLI/SDK to launch a small GKE cluster and using that for testing.
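For the Minikube route, a rough sketch of the missing pieces is below. Kubernetes is not an available Travis services: entry, so the cluster has to be installed in before_script instead. The download URLs and version numbers here are examples and should be verified against the Minikube release pages; with --vm-driver=none, the images you docker pull on the host are already visible to the cluster, so no separate load step should be needed:
env:
  - CHANGE_MINIKUBE_NONE_USER=true   # lets kubectl run without sudo after a root-started cluster
before_script:
  # Install kubectl and Minikube (versions and URLs are examples, check the release pages)
  - curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.9.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
  - curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.25.2/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
  # Start a single-node cluster directly on the Travis VM (no VM driver)
  - sudo minikube start --vm-driver=none --kubernetes-version=v1.9.0
  - minikube update-context
  # Wait until the node reports Ready before pulling images and running the tests
  - until kubectl get nodes | grep -q " Ready "; do sleep 1; done
  - docker pull soajsorg/soajs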
I'm setting up Bitbucket Pipelines to build a project using node 14.21.2, but it is being run with an image using node 4.2.1.
Here is the configuration I am using and the build output.
Pipelines config
definitions:
  services:
    npmbuild:
      image: node:14.21.2
      variables:
        NPM_TOKEN: $NPM_TOKEN
pipelines:
  default:
    - step:
        name: Build Production
        services:
          - npmbuild
        script:
          - node -v
Output
+ node -v
v4.2.1
N/A: version "v14.21.2" is not yet installed
How can I change this to use the correct version of node?
UPDATE: I have a workaround where I just specify the image at the root level, but my question still remains as to why setting the image at the step level didn't work as expected.
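For reference, the root-level workaround looks roughly like this (a minimal sketch: in Bitbucket Pipelines, services entries only define sidecar containers that run alongside the step, while the image key, at the root or on the step itself, selects the build container the script actually runs in):
image: node:14.21.2

pipelines:
  default:
    - step:
        name: Build Production
        script:
          - node -v   # now prints v14.21.2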
image: node:9.2.0
stages:
  - build
build:
  stage: build
  script:
    - set NODE_ENV=production
    - npm install
    - npm run transpile
    - ls
    - cd dist-server
    - ls
    - node /bin/www
    #- npm run prod
  artifacts:
    expire_in: 1 day
    paths:
      - dist/
Above is my YAML file for CI. Can anyone share how to deploy this to a Linux Azure Web App?
There is no out-of-the-box solution to deploy to Azure using GitLab.
What you can do in your GitLab pipeline is the following process:
Build the Docker container
Push the Docker container to the GitLab Container Registry (included in your GitLab repository)
Run a curl command to trigger the Azure App Service webhook to update
You can host this Docker container in Azure (after creating the App Service, you can find the webhook URL in the deployment settings).
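A rough .gitlab-ci.yml sketch of that process, assuming the webhook URL from the App Service's deployment settings is stored in a CI/CD variable named AZURE_WEBHOOK_URL (a made-up name), could look like this:
deploy-to-azure:
  stage: deploy   # remember to add deploy to your stages list
  image: docker:latest
  services:
    - docker:dind
  script:
    # Build the image and push it to the GitLab Container Registry
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
    # Trigger the Azure App Service webhook so it pulls the updated image
    - curl -X POST "$AZURE_WEBHOOK_URL"
  only:
    - master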
I have CI set up already, but now I want to deploy to my server. My server is the same machine where I run CI, but CI runs with the Docker executor, so I don't have access to the server's folders to update production.
Here is my script:
image: node:9.11.2
cache:
  paths:
    - node_modules/
before_script:
  - npm install
stages:
  - test
  - deploy
test:
  stage: test
  script:
    - npm run test
deploy:
  stage: deploy
  script:
    # here I want to go to /home/projectFolder and do git pull, npm i, npm start,
    # but I can't because I run CI in a Docker environment which has no access to my server's directories.
First of all, you should consider using GitLab Auto DevOps (or use it as a base to customize if you don't want to use Kubernetes).
You have multiple ways to do this, but the simplest should be to use an Alpine image and (see the sketch after this list):
- install SSH (if necessary)
- load your private SSH key (from the pipeline secrets)
- run your npm commands through SSH.
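A minimal sketch of that job, with made-up variable names SSH_PRIVATE_KEY, SERVER_USER and SERVER_HOST stored as CI/CD secrets:
deploy:
  stage: deploy
  image: alpine:latest
  before_script:
    - apk add --no-cache openssh-client
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
  script:
    # StrictHostKeyChecking is disabled only to keep the sketch short; add the host to known_hosts in a real pipeline.
    # npm start works for a quick test, but a process manager (e.g. pm2) is usually better for restarts.
    - ssh -o StrictHostKeyChecking=no "$SERVER_USER@$SERVER_HOST" "cd /home/projectFolder && git pull && npm install && npm start"
  only:
    - master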
The cleanest way would be (see the sketch after this list):
- adding a valid Dockerfile to your project
- adding Docker image generation for each commit on master (in your pipeline)
- adding a docker rm of the running container (in your pipeline)
- adding a docker run of the newly generated image (in your pipeline), sharing your Docker volume
- making nginx redirect to your container.
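A rough sketch of those steps as a deploy job, assuming a shell runner on the server and the placeholder names myapp / myapp-prod:
deploy:
  stage: deploy
  only:
    - master
  script:
    # Build a fresh image from the project's Dockerfile
    - docker build -t myapp:latest .
    # Remove the currently running container, if any (|| true so the first deploy doesn't fail)
    - docker rm -f myapp-prod || true
    # Run the new image; nginx on the host proxies to the published port
    - docker run -d --name myapp-prod -p 3000:3000 myapp:latest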
I can give more detailed advice depending on what you decide to do.
Hoping I helped.
My integration tests depend heavily on Elasticsearch, so to run them on Bitbucket Pipelines I would have to execute a docker run command to spin up my Elasticsearch instance during the tests.
But as some of you probably know, there's a limitation in Bitbucket Pipelines:
See the Docker command line reference for information on how to use
these commands. Other commands, such as docker run, are currently
forbidden for security reasons on our shared build infrastructure.
So given that, I don't know how I can spin up my ES cluster with all the configuration I need inside (Painless scripts, mappings, exposed ports) so it is available for my integration tests.
Does someone have any idea how I could achieve this?
OK, I managed to get it working. I was struggling to run Elasticsearch due to this error: https://github.com/docker-library/elasticsearch/issues/111
This was fixed by applying the config discovery.type: single-node. Since I'm using this for integration tests, I don't need to run ES in production mode. The thing is, Bitbucket Pipelines was not showing the error logs for this, so I was completely blind and had to try many things until I found out. Since I can't build and run my own image in Pipelines, I uploaded an image with my own configuration (including the single-node config) and scripts to Docker Hub.
This is what my YAML looked like in the end:
image: maven:3.3.9
pipelines:
  default:
    - step:
        caches:
          - maven
        script:
          - docker version
          - mvn clean package verify -Dmaven.docker.plugin.skip=true -s settings.xml
        services:
          - elasticsearch
definitions:
  services:
    elasticsearch:
      image: elastic-search-bitbucket-pipeline
options:
  docker: true
You can try defining your Elasticsearch image as a service as described here:
Use services and databases in Bitbucket Pipelines
For those still looking for a more elaborate solution, I have created a Dockerfile like this:
FROM elasticsearch:7.0.1
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
In the same folder I have also created a custom config elasticsearch.yml:
network.host: 127.0.0.1
I then pushed the custom image to Docker Hub; for more info on how to do that, look here: https://docs.docker.com/docker-hub/repos/
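For example, building and pushing the image could look like this (your-dockerhub-user is a placeholder):
# Build the image from the Dockerfile above, then push it to Docker Hub
docker build -t your-dockerhub-user/elasticsearch-bitbucket-pipeline .
docker login
docker push your-dockerhub-user/elasticsearch-bitbucket-pipeline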
You can now use the custom image in your Pipelines service configuration and use it to run your tests.
You could also supply some more configuration inside your elasticsearch.yml
Enable CORS:
http.cors.enabled: true
http.cors.allow-origin: "*"
Set discovery type:
discovery.type: single-node
You can use my docker image:
https://hub.docker.com/r/xiting/elasticsearch-bitbucket-pipeline
Add service to your pipeline as below:
definitions:
  steps:
    - step: &run-tests
        name: Run tests
        script:
          - sleep 30 # Wait for Elasticsearch to come up; in a real pipeline, replace this with a proper readiness check.
          - curl -XGET localhost:9250/_cat/health
        services:
          - elasticsearch
  services:
    elasticsearch:
      image: xiting/elasticsearch-bitbucket-pipeline
      variables:
        ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    docker:
      memory: 2048
pipelines:
  pull-requests:
    '**':
      - step: *run-tests
I want to execute my automated tests, written in Nightwatch-Cucumber, through Jenkins CI in a Docker container. I have a Docker image that I want to use for it.
This is what I want to do in more detail.
Start the tests via a Jenkins CI job
On the same machine, the Docker image is loaded and the related Docker container starts. This container is based on a Unix OS. Also, some configuration inside the Docker container is executed.
Tests are executed (locally or remotely) in headless mode via xvfb, and the report is saved on the Jenkins machine.
With GitLab CI I've implemented this in a .gitlab-ci.yml config file and it runs very well:
image: "my-docker-image"
stages:
- "chrome-tests"
before_script:
- "apt-get update"
- "apt-get install -y wget bzip2"
- "npm install"
cache:
paths:
- node_modules/
run-tests-on-chrome:
stage: "chrome-tests"
script:
- "whereis xvfb-run"
- "xvfb-run --server-args='-screen 0 1600x1200x24' npm run test-chrome"
But I want to implement the same procedure with Jenkins CI. What is the easiest way to do this and run my automated tests in a Docker image invoked by Jenkins? Should I write a Dockerfile or not?
I'm currently running Selenium test scripts written in PHP through Jenkins using Docker Compose. You can do the same without the hassle of dealing with Xvfb yourself.
To run your Selenium tests using headless browsers inside a docker container and linking it to your application with docker-compose, you can simply use the pre-defined standalone server.
https://github.com/SeleniumHQ/docker-selenium
I'm currently using the Chrome Standalone image.
Here's what your docker-compose.yml should look like:
version: '3'
services:
  your-app:
    build:
      context: .
      dockerfile: Dockerfile
  your_selenium_application:
    build:
      context: .
      dockerfile: Dockerfile.selenium.test
    depends_on:
      - chrome-server
      - your-app
  chrome-server:
    image: selenium/standalone-chrome:3.4.0-einsteinium
When running docker-compose, it will spin up your application, the Selenium environment that will interact with your app, and the standalone server that provides the headless browser. Because they are linked, inside your Selenium code you can make your test requests to the host via your-app:80, for example. Your headless browser will be at chrome-server:4444/wd/hub, which is the default address.
This can all be done inside Jenkins using only one command in the Execute Shell step of your Jenkins job. docker-compose also lets you easily run the tests on your local machine, and the results should be identical.
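For example, the Execute Shell step could be roughly this single command, which rebuilds the images, runs everything, and makes the Jenkins build fail when the test container fails:
docker-compose up --build --exit-code-from your_selenium_application
The --exit-code-from flag also stops the other containers once the test service exits, so the job doesn't hang on the standalone browser container.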
Check out the maintained Selenium Docker images, specifically the node flavors. It's a good place to start, whether you decide to use the containers as-is or roll your own.