I have a pipeline which starts a Maven/Java app. Now I want to add a test stage where I check whether the app starts successfully; for example, when the build stage finishes, I check with curl whether 127.0.0.1:8080 responds 200 OK, and fail otherwise.
How can I create a GitLab pipeline for this use case?
stages:
  - build
  - deploy
  - test

build:
  stage: build
  script:
    - echo Build Stage 1
  tags:
    - java-run

deploy:
  stage: deploy
  tags:
    - java-run
  script:
    - "some script"

test:
  stage: test
  tags:
    - java-run
  script:
I'm making some assumptions around your use-case here, so let me know if they aren't right. I'm assuming:
You're starting the Java app remotely (i.e., your pipeline is deploying it to a cloud provider or a non-CI/CD server)
Your server running CI/CD has access to the application via the internet
If so, assuming that you want your job to fail if the service is not accessible, you can simply curl the URL using the -f flag; curl will then fail if it receives an HTTP error status (>= 400). Example:
test:
  image: alpine:latest
  script:
    - apk add curl
    - curl -o /dev/null -s -w "%{http_code}\n" https://httpstat.us/404 -f
The above job will fail, as curl returns exit code 22 when it receives an error code >= 400 and the -f flag is used.
Now, if you're attempting to run the app in your CI/CD (which is why you're referring to 127.0.0.1 in your question), then you can't run the app locally in one job and test it in another. A job only exists and runs within the context of the container that's running it, and test runs in a separate container because it's a separate job. You have two options if you're attempting to run your app within the context of CI/CD and test it:
You can run your tests in the same job where you start the app (you may need to run the app using nohup to run it in the background); a sketch of this follows below.
You can package your app into a Docker container, then run it as a service in your test job.
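For the first option, here is a minimal sketch for a Java app; the jar path, the port, and the 30-second readiness loop are assumptions, not details from your question:

test:
  stage: test
  tags:
    - java-run
  script:
    - nohup java -jar target/app.jar &   # jar path is an assumption
    - |
      # poll until the app answers on 8080 (up to ~30 seconds), then pass or fail the job
      for i in $(seq 1 30); do
        curl -sf http://127.0.0.1:8080 && exit 0
        sleep 1
      done
      echo "app did not start in time"
      exit 1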
Related
I have a test command in my repo that should work when my server is up, because the tests interact with the server once it is running.
Locally I use two commands: in the first terminal, npm run dev gets the server running, and in the second terminal, npm run test runs the tests, which only pass while the first command is running. How do I achieve this in my GitLab CI/CD test stage job?
Currently I am doing this:
test_job:
  stage: test
  script:
    - npm run dev
    - npm run test
So the pipeline executes npm run dev, which doesn't self-terminate, and my pipeline gets stuck; I can't seem to find the solution. Help and suggestions are appreciated. The stack is TypeScript, Express, GraphQL.
You have two options that you can use for this:
If you're building your server into a container prior to this test job, you can run the container as a service, which will allow you to access it via its service alias. That would look something like this:
test_job:
  stage: test
  services:
    - name: my_test_container:latest
      alias: test_container
  script:
    - npm run test # should hit http://test_container:<port> to access service
You can use the nohup Linux command to run your service in the background. The process will keep running in the background once it's started, and it will die when the CI job ends (as part of shutting down the job). That would look like this:
test_job:
  stage: test
  script:
    - nohup npm run dev &
    - npm run test
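One caveat worth adding (this is an assumption about your app, not part of the original answer): npm run dev may need a few seconds before it accepts connections, so a short readiness wait between starting and testing can avoid flaky failures. Port 3000 is a guess here:

test_job:
  stage: test
  script:
    - nohup npm run dev &
    - |
      # poll until the dev server answers; port 3000 is an assumption
      for i in $(seq 1 30); do
        curl -sf http://localhost:3000 && break
        sleep 1
      done
    - npm run test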
I just finished the CI/CD build at GitLab; I'm using a Node.js image with Docker to build, and in the last deploy step the log shows that yarn dev is running fine, but GitLab CI has a 1-hour limit on a running pipeline.
What do I need to do to run the Express.js app and finish the pipeline execution without stopping the app?
I know that with Docker I can run it with the detached option, but is there any way to do this without building the app's Docker image?
The CI/CD log shows the app running. My configuration:
image: node:12.18.1

stages:
  - build
  - test
  - deploy

before_script:
  - yarn

build-min-code:
  stage: build
  script:
    - yarn

deploy-staging:
  stage: deploy
  script:
    - yarn dev
  only:
    - dev
It works fine like this, but after one hour the timeout ends the runner execution.
Bump up the timeout. The default is 60 minutes: https://docs.gitlab.com/ee/ci/pipelines/settings.html#timeout
Note: a maximum job timeout can also be set on the gitlab-runner: https://docs.gitlab.com/ee/ci/runners/README.html#set-maximum-job-timeout-for-a-runner
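Besides the project-level setting, GitLab also supports a per-job timeout keyword in .gitlab-ci.yml; a sketch (the 3-hour value is just an example, and it can't exceed the runner's maximum timeout):

deploy-staging:
  stage: deploy
  timeout: 3h   # per-job timeout; still capped by the runner's maximum timeout
  script:
    - yarn dev
  only:
    - dev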
Is a container that is used in the build stage accessible in the next stage? I have YAML like this:
build_backend:
  image: web-app
  services:
    - mysql:5.7
  stage: build
  script:
    - make build

test_frontend:
  image: node:8
  stage: test
  script:
    - make run-tests
My tests, which are triggered by make run-tests, need to make HTTP requests against the backend container. Is that possible?
I was trying to avoid building a new container and then pushing it to a registry only to pull it down again, but maybe there is no other way. If I did this, would my web-app container still have access to the mysql container if I added it as a service in the test_frontend job?
No, containers are not available between stages. Job artifacts (i.e. files) will be passed between stages by default and can also be passed explicitly between jobs.
If you need to run tests against a container, you should indeed pull it down again from a registry. Then, you can use the Docker-in-Docker (dind) service to run your tests.
I think this blog post explains a similar use case nicely. The testing job that's described there is the following:
test:
  stage: test
  script:
    # --name postgres added here so that --link=postgres:db below can resolve the container
    - docker run -d --name postgres --env-file=.postgres-env postgres:9.5
    - docker run --env-file=.environment --link=postgres:db $CONTAINER_TEST_IMAGE nosetests --with-coverage --cover-erase --cover-package=${CI_PROJECT_NAME} --cover-html
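For those docker commands to work inside the job, the runner needs access to a Docker daemon, typically via the docker:dind service. A minimal sketch, where the image tags and TLS variables are assumptions that depend on your runner configuration:

test:
  stage: test
  image: docker:19.03
  services:
    - docker:19.03-dind
  variables:
    DOCKER_HOST: tcp://docker:2375   # talk to the dind service
    DOCKER_TLS_CERTDIR: ""           # disable TLS for simplicity; adjust to your setup
  script:
    - docker run -d --name postgres --env-file=.postgres-env postgres:9.5
    - docker run --env-file=.environment --link=postgres:db $CONTAINER_TEST_IMAGE nosetests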
I'm trying to use GitLab CI to build, test, and deploy an Express app on a server (the runner is using the shell executor). However, the test:async and deploy_staging jobs do not terminate. When checking the terminal inside GitLab, the Express server does indeed start. What gives?
stages:
  - build
  - test
  - deploy

### Jobs ###

build:
  stage: build
  script:
    - npm install -q
    - npm run build
    - knex migrate:latest
    - knex seed:run
  artifacts:
    paths:
      - build/
      - node_modules/
  tags:
    - database
    - build

test:lint:
  stage: test
  script:
    - npm run lint
  tags:
    - lint

# Run the Express server
test:async:
  stage: test
  script:
    - npm start &
    - curl http://localhost:3000
  tags:
    - server

deploy_staging:
  stage: deploy
  script:
    - npm start
  environment:
    name: staging
    url: my_url_here
  tags:
    - deployment
npm start just runs node build/bundle.js. The build script uses Webpack.
Note: the solution works fine when using a GitLab runner with the shell executor.
Generally in GitLab CI we run ordered jobs with specific tasks that are executed one after another.
So for the build job we have the npm install -q command, which runs and terminates with an exit status (0 if the command was successful), then the next command npm run build runs, and so on until the job terminates.
For the test job we have the npm start & process that keeps running, so the job won't be able to terminate.
The problem is that sometimes we need some process to run in the background, or some process that keeps living between tasks. For example, in some kinds of tests we need to keep the server running, something like this:
test:
  stage: test
  script:
    - npm start
    - npm test
In this case npm test will never start, because npm start keeps running without terminating.
The solution is to use before_script, where we run a shell script that gets the npm start process running, and then after_script to kill that npm start process.
So in our .gitlab-ci.yml we write:
test:
  stage: test
  before_script:
    - ./serverstart.sh
  script:
    - npm test
  after_script:
    - kill -9 $(ps aux | grep '\snode\s' | awk '{print $2}')
and in serverstart.sh:
#!/bin/bash
# start the server and send the console and error logs to nodeserver.log
npm start > nodeserver.log 2>&1 &
# keep waiting until the server is started
# (in this case, wait for mongodb://localhost:27017/app-test to be logged)
while ! grep -q "mongodb://localhost:27017/app-test" nodeserver.log
do
  sleep .1
done
echo -e "server has started\n"
exit 0
Thanks to that, the serverstart.sh script terminates while keeping the npm start process alive, which lets us move on to the script step where we run npm test.
npm test terminates and passes to after_script, where we kill all Node.js processes.
You are starting a background job during your test phase which never terminates; therefore the job runs forever.
The idea of GitLab CI jobs is short-running tasks, like compiling, executing unit tests, or gathering information such as code coverage, which are executed in a predefined order. In your case, the order is build -> test -> deploy; since the test job doesn't finish, deploy isn't even executed.
Depending on your environment, you will have to create a different job for deploying your node app. For example, you can push the build output to a remote server using a tool like scp or upload it to AWS; after that, you reference the final URL in the url: field in your .gitlab-ci.yml.
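As an illustration only, a shell-executor deploy job along those lines might look like the following; the host, user, paths, and the pm2 process manager are all assumptions:

deploy_staging:
  stage: deploy
  script:
    - scp -r build/ deploy@staging.example.com:/var/www/app   # host and path are assumptions
    - ssh deploy@staging.example.com "pm2 restart app"        # pm2 is an assumed process manager
  environment:
    name: staging
    url: https://staging.example.com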
I have two different project repositories: my application repository, and an API repository. My application communicates with the API.
I want to set up some integration and E2E tests of my application. The application will need to use the latest version of the API project when running these tests.
The API project is already set up to deploy when triggered:
deploy_integration_tests:
  stage: deploy
  script:
    - echo "deploying..."
  environment:
    name: integration_testing
  only:
    - triggers
My application has an integration testing job set up like this:
integration_test:
  stage: integration_test
  script:
    - echo "Building and deploying API..."
    - curl.exe -X POST -F token=<token> -F ref=develop <url_for_api_trigger>
    - echo "Now running the integration test that depends on the API deployment..."
The problem I am having is that the trigger only queues the API pipeline (both projects are using the same runner) and continues before the API pipeline has actually run.
Is there a way to wait for the API pipeline to run before trying to run the integration test?
I can do something like this:
integration_test_dependency:
  stage: integration_test_dependency
  script:
    - echo "Building and deploying API..."
    - curl.exe -X POST -F token=<token> -F ref=develop <url_for_api_trigger>

integration_test:
  stage: integration_test
  script:
    - echo "Now running the integration test that depends on the API deployment..."
But that still doesn't guarantee that the API pipeline runs and finishes before moving on to the integration_test stage.
Is there a way to do this?
I've come across this limitation recently and have set up an image that can be re-used to make this a simple build step:
https://gitlab.com/finestructure/pipeline-trigger
So in your case, using my image, it would look like this:
integration_test:
  stage: integration_test
  image: registry.gitlab.com/finestructure/pipeline-trigger
  script:
    - echo "Now running the integration test that depends on the API deployment..."
    - trigger -a <api token> -p <token> <project id>
Just use the project id (instead of having to find the whole url) and create a personal access token, which you supply here (best do this via a secret).
The reason the latter is needed is for polling the pipeline status. You can trigger without it but getting the result needs API authorisation.
See the project description for more details and additional things pipeline-trigger can do.
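If you'd rather not pull in an extra image, the same trigger-and-poll flow can be sketched with plain curl against GitLab's pipelines API; the GitLab host, project id, tokens, and jq availability are all assumptions here:

integration_test:
  stage: integration_test
  script:
    - |
      # trigger the API pipeline and capture the new pipeline id from the JSON response
      PIPELINE_ID=$(curl -s -X POST -F token=<token> -F ref=develop <url_for_api_trigger> | jq -r '.id')
      # poll the pipelines API until the triggered pipeline finishes
      while true; do
        STATUS=$(curl -s --header "PRIVATE-TOKEN: <personal_access_token>" \
          "https://gitlab.example.com/api/v4/projects/<api_project_id>/pipelines/$PIPELINE_ID" | jq -r '.status')
        [ "$STATUS" = "success" ] && break
        { [ "$STATUS" = "failed" ] || [ "$STATUS" = "canceled" ]; } && exit 1
        sleep 10
      done
    - echo "Now running the integration test that depends on the API deployment..."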
In case anyone else is here looking for this with pipelines triggered from the CI YAML: you can use strategy: depend to make the job wait for the triggered pipeline to complete:
trigger_job:   # the job name here is illustrative
  trigger:
    project: group/triggered-repo
    strategy: depend
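With strategy: depend, the trigger job's own status mirrors the result of the downstream pipeline, so the upstream pipeline waits for it and fails if the triggered pipeline fails.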
Currently this isn't possible. There are some issues in GitLab about this:
https://gitlab.com/gitlab-org/gitlab-ce/issues/25457
https://gitlab.com/gitlab-org/gitlab-ce/issues/29347
https://gitlab.com/gitlab-org/gitlab-ce/issues/22972
etc.
Your best bet is to lend weight to some of these issues.
I was missing the exact same feature, so I wrote a Python 3 utility to do it.
See Python Commander on GitHub.