Calls to external APIs randomly slow after moving integration tests to a Docker container - Azure

I'm currently trying to learn Docker, and as an exercise I moved all the integration tests that we run in Azure DevOps into a Docker container. It doesn't do anything fancy; it simply contains the DLLs for my tests. Here is my Dockerfile:
FROM mcr.microsoft.com/dotnet/sdk:3.1.408-focal
WORKDIR /app
COPY ./Automation .
# run tests on docker run
ENTRYPOINT ["dotnet", "test", "Automation.dll", "--logger", "trx", "--results-directory", "/var/temp", "--filter"]
And then I run it in Azure DevOps with a task like this:
steps:
- task: Docker@2
  displayName: 'Run tests'
  inputs:
    containerRegistry: nameOfRegistry
    repository: 'nameOfRepo'
    command: run
    arguments: '--name automation --env TZ=Europe/Stockholm repoUrl:latest "TestCategory=Tickets" --verbosity n'
  continueOnError: true
The tests call REST APIs that we have deployed in Azure, and they work fine most of the time. But there are random spikes where calls that normally take 100-300 ms take 5+ seconds. The problem is that I don't see these issues if I run the tests normally, outside Docker, with a simple VSTest@2 task (on the same self-hosted agent).
So a test run that takes 20 min with VSTest@2 takes 30+ min inside Docker with Docker@2.
So my question is: what could cause this? It would be easier to debug if all external API calls were slow, but it seems to happen fairly randomly, yet often enough to make the test run considerably longer.

I found the problem: the performance issue was caused by RestSharp. I switched to HttpClient and now it runs a lot faster inside Docker.
So if others see a similar issue, it's worth checking which HTTP library you use.

Related

Use cache with multiple images in GitLab CI/CD

I am building a GitLab CI/CD pipeline that uses two different images.
One of them requires installing some packages with npm. To avoid installing them multiple times, I've added a cache.
Consider this example:
stages:
  - build
  - quality

cache:
  paths:
    - node_modules/

build-one:
  image: node:latest
  stage: build
  script:
    - npm install <some package>

build-two:
  image: foo_image:latest
  stage: build
  script:
    - some cmd

quality:
  image: node:latest
  stage: quality
  script:
    - <some cmd using the previously installed package>
Having two different Docker images forces me to specify the image inside each job definition. From my tests the cache isn't actually used, and the command executed by the quality job fails because the package isn't installed.
Is there a solution to this problem?
Many thanks!
Kev'.
There can be two cases:
1. The same runner runs all the jobs. In this case the cache, as you've specified it, should work fine.
2. Different runners run different jobs. Suppose the build job runs on runner 1 and the quality job runs on runner 2; then the cache is only present on runner 1.
To make caching work in case 2 you will have to use distributed caching.
Then, when runner 1 runs the build job it pushes the cache to S3, and runner 2 pulls that cache during the quality job and can use it.
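For illustration, here is a minimal sketch (not from the original answer) assuming distributed caching, e.g. an S3 bucket, is already configured on the runners. It gives both node jobs the same cache key and explicit policies so the push/pull flow described above is visible in the pipeline itself; the key name is an arbitrary placeholder:
# Sketch only: both jobs share one cache key; distributed caching must be
# enabled on the runners for this to work across machines.
build-one:
  image: node:latest
  stage: build
  cache:
    key: node-deps          # arbitrary shared key
    paths:
      - node_modules/
    policy: pull-push       # download existing cache, upload it again after npm install
  script:
    - npm install <some package>

quality:
  image: node:latest
  stage: quality
  cache:
    key: node-deps          # same key, so this job gets the same archive
    paths:
      - node_modules/
    policy: pull            # only download, never upload
  script:
    - <some cmd using the previously installed package>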

Why do we use images in a CI/CD pipeline?

Actually I'm facing a problem in this code:
stages:
  - Changes
  - Lint
  - Build
  - Tests
  - E2E
  - SAST
  - DAST
  - Publish
  - Deployment

# Get Runner Image
image: Node:latest

# Set Variables for mysql
variables:
  MYSQL_ROOT_PASSWORD: secret
  MYSQL_PASSWORD:
  ..
  ..

script:
  - ./addons/scripts/ci/lintphp.sh
Why do we use an image? When I asked, one person said that we build on top of it, like the Dockerfile command FROM ubuntu:latest, and another told me it's because it executes the code. I also don't really understand the script tag above: does it execute inside the image or on the runner?
GitLab Runner is an open source application that picks up the pipeline job payload and executes it. It implements a number of executors that can run your builds in different scenarios; if you are using the Docker executor, you need to specify which image your builds will run in.
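As a minimal illustration of that answer (the job name and script path are just placeholders taken from the question above): the image keyword tells the Docker executor which container to start, and everything under script runs inside that container:
lint:
  image: node:latest                      # the Docker executor starts a container from this image
  script:
    - node --version                      # runs inside the node:latest container, not on the runner host
    - ./addons/scripts/ci/lintphp.sh      # so any tools the script needs must exist in that image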

Unable to connect to container in Gitlab CI in my free account

I have a free account on GitLab.
I also have a company account (not sure which plan).
I have the exact same project, a wrapper around EventStore.
In the CI pipeline I want to spin up a container with EventStore so that I can run some integration tests against it.
This is my .gitlab-ci.yml, which restores, compiles, runs tests and publishes NuGet packages:
#Stages
stages:
  - ci
  - pack

#Global variables
variables:
  GITLAB_RUNNER_DOTNET_CORE: mcr.microsoft.com/dotnet/core/sdk:2.2
  EVENT_STORE: eventstore/eventstore:release-5.0.2
  NUGET_REPOSITORY: $NEXUS_NUGET_REPOSITORY
  NUGET_API_KEY: $NEXUS_API_KEY
  NUGET_FOLDER_NAME: nupkgs

#Docker image
image: $GITLAB_RUNNER_DOTNET_CORE

#Jobs
ci:
  stage: ci
  services:
    - $EVENT_STORE
  variables:
    # event store service params testing with standard ports
    EVENTSTORE_INT_TCP_PORT: "1113"
    EVENTSTORE_EXT_TCP_PORT: "1113"
    EVENTSTORE_INT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PREFIXES: "http://*:2113/"
  script:
    - dotnet restore --no-cache --force
    - dotnet build --configuration Release
    - dotnet vstest test/*Tests/bin/Release/**/*Tests.dll

pack-beta-nuget:
  stage: pack
  script:
    - export VERSION_SUFFIX=beta$CI_PIPELINE_ID
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME --version-suffix $VERSION_SUFFIX --include-source --include-symbols -p:SymbolPackageFormat=snupkg
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  except:
    - master

pack-nuget:
  stage: pack
  script:
    - dotnet restore
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  only:
    - master
As you can see, I spin up the EventStore container.
From my integration tests I try to connect to the container within the CI using the following connection string:
"ConnectTo=tcp://admin:changeit@127.0.0.1:1113; HeartBeatTimeout=500;";
With my work account it works fine: there is a container listening on 127.0.0.1 on port 1113 and I can connect to it using the above connection string.
With my free personal account it is unable to connect.
Why?
I suspect it has something to do with the way Docker is made available on the two GitLab CI runners, but why is it different?
And more importantly, how can I configure EventStore in the CI pipeline of my free personal account so that I can connect to it, if localhost is not a valid host URI for whatever reason?
Well, you have not provided many details, but it seems you're using the Docker executor. In that case, services are not available on localhost; they are only accessible via their service aliases.
This is an extract from the working CI file:
test:
  stage: test
  script:
    - dotnet test
  variables:
    ASPNETCORE_ENVIRONMENT: Testing
    EVENTSTORE_EXT_HTTP_PORT: 2113
    EVENTSTORE_EXT_TCP_PORT: 1113
    EVENTSTORE_RUN_PROJECTIONS: all
    EVENTSTORE_START_STANDARD_PROJECTIONS: "true"
    EventStore__ConnectionString: ConnectTo=tcp://admin:changeit@eventstore:1113
  services:
    - name: eventstore/eventstore:latest
      alias: eventstore
  only:
    refs:
      - branches
      - tags
For this to work, your appsettings.Testing.json file needs to point to ConnectTo=tcp://admin:changeit@eventstore:1113.
If you want to keep using an appsettings file whose configuration points to localhost, you can override the setting with an environment variable in the CI file. Just remember to add environment variables as a configuration source. The snippet above has such an override, matching our settings structure:
{
  "EventStore": {
    "ConnectionString": "ConnectTo=whatever"
  }
}
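As a reminder of how that override maps (the variable is the one from the job above): in the CI variables block a double underscore stands for the ':' section separator, so this single variable replaces EventStore:ConnectionString from appsettings at runtime, provided environment variables are registered as a configuration source:
variables:
  # Overrides the nested "EventStore" -> "ConnectionString" value shown in the JSON above
  EventStore__ConnectionString: ConnectTo=tcp://admin:changeit@eventstore:1113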
If you ever decide to use the Kubernetes executor, you will need to revert to using localhost, because the Kubernetes executor creates one pod per build with multiple containers, including all service containers. There's an open issue to support service aliases with Kubernetes runners; I think it will land in something like 12.9 or 13, pretty soon. That being said, using service aliases is a safe, future-proof way of making it all work.
P.S. I just noticed that your setup works with one account and doesn't work with the other. My guess would be that you either use different executors (Docker doesn't work while Kubernetes does) or different GitLab versions (if the service alias issue has already been fixed).

Are containers available between stages in Gitlab CI

Is a container that is used in the build stage accessible in the next stage? I have yaml like this:
build_backend:
  image: web-app
  services:
    - mysql:5.7
  stage: build
  script:
    - make build

test_frontend:
  image: node:8
  stage: test
  script:
    - make run-tests
My tests, which are triggered by make run-tests, need to make HTTP requests against the backend container, if that is possible.
I was trying to avoid building a new container and then pushing it to a registry only to pull it down again, but maybe there is no other way? If I did go that route, would my web-app container still have access to the mysql container if I added it as a service in the test_frontend job?
No, containers are not available between stages. Job artifacts (i.e. files) are passed between stages by default and can also be passed explicitly between jobs.
If you need to run tests against a container, you should indeed pull it down again from a registry. Then you can use the Docker-in-Docker (dind) service to run your tests (a sketch of the dind setup follows the snippet below).
I think this blog post explains a similar use case nicely. The testing job described there is the following:
test:
  stage: test
  script:
    - docker run -d --env-file=.postgres-env postgres:9.5
    - docker run --env-file=.environment --link=postgres:db $CONTAINER_TEST_IMAGE nosetests --with-coverage --cover-erase --cover-package=${CI_PROJECT_NAME} --cover-html
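For those docker commands to work on a Docker executor, the job typically also needs the dind service. A sketch of that variant of the job above follows; the TLS variable is an assumption that depends on your runner and docker:dind version:
test:
  stage: test
  image: docker:latest          # gives the job a docker CLI
  services:
    - docker:dind               # provides the docker daemon the CLI talks to
  variables:
    DOCKER_TLS_CERTDIR: ""      # may be needed on newer docker:dind images; adjust for your setup
  script:
    - docker run -d --env-file=.postgres-env postgres:9.5
    - docker run --env-file=.environment --link=postgres:db $CONTAINER_TEST_IMAGE nosetests --with-coverage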

Possible solution for bitbucket pipeline docker-run limitation

My integration tests depend heavily on Elasticsearch, so to run them in a Bitbucket Pipeline I would have to execute the docker run command to spin up an Elasticsearch instance during the tests.
But as some of you probably know, there's a limitation in Bitbucket Pipelines:
See the Docker command line reference for information on how to use
these commands. Other commands, such as docker run, are currently
forbidden for security reasons on our shared build infrastructure.
Given that, I don't know how I can spin up my ES cluster with all the configuration I need inside it (painless scripts, mappings, exposed ports) so that it's available for my integration tests.
Does anyone have an idea how I could achieve this?
OK, I managed to get it working. I was struggling to run Elasticsearch due to this error: https://github.com/docker-library/elasticsearch/issues/111
It was fixed by applying the config discovery.type: single-node. Since I'm using this for integration tests I don't need to run ES in production mode. The thing is, Bitbucket Pipelines was not showing the error logs for this, so I was completely blind and had to try many things until I found it. Since I can't build and run my own image in Pipelines, I uploaded an image with my own configuration (including the single-node config) and scripts to Docker Hub.
This is how my yaml looked in the end:
image: maven:3.3.9

pipelines:
  default:
    - step:
        caches:
          - maven
        script:
          - docker version
          - mvn clean package verify -Dmaven.docker.plugin.skip=true -s settings.xml
        services:
          - elasticsearch

definitions:
  services:
    elasticsearch:
      image: elastic-search-bitbucket-pipeline

options:
  docker: true
You can try to define your elastic-search image as a service as described there:
Use services and databases in Bitbucket Pipelines
For those still looking for a more elaborate solution, I have created a Dockerfile like this:
FROM elasticsearch:7.0.1
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
In the same folder I have also created a custom config elasticsearch.yml:
network.host: 127.0.0.1
I then added the custom image to Docker Hub, for more info how to do that, look here: https://docs.docker.com/docker-hub/repos/
You can now use the custom image in your Pipelines service configuration and use it to run your tests.
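For example, the custom image can then be referenced in a service definition roughly like this (the image name is a placeholder for the image pushed above, and the HTTP port depends on how your elasticsearch.yml is configured):
definitions:
  services:
    elasticsearch:
      image: your-dockerhub-user/custom-elasticsearch   # placeholder for your pushed image

pipelines:
  default:
    - step:
        script:
          - curl -XGET localhost:9200/_cat/health       # service ports are reachable on localhost
        services:
          - elasticsearch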
You could also supply some more configuration inside your elasticsearch.yml
Enable CORS:
http.cors.enabled: true
http.cors.allow-origin: "*"
Set discovery type:
discovery.type: single-node
You can use my docker image:
https://hub.docker.com/r/xiting/elasticsearch-bitbucket-pipeline
Add service to your pipeline as below:
definitions:
  steps:
    - step: &run-tests
        name: Run tests
        script:
          - sleep 30 # Waiting for elasticsearch. Don't rely on a fixed sleep in a real pipeline (see the sketch below).
          - curl -XGET localhost:9250/_cat/health
        services:
          - elasticsearch
  services:
    elasticsearch:
      image: xiting/elasticsearch-bitbucket-pipeline
      variables:
        ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    docker:
      memory: 2048

pipelines:
  pull-requests:
    '**':
      - step: *run-tests
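If you'd rather not rely on a fixed sleep, one option (a sketch, not from the original answer, reusing the port from the step above) is to poll the service until it responds before running the real tests:
- step: &run-tests
    name: Run tests
    script:
      # Poll elasticsearch for up to ~60 seconds instead of sleeping a fixed 30 seconds.
      - |
        for i in $(seq 1 30); do
          curl -sf localhost:9250/_cat/health && break
          sleep 2
        done
      - curl -XGET localhost:9250/_cat/health
    services:
      - elasticsearch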
