Pushing image to DockerHub using docker-maven-plugin from Bitbucket Pipelines

I'm trying to set up docker-maven-plugin by fabric8 so I can use it from Bitbucket Pipelines.
My pom.xml looks like this:
..
..
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <dockerHost>???????????</dockerHost>
    <verbose>true</verbose>
    <pushRegistry>true</pushRegistry>
    <authConfig>
      <username>username</username>
      <password>password</password>
    </authConfig>
    <images>
      <image>
        <registry>registry.hub.docker.com</registry>
        <name>${dockerhub.repository}</name>
        <build>
          <dockerFileDir>${project.basedir}</dockerFileDir>
          <tags>
            <tag>${docker.tag}</tag>
          </tags>
          <noCache>true</noCache>
        </build>
      </image>
    </images>
  </configuration>
  <executions>
  ...
</plugin>
This works perfectly when running locally.
The error I'm getting on Bitbucket Pipelines is:
[ERROR] DOCKER> Cannot create docker access object [Connect to localhost:2375 [localhost/127.0.0.1, localhost/0:0:0:0:0:0:0:1] failed: Connection refused (Connection refused)]
Yeah, this happens because I'm not sure what to put in the <dockerHost> tag. Any idea?
Is there anything else that needs to be done to make this work remotely?
Thank you!

I was facing the same issue, but with running integration tests that depend on Docker images. After a lot of searching I found the following.
There's an undocumented environment variable, $BITBUCKET_DOCKER_HOST_INTERNAL, which may be helpful in your case.
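As a rough sketch of that route (untested, since the variable is undocumented; the tcp:// scheme and port 2375 are assumptions), you could export DOCKER_HOST in the pipeline script, which fabric8's plugin falls back to when <dockerHost> is unset, or reference the variable as a Maven environment property:

# in bitbucket-pipelines.yml, before invoking Maven
- export DOCKER_HOST="tcp://${BITBUCKET_DOCKER_HOST_INTERNAL}:2375"

<!-- or directly in the pom -->
<dockerHost>tcp://${env.BITBUCKET_DOCKER_HOST_INTERNAL}:2375</dockerHost>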
I chose to add the needed Docker images as services in my Bitbucket pipeline instead of depending on fabric8, as follows:
image: maven:3.8.2-jdk-11

clone:
  depth: full # SonarCloud scanner needs the full history to assign issues properly

definitions:
  caches:
    sonar: ~/.sonar/cache # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &build-test-sonarcloud
        name: Build, test and analyze on SonarCloud
        caches:
          - maven
          - sonar
        script:
          - mvn -f server/pom.xml -B verify org.sonarsource.scanner.maven:sonar-maven-plugin:sonar -Dsonar.projectName=user-management
        services:
          - redis
          - rabbitmq
        artifacts:
          - target/**
  services:
    redis:
      image: redis
    rabbitmq:
      image: rabbitmq

pipelines: # More info here: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html
  branches:
    '**':
      - step: *build-test-sonarcloud
  pull-requests:
    '**':
      - step: *build-test-sonarcloud
as mentioned in their documentation here.
I hope this is helpful.

Related

How to put multiple images in image keyword in gitlab-ci

I have two jobs, build_binary and build deb, and I want to combine them. The issue is that they use different images: the former uses golang:latest and the latter uses ubuntu:20.04, as shown:
.gitlab-ci.yml
build_binary:
  stage: build
  image: golang:latest
  rules:
    - if: '$CI_COMMIT_TAG'
  # tags:
  #   - docker-executor
  artifacts:
    untracked: true
  script:
    - echo "Some script"

build deb:
  stage: deb-build
  rules:
    - if: '$CI_COMMIT_TAG'
  # tags:
  #   - docker-executor
  image: ubuntu:20.04
  dependencies:
    - build_binary
  script:
    - echo "Some script 2"
  artifacts:
    untracked: true
I have tried these two ways, but neither worked:
build_binary:
  stage: build
  image: golang:latest ubuntu:20.04

and

build_binary:
  stage: build
  image: [golang:latest, ubuntu:20.04]
Any pointers would be very helpful.
It's not about gitlab-ci: first you should understand what images and containers are in Docker.
You cannot magically get a mixed image that is ubuntu:20.04 + golang:latest. It's just impossible to do from the gitlab-ci file.
But you can create your own IMAGE.
You can take the Dockerfile for ubuntu:20.04 from Docker Hub: https://hub.docker.com/_/ubuntu
Then you can add commands to it to install Go inside this operating system.
After this you open the golang:latest Dockerfile and copy its installation process into the Ubuntu Dockerfile, with the required modifications.
Then you run docker build -t my-super-ubuntu-and-golang . (see the manual).
Then you check it with docker run and verify that it's a normal Ubuntu with Go installed.
If all succeeds, you can push it to your own account on Docker Hub and use it in gitlab-ci:
image: your-name/golang-ubuntu-20.04
...
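For illustration, here is a minimal Dockerfile sketch of such a mixed image; the Go version and download URL are assumptions for the example, not taken from either original image:

FROM ubuntu:20.04
# Base tools needed to fetch and unpack the Go toolchain
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates curl \
    && rm -rf /var/lib/apt/lists/*
# Install Go from the official tarball (pin whatever version you need)
RUN curl -fsSL https://go.dev/dl/go1.19.13.linux-amd64.tar.gz | tar -C /usr/local -xz
ENV PATH="/usr/local/go/bin:${PATH}"

Build it, sanity-check it, and push it as described above:

docker build -t my-super-ubuntu-and-golang .
docker run --rm my-super-ubuntu-and-golang go version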
The suggestion to use services is very incorrect: a service starts another image and connects it to yours over the network, so you can run postgres, rabbit, and other services and use them in your tests. For example:

image: alpine
services: [ rabbitmq ]

does not mean that rabbitmq will be started on alpine. Both will be started: the alpine image and the rabbitmq image with the local host name rabbitmq, and your alpine container can connect to tcp://rabbitmq:5672 and use it. It's a different approach.
P.S.
For example, you can look at https://hub.docker.com/r/partlab/ubuntu-golang.
I think it's not the image you really want, but you can see how to make a mixed ubuntu-golang image.
Using the image and services keywords in your .gitlab-ci.yml file, you can define a GitLab CI/CD job that uses multiple Docker images.
For example, you could use the following configuration to specify a job that uses both a Golang image and an Ubuntu image:
build:
  image: golang:latest
  services:
    - ubuntu:20.04
  before_script:
    - run some command using Ubuntu
  script:
    - go build
    - run some other command using Ubuntu

jHipster 7.9 switches to testcontainers by default - how to revert back to old testing mechanism?

The jHipster 7.9 release uses testcontainers by default for running Java integration tests. This is causing problems with my existing bitbucket-pipeline.
Is there a way to switch back to the previous test running approach used by jHipster for integration tests?
I did not find a straightforward way to switch back to the old handling of postgresql databases as part of the build integration tests.
Instead, I kept the newer Testcontainers approach for the PostgreSQL Spring @IntegrationTest tests. I needed to disable Ryuk, and I also needed to increase the Bitbucket RAM for the step running the integration tests with the Bitbucket 2x size feature.
bitbucket-pipelines.yml
image: .... jhipster 7.9.2 based image ...

definitions:
  services:
    docker:
      memory: 3000

pipelines:
  default:
    - step:
        name: Clean & Build
        ....
    - step:
        name: Test and Verify
        size: 2x
        caches: ....
        services:
          - docker
        script:
          - export TESTCONTAINERS_RYUK_DISABLED=true
          - ./mvnw -Pprod clean verify sonar:sonar
          ...
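Note that the TESTCONTAINERS_RYUK_DISABLED environment variable also has a properties-file equivalent, which avoids exporting it in every step. A sketch, assuming the standard ~/.testcontainers.properties lookup used by Testcontainers (check the docs for your version):

# ~/.testcontainers.properties
ryuk.disabled=true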

MobSF Analyzer failing to work on Gitlab-ci

I'm trying to set up MobSF SAST within Gitlab-ci and having a few issues.
I've followed the instructions within the GitLab docs and within the MobSF GitLab repo.
However, when I add the include to my .gitlab-ci.yml, I get a YAML error stating that it could not get access.
My .gitlab-ci.yml file looks like:
sast:
  stage: Security
  tags:
    - docker

include:
  - project: 'gitlab-org/security-products/analyzers/mobsf'
    ref: master
    file: '/template/mobsf.gitlab-ci.yml'
I have a Docker image on my machine with gitlab-runner as the image. Does anyone have any thoughts about how to implement this so that I can run automated MobSF SAST on both Android and iOS?
So after working through this, it seems that you must have the following included in your .gitlab-ci.yml file:
variables:
  # required for Mobile SAST
  SAST_EXPERIMENTAL_FEATURES: "true"

include:
  - template: Security/SAST.gitlab-ci.yml

sast:
  image: docker:19.03.8
  stage: Security
  variables:
    SEARCH_MAX_DEPTH: 4
  artifacts:
    reports:
      sast: gl-sast-report.json
  tags:
    - docker

Unable to connect to container in Gitlab CI in my free account

I have a free account on GitLab.
I also have a company account (not sure which plan).
I have the exact same project, a wrapper around EventStore, in both accounts.
In the CI pipeline I want to spin up a container with EventStore so that I can run some integration tests against it.
This is my .gitlab-ci.yml, which restores, compiles, runs tests, and publishes NuGet packages:
# Stages
stages:
  - ci
  - pack

# Global variables
variables:
  GITLAB_RUNNER_DOTNET_CORE: mcr.microsoft.com/dotnet/core/sdk:2.2
  EVENT_STORE: eventstore/eventstore:release-5.0.2
  NUGET_REPOSITORY: $NEXUS_NUGET_REPOSITORY
  NUGET_API_KEY: $NEXUS_API_KEY
  NUGET_FOLDER_NAME: nupkgs

# Docker image
image: $GITLAB_RUNNER_DOTNET_CORE

# Jobs
ci:
  stage: ci
  services:
    - $EVENT_STORE
  variables:
    # event store service params testing with standard ports
    EVENTSTORE_INT_TCP_PORT: "1113"
    EVENTSTORE_EXT_TCP_PORT: "1113"
    EVENTSTORE_INT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PORT: "2113"
    EVENTSTORE_EXT_HTTP_PREFIXES: "http://*:2113/"
  script:
    - dotnet restore --no-cache --force
    - dotnet build --configuration Release
    - dotnet vstest test/*Tests/bin/Release/**/*Tests.dll

pack-beta-nuget:
  stage: pack
  script:
    - export VERSION_SUFFIX=beta$CI_PIPELINE_ID
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME --version-suffix $VERSION_SUFFIX --include-source --include-symbols -p:SymbolPackageFormat=snupkg
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  except:
    - master

pack-nuget:
  stage: pack
  script:
    - dotnet restore
    - dotnet pack *.sln --configuration Release --output $NUGET_FOLDER_NAME
    - dotnet nuget push **/*.nupkg --api-key $NUGET_API_KEY --source $NUGET_REPOSITORY
  only:
    - master
As you can see, I spin up the event store container.
From my integration tests I try to connect to the container within the CI using the following connection string:
"ConnectTo=tcp://admin:changeit#127.0.0.1:1113; HeartBeatTimeout=500;";
With my work account it works fine, there is a container listening on 127.0.0.1 on port 1113 and I can connect to it using the above connection string.
With my free personal account it is unable to connect.
Why?
I suspect it has something to do with the way Docker is made available on the two GitLab CI runners, but why is it different?
And more importantly, how can I configure EventStore on my personal CI pipeline on my free account so that I can connect to it, if localhost is not a valid host URI for whatever reason?
Well, you have not provided any details, but it seems you're using the Docker executor. In that case, services are not available on localhost but are only accessible via service aliases.
This is an extract from the working CI file:
test:
  stage: test
  script:
    - dotnet test
  variables:
    ASPNETCORE_ENVIRONMENT: Testing
    EVENTSTORE_EXT_HTTP_PORT: 2113
    EVENTSTORE_EXT_TCP_PORT: 1113
    EVENTSTORE_RUN_PROJECTIONS: all
    EVENTSTORE_START_STANDARD_PROJECTIONS: "true"
    EventStore__ConnectionString: ConnectTo=tcp://admin:changeit@eventstore:1113
  services:
    - name: eventstore/eventstore:latest
      alias: eventstore
  only:
    refs:
      - branches
      - tags
For this to work, your appsettings.Testing.json file needs to point to ConnectTo=tcp://admin:changeit@eventstore:1113.
If you want to keep using the appsettings file with the configuration that points to localhost, you can override the setting using an env variable in the CI file. Just remember to add environment variables as a configuration source. The code snippet above has such an override, matching our settings structure:
{
  "EventStore": {
    "ConnectionString": "ConnectTo=whatever"
  }
}
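For reference, a minimal sketch of that configuration-source wiring in .NET (assuming Microsoft.Extensions.Configuration; a generic host usually registers these sources already). Environment variables are added after the JSON file, so the CI variable overrides the appsettings value:

// requires the Microsoft.Extensions.Configuration and .Json packages
using Microsoft.Extensions.Configuration;

// Later sources override earlier ones; double underscores map to ':' in keys,
// so EventStore__ConnectionString overrides EventStore:ConnectionString.
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.Testing.json", optional: true)
    .AddEnvironmentVariables()
    .Build();

var connectionString = config["EventStore:ConnectionString"];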
If you ever decide to use the Kubernetes executor, you will need to revert to using localhost, because the Kubernetes executor creates one pod per build with multiple containers, including all service containers. There's an open issue to support service aliases with Kubernetes runners; I think it will land around 12.9 or 13, pretty soon. That being said, using service aliases is a safe, future-proof way of making it all work.
P.S. I just noticed that your setup works with one account and doesn't work with the other. My guess would be that you either use different executors (Docker doesn't work and Kubernetes works) or different GitLab versions (if the service alias issue has already been fixed).

Possible solution for bitbucket pipeline docker-run limitation

My integration tests depend heavily on Elasticsearch, so to run them on Bitbucket Pipelines I would have to execute the docker run command to spin up my Elasticsearch instance during the integration tests.
But as some of you probably know, there's a limitation on Bitbucket Pipelines:
See the Docker command line reference for information on how to use
these commands. Other commands, such as docker run, are currently
forbidden for security reasons on our shared build infrastructure.
Given that, I don't know how I can spin up my ES cluster with all the configuration I need inside it (Painless scripts, mappings) and ports exposed, to be available for my integration tests.
Does anyone have an idea how I could achieve this?
OK, I managed to get it working. I was struggling to run Elasticsearch due to this error: https://github.com/docker-library/elasticsearch/issues/111
This was fixed by applying the config discovery.type: single-node. Since I'm using this for integration tests, I don't need to run ES in production mode. The thing is, bitbucket-pipeline was not showing error logs for this error, so I was completely blind and had to try many things until I found out. Since I can't build and run my own image on Pipelines, I uploaded an image with my own configuration (including the single-node config) and scripts to Docker Hub.
This is how my yaml looked in the end:
image: maven:3.3.9

pipelines:
  default:
    - step:
        caches:
          - maven
        script:
          - docker version
          - mvn clean package verify -Dmaven.docker.plugin.skip=true -s settings.xml
        services:
          - elasticsearch

definitions:
  services:
    elasticsearch:
      image: elastic-search-bitbucket-pipeline

options:
  docker: true
You can try to define your Elasticsearch image as a service, as described here:
Use services and databases in Bitbucket Pipelines
For those still looking for a more elaborate solution, I have created a Dockerfile like this:
FROM elasticsearch:7.0.1
COPY --chown=elasticsearch:elasticsearch elasticsearch.yml /usr/share/elasticsearch/config/
In the same folder I have also created a custom config elasticsearch.yml:
network.host: 127.0.0.1
I then pushed the custom image to Docker Hub; for more info on how to do that, look here: https://docs.docker.com/docker-hub/repos/
You can now use the custom image in your Pipelines service configuration and use it to run your tests.
You could also supply some more configuration inside your elasticsearch.yml:
Enable CORS:
http.cors.enabled: true
http.cors.allow-origin: "*"
Set discovery type:
discovery.type: single-node
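Putting those together, the full elasticsearch.yml baked into the image could look like this (only the settings already shown above; the CORS lines are optional):

network.host: 127.0.0.1
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.type: single-node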
You can use my docker image:
https://hub.docker.com/r/xiting/elasticsearch-bitbucket-pipeline
Add the service to your pipeline as below:
definitions:
  steps:
    - step: &run-tests
        name: Run tests
        script:
          - sleep 30 # Waiting for Elasticsearch. Don't rely on a fixed sleep in your real pipeline.
          - curl -XGET localhost:9250/_cat/health
        services:
          - elasticsearch
  services:
    elasticsearch:
      image: xiting/elasticsearch-bitbucket-pipeline
      variables:
        ES_JAVA_OPTS: '-Xms512m -Xmx512m'
    docker:
      memory: 2048

pipelines:
  pull-requests:
    '**':
      - step: *run-tests
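Instead of the fixed sleep above, one option is to let curl poll the service until it answers. A sketch using curl's retry flags (same localhost:9250 endpoint as above; --retry-connrefused requires curl 7.52+):

- curl -sS --retry 10 --retry-delay 5 --retry-connrefused localhost:9250/_cat/health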
