AWS CLI cannot connect to dynamodb-local in Bitbucket Pipelines

I am using Bitbucket Pipelines to set up a Docker container with dynamodb-local as the image. When I try to configure the dynamodb-local container (creating tables, listing tables, etc.) via the AWS CLI, I get a read timeout error.
Here is a reproducible Bitbucket pipeline:
image: atlassian/pipelines-awscli:latest
definitions:
  scripts:
    - script: &Config
        docker run --name "check" -p8000:8000 -d --rm -v "${BITBUCKET_CLONE_DIR}/docker/dynamodb:/home/dynamodblocal/data" -w "/home/dynamodblocal" amazon/dynamodb-local:latest -jar DynamoDBLocal.jar -sharedDb -dbPath "./data";
        docker ps;
        AWS_ACCESS_KEY_ID=ABCD AWS_SECRET_ACCESS_KEY=EF1234 aws --region us-west-1 dynamodb describe-table --table-name Resources --endpoint-url http://localhost:8000;
pipelines:
  branches:
    "**":
      - step:
          name: Test
          script:
            - *Config
          services:
            - docker
          size: 2x
My intended steps are as follows:
Create a container for the dynamodb-local image.
Describe a specific table.
I'm expecting the second step to error, since I will create the table if the one I'm looking for does not exist.
However, the process hangs and a read timeout occurs because the aws command is unable to communicate with the container.
Am I missing something obvious here? This pattern works fine locally on my machine and only hangs in the Bitbucket pipeline.
By the way, atlassian/pipelines-awscli:latest uses AWS CLI version 1.
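One thing worth ruling out before digging into networking is startup timing: the aws call runs immediately after docker run -d, before DynamoDB Local has finished booting. A minimal retry sketch, reusing the dummy credentials and endpoint from the pipeline above (the --cli-connect-timeout/--cli-read-timeout flags exist in recent AWS CLI v1 releases; drop them if the pinned image is older):

# Wait for dynamodb-local to answer before running describe-table
for i in $(seq 1 10); do
  if AWS_ACCESS_KEY_ID=ABCD AWS_SECRET_ACCESS_KEY=EF1234 \
     aws --region us-west-1 dynamodb list-tables \
     --endpoint-url http://localhost:8000 \
     --cli-connect-timeout 5 --cli-read-timeout 5; then
    echo "dynamodb-local is up"; break
  fi
  echo "still waiting for dynamodb-local (${i}/10)"; sleep 3
done

If the loop never succeeds, the problem is reachability rather than timing, and docker logs check from the same step should show whether DynamoDB Local actually started.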

Related

GitLab Container to GKE (Kubernetes) deployment

Hello, I have a problem with GitLab CI/CD. I'm trying to deploy a container to Kubernetes on GKE, but I'm getting this error:
This job failed because the necessary resources were not successfully created.
I created a service account with kube-admin rights and created the cluster via the GitLab GUI, so it's fully integrated. But when I run the job, it still doesn't work.
By the way, I use kubectl get pods in the gitlab-ci file just to test whether Kubernetes is responding.
stages:
  - build
  - deploy

docker-build:
  # Use the official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  # Default branch leaves tag empty (= latest tag)
  # All other branches are tagged with the escaped branch name (commit ref slug)
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE${tag}" .
    - docker push "$CI_REGISTRY_IMAGE${tag}"

deploy-prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl get pods
  environment:
    name: production
    kubernetes:
      namespace: test1
Any ideas?
Thank you.
The namespace should be removed; GitLab creates its own namespace for every project.
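A minimal sketch of the deploy job with the namespace removed (and, since nothing else was set under it, the kubernetes: block as well); everything else is unchanged from the job above:

deploy-prod:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl get pods
  environment:
    name: production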

Connect Docker containers to same network as current Container in Azure Pipelines

I am running an Azure Container Job, where I spin up a different Docker container manually like this:
jobs:
  - job: RunIntegrationTests
    pool:
      vmImage: "ubuntu-18.04"
    container:
      image: mynamespace/frontend_image:latest
      endpoint: My Docker Hub Endpoint
    steps:
      - script: |
          docker run --rm --name backend_container -p 8000:8000 -d backend_image inv server
I have to create the container manually since the image lives in AWS ECR, and the password authentication scheme that Azure provides for it can only be used with a token that expires, so it seems useless. How can I make it so that backend_container is reachable from within subsequent steps of my job? I have tried starting my job with:
options: --network mynetwork
and sharing it with "backend_container", but while starting the "frontend" container I get the error:
docker: Error response from daemon: Container cannot be connected to network endpoints: mynetwork
which might be because Azure is trying to start a container on multiple networks.
To run a container job and attach a custom image to the created network, you can use a step as shown in the example below:
steps:
  - task: DownloadPipelineArtifact@2
    inputs:
      artifactName: my-image.img
      targetPath: images
    target: host # Important, to run this on the host and not in the container
  - bash: |
      docker load -i images/my-image.img
      docker run --rm --name my-container -p 8042:8042 my-image
      # This is not really robust, as we rely on naming conventions in Azure Pipelines
      # But I assume they won't change to a really random name anyway.
      network=$(docker network list --filter name=vsts_network -q)
      docker network connect $network my-container
      docker network inspect $network
    target: host
Note: it's important that these steps run on the host and not in the container (the one run for the container job). This is done by specifying target: host for the task.
In the example, the container started from the custom image can then be addressed as my-container.
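For example, a later step of the job (running inside the job container, i.e. without target: host) could reach it by name, assuming the service listens on 8042 and curl is available in the job container:

- bash: |
    curl -f http://my-container:8042/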
I ended up not using the container: property at all, and started all containers manually so that I could specify the same network:
steps:
  - task: DockerInstaller@0
    displayName: Docker Installer
    inputs:
      dockerVersion: 19.03.8
      releaseType: stable
  - task: Docker@2
    displayName: Login to Docker hub
    inputs:
      command: login
      containerRegistry: My Docker Hub
  - script: |
      docker network create integration_tests_network
      docker run --rm --name backend --network integration_tests_network -p 8000:8000 -d backend-image inv server
      docker run --rm --name frontend -d --network integration_tests_network frontend-image tail -f /dev/null
Subsequent commands are then run on the frontend container with docker exec.
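For example (the test command here is hypothetical; the real invocation depends on what the frontend image ships):

- script: |
    docker exec frontend npm run integration-tests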

How do I get an updated copy of the environment variables in between bitbucket pipeline steps?

I have a script that updates keys for docker login.
If I do NOT run the key-update script in a step, the docker login works perfectly.
If I run the key-update script on my local machine, the script works, and the docker login works.
If I run the key update in the pipeline build, the script works, but the docker login does NOT work (because the environment variables are not being updated).
The key-update script needs to run before my docker login.
How do I get an updated copy of the environment variables in between steps?
bitbucket-pipelines.yml
image: node:8.2.1

pipelines:
  default:
    - step:
        name: Update Docker Password for Login
        script:
          - npm install aws-sdk request-promise base-64
          - node build-tools/update-bb-aws-docker-login.js
    - step:
        name: Push Server to AWS Repository
        script:
          - docker login -u AWS -p $AWS_DOCKER_LOGIN https://$AWS_DOCKER_URL
          - docker build -t dev .
          - docker tag dev:latest $AWS_DOCKER_URL/dev:latest
          - docker push $AWS_DOCKER_URL/dev:latest

options:
  docker: true
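Since the later step evidently does not see the value the script just changed, one workaround is to pass the fresh value forward yourself instead of relying on the injected variable, for example via an artifact file. A hedged sketch (the redirect assumes the Node script can be made to print the new password; the file name is illustrative):

image: node:8.2.1

pipelines:
  default:
    - step:
        name: Update Docker Password for Login
        script:
          - npm install aws-sdk request-promise base-64
          # assumption: the script prints the freshly generated password to stdout
          - node build-tools/update-bb-aws-docker-login.js > docker-login.txt
        artifacts:
          - docker-login.txt
    - step:
        name: Push Server to AWS Repository
        script:
          # read the fresh value instead of the stale injected variable
          - export AWS_DOCKER_LOGIN=$(cat docker-login.txt)
          - docker login -u AWS -p $AWS_DOCKER_LOGIN https://$AWS_DOCKER_URL

options:
  docker: true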

Build and push Docker images with GitLab CI

I would like to build and push Docker images to my local Nexus repo with GitLab CI.
This is my current CI file:
image: docker:latest

services:
  - docker:dind

before_script:
  - docker info
  - docker login -u some_user -p nexus-rfit some_host

stages:
  - build

build-deploy-ubuntu-image:
  stage: build
  script:
    - docker build -t some_host/dev-image:ubuntu ./ubuntu/
    - docker push some_host/dev-image:ubuntu
  only:
    - master
  when: manual
I also have a job for an Alpine Docker image, but when I try to run either of them it fails with the following error:
Checking out 13102ac4 as master...
Skipping Git submodules setup
$ docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: exit code 1
So technically the docker daemon in the image isn't running, but I have no idea why.
The GitLab folks have a reference in their docs about using docker build inside Docker-based jobs: https://docs.gitlab.com/ce/ci/docker/using_docker_build.html#use-docker-in-docker-executor. Since you seem to have everything in place (i.e. the right image for the job and the additional docker:dind service), it's most likely a runner configuration issue.
If you look at the second step in the docs:
Register GitLab Runner from the command line to use docker and privileged mode:
[...]
Notice that it's using the privileged mode to start the build and service containers. If you want to use docker-in-docker mode, you always have to use privileged = true in your Docker containers.
You're probably using a runner that was not configured in privileged mode and hence can't properly run the Docker daemon inside. You can directly edit /etc/gitlab-runner/config.toml on your registered runner to add that option.
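A trimmed sketch of the relevant part of config.toml (runner name and default image are placeholders; the line that matters is privileged = true under [runners.docker]):

[[runners]]
  name = "my-docker-runner"
  executor = "docker"
  [runners.docker]
    image = "docker:latest"
    privileged = true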
(Also, read that section of the docs for more info on the performance implications of the storage driver you choose/your runner supports when using dind.)

Ansible and docker: locally build image get pulled and causes failure

I'm using Ansible to provision my server with everything required to make my website work. The goal is to install a base system and provide it with Docker containers running apps (at the moment it's just one app).
The problem I'm facing is that my Docker image isn't hosted on Docker Hub or anywhere else. Instead it's built by an Ansible task. However, when I try to run the built image, Ansible tries to pull it (which isn't possible) and then dies.
This is what the playbook section looks like:
- name: check or build image
  docker_image:
    path=/srv/svenv.nl-docker
    name='svenv/svenv.nl'
    state=build

- name: start svenv/svenv.nl container
  docker:
    name: svenv.nl
    volumes:
      - /srv/svenv.nl-docker/data/var/lib/mysql/:/var/lib/mysql/
      - /srv/svenv.nl-docker/data/svenv.nl/svenv/media:/svenv.nl/svenv/media
    ports:
      - 80:80
      - 3306:3306
    image: svenv/svenv.nl
When I run this, the failure indicates that svenv/svenv.nl is being pulled from the repository; it isn't there, so it crashes:
failed: [vps02.svenv.nl] => {"changes": ["{\"status\":\"Pulling repository svenv/svenv.nl\"}\r\n", "{\"errorDetail\":{\"message\":\"Error: image svenv/svenv.nl:latest not found\"},\"error\":\"Error: image svenv/svenv.nl:latest not found\"}\r\n"], "failed": true, "status": ""}
msg: Unrecognized status from pull.
FATAL: all hosts have already failed -- aborting
My question is: how can I build a local Docker image and then start it as a container without pulling it?
You are hitting this error:
https://github.com/ansible/ansible-modules-core/issues/1707
Ansible is attempting to create a container, but the create is failing with:
docker.errors.InvalidVersion: mem_limit has been moved to host_config in API version 1.19
Unfortunately, there is a catch-all except: that hides this error. The result is that rather than failing with the above message, Ansible assumes that the image is simply missing locally and attempts to pull it.
You can work around this by setting docker_api_version to something earlier than 1.19:
- name: start svenv/svenv.nl container
  docker:
    name: svenv.nl
    ports:
      - 80:80
      - 3306:3306
    image: svenv/svenv.nl
    docker_api_version: 1.18
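As an aside, on newer Ansible releases the docker module was superseded by docker_container, which only pulls when the image is missing locally unless pull is enabled. A hedged sketch of the same task in that style:

- name: start svenv/svenv.nl container
  docker_container:
    name: svenv.nl
    image: svenv/svenv.nl
    state: started
    pull: no
    ports:
      - "80:80"
      - "3306:3306"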
