bitbucket pipelines can't find postgresql image - bitbucket-pipelines

I have the following in my bitbucket-pipelines.yml:
image: python:3.8
pipelines:
  default:
    - step:
        caches:
          - pip
        script:
          - pip install virtualenv
          - virtualenv venv
          - . venv/bin/activate
          - pip install -r requirements.txt
          - pip install -e .
          - cp .env-example .env
          - make test
        services:
          - postgres
          - redis
definitions:
  services:
    postgres:
      image: postgers:11
    redis:
      image: redis
However, my Bitbucket pipeline always fails with the following error:
rpc error: code = Unknown desc = failed to resolve image "docker.io/library/postgers:11": no available registry endpoint: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed

It's probably the typo in »postgers:11«; the image name should be postgres:11.
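With the image name corrected (a minimal fix, assuming the stock postgres image from Docker Hub is what was intended), the definitions block becomes:

definitions:
  services:
    postgres:
      image: postgres:11
    redis:
      image: redis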

Related

Gitlab CI empty script

I'm trying to set up a simple CI/CD environment on GitLab. My code is a Python app that needs an external service for testing. The service is a container that does not require any script to be run. My .gitlab-ci.yml file is:
stages:
  - dynamodb
  - testing

build_dynamo:
  stage: dynamodb
  image: amazon/dynamodb-local:latest

unit_tests:
  stage: testing
  image: python:3.10.3
  before_script:
    - pip install -r requirements_tests.txt
    - export PYTHONPATH="${PYTHONPATH}:./src"
  script:
    - python -m unittest discover -s ./tests -p '*test*.py'
For this config I get an error:
Found errors in your .gitlab-ci.yml: jobs build_dynamo config should
implement a script: or a trigger: keyword
How can I solve this, or otherwise implement the setup I need?
Using a service solved this:
unit_tests:
  image: python:3.10.3-slim-buster
  services:
    - name: amazon/dynamodb-local:latest
  before_script:
    - pip install -r requirements_tests.txt
    - export PYTHONPATH="${PYTHONPATH}:./src"
  script:
    - python -m unittest discover -s ./tests -p '*test*.py'
The endpoint for the service is amazon-dynamodb-local:8000, since "/" in the image name is changed to "-".
Reference: https://docs.gitlab.com/ee/ci/services/#accessing-the-services
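For the tests to actually reach the service, the hostname can be handed to the job, for example through a job variable; DYNAMODB_ENDPOINT below is a hypothetical name that the test code would have to read:

unit_tests:
  image: python:3.10.3-slim-buster
  services:
    - name: amazon/dynamodb-local:latest
  variables:
    DYNAMODB_ENDPOINT: "http://amazon-dynamodb-local:8000"
  script:
    - python -m unittest discover -s ./tests -p '*test*.py'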

Docker compose can't find named Dockerfile

My project structure is laid out like this (names are just examples):
- docker-compose.yml
- env_files
  - foo.env
- dockerfiles
  - service_1
    - foo.Dockerfile
    - requirements.txt
- code_folder_1
  - ...
- code_folder_2
  - ...
In my docker-compose.yml:
some_service:
  container_name: foo_name
  build:
    context: .
    dockerfile: ./dockerfiles/service_1/foo.Dockerfile
  ports:
    - 80:80
  env_file:
    - ./env_files/foo.env
Dockerfile:
FROM python:3.8-slim
WORKDIR /some_work_dir
COPY ./dockerfiles/intermediary/requirements.txt .
RUN pip3 install --upgrade pip==21.3.1 && \
pip3 install -r requirements.txt
COPY . .
EXPOSE 80
And after I run docker compose build in the directory where the compose file is located, I get this error:
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount4158855397/Dockerfile: no such file or directory
I really do not understand why this is happening. I need to set context: . because I have multiple folders that I need to COPY inside foo.Dockerfile.
The same error was reproduced on macOS Monterey 12.5.1 and Ubuntu 18.04.4 LTS (Bionic Beaver).
I solved a similar issue by writing a script like the following:
#!/bin/bash
SCRIPT_PATH=$(readlink -f "$(dirname "$0")")
SCRIPT_PATH=${SCRIPT_PATH} docker-compose -f "${SCRIPT_PATH}/docker-compose.yaml" up -d --build "$@"
And changing the yaml into:
some_service:
  container_name: foo_name
  build:
    context: .
    dockerfile: ${SCRIPT_PATH}/dockerfiles/service_1/foo.Dockerfile
  ports:
    - 80:80
  env_file:
    - ${SCRIPT_PATH}/env_files/foo.env

Jira CircleCI integration. Please provide a CircleCI API token for this orb to work

I'm trying to integrate Jira with CircleCI. I've followed the instructions and everything went smoothly: 1. installed CircleCI for Jira, 2. created a token in Jira, 3. added that token in CircleCI, 4. added the orb to my config.yml file. When I pushed the changes, the build failed and showed the message below.
/bin/bash: CIRCLE_TOKEN: Please provide a CircleCI API token for this orb to work!
This is my config.yml file
version: 2.1
orbs:
  jira: circleci/jira@1.0.2
workflows:
  build:
    jobs:
      - build:
          post-steps:
            - jira/notify
jobs:
  build:
    docker:
      - image: circleci/python:3.6
    steps:
      - checkout
      - restore_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
      - run:
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install --upgrade pip
            pip install -r requirements.txt
      - save_cache:
          key: deps1-{{ .Branch }}-{{ checksum "requirements.txt" }}
          paths:
            - "venv"
      - run:
          name: Running tests
          command: |
            . venv/bin/activate
            python3 test_manage.py test
      - store_artifacts:
          path: test-reports/
          destination: python_app
You need to generate a CircleCI API token and then create an environment variable on your CircleCI project with the name 'CIRCLE_TOKEN' and the token as its value.
If you want to use a different environment variable name, you can set the token_name parameter when defining the command in your CircleCI configuration.
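As a minimal sketch, assuming the token was stored under a project environment variable named JIRA_CIRCLE_TOKEN (a hypothetical name), the override would be set on the post-step:

workflows:
  build:
    jobs:
      - build:
          post-steps:
            - jira/notify:
                token_name: JIRA_CIRCLE_TOKEN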
Edit:
You may have issues using a project API token; use a personal API token instead.

Bitbucket Pipeline with docker-compose: Container ID 166535 cannot be mapped to a host ID

I'm trying to use docker-compose inside a Bitbucket pipeline in order to build several microservices and run tests against them. However, I'm getting the following error:
Step 19/19 : COPY . .
Service 'app' failed to build: failed to copy files: failed to copy directory: Error processing tar file(exit status 1): Container ID 166535 cannot be mapped to a host ID
As of now, my docker-compose.yml looks like this:
version: '2.3'
services:
  app:
    build:
      context: .
      target: dev
    ports:
      - "3030:3030"
    image: myapp:dev
    entrypoint: "/docker-entrypoint-dev.sh"
    command: [ "npm", "run", "watch" ]
    volumes:
      - .:/app/
      - /app/node_modules
    environment:
      NODE_ENV: development
      PORT: 3030
      DATABASE_URL: postgres://postgres:@postgres/mydb
and my Dockerfile is as follows:
# ---- Base ----
#
FROM node:10-slim AS base
ENV PORT 80
ENV HOST 0.0.0.0
EXPOSE 80
WORKDIR /app
COPY ./scripts/docker-entrypoint-dev.sh /
RUN chmod +x /docker-entrypoint-dev.sh
COPY ./scripts/docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
COPY package.json package-lock.json ./
# ---- Dependencies ----
#
FROM base as dependencies
RUN npm cache verify
RUN npm install --production=true
RUN cp -R node_modules node_modules_prod
RUN npm install --production=false
# ---- Development ----
#
FROM dependencies AS dev
ENV NODE_ENV development
COPY . .
# ---- Release ----
#
FROM dependencies AS release
ENV NODE_ENV production
COPY --from=dependencies /app/node_modules_prod ./node_modules
COPY . .
CMD ["npm", "start"]
And in my bitbucket-pipelines.yml I define my pipeline as:
image: node:10.15.3
pipelines:
  default:
    - step:
        name: 'install docker-compose, and run tests'
        script:
          - curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
          - chmod +x /usr/local/bin/docker-compose
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
However, this example works when I try to use docker without docker-compose, defining my pipeline as:
pipelines:
  default:
    - step:
        name: 'install and run tests'
        script:
          - docker build -t myapp .
          - docker run --entrypoint="" myapp npm run test
          - echo 'done!'
        services:
          - postgres
          - docker
I found this issue (https://jira.atlassian.com/browse/BCLOUD-17319) in the Atlassian community, but I could not find a solution that fixes my use case. Any suggestions?
I would try to use an image that already has docker-compose installed instead of installing it during the pipeline.
image: node:10.15.3
pipelines:
  default:
    - step:
        name: 'run tests'
        script:
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
definitions:
  services:
    docker:
      image: docker/compose:1.25.4
Try adding this to your bitbucket-pipelines.yml.
If it doesn't work, rename docker to customDocker both in the definitions section and in the step's services section.
If that doesn't work either, then, since you don't need Node.js in the pipeline directly, try this approach:
image: docker/compose:1.25.4
pipelines:
  default:
    - step:
        name: 'run tests'
        script:
          - docker-compose -v
          - docker-compose run app npm run test
          - echo 'tests done'
        services:
          - docker
TL;DR: Start from your base image and check for the ID that is causing the problem using commands in your Dockerfile. Use problem_id = error_message_id - 100000 - 65536 to find the uid or gid that is not supported. Note that chown makes Docker copy every file it modifies, inflating your Docker image.
The details:
We were using the base image tensorflow/tensorflow:2.2.0-gpu, and though we tried to find the problem ourselves, we were looking too late in our Dockerfile and making assumptions that were wrong. With help from Atlassian support we found that /usr/local/lib/python3.6 contained many files belonging to group staff (gid = 50).
Assumption 1: Bitbucket Pipelines have definitions for the standard "linux" user ids and group ids.
Reality: Bitbucket Pipelines only define a subset of the standard users and groups. Specifically, they do not define group "staff" with gid 50. Your base image may define group staff (in /etc/group), but the Bitbucket pipeline is run in a Docker container without that gid. Do not use
RUN cat /etc/group && cat /etc/passwd
in your Dockerfile to check for ids; execute these commands as Bitbucket pipeline commands in your script instead.
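A sketch of checking those ids from the pipeline script itself (the step layout around the commands is assumed):

pipelines:
  default:
    - step:
        script:
          # show which gids and uids the pipeline build environment actually defines
          - cat /etc/group
          - cat /etc/passwd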
Assumption 2: It was something we were installing that was breaking the build.
Reality: Although we could "move the build failure around" by adjusting which packages we installed, this was likely just a case of some packages overwriting the ownership of pre-existing files.
We were able to find the files by using the relationship between the id in the error message and the id seen during the docker build:
problem_id = error_message_id - 100000 - 65536
and then used the computed id value (50) to find the files early in our Dockerfile:
RUN find / -uid 50 -ls
RUN find / -gid 50 -ls
For example:
Error processing tar file(exit status 1): Container ID 165586 cannot be mapped to a host ID
50 = 165586 - 100000 - 65536
Final solution (for us):
Adding this command early to our Dockerfile:
RUN chown -R root:root /usr/local/lib/python*
fixed the Bitbucket pipeline build problem, but it also increases the size of our Docker image, because Docker makes a copy of every file that is modified (contents or filesystem flags). We will look again at multi-stage builds to reduce the size of our Docker images.

Terraform local_file on CircleCI is not able to find the file

I am trying to deploy Lambda functions using Terraform on CircleCI.
resource "aws_lambda_function" "demo_function" {
function_name = "my-dummy-function"
handler = "index.handler"
role = "${var.IAM_LAMBDA_ARN}"
filename = "${var.API_DIR}"
source_code_hash = "${base64sha256(data.local_file.dist_file.content)}"
runtime = "nodejs8.10"
}
data "local_file" "dist_file" {
filename = "${var.API_DIR}"
}
Error
on lambda/main.tf line 10, in data "local_file" "dist_file":
10: data "local_file" "dist_file" {
The variables are all fine.
Local deployment is also working fine.
Tried with different versions of Terraform too (0.11.xx and 0.12.0)
.circleci/config.yml
version: 2
jobs:
  build:
    working_directory: ~/tmp
    docker:
      - image: circleci/node:8
      - image: hashicorp/terraform
    steps:
      - checkout
      - restore-cache:
          keys:
            - v1-dependencies-{{ checksum "backend/package.json" }}
            - vi-dependencies-
      - run:
          name: Installing modules
          command: cd backend/ && npm ci
      - save-cache:
          paths:
            - ./backend/node_modules
          key: v1-dependencies-{{ checksum "backend/package.json" }}
      - run:
          name: Building backend code
          command: cd backend/ && npm run build
      - persist_to_workspace:
          root: backend/dist
          paths:
            - terraform_demo-api.zip
  deploy:
    working_directory: ~/tmp
    docker:
      - image: alpine:3.8
    steps:
      - run:
          name: Setting up
          command: apk update && apk add ca-certificates openssl wget && update-ca-certificates
      - run:
          name: Installing
          command: |
            wget https://releases.hashicorp.com/terraform/0.12.0/terraform_0.12.0_linux_amd64.zip
            apk add --update git curl openssh make python py-pip groff less unzip
            unzip terraform_0.12.0_linux_amd64.zip -d /bin
            rm -f terraform_0.12.0_linux_amd64.zip
            pip install --quiet --upgrade pip && \
            pip install --quiet awscli==1.14.5
      - checkout
      - attach_workspace:
          at: infrastructure/lambda
      - restore-cache:
          keys:
            - v1-infrastructure-{{ checksum "infrastructure/.terraform/terraform.tfstate" }}
      - run:
          name: Initialising terraform
          command: cd infrastructure/ && terraform init -reconfigure -force-copy -backend=true -backend-config "bucket=geet-tf-state-bucket" -backend-config "key=terraform.tfstate" -backend-config "region=us-west-2"
      - run: cd infrastructure/lambda/ && cat terraform_demo-api.zip >> api.zip
      - run:
          name: Executing plan
          command: cd infrastructure/ && terraform plan -var="UI_BUCKET_NAME=$UI_BUCKET_NAME" -var="API_DIR=$API_DIR" -var="AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID" -var="AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY" -out .terraform/service.tfplan
      - run:
          name: Applying infrastructure
          command: cd infrastructure/ && terraform apply -auto-approve .terraform/service.tfplan
      - save-cache:
          paths:
            - ./infrastructure/.terraform
          key: v1-infrastructure-{{ checksum "infrastructure/.terraform/terraform.tfstate" }}
workflows:
  version: 2
  build_and_deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
In case someone suggests that the file path may differ or that I could use ${path.module}: I have tried accessing a local file (for example output.tf) from the same directory, but I still get the file-not-found issue.
