Timeout error on build in GitLab

I am using the following .gitlab-ci.yml file:
stages:
  - build

build:
  image: php:7.2-apache
  stage: build
  tags:
    - runner01
  script:
    - echo -e "\n Application build. \n"
  before_script:
    - apt-get update -yqq
    - apt-get install git -yqq
    - docker-php-ext-install mysqli
    - curl -sS https://getcomposer.org/installer | php
    - php composer.phar install
I installed GitLab using Docker, but when the runner clones the repository the build fails with a timeout error. I could not find anything to help me solve this problem.
My docker-compose file:
version: '3'

services:
  Gitlab_CI:
    container_name: Gitlab_CI
    image: 'gitlab/gitlab-ce:latest'
    networks:
      - 'DockerLAN'
    restart: always
    hostname: 'gitlab.docker'
    ports:
      - '81:80'
    volumes:
      - '/srv/gitlab/config:/etc/gitlab'
      - '/srv/gitlab/logs:/var/log/gitlab'
      - '/srv/gitlab/data:/var/opt/gitlab'

  Gitlab_Runner:
    container_name: Gitlab_Runner
    image: 'gitlab/gitlab-runner:latest'
    networks:
      - 'DockerLAN'
    restart: always
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
      - '/srv/gitlab-runner/config:/etc/gitlab-runner'

networks:
  DockerLAN:
    driver: bridge
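One thing worth checking (an assumption, since the exact error text and the runner registration aren't shown): with the Docker socket mounted, the runner creates job containers on Docker's default bridge network, where the hostname gitlab.docker does not resolve, and the clone then times out. The runner config at /srv/gitlab-runner/config/config.toml can point jobs at the GitLab container explicitly, roughly like this:

# /srv/gitlab-runner/config/config.toml -- a sketch; the URL values and
# network name are assumptions based on the compose file above
[[runners]]
  name = "runner01"
  url = "http://gitlab.docker/"
  executor = "docker"
  # make job containers clone through the compose network instead of localhost
  clone_url = "http://gitlab.docker"
  [runners.docker]
    image = "php:7.2-apache"
    # the real network name carries the compose project prefix;
    # check it with `docker network ls`
    network_mode = "myproject_DockerLAN"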

Related

Healthcheck for Cosmos DB emulator in Docker

I've got a docker-compose file with the following container defined:
cosmosdb:
  build:
    context: .
    dockerfile: Dockerfile.cosmosdb
  environment:
    - NODE_TLS_REJECT_UNAUTHORIZED=0
    - AZURE_COSMOS_EMULATOR_IP_ADDRESS_OVERRIDE=xxx.xxx.xxx.xxx
  ports:
    - "8081:8081"
    - "8900:8900"
    - "8901:8901"
    - "8979:8979"
    - "10250:10250"
    - "10251:10251"
    - "10252:10252"
    - "10253:10253"
    - "10254:10254"
    - "10255:10255"
    - "10256:10256"
    - "10350:10350"
  networks:
    my-network:
      ipv4_address: xxx.xxx.xxx.xxx
And a Dockerfile:
FROM mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
RUN apt update && apt upgrade -y
RUN apt install curl -y
CMD /usr/local/bin/cosmos/start.sh
My question is, what is a good way to add a healthcheck here?
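One option (a sketch, not tested against this emulator image): probe the emulator's certificate endpoint, which only answers once the gateway is listening, and allow a generous start period since the emulator is slow to boot. Because the Dockerfile above already installs curl, a compose-level healthcheck could look like:

healthcheck:
  # /_explorer/emulator.pem serves the emulator's self-signed certificate
  # once it is up; -f fails on HTTP errors, -k skips TLS verification
  test: ["CMD", "curl", "-fk", "https://localhost:8081/_explorer/emulator.pem"]
  interval: 30s
  timeout: 10s
  retries: 5
  start_period: 120s

Note that start_period requires compose file format 2.3 or 3.4 and later.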

unable to run Gatsby site locally with docker-compose

This is my first time using Docker. I have downloaded a docker-compose.yml from https://github.com/wodby/docker4wordpress, which is a repo with Docker images for WordPress. I plan to use WordPress as a backend and consume the blog posts from its API in a Gatsby site. My problem is that, since I don't have experience with Docker, I am unable to run my Gatsby site locally using the Node image the above repo provides inside my docker-compose.yml. I can run WordPress successfully, however, as my boss helped me configure the file below before I tried to use Gatsby and Node.
My project structure :
Wordpress_Project
|
+-- numerous wordpress folders
|
+-- gatsby_site folder
|
+-- docker-compose.yml
My docker-compose.yml
version: "3"
services:
mariadb:
image: wodby/mariadb:$MARIADB_TAG
container_name: "${PROJECT_NAME}_mariadb"
stop_grace_period: 30s
environment:
MYSQL_ROOT_PASSWORD: $DB_ROOT_PASSWORD
MYSQL_DATABASE: $DB_NAME
MYSQL_USER: $DB_USER
MYSQL_PASSWORD: $DB_PASSWORD
volumes:
# - ./mariadb-init:/docker-entrypoint-initdb.d # Place init .sql file(s) here.
- bedrock_dbdata:/var/lib/mysql # I want to manage volumes manually.
php:
image: wodby/wordpress-php:$PHP_TAG
container_name: "${PROJECT_NAME}_php"
environment:
PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
DB_HOST: $DB_HOST
DB_USER: $DB_USER
DB_PASSWORD: $DB_PASSWORD
DB_NAME: $DB_NAME
PHP_FPM_USER: www-data
PHP_FPM_GROUP: www-data
# # Read instructions at https://wodby.com/docs/stacks/wordpress/local#xdebug
# PHP_XDEBUG: 1
# PHP_XDEBUG_MODE: debug
# PHP_IDE_CONFIG: serverName=my-ide
# PHP_XDEBUG_IDEKEY: "my-ide"
# PHP_XDEBUG_CLIENT_HOST: 172.17.0.1 # Linux
# PHP_XDEBUG_CLIENT_HOST: host.docker.internal # Docker 18.03+ Mac/Win
# PHP_XDEBUG_CLIENT_HOST: 10.0.75.1 # Windows
# PHP_XDEBUG_LOG: /tmp/php-xdebug.log
volumes:
- ./:${CODEBASE}:cached
## Alternative for macOS users: Mutagen https://wodby.com/docs/stacks/wordpress/local#docker-for-mac
# - mutagen:/var/www/html
# # For XHProf and Xdebug profiler traces
# - files:/mnt/files
crond:
image: wodby/wordpress-php:$PHP_TAG
container_name: "${PROJECT_NAME}_crond"
environment:
CRONTAB: "0 * * * * wp cron event run --due-now --path=${CODEBASE}/web/wp"
command: sudo -E LD_PRELOAD=/usr/lib/preloadable_libiconv.so crond -f -d 0
volumes:
- ./:${CODEBASE}:cached
mailhog:
image: mailhog/mailhog
container_name: "${PROJECT_NAME}_mailhog"
labels:
- "traefik.http.services.${PROJECT_NAME}_mailhog.loadbalancer.server.port=8025"
- "traefik.http.routers.${PROJECT_NAME}_mailhog.rule=Host(`mailhog.${PROJECT_BASE_URL}`)"
nginx:
image: wodby/nginx:$NGINX_TAG
container_name: "${PROJECT_NAME}_nginx"
depends_on:
- php
environment:
NGINX_STATIC_OPEN_FILE_CACHE: "off"
NGINX_ERROR_LOG_LEVEL: debug
NGINX_BACKEND_HOST: php
NGINX_VHOST_PRESET: wordpress
NGINX_SERVER_NAME: ${PROJECT_BASE_URL}
NGINX_SERVER_ROOT: ${CODEBASE}/web
volumes:
- ./:${CODEBASE}:cached
## Alternative for macOS users: Mutagen https://wodby.com/docs/stacks/wordpress/local#docker-for-mac
# - mutagen:/var/www/html
labels:
- "traefik.http.routers.${PROJECT_NAME}_nginx.rule=Host(`${PROJECT_BASE_URL}`)"
portainer:
image: portainer/portainer-ce
container_name: "${PROJECT_NAME}_portainer"
command: -H unix:///var/run/docker.sock
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
labels:
- "traefik.http.services.${PROJECT_NAME}_portainer.loadbalancer.server.port=9000"
- "traefik.http.routers.${PROJECT_NAME}_portainer.rule=Host(`portainer.${PROJECT_BASE_URL}`)"
traefik:
image: traefik:v2.4
container_name: "${PROJECT_NAME}_traefik"
command: --api.insecure=true --providers.docker
ports:
- '80:80'
# - '8080:8080' # Dashboard
volumes:
- /var/run/docker.sock:/var/run/docker.sock
#this is the node image where I want to run gatsby locally
node:
image: wodby/node:$NODE_TAG
container_name: "${PROJECT_NAME}_node"
working_dir: /app
labels:
- "traefik.http.services.${PROJECT_NAME}_node.loadbalancer.server.port=3000"
- "traefik.http.routers.${PROJECT_NAME}_node.rule=Host(`node.${PROJECT_BASE_URL}`)"
expose:
- "3000"
volumes:
- ./gatsby_site:/app
command: sh -c 'npm install && npm run start'
volumes:
bedrock_dbdata:
external: true
I am trying to configure the above node service but I have been unsuccessful so far. When I access the node URL on localhost I only get a 404 error.
I would appreciate your help.
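One thing worth checking (an assumption, since the gatsby_site npm scripts aren't shown): gatsby develop binds to localhost on port 8000 by default, so nothing inside the container listens on the 0.0.0.0:3000 address that Traefik routes to, which would produce exactly this 404. A sketch of the service with the host and port made explicit:

node:
  image: wodby/node:$NODE_TAG
  container_name: "${PROJECT_NAME}_node"
  working_dir: /app
  labels:
    - "traefik.http.services.${PROJECT_NAME}_node.loadbalancer.server.port=3000"
    - "traefik.http.routers.${PROJECT_NAME}_node.rule=Host(`node.${PROJECT_BASE_URL}`)"
  expose:
    - "3000"
  volumes:
    - ./gatsby_site:/app
  # bind the dev server to all interfaces on the port Traefik expects
  command: sh -c 'npm install && npx gatsby develop --host 0.0.0.0 --port 3000'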

Flask App runs locally but errors out on DigitalOcean

I think it has to do with my docker-compose.yml file. The app builds locally, but when I run "docker-compose up -d" on my DigitalOcean Droplet I get this error:
ERROR: Couldn't find env file: /root/.env
The following is my docker-compose.yml file.
version: '2'

services:
  postgres:
    image: 'postgres:9.5'
    container_name: postgress
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'
    networks:
      - db_nw

  redis:
    image: 'redis:3.0-alpine'
    container_name: redis
    command: redis-server --requirepass pass123456word
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'

  website:
    restart: always
    build: .
    container_name: website
    command: >
      gunicorn -c "python:config.gunicorn" --reload "app.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/app'
    ports:
      - 8000:8000
    expose:
      - 8000
    networks:
      - db_nw
      - web_nw
    depends_on:
      - postgres
    links:
      - celery
      - redis
      - postgres

  celery:
    build: .
    container_name: celery
    command: celery worker -B -l info -A app.blueprints.contact.tasks
    env_file:
      - '.env'
    volumes:
      - '.:/app'

  nginx:
    restart: always
    build: ./nginx
    image: 'nginx:1.13'
    container_name: nginx
    volumes:
      - /www/static
      - .:/app
    ports:
      - 80:80
    networks:
      - web_nw
    links:
      - website
    depends_on:
      - website

networks:
  db_nw:
    driver: bridge
  web_nw:
    driver: bridge

volumes:
  postgres:
  redis:
My Dockerfile:
FROM python:3.7.5-slim-buster
RUN apt-get update \
    && apt-get install -qq -y \
       build-essential libpq-dev --no-install-recommends
ENV INSTALL_PATH /app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install --upgrade pip -r requirements.txt
COPY . .
RUN pip install --editable .
CMD gunicorn -c "python:config.gunicorn" "app.app:create_app()"
Is something wrong with the volumes in my docker-compose.yml file? Or am I doing something weird in my Dockerfile with the ENV, such that it's hard-coded to my local machine rather than the "root" directory on DigitalOcean?
I'm new to hosting Docker images, so this is my first go at something like this. Thanks!
When you access a Droplet, you're generally running as root.
You appear to have copied the docker-compose.yml correctly to the Droplet but you have not copied the .env file on which it depends to the Droplet's /root/.env.
If you copy the .env file to /root/.env on the Droplet, it should work.
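For example, from the project directory on your local machine (the Droplet address is a placeholder):

# copy the local .env next to the docker-compose.yml on the Droplet
scp .env root@<your-droplet-ip>:/root/.env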

unable to run my Spring Boot app using Maven?

I am using the following docker-compose.yml to set up my environment:
version: "3"
services:
db:
image: mysql
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_USER: example_db_user
MYSQL_PASSWORD: example_db_pass
volumes:
- ./init.sql:/docker-entrypoint-initdb.d/init.sql
ports:
- "3306:3306"
tomcat:
image: picoded/tomcat7
environment:
JDBC_URL: jdbc:mysql://db:3306/data-core?connectTimeout=0&socketTimeout=0&autoReconnect=true
JDBC_USER: example_db_user
JDBC_PASS: example_db_pass
container_name: tomcat7hope
ports:
- "8080:8080"
volumes:
- ./docker/data-core.war:/usr/local/tomcat/webapps/data-core.war
depends_on:
- db
maven:
build:
context: .
depends_on:
- tomcat
My Dockerfile is:
FROM maven:3.5-jdk-8
RUN mvn clean install
After running docker-compose up, the maven container exited with a build failure: "No goals have been specified for this build." But I already specified RUN mvn clean install in my Dockerfile, so why is it not running properly?
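A likely explanation, given that the error appears when the container starts rather than while the image builds: RUN mvn clean install executes at image build time, but when the container starts, Docker runs the image's default command, which for the official maven images is plain mvn with no goals, and that is exactly the error it prints. A sketch of a Dockerfile that copies the project in and gives the container a harmless explicit command (assuming pom.xml and src/ sit next to the Dockerfile):

FROM maven:3.5-jdk-8
WORKDIR /build
# copy the project definition and sources so Maven has a POM to build
COPY pom.xml .
COPY src ./src
RUN mvn clean install
# without this, the base image's default command (plain `mvn`, no goals)
# runs at container start and exits with "No goals have been specified"
CMD ["mvn", "--version"]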

docker-compose up didn't finish npm install.

I'm new to docker-compose and I'd like to use it for my current development.
After I ran docker-compose up -d, everything started OK and it looked good. But my Node.js application wasn't installed correctly. It seems like npm install didn't complete, and I had to run docker exec -it api bash and run npm i manually inside the container.
Here's my docker-compose.
version: '2'

services:
  app:
    build: .
    container_name: sparrow-api-1
    volumes:
      - .:/usr/src/app
      - $HOME/.aws:/root/.aws
    working_dir: /usr/src/app
    environment:
      - SPARROW_EVENT_QUEUE_URL=amqp://guest:guest@rabbitmq:5672
      - REDIS_URL=redis
      - NSOLID_APPNAME=sparrow-api
      - NSOLID_HUB=registry:4001
      - NODE_ENV=local
      - REDIS_PORT=6379
      - NODE_PORT=8081
      - SOCKET_PORT=8002
      - ELASTICSEARCH_URL=elasticsearch
      - STDIN_OPEN=${STDIN_OPEN}
    networks:
      - default
    depends_on:
      - redis
      - rabbitmq
      - elasticsearch
    expose:
      - "8081"
    ports:
      - "8081:8081"
    command: bash docker-command.sh

  redis:
    container_name: redis
    image: redis:3.0.7-alpine
    networks:
      - default
    ports:
      - "6379:6379"

  rabbitmq:
    container_name: rabbitmq
    image: rabbitmq:3.6.2-management
    networks:
      - default
    ports:
      - "15672:15672"

  elasticsearch:
    container_name: elasticsearch
    image: elasticsearch:1.5.2
    networks:
      - default
    ports:
      - "9200:9200"
      - "9300:9300"

  registry:
    image: nodesource/nsolid-registry
    container_name: registry
    networks:
      - default
    ports:
      - 4001:4001

  proxy:
    image: nodesource/nsolid-hub
    container_name: hub
    networks:
      - default
    environment:
      - REGISTRY=registry:4001
      - NODE_DEBUG=nsolid

  console:
    image: nodesource/nsolid-console
    container_name: console
    networks:
      - default
    environment:
      - NODE_DEBUG=nsolid
      - NSOLID_APPNAME=console
      - NSOLID_HUB=registry:4001
    command: --hub hub:9000
    ports:
      - 3000:3000

# don't forget to create network as well
networks:
  default:
    driver: bridge
Here's my docker-command.sh
#!/usr/bin/env bash

# link the node modules to the root directory of our app, if not exists
modules_link="/usr/src/app/node_modules"
if [ ! -d "${modules_link}" ]; then
  ln -s /usr/lib/app/node_modules ${modules_link}
fi

if [ -n "$STDIN_OPEN" ]; then
  # if we want to be interactive with our app container, it needs to run in
  # the background
  tail -f /dev/null
else
  nodemon
fi
Here's my Dockerfile
FROM nodesource/nsolid:latest

RUN mkdir /usr/lib/app
WORKDIR /usr/lib/app
COPY [".npmrc", "package.json", "/usr/lib/app/"]
RUN npm install \
    && npm install -g mocha \
    && npm install -g nodemon \
    && rm -rf package.json .npmrc
In your Dockerfile you are running npm install without any arguments first:
RUN npm install \
    && npm install -g mocha \
This will cause a non-zero exit code, and because of the && the following commands are not executed. This should also fail the build, though, so I'm guessing you already had a working image and added the npm instructions later. To rebuild the image, use docker-compose build or simply docker-compose up --build. By default, docker-compose up will only build the image if it does not exist yet.
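For example, with the service named app as in the compose file above:

# rebuild the image so the updated RUN npm install layer is executed
docker-compose build app
# or rebuild and start in one step
docker-compose up -d --build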
