I'm using Jenkins to build a project that runs in a Docker container, and I've run into a problem.
When executing this piece of code:
stage('deploy front') {
    when { equals expected: 'do', actual: buildFront }
    agent { docker { image 'ebiwd/alpine-ssh' } }
    steps {
        sh 'chmod 400 .iac/privatekey'
        sh 'ssh -i .iac/privatekey ci_user@134.209.181.163'
    }
}
I get an error:
+ ssh -i .iac/privatekey ci_user@134.209.181.163
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added '134.209.181.163' (ECDSA) to the list of known hosts.
bind: No such file or directory
unix_listener: cannot bind to path:
/root/.ssh/sockets/ci_user@134.209.181.163-22.uzumQ42Zb6Tcr2E9
However, if I run the same command by hand inside the container, everything works:
ssh -i .iac/privatekey ci_user@134.209.181.163
The Jenkins container is started with this docker-compose.yaml:
version: '3.1'
services:
  jenkins:
    image: jenkins/jenkins:2.277.1-lts
    container_name: jenkins
    hostname: jenkins
    restart: always
    user: root
    privileged: true
    ports:
      - 172.17.0.1:8070:8080
      - 50000:50000
    volumes:
      - /opt/docker/jenkins/home:/var/jenkins_home
      - /etc/timezone:/etc/timezone
      - /usr/bin/docker:/usr/bin/docker
      - /etc/localtime:/etc/localtime
      - /var/run/docker.sock:/var/run/docker.sock
What could be the problem?
I have the same error in my GitLab pipelines:
bind: No such file or directory
unix_listener: cannot bind to path: /root/.ssh/sockets/aap_adm@wp-np2-26.ebi.ac.uk-22.LIXMnQy4cW5klzgB
lost connection
I think that the error is related to this changeset.
In particular, the image's ssh config file requires the path ~/.ssh/sockets to be present. Since we are not using the script /usr/local/bin/add-ssh-key (a custom script created for that image), this path is missing.
I've opened an issue in the image project: Error using the image in CI/CD pipelines #10.
The problem was this line:
agent { docker { image 'ebiwd/alpine-ssh' } }
When I pinned the image to the previous version:
agent { docker { image 'ebiwd/alpine-ssh:3.13' } }
everything started working.
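Given the missing-directory diagnosis above, an alternative to pinning the tag (sketched here as an assumption, not a confirmed fix) would be to create the ControlPath directory that the image's ssh_config expects in an extra `sh` step before calling ssh:

```shell
# Create the control-socket directory the image's ssh_config points at,
# so ControlMaster can bind its socket even on the unpinned image.
mkdir -p ~/.ssh/sockets
chmod 700 ~/.ssh/sockets
```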
I am new to Docker and CI/CD.
I am using a VPS with Ubuntu 18.04.
The project's Docker setup runs locally and works fine.
I don't quite understand why the server is trying to reach the Docker daemon over http instead of tcp.
override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
docker service status
daemon.json
{ "storage-driver":"overlay" }
gitlab-ci.yml
image: docker/compose:latest

services:
  - docker:dind

stages:
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

deploy:
  stage: deploy
  only:
    - master
  tags:
    - deployment
  script:
    # - export DOCKER_HOST="tcp://127.0.0.1:2375"
    - docker-compose stop || true
    - docker-compose up -d
    - docker ps
  environment:
    name: production
Error
Set the DOCKER_HOST variable. When using the docker:dind service, the default hostname for the daemon is the name of the service, docker.
variables:
  DOCKER_HOST: "tcp://docker:2375"
You must also have set up your GitLab runner to allow privileged containers.
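For a runner registered with the Docker executor, that typically means `privileged = true` in the runner's config.toml (a sketch; the image name is an example):

```toml
[[runners]]
  executor = "docker"
  [runners.docker]
    image = "docker:19.03.12"
    # Required for the docker:dind service to start:
    privileged = true
```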
Docker needs root permission to be accessed. If you want to run docker or docker-compose commands as a regular user, you need to add your user to the docker group:
sudo usermod -a -G docker yourUserName
By doing that, you can bring up your services and run other Docker commands with your regular user. If you don't want to add your user to the docker group, you need to prefix every docker command with sudo:
sudo docker-compose up -d
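A quick way to check which case applies (a sketch; `docker` is the default group name created by the Docker packages):

```shell
# Print "yes" if the current user is already in the docker group,
# otherwise "no" (meaning: use sudo, or add the user and re-login).
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group: yes"
else
  echo "docker group: no"
fi
```

Note that group changes only take effect in new login sessions (or after `newgrp docker`).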
I installed gitlab_runner.exe and Docker Desktop on Windows 10 and tried to execute the following from gitlab-ci.yml:
.docker-build:
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/docker:19.03.12
  services:
    - name: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/docker:19.03.12-dind
      alias: docker
  before_script:
    - docker info
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:$CI_PIPELINE_ID -t $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:$TAG -f $DOCKER_FILE $DOCKER_PATH
    - docker push $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:$TAG
    - docker push $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:$CI_PIPELINE_ID
As I am running locally, the CI_REGISTRY variable is not getting set. I tried the following, but nothing worked:
1. gitlab-runner-windows-amd64.exe exec shell --env "CI_REGISTRY=gitco.com:4004" .docker-build
2. set CI_REGISTRY=gitco.com:4004 from Windows command prompt
3. Tried setting the variable within .gitlab-ci.yml
No matter what I try, it does not recognize the CI_REGISTRY value and errors out as follows:
Error response from daemon: Get https://$CI_REGISTRY/v2/: dial tcp: lookup $CI_REGISTRY: no such host
I googled but was unable to find a relevant link for this issue. Any help is highly appreciated.
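One detail worth noting: the daemon error contains the literal string `$CI_REGISTRY`, which suggests the variable was never expanded by the shell at all. POSIX-style `$VAR` references only expand in POSIX shells; Windows cmd uses `%VAR%` and PowerShell uses `$env:VAR`. A minimal illustration of the POSIX behavior (the registry value is hypothetical):

```shell
# In a POSIX shell the variable expands before docker ever sees it:
CI_REGISTRY=gitco.com:4004
echo "docker login ... $CI_REGISTRY"
# In Windows cmd, $CI_REGISTRY would be passed through unexpanded,
# which matches the "lookup $CI_REGISTRY: no such host" error.
```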
Overview:
I updated the MySQL Node-RED module and now I must restart Node-RED to enable it. The message is as follows:
Node-RED must be restarted to enable upgraded modules
Problem:
I am running the official Node-RED docker container using docker-compose, and there is no node-red command available when I enter the container, as suggested in the docs.
Question
How do I manually restart the node-red application without the shortcut in the official nodered docker container?
Caveats:
I have never used node.js and I am new to node-red.
I am fluent in Linux and other programming languages.
Steps-to-reproduce
Install docker and docker-compose.
Create a project directory with the docker-compose.yml file
Start the service: docker-compose up
Navigate to http://localhost:1880
Click the hamburger menu icon -> Manage palette -> Palette tab, then search for and update the MySQL package.
Go into nodered container: docker-compose exec nodered bash
execute: node-red
result: bash: node-red: command not found
File:
docker-compose.yml
version: "2.7"
services:
  nodered:
    image: nodered/node-red:latest
    user: root:root
    environment:
      - TZ=America/New_York
    ports:
      - 1880:1880
    networks:
      - nodered-net
    volumes:
      - ./nodered_data:/data
networks:
  nodered-net:
You will need to bounce the whole container; there is no way to restart Node-RED while keeping the container running, because the running instance is what keeps the container alive.
Run docker ps to find the correct container instance then run docker restart [container name]
Where [container name] is likely to be something like nodered-nodered_1
Running Consul with Docker Desktop using Windows containers and experimental mode turned on works well. However, if I try mounting Bitnami Consul's data directory to a local volume, I get the following error:
chown: cannot access '/bitnami/consul'
My compose file looks like this:
version: "3.7"
services:
  consul:
    image: bitnami/consul:latest
    volumes:
      - ${USERPROFILE}\DockerVolumes\consul:/bitnami
    ports:
      - '8300:8300'
      - '8301:8301'
      - '8301:8301/udp'
      - '8500:8500'
      - '8600:8600'
      - '8600:8600/udp'
    networks:
      nat:
        aliases:
          - consul
If I remove the volumes part, everything works just fine, but I cannot persist my data. I followed the instructions in the readme file. They speak of having the proper permissions, but I do not know how to get that working with Docker Desktop.
Side note
If I do not mount /bitnami but /bitnami/consul, I get the following error:
2020-03-30T14:59:00.327Z [ERROR] agent: Error starting agent: error="Failed to start Consul server: Failed to start Raft: invalid argument"
Another option is to edit the docker-compose.yaml to run the Consul container as root by adding the user: root directive:
version: "3.7"
services:
  consul:
    image: bitnami/consul:latest
    user: root
    volumes:
      - ${USERPROFILE}\DockerVolumes\consul:/bitnami
    ports:
      - '8300:8300'
      - '8301:8301'
      - '8301:8301/udp'
      - '8500:8500'
      - '8600:8600'
      - '8600:8600/udp'
    networks:
      nat:
        aliases:
          - consul
Without user: root the container is executed as non-root (user 1001):
▶ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c590d7df611 bitnami/consul:1 "/opt/bitnami/script…" 4 seconds ago Up 3 seconds 0.0.0.0:8300-8301->8300-8301/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8301->8301/udp, 0.0.0.0:8600->8600/tcp, 0.0.0.0:8600->8600/udp bitnami-docker-consul_consul_1
▶ dcexec 0c590d7df611
I have no name!@0c590d7df611:/$ whoami
whoami: cannot find name for user ID 1001
But with this line added, the container is executed as root:
▶ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ac206b56f57b bitnami/consul:1 "/opt/bitnami/script…" 5 seconds ago Up 4 seconds 0.0.0.0:8300-8301->8300-8301/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8301->8301/udp, 0.0.0.0:8600->8600/tcp, 0.0.0.0:8600->8600/udp bitnami-docker-consul_consul_1
▶ dcexec ac206b56f57b
root@ac206b56f57b:/# whoami
root
If the container is executed as root, there shouldn't be any issue with the permissions on the host volume.
The Consul container is a non-root container; in that case, the non-root user should be able to write to the volume.
When using host directories as a volume, you need to ensure that the directory you are mounting into the container has the proper permissions, in this case writable permission for others. You can modify the permissions by running sudo chmod o+x ${USERPROFILE}\DockerVolumes\consul (or the correct path to the host directory).
This local folder is created the first time you run docker-compose up, or you can create it yourself with mkdir. Once created (manually or automatically), give it the proper permissions with chmod.
I am not familiar with Docker Desktop or Windows environments, but you should be able to perform the equivalent actions using a CLI.
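On a Linux host, the steps above could look like this (a sketch; the path is an example, and `o+rwx` is an assumption covering the "writable for others" requirement the answer describes):

```shell
# Pre-create the host directory and make it accessible to "others"
# (the container runs as uid 1001), before the first docker-compose up.
mkdir -p "$HOME/DockerVolumes/consul"
chmod o+rwx "$HOME/DockerVolumes/consul"
```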
I'm using Ansible to provision my server with everything required to make my website work. The goal is to install a base system and provide it with Docker containers running apps (at the moment it's just one app).
The problem I'm facing is that my Docker image isn't hosted at Docker Hub or anywhere else. Instead, it's built by an Ansible task. However, when I try to run the built image, Ansible tries to pull it (which isn't possible) and then dies.
This is what the playbook section looks like:
- name: check or build image
  docker_image:
    path: /srv/svenv.nl-docker
    name: svenv/svenv.nl
    state: build

- name: start svenv/svenv.nl container
  docker:
    name: svenv.nl
    volumes:
      - /srv/svenv.nl-docker/data/var/lib/mysql/:/var/lib/mysql/
      - /srv/svenv.nl-docker/data/svenv.nl/svenv/media:/svenv.nl/svenv/media
    ports:
      - 80:80
      - 3306:3306
    image: svenv/svenv.nl
When I run this, the failure indicates that svenv/svenv.nl gets pulled from the repository; it isn't there, so it crashes:
failed: [vps02.svenv.nl] => {"changes": ["{\"status\":\"Pulling repository svenv/svenv.nl\"}\r\n", "{\"errorDetail\":{\"message\":\"Error: image svenv/svenv.nl:latest not found\"},\"error\":\"Error: image svenv/svenv.nl:latest not found\"}\r\n"], "failed": true, "status": ""}
msg: Unrecognized status from pull.
FATAL: all hosts have already failed -- aborting
My question is: how can I build a local Docker image and then start it as a container without Ansible trying to pull it?
You are hitting this error:
https://github.com/ansible/ansible-modules-core/issues/1707
Ansible is attempting to create a container, but the create is failing with:
docker.errors.InvalidVersion: mem_limit has been moved to host_config in API version 1.19
Unfortunately, there is a catch-all except: clause that hides this error. The result is that rather than failing with the above message, Ansible assumes the image is simply missing locally and attempts to pull it.
You can work around this by setting docker_api_version to something earlier than 1.19:
- name: start svenv/svenv.nl container
  docker:
    name: svenv.nl
    ports:
      - 80:80
      - 3306:3306
    image: svenv/svenv.nl
    docker_api_version: 1.18
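On current Ansible releases the old `docker` and `docker_image` modules have been replaced by the `community.docker` collection, where the same two tasks can express "build locally, never pull" directly. A hedged sketch of the equivalent (option names per the collection's docs; paths taken from the playbook above):

```yaml
# Hypothetical equivalent using the maintained community.docker collection:
- name: check or build image
  community.docker.docker_image:
    name: svenv/svenv.nl
    build:
      path: /srv/svenv.nl-docker
    source: build          # build from the local Dockerfile, do not pull

- name: start svenv/svenv.nl container
  community.docker.docker_container:
    name: svenv.nl
    image: svenv/svenv.nl
    pull: false            # use the locally built image as-is
    ports:
      - "80:80"
      - "3306:3306"
```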