How to apply changes to a Linux user's group assignments inside a local Ansible playbook?

I'm trying to install Docker and create a Docker container within a local Ansible playbook containing multiple plays, adding the user to the docker group in between:
- hosts: localhost
  connection: local
  become: yes
  gather_facts: no
  tasks:
    - name: install docker
      ansible.builtin.apt:
        update_cache: yes
        pkg:
          - docker.io
          - python3-docker
    - name: Add current user to docker group
      ansible.builtin.user:
        name: "{{ lookup('env', 'USER') }}"
        append: yes
        groups: docker
    - name: Ensure that docker service is running
      ansible.builtin.service:
        name: docker
        state: started

- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Create docker container
      community.docker.docker_container:
        image: ...
        name: ...
When executing this playbook with ansible-playbook I'm getting a permission denied error at the "Create docker container" task. Rebooting and calling the playbook again resolves the error.
I have tried manually executing some of the commands suggested here and then executing the playbook again, which works, but I'd like to do everything from within the playbook.
Adding a task like
- name: allow user changes to take effect
  ansible.builtin.shell:
    cmd: exec sg docker newgrp `id -gn`
does not work.
How can I refresh the Linux user group assignments from within the playbook?
I'm on Ubuntu 18.04.
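One workaround to consider (a minimal sketch of my own, not a confirmed answer: it sidesteps refreshing the group entirely) is to run the second play with become: yes, since root does not need docker group membership to reach the Docker socket:

- hosts: localhost
  connection: local
  become: yes   # root talks to the Docker socket directly, so the fresh group membership is irrelevant
  gather_facts: no
  tasks:
    - name: Create docker container
      community.docker.docker_container:
        image: ...
        name: ...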

Related

Connecting via ssh when building an app with Jenkins

Please help. I'm using Jenkins to build a project that runs in a Docker container and I've run into a problem.
When executing this piece of code:
stage('deploy front') {
    when { equals expected: 'do', actual: buildFront }
    agent { docker { image 'ebiwd/alpine-ssh' } }
    steps {
        sh 'chmod 400 .iac/privatekey'
        sh "ssh -i .iac/privatekey ci_user@134.209.181.163"
    }
}
I get an error:
+ ssh -i .iac/privatekey ci_user@134.209.181.163
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added '134.209.181.163' (ECDSA) to the list of known hosts.
bind: No such file or directory
unix_listener: cannot bind to path:
/root/.ssh/sockets/ci_user@134.209.181.163-22.uzumQ42Zb6Tcr2E9
Moreover, if you run the same command manually inside the container, everything works:
ssh -i .iac/privatekey ci_user@134.209.181.163
The Jenkins container is started with this docker-compose.yaml:
version: '3.1'
services:
  jenkins:
    image: jenkins/jenkins:2.277.1-lts
    container_name: jenkins
    hostname: jenkins
    restart: always
    user: root
    privileged: true
    ports:
      - 172.17.0.1:8070:8080
      - 50000:50000
    volumes:
      - /opt/docker/jenkins/home:/var/jenkins_home
      - /etc/timezone:/etc/timezone
      - /usr/bin/docker:/usr/bin/docker
      - /etc/localtime:/etc/localtime
      - /var/run/docker.sock:/var/run/docker.sock
What could be the problem?
I have the same error in my GitLab pipelines:
bind: No such file or directory
unix_listener: cannot bind to path: /root/.ssh/sockets/aap_adm@wp-np2-26.ebi.ac.uk-22.LIXMnQy4cW5klzgB
lost connection
I think the error is related to this changeset. In particular, the ssh config file requires the path "~/.ssh/sockets" to be present. Since we are not using the script /usr/local/bin/add-ssh-key (a custom script created for that image), this path is missing.
I've opened an issue in the image project: Error using the image in CI/CD pipelines #10.
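If pinning the image is not an option, a possible workaround based on that diagnosis (my own sketch, assuming the build container runs as root so ~/.ssh resolves to /root/.ssh) is to create the missing directory before calling ssh:

steps {
    // create the ControlPath directory that the image's ssh config expects
    sh 'mkdir -p /root/.ssh/sockets'
    sh 'chmod 400 .iac/privatekey'
    sh 'ssh -i .iac/privatekey ci_user@134.209.181.163'
}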
The problem was at this place:
agent { docker { image 'ebiwd/alpine-ssh' } }
When I pinned it to the previous version, like:
agent { docker { image 'ebiwd/alpine-ssh:3.13' } }
everything started working.

Couldn't connect to Docker daemon

I am new to Docker and CI/CD.
I am using a VPS with Ubuntu 18.04.
The project's Docker setup runs locally and works fine.
I don't quite understand why the server is trying to reach the Docker daemon over http rather than tcp.
override.conf:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
docker service status: (screenshot omitted)
daemon.json:
{ "storage-driver": "overlay" }
gitlab-ci.yml:
image: docker/compose:latest
services:
  - docker:dind
stages:
  - deploy
variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
deploy:
  stage: deploy
  only:
    - master
  tags:
    - deployment
  script:
    # - export DOCKER_HOST="tcp://127.0.0.1:2375"
    - docker-compose stop || true
    - docker-compose up -d
    - docker ps
  environment:
    name: production
Error: (screenshot omitted; the deploy job fails with "Couldn't connect to Docker daemon")
Set the DOCKER_HOST variable. When using the docker:dind service, the default hostname for the daemon is the name of the service, docker.
variables:
  DOCKER_HOST: "tcp://docker:2375"
You must also have set up your GitLab runner to allow privileged containers.
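For reference, a sketch of the relevant setting in the runner's config.toml (the other fields are illustrative):

[[runners]]
  executor = "docker"
  [runners.docker]
    privileged = true   # required for the docker:dind service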
Docker needs root permissions to be accessed. If you want to run docker or docker-compose commands as a regular user, you need to add your user to the docker group, like:
sudo usermod -a -G docker yourUserName
By doing that, you can bring up your services and do other Docker work with your regular user. If you don't want to add your user to the docker group, you need to prefix every docker command with sudo:
sudo docker-compose up -d
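Note that the group change only takes effect on a new login session; a common way to pick it up in the current terminal (general Linux behavior, not specific to this answer) is:

newgrp docker   # start a subshell with the docker group active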

bitnami consul cannot access file or directory using docker desktop volume mount

Running Consul with Docker Desktop using Windows containers and experimental mode turned on works well. However, if I try mounting Bitnami Consul's data file to a local volume mount, I get the following error:
chown: cannot access '/bitnami/consul'
My compose file looks like this:
version: "3.7"
services:
consul:
image: bitnami/consul:latest
volumes:
- ${USERPROFILE}\DockerVolumes\consul:/bitnami
ports:
- '8300:8300'
- '8301:8301'
- '8301:8301/udp'
- '8500:8500'
- '8600:8600'
- '8600:8600/udp'
networks:
nat:
aliases:
- consul
If I remove the volumes part, everything works just fine, but I cannot persist my data. I followed the instructions in the readme file. They speak of having the proper permissions, but I do not know how to get that to work using Docker Desktop.
Side note
If I do not mount /bitnami but /bitnami/consul, I get the following error:
2020-03-30T14:59:00.327Z [ERROR] agent: Error starting agent: error="Failed to start Consul server: Failed to start Raft: invalid argument"
Another option is to edit the docker-compose.yaml to deploy the consul container as root by adding the user: root directive:
version: "3.7"
services:
consul:
image: bitnami/consul:latest
user: root
volumes:
- ${USERPROFILE}\DockerVolumes\consul:/bitnami
ports:
- '8300:8300'
- '8301:8301'
- '8301:8301/udp'
- '8500:8500'
- '8600:8600'
- '8600:8600/udp'
networks:
nat:
aliases:
- consul
Without user: root the container is executed as non-root (user 1001):
▶ docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                                                                                                                              NAMES
0c590d7df611   bitnami/consul:1   "/opt/bitnami/script…"   4 seconds ago   Up 3 seconds   0.0.0.0:8300-8301->8300-8301/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8301->8301/udp, 0.0.0.0:8600->8600/tcp, 0.0.0.0:8600->8600/udp   bitnami-docker-consul_consul_1
▶ dcexec 0c590d7df611
I have no name!@0c590d7df611:/$ whoami
whoami: cannot find name for user ID 1001
But adding this line the container is executed as root:
▶ docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS         PORTS                                                                                                                              NAMES
ac206b56f57b   bitnami/consul:1   "/opt/bitnami/script…"   5 seconds ago   Up 4 seconds   0.0.0.0:8300-8301->8300-8301/tcp, 0.0.0.0:8500->8500/tcp, 0.0.0.0:8301->8301/udp, 0.0.0.0:8600->8600/tcp, 0.0.0.0:8600->8600/udp   bitnami-docker-consul_consul_1
▶ dcexec ac206b56f57b
root@ac206b56f57b:/# whoami
root
If the container is executed as root there shouldn't be any issue with the permissions in the host volume.
The Consul container is a non-root container; in such cases, the non-root user must be able to write to the volume.
When using host directories as volumes, you need to ensure that the directory you are mounting into the container has the proper permissions, in this case write permission for others. You can modify the permissions by running sudo chmod o+wx ${USERPROFILE}\DockerVolumes\consul (or the correct path to the host directory); the execute bit is needed so the container user can traverse into the directory.
This local folder is created the first time you run docker-compose up, or you can create it yourself with mkdir. Once created (manually or automatically) you should give it the proper permissions with chmod.
I am not familiar with Docker Desktop or Windows environments, but you should be able to perform the equivalent actions using a CLI.
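On a Linux host those two steps would look like the following sketch (the path is a placeholder for wherever the volume should live):

mkdir -p ~/DockerVolumes/consul          # create the host directory backing the volume
sudo chmod o+wx ~/DockerVolumes/consul   # let the container's non-root user (1001) write and traverse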

install from remote location in ansible

I am new to Ansible. I am trying to fetch a setup file from a remote server, copy it onto my Mac, and then run it if necessary. Here is my playbook. I tried get_url because I am running Linux in a VirtualBox VM on my Mac, so I have Ansible on the Mac and Linux in the VM; that way I can give commands on Linux and not have to worry about macOS syntax. This is the error Ansible is showing me, so please help in resolving it. Am I using the right command, and if not, what can I do?
- name: download file
  hosts: linux
  user: root
  vars_prompt:
    - name: smb_username
      prompt: "Enter smb share username"
    - name: smb_password
      prompt: "Enter smb share password"
      private: yes
  tasks:
    - name: download file
      command: smbclient "Actual url" {{ smb_password }} -U {{ smb_username }} -c "recurse;lcd /local/path;get archive.zip" creates=/local/path/archive.zip*
This isn't a playbook; playbooks start with:
---
- hosts:
    - hostA
  tasks:
    - name: ...
      get_url: ...
Ansible has example playbooks, and one for get_url in particular: https://github.com/ansible/ansible-examples/blob/master/language_features/get_url.yml
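For illustration, a minimal get_url play might look like this (a sketch only: the URL and paths are placeholders, and note that get_url handles HTTP/HTTPS/FTP sources, not SMB shares like the smbclient call in the question):

---
- hosts: linux
  tasks:
    - name: download file
      ansible.builtin.get_url:
        url: https://example.com/archive.zip   # placeholder URL
        dest: /local/path/archive.zip          # same destination as in the question
        mode: '0644'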

Running Forever in Ansible Provision Never Fires or Always Hangs

I've come across a problem with Ansible hanging when trying to start a forever process on an Ansible node. I have a very simple API server I'm creating in Vagrant and provisioning with Ansible like so:
---
- hosts: all
  sudo: yes
  roles:
    - Stouts.nodejs
    - Stouts.mongodb
  tasks:
    - name: Install Make Dependencies
      apt: name={{ item }} state=present
      with_items:
        - gcc
        - make
        - build-essential
    - name: Run NPM Update
      shell: /usr/bin/npm update
    - name: Create MongoDB Database Folder
      shell: /bin/mkdir -p /data/db
      notify:
        - mongodb restart
    - name: Generate Dummy Data
      command: /usr/bin/node /vagrant/dataGen.js
    - name: "Install forever (to run Node.js app)."
      npm: name=forever global=yes state=latest
    - name: "Check list of Node.js apps running."
      command: /usr/bin/forever list
      register: forever_list
      changed_when: false
    - name: "Start example Node.js app."
      command: /usr/bin/forever start /vagrant/server.js
      when: "forever_list.stdout.find('/vagrant/server.js') == -1"
But even though Ansible acts like everything is fine, no forever process is started. When I change a few lines to remove the when: statement and force the task to run, Ansible just hangs, possibly running the forever process but never returning, and the VM never comes up to where I can interact with it.
I've referenced essentially the only two sources I can find online.
As stated in the comments, the variable's content needs to be included in your question for anyone to provide a definitive answer, but to overcome this I suggest you do it like so:
- name: "Check list of Node.js apps running."
  shell: /usr/bin/forever list | grep '/vagrant/server.js' | wc -l
  register: forever_list
  changed_when: false
- name: "Start example Node.js app."
  command: /usr/bin/forever start /vagrant/server.js
  when: forever_list.stdout == "0"
which should prevent Ansible from starting the JS app if it's already running. (Note the switch from command to shell for the first task: the command module does not run through a shell, so the pipes would not work there.)
