Install Metron on Ubuntu 14 fails during Vagrant provisioning - metron

I followed the Metron install steps, but provisioning fails.
Env:
root@ubuntu:~/metron# metron-deployment/scripts/platform-info.sh
Metron 0.7.2
--
* master
--
commit bb9a244d81feb54bc93456310f57a92d63cea38f (HEAD -> master, origin/master, origin/HEAD)
Author: tiborm <tibor.meller@gmail.com>
Date: Sat Oct 12 15:17:40 2019 -0500
METRON-2259 [UI] Hide Resolved and Hide Dismissed toggles not works when filtering is in manual mode (tiborm via sardell) closes apache/metron#1532
--
metron-deployment/development/ubuntu14/Vagrantfile | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--
ansible 2.5.1
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.15+ (default, Oct 7 2019, 17:39:04) [GCC 7.4.0]
--
Vagrant 2.2.6
--
vagrant-hostmanager (1.8.9, global)
--
Python 2.7.15+
--
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T11:41:47-05:00)
Maven home: /root/apache-maven-3.3.9
Java version: 1.8.0_231, vendor: Oracle Corporation
Java home: /usr/lib/jdk1.8.0_231/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "4.15.0-46-generic", arch: "amd64", family: "unix"
--
Docker version 18.09.7, build 2d0083d
--
node
v8.10.0
--
npm
3.5.2
--
g++ (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
--
Compiler is C++11 compliant
--
Linux ubuntu 4.15.0-46-generic #49-Ubuntu SMP Wed Feb 6 09:33:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
--
Total System Memory = 16041 MB
Processor Model: Intel(R) Xeon(R) CPU E7-4820 v4 @ 2.00GHz
Processor Speed: 2000.000 MHz
Total Physical Processors: 4
Total cores: 4
Disk information:
/dev/sda1 493G 24G 448G 6% /
This CPU appears to support virtualization
I ran vagrant --ansible-skip-tags="build" provision because the Ambari download timed out when using vagrant up, and I added the following port forwards to the Vagrantfile:
config.vm.network "forwarded_port", guest: 8080, host: 8080
config.vm.network "forwarded_port", guest: 4201, host: 4201
I get this error:
TASK [java_jdk : Update package cache] *****************************************
changed: [node1]
TASK [java_jdk : Install openjdk] **********************************************
ok: [node1]
TASK [ambari_config : include_vars] ********************************************
ok: [node1]
TASK [ambari_config : include_tasks] *******************************************
TASK [ambari_config : Wait for Ambari to start; http://node1:8080] *************
ok: [node1]
TASK [ambari_config : Deploy cluster with Ambari; http://node1:8080] ***********
[WARNING]: Module did not set no_log for password
ok: [node1]
PLAY [ambari_master] ***********************************************************
TASK [Gathering Facts] *********************************************************
ok: [node1]
TASK [epel : Install EPEL repository] ******************************************
skipping: [node1]
TASK [python-pip : Install Python's pip on Centos] *****************************
skipping: [node1]
TASK [python-pip : Install Python's pip on Ubuntu] *****************************
ok: [node1]
TASK [httplib2 : Install python httplib2 dependency] ***************************
ok: [node1]
TASK [ambari_gather_facts : Ask Ambari: cluster_name] **************************
skipping: [node1]
TASK [ambari_gather_facts : set_fact] ******************************************
skipping: [node1]
TASK [ambari_gather_facts : set_fact] ******************************************
ok: [node1]
TASK [ambari_gather_facts : Ask Ambari: namenode_host] *************************
changed: [node1]
TASK [ambari_gather_facts : set_fact] ******************************************
ok: [node1]
TASK [ambari_gather_facts : Ask Ambari: core_site_tag] *************************
fatal: [node1]: FAILED! => {"changed": true, "cmd": "curl -s -u admin:admin -X GET -H \"X-Requested-By: ambari\" 'http://node1:8080/api/v1/clusters/metron_cluster/hosts/node1/host_components/NAMENODE' | python -c 'import sys, json; print json.load(sys.stdin)[\"HostRoles\"][\"actual_configs\"][\"core-site\"][\"default\"]'", "delta": "0:00:00.137241", "end": "2019-10-23 11:29:26.165880", "msg": "non-zero return code", "rc": 1, "start": "2019-10-23 11:29:26.028639", "stderr": "Traceback (most recent call last):\n File \"<string>\", line 1, in <module>\nKeyError: 'core-site'", "stderr_lines": ["Traceback (most recent call last):", " File \"<string>\", line 1, in <module>", "KeyError: 'core-site'"], "stdout": "", "stdout_lines": []}
to retry, use: --limit @/root/metron/metron-deployment/development/ubuntu14/ansible/playbook.retry
PLAY RECAP *********************************************************************
node1 : ok=90 changed=26 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
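The KeyError shows that, at the moment the playbook polls Ambari, the NAMENODE host component's actual_configs has no core-site entry yet (typically the cluster deploy has not finished pushing configurations). The parse can be reproduced without a running Ambari; the sketch below follows the same JSON path as the failing one-liner, with a .get() chain that degrades to a readable message instead of a traceback:

```python
import json

# actual_configs as Ambari returns it before core-site has been pushed
# to the NAMENODE host component (the situation the traceback indicates)
payload = '{"HostRoles": {"actual_configs": {}}}'

host_roles = json.loads(payload)["HostRoles"]
# The task's one-liner does host_roles["actual_configs"]["core-site"]["default"],
# which raises KeyError: 'core-site'. A .get() chain degrades gracefully:
tag = host_roles.get("actual_configs", {}).get("core-site", {}).get("default")
print(tag if tag is not None else "core-site not yet configured")
```

In practice this usually means HDFS was still being installed when the fact-gathering task ran; re-running the provision once the cluster install at http://node1:8080 has completed generally lets the task find core-site.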

Related

Creating an Ansible playbook to get the Apache status and display just the status

I have written the Ansible playbook below, but the problem is that it shows the status as disabled when I launch the playbook, while checking the server status manually on my Ubuntu server shows it as active.
Please help me find the mistake I am making in the playbook.
Note: my goal is to get the status of the Apache server (is it started or stopped?), and either print it on the screen or append it to a file such as abc.txt, updating that file with the status on every run.
testfile.yml
---
- name: "To check if the apache is running or not"
  hosts: webserver1
  become: yes
  tasks:
    - name: Check Apache2 service status
      service:
        name: apache2
        state: started
      register: apache2_status
    # - name: apache2 status
    #   command: service apache2 status
    #   register: apache2_status
    - name: Print Apache2 service status
      debug:
        msg: "Apache2 service is {{ 'started' if apache2_status.status == 'started' else 'stopped' }}"
I run the playbook with the following command:
ansible-playbook testfile.yml -i inventory --ask-become-pass
output
PLAY [To check if the apache is running or not] ********************************************************************************
TASK [Gathering Facts] *********************************************************************************************************
ok: [webserver1]
TASK [Check Apache2 service status] ********************************************************************************************
ok: [webserver1]
TASK [Print Apache2 service status] ********************************************************************************************
ok: [webserver1] => {
"msg": "Apache2 service is stopped"
}
PLAY RECAP *********************************************************************************************************************
webserver1 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And when I check the same manually on my Ubuntu webserver1:
ubuntu@ubuntu1:~$ service apache2 status
● apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2022-12-31 08:56:30 UTC; 5h 28min ago
Docs: https://httpd.apache.org/docs/2.4/
Process: 632 ExecStart=/usr/sbin/apachectl start (code=exited, status=0/SUCCESS)
Main PID: 708 (apache2)
Tasks: 55 (limit: 1695)
Memory: 7.5M
CPU: 2.724s
CGroup: /system.slice/apache2.service
├─708 /usr/sbin/apache2 -k start
├─713 /usr/sbin/apache2 -k start
└─714 /usr/sbin/apache2 -k start
Dec 31 08:56:29 ubuntu1 systemd[1]: Starting The Apache HTTP Server...
Dec 31 08:56:30 ubuntu1 apachectl[685]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name,>
Dec 31 08:56:30 ubuntu1 systemd[1]: Started The Apache HTTP Server.
I have tried running the playbook with different parameters but have not been able to produce my expected output.
My goal is to get the status of the Apache server (is it started or stopped?), and either print it on the screen or append it to a file such as abc.txt, updating that file with the new status on every run.
For example, given the inventory
shell> cat hosts
[webservers]
webserver1 ansible_host=10.1.0.74
[webservers:vars]
ansible_connection=ssh
ansible_user=admin
ansible_become=yes
ansible_become_user=root
ansible_become_method=sudo
ansible_python_interpreter=/bin/python3.6
The playbook below, in its first task, makes sure the web server is running. The second task collects the status of all services. The third task writes the status to a file on the controller.
- name: Get status of webserver and append the new status to file
  hosts: webservers
  tasks:
    - name: Start lighttpd if not running already
      service:
        name: lighttpd
        state: started
      register: status
    - debug:
        var: status.state
    - name:
      service_facts:
    - debug:
        var: ansible_facts.services['lighttpd.service']
    - lineinfile:
        create: true
        path: /tmp/webservers.status
        line: "{{ '%Y-%m-%d %H:%M:%S'|strftime }} {{ item }} {{ s.state }} {{ s.status }}"
      loop: "{{ ansible_play_hosts }}"
      run_once: true
      delegate_to: localhost
      vars:
        s: "{{ hostvars[item].ansible_facts.services['lighttpd.service'] }}"
gives
PLAY [Get status of webserver and append the new status to file] *****************************
TASK [Start lighttpd if not running already] *************************************************
ok: [webserver1]
TASK [debug] *********************************************************************************
ok: [webserver1] =>
status.state: started
TASK [service_facts] *************************************************************************
ok: [webserver1]
TASK [debug] *********************************************************************************
ok: [webserver1] =>
ansible_facts.services['lighttpd.service']:
name: lighttpd.service
source: systemd
state: running
status: disabled
TASK [lineinfile] ****************************************************************************
changed: [webserver1 -> localhost] => (item=webserver1)
PLAY RECAP ***********************************************************************************
webserver1: ok=5 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
and appends the status of the webserver(s) to the file
shell> cat /tmp/webservers.status
2023-01-01 01:03:15 webserver1 running disabled
2023-01-01 01:03:32 webserver1 running disabled
I'm pretty sure your problem is that you want to check the state, but you are effectively checking whether the service is enabled or not.
(That is, a service can be currently running but not enabled; and even though your service is enabled, that is not what your task asks Ansible for, so it is not collecting that information for you.)
I'd suggest using systemd instead of service, like this: set the state to started, but in check mode. The module won't actually do anything, but it will set "changed" to true if a change would have been made.
In my opinion this is more Ansible-ish than calling a shell command.
(I tested this on Fedora; adjust the service name as you need to.)
---
- hosts: localhost
  tasks:
    - name: check if apache is running
      ansible.builtin.systemd:
        # on Ubuntu, I think this is apache2
        name: httpd
        state: started
      register: httpd_status
      check_mode: true
    - name: debug
      ansible.builtin.debug:
        msg: "Apache2 service is {{ httpd_status.changed | ternary('stopped','started') }}"
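To meet the asker's stated goal of appending the result to abc.txt, the check-mode trick can be combined with the lineinfile-plus-strftime pattern from the earlier answer. A sketch only, assuming the apache2 service name and a hypothetical /tmp/abc.txt path on the controller:

```yaml
- hosts: webserver1
  become: yes
  tasks:
    - name: Check apache2 without changing anything
      ansible.builtin.systemd:
        name: apache2
        state: started
      check_mode: true          # report-only: changed=true means it was stopped
      register: apache2_status
    - name: Append a timestamped status line to a file on the controller
      ansible.builtin.lineinfile:
        create: true
        path: /tmp/abc.txt      # hypothetical output file
        line: "{{ '%Y-%m-%d %H:%M:%S' | strftime }} {{ inventory_hostname }} {{ apache2_status.changed | ternary('stopped', 'started') }}"
      delegate_to: localhost
```

Each run appends a new line, so the file keeps a history of the status rather than only the latest value.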

How to change the default directory docker uses to build an image

I am trying to set up GitLab CI.
Because for some reason I do not have a "gitlab-runner" user and I do not have permission to write to "/home/user_1", this is my installation:
/usr/local/bin/gitlab-runner install --user=user_1 --working-directory=/data/external/tmp/gitlab-runner
And this is how I register
/usr/local/bin/gitlab-runner register --url GITLAB_URL --registration-token TOKEN
I then create this gitlab-ci.yml file:
stages:
  - deploy
deploy:
  stage: deploy
  # only:
  #   - 3.0.x
  script:
    - echo "deploying"
    - sudo docker build -t my_image:v1 .
    - echo "********Docker Images********"
    - sudo docker image list
    - echo "********End of Docker Images********"
    - sudo docker run -d -p 3000:5000 --rm --name my_container my_image:v1
  tags:
    - deploy
I get this error:
Error: error creating build container: Error committing the finished image:
error adding layer with blob "sha256:bb7d5a84853b217ac05783963f12b034243070c1c9c8d2e60ada47444f3cce04":
Error processing tar file(exit status 1):
Error setting up pivot dir: mkdir
/home/user_1/.local/share/containers/storage/overlay/62a747bf1719d2d37fff5670ed40de6900a95743172de1b4434cb019b56f30b4/diff/.pivot_root436648414:
permission denied
I would like to replace /home/user_1/.local/share/containers/storage/overlay/ with another location so that I do not get the permission error.
Any advice on how to do so?
I am using Red Hat Linux.
docker --version reports: podman version 3.2.3
docker info:
server_name:/home/my_user[ 52 ] --> docker info
host:
arch: amd64
buildahVersion: 1.21.3
cgroupControllers: []
cgroupManager: cgroupfs
cgroupVersion: v1
conmon:
package: conmon-2.0.29-1.module+el8.4.0+11822+6cc1e7d7.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.29, commit: ae467a0c8001179d4d0adf4ada381108a893d7ec'
cpus: 8
distribution:
distribution: '"rhel"'
version: "8.4"
eventLogger: file
hostname: server_name
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
uidmap:
- container_id: 0
host_id: 67298
size: 1
kernel: 4.18.0-305.3.1.el8_4.x86_64
linkmode: dynamic
memFree: 1818484736
memTotal: 33444728832
ociRuntime:
name: runc
package: runc-1.0.0-74.rc95.module+el8.4.0+11822+6cc1e7d7.x86_64
path: /usr/bin/runc
version: |-
runc version spec: 1.0.2-dev
go: go1.15.13
libseccomp: 2.5.1
os: linux
remoteSocket:
path: /run/user/67298/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable: /bin/slirp4netns
package: slirp4netns-1.1.8-1.module+el8.4.0+11822+6cc1e7d7.x86_64
version: |-
slirp4netns version 1.1.8
commit: d361001f495417b880f20329121e3aa431a8f90f
libslirp: 4.3.1
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.1
swapFree: 67353165824
swapTotal: 67448598528
uptime: 789h 40m 40.57s (Approximately 32.88 days)
registries:
localhost:
Blocked: false
Insecure: true
Location: localhost
MirrorByDigestOnly: false
Mirrors: []
Prefix: localhost
mkdcvtmaapp01:
Blocked: false
Insecure: true
Location: server_name
MirrorByDigestOnly: false
Mirrors: []
Prefix: server_name
search:
- registry.access.redhat.com
- registry.redhat.io
- docker.io
store:
configFile: /home/my_user/.config/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.mount_program:
Executable: /bin/fuse-overlayfs
Package: fuse-overlayfs-1.6-1.module+el8.4.0+11822+6cc1e7d7.x86_64
Version: |-
fusermount3 version: 3.2.1
fuse-overlayfs: version 1.6
FUSE library version 3.2.1
using FUSE kernel interface version 7.26
graphRoot: /home/my_user/.local/share/containers/storage
graphStatus:
Backing Filesystem: nfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 0
runRoot: /run/user/67298/containers
volumePath: /home/my_user/.local/share/containers/storage/volumes
version:
APIVersion: 3.2.3
Built: 1627570963
BuiltTime: Thu Jul 29 11:02:43 2021
GitCommit: ""
GoVersion: go1.15.7
OsArch: linux/amd64
Version: 3.2.3
I have also tried these variables in my GitLab CI, but it did not work:
deploy:
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TMP: /data/external/tmp_docker_build
    TMPDIR: /data/external/tmp_docker_build
I also ran chmod 777 on .local, share, containers, storage, and overlay along the path /home/user_1/.local/share/containers/storage/overlay/, but it is still not working.
I did not know about this before either. Apparently you can set the data directory used by the Docker daemon by adding -g /path/to/dir to the docker daemon command.
For example, by adding -g to DOCKER_OPTS in /etc/default/docker on Ubuntu or Debian systems:
DOCKER_OPTS="-dns 8.8.8.8 -dns 8.8.4.4 -g /data/external/docker"
My source is https://forums.docker.com/t/how-do-i-change-the-docker-image-installation-directory/1169 - there is also a note about how this is done on Fedora or CentOS:
edit /etc/sysconfig/docker and add the -g option to the other_args variable, e.g. other_args="-g /var/lib/testdir". If there is more than one option, make sure you enclose them in " ". After a restart (service docker restart), Docker should use the new directory.
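One caveat about the answer above: docker --version here reports podman 3.2.3, so the Docker daemon's -g/DOCKER_OPTS flag does not apply. Rootless Podman takes its storage root from the graphroot setting in the storage.conf file listed in the docker info output. A sketch of the change, with a hypothetical target directory; images already stored under the old path are not migrated automatically:

```toml
# ~/.config/containers/storage.conf (the configFile path shown in docker info)
[storage]
driver = "overlay"
# hypothetical directory the CI user can write to
graphroot = "/data/external/containers/storage"
```

After editing, new podman invocations should store image layers under the new graphroot; previously pulled images would need to be pulled or built again.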

connect() got an unexpected keyword argument 'sock'

I'm getting the error below when trying to connect to routers, and I'm running the latest version of Paramiko. What could be causing this error, and is there any workaround?
Update:
It is a simple script using Ansible to collect the output of the "show system uptime" command.
I have an inventory file which contains the IP addresses of the routers.
$ cat uptime.yaml
---
- name: Get device uptime
  hosts:
    - all
  connection: local
  gather_facts: no
  vars_prompt:
    - name: username
      prompt: UsernName
      private: no
    - name: password
      prompt: Password
      private: yes
  tasks:
    - name: get uptime using ansible moduel
      junos_command:
        commands:
          - show system uptime
        provider:
          host: "{{ansible_host}}"
          port: 22
          username: "{{ username }}"
          password: "{{ password }}"
>>> print(paramiko.__version__)
2.7.2
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]. This feature
will be removed from ansible-core in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
UsernName: regress
Password:
PLAY [Get device uptime] ************************************************************************************************************************************************************************
TASK [get uptime using ansible moduel] **********************************************************************************************************************************************************
[WARNING]: ['connection local support for this module is deprecated and will be removed in version 2.14, use connection ansible.netcommon.netconf']
fatal: [R1_re0]: FAILED! => {"changed": false, "msg": "connect() got an unexpected keyword argument 'sock'"}
fatal: [R2_re0]: FAILED! => {"changed": false, "msg": "connect() got an unexpected keyword argument 'sock'"}
PLAY RECAP **************************************************************************************************************************************************************************************
R1_re0 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
R2_re0 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

Why does the ansible remote copy not work from local

I just started learning Ansible and am writing my first playbook using the copy module, but I am clueless as to why it is not working.
---
- hosts: osa
  tasks:
    - name: Copying file to remote host
      copy: src=/tmp/foo.txt dest=/tmp/bar.txt
I ran the playbook, but nothing happened and there was no error either.
# /usr/bin/ansible-playbook /home/spatel/ansible/first-playbook.yml
PLAY [osa] **************************************************************************************************************************************************************************
TASK [Gathering Facts] **************************************************************************************************************************************************************
ok: [10.5.1.160]
TASK [Copying file to remote host] **************************************************************************************************************************************************
ok: [10.5.1.160]
PLAY RECAP **************************************************************************************************************************************************************************
10.5.1.160 : ok=2 changed=0 unreachable=0 failed=0
My /etc/ansible/hosts file has Remote host IP address:
[osa]
10.5.1.160
Verbose output:
# ansible-playbook -vv first-playbook.yml
ansible-playbook 2.4.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 2.7.5 (default, Nov 6 2016, 00:28:07) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]
Using /etc/ansible/ansible.cfg as config file
PLAYBOOK: first-playbook.yml ********************************************************************************************************************************************************
1 plays in first-playbook.yml
PLAY [osa] **************************************************************************************************************************************************************************
META: ran handlers
TASK [Copying file to remote host] **************************************************************************************************************************************************
task path: /home/spatel/ansible/first-playbook.yml:5
ok: [10.5.1.160] => {"changed": false, "checksum": "4383f040dc0303e55260bca327cc0eeb213b04b5", "failed": false, "gid": 0, "group": "root", "mode": "0644", "owner": "root", "path": "/tmp/bar.txt", "secontext": "unconfined_u:object_r:admin_home_t:s0", "size": 8, "state": "file", "uid": 0}
META: ran handlers
META: ran handlers
PLAY RECAP **************************************************************************************************************************************************************************
10.5.1.160 : ok=1 changed=0 unreachable=0 failed=0
UPDATE
I have noticed it is copying the file on localhost, not the remote host. This makes no sense: why does the output say it is executing the playbook on the remote host (10.5.1.160)?
# ls -l /tmp/bar.txt
-rw-r--r--. 1 root root 16 May 24 10:45 /tmp/bar.txt
I had a similar problem where it was looking for the file on the remote server. You are running your tasks on the remote server (hosts: osa), but the file is on localhost. I could solve it with delegate_to: localhost.
---
- hosts: osa
  tasks:
    - name: Copying file to remote host
      delegate_to: localhost
      copy: src=/tmp/foo.txt dest=/tmp/bar.txt
However, I first had to add a task to read the files on localhost and then provide them to the remote host afterwards:
- name: read files
  # This needs to run on localhost, because that's where
  # the keys are stored.
  delegate_to: localhost
  command: cat {{ item }}
  # Register the results of this task in a variable called
  # "files"
  register: files
  with_fileglob:
    - "/tmp/*.txt"
- name: show what was stored in the files variable
  debug:
    var: files
- name: Copying file to remote host
  delegate_to: localhost
  # item.stdout is the file's content, not a path, so use content= rather
  # than src=; the loop iterates over the registered "files" variable
  copy: content="{{ item.stdout }}" dest=/tmp/bar.txt
  with_items: "{{ files.results }}"

docker swarm create fails when tar can't find container file system

I'm running docker on a CentOS VM. Some version information:
Linux cmodqa.lab.c-cor.com 3.10.0-229.4.2.el7.x86_64 #1 SMP Wed May 13 10:06:09 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[root@xxx ~]# docker version
Client version: 1.6.0
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 8aae715/1.6.0
OS/Arch (client): linux/amd64
Server version: 1.6.0
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 8aae715/1.6.0
OS/Arch (server): linux/amd64
Ran this command, as root:
TOKEN=$(docker run --rm swarm create)
Returns:
Timestamp: 2015-05-29 09:26:25.967347311 -0700 PDT
Code: System error
Message: [/usr/bin/tar -cf /var/lib/docker/tmp/c77446605e81944d4fb0d09a68339d2026db2b2af100/_tmp.tar -C /var/lib/docker/devicemapper/mnt/c77446605e81944d4fb0d09a68339d2026db2b2afs/tmp .] failed: /usr/bin/tar: /var/lib/docker/devicemapper/mnt/c77446605e81944d4fb0d09a6cb119e60ff/rootfs/tmp: Cannot chdir: No such file or directory
/usr/bin/tar: Error is not recoverable: exiting now
: exit status 2
Frames:
0: setupRootfs
Package: github.com/docker/libcontainer
File: rootfs_linux.go#30
1: Init
Package: github.com/docker/libcontainer.(*linuxStandardInit)
File: standard_init_linux.go#52
2: StartInitialization
Package: github.com/docker/libcontainer.(*LinuxFactory)
File: factory_linux.go#223
3: initializer
Package: github.com/docker/docker/daemon/execdriver/native
File: init.go#35
4: Init
Package: github.com/docker/docker/pkg/reexec
File: reexec.go#26
5: main
Package: main
File: docker.go#29
6: main
Package: runtime
File: proc.go#63
7: goexit
Package: runtime
File: asm_amd64.s#2232
time="2015-05-29T09:26:27-07:00" level=fatal msg="Error response from daemon: : exit stat
The file system location the tar command is trying to read from doesn't exist:
[root@cmodqa system]# ls -l /var/lib/docker/devicemapper/mnt/c77446605e81944d4fb0d09a68339d2026db2b2af1335a8a6395b1cb119e60ff/rootfs/tmp
ls: cannot access /var/lib/docker/devicemapper/mnt/c77446605e81944d4fb0d09a68339d2026db2b2af1335a8a6395b1cb119e60ff/rootfs/tmp: No such file or directory
In fact:
ls -l /var/lib/docker/devicemapper/mnt/c77446605e81944d4fb0d09a68339d2026db2b2af1335a8a6395b1cb119e60ff
total 0
The rootfs for the container doesn't seem to be there. (Does it disappear after the container stops?)
I've run this a few times. Same result.
I did some further digging in Docker's repository on GitHub.
This is a known issue, apparently rooted in Red Hat's packaging of Docker, and it affects more than Swarm.
A bug was filed with Red Hat:
https://bugzilla.redhat.com/show_bug.cgi?id=1213258
Use Docker 1.5.0 to work around this situation.
