Why is Ansible task being skipped? - ubuntu-14.04

I have 2 ansible tasks that I am trying to run in a CIS hardening script on an Ubuntu 14.04 Server.
The first task is:
- name: 8.1.12 Collect Use of Privileged Commands (Scored)
  shell: /usr/bin/find {/usr/local/sbin,/usr/local/bin,/sbin,/bin,/usr/sbin,/usr/bin} -xdev \( -perm -4000 -o -perm -2000 \) -type f | awk '{print "-a always,exit -F path=" $1 " -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged" }'
  register: privileged_programs
  tags:
    - scored
    - section8.1.12
This is supposed to register a list of privileged programs to be used in the next task. If I copy the command above onto the Ubuntu VM and run it, I get a long list of programs just like I should.
The second task is this:
- name: 8.1.12 Collect Use of Privileged Commands (Scored)
  lineinfile: dest=/etc/audit/audit.rules line="{{item}}" insertafter=EOF state=present
  with_items: privileged_programs.stdout_lines
  when: privileged_programs is defined and privileged_programs.stdout_lines|length > 0
  notify: restart auditd
  tags:
    - scored
    - section8.1.12
It should fire if any results are registered but so far I have not been able to get it to run. It is skipped every time I try to run the 2 tasks. I am assuming that the privileged_programs variable is not being stored or passed correctly.
Note: I tried changing the first task from shell to command, but I then got the error "stderr: /usr/bin/find: paths must precede expression".
Note 2: I also checked /etc/audit/audit.rules and verified that the privileged programs are not contained therein yet.
Edit: I added a debug in between the two tasks to output var=privileged_programs. Here is part of it that I think may indicate part of the issue:
"stderr": "/usr/bin/find: `{/usr/local/sbin,/usr/local/bin,/sbin,/bin,/usr/sbin,/usr/bin}': No such file or directory",
"stdout": "",
"stdout_lines": [],
"warnings": []
Anyone know why this would be?
Thanks in advance!

The Bourne-style /bin/sh that Ansible's shell module uses by default has an issue with that syntax: it does not do {a,b,c} brace expansion, which works fine in Bash. I got it working. Try the following syntax, listing the paths explicitly:
shell: /usr/bin/find /usr/local/sbin /usr/local/bin /sbin /bin /usr/sbin /usr/bin
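Put back into the playbook, the first task would then look something like this (a sketch of the task from the question with only the brace expansion removed; everything else is unchanged):
- name: 8.1.12 Collect Use of Privileged Commands (Scored)
  shell: /usr/bin/find /usr/local/sbin /usr/local/bin /sbin /bin /usr/sbin /usr/bin -xdev \( -perm -4000 -o -perm -2000 \) -type f | awk '{print "-a always,exit -F path=" $1 " -F perm=x -F auid>=500 -F auid!=4294967295 -k privileged" }'
  register: privileged_programs
  tags:
    - scored
    - section8.1.12
Alternatively, it should also be possible to keep the braces and point the shell module at Bash by appending executable=/bin/bash to the shell line, since brace expansion is a Bash feature rather than a POSIX sh one.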

I am not sure if this is the case, but the Ansible documentation states that if two handler tasks have the same name, only one will run. Try changing the second task's name.

My Ansible task skipping was related to a when: condition not matching the hostname, e.g.
- set_fact:
    efs_mount_target: "10.10.10.1"
  when: ansible_hostname == 'server-01'


udev rule doesn't trigger GUI application

I am able to get this udev rule in 99-monitor-hotplug.rules to trigger:
ACTION=="change", SUBSYSTEM=="drm", ENV{HOTPLUG}=="1",
RUN+="/usr/local/bin/monitor-hotplug.sh"
But I cannot seem to get it to trigger an OpenCV GUI application in the monitor-hotplug.sh script.
I understand that, fundamentally, the udev rule runs as root, but no matter what syntax I try I cannot get the application to run at the user level (the actual script that runs the application works fine on its own).
In RUN I have tried this format:
su - your_X_user_here -c 'export DISPLAY=:0; bash -c "/path/to/script.sh"'
with this script:
#!/bin/bash
#sleep 5
date >> /var/log/opencvlog.log
cd ~/Downloads
./displayimage /home/<username>/Pictures/picture.png >/var/log/application.log 2>&1
Another attempt: adding this to the current syntax in 99-monitor-hotplug.rules:
ACTION=="change", SUBSYSTEM=="drm", ENV{DISPLAY}=":0",
ENV{XAUTHORITY}="/home/<username>/.Xauthority" ENV{HOTPLUG}=="1",
RUN+="/usr/local/bin/monitor-hotplug.sh"
then in the actual script:
export DISPLAY=:0
export XAUTHORITY=/home/<username>/.Xauthority
cd ~/Downloads
date
./displayimage /home/<username>/Pictures/picture.png
None of this is working; any thoughts on how to get this to work?
Thanks
When using display managers like gdm the current X authority file might not be in the user home directory, but in runtime directories like /run or /var/run.
You may try something like:
USER=<username>
export XAUTHORITY=$(find /var/run/gdm3/ -type f -path "*${USER}*" 2> /dev/null)
Newer gdm versions seem to put the file in a more generic location:
export XAUTHORITY=$(find /run/user/$(id -u "$USER")/ -name Xauthority 2> /dev/null)
I used this technique to call xrandr to adjust the screen resolution from a udev rule:
https://git.ao2.it/libam7xxx.git/blob/HEAD:/contrib/am7xxx-autodisplay.sh
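Putting those pieces together, a minimal monitor-hotplug.sh could look like the sketch below; the user name and the application paths are placeholders, not taken from the original post:
#!/bin/bash
# Run the OpenCV GUI application as the desktop user from the (root) udev context.
X_USER=youruser   # placeholder: the user that owns the X session
export DISPLAY=:0
# gdm keeps the X authority file outside the home directory; look it up at runtime.
export XAUTHORITY=$(find /run/user/$(id -u "$X_USER")/ -name Xauthority 2>/dev/null)
[ -n "$XAUTHORITY" ] || export XAUTHORITY=$(find /var/run/gdm3/ -type f -path "*${X_USER}*" 2>/dev/null)
date >> /var/log/opencvlog.log
# Drop to the user and launch the application on their display.
su - "$X_USER" -c "DISPLAY=$DISPLAY XAUTHORITY=$XAUTHORITY /home/$X_USER/Downloads/displayimage /home/$X_USER/Pictures/picture.png" >> /var/log/opencvlog.log 2>&1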

crontab bash script not running

I updated the script with the absolute paths. Also, here is my current cronjob entry.
I went and fixed the ssh key issue, so I know it works now, but I might still need to tell rsync what key to use.
The script runs fine when called manually by the user. It looks like not even the rm commands are being executed by the cron job.
UPDATE
I updated my script, but basically it's the same as the one below. Below I have a new cron time and added an error output.
I get nothing. It looks like the script doesn't even run.
crontab -e
35 0 * * * /bin/bash /x/y/z/s/script.sh 2>1 > /tmp/tc.log
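(Side note: as written, 2>1 sends stderr to a file literally named 1, so errors never reach the log; to capture both streams in /tmp/tc.log the entry would need to look more like this:)
35 0 * * * /bin/bash /x/y/z/s/script.sh > /tmp/tc.log 2>&1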
#!/bin/bash
# Clean up
/bin/rm -rf /z/y/z/a/b/current/*
cd /z/y/z/a/to/
/bin/rm -rf ?s??/D????
cd /z/y/z/s/
# Find the latest file
FILE=`/usr/bin/ssh user@server /bin/ls -ht /x/y/z/t/a/ | /usr/bin/head -n 1`
# Copy over the latest archive and place it in the proper directory
/usr/bin/rsync -avz -e /usr/bin/ssh user@server:"/x/y/z/t/a/$FILE" /x/y/z/t/a/
# Unzip the zip file and place it in the proper directory
/usr/bin/unzip -o /x/y/z/t/a/$FILE -d /x/y/z/t/a/current/
# Run Dev's script
cd /x/y/z/t/
./old.py a/current/ t/ 5
Thanks for the help.
I figured it out: I'm used to working in CST and the server was in GMT time.
Thanks everybody for the help.
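For anyone hitting the same mismatch, a quick way to confirm which timezone cron will use (tools and log paths vary slightly by distribution):
date                        # the local time as the server sees it
cat /etc/timezone           # the configured timezone on Debian/Ubuntu
grep CRON /var/log/syslog   # shows when cron actually fired the job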

bash: cd: No such file or directory

I'm writing a bash function to jump into my last edited folder.
In my example, the last edited folder is titled 'daniel'.
The bash function looks fine.
>>:~$ echo $(ls -d -1dt -- */ | head -n 1)
daniel/
And I can manually cd into the directory.
>>:~$ cd daniel
>>:~/daniel$
But I can't use the bash function to cd into the directory.
>>:~$ cd $(ls -d -1dt -- */ | head -n 1)
bash: cd: daniel/: No such file or directory
Turns out someone added alias ls='ls --color' to the bashrc of this server. My function works once the alias is removed. – Daniel Tan
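If you can't change the server's bashrc, bypassing the alias inside the command substitution also works, since ls --color forces colour escape codes into the captured path; a sketch:
cd "$(command ls -1dt -- */ | head -n 1)"   # "command" skips the ls alias
cd "$(\ls -1dt -- */ | head -n 1)"          # a leading backslash does the same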
This error is usually thrown when you enter a path that does not exist. See -bash: cd: Desktop: No such file or directory.
But the output of $(ls -d -1dt -- */ | head -n 1) is not wrong. Thus the reason must be the different usage of sh and bash at that moment.
In my case, I had a docker container with that error when I accessed the folder with bash. The container was broken since I had force-closed it after a docker-compose up that did not work. After that, on the existing containers, I could only use sh, not bash. I found this because of OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: "bash": executable file not found in $PATH": unknown. I guess that bash is loaded later than sh, and that after an early error at the start of the container, only sh gets loaded.
That would fit, since you are in sh, which can be seen from >>. Using sh, everything will work as expected. But the expression gets resolved by bash, which is probably not loaded for whatever reason.
In docker, using docker-compose, I also had a similar error saying sh: 1: cd: can't cd to /root/MYPROJECT. That could be solved by mounting the needed volumes in the services using
services:
  host:
    volumes:
      - ~/MYPROJECT:/MYPROJECT # ~/path/on/host:/path/on/container
See Mount a volume in docker-compose. How is it done? and How to mount a host directory with docker-compose? or the official docs.

CoreOS - Cloud-Config not saving file

I'm trying to write an "initial" cloud-config file that does a bit of setup before my default cloud-config file replaces it and takes over. This is what it looks like; however, whenever it runs clustersetup.service, it can't find the clustersetup.sh file that was supposed to have been saved. Of course, if I run this from a terminal it works just fine. What am I doing wrong?
#cloud-config
coreos:
  etcd:
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  fleet:
    public-ip: $private_ipv4
  units:
    - name: clustersetup.service
      command: start
      content: |
        [Unit]
        Description=Cluster Setup
        [Service]
        ExecStartPre=/usr/bin/wget -q http://10.0.2.2:8080/clustersetup.sh -O ~/clustersetup.sh
        ExecStart=/usr/bin/bash ~/clustersetup.sh
        ExecStop=/usr/bin/bash
Paths specified by systemd cannot be relative. Try this again specifying the full path /home/core/clustersetup.sh.
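For example, the [Service] section could use absolute paths like this (a sketch, assuming the default core user's home directory):
[Service]
ExecStartPre=/usr/bin/wget -q http://10.0.2.2:8080/clustersetup.sh -O /home/core/clustersetup.sh
ExecStart=/usr/bin/bash /home/core/clustersetup.sh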
In my distribution (ubuntu), bash is in /bin. One thing you could do is:
ExecStartPre=/bin/bash -c '/usr/bin/wget -q http://10.0.2.2:8080/clustersetup.sh -O ~/clustersetup.sh'
ExecStart=/bin/bash -c '~/clustersetup.sh'
I think you will get the proper expansion of the ~ when pushing it through the shell. However, ~ will expand to the home directory of whichever user the service runs as (I don't know for certain that is core). If you wanted to be sure, you could:
ExecStartPre=/bin/bash -c '/usr/bin/wget -q http://10.0.2.2:8080/clustersetup.sh -O ~core/clustersetup.sh'
ExecStart=/bin/bash -c '~core/clustersetup.sh'
I haven't tested this. I agree with @Brian in that the explicit path would be a better idea. In general it is best not to get a shell involved with execution.

How to Free Inode Usage?

I have a disk drive where the inode usage is 100% (using df -i command).
However, even after deleting a substantial number of files, the usage remains at 100%.
What's the correct way to do it then?
How is it possible that a disk with less disk space usage can have higher inode usage than a disk with higher disk space usage?
If I zip up a lot of files, would that reduce the used inode count?
If you are very unlucky, you have used about 100% of all inodes and can't even create the script.
You can check this with df -ih.
Then this bash command may help you:
sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
And yes, this will take time, but you can locate the directory with the most files.
It's quite easy for a disk to have a large number of inodes used even if the disk is not very full.
An inode is allocated to a file so, if you have gazillions of files, all 1 byte each, you'll run out of inodes long before you run out of disk.
It's also possible that deleting files will not reduce the inode count if the files have multiple hard links. As I said, inodes belong to the file, not the directory entry. If a file has two directory entries linked to it, deleting one will not free the inode.
Additionally, you can delete a directory entry but, if a running process still has the file open, the inode won't be freed.
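A quick way to see the hard-link behaviour described above (a sketch; the file names are arbitrary):
touch demo.txt
ln demo.txt demo-link.txt                   # second directory entry pointing at the same inode
stat -c '%i %h %n' demo.txt demo-link.txt   # same inode number, link count 2 on both
rm demo.txt                                 # removes one name only
stat -c '%i %h %n' demo-link.txt            # link count drops back to 1; the inode is still in use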
My initial advice would be to delete all the files you can, then reboot the box to ensure no processes are left holding the files open.
If you do that and you still have a problem, let us know.
By the way, if you're looking for the directories that contain lots of files, this script may help:
#!/bin/bash
# count_em - count files in all subdirectories under current directory.
echo 'echo $(ls -a "$1" | wc -l) $1' >/tmp/count_em_$$
chmod 700 /tmp/count_em_$$
find . -mount -type d -print0 | xargs -0 -n1 /tmp/count_em_$$ | sort -n
rm -f /tmp/count_em_$$
My situation was that I was out of inodes and I had already deleted about everything I could.
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 942080 507361 11 100% /
I am on Ubuntu 12.04 LTS and could not remove the old Linux kernels, which took up about 400,000 inodes, because apt was broken due to a missing package. And I couldn't install the new package because I was out of inodes, so I was stuck.
I ended up deleting a few old Linux kernels by hand to free up about 10,000 inodes:
$ sudo rm -rf /usr/src/linux-headers-3.2.0-2*
This was enough to then let me install the missing package and fix my apt:
$ sudo apt-get install linux-headers-3.2.0-76-generic-pae
and then remove the rest of the old Linux kernels with apt:
$ sudo apt-get autoremove
Things are much better now:
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 942080 507361 434719 54% /
My solution:
Try to find out if this is an inode problem with:
df -ih
Try to find root folders with a large inode count:
for i in /*; do echo $i; find $i |wc -l; done
Try to find specific folders:
for i in /src/*; do echo $i; find $i |wc -l; done
If they are Linux headers, try to remove the oldest with:
sudo apt-get autoremove linux-headers-3.13.0-24
Personally I moved them to a mounted folder (because for me the last command failed) and installed the latest with:
sudo apt-get autoremove -f
This solved my problem.
I had the same problem; I fixed it by removing the PHP sessions directory:
rm -rf /var/lib/php/sessions/
It may be under /var/lib/php5 if you are using an older PHP version.
Recreate it with the following permissions:
mkdir /var/lib/php/sessions/ && chmod 1733 /var/lib/php/sessions/
The default permissions for the directory on Debian are drwx-wx-wt (1733).
We experienced this on a HostGator account (which places inode limits on all its hosting) following a spam attack. It left vast numbers of queue records in /root/.cpanel/comet. If this happens and you find you have no free inodes, you can run this cPanel utility through a shell:
/usr/local/cpanel/bin/purge_dead_comet_files
You can use rsync to delete a large number of files:
rsync -a --delete blanktest/ test/
Create a blanktest folder with 0 files in it, and the command will sync it over your test folder full of files, deleting them (I have deleted nearly 5M files using this method).
Thanks to http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux
Late answer:
In my case, it was my session files under
/var/lib/php/sessions
that were using Inodes.
I was even unable to open my crontab or make a new directory, let alone trigger the deletion operation.
Since I use PHP, we have this guide, from which I copied the code from example 1 and set up a cronjob to execute that part of the code.
<?php
// Note: This script should be executed by the same user as the web server process.
// Need active session to initialize session data storage access.
session_start();
// Executes GC immediately
session_gc();
// Clean up session ID created by session_gc()
session_destroy();
?>
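The cronjob that runs it could look like this (a sketch; the script path is hypothetical, and the entry belongs in the web-server user's crontab, e.g. crontab -u www-data -e, so it runs as the same user):
0 * * * * /usr/bin/php /path/to/session_gc.php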
If you're wondering how I managed to open my crontab, well, I deleted some sessions manually through the CLI.
Hope this helps!
First, get the inode storage usage:
df -i
The next step is to find those files. For that, we can use a small script that will list the directories and the number of files in them.
for i in /*; do echo $i; find $i |wc -l; done
From the output, you can see which directory uses a large number of files; then repeat this script for that directory, like below. Repeat until you find the suspect directory.
for i in /home/*; do echo $i; find $i |wc -l; done
When you find the suspect directory with a large number of unwanted files, just delete the unwanted files in that directory to free up some inode space:
rm -rf /home/bad_user/directory_with_lots_of_empty_files
You have successfully solved the problem. Check the inode usage again with the df -i command; you can see the difference.
df -i
eAccelerator could be causing the problem, since it compiles PHP into blocks... I've had this problem with an Amazon AWS server on a site with heavy load. Free up inodes by deleting the eAccelerator cache in /var/cache/eaccelerator if you continue to have issues.
rm -rf /var/cache/eaccelerator/*
(or wherever your cache dir is)
We faced a similar issue recently. If a process still refers to a deleted file, the inode is not released, so you need to check lsof /; killing or restarting the process will release the inodes. Correct me if I am wrong here.
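A concrete check (a sketch; both commands list files that are unlinked on disk but still held open by a process):
sudo lsof / | grep -i '(deleted)'
sudo lsof +L1 /     # open files whose on-disk link count has dropped below 1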
As mentioned before, a filesystem may run out of inodes if there are a lot of small files. I have provided some means to find the directories that contain the most files here.
In one of the above answers it was suggested that sessions were the cause of running out of inodes, and in our case that is exactly what it was. To add to that answer, though, I would suggest checking the php.ini file and ensuring session.gc_probability = 1, session.gc_divisor = 1000 and session.gc_maxlifetime = 1440, as shown below. In our case session.gc_probability was equal to 0 and caused this issue.
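For reference, the relevant php.ini lines would be (values as recommended above; the file's location varies with the PHP version and distribution):
session.gc_probability = 1
session.gc_divisor = 1000
session.gc_maxlifetime = 1440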
This article saved my day:
https://bewilderedoctothorpe.net/2018/12/21/out-of-inodes/
find . -maxdepth 1 -type d | grep -v '^\.$' | xargs -n 1 -i{} find {} -xdev -type f | cut -d "/" -f 2 | uniq -c | sort -n
On a Raspberry Pi I had a problem with the /var/cache/fontconfig dir containing a large number of files. Removing it took more than an hour. And of course rm -rf *.cache* raised an "Argument list too long" error. I used the one below:
find . -name '*.cache*' | xargs rm -f
You can see this info with:
for i in /var/run/*;do echo -n "$i "; find $i| wc -l;done | column -t
For those who use Docker and end up here:
When df -i says 100% inode use, just run
docker rmi $(docker images -q)
It will leave your created containers (running or exited) alone but will remove all images that aren't referenced anymore, freeing a whole bunch of inodes; I went from 100% back to 18%!
It might also be worth mentioning that I use a lot of CI/CD, with a Docker runner set up on this machine.
It could be the /tmp folder (where all the temporary files are stored, e.g. from yarn and npm script execution, specifically if you are starting a lot of node scripts). Normally, you just have to reboot your device or server, and it will delete all the temporary files that you don't need. For me, I went from 100% usage to 23% usage!
Many answers to this one so far, and all of the above seem concrete. I think you'll be safe by using stat as you go along, but depending on the OS, you may get some inode errors creeping up on you. So implementing your own stat call functionality using 64-bit values to avoid any overflow issues seems fairly compatible.
Run the sudo apt-get autoremove command; in some cases it works. If unused header data from previous kernels exists, it will be cleaned up.
If you use Docker, remove all images. They use a lot of space...
Stop all containers:
docker stop $(docker ps -a -q)
Delete all containers:
docker rm $(docker ps -a -q)
Delete all images:
docker rmi $(docker images -q)
Works for me.
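On newer Docker releases (1.13+) the same cleanup can be done in one step; with -a it removes stopped containers, unused networks, and all images not used by a container:
docker system prune -a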
