Ansible - Reading Environment Variable from Remote Host - linux

I am trying to read an environment variable from a target Linux host using an Ansible playbook. I tried all of the tasks below as per the documentation, but none of them return a result.
- name: Test1
  debug: msg="{{ ansible_env.BULK }}"
  delegate_to: "{{ target_host }}"

- name: Test2
  shell: echo $BULK
  delegate_to: "{{ target_host }}"
  register: foo

- debug: msg="{{ foo.stdout }}"

- name: Test3
  debug: msg="{{ lookup('env','BULK') }} is an environment variable"
  delegate_to: "{{ target_host }}"
The environment variable "BULK" is not set on the local host where I am executing the playbook, so I assume that is why Test3 returns nothing. If I use "HOME" instead of BULK, which is always available, it returns a result. If I SSH into the target_host I am able to run echo $BULK without any issue.
How do I obtain the environment variable from the remote host?

If I SSH into the target_host I am able to run echo $BULK without any issue.
Most likely, because BULK is set in one of the rc-files that are sourced only in an interactive session of the shell on the target machine, while Ansible's gather_facts task runs in a non-interactive one.
How to obtain the Environment variable from the remote host?
Move the line setting the BULK variable to a place where it is sourced regardless of the session type (where exactly depends on the target OS and shell).
See for example: https://unix.stackexchange.com/a/170499/133107 for hints.
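For example, on Debian-like targets one file that applies to both interactive and non-interactive logins is /etc/environment (read by pam_env, which is not a shell, so it accepts only plain KEY=value lines). A minimal sketch, assuming pam_env is in use; the BULK value is made up:
- name: Set BULK where non-interactive sessions also see it
  become: yes
  lineinfile:
    path: /etc/environment
    line: 'BULK=/data/bulk'   # hypothetical value, replace with the real one

- name: Re-gather facts so ansible_env picks up the change
  setup:

- debug: msg="{{ ansible_env.BULK }}"
Note that pam_env changes take effect on a new login session, so re-gathering facts over a persistent SSH connection may still show the old environment.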

source /etc/profile and then grep the env
My solution is not perfect, but it often works. For example, when I want to check whether the remote_host has environment variables for proxy servers, I do the following,
as an ad-hoc Ansible command:
ansible remote_host -b -m shell -a '. /etc/profile && (env | grep -iP "proxy")'
Explanation:
I prefer the shell module; it does what I expect, the same as if I ran the command in a shell.
. /etc/profile sources /etc/profile, and that file sources other files, such as those under /etc/profile.d. After this, the machine-wide part of the environment is in place.
env | grep -iP "proxy" then filters the expanded environment for the variables I am looking for.
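The same idea works as a playbook task; a small sketch (the failed_when guard is my addition, since grep exits 1 when nothing matches):
- name: Source /etc/profile, then look for proxy variables
  become: yes
  shell: . /etc/profile && (env | grep -iP "proxy")
  register: proxy_env
  failed_when: proxy_env.rc not in [0, 1]

- debug:
    var: proxy_env.stdout_lines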

Remote environment variables are gathered automatically by Ansible during the "Gathering Facts" task.
You can inspect them like this:
- name: inspect env vars
  debug:
    var: ansible_facts.env
In your case try this:
- name: Test4
  debug: msg="{{ ansible_facts.env.BULK }} is the value of an environment variable"

Related

Ansible Run Command As Another User

I know my question is what become is designed to solve, and I do use it. However, my command still seems to run as the ssh user. I'm trying to execute a which psql command to get the executable path. Running which psql as the ssh user gives different output than running the same command as my become user, which is the output I want.
EDIT: The problem is the $PATH variable Ansible is using, as suggested in the comments. It is not using the correct $PATH variable. How can I direct Ansible to use the postgres user's $PATH variable? Using the environment module didn't work for me as suggested here: https://serverfault.com/questions/734560/ansible-become-user-not-picking-up-path-correctly
EDIT2: So a solution is to use the environment module and set the path to the path I know has the psql executable, but this seems hacky. Ideally I'd like to just use the become user's path and not have to set it explicitly. Here's the hacky solution:
- name: Check if new or existing host
  command: which psql
  environment:
    PATH: "/usr/pgsql-13/bin/:{{ ansible_env.PATH }}"
  become: yes
  become_user: postgres
Playbook
---
- name: Playbook Control
  hosts: all
  become: yes
  become_user: postgres
  tasks:
    - name: Check if new or existing host
      shell: whoami && which psql
      register: output
Relevant output (the same as if I were to run the task command as my_user on myhost.net):
"stdout_lines": [
    "postgres",
    "/usr/bin/psql"
]
Expected output (the output if I were to run the task command as the postgres user on myhost.net):
"stdout_lines": [
    "postgres",
    "/usr/pgsql-13/bin/psql"
]
Inventory
myhost.net
[all:vars]
ansible_connection=ssh
ansible_user=my_user
Command
ansible-playbook --ask-vault-pass -vvv -i temp_hosts playbook.yml
In the vault I only have the ssh password of my_user.
Running the playbook with the -vvv flag shows me that escalation was successful, yet the output of this task is the output of running the command as the ssh user, not the become user. Any ideas?
Ansible uses sudo as the default become method.
Depending on how your Linux system is configured (check /etc/sudoers), it could be that your $PATH variable is preserved for sudo commands.
You can either change this, or force Ansible to use a different become method such as su:
https://docs.ansible.com/ansible/latest/user_guide/become.html#become-directives
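A minimal sketch of forcing su for just that task (the task name and register are my own; note that su normally asks for the target user's password, so you may need --ask-become-pass):
- name: Check psql path via su, which may pick up a different PATH than sudo
  command: which psql
  become: yes
  become_user: postgres
  become_method: su
  register: psql_path

- debug:
    var: psql_path.stdout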

How to dump GitLab CI environment variables to a file

The question
How to dump all GitLab CI environment variables (including variables set in the project or group CI/CD settings) to a file, but only them, without the environment variables of the host on which the GitLab runner is executed?
Background
We are using GitLab CI/CD to deploy our projects to a docker server. Each project contains a docker-compose.yml file which uses various environment variables, e.g. db passwords. We are using a .env file to store these variables, so one can start/restart the containers after deployment from the command line, without accessing GitLab.
Our deployment script looks something like this:
deploy:
  script:
    # ...
    - cp docker-compose.development.yml ${DEPLOY_TO_PATH}/docker-compose.yml
    - env > variables.env
    - docker-compose up -d
    # ...
And the docker-compose.yml file looks like this:
version: "3"
services:
project:
image: some/image
env_file:
- variables.env
...
The problem is that the .env file now contains both the GitLab variables and the host system's environment variables, and as a result the PATH variable is overwritten inside the container.
I have developed a workaround with grep:
env | grep -Pv "^PATH" > variables.env
It has kept things working for now, but I think the problem might hit us again with other variables that should have different values inside a container and on the host system.
I know I could list all the variables explicitly in docker-compose and similar files, but we already have quite a few of them across several projects, so that is not a solution.
You need to add the following command to the script:
script:
  ...
  # Read certificate stored in $KUBE_CA_PEM variable and save it in a new file
  - echo "$KUBE_CA_PEM" > variables.env
  ...
This might be late, but I did something like this:
script:
  - env | grep -v "CI" | grep -v "FF" | grep -v "GITLAB" | grep -v "PWD" | grep -v "PATH" | grep -v "HOME" | grep -v "HOST" | grep -v "SH" > application.properties
  - cat application.properties
It's not the best, but it works.
The one problem with this is that a variable is also excluded if its value merely contains one of the exclusions, i.e. "CI", "FF", "GITLAB", "PWD", "PATH", "HOME", "HOST", "SH".
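One way to reduce those false positives is to anchor the patterns to the start of the line, so only variable names rather than values are matched; a hedged sketch:
script:
  - env | grep -vE "^(CI|FF|GITLAB|PWD|PATH|HOME|HOST|SH)" > application.properties
  - cat application.properties
This still drops variables whose names merely start with one of the prefixes, so the list needs tuning per runner.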
My reusable solution /tools/gitlab/script-gitlab-variables.yml:
variables:
  # Default values
  GITLAB_EXPORT_ENV_FILENAME: '.env.gitlab.cicd'

.script-gitlab-variables:
  debug:
    # section_start
    - echo -e "\e[0Ksection_start:`date +%s`:gitlab_variables_debug[collapsed=true]\r\e[0K[GITLAB VARIABLES DEBUG]"
    # command
    - env
    # section_end
    - echo -e "\e[0Ksection_end:`date +%s`:gitlab_variables_debug\r\e[0K"

  export-to-env:
    # section_start
    - echo -e "\e[0Ksection_start:`date +%s`:gitlab_variables_export_to_env[collapsed=true]\r\e[0K[GITLAB VARIABLES EXPORT]"
    # verify mandatory variables
    - test ! -z "$GITLAB_EXPORT_VARS" && echo "$GITLAB_EXPORT_VARS" || exit $?
    # display variables
    - echo "$GITLAB_EXPORT_ENV_FILENAME"
    # command
    - env | grep -E "^($GITLAB_EXPORT_VARS)=" > $GITLAB_EXPORT_ENV_FILENAME
    # section_end
    - echo -e "\e[0Ksection_end:`date +%s`:gitlab_variables_export_to_env\r\e[0K"

  cat-env:
    # section_start
    - echo -e "\e[0Ksection_start:`date +%s`:gitlab_variables_cat-env[collapsed=true]\r\e[0K[GITLAB VARIABLES CAT ENV]"
    # command
    - cat $GITLAB_EXPORT_ENV_FILENAME
    # section_end
    - echo -e "\e[0Ksection_end:`date +%s`:gitlab_variables_cat-env\r\e[0K"
How to use it in .gitlab-ci.yml:
include:
  - local: '/tools/gitlab/script-gitlab-variables.yml'

Your job:
variables:
  GITLAB_EXPORT_VARS: 'CI_BUILD_NAME|GITLAB_USER_NAME'
script:
  - !reference [.script-gitlab-variables, debug]
  - !reference [.script-gitlab-variables, export-to-env]
  - !reference [.script-gitlab-variables, cat-env]
Result of cat .env.gitlab.cicd:
CI_BUILD_NAME=Demo
GITLAB_USER_NAME=Benjamin
If you need to dump everything:
# /tools/gitlab/script-gitlab-variables.yml
  dump-all:
    - env > $GITLAB_EXPORT_ENV_FILENAME

# .gitlab-ci.yml
script:
  - !reference [.script-gitlab-variables, dump-all]

I hope this helps.

Ansible playbook to write output to a log file

This is one little part of my working Ansible playbook.
I want to send the information which will be gathered to a log file (which the playbook will create).
I have tried so many different ways but am getting nowhere.
No error comes back, which only tells me that the script is running, but I guess the output is going somewhere other than the destination I want.
Here is my script. I would be grateful for your thoughts and help.
- name: netstat check
  shell: netstat -tulnp | awk '{print $4}' | sed -n 's/.*:\([^",]*\)[",]*$/\1/p'
  register: netstat

- name: copy output to local file
  copy:
    content: "{{ netstat.stdout }}"
    dest: "/home/user_name/netstat.txt"
Thanks
I executed your playbook on my Ansible server (hosts: localhost) and it works fine. A new file is created with the required output.
In case you want the file on the localhost instead, try adding delegate_to: localhost:
- name: copy output to local file
  copy:
    content: "{{ netstat.stdout }}"
    dest: "/home/user_name/netstat.txt"
  delegate_to: localhost

Iterating Ansible setup command

I want to use the Ansible setup module to retrieve hosts' specs, and I tried doing it with a bash for loop.
Ansible version: 2.4
My hosts inventory has been defined in a group of machines which I called rhelmachines
I would like to collect the following list of variables called "specs"
declare -a specs=("ansible_all_ipv4_addresses" "ansible_processor" "ansible_processor_cores" "ansible_uptime_seconds")
I am then trying to include the ansible command in a bash for loop:
for i in "${specs[@]}"
do
    ansible rhelmachines -m setup -a "filter=$i"
done
How can I concatenate multiple filters in one connection only?
Thanks!
With a little sed hackery to convert ansible's output to JSON, you can use jq to extract only the pieces you need:
ansible -m setup localhost | sed -e 's/^[[:alpha:]].*[|].* [>][>] {$/{/' | jq -n '
  [inputs |
    .ansible_facts as $facts |
    $facts.ansible_hostname as $hostname |
    {($hostname): {
      "ipv4_addresses": $facts.ansible_all_ipv4_addresses,
      "processor": $facts.ansible_processor[0],
      "cores": $facts.ansible_processor_cores,
      "uptime": $facts.ansible_uptime_seconds}}] | add'
...generates output of the form:
{
  "my-current-hostname": {
    "ipv4_addresses": [
      "192.168.119.129"
    ],
    "processor": "Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz",
    "cores": 1,
    "uptime": null
  }
}
(run with ansible 1.4.5, which doesn't generate uptime).
As one possible solution, I implemented Ansible code that exploits Ansible facts. It first gathers the facts; then it uses a local_action with a loop. The loop items are the individual Ansible facts, and for every fact one line is written out to a file. This way I get a file composed of all the Ansible facts I declared in the loop for the rhelmachines.
---
- hosts: rhelmachines
  gather_facts: True
  tasks:
    - name: Gather Facts
      setup:
        gather_subset=all
      register: facts

    - debug:
        msg: "{{ facts.ansible_facts.ansible_all_ipv4_addresses }}"

    - name: copy content from facts to output file
      local_action:
        module: lineinfile
        line: "{{ item }}"
        path: /tmp/assessments/facts.txt
      loop:
        - "{{ facts.ansible_facts.ansible_all_ipv4_addresses }}"
        - "{{ facts.ansible_facts.ansible_all_ipv6_addresses }}"
I took @Luigi Sambolino's answer and made it better. His answer was failing with more than one host in the inventory. He proposed using lineinfile, which has one drawback in this situation: any fact that was the same as another machine's was omitted. Another drawback was that the results weren't kept together; everything was mixed.
I needed to collect some basic information about the systems, like IP, OS version, and so on. Here's my playbook:
- hosts: all
  gather_facts: true
  ignore_unreachable: true
  tasks:
    - name: get the facts
      setup:
        gather_subset=all
      register: facts

    - name: remove file
      local_action:
        module: file
        path: results
        state: absent

    - name: save results in file
      local_action:
        module: shell
        cmd: echo "{{ item }}" >> results
      with_together:
        - "{{ facts.ansible_facts.ansible_default_ipv4.address }}"
        - "{{ facts.ansible_facts.ansible_architecture }}"
        - "{{ facts.ansible_facts.ansible_distribution }}"
        - "{{ facts.ansible_facts.ansible_distribution_version }}"
        - "{{ facts.ansible_facts.ansible_hostname }}"
        - "{{ facts.ansible_facts.ansible_kernel }}"
Now the results look like this:
...
['10.200.1.21', 'x86_64', 'Ubuntu', '18.04', 'bacula', '4.15.18-7-pve']
['10.200.2.53', 'x86_64', 'Ubuntu', '18.04', 'webserver', '4.15.18-27-pve']
...
Square brackets can be deleted by sed and we have a nice CSV file that can be used with any spreadsheet, for example.
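For instance, a one-liner along these lines (a sketch; adjust the quoting to your data) strips the brackets and quotes from the results file produced above:
sed -e "s/^\['//" -e "s/'\]$//" -e "s/', '/,/g" results > results.csv
which turns each line into plain comma-separated values like 10.200.1.21,x86_64,Ubuntu,18.04,bacula,4.15.18-7-pve.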
Building the arguments up-front in an array:
specs=( "ansible_all_ipv4_addresses"
        "ansible_processor"
        "ansible_processor_cores"
        "ansible_uptime_seconds" )

args=( )
for spec in "${specs[@]}"; do args+=( -a "$spec" ); done

ansible rhelmachines -m setup "${args[@]}"
...will result in your final command being equivalent to:
ansible rhelmachines -m setup \
-a ansible_all_ipv4_addresses \
-a ansible_processor \
-a ansible_processor_cores \
-a ansible_uptime_seconds

Edit current user's shell with ansible

I'm trying to push my dot files and some personal configuration files to a server (I'm not root or a sudoer). Ansible connects as my user in order to edit files in my home folder.
I'd like to set my default shell to /usr/bin/fish.
I am not allowed to edit /etc/passwd, so
user:
  name: shaka
  shell: /usr/bin/fish
won't run.
I also checked the chsh command, but the executable prompts for my password.
How can I change my shell on such machines? (Debian 8, Ubuntu 16, openSUSE)
I know this is old, but I wanted to post this in case anyone else comes back here looking for advice like I did:
If you're running local playbooks, you might not be specifying the user, expecting to change the shell of the user you're running the playbook as.
The problem is that you can't change the shell without elevating privileges (become: yes), but when you do, you're running things as root, which just changes the shell of the root user. You can double-check that this is the case by looking at /etc/passwd and seeing what the root shell is.
Here's my recipe for changing the shell of the user running the playbook:
- name: set up zsh for user
  hosts: localhost
  become: no
  vars:
    the_user: "{{ ansible_user_id }}"
  tasks:
    - name: change user shell to zsh
      become: yes
      user:
        name: "{{ the_user }}"
        shell: /bin/zsh
This will set the variable the_user to the current running user, but will change the shell of that user using root.
I ended up using two Ansible features:
the expect module
the vars_prompt section
First I record my password with a prompt:
vars_prompt:
  - name: "my_password"
    prompt: "Enter password"
    private: yes
And then I use the expect module to send the password to the chsh command:
tasks:
  - name: Case insensitive password string match
    expect:
      command: "chsh -s /usr/bin/fish"
      responses:
        (?i)password: "{{ my_password }}"
      creates: ".shell_is_fish"
The creates option sets a lock file that prevents this task from being triggered again. This may be dangerous, because the shell could be changed again later and Ansible would not update it (since the lock file is still present). You may want to avoid this behaviour.
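An alternative to the lock file is to check the current shell first and make the task conditional; a hedged sketch (the task names and the getent approach are my own):
- name: Read the current passwd entry for this user
  command: getent passwd {{ ansible_user_id }}
  register: passwd_entry
  changed_when: false

- name: Change the shell only when it is not already fish
  expect:
    command: "chsh -s /usr/bin/fish"
    responses:
      (?i)password: "{{ my_password }}"
  when: passwd_entry.stdout.split(':')[-1] != '/usr/bin/fish'
This re-checks on every run instead of relying on a marker file.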
Here is how I do it:
- name: Set login shell of user {{ ansible_env.USER }} to `/bin/zsh` with `usermod`
  ansible.builtin.command: usermod --shell /bin/zsh {{ ansible_env.USER }}
  become: true
  changed_when: false
On Ubuntu 16, another option is to add this as the first line of ~/.bashrc:
/usr/bin/fish && exit
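Note that this line also runs for non-interactive shells and can break things like scp; a slightly safer variant (a sketch, the guard is my own) only switches when stdout is a terminal:
# first line of ~/.bashrc
if [ -t 1 ] && [ -x /usr/bin/fish ]; then exec /usr/bin/fish; fi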
