I have code that searches for a specific IP address on a Linux system:
- name: find IP
  set_fact:
    ip: "{{ item }}"
  with_items: "{{ ansible_all_ipv4_addresses }}"
  when: "item.startswith('10.')"
This works fine, but I can't figure out how to discover the interface name based on the IP address stored in the "ip" fact.
Could anybody give some advice, or maybe share an example of how to do it?
Ansible provides a list of interfaces in the ansible_interfaces fact. You can use this to iterate over available interfaces, checking each one for a given ip address.
That ends up being trickier than it sounds because you'll need to construct fact names, which means rather than something simple like:
ansible_eth0
You instead need:
hostvars[inventory_hostname]["ansible_%s" % item]
An additional complication is that Ansible divides ip addresses into "primary" (which is ansible_eth0.ipv4.address) and "secondaries" (ansible_eth0.ipv4_secondaries), where the latter is a list of dictionaries with address keys. Assuming that we are iterating with item set to an interface name, we can get the primary address like this:
hostvars[inventory_hostname]["ansible_%s" % item].ipv4.address
But! That will fail for interfaces that don't have an ipv4 address assigned, or that don't have a corresponding ansible_<interface> fact for some reason. So we need to deal gracefully with that situation:
(hostvars[inventory_hostname]["ansible_%s" % item]|default({})).get('ipv4', {}).get('address')
This uses the default filter to ensure that we start with a dictionary, and then we use a few levels of Python's .get(key, default) method.
Checking against the secondary addresses is similar but requires that we use the map filter because ipv4_secondaries gives us a list of dictionaries and what we really want is a list of addresses (so that we can check if our target address is in that list):
((hostvars[inventory_hostname]['ansible_%s' % item]|default({}))
.get('ipv4_secondaries', []))|map(attribute='address')|list
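What these two expressions compute can be mirrored in plain Python. This is only an illustration of the Jinja2 logic above; the facts dictionary here is invented:

```python
# Simplified stand-in for hostvars[inventory_hostname]; the values are invented.
facts = {
    "ansible_eth0": {
        "ipv4": {"address": "192.168.1.75"},
        "ipv4_secondaries": [{"address": "10.0.0.5"}, {"address": "10.0.0.6"}],
    },
    "ansible_lo": {},  # no ipv4 key at all
}

def addresses_for(interface):
    """Return (primary, secondaries) for an interface, tolerating missing keys."""
    iface = facts.get("ansible_%s" % interface, {})   # the |default({}) filter
    primary = iface.get("ipv4", {}).get("address")    # the chained .get() calls
    # equivalent of map(attribute='address') | list:
    secondaries = [s["address"] for s in iface.get("ipv4_secondaries", [])]
    return primary, secondaries

print(addresses_for("eth0"))   # ('192.168.1.75', ['10.0.0.5', '10.0.0.6'])
print(addresses_for("lo"))     # (None, [])
print(addresses_for("wlan0"))  # (None, [])  -- no fact for this interface
```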
Putting it all together:
- hosts: localhost
  vars:
    target_address: 192.168.122.1
  tasks:
    - set_fact:
        target_interface: "{{ item }}"
      when: >
        (hostvars[inventory_hostname]['ansible_%s' % item]|default({}))
        .get('ipv4', {}).get('address') == target_address
        or
        target_address in ((hostvars[inventory_hostname]['ansible_%s' % item]|default({}))
        .get('ipv4_secondaries', []))|map(attribute='address')|list
      with_items:
        - "{{ ansible_interfaces }}"

    - debug:
        msg: >-
          found interface {{ target_interface }}
          with address {{ target_address }}
If I run this on my system, the playbook run concludes with:
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": "found interface virbr0 with address 192.168.24.1"
}
If I run:
ansible-playbook playbook.yml -e target_address=192.168.1.75
I get:
TASK [debug] *******************************************************************
ok: [localhost] => {
"msg": "found interface eth0 with address 192.168.1.75"
}
As you can see from the above, this isn't exactly the sort of task that Ansible is meant for. You would probably be better off stuffing all this logic into an Ansible module so that you could use Python (or some other language) to perform the lookup in a more graceful fashion.
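The core lookup such a module would perform is straightforward in plain Python. The sketch below shows only the logic; the facts dictionary is invented, and the AnsibleModule argument/exit boilerplate a real module needs is omitted:

```python
def find_interface(facts, target_address):
    """Return the name of the interface holding target_address, or None.

    `facts` is expected to look like Ansible's gathered facts:
    {'ansible_interfaces': [...], 'ansible_eth0': {'ipv4': {...}, ...}, ...}
    """
    for name in facts.get("ansible_interfaces", []):
        iface = facts.get("ansible_%s" % name, {})
        primary = iface.get("ipv4", {}).get("address")
        secondaries = [s.get("address") for s in iface.get("ipv4_secondaries", [])]
        if target_address == primary or target_address in secondaries:
            return name
    return None

# Invented example facts:
facts = {
    "ansible_interfaces": ["lo", "eth0"],
    "ansible_lo": {"ipv4": {"address": "127.0.0.1"}},
    "ansible_eth0": {"ipv4": {"address": "192.168.1.75"}},
}
print(find_interface(facts, "192.168.1.75"))  # eth0
```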
Update
Here is a module-based solution to the same problem.
Related
I am trying to publish a payload to an MQTT topic defined in the MQTT connection. However, I get this error in the enforcement log: -
Ran into a failure when enforcing incoming signal: The configured filters could not be matched against the given target with ID 'mqttTestTopic'. Either modify the configured filter or ensure that the message is sent via the correct ID. ...
What is required: -
"enforcement": {
"input": "{{ source:address }}",
"filters": [
"'"${TTN_APP_ID}"'/devices/{{ thing:name }}/up"
]
}
What I have tried: -
"enforcement": {
"input": "mqttTestTopic",
"filters": [
"mqttTestTopic/org.eclipse.ditto.testing.demo:digital-twin"
]
}
I am confused about what must be defined in the input and filters. Can I get more clarification?
If you don't need the source enforcement, you can simply leave that configuration out.
You only need to configure it if you want to, e.g., ensure that a device may only update its "twin" (or thing, in Ditto) via a specific MQTT topic, e.g. one containing the device/thing ID or name.
That adds an additional security mechanism: device A is prohibited from updating the thing of device B.
For MQTT 3.1.1, the "input" can only have the value "{{ source:address }}" (for MQTT 5, also "{{ header:<header-name> }}" can be used) and the complete MQTT topic is then matched against the configured array of "filters".
The message is only accepted/processed if the MQTT topic matches the filter, which can make use of placeholders like {{ thing:id }} as documented.
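Conceptually, the enforcement works like the sketch below. This is a simplification for illustration, not Ditto's actual implementation, and the topic, app ID, and thing name are invented:

```python
def matches(topic, filters, thing_name):
    """Check an incoming MQTT topic against enforcement filters.

    Placeholders such as {{ thing:name }} in each filter are resolved
    first, then the full topic must equal one of the resolved filters.
    """
    resolved = [f.replace("{{ thing:name }}", thing_name) for f in filters]
    return topic in resolved

filters = ["my-ttn-app/devices/{{ thing:name }}/up"]
print(matches("my-ttn-app/devices/sensor-1/up", filters, "sensor-1"))  # True
print(matches("my-ttn-app/devices/sensor-2/up", filters, "sensor-1"))  # False
```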
I am currently setting up CI/CD using Azure. The goal is to have the developer select the type of build to be created, i.e. Staging / Prod.
Thanks to "How to write if else condition in Azure DevOps Pipeline", I have added the following code:
parameters:
  - name: selectConfiguration
    displayName: Select build configuration
    type: string
    default: Debug
    values:
      - Debug
      - Release

variables:
  - name: config
    ${{ if eq(variables['parameters.selectConfiguration'], 'Debug') }}:
      value: Debug
    ${{ else }}:
      value: Release
This gives me the expected selection control in the run dialog.
But no matter what I select in this radio group, it always runs the else block, i.e. the if condition always fails. Any help understanding what I am doing wrong here?
Try the below; it should work. Note that template parameters are referenced as parameters.selectConfiguration, not through variables[...]. I am using the same logic to switch between different agent pools.
variables:
  ${{ if eq(parameters.selectConfiguration, 'Debug') }}:
    config: Debug
  ${{ else }}:
    config: Release
In a YAML pipeline, you cannot use the if...else expression to assign different values to a variable in different conditions.
You can only use the if expression to determine whether a variable with one specified value is available under the specified condition. See "Conditionally assign a variable".
The if...else expression can be used to:
- assign different values to a task input in different conditions. See "Conditionally set a task input".
- run different steps in a job in different conditions. See "Conditionally run a step".
I added a rule to rules.yml in order to get an alert whenever a container stops.
In order to get an alert for each stopped container with the suffix "dev-23", I used this rule:
- alert: ContainerKilled
  expr: absent(container_start_time_seconds{name=~".*dev-23"})
  for: 0m
  labels:
    severity: 'critical'
  annotations:
    summary: 'Container killed (instance {{ $labels.instance }})'
    description: 'A container has disappeared\n VALUE = {{ $value }}\n LABELS: {{ $labels }}'
This indeed works, and I get an alert whenever a container that ends with "dev-23" stops. However, the summary and description of the received alert do not tell me the name of the stopped container.
In the alert I get this description:
description = A container has disappeared\n VALUE = 1\n LABELS: map[]
summary = Container killed (instance )
What should I use in order to get the exact name of the stopped container?
The issue is that the metric created by the absent() function does not have any labels, and its value is always 1 (if the queried metric is not present). Note that absent() only carries over labels that appear as exact equality matchers in the query, e.g. absent(metric{name="foo"}); labels from regex matchers like name=~".*dev-23" are dropped.
You can leave it like that, without any labels, if you just want to know that NO matching container is running at all, without any details about its previous state, and that is enough information to detect the issue.
Or you can use the up metric, which has the labels but potentially disappears, e.g. when you have dynamic service discovery.
In general it is good practice to have an alert both on up{...} == 0 and on absent({...}).
I have a playbook that grabs the IP address as below.
---
- hosts: all
  tasks:
    - debug: var=hostvars[inventory_hostname]['ansible_default_ipv4']['address']
Output:
TASK [debug] *************************************************************************************************************************************************
ok: [mwiwas01] => {
"hostvars[inventory_hostname]['ansible_default_ipv4']['address']": "10.0.12.15"
}
However, I wish to get the last two segments of the IP address, i.e. only 12.15.
Note: the IP addresses change on each host, hence I'm looking for a standard solution that works for any given IPv4 address.
How can I extract this from the IP address?
Make use of the split function. Note that splitting an IPv4 address on "." yields four elements (indexes 0-3), so the last two segments are indexes 2 and 3, rejoined with a dot:
- debug:
    msg: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'].split('.')[2:] | join('.') }}"
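For reference, here is the underlying string manipulation in plain Python, using a negative slice so it works for any IPv4 address:

```python
def last_two_octets(ip):
    """Return the last two dot-separated segments of an IPv4 address."""
    return ".".join(ip.split(".")[-2:])

print(last_two_octets("10.0.12.15"))    # 12.15
print(last_two_octets("192.168.1.75"))  # 1.75
```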
Hello, I am building a data structure in Hiera / Puppet for creating MySQL config files. My goal is to have some default values which can be overwritten with a merge. It works up to this point.
Because we have different MySQL instances on many hosts, I want to automatically configure some paths to be unique for every instance. I have the instance names as a hash (name) of hashes in the namespace our_mysql::configure_db::dbs:
In my case I want to look up the instance names like 'sales_db' or 'hr_db' in paths like datadir, but I cannot find a way to look up the superior key name.
Hiera data from "our_mysql" module represents some default values:
our_mysql::configure_db::dbs:
  'defaults':
    datadir: /var/lib/mysql/"%{lookup('lookup to superior hash-key name')}"
    log_error: /var/log/mysql/"%{lookup('lookup to superior hash-key name')}".log
    logbindir: /var/lib/mysql/"%{lookup('lookup to superior hash-key name')}"
    db_port: 3306
    ...: ...
    KEY_N: VALUE_N
Hiera data from the node definition:
our_mysql::configure_db::dbs:
  'sales_db':
    db_port: "3317"
    innodb_buffer_pool_size: "1"
    innodb_log_file_size: 1GB
    innodb_log_files_in_group: "2"
    server_id: "1"
  'hr_db':
    db_port: "3307"
I know how to do simple lookups, or how to iterate with
.each |String $key, Hash $value| { ... }
but I have no clue how to reference a key from a certain hierarchy level. Searching all topics related to Puppet and Hiera didn't help.
Is it possible in any way, and if yes, how?
As I understand the question, I think what you hope to achieve is that, for example, when you look up our_mysql::configure_db::dbs.sales_db key, you get a merge of the data for that (sub)key and those for the our_mysql::configure_db::dbs.defaults subkey, AND that the various %{lookup ...} tokens in the latter somehow resolve to the string sales_db.
I'm afraid that's not going to happen. The interpolation tokens don't even factor in here -- Hiera simply won't perform such a merge at all. I guess you have a hash-merge lookup in mind, but that merges only identical keys and subkeys, so not our_mysql::configure_db::dbs.sales_db and our_mysql::configure_db::dbs.defaults. Hiera provides for defaults for particular keys in the form of data recorded for those specific keys at a low-priority level of the data hierarchy. The "defaults" subkey you present, on the other hand, has no special meaning to the standard Hiera data providers.
You can still address this problem, just not entirely within the data. For example, consider this:
$dbs = lookup('our_mysql::configure_db::dbs', Hash, 'deep')

$dbs.filter |$dbname, $dbparams| { $dbname != 'defaults' }.each |$dbname, $dbparams| {
  # Declare a database using a suitable resource type. "my_mysql::database" is
  # a dummy resource name for the purposes of this example only
  my_mysql::database {
    $dbname:
      *         => $dbparams;
    default:
      datadir   => "/var/lib/mysql/${dbname}",
      log_error => "/var/log/mysql/${dbname}.log",
      logbindir => "/var/lib/mysql/${dbname}",
      *         => $dbs['defaults'];
  }
}
That supposes data of the form presented in the question, and it uses the data from the defaults subkey where those do not require knowledge of the specific DB name, but it puts the patterns for the various directory names into the resource declaration instead of into the data. The most important things to recognize are the use of the splat (*) parameter wildcard for obtaining multiple parameters from a hash, and the use of per-expression resource property defaults via the default keyword in a resource declaration.
If you wanted to do so, you could push more details of the directory names back into the data with a little more effort (and one or more new keys).
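For illustration, the merge this manifest performs can be sketched in plain Python (invented sample data). Later dictionaries win, mirroring how explicitly set resource parameters override the default: block:

```python
# Invented stand-in for the our_mysql::configure_db::dbs Hiera data.
dbs = {
    "defaults": {"db_port": 3306, "innodb_log_file_size": "512MB"},
    "sales_db": {"db_port": 3317, "server_id": 1},
    "hr_db": {"db_port": 3307},
}

def resolve(name):
    """Merge shared defaults, name-derived paths, and per-db overrides.

    Later dicts take precedence: shared defaults < derived paths < per-db data.
    """
    derived = {
        "datadir": "/var/lib/mysql/%s" % name,
        "log_error": "/var/log/mysql/%s.log" % name,
        "logbindir": "/var/lib/mysql/%s" % name,
    }
    return {**dbs["defaults"], **derived, **dbs[name]}

print(resolve("hr_db")["db_port"])  # 3307
print(resolve("hr_db")["datadir"])  # /var/lib/mysql/hr_db
```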