I'm writing some tests for Puppet, and in my init_spec.rb file I want to use a variable that is declared in the default_facts.yml file. How can I import the value of that variable without having to declare it again in init_spec.rb?
Thanks in advance!
In general, you would be able to access that data inside the RSpec.configuration object.
Supposing you had a default facts file like this:
▶ cat spec/default_facts.yml
# Use default_module_facts.yml for module specific facts.
#
# Facts specified here will override the values provided by rspec-puppet-facts.
---
concat_basedir: "/tmp"
ipaddress: "172.16.254.254"
is_pe: false
macaddress: "AA:AA:AA:AA:AA:AA"
You could address that data in your tests like this:
it 'ipaddress default fact' do
  expect(RSpec.configuration.default_facts['ipaddress']).to eq '172.16.254.254'
end
(I am assuming of course that your default facts file was set up correctly, e.g. by PDK.)
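For reference, a PDK-generated spec/spec_helper.rb wires that file into RSpec.configuration along these lines (a simplified sketch, not the verbatim PDK code; it relies on rspec-puppet, which defines the default_facts setting):
require 'yaml'

# Simplified sketch of how a PDK-generated spec_helper.rb loads
# default facts into the RSpec configuration.
default_facts = {}
['default_facts.yml', 'default_module_facts.yml'].each do |file|
  path = File.expand_path(File.join(__dir__, file))
  next unless File.exist?(path)
  default_facts.merge!(YAML.safe_load(File.read(path)))
end

RSpec.configure do |c|
  c.default_facts = default_facts # setting provided by rspec-puppet
end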
If instead you just want a general way to access the data in any arbitrary YAML file, you can also do this:
▶ cat spec/fixtures/mydata.yml
---
foo: bar
Then in your tests you can write:
require 'yaml'

mydata = YAML.load_file('spec/fixtures/mydata.yml')

describe 'test' do
  it 'foo' do
    expect(mydata['foo']).to eq 'bar'
  end
end
Related
Trying to figure out how I can access an environment variable inside a swagger.yaml configuration file.
The variable can be accessed inside the Node.js application using process.env.VARNAME. I want to use the same variable inside the swagger.yaml file,
something like:
definitions:
  myvariabledetail: "${process.env.VARNAME}"
I already tried different combinations, including "${process.env.VARNAME}", ${process.env.VARNAME}, ${VARNAME}, etc.
YAML as a text file format doesn't know anything about environment variables. One solution is to load the YAML file as plain text, use a regex to find the environment-variable references and replace them with their current values, and finally pass the resulting string to your YAML parser.
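A minimal sketch of that substitute-then-parse approach, assuming js-yaml is installed (npm install js-yaml) and the file is named swagger.yaml:
const fs = require('fs');
const yaml = require('js-yaml');

// Read the raw text, replace every ${VARNAME} that has a value in
// process.env, then parse the substituted text as YAML.
const raw = fs.readFileSync(`${__dirname}/swagger.yaml`, 'utf8');
const substituted = raw.replace(/\$\{(\w+)\}/g,
  (match, name) => process.env[name] !== undefined ? process.env[name] : match);
const doc = yaml.load(substituted);
console.log(doc);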
You can use envsub:
const envsub = require('envsub');

envsub({
  templateFile: `${__dirname}/input.yml`,
  outputFile: '/dev/null', // or a filename to save the result
})
  .then(({ outputContents }) => console.log(outputContents));
I am trying to load the contents of a json file and assign them to variables.
My json file looks like this:
{
  "master": {
    "key1": "value1",
    "key2": "value2",
    "key3": "value3"
  }
}
On my local machine, I was able to use the following manifest to load and parse the json file; it worked just fine.
$master_hash=loadjson('some_file.json')
$key1=$master_hash['master']['key1']
$key2=$master_hash['master']['key2']
$key3=$master_hash['master']['key3']
However, when I move it to the Puppet master, this fails, as it looks for the json file on the Puppet master! In my earlier question, Puppet: How to load file from agent, I was told to use a function, and that worked fine for one fact; but in this case I need to generate a number of facts depending on the contents of the json file. How can I achieve this?
Functions like loadjson() execute on the machine which is compiling the catalog. In the majority of cases this means that the function executes on the master. Since some_file.json doesn't exist on the master it won't load the file.
If you want to transfer information from the agent to the master then you need to use a fact to do so. Facts are synced to the agent machine and executed at the start of the run, and their values are sent back to the master.
The answer to your previous question was a good base, but I'll expand on it a bit here:
# module_name/lib/facter/master_hash.rb
require 'json'

Facter.add(:master_hash) do
  setcode do
    # read the file and return its parsed contents as a hash
    f = File.read('/path/to/some_file.json')
    master_hash = JSON.parse(f)
    master_hash
  end
end
The last line of the setcode block gets returned as the value of the fact. In this case it would expose a $::master_hash fact which would contain a hash from the parsed json.
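On the master side you can then read the synced fact like any other; for example (assuming structured facts are enabled, as they are by default on modern Puppet):
$master_hash = $facts['master_hash']
$key1 = $master_hash['master']['key1']
$key2 = $master_hash['master']['key2']
$key3 = $master_hash['master']['key3']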
hiera.yaml
---
:hierarchy:
- node/%{host_fqdn}
- site_config/%{host_site_name}
- site_config/perf_%{host_performance_class}
- site_config/%{host_type}_v%{host_type_version}
- site/%{host_site_name}
- environments/%{site_environment}
- types/%{host_type}_v%{host_type_version}
- hosts
- sites
- users
- common
# options are native, deep, deeper
:merge_behavior: deeper
We currently have this hiera config. The config gets merged in the following sequence: common.yaml > users.yaml > sites.yaml > hosts.yaml > types/xxx_vxxx.yaml > etc. For the variable hierarchy levels, a value gets overwritten only if the corresponding file exists.
eg:
common.yaml
server:
  instance_type: m3.medium
site_config/mysite.yaml
server:
  instance_type: m4.large
So for all other sites, the instance type will be m3.medium, but only for mysite it will be m4.large.
How can I achieve the same in Ansible?
I think that #Xiong is right that you should go the variables way in Ansible.
You can set up a flexible inventory with variable precedence from general to specific.
But you can try this snippet if it helps:
---
- hosts: loc-test
  tasks:
    - include_vars: hiera/{{ item }}
      with_items:
        - common.yml
        - "node/{{ ansible_fqdn }}/users.yml"
        - "node/{{ ansible_fqdn }}/sites.yml"
        - "node/{{ ansible_fqdn }}/types/{{ host_type }}_v{{ host_type_version }}.yml"
      failed_when: false
    - debug: var=server
This will try to load variables from files with structure similar to your question.
Nonexistent files are ignored (because of failed_when: false).
Files are loaded in order of this list (from top to bottom), overwriting previous values.
Gotchas:
all variables that you use in the list must be defined (e.g. host_type in this example can't be defined in common.yml), because the list of items to iterate is templated before the whole loop is executed (see the update below for a workaround).
Ansible overwrites (replaces) dicts by default, while your use case probably expects merging behavior. This can be achieved with the hash_behaviour setting (see the snippet just below this list), but be aware that it is unusual for Ansible playbooks.
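If you do decide you want merge semantics, it is a one-line change in ansible.cfg; note that this setting is global and affects everything Ansible runs with that config:
[defaults]
hash_behaviour = merge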
P.S. You may alter the top-to-bottom merge behavior by changing with_items to with_first_found and reversing the list (from specific to general). In that case Ansible will load variables only from the first file found.
Update: use variables from previous includes in file path.
You can split the loop into multiple tasks, so that Ansible evaluates each task's result before templating the next file's include path.
Make hiera_inc.yml:
- include_vars: hiera/common.yml
  failed_when: false
- include_vars: hiera/node/{{ ansible_fqdn }}/users.yml
  failed_when: false
- include_vars: hiera/node/{{ ansible_fqdn }}/sites.yml
  failed_when: false
- include_vars: hiera/node/{{ ansible_fqdn }}/types/{{ host_type | default('none') }}_v{{ host_type_version | default('none') }}.yml
  failed_when: false
And in your main playbook:
- include: hiera_inc.yml
This looks a bit clumsy, but this way you can define host_type in common.yml and it will be honored in the path templating for the subsequent tasks.
With Ansible 2.2 it will be possible to include_vars into a named variable (instead of the global host space), so you will be able to include_vars into, say, hiera_facts and use the combine filter to merge them without altering the global hash behavior, as sketched below.
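A sketch of that 2.2-style approach (file names follow the example above; recursive=True gives deep merging of nested dicts):
- include_vars:
    file: hiera/common.yml
    name: common_vars
- include_vars:
    file: hiera/node/{{ ansible_fqdn }}/sites.yml
    name: site_vars
- set_fact:
    hiera_facts: "{{ common_vars | combine(site_vars, recursive=True) }}"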
I'm not familiar with Puppet, so this may not be a direct mapping. But what I understand your question to be is "how do I use values in one shared location but override their definitions for different servers?". In Ansible, you do this with variables.
You can define variables directly in your inventory. You can define variables in host- and group-specific files. You can define variables at a playbook level. You can define variables at a role level. Heck, you can even define variables with command-line switches.
Between all of these places, you should be able to define overrides to suit your situation. You'll probably want to take a look at the documentation section on how to decide where to define a variable for more info.
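For instance, a common inventory layout (paths illustrative) mirrors the Hiera idea of general-to-specific files:
inventory/
  group_vars/
    all.yml        # applies everywhere, like common.yaml
    mysite.yml     # applies to the mysite group, like site_config/mysite.yaml
  host_vars/
    web01.yml      # applies to a single host, like a hosts-level override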
It seems a little more basic than Hiera, but somebody has created a basic Ansible lookup plugin with similar syntax:
https://github.com/sailthru/ansible-oss/tree/master/tools/echelon
I want Puppet to create a different variable name depending on the hiera file associated with the environment. I want to do this because I want Puppet to use the IP address associated with a specific network interface. Ideally, the network interface would be defined in the hiera file. That way you could concatenate the ipaddress variable name with the network interface defined in the hiera file, which would look something like:
::ipaddress_{$network_interface_from_hiera_file}
Is this possible?
Right now I have the following, but I think there is a better implementation. If the network interfaces change, I would have to add another case.
if $environment == 'production' {
  $client_address = $::ipaddress_enp130s0f0
} else {
  $client_address = $::ipaddress_eth2
}
It sounds like you're after an eval in Puppet, like you have in shell, Perl, and other languages, and as far as I know, there isn't one.
I would probably just use a custom fact that always returns the IP address I care about. Of course, then you need to solve the problem of how to get the custom facts out to your fleet.
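A sketch of such a fact, using the interface names from your example (drop it in a module's lib/facter directory so pluginsync distributes it to the fleet):
# module_name/lib/facter/client_address.rb
Facter.add(:client_address) do
  setcode do
    # prefer the production NIC when it exists, otherwise fall back to eth2
    candidates = ['enp130s0f0', 'eth2']
    candidates.map { |iface| Facter.value("ipaddress_#{iface}") }.compact.first
  end
end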
Another solution might be to use Hiera's hierarchical lookup:
In hiera.yaml:
:hierarchy:
- "%{::node_environment}"
- common
In common.yaml:
---
myclass::client_address: "%{::ipaddress_eth2}"
In production.yaml:
---
myclass::client_address: "%{::ipaddress_enp130s0f0}"
Finally, be aware that you can look up values from within Hiera itself (see the Hiera documentation on interpolation functions). Possibly that could be helpful.
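As a small illustration of that (key names hypothetical), a Hiera value can pull in another Hiera key via the hiera() interpolation function:
---
base_domain: "example.com"
myclass::api_endpoint: "https://api.%{hiera('base_domain')}/ping"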
For example, I have a file that looks as follows:
abc
def
ghi
Now, I want to use a Linux shell script to set some variables according to this file. I need the following variables to be set, something like:
export abc=abc111
export def=def111
export ghi=ghi111
As you can see, the variable names are derived from the list file as well. Thanks.
# read each variable name from vars.txt and export name=name111
while read -r var; do
  export "$var=${var}111"
done < vars.txt
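One caveat: export only affects the current shell and its children, so if the loop lives in a script it must be sourced rather than executed for the variables to show up in your calling shell (assuming the loop is saved as setvars.sh):
. ./setvars.sh   # or: source ./setvars.sh
echo "$abc"      # prints abc111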