Use a puppet module multiple times - puppet

I'm using a puppet module from Puppet Forge - https://forge.puppet.com/creativeview/mssql_system_dsn
The documentation indicates to use it like this:
class { 'mssql_system_dsn':
  dsn_name     => 'vcenter',
  db_name      => 'vcdb',
  db_server_ip => '192.168.35.20',
  sql_version  => '2012',
  dsn_64bit    => true,
}
I need to create multiple odbc data sources.
However, if I simply duplicate this snippet with different parameters, I get a multiple declaration error.
How can I declare this module multiple times?

How can I declare this module multiple times?
You cannot do so without modifying the module. Although it is possible to declare the same class multiple times if you use include-like syntax, that does not afford a means to use different parameters with different declarations. This is all connected to the fact that Puppet classes are singletons. I can confirm based on a quick review of the module's code that its design does not support defining multiple data sources.
I'd encourage you to file an enhancement request with the module author. If that does not quickly bear fruit, then you have the option of modifying the module yourself. It looks like that would be feasible, but not as simple as just changing a class keyword to define.
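For illustration only: a module reworked around a defined type could then be declared once per data source. The type name, parameters and body below are hypothetical, and the module's actual registry resources would also have to be made unique per instance.
define odbc::system_dsn (
  String  $db_name,
  String  $db_server_ip,
  String  $sql_version,
  Boolean $dsn_64bit = true,
) {
  # The resource title doubles as the DSN name, so every declaration
  # simply needs a unique title.
  notify { "would configure DSN ${title} -> ${db_server_ip}/${db_name} (SQL ${sql_version})": }
}
odbc::system_dsn { 'vcenter':
  db_name      => 'vcdb',
  db_server_ip => '192.168.35.20',
  sql_version  => '2012',
}
odbc::system_dsn { 'otherapp':
  # a second, made-up data source to show repeated declaration
  db_name      => 'otherdb',
  db_server_ip => '192.168.35.21',
  sql_version  => '2012',
}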

As the author didn't answer my request and hadn't merged a pull request from another contributor, I created my own module:
https://forge.puppet.com/garfieldmoore/odbc_data_source
If anyone is interested enough to review my module's code and offer improvements, or to let me know where I have not followed best practices, I would appreciate it.

Related

Puppet: Class Ordering / Containment - always wrong order

I have read a lot about ordering Puppet classes with containment (I am using Puppet 6), but it still does not work for me in one case. Maybe my English is not good enough and I am missing something; maybe somebody knows what I am doing wrong.
I have a profile that installs a Puppet server (profile::puppetserver). This profile has three sub-classes which I contain within profile::puppetserver:
class profile::puppetserver (
) {
  contain profile::puppetserver::install
  contain profile::puppetserver::config
  contain profile::puppetserver::firewall
}
That works fine for me. Now I want to expand this profile and install PuppetDB. For this I use the puppetdb module from Puppet Forge. So what I do is add profile::puppetserver::puppetdb and contain it within profile::puppetserver:
class profile::puppetserver::puppetdb (
) {
  # Configure puppetdb and its underlying database
  class { 'puppetdb': }
  # Configure the Puppet master to use puppetdb
  class { 'puppetdb::master::config': }
}
When I provision my Puppet server first and add profile::puppetserver::puppetdb afterwards, PuppetDB installs and everything works fine.
If I add it directly with contain and provision everything at once, it crashes. That is because the puppetdb module's resources (including the PostgreSQL server and so on) are applied at arbitrary points while my master server is still being installed. The result is that my Puppet server is not running, PuppetDB generates no local SSL certificates, and its service doesn't come up.
What I tried first:
I installed the puppetdb package directly in my profile::puppetserver::puppetdb and used the require metaparameter. That works when I provision everything at once.
class profile::puppetserver::puppetdb (
) {
  package { 'puppetdb':
    ensure  => installed,
    require => Class['profile::puppetserver::config'],
  }
}
So I thought I could do the same in the code above:
class profile::puppetserver::puppetdb (
) {
  # Configure puppetdb and its underlying database
  class { 'puppetdb':
    require => Class['profile::puppetserver::config'],
  }
  # Configure the Puppet master to use puppetdb
  class { 'puppetdb::master::config':
    require => Class['profile::puppetserver::config'],
  }
}
But this does not work.
So I read about Puppet class containment and ordering with chaining arrows, and did this in my profile::puppetserver:
class profile::puppetserver (
) {
  contain profile::puppetserver::install
  contain profile::puppetserver::config
  contain profile::puppetserver::firewall
  contain profile::puppetserver::puppetdb
  Class['profile::puppetserver::install'] ->
  Class['profile::puppetserver::config'] ->
  Class['profile::puppetserver::firewall'] ->
  Class['profile::puppetserver::puppetdb']
}
But it still does not have any effect: Puppet still starts installing PostgreSQL and the puppetdb package during my Puppet server provisioning, in the middle of the install, config and firewall steps.
How must I write the ordering so that everything from the puppetdb module, which I declare in profile::puppetserver::puppetdb, only starts once the rest of the provisioning steps are finished?
I really don't understand it. I think it may have something to do with the fact that I declare classes from the puppetdb module inside profile::puppetserver::puppetdb rather than declaring resource types directly, because when I use the package resource type with the require metaparameter it seems to work. But I really don't know how to handle this. I think there must be a way, or?
I think it may have something to do with the fact that I declare
classes from the puppetdb module inside
profile::puppetserver::puppetdb rather than declaring resource types
directly, because when I use the package resource type with the
require metaparameter it seems to work.
Exactly so.
Resources are ordered with the class or defined-type instance that directly declares them, as well as according to ordering parameters and instructions applying to them directly.
Because classes can be declared multiple times, in different places, ordering is more complicated for them. Resource-like class declarations such as you demonstrate (and which you really ought to avoid as much as possible) do not imply any particular ordering of the declared class. Neither do declarations via the include function.
Class declarations via the require function place a single-ended ordering constraint on the declared class relative to the declaring class or defined type, and declarations via the contain function place a double-ended ordering constraint similar to that applying to all resource declarations. The chaining arrows and ordering metaparameters can place additional ordering constraints on classes.
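As a minimal illustration (the class names here are purely hypothetical), an ordering constraint placed on a wrapper class reaches the inner class only when the inner class is contained:
class wrapper_included {
  include inner    # 'inner' is NOT constrained by ordering applied to wrapper_included
}
class wrapper_contained {
  contain inner    # 'inner' IS constrained by ordering applied to wrapper_contained
}
Class['something_else'] -> Class['wrapper_contained']  # also delays everything in 'inner'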
But I really don't know how to handle this. I think there must be a way, or?
Your last example shows a viable way to enforce ordering at the level of profile::puppetserver, but its effectiveness is contingent on each of its contained classes taking the same approach for any classes they themselves declare, at least where those third-level classes must be constrained by the order of the second-level classes. This appears to be where you are falling down.
Note also that although there is definitely a need to order some things relative to some others, it is not necessary or much useful to try to enforce an explicit total order over all resources. Work with the lightest hand possible, placing only those ordering constraints that serve a good purpose.
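Concretely, a hedged sketch of what that means for profile::puppetserver::puppetdb, assuming the default parameters of the puppetdb classes are acceptable:
class profile::puppetserver::puppetdb (
) {
  # Containing (rather than resource-style declaring) the puppetdb classes
  # lets the chain in profile::puppetserver constrain them as well.
  contain puppetdb
  contain puppetdb::master::config
  # Shown explicitly here for clarity; the module may already manage this ordering.
  Class['puppetdb'] -> Class['puppetdb::master::config']
}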

Writing ENV variables to configure an npm module

I currently have a project in a loose ES6 module format and my database connection is hard coded. I am wanting to turn this into an npm module and am now facing the issue of how best to allow the end user to configure the code. My first attempt was to rewrite it as classes to be instantiated, but that made using the code more convoluted than before, so I am looking at alternatives. I am exploring my configuration options. It looks like writing to the process env would be the way, but I am pondering potential issues, no-nos and other options I have not considered.
Is having the user write config to process env an acceptable method of configuring an npm module? It's a bit like a global write, so I am dealing with namespace considerations, for one. I have also considered using package.json, but that's not going to work for things like credentials. Likewise, using an rc file is cumbersome. I have not found any docs on the proper methodology, if there is one.
process.env['MY_COOL_MODULE_DB'] = ...
There are basically 5ish options as I see it:
hardcode - not an option
create a configured scope such as classes - what I have now and bleh
use a config such as node-config - not really a user friendly option for npm
store as globals/env. As suggested in a comment, I can wrap that process in an exported function and thereby ensure that I have a complex, non-colliding namespace while abstracting that from the end user
Ask user to create some .rc file - I would if I was big time like AWS but not in this case.
I mention this npm use case, but this really applies to the general challenge of configuring code that is exported as functions. I have use cases for classes, but when the only need is creating a configured scope at the expense (in my case) of more complex code, I am not sure it's worth it.
Update: I realize this is a bit of a discussion question, but it has helped me wrap my brain around the options. I am thinking of something like this:
// options.js
let options = {}
export function setOptions(o) { options = o }
export function getOptions() { return options }
Then have the user call setOptions() and call getOptions() internally. Since Node caches the module after the first require, my options object will stay configured as I pass it around.
NPM modules should IMO be agnostic as to where configuration is stored. That should be left up to the developer, and they may pick their favorite method (env vars, rc files, JSON files, whatever).
The configuration can be passed to your module in various ways. A common way is to export a function that takes an options object:
export default options => {
  let db = database.connect(options.database);
  ...
}
From there, it really depends on what exactly your module provides. If it's just a bunch of loosely coupled functions, you can just return an object:
export default options => {
  let db = database.connect(options.database);
  return {
    getUsers() { return db.getUsers() }
  }
}
If you want to allow multiple versions of that object to exist simultaneously, you can use classes:
class MyClass {
  constructor(options) {
    ...
  }
  ...
}
export default options => {
  return new MyClass(options)
}
Or export the entire class itself.
If the number of configuration options is limited (say 3 or less), you can also allow them to be passed as separate arguments, instead of passing an object.

Collect values from other puppet classes

I'm trying to implement automatic monitoring using nagios/icinga and puppet.
Hosts and basic services are working but now I want to implement different checks for services based on hostgroups. While I could setup the hostgroups in hiera I want to be able to do the following:
Apply a class for each service (like ssh, http) which only "exports" a hostgroup name (like "ssh-servers" and "http-servers"),
and also apply a base class which "collects" these names, joins them into a string, and exports a nagios_host resource like this:
@@nagios_host { $::fqdn:
  ensure     => present,
  use        => "generic-host",
  alias      => $::hostname,
  address    => $::ipaddress,
  hostgroups => $hostgroups, # this should be something like "ssh-servers, http-servers"
}
I'm just starting with puppet and looked at virtual resources and exported resources but I'm not sure how to apply this correctly. Is this even possible?
The export/import paradigm does not lend itself well to this type of data gathering. If you want to take advantage of it, you will need to define resource types that Just Work when gathered on the Nagios server from all the agent catalogs.
Your mileage might very well increase if you try and rely on PuppetDB queries instead. You get much more control this way.
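For instance, a minimal sketch of the PuppetDB-query route on the Nagios server (the class title Ssh_server is made up, puppetdb_query requires the PuppetDB terminus, and join() comes from stdlib):
# Hosts that declare the (hypothetical) ssh server class become members
# of the ssh-servers hostgroup.
$ssh_hosts = puppetdb_query('resources[certname] { type = "Class" and title = "Ssh_server" }').map |$r| { $r['certname'] }
nagios_hostgroup { 'ssh-servers':
  ensure  => present,
  members => join($ssh_hosts, ','),
}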

Reusing Puppet Defined Type Parameter in Other Defined Type

Let's say I want to define a set of resources that have dependencies on each other, where the dependent resources reuse parameters from their ancestors. Something like this:
server { 'my_server':
  path => '/path/to/server/root',
  ...
}
server_module { 'my_module':
  server => Server['my_server'],
  ...
}
The server_module resource both depends on my_server and wants to reuse its configuration, in this case the path where the server is installed. stdlib has functions for doing this, specifically getparam().
Is this the "puppet" way to handle this, or is there a better way to have this kind of dependency?
I don't think there's a standard "puppet way" to do this. If you can get it done using the stdlib and you're happy with it, then by all means do it that way.
Personally, if I have a couple of defined resources that both need the same data, I'll do one of the following:
1) Have a manifest that creates both resources and passes the data both need via parameters. The manifest will have access to all data both resources need, whether shared or not.
2) Have both defined resources look up the data they need in Hiera.
I've been leaning more towards #2 lately.
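A minimal sketch of option 2, assuming a shared Hiera key (the key name myapp::server_root and the file resource are made up):
# data/common.yaml (illustrative):
#   myapp::server_root: /path/to/server/root
define server_module () {
  # Both defined types can lookup() the same key instead of passing
  # the path from one resource to the other.
  $server_root = lookup('myapp::server_root')
  file { "${server_root}/modules/${title}":
    ensure => directory,
  }
}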
Dependency is only a matter of declaring it. So your server_module resource would have a "require => Server['my_server']" parameter, or the server resource would have a "before => Server_module['my_module']".
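And a hedged sketch of the getparam() route combined with that explicit dependency (getparam() comes from puppetlabs-stdlib; the body of the defined type is illustrative only):
define server_module (
  $server,  # expects a resource reference such as Server['my_server']
) {
  # Read the 'path' parameter off the referenced server resource.
  $server_path = getparam($server, 'path')
  file { "${server_path}/modules/${title}":
    ensure  => directory,
    require => $server,  # the same reference also expresses the dependency
  }
}
server_module { 'my_module':
  server => Server['my_server'],
}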

Is it secure to use a controller, module or action name to set include paths?

I want to set up include paths (and other paths, like view script paths) based on the module being accessed. Is this safe? If not, how could I safely set up include paths dynamically? I'm doing something like the code below (this is from a controller plugin).
public function dispatchLoopStartup(Zend_Controller_Request_Abstract $request) {
    $modName = $request->getModuleName();
    $modulePath = APP_PATH.'/modules/'.$modName.'/classes';
    set_include_path(get_include_path().PATH_SEPARATOR.$modulePath);
}
I'm not sure whether it is safe or not, but it doesn't sound like the best practice. What if someone entered a module name like ../admin/? You should sanitize the module name before using it.
It's fine as long as you sanitize your variables before using them, but it won't be very performant. Fiddling with include paths at runtime has a serious performance impact.
You're trying to load models/helpers per module? You should look at Zend_Application:
Zend_Application provides a bootstrapping facility for applications which provides reusable resources, common- and module-based bootstrap classes and dependency checking. It also takes care of setting up the PHP environment and introduces autoloading by default.
Emphasis mine.
