I recently added the mysql module from Puppet Labs, in version 7.0, on our RHEL Satellite to manage all our MySQL servers, keep their configuration idempotent and block any unwanted configuration.
In this case, a user with GRANT privileges (like root@localhost) is able to create a database manually, for example with the command create database dbname;.
Problem: if I run puppet agent -t on my lab server, Puppet only ensures that the databases defined in my smart class "Databases" exist (or not) and does nothing else...
The expected result was that Puppet would remove any modification (such as a manually created database) when the agent runs.
Is there a way to do this?
Thanks for any replies.
Since Mysql_database is an "ensurable" plugin type that implements prefetching, you should be able to use the Resources resource type to purge any unmanaged databases that are created on managed nodes. It might look something like this:
resources { 'mysql_database': purge => true }
Do this only if you're certain that you really want it!
Additionally, you might want to try some runs in --noop mode to look for issues before going live. That could help you catch unanticipated problems, such as databases that you want to keep -- those belonging to MySQL itself, for example -- but are not currently managing.
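One way to protect those is to declare the system databases explicitly, so the purge counts them as managed - a hedged sketch, not something from the module's docs (the schema names are the usual MySQL ones; check what show databases actually reports on your servers):

mysql_database { ['mysql', 'information_schema', 'performance_schema', 'sys']:
  ensure => present,
  noop   => true,   # keep them in the catalog so the purge skips them, but never change them
}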
Is there a way to extract infrastructure details by using Terraform?
e.g. get a list of Linux servers' versions, firewall policies, open ports, installed software packages, etc.
My aim is to generate a block of code describing the current server setup, so that I can validate it against a checklist; that way security loopholes can be identified and fixed.
I'm not sure if I completely understand your question, but there is no such "automated" way to extract all the details of your non-Terraformed infrastructure. Nevertheless, there is a terraform import command with which you can import your existing resources (here the docs) into your state file.
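For example, a minimal sketch of an import (the resource type, name and instance ID are placeholders, not anything from your environment):

# 1. write an (initially empty) resource block for the existing object, e.g.
#    resource "aws_instance" "web" {}
# 2. import the real object into the state file under that address
terraform import aws_instance.web i-0123456789abcdef0
# 3. run a plan and fill in the arguments until the diff is clean
terraform plan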
Btw, if you are using Oracle Cloud, the Resource Discovery could be an option.
I'm using the ibm_installation_manager module from the Puppet Forge, and it is a bit basic because IBM wrote Installation Manager at a time when idempotency wasn't much of a concern.
ref: https://forge.puppet.com/puppetlabs/ibm_installation_manager
As such, it does not cater nicely for upgrades: the module will not detect that an upgrade is needed, stop the running processes, do the upgrade and then start the processes again. It will just detect that an upgrade is needed and try to install the desired version; if that constitutes an upgrade, great, but it will probably fail due to the running instances.
So I need to implement some "stop processes" pre-upgrade functionality.
I need to mention at this point I'm new to ruby and fairly new to puppet.
The provider that the module uses (imcl.rb) has an exists method.
The ideal way for me to detect if an upgrade is going to happen (and stop the instances if it is) would be for my puppet manifest to be able to somehow call the exists method. Is this possible?
Or how would you approach this problem?
Something like imcl.exists(ibm_pkg["my_imcl_pkg_resource"])
The ideal way for me to detect if an upgrade is going to happen (and stop the instances if it is) would be for my puppet manifest to be able to somehow call the exists method. Is this possible?
No, it is not possible, at least not in any useful way. Your manifests describe how to build a catalog of resources describing the target state of the machine. In a master / agent setup, this happens on the master. The catalog is then used as input to a separate step, in which it is transferred to the target machine and applied there. It is in this second step that providers are engaged.
To the extent that you want the contents of your catalogs to be influenced by the current state of the target machine, the Puppet mechanism for that is to convey the needed state details to the catalog builder in the form of facts. It is relatively straightforward to add your own facts. Indeed, there are at least two distinct, non-exclusive mechanisms, going under the names "external facts" and "custom facts".
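For example, a hedged sketch of an external fact that exposes the currently installed Installation Manager packages to the catalog builder (the path to imcl and the fact name are assumptions for illustration, not something the module ships):

#!/bin/sh
# /etc/puppetlabs/facter/facts.d/ibm_im_packages.sh  (hypothetical external fact)
# Prints key=value pairs that Facter turns into agent facts.
IMCL=/opt/IBM/InstallationManager/eclipse/tools/imcl
if [ -x "$IMCL" ]; then
  # imcl listInstalledPackages prints one installed package ID per line
  echo "ibm_im_packages=$("$IMCL" listInstalledPackages | tr '\n' ',')"
fi

Your manifest can then compare $facts['ibm_im_packages'] against the version you are about to install and declare the "stop processes" resources only when they differ.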
There is a need for one puppet agent to contact several different puppet masters.
Reason: there are different groups that create different and independent sets of manifests.
Possible groups and their tasks
Application Vendor: configuration of application
Security: hardening
Operations: routing tables, monitoring tools
Each of these groups should run its own puppet master - the data (manifests and associated data) should be strictly separated. If possible, one group should not even see / have access to the manifests of the others (we are using MAC on the puppet agent OSes).
Thoughts and ideas that all failed:
using (only) Hiera is not as flexible as needed - there is the need to have different manifests.
r10k: supports more than one environment, but each environment can only access one set of manifests.
multiple identical puppet servers using e.g. DNS round robin: this is the other way round. We need different puppet masters.
Some ways that might be possible but...
running multiple instances of puppet agents. That 'feels' strange. Advantage: the access rights can be limited as needed (e.g. the application puppet agent can run under the application user).
patching puppet that it can handle more than one puppet master. Disadvantage: might be some work.
using other mechanisms to split responsibility. Example: use different git-repositories. Create one puppet master. The puppet master pulls all the different repositories and serves the manifests.
My questions:
Is there a straightforward way of implementing this requirement with Puppet?
If not, is there some best practice how to do this?
While I think what you are trying to do here is better tackled by incorporating all of your modules and data onto a single master - and utilizing environments would effectively be the same situation (different masters would just provide different sets of modules/data) - it can be achieved by implementing a standard multi-master infrastructure (one CA master for cert signing, and multiple compile masters with certs signed by that CA master, configured to forward cert traffic to it), configuring each master to serve whatever you need. You then end up having to specify which master you want to check in to on each run (via a cronjob or some other approach), and you have the potential for one check-in to change settings set by another (which rather defeats the hardening/security concept).
I would urge you to think more deeply about how to bring your varied responsibilities together (git repos for each division's Hiera data and modules, each with its own access control) so that a central master can serve your needs (access to that master being the only way to get data/modules from everywhere).
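As a hedged sketch of that layout, the central master's Puppetfile (deployed with r10k) could pull each team's module from a repository that only that team can write to - the repository URLs and module names below are placeholders:

# Puppetfile on the central master; each team owns its own repository and access control
mod 'vendor_app',
  :git => 'git@git.example.com:app-vendor/puppet-vendor_app.git',
  :tag => '1.4.0'
mod 'security_hardening',
  :git => 'git@git.example.com:security/puppet-hardening.git',
  :tag => '2.1.0'
mod 'ops_baseline',
  :git => 'git@git.example.com:operations/puppet-ops_baseline.git',
  :tag => '0.9.2'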
This type of setup will be complex to implement, but the end result will be more reliable and maintainable. Puppet Inc. may even be able to provide consulting to help you get it right.
There are likely other approaches too, just fyi.
I've often found it convenient to multi-home a puppet agent for development purposes, because with a local puppet server you can instantly test manifest changes - there's no requirement to commit, push and r10k deploy environment like there is if you're just using directory environments and a single (remote) puppet server.
I've found the best way to do that is to just vary the path configuration (otherwise you run into problems with e.g. the CA certs failing to verify against the other server) - a form of your "running multiple instances of puppet agents" suggestion. (I still run them all privileged, so they can all use apt package {} etc.)
For Puppet 3, I'd do this by varying the libdir with --libdir (because the ssldir was under the libdir), but now (Puppet 4+) it looks more sensible to vary the --confdir. So, for example:
$ sudo puppet agent -t # Runs against main puppet server
$ sudo puppet agent -t \
--server=puppet.dev.example.com \
--confdir=/etc/puppetlabs/puppet-dev # Runs against dev puppet server
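If you don't want to pass --server on every dev run, the dev confdir can carry its own puppet.conf - a minimal sketch (the server name and path are just placeholders):

# /etc/puppetlabs/puppet-dev/puppet.conf  (hypothetical dev confdir)
[main]
server = puppet.dev.example.com
# ssldir and related paths default to locations under this confdir,
# so the dev agent's certs stay separate from the main agent's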
I need to perform some action (configure something) after stopping the tomcat service. Once the configuration is complete, I need to ensure that the tomcat service is up and running again. I have written following puppet code for the same:
service { 'tomcat': ensure => stopped }
->
class { 'config': }
->
service { 'tomcat': ensure => running }
On puppet apply, it is complaining that
'Error: Duplicate declaration: Service[tomcat] is already declared in
file'
How do I fix this problem? What is the recipe in Puppet to stop a service, perform some action and then bring the service back up again?
In Puppet, you can't declare the same service twice - that's the error you are getting.
With Puppet, you needn't take care of the tomcat stop/start steps yourself. Puppet takes care of reaching the final state (this is its "idempotency"). Once you define the relationships between the package, config files and service, it will do the whole job for you. For example, you need to understand the chain below and the difference between -> and ~>.
Package['tomcat'] -> File['server.xml'] ~> Service['tomcat']
In your case, you apply the change to the tomcat config file, and Puppet will restart the tomcat service automatically.
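A minimal sketch of that pattern for tomcat (the package name, file path and source are assumptions - adjust them to your platform and module):

package { 'tomcat':
  ensure => installed,
}

file { '/etc/tomcat/server.xml':
  ensure  => file,
  source  => 'puppet:///modules/mymodule/server.xml',  # hypothetical module file
  require => Package['tomcat'],
  notify  => Service['tomcat'],  # the ~> relationship: restart tomcat when this file changes
}

service { 'tomcat':
  ensure => running,
  enable => true,
}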
For your reference, here is a quote from the Introduction to Puppet blog explaining what idempotency means:
One big difference between Puppet and most other tools is that Puppet
configurations are idempotent, meaning they can safely be run multiple
times. Once you develop your configuration, your machines will apply
the configuration often — by default, every 30 minutes — and Puppet
will only make any changes to the system if the system state does not
match the configured state.
Update 2016:
Here another official Puppet blog post on idempotency: https://puppet.com/blog/idempotence-not-just-a-big-and-scary-word
This is not directly possible with Puppet, as #BMW concludes correctly. There are some more points to note, however.
There is some promising work in progress that will add limited support for transitional state declaration. However, this will not (in its current alpha state at least) allow you to enter such a state in preparation for and during application of a whole class.
A common workaround for this kind of issue is to manage the entity in question with two or more resources. The exec type is a good catch all solution because it can manage virtually anything. The obvious drawback is that the exec will have to be tailored to your agents (what do you know - there's a point to Puppet's type system after all ;-). Assuming that the manifest will be for one platform only, this is simple:
exec { 'stop-tomcat':
  command => 'service tomcat stop',
  onlyif  => 'service tomcat status',
  path    => ['/usr/sbin', '/usr/bin', '/sbin', '/bin'],  # exec needs a search path (or fully qualified commands)
  before  => [
    Class['config'],
    Service['tomcat'],
  ],
}
Ordering the exec before Service['tomcat'] is redundant (because the service requires the class), but it is good practice to express that the service resource should have the final say.
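To complete the recipe from the question, the service itself is then declared only once, in its final (running) state, and ordered after the config work - a hedged sketch:

class { 'config': }

service { 'tomcat':
  ensure  => running,
  require => Class['config'],  # started again only after the config class has done its work
}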
I am trying to understand the best practice for setting up Puppet in the first place; let's say I have 1000 existing servers that need to be managed by Puppet.
Do I manually install the Puppet agent on each one, or is there a better way?
Sorry if this question is too generic - I just want to get some idea.
1000 servers could be a lot for a single master instance. Of course, it will depend on the master's specs and other factors related to the puppet runs.
There are a few questions you need to answer first to determine how you are going to go about it, such as:
Puppet Enterprise or Open Source?
What is the current configuration nightmare you are trying to solve?
What is the current configuration data related to the challenge or problem you have?
What are the current business roles (e.g. web server, load balancer, database, etc.) related to the problem you have? What makes up a role in terms of configuration?
I would suggest that you start small first, to learn more about the Puppet DSL and its ecosystem (master, agent, PuppetDB, console/dashboard). I also recommend you start with the free 10-node Puppet Enterprise, as it will let you focus more on the problem at hand rather than on how to configure the puppet masters and agents, how to scale them, etc.
One more thing: install the puppet agent everywhere if you can, in noop/disabled mode, to at least collect facts, and run it in a masterless fashion using puppet apply when you need to. I find noop mode very useful as it tells you what needs to be changed; you can then enforce the changes using --no-noop.
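For example, a hedged sketch of such a masterless run (the manifest path is a placeholder):

puppet apply --noop /root/site.pp      # dry run: report what would change, change nothing
puppet apply --no-noop /root/site.pp   # enforce it, overriding a noop = true set in puppet.conf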
Hope that will get you started.
To answer your question: yes, the Puppet agent needs to be installed on every node. If you are managing 1000 nodes, I would assume you have your own OS image. In this case, it's best to add the agent to the OS image and use that image on all 1000 nodes.
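If you bake it into the image (or a kickstart %post section), a hedged sketch for RHEL-family systems - the release package, Puppet version and master name are assumptions, pick the ones that match your environment:

# add the Puppet platform repo and install the agent (example for EL8 / Puppet 7)
rpm -Uvh https://yum.puppet.com/puppet7-release-el-8.noarch.rpm
yum -y install puppet-agent
# point the agent at your master, but leave it disabled until the node is ready to enroll
/opt/puppetlabs/bin/puppet config set server puppet.example.com --section main
/opt/puppetlabs/bin/puppet agent --disable "baked into image, not yet enrolled"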