So I have a directory of puppet manifests that I want to run.
Is it possible to do something like:
include /etc/puppet/users/server522/*.pp
and have puppet run them?
I've tried
include users::server522::*
and several other variations, but I always get an error about Puppet being unable to find it.
Is there any way to do this?
So my final solution to this was to write a script that takes the directory listing and, for each .pp file, adds an include line to the server522.pp file. Quite annoying that Puppet won't include an entire directory.
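For reference, the generator was roughly along these lines (the paths and the class-name mapping are illustrative, not exact):
#!/bin/sh
# Rebuild server522.pp with one include per manifest found in the directory.
out=/etc/puppet/users/server522.pp
: > "$out"
for f in /etc/puppet/users/server522/*.pp; do
  echo "include users::server522::$(basename "$f" .pp)" >> "$out"
done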
What are you trying to do here, and are you sure you're doing it the correct way? To wit, if you have multiple manifests corresponding to multiple servers, you need to define the nodes for each server. If OTOH you're trying to apply multiple manifests to a single node it's not clear why you would be doing that, instead of just using your defined classes. A little more information would be helpful here.
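For example, if each manifest really corresponds to one server, the usual pattern is a node definition per server in site.pp, roughly like this (the node name and class are assumptions based on the paths above):
# site.pp (hypothetical)
node 'server522.example.com' {
  include users::server522
}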
I do not see the point of each user having its own manifest. I would rather create a script that automatically builds one manifest file, based on data from some source, for instance from the HEAD of a git repository containing a CSV file with the current list of users.
If you really want to use a separate manifest file for every user, you may consider having a separate module for every user:
manifests/
    default.pp        <-- here comes the default manifest
module_for_user_foo/
    manifests/
        init.pp       <-- here comes your 'foo' user
module_for_user_bar/
    manifests/
        init.pp       <-- here comes your 'bar' user
Now you can copy around the modules containing the per-user manifests as needed.
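For illustration, the init.pp of the hypothetical module_for_user_foo might look roughly like this (the user name and attributes are assumptions):
# module_for_user_foo/manifests/init.pp
class module_for_user_foo {
  user { 'foo':
    ensure     => present,
    managehome => true,
  }
}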
I am creating a build system for development purposes for the FreeCAD application. The repo is here if you want a better sense of what I'm talking about.
Essentially the folder structure is:
(Main)
    (Linux)
        (Ubuntu)
            ubuntu.sh
            ubuntu.Dockerfile
        (Fedora)
            fedora.sh
            fedora.Dockerfile
    (Windows)
    (Mac)
    .env
What I want to do is use the env variables in .env as a central source of truth for all the build scripts in the tree. But I don't want to have to explicitly define the path of the .env inside the files (absolute or relative), as I'm still iterating and I don't want to update all the files if I rearrange the tree. Alternatively, I don't want to put independent .env's in all the child dirs for the same reason (unless they auto-update somehow).
My question is as follows:
How do I explicitly reference the "local" path of the .env in each script, Dockerfile, etc., while only having to modify one top-level .env file as the tree evolves, in a cross-platform way?
Some things I thought through:
Windows uses "hard links", which are equivalent to but not compatible with POSIX hard links. I thought about creating windows.env and posix.env in each child dir that point to the same main .env, but most config files can only take one .env path argument.
I thought about writing a script that updates all the .env's when run (I would rather not have to). Alternatively, I will accept an answer that uses some dotenv tooling to accomplish the same goal, as long as it's cross-platform and runs locally; I'm just not super familiar with those toolings. I would prefer the tooling or script to run as a service rather than having to be run every time to update the files.
IF I'm using Git AND only dealing with shell scripts, then a command at the top of the script such as . "$(git rev-parse --show-toplevel)/.env" works well, but it has major limitations for use with Dockerfiles and other YAML-based file types.
I currently use a run.sh file at the top-level dir that sources the .env and then calls the other files within it. This seems to be the most used pattern I see in other repos, but it means I need two files, run.sh and run.pwsh, which just seems extraneous and hacky: extra files that are basically one-liners.
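For reference, a minimal sketch of that wrapper pattern, assuming the tree above (the dispatch target and the set -a trick are illustrative, not a definitive layout):
#!/bin/sh
# run.sh at the repo root: load the shared .env, then call a platform build script.
here="$(cd "$(dirname "$0")" && pwd)"
set -a                       # export every variable the .env defines
. "$here/.env"
set +a
exec "$here/Linux/Ubuntu/ubuntu.sh" "$@"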
I have a (private) project on Gitlab which uses GameMaker, and the .yy files were being detected as Yacc. I looked up how to change this, so I came across .gitattributes files, as described here and here. I created a .gitattributes file in the project directory with the following content:
*.yy linguist-language=GameMaker JSON
*.yy linguist-detectable=true
*.yyp linguist-language=GameMaker JSON
*.yyp linguist-detectable=true
The files are no longer being detected as Yacc, but they are also not detected as "GameMaker JSON"; GitLab now shows the repository as 100% GameMaker Language. I have tried the *.yy linguist-detectable syntax both with and without the =true, I have tried writing GameMaker-JSON with hyphens instead of spaces, and I have confirmed that the .gitattributes file was pushed to the main branch (which is the only branch). How can I resolve this so that the .yy and .yyp files get recognized correctly? Am I missing something?
It seems I mistakenly assumed that linguist allows you to specify custom language names in .gitattributes, but to my current knowledge that is unfortunately not possible. I will therefore mark .yy and .yyp files as JSON in my project (refer to this comment I made), which I have already confirmed to work correctly.
My intention was to mark files that are specifically used as GameMaker project files or asset files (which are created and used by the GameMaker editor and not intended to be edited manually) differently from other files with JSON syntax (GameMaker also allows you to parse data from JSON files within your game code, these files would usually use the .json extension and not .yy or .yyp).
For now, it seems advisable for GameMaker projects to either specify .yy and .yyp as JSON or specify them to not be counted by linguist at all, since they aren't code that is manually written by the user.
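For example, either of these .gitattributes variants should work with linguist, depending on whether you want the files counted as JSON or left out of the language stats entirely (shown here as a sketch, not verified against GitLab specifically):
# count .yy/.yyp as JSON in the language statistics
*.yy linguist-language=JSON
*.yyp linguist-language=JSON
# ...or exclude them from the statistics altogether
*.yy linguist-generated=true
*.yyp linguist-generated=true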
I have a running puppet master-agent setup and currently trying to figure out how to use hiera to provision php.
My Puppetfile:
forge "http://forge.puppetlabs.com"
mod "jfryman/nginx"
mod "puppetlabs/mysql"
mod "mayflower/php"
mod 'puppetlabs-vcsrepo'
mod 'puppetlabs/ntp', '4.1.0'
mod 'puppetlabs/stdlib'
My site.pp:
hiera_include('classes')
My environment.conf, where the modulepath is maintained:
manifest = site.pp
modulepath = modules:site
My hiera config on puppet master at /etc/puppetlabs/puppet/hiera.yml:
---
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::trusted.certname}"
  - "environment/%{server_facts.environment}"
  - common
:yaml:
  # datadir is empty here, so hiera uses its defaults:
  # - /etc/puppetlabs/code/environments/%{environment}/hieradata on *nix
  # - %CommonAppData%\PuppetLabs\code\environments\%{environment}\hieradata on Windows
  # When specifying a datadir, make sure the directory exists.
  :datadir:
From what I understand, general config that should be present on all servers goes into common.yaml. With this setup, I managed to install ntp on my node with this config at hieradata/common.yaml:
---
classes:
- 'profile::base'
ntp::servers:
- server 0.de.pool.ntp.org
- server 1.de.pool.ntp.org
- server 2.de.pool.ntp.org
- server 3.de.pool.ntp.org
Now, my hierarchy also states that all node specific config should go into hieradata/nodes/{fqdn-of-the-node}.yml.
Now, finally coming to my questions:
I have a file hieradata/nodes/myserver.example.com.yml which holds this:
classes:
- 'profile::php'
And a manifest under site/profile/manifests/php.pp:
class profile::php {
  class { '::php': }
}
But this does not provision php. As you saw, I use mayflower/php from the forge.
Now, my two questions are:
Is my hiera file for php in the right location? And what am I missing to make it provision php on my agent?
You have multiple issues/possibilities here, so let us go through them iteratively.
First, you are using the default datadir of:
/etc/puppetlabs/code/environments/%{environment}/hieradata
However, you have a priority of:
"environment/%{server_facts.environment}"
This does not make sense, since you have a priority that distinguishes data for nodes based on their directory environment, but you also are placing hieradata directly in directory environments. If you want priority based on directory environment, then change your hieradata directory to be outside the direct environments at:
/etc/puppetlabs/code/hieradata
Otherwise, you should remove that level from your priority as it adds no value and will increase lookup times.
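For example, dropping that level, the hierarchy from your question would look roughly like this (a sketch that keeps the default datadir):
---
:backends:
  - yaml
:hierarchy:
  - "nodes/%{::trusted.certname}"
  - common
:yaml:
  :datadir: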
Second, does your site.pp call hiera_include('classes')? That will look up the classes array and then include each class, which is what it seems you want. If it is not being called, the node provisioning issue you described would occur.
Third, is site in your modulepath? You need to append it in either your puppet.conf or your environment.conf.
Fourth, your node's fqdn may not match the certname. Check the certs directory on your Puppetmaster for the node's cert.
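For instance, on the master you could list the certificates it knows about with something along these lines (the puppet cert subcommand exists on pre-6 versions, and the ssl path may differ on your install):
# list all signed certificates (pre-Puppet-6 CLI)
puppet cert list --all
# or inspect the signed certs directory directly
ls /etc/puppetlabs/puppet/ssl/ca/signed/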
Side notes:
The first half of your question contains a lot of extraneous information and is missing a lot of helpful relevant information. Please consider editing the question to provide more helpful information and to be more concise.
Since ntp worked, I am assuming your module install with r10k into the environment directories succeeded. Also I am assuming that the modules are present for the directory environment of your node.
There is no real reason to reference the php class with the top-scope prefix (::php) in your declaration.
This is probably a basic question but I've been Googling for a while on it... I have a Cabal-ized Haskell project and I'm in the process of writing integration tests for it. I want to be able to include test resources for my project in the same repo and access them in tests. For example, here are a couple things I want to accomplish:
1) Check a dummy database instance into my repo, including a shell script that spins up a database process. I want to write an Hspec integration test that spins up the database process, makes some calls to it, and then shuts it down. So I need to be able to find the shell script so I can use System.Process.createProcess on it.
2) Check in paired "input" and "output" files. My test should process each of the input files and compare them to a corresponding output file to make sure they match. (I've read about "golden" but it doesn't seem to solve the problem of finding/reading the input files in the first place?)
In short, how can I go about creating a "resources" folder in the root folder of my Haskell project and find the path to it inside tests?
Have a look at an existing project that uses input and output files.
For example, take haddock, the source code is at https://github.com/haskell/haddock. They have the test files under a folder (https://github.com/haskell/haddock/tree/master/html-test/ref) and they are referenced as extra-source-files in the cabal file (https://github.com/haskell/haddock/blob/master/haddock.cabal). Then the test code (https://github.com/haskell/haddock/blob/master/html-test/run.lhs) uses some CPP macro (__FILE__) to get the current directory, and can then resolve the files relative to that folder.
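For instance, a rough sketch of that approach in a test module could look like this (the resources folder and file names are assumptions; __FILE__ expands to whatever path the compiler was given for this file):
{-# LANGUAGE CPP #-}
module Main (main) where

import System.FilePath (takeDirectory, (</>))

-- CPP expands __FILE__ to this source file's path, so checked-in resources
-- can be resolved relative to the test source itself.
testDir :: FilePath
testDir = takeDirectory __FILE__

-- Hypothetical helper: path into a "resources" folder next to the tests.
resource :: FilePath -> FilePath
resource name = testDir </> "resources" </> name

main :: IO ()
main = readFile (resource "case1.input") >>= putStrLn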
I know Puppet modules always have a files directory, I know where it's supposed to be, and I have used the source => syntax effectively from my own handwritten modules, but now I need to learn how to deploy files using Hiera.
I'm starting with the saz-sudo module and I've read the docs but I can't see anything about where to put the sudoers file; the one I want to distribute.
I'm not sure whether I need to set up a site-wide files dir in /etc/puppetlabs/puppet and then make subdirs in there for every module, or what. And does Hiera know to look in /etc/puppetlabs/puppet/files/sudo if I say source => "puppet:///files/etc/sudoers"? Do I need to add a pathname in /etc/hiera.yaml? Add a line - files?
Thanks for any clues.
Here is my cursory view of the puppet module, given their example of using hiera:
sudo::configs:
    'web':
        'source' : 'puppet:///files/etc/sudoers.d/web'
    'admins':
        'content' : "%admins ALL=(ALL) NOPASSWD: ALL"
        'priority' : 10
    'joe':
        'priority' : 60
        'source' : 'puppet:///files/etc/sudoers.d/users/joe'
This suggests it assumes you have a "files" puppet module. So under your puppet modules directory:
mkdir -p files/files/etc/sudoers.d/
Drop your files in there.
Explanation:
The url 'puppet:///files/etc/sudoers.d/users/joe' is broken down thus:
puppet: protocol
///: Three slashes indicate the source of the file is in a module.
files: name of the module
etc/sudoers.d/users/joe: full path to the file within the module's "files" directory.
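Laid out as a tree, the assumed module would look roughly like this (only the paths referenced above are shown):
modules/
    files/                  <-- the module named "files"
        files/              <-- the module's own files directory
            etc/
                sudoers.d/
                    web
                    users/
                        joe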
You don't.
The idea of a module (Hiera backed or not) is to lift the need to manage the whole sudoers file from you. Instead, you can manage each single entry in the sudoers file.
I recommend reviewing the documentation carefully. You should definitely not have a file { "/etc/sudoers": } resource in your manifest.
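For illustration, a single entry could be managed with the module's sudo::conf defined type roughly like this (the entry name and content are assumptions):
# hypothetical example: manage one sudoers.d entry instead of all of /etc/sudoers
sudo::conf { 'web':
  priority => 10,
  content  => '%web ALL=(ALL) NOPASSWD: ALL',
}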
Hiera doesn't have anything to do with files.
Hiera is like a variables database, and it serves you values based on the hierarchy you have.
The files inside Puppet are usually accessed with methods like source =>, and these files follow a basic directory structure.
In most cases you call a file or a template.
A template can serve your needs here by automatically building a sudoers file from your data.
There are also modules that support modifying sudoers too.
It is up to you what to do.
In this case, saz stores the location of the file in Hiera, but the real location can be a file inside your Puppet setup (like a module file or something similar).
That part is completely unrelated to Hiera.
Read about the Puppet file server.
If you have questions, just ask.