In what order will facts found in files under a module/lib/facter/* be loaded on a puppet client?

I have the following file structure:
lib
└── facter
    ├── rackspace.rb
    ├── system_load.rb
    └── users.rb
I want to use a custom fact value defined in system_load.rb (let's call this fact :system_me) in another custom fact that I am writing in users.rb, like this:
# users.rb
Facter.add('sogood') do
  you = ''
  me = Facter.value(:system_me)
  if me == 'foo'
    you = 'bar'
  end
  setcode do
    you
  end
end
However, I am worried about what happens if the system_me fact value doesn't exist yet when the client tries to resolve sogood.
So my question is:
Are fact files like the ones in the lib folder structure above loaded in alphabetical order of filename (rackspace.rb -> system_load.rb -> users.rb) when I run puppet apply --facterpath=module/lib/facter?

No, fact files are not necessarily loaded in alphabetical order, but you should not need to worry about fact loading order anyway. If a fact's resolution calls Facter.value() for another fact that has not yet been loaded, Facter will immediately attempt to load the needed fact.
You do need to avoid creating dependency loops among facts, but a custom fact that relies only on built-in facts will never cause such a loop.
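For example, users.rb could defer the lookup until the fact is actually resolved by calling Facter.value inside setcode. This is only a minimal sketch of that pattern, not the question's code:
# users.rb (sketch): defer the lookup until resolution time
Facter.add('sogood') do
  setcode do
    # Facter loads and resolves :system_me on demand if it hasn't been loaded yet
    Facter.value(:system_me) == 'foo' ? 'bar' : ''
  end
end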

Related

Puppet : How to load file from agent - Part 3

Using the functions from my earlier questions (see references below), I am able to pull the file from the agent and perform the necessary tasks. However, this is affecting all the users on the system, as it throws an exception stating that the file is not found. Is there any way I can add some logic like unless file_exists .... to this Ruby function?
My hierarchy is shown below. I do not understand why it affects other users who are not even in "mymodules".
Root
  modules
    mymodules
      lib
        facter
          ruby_function1.rb
          ruby_function2.rb
    modules_by_userxx1
    modules_by_userxx2
    modules_by_userxx3
    ....
Reference:
Puppet : How to load file from agent
Puppet : How to load file from agent - Part 2
As requested by Dominic, adding the reference code:
# module_name/lib/facter/master_hash.rb
require 'json'
Facter.add(:master_hash) do
  setcode do
    # return content of foo as a string
    f = File.read('/path/to/some_file.json')
    master_hash = JSON.parse(f)
    master_hash
  end
end
I'll assume you're talking about the custom fact from the previous answers rather than a Ruby function, in which case, add a File.exist? conditional:
# module_name/lib/facter/master_hash.rb
require 'json'
Facter.add(:master_hash) do
  setcode do
    if File.exist?('/path/to/some_file.json')
      # return content of foo as a string
      f = File.read('/path/to/some_file.json')
      master_hash = JSON.parse(f)
      master_hash
    end
  end
end
Please include the full error message (with the filename/line number it provides) and the source code when asking questions.
Custom facts are shipped within modules, but they are all synced to every agent, because facts are used to discover data.
Facts are synced and run on agents before the node is classified, because they can be used to make classification and catalog decisions, e.g. based on hostname or OS. Because facts run before classification, it isn't yet possible to know which classes (and perhaps modules) are going to be applied, so every fact needs to be safe to run on every node in the environment.
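If the goal is for the fact simply not to exist on nodes that lack the file, another option is to confine the resolution so it is skipped elsewhere. A minimal sketch, assuming a Facter version that supports block-form confine (Facter 2 and later):
# module_name/lib/facter/master_hash.rb (sketch)
require 'json'
Facter.add(:master_hash) do
  # skip this resolution entirely on nodes where the file is absent
  confine { File.exist?('/path/to/some_file.json') }
  setcode do
    JSON.parse(File.read('/path/to/some_file.json'))
  end
end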

Cmake: read variables from another (processed) directory

I am trying to interrogate the inclusion path of a series of add_subdirectory() calls:
-projectA
  -projectB
    -projectC
In this setup, projectA is adding ProjectB as a subdirectory which in turn adds ProjectC as a subdirectory. I want to let projectC know about the hierarchy.
I can achieve this with recursive calls to
get_directory_property(parent DIRECTORY ${cur_dir} PARENT_DIRECTORY)
which will return nothing when it reaches projectA. That almost does it. It would be nice however, if I could read the ${PROJECT_NAME} from these directories and return A-B-C instead of projectA-projectB-projectC.
So my question is: Is there any way of reading the variables from a directory that is already parsed?
Note that although projectC inherits variables from its parent, the standard cache variables are replaced in the child projects, which is why I can't use them.
You can do:
get_directory_property(output DIRECTORY dir/path DEFINITION PROJECT_NAME)
or any normal variable:
get_directory_property(output DIRECTORY dir/path DEFINITION myVariable)

How does Scons compute the build signature?

I keep different versions of one project in different directories. (This does make sense in this project. Sadly.) As there are only minor differences between the versions, I hope I can speed all builds after the first one by using a common cache directory for all builds.
Unfortunately I had to realise that, when building an object file from the same sources in different directories, SCons 2.3.3 stores the result in different locations in the cache. (The location is equal to the build signature, I assume.) The same sources are recompiled for each and every directory. So why does SCons determine different build signatures, even though
the compile commands are identical and
the sources and the include files are the same (identical output of the preprocessor phase, gcc -E ...)?
I'm using the decider "MD5-timestamp".
Even the resulting object files are identical!
For a trivial example (helloworld from the SCons documentation) re-using the cache works. Though in the big project I'm working on, it does not. Maybe the "SCons build environment" influences the build signature, even if it does not have any effect on the compile command?
Are there any debug options that could help besides --cache-debug=-? Which method of SCons determines the build signature?
The folders look somewhat like this:
<basedir1>/
    SConstruct
    src/something.cpp …
    include/header.hpp …
<basedir2>/
    SConstruct
    src/something.cpp …
    include/header.hpp …
/SharedCache/
    0/ 1/ 2/ … F/
I check out the project in both basedir1 and basedir2 and call scons --build-cache-dir=/SharedCache in both of them. (EDIT: --build-cache-dir is a custom option, implemented in the SConstruct file of this project. It maps to env.CacheDir('/SharedCache').)
EDIT2: Before I realized this problem, I did some tests to evaluate the effects of using --cache-implicit or SCons 2.4.0.
This is the code of the method get_cachedir_bsig() from the file src/engine/SCons/Node/FS.py:
def get_cachedir_bsig(self):
    """
    Return the signature for a cached file, including
    its children.

    It adds the path of the cached file to the cache signature,
    because multiple targets built by the same action will all
    have the same build signature, and we have to differentiate
    them somehow.
    """
    try:
        return self.cachesig
    except AttributeError:
        pass
    # Collect signatures for all children
    children = self.children()
    sigs = [n.get_cachedir_csig() for n in children]
    # Append this node's signature...
    sigs.append(self.get_contents_sig())
    # ...and it's path
    sigs.append(self.get_internal_path())
    # Merge this all into a single signature
    result = self.cachesig = SCons.Util.MD5collect(sigs)
    return result
It shows how the path of the cached file is included into the "cache build signature", which explains the behaviour you see. For the sake of completeness, here is also the code of the get_cachedir_csig() method from the same FS.py file:
def get_cachedir_csig(self):
    """
    Fetch a Node's content signature for purposes of computing
    another Node's cachesig.

    This is a wrapper around the normal get_csig() method that handles
    the somewhat obscure case of using CacheDir with the -n option.
    Any files that don't exist would normally be "built" by fetching
    them from the cache, but the normal get_csig() method will try
    to open up the local file, which doesn't exist because the -n
    option meant we didn't actually pull the file from cachedir.
    But since the file *does* actually exist in the cachedir, we
    can use its contents for the csig.
    """
    try:
        return self.cachedir_csig
    except AttributeError:
        pass
    cachedir, cachefile = self.get_build_env().get_CacheDir().cachepath(self)
    if not self.exists() and cachefile and os.path.exists(cachefile):
        self.cachedir_csig = SCons.Util.MD5filesignature(cachefile, \
            SCons.Node.FS.File.md5_chunksize * 1024)
    else:
        self.cachedir_csig = self.get_csig()
    return self.cachedir_csig
where the cache paths of the children are hashed into the final build signature.
EDIT: The "cache build signature", as computed above, is then used to build the "cache path". Like this, all files/targets can get mapped to a unique "cache path" by which they can get referenced and found in (= retrieved from) the cache. As the comments above explain, the relative path of each file (starting from the top-level folder of your SConstruct) is a part of this "cache path". So, if you have the same source/target (foo.c -> foo.obj) in different directories, they will have different "cache paths" and get built independently of each other.
If you truly want to share sources between different projects (note that the CacheDir functionality is more intended for sharing the same sources between different developers), you may want to have a look at the Repository() method. It lets you mount (blend in) another source tree into your current project...

How can I delay evaluation of #import rules with the LESS parser?

I'm trying to put together a resource build system which will load LESS files from a set of directories:
Common
├─┬ sub
│ ├── A
│ └── B
├── C
└── ...
Each bottom-level directory will have an entry point, index.less. The index file will include #import statements, for example #import "colors.less";.
What I would like to happen is:
If the imported file exists in the current directory, use it.
If the file does not exist, use the file of the same name in the parent directory, recursively to the root.
So when parsing /Common/sub/A/index.less, look for colors.less in A, then in sub, then Common.
I've already developed the first half of a two-stage build process:
1. Scan the entire directory structure and load all files into an object:
common = {
  files: { "colors.less": "/* LESS file contents */", ... },
  sub: {
    files: { ... },
    A: { files: { "index.less": "#import 'colors.less';", ... } },
    B: { files: { "index.less": "#import 'colors.less';", ... } }
  },
  C: { files: { "index.less": "#import 'colors.less';", ... } }
}
2. Build the resulting CSS file for each bottom-level directory.
Phase two is where I've run into some issues. First, I create a parser.
var parser = new less.Parser({
  filename: 'index.less'
});
Then parse the file:
parser.parse(common.sub.A.files['index.less'], function(e, tree) {
  // `tree` is the AST
});
This gets us an abstract syntax tree (AST) delivered to the callback. The problem is that the LESS parser resolves all #import statements it finds with its own file importer, and merges the imported file into the current AST.
To get around this, I am currently overloading the importer to rewrite the path:
// before starting anything:
var importer = less.Parser.importer;
less.Parser.importer = function(path, currentFileInfo, callback, env) {
  var newPath;
  // here, use the object from phase 1 to resolve the path of the nearest file
  // matching `path` (which is really just a filename), and set `newPath`
  importer(newPath, currentFileInfo, callback, env);
};
However, the LESS importer still reads the file from disk. This is bad from a performance perspective since (A) we already have the file's contents in memory and (B) there are a large number of bottom-level directories, so we're forced to re-load and re-parse the same common files multiple times.
What I'd like to do is parse every LESS file in phase one, and then merge the ASTs as necessary during phase two.
In order to do this, I need to prevent LESS from evaluating #import nodes during parsing. Then in phase 2, I can manually find the #import nodes in the AST and merge in the already-parsed ASTs (recursively, since they can include their own #imports).
An interesting problem.
Without delving too deep into the implementation of the Less parser (which I assume you want to avoid as much as I do at this moment), why not simply add a pre-parsing step: read all the .less files and comment out the imports?
So, right before running a less file through the parser, you can read it yourself and write to the file, commenting out the #import lines, and taking note of them as you do. Then run it through the parser and you get an AST to which you can attach the import information you grabbed earlier.
Now you can knit together all the AST's in whatever fashion you were already planning to.
Make sure to keep the less files in the state you found them in. Either un-comment those lines afterwards, or, a slightly better method, copy each file you want to work on, comment out the imports in the copy, parse it, then delete the copy. This way you wouldn't have to worry about polluting the original files.
Seems like a quick way of circumventing the issue, unless you would rather do something like telling the less Parser itself to ignore #import tokens. (Just taking a random stab here, but perhaps if you edit the "import" function in lib/less/parser.js on line 1252 to return new(tree.Comment)(dir); it might just parse every #import token as a comment.)

Puppet how to run all manifests in directory

So I have a directory of puppet manifests that I want to run.
Is it possible to do something like:
include /etc/puppet/users/server522/*.pp
and have puppet run them?
I've tried
include users::server522::*
and several other variations
I always get an error about puppet being unable to find it.
Is there any way to do this?
So my final solution to this was to write a script that takes the directory listing and, for each .pp file, adds an include line to the server522.pp file. Quite annoying that puppet won't include an entire directory.
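A minimal sketch of such a generator, assuming the hypothetical target path below for server522.pp; it lists the .pp files and appends a matching include line for each one:
# generate_includes.rb (sketch, hypothetical paths)
manifest_dir = '/etc/puppet/users/server522'

lines = Dir.glob(File.join(manifest_dir, '*.pp')).sort.map do |path|
  "include users::server522::#{File.basename(path, '.pp')}\n"
end

# append the generated include statements to server522.pp
File.open('/etc/puppet/manifests/server522.pp', 'a') { |f| f.write(lines.join) }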
What are you trying to do here, and are you sure you're doing it the correct way? To wit, if you have multiple manifests corresponding to multiple servers, you need to define the nodes for each server. If OTOH you're trying to apply multiple manifests to a single node it's not clear why you would be doing that, instead of just using your defined classes. A little more information would be helpful here.
I do not see the point of each user having their own manifest. I would rather create a script that automatically builds one manifest file based on data from some source, for instance from the HEAD of a git repository containing a CSV file with the current list of users.
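A minimal sketch of what such a generator could look like, assuming a hypothetical users.csv with name and uid columns; the file names and the user resource it emits are purely illustrative:
# build_users_manifest.rb (sketch, hypothetical CSV layout and paths)
require 'csv'

entries = CSV.read('users.csv', headers: true).map do |row|
  # emit one puppet user resource per CSV row
  "user { '#{row['name']}': ensure => present, uid => '#{row['uid']}', }"
end

# write one combined manifest built from the current list of users
File.write('users.pp', entries.join("\n") + "\n")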
If you really want to use a separate manifest file for every user, you may consider having a separate module for every user:
manifests/
    default.pp           <-- here comes default manifest
module_for_user_foo/
    manifests/
        init.pp          <-- here comes your 'foo' user
module_for_user_bar/
    manifests/
        init.pp          <-- here comes your 'bar' user
Now you may copy modules containing manifests.
