Puppet Execution Flow

I have this node.pp and I am wondering how puppet is going to execute it.
node 'agent.puppet.demo' {
  include ssh
  include postfix
  include mysql
  include apache
}
On the agent node, when I run this:
$ puppetd -t -d
Puppet is not executing it sequentially; that is, it does not apply ssh first, then postfix, and so on.
Does anyone know why that is? Is it because Puppet is a 'declarative language', where the order of execution does not really matter?
If that is the case, can I simply declare what I want and let Puppet figure out how to execute it?

Disclaimer: I am one of the developers of Puppet.
It will execute it in a consistent but unpredictable order, with the exception of any explicit or implicit dependencies in the code. Explicit dependencies are things that you specify with the subscribe or require metaparameters. Implicit dependencies come from the autorequire feature, which does things like automatically apply file resources in a sensible order.
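For example, a minimal sketch of explicit dependencies; the resource names and file paths here are illustrative, not taken from the question's modules:
package { 'openssh-server':
  ensure => installed,
}
file { '/etc/ssh/sshd_config':
  ensure  => file,
  source  => 'puppet:///modules/ssh/sshd_config',
  require => Package['openssh-server'],   # explicit: install the package first
}
service { 'sshd':
  ensure    => running,
  subscribe => File['/etc/ssh/sshd_config'],   # restart when the file changes
}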
The reason for this isn't so much that the language is declarative, but rather the language is declarative because order doesn't matter for most things in the underlying problem space.
For example, there really isn't much connection between managing ssh and managing postfix for most people - you could do the work in either order, or even at the same time, and everything would work out the same.
That frees us up to improve things in a whole lot of ways that "everything is in linear order" doesn't. We are working, for example, to batch up package installs while still respecting the explicit dependencies outside packages.
So, the order of execution and dependencies follows the underlying problem, and we have preserved that property to be able to do more awesome things.
The goal is exactly what you say at the end: that you declare what you want, and we take care of all the details of getting it there. In time we hope to be much smarter about logical dependencies, so you have to say even less to get that, too.

Disclaimer: I am still pretty new to puppet :)
The key is to think of everything in terms of dependencies. For class dependencies, I like to use the Class['a'] -> Class['b'] syntax. Say you have a tomcat class that requires a jdk class which downloads and installs the Sun JDK from Oracle. In your tomcat class, you can specify this with
Class['jdk'] -> Class['tomcat']
Alternatively, you can declare the class with a require metaparameter rather than using include.
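A rough sketch of that form, using the same class names and assuming the classes are declared resource-style:
class { 'jdk': }
class { 'tomcat':
  require => Class['jdk'],   # jdk is applied before tomcat
}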

Related

How can I protect my script from being copied and modified?

I created an expect script for a customer, and I am afraid he will customize it however he wants without coming back to me, so I tried to encrypt it but could not find a way to do so.
Then I tried to convert it to an executable, but some commands, like "send", caused problems with ActiveTcl, even though the script works perfectly on Red Hat.
So, is there a way to protect my script from being read?
Thanks
It's usually enough to just package the code in a form that the user can't directly look inside. Even the smallest of speed bumps stops them.
You can use sdx qwrap to parcel your script up into a starkit. Those are reasonably resistant to random user poking, while being still technically open (the sdx tool is freely available, after all). You can convert the .kit file it creates into an executable by merging it with a packaged runtime.
In short, it's basically like this (with some complexity glossed over):
tclkit sdx.kit qwrap myapp.tcl
tclkit sdx.kit unwrap myapp.kit
# Copy additional assets into myapp.vfs if you need to
tclkit sdx.kit wrap myapp.exe -runtime C:\path\to\tclkit.exe
More discussion is here, the tclkit runtimes are here, and sdx itself can be obtained in .kit-packaged form here. Note that the runtime you use to run sdx does not need to be the same that you package; you can deploy code for other platforms than the one you are running from. This is a packaging phase action, not a compilation or linking.
Against more sophisticated users (i.e., not Joe Ordinary User) you'll want the Tcl Compiler out of the ActiveState TclDevKit. It's a code-obscurer formally (it doesn't actually improve the performance of anything) and the TDK isn't particularly well supported any more, but it's the main current solution for commercial protection of Tcl code. I'm on a small team working on a true compiler that will effectively offer much stronger protection, but that's not yet released (and really isn't ready yet).
One way is to keep the essential code running on your server as a back end, and give the user only a front-end application that makes requests to it. That way the essential processes stay under your control, and the user cannot access that code.

Languages with a NodeJS/CommonJS style module system

I really like the way NodeJS (and its browser-side counterparts) handle modules:
var $ = require('jquery');
var config = require('./config.json');
module.exports = function(){};
module.exports = {...}
I am actually rather disappointed by the ES2015 'import' spec which is very similar to the majority of languages.
Out of curiosity, I decided to look for other languages which implement or even support a similar export/import style, but to no avail.
Perhaps I'm missing something or, more likely, my Google-fu isn't up to scratch, but it would be really interesting to see which other languages work in a similar way.
Has anyone come across similar systems?
Or maybe someone can even provide reasons that it isn't used all that often.
It is nearly impossible to compare these features properly; one can only compare their implementations in specific languages. My experience comes mostly from Java and Node.js.
I observed these differences:
You can use require for more than just making other modules available to your module. For example, you can use it to parse a JSON file (see the sketch after this list).
You can use require everywhere in your code, while import is only available at the top of a file.
require actually executes the required module (if it was not yet executed), while import has a more declarative nature. This might not be true for all languages, but it is a tendency.
require can load private dependencies from sub directories, while import often uses one global namespace for all the code. Again, this is also not true in general, but merely a tendency.
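A small sketch of the first two points; the config file name is made up:
// require is an ordinary function call: it can appear anywhere in the code
// and can load JSON directly.
function loadConfig(env) {
  return require('./config.' + env + '.json');
}

// import, in contrast, is static and must sit at the top of the file:
// import config from './config.js';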
Responsibilities
As you can see, the require method has multiple responsibilities: declaring module dependencies and reading data. This is better separated in the import approach, since import is supposed to handle only module dependencies. I guess what you like about using require to read JSON is that it gives the programmer a really easy interface. I agree that such an easy JSON-reading interface is nice, but there is no need to mix it with the module dependency mechanism. There could simply be another method, for example readJson(). That would separate the concerns, so require would only be needed for declaring module dependencies.
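A minimal sketch of that separation; readJson here is a hypothetical helper, not a Node built-in:
var fs = require('fs');

// Reads and parses a JSON file without involving the module system.
function readJson(path) {
  return JSON.parse(fs.readFileSync(path, 'utf8'));
}

var config = readJson('./config.json');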
Location in the Code
Now that we only use require for module dependencies, it is bad practice to use it anywhere other than at the top of the module. Using it all over your code just makes the module dependencies hard to see. This is why the import statement is only allowed at the top of your code.
I don't see the point where import creates a global variable. It merely creates a consistent identifier for each dependency, which is limited to the current file. As I said above, I recommend doing the same with the require method by using it only at the top of the file. It really helps to increase the readability of the code.
How it works
Executing code when loading a module can also be a problem, especially in big programs. You might run into a loop where one module transitively requires itself, which can be really hard to resolve. To my knowledge, Node.js handles this situation as follows. When A requires B, B requires A, and you start by requiring A, then (as sketched after this list):
the module system remembers that it is currently loading A
it executes the code in A
it remembers that it is currently loading B
it executes the code in B
it tries to load A, but A is already loading
A has not yet finished loading
so it returns the half-loaded A to B
B does not expect A to be half loaded
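A minimal sketch of that situation; the file names are illustrative:
// a.js
exports.done = false;
var b = require('./b');   // b receives the half-loaded exports of a here
exports.done = true;

// b.js
var a = require('./a');
console.log(a.done);      // false: a has not finished loading yet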
This might be a problem. Now, one can argue that cyclic dependencies should really be avoided and I agree with this. However, cyclic dependencies should only be avoided between separate components of a program. Classes in a component often have cyclic dependencies. Now, the module system can be used for both abstraction layers: Classes and Components. This might be an issue.
Next, the require approach often leads to singleton modules, which cannot be used multiple times in the same program because they store global state. This is not really the fault of the system, though, but of the programmer who uses it in the wrong way. Still, my observation is that the require approach misleads especially new programmers into doing this.
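For example, a module like the following sketch is effectively a singleton, because Node caches the evaluated module and every caller shares its state (the module itself is made up):
// counter.js
var count = 0;
module.exports = {
  increment: function () { return ++count; }   // state shared by all requirers
};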
Dependency Management
The dependency management that underlies the different approaches is indeed an interesting point. Java, for example, still lacks a proper module system in its current version. Again, one is announced for the next version, but who knows whether that will ever come true. Currently, you can only get modules using OSGi, which is far from easy to use.
The dependency management underlying Node.js is very powerful. It is not perfect either, however. For example, non-private dependencies, i.e. dependencies that are exposed via a module's API, are always a problem. But that is a common problem for dependency management in general, so it is not limited to Node.js.
Conclusion
I guess both are not that bad, since each is used successfully. However, in my opinion, import has some objective advantages over require, like the separation of responsibilities. It follows that import can be restricted to the top of the code, which means there is only one place to search for module dependencies. Also, import might be a better fit for compiled languages, since these do not need to execute code to load code.

Why do modules use classes in puppet (instead of defines)

I've been creating a couple of classes (in different modules) for puppet. Both, separately, require maven. So both classes have something like the following:
class { "maven::maven":
version => "3.0.5"
}
(using the https://forge.puppetlabs.com/maestrodev/maven module from puppet forge)
But, if I have one node that has both of my classes, puppet complains because class 'maven::maven' is declared twice. I feel like each of my classes should be free to declare all of the things it needs. If a node has more than one class both of which require maven, then I don't see the problem.
So, my question is: was the author of that maven module wrong to use a class? Should he have used a define instead, since a define can be declared multiple times? It appears that if he had used a define I would be able to have that block of code as many times as I like; and if he was right to use a class, why?
Thanks.
I think the rationale behind this is best explained in John Arundel's Puppet 3 Beginner's Guide:
So if you're wondering which to use, consider:
Will you need to have multiple instances of this on the same node (for example, a website)? If so, use a definition.
Could this cause conflicts with other instances of the same thing on this node (for example, a web server)? If so, use a class.
If you're passing in parameters to a class, there is a possibility of a conflict, the problem being that it is not clear which sets of parameters will get used.
If your first module required maven with 3.0.5 but your second module required maven with 3.0.6, puppet would not know which one to use.
The maven module, by making this a class rather than a defined type, does not handle that resolution well, because it was probably intended for a single install.
Puppet currently only supports re-using class declarations without parameters, i.e. include maven::maven.
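As a sketch of the difference (module and type names are made up, and the snippets are independent of each other):
# include can safely be repeated from several of your classes:
include maven::maven
include maven::maven

# A resource-like declaration with parameters may appear only once per catalog:
class { 'maven::maven': version => '3.0.5' }

# A defined type, by contrast, can be declared once per unique title:
define mymodule::maven_install ($version) {
  # ... resources that install the requested version ...
}
mymodule::maven_install { 'module_a': version => '3.0.5' }
mymodule::maven_install { 'module_b': version => '3.0.6' }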
Finally, declaring dependencies on other modules in your own module is a tricky thing, that I do not know how to fully resolve yet.

Where do QuickCheck instances belong in a cabal package?

I have a cabal package that exports a type NBT which might be useful for other developers. I've gone through the trouble of defining an Arbitrary instance for my type, and it would be a shame to not offer it to other developers for testing their code that integrates my work.
However, I want to avoid situations where my instance might get in the way. Perhaps the other developer has a different idea for what the Arbitrary instance should be. Perhaps my package's dependency on a particular version of QuickCheck might interfere with or be unwanted in the dependencies of the client project.
My ideas, in no particular order, are:
Leave the Arbitrary instance next to the definition of the type, and let clients deal with shadowing the instance or overriding the QuickCheck version number.
Make the Arbitrary instance an orphan instance in a separate module within the same package, say Data.NBT.Arbitrary. The dependency on QuickCheck for the overall package remains.
Offer the Arbitrary instance in a totally separate package, so that it can be listed as a separate test dependency for client projects.
Conditionally include both the Arbitrary instance and the QuickCheck dependency in the main package, but only if a flag like -ftest is set.
I've seen combinations of all of these used in other libraries, but haven't found any consensus on which works best. I want to try and get it right before uploading to Hackage.
On the basis of not much specific experience, but a general desire for robustness, the guiding principle for package dependencies should perhaps be
From each according to their ability; to each according to their need.
It's good to keep the dependencies of a package to the minimum needed for its essential functionality. That suggests option 3 or option 4 to me. Of course, it's a pain to chop the package up so much. If flags are capable of expressing the conditionality involved, then option 4 sounds like a sensible approach, based on using language effectively to say what you mean.
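A rough sketch of what that flag might look like in the .cabal file; the flag name, module names, and version bounds are illustrative, not taken from the actual package:
flag test
  description: Also build the Arbitrary instance for NBT
  default:     False
  manual:      True

library
  exposed-modules: Data.NBT
  build-depends:   base >= 4 && < 5
  if flag(test)
    exposed-modules: Data.NBT.Arbitrary
    build-depends:   QuickCheck >= 2.4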
It would be really good if a consensus emerged about which one switch we need to flick to get the testing kit as well as the basic functionality.
It's also clear that there's room for refinement here. It's amazing that Cabal works as well as it does, but it could allow for more sophisticated notions of "package", perhaps after the manner of the SML module system. Translating dependencies into function types, we basically get to write
simplePackage :: (Dependency1, ..., DependencyN) -> Deliverable
but one could imagine more elaborate combinations of products and functions, like
fancyPackage :: BasicDependency -> (BasicDeliverable, HelpfulExtras -> Gravy)
Until then, pick the option that most accurately reflects the actual deal. And tell us about it, so we can build that consensus.
The problem comes down to: how likely is it that someone using your library will be wanting to run QuickCheck tests using your NBT type?
If it is likely, and the Arbitrary instance is detailed (and thus not likely to change for different people), it would probably be best to ship it with your package, especially if you're going to make sure you keep updating the package (as for using a flag or not, that comes down to a bit of personal preference). If the instance is relatively simple however (and thus more likely that people would want to customise it), then it might be an idea to just provide a sample instance in the documentation.
If the type is primarily internal in nature and not likely to be used by others wanting to run tests, then using a flag to conditionally bring in QuickCheck is probably the best way to go to avoid unnecessary dependencies (i.e. the test suite is there just so you can test the package).
I'm not a fan of having QuickCheck-only packages in general, though it might be useful in some situations.

nodejs - what to use instead of require.paths?

Recent node docs say that modifying require.paths is bad practice. What should I do instead?
I believe the concern is that require.paths can be modified repeatedly at run time, rather than just set once. That can obviously be confusing and causes some quite bizarre bugs. Also, if individual packages modify the path, the changes apply globally, which is really bad and goes against the modular nature of Node.
If you have several library paths of your own, the best solution is to set the NODE_PATH environment variable before launching node. Node then picks this up when it's launched and applies it automatically.
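For example (the paths are illustrative):
$ export NODE_PATH=/home/me/lib:/opt/shared/node_libs
$ node app.js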
I keep the related models in the same directory or a subdirectory and load them using:
var x = require('./mod/x');
If it's an external module, I install it using npm, which puts the module under node_modules where require can find it.
I've never changed require.paths.
Have a look at https://github.com/patrick-steele-idem/app-module-path-node; it lets you add a directory to the require search path at the top level, without influencing the paths used by sub-modules.
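Roughly, usage looks like this; the lib directory is an assumption:
// In the application's entry script, before other require calls:
require('app-module-path').addPath(__dirname + '/lib');
var z = require('x/y/z');   // now also resolved from ./lib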
Unless I'm making a mistake in my understanding, the primary limitation of the current system is that, for namespacing, you cannot use folders for non-hierarchical dependencies.
What that means in practice...
Consider that you have x/y/z as well as a/b and a/b/c. If both a/b and a/b/c depend on x/y/z, you end up having to either specify that relatively (require('../../x/y/z') and require('../../../x/y/z') respectively) or make every single package a node_module. Failing that, you can probably do horrific things with symlinks or similar.
As far as I can tell, the only alternative is, rather than using folders to namespace and organise, to use filenames such as:
a.b.js
a.b.c.js
x.y.z.js
