Difference between "::mysql::server" and "mysql::server" - puppet

I was going through some old Puppet code. It was using the mysql Puppet module to install mysql-server.
I came across this
class { '::mysql::server':
}
and this
class { 'mysql::server':
}
Now I'm confused. Do they both mean the same thing, or is there any difference between the two?

This is a really good question. The short answer is that they are the same, and that :: isn't needed for class names.
I'd always assumed the initial :: was needed to avoid scope ambiguity (where include bar in class foo would include ::foo::bar rather than ::bar) but checking the docs, they say that, for example, include must use the class's full name.
A working example:
$ cat scope.pp
class foo {
  class bar {
    notice("foo::bar")
  }
  class { 'bar':
  }
}

class bar {
  notice("bar")
}

class { 'foo':
}
$ puppet apply scope.pp
Notice: Scope(Class[Bar]): bar
I'd note that while this is true for class scope, it certainly isn't true for variable scope in Puppet, as below.
$ cat var_scope.pp
$bar = "bar"
class foo {
  $bar = "foo::bar"
  notice($::bar)
  notice($bar)
}
include foo
notice($bar)
$ puppet apply var_scope.pp
Notice: Scope(Class[Foo]): bar
Notice: Scope(Class[Foo]): foo::bar
Notice: Scope(Class[main]): bar

Do they both mean the same thing or there's any difference between the two?
TL;DR: They mean the same thing for classes and defined types. That the form with the leading :: is supported can be viewed as either a backwards-compatibility feature, an internal consistency feature, or both. For variables, however, the leading :: indicates a top-scope variable, which might or might not be what you get if you use the bare variable name.
To clarify some details of the fine answer that Jon already presented, we have to consider the behavior of Puppet version 3 and earlier. This is no longer documented on Puppet's main documentation site, but we can find the relevant docs in Puppet's online archive of obsolete documentation. Specifically, we want to look at Puppet namespaces and their behavior. The docs are an interesting read if you're into that sort of thing, especially the historical perspective on how Puppet 3 ended up where it was, but here's a somewhat whimsical version of events:
In the beginning, the Forge was formless and void, and there were no modules. Everyone wrote their own code for everything, and the devops faithful were sorely oppressed, reinventing many wheels.
In those days, the idea of modules was conceived. The modules of that day were built using the features then abroad in the land, such as the import function, which has since departed. But with code sharing came name collisions, and to that, the people responded with namespacing. And Reductive Labs looked on namespacing and saw that it was good.
But not everything was clear to Reductive or the people, and in ignorance, Reductive brought forth relative name resolution. And relative name resolution scrutinized names and namespaces very deeply, attempting to resolve even qualified names relative to every namespace in scope. Some of the people rejoiced at the convenience, but the wise among them soon grew troubled. It became clear to them that relative name resolution looked too deeply and saw too much. It sometimes saw things it was not meant to see, and opened paths for the faithful to fall into error.
So the wise intervened. They proclaimed that relative namespacing should be shackled and overcome, tamed by feeding it only names anchored to the one true anonymous namespace, that which existed before anything else in Puppet. And the form of the shackles was the leading double colon, ::. And although relative name resolution often performed the same work unshackled, many heeded the wise, and were praised for it.
And Reductive, then naming itself Puppet Labs, regretted creating relative name resolution and urged the people to follow the counsel of the wise. But when it brought forth the Third Age of Puppet, it could not bring itself to trouble those among the people who paid no attention, so it allowed relative name resolution to live.
But at the dawn of the Fourth Age, Puppet, no longer labs, found the courage to at last slay relative name resolution, and it was no more. Since that day, Puppet no longer urges the leading double colon on purveyors and users of classes and types, yet it honors the legacy of past wisdom, and has pity on those who are slow to unlearn it.
Yet Puppet, in its merciful beneficence, has chosen class variables from among all named things as the only ones to bear two names. They have scopes that transcend namespace, and in their scopes they may be known by the simple names of their definitions. Like all named things, however, they can be known anywhere by their namespaced names, formed from their simple names and their class names, in either form. But what, then, of the variables of the top scope? By what name can they be known when they are hidden in the shadows? Here the leading double colon yet serves. Its mark of the top scope is not redundant for variables, and some among the wise make their code clear by using it always for such variables.
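To ground the whimsy in code, here is a minimal sketch (the class and variable names are placeholders) of a class variable known by its simple name inside its class's own scope, and by its namespaced name everywhere else, with or without the leading double colon:

class foo {
  $bar = "defined in foo"
  notice($bar)          # simple name, valid inside the class's own scope
}
include foo
notice($foo::bar)       # namespaced name, valid anywhere once foo is declared
notice($::foo::bar)     # same variable; the leading :: is optional here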

Related

What's the difference between 'my' and 'our' in Raku?

I've read the spec but I'm still confused about how my class differs from our class. What are the differences, and when should I use which?
The my scope declarator implies lexical scoping: following its declaration, the symbol is visible to the code within the current set of curly braces. We thus tend to call the region within a pair of curly braces a "lexical scope". For example:
sub foo($p) {
    # say $var; # Would be a compile time error, it's not declared yet
    my $var = 1;
    if $p {
        $var += 41; # Inner scope, $var is visible
    }
    return $var; # Same scope that it was declared in, $var is visible
}
# say $var; # $var is no longer available, the scope ended
Since the variable's visibility is directly associated with its location in the code, lexical scope is really helpful in being able to reason about programs. This is true for:
The programmer (both for their own reasoning about the program, but also because more errors can be detected and reported when things have lexical scope)
The compiler (lexical scoping permits easier and better optimization)
Tools such as IDEs (analyzing and reasoning about things with lexical scope is vastly more tractable)
Early on in the design process of the language that would become Raku, subroutines did not default to having lexical scope (and had our scope like in Perl); however, it was realized that lexical scope is a better default. Making subroutine calls always try to resolve a symbol with lexical scope meant it was possible to report undeclared subroutines at compile time. Furthermore, the set of symbols in lexical scope is fixed at compile time, and in the case of declarative constructs like subroutines, the routine is bound to that symbol in a readonly manner. This also allows things like compile-time resolution of multiple dispatch, compile-time argument checking, and so forth. It is likely that future versions of the Raku language will specify an increasing number of compile-time checks on lexically scoped program elements.
So if lexical scoping is so good, why does our (also known as package) scope exist? In short, because:
Sometimes we want to share things more widely than within a given lexical scope. We could just declare everything lexical and then mark things we want to share with is export, but..
Once we get to the point of using a lot of different libraries, having everything try to export things into the single lexical scope of the consumer would likely lead to a lot of conflicts
Packages allow namespacing of symbols. For example, if I want to use the Cro clients for both HTTP and WebSockets in the same code, I can happily use both, and refer to them as Cro::HTTP::Client and Cro::WebSocket::Client respectively.
Packages are introduced by package declarators, such as class, module, grammar, and (with caveats) role. An our declaration will make an installation in the enclosing package construct.
These packages ultimately exist within a top-level package named GLOBAL - which is fitting, since they are effectively globally visible. If we declare an our-scoped variable, it is thus a global variable (albeit hopefully a namespaced one), about which enough has been written that we know we should pause for thought and wonder if a global variable is the best API decision (because, ultimately, everything that ends up visible via GLOBAL is an API decision).
Where things do get a bit blurry, however, is that we can have lexical packages. These are packages that do not get installed in GLOBAL. I find these extremely useful when doing OO programming. For example, I might have:
# This class ends up in GLOBAL...
class Cro::HTTP::Client {
    # Lexically scoped classes, which are marked `my` and thus hidden
    # implementation details. This means I can refactor them however I
    # want, and never have to worry about downstream fallout!
    my class HTTP1Pipeline {
        # Implementation...
    }
    my class HTTP2Pipeline {
        # Implementation...
    }
    # Implementation...
}
Lexical packages can also be nested and contain our-scoped variables; however, they don't end up being globally visible (unless we somehow choose to leak them out).
Different Raku program elements have been ascribed a default scope:
Subroutines default to lexical (my) scope
Methods default to has scope (only visible through a method dispatch)
Type (class, role, grammar, subset) and module declarations default to package (our) scope
Constants and enumerations default to package (our) scope
Effectively, things that are most often there to be shared default to package scope, and the rest do not. (Variables do force us to pick a scope explicitly, however the most common choice is also the shortest one to type.)
Personally, I'm hesitant to make a thing more visible than the language defaults, however I'll often make them less visible (for example, my on constants that are for internal use, and on classes that I'm using to structure implementation details). When I could do something by exposing an our-scoped variable in a globally visible package, I'll still often prefer to make it my-scoped and provide a sub (exported) or method (visible by virtue of being on a package-scoped class) to control access to it, to buy myself some flexibility in the future. I figure it's OK to make wrong choices now if I've given myself space to make them righter in the future without inconveniencing anyone. :-)
In summary:
Use my scope for everything that's an implementation detail
Also use my scope for things that you plan to export, but remember exporting puts symbols into the single lexical scope of the consumer and risks name clashes, so be thoughtful about exporting particularly generic names
Use our for things that are there to be shared, and when it's desirable to use namespacing to avoid clashes
The elements we'd most want to share default to our scope anyway, so explicitly writing our should give pause for thought
As with variables, my binds a name lexically, whereas our additionally creates an entry in the surrounding package.
module M {
    our class Foo {}
    class Bar {} # same as above, really
    my class Baz {}
}
say M::Foo; # ok
say M::Bar; # still ok
say M::Baz; # BOOM!
Use my for classes internal to your module. You can of course still make such local symbols available to importing code by marking them is export.
The my vs our distinction is mainly relevant when generating the symbol table. For example:
my $a;            # Create symbol <$a> at top level
package Foo {     # Create symbol <Foo> at top level
    my $b;        # Create symbol <$b> in Foo scope
    our $c;       # Create symbol <$c> in Foo scope
}                 # and <Foo::<$c>> at top level
In practice this means that anything that is our scoped is readily shared to the outside world by prefixing the package identifier ($Foo::c or Foo::<$c> are synonymous), and anything that is my scoped is not readily available — although you can certainly provide access to it via, e.g., getter subs.
Most of the time you'll want to use my. Most variables just belong to their current scope, and no one has any business peeking in. But our can be useful in some cases:
constants that don't poison the symbol table (this is why, actually, using constant implies an our scope). So you can make a more C-style enum/constants by using package Colors { constant red = 1; constant blue = 2; } and then referencing them as Colors::red
classes or subs that should be accessible but needn't be exported (or shouldn't be, because of overlapping symbols with builtins or other modules). Exporting symbols can be great, but sometimes it's also nice to have the package/module namespace to remind you what the stuff goes with. As such, it's also a nice way to manage options at runtime via subs: CoolModule::set-preferences( ... ). (Although dynamic variables can be used to nice effect here as well.)
I'm sure others will comment with other times the our scope is useful, but these are the ones from my own experience.

Puppet Include vs Class and Best Practices

When should I be using an include vs a class declaration? I am exploring creating a profile module right now, but am struggling with methodology and how I should lay things out.
A little background, I'm using the puppet-labs java module which can be found here.
My ./modules/profile/manifests/init.pp looks like this:
class profile {
  ## Hiera Lookups
  $java_version = hiera('profile::jdk::package')

  class {'java':
    package => $java_version,
  }
}
This works fine, but I know that I can also remove the class {'java': ...} block and instead use include java. My question relates to two things. One, if I wanted to use an include statement for whatever reason, how could I still pass the package version from Hiera to it? Second, is there a preferred method of doing this? Is include something I really shouldn't be using, or are there advantages and disadvantages to each method?
My long-term goal is to build out profile-like modules for my environment. Likely I would have a default profile that applies to all of my servers, and then profiles for different application loadouts. I could include the profiles into a role and apply things to my individual nodes at that level. Does this make sense?
Thanks!
When should I be using an include vs a class declaration?
Where a class declares another, internal-only class that belongs to the same module, you can consider using a resource-like class declaration. That leverages your knowledge of the implementation details of the module, as you need to be able to prove that no other declaration of the class in question will be evaluated before the resource-like one. If ever that constraint is violated, catalog building will fail.
Under all other circumstances, you should use include or one of its siblings, require and contain.
One, if I wanted to use an include statement for whatever reason, how could I still pass the package version from hiera to it?
Exactly the same way you would specify any other class parameter via Hiera. I already answered that for you.
Second, is there a preferred method of doing this?
Yes, see above.
Is the include something I really shouldn't be using, or are there advantages and disadvantages to each method?
The include is what you should be using. This is your default, with require and contain as alternatives for certain situations. The resource-like declaration syntax seemed good to the Puppet team when they first introduced it, in Puppet 2.6, along with parameterized classes themselves. But it turns out that that syntax introduced deep design problems into the language, and it has been a source of numerous bugs and headaches. Automatic data binding was introduced in Puppet 3 in part to address many of those, allowing you to assign values to class parameters without using resource-like declarations.
The resource-like syntax has the single advantage -- if you want to consider it one -- that the parameter values are expressed directly in the manifest. Conventional Puppet wisdom holds that it is better to separate data from code, however, so as to avoid needing to modify manifests as configuration requirements change. Thus, expressing parameter values directly in the manifest is a good idea only if you are confident that they will never change. The most significant category of such cases is when a class has read data from an external source (i.e. looked it up via Hiera), and wants to pass those values on to another class.
The resource-like syntax has the great disadvantage that if a resource-like declaration of a given class is evaluated anywhere during the construction of a catalog for a given target node, then it must be the first declaration of that class that is evaluated. In contrast, any number of include-like declarations of the same class can be evaluated, whether instead of or in addition to a resource-like declaration.
Classes are singletons, so multiple declarations have no more effect on the target node than a single declaration. Allowing them is extremely convenient. Evaluation order of Puppet manifests is notoriously hard to predict, however, so if there is a resource-like declaration of a given class somewhere in the manifest set, it is very difficult in the general case to ensure that it is the first declaration of that class that is evaluated. That difficulty can be managed in the special case I described above. This falls into the more general category of evaluation-order dependencies, and you should take care to ensure that your manifest set is free of those.
There are other issues with the resource-like syntax, but none as significant as the evaluation-order dependency.
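As a concrete illustration of that evaluation-order hazard, consider this minimal sketch (the ntp class and its servers parameter are hypothetical stand-ins):

include ntp                      # evaluated first: the class is declared with default parameters
class { 'ntp':                   # fails: Class[Ntp] has already been declared
  servers => ['0.pool.ntp.org'],
}

Reversing the order compiles, because include-like declarations of an already-declared class are harmless no-ops:

class { 'ntp':
  servers => ['0.pool.ntp.org'],
}
include ntp                      # fine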
Clarification with respect to automated data binding
Automated data binding, mentioned above, associates keys identifying class parameters with corresponding values for those parameters. Compound values are supported if the back end supports them, which the default YAML back end in fact does. Your comments on this answer suggest that you do not yet fully appreciate these details, and in particular that you do not recognize the significance of keys identifying (whole) class parameters.
I take your example of a class that could on one hand be declared via this resource-like declaration:
class { 'elasticsearch':
  config => { 'cluster.name' => 'clustername', 'node.name' => 'nodename' }
}
To use an include-like declaration instead, we must provide a value for the class's "config" parameter in the Hiera data. The key for this value will be elasticsearch::config (<fully-qualified classname> :: <parameter name>). The associated value is wanted puppet-side as a hash (a.k.a. "associative array", a.k.a. "map"), so that's how it is specified in the YAML-format Hiera data:
elasticsearch::config:
  "cluster.name": "clustername"
  "node.name": "nodename"
The hash nature of the value would be clearer if there were more than one entry. If you're unfamiliar with YAML, then it would probably be worth your while to at least skim a primer, such as the one at yaml.org.
With that data in place, we can now declare the class in our Puppet manifests simply via
include 'elasticsearch'
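Applied to the java example from your own profile (assuming, as your code suggests, that the java class accepts a package parameter; the version string below is only a placeholder), the Hiera data would be

java::package: "openjdk-8-jdk"

and the manifest reduces to

include java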

Benefit of importing specific parts of a Haskell module

Except from potential name clashes -- which can be got around by other means -- is there any benefit to importing only the parts from a module that you need:
import SomeModule (x, y, z)
...verses just importing all of it, which is terser and easier to maintain:
import SomeModule
Would it make the binary smaller, for instance?
Name clashes and binary size optimization are just two of the benefits you can get. Indeed, it is good practice to always identify what you want to get from the world outside your code. That way, whenever people look at your code they will know exactly what your code is requesting.
This also gives you a very good chance to create mocking solutions for testing, since you can work through the list of imports and write mocks for them.
Unfortunately, in Haskell type class instances are not that easy: they are imported implicitly and so can create conflicts, and they may also make mocking harder, since there is no way to import only specific class instances. Hopefully this can be fixed in future versions of Haskell.
UPDATE
The benefits I listed above (code maintenance and test mocking) are not limited to Haskell. As far as I know, it is also common practice in Java, where you can import just a single class, or even a single static variable/method. Unfortunately again, you still cannot selectively import member functions.
No, it's only for the purpose of preventing name clashes. The other mechanism for preventing name clashes - namely import qualified - results in more verbose (less readable) code.
It wouldn't make the binary smaller - consider that functions in a given module all reference each other, usually, so they need to be compiled together.

How to achieve encapsulation in J?

I'm not an expert on scope in J, so please correct me if I make a mistake. (That, in fact, is part of the reason for this question.)
What I want to do is create a name that is visible within (but not without) a locale. Note that assigning with =. does not achieve this.
I think this is impossible, but I'd love confirmation from a J expert.
After seeing Eelvex's answer, I feel I have to clarify my question. Here's what I want: I want a name that is global within a locale but invisible outside a locale, even if you know the name and qualify it with the locale suffix, exactly analogous to a private member of a class in OOP.
Let's imagine a J verb called private that makes a name private within a locale.
cocurrent 'foo'
x =: 3
private 'x' NB. x is still visible to all members of _foo_, but cannot be accessed in any way outside of _foo_
bar =: 3 : 'x & *'
cocurrent 'base'
bar_foo_ 14 NB. This works, because bar_foo_ can see x_foo_
x_foo_ NB. value error. We can't see x_foo_ because it's private to the locale.
Edit (after OP's edit):
No, you can't hide a name. If an entity is visible in a locale, then it is accessible from all locales. AFAIK the only names that are truly private are names defined with =. inside an explicit (:) definition.
Previous answer:
All names are visible within (but not without) their locale. Eg:
a_l1_ =: 15
a_l2_ =: 20
coclass 'l1'
a
15
coclass 'l2'
a
20
coclass 'base'
a
|value error: a
Short answer: Yes, it's impossible in current implementations.
Long answer: You probably should think of locales as being the public part of a class or object (though locales can also be used for other purposes, such as stack frames or closures).
If you want hidden information, you might think about putting it in a different process, or on a different machine, rather than in a locale. You could also try obscuring it (for example, using the foreign function interface, or files), but whether this is valid depends on your reasons for hiding the information.
That said, note that accessing arbitrary information in an arbitrary locale is somewhat like using the debugger api or reflection api in another language. You can do it, but if that's not what you want you should probably avoid doing that.
That said, in my opinion, you should ideally eliminate private state, rather than hide it. (And, if that winds up being too slow, you might also consider implementing the relevant speed-critical part of your code in some other language. J is wonderful for exploring architectural alternatives but the current implementations do not include compilers suitable for optimizing arbitrary, highly serial, algorithms. You could consider (13 :) or (f.) to be compilers - but they are not going to replace something like the gcc build tools and they currently are not capable of emitting code that gcc can handle.)
That said, it's also hypothetically possible that a language extension (analogous to 9!:24) could be added, to prevent explicit access to locales from new sentences.

puppet inheritance VS puppet composition

I just came across Puppet inheritance lately. A few questions around it:
Is it good practice to use Puppet inheritance? I've been told by some experienced Puppet colleagues that inheritance in Puppet is not very good, but I was not quite convinced.
Coming from the OO world, I really want to understand, under the covers, how Puppet inheritance works, and how overriding works as well.
That depends, as there are two types of inheritance and you don't mention which you mean.
Node inheritance: inheriting from one node fqdn { } definition to another. This in particular is strongly recommended against, because it tends to fail the principle of least surprise. The classic example that catches people out is this:
node base {
  $mta_config = "main.cf.normal"
  include mta::postfix # uses $mta_config internally
}

node mailserver inherits base {
  $mta_config = "main.cf.mailserver"
}
The $mta_config variable is evaluated in the base scope, so the "override" that is being attempted in the mailserver doesn't work.
There's no way to directly influence what's in the parent node, so there's little benefit over composition. This example would be fixed by removing the inheritance and including mta::postfix (or another "common"/"base" class) from both. You could then use parameterised classes too.
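A minimal sketch of that fix, assuming mta::postfix is refactored to take a parameter (the parameter name config_file below is purely illustrative):

node base {
  class { 'mta::postfix':
    config_file => 'main.cf.normal',
  }
}

node mailserver {
  class { 'mta::postfix':
    config_file => 'main.cf.mailserver',
  }
}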
Class inheritance: the use for class inheritance is that you can override parameters on resources defined in a parent class. Reimplementing the above example this way, we get:
class mta::postfix {
  file { "/etc/postfix/main.cf":
    source => "puppet:///modules/mta/main.cf.normal",
  }
  service { ... }
}

class mta::postfix::server inherits mta::postfix {
  File["/etc/postfix/main.cf"] {
    source => "puppet:///modules/mta/main.cf.server",
  }
  # other config...
}
This does work, but I'd avoid going more than one level of inheritance deep as it becomes a headache to maintain.
In both of these examples though, they're easily improved by specifying the data ahead of time (via an ENC) or querying data inline via extlookup or hiera.
Hopefully the above examples help. Class inheritance allows for overriding of parameters only - you can't remove previously defined resources (a common question). Always refer to the resource with a capitalised type name (file { ..: } would become File[..]).
It's also useful that you can define parameters as undef, effectively unsetting them.
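For instance, continuing the inherits example above (the subclass name is only illustrative):

class mta::postfix::bare inherits mta::postfix {
  File["/etc/postfix/main.cf"] {
    source => undef,  # the source set in mta::postfix is effectively unset
  }
}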
First, let me just state the basic difference between the two: inheritance is an "is-a" relationship and composition is a "has-a" relationship.
1) In Puppet, inheritance is single inheritance: we cannot derive from more than one class. Inheritance is fine in Puppet, but we should be aware of where it applies. For example, the Puppet docs section "Aside: When to Inherit" (https://docs.puppetlabs.com/puppet/latest/reference/lang_classes.html#aside-when-to-inherit) names exactly two situations where inheritance should happen:
when you want to overwrite a parameter of a resource defined in the parent class
when you want to inherit from a parameters class for standard parameter values
But please note some important things here:
In Puppet there is a difference between node and class inheritance.
Recent versions of Puppet no longer allow node inheritance; please check https://docs.puppetlabs.com/puppet/latest/reference/lang_node_definitions.html#inheritance-is-not-allowed.
2) Composition, on the other hand, is the design technique for implementing a has-a relationship, which we can do using the include keyword or with class { 'baseclass': }; the latter is used if you want to pass parameters.
(Please note: in Puppet we can use include on a class multiple times, but not the class { } syntax, as Puppet will complain about a duplicate class declaration.)
So which is better to use in Puppet, inheritance or composition? It depends on the context: what Puppet code you are writing at the moment, and understanding the limitations of Puppet inheritance and when to use composition.
So, I will try to sum all this up in a few points:
1) First, Puppet uses a single-inheritance model.
2) In Puppet, the general consensus around inheritance is to only use it when you need to inherit defaults from a base/parent class.
3) But look at this problem, where you want to inherit defaults from a parent:
class apache {
}
class tomcat inherits apache {
}
class mysql inherits tomcat {
}
class commerceServer inherits mysql {
}
At first glance this looks logical, but note that the mysql class is now inheriting defaults and resources from the tomcat class. Not only does this make NO sense, as these services are unrelated, it also opens the door for mistakes to end up in your Puppet manifests.
4) So the better approach is to simply perform an include of each class you wish to use (i.e., composition), as this eliminates all scope problems of this nature.
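A sketch of that composition-based layout, reusing the class names from the chain above:

class commerceServer {
  include apache
  include tomcat
  include mysql
}

Each class now stands alone, and commerceServer simply pulls in the pieces it has ("has-a") rather than pretending to be a specialised mysql ("is-a").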
Conclusion: We can try to simplify our Puppet manifests by using inheritance, and this might be sufficient, but it's only workable up to a point. If your environment grows to hundreds or even thousands of servers, made up of 20 or 30 different types of server, some with shared attributes and subtle differences, spread out over multiple environments, you will likely end up with an unmanageable tangled web of inherited modules. At this point, the obvious choice is composition.
Go through these links; they help to build a good understanding of Puppet composition and inheritance (personally, they helped me):
Designing Puppet (really good): http://www.craigdunn.org/2012/05/239/
Wikipedia on composition over inheritance: http://en.wikipedia.org/wiki/Composition_over_inheritance
Modeling Class Composition with Parameterized Classes: https://puppetlabs.com/blog/modeling-class-composition-with-parameterized-classes
I am basically a programmer, and personally a strong supporter of Inversion of Control/Dependency Injection, which is a concept/pattern made possible through composition.
