I am an absolute novice in Puppet and I need to modify an existing Puppet script.
So, in the chain I have:
package { 'python-pip':
  ensure => 'present',
} ->
In reality it should be
package { 'python2-pip':
  ensure => 'present',
} ->
Can I add an OR condition, so it will work for both 'python2-pip' and 'python-pip'? So, if any of these packages is installed the result will be positive - is it possible?
Can I add an OR condition, so it will work for both 'python2-pip' and 'python-pip'? So, if any of these packages is installed the result will be positive - is it possible?
No, it is not possible as such. Every Package resource manages exactly one package, and declaring such a resource means that Puppet should attempt to ensure the specified state -- in this case, that some version of the specified package is installed on the target node. If you declare two packages then Puppet will try to manage two packages.
If truly "in reality it should be" python2-pip, then you should simply change the manifest appropriately.
If your nodes under management are heterogeneous, however, you may actually mean that on a strict subset of your nodes, the package name should really be "python2-pip", whereas on others, the current "python-pip" is correct. This kind of situation is relatively common, and it is usually addressed by using node facts to modulate your declarations.
That could take the form of using if blocks or other conditional statements to switch between alternative whole declarations, but often one can be more surgical. For example, a common approach is to choose just the package name conditionally, store the result in a variable, and plug it into your declaration. Maybe something like this:
$pip_package = $::operatingsystemmajrelease ? {
  default => 'python-pip',
  '7'     => 'python2-pip',
}

package { $pip_package:
  ensure => 'present',
}
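For comparison, the same choice written as an if/else block rather than a selector might look like the sketch below. The fact and the release test are the same assumptions as in the selector example; adjust them to match your own platforms.
if $::operatingsystemmajrelease == '7' {
  # On this release the package is named python2-pip
  $pip_package = 'python2-pip'
} else {
  # Everywhere else, fall back to the traditional name
  $pip_package = 'python-pip'
}

package { $pip_package:
  ensure => 'present',
}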
Since you are a novice, it might be worth your while to have a look at how some existing modules approach such problems. Many modules are available on GitHub for you to browse, and anything you install from the Forge is at minimum available locally on your machine for you to examine.
I have been working on a Clojure/ClojureScript project and something intrigues me.
In the shadow-cljs.edn file, there is a declaration of the dependencies. As you might see below, some of them have "a full name" declaration, indicated as username/repository-name. An example is venantius/accountant.
Others are declared only as repository-name, such as [bidi "2.1.5"], which is actually published by the user juxt (source).
I am afraid this could be problematic since multiple users could create repositories with the same name:
{:source-paths ["src" "dev" "test"]
:dependencies [
;; for deploy w lein deps below need to be in project.cljs
;; third-party dependencies
[venantius/accountant "0.2.5"]
[bidi "2.1.5"]
[cljs-hash "0.0.2"]
[clova "0.46.0"]
[com.andrewmcveigh/cljs-time "0.5.2"]
[org.clojure/core.match "1.0.0"]
[binaryage/dirac "RELEASE"]
[com.pupeno/free-form "0.6.0"]
[garden "1.3.10"]
[hickory "0.7.1"]
[metosin/malli "0.8.4"]
[medley "1.4.0"]
[binaryage/oops "0.7.0"]
[djblue/portal "0.16.1"]
[djblue/portal "0.18.0"]
[proto-repl "0.3.1"]
[reagent "1.1.0"]
[re-frame "1.2.0"]
[district0x/re-frame-window-fx "1.1.0"]
[cljsjs/react-beautiful-dnd "12.2.0-2"]
I am not sure how dependency installation works at a low level in a Clojure/ClojureScript project.
Is it a bad practice to have only the brief name of dependency? Is an ambiguity problem feasible or even possible?
Until not too long ago it was allowed to publish dependencies to https://clojars.org without a group name. In those cases the group would become identical to the artifact id. So bidi is effectively bidi/bidi.
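For example, in a dependency vector the short and the fully qualified coordinates refer to the same artifact:
;; these two coordinates point at the same artifact on Clojars
[bidi "2.1.5"]        ;; implicit group, i.e. bidi/bidi
[bidi/bidi "2.1.5"]   ;; the same dependency written out in full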
Nowadays, new packages may only be published with a specific group name. However, old packages may continue using their older name.
The names used to publish also do not need to match their GitHub repo coordinates. These are separate systems. They often match, but they are not required to.
To answer your question: you should avoid using the same dependency multiple times, and you should use the official published name for each library. Some libraries are still updated under their old identifiers. Some moved to the newer, longer names, while the old ones are still available but no longer receive updates. Always consult the documentation of the specific libs to be sure which one you are supposed to use. They'll usually have some kind of info in their READMEs.
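Applied to the snippet in the question, that means keeping a single coordinate per library -- for instance only one djblue/portal entry instead of two (which version you keep is up to you):
;; keep one portal entry, e.g. the newer of the two listed above
[djblue/portal "0.18.0"]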
Conflicts may happen if you get the "same" lib via different identifiers. These can be very difficult to identify when you run into trouble. This is true for any dependency resolver you use (e.g. project.clj, deps.edn, shadow-cljs.edn). Best practice is to keep your dependencies as clean as possible.
In Puppet 3 and prior, templates in defines inherited scope from their calling class the same way native defined types do, so if I had a file resource with a template source created by a define, that template could make use of variables set in the calling class no matter which class called the define.
In Puppet 4 (also tested with Puppet 3.8 future parser), that appears to no longer be the case, and it is causing breakage that is hard to even measure in my environment, which has relied on this behavior for tens of thousands of lines of code. Is there any way at all to get this back? After looking into it, even converting the defines into native types doesn't solve the problem, as they rely on the ability to gather server-side information about what templates are available in different modules via custom functions, and everything in a native resource type appears to happen on the client.
Is this fixable, or do I attempt to wait for Puppet 5?
Edit: Don't get too caught up in the word 'scope' here -- I need any way to pass all class variables to a define and unpack them there so that they are available to templates, or a way to have a native type see what files are inside specified modules on the puppetmaster. I will accept any bizarre, obscure message passing option as long as it has this result, because the classes do not know where the templates are -- only the define does, because it's making use of the helper functions that scan the filesystem on the server.
Edit 2: To prove this works as expected in Puppet 3.8.5, use the following:
modules/so1/manifests/autotemplate.pp:
# Minimal define example for debugging
define so1::autotemplate (
  $ensure = 'present',
  $module = $caller_module_name,
) {
  $realtemplate = "${module}${title}"
  file { $title:
    ensure  => $ensure,
    owner   => 'root', group => 'root', mode => '0600',
    content => template($realtemplate),
  }
}
in modules/so2/manifests/example.pp:
# Example class calling so1::autotemplate
class so2::example (
  $value = 'is the expected value'
) {
  so1::autotemplate { '/tmp/qwerasdf': }
}
in modules/so2/templates/tmp/qwerasdf:
Expected value: <%= @value %>
In Puppet 3.8.5 with future parser off, this results in /tmp/qwerasdf being generated on the system with the contents:
Expected value: is the expected value
In Puppet 3.8.5 with parser = future in environment.conf (and also Puppet 4.x, though I tested this example specifically on a 3.8.5 future-parser environment), this results in the file being created with the contents:
Expected value:
Edit 3: two-word touch-up for precision
In Puppet 3 and prior, defines inherited scope from their calling class the same way native defined types do, so if I had a file resource with a template source created by a define, that template could make use of variables set in the calling class no matter which class called the define.
What you're describing is Puppet's old dynamic scoping. The change in scoping rules was one of the major changes in Puppet 3.0; it is not new in Puppet 4. There was, however, a bug in Puppet 3 that templates still used dynamic scope. That was fixed in Puppet 3.5, but only prospectively -- that is, when the future parser is enabled. Defined types themselves went through the scoping change in Puppet 3.0.0, along with everything else. The scope changes were a big deal (and Puppet devoted considerable effort to alerting users to them) when they first went into place, but nowadays there's no big deal here.
it is causing breakage that is hard to even measure in my environment, which has relied on this behavior for tens of thousands of lines of code.
I'm sorry you're having that experience.
Is there any way at all to get this back?
No. Puppet scoping rules do not work the way you want them to. That they did work that way in templates (but not most other places) in Puppet 3 was and still is contrary to Puppet's documentation, and was never intentional.
Is this fixable, or do I attempt to wait for Puppet 5?
There is no way to get dynamic variable scoping in templates or elsewhere in Puppet 4, and I have no reason to think that there will be one in Puppet 5.
If you need a template to expand host variables from the scope of a particular class, then you can get that by evaluating the template in the scope of that class. Alternatively, an ERB template can obtain variables from (specific) other scopes by means of the scope object. On the other hand, if you want to expand the template in the scope of a defined type, then you could consider passing the needed variables as parameters of that type.
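As a rough sketch of the parameter-passing option, reworking the example from the question (the names are kept, but this is illustrative rather than a drop-in fix):
define so1::autotemplate (
  $value,                          # passed in explicitly by the caller
  $ensure = 'present',
  $module = $caller_module_name,
) {
  # The template is evaluated in this define's scope, so it can read @value.
  file { $title:
    ensure  => $ensure,
    owner   => 'root', group => 'root', mode => '0600',
    content => template("${module}${title}"),
  }
}

class so2::example (
  $value = 'is the expected value'
) {
  so1::autotemplate { '/tmp/qwerasdf':
    value => $value,
  }
}
Alternatively, the ERB template itself can read a specific class's variable with a fully qualified lookup such as <%= scope['so2::example::value'] %>, at the cost of hard-coding the class name into the template.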
There may be other ways to address your problem, but I'd need more details to make any other suggestions. (And that would constitute a separate question if you choose to ask it here on SO.)
If you use something like $FlowIssue it's not guaranteed to be in everyone's .flowconfig file. If you declare a library interface, that seems to only work for the given project, not in other projects that import your package (even if you provide the .flowconfig and interface files in your NPM package).
Here's the code I'm trying to suppress errors for in apps that use my package:
// $FlowIssue
const isSSRTest = process.env.NODE_ENV === 'test' // $FlowIssue
&& typeof CONFIG !== 'undefined' && CONFIG.isSSR
CONFIG is a global that exists when tests are run by Jest.
I previously had an interface declaration for CONFIG, but that wasn't honored in user applications -- perhaps I'm missing a mechanism to make that work? With this solution, at least there is a good chance that users have the $FlowIssue suppression comment. It's still not good enough, though.
What's the idiomatic solution here for packages built with Flow?
Declaring a global variable
This is the way to declare a global variable:
declare var CONFIG: any;
Instead of any you could/should use the actual type.
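If the declaration needs to apply project-wide, it can live in a library definition file that .flowconfig picks up via its [libs] section. The file path and the shape of the type below are only illustrative (the shape is guessed from the CONFIG.isSSR usage above):
// flow-typed/globals.js -- any file included via [libs] in .flowconfig
declare var CONFIG: {
  isSSR: boolean,
};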
Error Suppression
With Flow v0.33 they introduced this change:
suppress_comment now defaults to matching // $FlowFixMe if there are no suppress_comments listed in a .flowconfig
This means that there is a greater chance of your error being suppressed if you use $FlowFixMe.
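For completeness, explicitly listing both comments in .flowconfig looks roughly like this (the regex escaping follows the style shown in the Flow docs; double-check against your Flow version):
[options]
suppress_comment=\\(.\\|\n\\)*\\$FlowFixMe
suppress_comment=\\(.\\|\n\\)*\\$FlowIssue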
Differences in .flowconfig between your library and your consumers' code are a real issue, and there is no way to make it so that your code can be dropped into any other project and be sure it will typecheck. On top of that, even if you have identical .flowconfigs, you may be running different versions of Flow. Your code may typecheck in one version, but not in another, so it may be the case that consumers of your library will be pinned to a specific version of Flow if they want to avoid getting errors reported from your library.
Worse, if one library type checks only in one version of Flow, and another type checks only in another version, there may be no version of Flow that a consumer can choose in order to avoid errors.
The only way to solve this generally is to write library definition files and publish them to flow-typed. Unfortunately, this is currently a manual process because there is not yet any tooling that can take a project and generate library definitions for it. In the meantime, simply copying your source files to have the .js.flow extension before publishing will work in some cases, but it is not a general solution.
See also https://stackoverflow.com/a/43852211/901387
I was wondering whether Node.js/npm includes any kind of extension mechanism comparable to Python setuptools' "entry points".
So, in short:
is there any way I can do dynamic discovery of services provided by other packages using npm?
if not, what would be the best way to implement something similar? Specifying the extension name in the main module's configuration file seems to be the logical solution, but I wonder whether something "automatic" can be done.
I'm not aware of any builtin mechanism to do this.
One viable way of doing it yourself:
I made a small tool (Jumpstart) to quickly create project scaffolding from templates with placeholders, and I used a kind of plugin mechanism for that. It basically comes down to the Jumpstart script searching for modules named jumpstart-* "adjacent" to where the module itself is installed, so it works for both local and global installations. If installed locally, it searches the other local modules (on the same level); if installed globally, it searches the other global modules.
Note that here, "search" comes down to a simple fs.exists check to see if there's a Jumpstart template module with a particular name installed. However, nothing would stand in the way of actually getting a full list of all installed packages matching the jumpstart-* pattern and loading them all at once. I could also search up the entire directory tree for node_modules directories and do the same. There's no point in doing this for this particular program, however.
See https://npmjs.org/package/jumpstart for docs.
The only limitation to this technique is that all modules must be named in a consistent fashion: start with some string, end with some string, something like that. Any rogue packages polluting the namespace could be detected by doing further checks on a package's contents: what files does it contain? What kind of object does its main module export? And so on.
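A rough sketch of that kind of prefix-based discovery (this is not the actual Jumpstart source; the prefix and paths are illustrative):
const fs = require('fs');
const path = require('path');

// Load every package installed "adjacent" to this module whose name starts
// with the given prefix. Assumes this file sits at the package root, so the
// parent directory is node_modules (or the global modules directory).
function loadPlugins(prefix) {
  const modulesDir = path.resolve(__dirname, '..');
  return fs.readdirSync(modulesDir)
    .filter((name) => name.startsWith(prefix))
    .map((name) => require(path.join(modulesDir, name)));
}

// Usage: collect every installed plugin named e.g. "myapp-plugin-foo"
const plugins = loadPlugins('myapp-plugin-');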
Brunch also uses a plugin mechanism. That one actually deals with file extensions, so it is more relevant: https://github.com/brunch/brunch/wiki/Plugins . See, for example, the source of the CoffeeScript plugin: https://github.com/brunch/coffee-script-brunch/blob/master/src/index.coffee .
I am working on packaging some projects.
Assume I have a config file like this in my project:
name=foo
mail=foo@foo.com
After installation, the user edits the config file with their own information:
name=user
mail=user@somedomain.com
When an update comes, I do not replace the config file with the new one (as any well-behaved package should avoid doing), so as not to ruin the user's settings.
There is no problem up to this point.
What if I add a new parameter to my config file? For example,
name=foo
mail=foo@foo.com
age=23
If I replace the config file with the new one, the user will lose their settings. If I don't, my new parameter cannot be used. What is the general procedure for this situation? My question is valid no matter what the package type is (i.e. rpm, deb, or tbz).
@William Pursell: Just because you don't see the problem, that does not mean that there isn't a problem.
This definitively is a problem and it has plagued me since I maintain deb packages. For example: many configuration files contain commented configuration items and other comments that the package user is supposed to read and understand before applying his configuration changes. If, in the normal course of software development, there are new configuration items, new default values, or different semantics to existing ones, the comments have to be adapted. This is the package maintainer's job. But at the same time, the package must not mess with the configuration changes already applied by the user.
When I do this in Debian/Ubuntu, the package user is confronted with this intimidating question:
Configuration file `/etc/...'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ? Your options are:
    Y or I : install the package maintainer's version
    N or O : keep your currently-installed version
      D    : show the differences between the versions
      Z    : start a shell to examine the situation
 The default action is to keep your current version.
*** ... (Y/I/N/O/D/Z) [default=N] ?
for every single file. That is, for some package upgrades, the user has to type yes/no/maybe :-) many times, every time. The fact is that the package user usually does not know what this is all about. She has to dig into the files, diff versions, and do some guessing in order to figure out a reasonable answer -- an answer, by the way, that the package maintainer could have made already, if the packaging system allowed it.
I recognize that there may not exist a general solution to this problem. But I'd love to hear how other package maintainers cope with the situation.
I'm not sure I see the problem. As long as the software can handle the absence of the field in the config file (i.e., use a reasonable default), then there is no difference in the two scenarios you describe. If your software cannot handle the absence of the field, I would argue that that is a bug.
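To illustrate that point, a loader that falls back to defaults for keys missing from the file handles both the old and the new config file. This is only a sketch; the key=value syntax and the age default come from the example above.
def load_config(path, defaults):
    """Read key=value lines, falling back to defaults for missing keys."""
    config = dict(defaults)
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('#'):
                key, _, value = line.partition('=')
                config[key.strip()] = value.strip()
    return config

# An old config file that lacks the new 'age' key still works,
# because the default value fills the gap.
config = load_config('app.conf', {'name': 'foo', 'mail': 'foo@foo.com', 'age': '23'})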