I have a basic doubt about the Puppet package resource. Suppose I have a package resource declared in the manifest file, e.g. to install apache using apt-get:
1. During the first run of the Puppet agent, apache gets installed.
2. I run the agent again (with the same package resource) after the Ubuntu repo has been refreshed with a newer version of apache.
Will Puppet update/refresh apache on the agent server?
The package resource's ensure attribute determines this. From the documentation:
What state the package should be in. On packaging systems that can retrieve new packages on their own, you can choose which package to retrieve by specifying a version number or latest as the ensure value. On packaging systems that manage configuration files separately from “normal” system files, you can uninstall config files by specifying purged as the ensure value. This defaults to installed.
Version numbers must match the full version to install, including release if the provider uses a release moniker. Ranges or semver patterns are not accepted except for the gem package provider. For example, to install the bash package from the rpm bash-4.1.2-29.el6.x86_64.rpm, use the string '4.1.2-29.el6'.
Valid values are present (also called installed), absent, purged, held, latest. Values can match /./.
Source: https://puppet.com/docs/puppet/5.3/types/package.html#package-attribute-ensure
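So the answer to your question depends on ensure: with the default installed, Puppet will not upgrade apache after the repo refreshes; with latest, it will. A minimal sketch (apache2 is the Ubuntu package name; adjust for your platform):

# Only ensures the package exists; Puppet will NOT upgrade it
# when the repo later publishes a newer version.
package { 'apache2':
  ensure => installed,
}

# Compares the installed version against the newest candidate in the
# repo on every run, and upgrades when they differ.
package { 'apache2':
  ensure => latest,
}

# Pins an exact version, release moniker included (the rpm example
# from the documentation quoted above).
package { 'bash':
  ensure => '4.1.2-29.el6',
}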
I want to install package A, which has a dependency on package B, and package B is available from two providers. When I install package A, can I specify in the spec file which provider should be used to download package B when it is being installed as a dependency?
When you say "provider", do you mean "repository"? If so, no.
A workaround would be to repackage the RPM with a different %dist tag, and then by explicitly calling that out, you would force it to use the version from your local repo. But that would be a rabbit hole that's probably not worth running down.
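If you do repackage, the mechanics look roughly like this; the package names, versions, and the .local suffix are hypothetical:

# In the rebuilt package B's spec file, extend the dist tag:
Release: 1%{?dist}.local

# In package A's spec file, require that exact version-release, so the
# dependency can only be satisfied from the repo carrying the rebuild:
Requires: packageB = 1.0-1.el7.local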
I am facing a problem installing a package from the Debian buster repo on a system that has two repos; this problem will affect the majority of the packages we deploy with Puppet. Puppet is trying to install it from our local repo instead.
We are running multiple Ganeti clusters with Ubuntu 16 on the hardware as well as on the VMs. Now we have decided to move to Debian stable for the hardware. We have a local repo in the company that provides our specific packages as well as some Ubuntu packages. I set up a new Ganeti cluster with Debian and some VMs for the testing phase.
The code I am using is the following:
package { 'haproxy':
  ensure => latest,
}
On the VM I installed the haproxy package manually, because I had hit the error described below and wanted to see what would happen if the package was already present on the system. That gives me the following situation:
# apt-cache policy haproxy
haproxy:
Installed: 1.8.19-1
Candidate: 1.8.19-1ppa1~xenial
Version table:
1.8.19-1ppa1~xenial 500
500 http://our.local.repo/local-xenial local-xenial/main amd64 Packages
*** 1.8.19-1 500
500 http://ftp.de.debian.org/debian buster/main amd64 Packages
100 /var/lib/dpkg/status
.....
When I run puppet agent on the node I get an error:
The following packages have unmet dependencies:
haproxy : Depends: libssl1.0.0 (>= 1.0.2~beta3) but it is not installable
E: Unable to correct problems, you have held broken packages.
Error: /Stage[main]/puppet_haproxy::Base/Package[haproxy]/ensure: change from 1.8.19-1 to 1.8.19-1ppa1~xenial failed: Could not update: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install haproxy' returned 100: Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
haproxy : Depends: libssl1.0.0 (>= 1.0.2~beta3) but it is not installable
E: Unable to correct problems, you have held broken packages.
So obviously Puppet is trying to upgrade the package to 1.8.19-1ppa1~xenial, which corresponds to ensure => latest in the package resource.
I don't want to change the ensure attribute to installed or present; rather, I am trying to get the code working on Debian as well as on Ubuntu (where it already works).
Changing the Pin-Priority would also not be a good idea, since we need some of our standard packages from the local repo to be installed on each system without modifying the Puppet code in each module.
The only workaround I have thought of is to add the install_options attribute, so that the package resource would look as follows (I haven't tested it yet):
if $facts['operatingsystem'] == 'Debian' {
  package { 'haproxy':
    ensure          => latest,
    install_options => ['-t', 'buster'],
  }
} else {
  package { 'haproxy':
    ensure => latest,
  }
}
But that means I would have to modify all package resources in each module whenever a conflict occurs, which I want to avoid.
Is there a better way to achieve this?
Thanks
If you were installing packages manually from the command line, subject to the repository configuration you describe, then you would need to provide a command-line option each time, right? You would need either to specify a particular package version or to modulate the source list.
Puppet has no magic solution for that. If you need to override the repository configuration with respect to particular packages, then you need one way or another to provide attributes appropriate for that purpose on the affected Package resources. There are two basic avenues of approach:
1. Update your repository configuration so that you don't need extra options. For example, perhaps you can split your local repo into two, one with lower priority than the distro and one with higher; or perhaps you need separate repos for Debian and Ubuntu. OR
2. Modify the Package resources declared in your Puppet manifests, possibly in a distro-specific way. Although you can use conditional statements to achieve any needed distro-specificity, I'd suggest instead a data-driven approach relying on parameterized classes and Hiera (see the sketch below).
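As an illustration of the data-driven approach, here is a minimal sketch; the profile::haproxy class name and the Hiera layout are assumptions, not anything from your codebase:

# site-modules/profile/manifests/haproxy.pp (hypothetical profile class)
class profile::haproxy (
  Array[String] $install_options = [],
) {
  package { 'haproxy':
    ensure          => latest,
    install_options => $install_options,
  }
}

# data/os/Debian.yaml -- only Debian nodes get the target-release flag
profile::haproxy::install_options:
  - '-t'
  - 'buster'

With an operatingsystem (or osfamily) level in your Hiera hierarchy, Ubuntu nodes simply fall back to the empty default, so the conditional disappears from the manifest.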
I'm developing a custom recipe, using Chef's package resource, in my packages cookbook.
I created a file under the recipes folder called apache.rb.
Then I uploaded the cookbook through Berks and edited the run list on one node, adding recipe["packages::apache"].
When I run chef-client, I get the following error:
could not find recipe apache for cookbook packages
This is apache.rb, located under the recipes/ folder:
package 'Install Apache' do
  case node[:platform]
  when 'redhat', 'centos'
    package_name 'httpd'
    version '2.2.0'
  when 'ubuntu', 'debian'
    package_name 'apache2'
  end
  action :install
end
Can you try knife upload . --force to make sure the cookbook is really up to date on the Chef server?
There might be an older version of the cookbook already uploaded (i.e. before you created the apache recipe), and because you've kept the version number in metadata.rb the same, knife (or berks, depending on what you use for the upload) might be skipping the upload, thinking nothing's changed.
UPDATE:
It should be noted that the above should only be used if you are sure you want to overwrite the existing version on the Chef Server (e.g. if you are still in development).
Bumping the version number in the cookbook's metadata would be a much better way to solve this problem for production environments, as pointed out by @Tensibai in the comments below.
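A minimal sketch of the version bump (the numbers are placeholders; any increment works):

# metadata.rb -- a new version makes the Chef Server store a new
# cookbook release instead of skipping the upload as unchanged
name    'packages'
version '0.2.0'   # was '0.1.0'

After bumping, a plain berks upload (or knife cookbook upload packages) pushes the new release without needing --force.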
I'm used to deploying code that depends on Composer (PHP's NPM cousin), which sports .json and .lock files. The first one describes the package and your version constraints, and the second one lists exactly what was installed. Whenever there's a lock file and you run composer install, you're sure to receive the same set of packages; running composer update will re-read the json file, install new versions, and update the lock file.
That's awesome for production deployment, since you don't need to check your dependencies into your version control system, and you're sure to have the exact same set of dependencies in production as you have in development.
My question is: how to best automate deployment of NPM-dependent code? Is it possible to achieve a method similar to Composer's? I've noticed that npm install only installs what's available in package.json on the first run; after that, if you change a version constraint, you must manually npm update that package. That would render automated deployment useless, as there's no way to check an instruction like "update this package here to a new version" into version control...
npm shrinkwrap is an analog of the composer.lock file. It generates an npm-shrinkwrap.json that records every dependency with its exact version, so you can use it to deploy to the production environment. You can also try various libs from npm to lock versions, or to search for updates, without changing package.json.
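A minimal sketch of the workflow, mirroring Composer's install/update split:

# Development machine: resolve dependencies, then freeze the exact tree.
npm install
npm shrinkwrap        # writes npm-shrinkwrap.json; commit this file

# Production machine: npm install detects npm-shrinkwrap.json and
# reproduces the exact same tree, like composer install with a lock file.
npm install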
I am struggling with an incorrect usage of Composer, for sure.
I set up this repository: https://github.com/alle/assets-merger
I forked the project and was just trying to make it a kohana-module, including all the dependencies.
Since it needs the YUI Compressor JAR, I was trying to make just that JAR file a dependency, and I ended up declaring it in the composer.json file (please, look at this).
When I need to add my new package to a project, I add it in the require section as follows:
...
"alle/assets-merger": "dev-master",
...
But the (latest) composer update command says:
Loading composer repositories with package information
Updating dependencies (including require-dev)
Your requirements could not be resolved to an installable set of packages.
Problem 1
- Installation request for alle/assets-merger dev-develop -> satisfiable by alle/assets-merger[dev-develop].
- alle/assets-merger dev-develop requires yui/yuicompressor 2.4.8 -> no matching package found.
Potential causes:
- A typo in the package name
- The package is not available in a stable-enough version according to your minimum-stability setting see <https://groups.google.com/d/topic/composer-dev/_g3ASeIFlrc/discussion> for more details.
And my story ends here.
How should I configure my composer.json in the https://github.com/alle/assets-merger repository, in order to include it as a fully satisfied kohana-module in other projects?
Some things I notice in your composer.json.
There is a version of that CSS minifier available on Packagist, which says it is just a copy of the original Google Code hosted files, but with Composer support: natxet/cssmin. It is version 3.0.2, but I think that shouldn't make a difference.
mrclay/minify is included twice in the packages, with the same version. It is also available on Packagist. You are probably already using that (version 2.2.0 is registered, and because you didn't turn off Packagist access, it will be generally available for install unless a version requirement or conflict prevents it).
You are trying to download a JAR file (which is a Java executable with no PHP in it), but try to get PHP classmaps out of it. That will fail for sure.
You missed the big note in the Composer documentation saying that Composer cannot resolve repositories mentioned in sub-packages, only in the root package. That means that whatever repositories you mention in your alle/assets-merger package will not be used if you use that package anywhere else. You'd have to duplicate those repositories in every consuming project, in addition to adding the package name itself as "required" (see the sketch below).
What this means is that you probably avoided missing mrclay/minify because it is available on Packagist; you might have added the cssmin by accident as well, but you definitely did not add YUICompressor.
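For illustration, this is roughly what every consuming root project would have to repeat; the VCS repository entry is an assumption about how the package is published:

{
    "require": {
        "alle/assets-merger": "dev-master"
    },
    "repositories": [
        {
            "type": "vcs",
            "url": "https://github.com/alle/assets-merger"
        }
    ]
}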
But you shouldn't add this in the first place, because it is not PHP software. You can, however, add post-install commands to your projects: all your Composer integration does is download the JAR file, and you can do that with a post-install or post-update command. See the documentation here.
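A minimal sketch of that approach; the download URL and the bin/ target path are assumptions for illustration:

{
    "scripts": {
        "post-install-cmd": [
            "php -r \"if (!file_exists('bin/yuicompressor-2.4.8.jar')) { @mkdir('bin'); copy('https://github.com/yui/yuicompressor/releases/download/v2.4.8/yuicompressor-2.4.8.jar', 'bin/yuicompressor-2.4.8.jar'); }\""
        ]
    }
}

Hook the same command into post-update-cmd if it should also run on composer update.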