I am trying to manage a file on my Puppet agents and have written the code below.
modules/
├── helloworld
│   └── manifests
│       ├── init.pp
│       └── motd.pp
└── ssh
    ├── manifests
    │   └── init.pp
    └── ssh_config
My Puppet manifest code:
# modules/ssh/manifests/init.pp
class ssh {
  package { 'openssl':
    ensure => present,
    before => File['/etc/ssh/sshd_config'],
  }
  file { 'ssh_config':
    ensure => file,
    path   => '/etc/ssh/sshd_config',
    mode   => "600",
    source => "puppet:///modules/ssh/ssh_config",
  }
  service { 'sshd':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ssh/sshd_config'],
  }
}
Below is the main manifest's code:
# manifests/site.pp
node default {
  class { 'ssh': }
}
Below is the error I am receiving:
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for dheera.asicdesigners.com
Info: Applying configuration version '1478497316'
Error: /Stage[main]/Ssh/File[ssh_config]: Could not evaluate: Could not retrieve information from environment production source(s)
puppet:///modules/ssh/ssh_config
Notice: /Stage[main]/Ssh/Service[sshd]: Dependency File[ssh_config] has failures: true
Warning: /Stage[main]/Ssh/Service[sshd]: Skipping because of failed dependencies
Notice: Applied catalog in 0.21 seconds
Your ssh_config file needs to live in the files directory of your module in order to be served through the puppet:/// URI you are using in the source attribute.
modules/
├── helloworld
│   └── manifests
│       ├── init.pp
│       └── motd.pp
└── ssh
    ├── manifests
    │   └── init.pp
    └── files
        └── ssh_config
Also, you probably meant your package resource to be openssh and not openssl.
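Putting both fixes together, a minimal sketch of the corrected module might look like the following (the package name openssh-server is an assumption and varies by distribution, e.g. openssh-server on Debian/Ubuntu and openssh on some RHEL-family systems):
# modules/ssh/manifests/init.pp
class ssh {
  # Assumed package name; adjust for your OS.
  package { 'openssh-server':
    ensure => present,
    before => File['/etc/ssh/sshd_config'],
  }
  # Served from modules/ssh/files/ssh_config on the master.
  file { '/etc/ssh/sshd_config':
    ensure => file,
    mode   => '0600',
    source => 'puppet:///modules/ssh/ssh_config',
  }
  service { 'sshd':
    ensure    => running,
    enable    => true,
    subscribe => File['/etc/ssh/sshd_config'],
  }
}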
Backstory
I got this error, among numerous others, while setting up a new VirtualBox LMDE 5 VM on my new Windows 11 Pro computer. I haven't used Windows in about 12 years, and the changes were drastic, causing countless errors on both Windows and the LMDE 5 VM.
My last remaining issue was this one in VS Code.
Error
go module packages.Load error: err: exit status 2: stderr: go: no such tool "compile": go list
My project directory structure
.
├── docker-compose.yaml
├── project.code-workspace
├── go.mod
├── go.sum
├── main.go
└── sub_packages
    ├── backend
    │   ├── folder1
    │   └── folder2
    ├── api
    │   ├── handlers
    │   └── requests
    ├── entities
    ├── services
    └── utils
settings.json file
{
  // ...
  "go.goroot": "/usr/local/go",
  "go.gopath": "/home/user_name/go",
  // ...
}
Solution
Add the GOTOOLDIR environment variable directly to the VS Code settings.json so the Go extension picks up the location of the .../linux_amd64 folder where the compile tool is located.
settings.json file
{
  // ...
  "go.goroot": "/usr/local/go",
  "go.gopath": "/home/username/go",
  "go.alternateTools": {
    "GOTOOLDIR": "/usr/local/go/pkg/tool/linux_amd64"
  },
  // ...
}
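To find the correct directory for your own install (it differs per OS and architecture), you can ask the toolchain itself; the path below is simply what it happened to be on this machine:
$ go env GOTOOLDIR
/usr/local/go/pkg/tool/linux_amd64
$ ls $(go env GOTOOLDIR) | grep compile
compile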
I'm working on a terragrunt code base for the first time, having used terraform a lot in the past without terragrunt. I'm a bit confused as to the structure terragrunt seems to enforce. I would usually organise my terraform thus:
.
├── main.tf
├── module
│   └── main.tf
└── module2
    └── main.tf
This is listed as best practice on the terraform docs:
The Root Module
Terraform always runs in the context of a single root module. A
complete Terraform configuration consists of a root module and the
tree of child modules (which includes the modules called by the root
module, any modules called by those modules, etc.).
Source
But none of the Terragrunt structures seem to represent this. It seems to be designed so that each module is independent and is run using the run-all command.
This seems problematic to me: from the existing code base I can see that this initialises Terraform for every module, and I'd say it causes issues with sharing secrets between modules. So I'd prefer to work with one root module and multiple child modules.
I can't find a Terragrunt pattern that will allow me to do this.
I'm also confused as to how this responsibility is decomposed: do I actually structure my Terraform (as above), or do I need an extra root .hcl file?
I'm after something a little like this, I guess:
└── live
    ├── prod
    │   ├── terragrunt.hcl
    │   ├── app
    │   │   └── terragrunt.hcl
    │   ├── mysql
    │   │   └── terragrunt.hcl
    │   └── vpc
    │       └── terragrunt.hcl
    ├── qa
    │   ├── terragrunt.hcl
    │   ├── app
    │   │   └── terragrunt.hcl
    │   ├── mysql
    │   │   └── terragrunt.hcl
    │   └── vpc
    │       └── terragrunt.hcl
    └── stage
        ├── terragrunt.hcl
        ├── app
        │   └── terragrunt.hcl
        ├── mysql
        │   └── terragrunt.hcl
        └── vpc
            └── terragrunt.hcl
But this example just talks about specifying the provider block and says nothing about a root main.tf, so I'm lost.
Each Terraform module you used before consumed inputs, created resources, and provided outputs. In plain Terraform, the wiring of those modules was done via the root main.tf you are referring to.
With Terragrunt, the wiring is instead done by terragrunt.hcl files. At the module level (e.g. live/prod/app/terragrunt.hcl) you define the module's dependencies, i.e. where the input values for this module's input variables come from, e.g.:
dependency "iam" {
  config_path = "../iam"   # folder containing the dependency's terragrunt.hcl
}
inputs = {
  username = dependency.iam.outputs.user.name
}
With this in mind, you might or might not use root-level terragrunt.hcl files. If you want to pull in the parent-folder terragrunt.hcl configuration, you need to add the following block to your module:
include "root" {
path = find_in_parent_folders()
expose = true
}
See the docs for this function here.
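To make that concrete, here is an illustrative sketch only (the bucket name and region are assumptions, not part of the question) of what a root-level live/terragrunt.hcl typically contains: the shared pieces, such as remote state and provider generation, that every child includes instead of a root main.tf:
# live/terragrunt.hcl (root) - shared config pulled in by each child via include "root"
remote_state {
  backend = "s3"
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite"
  }
  config = {
    bucket = "my-terraform-state-bucket"                        # assumption
    key    = "${path_relative_to_include()}/terraform.tfstate"
    region = "eu-west-1"                                        # assumption
  }
}

generate "provider" {
  path      = "provider.tf"
  if_exists = "overwrite"
  contents  = <<EOF
provider "aws" {
  region = "eu-west-1"
}
EOF
}
Each leaf folder (live/prod/app, live/prod/vpc, and so on) then needs only a short terragrunt.hcl with the include block above, a terraform { source = ... } block pointing at the module code, and its inputs; Terragrunt assembles the equivalent of the root main.tf for you.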
I have a Terraform modules repository containing different sets of modules, with the structure below:
BitBucket repository (URL: git@bitbucket.org:/{repoName}.git?ref=develop)
└── modules
    ├── s3
    │   ├── locals.tf
    │   ├── main.tf
    │   ├── output.tf
    │   └── variables.tf
    └── tfstate
        └── main.tf
develop is the branch that I want to use, and it is what I have given in the source URL. I am calling the module repository as shown below:
├── examples
│   ├── gce-nextgen-dev.tfvars
│   └── main.tf
main.tf
module "name" {
source = "git#bitbucket.org:{url}/terraform-modules.git? ref=develop/"
bucketName = "terraformbucket"
environment = "dev"
tags = map("ExtraTag", "ExtraTagValue")
}
How can I call the modules from sub-directories of a BitBucket repository?
It works if I remove the ref=develop from the URL and just give git@bitbucket.org:{url}/terraform-modules.git//modules//s3
But I don't want to use the master branch; I want develop in this case.
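For what it's worth, a sketch of the source syntax that is generally expected to work here (the repository path stays a placeholder, as above): the // separator selects the modules/s3 sub-directory inside the repo, and the branch goes in a ?ref= query string after the sub-directory rather than directly after .git:
module "s3" {
  # sub-directory after //, branch after ?ref=
  source      = "git::ssh://git@bitbucket.org/{url}/terraform-modules.git//modules/s3?ref=develop"
  bucketName  = "terraformbucket"
  environment = "dev"
  tags        = map("ExtraTag", "ExtraTagValue")
}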
When I try to install basic Windows modules (I'm new to Puppet), they are not recognized when I try to add their classes to a new group.
When I list the modules I can see them, and when I check the production module path everything is in the right place, so why can't I see them in the GUI?
Thanks.
[root@puppetmaster ~]# puppet module list
/etc/puppetlabs/code/environments/production/modules
├── badgerious-windows_env (v2.2.2)
├── chocolatey-chocolatey (v1.2.1)
├── puppet-download_file (v1.2.1)
├── puppet-iis (v1.4.1)
├── puppet-windowsfeature (v1.1.0)
├── puppetlabs-acl (v1.1.2)
├── puppetlabs-apache (v1.8.0)
├── puppetlabs-concat (v1.2.5)
├── puppetlabs-powershell (v1.0.6)
├── puppetlabs-reboot (v1.2.1)
├── puppetlabs-registry (v1.1.3)
├── puppetlabs-stdlib (v4.11.0)
├── puppetlabs-windows (v2.1.1)
└── puppetlabs-wsus_client (v1.0.1)
/opt/puppetlabs/puppet/modules
├── puppetlabs-pe_accounts (v2.0.2-6-gd2f698c)
├── puppetlabs-pe_concat (v1.1.2-7-g77ec55b)
├── puppetlabs-pe_console_prune (v0.1.1-9-gfc256c0)
├── puppetlabs-pe_hocon (v2015.3.0-rc0)
├── puppetlabs-pe_inifile (v1.1.4-16-gcb39966)
├── puppetlabs-pe_java_ks (v1.2.4-37-g2d86015)
├── puppetlabs-pe_nginx (v2015.2.0-rc0)
├── puppetlabs-pe_postgresql (v3.4.4-35-g51cdb78)
├── puppetlabs-pe_puppet_authorization (v2015.3.0-rc1-31-g6d266e1)
├── puppetlabs-pe_puppetdbquery (v2015.3.0-rc1-1-gb278efd)
├── puppetlabs-pe_r10k (v2015.2.2-2-g21c67b9)
├── puppetlabs-pe_razor (v0.2.1-84-gbb045d2)
├── puppetlabs-pe_repo (v2015.3.0-rc2-39-g796afc6)
├── puppetlabs-pe_staging (v0.3.3-24-g2d5dbb0)
└── puppetlabs-puppet_enterprise (v2015.3.1-1-g8c41b9f)
[root@puppetmaster ~]# puppet config print modulepath --section master --environment production
/etc/puppetlabs/code/environments/production/modules:/etc/puppetlabs/code/modules:/opt/puppetlabs/puppet/modules
In order to make the Foreman GUI fetch installed modules, you need a foreman-proxy running with the puppet.yml file enabled.
Note: I hope you have a working foreman-proxy running with the puppet.yml file enabled. If it isn't running, enable puppet in the foreman-proxy and configure the proxy settings in the Foreman GUI. That's enough to make the classes visible.
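As a minimal sketch of what "enabling puppet in the foreman-proxy" usually means (the file path and keys are assumptions and vary between foreman-proxy versions):
# /etc/foreman-proxy/settings.d/puppet.yml
---
:enabled: true
Restart the foreman-proxy service afterwards, then run the class import again from the Foreman GUI so the classes become visible.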
I have also pasted my puppet.conf file, in case you need it.
[main]
logdir=/var/log/puppet
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
prerun_command=/etc/puppet/etckeeper-commit-pre
postrun_command=/etc/puppet/etckeeper-commit-post
environment=production
reports=log, foreman
[master]
dns_alt_names = yourdomain.com
certname=yourdomain.com
external_nodes=/etc/puppet/node.rb
node_terminus=exec
[agent]
server=yourdomain.com
listen=true
[production]
modulepath=/etc/puppetlabs/code/environments/production/modules
Issue:
In Foreman I see the Environment column empty for this (and any other client) I try to add.
Environment:
I have a Foreman 1.7 server with 2 additional Puppet masters (3.8.2) which are seen in Smart Proxies and look healthy. I have created a new environment called 'destruct', which is defined in /etc/puppet/environments on all 3 servers with 'puppet' as the owner.
/etc/puppet/environments/destruct/
├── manifests
└── modules
    └── linux_ntp
        ├── manifests
        │   ├── config.pp
        │   ├── init.pp
        │   ├── install.pp
        │   ├── params.pp
        │   └── service.pp
        ├── metadata.json
        ├── Modulefile
        ├── Rakefile
        ├── README.markdown
        ├── spec
        │   ├── spec_helper.rb
        │   └── spec.opts
        ├── templates
        │   └── ntp.conf.erb
        └── tests
            └── init.pp
The environments path is specified in the [master] section of puppet.conf on all 3 servers (Foreman server + 2 external Puppet masters), so directory environments should be in play.
environmentpath = /etc/puppet/environments
basemodulepath = /etc/puppet/modules
I added the 'destruct' environment in Foreman: Configure->Environments, after which I ran the import from all 3 servers and it did not try to remove it, and it did import that single ntp module.
When I attempt to add a new puppet agent I specify the environment as 'destruct' in puppet.conf:
report = true
pluginsync = true
masterport = 8140
certname = clientname.domain
server = puppetserver1.domain
listen = true
environment = destruct
ca_server = foremanserver1.domain
However, in Foreman I see the Environment column empty for this (and any other client) I try to add. There are no errors on the puppet agent indicating it cannot find that environment.
I am able to manually assign a server to an environment after it is in Foreman and successfully run modules, but that is far from ideal.
Any ideas why client systems are not being automatically assigned to the correct environment?
It appears that this behavior is expected. If you are using Foreman only as a Puppet ENC, then when a new server is added via the puppet agent, Foreman does not auto-populate the Puppet environment from the 'environment' setting in puppet.conf, as I had expected.
It looks like the best approach is to create the hosts in Foreman first (via the API or Web UI); the Puppet environment in Foreman is then applied correctly.
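For illustration only, a host can be pre-created with its environment through the Foreman API; the hostname, credentials, and environment ID below are placeholders, and the exact parameters depend on the Foreman version:
curl -k -u admin:changeme -H "Content-Type: application/json" \
  -X POST https://foremanserver1.domain/api/hosts \
  -d '{"host": {"name": "clientname.domain", "environment_id": 2}}'
The idea is that when the agent then checks in with a matching certname, the ENC already knows the host and reports the environment you set.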