Puppet unable to find environment

I have a simple Puppet environment, just started with one master and one agent.
I am getting the following error when I run puppet module list from my agent. When I run puppet agent -t, it never even reaches my site.pp and test.pp.
I am not sure if I am missing anything in the Puppet configurations.
puppet module list
/usr/lib/ruby/site_ruby/1.8/puppet/environments.rb:38:in `get!': Could not find a directory environment named 'test' anywhere in the path: /etc/puppet/environments. Does the directory exist? (Puppet::Environments::EnvironmentNotFound)
from /usr/lib/ruby/site_ruby/1.8/puppet/application.rb:365:in `run'
from /usr/lib/ruby/site_ruby/1.8/puppet/util/command_line.rb:146:in `run'
from /usr/lib/ruby/site_ruby/1.8/puppet/util/command_line.rb:92:in `execute'
from /usr/bin/puppet:8
Here is my Puppet master's puppet.conf:
[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl
dns_alt_names = cssdb-poc-01.cisco.com cssdb-poc-01
[master]
server = cssdb-poc-01.cisco.com
certname = cssdb-poc-01.cisco.com
dns_alt_names = cssdb-poc-01.cisco.com cssdb-poc-01
environmentpath = /etc/puppet/environments
environment = test
[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig
Here is the directory structure on puppet master.
[root@cssdb-poc-01 test]# tree /etc/puppet/environments/
/etc/puppet/environments/
├── example_env
│   ├── manifests
│   ├── modules
│   └── README.environment
├── production
└── test
    ├── environment.conf
    ├── manifests
    │   └── site.pp
    └── modules
        └── cassandra
            ├── manifests
            └── test.pp
Here is my puppet agent's puppet.conf:
cat /etc/puppet/puppet.conf
[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl
[main]
server=cssdb-poc-01.cisco.com
environmentpath = /etc/puppet/environments
environment = test
[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig

The issue was with my environment.conf file.
[root@cssdb-poc-01 templates]# cat /tmp/environment.conf
modulepath = /etc/puppet/environments/test/modules:$basemodulepath
manifest = manifests
I removed it from the environment directory and things started working; not puppet module list, but puppet agent -t does work now.
@Frank, you are right: puppet module list will not work on agent nodes.
Thanks for your help.
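For reference, environment.conf also supports paths relative to the environment's own directory; a minimal sketch of that form (directory names taken from the tree above):
# /etc/puppet/environments/test/environment.conf
# relative paths resolve against the environment's own directory
modulepath = modules:$basemodulepath
manifest = manifests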

Custom modules will not show up in puppet module list output. That command lists modules with metadata, typically installed from the Forge using puppet module install.
On the agent, it is normal to have no local environments to search for modules (or to install them into).

Related

Reusing Terraform modules without exposing any variables

Consider the following folder structure:
.
├── network-module/
│   ├── main.tf
│   └── variables.tf
├── dev.tfvars
├── prod.tfvars
├── main.tf
└── variables.tf
This is a simple Terraform configuration running under a GitLab pipeline.
network-module contains some variables for the network settings that change depending on the environment (dev, prod, etc.) we deploy to.
The main module has an environment variable that can be used to set the target environment.
What I want to achieve is to hide the variables that the network module needs from the parent module, so that users only need to specify the environment name and can omit the network configuration for the target environment altogether.
Using -var-file when running plan or apply works, but to do that I need to include all the variables the submodule needs in the parent module's variable file.
Basically, I don't want all the variables exposed to the outside world.
One option that comes to mind is to run some scripts inside the pipeline and change the contents of the configuration through string manipulation, but that feels wrong.
Do I have any other options?
Sure, just set your per-environment configuration in the root module.
locals {
  network_module_args = {
    dev = {
      some_arg = "arg in dev"
    }
    prod = {
      some_arg = "arg in prod"
    }
  }
}

module "network_module" {
  # local modules need a ./ path prefix
  source = "./network-module"

  # pick the per-environment value; var.environment selects the map entry
  some_arg = lookup(local.network_module_args[var.environment], "some_arg", "")
}
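The root module then only needs to expose the environment name. A minimal sketch (the variable name is assumed from the question's description):
variable "environment" {
  description = "Target environment (dev, prod, ...)"
  type        = string
}
Callers pass just -var environment=dev (or a tfvars file with only that value); the network settings stay internal to the root module.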

How to install Elasticsearch in a custom path?

I need to install Elasticsearch to the path /opt/elasticsearch/ with everything under that path.
That is, I need the config path under /opt too.
https://www.elastic.co/guide/en/elasticsearch/reference/7.15/settings.html says I can set the ES_PATH_CONF and ES_HOME env vars to change the installation and config paths, but it doesn't work.
rpm --install elasticsearch-7.15.0-x86_64.rpm --prefix=/opt/elasticsearch/ is not what I need and doesn't change the config path.
It puts the home directory in /opt/elasticsearch, but I get the following structure and the paths don't change. It still expects the executables in /usr/share/elasticsearch/bin/:
el6:~ # tree /opt/elasticsearch/ -d -L 3
/opt/elasticsearch/
├── lib
│   ├── sysctl.d
│   ├── systemd
│   │   └── system
│   └── tmpfiles.d
└── share
    └── elasticsearch
        ├── bin
        ├── jdk
        ├── lib
        ├── modules
        └── plugins
but I need:
el5:~ # tree /opt/elasticsearch/ -d -L 1
/opt/elasticsearch/
├── bin
├── config
├── data
├── jdk
├── lib
├── logs
├── modules
└── plugins
With a manual installation:
mkdir /opt/elasticsearch/ && tar -xzf elasticsearch-7.15.0-linux-x86_64.tar.gz -C /opt/elasticsearch/ --strip-components 1
I get the structure I need. I made a systemd service:
[Unit]
Description=Elasticsearch
Documentation=https://www.elastic.co
Wants=network-online.target
After=network-online.target
[Service]
Type=notify
RuntimeDirectory=elasticsearch
PrivateTmp=true
Environment=ES_HOME=/opt/elasticsearch
Environment=ES_PATH_CONF=/opt/elasticsearch/config
Environment=PID_DIR=/var/run/elasticsearch
Environment=ES_SD_NOTIFY=true
EnvironmentFile=-/etc/sysconfig/elasticsearch
WorkingDirectory=/opt/elasticsearch
User=elasticsearch
Group=elasticsearch
ExecStart=/opt/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet
# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit
# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65535
# Specifies the maximum number of processes
LimitNPROC=4096
# Specifies the maximum size of virtual memory
LimitAS=infinity
# Specifies the maximum file size
LimitFSIZE=infinity
# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0
# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM
# Send the signal only to the JVM rather than its control group
KillMode=process
# Java process is never killed
SendSIGKILL=no
# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143
# Allow a slow startup before the systemd notifier module kicks in to extend the timeout
TimeoutStartSec=5000
[Install]
WantedBy=multi-user.target
But it doesn't start, doesn't crash, and doesn't write any logs in journalctl.
How can I install Elasticsearch in /opt with its configs there as well?
You could install Elasticsearch to /opt/elasticsearch using your rpm command, then move the config files from their default location to your location of choice, and finally change the ES_PATH_CONF and ES_HOME env vars to their respective new paths.
When using the "manual" installation method (downloading the .tar.gz) you have the freedom to put the files where you want. wget returns 404 because the file/URL does not exist. wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.15.0-linux-x86_64.tar.gz should be the correct one (you're missing -linux).
The only way to do that is to download the tar.gz into your directory, then manually add all the environment variables and build and manage your own init script.
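Putting the pieces together, a sketch of the manual installation (version 7.15.0 as in the question):
# download and unpack directly under /opt
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.15.0-linux-x86_64.tar.gz
mkdir -p /opt/elasticsearch
tar -xzf elasticsearch-7.15.0-linux-x86_64.tar.gz -C /opt/elasticsearch --strip-components 1
# point Elasticsearch at the new home and config locations
export ES_HOME=/opt/elasticsearch
export ES_PATH_CONF=/opt/elasticsearch/config
The systemd unit from the question then supplies the same two variables via its Environment= lines.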

terraform init not working when specifying modules

I am new to Terraform and trying to fix a small issue I am facing when testing modules.
Below is the folder structure on my local computer.
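Roughly, based on the module paths in the code below:
.
├── main_code
│   └── main.tf
└── storage
    └── main.tf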
I have the below code at the storage folder level:
#-------storage/main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "my-first-terraform-bucket" {
  bucket        = "first-terraform-bucket"
  acl           = "private"
  force_destroy = true
}
And below is the snippet from the main_code level referencing the storage module:
#-------main_code/main.tf
module "storage" {
  source = "../storage"
}
When I issue terraform init / plan / apply from the storage folder it works absolutely fine and Terraform creates the S3 bucket.
But when I try the same from the main_code folder I get the below error:
main_code#DFW11-8041WL3: terraform init
Initializing modules...
- module.storage
Error downloading modules: Error loading modules: module storage: No Terraform configuration files found in directory: .terraform/modules/0d1a7f4efdea90caaf99886fa2f65e95
I have read many issue boards on Stack Overflow and other GitHub issue forums, but they did not help resolve this. Not sure what I am missing!
Just update the existing modules by running terraform get --update. If this does not work, delete the .terraform folder.
I agree with the comment from @rclement.
There are several ways to troubleshoot Terraform issues:
Clean the .terraform folder and rerun terraform init.
This is always the first choice, but it takes time: the next run of terraform init installs all providers and modules again.
If you don't want to clean .terraform, to save deployment time you can run terraform get --update=true.
The most common case is that you changed something in the modules and they need to be refreshed.
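A short sketch of both approaches:
# refresh module sources without wiping cached providers
terraform get --update=true
# or start clean; providers and modules are re-downloaded on the next init
rm -rf .terraform
terraform init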
I had a similar issue, but the problem for me was that the module I had created was looking for providers.tf, so I had to add one for the module as well, and it worked.
├── main.tf
├── modules
│   └── droplets
│       ├── main.tf
│       ├── providers.tf
│       └── variables.tf
└── variables.tf
My provider configuration was previously present only at the root location, which the modules could not use; that was the issue for me.
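For illustration, the module's providers.tf might look like this (a sketch reusing the AWS provider from the question above; mirror whatever provider your module actually uses):
# modules/droplets/providers.tf
provider "aws" {
  region = "us-east-1"
}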

Puppet Enterprise error while running "puppet agent -t" command, unable to get User/Group data from Hiera

I have Puppet Enterprise installed on my VM, running in VirtualBox.
The installation went fine, but when I try to run the command puppet agent -t I get the following error:
[root@puppetmaster ~]# puppet agent -t
Info: Using configured environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Function Call, Could not find data item role in any Hiera data file and no default supplied at /etc/puppetlabs/code/environments/production/manifests/site.pp:32:10 on node puppetmaster.localdomain
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Here is the part of my site.pp file where the error is coming from:
## site.pp ##
# This file (/etc/puppetlabs/puppet/manifests/site.pp) is the main entry point
# used when an agent connects to a master and asks for an updated configuration.
#
# Global objects like filebuckets and resource defaults should go in this file,
# as should the default node definition. (The default node can be omitted
# if you use the console and don't define any other nodes in site.pp. See
# http://docs.puppetlabs.com/guides/language_guide.html#nodes for more on
# node definitions.)
## Active Configurations ##
# Disable filebucket by default for all File resources:
#http://docs.puppetlabs.com/pe/latest/release_notes.html#filebucket-resource-no-longer-created-by-default
File { backup => false }
# DEFAULT NODE
# Node definitions in this file are merged with node data from the console. See
# http://docs.puppetlabs.com/guides/language_guide.html#nodes for more on
# node definitions.
# The default node definition matches any node lacking a more specific node
# definition. If there are no other nodes in this file, classes declared here
# will be included in every node's catalog, *in addition* to any classes
# specified in the console for that node.
node default {
  # This is where you can declare classes for all nodes.
  # Example:
  #   class { 'my_class': }
  $role = hiera('role')
  $location = hiera('location')
  notify { "in the top level site.pp : role is '${role}', location is '${location}'": }
  include "::roles::${role}"
}
If you look at the error, it can't find the hiera key that you've asked for in your site.pp:
Could not find data item role in any Hiera data file and no default supplied at /etc/puppetlabs/code/environments/production/manifests/site.pp:32:10 on node puppetmaster.localdomain
In your code, you have the following:
$role = hiera('role')
$location = hiera('location')
Both of these are Hiera calls, which require that Hiera is set up and that the relevant key exists in your hieradata.
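For example, the lookups would succeed with a hieradata file along these lines (a sketch with hypothetical values; the exact path depends on your hiera.yaml hierarchy):
# /etc/puppetlabs/code/environments/production/hieradata/common.yaml
---
role: 'webserver'
location: 'datacenter1'
Alternatively, hiera() accepts a default as its second argument, e.g. hiera('role', 'base'), which avoids the hard failure.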
A useful tool to help you diagnose Hiera issues is hiera_explain, which shows you how your Hiera hierarchy is set up and configured, and might help explain what the issue is with your code.

Puppet classes with environment directories

I am new to Puppet and would like to avoid some of the common issues that I see, and to get away from using import statements since they are deprecated. I am starting with the very simple task of creating a class that copies a file to a single Puppet agent.
So I have this on the master:
/etc/puppet/environments/production
/etc/puppet/environments/production/modules
/etc/puppet/environments/production/manifests
/etc/puppet/environments/production/files
I am trying to create node definitions in a file called nodes.pp in the manifests directory and use a class I have defined (the class is test_monitor) in a module called test:
node /^web\d+.*.net/ {
  include test_monitor
}
However, when I run puppet agent -t on the agent I get:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class test_monitor for server on node server
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
What is the proper way to configure this? I would like to have node definitions in a file or files which have access to the classes I build in custom modules.
Here is my puppet.conf:
[main]
environmentpath = $confdir/environments
default_manifest = $confdir/environments/production/manifests
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
factpath=$vardir/lib/facter
[master]
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
I know this is probably something simple that I am not doing correctly or have misconfigured, but I can't seem to get it to work. Any help is appreciated!! To be clear, I am just trying to keep things clean, with classes in separate files and specific node types also in their own files. I have a small-to-medium-sized environment (approx. 150 servers in a data center).
Let me guess: maybe the test module has the wrong structure. You need these subfolders and files under the modules folder:
└── test
    ├── files
    ├── manifests
    │   ├── init.pp
    │   └── monitor.pp
    └── tests
        └── init.pp
I recommend changing test_monitor to test::monitor; that makes more sense to me. If you need to use test_monitor, you need a test_monitor module or a test_monitor.pp file.
node /^web\d+.*.net/ {
  include test::monitor
}
Then put the monitor tasks in the monitor.pp file, as sketched below.
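For example, a minimal monitor.pp that copies a file (a sketch; the file path and source are hypothetical):
# modules/test/manifests/monitor.pp
class test::monitor {
  file { '/etc/monitor.conf':
    ensure => file,
    source => 'puppet:///modules/test/monitor.conf',
  }
}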
And the fix was as simple as adding the proper module path to puppet.conf:
basemodulepath = $confdir/environments/production/modules
