Terragrunt Best practice: dependencies between modules - terraform

I have a Terragrunt project laid out like this:
├── common_vars.hcl
├── envs
│   ├── dev
│   │   ├── env_vars.hcl
│   │   ├── rds-aurora
│   │   │   └── terragrunt.hcl
│   │   ├── rds-sg
│   │   │   └── terragrunt.hcl
│   │   └── vpc
│   │   └── terragrunt.hcl
│   └── prod
│   ├── env_vars.hcl
│   ├── rds-sg
│   │   └── terragrunt.hcl
│   └── vpc
│   └── terragrunt.hcl
├── modules
│   ├── aws-data
│   │   ├── main.tf
│   │   └── outputs.tf
│   ├── rds-aurora
│   │   └── main.tf
│   ├── rds-sg
│   │   └── main.tf
│   └── vpc
│   └── main.tf
└── terragrunt.hcl
The rds-sg module is the security group, and it depends on the vpc module.
The terragrunt.hcl under dev and prod has the same code, like this:
terraform {
  source = format("%s/modules//%s", get_parent_terragrunt_dir(), path_relative_to_include())
}

include {
  path = find_in_parent_folders()
}

dependencies {
  paths = ["../vpc"] # not DRY
}

dependency "vpc" {
  config_path = "../vpc" # not DRY
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id # if something changes or we need more inputs
}
As described in the comments, some of this code is not DRY. If I want to change something, like switching to another VPC or adding more inputs, I have to modify this file everywhere.
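One way to make this DRYer (a sketch, not the only option: the vpc_path local and its placement in env_vars.hcl are assumptions, while read_terragrunt_config and find_in_parent_folders are built-in Terragrunt functions) is to define the dependency path once per environment, e.g. locals { vpc_path = "../vpc" } in env_vars.hcl, and read it from each child terragrunt.hcl:

```hcl
# envs/dev/rds-sg/terragrunt.hcl (sketch) -- the dependency path now lives in
# env_vars.hcl, so switching to another VPC is a one-line change per environment.
locals {
  env = read_terragrunt_config(find_in_parent_folders("env_vars.hcl"))
}

dependency "vpc" {
  config_path = local.env.locals.vpc_path
}

inputs = {
  vpc_id = dependency.vpc.outputs.vpc_id
}
```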
So I want something like this in main.tf under modules:
module "rds-sg" {
  source      = "terraform-aws-modules/security-group/aws//modules/mysql"
  name        = "${var.name_prefix}-db-sg"
  description = "Security group for mysql 3306 port open within VPC"
  vpc_id      = ""
  # I want something like
  # vpc_id = dependency.vpc.outputs.vpc_id
}
Is that possible? Or is there a better practice to solve this problem?
Thanks very much.
Maybe using terraform_remote_state could fix this problem. Any better ideas?
This comment may explain this problem better.
https://github.com/gruntwork-io/terragrunt/issues/759#issuecomment-687610130

I would use data sources to read the IDs of any resources.
Add this to the module that uses the VPC ID:
data "aws_vpc" "this" {
  filter {
    name   = "tag:Name"
    values = [var.name]
  }
}
...
vpc_id = data.aws_vpc.this.id
This way you are making sure to read from the AWS API, not from a state file, and the lookup is also validated at plan time.
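Put together, the consuming module could look like the sketch below. The variable names vpc_name and name_prefix are hypothetical, and it assumes the VPC module tags its VPC with a unique Name:

```hcl
# modules/rds-sg/main.tf (sketch): look the VPC up by tag instead of passing
# state outputs between Terragrunt modules.
variable "vpc_name" {
  type = string
}

variable "name_prefix" {
  type = string
}

data "aws_vpc" "this" {
  filter {
    name   = "tag:Name"
    values = [var.vpc_name]
  }
}

module "rds-sg" {
  source = "terraform-aws-modules/security-group/aws//modules/mysql"

  name        = "${var.name_prefix}-db-sg"
  description = "Security group for mysql 3306 port open within VPC"
  vpc_id      = data.aws_vpc.this.id
}
```

If the named VPC does not exist, the data source fails at plan time, which surfaces wiring mistakes early.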

Related

I'd like to solve django webserver collectstatic error in ubuntu server

I try to collect static files using 'python manage.py collectstatic', but it's not working.
I run my Django project on Ubuntu 20.04.4, using Nginx as the web server and Gunicorn as the WSGI server.
Here is my error log,
You have requested to collect static files at the destination
location as specified in your settings.
This will overwrite existing files!
Are you sure you want to do this?
Type 'yes' to continue, or 'no' to cancel: yes
Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    main()
  File "manage.py", line 18, in main
    execute_from_command_line(sys.argv)
  File "/home/devadmin/venvs/bio_platform/lib/python3.8/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
    utility.execute()
  File "/home/devadmin/venvs/bio_platform/lib/python3.8/site-packages/django/core/management/__init__.py", line 440, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/devadmin/venvs/bio_platform/lib/python3.8/site-packages/django/core/management/base.py", line 414, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/home/devadmin/venvs/bio_platform/lib/python3.8/site-packages/django/core/management/base.py", line 460, in execute
    output = self.handle(*args, **options)
  File "/home/devadmin/venvs/bio_platform/lib/python3.8/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 209, in handle
    collected = self.collect()
  File "/home/devadmin/venvs/bio_platform/lib/python3.8/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 135, in collect
    handler(path, prefixed_path, storage)
  File "/home/devadmin/venvs/bio_platform/lib/python3.8/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 368, in copy_file
    if not self.delete_file(path, prefixed_path, source_storage):
  File "/home/devadmin/venvs/bio_platform/lib/python3.8/site-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 278, in delete_file
    if self.storage.exists(prefixed_path):
  File "/home/devadmin/venvs/bio_platform/lib/python3.8/site-packages/django/core/files/storage.py", line 362, in exists
    return os.path.lexists(self.path(name))
  File "/home/devadmin/venvs/bio_platform/lib/python3.8/site-packages/django/contrib/staticfiles/storage.py", line 39, in path
    raise ImproperlyConfigured(
django.core.exceptions.ImproperlyConfigured: You're using the staticfiles app without having set the STATIC_ROOT setting to a filesystem path.
I found many solutions on Google, and I'm guessing it's because I didn't set STATIC_ROOT properly.
Here is my project directory structure and code below:
└── bio_platform
├── common
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── __init__.cpython-310.pyc
│   │   ├── __init__.cpython-38.pyc
│   │   ├── __init__.cpython-39.pyc
│   │   ├── admin.cpython-310.pyc
│   │   ├── admin.cpython-38.pyc
│   │   ├── admin.cpython-39.pyc
│   │   ├── apps.cpython-310.pyc
│   │   ├── apps.cpython-38.pyc
│   │   ├── apps.cpython-39.pyc
│   │   ├── forms.cpython-310.pyc
│   │   ├── forms.cpython-38.pyc
│   │   ├── forms.cpython-39.pyc
│   │   ├── models.cpython-310.pyc
│   │   ├── models.cpython-38.pyc
│   │   ├── models.cpython-39.pyc
│   │   ├── urls.cpython-310.pyc
│   │   ├── urls.cpython-38.pyc
│   │   ├── urls.cpython-39.pyc
│   │   ├── views.cpython-310.pyc
│   │   ├── views.cpython-38.pyc
│   │   └── views.cpython-39.pyc
│   ├── admin.py
│   ├── apps.py
│   ├── forms.py
│   ├── migrations
│   │   ├── __init__.py
│   │   └── __pycache__
│   ├── models.py
│   ├── tests.py
│   ├── urls.py
│   └── views.py
├── config
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── __init__.cpython-310.pyc
│   │   ├── __init__.cpython-38.pyc
│   │   ├── __init__.cpython-39.pyc
│   │   ├── settings.cpython-310.pyc
│   │   ├── settings.cpython-38.pyc
│   │   ├── settings.cpython-39.pyc
│   │   ├── urls.cpython-310.pyc
│   │   ├── urls.cpython-38.pyc
│   │   ├── urls.cpython-39.pyc
│   │   ├── wsgi.cpython-310.pyc
│   │   ├── wsgi.cpython-38.pyc
│   │   └── wsgi.cpython-39.pyc
│   ├── asgi.py
│   ├── settings
│   │   ├── __pycache__
│   │   ├── base.py
│   │   ├── local.py
│   │   └── prod.py
│   ├── urls.py
│   └── wsgi.py
├── db.sqlite3
├── manage.py
├── node_modules
│   └── bootstrap
│   ├── LICENSE
│   ├── README.md
│   ├── dist
│   ├── js
│   ├── package.json
│   └── scss
├── package-lock.json
├── pybo
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── __init__.cpython-310.pyc
│   │   ├── __init__.cpython-38.pyc
│   │   ├── __init__.cpython-39.pyc
│   │   ├── admin.cpython-310.pyc
│   │   ├── admin.cpython-38.pyc
│   │   ├── admin.cpython-39.pyc
│   │   ├── apps.cpython-310.pyc
│   │   ├── apps.cpython-38.pyc
│   │   ├── apps.cpython-39.pyc
│   │   ├── forms.cpython-310.pyc
│   │   ├── forms.cpython-38.pyc
│   │   ├── forms.cpython-39.pyc
│   │   ├── models.cpython-310.pyc
│   │   ├── models.cpython-38.pyc
│   │   ├── models.cpython-39.pyc
│   │   ├── urls.cpython-310.pyc
│   │   ├── urls.cpython-38.pyc
│   │   ├── urls.cpython-39.pyc
│   │   └── views.cpython-310.pyc
│   ├── admin.py
│   ├── apps.py
│   ├── forms.py
│   ├── migrations
│   │   ├── 0001_initial.py
│   │   ├── 0002_question_author.py
│   │   ├── 0003_answer_author.py
│   │   ├── 0004_answer_modify_date_question_modify_date.py
│   │   ├── 0005_comment.py
│   │   ├── 0006_answer_voter_question_voter_alter_answer_author_and_more.py
│   │   ├── 0007_auto_20220411_1325.py
│   │   ├── __init__.py
│   │   └── __pycache__
│   ├── models.py
│   ├── templatetags
│   │   ├── __pycache__
│   │   └── pybo_filter.py
│   ├── tests.py
│   ├── urls.py
│   └── views
│   ├── __pycache__
│   ├── answer_views.py
│   ├── base_views.py
│   ├── comment_views.py
│   ├── question_views.py
│   └── vote_views.py
├── static
│   ├── bootstrap.min.css
│   ├── bootstrap.min.js
│   ├── jquery-3.6.0.min.js
│   └── style.css
├── templates
│   ├── base.html
│   ├── common
│   │   ├── login.html
│   │   └── signup.html
│   ├── form_errors.html
│   ├── navbar.html
│   └── pybo
│   ├── answer_form.html
│   ├── comment_form.html
│   ├── question_detail.html
│   ├── question_form.html
│   └── question_list.html
└── winehq.key
As you may notice, I split my settings files like this
because I'd like to run the server development environment and the local development environment separately.
├── settings
│   │   ├── __pycache__
│   │   ├── base.py
│   │   ├── local.py
│   │   └── prod.py
And here is the full code of my settings files
(I blanked out the private parts, such as IP addresses).
base.py
"""
Django settings for config project.
Generated by 'django-admin startproject' using Django 4.0.3.
For more information on this file, see
https://docs.djangoproject.com/en/4.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/4.0/ref/settings/
"""
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/4.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'blank' #private part
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ['blank'] #private part
# Application definition
INSTALLED_APPS = [
    'common.apps.CommonConfig',
    'pybo.apps.PyboConfig',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

ROOT_URLCONF = 'config.urls'

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [BASE_DIR / 'templates'],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

WSGI_APPLICATION = 'config.wsgi.application'

# Database
# https://docs.djangoproject.com/en/4.0/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': BASE_DIR / 'db.sqlite3',
    }
}

# Password validation
# https://docs.djangoproject.com/en/4.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]

# Internationalization
# https://docs.djangoproject.com/en/4.0/topics/i18n/
LANGUAGE_CODE = 'ko-kr'
TIME_ZONE = 'Asia/Seoul'
USE_I18N = True
USE_TZ = True

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/4.0/howto/static-files/
STATIC_URL = 'static/'
#STATIC_ROOT = os.path.join(BASE_DIR,'static')
STATICFILES_DIRS = [
    BASE_DIR / 'static',
]
# Default primary key field type
# https://docs.djangoproject.com/en/4.0/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
LOGIN_REDIRECT_URL = '/'
LOGOUT_REDIRECT_URL = '/'
local.py
from .base import *
ALLOWED_HOSTS = []
prod.py
from .base import *
import os
ALLOWED_HOSTS = ['blank'] #private part
STATIC_URL = 'static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
#STATIC_ROOT = BASE_DIR / 'static/'
STATICFILES_DIRS = []
# concept addon
Also, here is my bio_platform.service for the Nginx server.
bio_platform.service
server {
    listen 80;
    server_name blank; #private part

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static {
        alias /home/devadmin/projects/bio_platform/static;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/tmp/gunicorn.sock;
    }
}
I tried some of the solutions given on Google and Stack Overflow, such as:
What I tried:
set STATIC_ROOT and STATIC_URL in prod.py
set STATIC_ROOT and STATIC_URL in base.py
But as you know, it didn't work, and the django.core.exceptions.ImproperlyConfigured: You're using the staticfiles app without having set the STATIC_ROOT setting to a filesystem path. error still remains.
For now, I'm not sure where I should set STATIC_ROOT, STATIC_URL, or STATICFILES_DIRS.
Please help me, guys...
I found what was wrong in my project.
The problem was caused by the path setting that loads the prod.py settings automatically from the venvs folder.
I made a bio_platform.sh file under the path
~/venvs/
before
#!/bin/bash
cd ~/projects/bio_paltform
export DJANGO_SETTINGS_MODULE=config.settings.prod
. ~/venvs/bio_paltform/bin/activate
As you can see, I misspelled it as bio_paltform.
So I changed bio_paltform to bio_platform.
after
#!/bin/bash
cd ~/projects/bio_platform
export DJANGO_SETTINGS_MODULE=config.settings.prod
. ~/venvs/bio_platform/bin/activate
After I modified this file, the python manage.py collectstatic command works.
$ python manage.py collectstatic
You have requested to collect static files at the destination
location as specified in your settings:
/home/devadmin/projects/bio_platform/static
This will overwrite existing files!
Are you sure you want to do this?
Type 'yes' to continue, or 'no' to cancel: yes
128 static files copied to '/home/devadmin/projects/bio_platform/static'.

How to restrict a module/service from executing in terraform

I have multiple environments that get provisioned using Terraform. My question: how do I restrict a particular module from executing in a particular environment?
E.g., I need to deploy an instance service and its associated module only in production, not in lower environments.
I have the structure below, with services, and each service calls modules.
Structure:
├── instances
│   ├── config.tfplan
│   ├── data.tf
│   ├── main.tf
│   └── variables.tf
├── modules
│   ├── buckets
│   │   ├── buckets.tf
│   │   ├── outputs.tf
│   │   └── variables.tf
│   ├── compartments
│   │   ├── compartments.tf
│   │   ├── output.tf
│   │   └── variables.tf
│   ├── iam
│   │   ├── groups.tf
│   │   ├── outputs.tf
│   │   ├── policies.tf
│   │   ├── users.tf
│   │   └── variables.tf
│   ├── instances
│   │   ├── instance.tf
│   │   ├── output.tf
│   │   └── variables.tf
This was not possible in older versions of Terraform, but since version 0.13 you can use for_each on a module block. Please find here an example of how I am using for_each:
Define the variable:
variable "cloudwatch_event" {
  description = "Map of cloudwatch event configuration"
  type        = map(any)
  default = {
    default = {}
  }
}
Define the module:
module "cloudwatch_event" {
  source   = "git::git@github.com:tomarv2/terraform-aws-cloudwatch-event.git?ref=v0.0.4"
  for_each = var.cloudwatch_event

  description         = lookup(each.value, "description", null)
  custom_input        = lookup(each.value, "custom_input", null)
  suffix              = lookup(each.value, "suffix", "rule")
  schedule            = lookup(each.value, "schedule", null)
  deploy_event_rule   = var.deploy_cloudwatch_event_trigger
  deploy_event_target = var.deploy_cloudwatch_event_trigger
  target_arn          = join("", aws_lambda_function.lambda.*.arn)

  #-----------------------------------------------
  # Note: Do not change teamid and prjid once set.
  teamid = var.teamid
  prjid  = var.prjid
}
As you can see, if cloudwatch_event is empty, this module will not get deployed.
Please check the module repo located here: https://github.com/tomarv2/terraform-aws-lambda
Terraform documentation for your reference: https://www.terraform.io/docs/language/meta-arguments/for_each.html
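The pattern above can be reduced to a minimal sketch (the variable and module names here are hypothetical, and module-level for_each assumes Terraform 0.13 or newer): the module is driven by a map variable, so an empty map in lower environments means no instances of the module are created:

```hcl
variable "instances" {
  description = "Map of instance configurations; leave empty to skip the module entirely"
  type        = map(any)
  default     = {}
}

module "instances" {
  source   = "./modules/instances"
  for_each = var.instances

  # Hypothetical inputs -- one module instance is created per map key.
  name  = each.key
  shape = lookup(each.value, "shape", null)
}
```

A production tfvars file would then set instances = { app1 = {} } (or richer per-instance settings), while dev/test tfvars simply leave the variable unset.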

Azure WebApps React: the environment variables exist but are not present when I call process.env

Prerequisite
This is my first time using React/Node.js/Azure App Service. I usually deploy apps using Flask/Jinja2/Gunicorn.
The use case
I would like to use the environment variables stored in the Configuration of my App Service on Azure
Unfortunately, process.env shows only 3 environment variables (NODE_ENV, PUBLIC_URL, and FAST_REFRESH) instead of several dozen.
The partial content of the Azure App Service appsettings
[
  {
    "name": "REACT_APP_APIKEY",
    "value": "some key",
    "slotSetting": false
  },
  {
    "name": "REACT_APP_APPID",
    "value": "an app id",
    "slotSetting": false
  },
  {
    "name": "REACT_APP_AUTHDOMAIN",
    "value": "an auth domain",
    "slotSetting": false
  },
  {
    "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
    "value": "something",
    "slotSetting": false
  },
  {
    "name": "APPLICATIONINSIGHTS_CONNECTION_STRING",
    "value": "something else",
    "slotSetting": false
  },
  {
    "name": "ApplicationInsightsAgent_EXTENSION_VERSION",
    "value": "some alphanumeric value",
    "slotSetting": false
  },
  {
    "name": "KUDU_EXTENSION_VERSION",
    "value": "78.11002.3584",
    "slotSetting": false
  }
]
The CI/CD process
I am using Azure DevOps to build and deploy the app on Azure.
The process runs npm install and npm run build before generating the zip file containing the build (see the directory tree list here below)
How do I Run the App?
The startup command contains npx serve -l 8080 .
The Issue
I display the environment variables with
console.log('process.env', process.env);
The content of the process.env is
{
  "NODE_ENV": "production",
  "PUBLIC_URL": "",
  "FAST_REFRESH": true
}
The weird part
I use SSH on Azure and I run
printenv | grep APPINS and the result is
APPSETTING_APPINSIGHTS_INSTRUMENTATIONKEY=something
APPINSIGHTS_INSTRUMENTATIONKEY=something
printenv | grep APPLICATION and the result is
APPSETTING_APPLICATIONINSIGHTS_CONNECTION_STRING=something else
APPLICATIONINSIGHTS_CONNECTION_STRING=something else
Misc
Directory Tree list
.
├── asset-manifest.json
├── favicon.ico
├── images
│   ├── app
│   │   └── home_page-ott-overthetop-platform.png
│   ├── films
│   │   ├── children
│   │   │   ├── despicable-me
│   │   │   │   ├── large.jpg
│   │   │   │   └── small.jpg
│   ├── icons
│   │   ├── add.png
│   ├── misc
│   │   ├── home-bg.jpg
│   ├── series
│   │   ├── children
│   │   │   ├── arthur
│   │   │   │   ├── large.jpg
│   │   │   │   └── small.jpg
│   └── users
│   ├── 1.png
├── index.html
├── static
│   ├── css
│   │   ├── 2.679831fc.chunk.css
│   │   └── 2.679831fc.chunk.css.map
│   ├── js
│   │   ├── 2.60c35184.chunk.js
│   │   ├── 2.60c35184.chunk.js.LICENSE.txt
│   │   ├── 2.60c35184.chunk.js.map
│   │   ├── main.80f5c16d.chunk.js
│   │   ├── main.80f5c16d.chunk.js.map
│   │   ├── runtime-main.917a28e7.js
│   │   └── runtime-main.917a28e7.js.map
│   └── media
│   └── logo.623fc416.svg
└── videos
└── bunny.mp4
74 directories, 148 files
When you run your application locally, you can use a .env file to configure your environment variables, in name=value format (without quotes).
Here is a sample:
REACT_APP_APIKEY=REACT_APP_APIKEY
REACT_APP_APPID=REACT_APP_APPID
REACT_APP_AUTHDOMAIN=REACT_APP_AUTHDOMAIN
When I call console.log('process.env', process.env); in the index.js file, it works well.
After configuring the app settings in the portal, deploy the Node.js web app to Azure.
You can view log output (calls to console.log) from the app directly in the VS Code output window. Just right-click the app node and choose Start Streaming Logs in the AZURE APP SERVICE explorer.
It shows the environment variables on portal Application settings:
By the way:
If you are new to using Node with an Azure web app, you could have a look at this: https://learn.microsoft.com/en-us/azure/app-service/quickstart-nodejs?pivots=platform-windows
About how to use a .env file, see this: https://holycoders.com/node-js-environment-variable/

Source Terraform Module From Sub-Directories In a BitBucket Repository With A different branch

I have a Terraform modules repository with different sets of modules, in the structure below.
BitBucket repository (URL: git@bitbucket.org:/{repoName}.git?ref=develop)
└── modules
├── s3
│   ├── locals.tf
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
└── tfstate
└── main.tf
develop is the branch that I want to use, which I have given in the source URL. I am calling the module repository as shown below:
├── examples
│   ├── gce-nextgen-dev.tfvars
│   └── main.tf
main.tf
module "name" {
  source      = "git@bitbucket.org:{url}/terraform-modules.git?ref=develop/"
  bucketName  = "terraformbucket"
  environment = "dev"
  tags        = map("ExtraTag", "ExtraTagValue")
}
How can I call the modules from sub-directories in a BitBucket repository?
It works if I remove ref=develop from the URL and just give git@bitbucket.org:{url}/terraform-modules.git//modules//s3
But I don't want to use master; I want the develop branch in this case.
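For what it's worth, Terraform's generic-Git source syntax does support combining a sub-directory with a branch: the double-slash separator selects the sub-directory, the ?ref= query parameter (no trailing slash) selects the branch, and the git:: prefix forces the generic Git getter. A sketch, keeping the placeholder repository path and arguments from the question:

```hcl
module "s3" {
  source = "git::git@bitbucket.org:{url}/terraform-modules.git//modules/s3?ref=develop"

  bucketName  = "terraformbucket"
  environment = "dev"
  tags        = map("ExtraTag", "ExtraTagValue")
}
```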

Setting up environment directory - Getting Could not find default node or by name with XXX on environment other than "production"

I'm currently trying to configure directory environments to manage different clients.
Puppet master version: "puppet-server-3.8.1-1" (CentOS 6)
Here is my tree from the Puppet master's /etc/puppet:
├── organisation
│   ├── environment.conf
│   ├── manifests
│   │   ├── accounts.pp
│   │   ├── lab_accounts.pp
│   │   ├── lab_nodes.pp
│   │   └── nodes.pp
│   └── modules
│   ├── account
│   │   ├── files
│   │   ├── lib
│   │   ├── spec
│   │   │   └── classes
│   │   └── templates
│   └── dns
│   ├── manifests
│   │   └── init.pp
│   └── templates
│   ├── resolv.conf.erb
│   └── resolv.conf.fqdn.erb
├── production
│   ├── environment.conf
│   ├── manifests
│   │   ├── accounts.pp
│   │   ├── lab_accounts.pp
│   │   └── lab_nodes.pp
│   └── modules
│   ├── account
│   │   ├── CHANGELOG
│   │   ├── files
│   │   ├── lib
│   │   ├── LICENSE
│   │   ├── manifests
│   │   │   ├── init.pp
│   │   │   └── site.pp
│   │   ├── metadata.json
│   │   ├── Modulefile
│   │   ├── Rakefile
│   │   ├── README.mkd
│   │   ├── spec
│   │   │   ├── classes
│   │   │   ├── defines
│   │   │   │   └── account_spec.rb
│   │   │   └── spec_helper.rb
│   │   └── templates
│   ├── dns
│   │   ├── manifests
│   │   │   └── init.pp
│   │   └── templates
│   │   ├── resolv.conf.erb
│   │   └── resolv.conf.fqdn.erb
│   └── sshkeys
│   └── manifests
│   └── init.pp
└── README.md
Now the configuration files :
/etc/puppet/puppet.conf
[main]
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = $vardir/ssl
dns_alt_names = centos66a.local.lab,centos66a,puppet,puppetmaster
[master]
environmentpath = $confdir/environments
basemodulepath = $confdir/modules:/opt/puppet/share/puppet/modules
[agent]
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
server = puppet
Here is the environment I called "organisation" :
/etc/puppet/environments/organisation/environment.conf
modulepath = /etc/puppet/environments/organisation/modules
environment_timeout = 5s
Now I declare my nodes in "nodes.pp" :
/etc/puppet/environments/organisation/manifests/nodes.pp
node 'centos66a.local.lab' {
  include dns
}

node 'gcacnt02.local.lab' {
  include dns
}
Here is the output when I try to sync my node to the master :
gcacnt02:~ # hostname
gcacnt02.local.lab
gcacnt02:~ # puppet agent -t
Info: Creating a new SSL key for gcacnt02.local.lab
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for gcacnt02.local.lab
Info: Certificate Request fingerprint (SHA256): 49:73:11:78:99:6F:50:BD:6B:2F:5D:B9:92:7C:6F:A9:63:52:92:53:DB:B8:A1:AE:86:21:AF:36:BE:B0:94:DB
Info: Caching certificate for gcacnt02.local.lab
Info: Caching certificate for gcacnt02.local.lab
Info: Retrieving pluginfacts
Info: Retrieving plugin
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find default node or by name with 'gcacnt02.local.lab, gcacnt02.local, gcacnt02' on node gcacnt02.local.lab
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
If I move /etc/puppet/environments/organisation/manifests/nodes.pp to /etc/puppet/environments/production/manifests/nodes.pp, it works just fine.
When I print "manifest" for "organisation" and "production" I also get correct output:
[root#centos66a environments]# puppet config print manifest --section master --environment production
/etc/puppet/environments/production/manifests
[root#centos66a environments]# puppet config print manifest --section master --environment organisation
/etc/puppet/environments/organisation/manifests
I'm probably missing something here but can't put my finger on it...
Thank you
Problem resolved.
The configuration on the master is OK.
Since Puppet scans for directories in the environment path set by the "environmentpath" variable, I thought that the master would automatically reply to nodes set up in each environment. This is false.
The default environment is production.
If you set up any other environment, you have to configure each Puppet agent node to query that specific environment.
In my case, my node is gcacnt02.local.lab. So to fix the issue I had to add the following variable in /etc/puppet/puppet.conf
[main]
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = $vardir/ssl
[agent]
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
environment = lan
