Puppet classes with environment directories

I am new to Puppet and would like to avoid some of the common issues that I see, and to get away from using import statements since they are deprecated. I am starting with the very simple task of creating a class that copies a file to a single Puppet agent.
So I have this on the master:
/etc/puppet/environments/production
/etc/puppet/environments/production/modules
/etc/puppet/environments/production/manifests
/etc/puppet/environments/production/files
I am trying to create node definitions in a file called nodes.pp in the manifests directory, using a class that I have defined (the class is test_monitor) in a module called test:
node /^web\d+.*.net/ {
  include test_monitor
}
However, when I run puppet agent -t on the agent I get:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class test_monitor for server on node server
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
What is the proper way to configure this to work? I would like to have node definitions in a file or files which have access to the classes I build in custom modules.
Here is my puppet.conf:
[main]
environmentpath = $confdir/environments
default_manifest = $confdir/environments/production/manifests
vardir=/var/lib/puppet
ssldir=/var/lib/puppet/ssl
factpath=$vardir/lib/facter
[master]
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
I know this is probably something simple that I am not doing correctly or have misconfigured, but I can't seem to get it to work. Any help is appreciated! To be clear, I am just trying to keep things clean: classes in separate files, with specific node types also in their own files. I have a small-to-medium-sized environment (approx. 150 servers in a data center).

Let me guess: maybe the test module has the wrong structure. You need some subfolders and files under the modules folder:
└── test
    ├── files
    ├── manifests
    │   ├── init.pp
    │   └── monitor.pp
    └── tests
        └── init.pp
I recommend changing test_monitor to test::monitor; that makes sense to me. If you need to use test_monitor, you need a test_monitor module or a test_monitor.pp file.
node /^web\d+.*.net/ {
  include test::monitor
}
Then put the monitor tasks in the monitor.pp file.
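For illustration, modules/test/manifests/monitor.pp could then look like this (the managed file path and source name are my assumptions, not from the question):

```puppet
# modules/test/manifests/monitor.pp
class test::monitor {
  # Copy a file shipped in the module's files/ directory to the agent.
  # 'puppet:///modules/test/monitor.conf' resolves to modules/test/files/monitor.conf.
  file { '/etc/monitor.conf':
    ensure => file,
    source => 'puppet:///modules/test/monitor.conf',
  }
}
```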

And that was as simple as adding the proper module path to puppet.conf:
basemodulepath = $confdir/environments/production/modules

Related

Vendored Chaincode has false dependencies

I have chaincode with the following directory structure
$GOPATH/myproject/chaincode/mycc/go
├── mycc.go
├── chaincode
│   └── chaincode.go
└── vendor
    ├── github.com
    ├── ...
Because of my usage of Hyperledger's cid package, I use vendoring and have the vendor directory next to the chaincode. Now, for testability, mycc.go only includes the main function:
package main

import (
    "myproject/chaincode/mycc/go/chaincode"

    "github.com/hyperledger/fabric/core/chaincode/shim"
)

// logger was undefined in the original snippet; shim.NewLogger is one way to define it.
var logger = shim.NewLogger("mycc")

func main() {
    err := shim.Start(new(chaincode.MyChaincode))
    if err != nil {
        logger.Error(err.Error())
    }
}
The chaincode.go implements the rest of the chaincode, including the MyChaincode struct with Init, Invoke, etc. The relevant imports are identical to the ones in mycc.go:
"github.com/hyperledger/fabric/core/chaincode/shim"
During the instantiation of the chaincode, something with the dependencies seems to be mixed up, because I receive the error message:
*chaincode.MyChaincode does not implement "chaincode/mycc/go/vendor/github.com/hyperledger/fabric/core/chaincode/shim".Chaincode (wrong type for Init method)
have Init("chaincode/mycc/go/vendor/myproject/chaincode/mycc/go/vendor/github.com/hyperledger/fabric/core/chaincode/shim".ChaincodeStubInterface) "chaincode/approvalcc/go/vendor/ma/chaincode/approvalcc/go/vendor/github.com/hyperledger/fabric/protos/peer".Response
want Init("chaincode/mycc/go/vendor/github.com/hyperledger/fabric/core/chaincode/shim".ChaincodeStubInterface) "chaincode/mycc/go/vendor/github.com/hyperledger/fabric/protos/peer".Response
So clearly the import in the inner chaincode package is resolved wrongly, with the vendor directory appearing twice in the path.
The fabric-ccenv container which builds chaincode attempts to be "helpful" by including shim in the GOPATH inside the container. It also ends up including the shim/ext/... folders, but unfortunately does not properly include their transitive dependencies.
When you combine this with how the chaincode install/package commands also attempt to be helpful, and with your attempt to vendor, things get ugly.
I actually just pushed a fix targeted for 1.4.2 to address the fabric-ccenv issue.
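Until that fix lands, a common workaround (my suggestion, not something confirmed in this thread) is to prune the vendored copy of the shim before packaging, so only the copy provided inside fabric-ccenv's GOPATH is resolved:

```shell
# CC_DIR stands in for your chaincode source directory (hypothetical path).
CC_DIR="${CC_DIR:-/tmp/mycc-demo/go}"
# Remove the vendored shim; fabric-ccenv supplies its own copy in its GOPATH.
rm -rf "$CC_DIR/vendor/github.com/hyperledger/fabric/core/chaincode/shim"
echo "vendored shim pruned from $CC_DIR"
```

After pruning, re-run `peer chaincode package`/`install` so the packaged source no longer carries the duplicate import path.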
It seems like your chaincode is not initialized properly, so please check whether the chaincode is installed and instantiated correctly. You can check that by looking for the instantiated chaincode's Docker container.

terraform init not working when specifying modules

I am new to Terraform and trying to fix a small issue I am facing when testing modules.
Below is the folder structure I have on my local computer.
I have the below code at the storage folder level:
#-------storage/main.tf
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "my-first-terraform-bucket" {
  bucket        = "first-terraform-bucket"
  acl           = "private"
  force_destroy = true
}
And the below snippet is from the main_code level, referencing the storage module:
#-------main_code/main.tf
module "storage" {
  source = "../storage"
}
When I issue terraform init / plan / apply from the storage folder it works absolutely fine, and Terraform creates the S3 bucket.
But when I try the same from the main_code folder I get the below error:
main_code#DFW11-8041WL3: terraform init
Initializing modules...
- module.storage
Error downloading modules: Error loading modules: module storage: No Terraform configuration files found in directory: .terraform/modules/0d1a7f4efdea90caaf99886fa2f65e95
I have read many issue boards on Stack Overflow and other GitHub issue forums, but that did not help resolve this. Not sure what I am missing!
Just update the existing modules by running terraform get --update. If that does not work, delete the .terraform folder.
I agree with the comments from @rclement.
There are several ways to troubleshoot Terraform issues:
Clean the .terraform folder and rerun terraform init.
This is always the first choice, but it takes time: the next terraform init starts installing all providers and modules again.
If you don't want to clean .terraform, to save deployment time you can run terraform get --update=true.
In most cases you made some changes in the modules and they need to be refreshed.
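The two options above can be sketched as shell commands (the mkdir only simulates a stale cache for illustration; the terraform invocations are shown commented since they need the CLI on PATH):

```shell
# Stand-in for the stale module cache described above (main_code is the directory from the post).
mkdir -p main_code/.terraform/modules
# Option 1: full clean; the next `terraform init` re-downloads providers and re-links ../storage.
rm -rf main_code/.terraform
# Option 2 (faster, keeps providers):
# cd main_code && terraform get --update=true
echo "cache cleaned"
```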
I had a similar issue, but the problem for me was that the module I created was looking for a providers.tf, so I had to add one for the modules as well and it worked.
├── main.tf
├── modules
│   └── droplets
│       ├── main.tf
│       ├── providers.tf
│       └── variables.tf
└── variables.tf
My provider configuration was previously only present at the root level, which the modules could not use; that was the issue for me.
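A minimal sketch of what such a modules/droplets/providers.tf can contain (the provider choice and version constraint are my assumptions based on the "droplets" name; the post only shows the file layout):

```hcl
# modules/droplets/providers.tf — illustrative content, not from the post
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}
```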

How to get JSF working with spring-boot app in docker container?

I have an app built with Spring Boot 1.4.0 and JSF 2.2 (Mojarra 2.2.13) running on embedded Tomcat (8.5).
When I run the app locally it works fine, regardless of whether I start it from my IDE, with mvn spring-boot:run, or with java -jar target/myapp.jar.
However, when I start the app inside a Docker container (using Docker 1.12.1), everything works except the JSF pages. For example, the spring-boot-actuator pages like /health work just fine, but when trying to access /home.xhtml (a JSF page) I get:
Whitelabel Error Page
This application has no explicit mapping for /error, so you are seeing this as a fallback.
Wed Oct 12 09:33:33 GMT 2016
There was an unexpected error (type=Not Found, status=404).
/home.xhtml Not Found in ExternalContext as a Resource
I use io.fabric8:docker-maven-plugin to build the Docker image (based on Alpine with openjdk-8-jre), which includes the build artifact under /maven inside the container, and java -jar /maven/${project.artifactId}-${project.version}.jar to start the app.
I am using port 9000 instead of the standard tomcat port (8080) and this is also exposed by the docker image.
I already compared both the local jar and the one included in the docker container and they both have the same content.
Any help would be appreciated.
Thanks in advance
The XHTML files and the rest of the JSF files were placed in the wrong directory.
At least when using <packaging>jar</packaging> it appears that all JSF stuff must be placed under src/main/resources/META-INF instead of src/main/webapp where most IDEs will automatically place it.
When the JSF files are in src/main/webapp they do not get included (or end up in the wrong place) in the spring-boot-repackaged jar file.
My src/main/resources looks like this now:
src
└── main
    └── resources
        ├── application.yml
        └── META-INF
            ├── faces-config.xml
            └── resources
                ├── administration
                │   └── userlist.xhtml
                ├── css
                │   └── default.css
                ├── images
                │   └── logo.png
                └── home.xhtml
I've had exactly the same issue. Running the Spring Boot application directly from Maven or from a packaged WAR file worked fine, but running it from a Docker container led to the same problem.
Actually, the problem for me was that accessing dynamic resources (JSPs or the like) always led to an attempt to show the error page, which could not be found, so a 404 was returned; static content was returned correctly.
It sounds silly, but it turned out that it matters whether you name the packaged WAR file ".war" or ".jar" when you start it. I used a Docker example for Spring Boot which (for some reason) names the WAR file "app.jar". If you do that, you run into this issue in the Docker container. If you name it "app.war" instead, it works fine.
This has nothing to do with Docker, by the way. The same happens if you try to start the application directly on the host when it has a .jar extension (unless you start it in a location where the needed resources are lying around in an extracted manner).
Starting my application with
java -Duser.timezone=UTC -Djava.security.egd=file:/dev/./urandom -jar application.jar
leads to the error. Renaming from .jar to .war and using
java -Duser.timezone=UTC -Djava.security.egd=file:/dev/./urandom -jar application.war
made it work.
What worked for me was copying the webapp folder to META-INF by adding a resources entry:
<resource>
    <directory>src/main/webapp</directory>
    <targetPath>META-INF/resources</targetPath>
</resource>
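For context, that resource entry sits under build/resources in the pom.xml (a sketch using standard Maven POM elements; note that declaring resources explicitly replaces the defaults, so src/main/resources must be listed as well):

```xml
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
    </resource>
    <resource>
      <directory>src/main/webapp</directory>
      <targetPath>META-INF/resources</targetPath>
    </resource>
  </resources>
</build>
```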
and then changing the documentRoot to the path where it is available in the Docker container:

@Bean
public WebServerFactoryCustomizer<ConfigurableServletWebServerFactory> webServerFactoryCustomizer() {
    return factory -> factory.setDocumentRoot(new File("/workspace/META-INF/resources"));
}
Now I can use JSF pages while using spring-boot:build-image and jar packaging.
PS: I still had a bunch of exceptions, which were gone after upgrading Spring Boot and Liquibase to the newest versions, but I think that does not belong to this topic.

Puppet client error when using puppet agent -t

I'm trying to use Puppet to manage my servers.
I'm using open source Puppet.
I have downloaded a 'docker' module (garethr/docker) and I have it in the module list on my master:
/home/skahrz/.puppet/modules
├── garethr-docker (v4.1.1)
├── puppetlabs-apt (v2.2.0)
├── puppetlabs-stdlib (v4.9.0)
└── stahnma-epel (v1.1.1)
I have the following site.pp (I'm actually learning step by step, not using environments for the moment):
node 'my.host.com' {
  include 'docker'

  docker::image { 'hello-world':
    image_tag => 'precise',
  }
}
When I run puppet agent -t on my client node, I get this error output:
Info: Retrieving plugin
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class docker for my.host.com on node my.host.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
So I tried to follow this link:
https://docs.puppetlabs.com/pe/latest/console_accessing.html
I can telnet to my master from the nodes.
But I don't have the console since I use the open source version ... so I'm a bit stuck.
Can anybody help me with this error?
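One detail worth noting from the module list above (my reading, not confirmed in the thread): the modules live under /home/skahrz/.puppet/modules, a per-user path, while the master only autoloads classes from its configured modulepath. Making the master search that path could look like this (a sketch; the exact setting depends on your Puppet version):

```ini
# /etc/puppet/puppet.conf (sketch)
[master]
modulepath = /etc/puppet/modules:/home/skahrz/.puppet/modules
```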

Puppet unable to find environment

I have a simple Puppet environment, just starting out with one master and one agent.
I am getting the following error when I run puppet module list from my agent. When I run puppet agent -t it does not even reach my site.pp and test.pp.
I am not sure if I am missing anything in the Puppet configuration.
puppet module list
/usr/lib/ruby/site_ruby/1.8/puppet/environments.rb:38:in `get!': Could not find a directory environment named 'test' anywhere in the path: /etc/puppet/environments. Does the directory exist? (Puppet::Environments::EnvironmentNotFound)
from /usr/lib/ruby/site_ruby/1.8/puppet/application.rb:365:in `run'
from /usr/lib/ruby/site_ruby/1.8/puppet/util/command_line.rb:146:in `run'
from /usr/lib/ruby/site_ruby/1.8/puppet/util/command_line.rb:92:in `execute'
from /usr/bin/puppet:8
Here is my Puppet master puppet.conf
[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl
dns_alt_names = cssdb-poc-01.cisco.com, cssdb-poc-01
[master]
server = cssdb-poc-01.cisco.com
certname = cssdb-poc-01.cisco.com
dns_alt_names = cssdb-poc-01.cisco.com, cssdb-poc-01
environmentpath = /etc/puppet/environments
environment = test
[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig
Here is the directory structure on puppet master.
[root@cssdb-poc-01 test]# tree /etc/puppet/environments/
/etc/puppet/environments/
├── example_env
│   ├── manifests
│   ├── modules
│   └── README.environment
├── production
└── test
    ├── environment.conf
    ├── manifests
    │   └── site.pp
    └── modules
        └── cassandra
            ├── manifests
            └── test.pp
Here is my Puppet agent's puppet.conf:
cat /etc/puppet/puppet.conf
[main]
# The Puppet log directory.
# The default value is '$vardir/log'.
logdir = /var/log/puppet
# Where Puppet PID files are kept.
# The default value is '$vardir/run'.
rundir = /var/run/puppet
# Where SSL certificates are kept.
# The default value is '$confdir/ssl'.
ssldir = $vardir/ssl
[main]
server=cssdb-poc-01.cisco.com
environmentpath = /etc/puppet/environments
environment = test
[agent]
# The file in which puppetd stores a list of the classes
# associated with the retrieved configuration. Can be loaded in
# the separate ``puppet`` executable using the ``--loadclasses``
# option.
# The default value is '$confdir/classes.txt'.
classfile = $vardir/classes.txt
# Where puppetd caches the local configuration. An
# extension indicating the cache format is added automatically.
# The default value is '$confdir/localconfig'.
localconfig = $vardir/localconfig
The issue was with my environment.conf file.
[root@cssdb-poc-01 templates]# cat /tmp/environment.conf
modulepath = /etc/puppet/environments/test/modules:$basemodulepath
manifest = manifests
I removed it from the environment directory and it started working: not puppet module list, but puppet agent -t.
@Frank, you are right: puppet module list will not work on agent nodes.
Thanks for your help.
Custom modules will not show up in the puppet module list output. It lists modules with metadata, typically installed from the Forge using puppet module install.
On the agent, it is normal to have no local environments to search for modules (or install them).