Puppet Enterprise: could not find terminus console for indirection node - puppet

I'm new to Puppet Enterprise and I've run into an issue compiling the catalog for a simple agent node. The master is running on an RHEL 6 box, and the agent is running on a CentOS 6.5 box launched via Vagrant from the master. The issue occurs when I run the following from the agent VM:
bash-4.1$ sudo puppet agent --waitforcert 60 --test --certname agent.example.com
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Error 400 on SERVER: **Could not find terminus console for indirection node**
Info: Retrieving plugin
Info: Loading facts in /var/opt/puppet/lib/facter/maven_version.rb
Info: Loading facts in /var/opt/puppet/lib/facter/facter_dot_d.rb
Info: Loading facts in /var/opt/puppet/lib/facter/root_home.rb
Info: Loading facts in /var/opt/puppet/lib/facter/jenkins.rb
Info: Loading facts in /var/opt/puppet/lib/facter/pe_version.rb
Info: Loading facts in /var/opt/puppet/lib/facter/puppet_vardir.rb
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed when searching for node agent.example.com: **Could not find terminus console for indirection node**
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
The 'puppet.conf' file for the Puppet Master (3.4.3 - Puppet Enterprise 3.2.3) is as follows:
[main]
vardir = /var/opt/lib/pe-puppet
logdir = /var/log/pe-puppet
rundir = /var/run/pe-puppet
ssldir = /etc/puppetlabs/puppet/ssl
user = pe-puppet
group = pe-puppet
[master]
certname = puppetmaster.example.com
reports = puppetdb
node_terminus = plain
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
storeconfigs_backend = puppetdb
storeconfigs = true
The 'puppet.conf' for the Puppet Agent (version 3.3.1) is as follows:
[main]
vardir = /var/opt/puppet
logdir = /var/log/puppet
rundir = /var/run/puppet
archive_files = true
archive_file_server = puppet
ssldir = $vardir/ssl
[agent]
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
server = puppetmaster.example.com
certname = agent.example.com
environment = production
The certificates seem in order. From the puppet master:
[mark@puppetmaster puppetlabs]$ sudo puppet cert list agent.example.com
+ "agent.example.com" (SHA256) blah
Finally, the 'site.pp' is trivial:
node default {
# This is where you can declare classes for all nodes.
# Example:
# class { 'my_class': }
}
# The agent node placeholder
node 'agent.example.com' {
# tba
}
From reading up on the catalog compilation steps, I would have thought that since I set the terminus to 'plain', the puppet master would simply retrieve the node object from the site.pp manifest; however, it seems to be looking for the console node terminus...
Any thoughts or insight would be appreciated.

I'm currently beating my head against the wall with a similar problem. While I haven't found the solution to my particular version of this issue, I've found some good pointers in my research that have worked for others.
Make sure you're running the correct version of Ruby (i.e. 1.8.x, not 1.9+)
EDIT: Apparently this bug has been fixed. Ruby versions up to 2.1 are generally supported.
Make sure the puppetdb-terminus package is installed on your Puppet Master
Is the markup in routes.yaml (if you have that file) correct? See the sketch below for what it typically looks like.
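For reference, a routes.yaml wired up for PuppetDB usually looks something like this minimal sketch (this shows the common PuppetDB fact routing; adjust the termini to your own setup):
master:
  facts:
    terminus: puppetdb
    cache: yaml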

I had a similar problem trying to get environments working.
Commenting out the node_terminus setting will make the master default to node definitions in site.pp, which is what you want, right?
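That is, in the [master] section of the puppet.conf shown above:
[master]
# node_terminus = plain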

Related

Eclipse Che 7 Node.js Error: This workspace is using old definition format

I am new to using Eclipse Che. After trying (and failing, see the error at the end) to install it on a local Kubernetes cluster according to this article, I ended up running it locally using Docker according to the documentation at https://www.eclipse.org/che/docs/che-6/docker-single-user.html. Even though this is the documentation for version 6, it seems to start version 7 just fine.
It starts up normally (warnings don't matter AFAIK):
WARN: Bound 'eclipse/che' to 'eclipse/che:7.0.0-beta-5.0'
INFO: Proxy: HTTP_PROXY=gateway.docker.internal:3128, HTTPS_PROXY=gateway.docker.internal:3129, NO_PROXY=
WARN: Potential networking issue discovered!
WARN: We have identified that http and https proxies are set but no_proxy is not. This may cause fatal networking errors. Set no_proxy for your Docker daemon!
INFO: (che cli): 7.0.0-beta-5.0 - using docker 18.06.1-ce / docker4mac
WARN: Newer version 'rc' available
INFO: (che init): Installing configuration and bootstrap variables:
INFO: (che init): CHE_HOST=192.168.65.3
INFO: (che init): CHE_VERSION=7.0.0-beta-5.0
INFO: (che init): CHE_CONFIG=~/che
INFO: (che init): CHE_INSTANCE=~/che/instance
INFO: (che config): Generating che configuration...
INFO: (che config): Customizing docker-compose for running in a container
INFO: (che start): Preflight checks
mem (1.5 GiB): [OK]
disk (100 MB): [OK]
port 8080 (http): [AVAILABLE]
conn (browser => ws): [OK]
conn (server => ws): [OK]
INFO: (che start): Starting containers...
INFO: (che start): Services booting...
INFO: (che start): Server logs at "docker logs -f che"
INFO: (che start): Booted and reachable
INFO: (che start): Ver: 7.0.0-beta-5.0
INFO: (che start): Use: http://localhost:8080
INFO: (che start): API: http://localhost:8080/swagger
I get the workspace setup screen and select the Node.js stack. The stack is created just fine and the workspace is running. However, I am then stuck: I cannot create a new project or import one. If I go to the workspace configuration, the top bar shows the error from the title: "This workspace is using old definition format".
The IDE shows "There are no projects", even though projects are shown when listing them from the workspace overview.
I tried looking in the documentation, but since the link points to the docs for version 6, it does not mention anything about updating the workspace definition. I also tried deleting and re-creating the workspace and I tried creating a project from a template (nodejs-hello-world and web-nodejs-simple).
Is there anyone who has the same problem or has already solved it? There should be a way to use old workspace definitions. I guess my next step is to downgrade to version 6 or to follow the installation steps for version 7, which use chectl.
PS: For completeness' sake, here is the error I ran into when following the manual for installing Eclipse Che 6 using Docker for Mac:
helm upgrade --install che --namespace che --set cheImage=eclipse/che-server:6.19.5 --set global.cheWorkspacesNamespace="che" --set global.ingressDomain=${CHE_DOMAIN}.nip.io ./
Release "che" does not exist. Installing it now.
Error: validation failed: error validating "": error validating data:
[unknown object type "nil" in ConfigMap.data.CHE_LOGGER_CONFIG,
unknown object type "nil" in ConfigMap.data.CHE_OAUTH_GITHUB_CLIENTID,
unknown object type "nil" in ConfigMap.data.CHE_OAUTH_GITHUB_CLIENTSECRET,
unknown object type "nil" in ConfigMap.data.CHE_WORKSPACE_HTTPS__PROXY,
unknown object type "nil" in ConfigMap.data.CHE_WORKSPACE_HTTP__PROXY,
unknown object type "nil" in ConfigMap.data.CHE_WORKSPACE_NO__PROXY]
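The validation failure suggests the chart is rendering a ConfigMap with nil values for those keys. One way to see which chart values need to be set (the exact value names depend entirely on the chart's values.yaml, so this is only a starting point) is to inspect the chart's defaults before installing:
helm inspect values ./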
UPDATE 1: Added screenshots of the projects and project view. Also tried downgrading to 6.19.0, with the same result. Of course I also checked the documentation for Che 7, but it does not mention updating workspace definitions either.
UPDATE 2: Using chectl according to the quick-starts guide did not help, since I ran into an issue when starting up the pods. I reported the issue to the chectl team and hope to be able to help them resolve it.

Puppet can't deactivate nodes

I'm using Puppet with PuppetDb. The two are connected and I can see PuppetDb update whenever I add or update a node.
But when I try to deactivate a node with puppet node deactivate nodeName I get back:
Warning: Error connecting to puppetdb on 8081 at route /pdb/cmd/v1?checksum=36a4313be5bac718badc45495f0266bf87c7a806&version=3&certname=v-hub-1.5659710c-33d5-45f2-a477-6ccf1357e1ac.local.dockerapp.io&command=deactivate_node, error message received was 'SSL_connect SYSCALL returned=5 errno=0 state=unknown state'. Failing over to the next PuppetDB server_url in the 'server_urls' list
Error: Failed to execute '/pdb/cmd/v1?checksum=36a4313be5bac718badc45495f0266bf87c7a806&version=3&certname=v-hub-1.5659710c-33d5-45f2-a477-6ccf1357e1ac.local.dockerapp.io&command=deactivate_node' on at least 1 of the following 'server_urls': https://puppetdb:8081
Error: undefined method `[]' for #<Puppet::Util::Log:0x00000003a15178>
Error: Try 'puppet help node deactivate' for usage
Error: undefined method `[]' for #<Puppet::Util::Log:0x00000003a15178>
Error: Try 'puppet help node deactivate' for usage
Any suggestions on how to debug this? I've tried deleting and regenerating the certificate with puppet cert generate puppetdb. As mentioned, creating or updating nodes in PuppetDB works without any problem.
Puppetserver version: 2.7.2
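One way to debug the handshake itself is to hit PuppetDB's SSL port directly using the master's own certificates; a sketch, assuming default ssldir paths (adjust for your layout):
openssl s_client -connect puppetdb:8081 \
  -cert /etc/puppetlabs/puppet/ssl/certs/$(puppet config print certname).pem \
  -key /etc/puppetlabs/puppet/ssl/private_keys/$(puppet config print certname).pem \
  -CAfile /etc/puppetlabs/puppet/ssl/certs/ca.pem
A clean handshake prints the server's certificate chain and ends with 'Verify return code: 0 (ok)'; an immediate disconnect matches the SYSCALL error above and usually points at a certificate mismatch on the PuppetDB side.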

Puppet master and client on one machine

I would like to test the puppet client on the same machine where the master resides. I followed this tutorial: http://www.elsotanillo.net/2011/08/installing-puppet-master-and-client-in-the-same-host-the-debian-way/. He says that generating the SSL certificates at the right moment is the trick to keeping master and client communicating successfully on one machine. I killed the puppet master process, generated the puppet.conf file as given in that link, and installed the puppet client, but when I try to generate the SSL certificate using the command below, it fails. You can see the log below.
puppetd --no-daemonize --onetime --verbose --waitforcert 30
I replaced puppetd with puppet agent to make it work with the latest version of Puppet, as shown below.
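That is, the command actually run was:
puppet agent --no-daemonize --onetime --verbose --waitforcert 30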
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Connection timed out - connect(2)
Info: Retrieving pluginfacts
Error: /File[/home/lhdadmin/.puppet/var/facts.d]: Failed to generate additional resources using 'eval_generate': Connection timed out - connect(2)
Error: /File[/home/lhdadmin/.puppet/var/facts.d]: Could not evaluate: Could not retrieve file metadata for puppet://puppet/pluginfacts: Connection timed out - connect(2)
Info: Retrieving plugin
Error: /File[/home/lhdadmin/.puppet/var/lib]: Failed to generate additional resources using 'eval_generate': Connection timed out - connect(2)
Error: /File[/home/lhdadmin/.puppet/var/lib]: Could not evaluate: Could not retrieve file metadata for puppet://puppet/plugins: Connection timed out - connect(2)
I tried to install puppetdb, thinking a missing component could be triggering the above error, but apt couldn't locate a puppetdb package to install. See the errors below:
sudo puppet resource package puppetdb ensure=latest
Error: Could not update: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install puppetdb' returned 100: Reading package lists... Building dependency tree... Reading state information... E: Unable to locate package puppetdb
Error: /Package[puppetdb]/ensure: change from purged to latest failed: Could not update: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install puppetdb' returned 100: Reading package lists... Building dependency tree... Reading state information... E: Unable to locate package puppetdb
package { 'puppetdb': ensure => 'purged', }
Aah, I think you have not mentioned your puppet class in init.pp or have not defined your node in node.pp.
If you don't want to use puppetdb, then don't include it in your puppet.conf file; if you do want to use it, then cross-check the database by logging in manually as the user mentioned in puppet.conf:
storeconfigs = true
dbname = puppet-db
dbadapter = mysql
dbuser = puppet-user
dbpassword = puppet
dbserver = localhost
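With the settings above, that manual check would look something like this (credentials taken from the sample config; substitute your own):
mysql -h localhost -u puppet-user -p puppet-db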
Also check for the proper repo in /etc/apt/sources.list. The error 'E: Unable to locate package puppetdb' generally occurs when the repo is missing from your sources, internet connectivity has failed, or the package server is unreachable; see the sketch below for adding the repo.
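A minimal sketch for adding the Puppet Labs apt repo, assuming Debian wheezy (substitute your release's codename):
wget https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb
sudo dpkg -i puppetlabs-release-wheezy.deb
sudo apt-get update
sudo apt-get install puppetdb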

Error 400 on puppet SERVER

On agent node:
root@agent2-VirtualBox:/var/lib/puppet# puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find default node or by name with 'agent2-virtualbox.servicemesh.com, agent2-virtualbox.servicemesh, agent2-virtualbox, agent2-VirtualBox.servicemesh.com, agent2-VirtualBox.servicemesh, agent2-VirtualBox' on node agent2-virtualbox.servicemesh.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
root@agent2-VirtualBox:/var/lib/puppet#
On the puppet master side:
root@puppetmaster:~# puppet cert sign --all
Error: No waiting certificate requests to sign
root@puppetmaster:~#
How to resolve this?
You need to check your site.pp and make sure you either have a default node definition or a node definition matching the FQDN of your agent.
https://docs.puppet.com/puppet/latest/reference/lang_node_definitions.html
for example:
node 'agent2-virtualbox.servicemesh.com' {
include ntp
}

DataStax Agent error

While adding an existing cluster in OpsCenter, I receive an error:
ERROR: Agent for XXX.XXX.XXX.XXX was unable to complete operation (http://XXX.XXX.XXX.XXX:61621/snapshots/pit/properties?): java.lang.IllegalArgumentException: No implementation of method: :make-reader of protocol: #'clojure.java.io/IOFactory found for class: nil
On agent there is an error:
java.lang.IllegalArgumentException: No implementation of method: :make-reader of protocol: #'clojure.java.io/IOFactory found for class: nil
at clojure.core$_cache_protocol_fn.invoke(core_deftype.clj:541)
at clojure.java.io$fn__8551$G__8546__8558.invoke(io.clj:73)
at clojure.java.io$reader.doInvoke(io.clj:106)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at clojure.lang.AFn.applyToHelper(AFn.java:161)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invoke(core.clj:619)
at clojure.core$slurp.doInvoke(core.clj:6278)
at clojure.lang.RestFn.invoke(RestFn.java:410)
at opsagent.backups.pit$read_properties.invoke(pit.clj:68)
at opsagent.backups.pit$enabled_QMARK_.invoke(pit.clj:106)
at clojure.core$eval37.invoke(NO_SOURCE_FILE:107)
at clojure.lang.Compiler.eval(Compiler.java:6619)
at clojure.lang.Compiler.eval(Compiler.java:6609)
at clojure.lang.Compiler.eval(Compiler.java:6582)
at clojure.core$eval.invoke(core.clj:2852)
at opsagent.opsagent$post_interface_startup.doInvoke(opsagent.clj:102)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at opsagent.conf$handle_new_conf.invoke(conf.clj:198)
at opsagent.messaging$message_callback$fn__12316.invoke(messaging.clj:52)
at opsagent.messaging.proxy$java.lang.Object$StompConnection$Listener$7f16bc72.onMessage(Unknown Source)
at org.jgroups.client.StompConnection.notifyListeners(StompConnection.java:324)
at org.jgroups.client.StompConnection.run(StompConnection.java:274)
at java.lang.Thread.run(Thread.java:745)
And cluster creation fails. I also get this error during startup. I tried reinstalling the agent, but it doesn't help.
DataStax Agent version: 5.1.0
OpsCenter version 5.1.0
root@node1:~# java -version
java version "1.7.0_75"
OpenJDK Runtime Environment (IcedTea 2.5.4) (7u75-2.5.4-1~deb7u1)
OpenJDK 64-Bit Server VM (build 24.75-b04, mixed mode)
root@node1:~#
Content of address.yaml
stomp_interface: "YYY.YYY.YYY.YYY"
Content of opscenterd.conf
[webserver]
port = 8888
interface = 0.0.0.0
use_ssl = false
[logging]
level = INFO
<cluster name>.conf is absent, because the cluster has not been added.
The problem the agent is having is finding your installation of DSE on that node. When it can't find DSE, it can't get the archiving properties file to update, and it errors out.
This error message is unfortunately terribly unhelpful. I've created a ticket to fix the error message (it's unfortunately private, but you can use this ticket number when discussing the issue with DataStax: OPSC-4826).
As a workaround, try setting cassandra_install_location in your address.yaml file on that node; see the example below. After adjusting address.yaml, bounce the agent and retry the operation.
You can find a document listing this and more address.yaml config items here: http://www.datastax.com/documentation/opscenter/5.1/opsc/configure/agentAddressConfiguration.html
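For example, the node's address.yaml might end up looking like this (the install path below is a guess; point it at wherever DSE actually lives on that node):
stomp_interface: "YYY.YYY.YYY.YYY"
cassandra_install_location: /usr/share/dse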
I think the issue will be with your Java installation. I believe you'll need Oracle Java, not OpenJDK.
This worked for me:
ubuntu:~$ sudo add-apt-repository ppa:webupd8team/java
ubuntu:~$ sudo apt-get update && sudo apt-get install oracle-java7-installer oracle-java7-set-default
