Following is the error I get when I run puppet agent -t on the Puppet agent. It happens because the Puppet server tries to reach the v3 API of PuppetDB instead of v4, even though v3 is deprecated and ideally should not be called at all. I am not sure how to fix this.
All the configs are in place as defined here: http://jurjenbokma.com/ApprenticesNotes/ar27s05.xhtml
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Failed to submit 'replace facts' command for puppetmaster.test.org to PuppetDB at puppetmaster.test.org:8081: [404 ] <html><head><meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/><title>Error 404 </title></head><body><h2>HTTP ERROR: 404</h2><p>Problem accessing /v3/commands. Reason:<pre> Not Found</pre></p><hr /><i><small>Powered by Jetty://</small></i></body></html>
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
I was following a tutorial written for an older version, whereas the latest version (Puppet 4.x) needs different packages.
There is an interface (terminus) between the Puppet master and PuppetDB which is responsible for making the API calls to PuppetDB. The guide linked above asks you to install
sudo puppet resource package puppetdb-terminus ensure=latest
which uses the /v3 API of PuppetDB, whereas for the latest version we need to install
sudo puppet resource package puppetdb-termini ensure=latest
which uses the /v4 API of PuppetDB.
And the problem is solved!
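For reference, the master-side PuppetDB wiring that the termini package relies on looks roughly like this. This is a minimal sketch assuming the standard file locations from the PuppetDB docs and the hostname from the error above; adjust it for your own installation:
# /etc/puppetlabs/puppet/puppetdb.conf
[main]
server_urls = https://puppetmaster.test.org:8081
# /etc/puppetlabs/puppet/puppet.conf (master section)
[master]
storeconfigs = true
storeconfigs_backend = puppetdb
# /etc/puppetlabs/puppet/routes.yaml
master:
  facts:
    terminus: puppetdb
    cache: yaml
After changing any of these, restart the puppetserver service so the new terminus configuration is picked up.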
I'm learning Puppet now; everything is new to me. After installing a Puppet 7 server and agent on my two learning VMs --
192.168.160.131 puppet-mst.eisen #The puppet server
192.168.160.140 sles12.eisen #The puppet agent
And I've successfully signed the certificate of the node "sles12.eisen" on the server "puppet-mst.eisen" --
[root@puppet-mst manifests]# puppetserver --version
puppetserver version: 7.4.1
[root@puppet-mst manifests]# puppetserver ca list --all
Signed Certificates:
puppet-mst.eisen (SHA256) 0B:3F:DA:60:2F:2D:D3:91:94:58:E2:B6:32:28:50:8E:D4:1C:A0:8F:A0:CF:94:99:6E:EE:99:46:B4:1D:30:58 alt names: ["DNS:puppet-mst.eisen"] authorization extensions: [pp_cli_auth: true]
puppet-mst (SHA256) C8:89:47:D2:15:74:6E:49:E7:9A:27:B5:EA:10:9B:81:C4:DC:68:E8:B4:01:07:5D:63:34:5A:AF:B6:66:C9:EE alt names: ["DNS:puppet-mst"]
sles12.eisen (SHA256) C5:40:D7:8A:C6:64:BD:E8:BF:D3:BB:5D:01:24:66:03:57:96:84:31:84:42:DF:36:AA:D1:25:14:76:4D:A5:99 alt names: ["DNS:sles12.eisen"]
Then I wrote a test module -- filetest1 -- hoping it can put a file on the agent node in /tmp/puppettest --
[root@puppet-mst manifests]# cat /etc/puppetlabs/code/environments/production/modules/filetest1/manifests/init.pp
class filetest1 {
  file { '/tmp/puppettest/filetest1':
    ensure  => file,
    content => 'Hello World!',
  }
}
[root@puppet-mst manifests]# cat /etc/puppetlabs/code/environments/production/manifests/site.pp
node 'sles12.eisen' {
  include filetest1
}
But the "puppet agent --test" can't work, it's said it either server can't find agent node, or the test module's catalog is missing --
sles12:/tmp/puppettest # puppet --version
7.12.0
sles12:/tmp/puppettest # hostname -f
sles12.eisen
sles12:/tmp/puppettest # puppet agent --test --verbose
Info: Using environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Failed when searching for node sles12.eisen: Failed to find sles12.eisen via exec: Execution of '/etc/puppetlabs/puppet/node.rb sles12.eisen' returned 1:
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
I don't know what's wrong here. Please kindly help. Thanks.
Regards
Eisen
The error message suggests that you have configured Puppet to use an external node classifier (/etc/puppetlabs/puppet/node.rb), and either the attempt to execute it is failing altogether, or it is terminating with a failure status, or it is not outputting anything.
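You can see which of those it is by running the ENC by hand on the server. A quick check, assuming the path shown in the error message:
/etc/puppetlabs/puppet/node.rb sles12.eisen
echo $?
A working ENC prints a YAML hash describing the node (classes, parameters, environment) and exits 0; anything else explains the "returned 1" in your error.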
You may want to explore ENCs later, but now is probably not the time for that. To disable use of an ENC, edit /etc/puppetlabs/puppet/puppet.conf and either remove the node_terminus setting or change its value to plain.
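For illustration, the ENC wiring typically looks like this in /etc/puppetlabs/puppet/puppet.conf on the server (the section may be [server], or [master] or [main] in older setups):
[server]
node_terminus = exec
external_nodes = /etc/puppetlabs/puppet/node.rb
Removing those two settings (or setting node_terminus = plain) and restarting the puppetserver service should let the agent run retrieve a catalog again.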
I am referring to the following link (Installation Link) for installing Kubernetes on Ubuntu 18.04. I am getting the following errors when typing the command:
sudo kubeadm join 192.168.0.114:6443 --token qgce4f.tgzda1zemqnro1em --discovery-token-ca-cert-hash sha256:6ebc15a5a9818481f8a98af01a7a367ba93b2180babb954940edd8178548773a ignore-preflight-errors=All
W0303 18:33:39.565868 7098 join.go:185] [join] WARNING: More than one API server endpoint supplied on command line [192.168.0.114:6443 ignore-preflight-errors=All]. Using the first one.
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[preflight] Some fatal errors occurred:
[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Can someone please tell me how to fix this? Thanks!
Consider using the kubeadm reset command as described here:
The "reset" command executes the following phases:
preflight Run reset pre-flight checks
update-cluster-status Remove this node from the ClusterStatus object.
remove-etcd-member Remove a local etcd member.
cleanup-node Run cleanup node.
The fourth phase of this command should fix the 4 errors you mentioned:
A ) It will stop the kubelet service - so port 10250 will be released.
B ) It will delete contents of the following directories:
/etc/kubernetes/manifests
/etc/kubernetes/pki
C ) It will delete the following files:
/etc/kubernetes/admin.conf
/etc/kubernetes/kubelet.conf
/etc/kubernetes/bootstrap-kubelet.conf
/etc/kubernetes/controller-manager.conf
/etc/kubernetes/scheduler.conf
(*) Make sure you run the kubeadm join command with verbosity level of 5 and above (by appending the --v=5 flag).
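Put together, the sequence on the joining node would look roughly like this (token and hash copied from your command; note that --ignore-preflight-errors needs the leading --, otherwise kubeadm treats it as a second API server endpoint, which is exactly what the first warning in your output says):
sudo kubeadm reset
sudo kubeadm join 192.168.0.114:6443 --token qgce4f.tgzda1zemqnro1em --discovery-token-ca-cert-hash sha256:6ebc15a5a9818481f8a98af01a7a367ba93b2180babb954940edd8178548773a --v=5
After the reset, the --ignore-preflight-errors=All flag should no longer be necessary.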
Logstash 5.1.1 on Windows 7 32-bit; I am trying without success to install logstash-filter-elapsed. Below is the method I tried.
Before running the plugin installer, I run:
set DEBUG=1
to show debug output.
Then I run:
logstash-plugin install logstash-filter-elapsed
This fails both when I'm in my corporate network (with proxy) and when using a network without proxy, with the following error:
Looking if package named: logstash-filter-elapsed exists at https://artifacts.elastic.co/downloads/logstash-plugins/logstash-filter-elapsed/logstash-filter-elapsed-5.1.1.zip
Errno::ECONNREFUSED: Connection refused - Connection refused
initialize at org/jruby/ext/socket/RubyTCPSocket.java:126
open at org/jruby/RubyIO.java:1197
connect at C:/tools/logstash-5.1.1/vendor/jruby/lib/ruby/1.9/net/http.rb:763
timeout at org/jruby/ext/timeout/Timeout.java:98
connect at C:/tools/logstash-5.1.1/vendor/jruby/lib/ruby/1.9/net/http.rb:763
do_start at C:/tools/logstash-5.1.1/vendor/jruby/lib/ruby/1.9/net/http.rb:756
start at C:/tools/logstash-5.1.1/vendor/jruby/lib/ruby/1.9/net/http.rb:745
start at C:/tools/logstash-5.1.1/vendor/jruby/lib/ruby/1.9/net/http.rb:557
start at C:/tools/logstash-5.1.1/lib/pluginmanager/utils/http_client.rb:13
remote_file_exist? at C:/tools/logstash-5.1.1/lib/pluginmanager/utils/http_client.rb:23
get_installer_for at C:/tools/logstash-5.1.1/lib/pluginmanager/pack_fetch_strategy/repository.rb:27
create at C:/tools/logstash-5.1.1/lib/pluginmanager/install_strategy_factory.rb:15
each at org/jruby/RubyArray.java:1613
create at C:/tools/logstash-5.1.1/lib/pluginmanager/install_strategy_factory.rb:14
execute at C:/tools/logstash-5.1.1/lib/pluginmanager/install.rb:27
run at C:/tools/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:67
execute at C:/tools/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/subcommand/execution.rb:11
run at C:/tools/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:67
run at C:/tools/logstash-5.1.1/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:132
(root) at C:\tools\logstash-5.1.1\lib\pluginmanager\main.rb:46
It seems that the connection to the URL "https://artifacts.elastic.co/downloads/logstash-plugins/logstash-filter-elapsed/logstash-filter-elapsed-5.1.1.zip" is being refused.
Opening this URL in a browser returns an XML error message (it looks like the XML you get when trying to access a non-existent object in an AWS S3 bucket).
Adding the --no-verify flag yields the same result.
EDIT
I tried to bypass this problematic "Looking if package named:..." step: I edited logstash-5.1.1\lib\pluginmanager\pack_fetch_strategy\repository.rb, adding a return nil at the beginning of def get_installer_for(plugin_name), roughly as sketched below.
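For clarity, the edit looked roughly like this; only the return nil line was added, the rest of the method was left untouched:
# logstash-5.1.1\lib\pluginmanager\pack_fetch_strategy\repository.rb
def get_installer_for(plugin_name)
  return nil  # added: skip the pack-repository lookup so installation falls through to rubygems.org
  # ... original method body unchanged ...
end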
Running the installation, I get another error:
Installing logstash-filter-elapsed
Error Bundler::HTTPError, retrying 1/10
Could not fetch specs from https://rubygems.org/
Error Bundler::HTTPError, retrying 2/10
Could not fetch specs from https://rubygems.org/
...
I need to get data from an old MySQL server and I'm getting the following error when trying to connect to it with RMySQL or DBI packages:
Error in .local(drv, ...) :
Failed to connect to database: Error: Connection using old (pre-4.1.1) authentication protocol refused (client option 'secure_auth' enabled)
In the terminal, I must use the '--secure_auth=false' option to be able to connect to that MySQL server, but I can't figure out how to pass it with the RMySQL and DBI packages.
Reading these packages' docs, I found the default.file argument of the dbConnect() function. So I created a '.my.cnf' file with a 'secure_auth=false' directive (following the MySQL documentation), roughly as sketched below. But with this conf file the dbConnect() call crashed.
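For illustration only, this is roughly the approach just described, with placeholder host and credentials; it is the setup that crashes for me:
# ~/.my.cnf, passed via default.file; the [client] group is always read
[client]
secure_auth=0
# R side
library(DBI)
library(RMySQL)
con <- dbConnect(RMySQL::MySQL(),
                 default.file = path.expand("~/.my.cnf"),
                 host = "old-server.example.com",
                 dbname = "mydb",
                 username = "myuser",
                 password = "mypassword")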
With the RJDBC package I can connect to that server, even without any extra option to set secure_auth. But I would like to use RMySQL, since I am already using it for many other connections in the same script and also because RMySQL is more up to date than RJDBC.
My sessionInfo():
R version 3.3.1 (2016-06-21)
Platform: x86_64-redhat-linux-gnu (64-bit)
Running under: Red Hat Enterprise Linux
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] RJDBC_0.2-5 rJava_0.9-8 DT_0.1 reshape2_1.4.1 RAdwords_0.1.9
[6] RGA_0.4.2 highcharter_0.3.0 ggplot2_2.1.0 lubridate_1.5.6 dplyr_0.4.3
[11] gpbR_1.0 RMySQL_0.10.9 DBI_0.4-1
I want to use https://forge.puppetlabs.com/example42/splunk to set up Splunk on some of my systems.
So on my puppet master I did puppet module install example42-splunk.
I use the PE console so I added the class splunk and associated splunk with a group that has one of my nodes, my-mongo-1.
I log on to my-mongo-1 and execute...
[root@my-mongo-1 ~]# puppet agent -t
...
Info: Caching catalog for my-mongo-1
Info: Applying configuration version '1417030622'
Notice: /Stage[main]/Splunk/Package[splunk]/ensure: created
Notice: /Stage[main]/Splunk/Exec[splunk_create_service]/returns: executed successfully
Notice: /Stage[main]/Splunk/File[splunk_change_admin_password]/ensure: created
Info: /Stage[main]/Splunk/File[splunk_change_admin_password]: Scheduling refresh of Exec[splunk_change_admin_password]
Notice: /Stage[main]/Splunk/Service[splunk]/ensure: ensure changed 'stopped' to 'running'
Info: /Stage[main]/Splunk/Service[splunk]: Unscheduling refresh on Service[splunk]
Notice: /Stage[main]/Splunk/Exec[splunk_change_admin_password]/returns: Could not look up HOME variable. Auth tokens cannot be cached.
Notice: /Stage[main]/Splunk/Exec[splunk_change_admin_password]/returns:
Notice: /Stage[main]/Splunk/Exec[splunk_change_admin_password]/returns: In handler 'users': The password cannot be set to the default password.
Error: /Stage[main]/Splunk/Exec[splunk_change_admin_password]: Failed to call refresh: /opt/splunkforwarder/bin/puppet_change_admin_password returned 22 instead of one of [0]
Error: /Stage[main]/Splunk/Exec[splunk_change_admin_password]: /opt/splunkforwarder/bin/puppet_change_admin_password returned 22 instead of one of [0]
Notice: Finished catalog run in 11.03 seconds
So what am I doing wrong here?
Why do I get the Could not look up HOME variable. Auth tokens cannot be cached. error?
I saw you asked this on Ask Puppet as well. I gave it a quick test in Vagrant, and there are two solutions:
1) Give a different password for Splunk in Puppet (as it's complaining about using the default password)
class { "splunk":
install => "server",
admin_password => 'n3wP4assw0rd',
}
2) Upgrade the module to a newer version that doesn't have this issue:
puppet module upgrade example42-splunk --force