Can't find rbenv after puppet install - puppet

I am using Puppet to set up a Ruby on Rails server (Ubuntu 14.04). The install seems to work fine, but afterwards I can't find rbenv or bundler, and ruby -v reports the system Ruby, 1.9.3.
Install the module:
puppet module install jdowning-rbenv
My .pp file:
class rails-test_server {
  include ruby
  class { 'rbenv': }
  rbenv::plugin { 'sstephenson/ruby-build': }
  rbenv::build { '2.2.0': global => true }
}
From the module's documentation:
# [$install_dir]
#   This is where rbenv will be installed to.
#   Default: '/usr/local/rbenv'
#
# [$owner]
#   This defines who owns the rbenv install directory.
#   Default: 'root'
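For reference, both parameters can be overridden when the class is declared; a minimal sketch (the 'deploy' owner is a hypothetical value, not from the original setup):
class { 'rbenv':
  install_dir => '/usr/local/rbenv', # the documented default
  owner       => 'deploy',           # hypothetical: a user to own the install
}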
Here is the output:
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for rails-test
Info: Applying configuration version '1424804476'
Notice: /Stage[main]/Rbenv::Deps::Debian/Package[build-essential]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Rbenv::Deps::Debian/Package[libssl-dev]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Rbenv::Deps::Debian/Package[libffi-dev]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Rbenv::Deps::Debian/Package[libreadline6-dev]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Git/Package[git]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Rbenv/Exec[git-clone-rbenv]/returns: executed successfully
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv]/group: group changed 'root' to 'adm'
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv]/mode: mode changed '0755' to '0775'
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv/shims]/ensure: created
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv/plugins]/ensure: created
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv/versions]/ensure: created
Notice: /Stage[main]/Rails-test_server/Rbenv::Plugin[sstephenson/ruby-build]/Exec[install-sstephenson/ruby-build]/returns: executed successfully
Info: /Stage[main]/Rails-test_server/Rbenv::Plugin[sstephenson/ruby-build]/Exec[install-sstephenson/ruby-build]: Scheduling refresh of Exec[rbenv-permissions-sstephenson/ruby-build]
Notice: /Stage[main]/Rails-test_server/Rbenv::Plugin[sstephenson/ruby-build]/Exec[rbenv-permissions-sstephenson/ruby-build]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Rbenv/File[/etc/profile.d/rbenv.sh]/ensure: defined content as '{md5}1895fedb6a7fdc5feed9b2cbbb8bbb60'
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[own-plugins-2.2.0]/returns: executed successfully
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[git-pull-rubybuild-2.2.0]/returns: executed successfully
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-install-2.2.0]/returns: executed successfully
Info: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-install-2.2.0]: Scheduling refresh of Exec[rbenv-ownit-2.2.0]
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-ownit-2.2.0]: Triggered 'refresh' from 1 events
Info: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-ownit-2.2.0]: Scheduling refresh of Exec[rbenv-global-2.2.0]
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-global-2.2.0]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[gem-install-bundler-2.2.0]/returns: executed successfully
Info: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[gem-install-bundler-2.2.0]: Scheduling refresh of Exec[rbenv-rehash-bundler-2.2.0]
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[rbenv-rehash-bundler-2.2.0]: Triggered 'refresh' from 1 events
Info: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[rbenv-rehash-bundler-2.2.0]: Scheduling refresh of Exec[rbenv-permissions-bundler-2.2.0]
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[rbenv-permissions-bundler-2.2.0]: Triggered 'refresh' from 1 events
Notice: Finished catalog run in 473.25 seconds

The rbenv module creates a file, /etc/profile.d/rbenv.sh, that needs to be sourced before rbenv will be available on the command line.
[root@ptierno-puppetmaster modules]# which rbenv
/usr/bin/which: no rbenv in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
[root@ptierno-puppetmaster modules]# source /etc/profile.d/rbenv.sh
[root@ptierno-puppetmaster modules]# which rbenv
/usr/local/rbenv/bin/rbenv
You can either source the file as above, or log out and log back in to get a new login shell.
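Also note that /etc/profile.d/rbenv.sh only affects login shells, so any Puppet Exec that later needs rbenv has to set its own path. A minimal sketch, assuming the default install_dir (the resource name is hypothetical):
exec { 'check-ruby-version':
  command   => 'ruby -v',
  path      => ['/usr/local/rbenv/shims', '/usr/local/rbenv/bin', '/usr/bin', '/bin'],
  logoutput => true, # print the version to the agent output
}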
Hope this helps.

Related

Puppet configuration version is different for the noop and apply

I have a few Puppet modules and I have already applied them. The problem is that Puppet keeps reapplying the same changes: every run still shows the differences it is about to apply, even though they have already been applied.
I did a puppet noop:
puppet agent -vt --noop
And it gives the following output:
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Applying configuration version '1632762925'
Notice: /Stage[main]/Apim_common/Exec[stop-server]/returns: current_value 'notrun', should be ['0'] (noop) (corrective)
Info: /Stage[main]/Apim_common/Exec[stop-server]: Scheduling refresh of Exec[delete-pack]
Notice: /Stage[main]/Apim_common/Exec[delete-pack]: Would have triggered 'refresh' from 1 event
Notice: /Stage[main]/Apim_common/Exec[unzip-update]/returns: current_value 'notrun', should be ['0'] (noop) (corrective)
Notice: Class[Apim_common]: Would have triggered 'refresh' from 3 events
Notice: /Stage[main]/Monitoring/Exec[Restart awslogsd service]/returns: current_value 'notrun', should be ['0'] (noop) (corrective)
Notice: Class[Monitoring]: Would have triggered 'refresh' from 1 event
Notice: Stage[main]: Would have triggered 'refresh' from 2 events
Notice: Applied catalog in 5.70 seconds
And then I did an actual (non-noop) run:
puppet agent -vt
Info: Using environment 'test'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for amway-320-test-api-analytics-worker-1-i-00d684727d24cc360.intranet.local
Info: Applying configuration version '1632762946'
Notice: /Stage[main]/Apim_common/Exec[stop-server]/returns: executed successfully (corrective)
Info: /Stage[main]/Apim_common/Exec[stop-server]: Scheduling refresh of Exec[delete-pack]
Notice: /Stage[main]/Apim_common/Exec[delete-pack]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Apim_common/Exec[unzip-update]/returns: executed successfully (corrective)
Notice: /Stage[main]/Monitoring/Exec[Restart awslogsd service]/returns: executed successfully (corrective)
Notice: /Stage[main]/Apim_analytics_worker/File[/mnt/apim_analytics_worker/test-analytics-3.2.0/conf/worker/deployment.yaml]/content:
--- /mnt/apim_analytics_worker/testam-analytics-3.2.0/conf/worker/deployment.yaml 2021-05-18 02:13:05.000000000 -0400
+++ /tmp/puppet-file20210927-468-19w731k 2021-09-27 13:15:56.250247257 -0400
@@ -14,16 +14,16 @@
# limitations under the License.
################################################################################
- # Carbon Configuration Parameters
+# Carbon Configuration Parameters
test.carbon:
type: test-apim-analytics
- # value to uniquely identify a server
+ # value to uniquely identify a server
id: test-am-analytics
.
.
.
And every time I do a puppet agent -vt, it produces this output over and over, which it shouldn't, since the changes have already been applied. I tried removing the cache directory under /opt/puppet/... but still no luck.
Can someone please help me with this?
You're using a lot of Exec resources. That's not wrong per se, but it is a bad code smell.
It looks like you are managing some things via Execs that might be better modeled as Service resources (and maybe better set up as bona fide system services, too, which is a separate question). There may be other things that would be better managed as Files or Packages.
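For example, the stop/restart Execs above could collapse into something like this minimal sketch (the service name is a hypothetical stand-in, and it assumes the server is registered as a system service):
# Sketch: let Puppet converge on "running" instead of shelling out
# to stop/start scripts on every run.
service { 'apim-server':
  ensure => running,
  enable => true,
}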
Where you do use an Exec, you should either make it refreshonly or give it an appropriate unless, onlyif, or creates parameter, or both, to establish criteria for whether its command will run. Puppet does not track whether the Exec was synced on some earlier run, and it wouldn't matter if it did because having been synced on an earlier run does not necessarily mean that it should not be synced again.
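As an illustration of those guards, here is a minimal sketch reusing resource names from the log above (the commands and paths are assumptions):
exec { 'unzip-update':
  command => '/usr/bin/unzip /tmp/update.zip -d /opt/update', # assumed command
  creates => '/opt/update',      # idempotent: skipped once the directory exists
}

exec { 'delete-pack':
  command     => '/bin/rm -rf /opt/old-pack',                 # assumed command
  refreshonly => true,           # runs only when another resource notifies it
  subscribe   => Exec['unzip-update'],
}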

Puppet Agent Could not retrieve catalog

I installed Maven module in Master machine using this command:
puppet module install maestrodev-maven --version 1.4.0
It installed it successfully in /etc/puppet/modules/
Afterwards I added the following code to the file /etc/puppet/manifests/site.pp on the master machine:
node 'test02.edureka.com' {
  include maven
}
Now, when I run the command below on the Puppet agent machine
puppet agent -t
it gives this error:
root@test02:~# puppet agent -t
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: execution expired
Info: Retrieving pluginfacts
Error: /File[/var/lib/puppet/facts.d]: Failed to generate additional resources using 'eval_generate': execution expired
Error: /File[/var/lib/puppet/facts.d]: Could not evaluate: Could not retrieve file metadata for puppet://test01.edureka.com/pluginfacts: execution expired
Info: Retrieving plugin
Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': execution expired
Error: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve file metadata for puppet://test01.edureka.com/plugins: execution expired
Info: Loading facts
Error: JAVA_HOME is not defined correctly.
We cannot execute
Could not retrieve fact='maven_version', resolution='': undefined method `split' for nil:NilClass
Error: Could not retrieve catalog from remote server: execution expired
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
Error: Could not send report: execution expired
root@test02:~#
The puppet.conf files from the master and the agent, plus an error screenshot, were attached as images (not included here).

Puppet master and client on one machine

I would like to test the Puppet client on the same machine as the master. I followed this tutorial: http://www.elsotanillo.net/2011/08/installing-puppet-master-and-client-in-the-same-host-the-debian-way/. It says that generating the SSL certificates at the right moment is the trick to keeping master and client communicating successfully on one machine. I killed the puppet master process, created the puppet.conf file as given in that link, and installed the Puppet client, but when I tried to generate the SSL certificates using the command below, it failed. You can see the log underneath.
puppetd --no-daemonize --onetime --verbose --waitforcert 30
(I replaced puppetd with puppet agent to make it work in the latest version of Puppet.)
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Connection timed out - connect(2)
Info: Retrieving pluginfacts
Error: /File[/home/lhdadmin/.puppet/var/facts.d]: Failed to generate additional resources using 'eval_generate': Connection timed out - connect(2)
Error: /File[/home/lhdadmin/.puppet/var/facts.d]: Could not evaluate: Could not retrieve file metadata for puppet://puppet/pluginfacts: Connection timed out - connect(2)
Info: Retrieving plugin
Error: /File[/home/lhdadmin/.puppet/var/lib]: Failed to generate additional resources using 'eval_generate': Connection timed out - connect(2)
Error: /File[/home/lhdadmin/.puppet/var/lib]: Could not evaluate: Could not retrieve file metadata for puppet://puppet/plugins: Connection timed out - connect(2)
I tried to install puppetdb, thinking it was a missing component that could be triggering the above error, but apt couldn't find a puppetdb package to install. See the errors below.
sudo puppet resource package puppetdb ensure=latest
Error: Could not update: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install puppetdb' returned 100: Reading package lists... Building dependency tree... Reading state information... E: Unable to locate package puppetdb
Error: /Package[puppetdb]/ensure: change from purged to latest failed: Could not update: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install puppetdb' returned 100: Reading package lists... Building dependency tree... Reading state information... E: Unable to locate package puppetdb
package { 'puppetdb': ensure => 'purged', }
Aah, I think you have not declared your puppet class in init.pp or have not defined your node in node.pp.
If you don't want to use puppetdb, don't include it in your puppet/puppet.conf file; if you do want to use it, cross-check puppetdb by logging in manually as the user mentioned in puppet.conf:
storeconfigs = true
dbname = puppet-db
dbadapter = mysql
dbuser = puppet-user
dbpassword = puppet
dbserver = localhost
Also check for the proper repo in /etc/apt/sources.list; the error "E: Unable to locate package puppetdb" generally occurs due to failed internet connectivity, or because apt is unable to reach the package server.
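As a sketch of that last point, you could have Puppet itself make sure the Puppet Labs apt source is present before installing puppetdb (the repo release name 'trusty' and file path are assumptions; adjust for your distribution):
# Hypothetical sketch: add the apt source, refresh apt once it changes,
# and only then try to install puppetdb.
file { '/etc/apt/sources.list.d/puppetlabs.list':
  ensure  => file,
  content => "deb http://apt.puppetlabs.com trusty main\n",
  notify  => Exec['apt-update'],
}

exec { 'apt-update':
  command     => '/usr/bin/apt-get update',
  refreshonly => true,
}

package { 'puppetdb':
  ensure  => latest,
  require => Exec['apt-update'],
}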

puppet-acl module on Windows throws transactionstore.yaml corrupt error

Trying out puppet-acl module on Windows Server 2016, Preview5. I'm getting the weirdest error on the second puppet run. If i remove the trnsactionstore.yaml file, and re-run the puppet agent, the behavior is repeatable. Im running puppet4 with latest agent version.
This is my code block:
acl { "c:/temp":
permissions => [
{ identity => 'Administrator', rights => ['full'] },
{ identity => 'Users', rights => ['read','execute'] }
],
}
This is the output from the Puppet run:
PS C:\ProgramData\PuppetLabs\puppet\cache\state> puppet agent -t
Info: Using configured environment 'local'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for opslowebf02n02.local
Error: Transaction store file C:/ProgramData/PuppetLabs/puppet/cache/state/transactionstore.yaml is corrupt (wrong number of arguments (0 for 1..2)); replacing
Error: Transaction state file C:/ProgramData/PuppetLabs/puppet/cache/state/transactionstore.yaml is valid YAML but not returning a hash. Check the file for corruption, or remove it before continuing.
Info: Applying configuration version '1471436916'
Notice: /Stage[main]/platform_base_system::Role::Windows/Exec[check-powershell-exection-policy]/returns: executed successfully
Notice: /Stage[main]/configs_iis::Profile::Default/Exec[check-iis-global-anonymous-authentication]/returns: executed successfully
Notice: Applied catalog in 7.42 seconds
In the transactionstore.yaml file, this is the error section:
Acl[c:/temp]:
  parameters:
    permissions:
      system_value:
        - !ruby/hash:Puppet::Type::Acl::Ace {}
        - !ruby/hash:Puppet::Type::Acl::Ace {}
    inherit_parent_permissions:
      system_value: :true
This has been resolved by downgrading the puppet agent to 4.5.3; the behavior of the 4.6.0 version must have changed.
With 4.5.3 I still see the error in the logfile, but the Puppet run does not fail.
I'll try to talk to the people at Puppet about this.
This is being tracked as https://tickets.puppetlabs.com/browse/PUP-6629. It's almost coincidental that you created https://tickets.puppetlabs.com/browse/PUP-6630 right afterwards.

Gitlab 6.2 not syncing with authorized_keys

The keys I put in my GitLab GUI are not showing up in the authorized_keys file, so I cannot push or pull over SSH. Any attempt asks me for an SSH password :-(
I am using GitLab 6.2 stable. Here are the outputs of a few commands.
git@CVIAL272675:~/gitlab$ bundle exec rake gitlab:shell:setup RAILS_ENV=production
This will rebuild an authorized_keys file.
You will lose any data stored in authorized_keys file.
Do you want to continue (yes/no)? yes
sh: 1: Syntax error: Unterminated quoted string
Fgit@CVIAL272675:~/gitlab$
and
git@CVIAL272675:~/gitlab$ bundle exec rake gitlab:check RAILS_ENV=production
Checking Environment ...
Git configured for git user? ... yes
Has python2? ... yes
python2 is supported version? ... yes
Checking Environment ... Finished
Checking GitLab Shell ...
GitLab Shell version >= 1.7.1 ? ... OK (1.7.1)
Repo base directory exists? ... yes
Repo base directory is a symlink? ... no
Repo base owned by git:git? ... yes
Repo base access is drwxrws---? ... yes
update hook up-to-date? ... yes
update hooks in repos are links: ...
Snehadeep Sethia / CodeRush ... ok
Bharath Bhushan Lohray / PyPGPWord ... ok
Running /home/git/gitlab-shell/bin/check
Check GitLab API access: OK
Check directories and files:
/home/git/repositories: OK
/home/git/.ssh/authorized_keys: OK
gitlab-shell self-check successful
Checking GitLab Shell ... Finished
Checking Sidekiq ...
Running? ... yes
Number of Sidekiq processes ... 1
Checking Sidekiq ... Finished
Checking GitLab ...
Database config exists? ... yes
Database is SQLite ... no
All migrations up? ... yes
GitLab config exists? ... yes
GitLab config outdated? ... no
Log directory writable? ... yes
Tmp directory writable? ... yes
Init script exists? ... yes
Init script up-to-date? ... yes
projects have namespace: ...
Snehadeep Sethia / CodeRush ... yes
Bharath Bhushan Lohray / PyPGPWord ... yes
Projects have satellites? ...
Snehadeep Sethia / CodeRush ... yes
Bharath Bhushan Lohray / PyPGPWord ... yes
Redis version >= 2.0.0? ... yes
Your git bin path is "/usr/bin/git"
Git version >= 1.7.10 ? ... yes (1.8.3)
Checking GitLab ... Finished
git@CVIAL272675:~/gitlab$
sidekiq.log
2013-10-29T04:08:37Z 18931 TID-os8rme7b4 INFO: Booting Sidekiq 2.14.0 using redis://localhost:6379 with options {:namespace=>"resque:gitlab"}
2013-10-29T04:08:37Z 18931 TID-os8rme7b4 INFO: Running in ruby 2.0.0p247 (2013-06-27 revision 41674) [x86_64-linux]
2013-10-29T04:08:37Z 18931 TID-os8rme7b4 INFO: See LICENSE and the LGPL-3.0 for licensing details.
2013-10-29T04:10:55Z 18931 TID-os8s00k6g Sidekiq::Extensions::DelayedMailer JID-10b66a2f8897dd56487c57cd INFO: start
2013-10-29T04:10:56Z 18931 TID-os8s00k6g Sidekiq::Extensions::DelayedMailer JID-10b66a2f8897dd56487c57cd INFO: done: 0.472 sec
2013-10-29T04:11:55Z 18931 TID-os8s00k6g Sidekiq::Extensions::DelayedMailer JID-a63f9cad0c98b605c76e0613 INFO: start
2013-10-29T04:11:55Z 18931 TID-os8s00k6g Sidekiq::Extensions::DelayedMailer JID-a63f9cad0c98b605c76e0613 INFO: done: 0.263 sec
2013-10-29T04:14:36Z 18931 TID-os8s00k6g GitlabShellWorker JID-af69358238a2b2cc4c5884c2 INFO: start
sh: 1: Syntax error: Unterminated quoted string
2013-10-29T04:14:37Z 18931 TID-os8s00k6g GitlabShellWorker JID-af69358238a2b2cc4c5884c2 INFO: done: 0.757 sec
2013-10-29T04:14:40Z 18931 TID-os8s00k6g Sidekiq::Extensions::DelayedMailer JID-4020b22e54a09bc63401f08b INFO: start
2013-10-29T04:14:41Z 18931 TID-os8s00k6g Sidekiq::Extensions::DelayedMailer JID-4020b22e54a09bc63401f08b INFO: done: 0.29 sec
What else can I do? How do I fix this? I have seen similar threads on Stack Overflow and elsewhere, but none of them worked for me.
The problem is happening when GitLab shells out to invoke gitlab-shell to add the key; it sounds like a quote character is sneaking into the call to #{gitlab_shell_user_home}/gitlab-shell/bin/gitlab-keys somehow. key.shell_id can't have a quote in it, because it's generated as "key-#{id}", and key.key is validated as a recognizable ssh-rsa key, so it seems most likely to me that #{gitlab_shell_user_home} has an extraneous character.
To verify, if it's possible, you can add a puts "#{gitlab_shell_user_home}/gitlab-shell/bin/gitlab-keys add-key #{key_id} #{key_content}" right before the system call (and restart Sidekiq) to see the actual shell command that GitLab is about to attempt. That should let you track down where your extra quote is coming from.
If gitlab_shell_user_home is the culprit, that value is derived from the gitlab-shell: ssh_user: setting in gitlab.yml, which defaults to gitlab: user if it's not present. Double-check your YAML syntax if you've got either of those set!
