GitLab 6.2 not syncing with authorized_keys

The keys I add in the GitLab GUI are not showing up in the authorized_keys file, so I cannot push or pull over SSH; every attempt asks me for an SSH password :-(
I am using GitLab 6.2 stable. Here is the output of a few commands.
git@CVIAL272675:~/gitlab$ bundle exec rake gitlab:shell:setup RAILS_ENV=production
This will rebuild an authorized_keys file.
You will lose any data stored in authorized_keys file.
Do you want to continue (yes/no)? yes
sh: 1: Syntax error: Unterminated quoted string
git@CVIAL272675:~/gitlab$
and
git@CVIAL272675:~/gitlab$ bundle exec rake gitlab:check RAILS_ENV=production
Checking Environment ...
Git configured for git user? ... yes
Has python2? ... yes
python2 is supported version? ... yes
Checking Environment ... Finished
Checking GitLab Shell ...
GitLab Shell version >= 1.7.1 ? ... OK (1.7.1)
Repo base directory exists? ... yes
Repo base directory is a symlink? ... no
Repo base owned by git:git? ... yes
Repo base access is drwxrws---? ... yes
update hook up-to-date? ... yes
update hooks in repos are links: ...
Snehadeep Sethia / CodeRush ... ok
Bharath Bhushan Lohray / PyPGPWord ... ok
Running /home/git/gitlab-shell/bin/check
Check GitLab API access: OK
Check directories and files:
/home/git/repositories: OK
/home/git/.ssh/authorized_keys: OK
gitlab-shell self-check successful
Checking GitLab Shell ... Finished
Checking Sidekiq ...
Running? ... yes
Number of Sidekiq processes ... 1
Checking Sidekiq ... Finished
Checking GitLab ...
Database config exists? ... yes
Database is SQLite ... no
All migrations up? ... yes
GitLab config exists? ... yes
GitLab config outdated? ... no
Log directory writable? ... yes
Tmp directory writable? ... yes
Init script exists? ... yes
Init script up-to-date? ... yes
projects have namespace: ...
Snehadeep Sethia / CodeRush ... yes
Bharath Bhushan Lohray / PyPGPWord ... yes
Projects have satellites? ...
Snehadeep Sethia / CodeRush ... yes
Bharath Bhushan Lohray / PyPGPWord ... yes
Redis version >= 2.0.0? ... yes
Your git bin path is "/usr/bin/git"
Git version >= 1.7.10 ? ... yes (1.8.3)
Checking GitLab ... Finished
git@CVIAL272675:~/gitlab$
sidekiq.log
2013-10-29T04:08:37Z 18931 TID-os8rme7b4 INFO: Booting Sidekiq 2.14.0 using redis://localhost:6379 with options {:namespace=>"resque:gitlab"}
2013-10-29T04:08:37Z 18931 TID-os8rme7b4 INFO: Running in ruby 2.0.0p247 (2013-06-27 revision 41674) [x86_64-linux]
2013-10-29T04:08:37Z 18931 TID-os8rme7b4 INFO: See LICENSE and the LGPL-3.0 for licensing details.
2013-10-29T04:10:55Z 18931 TID-os8s00k6g Sidekiq::Extensions::DelayedMailer JID-10b66a2f8897dd56487c57cd INFO: start
2013-10-29T04:10:56Z 18931 TID-os8s00k6g Sidekiq::Extensions::DelayedMailer JID-10b66a2f8897dd56487c57cd INFO: done: 0.472 sec
2013-10-29T04:11:55Z 18931 TID-os8s00k6g Sidekiq::Extensions::DelayedMailer JID-a63f9cad0c98b605c76e0613 INFO: start
2013-10-29T04:11:55Z 18931 TID-os8s00k6g Sidekiq::Extensions::DelayedMailer JID-a63f9cad0c98b605c76e0613 INFO: done: 0.263 sec
2013-10-29T04:14:36Z 18931 TID-os8s00k6g GitlabShellWorker JID-af69358238a2b2cc4c5884c2 INFO: start
sh: 1: Syntax error: Unterminated quoted string
2013-10-29T04:14:37Z 18931 TID-os8s00k6g GitlabShellWorker JID-af69358238a2b2cc4c5884c2 INFO: done: 0.757 sec
2013-10-29T04:14:40Z 18931 TID-os8s00k6g Sidekiq::Extensions::DelayedMailer JID-4020b22e54a09bc63401f08b INFO: start
2013-10-29T04:14:41Z 18931 TID-os8s00k6g Sidekiq::Extensions::DelayedMailer JID-4020b22e54a09bc63401f08b INFO: done: 0.29 sec
What else can I do? How do I fix this? I have seen similar threads on Stack Overflow and elsewhere, but none of the suggested fixes worked for me.

The problem is happening when GitLab shells out to invoke gitlab-shell to add the key; it sounds like a quote character is sneaking into the call to #{gitlab_shell_user_home}/gitlab-shell/bin/gitlab-keys somehow. key.shell_id can't have a quote in it, because it's generated as "key-#{id}", and key.key is validated as a recognizable ssh-rsa key, so it seems most likely to me that #{gitlab_shell_user_home} has an extraneous character.
To verify, if it's possible, you can add a puts "#{gitlab_shell_user_home}/gitlab-shell/bin/gitlab-keys add-key #{key_id} #{key_content}" right before the system call (and restart Sidekiq) to see the actual shell command that GitLab is about to attempt. That should let you track down where your extra quote is coming from.
If gitlab_shell_user_home is the culprit, that value is derived from the gitlab-shell: ssh_user: setting in gitlab.yml, which defaults to gitlab: user if it's not present. Double-check your YAML syntax if you've got either of those set!
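If you can edit the installed code, a temporary debug line along the following lines will print the exact command before it runs; this is only a sketch, the variable names simply mirror the interpolation described above, and the exact file in your GitLab 6.2 tree (for example the gitlab-shell wrapper) may differ:
# Temporary debugging sketch; names mirror the interpolation described above,
# and the surrounding method in your GitLab 6.2 install may differ.
cmd = "#{gitlab_shell_user_home}/gitlab-shell/bin/gitlab-keys add-key #{key_id} #{key_content}"
puts cmd.inspect   # inspect makes a stray quote or newline visible in the log
system(cmd)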

Related

GitLab gemnasium-maven analyzer v3.11.1 fails dependency scan due to unsupported class file major version 61

I'm attempting to set up GitLab dependency scanning for a repository on my self-hosted GitLab server. I have included the job template and the test stage, since I have overridden the stage clause. The job starts, but it fails soon after. When I set the variable SECURE_LOG_LEVEL to debug, I see the following output.
$ /analyzer run
Using java version 'adoptopenjdk-17.0.2+8'
[INFO] [gemnasium-maven] [2023-01-28T15:21:00Z] [/go/src/app/cmd/gemnasium-maven/main.go:55] ▶ GitLab gemnasium-maven analyzer v3.11.1
[DEBU] [gemnasium-maven] [2023-01-28T15:21:00Z] [/go/src/app/finder/finder.go:64] ▶ inspect directory: .
[DEBU] [gemnasium-maven] [2023-01-28T15:21:00Z] [/go/src/app/finder/finder.go:96] ▶ skip ignored directory: .git
[DEBU] [gemnasium-maven] [2023-01-28T15:21:00Z] [/go/src/app/finder/detect.go:84] ▶ Selecting gradle for maven because this is the first match
[INFO] [gemnasium-maven] [2023-01-28T15:21:00Z] [/go/src/app/finder/finder.go:116] ▶ Detected supported dependency files in '.'. Dependency files detected in this directory will be processed. Dependency files in other directories will be skipped.
[DEBU] [gemnasium-maven] [2023-01-28T15:21:00Z] [/go/src/app/cmd/gemnasium-maven/main.go:234] ▶ Exporting dependencies for /path/to/my/app/build.gradle
[DEBU] [gemnasium-maven] [2023-01-28T15:21:05Z] [/go/src/app/builder/gradle/gradle.go:85] ▶ /path/to/my/app/gradlew --init-script /gemnasium-gradle-plugin-init.gradle gemnasiumDumpDependencies
Downloading https://services.gradle.org/distributions/gradle-7.1.1-bin.zip
..........10%...........20%...........30%..........40%...........50%...........60%..........70%...........80%...........90%...........100%
Welcome to Gradle 7.1.1!
Here are the highlights of this release:
- Faster incremental Java compilation
- Easier source set configuration in the Kotlin DSL
For more details see https://docs.gradle.org/7.1.1/release-notes.html
Starting a Gradle Daemon (subsequent builds will be faster)
FAILURE: Build failed with an exception.
* Where:
Initialization script '/gemnasium-gradle-plugin-init.gradle'
* What went wrong:
Could not compile initialization script '/gemnasium-gradle-plugin-init.gradle'.
> startup failed:
> General error during conversion: Unsupported class file major version 61
java.lang.IllegalArgumentException: Unsupported class file major version 61
at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:189)
at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:170)
at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:156)
at groovyjarjarasm.asm.ClassReader.<init>(ClassReader.java:277)
...
How can I resolve this issue? I am using GitLab server v15.7.5.
I found a Stack Overflow thread describing an issue very similar to mine.
It appears the version of ASM bundled with the GitLab gemnasium-maven analyzer v3.11 does not support class file major version 61, which is what Java 17.0.2+8 produces.
I was able to get this working by downgrading GitLab gemnasium-maven analyzer to v2.31.0 and have filed a support request with GitLab to notify them of the issue.
To downgrade the dependency scanner, add the following block to your CI/CD configuration:
.ds-analyzer:
  variables:
    DS_MAJOR_VERSION: 2
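For context, a minimal .gitlab-ci.yml that combines this override with the stock template include might look like the sketch below; the include path is GitLab's standard dependency-scanning template, and the override simply mirrors the block above:
# Minimal sketch; adjust stages and other jobs to match your pipeline.
include:
  - template: Security/Dependency-Scanning.gitlab-ci.yml

stages:
  - test

# Pin the dependency-scanning analyzers back to the v2 line (the workaround above).
.ds-analyzer:
  variables:
    DS_MAJOR_VERSION: 2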

Puppet configuration version is different for the noop and apply

I have a few Puppet modules and I have already applied them. The problem is that even though they have been applied, Puppet keeps pushing the same changes: every run still shows the differences it is about to apply, even after they have already been applied.
I did a puppet noop:
puppet agent -vt --noop
And it gives the following output:
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Applying configuration version '1632762925'
Notice: /Stage[main]/Apim_common/Exec[stop-server]/returns: current_value 'notrun', should be ['0'] (noop) (corrective)
Info: /Stage[main]/Apim_common/Exec[stop-server]: Scheduling refresh of Exec[delete-pack]
Notice: /Stage[main]/Apim_common/Exec[delete-pack]: Would have triggered 'refresh' from 1 event
Notice: /Stage[main]/Apim_common/Exec[unzip-update]/returns: current_value 'notrun', should be ['0'] (noop) (corrective)
Notice: Class[Apim_common]: Would have triggered 'refresh' from 3 events
Notice: /Stage[main]/Monitoring/Exec[Restart awslogsd service]/returns: current_value 'notrun', should be ['0'] (noop) (corrective)
Notice: Class[Monitoring]: Would have triggered 'refresh' from 1 event
Notice: Stage[main]: Would have triggered 'refresh' from 2 events
Notice: Applied catalog in 5.70 seconds
And then I did a puppet apply:
puppet agent -vt
Info: Using environment 'test'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Caching catalog for amway-320-test-api-analytics-worker-1-i-00d684727d24cc360.intranet.local
Info: Applying configuration version '1632762946'
Notice: /Stage[main]/Apim_common/Exec[stop-server]/returns: executed successfully (corrective)
Info: /Stage[main]/Apim_common/Exec[stop-server]: Scheduling refresh of Exec[delete-pack]
Notice: /Stage[main]/Apim_common/Exec[delete-pack]: Triggered 'refresh' from 1 event
Notice: /Stage[main]/Apim_common/Exec[unzip-update]/returns: executed successfully (corrective)
Notice: /Stage[main]/Monitoring/Exec[Restart awslogsd service]/returns: executed successfully (corrective)
Notice: /Stage[main]/Apim_analytics_worker/File[/mnt/apim_analytics_worker/test-analytics-3.2.0/conf/worker/deployment.yaml]/content:
--- /mnt/apim_analytics_worker/testam-analytics-3.2.0/conf/worker/deployment.yaml 2021-05-18 02:13:05.000000000 -0400
+++ /tmp/puppet-file20210927-468-19w731k 2021-09-27 13:15:56.250247257 -0400
@@ -14,16 +14,16 @@
# limitations under the License.
################################################################################
- # Carbon Configuration Parameters
+# Carbon Configuration Parameters
test.carbon:
type: test-apim-analytics
- # value to uniquely identify a server
+ # value to uniquely identify a server
id: test-am-analytics
.
.
.
And every time I do a puppet agent -vt, it produces this output over and over, which it shouldn't, as the changes have already been applied. I tried removing the cache directory under /opt/puppet/... but still no luck.
Can someone please help me on this?
You're using a lot of Exec resources. That's not wrong per se, but it is a code smell.
It looks like you are managing some things via Execs that might be better modeled as Service resources (and maybe better set up as bona fide system services, too, which is a separate question). There may be other things that would be better managed as Files or Packages.
Where you do use an Exec, you should either make it refreshonly or give it an appropriate unless, onlyif, or creates parameter (or a combination of these) to establish criteria for whether its command will run; see the sketch below. Puppet does not track whether the Exec was synced on some earlier run, and it wouldn't matter if it did, because having been synced on an earlier run does not necessarily mean that it should not be synced again.
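For illustration, here is a minimal sketch of those patterns; the resource titles, commands, and paths are hypothetical rather than taken from your modules:
# Hypothetical examples of the patterns described above.
# 1. Model the long-running process as a Service rather than stop/start Execs:
service { 'analytics-worker':
  ensure => running,
  enable => true,
}
# 2. Give an Exec a condition so it only runs when actually needed:
exec { 'unzip-update':
  command => '/usr/bin/unzip -o /tmp/update.zip -d /opt/app',
  creates => '/opt/app/.unpacked',   # skipped once this marker file exists
}
# 3. Or make it refreshonly, so it runs only when another resource notifies it:
file { '/etc/awslogs/awslogs.conf':
  ensure => file,
  source => 'puppet:///modules/monitoring/awslogs.conf',
}
exec { 'restart-awslogsd':
  command     => '/usr/bin/systemctl restart awslogsd',
  refreshonly => true,
  subscribe   => File['/etc/awslogs/awslogs.conf'],
}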

Error uploading artifacts to coordinator

I've been having some fun setting up GitLab, and after spending quite a while hacking away at it I've become relatively used to the process, having now done it on two machines, the second time around with much more ease than the first…
However, I am faced with a rather large problem on both machines: my CI pipeline is broken. Somehow, somewhere, my setup is returning a 403 for artifact uploads once builds are completed, meaning that every job that technically succeeds is still doomed to fail…
I've been scavenging the interwebs for answers but I haven't found much that has been useful.
I upgraded GitLab CE to 10.1.4 minutes ago, as well as GitLab-runner to 10.1.0, the latest packages available to me through apt on the more important of the two machines, running a newer version of Ubuntu than the other - 17.04 zesty on the “beast” compared to 16.10 yakkety on “q2”.
Both gitlab-runner registrations use shell for execution.
The relevant output of the CI job is as follows:
Cloning repository...
Cloning into '/[clonepath]'...
Checking out 8319d586 as master...
Skipping Git submodules setup
mesg: ttyname failed: Inappropriate ioctl for device
mesg: ttyname failed: Inappropriate ioctl for device
mesg: ttyname failed: Inappropriate ioctl for device
$ mvn -B install
[INFO] Scanning for projects...
...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 11.204 s
[INFO] Finished at: 2017-11-18T05:45:08+01:00
[INFO] Final Memory: 27M/640M
[INFO] ------------------------------------------------------------------------
mesg: ttyname failed: Inappropriate ioctl for device
mesg: ttyname failed: Inappropriate ioctl for device
mesg: ttyname failed: Inappropriate ioctl for device
Uploading artifacts...
target/*.jar: found 1 matching files
ERROR: Uploading artifacts to coordinator... forbidden id=35 responseStatus=403
Forbidden status=403 Forbidden token=sP9oHykF
FATAL: permission denied
ERROR: Job failed: exit status 1
I run GitLab under an Apache2 vhost subdomain, mostly for aesthetics and to avoid a port after the host name (e.g. 8080 for Unicorn), since there are other sites running on Apache.
These are the configured options within my gitlab.rb:
gitlab_rails['trusted_proxies'] = [ '127.0.0.1' ]
gitlab_workhorse['listen_network'] = "tcp"
gitlab_workhorse['listen_addr'] = "127.0.0.1:8181"
nginx['enable'] = false
Setting the following options
web_server['username'] = 'www-data'
web_server['group'] = 'www-data'
produces an error on reconfiguration:
Starting Chef Client, version 12.12.15
resolving cookbooks for run list: ["gitlab"]
Synchronizing Cookbooks:
- package (0.1.0)
- registry (0.1.0)
- consul (0.0.0)
- gitlab (0.0.1)
- runit (0.14.2)
Installing Cookbook Gems:
Compiling Cookbooks...
Recipe: gitlab::default
* directory[/etc/gitlab] action create (up to date)
Converging 408 resources
* directory[/etc/gitlab] action create (up to date)
* directory[Create /var/opt/gitlab] action create (up to date)
* directory[/opt/gitlab/embedded/etc] action create (up to date)
* template[/opt/gitlab/embedded/etc/gitconfig] action create (up to date)
Recipe: gitlab::web-server
* group[Webserver user and group] action create (up to date)
* user[Webserver user and group] action create
================================================================================
Error executing action `create` on resource 'user[Webserver user and group]'
================================================================================
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '8'
---- Begin output of ["usermod", "-s", "/bin/false", "-d", "/var/opt/gitlab/nginx", "www-data"] ----
STDOUT:
STDERR: usermod: user www-data is currently used by process 2656
---- End output of ["usermod", "-s", "/bin/false", "-d", "/var/opt/gitlab/nginx", "www-data"] ----
Ran ["usermod", "-s", "/bin/false", "-d", "/var/opt/gitlab/nginx", "www-data"] returned 8
Resource Declaration:
---------------------
# In /opt/gitlab/embedded/cookbooks/cache/cookbooks/package/definitions/account.rb
38: user params[:name] do
39: username username
40: shell params[:shell]
41: home params[:home]
42: uid params[:uid]
43: gid params[:ugid]
44: system params[:system]
45: supports params[:user_supports]
46: action params[:action]
47: end
48: end
Compiled Resource:
------------------
# Declared in /opt/gitlab/embedded/cookbooks/cache/cookbooks/package/definitions/account.rb:38 :in `block in from_file'
user("Webserver user and group") do
params {:action=>nil, :username=>"www-data", :uid=>nil, :ugid=>"www-data", :groupname=>"www-data", :gid=>nil, :shell=>"/bin/false", :home=>"/var/opt/gitlab/nginx", :system=>true, :append_to_group=>true, :group_members=>["www-data"], :user_supports=>{:manage_home=>false}, :manage=>true, :name=>"Webserver user and group"}
action [:create]
supports {:manage_home=>false}
retries 0
retry_delay 2
default_guard_interpreter :default
username "www-data"
gid 33
home "/var/opt/gitlab/nginx"
shell "/bin/false"
system true
iterations 27855
declared_type :user
cookbook_name "gitlab"
recipe_name "web-server"
end
Platform:
---------
x86_64-linux
Running handlers:
Running handlers complete
Chef Client failed. 0 resources updated in 04 seconds
And as for Apache, here’s the SSL-enabled Vhost:
<IfModule mod_ssl.c>
<VirtualHost *:443>
ServerName [host]
ServerAdmin [email]
DocumentRoot /opt/gitlab/embedded/service/gitlab-rails/public
ServerSignature Off
ProxyPreserveHost On
AllowEncodedSlashes NoDecode
<Location />
Order deny,allow
Allow from all
Require all granted
ProxyPassReverse http://127.0.0.1:8181/
ProxyPassReverse http://[host]/
RequestHeader set X-Forwarded-Ssl 'on'
</Location>
RewriteEngine on
RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f [OR]
RewriteCond %{REQUEST_URI} ^/uploads/.*
RewriteRule .* http://127.0.0.1:8181%{REQUEST_URI} [P,QSA,NE]
SSLCertificateFile /etc/letsencrypt/live/[host]/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/[host]/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>
Any idea what's going on? I haven't dug into the Apache logs yet, since it probably isn't Apache: the request goes straight to gitlab-workhorse (8181). What logs should I check for that, if necessary?
Thank you for your time.
This isn't a particularly helpful answer, as the solution comes with little explanation of why it works.
The configuration I had has remained the same as above, but I removed the runner's configuration (rm /etc/gitlab-runner/config.toml) and then removed the package from the machine (apt purge gitlab-runner). (gitlab-ci-multi-runner is another package that is available, but it does not appear to be up to date with GitLab 10; it returns a 404 rather than connecting to the node.)
I reinstalled the runner (apt install gitlab-runner) and then registered it with gitlab-runner register. The key thing to note is that during registration I used my FQDN, as in https://git.example.com, rather than any local address such as http://localhost:8080 or http://localhost:8181 (Unicorn and gitlab-workhorse, respectively). And yes, I am running my runners on my local machine. Hazardous, but I have too much trust in my team. That may be our downfall; ignorant systems administration is key to success.
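For reference, the whole sequence boiled down to something like the sketch below; the URL and token are placeholders, and the flags are the standard gitlab-runner registration options:
# Remove the old runner configuration and package (as described above).
sudo rm /etc/gitlab-runner/config.toml
sudo apt purge gitlab-runner
# Reinstall, then register against the public FQDN
# (not http://localhost:8080 or http://localhost:8181).
sudo apt install gitlab-runner
sudo gitlab-runner register \
  --non-interactive \
  --url "https://git.example.com" \
  --registration-token "PROJECT_OR_SHARED_TOKEN" \
  --executor "shell" \
  --description "local shell runner"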

Gradle not reading gradle.properties

Gradle will not read gradle.properties. I guess I might have screwed up the install at some point by running it as sudo.
I execute ./gradlew createDb --stacktrace --debug
20:56:58.422 [INFO] [org.gradle.BuildLogger] Starting Build
20:56:58.427 [DEBUG] [org.gradle.BuildLogger] Gradle user home: /home/bmackey
20:56:58.428 [DEBUG] [org.gradle.BuildLogger] Current dir: /home/bmackey/Git/projectName
20:56:58.428 [DEBUG] [org.gradle.BuildLogger] Settings file: null
ERROR
A problem occurred evaluating root project 'buildSrc'.
21:03:32.096 [ERROR] [org.gradle.BuildExceptionReporter]
No such property: some_username for class: org.gradle.api.internal.artifacts.repositories.DefaultPasswordCredentials_Decorated
environment info
GRADLE_HOME=/home/bmackey/.sdkman/candidates/gradle/current
GRADLE_USER_HOME=/home/bmackey
There is a gradle.properties in the project home directory with:
some_username=Defined_in_~/.gradle/gradle.properties
some_password=Defined_in_~/.gradle/gradle.properties
This should be (and is on my Mac) overridden by
#/home/bmackey/.gradle/gradle.properties
some_username=me
some_password=password
This same project ran fine on Mac OSX.
In .bashrc I changed
export GRADLE_USER_HOME="/home/bmackey"
to
export GRADLE_USER_HOME="/home/bmackey/.gradle"
Then restart the terminal.
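To confirm the change took effect, a quick check along these lines should work; the paths follow the question, and the grep target is just the property name from the project's gradle.properties:
# After restarting the shell (or re-sourcing ~/.bashrc):
echo "$GRADLE_USER_HOME"                    # expect /home/bmackey/.gradle
cat "$GRADLE_USER_HOME/gradle.properties"
# The built-in 'properties' task prints the resolved project properties,
# so the user-level value should now win over the one in the project file.
./gradlew -q properties | grep some_username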
If you are using sudo to run the gradlew command, the default Gradle config directory will be under root's home, e.g. /private/var/root/.gradle on macOS. Hope this helps.

Can't find rbenv after puppet install

I am using Puppet to set up a Ruby on Rails server (Ubuntu 14.04). The install seems to work fine, but then I can't find rbenv or bundler, and ruby -v reports the system Ruby 1.9.3.
Install plugin module
puppet module install jdowning-rbenv
pp file
class rails-test_server {
  include ruby
  class { rbenv: }
  rbenv::plugin { 'sstephenson/ruby-build': }
  rbenv::build { '2.2.0': global => true }
}
in the module
# [$install_dir]
# This is where rbenv will be installed to.
# Default: '/usr/local/rbenv'
#
# [$owner]
# This defines who owns the rbenv install directory.
# Default: 'root'
Here is the output
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for rails-test
Info: Applying configuration version '1424804476'
Notice: /Stage[main]/Rbenv::Deps::Debian/Package[build-essential]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Rbenv::Deps::Debian/Package[libssl-dev]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Rbenv::Deps::Debian/Package[libffi-dev]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Rbenv::Deps::Debian/Package[libreadline6-dev]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Git/Package[git]/ensure: ensure changed 'purged' to 'present'
Notice: /Stage[main]/Rbenv/Exec[git-clone-rbenv]/returns: executed successfully
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv]/group: group changed 'root' to 'adm'
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv]/mode: mode changed '0755' to '0775'
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv/shims]/ensure: created
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv/plugins]/ensure: created
Notice: /Stage[main]/Rbenv/File[/usr/local/rbenv/versions]/ensure: created
Notice: /Stage[main]/Rails-test_server/Rbenv::Plugin[sstephenson/ruby-build]/Exec[install-sstephenson/ruby-build]/returns: executed successfully
Info: /Stage[main]/Rails-test_server/Rbenv::Plugin[sstephenson/ruby-build]/Exec[install-sstephenson/ruby-build]: Scheduling refresh of Exec[rbenv-permissions-sstephenson/ruby-build]
Notice: /Stage[main]/Rails-test_server/Rbenv::Plugin[sstephenson/ruby-build]/Exec[rbenv-permissions-sstephenson/ruby-build]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Rbenv/File[/etc/profile.d/rbenv.sh]/ensure: defined content as '{md5}1895fedb6a7fdc5feed9b2cbbb8bbb60'
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[own-plugins-2.2.0]/returns: executed successfully
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[git-pull-rubybuild-2.2.0]/returns: executed successfully
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-install-2.2.0]/returns: executed successfully
Info: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-install-2.2.0]: Scheduling refresh of Exec[rbenv-ownit-2.2.0]
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-ownit-2.2.0]: Triggered 'refresh' from 1 events
Info: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-ownit-2.2.0]: Scheduling refresh of Exec[rbenv-global-2.2.0]
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Exec[rbenv-global-2.2.0]: Triggered 'refresh' from 1 events
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[gem-install-bundler-2.2.0]/returns: executed successfully
Info: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[gem-install-bundler-2.2.0]: Scheduling refresh of Exec[rbenv-rehash-bundler-2.2.0]
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[rbenv-rehash-bundler-2.2.0]: Triggered 'refresh' from 1 events
Info: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[rbenv-rehash-bundler-2.2.0]: Scheduling refresh of Exec[rbenv-permissions-bundler-2.2.0]
Notice: /Stage[main]/Rails-test_server/Rbenv::Build[2.2.0]/Rbenv::Gem[bundler-2.2.0]/Exec[rbenv-permissions-bundler-2.2.0]: Triggered 'refresh' from 1 events
Notice: Finished catalog run in 473.25 seconds
The rbenv module creates a file, /etc/profile.d/rbenv.sh, that needs to be sourced before rbenv will be available on the command line.
[root@ptierno-puppetmaster modules]# which rbenv
/usr/bin/which: no rbenv in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
[root@ptierno-puppetmaster modules]# source /etc/profile.d/rbenv.sh
[root@ptierno-puppetmaster modules]# which rbenv
/usr/local/rbenv/bin/rbenv
You can either source the file as above, or log out and log back in to get a new login shell.
Hope this helps.
