Ansible Permissions Issue - linux

I'm trying to add the current user to a group in the system, then execute a command that requires permission for that group. My playbook is like so:
- name: Add this user to RVM group
  sudo: true
  user: state=present name=vagrant append=yes groups=rvm group=rvm

- name: Install Ruby 1.9.3
  command: rvm install ruby-1.9.3-p448 creates=/usr/local/rvm/bin/ruby-1.9.3-p448
The problem is that all of this is happening in the same shell. vagrant's shell hasn't been updated with the new groups yet. Is there a clean way to refresh the user's current groups in Ansible? I figure I need to get it to re-connect or open a new shell.
However I tried opening a new shell and it simply hangs:
- name: Open a new shell for the new groups
  shell: bash
Of course it hangs: the process never exits!
Same thing with newgrp
- name: Refresh the groups
  shell: newgrp
Because it basically does the same thing.
Any ideas?

Read the manual.
A solution here is to use the 'executable' parameter for either the 'command' or 'shell' modules.
So I tried using the command module like so:
- name: install ruby 1.9.3
  command: rvm install ruby-1.9.3-p448 executable=/bin/bash creates=/usr/local/rvm/bin/ruby-1.9.3-p448
  ignore_errors: true
But the playbook hung indefinitely. The manual states:
If you want to run a command through the shell (say you are using <, >, |, etc), you actually want the shell module instead. The command module is much more secure as it's not affected by the user's environment.
So I tried using the shell module:
- name: install ruby 1.9.3
  shell: rvm install ruby-1.9.3-p448 executable=/bin/bash creates=/usr/local/rvm/bin/ruby-1.9.3-p448
  ignore_errors: true
And it works!
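For what it's worth, later Ansible releases usually write the same task with the args: keyword, which keeps module parameters out of the command string. A minimal sketch, not tested against 1.x:
- name: install ruby 1.9.3
  shell: rvm install ruby-1.9.3-p448
  args:
    executable: /bin/bash
    creates: /usr/local/rvm/bin/ruby-1.9.3-p448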

As others already stated, this is because of an active ssh connection to the remote host. The user needs to log out and log in again to activate the new group.
A separate shell action might be a solution for a single task. But if you want to run multiple other tasks, and would rather use the Ansible modules than write every command yourself, kill the ssh connection.
- name: Killing all ssh connections of current user
  delegate_to: localhost
  shell: ssh {{ inventory_hostname }} "sudo ps -ef | grep sshd | grep `whoami` | awk '{print \"sudo kill -9\", \$2}' | sh"
  failed_when: false
Instead of using Ansible's open ssh connection, we start our own through a shell action. Then we kill all open ssh connections of the current user. This forces Ansible to log in again at the next task.
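On Ansible 2.3 and later there is a gentler built-in for this: meta: reset_connection drops the current persistent connection, so the next task logs in fresh and picks up the new group membership. A minimal sketch:
- name: Add this user to RVM group
  become: true
  user:
    name: vagrant
    groups: rvm
    append: yes

- name: Force a new login so the group change is picked up
  meta: reset_connection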

I have seen this problem in Capistrano and Chef too. It happens because you already have a session for the user, and that session does not have the group yet; you need to close the session and open a new one for the user to see the group that was added.

I am on RHEL 7.0 using Ansible 1.8 and the accepted answer did not work for me. The only way I could force Ansible to load the newly added rvm group was to use sg.
- name: add user to rvm group
  user: name=ec2-user groups=rvm append=yes
  sudo: yes

- name: install ruby
  command: sg rvm -c "/usr/local/rvm/bin/rvm install ruby-2.0.0"
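sg runs a single command with the given group as the effective group, so no re-login is needed. To keep the task idempotent, a creates guard can be added; a sketch, where the rubies path is an assumption about RVM's install layout:
- name: install ruby
  command: sg rvm -c "/usr/local/rvm/bin/rvm install ruby-2.0.0"
  args:
    creates: /usr/local/rvm/rubies/ruby-2.0.0   # assumed install location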

Related

Is there a way to reliably check that systemd supports --user?

I'm trying to set up some user services using Ansible and systemd.
On Ubuntu and RHEL 7 I'm getting
# systemctl --user status
Failed to get D-Bus connection: Connection refused
For Ubuntu I tracked down the cause of the error; it's because of this:
https://docs.ansible.com/ansible/latest/modules/systemd_module.html
run systemctl within a given service manager scope, either as the default system scope (system), the current user's scope (user), or the scope of all users (global).
For systemd to work with 'user', the executing user must have its own instance of dbus started (systemd requirement). The user dbus process is normally started during normal login, but not during the run of Ansible tasks. Otherwise you will probably get a 'Failed to connect to bus: no such file or directory' error.
Basically DBus needs to be started before systemd --user can work. I'm not sure how to do that either, but I can work around it in other ways, I think.
However, the main blocker right now is: how do I check, generically, for the availability of the functionality?
I tried systemctl show and there's no explicit "user" feature. Is the flag the "+PAM" from the Features line? I know that systemd uses PAM at least partially to implement it, I don't know if it's needed for other features.
How can I check that "my" systemd supports --user in a reliable manner? Is there a file I could check? A command? Something else? DBus voodoo?
It's not a matter of whether systemd supports --user (all reasonably recent versions do), but rather whether (a) a user session is currently running, and (b) your Ansible process can connect to it.
A solution for both problems is become_method: machinectl (see Ansible documentation), but it has issues on some systemd versions.
If that method doesn't work for you, there are workarounds. Typically an Ansible session does not create a user systemd instance; you need to log in locally for that to happen. However, you can enable lingering to always have a systemd for that user.
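Enabling lingering is a one-liner with loginctl. Wrapped as a task it might look like the sketch below; the flag file under /var/lib/systemd/linger is what current systemd writes, used here as an idempotency guard:
- name: Enable lingering so the user gets a systemd instance without logging in
  command: "loginctl enable-linger {{ the_user }}"
  become: true
  args:
    creates: "/var/lib/systemd/linger/{{ the_user }}"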
The second problem is connecting to that instance. This needs the XDG_RUNTIME_DIR environment variable to be set; typically to /run/user/<UID>. It's not set by the usual become_method: sudo, but you can use something along these lines to figure it out and pass it to the systemd task:
- name: "Find uid of user"
command: "id -u {{ the_user }}"
register: the_user_uid
check_mode: no # Run even in check mode, otherwise the playbook fails with --check.
changed_when: false
- name: "Determine XDG_RUNTIME_DIR"
set_fact:
xdg_runtime_dir: "/run/user/{{ the_user_uid.stdout }}"
changed_when: false
- name: "Enable some service"
become: true
become_user: "{{ the_user }}"
environment:
XDG_RUNTIME_DIR: "{{ xdg_runtime_dir }}"
systemd:
scope: user
daemon_reload: yes
name: the_service.service
enabled: yes
state: started
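To verify the connection actually works before enabling services, a quick diagnostic task along these lines can help (systemctl --user is-system-running reports the state of the user instance):
- name: "Check that the user systemd instance is reachable"
  command: systemctl --user is-system-running
  become: true
  become_user: "{{ the_user }}"
  environment:
    XDG_RUNTIME_DIR: "{{ xdg_runtime_dir }}"
  register: user_systemd_state
  changed_when: false
  failed_when: false  # inspect user_systemd_state.stdout instead of failing the play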

Run a command in remote server

What would be the best way to run commands in remote servers? I am thinking of using SSH, but is there a better way than that?
I use Red Hat Linux, and I want to run the command from one of the servers, specify which other servers it should run on, and have it do exactly the same thing on each of the servers specified. Puppet alone couldn't help, but I might be able to combine some other tool with Puppet to do the job for me.
It seems you are able to log on to the other servers without entering a password. I assume this is based on SSH keys, as described here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-ssh-configuration-keypairs.html
You say another script is producing a list of servers. You can now use the following simple script to loop over the list:
for server in `./server-list-script`; do
  echo $server:
  ssh username@$server mkdir /etc/dir/test123
done >logfile 2>&1
The file "logfile" will collect the output. I'm pretty sure Puppet is able to do this as well.
Your solution will almost definitely end up involving ssh in some capacity.
You may want something to help manage the execution of commands on multiple servers; ansible is a great choice for something like this.
For example, if I want to install libvirt on a bunch of servers and make sure libvirtd is running, I might pass a configuration like this to ansible-playbook:
- hosts: all
  tasks:
    - yum:
        name: libvirt
        state: installed
    - service:
        name: libvirtd
        state: started
        enabled: true
This would ssh to all of the servers in my "inventory" (a file -- or command -- that provides ansible with a list of servers), install the libvirt package, start libvirtd, and then arrange for the service to start automatically at boot.
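For reference, an inventory can be as simple as a text file listing hostnames, optionally grouped; a sketch with illustrative names:
[webservers]
web1.example.com
web2.example.com

[hypervisors]
kvm1.example.com
Point the tools at it with -i inventory.ini, or set it as the default inventory in ansible.cfg.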
Alternatively, if I want to run puppet apply on a bunch of servers, I could just use the ansible command to run an ad-hoc command without requiring a configuration file:
ansible all -m command -a 'puppet apply'

Ansible cannot make dir /$HOME/.ansible/cp

I'm getting a very strange error when I run ansible:
GATHERING FACTS ***************************************************************
fatal: [i-0f55b6a4] => Could not make dir /$HOME/.ansible/cp: [Errno 13] Permission denied: '/$HOME'
TASK: [Task #1] ***************************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/home/ubuntu/install.retry
i-0f55b6a4 : ok=0 changed=0 unreachable=1 failed=0
Normally, this playbook runs without problems, but I've recently made some changes so that the program that calls Ansible is started from start-stop-daemon, so that it will run as a service. The ultimate goal is to have a service that can run the playbook automatically whenever it deems it necessary.
The beginning of the playbook looks like this:
---
- hosts: w_vm:main
  sudo: True
  tasks:
    - name: Task #1
      ...
sudo is set to True so I'm somewhat certain that the error is not on the target machine.
The generated invocation of ansible-playbook looks like this:
ansible-playbook -i /tmp/ansible3397486563152037600.inventory \
    /home/ubuntu/playbooks/main_playbook.yml \
    -e @/home/ubuntu/extra_params.json
I'm not sure if that Could not make dir /$HOME/.ansible/cp error is occurring locally or on the remote machine, or why Ansible is trying to make a directory named $HOME under /. This only happens when the program that calls Ansible is started from the Linux service, not when it's invoked explicitly from the command line.
I've asked a more specific question here:
https://unix.stackexchange.com/questions/220841/start-stop-daemon-services-environment-variables-and-ansible
Try sudo chown -R YOUR_USERNAME /home/YOUR_USERNAME/.ansible
Late to answer, but it might be useful to someone. Check the ownership of ~/.ansible. The ownership of .ansible on the local machine (the one running Ansible, the controller node) might be causing the problem. Run chown -R username:groupname .ansible (username:groupname should be those of the user running the playbook) and try to run the playbook again.
As an alternative, remove the .ansible directory from the controller node and rerun the playbook.
Ansible creates temporary files in ~/.ansible on your local machine and on the remote machine. So that could be theoretically triggered from both sides.
My guess is that it is on the local machine where Ansible runs, since how Ansible was started should not have an effect on the target boxes. A quick search showed that programs started with start-stop-daemon do not have $HOME (or any environment at all) available, but it has an -e option to set variables according to your needs.
If -e is unavailable, see this answer, which suggests additionally exec'ing /usr/bin/env to set environment variables.
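A minimal sketch of that workaround, assuming the service launches a wrapper script (paths are illustrative, mirroring the invocation above):
#!/bin/sh
# Wrapper started by start-stop-daemon: hand the daemonized process
# the environment Ansible expects before it runs the playbook.
exec /usr/bin/env HOME=/home/ubuntu \
    ansible-playbook -i /tmp/inventory.ini /home/ubuntu/playbooks/main_playbook.yml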
I ran into a similar issue using Jenkins. It had a default $HOME env var set to /root/. The solution was to inject the environment variable at runtime.
HOME=/path/to/your/users/home

What user will Ansible run my commands as?

Background
My question seems simple, but it gets more complex really fast.
Basically, I got really tired of maintaining my servers manually (screams in background) and I decided it was time to find a way to make being a server admin much more liveable. That's when I found Ansible. Great huh? Sure beats making bash scripts (louder scream) for everything I wanted to automate.
What's the problem?
I'm having a lot of trouble figuring out what user my Ansible playbook will run certain things as. I also need the ability to specify what user certain tasks will run as. Here are some specific use cases:
Cloning a repo as another user:
My purpose with this is to run my node.js webapp as another user, who we'll call bill (bill can only use sudo to run a script I made that starts the node server, as opposed to root or my user, which can use sudo for all commands). To do this, I need Ansible's git module to clone my git repo as bill. How would I do that?
Knowing how Ansible will gain root:
As far as I understand, you can set which user Ansible connects to the managed server as by defining 'user' at the beginning of the playbook file. Here's what I don't understand: if I tell it to connect via my username, joe, and ask it to update a package via the apt module, how will it gain root? Sudo usually prompts me for my password, and I'd prefer to keep it that way (for security).
Final request
I've scoured the Ansible docs, done some (what I thought was thorough) Googling, and generally just tried to figure it out on my own, but this information continues to elude me.
I am very new to Ansible, and while it's mostly straightforward, I would benefit greatly from understanding exactly how Ansible runs things, as which users, and how/where I can specify which user to use at different times.
Thank you tons in advance
You may find it useful to read the Hosts and Users section on Ansible's documentation site:
http://docs.ansible.com/playbooks_intro.html#hosts-and-users
In summary, ansible will run all commands in a playbook as the user specified in the remote_user variable (assuming you're using ansible >= 1.4, user before that). You can specify this variable on a per-task basis as well, in case a task needs to run as a certain user.
Use sudo: true in any playbook/task to use sudo to run it. Use the sudo_user variable to specify a user to sudo to if you don't want to use root.
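Tying this back to the question's use case, here is a sketch of cloning a repo as bill with the pre-1.9 syntax this answer uses (the repo URL and paths are illustrative):
- hosts: webservers
  remote_user: deploy            # the user Ansible logs in as over SSH
  tasks:
    - name: Clone the app repo as bill
      git: repo=https://example.com/app.git dest=/home/bill/app
      sudo: yes
      sudo_user: bill            # this one task runs as bill via sudo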
In practice, I've found it easiest to run my playbook as a deploy user that has sudo privileges. I set up my SSH keys so I can SSH into any host as deploy without using a password. This means that I can run my playbook without using a password and even use sudo if I need to.
I use this same user to do things like cloning git repos and starting/stopping services. If a service needs to run as a lower-privileged user, I let the init script take care of that. A quick Google search for a node.js init.d script revealed this one for CentOS:
https://gist.github.com/nariyu/1211413
Doing things this way helps to keep it simple, which I like.
Hope that helps.
My 2 cents:
1. Ansible uses your local user (e.g. Mike) to ssh to the remote machine. (That requires Mike to be able to ssh to the machine.)
2. From there it can change to a remote user if needed.
3. It can also sudo if needed and if Mike is allowed. If no user is specified, root will be selected via your ~/.ansible.cfg on your local machine.
4. If you supply a remote_user with the sudo param then, like no. 3, it will not use root but that user.
5. You can specify different situations and different users or sudo via the playbooks.
Playbooks define which roles will be run on each machine that belongs to the selected inventory.
I suggest you read Ansible best practices for some explanation of how to set up your infrastructure.
Oh, and by the way: since you are not referring to a specific module that Ansible uses, and your question is not related to Python, I don't see any use in your question having the python tag.
Just a note that Ansible>=1.9 uses privilege escalation commands so you can execute tasks and create resources as that secondary user if need be:
- name: Install software
  shell: "curl -s get.dangerous_software.install | sudo bash"
  become_user: root
https://ansible-docs.readthedocs.io/zh/stable-2.0/rst/become.html
I notice current answers are a bit old and suffering from link rot.
Ansible will SSH as your current user, by default:
https://docs.ansible.com/ansible/latest/user_guide/intro_getting_started.html#connecting-to-remote-nodes
Ansible communicates with remote machines over the SSH protocol. By default, Ansible uses native OpenSSH and connects to remote machines using your current user name, just as SSH does.
This can be overridden using:
passing the -u parameter at the command line
setting user information in your inventory file
setting user information in your configuration file
setting environment variables
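For example, either of these sets the login user (names are illustrative):
ansible all -m ping -u deploy
# or, per host in the inventory file:
# web1.example.com ansible_user=deploy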
But then you must ensure a route exists to SSH as that user. An approach to maintaining user-level ownership I see more often is become (root) and then to chown -R jdoe:jdoe /the/file/path.
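A sketch of that ownership pattern with the file module (path and user are illustrative):
- name: Hand the deployed tree back to jdoe
  become: yes
  file:
    path: /the/file/path
    owner: jdoe
    group: jdoe
    recurse: yes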
In my 2.12 release of ansible I found the only way I could change the user was by specifying become: yes as an option at the play level. That way I am SSHing as the unprivileged, default, user. This user must have passwordless sudo enabled on the remote and is about the safest I could make my VPS. From this I could then switch to another user, with become_user, from an arbitrary command task.
Like this:
- name: Getting Started
  gather_facts: false
  hosts: all
  become: yes  # All tasks that follow will become root.
  tasks:
    - name: get the username running the deploy
      command: echo $USER
      become_user: trubuntu  # From root we can switch to trubuntu.
If the user with SSH access to your remote is, say, victor, rather than your current user, then remote_user: victor has a place at the play level, adjacent to become: yes.
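That play header would then look something like this (victor is illustrative):
- name: Getting Started
  hosts: all
  remote_user: victor   # SSH in as victor instead of the current local user
  become: yes           # escalate to root once connected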

How to reset Jenkins security settings from the command line?

Is there a way to reset all (or just disable the security settings) from the command line without a user/password as I have managed to completely lock myself out of Jenkins?
The simplest solution is to completely disable security - change true to false in /var/lib/jenkins/config.xml file.
<useSecurity>true</useSecurity>
A one-liner to achieve the same:
sed -i 's/<useSecurity>true<\/useSecurity>/<useSecurity>false<\/useSecurity>/g' /var/lib/jenkins/config.xml
Then just restart Jenkins:
sudo service jenkins restart
And then go to admin panel and set everything once again.
If you are running Jenkins inside a Kubernetes pod and cannot run the service command, you can restart Jenkins by deleting the pod:
kubectl delete pod <jenkins-pod-name>
Once the command is issued, Kubernetes terminates the old pod and starts a new one.
One other way would be to manually edit the configuration file for your user (e.g. /var/lib/jenkins/users/username/config.xml) and update the contents of passwordHash:
<passwordHash>#jbcrypt:$2a$10$razd3L1aXndFfBNHO95aj.IVrFydsxkcQCcLmujmFQzll3hcUrY7S</passwordHash>
Once you have done this, just restart Jenkins and log in using this password:
test
The <passwordHash> element in users/<username>/config.xml will accept data of the format
salt:sha256("password{salt}")
So, if your salt is bar and your password is foo then you can produce the SHA256 like this:
echo -n 'foo{bar}' | sha256sum
You should get 7f128793bc057556756f4195fb72cdc5bd8c5a74dee655a6bfb59b4a4c4f4349 as the result. Take the hash and put it with the salt into <passwordHash>:
<passwordHash>bar:7f128793bc057556756f4195fb72cdc5bd8c5a74dee655a6bfb59b4a4c4f4349</passwordHash>
Restart Jenkins, then try logging in with password foo. Then reset your password to something else. (Jenkins uses bcrypt by default, and one round of SHA256 is not a secure way to store passwords. You'll get a bcrypt hash stored when you reset your password.)
I found the file in question located in /var/lib/jenkins called config.xml, modifying that fixed the issue.
On El Capitan, config.xml cannot be found at /var/lib/jenkins/; it's available in ~/.jenkins.
Then, as others mentioned, open the config.xml file and make the following changes:
Replace <useSecurity>true</useSecurity> with <useSecurity>false</useSecurity>.
Remove <authorizationStrategy> and <securityRealm>.
Save it and restart Jenkins (sudo service jenkins restart).
The answer about modifying config.xml is correct. Yet, I think it should be mentioned that /var/lib/jenkins/config.xml looks something like this if you have activated "Project-based Matrix Authorization Strategy". Deleting /var/lib/jenkins/config.xml and restarting Jenkins also does the trick. I also deleted the users in /var/lib/jenkins/users to start from scratch.
<authorizationStrategy class="hudson.security.ProjectMatrixAuthorizationStrategy">
  <permission>hudson.model.Computer.Configure:jenkins-admin</permission>
  <permission>hudson.model.Computer.Connect:jenkins-admin</permission>
  <permission>hudson.model.Computer.Create:jenkins-admin</permission>
  <permission>hudson.model.Computer.Delete:jenkins-admin</permission>
  <permission>hudson.model.Computer.Disconnect:jenkins-admin</permission>
  <!-- if this is missing for your user and it is the only one, bad luck -->
  <permission>hudson.model.Hudson.Administer:jenkins-admin</permission>
  <permission>hudson.model.Hudson.Read:jenkins-admin</permission>
  <permission>hudson.model.Hudson.RunScripts:jenkins-admin</permission>
  <permission>hudson.model.Item.Build:jenkins-admin</permission>
  <permission>hudson.model.Item.Cancel:jenkins-admin</permission>
  <permission>hudson.model.Item.Configure:jenkins-admin</permission>
  <permission>hudson.model.Item.Create:jenkins-admin</permission>
  <permission>hudson.model.Item.Delete:jenkins-admin</permission>
  <permission>hudson.model.Item.Discover:jenkins-admin</permission>
  <permission>hudson.model.Item.Read:jenkins-admin</permission>
  <permission>hudson.model.Item.Workspace:jenkins-admin</permission>
  <permission>hudson.model.View.Configure:jenkins-admin</permission>
  <permission>hudson.model.View.Create:jenkins-admin</permission>
  <permission>hudson.model.View.Delete:jenkins-admin</permission>
  <permission>hudson.model.View.Read:jenkins-admin</permission>
</authorizationStrategy>
We can reset the password while leaving security on.
The config.xml file in /var/lib/jenkins/users/admin/ acts sort of like the /etc/shadow file on Linux or UNIX-like systems, or the SAM file on Windows, in the sense that it stores the hash of the account's password.
If you need to reset the password without logging in, you can edit this file and replace the old hash with a new one generated from bcrypt:
$ pip install bcrypt
$ python
>>> import bcrypt
>>> bcrypt.hashpw(b"yourpassword", bcrypt.gensalt(rounds=10, prefix=b"2a"))  # note the b prefix: hashpw wants bytes
'YOUR_HASH'
This will output your hash, with prefix 2a, the correct prefix for Jenkins hashes.
Now, edit the config.xml file:
...
<passwordHash>#jbcrypt:REPLACE_THIS</passwordHash>
...
Once you insert the new hash, restart Jenkins (on a system with systemd):
sudo systemctl restart jenkins
You can now log in, and you didn't leave your system open for a second.
To disable Jenkins security in simple steps in Linux, run these commands:
sudo ex +g/useSecurity/d +g/authorizationStrategy/d -scwq /var/lib/jenkins/config.xml
sudo /etc/init.d/jenkins restart
It will remove useSecurity and authorizationStrategy lines from your config.xml root config file and restart your Jenkins.
See also: Disable security at Jenkins website
After gaining access to Jenkins, you can re-enable security in your Configure Global Security page by choosing an Access Control/Security Realm. After that, don't forget to create the admin user.
To reset it without disabling security if you're using matrix permissions (probably easily adaptable to other login methods):
1. In config.xml, set disableSignup to false.
2. Restart Jenkins.
3. Go to the Jenkins web page and sign up with a new user.
4. In config.xml, duplicate one of the <permission>hudson.model.Hudson.Administer:username</permission> lines and replace username with the new user.
5. If it's a private server, set disableSignup back to true in config.xml.
6. Restart Jenkins.
7. Go to the Jenkins web page and log in as the new user.
8. Reset the password of the original user.
9. Log in as the original user.
Optional cleanup:
10. Delete the new user.
11. Delete the temporary <permission> line in config.xml.
No securities were harmed during this answer.
On the off chance you accidentally lock yourself out of Jenkins due to a permission mistake, and you don't have server-side access to switch to the jenkins user or root, you can make a job in Jenkins and add this to the Shell Script:
sed -i 's/<useSecurity>true/<useSecurity>false/' ~/config.xml
Then click Build Now and restart Jenkins (or the server, if you need to!)
\.jenkins\secrets\initialAdminPassword
Copy the password from the initialAdminPassword file and paste it into Jenkins.
First, check the location; it depends on whether you installed from a WAR, or on Linux or Windows. For example, for a WAR under Linux, and for the admin user:
/home/USER_NAME/.jenkins/users/admin/config.xml
Go to the <passwordHash> tag; the hash is the part after #jbcrypt::
<passwordHash>#jbcrypt:$2a$10$3DzCGLQr2oYXtcot4o0rB.wYi5kth6e45tcPpRFsuYqzLZfn1pcWK</passwordHash>
Change this password using any bcrypt hash generator website, e.g.
https://www.dailycred.com/article/bcrypt-calculator
Make sure the hash starts with $2a, because that is the variant Jenkins uses.
In order to remove the default security for Jenkins on Windows, traverse to the config.xml file created inside /users/{UserName}/.jenkins.
Inside this file, change
<useSecurity>true</useSecurity>
to
<useSecurity>false</useSecurity>
Step 1: go to the directory .jenkins/secrets (cd .jenkins/secrets); there you will find the file initialAdminPassword.
Step 2: open it (nano initialAdminPassword) to get the password.
Changing <useSecurity>true</useSecurity> to <useSecurity>false</useSecurity> will not be enough; you should remove the <authorizationStrategy> and <securityRealm> elements too, and restart your Jenkins server with sudo service jenkins restart.
Remember this: setting <useSecurity> to false on its own may cause problems for you; these steps are the ones mentioned in their official documentation here.
Jenkins over Kubernetes and Docker
The case of Jenkins running in a container managed by a Kubernetes Pod is a bit more complex, since kubectl exec PODID --namespace=jenkins -it -- /bin/bash will allow you to access the container running Jenkins directly, but you will not have root access there: sudo, vi and many other commands are not available, and therefore a workaround is needed.
1. Use kubectl describe pod [...] to find the node running your Pod and the container ID (docker://...)
2. SSH into the node
3. Run docker exec -ti -u root <container-id> /bin/bash to access the container with root privileges (substituting the container ID found in step 1)
4. apt-get update
5. apt-get install vim
The second difference is that the Jenkins configuration files are placed on a different path, corresponding to the mount point of the persistent volume, i.e. /var/jenkins_home; this location might change in the future, so check it by running df.
Then disable security - change true to false in /var/jenkins_home/jenkins/config.xml file.
<useSecurity>false</useSecurity>
Now it is enough to restart Jenkins, an action that will cause the container and the Pod to die; it will be created again in a few seconds with the updated configuration (and all the changes, like vi and the package updates, erased) thanks to the persistent volume.
The whole solution has been tested on Google Kubernetes Engine.
UPDATE
Notice that you can just as well run ps aux; the password is shown in plain text even without root access.
jenkins@jenkins-87c47bbb8-g87nw:/$ ps aux
[...]
jenkins [..] -jar /usr/share/jenkins/jenkins.war --argumentsRealm.passwd.jenkins=password --argumentsRealm.roles.jenkins=admin
[...]
An easy way out of this is to use the admin password to log in with your admin user:
Change to root user: sudo su -
Copy the password: xclip -sel clip < /var/lib/jenkins/secrets/initialAdminPassword
Login with admin and press ctrl + v on password input box.
Install xclip if you don't have it:
$ sudo apt-get install xclip
Using bcrypt you can solve this issue. This extends @Reem's answer for someone who is trying to automate the process using bash and Python.
#!/bin/bash
pip install bcrypt
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum -y install xmlstarlet

cat > /tmp/jenkinsHash.py <<EOF
import bcrypt
import sys

if not sys.argv[1]:
    sys.exit(10)

plaintext_pwd = sys.argv[1]
encrypted_pwd = bcrypt.hashpw(sys.argv[1], bcrypt.gensalt(rounds=10, prefix=b"2a"))
isCorrect = bcrypt.checkpw(plaintext_pwd, encrypted_pwd)

if not isCorrect:
    sys.exit(20)

print "{}".format(encrypted_pwd)
EOF

chmod +x /tmp/jenkinsHash.py
cd /var/lib/jenkins/users/admin*
pwd

while (( 1 )); do
    echo "Waiting for Jenkins to generate admin user's config file ..."
    if [[ -f "./config.xml" ]]; then
        break
    fi
    sleep 10
done
echo "Admin config file created"

admin_password=$(python /tmp/jenkinsHash.py password 2>&1)

# Replace the password hash in config.xml
xmlstarlet -q ed --inplace -u "/user/properties/hudson.security.HudsonPrivateSecurityRealm_-Details/passwordHash" -v '#jbcrypt:'"$admin_password" config.xml

# Restart Jenkins
systemctl restart jenkins
sleep 10
I have kept the password hardcoded here, but it can be user input, depending on the requirement. Also make sure to add that sleep; otherwise any other command revolving around Jenkins will fail.
To very simply disable both security and the startup wizard, use the Java property:
-Djenkins.install.runSetupWizard=false
The nice thing about this is that you can use it in a Docker image such that your container will always start up immediately with no login screen:
# Dockerfile
FROM jenkins/jenkins:lts
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
Note that, as mentioned by others, the Jenkins config.xml is in /var/jenkins_home in the image, but using sed to modify it from the Dockerfile fails, because (presumably) the config.xml doesn't exist until the server starts.
I will add some improvements based on the solution:
https://stackoverflow.com/a/51255443/5322871
In my scenario it was deployed on a Swarm cluster with an NFS volume; in order to perform the password reset I did the following:
Attach to the pod:
$ docker exec -it <pod-name> bash
Generate the hashed password with Python (do not forget to specify the letter b before your quoted password; the hashpw method requires bytes):
$ pip install bcrypt
$ python
>>> import bcrypt
>>> bcrypt.hashpw(b"yourpassword", bcrypt.gensalt(rounds=10, prefix=b"2a"))
'YOUR_HASH'
Once inside the container find all the config.xml files:
$ find /var/ -type f -iname "config.xml"
Once identified, modify the value of the field (in my case the config.xml was in another location):
$ vim /var/jenkins_home/users/admin_9482805162890262115/config.xml
...
<passwordHash>#jbcrypt:YOUR_HASH</passwordHash>
...
Restart the service:
docker service scale <service-name>=0
docker service scale <service-name>=1
Hope this can be helpful for anybody.
I had a similar issue, and following the reply from ArtB, I found that my user didn't have the proper configuration. So what I did:
Note: manually modifying such XML files is risky. Do it at your own risk. Since I was already locked out, I didn't have much to lose. AFAIK, worst case I would have deleted the ~/.jenkins/config.xml file, as a previous post mentioned.
1. SSH to the Jenkins machine.
2. cd ~/.jenkins (I guess that some installations put it under /var/lib/jenkins/, but not in my case).
3. vi config.xml, and under the authorizationStrategy XML tag, add the section below (just use your username instead of "put-your-username").
4. Restart Jenkins; in my case as root: service tomcat7 stop; service tomcat7 start.
5. Try to log in again (worked for me).
Under the <authorizationStrategy> tag, add:
<permission>hudson.model.Computer.Build:put-your-username</permission>
<permission>hudson.model.Computer.Configure:put-your-username</permission>
<permission>hudson.model.Computer.Connect:put-your-username</permission>
<permission>hudson.model.Computer.Create:put-your-username</permission>
<permission>hudson.model.Computer.Delete:put-your-username</permission>
<permission>hudson.model.Computer.Disconnect:put-your-username</permission>
<permission>hudson.model.Hudson.Administer:put-your-username</permission>
<permission>hudson.model.Hudson.ConfigureUpdateCenter:put-your-username</permission>
<permission>hudson.model.Hudson.Read:put-your-username</permission>
<permission>hudson.model.Hudson.RunScripts:put-your-username</permission>
<permission>hudson.model.Hudson.UploadPlugins:put-your-username</permission>
<permission>hudson.model.Item.Build:put-your-username</permission>
<permission>hudson.model.Item.Cancel:put-your-username</permission>
<permission>hudson.model.Item.Configure:put-your-username</permission>
<permission>hudson.model.Item.Create:put-your-username</permission>
<permission>hudson.model.Item.Delete:put-your-username</permission>
<permission>hudson.model.Item.Discover:put-your-username</permission>
<permission>hudson.model.Item.Read:put-your-username</permission>
<permission>hudson.model.Item.Workspace:put-your-username</permission>
<permission>hudson.model.Run.Delete:put-your-username</permission>
<permission>hudson.model.Run.Update:put-your-username</permission>
<permission>hudson.model.View.Configure:put-your-username</permission>
<permission>hudson.model.View.Create:put-your-username</permission>
<permission>hudson.model.View.Delete:put-your-username</permission>
<permission>hudson.model.View.Read:put-your-username</permission>
<permission>hudson.scm.SCM.Tag:put-your-username</permission>
Now, you can go in different directions. For example, I had GitHub OAuth integration, so I could have tried to replace the authorizationStrategy with something like the below.
Note: it worked in my case because I had a specific GitHub OAuth plugin that was already configured. So it is more risky than the previous solution.
<authorizationStrategy class="org.jenkinsci.plugins.GithubAuthorizationStrategy" plugin="github-oauth@0.14">
  <rootACL>
    <organizationNameList class="linked-list">
      <string></string>
    </organizationNameList>
    <adminUserNameList class="linked-list">
      <string>put-your-username</string>
      <string>username2</string>
      <string>username3</string>
      <string>username_4_etc_put_username_that_will_become_administrator</string>
    </adminUserNameList>
    <authenticatedUserReadPermission>true</authenticatedUserReadPermission>
    <allowGithubWebHookPermission>false</allowGithubWebHookPermission>
    <allowCcTrayPermission>false</allowCcTrayPermission>
    <allowAnonymousReadPermission>false</allowAnonymousReadPermission>
  </rootACL>
</authorizationStrategy>
Edit the file $JENKINS_HOME/config.xml and change the security configuration with this:
<authorizationStrategy class="hudson.security.AuthorizationStrategy$Unsecured"/>
After that restart Jenkins.
A lot of the time you won't have permission to edit the config.xml file.
The simplest thing would be to take a backup of config.xml and delete it using the sudo command.
Restart Jenkins using the command sudo /etc/init.d/jenkins restart.
This will disable all security in Jenkins and the login option will disappear.
For anyone using macOS, the new version can be installed via Homebrew, so for restarting, this command line must be used:
brew services restart jenkins-lts
The directory where the config.xml file is located on Windows:
C:\Windows\System32\config\systemprofile\AppData\Local\Jenkins\.jenkins
